Sample records for combined density-equalizing mapping

  1. FEM: Feature-enhanced map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afonine, Pavel V.; Moriarty, Nigel W.; Mustyakimov, Marat

    A method is presented that modifies a 2mF_obs − DF_model σ_A-weighted map such that the resulting map can strengthen a weak signal, if present, and can reduce model bias and noise. The method consists of first randomizing the starting map and filling in missing reflections using multiple methods. This is followed by restricting the map to regions with convincing density and the application of sharpening. The final map is then created by combining a series of histogram-equalized intermediate maps. In the test cases shown, the maps produced in this way are found to have increased interpretability and decreased model bias compared with the starting 2mF_obs − DF_model σ_A-weighted map.

  2. FEM: feature-enhanced map

    PubMed Central

    Afonine, Pavel V.; Moriarty, Nigel W.; Mustyakimov, Marat; Sobolev, Oleg V.; Terwilliger, Thomas C.; Turk, Dusan; Urzhumtsev, Alexandre; Adams, Paul D.

    2015-01-01

    A method is presented that modifies a 2mF_obs − DF_model σ_A-weighted map such that the resulting map can strengthen a weak signal, if present, and can reduce model bias and noise. The method consists of first randomizing the starting map and filling in missing reflections using multiple methods. This is followed by restricting the map to regions with convincing density and the application of sharpening. The final map is then created by combining a series of histogram-equalized intermediate maps. In the test cases shown, the maps produced in this way are found to have increased interpretability and decreased model bias compared with the starting 2mF_obs − DF_model σ_A-weighted map. PMID:25760612
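
    The final combination step described above lends itself to a short illustration. Below is a minimal sketch, not the authors' implementation, of rank-based histogram equalization followed by averaging of intermediate maps, assuming the intermediate maps are NumPy arrays on a common grid; the function names are hypothetical.

    ```python
    import numpy as np

    def histogram_equalize(density_map):
        """Map density values to their empirical CDF (rank-based equalization)."""
        flat = density_map.ravel()
        ranks = np.argsort(np.argsort(flat))        # rank of each grid point
        equalized = ranks / (flat.size - 1)         # uniform histogram on [0, 1]
        return equalized.reshape(density_map.shape)

    def combine_intermediate_maps(intermediate_maps):
        """Average a series of histogram-equalized intermediate maps."""
        return np.mean([histogram_equalize(m) for m in intermediate_maps], axis=0)

    # toy usage: three noisy versions of the same synthetic 3-D map
    rng = np.random.default_rng(0)
    base = rng.normal(size=(16, 16, 16))
    maps = [base + 0.1 * rng.normal(size=base.shape) for _ in range(3)]
    final_map = combine_intermediate_maps(maps)
    ```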

  3. FEM: Feature-enhanced map

    DOE PAGES

    Afonine, Pavel V.; Moriarty, Nigel W.; Mustyakimov, Marat; ...

    2015-02-26

    A method is presented that modifies a 2mF_obs − DF_model σ_A-weighted map such that the resulting map can strengthen a weak signal, if present, and can reduce model bias and noise. The method consists of first randomizing the starting map and filling in missing reflections using multiple methods. This is followed by restricting the map to regions with convincing density and the application of sharpening. The final map is then created by combining a series of histogram-equalized intermediate maps. In the test cases shown, the maps produced in this way are found to have increased interpretability and decreased model bias compared with the starting 2mF_obs − DF_model σ_A-weighted map.

  4. Modeling the efficacy of triplet antimicrobial combinations: yeast suppression by lauric arginate, cinnamic acid, and sodium benzoate or potassium sorbate as a case study.

    PubMed

    Dai, Yumei; Normand, Mark D; Weiss, Jochen; Peleg, Micha

    2010-03-01

    The growth of four spoilage yeasts, Saccharomyces cerevisiae, Zygosaccharomyces bailii, Brettanomyces bruxellensis, and Brettanomyces naardenensis, was inhibited with three-agent (triplet) combinations of lauric arginate, cinnamic acid, and sodium benzoate or potassium sorbate. The inhibition efficacy was determined by monitoring the optical density of yeast cultures grown in microtiter plates for 7 days. The relationship between the optical density and the sodium benzoate and potassium sorbate concentrations followed a single-term exponential decay model. The critical effective concentration was defined as the concentration at which the optical density was 0.05, which became an efficacy criterion for the mixtures. Critical concentrations of sodium benzoate or potassium sorbate as a function of the lauric arginate and cinnamic acid concentrations were then fitted with an empirical model that mapped three-agent combinations of equal efficacy. The contours of this function are presented in tabulated form and as two- and three-dimensional plots. Triplet combinations were highly effective against all four spoilage yeasts at three practical pH levels, especially at pH 3.0. The triplet combinations were particularly effective for inhibiting growth of Z. bailii, and combinations containing potassium sorbate had synergistic activities. The equal efficacy concentration model also allowed tabulation of the cost of the various combinations of agents and identification of those most economically feasible.

  5. Use of density equalizing map projections (DEMP) in the analysis of childhood cancer in four California counties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merrill, D.W.; Selvin, S.; Close, E.R.

    In studying geographic disease distributions, one normally compares rates of arbitrarily defined geographic subareas (e.g. census tracts), thereby sacrificing the geographic detail of the original data. The sparser the data, the larger the subareas must be in order to calculate stable rates. This dilemma is avoided with the technique of Density Equalizing Map Projections (DEMP). Boundaries of geographic subregions are adjusted to equalize population density over the entire study area. Case locations plotted on the transformed map should have a uniform distribution if the underlying disease rates are constant. On the transformed map, the statistical analysis of the observed distribution is greatly simplified. Even for sparse distributions, the statistical significance of a supposed disease cluster can be reliably calculated. The present report describes the first successful application of the DEMP technique to a sizeable "real-world" data set of epidemiologic interest. An improved DEMP algorithm [GUSE93, CLOS94] was applied to a data set previously analyzed with conventional techniques [SATA90, REYN91]. The results from the DEMP analysis and a conventional analysis are compared.
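
    The statistical simplification mentioned above can be illustrated with a toy check: on a DEMP-transformed map the at-risk population is uniform, so case locations can be tested directly against a uniform spatial distribution. The sketch below assumes case coordinates have already been transformed and rescaled to the unit square; it is a generic chi-square uniformity test, not the algorithm of the cited report.

    ```python
    import numpy as np
    from scipy import stats

    def uniformity_test(case_xy, n_cells=10):
        """Chi-square test of spatial uniformity for case locations (x, y in [0, 1])
        plotted on a density-equalized (DEMP-transformed) map."""
        counts, _, _ = np.histogram2d(case_xy[:, 0], case_xy[:, 1],
                                      bins=n_cells, range=[[0, 1], [0, 1]])
        expected = case_xy.shape[0] / n_cells**2      # uniform expectation per cell
        chi2 = ((counts - expected) ** 2 / expected).sum()
        dof = n_cells**2 - 1
        return chi2, stats.chi2.sf(chi2, dof)

    # toy usage: 500 uniformly scattered cases should not be flagged as clustered
    rng = np.random.default_rng(1)
    chi2, p = uniformity_test(rng.random((500, 2)))
    ```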

  6. Caesarean Section--A Density-Equalizing Mapping Study to Depict Its Global Research Architecture.

    PubMed

    Brüggmann, Dörthe; Löhlein, Lena-Katharina; Louwen, Frank; Quarcoo, David; Jaque, Jenny; Klingelhöfer, Doris; Groneberg, David A

    2015-11-17

    Caesarean section (CS) is a common surgical procedure. Although it has been performed in a modern context for about 100 years, no concise analysis of the international architecture of caesarean section research output has been available so far. Therefore, the present study characterizes the global pattern of the related publications by using the NewQIS (New Quality and Quantity Indices in Science) platform, which combines scientometric methods with density equalizing mapping algorithms. The Web of Science was used as the database. In total, 12,608 publications were identified that originated from 131 countries. The leading nations concerning research activity, overall citations and country-specific h-Index were the USA and the United Kingdom. Relating research activity to epidemiologic data indicated that Scandinavian countries, including Sweden and Finland, led the field, whereas in relation to economic data countries such as Israel and Ireland led. Semi-qualitative indices such as country-specific citation rates ranked Sweden, Norway and Finland in the top positions. International caesarean section research output continues to grow annually in an era in which caesarean section rates have increased dramatically over the past decades. With regard to the increasing employment of scientometric indicators in performance assessment, these findings should provide useful information for those tasked with the improvement of scientific achievements.

  7. World-wide architecture of osteoporosis research: density-equalizing mapping studies and gender analysis.

    PubMed

    Brüggmann, D; Mäule, L-S; Klingelhöfer, D; Schöffel, N; Gerber, A; Jaque, J M; Groneberg, D A

    2016-10-01

    While research activity on osteoporosis grows constantly, no concise description of the global research architecture exists. Hence, we aim to analyze and depict the world-wide scientific output on osteoporosis by combining bibliometric tools, density-equalizing mapping projections and gender analysis. Using the NewQIS platform, we analyzed all osteoporosis-related publications authored from 1900 to 2012 and indexed by the Web of Science. Bibliometric details were analyzed with respect to quantitative and semi-qualitative aspects. The majority of the 57,453 identified publications were original research articles. The USA and Western Europe dominated the field regarding cooperation activity, publication and citation performance; Asia, Africa and South America played a minimal role. Gender analysis revealed a dominance of male scientists in almost all countries except Brazil. Although the scientific output on osteoporosis is increasing world-wide, a significant disparity in research output was visible between developed and low-income countries. This finding is particularly concerning since epidemiologic evaluations predict that future osteoporosis prevalence will pose enormous challenges for the health-care systems of low-resource countries. Hence, our study underscores the need to address these disparities by fostering future research endeavors in these nations, with the aim of preventing a growing global burden related to osteoporosis.

  8. Caesarean Section—A Density-Equalizing Mapping Study to Depict Its Global Research Architecture

    PubMed Central

    Brüggmann, Dörthe; Löhlein, Lena-Katharina; Louwen, Frank; Quarcoo, David; Jaque, Jenny; Klingelhöfer, Doris; Groneberg, David A.

    2015-01-01

    Caesarean section (CS) is a common surgical procedure. Although it has been performed in a modern context for about 100 years, no concise analysis of the international architecture of caesarean section research output has been available so far. Therefore, the present study characterizes the global pattern of the related publications by using the NewQIS (New Quality and Quantity Indices in Science) platform, which combines scientometric methods with density equalizing mapping algorithms. The Web of Science was used as the database. In total, 12,608 publications were identified that originated from 131 countries. The leading nations concerning research activity, overall citations and country-specific h-Index were the USA and the United Kingdom. Relating research activity to epidemiologic data indicated that Scandinavian countries, including Sweden and Finland, led the field, whereas in relation to economic data countries such as Israel and Ireland led. Semi-qualitative indices such as country-specific citation rates ranked Sweden, Norway and Finland in the top positions. International caesarean section research output continues to grow annually in an era in which caesarean section rates have increased dramatically over the past decades. With regard to the increasing employment of scientometric indicators in performance assessment, these findings should provide useful information for those tasked with the improvement of scientific achievements. PMID:26593932

  9. Diffusion Cartograms for the Display of Periodic Table Data

    ERIC Educational Resources Information Center

    Winter, Mark J.

    2011-01-01

    Mapping methods employed by geographers, known as diffusion cartograms (diffusion-based density-equalizing maps), are used to present visually interesting and informative plots for data such as income, health, voting patterns, and resource availability. The algorithm involves changing the sizes of geographic regions such as countries or provinces…

  10. Visualizing statistical significance of disease clusters using cartograms.

    PubMed

    Kronenfeld, Barry J; Wong, David W S

    2017-05-15

    Health officials and epidemiological researchers often use maps of disease rates to identify potential disease clusters. Because these maps exaggerate the prominence of low-density districts and hide potential clusters in urban (high-density) areas, many researchers have used density-equalizing maps (cartograms) as a basis for epidemiological mapping. However, no guidelines exist for visually assessing statistical uncertainty on such maps. To address this shortcoming, we develop techniques for visual determination of the statistical significance of clusters spanning one or more districts on a cartogram. We developed the techniques within a geovisual analytics framework that does not rely on automated significance testing, and can therefore facilitate visual analysis to detect clusters that automated techniques might miss. On a cartogram of the at-risk population, the statistical significance of a disease cluster can be determined from the rate, area and shape of the cluster under standard hypothesis testing scenarios. We develop formulae to determine, for a given rate, the area required for statistical significance of a priori and a posteriori designated regions under certain test assumptions. Uniquely, our approach enables dynamic inference of aggregate regions formed by combining individual districts. The method is implemented in interactive tools that provide choropleth mapping, automated legend construction and dynamic search tools to facilitate cluster detection and assessment of the validity of tested assumptions. A case study of leukemia incidence analysis in California demonstrates the ability to visually distinguish between statistically significant and insignificant regions. The proposed geovisual analytics approach enables intuitive visual assessment of the statistical significance of arbitrarily defined regions on a cartogram. Our research prompts a broader discussion of the role of geovisual exploratory analyses in disease mapping and the appropriate framework for visually assessing the statistical significance of spatial clusters.
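
    The paper's exact formulae are not reproduced here, but the underlying idea can be sketched: on a density-equalizing cartogram a region's map area is proportional to its at-risk population, so under the null hypothesis the case count in an a priori designated region is Poisson with mean equal to rate times population. A hedged sketch with hypothetical function and argument names follows.

    ```python
    from scipy import stats

    def poisson_cluster_p_value(observed_cases, region_population, baseline_rate):
        """One-sided Poisson test for an a priori designated region on a population
        cartogram, where a region's map area is proportional to its at-risk population."""
        expected = baseline_rate * region_population
        return stats.poisson.sf(observed_cases - 1, expected)   # P(X >= observed) under the null

    # toy usage: 12 observed cases where 5 are expected gives p ~ 0.005
    p = poisson_cluster_p_value(observed_cases=12, region_population=50_000, baseline_rate=1e-4)
    ```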

  11. Biometric recognition via fixation density maps

    NASA Astrophysics Data System (ADS)

    Rigas, Ioannis; Komogortsev, Oleg V.

    2014-05-01

    This work introduces and evaluates a novel eye movement-driven biometric approach that employs eye fixation density maps for person identification. The proposed feature offers a dynamic representation of the biometric identity, storing rich information regarding the behavioral and physical eye movement characteristics of individuals. The innate ability of fixation density maps to capture the spatial layout of the eye movements, in conjunction with their probabilistic nature, makes them a particularly suitable eye movement biometric trait in cases when free-viewing stimuli are presented. In order to demonstrate the effectiveness of the proposed approach, the method is evaluated on three different datasets containing a wide gamut of stimulus types, such as static images, video and text segments. The obtained results indicate a minimum EER (Equal Error Rate) of 18.3%, revealing the potential of fixation density maps as an enhancing biometric cue during identification scenarios in dynamic visual environments.
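
    As a rough illustration of the feature itself (not the authors' pipeline), a fixation density map can be formed by accumulating fixation locations on a grid, smoothing, and normalizing; a simple similarity score then compares two maps. Grid size, smoothing width and the histogram-intersection score below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fixation_density_map(fixations_xy, shape=(48, 64), sigma=2.0):
        """Accumulate fixation locations into a smoothed, normalized density map.
        `fixations_xy` holds (row, col) positions already scaled to `shape`."""
        fdm = np.zeros(shape)
        rows = np.clip(fixations_xy[:, 0].astype(int), 0, shape[0] - 1)
        cols = np.clip(fixations_xy[:, 1].astype(int), 0, shape[1] - 1)
        np.add.at(fdm, (rows, cols), 1.0)
        fdm = gaussian_filter(fdm, sigma)
        return fdm / fdm.sum()

    def similarity(fdm_a, fdm_b):
        """Histogram intersection: 1.0 for identical maps, 0.0 for disjoint ones."""
        return np.minimum(fdm_a, fdm_b).sum()

    # toy usage: two recordings concentrated on the same screen region score high
    rng = np.random.default_rng(0)
    a = fixation_density_map(rng.normal(loc=[24, 32], scale=5, size=(200, 2)))
    b = fixation_density_map(rng.normal(loc=[24, 32], scale=5, size=(200, 2)))
    score = similarity(a, b)
    ```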

  12. Interfacial structure, bonding and composition of InAs and GaSb thin films determined using coherent Bragg rod analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cionca, C.; Walko, D. A.; Yacoby, Y.

    2007-01-01

    We have used Bragg rod x-ray diffraction combined with a direct method of phase retrieval to extract atomic resolution electron-density maps of a complementary series of heteroepitaxial III-V semiconductor samples. From the three-dimensional electron-density maps we derive the monolayer spacings, the chemical compositions, and the characteristics of the bonding for all atomic planes in the film and across the film-substrate interface. InAs films grown on GaSb(001) under two different As conditions (using dimer or tetramer forms) both showed conformal roughness and mixed GaAs/InSb interfacial bonding character. The As tetramer conditions favored InSb bonding at the interface while, in the case of the dimer, the percentages corresponding to GaAs and InSb bonding were equal within the experimental error. The GaSb film grown on InAs(001) displayed significant In and As interdiffusion and had a relatively large fraction of GaAs-like bonds at the interface.

  13. Hirschsprung Disease: Critical Evaluation of the Global Research Architecture Employing Scientometrics and Density-Equalizing Mapping.

    PubMed

    Schöffel, Norman; Gfroerer, Stefan; Rolle, Udo; Bendels, Michael H K; Klingelhöfer, Doris; Groneberg-Kloft, Beatrix

    2017-04-01

    Introduction  Hirschsprung disease (HD) is a congenital bowel innervation disorder that involves several clinical specialties. There is increasing interest in the topic, reflected by the number of annually published items, which makes it difficult for a single scientist to survey all published work and to gauge its scientific importance or value. Over the past decades, considerable effort has therefore been made to establish sustainable parameters for evaluating scientific work, giving rise to the field of scientometrics. Materials and Methods  To quantify the global research activity in this field, a scientometric analysis was conducted. We analyzed the research output of countries, individual institutions, authors, and their collaborative networks by using the Web of Science database. Density-equalizing maps and network diagrams were employed as state-of-the-art visualization techniques. Results  The United States is the leading country in terms of published items (n = 685), institutions (n = 347), and cooperation (n = 112). However, although it dominates in quantity, the most intensive international networks between authors and institutions are not linked to the United States. By contrast, the European countries account for the publications with the highest impact. Further analyses reveal the influence of international cooperation and associated phenomena on the research field of HD. Conclusion  We conclude that the field of HD is constantly progressing and that the importance of international cooperation in the scientific community is continuously growing.

  14. Low-amplitude clustering in low-redshift 21-cm intensity maps cross-correlated with 2dF galaxy densities

    NASA Astrophysics Data System (ADS)

    Anderson, C. J.; Luciw, N. J.; Li, Y.-C.; Kuo, C. Y.; Yadav, J.; Masui, K. W.; Chang, T.-C.; Chen, X.; Oppermann, N.; Liao, Y.-W.; Pen, U.-L.; Price, D. C.; Staveley-Smith, L.; Switzer, E. R.; Timbie, P. T.; Wolz, L.

    2018-05-01

    We report results from 21-cm intensity maps acquired from the Parkes radio telescope and cross-correlated with galaxy maps from the 2dF galaxy survey. The data span the redshift range 0.057 < z < 0.098 and cover approximately 1300 deg² over two long fields. Cross-correlation is detected at a significance of 5.7σ. The amplitude of the cross-power spectrum is low relative to the expected dark matter power spectrum, assuming a neutral hydrogen (H I) bias and mass density equal to measurements from the ALFALFA survey. The decrement is pronounced and statistically significant at small scales. At k ≈ 1.5 h Mpc⁻¹, the cross-power spectrum is more than a factor of 6 lower than expected, with a significance of 15.3σ. This decrement indicates a lack of clustering of neutral hydrogen (H I), a small correlation coefficient between optical galaxies and H I, or some combination of the two. Separating 2dF into red and blue galaxies, we find that red galaxies are much more weakly correlated with H I on k ≈ 1.5 h Mpc⁻¹ scales, suggesting that H I is more associated with blue star-forming galaxies and tends to avoid red galaxies.

  15. Using a Mach-Zehnder interferometer to deduce nitrogen density mapping

    NASA Astrophysics Data System (ADS)

    Boudaoud, F.; Lemerini, M.

    2015-07-01

    This work presents an optical method based on the Mach-Zehnder interferometer. We diagnose pure nitrogen gas subjected to a point-to-plane corona discharge and visualize the spatial density map. The interelectrode distance is 6 mm, and the variation of the optical path was measured at three pressures: 220 Torr, 400 Torr, and 760 Torr. The interferograms are recorded with a CCD camera, and their numerical analysis is performed by the inverse Abel transformation. The nitrogen density is extracted through the Gladstone-Dale relation. The obtained results are in close agreement with values available in the literature.
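
    The final step, recovering density from the refraction data, follows the Gladstone-Dale relation n − 1 = K·ρ. A minimal sketch is shown below; the Gladstone-Dale constant and the example refractive index are approximate illustrative values, not taken from the paper.

    ```python
    # Density from refractive index via the Gladstone-Dale relation: n - 1 = K * rho.
    K_N2 = 2.38e-4   # m^3/kg, approximate visible-light Gladstone-Dale constant for N2 (assumed value)

    def nitrogen_density(refractive_index):
        """Mass density (kg/m^3) recovered from a locally measured refractive index."""
        return (refractive_index - 1.0) / K_N2

    # toy usage: a refractive index typical of near-ambient nitrogen gives ~1.2 kg/m^3
    rho = nitrogen_density(1.000276)
    ```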

  16. The energetics and mass structure of regions of star formation: S201

    NASA Technical Reports Server (NTRS)

    Thronson, H. A., Jr.; Smith, H. A.; Lada, C. J.; Glaccum, W.; Harper, D. A.; Loewenstein, R. F.; Smith, J.

    1984-01-01

    Theoretical predictions about dust and gas in star forming regions are tested by observing a 4 arcmin region surrounding the radio continuum source in S201. The object was mapped in two far infrared wavelengths and found to show significant extended emission. Under the assumption that the molecular gas is heated solely via thermal coupling with the dust, the volume density was mapped in S201. The ratios of infrared optical depth to CO column density were calculated for a number of positions in the source. Near the center of the cloud the values are found to be in good agreement with other determinations for regions with lower column density. In addition, the observations suggest significant molecular destruction in the outer parts of the object. Current models of gas heating were used to calculate a strong limit for the radius of the far infrared emitting grains, equal to or less than 0.15 micron. Grains of about this size are required by the observation of high temperature (T equal to or greater than 20 K) gas in many sources.

  17. Domestic well locations and populations served in the contiguous U.S.: 1990

    USGS Publications Warehouse

    Johnson, Tyler; Belitz, Kenneth

    2017-01-01

    We estimate the location and population served by domestic wells in the contiguous United States in two ways: (1) the “Block Group Method” or BGM uses data from the 1990 census, and (2) the “Road-Enhanced Method” or REM refines the locations by using a buffer expansion and shrinkage technique along roadways to define areas where domestic wells exist. The fundamental assumption is that houses (and therefore domestic wells) are located near a named road. The results are presented as two nationally-consistent domestic-well population datasets. While both methods can be considered valid, the REM map is more precise in locating domestic wells; the REM map has a smaller amount of spatial bias (Type 1 and Type 2 errors nearly equal vs biased in Type 1), total error (10.9% vs 23.7%), and distance error (2.0 km vs 2.7 km), when comparing the REM and BGM maps to a calibration map in California. However, the BGM map is more inclusive of all potential locations for domestic wells. Independent domestic well datasets from the USGS and the States of MN, NV, and TX show that the BGM captures about 5 to 10% more wells than the REM. One key difference between the BGM and the REM is the mapping of low density areas. The REM reduces areas mapped as low density by 57%, concentrating populations into denser regions. Therefore, if one is trying to capture all of the potential areas of domestic-well usage, then the BGM map may be more applicable. If location is more imperative, then the REM map is better at identifying areas of the landscape with the highest probability of finding a domestic well. Depending on the purpose of a study, a combination of both maps can be used.

  18. Method for removing atomic-model bias in macromolecular crystallography

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM]

    2006-08-01

    Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.

  19. Type 2 Diabetes Research Yield, 1951-2012: Bibliometrics Analysis and Density-Equalizing Mapping

    PubMed Central

    Geaney, Fiona; Scutaru, Cristian; Kelly, Clare; Glynn, Ronan W.; Perry, Ivan J.

    2015-01-01

    The objective of this paper is to provide a detailed evaluation of type 2 diabetes mellitus research output from 1951-2012, using large-scale data analysis, bibliometric indicators and density-equalizing mapping. Data were retrieved from the Science Citation Index Expanded database, one of the seven curated databases within Web of Science. Using the Boolean operators "OR", "AND" and "NOT", a search strategy was developed to estimate the total number of published items. Only studies with an English abstract were eligible. Type 1 diabetes and gestational diabetes items were excluded. Specific software developed for the database analysed the data. Information including titles, authors’ affiliations and publication years was extracted from all files and exported to Excel. Density-equalizing mapping was conducted as described by Groneberg-Kloft et al., 2008. A total of 24,783 items were published and cited 476,002 times. The greatest number of outputs was published in 2010 (n=2,139). The United States contributed 28.8% to the overall output, followed by the United Kingdom (8.2%) and Japan (7.7%). Bilateral cooperation was most common between the United States and United Kingdom (n=237). Harvard University produced 2% of all publications, followed by the University of California (1.1%). The leading journals were Diabetes, Diabetologia and Diabetes Care, and they contributed 9.3%, 7.3% and 4.0% of the research yield, respectively. In conclusion, the volume of research is rising in parallel with the increasing global burden of disease due to type 2 diabetes mellitus. Bibliometrics analysis provides useful information to scientists and funding agencies involved in the development and implementation of research strategies to address global health issues. PMID:26208117

  20. Mapping surface energy balance components by combining Landsat Thematic Mapper and ground-based meteorological data

    NASA Technical Reports Server (NTRS)

    Moran, M. Susan; Jackson, Ray D.; Raymond, Lee H.; Gay, Lloyd W.; Slater, Philip N.

    1989-01-01

    Surface energy balance components were evaluated by combining satellite-based spectral data with on-site measurements of solar irradiance, air temperature, wind speed, and vapor pressure. Maps of latent heat flux density and net radiant flux density were produced using Landsat TM data for three dates. The TM-based estimates differed from Bowen-ratio and aircraft-based estimates by less than 12 percent over mature fields of cotton, wheat, and alfalfa.

  1. Cartograms Facilitate Communication of Climate Change Risks and Responsibilities

    NASA Astrophysics Data System (ADS)

    Döll, Petra

    2017-12-01

    Communication of climate change (CC) risks is challenging, in particular if global-scale spatially resolved quantitative information is to be conveyed. Typically, visualization of CC risks, which arise from the combination of hazard, exposure and vulnerability, is confined to showing only the hazards in the form of global thematic maps. This paper explores the potential of contiguous value-by-area cartograms, that is, distorted density-equalizing maps, for improving communication of CC risks and the countries' differentiated responsibilities for CC. Two global-scale cartogram sets visualize, as an example, groundwater-related CC risks in 0.5° grid cells, and another set visualizes the correlation of (cumulative) fossil-fuel carbon dioxide emissions with the countries' population and gross domestic product. Viewers of the latter set visually recognize the lack of global equity and that the countries' wealth has been built on harmful emissions. I recommend that CC risks be communicated by bivariate gridded cartograms showing the hazard in color and population, or a combination of population and a vulnerability indicator, by distortion of grid cells. Gridded cartograms are also appropriate for visualizing the availability of natural resources to humans. For communicating complex information, sets of cartograms should be carefully designed instead of presenting single cartograms. Inclusion of a conventionally distorted map enhances the viewers' capability to take up the information represented by distortion. Empirical studies about the capability of global cartograms to convey complex information and to trigger moral emotions should be conducted, with a special focus on risk communication.

  2. Talbot-Lau x-ray deflectometry phase-retrieval methods for electron density diagnostics in high-energy density experiments.

    PubMed

    Valdivia, Maria Pia; Stutman, Dan; Stoeckl, Christian; Mileham, Chad; Begishev, Ildar A; Bromage, Jake; Regan, Sean P

    2018-01-10

    Talbot-Lau x-ray interferometry uses incoherent x-ray sources to measure refraction index changes in matter. These measurements can provide accurate electron density mapping through phase retrieval. An adaptation of the interferometer has been developed in order to meet the specific requirements of high-energy density experiments. This adaptation is known as a moiré deflectometer, which allows for single-shot capabilities in the form of interferometric fringe patterns. The moiré x-ray deflectometry technique requires a set of object and reference images in order to provide electron density maps, which can be costly in the high-energy density environment. In particular, synthetic reference phase images obtained ex situ through a phase-scan procedure can provide a feasible solution. To test this procedure, an object phase map was retrieved from a single-shot moiré image obtained from a plasma-produced x-ray source. A reference phase map was then obtained from phase-stepping measurements using a continuous x-ray tube source in a small laboratory setting. The two phase maps were used to retrieve an electron density map. A comparison of the moiré and phase-stepping phase-retrieval methods was performed to evaluate single-exposure plasma electron density mapping for high-energy density and other transient plasma experiments. It was found that a combination of phase-retrieval methods can deliver accurate refraction angle mapping. Once x-ray backlighter quality is optimized, the ex situ method is expected to deliver electron density mapping with improved resolution. The steps necessary for improved diagnostic performance are discussed.

  3. An improved consensus linkage map of barley based on flow-sorted chromosomes and SNP markers

    USDA-ARS?s Scientific Manuscript database

    Recent advances in high-throughput genotyping have made it easier to combine information from different mapping populations into consensus genetic maps, which provide increased marker density and genome coverage compared to individual maps. Previously, a SNP-based genotyping platform was developed a...

  4. Map Projections and the Visual Detective: How to Tell if a Map Is Equal-Area, Conformal, or Neither

    ERIC Educational Resources Information Center

    Olson, Judy M.

    2006-01-01

    The ability to see whether a map is equal-area, conformal, or neither is useful for looking intelligently at large-area maps. For example, only if a map is equal-area can reliable judgments of relative size be made. If a map is equal-area, latitude-longitude cells are equal in size between a given pair of parallels, the cells between a given pair…
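
    One way to make the visual test concrete is a numerical one: a projection is equal-area exactly when its local area scale factor, the Jacobian determinant divided by cos(latitude), is constant everywhere. The short sketch below uses the sinusoidal projection only as a known equal-area example (angles in radians); it is an illustration, not part of the cited article.

    ```python
    import numpy as np

    def area_scale(projection, lon, lat, eps=1e-6):
        """Local area scale factor of a projection; constant everywhere iff equal-area."""
        x0, y0 = projection(lon, lat)
        xl, yl = projection(lon + eps, lat)
        xp, yp = projection(lon, lat + eps)
        jacobian = ((xl - x0) * (yp - y0) - (xp - x0) * (yl - y0)) / eps**2
        return abs(jacobian) / np.cos(lat)     # divide by the sphere's own area element

    def sinusoidal(lon, lat):                  # a known equal-area projection
        return lon * np.cos(lat), lat

    # the area scale is ~1 at every test point, confirming the equal-area property
    for lat in (0.0, 0.5, 1.0):
        print(round(area_scale(sinusoidal, 0.3, lat), 4))
    ```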

  5. Recurrence quantification analysis applied to spatiotemporal pattern analysis in high-density mapping of human atrial fibrillation.

    PubMed

    Zeemering, Stef; Bonizzi, Pietro; Maesen, Bart; Peeters, Ralf; Schotten, Ulrich

    2015-01-01

    Spatiotemporal complexity of atrial fibrillation (AF) patterns is often quantified by annotated intracardiac contact mapping. We introduce a new approach that applies recurrence plot (RP) construction followed by recurrence quantification analysis (RQA) to epicardial atrial electrograms, recorded with a high-density grid of electrodes. In 32 patients with no history of AF (aAF, n=11), paroxysmal AF (PAF, n=12) and persistent AF (persAF, n=9), RPs were constructed using a phase space electrogram embedding dimension equal to the estimated AF cycle length. Spatial information was incorporated by 1) averaging the recurrence over all electrodes, and 2) applying principal component analysis (PCA) to the matrix of embedded electrograms and selecting the first principal component as a representation of spatial diversity. Standard RQA parameters were computed on the constructed RPs and correlated to the number of fibrillation waves per AF cycle (NW). Averaged RP RQA parameters showed no correlation with NW. Correlations improved when applying PCA, with maximum correlation achieved between RP threshold and NW (RR1%, r=0.68, p < 0.001) and RP determinism (DET, r=-0.64, p < 0.001). All studied RQA parameters based on the PCA RP were able to discriminate between persAF and aAF/PAF (DET persAF 0.40 ± 0.11 vs. 0.59 ± 0.14/0.62 ± 0.16, p < 0.01). RP construction and RQA combined with PCA provide a quick and reliable tool to visualize dynamical behaviour and to assess the complexity of contact mapping patterns in AF.
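
    The RP/RQA construction itself is standard and easy to sketch. The code below builds a recurrence plot from a time-delay embedding and computes the recurrence rate and determinism; it is a generic toy version (the main diagonal is not excluded, and the PCA-based spatial pooling used in the study is omitted).

    ```python
    import numpy as np

    def embed(signal, dim, delay=1):
        """Time-delay embedding of a 1-D signal into `dim`-dimensional phase space."""
        n = len(signal) - (dim - 1) * delay
        return np.column_stack([signal[i * delay:i * delay + n] for i in range(dim)])

    def recurrence_plot(x, threshold):
        """Binary recurrence matrix: 1 where embedded states are closer than `threshold`."""
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        return (d < threshold).astype(int)

    def recurrence_rate(rp):
        return rp.mean()

    def determinism(rp, l_min=2):
        """Fraction of recurrent points lying on diagonal lines of length >= l_min
        (the main diagonal is included here for brevity)."""
        n = rp.shape[0]
        line_counts = {}
        for k in range(-(n - 1), n):
            run = 0
            for v in list(np.diagonal(rp, offset=k)) + [0]:   # trailing 0 flushes the last run
                if v:
                    run += 1
                elif run:
                    line_counts[run] = line_counts.get(run, 0) + 1
                    run = 0
        total = sum(l * c for l, c in line_counts.items())
        long_lines = sum(l * c for l, c in line_counts.items() if l >= l_min)
        return long_lines / total if total else 0.0

    # toy usage: recurrence rate and determinism of a clean sine signal
    t = np.linspace(0, 6 * np.pi, 300)
    rp = recurrence_plot(embed(np.sin(t), dim=8), threshold=0.5)
    rr, det = recurrence_rate(rp), determinism(rp)
    ```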

  6. Mapping surface charge density of lipid bilayers by quantitative surface conductivity microscopy

    PubMed Central

    Klausen, Lasse Hyldgaard; Fuhs, Thomas; Dong, Mingdong

    2016-01-01

    Local surface charge density of lipid membranes influences membrane–protein interactions leading to distinct functions in all living cells, and it is a vital parameter in understanding membrane-binding mechanisms, liposome design and drug delivery. Despite the significance, no method has so far been capable of mapping surface charge densities under physiologically relevant conditions. Here, we use a scanning nanopipette setup (scanning ion-conductance microscope) combined with a novel algorithm to investigate the surface conductivity near supported lipid bilayers, and we present a new approach, quantitative surface conductivity microscopy (QSCM), capable of mapping surface charge density with high-quantitative precision and nanoscale resolution. The method is validated through an extensive theoretical analysis of the ionic current at the nanopipette tip, and we demonstrate the capacity of QSCM by mapping the surface charge density of model cationic, anionic and zwitterionic lipids with results accurately matching theoretical values. PMID:27561322

  7. Mapping surface charge density of lipid bilayers by quantitative surface conductivity microscopy

    NASA Astrophysics Data System (ADS)

    Klausen, Lasse Hyldgaard; Fuhs, Thomas; Dong, Mingdong

    2016-08-01

    Local surface charge density of lipid membranes influences membrane-protein interactions leading to distinct functions in all living cells, and it is a vital parameter in understanding membrane-binding mechanisms, liposome design and drug delivery. Despite the significance, no method has so far been capable of mapping surface charge densities under physiologically relevant conditions. Here, we use a scanning nanopipette setup (scanning ion-conductance microscope) combined with a novel algorithm to investigate the surface conductivity near supported lipid bilayers, and we present a new approach, quantitative surface conductivity microscopy (QSCM), capable of mapping surface charge density with high-quantitative precision and nanoscale resolution. The method is validated through an extensive theoretical analysis of the ionic current at the nanopipette tip, and we demonstrate the capacity of QSCM by mapping the surface charge density of model cationic, anionic and zwitterionic lipids with results accurately matching theoretical values.

  8. Mapping surface charge density of lipid bilayers by quantitative surface conductivity microscopy.

    PubMed

    Klausen, Lasse Hyldgaard; Fuhs, Thomas; Dong, Mingdong

    2016-08-26

    Local surface charge density of lipid membranes influences membrane-protein interactions leading to distinct functions in all living cells, and it is a vital parameter in understanding membrane-binding mechanisms, liposome design and drug delivery. Despite the significance, no method has so far been capable of mapping surface charge densities under physiologically relevant conditions. Here, we use a scanning nanopipette setup (scanning ion-conductance microscope) combined with a novel algorithm to investigate the surface conductivity near supported lipid bilayers, and we present a new approach, quantitative surface conductivity microscopy (QSCM), capable of mapping surface charge density with high-quantitative precision and nanoscale resolution. The method is validated through an extensive theoretical analysis of the ionic current at the nanopipette tip, and we demonstrate the capacity of QSCM by mapping the surface charge density of model cationic, anionic and zwitterionic lipids with results accurately matching theoretical values.

  9. Symplectic Propagation of the Map, Tangent Map and Tangent Map Derivative through Quadrupole and Combined-Function Dipole Magnets without Truncation

    NASA Astrophysics Data System (ADS)

    Bruhwiler, D. L.; Cary, J. R.; Shasharina, S.

    1998-04-01

    The MAPA accelerator modeling code symplectically advances the full nonlinear map, tangent map and tangent map derivative through all accelerator elements. The tangent map and its derivative are nonlinear generalizations of Brown's first- and second-order matrices (K. Brown, SLAC-75, Rev. 4 (1982), pp. 107-118), and they are valid even near the edges of the dynamic aperture, which may be beyond the radius of convergence for a truncated Taylor series. In order to avoid truncation of the map and its derivatives, the Hamiltonian is split into pieces for which the map can be obtained analytically. Yoshida's method (H. Yoshida, Phys. Lett. A 150 (1990), pp. 262-268) is then used to obtain a symplectic approximation to the map, while the tangent map and its derivative are appropriately composed at each step to obtain them with equal accuracy. We discuss our splitting of the quadrupole and combined-function dipole Hamiltonians and show that typically only a few steps are required for a high-energy accelerator.
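
    The truncation-free splitting idea can be illustrated for a single transverse plane of a focusing quadrupole, H = p²/2 + K x²/2: each sub-Hamiltonian has an exact map, and Yoshida's composition of three second-order steps yields a fourth-order symplectic advance. This is a minimal sketch only; the tangent map and its derivative, which MAPA propagates alongside the map, are not included, and the quadrupole strength is a made-up value.

    ```python
    K = 1.2                      # quadrupole strength (hypothetical value)

    def drift(x, p, h):          # exact map of H1 = p^2 / 2
        return x + h * p, p

    def kick(x, p, h):           # exact map of H2 = K x^2 / 2
        return x, p - h * K * x

    def leapfrog(x, p, h):       # 2nd-order symmetric split (drift-kick-drift)
        x, p = drift(x, p, h / 2)
        x, p = kick(x, p, h)
        return drift(x, p, h / 2)

    def yoshida4(x, p, h):       # Yoshida's 4th-order composition of three leapfrogs
        w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
        w0 = 1.0 - 2.0 * w1
        for w in (w1, w0, w1):
            x, p = leapfrog(x, p, w * h)
        return x, p

    # usage: advance a particle through a 0.5 m quadrupole in 10 symplectic steps
    x, p = 1e-3, 0.0
    for _ in range(10):
        x, p = yoshida4(x, p, 0.05)
    ```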

  10. Correlations Between the Cosmic X-Ray and Microwave Backgrounds: Constraints on a Cosmological Constant

    NASA Technical Reports Server (NTRS)

    Boughn, S. P.; Crittenden, R. G.; Turok, N. G.

    1998-01-01

    In universes with significant curvature or cosmological constant, cosmic microwave background (CMB) anisotropies are created very recently via the Rees-Sciama or integrated Sachs-Wolfe effects. This causes the CMB anisotropies to become partially correlated with the local matter density (z < 4). We examine the prospects of using the hard (2-10 keV) X-ray background as a probe of the local density and the measured correlation between the HEAO1 A2 X-ray survey and the 4-year COBE-DMR map to obtain a constraint on the cosmological constant. The 95% confidence level upper limit on the cosmological constant is Ω_Λ ≤ 0.5, assuming that the observed fluctuations in the X-ray map result entirely from large scale structure. (This would also imply that the X-rays trace matter with a bias factor of b_x ≈ 5.6 Ω_m^0.53.) This bound is weakened considerably if a large portion of the X-ray fluctuations arise from Poisson noise from unresolved sources. For example, if one assumes that the X-ray bias is b_x = 2, then the 95% confidence level upper limit is weaker, Ω_Λ ≤ 0.7. More stringent limits should be attainable with data from the next generation of CMB and X-ray background maps.

  11. Construction of a high-density genetic map and the X/Y sex-determining gene mapping in spinach based on large-scale markers developed by specific-locus amplified fragment sequencing (SLAF-seq).

    PubMed

    Qian, Wei; Fan, Guiyan; Liu, Dandan; Zhang, Helong; Wang, Xiaowu; Wu, Jian; Xu, Zhaosheng

    2017-04-04

    Cultivated spinach (Spinacia oleracea L.) is one of the most widely cultivated leafy vegetables in the world, and it has a high nutritional value. Spinach is also an ideal plant for investigating the mechanism of sex determination because it is a dioecious species with separate male and female plants. Several reports on sex-linked molecular markers and their localization in spinach have been published; however, only two genetic maps of spinach have been reported so far. The lack of rich and reliable molecular markers and the shortage of high-density linkage maps are important constraints in spinach research. In this study, a high-density genetic map of spinach based on the Specific-locus Amplified Fragment Sequencing (SLAF-seq) technique was constructed, and the sex-determining gene was finely mapped. Bioinformatic analysis yielded 50.75 Gb of data in total, comprising 207.58 million paired-end reads. In total, 145,456 high-quality SLAF markers were obtained, of which 27,800 were polymorphic; after linkage analysis, 4,080 SLAF markers were mapped onto the genetic map. The map spanned 1,125.97 cM with an average distance of 0.31 cM between adjacent marker loci. It was divided into 6 linkage groups corresponding to the number of spinach chromosomes. In addition, the combination of Bulked Segregant Analysis (BSA) with SLAF-seq technology (super-BSA) was employed to generate markers linked to the sex-determining gene. Combined with the high-density genetic map of spinach, the sex-determining gene X/Y was located in linkage group (LG) 4 (66.98 cM-69.72 cM and 75.48 cM-92.96 cM), which may be the ideal region for the sex-determining gene. A high-density genetic map of spinach based on the SLAF-seq technique was constructed with a backcross (BC1) population; it is the highest-density genetic map of spinach reported to date. At the same time, the sex-determining gene X/Y was mapped to LG4 with super-BSA. This map will offer a suitable basis for further study of spinach, such as gene mapping, map-based cloning of specific genes, quantitative trait locus (QTL) mapping and marker-assisted selection (MAS). It will also provide an efficient reference for studies on the mechanism of sex determination in other dioecious plants.

  12. Geodesy- and geology-based slip-rate models for the Western United States (excluding California) national seismic hazard maps

    USGS Publications Warehouse

    Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne

    2014-01-01

    The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameter that controls earthquake-activity rates, compared with previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting for the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.

  13. Shape information from a critical point analysis of calculated electron density maps: application to DNA-drug systems

    NASA Astrophysics Data System (ADS)

    Leherte, L.; Allen, F. H.; Vercauteren, D. P.

    1995-04-01

    A computational method is described for mapping the volume within the DNA double helix accessible to a groove-binding antibiotic, netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to the local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to give a good representation of the electron density function at various resolutions, while at the atomic level the ellipsoid method gives results which are in close agreement with those from the conventional, spherical, van der Waals approach.

  14. Shape information from a critical point analysis of calculated electron density maps: Application to DNA-drug systems

    NASA Astrophysics Data System (ADS)

    Leherte, Laurence; Allen, Frank H.

    1994-06-01

    A computational method is described for mapping the volume within the DNA double helix accessible to the groove-binding antibiotic netropsin. Topological critical point analysis is used to locate maxima in electron density maps reconstructed from crystallographically determined atomic coordinates. The peaks obtained in this way are represented as ellipsoids with axes related to local curvature of the electron density function. Combining the ellipsoids produces a single electron density function which can be probed to estimate effective volumes of the interacting species. Close complementarity between host and ligand in this example shows the method to give a good representation of the electron density function at various resolutions. At the atomic level, the ellipsoid method gives results which are in close agreement with those from the conventional spherical van der Waals approach.
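
    A toy version of the peak-location step can be sketched as follows: find grid maxima above a threshold and use the local second derivatives to set ellipsoid semi-axes (sharper curvature gives a shorter axis). This diagonal-only approximation ignores the mixed second derivatives and the full topological critical-point machinery of the cited method; the threshold and grid are arbitrary illustrative choices.

    ```python
    import numpy as np

    def peak_ellipsoids(density, threshold):
        """Find grid-point maxima of a 3-D density map and estimate ellipsoid
        semi-axes from the (diagonal-only) second derivatives at each peak."""
        peaks = []
        for i in range(1, density.shape[0] - 1):
            for j in range(1, density.shape[1] - 1):
                for k in range(1, density.shape[2] - 1):
                    rho = density[i, j, k]
                    patch = density[i-1:i+2, j-1:j+2, k-1:k+2]
                    if rho < threshold or rho < patch.max():
                        continue
                    # central second differences along the three grid axes
                    curv = np.array([
                        density[i+1, j, k] - 2 * rho + density[i-1, j, k],
                        density[i, j+1, k] - 2 * rho + density[i, j-1, k],
                        density[i, j, k+1] - 2 * rho + density[i, j, k-1],
                    ])
                    axes = 1.0 / np.sqrt(np.maximum(-curv, 1e-9))  # sharper peak -> shorter axis
                    peaks.append(((i, j, k), axes))
        return peaks

    # toy usage: a single Gaussian blob yields one peak with roughly equal axes
    g = np.arange(16)
    xx, yy, zz = np.meshgrid(g, g, g, indexing="ij")
    blob = np.exp(-((xx - 8) ** 2 + (yy - 8) ** 2 + (zz - 8) ** 2) / 8.0)
    peaks = peak_ellipsoids(blob, threshold=0.5)
    ```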

  15. A GIS-based automated procedure for landslide susceptibility mapping by the Conditional Analysis method: the Baganza valley case study (Italian Northern Apennines)

    NASA Astrophysics Data System (ADS)

    Clerici, Aldo; Perego, Susanna; Tellini, Claudio; Vescovi, Paolo

    2006-08-01

    Among the many GIS-based multivariate statistical methods for landslide susceptibility zonation, the so-called “Conditional Analysis method” holds a special place for its conceptual simplicity. In this method, landslide susceptibility is simply expressed as landslide density in correspondence with different combinations of instability-factor classes. To overcome the operational complexity connected to the long, tedious and error-prone sequence of commands required by the procedure, a shell script mainly based on the GRASS GIS was created. The script, starting from a landslide inventory map and a number of factor maps, automatically carries out the whole procedure, resulting in the construction of a map with five landslide susceptibility classes. A validation procedure allows the reliability of the resulting model to be assessed, while the simple mean deviation of the density values in the factor class combinations helps to evaluate the goodness of the landslide density distribution. The procedure was applied to a relatively small basin (167 km²) in the Italian Northern Apennines considering three landslide types, namely rotational slides, flows and complex landslides, for a total of 1,137 landslides, and five factors, namely lithology, slope angle and aspect, elevation and slope/bedding relations. The analysis of the resulting 31 different models obtained by combining the five factors confirms the role of lithology, slope angle and slope/bedding relations in influencing slope stability.
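
    The core of the Conditional Analysis step reduces to a grouped density computation, sketched below with pandas; the column names and toy values are assumptions, and the real procedure runs on GRASS raster maps with the validation step not shown here.

    ```python
    import pandas as pd

    def landslide_density_by_combination(df):
        """Landslide density (landslide cells / total cells) for every unique
        combination of instability-factor classes (column names are assumptions)."""
        factors = ["lithology", "slope_class", "aspect_class"]
        grouped = df.groupby(factors)["is_landslide"].agg(["mean", "size"])
        return grouped.rename(columns={"mean": "density", "size": "n_cells"})

    # toy usage: each row represents one grid cell of the study area
    cells = pd.DataFrame({
        "lithology":    ["clay", "clay", "sandstone", "sandstone", "clay"],
        "slope_class":  [2, 3, 2, 3, 3],
        "aspect_class": ["N", "S", "S", "N", "S"],
        "is_landslide": [1, 1, 0, 0, 1],
    })
    susceptibility = landslide_density_by_combination(cells)
    ```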

  16. Universal calculational recipe for solvent-mediated potential: based on a combination of integral equation theory and density functional theory

    NASA Astrophysics Data System (ADS)

    Zhou, Shiqi

    2004-07-01

    A universal formalism, which enables calculation of the solvent-mediated potential (SMP) between two equal or non-equal solute particles of any shape immersed in a solvent reservoir consisting of atomic particles and/or polymer chains or their mixture, is proposed by importing a density functional theory externally into the Ornstein-Zernike (OZ) equation systems. Provided the size asymmetry of the solvent bath components is moderate, the present formalism can calculate the SMP in any complex fluid at the present stage of development of statistical mechanics, and it therefore avoids the limitations of previous approaches to the SMP. Preliminary calculations indicate the reliability of the present formalism.

  17. Prior-knowledge-based spectral mixture analysis for impervious surface mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jinshui; He, Chunyang; Zhou, Yuyu

    2014-01-03

    In this study, we developed a prior-knowledge-based spectral mixture analysis (PKSMA) to map impervious surfaces by using endmembers derived separately for high- and low-density urban regions. First, an urban area was categorized into high- and low-density urban areas, using a multi-step classification method. Next, in high-density urban areas that were assumed to have only vegetation and impervious surfaces (ISs), the Vegetation-Impervious model (V-I) was used in a spectral mixture analysis (SMA) with three endmembers: vegetation, high albedo, and low albedo. In low-density urban areas, the Vegetation-Impervious-Soil model (V-I-S) was used in an SMA analysis with four endmembers: high albedo, low albedo, soil, and vegetation. The fraction of IS with high and low albedo in each pixel was combined to produce the final IS map. The root mean-square error (RMSE) of the IS map produced using PKSMA was about 11.0%, compared to 14.52% using four-endmember SMA. Particularly in high-density urban areas, PKSMA (RMSE = 6.47%) showed better performance than four-endmember SMA (15.91%). The results indicate that PKSMA can improve IS mapping compared to traditional SMA by using appropriately selected endmembers and is particularly strong in high-density urban areas.
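
    The per-pixel unmixing at the core of SMA can be sketched as constrained least squares. The code below uses non-negative least squares with a soft sum-to-one constraint and made-up endmember spectra; it is not the PKSMA workflow itself, which first separates high- and low-density urban areas.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def unmix_pixel(pixel, endmembers, weight=1e3):
        """Linear spectral unmixing with non-negativity and a soft sum-to-one
        constraint. `endmembers` has shape (n_bands, n_endmembers)."""
        A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
        b = np.append(pixel, weight)                 # augmented row enforces sum ~ 1
        fractions, _ = nnls(A, b)
        return fractions

    # toy usage: 3-endmember V-I-type model (vegetation, high albedo, low albedo)
    endmembers = np.array([[0.05, 0.30, 0.04],
                           [0.45, 0.35, 0.05],
                           [0.30, 0.40, 0.06],
                           [0.25, 0.45, 0.07]])      # 4 hypothetical spectral bands
    pixel = 0.6 * endmembers[:, 0] + 0.4 * endmembers[:, 1]
    fractions = unmix_pixel(pixel, endmembers)       # ~[0.6, 0.4, 0.0]
    ```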

  18. Mapping out the QCD phase transition in multiparticle production

    NASA Astrophysics Data System (ADS)

    Kabana, Sonja; Minkowski, Peter

    2001-04-01

    We analyse multiparticle production in a thermal framework for seven central nucleus + nucleus collisions, e+ + e- annihilation into hadrons on the Z resonance and four hadronic reactions (p + p and p + pbar, with partial centrality selection), with centre of mass energies ranging from √(s) = 2.6 GeV (per nucleon pair) to 1.8 TeV. Thermodynamic parameters at chemical freeze-out (temperature and baryon and strangeness fugacities) are obtained from appropriate fits, generally improving in quality for reactions subjected to centrality cuts. All systems with non-vanishing fugacities are extrapolated along trajectories of equal energy density, density and entropy density to zero fugacities. The so-obtained temperatures extrapolated to zero fugacities as a function of initial energy density ε_in universally show a strong rise followed by a saturating limit of T_lim = 155 ± 6 ± 20 MeV. We interpret this behaviour as mapping out the boundary between quark gluon plasma and hadronic phases. The ratio of strange antiquarks to light ones as a function of the initial energy density ε_in shows the same behaviour as the temperature, saturating at a value of 0.365 ± 0.033 ± 0.07. No distinctive feature of 'strangeness enhancement' is seen for heavy ion collisions relative to hadronic and leptonic reactions, when compared at the same initial energy density.

  19. Comparison of SOM point densities based on different criteria.

    PubMed

    Kohonen, T

    1999-11-15

    Point densities of model (codebook) vectors in self-organizing maps (SOMs) are evaluated in this article. For a few one-dimensional SOMs with finite grid lengths and a given probability density function of the input, the numerically exact point densities have been computed. The point density derived from the SOM algorithm turned out to be different from that minimizing the SOM distortion measure, showing that the model vectors produced by the basic SOM algorithm in general do not exactly coincide with the optimum of the distortion measure. A new computing technique based on the calculus of variations has been introduced. It was applied to the computation of point densities derived from the distortion measure for both the classical vector quantization and the SOM with general but equal dimensionality of the input vectors and the grid, respectively. The power laws in the continuum limit obtained in these cases were found to be identical.
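
    For readers who want to experiment with the point densities discussed, a minimal sequential 1-D SOM is easy to write. The sketch below is a generic textbook version (hard neighborhood, linearly decaying parameters), not the exact setup of the article, and the input density p(x) = 2x is only an example.

    ```python
    import numpy as np

    def train_1d_som(samples, n_units=20, n_iter=20000, seed=0):
        """Basic sequential SOM on a 1-D grid with 1-D inputs; returns the codebook."""
        rng = np.random.default_rng(seed)
        w = np.sort(rng.choice(samples, n_units))          # codebook (model) vectors
        for t in range(n_iter):
            x = samples[rng.integers(len(samples))]
            c = np.argmin(np.abs(w - x))                   # best-matching unit
            alpha = 0.1 * (1.0 - t / n_iter)               # decaying learning rate
            radius = max(1, int(3 * (1.0 - t / n_iter)))   # shrinking neighborhood
            for i in range(max(0, c - radius), min(n_units, c + radius + 1)):
                w[i] += alpha * (x - w[i])
        return w

    # toy usage: input density p(x) = 2x on [0, 1], sampled by inverse-CDF;
    # the spacing of the trained codebook reflects the SOM's point density.
    rng = np.random.default_rng(1)
    samples = np.sqrt(rng.random(50_000))
    codebook = train_1d_som(samples)
    ```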

  20. Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Data Processing, Sky Maps, and Basic Results

    NASA Technical Reports Server (NTRS)

    Weiland, J.L.; Hill, R.S.; Odegard, N.; Larson, D.; Bennett, C.L.; Dunkley, J.; Jarosik, N.; Page, L.; Spergel, D.N.; Halpern, M.; ...

    2008-01-01

    The Wilkinson Microwave Anisotropy Probe (WMAP) is a Medium-Class Explorer (MIDEX) satellite aimed at elucidating cosmology through full-sky observations of the cosmic microwave background (CMB). The WMAP full-sky maps of the temperature and polarization anisotropy in five frequency bands provide our most accurate view to date of conditions in the early universe. The multi-frequency data facilitate the separation of the CMB signal from foreground emission arising both from our Galaxy and from extragalactic sources. The CMB angular power spectrum derived from these maps exhibits a highly coherent acoustic peak structure which makes it possible to extract a wealth of information about the composition and history of the universe, as well as the processes that seeded the fluctuations. WMAP data have played a key role in establishing ΛCDM as the new standard model of cosmology (Bennett et al. 2003; Spergel et al. 2003; Hinshaw et al. 2007; Spergel et al. 2007): a flat universe dominated by dark energy, supplemented by dark matter and atoms with density fluctuations seeded by a Gaussian, adiabatic, nearly scale invariant process. The basic properties of this universe are determined by five numbers: the density of matter, the density of atoms, the age of the universe (or equivalently, the Hubble constant today), the amplitude of the initial fluctuations, and their scale dependence. By accurately measuring the first few peaks in the angular power spectrum, WMAP data have enabled the following accomplishments: Showing that the dark matter must be non-baryonic and interact only weakly with atoms and radiation. The WMAP measurement of the dark matter density puts important constraints on supersymmetric dark matter models and on the properties of other dark matter candidates. With five years of data and a better determination of our beam response, this measurement has been significantly improved. Precise determination of the density of atoms in the universe. The agreement between the atomic density derived from WMAP and the density inferred from the deuterium abundance is an important test of the standard big bang model. Determination of the acoustic scale at redshift z = 1090. Similarly, the recent measurement of baryon acoustic oscillations (BAO) in the galaxy power spectrum (Eisenstein et al. 2005) has determined the acoustic scale at redshift z ≈ 0.35. When combined, these standard rulers accurately measure the geometry of the universe and the properties of the dark energy. These data require a nearly flat universe dominated by dark energy consistent with a cosmological constant. Precise determination of the Hubble constant, in conjunction with BAO observations. Even when allowing curvature (Ω₀ ≠ 1) and a free dark energy equation of state (w ≠ -1), the acoustic data determine the Hubble constant to within 3%. The measured value is in excellent agreement with independent results from the Hubble Key Project (Freedman et al. 2001), providing yet another important consistency test for the standard model. Significant constraint of the basic properties of the primordial fluctuations. The anti-correlation seen in the temperature/polarization (TE) correlation spectrum on 4° scales implies that the fluctuations are primarily adiabatic and rules out defect models and isocurvature models as the primary source of fluctuations (Peiris et al. 2003).

  1. Combined effect of pulse density and grid cell size on predicting and mapping aboveground carbon in fast‑growing Eucalyptus forest plantation using airborne LiDAR data

    Treesearch

    Carlos Alberto Silva; Andrew Thomas Hudak; Carine Klauberg; Lee Alexandre Vierling; Carlos Gonzalez‑Benecke; Samuel de Padua Chaves Carvalho; Luiz Carlos Estraviz Rodriguez; Adrian Cardil

    2017-01-01

    LiDAR measurements can be used to predict and map aboveground carbon (AGC) across variable-age Eucalyptus plantations with adequate levels of precision and accuracy using 5 pulses m⁻² and a grid cell size of 5 m. The promising results for AGC modeling in this study will allow for greater confidence in comparing AGC estimates with varying LiDAR sampling densities for Eucalyptus plantations...

  2. Investigation of a complete sample of flat spectrum radio sources from the S5 survey

    NASA Astrophysics Data System (ADS)

    Eckart, A.; Witzel, A.; Biermann, P.; Johnston, K. J.; Simon, R.; Schalinski, C.; Kuhr, H.

    1986-11-01

    An analysis of 13 extragalactic sources of the S5 survey with flux densities greater than or equal to 1 Jy at 4990 MHz, mapped with milliarcsecond resolution at 1.6 and 5 GHz by means of VLBI, is presented. All sources appear to display multiple components dominated in flux density at 6 cm by a core component which is self-absorbed at 18 cm. Comparison of the measured to predicted X-ray flux density of the core radio components suggests that all sources should display bulk relativistic motion with small angles to the line of sight, and four sources show rapid changes in their radio structures which can be interpreted as apparent superluminal motion.

  3. Improving the Accuracy of Mapping Urban Vegetation Carbon Density by Combining Shadow Remove, Spectral Unmixing Analysis and Spatial Modeling

    NASA Astrophysics Data System (ADS)

    Qie, G.; Wang, G.; Wang, M.

    2016-12-01

    Mixed pixels and shadows due to buildings in urban areas impede accurate estimation and mapping of city vegetation carbon density. In most previous studies these factors are ignored, resulting in underestimation of city vegetation carbon density. In this study we present an integrated methodology to improve the accuracy of mapping city vegetation carbon density. Firstly, we applied a linear shadow remove analysis (LSRA) on remotely sensed Landsat 8 images to reduce the shadow effects on carbon estimation. Secondly, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), and applied and compared the integrated models on shadow-removed images to map vegetation carbon density. This methodology was examined in Shenzhen City of Southeast China. A data set from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables that statistically significantly improved the fit of the models to the data and reduced the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from LSUA was then added into the models as an important independent variable. The estimates obtained were evaluated using a cross-validation method. Our results showed that higher accuracies were obtained from the integrated models compared with those using traditional methods that ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: Urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images
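
    As an illustration of the integration step described above, the minimal sketch below (not the authors' code) derives a per-pixel vegetation fraction by linear spectral unmixing and adds it as a predictor to a k-nearest-neighbour regression of carbon density; the endmember spectra, band count and plot data are synthetic stand-ins.

      # Minimal sketch (not the authors' code): estimate a per-pixel vegetation
      # fraction by linear spectral unmixing, then add it as a predictor in a
      # k-nearest-neighbour regression of vegetation carbon density.
      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      rng = np.random.default_rng(0)

      # Hypothetical endmember spectra (rows: vegetation, soil, impervious) for 6 bands.
      endmembers = rng.uniform(0.05, 0.6, size=(3, 6))

      def unmix(pixel_spectrum):
          """Least-squares abundance estimate, clipped and renormalised to sum to 1."""
          abund, *_ = np.linalg.lstsq(endmembers.T, pixel_spectrum, rcond=None)
          abund = np.clip(abund, 0, None)
          return abund / (abund.sum() + 1e-12)

      # Synthetic training data standing in for the 175 field plots.
      spectra = rng.uniform(0.05, 0.6, size=(175, 6))
      veg_fraction = np.array([unmix(s)[0] for s in spectra])
      carbon = 40 * veg_fraction + rng.normal(0, 2, size=175)   # fake carbon density (Mg/ha)

      X = np.column_stack([spectra, veg_fraction])               # bands + unmixing fraction
      model = KNeighborsRegressor(n_neighbors=5).fit(X, carbon)

      new_pixel = rng.uniform(0.05, 0.6, size=6)
      features = np.append(new_pixel, unmix(new_pixel)[0])
      print("predicted carbon density:", model.predict(features[None, :])[0])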

  4. Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.

    NASA Astrophysics Data System (ADS)

    Giridhar, K.

    The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed-forward neural network-based equalizer, and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery is required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain-independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers makes them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by-symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI. Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal decision-feedback mechanism is introduced to truncate the channel memory "seen" by the MAPSD section. Also, simpler gradient-based updates for the channel estimates, and a metric pruning technique are used to further reduce the MAPSD complexity. Spatial diversity MAP combiners are developed to enhance the error rate performance and combat channel fading. As a first application of the MAPSD algorithm, dual-mode recovery techniques for TDMA (time-division multiple access) mobile radio signals are presented. Combined estimation of the symbol timing and the multipath parameters is proposed, using an auxiliary extended Kalman filter during the training cycle, and then tracking of the fading parameters is performed during the data cycle using the blind MAPSD algorithm. For the second application, a single-input receiver is employed to jointly recover cochannel narrowband signals. Assuming known channels, this two-stage joint MAPSD (JMAPSD) algorithm is compared to the optimal joint maximum likelihood sequence estimator, and to the joint decision-feedback detector. A blind MAPSD algorithm for the joint recovery of cochannel signals is also presented. Computer simulation results are provided to quantify the performance of the various algorithms proposed in this dissertation.
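
    The following is a deliberately simplified sketch of the MAP detection idea summarized above, not the dissertation's blind algorithm: a Kalman filter estimates a two-tap channel impulse response from a known training prefix, after which each BPSK symbol is detected by MAP (equal priors, so maximum likelihood) with decision feedback on the previous symbol. The channel taps, noise level and training length are assumptions chosen for illustration.

      # Simplified sketch (not the dissertation's blind algorithm): a Kalman filter
      # estimates a 2-tap channel impulse response from a known training prefix,
      # after which symbols are detected one at a time by MAP (equal priors -> ML)
      # with decision feedback on the previous symbol. BPSK, additive Gaussian noise.
      import numpy as np

      rng = np.random.default_rng(1)
      true_h = np.array([1.0, 0.6])                 # unknown channel taps
      sigma2 = 0.05                                 # noise variance
      x = rng.choice([-1.0, 1.0], size=300)         # transmitted BPSK symbols
      y = np.convolve(x, true_h)[:len(x)] + rng.normal(0, np.sqrt(sigma2), len(x))

      # --- Kalman channel estimation over a 50-symbol training prefix ---
      h_hat, P = np.zeros(2), np.eye(2)
      for k in range(1, 50):
          s = np.array([x[k], x[k - 1]])            # regressor of known symbols
          S = s @ P @ s + sigma2                    # innovation variance
          K = P @ s / S                             # Kalman gain
          h_hat += K * (y[k] - s @ h_hat)
          P -= np.outer(K, s) @ P

      # --- MAP symbol-by-symbol detection with decision feedback ---
      decisions = [x[49]]                           # seed feedback with last training symbol
      for k in range(50, len(x)):
          cands = np.array([-1.0, 1.0])
          preds = h_hat[0] * cands + h_hat[1] * decisions[-1]
          decisions.append(cands[np.argmin((y[k] - preds) ** 2)])

      errors = np.mean(np.array(decisions[1:]) != x[50:])
      print(f"estimated taps: {h_hat}, symbol error rate: {errors:.3f}")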

  5. ERTS-1 image enhancement by optically combining density slices

    NASA Technical Reports Server (NTRS)

    Tapper, G. O.; Pease, R. W.

    1973-01-01

    The technique of density slicing using a photographic film and its application to enhancement of ERTS-1 imagery has proved to be useful for mapping variegated areal phenomena and provides a useful supplement to the I2S MiniAddcol viewing system. The initial experiments conducted with this film were encouraging, and indicated that this technique of density slicing using readily accessible darkroom facilities and simple darkroom procedures allows rapid, accurate, and facile interpretation of certain areal phenomena from the imagery. The distribution of the tree yucca, Yucca brevifolia Jaegeriana, in the eastern Mojave Desert of Southern California and southern Nevada was used as an example to test the accuracy of the technique for mapping purposes. The distribution was mapped at a relatively high level of accuracy.

  6. A generalization of algebraic surface drawing

    NASA Technical Reports Server (NTRS)

    Blinn, J. F.

    1982-01-01

    An implicit surface is a mathematical description of three-dimensional space defined in terms of all points which satisfy some equation F(x, y, z) = 0. This form is ideal for space-shaded picture drawing, where the coordinates are substituted for x and y and the equation is solved for z. A new algorithm is presented which is applicable to functional forms other than those of first- and second-order polynomial functions, such as the summation of several Gaussian density distributions. The algorithm was created in order to model electron density maps of molecular structures, but is shown to be capable of generating shapes of esthetic interest.
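
    A minimal sketch of the idea, under assumed parameters (two Gaussian "atoms" with arbitrary strength, falloff and threshold): define F as a sum of Gaussian density contributions minus a threshold, and for each image-plane (x, y) solve F(x, y, z) = 0 for z by scanning the viewing ray and refining with bisection.

      # Minimal sketch of Blinn-style "blobby" surfaces: F(x, y, z) = sum of Gaussian
      # density contributions minus a threshold, solved for z per (x, y) by bisection.
      import numpy as np

      centers = np.array([[0.0, 0.0, 0.0], [0.9, 0.2, 0.3]])   # two "atoms" (illustrative)
      strength, falloff, threshold = 1.0, 4.0, 0.25

      def field(p):
          d2 = np.sum((centers - p) ** 2, axis=1)
          return strength * np.exp(-falloff * d2).sum() - threshold

      def surface_z(x, y, z_near=-2.0, z_far=2.0, iters=40):
          """Return the z where the ray through (x, y) first crosses the surface, or None."""
          zs = np.linspace(z_near, z_far, 200)
          vals = [field(np.array([x, y, z])) for z in zs]
          for a, b, fa, fb in zip(zs[:-1], zs[1:], vals[:-1], vals[1:]):
              if fa < 0 <= fb or fb < 0 <= fa:          # sign change brackets a root
                  for _ in range(iters):                 # bisection refinement
                      m = 0.5 * (a + b)
                      if (field(np.array([x, y, m])) < 0) == (fa < 0):
                          a = m
                      else:
                          b = m
                  return 0.5 * (a + b)
          return None

      print(surface_z(0.0, 0.0))   # depth of the surface below the first atom centre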

  7. A high density physical map of chromosome 1BL supports evolutionary studies, map-based cloning and sequencing in wheat

    PubMed Central

    2013-01-01

    Background As for other major crops, achieving a complete wheat genome sequence is essential for the application of genomics to breeding new and improved varieties. To overcome the complexities of the large, highly repetitive and hexaploid wheat genome, the International Wheat Genome Sequencing Consortium established a chromosome-based strategy that was validated by the construction of the physical map of chromosome 3B. Here, we present improved strategies for the construction of highly integrated and ordered wheat physical maps, using chromosome 1BL as a template, and illustrate their potential for evolutionary studies and map-based cloning. Results Using a combination of novel high throughput marker assays and an assembly program, we developed a high quality physical map representing 93% of wheat chromosome 1BL, anchored and ordered with 5,489 markers including 1,161 genes. Analysis of the gene space organization and evolution revealed that gene distribution and conservation along the chromosome results from the superimposition of the ancestral grass and recent wheat evolutionary patterns, leading to a peak of synteny in the central part of the chromosome arm and an increased density of non-collinear genes towards the telomere. With a density of about 11 markers per Mb, the 1BL physical map provides 916 markers, including 193 genes, for fine mapping the 40 QTLs mapped on this chromosome. Conclusions Here, we demonstrate that high marker density physical maps can be developed in complex genomes such as wheat to accelerate map-based cloning, gain new insights into genome evolution, and provide a foundation for reference sequencing. PMID:23800011

  8. THE EINSTEIN CROSS: CONSTRAINT ON DARK MATTER FROM STELLAR DYNAMICS AND GRAVITATIONAL LENSING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van de Ven, Glenn; Falcon-Barroso, Jesus; McDermid, Richard M.

    2010-08-20

    We present two-dimensional line-of-sight stellar kinematics of the lens galaxy in the Einstein Cross, obtained with the GEMINI 8 m telescope, using the GMOS integral-field spectrograph. The stellar kinematics extend to a radius of 4'' (with 0.2'' spaxels), covering about two-thirds of the effective (or half-light) radius R_e ≈ 6'' of this early-type spiral galaxy at redshift z_l ≈ 0.04, of which the bulge is lensing a background quasar at redshift z_s ≈ 1.7. The velocity map shows regular rotation up to ~100 km s⁻¹ around the minor axis of the bulge, consistent with axisymmetry. The velocity dispersion map shows a weak gradient increasing toward a central (R < 1'') value of σ_0 = 170 ± 9 km s⁻¹. We deproject the observed surface brightness from Hubble Space Telescope imaging to obtain a realistic luminosity density of the lens galaxy, which in turn is used to build axisymmetric dynamical models that fit the observed kinematic maps. We also construct a gravitational lens model that accurately fits the positions and relative fluxes of the four quasar images. We combine these independent constraints from stellar dynamics and gravitational lensing to study the total mass distribution in the inner parts of the lens galaxy. We find that the resulting luminous and total mass distributions are nearly identical around the Einstein radius R_E = 0.89'', with a slope that is close to isothermal, but which becomes shallower toward the center if indeed mass follows light. The dynamical model fits to the observed kinematic maps result in a total mass-to-light ratio Υ_dyn = 3.7 ± 0.5 Υ_⊙,I (in the I band). This is consistent with the Einstein mass M_E = 1.54 × 10^10 M_⊙ divided by the (projected) luminosity within R_E, which yields a total mass-to-light ratio of Υ_E = 3.4 Υ_⊙,I, with an error of at most a few percent. We estimate from stellar population model fits to colors of the lens galaxy a stellar mass-to-light ratio Υ_* from 2.8 to 4.1 Υ_⊙,I. Although a constant dark matter fraction of 20% is not excluded, dark matter may play no significant role in the bulge of this ~L_* early-type spiral galaxy.

  9. Global architecture of gestational diabetes research: density-equalizing mapping studies and gender analysis.

    PubMed

    Brüggmann, Dörthe; Richter, Theresa; Klingelhöfer, Doris; Gerber, Alexander; Bundschuh, Matthias; Jaque, Jenny; Groneberg, David A

    2016-04-04

    Gestational diabetes mellitus (GDM) is associated with substantial morbidity for mothers and their offspring. While clinical and basic research activities on this important disease grow constantly, there is no concise analysis of the global architecture of GDM research. Hence, it was the objective of this study to assess the global scientific performance chronologically, geographically and in relation to existing research networks and the gender distribution of publishing authors. On the basis of the New Quality and Quantity Indices in Science (NewQIS) platform, scientometric methods were combined with modern visualizing techniques such as density-equalizing mapping, and the Web of Science database was used to assess GDM-related entries from 1900 to 2012. Twelve thousand five hundred four GDM-related publications were identified and analyzed. The USA (4295 publications) and the UK (1354 publications) dominated the field concerning research activity, overall citations and country-specific Hirsch index, which quantifies the impact of a country's published research on the scientific community. Semi-qualitative indices such as country-specific citation rates ranked New Zealand and the UK at top positions. Annual collaborative publications increased steeply between 1990 and 2012 (from 71 to 1157, respectively). Subject category analysis pointed to minor interest in public health issues within GDM research. Gender analysis in terms of publication authorship revealed a clear dominance of the male gender until 2005; then a trend towards gender equity started and the activity of female scientists grew visibly in many countries. The country-specific gender analysis revealed large differences, i.e. female scientists dominated the scientific output in the USA, whereas the majority of research was published by male authors in countries such as Japan. This study provides the first global sketch of GDM research architecture. While North-American and Western-European countries dominated the GDM-related scientific landscape, a disparity exists in terms of research output between developed and low-resource countries. Since GDM is linked to considerable mortality and morbidity of mothers and their offspring and constitutes a tremendous burden for the healthcare systems in underserved countries, our findings emphasize the need to address disparities by fostering research endeavors, public health programs and collaborative efforts in these nations.
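
    For readers unfamiliar with the country-specific Hirsch index used above, a tiny sketch of its computation follows (the citation counts are made up):

      # Tiny sketch of the Hirsch index (h-index) used as a country-specific
      # indicator: h is the largest h such that h publications have >= h citations.
      def h_index(citations):
          cites = sorted(citations, reverse=True)
          return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

      # Illustrative (made-up) citation counts for one country's GDM publications.
      print(h_index([120, 44, 30, 18, 9, 6, 3, 1, 0]))   # -> 6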

  10. A high-density transcript linkage map with 1,845 expressed genes positioned by microarray-based Single Feature Polymorphisms (SFP) in Eucalyptus

    PubMed Central

    2011-01-01

    Background Technological advances are progressively increasing the application of genomics to a wider array of economically and ecologically important species. High-density maps enriched for transcribed genes facilitate the discovery of connections between genes and phenotypes. We report the construction of a high-density linkage map of expressed genes for the heterozygous genome of Eucalyptus using Single Feature Polymorphism (SFP) markers. Results SFP discovery and mapping were achieved using pseudo-testcross screening and selective mapping to simultaneously optimize linkage mapping and microarray costs. SFP genotyping was carried out by hybridizing complementary RNA, prepared from the xylem of 4.5-year-old trees, to an SFP array containing 103,000 25-mer oligonucleotide probes representing 20,726 unigenes derived from a modest-sized expressed sequence tag collection. An SFP-mapping microarray with 43,777 selected candidate SFP probes representing 15,698 genes was subsequently designed and used to genotype SFPs in a larger subset of the segregating population drawn by selective mapping. A total of 1,845 genes were mapped, with 884 of them ordered with high likelihood support on a framework map anchored to 180 microsatellites with an average density of 1.2 cM. Using more probes per unigene increased by two-fold the likelihood of detecting segregating SFPs, eventually resulting in more genes mapped. In silico validation showed that 87% of the SFPs map to the expected location on the 4.5X draft sequence of the Eucalyptus grandis genome. Conclusions The Eucalyptus 1,845-gene map is the most highly enriched map for transcriptional information for any forest tree species to date. It represents a major improvement on the number of genes previously positioned on Eucalyptus maps and provides an initial glimpse at the gene space of this global tree genome. A general protocol is proposed to build high-density transcript linkage maps in less characterized plant species by SFP genotyping, with a concurrent objective of reducing microarray costs. High-density gene-rich maps represent a powerful resource to assist gene discovery endeavors when used in combination with QTL and association mapping, and should be especially valuable to assist the assembly of reference genome sequences soon to come for several plant and animal species. PMID:21492453

  11. High-resolution carbon mapping on the million-hectare Island of Hawaii

    Treesearch

    Gregory P. Asner; R. Flint Hughes; Joseph Mascaro; Amanda L. Uowolo; David E. Knapp; James Jacobson; Ty Kennedy-Bowdoin; John K . Clark

    2011-01-01

    Current markets and international agreements for reducing emissions from deforestation and forest degradation (REDD) rely on carbon (C) monitoring techniques. Combining field measurements, airborne light detection and ranging (LiDAR)-based observations, and satellite-based imagery, we developed a 30-meter-resolution map of aboveground C density spanning 40 vegetation...

  12. Snow survey and vegetation growth in high mountains (Swiss Alps)

    NASA Technical Reports Server (NTRS)

    Haefner, H. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. A method for mapping snow over large areas was developed, combining the ability of a Quantimet (QTM 720) to evaluate the exact density level of the snow cover for each individual image (or a selected section of the photo) with the higher resolution of photographic techniques. The density level established on the monitor by visual control is used as a reference for the exposure time of a lithographic film, producing a clear tonal separation of all snow- and ice-covered areas from uncovered land in black and white. The data are projected onto special maps at 1:500,000 or 1:100,000 showing only the contour lines and the hydrographic features. The areal extent of the snow cover may be calculated directly with the QTM 720 or on the map. Bands 4 and 5 provide the most accurate results for mapping snow. Using all four bands, a separation of an old melting snow cover from a new one is possible. Regional meteorological studies combining ERTS-1 imagery and conventional sources describe the synoptic evolution of meteorological systems over the Alps.

  13. Prioritizing landscapes for longleaf pine conservation

    USGS Publications Warehouse

    Grand, James B.; Kleiner, Kevin J.

    2016-01-01

    We developed a spatially explicit model and map, as a decision support tool (DST), to aid conservation agencies creating or maintaining open pine ecosystems. The tool identified areas that are likely to provide the greatest benefit to focal bird populations based on a comprehensive landscape analysis. We used NLCD 2011, SSURGO, and SEGAP data to map the density of desired resources for open pine ecosystems and for six focal species of birds and two reptiles within the historic range of longleaf pine east of the Mississippi River. Binary rasters were created of sites with desired characteristics such as land form, hydrology, land use and land cover, soils, potential habitat for focal species, and putative source populations of focal species. Each raster was smoothed using a kernel density estimator. Rasters were combined and scaled to map priority locations for the management of each focal species. Species' rasters were combined and scaled to provide maps of overall priority for birds and for birds and reptiles. The spatial data can be used to identify high-priority areas for conservation or to compare areas under consideration for maintenance or creation of open pine ecosystems.
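
    A minimal sketch of the mapping step described above, with synthetic rasters and an assumed kernel width: smooth each binary raster with a Gaussian kernel density estimator, rescale to 0-1, and combine the results into one priority surface.

      # Minimal sketch of the decision-support mapping step: smooth binary rasters
      # of desired characteristics with a kernel density estimator, rescale each to
      # 0-1, and combine them into a single priority surface. Inputs are synthetic.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(2)
      shape = (200, 200)

      # Binary rasters, e.g. suitable soils, suitable land cover, source populations.
      rasters = [rng.random(shape) < p for p in (0.10, 0.15, 0.05)]

      def smooth_and_scale(binary, sigma=5.0):
          dens = gaussian_filter(binary.astype(float), sigma=sigma)   # kernel smoothing
          return (dens - dens.min()) / (dens.max() - dens.min() + 1e-12)

      priority = np.mean([smooth_and_scale(r) for r in rasters], axis=0)
      print("priority surface range:", priority.min(), priority.max())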

  14. A novel method to create high density stratification with matching refractive index for optical flow investigations

    NASA Astrophysics Data System (ADS)

    Krohn, Benedikt; Manera, Annalisa; Petrov, Victor

    2018-04-01

    Turbulent mixing in stratified environments represents a challenging task in experimental turbulence research, especially when large density gradients are desired. When optical measurement techniques like particle image velocimetry (PIV) are applied to stratified liquids, it is common practice to combine two aqueous solutions with different density but equal refractive index to suppress particle image deflections. Although refractive index matching (RIM) was developed in the late 1970s, the previously achieved limit of a 4% density ratio has not been surpassed to date. In the present work, we report a methodology, based on the behavior of excess properties and their change upon mixing in a multicomponent system, that allows RIM for solutions with higher density differences. The methodology is then successfully demonstrated using a ternary combination of water, isopropanol and glycerol, for which RIM at a density ratio of 8.6% has been achieved. Qualitative PIV results of a turbulent buoyant jet with an 8.6% density ratio are shown.

  15. Spatial and Global Sensory Suppression Mapping Encompassing the Central 10° Field in Anisometropic Amblyopia.

    PubMed

    Li, Jingjing; Li, Jinrong; Chen, Zidong; Liu, Jing; Yuan, Junpeng; Cai, Xiaoxiao; Deng, Daming; Yu, Minbin

    2017-01-01

    We investigated the efficacy of a novel dichoptic mapping paradigm for evaluating visual function in anisometropic amblyopes. Using standard clinical measures of visual function (visual acuity, stereo acuity, Bagolini lenses, and neutral density filters) and a novel quantitative mapping technique, 26 patients with anisometropic amblyopia (mean age = 19.15 ± 4.42 years) were assessed. Two additional psychophysical interocular suppression measurements were obtained with dichoptic global motion coherence and binocular phase combination tasks. Luminance reduction was achieved by placing neutral density filters in front of the normal eye. Our study revealed that suppression changes across the central 10° visual field with mean luminance modulation in amblyopes as well as in normal controls. Using simulation and elimination of interocular suppression, we identified a novel method to effectively reflect the distribution of suppression in anisometropic amblyopia. Additionally, the new quantitative mapping technique was in good agreement with conventional clinical measures, such as interocular acuity difference (P < 0.001) and stereo acuity (P = 0.005). There was good consistency between the results of interocular suppression obtained with the dichoptic mapping paradigm and the results of the other two psychophysical methods (suppression mapping versus binocular phase combination, P < 0.001; suppression mapping versus global motion coherence, P = 0.005). The dichoptic suppression mapping technique is an effective method to represent impaired visual function in patients with anisometropic amblyopia. It offers potential for "micro-" antisuppression mapping tests and therapies for amblyopia.

  16. Improving snow density estimation for mapping SWE with Lidar snow depth: assessment of uncertainty in modeled density and field sampling strategies in NASA SnowEx

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Smyth, E.; Small, E. E.

    2017-12-01

    The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in the Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. Here we present initial field data analyses and modeling results over the Colorado SnowEx domain from the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.

  17. Effects of hydrogen peroxide, modified atmosphere and their combination on quality of minimally processed cluster beans.

    PubMed

    Waghmare, Roji B; Annapure, Uday S

    2017-10-01

    The aim of this study was to determine the potential of hydrogen peroxide (H₂O₂) and modified atmosphere packaging (MAP) to maintain the quality of fresh-cut cluster beans. Fresh-cut cluster beans were dipped in a solution of 2% H₂O₂ for 2 min, packed in a modified atmosphere (5% O₂, 10% CO₂, 85% N₂) and stored in polypropylene bags at 5 °C for 35 days. Passive MAP was created by the fresh-cut cluster beans consuming O₂ and producing CO₂. The combined effect of H₂O₂ and MAP on physico-chemical attributes (headspace gas, weight loss, chlorophyll, hardness and color), microbial quality (mesophilic aerobes and yeasts and molds) and sensory quality was studied. Chemical treatment and MAP are both equally effective in extending the shelf life at 5 °C to 28 days. Hence, MAP can be an alternative to chemical treatment to achieve a shelf life of 28 days for fresh-cut cluster beans. Control samples, without chemical treatment or modified atmosphere, stored at 5 °C were spoiled after 14 days. Samples given chemical treatment followed by MAP underwent the minimum changes in weight, chlorophyll, hardness and color of fresh-cut cluster beans. The combination treatment gave a storage life of 35 days.

  18. LOFAR observations of the quiet solar corona

    NASA Astrophysics Data System (ADS)

    Vocks, C.; Mann, G.; Breitling, F.; Bisi, M. M.; Dąbrowski, B.; Fallows, R.; Gallagher, P. T.; Krankowski, A.; Magdalenić, J.; Marqué, C.; Morosan, D.; Rucker, H.

    2018-06-01

    Context. The quiet solar corona emits meter-wave thermal bremsstrahlung. Coronal radio emission can only propagate above that radius, Rω, where the local plasma frequency equals the observing frequency. The radio interferometer LOw Frequency ARray (LOFAR) observes solar radio emission in its low band (10-90 MHz), originating from the middle and upper corona. Aims: We present the first solar aperture synthesis imaging observations in the low band of LOFAR at 12 frequencies, each separated by 5 MHz. From each of these radio maps we infer Rω and a scale height temperature, T. These results can be combined into coronal density and temperature profiles. Methods: We derived radial intensity profiles from the radio images. We focus on polar directions with a simpler, radial magnetic field structure. Intensity profiles were modeled by ray-tracing simulations, following wave paths through the refractive solar corona and including free-free emission and absorption. We fitted model profiles to observations with Rω and T as fitting parameters. Results: In the low corona, Rω < 1.5 solar radii, we find high scale height temperatures up to 2.2 × 10⁶ K, much higher than the brightness temperatures usually found there. But if all Rω values are combined into a density profile, this profile can be fitted by a hydrostatic model with the same temperature, thereby confirming this temperature with two independent methods. The density profile deviates from the hydrostatic model above 1.5 solar radii, indicating the transition into the solar wind. Conclusions: These results demonstrate what information can be gleaned from solar low-frequency radio images. The scale height temperatures we find are not only higher than brightness temperatures, but also than temperatures derived from coronagraph or extreme ultraviolet (EUV) data. Future observations will provide continuous frequency coverage, eliminating the need for local hydrostatic density models in the data analysis and enabling the analysis of more complex coronal structures such as those with closed magnetic fields.
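
    A small sketch relating the quantities in this abstract, under assumed values for the coronal base density and mean molecular weight: for an isothermal hydrostatic corona, n_e(r) = n_0 exp[A(1/r - 1)] with A = G M_sun μ m_p / (k_B T R_sun), and Rω is the radius where the plasma frequency f_p ≈ 8.98 kHz √(n_e / cm⁻³) equals the observing frequency.

      # Small sketch relating the abstract's quantities: an isothermal hydrostatic
      # corona n_e(r) = n_0 * exp[A * (1/r - 1)] (r in solar radii), with the plasma
      # frequency f_p ~ 8.98 kHz * sqrt(n_e [cm^-3]); R_omega is where f_p equals the
      # observing frequency. n_0 and T are assumed, illustrative values.
      import numpy as np
      from scipy.optimize import brentq

      G, M_sun, R_sun = 6.674e-11, 1.989e30, 6.957e8      # SI units
      k_B, m_p, mu = 1.381e-23, 1.673e-27, 0.6

      def density(r, T, n0=1e8):
          """Electron density (cm^-3) at r solar radii for scale-height temperature T."""
          A = G * M_sun * mu * m_p / (k_B * T * R_sun)
          return n0 * np.exp(A * (1.0 / r - 1.0))

      def plasma_freq(n_e):
          """Plasma frequency in MHz for n_e in cm^-3."""
          return 8.98e-3 * np.sqrt(n_e)

      def r_omega(f_obs_mhz, T=2.2e6):
          """Radius (solar radii) where the plasma frequency equals f_obs."""
          return brentq(lambda r: plasma_freq(density(r, T)) - f_obs_mhz, 1.0, 10.0)

      for f in (30, 50, 70):   # observing frequencies in the LOFAR low band
          print(f"{f} MHz -> R_omega = {r_omega(f):.2f} R_sun")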

  19. Map projections for global and continental data sets and an analysis of pixel distortion caused by reprojection

    USGS Publications Warehouse

    Steinwand, Daniel R.; Hutchinson, John A.; Snyder, J.P.

    1995-01-01

    In global change studies the effects of map projection properties on data quality are apparent, and the choice of projection is significant. To aid compilers of global and continental data sets, six equal-area projections were chosen: the interrupted Goode Homolosine, the interrupted Mollweide, the Wagner IV, and the Wagner VII for global maps; the Lambert Azimuthal Equal-Area for hemisphere maps; and the Oblated Equal-Area and the Lambert Azimuthal Equal-Area for continental maps. Distortions in small-scale maps caused by reprojection, and the additional distortions incurred when reprojecting raster images, were quantified and graphically depicted. For raster images, the errors caused by the usual resampling methods (pixel brightness level interpolation) were responsible for much of the additional error where the local resolution and scale change were the greatest.

  20. A High-Density Consensus Map of Common Wheat Integrating Four Mapping Populations Scanned by the 90K SNP Array

    PubMed Central

    Wen, Weie; He, Zhonghu; Gao, Fengmei; Liu, Jindong; Jin, Hui; Zhai, Shengnan; Qu, Yanying; Xia, Xianchun

    2017-01-01

    A high-density consensus map is a powerful tool for gene mapping, cloning and molecular marker-assisted selection in wheat breeding. The objective of this study was to construct a high-density, single nucleotide polymorphism (SNP)-based consensus map of common wheat (Triticum aestivum L.) by integrating genetic maps from four recombinant inbred line populations. The populations were each genotyped using the wheat 90K Infinium iSelect SNP assay. A total of 29,692 SNP markers were mapped on 21 linkage groups corresponding to the 21 hexaploid wheat chromosomes, covering 2,906.86 cM, with an overall marker density of 10.21 markers/cM. Compared with previous maps based on the wheat 90K SNP chip, 22,736 (76.6%) of the SNPs had consistent chromosomal locations, whereas 1,974 (6.7%) showed different chromosomal locations and 4,982 (16.8%) were newly mapped. Alignment of the present consensus map with the wheat expressed sequence tag (EST) Chromosome Bin Map enabled assignment of 1,221 SNP markers to specific chromosome bins, and 819 ESTs were integrated into the consensus map. The marker orders of the consensus map were validated based on physical positions on the wheat genome, with Spearman rank correlation coefficients ranging from 0.69 (4D) to 0.97 (1A, 4B, 5B, and 6A), and were also confirmed by comparison with genetic positions on a previously published 40K SNP consensus map, with Spearman rank correlation coefficients ranging from 0.84 (6D) to 0.99 (6A). Chromosomal rearrangements reported previously were confirmed in the present consensus map and new putative rearrangements were identified. In addition, an integrated consensus map was developed through the combination of five published maps with ours, containing 52,607 molecular markers. The consensus map described here provides a high-density SNP marker map and a reliable order of SNPs, representing a step forward in mapping and validation of chromosomal locations of SNPs on the wheat 90K array. Moreover, it can be used as a reference for quantitative trait loci (QTL) mapping to facilitate exploitation of genes and QTL in wheat breeding. PMID:28848588
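
    The order-validation step quoted above amounts to a rank correlation between genetic and physical marker positions along a chromosome; a tiny sketch with made-up positions:

      # Tiny sketch of the marker-order validation step: Spearman rank correlation
      # between genetic positions (cM) and physical positions (Mb) on one chromosome.
      # The positions below are made up for illustration.
      from scipy.stats import spearmanr

      genetic_cm = [0.0, 3.1, 7.8, 12.4, 20.0, 33.5, 41.2, 55.0]
      physical_mb = [1.2, 4.0, 9.5, 30.2, 25.7, 210.0, 405.3, 580.1]

      rho, p = spearmanr(genetic_cm, physical_mb)
      print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")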

  1. Hidden explosives detector employing pulsed neutron and x-ray interrogation

    DOEpatents

    Schultz, F.J.; Caldwell, J.T.

    1993-04-06

    Methods and systems for the detection of small amounts of modern, highly-explosive nitrogen-based explosives, such as plastic explosives, hidden in airline baggage. Several techniques are employed either individually or combined in a hybrid system. One technique employed in combination is X-ray imaging. Another technique is interrogation with a pulsed neutron source in a two-phase mode of operation to image both nitrogen and oxygen densities. Another technique employed in combination is neutron interrogation to form a hydrogen density image or three-dimensional map. In addition, deliberately-placed neutron-absorbing materials can be detected.

  2. Hidden explosives detector employing pulsed neutron and x-ray interrogation

    DOEpatents

    Schultz, Frederick J.; Caldwell, John T.

    1993-01-01

    Methods and systems for the detection of small amounts of modern, highly-explosive nitrogen-based explosives, such as plastic explosives, hidden in airline baggage. Several techniques are employed either individually or combined in a hybrid system. One technique employed in combination is X-ray imaging. Another technique is interrogation with a pulsed neutron source in a two-phase mode of operation to image both nitrogen and oxygen densities. Another technique employed in combination is neutron interrogation to form a hydrogen density image or three-dimensional map. In addition, deliberately-placed neutron-absorbing materials can be detected.

  3. Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?

    PubMed

    Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D

    2018-02-01

    Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although the anatomic and positional accuracy of electroanatomic mapping (EAM) systems has been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico, activation was simulated on a 4 × 4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm², and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and the complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm² (focal activation), 1.05 ± 0.32 points/cm² (macro-re-entry) and 1.23 ± 0.26 points/cm² (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm²; re-entry 1.44 ± 0.49 points/cm²; spiral-wave 1.50 ± 0.34 points/cm², P < 0.0001) and in vivo (porcine RA pre-ablation 0.45 ± 0.13 vs. post-ablation 0.78 ± 0.17 points/cm², P = 0.0008). More complex chamber geometry was also associated with increased optimal sampling density (0.61 ± 0.22 points/cm² vs. 1.0 ± 0.34 points/cm², P = 0.0015). Optimal sampling densities can be identified to maximize the diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm². Published on behalf of the European Society of Cardiology
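
    A minimal sketch of the in silico experiment described above, with a simplified focal activation pattern and error metric standing in for those of the study: sample a synthetic LAT map on a 4 × 4 cm sheet at increasing densities, re-interpolate, and measure the reconstruction error.

      # Minimal sketch of the in-silico experiment: sample a synthetic local
      # activation time (LAT) map on a 4 x 4 cm sheet at increasing densities,
      # re-interpolate, and measure the reconstruction error. The focal activation
      # pattern and error metric are simplified stand-ins for those in the study.
      import numpy as np
      from scipy.interpolate import griddata

      rng = np.random.default_rng(3)
      cv = 0.5                                    # conduction velocity, mm/ms (assumed)
      gx, gy = np.meshgrid(np.linspace(0, 40, 81), np.linspace(0, 40, 81))   # mm grid
      lat_true = np.hypot(gx - 20, gy - 20) / cv  # focal activation from the centre

      for density in (0.25, 0.5, 1.0, 2.0, 4.0):  # points per cm^2
          n = int(density * 16)                   # 16 cm^2 sheet
          pts = rng.uniform(0, 40, size=(n, 2))
          vals = np.hypot(pts[:, 0] - 20, pts[:, 1] - 20) / cv
          lat_interp = griddata(pts, vals, (gx, gy), method="linear")
          err = np.nanmean(np.abs(lat_interp - lat_true))    # NaN outside convex hull
          print(f"{density:4.2f} points/cm^2 -> mean abs LAT error {err:5.2f} ms")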

  4. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain.

    PubMed

    Dominkovics, Pau; Granell, Carlos; Pérez-Navarro, Antoni; Casals, Martí; Orcau, Angels; Caylà, Joan A

    2011-11-29

    Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Data used in this study are based on tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. The results are focused on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision-making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. This study is an attempt to demonstrate how web processing services together with web-based mapping capabilities suit the needs of health practitioners in epidemiological analysis scenarios.
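
    A minimal sketch of the density-map computation at the core of such a geoprocessing service (the REST/web layer is omitted and the case coordinates are synthetic): a Gaussian kernel density estimate over geocoded case locations, evaluated on a regular grid.

      # Minimal sketch of the core of such a geoprocessing service: a Gaussian kernel
      # density estimate over geocoded case locations, evaluated on a regular grid.
      # Coordinates are synthetic; the REST/web-service layer is omitted.
      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(4)
      # Synthetic projected case coordinates (metres), clustered around two hotspots.
      cases = np.vstack([
          rng.normal([1000, 2000], 150, size=(120, 2)),
          rng.normal([3500, 1200], 250, size=(80, 2)),
      ])

      kde = gaussian_kde(cases.T)                       # rows = dimensions for scipy
      xs = np.linspace(0, 5000, 100)
      ys = np.linspace(0, 3000, 60)
      gx, gy = np.meshgrid(xs, ys)
      density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

      peak = np.unravel_index(density.argmax(), density.shape)
      print("highest-incidence grid cell (x, y):", xs[peak[1]], ys[peak[0]])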

  5. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain

    PubMed Central

    2011-01-01

    Background Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Methods Data used in this study are based on tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. Results The results are focused on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision-making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. Conclusions In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. This study is an attempt to demonstrate how web processing services together with web-based mapping capabilities suit the needs of health practitioners in epidemiological analysis scenarios. PMID:22126392

  6. The Development of a High Density Linkage Map for Black Tiger Shrimp (Penaeus monodon) Based on cSNPs

    PubMed Central

    Baranski, Matthew; Gopikrishna, Gopalapillay; Robinson, Nicholas A.; Katneni, Vinaya Kumar; Shekhar, Mudagandur S.; Shanmugakarthik, Jayakani; Jothivel, Sarangapani; Gopal, Chavali; Ravichandran, Pitchaiyappan; Kent, Matthew; Arnyasi, Mariann; Ponniah, Alphis G.

    2014-01-01

    Transcriptome sequencing using Illumina RNA-seq was performed on populations of black tiger shrimp from India. Samples were collected from (i) four landing centres around the east coastline (EC) of India, (ii) survivors of a severe WSSV infection during pond culture (SUR) and (iii) the Andaman Islands (AI) in the Bay of Bengal. Equal quantities of purified total RNA from homogenates of hepatopancreas, muscle, nervous tissue, intestinal tract, heart, gonad, gills, pleopod and lymphoid organs were combined to create AI, EC and SUR pools for RNA sequencing. De novo transcriptome assembly resulted in 136,223 contigs (minimum size 100 base pairs, bp) with a total length 61 Mb, an average length of 446 bp and an average coverage of 163× across all pools. Approximately 16% of contigs were annotated with BLAST hit information and gene ontology annotations. A total of 473,620 putative SNPs/indels were identified. An Illumina iSelect genotyping array containing 6,000 SNPs was developed and used to genotype 1024 offspring belonging to seven full-sibling families. A total of 3959 SNPs were mapped to 44 linkage groups. The linkage groups consisted of between 16–129 and 13–130 markers, of length between 139–10.8 and 109.1–10.5 cM and with intervals averaging between 1.2 and 0.9 cM for the female and male maps respectively. The female map was 28% longer than the male map (4060 and 2917 cM respectively) with a 1.6 higher recombination rate observed for female compared to male meioses. This approach has substantially increased expressed sequence and DNA marker resources for tiger shrimp and is a useful resource for QTL mapping and association studies for evolutionarily and commercially important traits. PMID:24465553

  7. A Larger Chocolate Chip-Development of a 15K Theobroma cacao L. SNP Array to Create High-Density Linkage Maps.

    PubMed

    Livingstone, Donald; Stack, Conrad; Mustiga, Guiliana M; Rodezno, Dayana C; Suarez, Carmen; Amores, Freddy; Feltus, Frank A; Mockaitis, Keithanne; Cornejo, Omar E; Motamayor, Juan C

    2017-01-01

    Cacao (Theobroma cacao L.) is an important cash crop in tropical regions around the world and has a rich agronomic history in South America. Cacao is a key component of the cosmetic and confectionery industries, and millions of people worldwide use products made from it, ranging from shampoo to chocolate. An Illumina Infinium II array was created using 13,530 SNPs identified within a small diversity panel of cacao. Of these SNPs, 12,643 derive from variation within annotated cacao genes. The genotypes of 3,072 trees were obtained, including two mapping populations from Ecuador. High-density linkage maps for these two populations were generated and compared to the cacao genome assembly. Phenotypic data from these populations were combined with the linkage maps to identify QTLs for yield and disease resistance.

  8. X-ray diffraction measurement of cosolvent accessible volume in rhombohedral insulin crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soares, Alexei S.; Caspar, Donald L. D.

    We report that x-ray crystallographic measurement of the number of solvent electrons in the unit cell of a protein crystal equilibrated with aqueous solutions of different densities provides information about preferential hydration in the crystalline state. Room temperature and cryo-cooled rhombohedral insulin crystals were equilibrated with 1.2 M trehalose to study the effect of lowered water activity. The native and trehalose soaked crystals were isomorphous and had similar structures. Including all the low resolution data, the amplitudes of the structure factors were put on an absolute scale (in units of electrons per asymmetric unit) by constraining the integrated number of electrons inside the envelope of the calculated protein density map to equal the number deduced from the atomic model. This procedure defines the value of F(0 0 0), the amplitude at the origin of the Fourier transform, which is equal to the total number of electrons in the asymmetric unit (i.e. protein plus solvent). Comparison of the F(0 0 0) values for three isomorphous pairs of room temperature insulin crystals, three with trehalose and three without trehalose, indicates that 75 ± 12 electrons per asymmetric unit were added to the crystal solvent when soaked in 1.2 M trehalose. If all the water in the crystal were available as solvent for the trehalose, 304 electrons would have been added. Thus, the co-solvent accessible volume is one quarter of the total water in the crystal. Finally, determination of the total number of electrons in a protein crystal is an essential first step for mapping the average density distribution of the disordered solvent.
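
    The quoted co-solvent accessible fraction follows directly from the two electron counts in the abstract; as a quick worked check (not part of the original record):

      \[
        \frac{\Delta N_{\text{observed}}}{\Delta N_{\text{all water accessible}}}
          = \frac{75 \pm 12}{304} \approx 0.25 \pm 0.04,
      \]

    i.e. roughly one quarter of the crystal water is accessible to the trehalose co-solvent, as stated.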

  9. X-ray diffraction measurement of cosolvent accessible volume in rhombohedral insulin crystals

    DOE PAGES

    Soares, Alexei S.; Caspar, Donald L. D.

    2017-08-31

    We report that x-ray crystallographic measurement of the number of solvent electrons in the unit cell of a protein crystal equilibrated with aqueous solutions of different densities provides information about preferential hydration in the crystalline state. Room temperature and cryo-cooled rhombohedral insulin crystals were equilibrated with 1.2 M trehalose to study the effect of lowered water activity. The native and trehalose soaked crystals were isomorphous and had similar structures. Including all the low resolution data, the amplitudes of the structure factors were put on an absolute scale (in units of electrons per asymmetric unit) by constraining the integrated number of electrons inside the envelope of the calculated protein density map to equal the number deduced from the atomic model. This procedure defines the value of F(0 0 0), the amplitude at the origin of the Fourier transform, which is equal to the total number of electrons in the asymmetric unit (i.e. protein plus solvent). Comparison of the F(0 0 0) values for three isomorphous pairs of room temperature insulin crystals, three with trehalose and three without trehalose, indicates that 75 ± 12 electrons per asymmetric unit were added to the crystal solvent when soaked in 1.2 M trehalose. If all the water in the crystal were available as solvent for the trehalose, 304 electrons would have been added. Thus, the co-solvent accessible volume is one quarter of the total water in the crystal. Finally, determination of the total number of electrons in a protein crystal is an essential first step for mapping the average density distribution of the disordered solvent.

  10. Planning and executing a lampricide treatment of the St. Marys River using georeferenced data

    USGS Publications Warehouse

    Fodale, Michael F.; Bergstedt, Roger A.; Cuddy, Douglas W.; Adams, Jean V.; Stolyarenko, Dimitri A.

    2003-01-01

    The St. Marys River is believed to be the primary source of sea lampreys (Petromyzon marinus) in Lake Huron. Planning or evaluating lampricide treatments required knowing where lampricides could effectively be placed and where larvae were located. Accurate maps of larval density were therefore critical to formulating or evaluating management strategies using lampricides. Larval abundance was systematically assessed with a deepwater electrofishing device at 12,000 georeferenced locations during 1993 to 1996. Maps were produced from catches at those locations, providing georeferenced detail previously unavailable. Catches were processed with a geographic information system (GIS) to create a map of larval density. Whole-river treatment scenarios using TFM (3-trifluoromethyl-4-nitrophenol) were evaluated by combining the map with one of lethal conditions predicted by a lampricide-transport model. The map was also used to evaluate spot treatment scenarios with a granular, bottom-release formulation of another lampricide, Bayluscide (2',5-dichloro-4'-nitro-salicylanilide). Potential high-density plots for Bayluscide treatment were selected from the map, and estimates of area, cost, and larval population were developed using the GIS. Plots were ranked by the cost per larva killed. Spot treatments were found to be more cost effective than a conventional TFM treatment, and Bayluscide was applied to 82 ha in 1998 and 759 ha in 1999. Effectiveness, estimated with stratified-random sampling before and after the 1999 treatment, was 35%. Ten percent already had been removed in 1998, for a total reduction of 45%. This marked a change in how research and planning were combined in sea lamprey management to minimize treatment costs and evaluate success.

  11. Predicting Intra-Urban Population Densities in Africa using SAR and Optical Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Linard, C.; Steele, J.; Forget, Y.; Lopez, J.; Shimoni, M.

    2017-12-01

    The population of Africa is predicted to double over the next 40 years, driving profound social, environmental and epidemiological changes within rapidly growing cities. Estimates of within-city variations in population density must be improved in order to take urban heterogeneities into account and better support urban research and decision making, especially for vulnerability and health assessments. Satellite remote sensing offers an effective solution for mapping settlements and monitoring urbanization at different spatial and temporal scales. In Africa, the urban landscape is covered by slums and small houses, where the heterogeneity is high and where many man-made structures are built from natural materials. Innovative methods that combine optical and SAR data are therefore necessary for improving settlement mapping and population density predictions. An automatic method was developed to estimate built-up densities using recent and archived optical and SAR data, and a multi-temporal database of built-up densities was produced for 48 African cities. Geo-statistical methods were then used to study the relationships between census-derived population densities and satellite-derived built-up attributes. The best predictors were combined in a Random Forest framework in order to predict intra-urban variations in population density in any large African city. The models show a significant improvement in our spatial understanding of urbanization and urban population distribution in Africa in comparison to the state of the art.
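
    A minimal sketch of the modelling framework described above, with synthetic stand-ins for the census units and the satellite-derived built-up attributes: fit a Random Forest regression of population density on the image-derived predictors and evaluate it by cross-validation.

      # Minimal sketch of the modelling framework: a Random Forest regression of
      # census-derived population density on satellite-derived built-up attributes.
      # All features and values below are synthetic stand-ins.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(5)
      n = 500                                           # census units
      built_up_fraction = rng.uniform(0, 1, n)          # from optical/SAR classification
      mean_backscatter = rng.normal(-8, 3, n)           # SAR intensity (dB), illustrative
      texture = rng.uniform(0, 1, n)                    # image texture metric
      X = np.column_stack([built_up_fraction, mean_backscatter, texture])

      # Fake "census" densities loosely driven by built-up fraction (people/ha).
      y = 300 * built_up_fraction + 100 * texture + rng.normal(0, 20, n)

      rf = RandomForestRegressor(n_estimators=200, random_state=0)
      scores = cross_val_score(rf, X, y, cv=5, scoring="r2")
      print("cross-validated R^2:", scores.mean().round(2))

      rf.fit(X, y)
      print("predicted density for a dense pixel:", rf.predict([[0.9, -5.0, 0.8]])[0].round(1))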

  12. Combined approaches to flexible fitting and assessment in virus capsids undergoing conformational change

    PubMed Central

    Pandurangan, Arun Prasad; Shakeel, Shabih; Butcher, Sarah Jane; Topf, Maya

    2014-01-01

    Fitting of atomic components into electron cryo-microscopy (cryoEM) density maps is routinely used to understand the structure and function of macromolecular machines. Many fitting methods have been developed, but a standard protocol for successful fitting and assessment of fitted models has yet to be agreed upon among the experts in the field. Here, we created and tested a protocol that highlights important issues related to homology modelling, density map segmentation, rigid and flexible fitting, as well as the assessment of fits. As part of it, we use two different flexible fitting methods (Flex-EM and iMODfit) and demonstrate how combining the analysis of multiple fits and model assessment could result in an improved model. The protocol is applied to the case of the mature and empty capsids of Coxsackievirus A7 (CAV7) by flexibly fitting homology models into the corresponding cryoEM density maps at 8.2 and 6.1 Å resolution. As a result, and due to the improved homology models (derived from recently solved crystal structures of a close homolog – EV71 capsid – in mature and empty forms), the final models present an improvement over previously published models. In close agreement with the capsid expansion observed in the EV71 structures, the new CAV7 models reveal that the expansion is accompanied by ∼5° counterclockwise rotation of the asymmetric unit, predominantly contributed by the capsid protein VP1. The protocol could be applied not only to viral capsids but also to many other complexes characterised by a combination of atomic structure modelling and cryoEM density fitting. PMID:24333899

  13. New computational tools for H/D determination in macromolecular structures from neutron data.

    PubMed

    Siliqi, Dritan; Caliandro, Rocco; Carrozzini, Benedetta; Cascarano, Giovanni Luca; Mazzone, Annamaria

    2010-11-01

    Two new computational methods dedicated to neutron crystallography, called n-FreeLunch and DNDM-NDM, have been developed and successfully tested. The aim in developing these methods is to determine hydrogen and deuterium positions in macromolecular structures by using information from neutron density maps. Of particular interest is resolving cases in which the geometrically predicted hydrogen or deuterium positions are ambiguous. The methods are an evolution of approaches that are already applied in X-ray crystallography: extrapolation beyond the observed resolution (known as the FreeLunch procedure) and a difference electron-density modification (DEDM) technique combined with the electron-density modification (EDM) tool (known as DEDM-EDM). It is shown that the two methods are complementary to each other and are effective in finding the positions of H and D atoms in neutron density maps.

  14. Algorithms for computing the geopotential using a simple density layer

    NASA Technical Reports Server (NTRS)

    Morrison, F.

    1976-01-01

    Several algorithms have been developed for computing the potential and attraction of a simple density layer. These are numerical cubature, Taylor series, and a mixed analytic and numerical integration using a singularity-matching technique. A computer program has been written to combine these techniques for computing the disturbing acceleration on an artificial earth satellite. A total of 1640 equal-area, constant surface density blocks on an oblate spheroid are used. The singularity-matching algorithm is used in the subsatellite region, Taylor series in the surrounding zone, and numerical cubature on the rest of the earth.
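    The simple-density-layer idea can be illustrated by summing the Newtonian attraction of equal-area, constant-surface-density blocks treated as point masses; the spherical (rather than oblate-spheroidal) geometry, the block layout and the surface-density values below are simplifying assumptions, and no singularity-matching or Taylor-series refinement is included:

```python
# Sketch: disturbing acceleration at a satellite from a simple density layer,
# approximating each constant-surface-density block by a point mass.
# A spherical Earth and the block layout are simplifying assumptions.
import numpy as np

G = 6.674e-11          # gravitational constant (m^3 kg^-1 s^-2)
R = 6.371e6            # mean Earth radius (m)
r_sat = np.array([0.0, 0.0, R + 500e3])   # satellite 500 km above the pole

# Build an approximately equal-area grid of surface blocks (illustrative).
n_lat, n_lon = 20, 40
lats = np.linspace(-np.pi / 2, np.pi / 2, n_lat + 1)
accel = np.zeros(3)
for i in range(n_lat):
    lat_c = 0.5 * (lats[i] + lats[i + 1])
    area = R**2 * (np.sin(lats[i + 1]) - np.sin(lats[i])) * (2 * np.pi / n_lon)
    for j in range(n_lon):
        lon_c = 2 * np.pi * (j + 0.5) / n_lon
        sigma = 100.0 * np.cos(lat_c)     # assumed surface density (kg/m^2)
        block = R * np.array([np.cos(lat_c) * np.cos(lon_c),
                              np.cos(lat_c) * np.sin(lon_c),
                              np.sin(lat_c)])
        d = block - r_sat                 # vector from satellite to block
        accel += G * sigma * area * d / np.linalg.norm(d)**3

print("disturbing acceleration (m/s^2):", accel)
```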

  15. A high-density consensus map of barley linking DArT markers to SSR, RFLP and STS loci and agricultural traits

    PubMed Central

    Wenzl, Peter; Li, Haobing; Carling, Jason; Zhou, Meixue; Raman, Harsh; Paul, Edie; Hearnden, Phillippa; Maier, Christina; Xia, Ling; Caig, Vanessa; Ovesná, Jaroslava; Cakir, Mehmet; Poulsen, David; Wang, Junping; Raman, Rosy; Smith, Kevin P; Muehlbauer, Gary J; Chalmers, Ken J; Kleinhofs, Andris; Huttner, Eric; Kilian, Andrzej

    2006-01-01

    Background Molecular marker technologies are undergoing a transition from largely serial assays measuring DNA fragment sizes to hybridization-based technologies with high multiplexing levels. Diversity Arrays Technology (DArT) is a hybridization-based technology that is increasingly being adopted by barley researchers. There is a need to integrate the information generated by DArT with previous data produced with gel-based marker technologies. The goal of this study was to build a high-density consensus linkage map from the combined datasets of ten populations, most of which were simultaneously typed with DArT and Simple Sequence Repeat (SSR), Restriction Enzyme Fragment Polymorphism (RFLP) and/or Sequence Tagged Site (STS) markers. Results The consensus map, built using a combination of JoinMap 3.0 software and several purpose-built perl scripts, comprised 2,935 loci (2,085 DArT, 850 other loci) and spanned 1,161 cM. It contained a total of 1,629 'bins' (unique loci), with an average inter-bin distance of 0.7 ± 1.0 cM (median = 0.3 cM). More than 98% of the map could be covered with a single DArT assay. The arrangement of loci was very similar to, and almost as optimal as, the arrangement of loci in component maps built for individual populations. The locus order of a synthetic map derived from merging the component maps without considering the segregation data was only slightly inferior. The distribution of loci along chromosomes indicated centromeric suppression of recombination in all chromosomes except 5H. DArT markers appeared to have a moderate tendency toward hypomethylated, gene-rich regions in distal chromosome areas. On the average, 14 ± 9 DArT loci were identified within 5 cM on either side of SSR, RFLP or STS loci previously identified as linked to agricultural traits. Conclusion Our barley consensus map provides a framework for transferring genetic information between different marker systems and for deploying DArT markers in molecular breeding schemes. The study also highlights the need for improved software for building consensus maps from high-density segregation data of multiple populations. PMID:16904008

  16. A Larger Chocolate Chip—Development of a 15K Theobroma cacao L. SNP Array to Create High-Density Linkage Maps

    PubMed Central

    Livingstone, Donald; Stack, Conrad; Mustiga, Guiliana M.; Rodezno, Dayana C.; Suarez, Carmen; Amores, Freddy; Feltus, Frank A.; Mockaitis, Keithanne; Cornejo, Omar E.; Motamayor, Juan C.

    2017-01-01

    Cacao (Theobroma cacao L.) is an important cash crop in tropical regions around the world and has a rich agronomic history in South America. As a key component in the cosmetic and confectionary industries, millions of people worldwide use products made from cacao, ranging from shampoo to chocolate. An Illumina Infinity II array was created using 13,530 SNPs identified within a small diversity panel of cacao. Of these SNPs, 12,643 derive from variation within annotated cacao genes. The genotypes of 3,072 trees were obtained, including two mapping populations from Ecuador. High-density linkage maps for these two populations were generated and compared to the cacao genome assembly. Phenotypic data from these populations were combined with the linkage maps to identify the QTLs for yield and disease resistance. PMID:29259608

  17. A high density genetic map and QTL for agronomic and yield traits in Foxtail millet [Setaria italica (L.) P. Beauv].

    PubMed

    Fang, Xiaomei; Dong, Kongjun; Wang, Xiaoqin; Liu, Tianpeng; He, Jihong; Ren, Ruiyu; Zhang, Lei; Liu, Rui; Liu, Xueying; Li, Man; Huang, Mengzhu; Zhang, Zhengsheng; Yang, Tianyu

    2016-05-04

    Foxtail millet [Setaria italica (L.) P. Beauv.], a crop of historical importance in China, has been adopted as a model crop for studying C-4 photosynthesis, stress biology and biofuel traits. Construction of a high density genetic map and identification of stable quantitative trait loci (QTL) lay the foundation for marker-assisted selection for agronomic traits and yield improvement. A total of 10598 SSR markers were developed according to the reference genome sequence of foxtail millet cultivar 'Yugu1'. A total of 1013 SSR markers showing polymorphism between Yugu1 and Longgu7 were used to genotype 167 individuals from a Yugu1 × Longgu7 F2 population, and a high density genetic map was constructed. The genetic map contained 1035 loci and spanned 1318.8 cM with an average distance of 1.27 cM between adjacent markers. Based on agronomic and yield traits identified in 2 years, 29 QTL were identified for 11 traits with combined analysis and single environment analysis. These QTL explained from 7.0 to 14.3 % of phenotypic variation. Favorable QTL alleles for peduncle length originated from Longgu7, whereas favorable alleles for the other traits originated from Yugu1, except for qLMS6.1. New SSR markers, a high density genetic map and QTL identified for agronomic and yield traits lay the groundwork for functional gene mapping, map-based cloning and marker-assisted selection in foxtail millet.

  18. Method for solvent extraction with near-equal density solutions

    DOEpatents

    Birdwell, Joseph F.; Randolph, John D.; Singh, S. Paul

    2001-01-01

    Disclosed is a modified centrifugal contactor for separating solutions of near equal density. The modified contactor has a pressure differential establishing means that allows the application of a pressure differential across fluid in the rotor of the contactor. The pressure differential is such that it causes the boundary between solutions of near-equal density to shift, thereby facilitating separation of the phases. Also disclosed is a method of separating solutions of near-equal density.

  19. The Chromosome Microdissection and Microcloning Technique.

    PubMed

    Zhang, Ying-Xin; Deng, Chuan-Liang; Hu, Zan-Min

    2016-01-01

    Chromosome microdissection followed by microcloning is an efficient tool combining cytogenetics and molecular genetics that can be used for the construction of the high density molecular marker linkage map and fine physical map, the generation of probes for chromosome painting, and the localization and cloning of important genes. Here, we describe a modified technique to microdissect a single chromosome, paint individual chromosomes, and construct single-chromosome DNA libraries.

  20. Optimization of magnetic flux density measurement using multiple RF receiver coils and multi-echo in MREIT.

    PubMed

    Jeong, Woo Chul; Chauhan, Munish; Sajib, Saurav Z K; Kim, Hyung Joong; Serša, Igor; Kwon, Oh In; Woo, Eung Je

    2014-09-07

    Magnetic Resonance Electrical Impedance Tomography (MREIT) is an MRI method that enables mapping of internal conductivity and/or current density via measurements of magnetic flux density signals. MREIT measures only the z-component of the magnetic flux density B = (Bx, By, Bz) induced by external current injection. The measured noise in Bz complicates recovery of magnetic flux density maps, resulting in lower quality conductivity and current-density maps. We present a new method for more accurate measurement of the spatial gradient of the magnetic flux density (∇Bz). The method relies on the use of multiple radio-frequency receiver coils and an interleaved multi-echo pulse sequence that acquires multiple sampling points within each repetition time. The noise level of the measured magnetic flux density Bz depends on the decay rate of the signal magnitude, the injection current duration, and the coil sensitivity map. The proposed method uses three key steps. The first step is to determine a representative magnetic flux density gradient from the multiple receiver coils by using a weighted combination and by denoising the measured noisy data. The second step is to optimize the magnetic flux density gradient by using multi-echo magnetic flux densities at each pixel in order to reduce the noise level of ∇Bz. The third step is to remove a random noise component from the recovered ∇Bz by solving an elliptic partial differential equation in a region of interest. Numerical simulation experiments using a cylindrical phantom model with included regions of low MRI signal-to-noise ratio ('defects') verified the proposed method. Experimental results using a real phantom, which included three different kinds of anomalies, demonstrated that the proposed method reduced the noise level of the measured magnetic flux density. The quality of the conductivity maps recovered using denoised ∇Bz data showed that the proposed method reduced the conductivity noise level by up to 3-4 times in each anomaly region in comparison to the conventional method.
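    The first step above (a weighted combination of Bz data from multiple receiver coils) can be sketched with inverse-variance weights derived from each coil's magnitude signal, a simple proxy for the per-coil noise level of Bz; the arrays, the noise model and the weighting rule are illustrative assumptions rather than the authors' exact estimator:

```python
# Sketch: combining Bz measurements from multiple RF receiver coils using
# inverse-variance weights derived from each coil's signal magnitude.
# Synthetic data; the weighting rule is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
ny, nx, n_coils = 64, 64, 4

true_bz = 1e-8 * np.sin(np.linspace(0, np.pi, ny))[:, None] * np.ones((ny, nx))
magnitude = rng.uniform(0.2, 1.0, (n_coils, ny, nx))   # coil sensitivity maps
# Assume the noise std of Bz is inversely proportional to the magnitude signal.
noise_std = 2e-9 / magnitude
bz_coils = true_bz[None] + rng.normal(0, 1, (n_coils, ny, nx)) * noise_std

weights = 1.0 / noise_std**2
weights /= weights.sum(axis=0, keepdims=True)          # normalise per pixel
bz_combined = (weights * bz_coils).sum(axis=0)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b)**2)))

print("RMSE, best single coil :", min(rmse(b, true_bz) for b in bz_coils))
print("RMSE, weighted combined:", rmse(bz_combined, true_bz))
```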

  1. Construction of a high-density genetic map for grape using next generation restriction-site associated DNA sequencing

    PubMed Central

    2012-01-01

    Background Genetic mapping and QTL detection are powerful methodologies in plant improvement and breeding. Construction of a high-density and high-quality genetic map would be of great benefit in the production of superior grapes to meet human demand. High throughput and low cost of the recently developed next generation sequencing (NGS) technology have resulted in its wide application in genome research. Sequencing restriction-site associated DNA (RAD) might be an efficient strategy to simplify genotyping. Combining NGS with RAD has proven to be powerful for single nucleotide polymorphism (SNP) marker development. Results An F1 population of 100 individual plants was developed. In-silico digestion-site prediction was used to select an appropriate restriction enzyme for construction of a RAD sequencing library. Next generation RAD sequencing was applied to genotype the F1 population and its parents. Applying a cluster strategy for SNP modulation, a total of 1,814 high-quality SNP markers were developed: 1,121 of these were mapped to the female genetic map, 759 to the male map, and 1,646 to the integrated map. A comparison of the genetic maps to the published Vitis vinifera genome revealed both conservation and variations. Conclusions The applicability of next generation RAD sequencing for genotyping a grape F1 population was demonstrated, leading to the successful development of a genetic map with high density and quality using our designed SNP markers. Detailed analysis revealed that this newly developed genetic map can be used for a variety of genome investigations, such as QTL detection, sequence assembly and genome comparison. PMID:22908993

  2. Assessing future vent opening locations at the Somma-Vesuvio volcanic complex: 2. Probability maps of the caldera for a future Plinian/sub-Plinian event with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.

    2017-06-01

    In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
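    Steps (i) and (ii) of the mapping scheme can be sketched as Gaussian kernel density estimates built from separate point data sets and merged with a weighted linear combination; the point locations, bandwidths and weights below are purely illustrative and do not represent the elicited values:

```python
# Sketch: vent-opening probability map as a weighted linear combination of
# Gaussian kernel density estimates from different data sets.
# Point locations, bandwidths and weights are illustrative assumptions.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
datasets = {                      # hypothetical (x, y) point sets, in km
    "past_vents": rng.normal(0.0, 1.0, (2, 40)),
    "fractures":  rng.normal(1.5, 2.0, (2, 60)),
}
weights = {"past_vents": 0.7, "fractures": 0.3}   # assumed combination weights

# Evaluation grid covering the caldera area (illustrative extent).
x = np.linspace(-5, 5, 200)
y = np.linspace(-5, 5, 200)
xx, yy = np.meshgrid(x, y)
grid = np.vstack([xx.ravel(), yy.ravel()])

combined = np.zeros(grid.shape[1])
for name, pts in datasets.items():
    kde = gaussian_kde(pts)                       # per-data-set density map
    combined += weights[name] * kde(grid)

cell_area = (x[1] - x[0]) * (y[1] - y[0])
prob_map = combined.reshape(xx.shape) * cell_area
prob_map /= prob_map.sum()                        # normalise to a probability map
print("peak cell probability:", prob_map.max())
```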

  3. Mapping surface energy balance components by combining landsat thematic mapper and ground-based meteorological data

    USGS Publications Warehouse

    Moran, M.S.; Jackson, R. D.; Raymond, L.H.; Gay, L.W.; Slater, P.N.

    1989-01-01

    Surface energy balance components were evaluated by combining satellite-based spectral data with on-site measurements of solar irradiance, air temperature, wind speed, and vapor pressure. Maps of latent heat flux density (λE) and net radiant flux density (Rn) were produced using Landsat Thematic Mapper (TM) data for three dates: 23 July 1985, 5 April 1986, and 24 June 1986. On each date, a Bowen-ratio apparatus, located in a vegetated field, was used to measure λE and Rn at a point within the field. Estimates of λE and Rn were also obtained using radiometers aboard an aircraft flown at 150 m above ground level. The TM-based estimates differed from the Bowen-ratio and aircraft-based estimates by less than 12% over mature fields of cotton, wheat, and alfalfa, where λE and Rn ranged from 400 to 700 W m-2.
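    On the ground-based side of such a comparison, the Bowen-ratio method partitions available energy into sensible and latent heat: with the Bowen ratio defined as beta = H / lambda_E, the latent heat flux density follows as lambda_E = (Rn - G) / (1 + beta). A small worked sketch with hypothetical input values:

```python
# Sketch: latent heat flux density from Bowen-ratio measurements,
# lambda_E = (Rn - G) / (1 + beta). All input values are hypothetical.
def latent_heat_flux(rn_w_m2, g_w_m2, bowen_ratio):
    """Latent heat flux density (W m^-2) from net radiation, soil heat flux
    and the Bowen ratio beta = H / lambda_E."""
    return (rn_w_m2 - g_w_m2) / (1.0 + bowen_ratio)

rn = 600.0     # net radiant flux density over an irrigated field (W m^-2)
g = 50.0       # soil heat flux (W m^-2)
beta = 0.15    # Bowen ratio typical of well-watered vegetation
lam_e = latent_heat_flux(rn, g, beta)
print(f"lambda_E = {lam_e:.0f} W m^-2, H = {beta * lam_e:.0f} W m^-2")
```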

  4. PRISM-EM: template interface-based modelling of multi-protein complexes guided by cryo-electron microscopy density maps.

    PubMed

    Kuzu, Guray; Keskin, Ozlem; Nussinov, Ruth; Gursoy, Attila

    2016-10-01

    The structures of protein assemblies are important for elucidating cellular processes at the molecular level. Three-dimensional electron microscopy (3DEM) is a powerful method to identify the structures of assemblies, especially those that are challenging to study by crystallography. Here, a new approach, PRISM-EM, is reported to computationally generate plausible structural models using a procedure that combines crystallographic structures and density maps obtained from 3DEM. The predictions are validated against seven available structurally different crystallographic complexes. The models display mean deviations in the backbone of <5 Å. PRISM-EM was further tested on different benchmark sets; the accuracy was evaluated with respect to the structure of the complex, and the correlation with EM density maps and interface predictions were evaluated and compared with those obtained using other methods. PRISM-EM was then used to predict the structure of the ternary complex of the HIV-1 envelope glycoprotein trimer, the ligand CD4 and the neutralizing protein m36.

  5. Assessment of the vegetation cover in a burned area 22-years ago using remote sensing techniques and GIS analysis (Sierra de las Nieves, South of Spain).

    NASA Astrophysics Data System (ADS)

    Martínez-Murillo, Juan F.; Remond, Ricardo; Ruiz-Sinoga, José D.

    2015-04-01

    The study aim was to characterize the vegetation cover of an area burned 22 years ago, considering both the situation prior to the 1991 wildfire and the current situation in 2013. The objectives were to: (i) compare the current vegetation cover with that prior to the wildfire; (ii) evaluate whether the current vegetation has recovered to its pre-fire cover; and (iii) determine the spatial variability of vegetation recovery 22 years after the wildfire. The study area is located in Sierra de las Nieves, in the south of Spain, and corresponds to an area affected by a wildfire on August 8th, 1991. The burned area covered 8156 ha and burn severity was spatially very high. The main geographic features of the burned area are: mountainous topography (altitudes ranging from 250 m to 1500 m; slope gradient >25%; mainly south-facing exposure); igneous (peridotites), metamorphic (gneiss) and calcareous (limestone) rocks; and predominantly forest land use (Pinus pinaster woodlands, 10%; open pine forest + shrubland, 40%; shrubland, 35%; and bare soil + grassland, 15%). Remote sensing techniques and GIS analysis were applied to achieve these objectives. Landsat 5 and Landsat 8 images from July 13th, 1991 and July 1st, 2013 were used for the pre-fire situation and the situation 22 years after, respectively. The 1990 CORINE land cover was also used to map 1991 land uses prior to the wildfire, and the wildfire historic records of the Andalucía Regional Government were used to select the burned area and its geographical limits. The 1991 and 2013 land cover maps were obtained by means of object-oriented classifications. The NDVI and PVI1 vegetation indexes were also calculated and mapped for both years. Finally, image transformations and kernel density surfaces were applied to determine the most recovered areas and to map the spatial concentration of bare soil and pine cover in 1991 and 2013, respectively. According to the results, the combination of remote sensing and GIS analysis allowed mapping of the areas most recovered from the 1991 wildfire. The vegetation indexes indicated that, 22 years on, the vegetation cover in 2013 was still lower than that mapped just before the 1991 wildfire in most of the burned area; this result was also confirmed by the other techniques applied. Finally, the kernel density surfaces allowed identification and location of the most recovered pine cover areas as well as those areas that still remain totally or partially uncovered (bare soil).
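    The vegetation indexes used above are simple band combinations: NDVI = (NIR - Red) / (NIR + Red), while PVI measures the perpendicular distance from the bare-soil line NIR = a*Red + b. A minimal sketch with synthetic reflectance values and an assumed soil line:

```python
# Sketch: NDVI and PVI from red and near-infrared reflectance arrays.
# The reflectance values and the soil-line coefficients (a, b) are assumptions.
import numpy as np

red = np.array([[0.08, 0.20], [0.05, 0.30]])   # red reflectance (synthetic)
nir = np.array([[0.45, 0.25], [0.50, 0.32]])   # NIR reflectance (synthetic)

ndvi = (nir - red) / (nir + red)

a, b = 1.2, 0.02                                # assumed soil line: NIR = a*Red + b
pvi = (nir - a * red - b) / np.sqrt(1 + a**2)   # perpendicular vegetation index

print("NDVI:\n", ndvi.round(2))
print("PVI:\n", pvi.round(2))
```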

  6. Rotavirus - Global research density equalizing mapping and gender analysis.

    PubMed

    Köster, Corinna; Klingelhöfer, Doris; Groneberg, David A; Schwarzer, Mario

    2016-01-02

    Rotaviruses are the leading cause of dehydration and severe diarrheal disease in infants and young children worldwide. The increasing number of related publications makes it a crucial challenge to determine the relevant scientific output. Scientometric analyses are therefore helpful to evaluate both the quantity and the quality of worldwide research activities on rotavirus. Up to now, no in-depth global scientometric analysis of rotavirus publications has been carried out. This study used scientometric tools and the method of density-equalizing mapping to visualize the differences in worldwide research effort on rotavirus. The aim of the study was to compare scientific output geographically and over time using an in-depth data analysis and the New Quality and Quantity Indices in Science (NewQIS) tools. Furthermore, a gender analysis was part of the data interpretation. We retrieved all rotavirus-related articles published from 1900 to 2013 from the Web of Science using a defined search term. These items were analyzed with regard to quantitative and qualitative aspects, and visualized with the help of bibliometric methods and the technique of density-equalizing mapping to show the differences in worldwide research efforts. This work also aimed to extend the current NewQIS platform. The 5906 rotavirus-associated articles were published in 138 countries from 1900 to 2013. The USA authored 2037 articles (34.5% of all published items), followed by Japan with 576 articles and the United Kingdom, the most productive European country, with 495 articles. Furthermore, the USA established the most collaborations with other countries and was found to be at the center of an international collaborative network. A gender analysis of authors per country (the threshold was set at a publishing output of more than 100 articles by more than 50 authors whose names could be identified in more than 50% of cases) showed a predominance of female scientists in Brazil, while male scientists predominated in all other countries. Relating the number of publications to the population of a country (Q1) and to its GDP (Q2), we found that European and African countries as well as Australia and New Zealand - not the USA - were among the top-ranked nations. Regarding rotavirus-related scientific output, the USA was the overall leading nation when quantitative and qualitative aspects were taken into account. In contrast to these classical scientometric variables, indices such as Q1 and Q2 enable comparability between countries with unequal conditions and scientific infrastructures, helping to differentiate publishing quality and quantity in a more relevant way. It was also deduced that countries with a high rotavirus-associated child mortality, like the Democratic Republic of the Congo, should be integrated into the collaborative efforts more intensively.
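    The Q1 and Q2 indices mentioned above simply relate publication counts to country population and GDP; the sketch below computes them for a few hypothetical countries (all figures are placeholders, not data from the study):

```python
# Sketch: NewQIS-style ratios relating output to population (Q1) and GDP (Q2).
# Country figures below are placeholders, not data from the study.
countries = {
    # name: (articles, population in millions, GDP in billion USD)
    "Country A": (2000, 320.0, 18000.0),
    "Country B": (500, 10.0, 450.0),
    "Country C": (120, 50.0, 90.0),
}

for name, (articles, pop_millions, gdp_billions) in countries.items():
    q1 = articles / pop_millions      # articles per million inhabitants
    q2 = articles / gdp_billions      # articles per billion USD of GDP
    print(f"{name}: Q1 = {q1:.1f} articles/million, "
          f"Q2 = {q2:.2f} articles/billion USD")
```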

  7. CuPc/Au(1 1 0): Determination of the azimuthal alignment by a combination of angle-resolved photoemission and density functional theory

    PubMed Central

    Lüftner, Daniel; Milko, Matus; Huppmann, Sophia; Scholz, Markus; Ngyuen, Nam; Wießner, Michael; Schöll, Achim; Reinert, Friedrich; Puschnig, Peter

    2014-01-01

    Here we report on a combined experimental and theoretical study of the structural and electronic properties of a monolayer of copper phthalocyanine (CuPc) on the Au(1 1 0) surface. Low-energy electron diffraction reveals a commensurate overlayer unit cell containing one adsorbate species. The azimuthal alignment of the CuPc molecule is revealed by comparing experimental constant-binding-energy (kx, ky) maps obtained using angle-resolved photoelectron spectroscopy with theoretical momentum maps of the free molecule's highest occupied molecular orbital (HOMO). This structural information is confirmed by total energy calculations within the framework of van der Waals-corrected density functional theory. The electronic structure is further analyzed by computing the molecule-projected density of states, using both a semi-local and a hybrid exchange-correlation functional. In agreement with experiment, the HOMO is located about 1.2 eV below the Fermi level, while there is no significant charge transfer into the molecule and the CuPc LUMO remains unoccupied on the Au(1 1 0) surface. PMID:25284953

  8. Combined Training (Aerobic Plus Strength) Potentiates a Reduction in Body Fat but Demonstrates No Difference on the Lipid Profile in Postmenopausal Women When Compared With Aerobic Training With a Similar Training Load.

    PubMed

    Rossi, Fabrício E; Fortaleza, Ana C S; Neves, Lucas M; Buonani, Camila; Picolo, Malena R; Diniz, Tiego A; Kalva-Filho, Carlos A; Papoti, Marcelo; Lira, Fabio S; Freitas Junior, Ismael F

    2016-01-01

    The aim of this study was to verify the effects of aerobic and combined training on the body composition and lipid profile of obese postmenopausal women and to analyze which of these models is more effective after equalizing the training load. Sixty-five postmenopausal women (age = 61.0 ± 6.3 years) were divided into 3 groups: aerobic training (AT, n = 15), combined training (CT [strength + aerobic], n = 32), and control group (CG, n = 18). Body composition (upper body fat [TF], fat mass [FM], percentage of FM, and fat-free mass [FFM]) was estimated by dual-energy x-ray absorptiometry. The lipid profile (total cholesterol, high-density lipoprotein [HDL] cholesterol, and low-density lipoprotein cholesterol) was assessed. There was a statistically significant difference in TF (AT = -4.4%, CT = -4.4%, and CG = 1.0%, p = 0.001) and FFM (AT = 1.7%, CT = 2.6%, and CG = -1.4%, p = 0.0001) between the experimental and the control groups. Regarding the percentage of body fat, there was a statistically significant difference only between the CT and CG groups (AT = -2.8%, CT = -3.9%, and CG = 0.31%; p = 0.004). When training loads were equalized, both aerobic and combined training decreased core fat and increased FFM, but only the combined training potentiated a reduction in the percentage of body fat in obese postmenopausal women after the training program. HDL cholesterol levels increased in the combined group, and the chol/HDL ratio (atherogenic index) decreased in the aerobic group; however, there were no significant differences between the intervention programs. Taken together, both exercise training programs were effective for improving body composition and inducing an antiatherogenic status.

  9. Impact Ionization: Beyond the Golden Rule

    DTIC Science & Technology

    1992-01-01

    [Abstract text garbled in the source record; the legible fragments reference the use of Monte Carlo methods combined with density matrix techniques and define the electronic kinetic energy and phonon bath Hamiltonians. Only the figure captions are recoverable.] Figure 2(a): Ionization rate in the ⟨111⟩ direction. Figure 3(a): Equal ionization rate curves in k-space.

  10. Pressure recovery performance of conical diffusers at high subsonic Mach numbers

    NASA Technical Reports Server (NTRS)

    Dolan, F. X.; Runstadler, P. W., Jr.

    1973-01-01

    The pressure recovery performance of conical diffusers has been measured for a wide range of geometries and inlet flow conditions. The approximate level and location (in terms of diffuser geometry) of optimum performance were determined. Throat Mach numbers from low subsonic (Mt = 0.2) through choking (Mt = 1.0) were investigated in combination with throat blockages from 0.03 to 0.12. For fixed Mach number, performance was measured over a fourfold range of inlet Reynolds number. Maps of pressure recovery are presented as a function of diffuser geometry for fixed sets of inlet conditions. The influence of inlet blockage, throat Mach number, and inlet Reynolds number is discussed.

  11. New type of chaos synchronization in discrete-time systems: the F-M synchronization

    NASA Astrophysics Data System (ADS)

    Ouannas, Adel; Grassi, Giuseppe; Karouma, Abdulrahman; Ziar, Toufik; Wang, Xiong; Pham, Viet-Thanh

    2018-04-01

    In this paper, a new type of synchronization for chaotic (hyperchaotic) maps with different dimensions is proposed. The novel scheme is called F - M synchronization, since it combines the inverse generalized synchronization (based on a functional relationship F) with the matrix projective synchronization (based on a matrix M). In particular, the proposed approach enables F - M synchronization with index d to be achieved between n-dimensional drive system map and m-dimensional response system map, where the synchronization index d corresponds to the dimension of the synchronization error. The technique, which exploits nonlinear controllers and Lyapunov stability theory, proves to be effective in achieving the F - M synchronization not only when the synchronization index d equals n or m, but even if the synchronization index d is larger than the map dimensions n and m. Finally, simulation results are reported, with the aim to illustrate the capabilities of the novel scheme proposed herein.
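    A minimal numerical sketch of the idea for the special case d = m with an invertible matrix M: the error is defined as e(k) = M*y(k) - F(x(k)), and the response map is steered so that e(k+1) = C*e(k) for a Schur-stable C. The drive map (a 2-D Henon map) and the choices of F, M and C below are illustrative assumptions, not the systems or controllers used in the paper:

```python
# Sketch of F-M synchronization with synchronization index d = m = 2.
# Drive: 2-D Henon map; F, M, C are illustrative choices (M invertible,
# C Schur stable), so the error e(k) = M y(k) - F(x(k)) tends to zero.
import numpy as np

def drive(x):                       # Henon map (drive system)
    return np.array([1.0 - 1.4 * x[0]**2 + x[1], 0.3 * x[0]])

def F(x):                           # assumed functional relationship F
    return np.array([x[0] + x[1]**2, x[0] * x[1]])

M = np.array([[2.0, 1.0], [0.0, 1.0]])          # projection matrix (invertible)
C = np.diag([0.5, -0.3])                        # stable error dynamics

x = np.array([0.1, 0.0])                        # drive state
y = np.array([5.0, -3.0])                       # response state (arbitrary start)

for k in range(30):
    x_next = drive(x)
    e = M @ y - F(x)
    # Controlled response update chosen so that e(k+1) = C e(k):
    y = np.linalg.solve(M, F(x_next) + C @ e)
    x = x_next
    if k % 10 == 9:
        print(f"k={k+1:2d}  |e| = {np.linalg.norm(M @ y - F(x)):.2e}")
```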

  12. 76 FR 72144 - Standardized and Enhanced Disclosure Requirements for Television Broadcast Licensee Public...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-22

    ..., contour maps; ownership reports and related materials; portions of the Equal Employment Opportunity file... maps; ownership reports and related materials; portions of the Equal Employment Opportunity file held... immediately following the shortened license term. See 47 CFR 73.3526((e)(2), 73.3527(e)(2). Contour Maps (as...

  13. Spinstand demonstration of areal density enhancement using two-dimensional magnetic recording (invited)

    NASA Astrophysics Data System (ADS)

    Lippman, Thomas; Brockie, Richard; Coker, Jon; Contreras, John; Galbraith, Rick; Garzon, Samir; Hanson, Weldon; Leong, Tom; Marley, Arley; Wood, Roger; Zakai, Rehan; Zolla, Howard; Duquette, Paul; Petrizzi, Joe

    2015-05-01

    Exponential growth of the areal density has driven the magnetic recording industry for almost sixty years. But now areal density growth is slowing down, suggesting that current technologies are reaching their fundamental limit. The next generation of recording technologies, namely, energy-assisted writing and bit-patterned media, remains just over the horizon. Two-Dimensional Magnetic Recording (TDMR) is a promising new approach, enabling continued areal density growth with only modest changes to the heads and recording electronics. We demonstrate a first generation implementation of TDMR by using a dual-element read sensor to improve the recovery of data encoded by a conventional low-density parity-check (LDPC) channel. The signals are combined with a 2D equalizer into a single modified waveform that is decoded by a standard LDPC channel. Our detection hardware can perform simultaneous measurement of the pre- and post-combined error rate information, allowing one set of measurements to assess the absolute areal density capability of the TDMR system as well as the gain over a conventional shingled magnetic recording system with identical components. We discuss areal density measurements using this hardware and demonstrate gains exceeding five percent based on experimental dual reader components.
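    The equalization step (merging two reader waveforms into a single waveform for a conventional detector) can be sketched as a least-squares FIR equalizer fitted jointly over delayed samples from both readers; the read-back signal model, tap count and noise levels below are illustrative assumptions, not the spinstand hardware described above:

```python
# Sketch: least-squares equalizer combining two read-back waveforms into one
# target waveform. The signal model and filter length are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n, taps = 5000, 7
target = rng.choice([-1.0, 1.0], n)               # written bit sequence

# Each reader sees a filtered version of the track plus additive noise.
def reader(signal, kernel, noise):
    return np.convolve(signal, kernel, mode="same") + noise * rng.normal(size=n)

r1 = reader(target, [0.3, 1.0, 0.3], 0.4)
r2 = reader(target, [0.5, 0.8, 0.2], 0.4)

# Regression matrix with `taps` delayed samples from each reader.
def delay_matrix(r):
    return np.column_stack([np.roll(r, d)
                            for d in range(-(taps // 2), taps // 2 + 1)])

A = np.hstack([delay_matrix(r1), delay_matrix(r2)])
w, *_ = np.linalg.lstsq(A, target, rcond=None)     # joint equalizer taps
equalized = A @ w

ber = np.mean(np.sign(equalized) != target)
print(f"raw BER reader 1: {np.mean(np.sign(r1) != target):.3f}, "
      f"equalized BER: {ber:.3f}")
```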

  14. A fully traits-based approach to modeling global vegetation distribution.

    PubMed

    van Bodegom, Peter M; Douma, Jacob C; Verheijen, Lieneke M

    2014-09-23

    Dynamic Global Vegetation Models (DGVMs) are indispensable for our understanding of climate change impacts. The application of traits in DGVMs is increasingly refined. However, a comprehensive analysis of the direct impacts of trait variation on global vegetation distribution does not yet exist. Here, we present such analysis as proof of principle. We run regressions of trait observations for leaf mass per area, stem-specific density, and seed mass from a global database against multiple environmental drivers, making use of findings of global trait convergence. This analysis explained up to 52% of the global variation of traits. Global trait maps, generated by coupling the regression equations to gridded soil and climate maps, showed up to orders of magnitude variation in trait values. Subsequently, nine vegetation types were characterized by the trait combinations that they possess using Gaussian mixture density functions. The trait maps were input to these functions to determine global occurrence probabilities for each vegetation type. We prepared vegetation maps, assuming that the most probable (and thus, most suited) vegetation type at each location will be realized. This fully traits-based vegetation map predicted 42% of the observed vegetation distribution correctly. Our results indicate that a major proportion of the predictive ability of DGVMs with respect to vegetation distribution can be attained by three traits alone if traits like stem-specific density and seed mass are included. We envision that our traits-based approach, our observation-driven trait maps, and our vegetation maps may inspire a new generation of powerful traits-based DGVMs.
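    The classification step can be sketched compactly: each vegetation type is characterized by a Gaussian mixture over the three traits, and each grid cell is assigned the type with the highest likelihood given its trait values. The trait data, mixture settings and type names below are synthetic stand-ins for the observation-driven trait maps:

```python
# Sketch: assigning vegetation types from trait maps via per-type
# Gaussian mixture densities. Training data and type names are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Synthetic "observed" trait combinations (LMA, stem-specific density, seed
# mass) for two hypothetical vegetation types.
types = {
    "broadleaf_forest": rng.normal([120, 0.6, 50], [20, 0.05, 10], (200, 3)),
    "grassland":        rng.normal([60, 0.4, 2], [10, 0.05, 0.5], (200, 3)),
}
models = {name: GaussianMixture(n_components=2, random_state=0).fit(data)
          for name, data in types.items()}

# Trait "map": one row per grid cell with the three trait values.
cells = np.array([[115, 0.58, 45], [55, 0.42, 1.5], [90, 0.5, 20]])
log_liks = np.column_stack([m.score_samples(cells) for m in models.values()])
predicted = np.array(list(models))[log_liks.argmax(axis=1)]
print(predicted)   # most probable vegetation type per cell
```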

  15. Constraints on neutron star radii based on chiral effective field theory interactions.

    PubMed

    Hebeler, K; Lattimer, J M; Pethick, C J; Schwenk, A

    2010-10-15

    We show that microscopic calculations based on chiral effective field theory interactions constrain the properties of neutron-rich matter below nuclear densities to a much higher degree than is reflected in commonly used equations of state. Combined with observed neutron star masses, our results lead to a radius R=9.7-13.9  km for a 1.4M⊙ star, where the theoretical range is due, in about equal amounts, to uncertainties in many-body forces and to the extrapolation to high densities.

  16. Yellow fever disease: density equalizing mapping and gender analysis of international research output

    PubMed Central

    2013-01-01

    Background A number of scientific papers on yellow fever have been published, but no broad scientometric analysis of the published yellow fever research has been reported. The aim of this article-based study was to provide an in-depth evaluation of the yellow fever field using large-scale data analysis and the employment of bibliometric indicators of production and quantity. Methods Data were retrieved from the Web of Science database (WoS) and analyzed as part of the NewQIS platform. Data were then extracted from each file, transferred to databases and visualized as diagrams, partly by means of density-equalizing mapping, which makes the findings clear and emphasizes the output of the analysis. Results In the study period from 1900 to 2012 a total of 5,053 yellow fever-associated items were published by 79 countries. The United States (USA) had the highest publication rate at 42% (n = 751), followed at a considerable distance by Brazil (n = 203), France (n = 149) and the United Kingdom (n = 113). The most productive journals are the “Public Health Reports”, the “American Journal of Tropical Medicine and Hygiene” and the “Journal of Virology”. The gender analysis showed an overall steady increase of female authorship from 1950 to 2011. Brazil is the only country among the five most productive countries with a higher proportion of female scientists. Conclusions The present data show an increase in research productivity over the entire study period, and in particular an increase in the number of female scientists. Brazil shows a majority of female authors, a fact that is confirmed by other studies. PMID:24245856

  17. Yellow fever disease: density equalizing mapping and gender analysis of international research output.

    PubMed

    Bundschuh, Matthias; Groneberg, David A; Klingelhoefer, Doris; Gerber, Alexander

    2013-11-18

    A number of scientific papers on yellow fever have been published, but no broad scientometric analysis of the published yellow fever research has been reported. The aim of this article-based study was to provide an in-depth evaluation of the yellow fever field using large-scale data analysis and the employment of bibliometric indicators of production and quantity. Data were retrieved from the Web of Science database (WoS) and analyzed as part of the NewQIS platform. Data were then extracted from each file, transferred to databases and visualized as diagrams, partly by means of density-equalizing mapping, which makes the findings clear and emphasizes the output of the analysis. In the study period from 1900 to 2012 a total of 5,053 yellow fever-associated items were published by 79 countries. The United States (USA) had the highest publication rate at 42% (n = 751), followed at a considerable distance by Brazil (n = 203), France (n = 149) and the United Kingdom (n = 113). The most productive journals are the "Public Health Reports", the "American Journal of Tropical Medicine and Hygiene" and the "Journal of Virology". The gender analysis showed an overall steady increase of female authorship from 1950 to 2011. Brazil is the only country among the five most productive countries with a higher proportion of female scientists. The present data show an increase in research productivity over the entire study period, and in particular an increase in the number of female scientists. Brazil shows a majority of female authors, a fact that is confirmed by other studies.

  18. Failure Maps for Rectangular 17-4PH Stainless Steel Sandwiched Foam Panels

    NASA Technical Reports Server (NTRS)

    Raj, S. V.; Ghosn, L. J.

    2007-01-01

    A new and innovative concept is proposed for designing lightweight fan blades for aircraft engines using commercially available 17-4PH precipitation hardened stainless steel. Rotating fan blades in aircraft engines experience a complex loading state consisting of combinations of centrifugal, distributed pressure and torsional loads. Theoretical failure (plastic collapse) maps, showing plots of the foam relative density versus face sheet thickness, t, normalized by the fan blade span length, L, have been generated for rectangular 17-4PH sandwiched foam panels under these three loading modes, assuming three plastic collapse failure modes. These maps show that the 17-4PH sandwiched foam panels can fail by either yielding of the face sheets, yielding of the foam core or wrinkling of the face sheets, depending on the foam relative density, the magnitude of t/L and the loading mode. The design envelope of a generic fan blade is superimposed on the maps to provide valuable insights into the probable failure modes of a sandwiched foam fan blade.

  19. Landslide susceptibility map: from research to application

    NASA Astrophysics Data System (ADS)

    Fiorucci, Federica; Reichenbach, Paola; Ardizzone, Francesca; Rossi, Mauro; Felicioni, Giulia; Antonini, Guendalina

    2014-05-01

    A susceptibility map is an important and essential tool in environmental planning, used to evaluate landslide hazard and risk and to support correct and responsible management of the territory. Landslide susceptibility is the likelihood of a landslide occurring in an area on the basis of local terrain conditions; it can be expressed as the probability that any given region will be affected by landslides, i.e. an estimate of "where" landslides are likely to occur. In this work we present two examples of landslide susceptibility maps prepared for the Umbria Region and for the Perugia Municipality. These two maps were produced following official requests from the regional and municipal governments to the Research Institute for Hydrogeological Protection (CNR-IRPI). The susceptibility map prepared for the Umbria Region represents the development of previous agreements aimed at preparing: i) a landslide inventory map, which was included in the Urban Territorial Planning (PUT), and ii) a series of maps for the Regional Plan for Multi-risk Prevention. The activities carried out for the Umbria Region focused on defining and applying methods and techniques for landslide susceptibility zonation. Susceptibility maps were prepared exploiting a multivariate statistical model (linear discriminant analysis) for the five Civil Protection Alert Zones defined in the regional territory. The five resulting maps were tested and validated using the spatial distribution of recent landslide events that occurred in the region. The susceptibility map for the Perugia Municipality was prepared to be integrated as one of the cartographic products in the Municipal development plan (PRG - Piano Regolatore Generale), as required by the existing legislation. At the strategic level, one of the main objectives of the PRG is to establish a framework of knowledge and legal instruments for the management of geo-hydrological risk. At the national level, most of the susceptibility maps prepared for PRGs were, and still are, obtained by qualitatively classifying the territory according to slope classes. For the Perugia Municipality the susceptibility map was obtained by combining the results of statistical multivariate models with a landslide density map. In particular, in the first phase a susceptibility zonation was prepared using different single and combined multivariate statistical techniques. The zonation was then combined and compared with the landslide density map in order to reclassify the false negatives (portions of the territory classified by the model as stable but affected by slope failures), as illustrated in the sketch below. The resulting semi-quantitative map was classified into five susceptibility classes, and for each class a set of technical regulations was established to manage the territory.
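    The reclassification step can be illustrated with two raster layers: where the statistical model labels a cell as stable but the observed landslide density exceeds a threshold, the cell is promoted to a higher susceptibility class. The thresholds, class breaks and rasters below are illustrative, not the values used for the Perugia map:

```python
# Sketch: combining a statistical susceptibility raster with a landslide
# density raster to reclassify false negatives. All values are illustrative.
import numpy as np

rng = np.random.default_rng(5)
susceptibility = rng.uniform(0, 1, (100, 100))          # model probability
landslide_density = rng.exponential(0.02, (100, 100))   # landslides per km^2

# Five susceptibility classes from the model probabilities.
classes = np.digitize(susceptibility, [0.2, 0.4, 0.6, 0.8]) + 1   # classes 1..5

# False negatives: low modelled susceptibility but high observed density.
false_neg = (classes <= 2) & (landslide_density > 0.08)
classes_adjusted = classes.copy()
classes_adjusted[false_neg] = 4          # promote to a high class (assumed rule)

print("cells reclassified:", int(false_neg.sum()))
```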

  20. Measurement of carbon nanotube microstructure relative density by optical attenuation and observation of size-dependent variations.

    PubMed

    Park, Sei Jin; Schmidt, Aaron J; Bedewy, Mostafa; Hart, A John

    2013-07-21

    Engineering the density of carbon nanotube (CNT) forest microstructures is vital to applications such as electrical interconnects, micro-contact probes, and thermal interface materials. For CNT forests on centimeter-scale substrates, weight and volume can be used to calculate density. However, this is not suitable for smaller samples, including individual microstructures, and moreover does not enable mapping of spatial density variations within the forest. We demonstrate that the relative mass density of individual CNT microstructures can be measured by optical attenuation, with spatial resolution equaling the size of the focused spot. For this, a custom optical setup was built to measure the transmission of a focused laser beam through CNT microstructures. The transmittance was correlated with the thickness of the CNT microstructures by Beer-Lambert-Bouguer law to calculate the attenuation coefficient. We reveal that the density of CNT microstructures grown by CVD can depend on their size, and that the overall density of arrays of microstructures is affected significantly by run-to-run process variations. Further, we use the technique to quantify the change in CNT microstructure density due to capillary densification. This is a useful and accessible metrology technique for CNTs in future microfabrication processes, and will enable direct correlation of density to important properties such as stiffness and electrical conductivity.
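    The attenuation-based measurement reduces to the Beer-Lambert-Bouguer relation T = exp(-alpha * t), so alpha = -ln(T) / t, and the relative density of two structures follows from the ratio of their attenuation coefficients; the transmittance and thickness values below are hypothetical:

```python
# Sketch: attenuation coefficient and relative density of CNT microstructures
# from optical transmittance via the Beer-Lambert-Bouguer law.
# Transmittance and thickness values are hypothetical.
import math

def attenuation_coefficient(transmittance, thickness_um):
    """alpha (1/um) from T = exp(-alpha * t)."""
    return -math.log(transmittance) / thickness_um

samples = {
    "as-grown pillar":  (0.30, 50.0),   # (measured T, thickness in um)
    "densified pillar": (0.05, 20.0),
}
alphas = {name: attenuation_coefficient(*vals) for name, vals in samples.items()}
reference = alphas["as-grown pillar"]
for name, alpha in alphas.items():
    print(f"{name}: alpha = {alpha:.3f} 1/um, "
          f"relative density = {alpha / reference:.2f}x as-grown")
```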

  1. Human population, urban settlement patterns and their impact on Plasmodium falciparum malaria endemicity.

    PubMed

    Tatem, Andrew J; Guerra, Carlos A; Kabaria, Caroline W; Noor, Abdisalan M; Hay, Simon I

    2008-10-27

    The efficient allocation of financial resources for malaria control and the optimal distribution of appropriate interventions require accurate information on the geographic distribution of malaria risk and of the human populations it affects. Low population densities in rural areas and high population densities in urban areas can influence malaria transmission substantially. Here, the Malaria Atlas Project (MAP) global database of Plasmodium falciparum parasite rate (PfPR) surveys, medical intelligence and contemporary population surfaces are utilized to explore these relationships and other issues involved in combining malaria risk maps with those of human population distribution in order to define populations at risk more accurately. First, an existing population surface was examined to determine if it was sufficiently detailed to be used reliably as a mask to identify areas of very low and very high population density as malaria free regions. Second, the potential of international travel and health guidelines (ITHGs) for identifying malaria free cities was examined. Third, the differences in PfPR values between surveys conducted in author-defined rural and urban areas were examined. Fourth, the ability of various global urban extent maps to reliably discriminate these author-based classifications of urban and rural in the PfPR database was investigated. Finally, the urban map that most accurately replicated the author-based classifications was analysed to examine the effects of urban classifications on PfPR values across the entire MAP database. Masks of zero population density excluded many non-zero PfPR surveys, indicating that the population surface was not detailed enough to define areas of zero transmission resulting from low population densities. In contrast, the ITHGs enabled the identification and mapping of 53 malaria free urban areas within endemic countries. Comparison of PfPR survey results showed significant differences between author-defined 'urban' and 'rural' designations in Africa, but not for the remainder of the malaria endemic world. The Global Rural Urban Mapping Project (GRUMP) urban extent mask proved most accurate for mapping these author-defined rural and urban locations, and further sub-divisions of urban extents into urban and peri-urban classes enabled the effects of high population densities on malaria transmission to be mapped and quantified. The availability of detailed, contemporary census and urban extent data for the construction of coherent and accurate global spatial population databases is often poor. These known sources of uncertainty in population surfaces and urban maps have the potential to be incorporated into future malaria burden estimates. Currently, insufficient spatial information exists globally to identify areas accurately where population density is low enough to impact upon transmission. Medical intelligence does however exist to reliably identify malaria free cities. Moreover, in Africa, urban areas that have a significant effect on malaria transmission can be mapped.

  2. A reference consensus genetic map for molecular markers and economically important traits in faba bean (Vicia faba L.)

    PubMed Central

    2013-01-01

    Background Faba bean (Vicia faba L.) is among the earliest domesticated crops from the Near East. Today this legume is a key protein feed and food worldwide and continues to serve an important role in culinary traditions throughout Middle East, Mediterranean region, China and Ethiopia. Adapted to a wide range of soil types, the main faba bean breeding objectives are to improve yield, resistance to biotic and abiotic stresses, seed quality and other agronomic traits. Genomic approaches aimed at enhancing faba bean breeding programs require high-quality genetic linkage maps to facilitate quantitative trait locus analysis and gene tagging for use in a marker-assisted selection. The objective of this study was to construct a reference consensus map in faba bean by joining the information from the most relevant maps reported so far in this crop. Results A combination of two approaches, increasing the number of anchor loci in diverse mapping populations and joining the corresponding genetic maps, was used to develop a reference consensus map in faba bean. The map was constructed from three main recombinant inbreed populations derived from four parental lines, incorporates 729 markers and is based on 69 common loci. It spans 4,602 cM with a range from 323 to 1041 loci in six main linkage groups or chromosomes, and an average marker density of one locus every 6 cM. Locus order is generally well maintained between the consensus map and the individual maps. Conclusion We have constructed a reliable and fairly dense consensus genetic linkage map that will serve as a basis for genomic approaches in faba bean research and breeding. The core map contains a larger number of markers than any previous individual map, covers existing gaps and achieves a wider coverage of the large faba bean genome as a whole. This tool can be used as a reference resource for studies in different genetic backgrounds, and provides a framework for transferring genetic information when using different marker technologies. Combined with syntenic approaches, the consensus map will increase marker density in selected genomic regions and will be useful for future faba bean molecular breeding applications. PMID:24377374

  3. THE DARK HALO-SPHEROID CONSPIRACY AND THE ORIGIN OF ELLIPTICAL GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remus, Rhea-Silvia; Burkert, Andreas; Dolag, Klaus

    2013-04-01

    Dynamical modeling and strong-lensing data indicate that the total density profiles of early-type galaxies are close to isothermal, i.e., ρ_tot ∝ r^γ with γ ≈ -2. To understand the origin of this universal slope we study a set of simulated spheroids formed in isolated binary mergers as well as spheroids formed within the cosmological framework. The total stellar plus dark matter density profiles can always be described by a power law with an index of γ ≈ -2.1, with a tendency toward steeper slopes for more compact, lower-mass ellipticals. In the binary mergers the amount of gas involved in the merger determines the precise steepness of the slope. This agrees with results from the cosmological simulations, where ellipticals with steeper slopes have a higher fraction of stars formed in situ. Each gas-poor merger event evolves the slope toward γ ≈ -2; once this slope is reached, further merger events do not change it anymore. All our ellipticals have flat intrinsic combined stellar and dark matter velocity dispersion profiles. We conclude that flat velocity dispersion profiles and total density distributions with a slope of γ ≈ -2 for the combined system of stars and dark matter act as a natural attractor. The variety of complex formation histories present in cosmological simulations, including major as well as minor merger events, is essential to generate the full range of observed density slopes seen for present-day elliptical galaxies.

  4. Statistical density modification using local pattern matching

    DOEpatents

    Terwilliger, Thomas C.

    2007-01-23

    A computer implemented method modifies an experimental electron density map. A set of selected known experimental and model electron density maps is provided and standard templates of electron density are created from the selected experimental and model electron density maps by clustering and averaging values of electron density in a spherical region about each point in a grid that defines each selected known experimental and model electron density maps. Histograms are also created from the selected experimental and model electron density maps that relate the value of electron density at the center of each of the spherical regions to a correlation coefficient of a density surrounding each corresponding grid point in each one of the standard templates. The standard templates and the histograms are applied to grid points on the experimental electron density map to form new estimates of electron density at each grid point in the experimental electron density map.
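    A toy version of the template-building step: density values in a small spherical neighbourhood around each grid point are collected, clustered and averaged per cluster to give 'standard templates'. The grid, radius and the k-means clustering used here are illustrative simplifications of the patented procedure:

```python
# Toy sketch of building standard density templates: cluster spherical
# neighbourhoods of an electron density grid and average each cluster.
# The grid, radius and use of k-means are illustrative simplifications.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
density = rng.random((24, 24, 24))              # stand-in electron density map
radius = 2                                      # neighbourhood radius (grid units)

# Precompute offsets inside a sphere of the chosen radius.
ax = np.arange(-radius, radius + 1)
dx, dy, dz = np.meshgrid(ax, ax, ax, indexing="ij")
inside = dx**2 + dy**2 + dz**2 <= radius**2
offsets = np.stack([dx[inside], dy[inside], dz[inside]], axis=1)

# Collect the spherical-neighbourhood density vector around interior points.
pts = [(i, j, k) for i in range(radius, 24 - radius)
                 for j in range(radius, 24 - radius)
                 for k in range(radius, 24 - radius)]
features = np.array([[density[i + o[0], j + o[1], k + o[2]] for o in offsets]
                     for (i, j, k) in pts[::50]])   # subsample for speed

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(features)
templates = np.array([features[kmeans.labels_ == c].mean(axis=0)
                      for c in range(8)])
print("template matrix shape:", templates.shape)   # (n_templates, n_sphere_points)
```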

  5. A new all-sky map of Galactic high-velocity clouds from the 21-cm HI4PI survey

    NASA Astrophysics Data System (ADS)

    Westmeier, Tobias

    2018-02-01

    High-velocity clouds (HVCs) are neutral or ionized gas clouds in the vicinity of the Milky Way that are characterized by high radial velocities inconsistent with participation in the regular rotation of the Galactic disc. Previous attempts to create a homogeneous all-sky H I map of HVCs have been hampered by a combination of poor angular resolution, limited surface brightness sensitivity and suboptimal sampling. Here, a new and improved H I map of Galactic HVCs based on the all-sky HI4PI survey is presented. The new map is fully sampled and provides significantly better angular resolution (16.2 versus 36 arcmin) and column density sensitivity (2.3 versus 3.7 × 1018 cm-2 at the native resolution) than the previously available LAB survey. The new HVC map resolves many of the major HVC complexes in the sky into an intricate network of narrow H I filaments and clumps that were not previously resolved by the LAB survey. The resulting sky coverage fraction of high-velocity H I emission above a column density level of 2 × 1018 cm-2 is approximately 15 per cent, which reduces to about 13 per cent when the Magellanic Clouds and other non-HVC emission are removed. The differential sky coverage fraction as a function of column density obeys a truncated power law with an exponent of -0.93 and a turnover point at about 5 × 1019 cm-2. H I column density and velocity maps of the HVC sky are made publicly available as FITS images for scientific use by the community.

  6. 25 Tb/s transmission over 5,530 km using 16QAM at 5.2 b/s/Hz spectral efficiency.

    PubMed

    Cai, J-X; Batshon, H G; Zhang, H; Davidson, C R; Sun, Y; Mazurczyk, M; Foursa, D G; Sinkin, O; Pilipetskii, A; Mohs, G; Bergano, Neal S

    2013-01-28

    We transmit 250x100G PDM RZ-16QAM channels with 5.2 b/s/Hz spectral efficiency over 5,530 km using single-stage C-band EDFAs equalized to 40 nm. We use single parity check coded modulation and all channels are decoded with no errors after iterative decoding between a MAP decoder and an LDPC based FEC algorithm. We also observe that the optimum power spectral density is nearly independent of SE, signal baud rate or modulation format in a dispersion uncompensated system.

  7. Modeling 3-D density distribution in the mantle from inversion of geoid anomalies: Application to the Yellowstone Province

    NASA Astrophysics Data System (ADS)

    Chaves, Carlos Alberto Moreno; Ussami, Naomi

    2013-12-01

    We developed a three-dimensional scheme to invert geoid anomalies aiming to map density variations in the mantle. Using an ellipsoidal-Earth approximation, the model space is represented by tesseroids. To assess the quality of the density models, the resolution and covariance matrices were computed. From a synthetic geoid anomaly caused by a plume tail with Gaussian noise added, the inversion code was able to recover a plausible solution for the density contrast and geometry when compared to the synthetic model. To test the inversion algorithm on a natural case study, geoid anomalies from the Yellowstone Province (YP) were inverted. From the Earth Gravitational Model 2008 expanded up to degree 2160, lower crust- and mantle-related negative geoid anomalies with an amplitude of approximately 70 m were obtained after removing long-wavelength components (>5400 km) and crustal effects. We estimated three density models for the YP. The first model, EDM-1 (estimated density model), uses a starting model with density contrast equal to 0. The other two models, EDM-2 and EDM-3, use an initial density derived from two S-velocity models for the western United States, the Dynamic North America Models of S Waves by Obrebsky et al. (2011) and the Northwestern United States Teleseismic Tomography of S Waves (NWUS11-S) by James et al. (2011). In these three models, a lower and an upper bound for the density solution were also imposed as a priori information. Regardless of the initial constraints, the inversion of the residual geoid indicates that the lower crust and the upper mantle of the YP have a predominantly negative density contrast (about -50 kg/m3) relative to the surrounding mantle. This solution reveals that the density contrast extends to at least 660 km depth. Regional correlation analysis between EDM-1 and NWUS11-S indicates an anticorrelation (coefficient of -0.7) at 400 km depth. Our study suggests that mantle density derived from the inversion of the geoid could be integrated with seismic velocity models to image anomalous mantle features beyond the depth limit of investigation achieved by combining gravity and seismic tomography.

  8. Fermionic currents in AdS spacetime with compact dimensions

    NASA Astrophysics Data System (ADS)

    Bellucci, S.; Saharian, A. A.; Vardanyan, V.

    2017-09-01

    We derive a closed expression for the vacuum expectation value (VEV) of the fermionic current density in a (D +1 )-dimensional locally AdS spacetime with an arbitrary number of toroidally compactified Poincaré spatial dimensions and in the presence of a constant gauge field. The latter can be formally interpreted in terms of a magnetic flux treading the compact dimensions. In the compact subspace, the field operator obeys quasiperiodicity conditions with arbitrary phases. The VEV of the charge density is zero and the current density has nonzero components along the compact dimensions only. They are periodic functions of the magnetic flux with the period equal to the flux quantum and tend to zero on the AdS boundary. Near the horizon, the effect of the background gravitational field is small and the leading term in the corresponding asymptotic expansion coincides with the VEV for a massless field in the locally Minkowski bulk. Unlike the Minkowskian case, in the system consisting of an equal number of fermionic and scalar degrees of freedom, with same masses, charges and phases in the periodicity conditions, the total current density does not vanish. In these systems, the leading divergences in the scalar and fermionic contributions on the horizon are canceled and, as a consequence of that, the charge flux, integrated over the coordinate perpendicular to the AdS boundary, becomes finite. We show that in odd spacetime dimensions the fermionic fields realizing two inequivalent representations of the Clifford algebra and having equal phases in the periodicity conditions give the same contribution to the VEV of the current density. Combining the contributions from these fields, the current density in odd-dimensional C -,P - and T -symmetric models are obtained. As an application, we consider the ground state current density in curved carbon nanotubes described in terms of a (2 +1 )-dimensional effective Dirac model.

  9. BCS Theory of Hadronic Matter at High Densities

    NASA Astrophysics Data System (ADS)

    Bohr, Henrik; Panda, Prafulla K.; Providência, Constança; da Providência, João

    2012-04-01

    The equilibrium between the so-called 2SC and CFL phases of strange quark matter at high densities is investigated in the framework of a simple schematic model of the NJL type. Equal densities are assumed for quarks u, d and s. The 2SC phase is here described by a color-flavor symmetric state, in which the quark numbers are independent of the color-flavor combination. In the CFL phase the quark numbers depend on the color-flavor combination, that is, the number of quarks associated with the color-flavor combinations ur, dg, sb is different from the number of quarks associated with the color-flavor combinations ug, ub, dr, db, sr, sg. We find that the 2SC phase is stable for a chemical potential μ below μ_c = 0.505 GeV, while the CFL phase is stable above it, the equilibrium pressure being P_c = 0.003 GeV^4. We have used a 3-momentum regularizing cutoff Λ = 0.8 GeV, which is somewhat larger than is usual in NJL-type models. This should be adequate if the relevant chemical potential does not exceed 0.6 GeV.

  10. Cross-correlation cosmography with intensity mapping of the neutral hydrogen 21 cm emission

    NASA Astrophysics Data System (ADS)

    Pourtsidou, A.; Bacon, D.; Crittenden, R.

    2015-11-01

    The cross-correlation of a foreground density field with two different background convergence fields can be used to measure cosmographic distance ratios and constrain dark energy parameters. We investigate the possibility of performing such measurements using a combination of optical galaxy surveys and neutral hydrogen (HI) intensity mapping surveys, with emphasis on the performance of the planned Square Kilometre Array (SKA). Using HI intensity mapping to probe the foreground density tracer field and/or the background source fields has the advantage of excellent redshift resolution and a longer lever arm achieved by using the lensing signal from high redshift background sources. Our results show that, for our best SKA-optical configuration of surveys, a constant equation of state for dark energy can be constrained to ≃8% for a sky coverage f_sky = 0.5 and assuming a σ(Ω_DE) = 0.03 prior for the dark energy density parameter. We also show that using the cosmic microwave background as the second source plane is not competitive, even when considering a COrE-like satellite.
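
    As a hedged reminder of the quantity being measured (not quoted from the abstract), the cross-power of the foreground density field at lens redshift z_l with the convergence of two background source planes z_1 and z_2 is proportional to the corresponding lensing efficiencies, so the ratio of the two cross-spectra reduces to a purely geometric distance ratio:

    ```latex
    % Schematic cosmographic distance ratio from density-convergence cross-spectra
    % (D are comoving angular diameter distances; lens at z_l, sources at z_1, z_2)
    R(z_l; z_1, z_2) \;=\; \frac{C_\ell^{\,g\kappa_1}}{C_\ell^{\,g\kappa_2}}
    \;\propto\; \frac{D(z_l, z_1)\,/\,D(z_1)}{D(z_l, z_2)\,/\,D(z_2)}
    ```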

  11. Comparison of manually produced and automated cross country movement maps using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Wynn, L. K.

    1985-01-01

    The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS, and were then combined utilizing IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The numbers of pixels comprising each CCM region were compared between the two map images, and percent agreement between each two regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.997. In some cases, the IBIS-calculated map code differed from the DMA codes: analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.
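
    A minimal sketch of the comparison step: regional pixel counts are tallied in both map images, a percent agreement is formed for each region, and Pearson's correlation coefficient is computed between the two sets of regional counts. The rasters and class codes below are synthetic placeholders, not the Fort Lewis data.

    ```python
    # Sketch: comparing two categorical CCM map images by regional pixel counts.
    # 'dma_map' and 'ibis_map' are integer-coded rasters of equal shape;
    # values and shapes here are illustrative placeholders only.
    import numpy as np

    rng = np.random.default_rng(1)
    dma_map = rng.integers(1, 5, size=(200, 200))             # manually produced codes
    ibis_map = np.where(rng.random((200, 200)) < 0.9, dma_map,
                        rng.integers(1, 5, size=(200, 200)))  # mostly agreeing copy

    # Per-region pixel counts and percent agreement between the two maps
    codes = np.unique(dma_map)
    for c in codes:
        n_dma = np.sum(dma_map == c)
        n_ibis = np.sum(ibis_map == c)
        agree = 100.0 * min(n_dma, n_ibis) / max(n_dma, n_ibis)
        print(f"region {c}: DMA={n_dma}, IBIS={n_ibis}, agreement={agree:.2f}%")

    # Pearson correlation between the two sets of regional pixel counts
    counts_dma = np.array([np.sum(dma_map == c) for c in codes])
    counts_ibis = np.array([np.sum(ibis_map == c) for c in codes])
    r = np.corrcoef(counts_dma, counts_ibis)[0, 1]
    print(f"Pearson r between regional counts: {r:+.3f}")
    ```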

  12. Native protein mapping and visualization of protein interactions in the area of human plasma high-density lipoprotein by combining nondenaturing micro 2DE and quantitative LC-MS/MS.

    PubMed

    Jin, Ya; Bu, Shujie; Zhang, Jun; Yuan, Qi; Manabe, Takashi; Tan, Wen

    2014-07-01

    A human plasma sample was subjected to nondenaturing micro 2DE and a gel area (5 mm × 18 mm) that includes high-density lipoprotein (HDL) was cut into 1 mm × 1 mm squares, then the proteins in the 90 gel pieces were analyzed by quantitative LC-MS/MS. Grid-cutting of the gel was employed to: (i) ensure the total analysis of the proteins in the area, (ii) standardize the conditions of analysis by LC-MS/MS, (iii) reconstruct the protein distribution patterns from the quantity data. In total, 154 proteins were assigned in the 90 gel pieces and the quantity distribution of each was reconstructed as a color density pattern (a native protein map). The map of apolipoprotein (Apo) A-I showed a wide apparent mass distribution characteristic of HDL and was compared with the maps of the other 153 proteins. Eleven proteins showed maps of wide distribution that overlapped with the map of Apo A-I, and all have been reported to be the components of HDL. Further, seven minor proteins associated with HDL were detected at the gel positions of high Apo A-I quantity. These results for the first time visualized the localization of HDL apolipoproteins on a nondenaturing 2DE gel and strongly suggested their interactions. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Detailed petrophysical characterization enhances geological mapping of a buried substratum using aeromagnetic and gravity data; application to the southwestern Paris basin

    NASA Astrophysics Data System (ADS)

    Baptiste, Julien; Martelet, Guillaume; Faure, Michel; Beccaletto, Laurent; Chen, Yan; Reninger, Pierre-Alexandre

    2016-04-01

    Mapping the geometries (structure and lithology) of a buried basement is key for targeting resources and for improving the regional geological knowledge. The Paris basin is a Mesozoic to Cenozoic intraplate basin set up on a Variscan substratum, which crops out in the surrounding massifs. We focus our study on the southwestern part of the Paris basin at its junction with the Aquitaine basin. This Meso-Cenozoic cover separates the Armorican Massif and the Massif Central, which are composed of several litho-tectonic units bounded by crustal-scale shear zones. In spite of several lithological and structural correlations between various domains of the two massifs, their geological connection, hidden below the Paris basin sedimentary cover, is still largely debated. Potential field geophysics has proven effective for mapping buried basin/basement interfaces. In order to enhance the cartographic interpretation of these data, we have set up a detailed petrophysical library (field magnetic susceptibility data and density measurements on rock samples) of the Paleozoic rocks outcropping in the Variscan massifs. The combination of aeromagnetic and gravity data, supported by the petrophysical signatures and field/borehole geological information, is carried out to propose a new map of the architecture of the Variscan substratum. The new synthetic map of the geophysical signature of the Paris basin basement combines: i) the magnetic anomaly reduced to the pole, ii) the vertical gradient of the Bouguer anomaly and iii) the tilt derivative of the magnetic anomaly reduced to the pole. Based on this information, the eastern extension of the major shear zones below the sedimentary cover is assessed. The petrophysical signatures were classified into three classes of magnetic susceptibility and density: low, intermediate and high. Basic rocks have high magnetization and density values, whereas granite, migmatite and orthogneiss show low magnetization and density values; Proterozoic and Paleozoic sediments, micaschists and metagrauwackes have intermediate to low magnetization and density values. Detailed lithological attribution of geophysical anomalies was achieved separately for each geological sub-domain (between two major structures). This methodology will be generalized at the scale of the entire Paris basin in order to propose a tectonic reconstruction of this segment of the Variscan belt, and provide guides for the exploration of hidden resources.
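
    As one concrete element of such a synthetic geophysical map, the tilt derivative of the reduced-to-pole magnetic anomaly can be computed from a gridded anomaly using the standard definition TDR = arctan(vertical derivative / total horizontal derivative), with the vertical derivative obtained in the wavenumber domain. The grid spacing and the synthetic input grid below are placeholders, not the survey data.

    ```python
    # Sketch: tilt derivative of a gridded, reduced-to-pole magnetic anomaly.
    # TDR = arctan( dT/dz / sqrt((dT/dx)^2 + (dT/dy)^2) );
    # the vertical derivative is computed in the wavenumber domain (|k| filter).
    import numpy as np

    def tilt_derivative(anomaly, dx, dy):
        ny, nx = anomaly.shape
        # Horizontal derivatives by finite differences (axis 0 = y, axis 1 = x)
        dT_dy, dT_dx = np.gradient(anomaly, dy, dx)
        # Vertical derivative via FFT: multiplication by |k| in the wavenumber domain
        kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
        ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
        KX, KY = np.meshgrid(kx, ky)
        k = np.sqrt(KX ** 2 + KY ** 2)
        dT_dz = np.real(np.fft.ifft2(np.fft.fft2(anomaly) * k))
        thdr = np.hypot(dT_dx, dT_dy)
        return np.arctan2(dT_dz, thdr)      # radians, in [-pi/2, +pi/2]

    # Illustrative use on a synthetic anomaly grid (250 m cell size assumed)
    x = np.linspace(-10e3, 10e3, 81)
    X, Y = np.meshgrid(x, x)
    synthetic = 100.0 * np.exp(-((X - 2e3) ** 2 + Y ** 2) / (2 * (2e3) ** 2))   # nT
    tdr = tilt_derivative(synthetic, dx=250.0, dy=250.0)
    print(tdr.min(), tdr.max())
    ```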

  14. Preprocessing with image denoising and histogram equalization for endoscopy image analysis using texture analysis.

    PubMed

    Hiroyasu, Tomoyuki; Hayashinuma, Katsutoshi; Ichikawa, Hiroshi; Yagi, Nobuaki

    2015-08-01

    A preprocessing method for endoscopy image analysis using texture analysis is proposed. In a previous study, we proposed a feature value that combines a co-occurrence matrix and a run-length matrix to analyze the extent of early gastric cancer from images taken with narrow-band imaging endoscopy. However, the obtained feature value does not identify lesion zones correctly due to the influence of noise and halation. Therefore, we propose a new preprocessing method with a non-local means filter for de-noising and contrast limited adaptive histogram equalization. We have confirmed that the pattern of gastric mucosa in images can be improved by the proposed method. Furthermore, the lesion zone is shown more correctly by the obtained color map.
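
    A minimal preprocessing sketch along these lines using OpenCV: non-local means de-noising followed by CLAHE applied to the luminance channel. The filter strengths, tile sizes and input file name are illustrative choices, not the parameters used in the study.

    ```python
    # Sketch: non-local means de-noising + contrast limited adaptive histogram
    # equalization (CLAHE) applied to an endoscopy image before texture analysis.
    # Parameter values and the file name are illustrative only.
    import cv2

    img = cv2.imread("endoscopy_frame.png")   # BGR image (hypothetical file)

    # 1) Non-local means filter to suppress noise while preserving texture
    #    arguments: src, dst, h, hColor, templateWindowSize, searchWindowSize
    denoised = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)

    # 2) CLAHE on the lightness channel so that color hues are left untouched
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)
    enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

    cv2.imwrite("endoscopy_frame_preprocessed.png", enhanced)
    ```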

  15. Beyond electronegativity and local hardness: Higher-order equalization criteria for determination of a ground-state electron density.

    PubMed

    Ayers, Paul W; Parr, Robert G

    2008-08-07

    Higher-order global softnesses, local softnesses, and softness kernels are defined along with their hardness inverses. The local hardness equalization principle recently derived by the authors is extended to arbitrary order. The resulting hierarchy of equalization principles indicates that the electronegativity/chemical potential, local hardness, and local hyperhardnesses all are constant when evaluated for the ground-state electron density. The new equalization principles can be used to test whether a trial electron density is an accurate approximation to the true ground-state density and to discover molecules with desired reactive properties, as encapsulated by their chemical reactivity indicators.
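
    For context, the zeroth member of such a hierarchy is the familiar electronegativity/chemical potential equalization that follows from the Euler equation of density functional theory; written schematically (as a reminder, not quoted from the paper), the functional derivative of the energy evaluated at the ground-state density is spatially constant:

    ```latex
    % Euler equation of DFT: chemical potential equalization for the ground state
    \mu \;=\; \left.\frac{\delta E[\rho]}{\delta \rho(\mathbf{r})}\right|_{\rho_0}
    \;=\; \text{constant} \;=\; -\chi
    ```
    The higher-order members of the hierarchy impose the analogous constancy of the local hardness and local hyperhardnesses, which is what makes them usable as tests of trial densities.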

  16. Mapping the Frozen Void

    NASA Astrophysics Data System (ADS)

    Suutarinen, Aleksi; Fraser, Helen

    2013-07-01

    Reactions on the surfaces of dust grains play a vital role in the overall chemistry of interstellar matter. These grains become covered by icy layers, which are the largest molecular reservoir in the interstellar medium. Given this, it is surprising that the effect ice has on the overall chain of reactions is poorly characterized. One step on the path of gaining better understanding here is to develop methods of figuring out how much ice is present in these clouds, the links between ice components, and the synergy between the ices and gas-phase molecules. We do this by examining the absorption spectra of ices on lines of sight towards several stars behind clouds of interstellar matter. From these we can reconstruct spatial maps of the ice distribution on scales of as little as 1000 AU, as a test of the chemical variation within a cloud. By overlapping the ice data with other maps of the same region (gas emission, temperature, density, etc.) we create combined maps to reveal the astrochemistry of star-forming regions and pre-stellar cores. In this poster we present the continuing results of our ice mapping programme, using data from the AKARI satellite, specifically slitless spectroscopy observations in the NIR. In this region the key ice features encompass H2O, CO and CO2. The maps illustrate the power of our dedicated AKARI data reduction pipeline, and the novelty of our observing programme. We also detail the next steps in our ice mapping research. The method is being expanded to include the full 10'x10' AKARI field of view, taking account of image distortion induced by the dispersing optics. These maps are then combined with existing gas-phase observations and SCUBA maps. The latest attempts at this are shown here. What is clear already is that it is difficult to predict ice abundances from factors such as extinction or gas density alone, and that ice formation and evolution can vary hugely over even very small astronomical scales.

  17. Interfacial tension measurement of immiscible liquids using a capillary tube

    NASA Technical Reports Server (NTRS)

    Rashidnia, N.; Balasubramaniam, R.; Delsignore, D.

    1992-01-01

    The interfacial tension of immiscible liquids is an important thermophysical property governing the behavior of liquids both in microgravity (Martinez et al. (1987) and Karri and Mathur (1988)) and in enhanced oil recovery processes under normal gravity (Slattery (1974)). Many techniques are available for its measurement, such as the ring method, drop weight method, spinning drop method, and capillary height method (Adamson (1960) and Miller and Neogi (1985)). Karri and Mathur mention that many of the techniques use equations that contain a density difference term and are inappropriate for equal density liquids. They reported a new method that is suitable for both equal and unequal density liquids. In their method, a capillary tube forms one of the legs of a U-tube. The interfacial tension is related to the heights of the liquids in the cups of the U-tube above the interface in the capillary. Our interest in this area arose from a need to measure a small interfacial tension (around 1 mN/m) for a vegetable oil/silicone oil system that was used in a thermocapillary drop migration experiment (Rashidnia and Balasubramaniam (1991)). In our attempts to duplicate the method proposed by Karri and Mathur, we found it quite difficult to anchor the interface inside the capillary tube; small differences in the liquid heights in the cups drove the interface out of the capillary. We present an alternative method using a capillary tube to measure the interfacial tension of liquids of equal or unequal density. The method is based on the combined capillary rises of both liquids in the tube.

  18. The warm and cold neutral phase in the local interstellar medium at |b| ≥ 10 deg

    NASA Astrophysics Data System (ADS)

    Poppel, W. G. L.; Marronetti, P.; Benaglia, P.

    1994-07-01

    We made a systematic separation of both neutral phases using the atlases of 21-cm profiles of Heiles & Habing (1974) and Colomb et al. (1980), complemented with other data. First, we fitted the emission of the warm neutral medium (WNM) by means of a broad Gaussian curve (velocity dispersion σ ≈ 10-14 km/s). We derived maps of the column densities NWH and the radial velocities VW of the WNM. Its overall distribution appears to be very inhomogeneous, with a large hole in the range b ≥ +50 deg. However, if the hole is excluded, the mean latitude profiles admit a rough cosec|b| fit common to both hemispheres. A kinematical analysis of VW for the range 10 deg ≤ |b| ≤ 40 deg indicates a mean differential rotation with a small nodal deviation. At |b| > 50 deg, VW is negative, with larger values and discontinuities in the north. On average, σ increases with decreasing |b|, as is expected from differential rotation. From a statistical study of the peaks of the residual profiles we derived some characteristics of the cold neutral medium (CNM). The latter is generally characterized by a single component of σ ≈ 2-6 km/s. Additionally we derived the sky distribution of the column densities NCH and the radial velocities VC of the CNM within bins of 1.2 deg sec b x 1 deg in l, b. Furthermore, we focused on the characteristics of Lindblad's feature A of cool gas by considering the narrow ridge of local H I, which appears in the b-V contour maps at fixed l (e.g. Schoeber 1976). The ridge appears to be the main component of the CNM. We suggest a scenario for the formation and evolution of the Gould belt system of stars and gas on the basis of an explosive event within a shingle of cold dense gas tilted to the galactic plane. The scenario appears to be consistent with the results found for both neutral phases, as well as with Danly's (1989) optical and UV observations of interstellar cool gas in the lower halo.

  19. Single tree biomass modelling using airborne laser scanning

    NASA Astrophysics Data System (ADS)

    Kankare, Ville; Räty, Minna; Yu, Xiaowei; Holopainen, Markus; Vastaranta, Mikko; Kantola, Tuula; Hyyppä, Juha; Hyyppä, Hannu; Alho, Petteri; Viitala, Risto

    2013-11-01

    Accurate forest biomass mapping methods would provide the means for e.g. detecting bioenergy potential, biofuel and forest-bound carbon. The demand for practical biomass mapping methods at all forest levels is growing worldwide, and viable options are being developed. Airborne laser scanning (ALS) is a promising forest biomass mapping technique, due to its capability of measuring the three-dimensional forest vegetation structure. The objective of the study was to develop new methods for tree-level biomass estimation using metrics derived from ALS point clouds and to compare the results with field references collected using destructive sampling and with existing biomass models. The study area was located in Evo, southern Finland. ALS data were collected in 2009 with a pulse density of approximately 10 pulses/m². Linear models were developed for the following tree biomass components: total, stem wood, living branch and total canopy biomass. ALS-derived geometric and statistical point metrics were used as explanatory variables when creating the models. The total and stem biomass root-mean-square error percentages were 26.3% and 28.4% for Scots pine (Pinus sylvestris L.), and 36.8% and 27.6% for Norway spruce (Picea abies (L.) H. Karst.), respectively. The results showed that higher estimation accuracy for all biomass components can be achieved with the models created in this study compared to existing allometric biomass models when ALS-derived height and diameter were used as input parameters. Best results were achieved when adding field-measured diameter and height as inputs in the existing biomass models. The only exceptions to this were the canopy and living branch biomass estimations for spruce. The achieved results are encouraging for the use of ALS-derived metrics in biomass mapping and for further development of the models.
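
    A minimal sketch of the tree-level modelling step: an ordinary least-squares biomass model built from a few ALS point metrics, evaluated with the relative RMSE (RMSE%) used in the abstract. The metric names and synthetic data are placeholders for the real ALS features and destructively sampled reference biomasses.

    ```python
    # Sketch: linear tree-level biomass model from ALS-derived metrics,
    # evaluated with relative RMSE (RMSE%). Data below are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 120
    h_max = rng.uniform(10, 30, n)               # ALS maximum height per tree (m)
    crown_d = rng.uniform(2, 8, n)               # ALS-derived crown diameter (m)
    p90 = 0.9 * h_max + rng.normal(0, 0.5, n)    # 90th height percentile (m)
    biomass = 2.0 * h_max + 8.0 * crown_d + rng.normal(0, 15, n)  # field reference (kg)

    # Design matrix with intercept, fitted by ordinary least squares
    X = np.column_stack([np.ones(n), h_max, crown_d, p90])
    coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)
    pred = X @ coef

    rmse = np.sqrt(np.mean((biomass - pred) ** 2))
    rmse_pct = 100.0 * rmse / biomass.mean()
    print(f"RMSE = {rmse:.1f} kg, RMSE% = {rmse_pct:.1f}%")
    ```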

  20. The reduced space Sequential Quadratic Programming (SQP) method for calculating the worst resonance response of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao; Wu, Wenwang; Fang, Daining

    2018-07-01

    A coupled approach combining the reduced space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified, and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.
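
    The core reduction idea can be illustrated on a small problem with linear equality constraints: a particular solution plus the constraint null space turns the constrained problem into an unconstrained one in the reduced coordinates. This is only a schematic analogue of the paper's reduced SQP on the condensed harmonic balance equations; the objective and constraint data below are arbitrary.

    ```python
    # Sketch: reduced-space optimization via null-space decomposition.
    # Minimize f(x) subject to A x = b by writing x = x_p + Z y, where
    # A x_p = b and the columns of Z span the null space of A; optimize over y only.
    # The objective and constraint data are illustrative.
    import numpy as np
    from scipy.linalg import null_space, lstsq
    from scipy.optimize import minimize

    # Linear equality constraints A x = b (2 constraints, 5 unknowns)
    A = np.array([[1.0, 2.0, 0.0, -1.0, 1.0],
                  [0.0, 1.0, 1.0,  1.0, 0.0]])
    b = np.array([1.0, 2.0])

    x_p, *_ = lstsq(A, b)          # particular (minimum-norm) solution
    Z = null_space(A)              # columns span {z : A z = 0}

    def f(x):                      # arbitrary smooth objective
        return np.sum((x - np.arange(5)) ** 2) + 0.1 * np.sum(x ** 4)

    def reduced(y):                # objective expressed in null-space coordinates
        return f(x_p + Z @ y)

    res = minimize(reduced, np.zeros(Z.shape[1]), method="BFGS")
    x_opt = x_p + Z @ res.x
    print("constraint residual:", np.linalg.norm(A @ x_opt - b))
    print("x* =", x_opt.round(3))
    ```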

  1. Momentum balance in four solar flares

    NASA Technical Reports Server (NTRS)

    Canfield, Richard C.; Metcalf, Thomas R.; Zarro, Dominic M.; Lemen, James R.

    1990-01-01

    Solar Maximum Mission soft X-ray spectra and National Solar Observatory (Sacramento Peak) H-alpha spectra were combined in a study of high-speed flows during the impulsive phase of four solar flares. In all events, a blue asymmetry (indicative of upflows) was observed in the coronal Ca XIX line during the soft X-ray rise phase. In all events a red asymmetry (indicative of downflows) was observed simultaneously in chromospheric H-alpha. These oppositely directed flows were concurrent with impulsive hard X-ray emission. Combining the velocity data with estimates of the density based on emission measurements and volume estimates, it is shown that for the impulsive phase as a whole the total momentum of upflowing soft X-ray plasma equaled that of the downflowing H-alpha plasma, to within an order of magnitude, in all four events. Only the chromospheric evaporation model predicts equal total momentum in the upflowing soft X-ray-emitting and downflowing H-alpha-emitting materials.

  2. Automatically Generated Vegetation Density Maps with LiDAR Survey for Orienteering Purpose

    NASA Astrophysics Data System (ADS)

    Petrovič, Dušan

    2018-05-01

    The focus of our research was to automatically generate the most adequate vegetation density maps for orienteering purposes. The Karttapullautin application, which requires LiDAR data as input, was used for the automated generation of vegetation density maps. A part of the orienteering map in the area of Kazlje-Tomaj was used to compare the graphical display of vegetation density. With different parameter settings in the Karttapullautin application we changed how the vegetation density of the automatically generated map was presented, and tried to match it as closely as possible to the orienteering map of Kazlje-Tomaj. By comparing several of the generated vegetation density maps, we also proposed the most suitable parameter settings for automatically generating maps of other areas.

  3. Nitrate contamination risk assessment in groundwater at regional scale

    NASA Astrophysics Data System (ADS)

    Daniela, Ducci

    2016-04-01

    Nitrate groundwater contamination is widespread in the world, due to the intensive use of fertilizers, to leaks from the sewage network and to the presence of old septic systems. This research presents a methodology for groundwater contamination risk assessment using thematic maps derived mainly from the land-use map and from statistical data available at the national institutes of statistics (especially demographic and environmental data). The potential nitrate contamination is considered as deriving from three sources: agricultural, urban and periurban. The first one is related to the use of fertilizers. For this reason the land-use map is re-classified on the basis of the crop requirements in terms of fertilizers. The urban source is the possibility of leaks from the sewage network and, consequently, is linked to the anthropogenic pressure, expressed by the population density, weighted on the basis of the mapped urbanized areas of the municipality. The periurban sources include the un-sewered areas, especially present in the periurban context, where illegal sewage connections coexist with on-site sewage disposal (cesspools, septic tanks and pit latrines). The potential nitrate contamination map is produced by overlaying the agricultural, urban and periurban maps. The map combination process is straightforward, being an algebraic combination: the output values are the arithmetic average of the input values. The groundwater vulnerability to contamination can be assessed using parametric methods such as DRASTIC, or simpler ones such as AVI (which involves a limited number of parameters). In most cases, documents previously produced at the regional level can be used. The pollution risk map is obtained by combining the thematic maps of potential nitrate contamination and groundwater contamination vulnerability. The criterion for linking the different GIS layers is equally simple, again corresponding to an algebraic combination. The methodology has been successfully applied to a large flat area of southern Italy with high NO3 concentrations.
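
    A minimal sketch of the overlay step: the agricultural, urban and periurban layers (assumed here to be already scored on a common scale) are averaged cell by cell to give the potential contamination map, which is then combined in the same way with a vulnerability map to give the risk map. The array contents and score ranges are illustrative placeholders.

    ```python
    # Sketch: algebraic overlay of thematic rasters for nitrate contamination risk.
    # All layers share the same grid and a common 1-5 scoring scale (illustrative).
    import numpy as np

    rng = np.random.default_rng(0)
    shape = (100, 100)
    agricultural = rng.integers(1, 6, shape).astype(float)   # fertilizer-based score
    urban = rng.integers(1, 6, shape).astype(float)           # population/sewage score
    periurban = rng.integers(1, 6, shape).astype(float)       # un-sewered areas score

    # Potential contamination: arithmetic average of the three source layers
    potential = (agricultural + urban + periurban) / 3.0

    # Groundwater vulnerability map (e.g. from AVI or DRASTIC, rescaled to 1-5)
    vulnerability = rng.integers(1, 6, shape).astype(float)

    # Pollution risk: again an algebraic (average) combination of the two maps
    risk = (potential + vulnerability) / 2.0
    print("risk range:", risk.min(), "-", risk.max())
    ```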

  4. Combining binary decision tree and geostatistical methods to estimate snow distribution in a mountain watershed

    USGS Publications Warehouse

    Balk, Benjamin; Elder, Kelly

    2000-01-01

    We model the spatial distribution of snow across a mountain basin using an approach that combines binary decision tree and geostatistical techniques. In April 1997 and 1998, intensive snow surveys were conducted in the 6.9‐km2 Loch Vale watershed (LVWS), Rocky Mountain National Park, Colorado. Binary decision trees were used to model the large‐scale variations in snow depth, while the small‐scale variations were modeled through kriging interpolation methods. Binary decision trees related depth to the physically based independent variables of net solar radiation, elevation, slope, and vegetation cover type. These decision tree models explained 54–65% of the observed variance in the depth measurements. The tree‐based modeled depths were then subtracted from the measured depths, and the resulting residuals were spatially distributed across LVWS through kriging techniques. The kriged estimates of the residuals were added to the tree‐based modeled depths to produce a combined depth model. The combined depth estimates explained 60–85% of the variance in the measured depths. Snow densities were mapped across LVWS using regression analysis. Snow‐covered area was determined from high‐resolution aerial photographs. Combining the modeled depths and densities with a snow cover map produced estimates of the spatial distribution of snow water equivalence (SWE). This modeling approach offers improvement over previous methods of estimating SWE distribution in mountain basins.
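
    A schematic version of the two-stage approach: a regression tree captures the large-scale depth variation from terrain variables, and the residuals are interpolated spatially. Here a Gaussian process regressor with an RBF kernel stands in for kriging (the two are closely related); the synthetic survey data and variable names are placeholders.

    ```python
    # Sketch: binary decision tree for large-scale snow depth variation plus
    # spatial interpolation of the residuals (Gaussian process as a kriging
    # stand-in). Survey data below are synthetic placeholders.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(7)
    n = 300
    xy = rng.uniform(0, 2000, size=(n, 2))                     # survey coordinates (m)
    elev = 3000 + 0.2 * xy[:, 0] + rng.normal(0, 20, n)        # elevation (m)
    radiation = rng.uniform(100, 300, n)                        # net solar radiation
    slope = rng.uniform(0, 40, n)                               # slope (deg)
    veg = rng.integers(0, 3, n)                                 # vegetation class
    depth = (0.002 * elev - 0.003 * radiation - 0.01 * slope
             + 0.3 * veg + rng.normal(0, 0.3, n))               # measured depth (m)

    terrain = np.column_stack([radiation, elev, slope, veg])

    # Stage 1: decision tree models the large-scale (terrain-driven) variation
    tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=10).fit(terrain, depth)
    residuals = depth - tree.predict(terrain)

    # Stage 2: interpolate the residuals spatially (kriging-like GP with RBF kernel)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=300.0) + WhiteKernel(0.05),
                                  normalize_y=True).fit(xy, residuals)

    # Combined estimate = tree prediction + spatially interpolated residual
    combined_depth = tree.predict(terrain[:5]) + gp.predict(xy[:5])
    print(combined_depth.round(2))
    ```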

  5. Breast cancer research output, 1945-2008: a bibliometric and density-equalizing analysis.

    PubMed

    Glynn, Ronan W; Scutaru, Cristian; Kerin, Michael J; Sweeney, Karl J

    2010-01-01

    Breast cancer is the most common form of cancer among women, with an estimated 194,280 new cases diagnosed in the United States in 2009 alone. The primary aim of this work was to provide an in-depth evaluation of research yield in breast cancer from 1945 to 2008, using large-scale data analysis, the employment of bibliometric indicators of production and quality, and density-equalizing mapping. Data were retrieved from the Web of Science (WOS) Science Citation Expanded database; this was searched using the Boolean operator, 'OR', with different terms related to breast cancer, including "breast cancer", "mammary ductal carcinoma" and "breast tumour". Data were then extracted from each file, transferred to Excel charts and visualised as diagrams. Mapping was performed as described by Groneberg-Kloft et al. in 2008. A total of 180,126 breast cancer-associated items were produced over the study period; these had been cited 4,136,224 times. The United States returned the greatest level of output (n = 77,101), followed by the UK (n = 18,357) and Germany (n = 12,529). International cooperation peaked in 2008, with 3,127 entries produced as a result; relationships between the United States and other countries formed the basis for the 10 most common forms of bilateral cooperation. Publications from nations with high levels of international cooperation were associated with greater average citation rates. A total of 4,096 journals published at least one item on breast cancer, although the top 50 most prolific titles together accounted for over 43% (77,517/180,126) of the total output. Breast cancer-associated research output continues to increase annually. In an era when bibliometric indicators are increasingly being employed in performance assessment, these findings should provide useful information for those tasked with improving that performance.

  6. Comparison of lithological mapping results from airborne hyperspectral VNIR-SWIR, LWIR and combined data

    NASA Astrophysics Data System (ADS)

    Feng, Jilu; Rogge, Derek; Rivard, Benoit

    2018-02-01

    This study investigates using the Airborne Hyperspectral Imaging Systems (AISA) visible and short-wave infrared (SWIR) and Spatially Enhanced Broadband Array Spectrograph System (SEBASS) longwave infrared (LWIR) (2 and 4 m spatial resolution, respectively) imagery independently and in combination to produce detailed lithologic maps in a subarctic region (Cape Smith Belt, Nunavik, Canada) where regionally metamorphosed lower greenschist mafic, ultramafic and sedimentary rocks are exposed in the presence of lichen coatings. We make use of continuous wavelet analysis (CWA) to improve the radiometric quality of the imagery through the minimization of random noise and the enhancement of spectral features, the minimization of residual errors in the ISAC radiometric correction and target temperature estimation in the case of the LWIR data, the minimization of line to line residual calibration effects that lead to inconsistencies in data mosaics, and the reduction in variability of the spectral continuum introduced by variable illumination and topography. The use of CWA also provides a platform to directly combine the wavelet scale spectral profiles of the SWIR and LWIR after applying a scalar correction factor to the LWIR such that the dynamic ranges of the two data sets have equal weight. This is possible using CWA as the datasets are normalized to a zero mean, allowing spectra from different spectral regions to be adjoined. Lithologic maps are generated using an iterative spectral unmixing approach with image spectral endmembers extracted from the SWIR and LWIR imagery based on locations defined from previous work in the study area and field mapping information. Unmixing results of the independent SWIR and LWIR data, and the combined data, show clear benefits to using the CWA combined imagery. The analysis showed that SWIR and LWIR imagery highlight similar regions and spatial distributions for the three ultramafic units (dunite, peridotite, pyroxenite). However, significant differences are observed for quartz-rich sediments, with the SWIR overestimating the distribution of these rocks whereas the LWIR provided more consistent results compared with existing maps. Both SWIR and LWIR imagery were impacted by the pervasive lichen coatings on the mafic rocks (basalts and gabbros), although the SWIR provided better results than the LWIR. Limitations observed for the independent data sets were removed using the combined spectral data, resulting in all geologically meaningful units mapped correctly in comparison with existing geological maps.

  7. An Approach to the Crustal Thickness Inversion Problem

    NASA Astrophysics Data System (ADS)

    De Marchi, F.; Di Achille, G.

    2017-12-01

    We describe a method to estimate the crustal thickness of a planet and we apply it to Venus. As in the method of Parker (1972), modified by Wieczorek & Phillips (1998), the gravity field anomalies of a planet are assumed to be due to the combined effect of topography and relief on the crust-mantle interface. No assumptions on isostasy are necessary. In our case, rather than using the expansion of the powers of the relief in a Taylor series, we model the gravitational field of topography/relief by means of a large number of prism-shaped masses covering the whole surface of the planet. Under the hypothesis that crustal and mantle densities are the same everywhere, we solve for the relief depths on the crust-mantle interface by imposing that the observed and modeled gravity fields at a certain reference spherical surface (external to the planet) must be equal. This method can be extended to the case of non-uniform densities. Finally, we calculate a map of the crustal thickness of Venus and compare our results with those predicted by previous work and with the global distribution of the main geological features (e.g. rift zones, tesserae, coronae). We discuss the agreement between our results and the main geodynamical and crustal models put forth to explain the origin of such features, and the applicability of this method in the context of the mission VOX (Venus Origins Explorer), proposed for NASA's NF4 call.

  8. Calculation of the Curie temperature of Ni using first principles based Wang-Landau Monte-Carlo

    NASA Astrophysics Data System (ADS)

    Eisenbach, Markus; Yin, Junqi; Li, Ying Wai; Nicholson, Don

    2015-03-01

    We combine constrained first-principles density functional calculations with a Wang-Landau Monte Carlo algorithm to calculate the Curie temperature of Ni. Mapping the magnetic interactions in Ni onto a Heisenberg-like model underestimates the Curie temperature. Using a model, we show that including the magnitude of the local magnetic moments can account for the difference in the calculated Curie temperature. For ab initio calculations, we have extended our Locally Selfconsistent Multiple Scattering (LSMS) code to constrain the magnitude of the local moments in addition to their direction, and we apply the Replica Exchange Wang-Landau method to sample the larger phase space efficiently in order to investigate Ni, where the fluctuation in the magnitude of the local magnetic moments is of equal importance to their directional fluctuations. We will present our results for Ni, comparing the effect on the Curie temperature of calculations that consider only the moment directions and those that also include fluctuations of the magnetic moment magnitude. This research was sponsored by the Department of Energy, Offices of Basic Energy Science and Advanced Computing. We used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory, supported by US DOE under contract DE-AC05-00OR22725.
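
    The Wang-Landau part of such a workflow can be illustrated on a toy spin model. The sketch below estimates the density of states g(E) of a small 2D Ising lattice with the standard flat-histogram update (ln g += ln f, then f → √f); in the paper the energy of each sampled magnetic configuration would instead come from constrained LSMS density functional calculations. Lattice size, flatness criterion and the final modification factor are arbitrary choices.

    ```python
    # Sketch: Wang-Landau estimation of the density of states g(E) for a small
    # 2D Ising lattice (a stand-in for the first-principles energies used in the
    # paper). Lattice size, flatness criterion and final ln f are illustrative.
    import numpy as np

    L = 4
    rng = np.random.default_rng(0)
    spins = rng.choice([-1, 1], size=(L, L))

    def total_energy(s):
        # Periodic nearest-neighbour Ising energy, J = 1
        return -np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)))

    # Possible energies of the periodic 2D Ising model: -2N, -2N+4, ..., 2N
    N = L * L
    energies = np.arange(-2 * N, 2 * N + 1, 4)
    index = {e: i for i, e in enumerate(energies)}

    ln_g = np.zeros(energies.size)   # running estimate of ln g(E)
    hist = np.zeros(energies.size)
    ln_f = 1.0                       # modification factor, reduced as f -> sqrt(f)
    E = total_energy(spins)

    while ln_f > 1e-4:
        for _ in range(10000):
            i, j = rng.integers(0, L, size=2)
            # Energy change from flipping spin (i, j) with periodic neighbours
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nb
            # Wang-Landau acceptance: min(1, g(E_old)/g(E_new))
            if rng.random() < np.exp(ln_g[index[E]] - ln_g[index[E + dE]]):
                spins[i, j] *= -1
                E += dE
            ln_g[index[E]] += ln_f
            hist[index[E]] += 1
        visited = hist > 0
        if hist[visited].min() > 0.8 * hist[visited].mean():   # flat-histogram check
            ln_f *= 0.5                                        # ln sqrt(f) = ln f / 2
            hist[:] = 0

    print("relative ln g(E) estimated on", int(visited.sum()), "energy levels")
    ```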

  9. Estimation of Chinese surface NO2 concentrations combining satellite data and Land Use Regression

    NASA Astrophysics Data System (ADS)

    Anand, J.; Monks, P.

    2016-12-01

    Monitoring surface-level air quality is often limited by in-situ instrument placement and issues arising from harmonisation over long timescales. Satellite instruments can offer a synoptic view of regional pollution sources, but in many cases only a total or tropospheric column can be measured. In this work a new technique of estimating surface NO2 combining both satellite and in-situ data is presented, in which a Land Use Regression (LUR) model is used to create high resolution pollution maps based on known predictor variables such as population density, road networks, and land cover. By employing a mixed effects approach, it is possible to take advantage of the spatiotemporal variability in the satellite-derived column densities to account for daily and regional variations in surface NO2 caused by factors such as temperature, elevation, and wind advection. In this work, surface NO2 maps are modelled over the North China Plain and Pearl River Delta during high-pollution episodes by combining in-situ measurements and tropospheric columns from the Ozone Monitoring Instrument (OMI). The modelled concentrations show good agreement with in-situ data and surface NO2 concentrations derived from the MACC-II global reanalysis.
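
    A schematic of the mixed-effects land use regression step: the land use predictors and the satellite column enter as fixed effects, while each day gets its own random intercept (and, here, a random slope on the satellite column) to absorb day-to-day variability. The data frame and column names are hypothetical placeholders for the real OMI and monitoring data.

    ```python
    # Sketch: mixed-effects land use regression for surface NO2.
    # Fixed effects: OMI tropospheric column + land-use predictors;
    # random intercept (and slope on the OMI column) per day.
    # The data frame and column names are hypothetical placeholders.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n_sites, n_days = 40, 30
    df = pd.DataFrame({
        "site": np.repeat(np.arange(n_sites), n_days),
        "day": np.tile(np.arange(n_days), n_sites),
        "omi_no2": rng.uniform(1, 15, n_sites * n_days),          # 10^15 molec/cm^2
        "pop_density": np.repeat(rng.uniform(0.1, 10, n_sites), n_days),
        "road_length": np.repeat(rng.uniform(0, 50, n_sites), n_days),
    })
    df["surface_no2"] = (3.0 * df["omi_no2"] + 1.5 * df["pop_density"]
                         + 0.2 * df["road_length"]
                         + rng.normal(0, 5, len(df)))              # in-situ NO2 (ug/m^3)

    model = smf.mixedlm("surface_no2 ~ omi_no2 + pop_density + road_length",
                        data=df, groups=df["day"], re_formula="~omi_no2")
    result = model.fit()
    print(result.summary())
    ```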

  10. Global surface density of water mass variations by using a two-step inversion by cumulating daily satellite gravity information

    NASA Astrophysics Data System (ADS)

    Ramillien, Guillaume; Frappart, Frédéric; Seoane, Lucia

    2016-04-01

    We propose a new method to produce time series of global maps of surface mass variations by progressive integration of daily geopotential variations measured by orbiting satellites. In the case of the GRACE mission, these geopotential variations can be determined from very accurate inter-satellite K-Band Range Rate (KBRR) measurements of 5-second daily orbits. In particular, the along-track gravity contribution of hydrological mass changes is extracted by removing de-aliasing models for the static field, atmosphere and ocean mass variations (including periodic tides), as well as polar motion. Our determination of surface mass sources is composed of two successive, dependent Kalman filter stages. The first one consists of reducing the satellite-based potential anomalies by adjusting the longest spatial wavelengths (i.e., low-degree spherical harmonics lower than 2). In the second stage, the residual potential anomalies from the previous stage are used to recover surface mass density changes - in terms of Equivalent-Water Height (EWH) - over a global network of juxtaposed triangular elements. These surface tiles of ~100,000 km² (equivalently about 330 km by 330 km) are defined to be of equal area over the terrestrial sphere. However, they can be adapted to the local geometry of the surface mass. Our global approach was tested by inverting geopotential data, and successfully applied to estimate time-varying surface mass densities from real GRACE-based residuals. This strategy of combined Kalman filter-type inversions can also be useful for exploring the possibility of improving the time and space resolutions for ocean and land studies that would hopefully be brought by future low-altitude geodetic missions.

  11. SU-G-JeP2-02: A Unifying Multi-Atlas Approach to Electron Density Mapping Using Multi-Parametric MRI for Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, S; Tianjin University, Tianjin; Hara, W

    Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem still remains, which is the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1 and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1 and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as the test case). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
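
    The fusion step can be sketched as a precision-weighted combination of two Gaussian conditionals per voxel: one from the T1/T2 intensity match to atlas patients, one from the voxel's location in the reference anatomy. The numbers below are purely illustrative, not values from the study.

    ```python
    # Sketch: per-voxel Bayesian fusion of two electron-density (HU) estimates,
    # one conditioned on T1/T2 intensity, one on spatial location in a reference
    # anatomy. With Gaussian conditionals, the posterior mean is the
    # precision-weighted average. All values are illustrative.
    import numpy as np

    # Intensity-based conditional p(HU | T1, T2): mean and std per voxel
    mu_int = np.array([300.0, -50.0, 900.0])      # e.g. from atlas intensity matches
    sd_int = np.array([150.0, 80.0, 250.0])

    # Location-based conditional p(HU | position): mean and std per voxel
    mu_loc = np.array([500.0, -20.0, 700.0])      # e.g. from aligned atlas anatomies
    sd_loc = np.array([200.0, 60.0, 300.0])

    # Posterior (product of Gaussians): precision-weighted mean, combined variance
    w_int = 1.0 / sd_int ** 2
    w_loc = 1.0 / sd_loc ** 2
    hu_post = (w_int * mu_int + w_loc * mu_loc) / (w_int + w_loc)
    sd_post = np.sqrt(1.0 / (w_int + w_loc))

    print("posterior HU estimates:", hu_post.round(1))
    print("posterior uncertainties:", sd_post.round(1))
    ```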

  12. A Large Maize (Zea mays L.) SNP Genotyping Array: Development and Germplasm Genotyping, and Genetic Mapping to Compare with the B73 Reference Genome

    PubMed Central

    Ganal, Martin W.; Durstewitz, Gregor; Polley, Andreas; Bérard, Aurélie; Buckler, Edward S.; Charcosset, Alain; Clarke, Joseph D.; Graner, Eva-Maria; Hansen, Mark; Joets, Johann; Le Paslier, Marie-Christine; McMullen, Michael D.; Montalent, Pierre; Rose, Mark; Schön, Chris-Carolin; Sun, Qi; Walter, Hildrun; Martin, Olivier C.; Falque, Matthieu

    2011-01-01

    SNP genotyping arrays have been useful for many applications that require a large number of molecular markers such as high-density genetic mapping, genome-wide association studies (GWAS), and genomic selection. We report the establishment of a large maize SNP array and its use for diversity analysis and high density linkage mapping. The markers, taken from more than 800,000 SNPs, were selected to be preferentially located in genes and evenly distributed across the genome. The array was tested with a set of maize germplasm including North American and European inbred lines, parent/F1 combinations, and distantly related teosinte material. A total of 49,585 markers, including 33,417 within 17,520 different genes and 16,168 outside genes, were of good quality for genotyping, with an average failure rate of 4% and rates up to 8% in specific germplasm. To demonstrate this array's use in genetic mapping and for the independent validation of the B73 sequence assembly, two intermated maize recombinant inbred line populations – IBM (B73×Mo17) and LHRF (F2×F252) – were genotyped to establish two high density linkage maps with 20,913 and 14,524 markers respectively. 172 mapped markers were absent in the current B73 assembly and their placement can be used for future improvements of the B73 reference sequence. Colinearity of the genetic and physical maps was mostly conserved with some exceptions that suggest errors in the B73 assembly. Five major regions containing non-colinearities were identified on chromosomes 2, 3, 6, 7 and 9, and are supported by both independent genetic maps. Four additional non-colinear regions were found on the LHRF map only; they may be due to a lower density of IBM markers in those regions or to true structural rearrangements between lines. Given the array's high quality, it will be a valuable resource for maize genetics and many aspects of maize breeding. PMID:22174790

  13. Site-specific microtubule-associated protein 4 dephosphorylation causes microtubule network densification in pressure overload cardiac hypertrophy.

    PubMed

    Chinnakkannu, Panneerselvam; Samanna, Venkatesababa; Cheng, Guangmao; Ablonczy, Zsolt; Baicu, Catalin F; Bethard, Jennifer R; Menick, Donald R; Kuppuswamy, Dhandapani; Cooper, George

    2010-07-09

    In severe pressure overload-induced cardiac hypertrophy, a dense, stabilized microtubule network forms that interferes with cardiocyte contraction and microtubule-based transport. This is associated with persistent transcriptional up-regulation of cardiac alpha- and beta-tubulin and microtubule-stabilizing microtubule-associated protein 4 (MAP4). There is also extensive microtubule decoration by MAP4, suggesting greater MAP4 affinity for microtubules. Because the major determinant of this affinity is site-specific MAP4 dephosphorylation, we characterized this in hypertrophied myocardium and then assessed the functional significance of each dephosphorylation site found by mimicking it in normal cardiocytes. We first isolated MAP4 from normal and pressure overload-hypertrophied feline myocardium; volume-overloaded myocardium, which has an equal degree and duration of hypertrophy but normal functional and cytoskeletal properties, served as a control for any nonspecific growth-related effects. After cloning cDNA-encoding feline MAP4 and obtaining its deduced amino acid sequence, we characterized by mass spectrometry any site-specific MAP4 dephosphorylation. Solely in pressure overload-hypertrophied myocardium, we identified striking MAP4 dephosphorylation at Ser-472 in the MAP4 N-terminal projection domain and at Ser-924 and Ser-1056 in the assembly-promoting region of the C-terminal microtubule-binding domain. Site-directed mutagenesis of MAP4 cDNA was then used to switch each serine to non-phosphorylatable alanine. Wild-type and mutated cDNAs were used to construct adenoviruses; microtubule network density, stability, and MAP4 decoration were assessed in normal cardiocytes following an equivalent level of MAP4 expression. The Ser-924 --> Ala MAP4 mutant produced a microtubule phenotype indistinguishable from that seen in pressure overload hypertrophy, such that Ser-924 MAP4 dephosphorylation during pressure overload hypertrophy may be central to this cytoskeletal abnormality.

  14. Computational prediction of atomic structures of helical membrane proteins aided by EM maps.

    PubMed

    Kovacs, Julio A; Yeager, Mark; Abagyan, Ruben

    2007-09-15

    Integral membrane proteins pose a major challenge for protein-structure prediction because only approximately 100 high-resolution structures are available currently, thereby impeding the development of rules or empirical potentials to predict the packing of transmembrane alpha-helices. However, when an intermediate-resolution electron microscopy (EM) map is available, it can be used to provide restraints which, in combination with a suitable computational protocol, make structure prediction feasible. In this work we present such a protocol, which proceeds in three stages: 1), generation of an ensemble of alpha-helices by flexible fitting into each of the density rods in the low-resolution EM map, spanning a range of rotational angles around the main helical axes and translational shifts along the density rods; 2), fast optimization of side chains and scoring of the resulting conformations; and 3), refinement of the lowest-scoring conformations with internal coordinate mechanics, by optimizing the van der Waals, electrostatics, hydrogen bonding, torsional, and solvation energy contributions. In addition, our method implements a penalty term through a so-called tethering map, derived from the EM map, which restrains the positions of the alpha-helices. The protocol was validated on three test cases: GpA, KcsA, and MscL.

  15. The INIA19 Template and NeuroMaps Atlas for Primate Brain Image Parcellation and Spatial Normalization

    PubMed Central

    Rohlfing, Torsten; Kroenke, Christopher D.; Sullivan, Edith V.; Dubach, Mark F.; Bowden, Douglas M.; Grant, Kathleen A.; Pfefferbaum, Adolf

    2012-01-01

    The INIA19 is a new, high-quality template for imaging-based studies of non-human primate brains, created from high-resolution, T1-weighted magnetic resonance (MR) images of 19 rhesus macaque (Macaca mulatta) animals. Combined with the comprehensive cortical and sub-cortical label map of the NeuroMaps atlas, the INIA19 is equally suitable for studies requiring both spatial normalization and atlas label propagation. Population-averaged template images are provided for both the brain and the whole head, to allow alignment of the atlas with both skull-stripped and unstripped data, and thus to facilitate its use for skull stripping of new images. This article describes the construction of the template using freely available software tools, as well as the template itself, which is being made available to the scientific community (http://nitrc.org/projects/inia19/). PMID:23230398

  16. Phased-array ultrasonic surface contour mapping system and method for solids hoppers and the like

    DOEpatents

    Fasching, George E.; Smith, Jr., Nelson S.

    1994-01-01

    A real time ultrasonic surface contour mapping system is provided including a digitally controlled phased-array of transmitter/receiver (T/R) elements located in a fixed position above the surface to be mapped. The surface is divided into a predetermined number of pixels which are separately scanned by an arrangement of T/R elements by applying phase delayed signals thereto that produce ultrasonic tone bursts from each T/R that arrive at a point X in phase and at the same time relative to the leading edge of the tone burst pulse so that the acoustic energies from each T/R combine in a reinforcing manner at point X. The signals produced by the reception of the echo signals reflected from point X back to the T/Rs are also delayed appropriately so that they add in phase at the input of a signal combiner. This combined signal is then processed to determine the range to the point X using density-corrected sound velocity values. An autofocusing signal is developed from the computed average range for a complete scan of the surface pixels. A surface contour map is generated in real time from the range signals on a video monitor.
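
    The focusing logic can be sketched directly: for each surface pixel X, each T/R element gets a delay equal to the difference between the longest element-to-X path and its own path, divided by the sound velocity, so that all tone bursts arrive at X in phase and at the same time. The array geometry, sound speed and target point below are illustrative values only.

    ```python
    # Sketch: per-element transmit delays for focusing a phased array at point X.
    # Each element is delayed so that all tone bursts arrive at X simultaneously.
    # Geometry and sound velocity are illustrative values.
    import numpy as np

    c = 343.0                                   # sound velocity in air, m/s
                                                # (density-corrected in practice)
    # 4 x 4 grid of T/R elements, 0.05 m pitch, mounted in the z = 0 plane
    pitch = 0.05
    xs = np.arange(4) * pitch
    elements = np.array([[x, y, 0.0] for x in xs for y in xs])

    # Surface pixel to focus on (hypothetical point below the array)
    X = np.array([0.08, 0.10, -1.50])

    # One-way path lengths and the delays that equalize arrival times at X
    dist = np.linalg.norm(elements - X, axis=1)
    delays = (dist.max() - dist) / c            # seconds; farthest element fires first

    for k, (d, t) in enumerate(zip(dist, delays)):
        print(f"element {k:2d}: path {d:.3f} m, delay {t * 1e6:7.2f} us")
    ```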

  17. Multi-Skyrmions on AdS2 × S2, rational maps and popcorn transitions

    NASA Astrophysics Data System (ADS)

    Canfora, Fabrizio; Tallarita, Gianni

    2017-08-01

    By combining two different techniques to construct multi-soliton solutions of the (3 + 1)-dimensional Skyrme model, the generalized hedgehog and the rational map ansatz, we find multi-Skyrmion configurations in AdS2 × S2. We construct Skyrmionic multi-layered configurations such that the total Baryon charge is the product of the number of kinks along the radial AdS2 direction and the degree of the rational map. We show that, for fixed total Baryon charge, as one increases the charge density on ∂(AdS2 × S2), it becomes increasingly convenient energetically to have configurations with more peaks in the radial AdS2 direction but a lower degree of the rational map. This has a direct relation with the so-called holographic popcorn transitions in which, when the charge density is high, multi-layered configurations with low charge on each layer are favored over configurations with few layers but with higher charge on each layer. The case in which the geometry is M2 × S2 can also be analyzed.

  18. Shadows and Dust: Mid-Infrared Extinction Mapping of the Initial Conditions of Massive Star and Star Cluster Formation

    NASA Astrophysics Data System (ADS)

    Tan, Jonathan

    We describe a research plan to develop and extend the mid-infrared (MIR) extinction mapping technique presented by Butler & Tan (2009), who studied Infrared Dark Clouds (IRDCs) using Spitzer Space Telescope Infrared Array Camera (IRAC) 8 micron images. This method has the ability to probe the detailed spatial structure of very high column density regions, i.e. the gas clouds thought to represent the initial conditions for massive star and star cluster formation. We will analyze the data Spitzer obtained at other wavelengths, i.e. the IRAC bands at 3.6, 4.5 and 5.8 microns, and the Multiband Imaging Photometer (MIPS) bands, especially at 24 microns. This will allow us to measure the dust extinction law across the MIR and search for evidence of dust grain evolution, e.g. grain growth and ice mantle formation, as a function of gas density and column density. We will also study the detailed structure of the extinction features, including individual cores that may form single stars or close binaries, especially focusing on those cores that may form massive stars. By studying independent dark cores in a given IRDC, we will be able to test if they have a common minimum observed intensity, which we will then attribute to the foreground. This is a new method that should allow us to more accurately map distant, high column density IRDCs, probing more extreme regimes of star formation. We will combine MIR extinction mapping, which works best at high column densities, with near-IR mapping based on 2MASS images of star fields, which is most useful at lower columns that probe the extended giant molecular cloud structure. This information is crucial to help understand the formation process of IRDCs, which may be the rate limiting step for global galactic star formation rates. We will use our new extinction mapping methods to analyze large samples of IRDCs and thus search the Galaxy for the most extreme examples of high column density cores and assess the global star formation efficiency in dense gas. We will estimate the ability of future NASA missions, such as JWST, to carry out MIR extinction mapping science. We will develop the results of this research into an E/PO presentation to be included in the various public outreach events organized and courses taught by the PI.
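
    The basic relation behind this kind of MIR extinction mapping can be written schematically (a hedged summary, not quoted from the proposal): the observed intensity toward the cloud combines an attenuated background with a foreground term, from which an optical depth and a mass surface density follow,

    ```latex
    % Schematic MIR extinction mapping relations (background I_0, foreground I_fg,
    % dust opacity kappa_nu per unit mass surface density)
    I_{\nu,\mathrm{obs}} = I_{\nu,0}\, e^{-\tau_\nu} + I_{\nu,\mathrm{fg}},
    \qquad
    \tau_\nu = -\ln\!\left(\frac{I_{\nu,\mathrm{obs}} - I_{\nu,\mathrm{fg}}}{I_{\nu,0} - I_{\nu,\mathrm{fg}}}\right),
    \qquad
    \Sigma = \frac{\tau_\nu}{\kappa_\nu}
    ```
    The common minimum observed intensity of independent dark cores, mentioned above, then serves as an estimate of the foreground term.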

  19. Constellation labeling optimization for bit-interleaved coded APSK

    NASA Astrophysics Data System (ADS)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    This paper investigates the constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with the robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A Binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulated results validate the proposed constellation labeling optimization scheme which yields better performance against conventional 32-APSK constellation defined in DVB-S2 standard.

  20. Evaluation of ERTS imagery for mapping and detection of changes of snowcover land and on glaciers

    NASA Technical Reports Server (NTRS)

    Meier, M. F.

    1973-01-01

    The percentage of snowcover area on specific drainage basins was measured from ERTS imagery by video density slicing with a repeatability of 4 percent of the snowcovered area. Data from ERTS images of the melt season snowcover in the Thunder Creek drainage basin in the North Cascades were combined with existing hydrologic and meteorologic observations to enable calculation of the time distribution of the water stored in this mountain snowpack. Similar data could be used for frequent updating of expected inflow to reservoirs. Equivalent snowline altitudes were determined from area measurements. Snowline altitudes were also determined by combining enlarged ERTS images with maps with an accuracy of about 60 m under favorable conditions. Ability to map snowcover or to determine snowline altitude depends primarily on cloud cover and vegetation and secondarily on slope, terrain roughness, sun angle, radiometric fidelity, and amount of spectral information available.

  1. Mapping the Influence of Prior Tectonism on Seismicity in the Central and Eastern US

    NASA Astrophysics Data System (ADS)

    Boyd, O. S.; Levandowski, W.; Ramirez-Guzman, L.; Zellman, M.; Briggs, R.

    2015-12-01

    From the Atlantic margin to the Rockies, most earthquakes in the central and eastern U.S. occur in ancient tectonic zones, yet many such features have been historically quiescent. If all intraplate stress were transferred from plate boundaries or bases, the stress field would be broadly uniform, with all well-oriented faults equally likely to slip. But faults are not the only product of tectonism; intrusions, metamorphism, or any number of other alterations may modify crustal and/or upper mantle density, leaving behind lithostatic pressure gradients that can locally elevate or reduce stress on faults. With data provided by EarthScope, we are working to map lithospheric density across the U.S. and to quantify gravitational body-forces using analytical and finite-element methods. Regional-scale 3D models show that gravitational forces focus seismicity and reorient principal stress both in the New Madrid seismic zone and in the western Great Plains. Sedimentary fill and low elevation encourage Reelfoot Rift-normal contraction, yet along-strike variations in lower crustal density rotate body-forces beneath New Madrid to interfere constructively with far-field compression, augmenting differential stress by 5-10 MPa. On the plains of SE Colorado and SE Wyoming, the Cheraw and Wheatland/Whalen faults collocate with multiply reactivated Proterozoic sutures, enigmatic Quaternary extension, and focused seismicity with regionally anomalous NW-SE moment tensor T-axes. EarthScope data help reveal anomalously buoyant lower crust beneath each suture -- which we hypothesize reflects hydration by Farallon slab-derived fluids that have preferentially migrated along ancient fracture networks -- that generates 10 MPa of localized suture-normal tension, consistent with geomorphic strain- and seismic stress-indicators. As continent-wide seismic models emerge from EarthScope data, we will continue to map regions where inherited structures encourage intraplate seismicity.

  2. Mapping radon-prone areas using γ-radiation dose rate and geological information.

    PubMed

    García-Talavera, M; García-Pérez, A; Rey, C; Ramos, L

    2013-09-01

    Identifying radon-prone areas is key to policies on the control of this environmental carcinogen. In the current paper, we present the methodology followed to delineate radon-prone areas in Spain. It combines information from indoor radon measurements with γ-radiation and geological maps. The advantage of the proposed approach is that it lessens the requirement for a high density of measurements by making use of commonly available information. It can be applied for an initial definition of radon-prone areas in countries committed to introducing a national radon policy or to improving existing radon maps in low population regions.

  3. Self-Organizing Hidden Markov Model Map (SOHMMM): Biological Sequence Clustering and Cluster Visualization.

    PubMed

    Ferles, Christos; Beaufort, William-Scott; Ferle, Vanessa

    2017-01-01

    The present study devises mapping methodologies and projection techniques that visualize and demonstrate biological sequence data clustering results. The Sequence Data Density Display (SDDD) and Sequence Likelihood Projection (SLP) visualizations represent the input symbolical sequences in a lower-dimensional space in such a way that the clusters and relations of data elements are depicted graphically. Both operate in combination/synergy with the Self-Organizing Hidden Markov Model Map (SOHMMM). The resulting unified framework can analyze raw sequence data automatically and directly, with little or even no prior information/domain knowledge.

  4. ELEMENT MASSES IN THE CRAB NEBULA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sibley, Adam R.; Katz, Andrea M.; Satterfield, Timothy J.

    Using our previously published element abundance or mass-fraction distributions in the Crab Nebula, we derived actual mass distributions and estimates for overall nebular masses of hydrogen, helium, carbon, nitrogen, oxygen and sulfur. As with the previous work, computations were carried out for photoionization models involving constant hydrogen density and also constant nuclear density. In addition, employing new flux measurements for [Ni ii]  λ 7378, along with combined photoionization models and analytic computations, a nickel abundance distribution was mapped and a nebular stable nickel mass estimate was derived.

  5. Spectral density mapping at multiple magnetic fields suitable for 13C NMR relaxation studies

    NASA Astrophysics Data System (ADS)

    Kadeřávek, Pavel; Zapletal, Vojtěch; Fiala, Radovan; Srb, Pavel; Padrta, Petr; Přecechtělová, Jana Pavlíková; Šoltésová, Mária; Kowalewski, Jozef; Widmalm, Göran; Chmelík, Josef; Sklenář, Vladimír; Žídek, Lukáš

    2016-05-01

    Standard spectral density mapping protocols, well suited for the analysis of 15N relaxation rates, introduce significant systematic errors when applied to 13C relaxation data, especially if the dynamics is dominated by motions with short correlation times (small molecules, dynamic residues of macromolecules). The possibility of improving the accuracy by employing cross-correlated relaxation rates and measurements taken at several magnetic fields has been examined. A suite of protocols for analyzing such data has been developed and their performance tested. The applicability of the proposed protocols is documented in two case studies: spectral density mapping of a uniformly labeled RNA hairpin and of a selectively labeled disaccharide exhibiting highly anisotropic tumbling. A combination of auto- and cross-correlated relaxation data acquired at three magnetic fields was applied in the former case in order to separate the effects of fast motions and conformational or chemical exchange. An approach using auto-correlated relaxation rates acquired at five magnetic fields, applicable to anisotropically moving molecules, was used in the latter case. The results were compared with a more advanced analysis of data obtained by interpolation of auto-correlated relaxation rates measured at seven magnetic fields, and with the spectral density mapping of cross-correlated relaxation rates. The results showed that sufficiently accurate values of the auto- and cross-correlated spectral density functions at zero and 13C frequencies can be obtained from data acquired at three magnetic fields for uniformly 13C-labeled molecules with a moderate anisotropy of the rotational diffusion tensor. Analysis of auto-correlated relaxation rates at five magnetic fields represents an alternative for molecules undergoing highly anisotropic motions.
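
    For orientation only, the sketch below evaluates a simple model-free (Lorentzian) spectral density at zero frequency and at the 13C Larmor frequency for an assumed field and correlation time; the multi-field auto-/cross-correlated protocols developed in the paper are not reproduced.

    ```python
    import numpy as np

    def lorentzian_J(omega, tau_c, S2=1.0):
        """Model-free spectral density J(w) = (2/5) * S^2 * tau_c / (1 + (w*tau_c)^2).
        Only an illustration of the quantities targeted by spectral density mapping."""
        return 0.4 * S2 * tau_c / (1.0 + (omega * tau_c) ** 2)

    B0 = 14.1                      # magnetic field in tesla (600 MHz 1H) -- assumed
    gamma_C = 6.728284e7           # 13C gyromagnetic ratio, rad s^-1 T^-1
    omega_C = gamma_C * B0         # 13C Larmor frequency in rad/s
    tau_c = 5e-9                   # hypothetical rotational correlation time, 5 ns

    print("J(0)       =", lorentzian_J(0.0, tau_c))
    print("J(omega_C) =", lorentzian_J(omega_C, tau_c))
    ```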

  6. Association of polycystic ovary syndrome with cardiovascular risk factors.

    PubMed

    Akram, Tanzeela; Hasan, Shahid; Imran, Muhammad; Karim, Asima; Arslan, Muhammad

    2010-01-01

    Polycystic ovary syndrome (PCOS), also clinically known as Stein-Leventhal syndrome, is an endocrine disorder that affects 5-10% of women. To evaluate the risk factors for developing early-onset cardiovascular disease (CVD) in young patients with PCOS from our local population. Case-control study. Fifty women with PCOS, selected by history and transvaginal ultrasound, and 30 age-matched healthy women (controls). The cases and controls were further divided into two age categories comprising equal numbers of subjects aged 20-29 and 30-39 years. The subjects underwent a detailed medical history, a general physical examination, and measurement of systolic (SBP) and diastolic (DBP) blood pressures. Fasting blood samples were analyzed for glucose, insulin, triacylglycerides (TAG), total cholesterol, high density lipoprotein-C (HDL-C), low density lipoprotein-C (LDL-C), follicle-stimulating hormone (FSH), and luteinizing hormone (LH). Women with PCOS had significantly higher mean arterial pressure (MAP), serum TAG, LDL-C, insulin, and LH levels when compared with the age-matched control subjects. No significant differences were observed in serum cholesterol, glucose, and FSH levels between cases and controls. However, no marked differences were observed in biochemical parameters between the two age groups of PCOS patients. Younger women with PCOS are equally at risk of developing CVD as older women.

  7. A GIS/Remote Sensing-based methodology for groundwater potentiality assessment in Tirnavos area, Greece

    NASA Astrophysics Data System (ADS)

    Oikonomidis, D.; Dimogianni, S.; Kazakis, N.; Voudouris, K.

    2015-06-01

    The aim of this paper is to assess groundwater potentiality by combining Geographic Information Systems and Remote Sensing with data obtained from the field, as an additional tool for hydrogeological research. The study was carried out in the broader area of Tirnavos, covering 419.4 km2. The study area is located in Thessaly (central Greece) and is crossed by two rivers, Pinios and Titarisios. Agriculture is one of the main elements of Thessaly's economy, resulting in intense agricultural activity and consequently increased exploitation of groundwater resources. Geographic Information Systems (GIS) and Remote Sensing (RS) were used to create a map that depicts the likelihood of the existence of groundwater, consisting of five classes showing the groundwater potentiality and ranging from very high to very low. The extraction of this map is based on input data such as rainfall, potential recharge, lithology, lineament density, slope, drainage density and depth to groundwater. Weights were assigned to all of these factors according to their relevance to groundwater potential, and a map based on a weighted spatial modeling system was created, as sketched below. Furthermore, a groundwater quality suitability map was produced by overlaying the groundwater potentiality map with the map showing the potential zones for drinking groundwater in the study area. The results provide significant information, and the maps could be used by local authorities for groundwater exploitation and management.
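
    A minimal sketch of the weighted spatial modeling step is given below; the factor rasters, their 1-5 reclassification and the weights are all placeholders, not the values used in the study.

    ```python
    import numpy as np

    # Hypothetical factor rasters, each already reclassified to a 1-5 suitability
    # score (1 = very low, 5 = very high contribution to groundwater potential).
    shape = (200, 200)
    rng = np.random.default_rng(0)
    factors = {
        "rainfall":             rng.integers(1, 6, shape),
        "potential_recharge":   rng.integers(1, 6, shape),
        "lithology":            rng.integers(1, 6, shape),
        "lineament_density":    rng.integers(1, 6, shape),
        "slope":                rng.integers(1, 6, shape),
        "drainage_density":     rng.integers(1, 6, shape),
        "depth_to_groundwater": rng.integers(1, 6, shape),
    }

    # Hypothetical weights summing to 1; the paper's actual weights are not given here.
    weights = {
        "rainfall": 0.20, "potential_recharge": 0.20, "lithology": 0.15,
        "lineament_density": 0.15, "slope": 0.10, "drainage_density": 0.10,
        "depth_to_groundwater": 0.10,
    }

    # Weighted linear combination, then slicing into five potentiality classes.
    score = sum(weights[k] * factors[k].astype(float) for k in factors)
    bins = np.linspace(score.min(), score.max(), 6)        # 5 equal-interval classes
    potentiality = np.digitize(score, bins[1:-1]) + 1      # 1 (very low) .. 5 (very high)
    ```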

  8. Blind beam-hardening correction from Poisson measurements

    NASA Astrophysics Data System (ADS)

    Gu, Renliang; Dogandžić, Aleksandar

    2016-02-01

    We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov's proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.

  9. Correlated hydrogen bonding fluctuations and vibrational cross peaks in N-methyl acetamide: simulation based on a complete electrostatic density functional theory map.

    PubMed

    Hayashi, Tomoyuki; Mukamel, Shaul

    2006-11-21

    The coherent nonlinear response of the entire amide line shapes of N-methyl acetamide to three infrared pulses is simulated using an electrostatic density functional theory map. Positive and negative cross peaks contain signatures of correlations between the fundamentals and the combination state. The amide I-A and I-III cross-peak line shapes indicate positive correlation and anticorrelation of frequency fluctuations, respectively. These can be ascribed to correlated hydrogen bonding at C=O and N-H sites. The amide I frequency is negatively correlated with the hydrogen bond on carbonyl C=O, whereas the amide A and III are negatively and positively correlated, respectively, with the hydrogen bond on amide N-H.

  10. [Use of magnetic therapy combined with galvanization and tissue electrophoresis in the treatment of trophic ulcers].

    PubMed

    Alekseenko, A V; Gusak, V V; Stoliar, V F; Iftodiĭ, A G; Tarabanchuk, V V; Shcherban, N G; Naumets, A A

    1993-01-01

    The results of treatment of 86 patients with the use of magnetotherapy in combination with galvanization and intratissue electrophoresis are presented. To create an electric field, the "Potok-1" apparatus with a density of current equal to 0.05-0.1 mA/cm2 was employed. Simultaneously, the "MAG-30" apparatus for low-frequency magnetotherapy with induction of 30 mT and area of exposure of 20 cm2 was applied to a trophic ulcer site. The use of magnetogalvanotherapy in the complex of treatment of trophic ulcers of the lower extremities is recommended.

  11. Research on Integrated Mapping——A Case Study of Integrated Land Use with Swamp Mapping

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Yan, F.; Chang, L.

    2015-12-01

    The unified real estate registration system reflects the attention, determination and effort of the CPC Central Committee and the State Council regarding real estate registration in China. However, under the current situation, China's real estate registration work has made little progress. One reason is that it is difficult to express real estate property rights on a single map under the multi-sector management system. Under the current multi-sector management system in China, each department usually surveys and maps only the land types under its own jurisdiction. For example, wetland investigations map only wetland resources and do not map other resource types. As a result, overlaps or gaps arise when integrating the results from different departments. As resources of the earth's surface, the total area of forest, grassland, wetland and so on should equal the total land surface area; under the current system, however, the summed area of all resource types does not equal the area of the earth's surface. It is therefore important to express all resources on one map. On the one hand, this helps determine the true area and distribution of resources and avoids overlaps or gaps in integration; on the other hand, it supports the study of dynamic changes in different resources. We therefore propose "integrated mapping" as a solution and take integrated land use with swamp mapping in Northeast China as an example to investigate its feasibility and difficulties. The study showed that integrated land use with swamp mapping can be achieved by combining land use survey standards with swamp survey standards and a "second mapping" program. Based on this experience, we point out its value as a reference for integrated mapping and for the unified real estate registration system. We conclude that: (1) comprehending and integrating the different survey standards of different resources is the premise of "integrated mapping"; (2) we put forward a "multiple code" and "multiple interpretation" scheme to solve the problem of "attribute overlap"; (3) the area of "attribute overlap" can be apportioned by a certain ratio to determine property rights in unified real estate registration.

  12. Automated side-chain model building and sequence assignment by template matching.

    PubMed

    Terwilliger, Thomas C

    2003-01-01

    An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
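
    A minimal sketch of the sequence-assignment step is shown below: given a per-position probability matrix over the 20 amino-acid types (here random placeholder values standing in for the template-matching output), each alignment offset of a main-chain segment against the protein sequence is scored by its summed log probability and converted to a posterior under an assumed uniform prior. High-confidence matches would then be kept.

    ```python
    import numpy as np

    AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino-acid one-letter codes

    def alignment_posteriors(prob_matrix, sequence):
        """prob_matrix: (n_positions, 20) array; row i gives P(amino acid | density)
        at main-chain position i, as estimated from side-chain template matching.
        Returns the posterior probability of each alignment offset into `sequence`,
        assuming a uniform prior over offsets."""
        n = prob_matrix.shape[0]
        idx = {a: k for k, a in enumerate(AA)}
        log_scores = []
        for start in range(len(sequence) - n + 1):
            cols = [idx[sequence[start + i]] for i in range(n)]
            log_scores.append(np.sum(np.log(prob_matrix[np.arange(n), cols] + 1e-12)))
        log_scores = np.array(log_scores)
        post = np.exp(log_scores - log_scores.max())
        return post / post.sum()

    # toy example with a hypothetical 3-residue segment and a short sequence
    rng = np.random.default_rng(1)
    pm = rng.dirichlet(np.ones(20), size=3)    # random but valid probability rows
    print(alignment_posteriors(pm, "MKTAYIAKQR"))
    ```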

  13. Mapping of AFLP markers linked to seed coat colour loci in Brassica juncea (L.) Czern.

    PubMed

    Sabharwal, V; Negi, M S; Banga, S S; Lakshmikumaran, M

    2004-06-01

    Association mapping of the seed-coat colour with amplified fragment length polymorphism (AFLP) markers was carried out in 39 Brassica juncea lines. The lines had genetically diverse parentages and varied for seed-coat colour and other morphological characters. Eleven AFLP primer combinations were used to screen the 39 B. juncea lines, and a total of 335 polymorphic bands were detected. The bands were analysed for association with seed-coat colour using multiple regression analysis. This analysis revealed 15 markers associated with seed-coat colour, obtained with eight AFLP primer combinations. The marker E-ACA/M-CTG(350) explained 69% of the variation in seed-coat colour. This marker along with markers E-AAC/M-CTC(235) and E-AAC/M-CTA(250) explained 89% of the total variation. The 15 associated markers were validated for linkage with the seed-coat colour loci using a recombinant inbred line (RIL) mapping population. Bands were amplified with the eight AFLP primer combinations in 54 RIL progenies. Of the 15 associated markers, 11 mapped on two linkage groups. Eight markers were placed on linkage group 1 at a marker density of 6.0 cM, while the remaining three were mapped on linkage group 2 at a marker density of 3.6 cM. Marker E-ACA/M-CTG(350) co-segregated with Gene1 controlling seed-coat colour; it was specific for yellow seed-coat colour and mapped to linkage group 1. Marker E-AAC/M-CTC(235) (AFLP8), which had been studied previously, was present on linkage group 2; it was specific for brown seed-coat colour. Since AFLP markers are not adapted for large-scale applications in plant breeding, it is important to convert these to sequence-characterised amplified region (SCAR) markers. Marker E-AAC/M-CTC(235) (AFLP8) had been previously converted into a SCAR. Work is in progress to convert the second of the linked markers, E-ACA/M-CTG(350), to a SCAR. The two linked AFLP markers converted to SCARs will be useful for developing yellow-seeded B. juncea lines by means of marker-assisted selection.

  14. Exploiting genotyping by sequencing to characterize the genomic structure of the American cranberry through high-density linkage mapping

    USDA-ARS?s Scientific Manuscript database

    The application of genotyping by sequencing (GBS) approaches, combined with data imputation methodologies, is narrowing the genetic knowledge gap between major and understudied, minor crops. GBS is an excellent tool to characterize the genomic structure of recently domesticated (~200 years) and unde...

  15. Combined Exact-Repeat and Geodetic Mission Altimetry for High-Resolution Empirical Tide Mapping

    NASA Astrophysics Data System (ADS)

    Zaron, E. D.

    2014-12-01

    The configuration of present and historical exact-repeat mission (ERM) altimeter ground tracks determines the maximum resolution of empirical tidal maps obtained with ERM data. Although the mode-1 baroclinic tide is resolvable at mid-latitudes in the open ocean, the ability to detect baroclinic and barotropic tides near islands and complex coastlines is limited, in part, by ERM track density. In order to obtain higher-resolution maps, the possibility of combining ERM and geodetic mission (GM) altimetry is considered, using a combination of spatial thin-plate splines and temporal harmonic analysis. Given the present spatial and temporal distribution of GM missions, it is found that GM data can contribute to resolving tidal features smaller than 75 km, provided the signal amplitude is greater than about 1 cm. Uncertainties in the mean sea surface and environmental corrections are significant components of the GM error budget, and methods for data selection and along-track filtering are still being refined. Application to two regions, Monterey Bay and Luzon Strait, finds evidence for complex tidal fields in agreement with independent observations and modeling studies.
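
    The temporal-harmonic-analysis component can be illustrated with a least-squares fit of a single constituent (M2) to an irregularly sampled sea-surface-height series; the data below are synthetic, and the spatial thin-plate-spline part of the method is omitted.

    ```python
    import numpy as np

    M2_PERIOD_HOURS = 12.4206012           # principal lunar semidiurnal period
    omega = 2.0 * np.pi / M2_PERIOD_HOURS  # rad per hour

    # Synthetic sea-surface-height samples at irregular times (as along-track
    # altimetry would give), with an assumed 3 cm M2 amplitude and 40-degree phase.
    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, 24.0 * 365, 500))       # hours over roughly one year
    true_amp, true_phase = 0.03, np.deg2rad(40.0)
    ssh = true_amp * np.cos(omega * t - true_phase) + 0.01 * rng.standard_normal(t.size)

    # Harmonic analysis: least-squares fit of cos/sin terms at the M2 frequency.
    A = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
    (c, s), *_ = np.linalg.lstsq(A, ssh, rcond=None)
    amp = np.hypot(c, s)
    phase = np.arctan2(s, c)
    print(f"recovered M2 amplitude {amp*100:.2f} cm, phase {np.degrees(phase):.1f} deg")
    ```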

  16. Effects of gravity in folding

    NASA Astrophysics Data System (ADS)

    Minkel, Donald Howe

    Effects of gravity on buckle folding are studied using a Newtonian fluid finite element model of a single layer embedded between two thicker, less viscous layers. The methods allow arbitrary density jumps, surface tension coefficients, resistance to slip at the interfaces, and tracking of fold growth to large amplitudes. When density increases downward in two equal jumps, a layer buckles less and thickens more than with uniform density. When density increases upward in two equal jumps, it buckles more and thickens less. A low-density layer with periodic thickness variations buckles more, sometimes explosively. Thickness variations form even if not present initially. These effects are greater with smaller viscosities, larger density jumps, larger length scales, and slower shortening rates. They also depend on wavelength and amplitude, and these dependencies are described in detail. The model is applied to the explosive growth of the salt anticlines of the Paradox Basin, Colorado and Utah. There, shale (higher density) overlies salt (lower density). Methods for simulating realistic earth surface erosion and deposition conditions are introduced. Growth rates increase both with ease of slip at the salt-shale interface and when earth surface relief stays low due to erosion and deposition. Model anticlines grow explosively, attaining growth rates and amplitudes close to those of the field examples. The fastest-growing wavelengths are the same as those seen in the field. It is concluded that a combination of partial slip at the salt-shale interface with reasonable earth surface conditions promotes sufficiently fast buckling of the salt-shale interface due to density inversion alone. Neither basement faulting nor tectonic shortening is required to account for the observed structures. Of fundamental importance is the strong tendency of gravity to promote buckling in low-density layers with thickness variations. These develop even if not present initially.

  17. Estimation of dislocations density and distribution of dislocations during ECAP-Conform process

    NASA Astrophysics Data System (ADS)

    Derakhshan, Jaber Fakhimi; Parsa, Mohammad Habibi; Ayati, Vahid; Jafarian, Hamidreza

    2018-01-01

    The dislocation density of a coarse-grained aluminum AA1100 alloy (grain size 140 µm) severely deformed by Equal Channel Angular Pressing-Conform (ECAP-Conform) is studied at various stages of the process by the electron backscatter diffraction (EBSD) method. The geometrically necessary dislocation (GND) and statistically stored dislocation (SSD) densities were estimated, the total dislocation densities were calculated, and the dislocation distributions are presented as contour maps. The estimated average dislocation density of about 2×10^12 m^-2 for the annealed material increases to 4×10^13 m^-2 at the middle of the groove (135° from the entrance) and reaches 6.4×10^13 m^-2 at the end of the groove just before the ECAP region. The calculated average dislocation density for the Al sample severely deformed by one pass reaches 6.2×10^14 m^-2. At the micrometer scale, the behavior of metals, and especially their mechanical properties, depends largely on the dislocation density and dislocation distribution. Yield stresses under the different conditions were therefore estimated from the calculated dislocation densities; the estimated yield stresses were compared with experimental results and good agreement was found. Although the grain size of the material did not change appreciably, the yield stress increased markedly owing to the development of a cell structure. The considerable increase in dislocation density during this process supports the formation of subgrains and cell structures, which can explain the increase in yield stress.
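
    The abstract does not state which model links dislocation density to yield stress; a common choice is the Taylor hardening relation, sketched below with typical literature parameters for aluminium (assumed values, not taken from the paper).

    ```python
    import numpy as np

    def taylor_yield_stress(rho, sigma0=20e6, alpha=0.3, M=3.06, G=26e9, b=0.286e-9):
        """Taylor hardening: sigma_y = sigma_0 + M * alpha * G * b * sqrt(rho).
        Parameter values are typical literature numbers for aluminium and are
        assumptions, not values taken from the paper.
          sigma0 : friction stress (Pa), alpha : constant, M : Taylor factor,
          G : shear modulus (Pa),        b : Burgers vector (m), rho : m^-2."""
        return sigma0 + M * alpha * G * b * np.sqrt(rho)

    for rho in (2e12, 4e13, 6.4e13, 6.2e14):   # densities reported in the abstract
        print(f"rho = {rho:.1e} m^-2  ->  sigma_y ~ {taylor_yield_stress(rho)/1e6:.0f} MPa")
    ```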

  18. Globally optimal superconducting magnets part I: minimum stored energy (MSE) current density map.

    PubMed

    Tieng, Quang M; Vegh, Viktor; Brereton, Ian M

    2009-01-01

    An optimal current density map is crucial in magnet design to provide the initial values within search spaces in an optimization process for determining the final coil arrangement of the magnet. A strategy is outlined for obtaining globally optimal current density maps for designing magnets with coaxial cylindrical coils in which the stored energy is minimized within a constrained domain. The current density maps obtained utilising the proposed method suggest that peak current densities occur around the perimeter of the magnet domain, where adjacent peaks have alternating current directions for the most compact designs. As the dimensions of the domain are increased, the current density maps yield traditional magnet designs of positive current alone. These unique current density maps are obtained by minimizing the stored magnetic energy cost function and therefore suggest magnet coil designs of minimal system energy. Current density maps are provided for a number of different domain arrangements to illustrate the flexibility of the method and the quality of the achievable designs.

  19. Mapping Sub-Antarctic Cushion Plants Using Random Forests to Combine Very High Resolution Satellite Imagery and Terrain Modelling

    PubMed Central

    Bricher, Phillippa K.; Lucieer, Arko; Shaw, Justine; Terauds, Aleks; Bergstrom, Dana M.

    2013-01-01

    Monitoring changes in the distribution and density of plant species often requires accurate and high-resolution baseline maps of those species. Detecting such change at the landscape scale is often problematic, particularly in remote areas. We examine a new technique to improve accuracy and objectivity in mapping vegetation, combining species distribution modelling and satellite image classification on a remote sub-Antarctic island. In this study, we combine spectral data from very high resolution WorldView-2 satellite imagery and terrain variables from a high resolution digital elevation model to improve mapping accuracy, in both pixel- and object-based classifications. Random forest classification was used to explore the effectiveness of these approaches on mapping the distribution of the critically endangered cushion plant Azorella macquariensis Orchard (Apiaceae) on sub-Antarctic Macquarie Island. Both pixel- and object-based classifications of the distribution of Azorella achieved very high overall validation accuracies (91.6–96.3%, κ = 0.849–0.924). Both two-class and three-class classifications were able to accurately and consistently identify the areas where Azorella was absent, indicating that these maps provide a suitable baseline for monitoring expected change in the distribution of the cushion plants. Detecting such change is critical given the threats this species is currently facing under altering environmental conditions. The method presented here has applications to monitoring a range of species, particularly in remote and isolated environments. PMID:23940805
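
    A minimal sketch of a pixel-based random forest classification combining spectral and terrain features is given below; the feature table and labels are synthetic placeholders rather than the WorldView-2 bands and DEM derivatives used in the study.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, cohen_kappa_score

    # Hypothetical per-pixel feature table: 8 spectral bands plus 3 terrain variables
    # (e.g. elevation, slope, wetness index), with labels 0 = absent, 1 = present.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 11))
    y = (X[:, 0] + 0.5 * X[:, 8] + 0.2 * rng.normal(size=5000) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rf.fit(X_train, y_train)
    pred = rf.predict(X_test)
    print("overall accuracy:", accuracy_score(y_test, pred))
    print("kappa:", cohen_kappa_score(y_test, pred))
    print("feature importances:", rf.feature_importances_.round(3))
    ```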

  20. Galaxy bias from the Dark Energy Survey Science Verification data: combining galaxy density maps and weak lensing maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C.; Pujol, A.; Gaztañaga, E.

    We measure the redshift evolution of galaxy bias from a magnitude-limited galaxy sample by combining the galaxy density maps and weak lensing shear maps for a ~116 deg² area of the Dark Energy Survey (DES) Science Verification data. This method was first developed in Amara et al. (2012) and later re-examined in a companion paper (Pujol et al., in prep) with rigorous simulation tests and analytical treatment of tomographic measurements. In this work we apply this method to the DES SV data and measure the galaxy bias for a magnitude-limited galaxy sample. We find the galaxy bias and 1σ error bars in 4 photometric redshift bins to be 1.33±0.18 (z=0.2-0.4), 1.19±0.23 (z=0.4-0.6), 0.99±0.36 (z=0.6-0.8), and 1.66±0.56 (z=0.8-1.0). These measurements are consistent at the 1-2σ level with measurements on the same dataset using galaxy clustering and cross-correlation of galaxies with CMB lensing. In addition, our method provides the only σ8-independent constraint among the three. We forward-model the main observational effects using mock galaxy catalogs by including shape noise, photo-z errors and masking effects. We show that our bias measurement from the data is consistent with that expected from simulations. With the forthcoming full DES data set, we expect this method to provide additional constraints on the galaxy bias measurement from more traditional methods. Furthermore, in the process of our measurement, we build up a 3D mass map that allows further exploration of the dark matter distribution and its relation to galaxy evolution.

  1. Geostatistical analysis of disease data: accounting for spatial support and population density in the isopleth mapping of cancer mortality risk using area-to-point Poisson kriging

    PubMed Central

    Goovaerts, Pierre

    2006-01-01

    Background Geostatistical techniques that account for spatially varying population sizes and spatial patterns in the filtering of choropleth maps of cancer mortality were recently developed. Their implementation was facilitated by the initial assumption that all geographical units are the same size and shape, which allowed the use of geographic centroids in semivariogram estimation and kriging. Another implicit assumption was that the population at risk is uniformly distributed within each unit. This paper presents a generalization of Poisson kriging whereby the size and shape of administrative units, as well as the population density, are incorporated into the filtering of noisy mortality rates and the creation of isopleth risk maps. An innovative procedure to infer the point-support semivariogram of the risk from aggregated rates (i.e. areal data) is also proposed. Results The novel methodology is applied to age-adjusted lung and cervix cancer mortality rates recorded for white females in two contrasted county geographies: 1) the state of Indiana, which consists of 92 counties of fairly similar size and shape, and 2) four states in the Western US (Arizona, California, Nevada and Utah) forming a set of 118 counties that are vastly different geographical units. Area-to-point (ATP) Poisson kriging produces risk surfaces that are less smooth than the maps created by a naïve point kriging of empirical Bayesian smoothed rates. The coherence constraint of ATP kriging also ensures that the population-weighted average of risk estimates within each geographical unit equals the areal data for this unit. Simulation studies showed that the new approach yields more accurate predictions and confidence intervals than point kriging of areal data where all counties are simply collapsed into their respective polygon centroids. Its benefit over point kriging increases as the county geography becomes more heterogeneous. Conclusion A major limitation of choropleth maps is the common biased visual perception that larger rural and sparsely populated areas are of greater importance. The approach presented in this paper allows the continuous mapping of mortality risk, while accounting locally for population density and areal data through the coherence constraint. This form of Poisson kriging will facilitate the analysis of relationships between health data and putative covariates that are typically measured over different spatial supports. PMID:17137504

  2. Multi-atlas based segmentation using probabilistic label fusion with adaptive weighting of image similarity measures.

    PubMed

    Sjöberg, C; Ahnesjö, A

    2013-06-01

    Label fusion multi-atlas approaches for image segmentation can give better segmentation results than single atlas methods. We present a multi-atlas label fusion strategy based on probabilistic weighting of distance maps. Relationships between image similarities and segmentation similarities are estimated in a learning phase and used to derive fusion weights that are proportional to the probability for each atlas to improve the segmentation result. The method was tested using a leave-one-out strategy on a database of 21 pre-segmented prostate patients for different image registrations combined with different image similarity scorings. The probabilistic weighting yields results that are equal or better compared to both fusion with equal weights and results using the STAPLE algorithm. Results from the experiments demonstrate that label fusion by weighted distance maps is feasible, and that probabilistic weighted fusion improves segmentation quality more the stronger the individual atlas segmentation quality depends on the corresponding registered image similarity. The regions used for evaluation of the image similarity measures were found to be more important than the choice of similarity measure. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
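
    A minimal sketch of label fusion by weighted distance maps is shown below; the per-atlas weights are placeholders, whereas in the paper they are derived from the learned relationship between image similarity and segmentation similarity.

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def signed_distance(mask):
        """Positive inside the structure, negative outside (voxel units)."""
        inside = distance_transform_edt(mask)
        outside = distance_transform_edt(~mask)
        return inside - outside

    def fuse_labels(atlas_masks, weights):
        """Weighted fusion of registered atlas segmentations via signed distance maps."""
        weights = np.asarray(weights, dtype=float)
        weights /= weights.sum()
        fused = sum(w * signed_distance(m) for w, m in zip(weights, atlas_masks))
        return fused > 0.0

    # toy 2D example with three "atlases" that disagree slightly about a square
    masks = []
    for shift in (-1, 0, 2):
        m = np.zeros((40, 40), dtype=bool)
        m[10 + shift:30 + shift, 10:30] = True
        masks.append(m)
    weights = [0.5, 0.3, 0.2]      # placeholder weights, not learned probabilities
    segmentation = fuse_labels(masks, weights)
    ```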

  3. Repeat Mapping in the Lower Monterey Submarine Canyon Sheds Light on Morphological Change During Discrete Sediment Density Flow Events

    NASA Astrophysics Data System (ADS)

    Anderson, K.; Lundsten, E. M.; Caress, D. W.; Thomas, H. J.; Paull, C. K.; Maier, K. L.; Gales, J. A.; Gwiazda, R.; Talling, P.; Xu, J.; Parsons, D. R.

    2017-12-01

    The Coordinated Canyon Experiment (CCE), a multi-institutional collaboration, was designed to monitor the passage of sediment density flows along the axis of Monterey Canyon, offshore California, between 200 and 1850 m water depth. An array of moorings and sensors was deployed for three 6-month periods from October 2015 to April 2017. Aligned with the CCE deployments, repeat high-resolution multibeam bathymetric surveys of the Monterey Canyon floor were conducted with a mapping AUV (Autonomous Underwater Vehicle). The AUV carried a Reson 7125 multibeam echosounder (vertical precision of 0.15 m and horizontal resolution of 1.0 m). An inertial navigation system combined with a Doppler velocity logger allowed the AUV to fly pre-programmed grids at 3 knots, while maintaining an altitude of 50 m above the seafloor, to obtain a nominal line spacing of 130 m. The floor and lower flanks of the canyon between 200 and 540 m and between 1350 and 1880 m water depths were mapped six times during the CCE. These repeat maps are subtracted to create bathymetry difference grids that show morphological change. Coupling the sensor observations with the bathymetric surveys, the CCE successfully documented sediment density flow events as well as the associated changes in seafloor morphology. Between repeat surveys, three sediment density flow events reached the lower canyon, extending to at least 1850 m water depth. On January 15, 2016, a particularly large density flow traveled more than 50 km down Monterey Canyon. Unlike in the upper canyon, where this event caused wholesale reorganization of geomorphological features, changes to the lower canyon morphology involved a more moderate re-sculpting of the features. The effect of a sediment density flow of known magnitude and duration on the seafloor morphology has never before been documented in a deep-sea setting.

  4. Digital mine claim density map for Federal lands in Montana, 1996

    USGS Publications Warehouse

    Campbell, Harry W.; Hyndman, Paul C.

    1998-01-01

    This report describes a digital map and data files generated by the U.S. Geological Survey (USGS) to provide digital spatial mining claim information for Federal lands in Montana as of March, 1997. Statewide, 159,704 claims had been recorded with the Bureau of Land Management since 1975. Of those claims, 21,055 (13%) are still actively held while 138,649 (87%) are closed and are no longer held. Montana contains 147,704 sections (usually 1 section equals 1 square mile) in the Public Land Survey System, with 8,569 sections (6%) containing claim data. Of the sections with claim data, 2,192 (26%) contain actively held claims. Only 1.5% of Montana’s sections contains actively held mining claims. The four types of mining claim are lode, placer, mill, and tunnel. A mill claim may be as much as 5 acres or 1/128th (0.78125%) of a square mile. A lode claim, about 20 acres, would cover 1/32nd (3.125%) of a square mile. Mining claim data is earth science information deemed to be relevant to the assessment of historic, current, and future ecological, economic, and social systems. The digital map and data files that are available in this report are suitable for geographic information system (GIS)-based regional assessments at a scale of 1:100,000 or smaller. Campbell (1996) summarized the methodology and GIS techniques that were used to produce the mining claim density map of the Pacific Northwest. Campbell and Hyndman (1997) displayed mining claim information for the Pacific Northwest that used data acquired in 1994. Appendix A of this report lists the attribute data for the digital data files. Appendix B contains the GIS metadata.

  5. Development and characterization of plasma targets for controlled injection of electrons into laser-driven wakefields

    NASA Astrophysics Data System (ADS)

    Kleinwaechter, Tobias; Goldberg, Lars; Palmer, Charlotte; Schaper, Lucas; Schwinkendorf, Jan-Patrick; Osterhoff, Jens

    2012-10-01

    Laser-driven wakefield acceleration within capillary discharge waveguides has been used to generate high-quality electron bunches with GeV-scale energies. However, owing to fluctuations in laser and plasma conditions in combination with a difficult to control self-injection mechanism in the non-linear wakefield regime these bunches are often not reproducible and can feature large energy spreads. Specialized plasma targets with tailored density profiles offer the possibility to overcome these issues by controlling the injection and acceleration processes. This requires precise manipulation of the longitudinal density profile. Therefore our target concept is based on a capillary structure with multiple gas in- and outlets. Potential target designs are simulated using the fluid code OpenFOAM and those meeting the specified criteria are fabricated using femtosecond-laser machining of structures into sapphire plates. Density profiles are measured over a range of inlet pressures utilizing gas-density profilometry via Raman scattering and pressure calibration with longitudinal interferometry. In combination these allow absolute density mapping. Here we report the preliminary results.

  6. Using Gravity and Topography to Map Mars' Crustal Thickness

    NASA Image and Video Library

    2016-03-21

    Newly detailed mapping of local variations in Mars' gravitational pull on orbiters (center), combined with topographical mapping of the planet's mountains and valleys (left) yields the best-yet mapping of Mars' crustal thickness (right). These three views of global mapping are centered at 90 degrees west longitude, showing portions of the planet that include tall volcanoes on the left and the deep Valles Marineris canyon system just right of center. Additional views of these global maps are available at http://svs.gsfc.nasa.gov/goto?4436. The new map of Mars' gravity (center) results from analysis of the planet's gravitational effects on orbiters passing over each location on the globe. The data come from many years of using NASA's Deep Space Network to track positions and velocities of NASA's Mars Global Surveyor, Mars Odyssey and Mars Reconnaissance Orbiter. If Mars were a perfectly smooth sphere of uniform density, the gravity experienced by the spacecraft would be exactly the same everywhere. But like other rocky bodies in the solar system, including Earth, Mars has both a bumpy surface and a lumpy interior. As the spacecraft fly in their orbits, they experience slight variations in gravity caused by both of these irregularities, variations which show up as small changes in the velocity and altitude of the three spacecraft. The "free-air" gravity map presents the results without any adjustment for the known bumpiness of Mars' surface. Local gravitational variations in acceleration are expressed in units called gals or galileos. The color-coding key beneath the center map indicates how colors on the map correspond to mGal (milligal) values. The map on the left shows the known bumpiness, or topography, of the Martian surface, using data from the Mars Orbiter Laser Altimeter (MOLA) instrument on Mars Global Surveyor. Mars has no actual "sea level," but does have a defined zero elevation level. The color-coding key beneath this map indicates how the colors correspond to elevations above or below zero, in kilometers. Analysis that subtracts effects of the surface topography from the free-air gravity mapping, combined with an assumption that crust material has a uniform density, leads to the derived mapping of crustal thickness -- or subsurface "lumpiness" -- on the right. Highs in gravity indicate places where the denser mantle material beneath the crust is closer to the surface, and hence where the crust is thinner. The color-coding key for this map indicates how the colors on the map correspond to the thickness of the crust, in kilometers. http://photojournal.jpl.nasa.gov/catalog/PIA20277
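
    The topography-subtraction step can be illustrated, in a highly simplified way, with an infinite-slab (Bouguer) correction under an assumed uniform crustal density; the actual analysis uses full spherical-harmonic gravity modeling, which is not reproduced here.

    ```python
    import numpy as np

    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    RHO_CRUST = 2900.0     # assumed uniform crustal density, kg m^-3

    def bouguer_correction_mgal(topography_m):
        """Infinite-slab attraction of topography of thickness h: 2*pi*G*rho*h,
        converted to milligal (1 mGal = 1e-5 m s^-2)."""
        return 2.0 * np.pi * G * RHO_CRUST * topography_m / 1e-5

    # hypothetical free-air anomaly values (mGal) and elevations (m) on a tiny grid
    free_air = np.array([[250.0, 120.0], [-80.0, 30.0]])
    topo = np.array([[8000.0, 2000.0], [-4000.0, 500.0]])

    bouguer_anomaly = free_air - bouguer_correction_mgal(topo)
    print(bouguer_anomaly)   # lows suggest thicker, lower-density crustal roots
    ```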

  7. Characterizing TPS Microstructure: A Review of Some techniques

    NASA Technical Reports Server (NTRS)

    Gasch, Matthew; Stackpole, Mairead; Agrawal, Parul; Chavez-Garcie, Jose

    2011-01-01

    I. When seeking to understand ablator microstructure and morphology there are several useful techniques. A. SEM: 1) Visual characterization at various length scales. 2) Chemical mapping by backscatter or x-ray highlights areas of interest. 3) Combined with other techniques (density, weight change, chemical analysis) SEM is a powerful tool to aid in explaining thermo/structural data. B. ASAP: 1) Chemical characterization at various length scales. 2) Chemical mapping of pore structure by gas adsorption. 3) Provides a map of pore size vs. pore volume. 4) Provides surface area of exposed TPS. II. Both methods help characterize and understand how ablators react with other chemical species and provide insight into how they oxidize.

  8. Computational Prediction of Atomic Structures of Helical Membrane Proteins Aided by EM Maps

    PubMed Central

    Kovacs, Julio A.; Yeager, Mark; Abagyan, Ruben

    2007-01-01

    Integral membrane proteins pose a major challenge for protein-structure prediction because only ≈100 high-resolution structures are available currently, thereby impeding the development of rules or empirical potentials to predict the packing of transmembrane α-helices. However, when an intermediate-resolution electron microscopy (EM) map is available, it can be used to provide restraints which, in combination with a suitable computational protocol, make structure prediction feasible. In this work we present such a protocol, which proceeds in three stages: 1), generation of an ensemble of α-helices by flexible fitting into each of the density rods in the low-resolution EM map, spanning a range of rotational angles around the main helical axes and translational shifts along the density rods; 2), fast optimization of side chains and scoring of the resulting conformations; and 3), refinement of the lowest-scoring conformations with internal coordinate mechanics, by optimizing the van der Waals, electrostatics, hydrogen bonding, torsional, and solvation energy contributions. In addition, our method implements a penalty term through a so-called tethering map, derived from the EM map, which restrains the positions of the α-helices. The protocol was validated on three test cases: GpA, KcsA, and MscL. PMID:17496035

  9. Integrating population dynamics into mapping human exposure to seismic hazard

    NASA Astrophysics Data System (ADS)

    Freire, S.; Aubrecht, C.

    2012-11-01

    Disaster risk is not fully characterized without taking into account vulnerability and population exposure. Assessment of earthquake risk in urban areas would benefit from considering the variation of population distribution at more detailed spatial and temporal scales, and from a more explicit integration of this improved demographic data with existing seismic hazard maps. In the present work, "intelligent" dasymetric mapping is used to model population dynamics at high spatial resolution in order to benefit the analysis of spatio-temporal exposure to earthquake hazard in a metropolitan area. These night- and daytime-specific population densities are then classified and combined with seismic intensity levels to derive new spatially-explicit four-class-composite maps of human exposure. The presented approach enables a more thorough assessment of population exposure to earthquake hazard. Results show that there are significantly more people potentially at risk in the daytime period, demonstrating the shifting nature of population exposure in the daily cycle and the need to move beyond conventional residence-based demographic data sources to improve risk analyses. The proposed fine-scale maps of human exposure to seismic intensity are mainly aimed at benefiting visualization and communication of earthquake risk, but can be valuable in all phases of the disaster management process where knowledge of population densities is relevant for decision-making.
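
    The classification-and-combination step can be sketched as a cross-tabulation of population density classes with seismic intensity classes; the class breaks and the composite scheme below are placeholders, not the ones used in the paper.

    ```python
    import numpy as np

    # Hypothetical inputs on a common grid:
    #   pop_class:       1 (low) .. 3 (high) day- or nighttime population density class
    #   intensity_class: 1 (low) .. 3 (high) seismic intensity class
    rng = np.random.default_rng(0)
    pop_class = rng.integers(1, 4, size=(100, 100))
    intensity_class = rng.integers(1, 4, size=(100, 100))

    # Cross-tabulation matrix assigning a composite exposure class (1-4);
    # the actual four-class scheme in the paper may differ.
    lookup = np.array([
        [1, 1, 2],    # low population
        [1, 2, 3],    # medium population
        [2, 3, 4],    # high population
    ])
    exposure = lookup[pop_class - 1, intensity_class - 1]
    counts = {c: int((exposure == c).sum()) for c in range(1, 5)}
    print(counts)
    ```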

  10. High-density genetic maps for loci involved in nuclear male sterility (NMS1) and sporophytic self-incompatibility (S-locus) in chicory (Cichorium intybus L., Asteraceae).

    PubMed

    Gonthier, Lucy; Blassiau, Christelle; Mörchen, Monika; Cadalen, Thierry; Poiret, Matthieu; Hendriks, Theo; Quillet, Marie-Christine

    2013-08-01

    High-density genetic maps were constructed for loci involved in nuclear male sterility (NMS1-locus) and sporophytic self-incompatibility (S-locus) in chicory (Cichorium intybus L.). The mapping population consisted of 389 F1' individuals derived from a cross between two plants, K28 (male-sterile) and K59 (pollen-fertile), both heterozygous at the S-locus. This F1' mapping population segregated for both male sterility (MS) and strong self-incompatibility (SI) phenotypes. Phenotyping F1' individuals for MS allowed us to map the NMS1-locus to linkage group (LG) 5, while controlled diallel and factorial crosses to identify compatible/incompatible phenotypes mapped the S-locus to LG2. To increase the density of markers around these loci, bulked segregant analysis was used. Bulks and parental plants K28 and K59 were screened using amplified fragment length polymorphism (AFLP) analysis, with a complete set of 256 primer combinations of EcoRI-ANN and MseI-CNN. A total of 31,000 fragments were generated, of which 2,350 showed polymorphism between K59 and K28. Thirteen AFLP markers were identified close to the NMS1-locus and six in the vicinity of the S-locus. From these AFLP markers, eight were transformed into sequence-characterized amplified region (SCAR) markers and of these five showed co-dominant polymorphism. The chromosomal regions containing the NMS1-locus and the S-locus were each confined to a region of 0.8 cM. In addition, we mapped genes encoding proteins similar to S-receptor kinase, the female determinant of sporophytic SI in the Brassicaceae, and also markers in the vicinity of the putative S-locus of sunflower, but none of these genes or markers mapped close to the chicory S-locus.

  11. Time-dependent transition density matrix for visualizing charge-transfer excitations in photoexcited organic donor-acceptor systems

    NASA Astrophysics Data System (ADS)

    Li, Yonghui; Ullrich, Carsten

    2013-03-01

    The time-dependent transition density matrix (TDM) is a useful tool to visualize and interpret the induced charges and electron-hole coherences of excitonic processes in large molecules. Combined with time-dependent density functional theory on a real-space grid (as implemented in the octopus code), the TDM is a computationally viable visualization tool for optical excitation processes in molecules. It provides real-time maps of particles and holes which gives information on excitations, in particular those that have charge-transfer character, that cannot be obtained from the density alone. Some illustration of the TDM and comparison with standard density difference plots will be shown for photoexcited organic donor-acceptor molecules. This work is supported by NSF Grant DMR-1005651

  12. Crowdsourcing Vector Surveillance: Using Community Knowledge and Experiences to Predict Densities and Distribution of Outdoor-Biting Mosquitoes in Rural Tanzania.

    PubMed

    Mwangungulu, Stephen Peter; Sumaye, Robert David; Limwagu, Alex Julius; Siria, Doreen Josen; Kaindoa, Emmanuel Wilson; Okumu, Fredros Oketch

    2016-01-01

    Lack of reliable techniques for large-scale monitoring of disease-transmitting mosquitoes is a major public health challenge, especially where advanced geo-information systems are not regularly applicable. We tested an innovative crowd-sourcing approach, which relies simply on knowledge and experiences of residents to rapidly predict areas where disease-transmitting mosquitoes are most abundant. Guided by community-based resource persons, we mapped boundaries and major physical features in three rural Tanzanian villages. We then selected 60 community members, taught them basic map-reading skills, and offered them gridded maps of their own villages (grid size: 200m×200m) so they could identify locations where they believed mosquitoes were most abundant, by ranking the grids from one (highest density) to five (lowest density). The ranks were interpolated in ArcGIS-10 (ESRI-USA) using inverse distance weighting (IDW) method, and re-classified to depict areas people believed had high, medium and low mosquito densities. Finally, we used odor-baited mosquito traps to compare and verify actual outdoor mosquito densities in the same areas. We repeated this process for 12 months, each time with a different group of 60 residents. All entomological surveys depicted similar geographical stratification of mosquito densities in areas classified by community members as having high, medium and low vector abundance. These similarities were observed when all mosquito species were combined, and also when only malaria vectors were considered. Of the 12,412 mosquitoes caught, 60.9% (7,555) were from areas considered by community members as having high mosquito densities, 28% (3,470) from medium density areas, and 11.2% (1,387) from low density areas. This study provides evidence that we can rely on community knowledge and experiences to identify areas where mosquitoes are most abundant or least abundant, even without entomological surveys. This crowd-sourcing method could be further refined and validated to improve community-based planning of mosquito control operations at low-cost.
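
    A minimal sketch of the inverse distance weighting (IDW) interpolation of the community-assigned ranks is given below; the coordinates, ranks, power parameter and reclassification breaks are all hypothetical (the study used the IDW implementation in ArcGIS-10).

    ```python
    import numpy as np

    def idw(xy_known, values, xy_query, power=2.0, eps=1e-9):
        """Inverse distance weighting: each query point gets a weighted average of the
        known ranks, with weights 1 / distance**power."""
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        w = 1.0 / (d + eps) ** power
        return (w * values[None, :]).sum(axis=1) / w.sum(axis=1)

    # hypothetical grid-centre coordinates (metres) and community ranks
    # (1 = highest perceived mosquito density ... 5 = lowest)
    rng = np.random.default_rng(0)
    centres = rng.uniform(0, 2000, size=(60, 2))
    ranks = rng.integers(1, 6, size=60).astype(float)

    # interpolate onto a regular 200 m grid and reclassify into three density classes
    gx, gy = np.meshgrid(np.arange(0, 2000, 200.0), np.arange(0, 2000, 200.0))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    surface = idw(centres, ranks, grid).reshape(gx.shape)
    density_class = np.digitize(surface, [2.33, 3.67])   # 0 = high, 1 = medium, 2 = low
    ```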

  13. Crowdsourcing Vector Surveillance: Using Community Knowledge and Experiences to Predict Densities and Distribution of Outdoor-Biting Mosquitoes in Rural Tanzania

    PubMed Central

    Limwagu, Alex Julius; Siria, Doreen Josen; Kaindoa, Emmanuel Wilson; Okumu, Fredros Oketch

    2016-01-01

    Lack of reliable techniques for large-scale monitoring of disease-transmitting mosquitoes is a major public health challenge, especially where advanced geo-information systems are not regularly applicable. We tested an innovative crowd-sourcing approach, which relies simply on knowledge and experiences of residents to rapidly predict areas where disease-transmitting mosquitoes are most abundant. Guided by community-based resource persons, we mapped boundaries and major physical features in three rural Tanzanian villages. We then selected 60 community members, taught them basic map-reading skills, and offered them gridded maps of their own villages (grid size: 200m×200m) so they could identify locations where they believed mosquitoes were most abundant, by ranking the grids from one (highest density) to five (lowest density). The ranks were interpolated in ArcGIS-10 (ESRI-USA) using inverse distance weighting (IDW) method, and re-classified to depict areas people believed had high, medium and low mosquito densities. Finally, we used odor-baited mosquito traps to compare and verify actual outdoor mosquito densities in the same areas. We repeated this process for 12 months, each time with a different group of 60 residents. All entomological surveys depicted similar geographical stratification of mosquito densities in areas classified by community members as having high, medium and low vector abundance. These similarities were observed when all mosquito species were combined, and also when only malaria vectors were considered. Of the 12,412 mosquitoes caught, 60.9% (7,555) were from areas considered by community members as having high mosquito densities, 28% (3,470) from medium density areas, and 11.2% (1,387) from low density areas. This study provides evidence that we can rely on community knowledge and experiences to identify areas where mosquitoes are most abundant or least abundant, even without entomological surveys. This crowd-sourcing method could be further refined and validated to improve community-based planning of mosquito control operations at low-cost. PMID:27253869

  14. Georeferenced LiDAR 3D vine plantation map generation.

    PubMed

    Llorens, Jordi; Gil, Emilio; Llop, Jordi; Queraltó, Meritxell

    2011-01-01

    The use of electronic devices for canopy characterization has recently been widely discussed. Among such devices, LiDAR sensors appear to be the most accurate and precise. Information obtained with LiDAR sensors during readings taken while driving a tractor along a crop row can be managed and transformed into canopy density maps by evaluating the frequency of LiDAR returns. This paper describes a proposed methodology to obtain a georeferenced canopy map by combining the information obtained with LiDAR with that generated using a GPS receiver installed on top of a tractor. Data regarding the velocity of LiDAR measurements and the UTM coordinates of each measured point on the canopy were obtained by applying the proposed transformation process. The process allows overlap of the canopy density map generated with the image of the intended measured area using Google Earth®, providing accurate information about the canopy distribution and/or location of damage along the rows. This methodology was applied and tested on different vine varieties and crop stages in two important vine production areas in Spain. The results indicate that the georeferenced information obtained with LiDAR sensors appears to be an interesting tool with the potential to improve crop management processes.

  15. Spatial relationship between bone formation and mechanical stimulus within cortical bone: Combining 3D fluorochrome mapping and poroelastic finite element modelling.

    PubMed

    Carriero, A; Pereira, A F; Wilson, A J; Castagno, S; Javaheri, B; Pitsillides, A A; Marenzana, M; Shefelbine, S J

    2018-06-01

    Bone is a dynamic tissue and adapts its architecture in response to biological and mechanical factors. Here we investigate how cortical bone formation is spatially controlled by the local mechanical environment in the murine tibia axial loading model (C57BL/6). We obtained 3D locations of new bone formation by performing 'slice and view' 3D fluorochrome mapping of the entire bone and compared these sites with the regions of high fluid velocity or strain energy density estimated using a finite element model, validated with a bone surface strain map acquired ex vivo using digital image correlation. For the comparison, 2D maps of the average bone formation and peak mechanical stimulus on the tibial endosteal and periosteal surface across the entire cortical surface were created. Results showed that bone formed on the periosteal and endosteal surface in regions of high fluid flow. Peak strain energy density predicted only the formation of bone periosteally. Understanding how the mechanical stimulus spatially relates to regions of cortical bone formation in response to loading will eventually guide loading regime therapies to maintain or restore bone mass in specific sites in skeletal pathologies.

  16. Newton-Cartan Gravity in Noninertial Reference Frames

    NASA Astrophysics Data System (ADS)

    Rodriguez, Leo; St. Germaine-Fuller, James; Wickramasekara, Sujeev

    2015-03-01

    We study Newton-Cartan gravity under transformations into all noninertial, nonrelativistic reference frames. These transformations form an infinite-dimensional Lie group, called the Galilean line group, which contains the Galilei group as a subgroup. The fictitious forces of noninertial reference frames are encoded in the Cartan connection transformed under the Galilean line group. These fictitious forces, which are coordinate effects, do not contribute to the Ricci tensor. Only the 00-component of the Ricci tensor is non-zero and equals 4π times the matter density in all reference frames. While the Ricci field equation and Gauss' law are fulfilled by the physical matter density in inertial and linearly accelerating reference frames, in rotating reference frames Gauss' law holds for an effective mass density that differs from the physical matter density. This effective density has its origin in the simulated magnetic field of rotating frames, highlighting a striking difference between linearly and rotationally accelerating frames. The equations governing the simulated fields have the same form as Maxwell's equations, a surprising result given that these equations obey special relativity (and U(1) gauge symmetry), rather than Galilean symmetry. This work was supported in part by the HHMI Undergraduate Science Education Award 52006298 and the Grinnell College Academic Affairs' CSFS and MAP programs.

  17. Identification of irrigated crop types from ERTS-1 density contour maps and color infrared aerial photography. [Wyoming]

    NASA Technical Reports Server (NTRS)

    Marrs, R. W.; Evans, M. A.

    1974-01-01

    The author has identified the following significant results. The crop types of a Great Plains study area were mapped from color infrared aerial photography. Each field was positively identified from field checks in the area. Enlarged (50x) density contour maps were constructed from three ERTS-1 images taken in the summer of 1973. The map interpreted from the aerial photography was compared to the density contour maps, and the accuracy of the ERTS-1 density contour map interpretations was determined. Changes in the vegetation during the growing season and harvest periods were detectable on the ERTS-1 imagery. Density contouring aids in the detection of such changes.

  18. Drought effects on evapotranspiration and subsurface water storage in the southern Sierra Nevada

    NASA Astrophysics Data System (ADS)

    Bales, R. C.; Goulden, M.; Hunsaker, C. T.; Conklin, M. H.; Hartsough, P. C.; O'Geen, T. T.; Hopmans, J. W.; Safeeq, M.

    2015-12-01

    Multi-year measurements of evapotranspiration (ET) at three elevations in the southern Sierra Nevada show the extent to which subsurface water storage in the regolith provides a buffer against multi-year dry periods. A 2000-m elevation mixed-conifer forest showed a 24% decrease in ET in water-year 2014, the third dry year, compared with the wet year of 2011. This decrease reflected reduced transpiration during the July to September period. Over half of the annual ET in both wet and dry years came from below the 1-m depth of mapped soil, with some coming from below the 2.5-m depth of our soil-moisture measurements. The ability of trees to access water from these depths provides a 2-3 year buffer for ET, which also depends on forest density and the balance between perennial overstory and annual understory vegetation. An equally dense lower-elevation pine-oak forest (1160 m) showed nearly a 50% decrease in ET during the third year of drought, with significant visible effects on vegetation. While this lower-elevation forest may have as much or more subsurface storage as the forest at 2000 m, the combination of lower precipitation at lower elevation and very high forest density provides only a one-year buffer for ET in dry years. Regaining resiliency in this forest will only occur with significant reductions in biomass and a commensurate lowering of ET. In a 400-m elevation oak savannah, ET responds to annual precipitation, with essentially no multi-year buffer provided by subsurface storage.

  19. Microstructure stability of ultra-fine grained magnesium alloy AZ31 processed by extrusion and equal-channel angular pressing (EX–ECAP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stráská, Jitka, E-mail: straska.jitka@gmail.com; Janeček, Miloš, E-mail: janecek@met.mff.cuni.cz; Čížek, Jakub, E-mail: jcizek@mbox.troja.mff.cuni.cz

    Thermal stability of the ultra-fine grained (UFG) microstructure of magnesium AZ31 alloy was investigated. The UFG microstructure was achieved by a combined two-step severe plastic deformation process: extrusion (EX) followed by equal-channel angular pressing (ECAP). This combined process leads to a refined microstructure and enhanced microhardness. Specimens with UFG microstructure were annealed isochronally at temperatures of 150–500 °C for 1 h. The evolution of microstructure, mechanical properties and dislocation density was studied by electron backscatter diffraction (EBSD), microhardness measurements and positron annihilation spectroscopy (PAS). The coarsening of the fine-grained structure at higher temperatures was accompanied by a gradual decrease of the microhardness and a decrease of dislocation density. The mechanism of grain growth was studied using the general grain-growth equation together with the Arrhenius equation. Activation energies for grain growth were calculated to be 115, 33 and 164 kJ/mol in the temperature ranges 170–210 °C, 210–400 °C and 400–500 °C (443–483 K, 483–673 K and 673–773 K), respectively. - Highlights: • Microhardness of UFG AZ31 alloy decreases with increasing annealing temperature. • This fact has two reasons: dislocation annihilation and/or grain growth. • The activation energies for grain growth were calculated for all temperature ranges.
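
    The activation energies quoted above follow from combining a general grain-growth law with an Arrhenius temperature dependence; a generic form is sketched below (the growth exponent n and prefactor k_0 are assumptions of the sketch, not values given in the abstract).

```latex
d^{\,n} - d_0^{\,n} = k\,t, \qquad k = k_0 \exp\!\left(-\frac{Q}{RT}\right)
\;\Longrightarrow\; \ln k = \ln k_0 - \frac{Q}{R}\,\frac{1}{T}
```

    so plotting ln k against 1/T within each temperature range yields the activation energy Q from the slope.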

  20. Detecting the environmental impact of off-road vehicles on Rawdat Al Shams in central Saudi Arabia by remote sensing.

    PubMed

    Dewidar, K; Thomas, J; Bayoumi, S

    2016-07-01

    Off-road vehicles can have a devastating impact on vegetation and soil. Here, we sought to quantify, through a combination of field vegetation, bulk soil, and image analyses, the impact of off-road vehicles on the vegetation and soils of Rawdat Al Shams, which is located in central Saudi Arabia. Soil compaction density was measured in the field, and 27 soil samples were collected for bulk density analysis in the lab to quantify the impacts of off-road vehicles. High spatial resolution images, such as those obtained by the satellites GeoEye-1 and IKONOS-2, were used for surveying the damage to vegetation cover and soil compaction caused by these vehicles. Vegetation cover was mapped using the Normalized Difference Vegetation Index (NDVI) technique based on high-resolution images taken at different times of the year. Vehicle trails were derived from satellite data via visual analysis. All damaged areas were determined from high-resolution image data. In this study, we conducted quantitative analyses of vegetation cover change, the impacts of vehicle trails (hereafter "trail impacts"), and a bulk soil analysis. Image data showed that both vegetation cover and trail impacts increased from 2008 to 2015, with the average percentage of trail impacts nearly equal to the percentage of vegetation cover during this period. Forty-six species of plants were found to be present in the study area, consisting of all types of life forms, yet trees were represented by a single species, Acacia gerrardii. Herbs composed the largest share of plant life, with 29 species, followed by perennial herbs (12 species), grasses (5 species), and shrubs (3 species). Analysis of soil bulk density for Rawdat Al Shams showed that off-road driving greatly impacts soil density. Twenty-two plant species were observed on the trails, the majority of which were ephemerals. Notoceras bicorne was the most common, with a frequency rate of 93.33 %, an abundance value of 78.47 %, and a density of 0.1 in transect 1, followed by Plantago ovata.
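
    The vegetation-cover mapping step relies on the standard NDVI band ratio, NDVI = (NIR - Red) / (NIR + Red). A minimal sketch follows; the reflectance values are invented for illustration and are not taken from the GeoEye-1 or IKONOS-2 scenes.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and NIR reflectance bands."""
    red = red.astype("float64")
    nir = nir.astype("float64")
    return (nir - red) / (nir + red + 1e-12)  # small epsilon avoids division by zero

# Illustrative 2x2 scene: bare-soil pixels (top row) vs. vegetated pixels (bottom row)
red = np.array([[0.20, 0.18], [0.05, 0.04]])
nir = np.array([[0.25, 0.24], [0.45, 0.50]])
print(ndvi(red, nir))
```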

  1. Multiple Scales of Control on the Structure and Spatial Distribution of Woody Vegetation in African Savanna Watersheds

    PubMed Central

    Vaughn, Nicholas R.; Asner, Gregory P.; Smit, Izak P. J.; Riddel, Edward S.

    2015-01-01

    Factors controlling savanna woody vegetation structure vary at multiple spatial and temporal scales, and as a consequence, unraveling their combined effects has proven to be a classic challenge in savanna ecology. We used airborne LiDAR (light detection and ranging) to map three-dimensional woody vegetation structure throughout four savanna watersheds, each contrasting in geologic substrate and climate, in Kruger National Park, South Africa. By comparison of the four watersheds, we found that geologic substrate had a stronger effect than climate in determining watershed-scale differences in vegetation structural properties, including cover, height and crown density. Generalized Linear Models were used to assess the spatial distribution of woody vegetation structural properties, including cover, height and crown density, in relation to mapped hydrologic, topographic and fire history traits. For each substrate and climate combination, models incorporating topography, hydrology and fire history explained up to 30% of the remaining variation in woody canopy structure, but inclusion of a spatial autocovariate term further improved model performance. Both crown density and the cover of shorter woody canopies were determined more by unknown factors likely to be changing on smaller spatial scales, such as soil texture, herbivore abundance or fire behavior, than by our mapped regional-scale changes in topography and hydrology. We also detected patterns in spatial covariance at distances up to 50–450 m, depending on watershed and structural metric. Our results suggest that large-scale environmental factors play a smaller role than is often attributed to them in determining woody vegetation structure in southern African savannas. This highlights the need for more spatially-explicit, wide-area analyses using high resolution remote sensing techniques. PMID:26660502
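
    The modelling step described above, a Generalized Linear Model for a woody-structure metric with an added spatial autocovariate term, can be sketched as follows. This is a generic illustration on synthetic data: the covariate names, the Gaussian family and the 450 m neighbourhood radius are assumptions for the example rather than the authors' exact specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 1000, size=(n, 2))   # plot locations (m), synthetic
X = rng.normal(size=(n, 3))                  # stand-ins for elevation, distance-to-channel, fire count
y = X @ np.array([0.5, -0.3, 0.2]) + rng.normal(scale=0.5, size=n)  # synthetic canopy-cover metric

# Inverse-distance-weighted autocovariate: weighted mean of neighbouring responses within 450 m
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)                  # exclude self-neighbours
w = np.where(d < 450.0, 1.0 / d, 0.0)
autocov = (w @ y) / np.clip(w.sum(axis=1), 1e-12, None)

# GLM with the autocovariate appended to the environmental covariates
design = sm.add_constant(np.column_stack([X, autocov]))
model = sm.GLM(y, design, family=sm.families.Gaussian()).fit()
print(model.params)
```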

  2. Multiple Scales of Control on the Structure and Spatial Distribution of Woody Vegetation in African Savanna Watersheds.

    PubMed

    Vaughn, Nicholas R; Asner, Gregory P; Smit, Izak P J; Riddel, Edward S

    2015-01-01

    Factors controlling savanna woody vegetation structure vary at multiple spatial and temporal scales, and as a consequence, unraveling their combined effects has proven to be a classic challenge in savanna ecology. We used airborne LiDAR (light detection and ranging) to map three-dimensional woody vegetation structure throughout four savanna watersheds, each contrasting in geologic substrate and climate, in Kruger National Park, South Africa. By comparison of the four watersheds, we found that geologic substrate had a stronger effect than climate in determining watershed-scale differences in vegetation structural properties, including cover, height and crown density. Generalized Linear Models were used to assess the spatial distribution of woody vegetation structural properties, including cover, height and crown density, in relation to mapped hydrologic, topographic and fire history traits. For each substrate and climate combination, models incorporating topography, hydrology and fire history explained up to 30% of the remaining variation in woody canopy structure, but inclusion of a spatial autocovariate term further improved model performance. Both crown density and the cover of shorter woody canopies were determined more by unknown factors likely to be changing on smaller spatial scales, such as soil texture, herbivore abundance or fire behavior, than by our mapped regional-scale changes in topography and hydrology. We also detected patterns in spatial covariance at distances up to 50-450 m, depending on watershed and structural metric. Our results suggest that large-scale environmental factors play a smaller role than is often attributed to them in determining woody vegetation structure in southern African savannas. This highlights the need for more spatially-explicit, wide-area analyses using high resolution remote sensing techniques.

  3. Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Peng, E-mail: peng@ices.utexas.edu; Schwab, Christoph, E-mail: christoph.schwab@sam.math.ethz.ch

    2016-07-01

    We extend the reduced basis (RB) accelerated Bayesian inversion methods for affine-parametric, linear operator equations which are considered in [16,17] to non-affine, nonlinear parametric operator equations. We generalize the analysis of sparsity of parametric forward solution maps in [20] and of Bayesian inversion in [48,49] to the fully discrete setting, including Petrov–Galerkin high-fidelity (“HiFi”) discretization of the forward maps. We develop adaptive, stochastic collocation based reduction methods for the efficient computation of reduced bases on the parametric solution manifold. The nonaffinity and nonlinearity with respect to (w.r.t.) the distributed, uncertain parameters and the unknown solution is collocated; specifically, by the so-called Empirical Interpolation Method (EIM). For the corresponding Bayesian inversion problems, computational efficiency is enhanced in two ways: first, expectations w.r.t. the posterior are computed by adaptive quadratures with dimension-independent convergence rates proposed in [49]; the present work generalizes [49] to account for the impact of the PG discretization in the forward maps on the convergence rates of the Quantities of Interest (QoI for short). Second, we propose to perform the Bayesian estimation only w.r.t. a parsimonious, RB approximation of the posterior density. Based on the approximation results in [49], the infinite-dimensional parametric, deterministic forward map and operator admit N-term RB and EIM approximations which converge at rates which depend only on the sparsity of the parametric forward map. In several numerical experiments, the proposed algorithms exhibit dimension-independent convergence rates which equal, at least, the currently known rate estimates for N-term approximation. We propose to accelerate Bayesian estimation by first offline construction of reduced basis surrogates of the Bayesian posterior density. The parsimonious surrogates can then be employed for online data assimilation and for Bayesian estimation. They also open a perspective for optimal experimental design.

  4. GIM-TEC adaptive ionospheric weather assessment and forecast system

    NASA Astrophysics Data System (ADS)

    Gulyaeva, T. L.; Arikan, F.; Hernandez-Pajares, M.; Stanislawska, I.

    2013-09-01

    The Ionospheric Weather Assessment and Forecast (IWAF) system is a computer software package designed to assess and predict the world-wide representation of 3-D electron density profiles from the Global Ionospheric Maps of Total Electron Content (GIM-TEC). The unique system products include daily-hourly numerical global maps of the F2 layer critical frequency (foF2) and the peak height (hmF2) generated with the International Reference Ionosphere extended to the plasmasphere, IRI-Plas, upgraded by importing the daily-hourly GIM-TEC as a new model driving parameter. Since GIM-TEC maps are provided with 1- or 2-day latency, the global maps forecast for 1 day and 2 days ahead are derived using a harmonic analysis applied to the temporal changes of TEC, foF2 and hmF2 at 5112 grid points of a map encapsulated in IONEX format (-87.5°:2.5°:87.5°N in latitude, -180°:5°:180°E in longitude). The system provides online the ionospheric disturbance warnings in the global W-index map establishing categories of the ionospheric weather from the quiet state (W=±1) to intense storm (W=±4) according to the thresholds set for instant TEC perturbations regarding the quiet reference median for the preceding 7 days. The accuracy of IWAF system predictions of TEC, foF2 and hmF2 maps is superior to the standard persistence model with prediction equal to the most recent ‘true’ map. The paper presents outcomes of the new service expressed by the global ionospheric foF2, hmF2 and W-index maps demonstrating the process of origin and propagation of positive and negative ionosphere disturbances in space and time and their forecast under different scenarios.
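
    The 1- and 2-day-ahead forecasts rest on a harmonic analysis of the recent time series at each grid point. A generic least-squares harmonic fit and extrapolation is sketched below on synthetic hourly TEC values; the number of harmonics, the 24 h base period and the 7-day window are illustrative choices, not the IWAF system's actual settings.

```python
import numpy as np

def harmonic_forecast(t_hours, series, horizon_hours=48, n_harmonics=3, period=24.0):
    """Fit mean plus sinusoidal harmonics of the given period to a TEC-like series
    and extrapolate it ahead; a simple stand-in for the harmonic analysis step."""
    def design(t):
        cols = [np.ones_like(t)]
        for k in range(1, n_harmonics + 1):
            w = 2 * np.pi * k / period
            cols += [np.cos(w * t), np.sin(w * t)]
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(design(t_hours), series, rcond=None)
    t_future = t_hours[-1] + np.arange(1, horizon_hours + 1)
    return t_future, design(t_future) @ coef

# Synthetic 7-day hourly TEC record with a diurnal cycle plus noise
t = np.arange(7 * 24, dtype=float)
tec = 20 + 8 * np.sin(2 * np.pi * t / 24) + np.random.default_rng(2).normal(0, 1, t.size)
t_next, tec_next = harmonic_forecast(t, tec)
print(tec_next[:6].round(2))
```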

  5. How will Mahanarva spectabilis (Hemiptera: Cercopidae) Respond to Global Warming?

    PubMed Central

    Auad, A. M.; Resende, T. T.; Hott, M. C.; Borges, C.A.V.

    2016-01-01

    The aim of this study was to determine the favorable constant temperature range for Mahanarva spectabilis (Distant) (Hemiptera: Cercopidae) development as well as to generate geographic distribution maps of this insect pest for future climate scenarios. M. spectabilis eggs were reared on two host plants (Brachiaria ruziziensis (Germain and Edvard) and Pennisetum purpureum (Schumach)), with individual plants kept at temperatures of 16, 20, 24, 28, and 32°C. Nymphal stage duration, nymphal survival, adult longevity, and egg production were recorded for each temperature × host plant combination. Using the favorable temperature ranges for M. spectabilis development, it was possible to generate geographic distribution maps. Nymphal survival was highest at 24.4°C, with estimates of 44 and 8% on Pennisetum and Brachiaria, respectively. Nymphal stage duration was greater on Brachiaria than on Pennisetum at 20 and 24°C but equal at 28°C. Egg production was higher on Pennisetum at 24 and 28°C than at 20°C, and adult longevity on Pennisetum was higher at 28°C than at 20°C, whereas adult longevity at 24°C did not differ from that at 20 and 28°C. With these results, it was possible to predict a reduction in M. spectabilis densities in most regions of Brazil in future climate scenarios. PMID:27012869

  6. Site-specific Microtubule-associated Protein 4 Dephosphorylation Causes Microtubule Network Densification in Pressure Overload Cardiac Hypertrophy*

    PubMed Central

    Chinnakkannu, Panneerselvam; Samanna, Venkatesababa; Cheng, Guangmao; Ablonczy, Zsolt; Baicu, Catalin F.; Bethard, Jennifer R.; Menick, Donald R.; Kuppuswamy, Dhandapani; Cooper, George

    2010-01-01

    In severe pressure overload-induced cardiac hypertrophy, a dense, stabilized microtubule network forms that interferes with cardiocyte contraction and microtubule-based transport. This is associated with persistent transcriptional up-regulation of cardiac α- and β-tubulin and microtubule-stabilizing microtubule-associated protein 4 (MAP4). There is also extensive microtubule decoration by MAP4, suggesting greater MAP4 affinity for microtubules. Because the major determinant of this affinity is site-specific MAP4 dephosphorylation, we characterized this in hypertrophied myocardium and then assessed the functional significance of each dephosphorylation site found by mimicking it in normal cardiocytes. We first isolated MAP4 from normal and pressure overload-hypertrophied feline myocardium; volume-overloaded myocardium, which has an equal degree and duration of hypertrophy but normal functional and cytoskeletal properties, served as a control for any nonspecific growth-related effects. After cloning cDNA-encoding feline MAP4 and obtaining its deduced amino acid sequence, we characterized by mass spectrometry any site-specific MAP4 dephosphorylation. Solely in pressure overload-hypertrophied myocardium, we identified striking MAP4 dephosphorylation at Ser-472 in the MAP4 N-terminal projection domain and at Ser-924 and Ser-1056 in the assembly-promoting region of the C-terminal microtubule-binding domain. Site-directed mutagenesis of MAP4 cDNA was then used to switch each serine to non-phosphorylatable alanine. Wild-type and mutated cDNAs were used to construct adenoviruses; microtubule network density, stability, and MAP4 decoration were assessed in normal cardiocytes following an equivalent level of MAP4 expression. The Ser-924 → Ala MAP4 mutant produced a microtubule phenotype indistinguishable from that seen in pressure overload hypertrophy, such that Ser-924 MAP4 dephosphorylation during pressure overload hypertrophy may be central to this cytoskeletal abnormality. PMID:20436166

  7. Diffusion-Based Density-Equalizing Maps: an Interdisciplinary Approach to Visualizing Homicide Rates and Other Georeferenced Statistical Data

    NASA Astrophysics Data System (ADS)

    Mazzitello, Karina I.; Candia, Julián

    2012-12-01

    In every country, public and private agencies allocate extensive funding to collect large-scale statistical data, which in turn are studied and analyzed in order to determine local, regional, national, and international policies regarding all aspects relevant to the welfare of society. One important aspect of that process is the visualization of statistical data with embedded geographical information, which most often relies on archaic methods such as maps colored according to graded scales. In this work, we apply nonstandard visualization techniques based on physical principles. We illustrate the method with recent statistics on homicide rates in Brazil and their correlation to other publicly available data. This physics-based approach provides a novel tool that can be used by interdisciplinary teams investigating statistics and model projections in a variety of fields such as economics and gross domestic product research, public health and epidemiology, sociodemographics, political science, business and marketing, and many others.
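
    The physics-based technique referred to here is a density-equalizing map (cartogram), in which the plane is deformed until the mapped quantity has uniform density. A toy sketch of the diffusion idea, in the spirit of the Gastner-Newman method, is given below; the grid, time step and displaced points are illustrative, and a production cartogram would need proper boundary conditions and sub-cell interpolation of the velocity field.

```python
import numpy as np

def density_equalize(rho, points, steps=200, dt=0.1):
    """Toy diffusion cartogram in the spirit of Gastner-Newman density equalization.

    rho    : 2D array of positive region density on a unit-spaced grid
    points : (N, 2) array of (row, col) positions to displace, e.g. region boundaries
    Returns the displaced points; resolution, time step and step count are illustrative.
    """
    rho = rho.astype(float).copy()
    pts = points.astype(float).copy()
    for _ in range(steps):
        # displacement velocity v = -grad(rho) / rho, sampled at the nearest grid cell
        gy, gx = np.gradient(rho)
        vy, vx = -gy / rho, -gx / rho
        i = np.clip(pts[:, 0].round().astype(int), 0, rho.shape[0] - 1)
        j = np.clip(pts[:, 1].round().astype(int), 0, rho.shape[1] - 1)
        pts[:, 0] += dt * vy[i, j]
        pts[:, 1] += dt * vx[i, j]
        # diffuse the density (explicit 5-point Laplacian, reflecting edges)
        padded = np.pad(rho, 1, mode="edge")
        lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * rho)
        rho += dt * lap
    return pts

# A dense "city" at the centre of a uniform background pushes nearby points outwards
rho = np.ones((64, 64)); rho[28:36, 28:36] = 10.0
boundary = np.array([[32.0, 20.0], [32.0, 44.0], [20.0, 32.0], [44.0, 32.0]])
print(density_equalize(rho, boundary).round(2))
```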

  8. Construction of an ultra-high density consensus genetic map, and enhancement of the physical map from genome sequencing in Lupinus angustifolius.

    PubMed

    Zhou, Gaofeng; Jian, Jianbo; Wang, Penghao; Li, Chengdao; Tao, Ye; Li, Xuan; Renshaw, Daniel; Clements, Jonathan; Sweetingham, Mark; Yang, Huaan

    2018-01-01

    An ultra-high density genetic map containing 34,574 sequence-defined markers was developed in Lupinus angustifolius. Markers closely linked to nine genes of agronomic traits were identified. A physical map was improved to cover 560.5 Mb genome sequence. Lupin (Lupinus angustifolius L.) is a recently domesticated legume grain crop. In this study, we applied the restriction-site associated DNA sequencing (RADseq) method to genotype an F9 recombinant inbred line population derived from a wild type × domesticated cultivar (W × D) cross. A high density linkage map was developed based on the W × D population. By integrating sequence-defined DNA markers reported in previous mapping studies, we established an ultra-high density consensus genetic map, which contains 34,574 markers consisting of 3508 loci covering 2399 cM on 20 linkage groups. The largest gap in the entire consensus map was 4.73 cM. The high density W × D map and the consensus map were used to develop an improved physical map, which covered 560.5 Mb of genome sequence data. The ultra-high density consensus linkage map, the improved physical map and the markers linked to genes of breeding interest reported in this study provide a common tool for genome sequence assembly, structural genomics, comparative genomics, functional genomics, QTL mapping, and molecular plant breeding in lupin.

  9. Canopy Density Mapping on Ultracam-D Aerial Imagery in Zagros Woodlands, Iran

    NASA Astrophysics Data System (ADS)

    Erfanifard, Y.; Khodaee, Z.

    2013-09-01

    Canopy density maps express different characteristics of forest stands, especially in woodlands. Obtaining such maps by field measurements is expensive and time-consuming, so suitable techniques are needed to produce these maps for use in the sustainable management of woodland ecosystems. In this research, a robust procedure is suggested for obtaining such maps from very high spatial resolution aerial imagery. The study aimed to produce canopy density maps from UltraCam-D aerial imagery newly acquired over Zagros woodlands by the Iran National Geographic Organization (NGO). A 30 ha plot of Persian oak (Quercus persica) coppice trees was selected in Zagros woodlands, Iran. The very high spatial resolution aerial imagery of the plot, purchased from the NGO, was classified by the kNN technique and the tree crowns were extracted precisely. The canopy density was determined in each cell of meshes of different sizes overlaid on the study area map. The accuracy of the final maps was assessed against ground truth obtained from complete field measurements. The results showed that the proposed method of obtaining canopy density maps was efficient in the study area. The final canopy density map, obtained with a mesh of 30 ares (3000 m²) cell size, had 80% overall accuracy and a KHAT coefficient of agreement of 0.61, indicating strong agreement with the observed samples. This method can also be tested in other case studies to reveal its capability for canopy density map production in woodlands.
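
    The reported accuracy figures (80% overall accuracy, KHAT = 0.61) are the standard statistics of an error matrix comparing the classified map with ground truth. A minimal sketch of both statistics is shown below; the confusion-matrix counts are invented for illustration.

```python
import numpy as np

def overall_accuracy_and_khat(confusion):
    """Overall accuracy and KHAT (Cohen's kappa) from an error (confusion) matrix."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    observed = np.trace(confusion) / n
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
    return observed, (observed - expected) / (1.0 - expected)

# Illustrative 3-class error matrix for canopy-density classes (counts are made up)
cm = [[40,  5,  5],
      [ 6, 30,  4],
      [ 4,  6, 20]]
oa, khat = overall_accuracy_and_khat(cm)
print(round(oa, 2), round(khat, 2))
```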

  10. Mapping-by-sequencing in complex polyploid genomes using genic sequence capture: a case study to map yellow rust resistance in hexaploid wheat.

    PubMed

    Gardiner, Laura-Jayne; Bansept-Basler, Pauline; Olohan, Lisa; Joynson, Ryan; Brenchley, Rachel; Hall, Neil; O'Sullivan, Donal M; Hall, Anthony

    2016-08-01

    Previously we extended the utility of mapping-by-sequencing by combining it with sequence capture and mapping sequence data to pseudo-chromosomes that were organized using wheat-Brachypodium synteny. This, with a bespoke haplotyping algorithm, enabled us to map the flowering time locus in the diploid wheat Triticum monococcum L. identifying a set of deleted genes (Gardiner et al., 2014). Here, we develop this combination of gene enrichment and sliding window mapping-by-synteny analysis to map the Yr6 locus for yellow stripe rust resistance in hexaploid wheat. A 110 Mb NimbleGen capture probe set was used to enrich and sequence a doubled haploid mapping population of hexaploid wheat derived from an Avalon and Cadenza cross. The Yr6 locus was identified by mapping to the POPSEQ chromosomal pseudomolecules using a bespoke pipeline and algorithm (Chapman et al., 2015). Furthermore the same locus was identified using newly developed pseudo-chromosome sequences as a mapping reference that are based on the genic sequence used for sequence enrichment. The pseudo-chromosomes allow us to demonstrate the application of mapping-by-sequencing to even poorly defined polyploid genomes where chromosomes are incomplete and sub-genome assemblies are collapsed. This analysis uniquely enabled us to: compare wheat genome annotations; identify the Yr6 locus - defining a smaller genic region than was previously possible; associate the interval with one wheat sub-genome; and increase the density of associated SNP markers. Finally, we built the pipeline in iPlant, making it a user-friendly community resource for phenotype mapping. © 2016 The Authors. The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.

  11. Intensity Maps Production Using Real-Time Joint Streaming Data Processing From Social and Physical Sensors

    NASA Astrophysics Data System (ADS)

    Kropivnitskaya, Y. Y.; Tiampo, K. F.; Qin, J.; Bauer, M.

    2015-12-01

    Intensity is one of the most useful measures of earthquake hazard, as it quantifies the strength of shaking produced at a given distance from the epicenter. Today, there are several data sources that could be used to determine the intensity level, and these can be divided into two main categories. The first category is represented by social data sources, in which the intensity values are collected by interviewing people who experienced the earthquake-induced shaking. In this case, specially developed questionnaires can be used in addition to personal observations published on social networks such as Twitter. These observations are assigned to the appropriate intensity level by correlating specific details and descriptions to the Modified Mercalli Scale. The second category of data sources is represented by observations from different physical sensors installed with the specific purpose of obtaining an instrumentally-derived intensity level. These are usually based on a regression of recorded peak acceleration and/or velocity amplitudes. This approach relates the recorded ground motions to the expected felt and damage distribution through empirical relationships. The goal of this work is to implement and evaluate streaming data processing, separately and jointly, from both social and physical sensors in order to produce near real-time intensity maps and to compare and analyze their quality and evolution through 10-minute time intervals immediately following an earthquake. Results are shown for the case study of the M6.0 2014 South Napa, CA earthquake that occurred on August 24, 2014. The use of innovative streaming and pipelining computing paradigms through the IBM InfoSphere Streams platform made it possible to read input data in real time for low-latency computation of the combined intensity level and production of combined intensity maps in near real time. The results compare three types of intensity maps created from physical, social, and combined data sources. Here we correlate the count and density of tweets with intensity level and show the importance of processing combined data sources at the earliest time stages after an earthquake happens. This method can supplement existing approaches to intensity level detection, especially in regions with a high number of Twitter users and a low density of seismic networks.

  12. Mapping the temperature-dependent conformational landscapes of the dynamic enzymes cyclophilin A and urease

    NASA Astrophysics Data System (ADS)

    Thorne, Robert; Keedy, Daniel; Warkentin, Matthew; Fraser, James; Moreau, David; Atakisi, Hakan; Rau, Peter

    Proteins populate complex, temperature-dependent ensembles of conformations that enable their function. Yet in X-ray crystallographic studies, roughly 98% of structures have been determined at 100 K, and most refined to only a single conformation. A combination of experimental methods enabled by studies of ice formation and computational methods for mining low-density features in electron density maps have been applied to determine the evolution of the conformational landscapes of the enzymes cyclophilin A and urease between 300 K and 100 K. Minority conformations of most side chains depopulate on cooling from 300 to ~200 K, below which subsequent conformational evolution is quenched. The characteristic temperatures for this depopulation are highly heterogeneous throughout each enzyme. The temperature-dependent ensemble of the active site flap in urease has also been mapped. These all-atom, site-resolved measurements and analyses rule out one interpretation of the protein-solvent glass transition, and give an alternative interpretation of a dynamical transition identified in site-averaged experiments. They demonstrate a powerful approach to structural characterization of the dynamic underpinnings of protein function. Supported by NSF MCB-1330685.

  13. Quantitative trait loci controlling leaf venation in Arabidopsis.

    PubMed

    Rishmawi, Louai; Bühler, Jonas; Jaegle, Benjamin; Hülskamp, Martin; Koornneef, Maarten

    2017-08-01

    Leaf veins provide the mechanical support and are responsible for the transport of nutrients and water to the plant. High vein density is a prerequisite for plants to have C4 photosynthesis. We investigated the genetic variation and genetic architecture of leaf venation traits within the species Arabidopsis thaliana using natural variation. Leaf venation traits, including leaf vein density (LVD) were analysed in 66 worldwide accessions and 399 lines of the multi-parent advanced generation intercross population. It was shown that there is no correlation between LVD and photosynthesis parameters within A. thaliana. Association mapping was performed for LVD and identified 16 and 17 putative quantitative trait loci (QTLs) in the multi-parent advanced generation intercross and worldwide sets, respectively. There was no overlap between the identified QTLs suggesting that many genes can affect the traits. In addition, linkage mapping was performed using two biparental recombinant inbred line populations. Combining linkage and association mapping revealed seven candidate genes. For one of the candidate genes, RCI2c, we demonstrated its function in leaf venation patterning. © 2017 John Wiley & Sons Ltd.

  14. High-density genetic map construction and QTLs identification for plant height in white jute (Corchorus capsularis L.) using specific locus amplified fragment (SLAF) sequencing.

    PubMed

    Tao, Aifen; Huang, Long; Wu, Guifen; Afshar, Reza Keshavarz; Qi, Jianmin; Xu, Jiantang; Fang, Pingping; Lin, Lihui; Zhang, Liwu; Lin, Peiqing

    2017-05-08

    Genetic mapping and quantitative trait locus (QTL) detection are powerful methodologies in plant improvement and breeding. White jute (Corchorus capsularis L.) is an important industrial raw material fiber crop because of its elite characteristics. However, construction of a high-density genetic map and identification of QTLs has been limited in white jute due to a lack of sufficient molecular markers. The specific locus amplified fragment sequencing (SLAF-seq) strategy combines locus-specific amplification and high-throughput sequencing to carry out de novo single nucleotide polymorphism (SNP) discovery and large-scale genotyping. In this study, SLAF-seq was employed to obtain sufficient markers to construct a high-density genetic map for white jute. Moreover, with the development of abundant markers, genetic dissection of fiber yield traits such as plant height was also possible. Here, we present QTLs associated with plant height that were identified using our newly constructed genetic linkage groups. An F8 population consisting of 100 lines was developed. In total, 69,446 high-quality SLAFs were detected of which 5,074 SLAFs were polymorphic; 913 polymorphic markers were used for the construction of a genetic map. The average coverage for each SLAF marker was 43-fold in the parents, and 9.8-fold in each F8 individual. A linkage map was constructed that contained 913 SLAFs on 11 linkage groups (LGs) covering 1621.4 cM with an average density of 1.61 cM per locus. Among the 11 LGs, LG1 was the largest with 210 markers, a length of 406.34 cM, and an average distance of 1.93 cM between adjacent markers. LG11 was the smallest with only 25 markers, a length of 29.66 cM, and an average distance of 1.19 cM between adjacent markers. 'SNP_only' markers accounted for 85.54% and were the predominant markers on the map. QTL mapping based on the F8 phenotypes detected 11 plant height QTLs including one major effect QTL across two cultivation locations, with each QTL accounting for 4.14-15.63% of the phenotypic variance. To our knowledge, the linkage map constructed here is the densest one available to date for white jute. This analysis also identified the first QTL in white jute. The results will provide an important platform for gene/QTL mapping, sequence assembly, genome comparisons, and marker-assisted selection breeding for white jute.

  15. High-resolution mapping of forest carbon stocks in the Colombian Amazon

    NASA Astrophysics Data System (ADS)

    Asner, G. P.; Clark, J. K.; Mascaro, J.; Galindo García, G. A.; Chadwick, K. D.; Navarrete Encinales, D. A.; Paez-Acosta, G.; Cabrera Montenegro, E.; Kennedy-Bowdoin, T.; Duque, Á.; Balaji, A.; von Hildebrand, P.; Maatoug, L.; Bernal, J. F. Phillips; Yepes Quintero, A. P.; Knapp, D. E.; García Dávila, M. C.; Jacobson, J.; Ordóñez, M. F.

    2012-07-01

    High-resolution mapping of tropical forest carbon stocks can assist forest management and improve implementation of large-scale carbon retention and enhancement programs. Previous high-resolution approaches have relied on field plot and/or light detection and ranging (LiDAR) samples of aboveground carbon density, which are typically upscaled to larger geographic areas using stratification maps. Such efforts often rely on detailed vegetation maps to stratify the region for sampling, but existing tropical forest maps are often too coarse and field plots too sparse for high-resolution carbon assessments. We developed a top-down approach for high-resolution carbon mapping in a 16.5 million ha region (> 40%) of the Colombian Amazon - a remote landscape seldom documented. We report on three advances for large-scale carbon mapping: (i) employing a universal approach to airborne LiDAR-calibration with limited field data; (ii) quantifying environmental controls over carbon densities; and (iii) developing stratification- and regression-based approaches for scaling up to regions outside of LiDAR coverage. We found that carbon stocks are predicted by a combination of satellite-derived elevation, fractional canopy cover and terrain ruggedness, allowing upscaling of the LiDAR samples to the full 16.5 million ha region. LiDAR-derived carbon maps have 14% uncertainty at 1 ha resolution, and the regional map based on stratification has 28% uncertainty in any given hectare. High-resolution approaches with quantifiable pixel-scale uncertainties will provide the most confidence for monitoring changes in tropical forest carbon stocks. Improved confidence will allow resource managers and decision makers to more rapidly and effectively implement actions that better conserve and utilize forests in tropical regions.
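
    The regression-based upscaling step, fitting aboveground carbon density to satellite-derived elevation, fractional canopy cover and terrain ruggedness over the LiDAR-sampled hectares and then applying the fit wall-to-wall, can be sketched as follows. The data and the ordinary-least-squares model form are synthetic placeholders, not the calibration actually used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n_lidar = 500                      # 1-ha LiDAR calibration pixels (synthetic)
# Satellite-derived covariates: elevation (m), fractional canopy cover, terrain ruggedness
X = np.column_stack([rng.uniform(100, 500, n_lidar),
                     rng.uniform(0.3, 1.0, n_lidar),
                     rng.uniform(0.0, 0.5, n_lidar)])
acd = 20 + 0.05 * X[:, 0] + 80 * X[:, 1] - 30 * X[:, 2] + rng.normal(0, 10, n_lidar)

# Ordinary least squares fit of aboveground carbon density (ACD, Mg C / ha)
A = np.column_stack([np.ones(n_lidar), X])
coef, *_ = np.linalg.lstsq(A, acd, rcond=None)

# Apply the fitted model wall-to-wall where only the satellite covariates exist
X_region = np.column_stack([rng.uniform(100, 500, 10),
                            rng.uniform(0.3, 1.0, 10),
                            rng.uniform(0.0, 0.5, 10)])
acd_pred = np.column_stack([np.ones(10), X_region]) @ coef
print(coef.round(2), acd_pred.round(1))
```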

  16. High-resolution Mapping of Forest Carbon Stocks in the Colombian Amazon

    NASA Astrophysics Data System (ADS)

    Asner, G. P.; Clark, J. K.; Mascaro, J.; Galindo García, G. A.; Chadwick, K. D.; Navarrete Encinales, D. A.; Paez-Acosta, G.; Cabrera Montenegro, E.; Kennedy-Bowdoin, T.; Duque, Á.; Balaji, A.; von Hildebrand, P.; Maatoug, L.; Bernal, J. F. Phillips; Knapp, D. E.; García Dávila, M. C.; Jacobson, J.; Ordóñez, M. F.

    2012-03-01

    High-resolution mapping of tropical forest carbon stocks can assist forest management and improve implementation of large-scale carbon retention and enhancement programs. Previous high-resolution approaches have relied on field plot and/or Light Detection and Ranging (LiDAR) samples of aboveground carbon density, which are typically upscaled to larger geographic areas using stratification maps. Such efforts often rely on detailed vegetation maps to stratify the region for sampling, but existing tropical forest maps are often too coarse and field plots too sparse for high resolution carbon assessments. We developed a top-down approach for high-resolution carbon mapping in a 16.5 million ha region (>40 %) of the Colombian Amazon - a remote landscape seldom documented. We report on three advances for large-scale carbon mapping: (i) employing a universal approach to airborne LiDAR-calibration with limited field data; (ii) quantifying environmental controls over carbon densities; and (iii) developing stratification- and regression-based approaches for scaling up to regions outside of LiDAR coverage. We found that carbon stocks are predicted by a combination of satellite-derived elevation, fractional canopy cover and terrain ruggedness, allowing upscaling of the LiDAR samples to the full 16.5 million ha region. LiDAR-derived carbon mapping samples had 14.6 % uncertainty at 1 ha resolution, and regional maps based on stratification and regression approaches had 25.6 % and 29.6 % uncertainty, respectively, in any given hectare. High-resolution approaches with reported local-scale uncertainties will provide the most confidence for monitoring changes in tropical forest carbon stocks. Improved confidence will allow resource managers and decision-makers to more rapidly and effectively implement actions that better conserve and utilize forests in tropical regions.

  17. Non-binary LDPC-coded modulation for high-speed optical metro networks with backpropagation

    NASA Astrophysics Data System (ADS)

    Arabaci, Murat; Djordjevic, Ivan B.; Saunders, Ross; Marcoccia, Roberto M.

    2010-01-01

    To simultaneously mitigate the linear and nonlinear channel impairments in high-speed optical communications, we propose the use of non-binary low-density-parity-check-coded modulation in combination with a coarse backpropagation method. By employing backpropagation, we reduce the memory in the channel and in return obtain significant reductions in the complexity of the channel equalizer which is exponentially proportional to the channel memory. We then compensate for the remaining channel distortions using forward error correction based on non-binary LDPC codes. We propose non-binary-LDPC-coded modulation scheme because, compared to bit-interleaved binary-LDPC-coded modulation scheme employing turbo equalization, the proposed scheme lowers the computational complexity and latency of the overall system while providing impressively larger coding gains.

  18. Exploiting genotyping by sequencing to characterize the genomic structure of the American cranberry through high-density linkage mapping.

    PubMed

    Covarrubias-Pazaran, Giovanny; Diaz-Garcia, Luis; Schlautman, Brandon; Deutsch, Joseph; Salazar, Walter; Hernandez-Ochoa, Miguel; Grygleski, Edward; Steffan, Shawn; Iorizzo, Massimo; Polashock, James; Vorsa, Nicholi; Zalapa, Juan

    2016-06-13

    The application of genotyping by sequencing (GBS) approaches, combined with data imputation methodologies, is narrowing the genetic knowledge gap between major and understudied, minor crops. GBS is an excellent tool to characterize the genomic structure of recently domesticated (~200 years) and understudied species, such as cranberry (Vaccinium macrocarpon Ait.), by generating large numbers of markers for genomic studies such as genetic mapping. We identified 10842 potentially mappable single nucleotide polymorphisms (SNPs) in a cranberry pseudo-testcross population wherein 5477 SNPs and 211 short sequence repeats (SSRs) were used to construct a high density linkage map in cranberry of which a total of 4849 markers were mapped. Recombination frequency, linkage disequilibrium (LD), and segregation distortion at the genomic level in the parental and integrated linkage maps were characterized for first time in cranberry. SSR markers, used as the backbone in the map, revealed high collinearity with previously published linkage maps. The 4849 point map consisted of twelve linkage groups spanning 1112 cM, which anchored 2381 nuclear scaffolds accounting for ~13 Mb of the estimated 470 Mb cranberry genome. Bin mapping identified 592 and 672 unique bins in the parentals and a total of 1676 unique marker positions in the integrated map. Synteny analyses comparing the order of anchored cranberry scaffolds to their homologous positions in kiwifruit, grape, and coffee genomes provided initial evidence of homology between cranberry and closely related species. GBS data was used to rapidly saturate the cranberry genome with markers in a pseudo-testcross population. Collinearity between the present saturated genetic map and previous cranberry SSR maps suggests that the SNP locations represent accurate marker order and chromosome structure of the cranberry genome. SNPs greatly improved current marker genome coverage, which allowed for genome-wide structure investigations such as segregation distortion, recombination, linkage disequilibrium, and synteny analyses. In the future, GBS can be used to accelerate cranberry molecular breeding through QTL mapping and genome-wide association studies (GWAS).

  19. Permafrost thaw and wildfire: Equally important drivers of boreal tree cover changes in the Taiga Plains, Canada

    NASA Astrophysics Data System (ADS)

    Helbig, M.; Pappas, C.; Sonnentag, O.

    2016-02-01

    Boreal forests cover vast areas of the permafrost zones of North America, and changes in their composition and structure can lead to pronounced impacts on the regional and global climate. We partition the variation in regional boreal tree cover changes between 2000 and 2014 across the Taiga Plains, Canada, into its main causes: permafrost thaw, wildfire disturbance, and postfire regrowth. Moderate Resolution Imaging Spectroradiometer Percent Tree Cover (PTC) data are used in combination with maps of historic fires, and permafrost and drainage characteristics. We find that permafrost thaw is equally important as fire history to explain PTC changes. At the southern margin of the permafrost zone, PTC loss due to permafrost thaw outweighs PTC gain from postfire regrowth. These findings emphasize the importance of permafrost thaw in controlling regional boreal forest changes over the last decade, which may become more pronounced with rising air temperatures and accelerated permafrost thaw.

  20. Clinical application of a color map pattern on shear-wave elastography for invasive breast cancer.

    PubMed

    Lee, Seokwon; Jung, Younglae; Bae, Youngtae

    2016-03-01

    The aim of this study was to classify the color map pattern on shear-wave elastography (SWE) and to determine its association with clinicopathological factors for clinical application in invasive breast cancer. From June to December 2014, 103 invasive breast cancers were imaged by B-mode ultrasonography (US) and SWE just before surgery. The color map pattern identified on the SWE could be classified into three main categories: type 1 (diffuse pattern), increased stiffness in the surrounding stroma and the interior lesion itself; type 2 (lateral pattern), marked peri-tumoral stiffness at the anterior and lateral portions with no or minor stiffness at the posterior portion; and type 3 (rim-off pattern), marked peri-tumoral stiffness at the anterior and posterior portions with no or minor stiffness at both lateral portions. High-grade density on mammography (grade 3-4) was more frequent in the type 1 pattern than in the other pattern types (80.5% in high-grade density vs. 19.5% in low-grade density). For type 1 tumors, the extent of synchronous non-invasive cancers (pT0), ductal carcinoma in situ (DCIS), was 1.8-2.0 times wider than that measured by US or magnetic resonance imaging (MRI). For type 2 tumors, the size of the invasive tumor component (pT size) was 1.3 times greater than that measured by MRI (p = 0.049). On the other hand, the pT size and pT0 extent of type 3 tumors were almost equal to the preoperative US and MRI measurements. In terms of immunohistochemical (IHC) profiles, type 3 tumors showed a high histologic grade (p = 0.021), poor differentiation (p = 0.009), presence of necrosis (p = 0.018), and high Ki-67 (p = 0.002). The percentage of HER2-positive cancers was relatively high within the type 2 group, and the percentage of triple negative breast cancer was relatively high in the type 3 group (p = 0.011). We expect that assessments of the SWE color map pattern will prove useful for surgical or therapeutic planning decisions and for predicting prognosis in invasive breast cancer patients. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Comparison of an Atomic Model and Its Cryo-EM Image at the Central Axis of a Helix

    PubMed Central

    He, Jing; Zeil, Stephanie; Hallak, Hussam; McKaig, Kele; Kovacs, Julio; Wriggers, Willy

    2016-01-01

    Cryo-electron microscopy (cryo-EM) is an important biophysical technique that produces three-dimensional (3D) density maps at different resolutions. Because more and more models are being produced from cryo-EM density maps, validation of the models is becoming important. We propose a method for measuring local agreement between a model and the density map using the central axis of the helix. This method was tested using 19 helices from cryo-EM density maps between 5.5 Å and 7.2 Å resolution and 94 helices from simulated density maps. This method distinguished most of the well-fitting helices, although challenges exist for shorter helices. PMID:27280059

  2. Preliminary Correlations of Gravity and Topography from Mars Global Surveyor

    NASA Technical Reports Server (NTRS)

    Zuber, M. T.; Tyler, G. L.; Smith, D. E.; Balmino, G. S.; Johnson, G. L.; Lemoine, F. G.; Neumann, G. A.; Phillips, R. J.; Sjogren, W. L.; Solomon, S. C.

    1999-01-01

    The Mars Global Surveyor (MGS) spacecraft is currently in a 400-km altitude polar mapping orbit and scheduled to begin global mapping of Mars in March of 1999. Doppler tracking data collected in this Gravity Calibration Orbit prior to the nominal mapping mission, combined with observations from the MGS Science Phasing Orbit in spring-summer 1998 and from the Viking and Mariner 9 orbiters, have led to preliminary high-resolution gravity fields. Spherical harmonic expansions have been performed to degree and order 70 and are characterized by the first high spatial resolution coverage of high latitudes. Topographic mapping by the Mars Orbiter Laser Altimeter on MGS is providing measurements of the height of the martian surface with sub-meter vertical resolution and 5-30 m absolute accuracy. Data obtained during the circular mapping phase are expected to provide the first high resolution measurements of surface heights in the southern hemisphere. The combination of gravity and topography measurements provides information on the structure of the planetary interior, i.e. the rigidity and distribution of internal density. The observations can also be used to address the mechanisms of support of surface topography. Preliminary results of correlations of gravity and topography at long planetary wavelengths will be presented and the implications for internal structure will be addressed.
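
    The degree-and-order-70 solution refers to the conventional spherical harmonic expansion of the gravitational potential, written below in its standard normalized form; R is the reference radius and the barred quantities are the normalized Stokes coefficients and associated Legendre functions. The notation is generic rather than the authors' own.

```latex
U(r,\phi,\lambda) = \frac{GM}{r}\left[ 1 + \sum_{l=2}^{70} \sum_{m=0}^{l}
\left(\frac{R}{r}\right)^{l} \bar{P}_{lm}(\sin\phi)
\left( \bar{C}_{lm}\cos m\lambda + \bar{S}_{lm}\sin m\lambda \right) \right]
```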

  3. Systematics in lensing reconstruction: dark matter rings in the sky?

    NASA Astrophysics Data System (ADS)

    Ponente, P. P.; Diego, J. M.

    2011-11-01

    Context. Non-parametric lensing methods are a useful way of reconstructing the lensing mass of a cluster without making assumptions about the way the mass is distributed in the cluster. These methods are particularly powerful in the case of galaxy clusters with a large number of constraints. The advantage of not assuming implicitly that the luminous matter follows the dark matter is particularly interesting in those cases where the cluster is in a non-relaxed dynamical state. On the other hand, non-parametric methods have several limitations that should be taken into account carefully. Aims: We explore some of these limitations and focus on their implications for the possible ring of dark matter around the galaxy cluster CL0024+17. Methods: We project three background galaxies through a mock cluster with a known radial density profile and obtain a map for the arcs (θ map). We also calculate the shear field associated with the mock cluster across the whole field of view (3.3 arcmin). Combining the positions of the arcs and the two shear components, we perform an inversion of the lens equation using two separate methods, the biconjugate gradient and quadratic programming (QADP), to reconstruct the convergence map of the mock cluster. Results: We explore the space of solutions of the convergence map and compare the radial density profiles to the density profile of the mock cluster. When the inversion matrix algorithms are forced to find the exact solution, we encounter systematic effects resembling ring structures that clearly depart from the original convergence map. Conclusions: Overfitting lensing data with a non-parametric method can produce ring-like structures similar to the alleged one in CL0024.
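
    A small numerical illustration of the inversion step is sketched below: a linear lensing operator is inverted through its normal equations with a biconjugate-gradient-type solver (SciPy's bicgstab), and a small ridge term stands in for the regularisation whose absence, as the abstract argues, lets noise imprint spurious ring-like structure. The operator, noise level and grid size are random stand-ins, not the actual CL0024+17 configuration.

```python
import numpy as np
from scipy.sparse.linalg import bicgstab

rng = np.random.default_rng(4)
n_constraints, n_cells = 300, 100               # shear/arc constraints vs. convergence grid cells
L = rng.normal(size=(n_constraints, n_cells))   # stand-in for the linear lensing operator
kappa_true = np.abs(rng.normal(size=n_cells))   # "true" convergence of the mock cluster
data = L @ kappa_true + rng.normal(scale=0.05, size=n_constraints)

# Normal equations with a small ridge term; letting lam -> 0 forces an exact fit
# to the noisy data, which is what produces the spurious structure discussed above.
lam = 1e-2
A = L.T @ L + lam * np.eye(n_cells)
b = L.T @ data
kappa_hat, info = bicgstab(A, b, atol=1e-8)
print(info, np.abs(kappa_hat - kappa_true).mean().round(3))
```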

  4. Uncertainties in mapping forest carbon in urban ecosystems.

    PubMed

    Chen, Gang; Ozelkan, Emre; Singh, Kunwar K; Zhou, Jun; Brown, Marilyn R; Meentemeyer, Ross K

    2017-02-01

    Spatially explicit urban forest carbon estimation provides a baseline map for understanding the variation in forest vertical structure, informing sustainable forest management and urban planning. While high-resolution remote sensing has proven promising for carbon mapping in highly fragmented urban landscapes, data cost and availability are the major obstacle prohibiting accurate, consistent, and repeated measurement of forest carbon pools in cities. This study aims to evaluate the uncertainties of forest carbon estimation in response to the combined impacts of remote sensing data resolution and neighborhood spatial patterns in Charlotte, North Carolina. The remote sensing data for carbon mapping were resampled to a range of resolutions, i.e., LiDAR point cloud density - 5.8, 4.6, 2.3, and 1.2 pts/m², aerial optical NAIP (National Agricultural Imagery Program) imagery - 1, 5, 10, and 20 m. Urban spatial patterns were extracted to represent area, shape complexity, dispersion/interspersion, diversity, and connectivity of landscape patches across the residential neighborhoods with built-up densities from low, medium-low, medium-high, to high. Through statistical analyses, we found that changing remote sensing data resolution introduced noticeable uncertainties (variation) in forest carbon estimation at the neighborhood level. Changing the LiDAR point density caused higher uncertainty (8.7-11.0% of variation) than changing the NAIP image resolution (6.2-8.6% of variation). For both LiDAR and NAIP, urban neighborhoods with a higher degree of anthropogenic disturbance unveiled a higher level of uncertainty in carbon mapping. However, LiDAR-based results were more likely to be affected by landscape patch connectivity, and the NAIP-based estimation was found to be significantly influenced by the complexity of patch shape. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. A High Density Genetic Map Derived from RAD Sequencing and Its Application in QTL Analysis of Yield-Related Traits in Vigna unguiculata

    PubMed Central

    Pan, Lei; Wang, Nian; Wu, Zhihua; Guo, Rui; Yu, Xiaolu; Zheng, Yu; Xia, Qiuju; Gui, Songtao; Chen, Chanyou

    2017-01-01

    Cowpea [Vigna unguiculata (L.) Walp.] is an annual legume of economic importance and widely grown in the semi-arid tropics. However, high-density genetic maps of cowpea are still lacking. Here, we identified 34,868 SNPs (single nucleotide polymorphisms) that were distributed in the cowpea genome based on the RAD sequencing (restriction-site associated DNA sequencing) technique using a population of 170 individuals (two cowpea parents and 168 F2:3 progenies). Of these, 17,996 reliable SNPs were allotted to 11 consensus linkage groups (LGs). The length of the genetic map was 1,194.25 cM in total with a mean distance of 0.066 cM per SNP marker locus. Using this map and the F2:3 population, combined with the CIM (composite interval mapping) method, eleven quantitative trait loci (QTL) for yield-related traits were detected on seven LGs (LG4, 5, 6, 7, 9, 10, and 11) in cowpea. These QTL explained 0.05–17.32% of the total phenotypic variation. Among these, four QTL were for pod length, four QTL for thousand-grain weight (TGW), two QTL for grain number per pod, and one QTL for carpopodium length. Our results will provide a foundation for understanding genes related to grain yield in the cowpea and genus Vigna. PMID:28936219

  6. Combining Neutron and Magnetic Resonance Imaging to Study the Interaction of Plant Roots and Soil

    NASA Astrophysics Data System (ADS)

    Oswald, Sascha E.; Tötzke, Christian; Haber-Pohlmeier, Sabina; Pohlmeier, Andreas; Kaestner, Anders P.; Lehmann, Eberhard

    The soil in the direct vicinity of the roots, the root-soil interface or so-called rhizosphere, is heavily modified by the activity of roots compared to bulk soil, e.g. with respect to microbiology and soil chemistry. It has turned out that the root-soil interface, though small in size, also plays a decisive role in the hydraulics controlling the water flow from bulk soil into the roots. A promising approach for the non-invasive investigation of water dynamics, water flow and solute transport is the combination of the two imaging techniques magnetic resonance imaging (MRI) and neutron imaging (NI). Both methods are complementary: NI maps the total proton density, possibly amplified by NI tracers, which usually corresponds to total water content, and is able to detect changes and spatial patterns with high resolution. Nuclear magnetic resonance relaxation times, on the other hand, reflect the interaction between fluid and matrix, while a mapping of proton spin density and thus water content is also possible. MRI is therefore able to classify different water pools via their relaxation times in addition to mapping the water distribution inside soil as a porous medium. We have started such combined measurements, using the same samples and performing tomography with each imaging method at a different location, with short-term sample transfer between the facilities.

  7. Identification of stable QTLs for seed oil content by combined linkage and association mapping in Brassica napus.

    PubMed

    Sun, Fengming; Liu, Jing; Hua, Wei; Sun, Xingchao; Wang, Xinfa; Wang, Hanzhong

    2016-11-01

    Seed oil content is an important agricultural trait in rapeseed breeding. Although numerous quantitative trait loci (QTLs) have been identified, most of them cannot be applied in practical breeding, mainly due to environmental instability or large confidence intervals. The purpose of this study was to identify and validate high quality and more stable QTLs by combining linkage mapping and genome-wide association study (GWAS). For linkage mapping, we constructed two F2 populations from crosses of high-oil content (∼50%) lines 6F313 and 61616 with a low-oil content (∼40%) line 51070. Two high density linkage maps spanned 1987 cM (1659 bins) and 1856 cM (1746 bins), respectively. For GWAS, we developed more than 34,000 high-quality SNP markers based on 227 accessions. Finally, 40 QTLs and 29 associations were established by linkage and association mapping in different environments. After merging the results, 32 consensus QTLs were obtained and 7 of them were identified by both mapping methods. Seven overlapping QTLs covered an average confidence interval of 183 kb and explained the phenotypic variation of 10.23 to 24.45%. We further developed allele-specific PCR primers to identify each of the seven QTLs. These stable QTLs should be useful in gene cloning and practical breeding application. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Genetic mapping reveals that sinefungin resistance in Toxoplasma gondii is controlled by a putative amino acid transporter locus that can be used as a negative selectable marker.

    PubMed

    Behnke, Michael S; Khan, Asis; Sibley, L David

    2015-02-01

    Quantitative trait locus (QTL) mapping studies have been integral in identifying and understanding virulence mechanisms in the parasite Toxoplasma gondii. In this study, we interrogated a different phenotype by mapping sinefungin (SNF) resistance in the genetic cross between type 2 ME49-FUDR(r) and type 10 VAND-SNF(r). The genetic map of this cross was generated by whole-genome sequencing of the progeny and subsequent identification of single nucleotide polymorphisms (SNPs) inherited from the parents. Based on this high-density genetic map, we were able to pinpoint the sinefungin resistance phenotype to one significant locus on chromosome IX. Within this locus, a single nonsynonymous SNP (nsSNP) resulting in an early stop codon in the TGVAND_290860 gene was identified, occurring only in the sinefungin-resistant progeny. Using CRISPR/CAS9, we were able to confirm that targeted disruption of TGVAND_290860 renders parasites sinefungin resistant. Because disruption of the SNR1 gene confers resistance, we also show that it can be used as a negative selectable marker to insert either a positive drug selection cassette or a heterologous reporter. These data demonstrate the power of combining classical genetic mapping, whole-genome sequencing, and CRISPR-mediated gene disruption for combined forward and reverse genetic strategies in T. gondii. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  9. Lidar-based fracture characterization: An outcrop-scale study of the Woodford Shale, McAlister Shale Pit, Oklahoma

    NASA Astrophysics Data System (ADS)

    Hanzel, Jason

    The use of lidar (light detection and ranging), a remote sensing tool based on principles of laser ranging, in mapping complex, multi-scale fracture networks had not been rigorously tested prior to this study, despite its foreseeable utility in interpreting rock fabric with imprints of complex tectonic evolution. This thesis demonstrates lidar-based characterization of the Woodford Shale, where intense fracturing could be due to both tectonism and mineralogy. The study area is the McAlister Shale Pit in south-central Oklahoma, where both the upper and middle sections of the Woodford Shale are exposed and can be lidar-mapped. Lidar results are validated using hand-measured strikes and dips of fracture planes, thin sections, and mineral chemistry of selected samples determined by X-ray diffraction (XRD). The complexity of the fracture patterns, as well as the inaccessibility of multiple locations within the shale pit, makes hand measurement prone to errors and biases; lidar provides an opportunity for less biased and more efficient field mapping. Fracture mapping with lidar is a multi-step process. The lidar data are converted from point clouds into a mesh through triangulation. User-defined parameters such as the size and orientation of the individual triangular elements are then used to group similar elements into surfaces. The strike and dip attributes of the simulated surfaces are visualized in an equal-area, lower-hemisphere projection stereonet. Three fracture sets were identified in the upper and middle sections with common orientation but substantially different spatial density. Measured surface attributes and spatial density relations from lidar were validated using their hand-measured counterparts. Thin section analysis suggests that the high fracture density in the upper Woodford measured by both the lidar and the hand-measured data could be due to high quartz content. A significant finding of this study is the reciprocal relation between lidar intensity and gamma-ray (GR) response, which is generally used to infer outcrop mineralogy. XRD analysis of representative samples along the common profiles shows that both GR and lidar intensity were influenced by the same minerals in essentially opposite ways. Results strongly suggest that lidar can remotely map not only the geomorphology, but also, to a first order of approximation, the relative mineralogical variations.

  10. Evaluate ERTS imagery for mapping and detection of changes of snowcover on land and on glaciers

    NASA Technical Reports Server (NTRS)

    Meier, M. F. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. The percentage of snow cover area on specific drainage basins was measured from ERTS-1 imagery by video density slicing with a repeatability of 4 percent of the snow covered area. Data from ERTS-1 images of the melt season snow cover in the Thunder Creek drainage basin in the North Cascades were combined with existing hydrologic and meteorologic observations to enable calculations of the time distribution of the water stored in this mountain snowpack. Similar data could be used for frequent updating of expected inflow to reservoirs. Equivalent snowline altitudes were determined from area measurements. Snowline altitudes were also determined by combining enlarged ERTS-1 images with maps. ERTS-1 imagery was also successfully used to measure glacier accumulation area ratios for a small test basin.

  11. Construction of a high-density linkage map and mapping quantitative trait loci for somatic embryogenesis using leaf petioles as explants in upland cotton (Gossypium hirsutum L.).

    PubMed

    Xu, Zhenzhen; Zhang, Chaojun; Ge, Xiaoyang; Wang, Ni; Zhou, Kehai; Yang, Xiaojie; Wu, Zhixia; Zhang, Xueyan; Liu, Chuanliang; Yang, Zuoren; Li, Changfeng; Liu, Kun; Yang, Zhaoen; Qian, Yuyuan; Li, Fuguang

    2015-07-01

    The first high-density linkage map was constructed to identify quantitative trait loci (QTLs) for somatic embryogenesis (SE) in cotton (Gossypium hirsutum L.) using leaf petioles as explants. Cotton transformation is highly limited by the small number of regenerable genotypes and the lack of understanding of the genetic and molecular basis of SE in cotton. To construct a more saturated linkage map and further identify QTLs for SE using leaf petioles as explants, a high-embryogenesis-frequency line (W10) from the commercial Chinese cotton cultivar CRI24 was crossed with TM-1, a genetic standard upland cotton with no embryogenesis frequency. The genetic map spanned 2300.41 cM in genetic distance and contained 411 polymorphic simple sequence repeat (SSR) loci. Of the 411 mapped loci, 25 were developed from unigenes identified for SE in our previous study. Six QTLs for SE were detected by the composite interval mapping method, each explaining 6.88-37.07% of the phenotypic variance. Single-marker analysis was also performed to verify the reliability of QTL detection, and the SSR markers NAU3325 and DPL0209 were detected by both methods. Further studies on the relatively stable and anchoring QTLs/markers for SE in an advanced population of W10 × TM-1 and other cross combinations with different SE abilities may shed light on the genetic and molecular mechanism of SE in cotton.

  12. Mapping Error in Southern Ocean Transport Computed from Satellite Altimetry and Argo

    NASA Astrophysics Data System (ADS)

    Kosempa, M.; Chambers, D. P.

    2016-02-01

    Argo profiling floats have afforded basin-scale coverage of the Southern Ocean since 2005. When density estimates from Argo are combined with surface geostrophic currents derived from satellite altimetry, one can estimate integrated geostrophic transport above 2000 dbar [e.g., Kosempa and Chambers, JGR, 2014]. However, the interpolation techniques relied upon to generate mapped data from Argo and altimetry impart a mapping error. We quantify this mapping error by sampling the high-resolution Southern Ocean State Estimate (SOSE) at the locations of Argo floats and Jason-1 and -2 altimeter ground tracks, and then creating gridded products using the same optimal interpolation algorithms used for the Argo/altimetry gridded products. We combine these surface and subsurface grids and compare the sampled-then-interpolated transport grids to those from the original SOSE data, in an effort to quantify the uncertainty in volume transport integrated across the Antarctic Circumpolar Current (ACC). This uncertainty is then used to answer two fundamental questions: 1) What is the minimum linear trend that can be observed in ACC transport given the present length of the instrument record? 2) How long must the instrument record be to observe a trend with an accuracy of 0.1 Sv/year?
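
    As a toy illustration of the sample-then-regrid logic described above (not the authors' SOSE-based procedure, and using plain linear interpolation rather than optimal interpolation), the sketch below samples a known synthetic field at scattered observation points, regrids it, and reports the resulting mapping error. All field values and sampling locations are invented for the example.

```python
# Toy illustration of quantifying mapping error: sample a known "truth"
# field at scattered observation points, regrid by interpolation, and
# compare the regridded field to the truth.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Synthetic "truth" field on a regular grid (stands in for model output).
x = np.linspace(0, 360, 181)          # longitude, deg
y = np.linspace(-70, -30, 81)         # latitude, deg
X, Y = np.meshgrid(x, y)
truth = np.sin(np.radians(X)) * np.cos(np.radians(3 * Y))

# Sample the truth at irregular "float/altimeter" locations.
n_obs = 1500
xo = rng.uniform(0, 360, n_obs)
yo = rng.uniform(-70, -30, n_obs)
obs = griddata((X.ravel(), Y.ravel()), truth.ravel(), (xo, yo), method="linear")

# Regrid the sampled values back onto the analysis grid (simple linear
# interpolation here; the real products use optimal interpolation).
mapped = griddata((xo, yo), obs, (X, Y), method="linear")

# Mapping error: difference between the regridded field and the truth.
err = mapped - truth
print("RMS mapping error:", np.sqrt(np.nanmean(err**2)))
```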

  13. Integration of Full Tensor Gravity and Z-Axis Tipper Electromagnetic Passive Low Frequency EM Instruments for Simultaneous Data Acquisition - Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wieberg, Scott

    Ground gravity is a common and useful tool for geothermal exploration. Gravity surveys map density changes in the subsurface that may be caused by tectonic deformation such as faulting, fracturing, plutonism, volcanism, hydrothermal alteration, etc. Full Tensor Gravity Gradient (FTG) data have been used for over a decade in both petroleum and mining exploration to map changes in density associated with geologic structure. Measuring the gravity gradient, rather than the gravity field, provides significantly higher resolution data. Modeling studies have shown FTG data to be a viable tool for geothermal exploration, but no FTG data had been acquired for geothermal applications to date. Electromagnetic methods have been used for geothermal exploration for some time. The Z-Axis Tipper Electromagnetic (ZTEM) system was a newer technology that had found success in mapping deep conductivity changes for mining applications. ZTEM had also been used in limited tests for geothermal exploration. This newer technology provided the ability to cost-effectively map large areas while detailing the electrical properties of the geological structures at depth. ZTEM is passive and uses naturally occurring audio frequency magnetic (AFMAG) signals as the electromagnetic source. These geophysical methods were to be tested over a known geothermal site to determine whether or not the data provided the information required for accurately interpreting the subsurface geologic structure associated with a geothermal deposit. After successful acquisition and analysis of the known source area, an additional survey of a “greenfield” area was to be completed. The final step was to develop a combined interpretation model and determine whether the combination produced a higher-confidence geophysical model compared to models developed using each of the technologies individually.

  14. A method for age-matched OCT angiography deviation mapping in the assessment of disease- related changes to the radial peripapillary capillaries.

    PubMed

    Pinhas, Alexander; Linderman, Rachel; Mo, Shelley; Krawitz, Brian D; Geyman, Lawrence S; Carroll, Joseph; Rosen, Richard B; Chui, Toco Y

    2018-01-01

    To present a method for age-matched deviation mapping in the assessment of disease-related changes to the radial peripapillary capillaries (RPCs). We reviewed 4.5 × 4.5 mm en face peripapillary OCT-A scans of 133 healthy control eyes (133 subjects, mean 41.5 yrs, range 11-82 yrs) and 4 eyes with distinct retinal pathologies, obtained using spectral-domain optical coherence tomography angiography. Statistical analysis was performed to evaluate the impact of age on RPC perfusion densities. RPC density group mean and standard deviation maps were generated for each decade of life. Deviation maps were created for the diseased eyes based on these maps. Large peripapillary vessel (LPV; noncapillary vessel) perfusion density was also studied for the impact of age. Average healthy RPC density was 42.5±1.47%. ANOVA and pairwise Tukey-Kramer tests showed that RPC density in the ≥60 yr group was significantly lower than RPC density in all younger decades of life (p<0.01). Average healthy LPV density was 21.5±3.07%. Linear regression models indicated that LPV density decreased with age; however, ANOVA and pairwise Tukey-Kramer tests did not reach statistical significance. Deviation mapping enabled us to quantitatively and visually elucidate the significance of RPC density changes in disease. It is important to consider changes that occur with aging when analyzing RPC and LPV density changes in disease. RPC density, coupled with age-matched deviation mapping techniques, represents a potentially clinically useful method for detecting changes to peripapillary perfusion in disease.

  15. Combined effect of pulse density and grid cell size on predicting and mapping aboveground carbon in fast-growing Eucalyptus forest plantation using airborne LiDAR data.

    PubMed

    Silva, Carlos Alberto; Hudak, Andrew Thomas; Klauberg, Carine; Vierling, Lee Alexandre; Gonzalez-Benecke, Carlos; de Padua Chaves Carvalho, Samuel; Rodriguez, Luiz Carlos Estraviz; Cardil, Adrián

    2017-12-01

    LiDAR remote sensing is a rapidly evolving technology for quantifying a variety of forest attributes, including aboveground carbon (AGC). Pulse density influences the acquisition cost of LiDAR, and grid cell size influences AGC prediction using plot-based methods; however, little work has evaluated the effects of LiDAR pulse density and cell size for predicting and mapping AGC in fast-growing Eucalyptus forest plantations. The aim of this study was to evaluate the effect of LiDAR pulse density and grid cell size on AGC prediction accuracy at plot and stand levels using airborne LiDAR and field data. We used the Random Forest (RF) machine learning algorithm to model AGC using LiDAR-derived metrics from LiDAR collections of 5 and 10 pulses m⁻² (RF5 and RF10) and grid cell sizes of 5, 10, 15 and 20 m. The results show that a LiDAR pulse density of 5 pulses m⁻² provides metrics with prediction accuracy for AGC similar to that of a dataset with 10 pulses m⁻² in these fast-growing plantations. Relative root mean square errors (RMSEs) for RF5 and RF10 were 6.14 and 6.01%, respectively. Equivalence tests showed that the predicted AGC from the training and validation models was equivalent to the observed AGC measurements. Grid cell sizes for mapping ranging from 5 to 20 m also did not significantly affect the prediction accuracy of AGC at stand level in this system. LiDAR measurements can therefore be used to predict and map AGC across variable-age Eucalyptus plantations with adequate levels of precision and accuracy using 5 pulses m⁻² and a grid cell size of 5 m. The promising results for AGC modeling in this study will allow for greater confidence in comparing AGC estimates with varying LiDAR sampling densities for Eucalyptus plantations and assist in decision making towards more cost-effective and efficient forest inventory.
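
    A minimal sketch of the kind of Random Forest regression of AGC on LiDAR-derived metrics described above. The metric names (h_mean, h_p95, cover), the synthetic plot data, and the cross-validation setup are illustrative assumptions, not the authors' pipeline.

```python
# Random Forest regression of plot-level aboveground carbon (AGC) on
# hypothetical LiDAR-derived canopy metrics, with cross-validated RMSE.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(42)
n_plots = 120

# Hypothetical plot-level LiDAR metrics (mean height, 95th height
# percentile, canopy cover fraction) and field-measured AGC (Mg ha-1).
X = pd.DataFrame({
    "h_mean": rng.uniform(5, 30, n_plots),
    "h_p95": rng.uniform(10, 40, n_plots),
    "cover": rng.uniform(0.3, 1.0, n_plots),
})
y = 2.5 * X["h_p95"] + 15 * X["cover"] + rng.normal(0, 5, n_plots)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
y_hat = cross_val_predict(rf, X, y, cv=5)

rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(f"Relative RMSE: {100 * rmse / y.mean():.1f}%")
```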

  16. Improvement of density resolution in short-pulse hard x-ray radiographic imaging using detector stacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borm, B.; Gärtner, F.; Khaghani, D.

    2016-09-15

    We demonstrate that stacking several imaging plates (IPs) constitutes an easy method to increase hard x-ray detection efficiency. Used to record x-ray radiographic images produced by an intense-laser driven hard x-ray backlighter source, the IP stacks resulted in a significant improvement of the radiograph density resolution. We attribute this to the higher quantum efficiency of the combined detectors, leading to reduced photon noise. Electron-photon transport simulations of the interaction processes in the detector reproduce the observed contrast improvement. Increasing the detection efficiency to enhance radiographic imaging capabilities is equally effective as increasing the x-ray source yield, e.g., by a larger drive laser energy.

  17. The frequency-domain approach for apparent density mapping

    NASA Astrophysics Data System (ADS)

    Tong, T.; Guo, L.

    2017-12-01

    Apparent density mapping is a technique to estimate the density distribution in a subsurface layer from observed gravity data. It has been widely applied for geologic mapping, tectonic studies and mineral exploration for decades. Apparent density mapping usually models the density layer as a collection of vertical, juxtaposed prisms in both horizontal directions, whose top and bottom surfaces are assumed to be horizontal or variable-depth, and then inverts or deconvolves the gravity anomalies to determine the density of each prism. Conventionally, the frequency-domain approach, which assumes that both the top and bottom surfaces of the layer are horizontal, is utilized for fast density mapping. However, this assumption is not always valid in the real world, since either the top surface or the bottom surface may be variable-depth. Here, we present a frequency-domain approach for apparent density mapping that permits both the top and bottom surfaces of the layer to be variable-depth. We first derived the formula for forward calculation of the gravity anomalies caused by a density layer whose top and bottom surfaces are variable-depth, and the formula for inversion of the gravity anomalies for the density distribution. We then proposed a procedure for density mapping based on both the inversion and forward-calculation formulas. We tested the approach on synthetic data, which verified its effectiveness. We also tested the approach on real Bouguer gravity anomaly data from central South China. The top surface was assumed to be flat and at sea level, and the bottom surface was taken as the Moho surface. The result shows the crustal density distribution, which coincides well with the basic tectonic features of the study area.
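
    A minimal sketch of the conventional horizontal-surface case that the abstract builds on: forward calculation and deconvolution ("apparent density mapping") of a density layer in the wavenumber domain. The grid spacing, layer depths and synthetic density anomaly are assumptions; this is not the authors' variable-depth formulation.

```python
# Frequency-domain relation for a layer with horizontal top (z1) and bottom
# (z2) surfaces and laterally varying density rho(x, y):
#   G(k) = 2*pi*gamma * R(k) * (exp(-k*z1) - exp(-k*z2)) / k,
# where R(k) is the Fourier transform of rho. Inversion divides by the kernel.
import numpy as np

GAMMA = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def layer_kernel(nx, ny, dx, dy, z1, z2):
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    kern = np.where(k > 0,
                    (np.exp(-k * z1) - np.exp(-k * z2)) / np.where(k > 0, k, 1.0),
                    z2 - z1)                      # k -> 0 limit: slab thickness
    return 2 * np.pi * GAMMA * kern

def forward_gravity(rho, dx, dy, z1, z2):
    ny, nx = rho.shape
    return np.real(np.fft.ifft2(np.fft.fft2(rho) * layer_kernel(nx, ny, dx, dy, z1, z2)))

def apparent_density(g, dx, dy, z1, z2, eps=1e-12):
    ny, nx = g.shape
    kern = layer_kernel(nx, ny, dx, dy, z1, z2)
    return np.real(np.fft.ifft2(np.fft.fft2(g) / (kern + eps)))

# Round-trip check on a synthetic density anomaly (kg m^-3).
rho = np.zeros((128, 128))
rho[40:80, 50:90] = 300.0
g = forward_gravity(rho, dx=1000.0, dy=1000.0, z1=0.0, z2=30e3)
rho_rec = apparent_density(g, dx=1000.0, dy=1000.0, z1=0.0, z2=30e3)
print("max reconstruction error:", np.abs(rho_rec - rho).max())
```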

  18. Using Moran's I and GIS to study spatial pattern of forest litter carbon density in typical subtropical region, China

    NASA Astrophysics Data System (ADS)

    Fu, W. J.; Jiang, P. K.; Zhou, G. M.; Zhao, K. L.

    2013-12-01

    The spatial variation of forest litter carbon (FLC) density in typical subtropical forests in southeast China was investigated using Moran's I, geostatistics and a geographical information system (GIS). A total of 839 forest litter samples were collected based on a 12 km (south-north) × 6 km (east-west) grid system in Zhejiang Province. Forest litter carbon density values were highly variable, ranging from 10.2 kg ha⁻¹ to 8841.3 kg ha⁻¹, with an average of 1786.7 kg ha⁻¹. Aboveground biomass had the strongest positive correlation with FLC density, followed by forest age and elevation. Global Moran's I revealed that FLC density had significant positive spatial autocorrelation, and clear spatial patterns were observed using local Moran's I. A spherical model was chosen to fit the experimental semivariogram. The moderate nugget-to-sill ratio (0.536) revealed that both natural and anthropogenic factors played a key role in the spatial heterogeneity of FLC density. High FLC density values were mainly distributed in the northwestern and western parts of Zhejiang Province, which was related to the long-term forest conservation policy adopted in these areas, while the Hang-Jia-Hu (HJH) Plain, the Jin-Qu (JQ) basin and coastal areas had low FLC density due to low forest coverage and intensive management of economic forests. These spatial patterns in the distribution map were in line with the spatial-cluster map described by the local Moran's I. Therefore, Moran's I, combined with geostatistics and GIS, can be used to study spatial patterns of environmental variables related to forest ecosystems.
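
    A minimal sketch of the global Moran's I statistic used above to test for spatial autocorrelation of FLC density. The sample coordinates, values and the inverse-distance weighting scheme are assumptions for the example, not the authors' GIS workflow.

```python
# Global Moran's I with inverse-distance spatial weights, computed directly
# with NumPy on a synthetic grid of sample plots.
import numpy as np

def global_morans_i(values, coords, max_dist=20.0):
    """Global Moran's I with inverse-distance weights within max_dist (km)."""
    x = np.asarray(values, dtype=float)
    n = x.size
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = np.where((d > 0) & (d <= max_dist), 1.0 / d, 0.0)  # spatial weights
    z = x - x.mean()
    s0 = w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# Synthetic grid of sample plots with a smooth spatial trend plus noise.
rng = np.random.default_rng(1)
gx, gy = np.meshgrid(np.arange(0, 120, 12.0), np.arange(0, 60, 6.0))
coords = np.column_stack([gx.ravel(), gy.ravel()])
flc = 1800 + 20 * coords[:, 0] + rng.normal(0, 300, coords.shape[0])

print("Moran's I:", round(global_morans_i(flc, coords), 3))
```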

  19. [Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].

    PubMed

    Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong

    2015-11-01

    With the fast development of remote sensing technology, combining forest inventory sample plot data and remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low-spatial-resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraint, and nonlinear spectral mixture analysis, were compared to derive the fractions of different land use and land cover (LULC) types. Then, the sequential Gaussian co-simulation algorithm, with and without the fraction images from the spectral mixture analyses, was employed to estimate the forest carbon density of Hunan Province. Results showed that 1) linear spectral mixture analysis with constraint, leading to a mean RMSE of 0.002, estimated the fractions of LULC types more accurately than unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model and the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density from 74.1% to 81.5% and decreased the RMSE from 7.26 to 5.18; and 3) the mean forest carbon density for the province was 30.06 t hm⁻², ranging from 0.00 to 67.35 t hm⁻². This implies that spectral mixture analysis has great potential to increase the estimation accuracy of forest carbon density at regional and global levels.
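
    A minimal sketch of constrained linear spectral unmixing of the kind referenced above, in which each pixel spectrum is modeled as a non-negative, sum-to-one mixture of endmember spectra (fully constrained least squares via a sum-to-one row augmentation and non-negative least squares). The endmember spectra and the pixel are synthetic assumptions.

```python
# Fully constrained linear spectral unmixing of a single pixel spectrum.
import numpy as np
from scipy.optimize import nnls

def unmix_fcls(pixel, endmembers, weight=1e3):
    """Non-negative, sum-to-one fractions via row augmentation + NNLS.

    endmembers: (n_bands, n_endmembers) matrix of endmember spectra.
    pixel:      (n_bands,) observed spectrum.
    """
    nb, ne = endmembers.shape
    A = np.vstack([endmembers, weight * np.ones((1, ne))])  # sum-to-one row
    b = np.concatenate([pixel, [weight]])
    fractions, _ = nnls(A, b)
    return fractions

# Three hypothetical endmembers (forest, crop, water) over 6 bands.
E = np.array([
    [0.05, 0.08, 0.04, 0.45, 0.30, 0.20],   # forest
    [0.10, 0.15, 0.12, 0.35, 0.40, 0.30],   # crop
    [0.08, 0.06, 0.05, 0.03, 0.02, 0.01],   # water
]).T

true_f = np.array([0.6, 0.3, 0.1])
pixel = E @ true_f + np.random.default_rng(0).normal(0, 0.002, 6)
print("estimated fractions:", np.round(unmix_fcls(pixel, E), 3))
```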

  20. Analysis and application of ERTS-1 data for regional geological mapping

    NASA Technical Reports Server (NTRS)

    Gold, D. P.; Parizek, R. R.; Alexander, S. A.

    1973-01-01

    Combined visual and digital techniques of analysing ERTS-1 data for geologic information have been tried on selected areas in Pennsylvania. The major physiographic and structural provinces show up well. Supervised mapping, following the imaged expression of known geologic features on ERTS band 5 enlargements (1:250,000) of parts of eastern Pennsylvania, delimited the Diabase Sills and the Precambrian rocks of the Reading Prong with remarkable accuracy. From unsupervised mapping, transgressive linear features are apparent in unexpected density and exhibit strong control over river valley and stream channel directions. They are unaffected by bedrock type, age, or primary structural boundaries, which suggests they are either rejuvenated basement joint directions on different scales, or a recently impressed structure possibly associated with a drifting North American plate. With ground mapping and underflight data, 6 scales of linear features have been recognized.

  1. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform

    PubMed Central

    Mayer, Markus A.; Boretsky, Adam R.; van Kuijk, Frederik J.; Motamedi, Massoud

    2012-01-01

    Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the limitation of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in roughly an order of magnitude less acquisition time than the averaging method requires. In addition, improvements in image quality metrics and a 5 dB increase in the signal-to-noise ratio are attained. PMID:23117804

  2. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform.

    PubMed

    Chitchian, Shahab; Mayer, Markus A; Boretsky, Adam R; van Kuijk, Frederik J; Motamedi, Massoud

    2012-11-01

    Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the limitation of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in roughly an order of magnitude less acquisition time than the averaging method requires. In addition, improvements in image quality metrics and a 5 dB increase in the signal-to-noise ratio are attained.
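
    A minimal wavelet-shrinkage denoising sketch in the spirit of the method above, but using PyWavelets' standard separable 2-D DWT rather than the double-density dual-tree complex wavelet transform employed by the authors (which requires a dedicated implementation). The wavelet, threshold rule and synthetic image are illustrative assumptions.

```python
# Soft-threshold wavelet shrinkage of a noisy 2-D image with PyWavelets.
import numpy as np
import pywt

def wavelet_shrink_denoise(img, wavelet="sym8", level=4):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Robust noise estimate from the finest diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))          # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thr, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

# Synthetic image: a bright "retinal layer" plus additive noise standing in
# for speckle.
rng = np.random.default_rng(0)
clean = np.zeros((256, 256))
clean[100:140, :] = 1.0
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
print("error std before/after:",
      round(np.std(noisy - clean), 3),
      round(np.std(wavelet_shrink_denoise(noisy) - clean), 3))
```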

  3. Linkage disequilibrium fine mapping of quantitative trait loci: A simulation study

    PubMed Central

    Abdallah, Jihad M; Goffinet, Bruno; Cierco-Ayrolles, Christine; Pérez-Enciso, Miguel

    2003-01-01

    Recently, the use of linkage disequilibrium (LD) to locate genes that affect quantitative traits (QTL) has received increasing interest, but the feasibility of fine mapping QTL using linkage disequilibrium techniques has not been well studied. The main objectives of this work were to (1) measure the extent and pattern of LD between a putative QTL and nearby markers in finite populations and (2) investigate the usefulness of LD for fine mapping QTL in simulated populations using a dense map of multiallelic or biallelic marker loci. The test of association between a marker and the QTL and the power of the test were calculated based on single-marker regression analysis. The results show the presence of substantial linkage disequilibrium with closely linked marker loci after 100 to 200 generations of random mating. Although the power to test the association with a frequent QTL of large effect was satisfactory, the power was low for QTL with small effects and/or low frequencies. More powerful multi-locus methods, as well as methods combining both linkage and linkage disequilibrium information, may be required to map low-frequency QTL with small genetic effects. The results also showed that multiallelic markers are more useful than biallelic markers for detecting linkage disequilibrium and association at an equal distance. PMID:12939203
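
    A minimal sketch of the single-marker regression test described above: the phenotype is regressed on marker genotype (coded 0/1/2), and association is judged from the regression p-value. The simulated LD structure and effect size are assumptions for illustration only.

```python
# Single-marker regression association test on simulated genotypes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 500

# Simulate a biallelic marker in LD with a QTL affecting the trait.
qtl = rng.binomial(2, 0.3, n)                  # QTL genotype (0/1/2)
marker = np.where(rng.random(n) < 0.9, qtl,    # marker mostly tracks the QTL
                  rng.binomial(2, 0.3, n))
phenotype = 0.5 * qtl + rng.normal(0, 1, n)    # QTL explains part of the trait

res = stats.linregress(marker, phenotype)
print(f"slope={res.slope:.3f}  p-value={res.pvalue:.2e}")
```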

  4. To the Greatest Lengths: Al Qaeda, Proximity and Recruitment Risk

    DTIC Science & Technology

    2010-12-01

    activity (Boba, 2005, pp. 218–219). On the complex end of this spectrum, density mapping uses mathematical formulas to determine degrees of criminal... area. These calculations "combines actuarial risk prediction with environmental criminology to assign risk values to places according to their... translated records, and the compilation of distance variables are correct. 2. Model: Mathematically, the formula for this test is

  5. Galaxy bias from the Dark Energy Survey Science Verification data: Combining galaxy density maps and weak lensing maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C.; Pujol, A.; Gaztañaga, E.

    We measure the redshift evolution of galaxy bias for a magnitude-limited galaxy sample by combining the galaxy density maps and weak lensing shear maps for a ~116 deg² area of the Dark Energy Survey (DES) Science Verification (SV) data. This method was first developed in Amara et al. and later re-examined in a companion paper with rigorous simulation tests and analytical treatment of tomographic measurements. In this work we apply this method to the DES SV data and measure the galaxy bias for an i < 22.5 galaxy sample. We find the galaxy bias and 1σ error bars in four photometric redshift bins to be 1.12 ± 0.19 (z = 0.2–0.4), 0.97 ± 0.15 (z = 0.4–0.6), 1.38 ± 0.39 (z = 0.6–0.8), and 1.45 ± 0.56 (z = 0.8–1.0). These measurements are consistent at the 2σ level with measurements on the same data set using galaxy clustering and cross-correlation of galaxies with cosmic microwave background lensing, with most of the redshift bins consistent within the 1σ error bars. In addition, our method provides the only σ8-independent constraint among the three. We forward model the main observational effects using mock galaxy catalogues by including shape noise, photo-z errors, and masking effects. We show that our bias measurement from the data is consistent with that expected from simulations. With the forthcoming full DES data set, we expect this method to provide additional constraints on the galaxy bias measurement from more traditional methods. Moreover, in the process of our measurement, we build up a 3D mass map that allows further exploration of the dark matter distribution and its relation to galaxy evolution.

  6. Galaxy bias from the Dark Energy Survey Science Verification data: Combining galaxy density maps and weak lensing maps

    DOE PAGES

    Chang, C.; Pujol, A.; Gaztañaga, E.; ...

    2016-04-15

    We measure the redshift evolution of galaxy bias for a magnitude-limited galaxy sample by combining the galaxy density maps and weak lensing shear maps for a ~116 deg² area of the Dark Energy Survey (DES) Science Verification (SV) data. This method was first developed in Amara et al. and later re-examined in a companion paper with rigorous simulation tests and analytical treatment of tomographic measurements. In this work we apply this method to the DES SV data and measure the galaxy bias for an i < 22.5 galaxy sample. We find the galaxy bias and 1σ error bars in four photometric redshift bins to be 1.12 ± 0.19 (z = 0.2–0.4), 0.97 ± 0.15 (z = 0.4–0.6), 1.38 ± 0.39 (z = 0.6–0.8), and 1.45 ± 0.56 (z = 0.8–1.0). These measurements are consistent at the 2σ level with measurements on the same data set using galaxy clustering and cross-correlation of galaxies with cosmic microwave background lensing, with most of the redshift bins consistent within the 1σ error bars. In addition, our method provides the only σ8-independent constraint among the three. We forward model the main observational effects using mock galaxy catalogues by including shape noise, photo-z errors, and masking effects. We show that our bias measurement from the data is consistent with that expected from simulations. With the forthcoming full DES data set, we expect this method to provide additional constraints on the galaxy bias measurement from more traditional methods. Moreover, in the process of our measurement, we build up a 3D mass map that allows further exploration of the dark matter distribution and its relation to galaxy evolution.

  7. Quantifying volcanic hazard at Campi Flegrei caldera (Italy) with uncertainty assessment: 2. Pyroclastic density current invasion maps

    NASA Astrophysics Data System (ADS)

    Neri, Augusto; Bevilacqua, Andrea; Esposti Ongaro, Tomaso; Isaia, Roberto; Aspinall, Willy P.; Bisson, Marina; Flandoli, Franco; Baxter, Peter J.; Bertagnini, Antonella; Iannuzzi, Enrico; Orsucci, Simone; Pistolesi, Marco; Rosi, Mauro; Vitale, Stefano

    2015-04-01

    Campi Flegrei (CF) is an example of an active caldera containing densely populated settlements at very high risk from pyroclastic density currents (PDCs). We present here an innovative method for assessing background spatial PDC hazard in a caldera setting with probabilistic invasion maps conditional on the occurrence of an explosive event. The method encompasses the probabilistic assessment of potential vent opening positions, derived in the companion paper, combined with inferences about the spatial density distribution of PDC invasion areas from a simplified flow model, informed by reconstruction of deposits from eruptions in the last 15 ka. The flow model describes the PDC kinematics and accounts for the main effects of topography on flow propagation. Structured expert elicitation is used to incorporate certain sources of epistemic uncertainty, and a Monte Carlo approach is adopted to produce a set of probabilistic hazard maps for the whole CF area. Our findings show that, in the case of an eruption, almost the entire caldera is exposed to invasion with a mean probability of at least 5%, with peaks greater than 50% in some central areas. Some areas outside the caldera are also exposed to this danger, with mean probabilities of invasion of the order of 5-10%. Our analysis suggests that these probability estimates have location-specific uncertainties which can be substantial. The results prove to be robust with respect to alternative elicitation models and allow the influence on hazard mapping of different sources of uncertainty, and of theoretical and numerical assumptions, to be quantified.

  8. Fourier-space combination of Planck and Herschel images

    NASA Astrophysics Data System (ADS)

    Abreu-Vicente, J.; Stutz, A.; Henning, Th.; Keto, E.; Ballesteros-Paredes, J.; Robitaille, T.

    2017-08-01

    Context. Herschel has revolutionized our ability to measure column densities (NH) and temperatures (T) of molecular clouds thanks to its far-infrared multiwavelength coverage. However, the lack of a well-defined background intensity level in the Herschel data limits the accuracy of the NH and T maps. Aims: We aim to provide a method that corrects the missing Herschel background intensity levels using the Planck model for foreground Galactic thermal dust emission. For the Herschel/PACS data, both the constant offset and the spatial dependence of the missing background must be addressed. For the Herschel/SPIRE data, the constant-offset correction has already been applied to the archival data, so we are primarily concerned with the spatial dependence, which is most important at 250 μm. Methods: We present a Fourier method that combines the publicly available Planck model on large angular scales with the Herschel images on smaller angular scales. Results: We have applied our method to two regions spanning a range of Galactic environments: Perseus and the Galactic plane region around l = 11° (HiGal-11). We post-processed the combined dust continuum emission images to generate column density and temperature maps. We compared these to previously adopted constant-offset corrections. We find significant differences (≳20%) over significant (∼15%) areas of the maps, at low column densities (NH ≲ 10²² cm⁻²) and relatively high temperatures (T ≳ 20 K). We have also applied our method to synthetic observations of a simulated molecular cloud to validate it. Conclusions: Our method successfully corrects the Herschel images, including both the constant-offset intensity level and the scale-dependent background variations measured by Planck. Our method improves on the previous constant-offset corrections, which did not account for variations in the background emission levels. The image FITS files used in this paper are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/604/A65
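
    A minimal sketch of Fourier-space feathering in the spirit of the method above: large angular scales are taken from a low-resolution map with the correct background level (a synthetic stand-in for the Planck model) and small scales from a high-resolution map lacking it (a stand-in for Herschel). The Gaussian crossover scale and the synthetic maps are illustrative assumptions, not the authors' choices.

```python
# Blend two equally gridded maps in Fourier space with a Gaussian taper.
import numpy as np

def combine_fourier(low_res_map, high_res_map, pixscale_arcmin, crossover_arcmin=30.0):
    ny, nx = low_res_map.shape
    kx = np.fft.fftfreq(nx, d=pixscale_arcmin)
    ky = np.fft.fftfreq(ny, d=pixscale_arcmin)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)        # cycles / arcmin
    w_low = np.exp(-0.5 * (k * crossover_arcmin) ** 2)       # -> 1 at large scales
    f_combined = (w_low * np.fft.fft2(low_res_map)
                  + (1.0 - w_low) * np.fft.fft2(high_res_map))
    return np.real(np.fft.ifft2(f_combined))

# Synthetic example: the high-resolution map is missing the smooth background.
ny, nx = 256, 256
yy, xx = np.mgrid[0:ny, 0:nx]
background = 50.0 + 0.1 * xx                       # large-scale emission
small_scale = 5.0 * np.sin(xx / 3.0) * np.cos(yy / 4.0)
planck_like = background                            # right zero level, no detail
herschel_like = small_scale + 0.2 * background      # detail, wrong offset

combined = combine_fourier(planck_like, herschel_like, pixscale_arcmin=1.0)
print("combined mean:", round(combined.mean(), 2),
      " truth mean:", round((background + small_scale).mean(), 2))
```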

  9. Characterization of polyploid wheat genomic diversity using a high-density 90 000 single nucleotide polymorphism array

    PubMed Central

    Wang, Shichen; Wong, Debbie; Forrest, Kerrie; Allen, Alexandra; Chao, Shiaoman; Huang, Bevan E; Maccaferri, Marco; Salvi, Silvio; Milner, Sara G; Cattivelli, Luigi; Mastrangelo, Anna M; Whan, Alex; Stephen, Stuart; Barker, Gary; Wieseke, Ralf; Plieske, Joerg; International Wheat Genome Sequencing Consortium; Lillemo, Morten; Mather, Diane; Appels, Rudi; Dolferus, Rudy; Brown-Guedira, Gina; Korol, Abraham; Akhunova, Alina R; Feuillet, Catherine; Salse, Jerome; Morgante, Michele; Pozniak, Curtis; Luo, Ming-Cheng; Dvorak, Jan; Morell, Matthew; Dubcovsky, Jorge; Ganal, Martin; Tuberosa, Roberto; Lawley, Cindy; Mikoulitch, Ivan; Cavanagh, Colin; Edwards, Keith J; Hayden, Matthew; Akhunov, Eduard

    2014-01-01

    High-density single nucleotide polymorphism (SNP) genotyping arrays are a powerful tool for studying genomic patterns of diversity, inferring ancestral relationships between individuals in populations and studying marker–trait associations in mapping experiments. We developed a genotyping array including about 90 000 gene-associated SNPs and used it to characterize genetic variation in allohexaploid and allotetraploid wheat populations. The array includes a significant fraction of common genome-wide distributed SNPs that are represented in populations of diverse geographical origin. We used density-based spatial clustering algorithms to enable high-throughput genotype calling in complex data sets obtained for polyploid wheat. We show that these model-free clustering algorithms provide accurate genotype calling in the presence of multiple clusters including clusters with low signal intensity resulting from significant sequence divergence at the target SNP site or gene deletions. Assays that detect low-intensity clusters can provide insight into the distribution of presence–absence variation (PAV) in wheat populations. A total of 46 977 SNPs from the wheat 90K array were genetically mapped using a combination of eight mapping populations. The developed array and cluster identification algorithms provide an opportunity to infer detailed haplotype structure in polyploid wheat and will serve as an invaluable resource for diversity studies and investigating the genetic basis of trait variation in wheat. PMID:24646323
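
    A minimal sketch of model-free, density-based genotype calling of the kind described above: two-channel SNP signal intensities are clustered with DBSCAN and each cluster is read as a genotype class, with a separate low-intensity cluster standing in for presence-absence variation. The simulated intensities and DBSCAN parameters are illustrative assumptions, not the authors' settings.

```python
# DBSCAN clustering of simulated two-channel SNP signal intensities.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)

def simulate_snp_intensities(n_per_cluster=150):
    """Normalized (theta, R)-like coordinates for three genotype clusters."""
    centers = [(0.1, 1.0), (0.5, 1.0), (0.9, 1.0)]      # AA, AB, BB
    pts = [rng.normal(c, (0.04, 0.08), size=(n_per_cluster, 2)) for c in centers]
    # A low-intensity cluster, e.g. from a deleted target site in some lines.
    pts.append(rng.normal((0.5, 0.15), (0.10, 0.05), size=(40, 2)))
    return np.vstack(pts)

X = simulate_snp_intensities()
labels = DBSCAN(eps=0.07, min_samples=10).fit_predict(X)

for lab in sorted(set(labels)):
    members = X[labels == lab]
    tag = "noise" if lab == -1 else f"cluster {lab}"
    print(f"{tag}: n={len(members)}, mean theta={members[:, 0].mean():.2f}, "
          f"mean R={members[:, 1].mean():.2f}")
```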

  10. Structure-Preserving Color Normalization and Sparse Stain Separation for Histological Images.

    PubMed

    Vahadane, Abhishek; Peng, Tingying; Sethi, Amit; Albarqouni, Shadi; Wang, Lichao; Baust, Maximilian; Steiger, Katja; Schlitter, Anna Melissa; Esposito, Irene; Navab, Nassir

    2016-08-01

    Staining and scanning of tissue samples for microscopic examination is fraught with undesirable color variations arising from differences in the raw materials and manufacturing techniques of stain vendors, the staining protocols of labs, and the color responses of digital scanners. When comparing tissue samples, color normalization and stain separation of the tissue images can be helpful for both pathologists and software. Techniques that are used for natural images fail to utilize the structural properties of stained tissue samples and produce undesirable color distortions. Stain concentrations cannot be negative, tissue samples are stained with only a few stains, and most tissue regions are characterized by at most one effective stain. We model these physical phenomena, which define the tissue structure, by first decomposing images in an unsupervised manner into stain density maps that are sparse and non-negative. For a given image, we combine its stain density maps with the stain color basis of a pathologist-preferred target image, thus altering only its color while preserving the structure described by the maps. Stain density correlation with ground truth and preference by pathologists were higher for images normalized using our method than for the alternatives. We also propose a computationally faster extension of this technique for large whole-slide images that selects an appropriate patch sample instead of using the entire image to compute the stain color basis.
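
    A minimal sketch of unsupervised stain separation in the spirit of the approach above: RGB pixels are converted to optical density (Beer-Lambert law) and factorized into a non-negative stain color basis and stain density maps. Plain NMF from scikit-learn is used here instead of the authors' sparse, structure-preserving formulation, and the two-stain synthetic image is an assumption.

```python
# Optical-density conversion followed by non-negative factorization into
# stain density maps (W) and a stain color basis (H), so OD ~ W x H.
import numpy as np
from sklearn.decomposition import NMF

def rgb_to_od(img, io=256.0):
    """Convert an RGB image (H, W, 3) to non-negative optical density."""
    return np.maximum(-np.log((img.astype(float) + 1.0) / io), 0.0)

rng = np.random.default_rng(0)

# Synthetic two-stain image built from assumed H-like and E-like color vectors.
h, w = 64, 64
stain_basis = np.array([[0.65, 0.70, 0.29],    # stain 1 (assumed H-like)
                        [0.07, 0.99, 0.11]])   # stain 2 (assumed E-like)
density = rng.uniform(0, 1.5, size=(h * w, 2)) # per-pixel stain densities
od_true = density @ stain_basis
img = (255.0 * np.exp(-od_true)).reshape(h, w, 3)

od = rgb_to_od(img).reshape(-1, 3)
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(od)            # stain density maps (flattened)
H = model.components_                  # estimated stain color basis
print("estimated stain color basis (rows, unnormalized):")
print(np.round(H, 2))
```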

  11. Combined spectroscopic, DFT, TD-DFT and MD study of newly synthesized thiourea derivative

    NASA Astrophysics Data System (ADS)

    Menon, Vidya V.; Sheena Mary, Y.; Shyma Mary, Y.; Panicker, C. Yohannan; Bielenica, Anna; Armaković, Stevan; Armaković, Sanja J.; Van Alsenoy, Christian

    2018-03-01

    A novel thiourea derivative, 1-(3-bromophenyl)-3-[3-(trifluoromethyl)phenyl]thiourea (ANF-22), is synthesized and characterized by FTIR, FT-Raman and NMR spectroscopy, experimentally and theoretically. A detailed conformational analysis of the title molecule has been conducted in order to locate the lowest-energy geometry, which was further subjected to detailed investigation of its spectroscopic, reactivity, degradation and docking properties by density functional theory (DFT) calculations and molecular dynamics (MD) simulations. Time-dependent DFT (TD-DFT) calculations have also been used to simulate UV spectra and investigate charge transfer within the molecule. Natural bond orbital analysis has been performed to analyze charge delocalization, and the electronic properties are analyzed using the HOMO and LUMO energies. The molecular electrostatic potential map is used for the quantitative assessment of active sites in the molecule. In order to determine the locations possibly prone to electrophilic attacks, we have calculated average local ionization energies and mapped them onto the electron density surface. Further insight into the local reactivity properties has been obtained by calculation of Fukui functions, also mapped onto the electron density surface. Possible degradation by the autoxidation mechanism has been assessed by calculation of bond dissociation energies for hydrogen abstraction. Atoms of the title molecule with significant interactions with water molecules have been identified by calculation of radial distribution functions. The title compound can serve as a lead compound for developing a new analgesic drug.

  12. Explosive plasma flows in a solar flare

    NASA Technical Reports Server (NTRS)

    Zarro, Dominic M.; Canfield, Richard C.; Metcalf, Thomas R.; Strong, Keith T.

    1988-01-01

    Solar Maximum Mission soft X-ray data and Sacramento Peak Observatory H-alpha observations are combined in a study of the impulsive phase of a solar flare. A blue asymmetry, indicative of upflows, was observed in the coronal Ca XIX line during the soft X-ray rise phase. A red asymmetry, indicative of downflows, was observed simultaneously in chromospheric H-alpha emitted from bright flare kernels during the period of hard X-ray emission. Combining the velocity data with a measurement of the coronal electron density, it is shown that the impulsive-phase momentum of the upflowing soft X-ray-emitting plasma equalled that of the downflowing H-alpha-emitting plasma to within one order of magnitude. In particular, the momentum of the upflowing plasma was 2 × 10²¹ g cm s⁻¹ while that of the downflowing plasma was 7 × 10²¹ g cm s⁻¹, with a factor of 2 uncertainty on each value. This equality supports the explosive chromospheric evaporation model of solar flares, in which a sudden pressure increase at the footpoints of a coronal loop produces oppositely directed flows in the heated plasma.

  13. The Price of Precision: Large-Scale Mapping of Forest Structure and Biomass Using Airborne Lidar

    NASA Astrophysics Data System (ADS)

    Dubayah, R.

    2015-12-01

    Lidar remote sensing provides one of the best means for acquiring detailed information on forest structure. However, its application over large areas has been limited largely because of its expense. Nonetheless, extant data exist over many states in the U.S., funded largely by state and federal consortia, mainly for infrastructure, emergency response, flood plain and coastal mapping. These lidar data are almost always acquired in leaf-off seasons and, until recently, usually with low point-count densities. Even with these limitations, they provide unprecedented wall-to-wall coverage that enables the development of appropriate methodologies for large-scale deployment of lidar. In this talk we summarize our research and lessons learned in deriving forest structure over regional areas as part of NASA's Carbon Monitoring System (CMS). We focus on two areas: the entire state of Maryland and Sonoma County, California. The Maryland effort used low-density, leaf-off data acquired by each county in varying epochs, while the ongoing Sonoma work employs state-of-the-art, high-density, wall-to-wall, leaf-on lidar data. In each area we combine these lidar coverages with high-resolution multispectral imagery from the National Agricultural Imagery Program (NAIP) and in situ plot data to produce maps of canopy height, tree cover and biomass, and compare our results against FIA plot data and national biomass maps. Our work demonstrates that large-scale mapping of forest structure at high spatial resolution is achievable, but products may be complex to produce and validate over large areas. Furthermore, fundamental issues involving statistical approaches, plot types and sizes, geolocation, modeling scales, allometry, and even the definitions of "forest" and "non-forest" must be approached carefully. Ultimately, determining the "price of precision", that is, whether the value of wall-to-wall forest structure data justifies the expense, should consider not only carbon market applications but also the other ways the underlying lidar data may be used.

  14. A componential model of human interaction with graphs: 1. Linear regression modeling

    NASA Technical Reports Server (NTRS)

    Gillan, Douglas J.; Lewis, Robert

    1994-01-01

    Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes: searching for indicators, encoding the values of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and responding; and (2) that the type of graph and the user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications to the MA-P model, alternative models, and design implications of the MA-P model.
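
    A minimal sketch of the kind of linear regression used to evaluate the MA-P model: response time is regressed on the number of processing steps predicted for each graph/task combination. The step counts and response times are simulated for illustration only.

```python
# Linear regression of simulated response times on predicted processing steps.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

steps = np.repeat(np.arange(3, 9), 20)                    # predicted processing steps
rt = 450 + 230 * steps + rng.normal(0, 180, steps.size)   # response time (ms)

fit = stats.linregress(steps, rt)
print(f"RT = {fit.intercept:.0f} + {fit.slope:.0f}*steps,  R^2 = {fit.rvalue**2:.2f}")
```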

  15. Three-dimensional quantification of vorticity and helicity from 3D cine PC-MRI using finite-element interpolations.

    PubMed

    Sotelo, Julio; Urbina, Jesús; Valverde, Israel; Mura, Joaquín; Tejos, Cristián; Irarrazaval, Pablo; Andia, Marcelo E; Hurtado, Daniel E; Uribe, Sergio

    2018-01-01

    We propose a 3D finite-element method for the quantification of vorticity and helicity density from 3D cine phase-contrast (PC) MRI. By using a 3D finite-element method, we seamlessly estimate velocity gradients in 3D. The robustness and convergence were analyzed using a combined Poiseuille and Lamb-Oseen equation. A computational fluid dynamics simulation was used to compare our method with others available in the literature. Additionally, we computed 3D maps for different 3D cine PC-MRI data sets: a phantom without and with coarctation, 18 healthy volunteers and 3 patients. We found good agreement between our method and the analytical solution of the combined Poiseuille and Lamb-Oseen equation. The computational fluid dynamics results showed that our method outperforms current approaches for estimating vorticity and helicity values. In the in silico model, we observed that for a tetrahedral element of 2 mm characteristic length, we underestimated the vorticity by less than 5% with respect to the analytical solution. In patients, we found higher values of helicity density in comparison to healthy volunteers, associated with vortices in the lumen of the vessels. We propose a novel method that provides entire 3D vorticity and helicity density maps, avoiding the use of reformatted 2D planes from 3D cine PC-MRI. Magn Reson Med 79:541-553, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
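
    A minimal sketch of computing vorticity and helicity density from a 3-D velocity field. Simple central finite differences (np.gradient) are used here instead of the authors' finite-element interpolation, and the rigid-rotation test field is an illustrative assumption.

```python
# Vorticity (curl of velocity) and helicity density h = v . curl(v) from a
# gridded 3-D velocity field, using finite differences.
import numpy as np

def vorticity_helicity(vx, vy, vz, dx, dy, dz):
    # Arrays are indexed [k, j, i] (z, y, x); np.gradient returns derivatives
    # along axes in that order.
    dvx_dz, dvx_dy, dvx_dx = np.gradient(vx, dz, dy, dx)
    dvy_dz, dvy_dy, dvy_dx = np.gradient(vy, dz, dy, dx)
    dvz_dz, dvz_dy, dvz_dx = np.gradient(vz, dz, dy, dx)
    wx = dvz_dy - dvy_dz
    wy = dvx_dz - dvz_dx
    wz = dvy_dx - dvx_dy
    helicity = vx * wx + vy * wy + vz * wz
    return (wx, wy, wz), helicity

# Test field: rigid-body-like rotation about z plus a uniform axial flow.
n = 64
z, y, x = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n),
                      np.linspace(-1, 1, n), indexing="ij")
omega0, w0 = 2.0, 0.5
vx, vy, vz = -omega0 * y, omega0 * x, w0 * np.ones_like(x)
d = 2.0 / (n - 1)

(wx, wy, wz), hel = vorticity_helicity(vx, vy, vz, d, d, d)
print("mean axial vorticity (expected 2*omega0 = 4):", round(wz.mean(), 3))
print("mean helicity density (expected 2*omega0*w0 = 2):", round(hel.mean(), 3))
```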

  16. Implementation of digital equality comparator circuit on memristive memory crossbar array using material implication logic

    NASA Astrophysics Data System (ADS)

    Haron, Adib; Mahdzair, Fazren; Luqman, Anas; Osman, Nazmie; Junid, Syed Abdul Mutalib Al

    2018-03-01

    One of the most significant constraints of the von Neumann architecture is the limited bandwidth between memory and processor. The cost of moving data back and forth between memory and processor is considerably higher than that of the computation in the processor itself. This architecture significantly impacts Big Data and data-intensive applications, such as DNA analysis and comparison, which spend most of their processing time moving data. Recently, the in-memory processing concept was proposed, which is based on the capability to perform logic operations on the physical memory structure using a crossbar topology and non-volatile resistive-switching memristor technology. This paper proposes a scheme to map a digital equality comparator circuit onto a memristive memory crossbar array. The 2-bit, 4-bit, 8-bit, 16-bit, 32-bit, and 64-bit equality comparator circuits are mapped onto the memristive memory crossbar array using material implication logic in both sequential and parallel methods. The simulation results show that, for the 64-bit word size, the parallel mapping exhibits 2.8× better performance in total execution time than the sequential mapping, but has a trade-off in terms of energy consumption and area utilization. Meanwhile, the total crossbar area can be reduced by 1.2× for sequential mapping and 1.5× for parallel mapping, both by using the overlapping technique.
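
    A minimal functional sketch (not a circuit-level crossbar simulation) showing how an N-bit equality comparator can be expressed using only the material implication (IMPLY) primitive and a FALSE constant, the logic basis referred to above. Bit values are modeled as plain 0/1 integers.

```python
# N-bit equality built from IMPLY: NOT, AND and XNOR are derived operations,
# and equality is the AND of the bitwise XNOR results.
def IMPLY(p, q):
    """Material implication: p -> q, i.e. (NOT p) OR q."""
    return 1 if (p == 0 or q == 1) else 0

def NOT(a):
    return IMPLY(a, 0)                    # a -> FALSE

def AND(a, b):
    return NOT(IMPLY(a, NOT(b)))          # NOT(a -> NOT b)

def XNOR(a, b):
    return AND(IMPLY(a, b), IMPLY(b, a))  # equality of two bits

def equal(word_a, word_b):
    """N-bit equality: AND over bitwise XNOR results."""
    result = 1
    for a_bit, b_bit in zip(word_a, word_b):
        result = AND(result, XNOR(a_bit, b_bit))
    return result

a = [1, 0, 1, 1]   # 4-bit operands (MSB first)
b = [1, 0, 1, 1]
c = [1, 0, 0, 1]
print(equal(a, b), equal(a, c))   # expected: 1 0
```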

  17. JASMINE design and method of data reduction

    NASA Astrophysics Data System (ADS)

    Yamada, Yoshiyuki; Gouda, Naoteru; Yano, Taihei; Kobayashi, Yukiyasu; Niwa, Yoshito

    2008-07-01

    The Japan Astrometry Satellite Mission for Infrared Exploration (JASMINE) aims to construct a map of the Galactic bulge with 10 μarcsec accuracy. We use a z-band CCD to reduce the effect of dust absorption, and observe an area of about 10 × 20 degrees around the Galactic bulge region. Because the stellar density is very high, the individual fields of view can be combined with high accuracy. With 5 years of observation, we will construct a map accurate to 10 μarcsec. In this poster, I will show the observation strategy, the design of the JASMINE hardware, the reduction scheme, and the error budget. We have also constructed simulation software named the JASMINE Simulator, and we show the simulation results and the design of the software.

  18. Elemental X-ray mapping of agglutinated foraminifer tests: A non- destructive technique for determining compositional characteristics.

    USGS Publications Warehouse

    Commeau, R.F.; Reynolds, Leslie A.; Poag, C.W.

    1985-01-01

    The composition of agglutinated foraminiferal tests varies remarkably in response to local substrate characteristics, physicochemical properties of the water column and species-dependent selectivity of test components. We have employed a technique that combines a scanning electron microscope with an energy-dispersive X-ray spectrometer system to identify major and minor elemental constituents of agglutinated foraminiferal walls. As a sample is bombarded with a beam of high-energy electrons, X-rays are generated that are characteristic of the elements present. As a result, X-ray density maps can be produced for each of several elements present in the tests of agglutinated foraminifers.

  19. Combining area-based and individual-level data in the geostatistical mapping of late-stage cancer incidence.

    PubMed

    Goovaerts, Pierre

    2009-01-01

    This paper presents a geostatistical approach to incorporate individual-level data (e.g. patient residences) and area-based data (e.g. rates recorded at census tract level) into the mapping of late-stage cancer incidence, with an application to breast cancer in three Michigan counties. Spatial trends in cancer incidence are first estimated from census data using area-to-point binomial kriging. This prior model is then updated using indicator kriging and individual-level data. Simulation studies demonstrate the benefits of this two-step approach over methods (kernel density estimation and indicator kriging) that process only residence data.

  20. Electronic response of rare-earth magnetic-refrigeration compounds GdX2 (X = Fe and Co)

    NASA Astrophysics Data System (ADS)

    Bhatt, Samir; Ahuja, Ushma; Kumar, Kishor; Heda, N. L.

    2018-05-01

    We present the Compton profiles (CPs) of the rare-earth-transition metal compounds GdX2 (X = Fe and Co) measured using a 740 GBq 137Cs Compton spectrometer. To compare with the experimental momentum densities, we have also computed the CPs, electronic band structure, density of states (DOS) and Mulliken population (MP) using the linear combination of atomic orbitals (LCAO) method. Local density and generalized gradient approximations within density functional theory (DFT), along with hybrids of Hartree-Fock and DFT (B3LYP and PBE0), have been considered within the framework of the LCAO scheme. It is seen that the LCAO-B3LYP based momentum densities give better agreement with the experimental data for both compounds. The energy bands and DOS for both the spin-up and spin-down states show a metallic-like character for the reported intermetallic compounds. The localization of the 3d electrons of Co and Fe has also been discussed in terms of equally normalized CPs and MP data. A discussion of magnetization using the LCAO method is also included.

  1. Study of electronic structure and Compton profiles of transition metal diborides

    NASA Astrophysics Data System (ADS)

    Bhatt, Samir; Heda, N. L.; Kumar, Kishor; Ahuja, B. L.

    2017-08-01

    We report Compton profiles (CPs) of transition metal diborides (MB2; M = Ti and Zr) measured using a 740 GBq 137Cs Compton spectrometer at an intermediate resolution of 0.34 a.u. To validate the experimental momentum densities, we have employed the linear combination of atomic orbitals (LCAO) method to compute the theoretical CPs along with the energy bands, density of states (DOS) and Mulliken's population data. The LCAO computations have been performed in the framework of density functional theory (DFT) and hybrids of Hartree-Fock and DFT (namely B3LYP and PBE0). For both diborides, the CPs based on the revised Perdew-Burke-Ernzerhof exchange-correlation functional (DFT-PBEsol) lead to better agreement with the experimental momentum densities than the other reported approximations. Energy bands, DOS and real-space analysis of the CPs confirm a metallic-like character for both borides. Further, a comparison of the DFT-PBEsol and experimental data on an equal-valence-electron-density scale shows more ionicity in ZrB2 than in TiB2, which is also supported by the Mulliken population based charge-transfer data.

  2. Dense Gas, Dynamical Equilibrium Pressure, and Star Formation in Nearby Star-forming Galaxies

    NASA Astrophysics Data System (ADS)

    Gallagher, Molly J.; Leroy, Adam K.; Bigiel, Frank; Cormier, Diane; Jiménez-Donaire, María J.; Ostriker, Eve; Usero, Antonio; Bolatto, Alberto D.; García-Burillo, Santiago; Hughes, Annie; Kepley, Amanda A.; Krumholz, Mark; Meidt, Sharon E.; Meier, David S.; Murphy, Eric J.; Pety, Jérôme; Rosolowsky, Erik; Schinnerer, Eva; Schruba, Andreas; Walter, Fabian

    2018-05-01

    We use new ALMA observations to investigate the connection between dense gas fraction, star formation rate (SFR), and local environment across the inner region of four local galaxies showing a wide range of molecular gas depletion times. We map HCN (1–0), HCO+ (1–0), CS (2–1), 13CO (1–0), and C18O (1–0) across the inner few kiloparsecs of each target. We combine these data with short-spacing information from the IRAM large program EMPIRE, archival CO maps, tracers of stellar structure and recent star formation, and recent HCN surveys by Bigiel et al. and Usero et al. We test the degree to which changes in the dense gas fraction drive changes in the SFR. I_HCN/I_CO (tracing the dense gas fraction) correlates strongly with I_CO (tracing molecular gas surface density), stellar surface density, and dynamical equilibrium pressure, P_DE. Therefore, I_HCN/I_CO becomes very low and HCN becomes very faint at large galactocentric radii, where ratios as low as I_HCN/I_CO ∼ 0.01 become common. The apparent ability of dense gas to form stars, Σ_SFR/Σ_dense (where Σ_dense is traced by the HCN intensity and the star formation rate is traced by a combination of Hα and 24 μm emission), also depends on environment. Σ_SFR/Σ_dense decreases in regions of high gas surface density, high stellar surface density, and high P_DE. Statistically, these correlations between environment and both Σ_SFR/Σ_dense and I_HCN/I_CO are stronger than that between the apparent dense gas fraction (I_HCN/I_CO) and the apparent molecular gas star formation efficiency Σ_SFR/Σ_mol. We show that these results are not specific to HCN.

  3. Dislocation density of pure copper processed by accumulative roll bonding and equal-channel angular pressing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyajima, Yoji, E-mail: miyajima.y.ab@m.titech.ac.jp; Okubo, Satoshi; Abe, Hiroki

    The dislocation density of pure copper fabricated by two severe plastic deformation (SPD) processes, i.e., accumulative roll bonding and equal-channel angular pressing, was evaluated using scanning transmission electron microscopy/transmission electron microscopy observations. The dislocation density drastically increased from ~10^13 m^-2 to about 5 × 10^14 m^-2, and then saturated, for both SPD processes.

  4. A contrast enhancement method for improving the segmentation of breast lesions on ultrasonography.

    PubMed

    Flores, Wilfrido Gómez; Pereira, Wagner Coelho de Albuquerque

    2017-01-01

    This paper presents an adaptive contrast enhancement method based on a sigmoidal mapping function (SACE) for improving the computerized segmentation of breast lesions on ultrasound. First, an intensity variation map is obtained from the original ultrasound image and used to generate local sigmoidal mapping functions related to distinct contextual regions. Then, a bilinear interpolation scheme is used to transform every original pixel to a new gray level value. In addition, four contrast enhancement techniques widely used in breast ultrasound enhancement are implemented: histogram equalization (HEQ), contrast limited adaptive histogram equalization (CLAHE), fuzzy enhancement (FEN), and sigmoid based enhancement (SEN). These contrast enhancement techniques are also evaluated within a computerized lesion segmentation scheme based on the watershed transformation. The performance comparison among techniques is assessed in terms of both the quality of contrast enhancement and the segmentation accuracy. The former is quantified by a contrast enhancement measure, where the greater the value, the better the contrast enhancement, whereas the latter is calculated by the Jaccard index, which should tend towards unity to indicate adequate segmentation. The experiments consider a data set of 500 breast ultrasound images. The results show that SACE outperforms its counterparts, with median values for the contrast measure of SACE: 139.4, SEN: 68.2, HEQ: 64.1, CLAHE: 62.8, and FEN: 7.9. Considering the segmentation performance, the SACE method attains the highest accuracy, with median Jaccard index values of SACE: 0.81, FEN: 0.80, CLAHE: 0.79, HEQ: 0.77, and SEN: 0.63. The SACE method performs well due to the combination of three elements: (1) the intensity variation map reduces intensity variations that could distort the real response of the mapping function, (2) the sigmoidal mapping function enhances the gray level range where the transition between lesion and background is found, and (3) an adaptive enhancement scheme copes with local contrast. Hence, the SACE approach is appropriate for enhancing contrast before computerized lesion segmentation. Copyright © 2016 Elsevier Ltd. All rights reserved.
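
    As a rough illustration of the idea behind SACE (not the authors' implementation), the sketch below centres a per-pixel sigmoidal gray-level mapping on a smoothed local mean that stands in for the intensity variation map; the window size and gain are illustrative parameters.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def sigmoid_contrast_enhance(img, window=51, gain=8.0):
        """Toy sigmoidal contrast enhancement (illustrative stand-in for SACE).

        A smoothed local mean acts as a crude intensity variation map and centres
        a per-pixel sigmoid, so gray levels near the local background are
        stretched while the extremes are compressed.
        """
        img = img.astype(float)
        x = (img - img.min()) / (img.max() - img.min() + 1e-12)   # normalize to [0, 1]
        centre = uniform_filter(x, size=window)                   # local mean image
        y = 1.0 / (1.0 + np.exp(-gain * (x - centre)))            # sigmoidal mapping
        return (y - y.min()) / (y.max() - y.min() + 1e-12)

    # usage: enhanced = sigmoid_contrast_enhance(ultrasound_image)
    ```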

  5. Construction of Ultradense Linkage Maps with Lep-MAP2: Stickleback F2 Recombinant Crosses as an Example

    PubMed Central

    Rastas, Pasi; Calboli, Federico C. F.; Guo, Baocheng; Shikano, Takahito; Merilä, Juha

    2016-01-01

    High-density linkage maps are important tools for genome biology and evolutionary genetics by quantifying the extent of recombination, linkage disequilibrium, and chromosomal rearrangements across chromosomes, sexes, and populations. They provide one of the best ways to validate and refine de novo genome assemblies, with the power to identify errors in assemblies increasing with marker density. However, assembly of high-density linkage maps is still challenging due to software limitations. We describe Lep-MAP2, a software for ultradense genome-wide linkage map construction. Lep-MAP2 can handle various family structures and can account for achiasmatic meiosis to gain linkage map accuracy. Simulations show that Lep-MAP2 outperforms other available mapping software both in computational efficiency and accuracy. When applied to two large F2-generation recombinant crosses between two nine-spined stickleback (Pungitius pungitius) populations, it produced two high-density (∼6 markers/cM) linkage maps containing 18,691 and 20,054 single nucleotide polymorphisms. The two maps showed a high degree of synteny, but female maps were 1.5–2 times longer than male maps in all linkage groups, suggesting genome-wide recombination suppression in males. Comparison with the genome sequence of the three-spined stickleback (Gasterosteus aculeatus) revealed a high degree of interspecific synteny with a low frequency (<5%) of interchromosomal rearrangements. However, a fairly large (ca. 10 Mb) translocation from autosome to sex chromosome was detected in both maps. These results illustrate the utility and novel features of Lep-MAP2 in assembling high-density linkage maps, and their usefulness in revealing evolutionarily interesting properties of genomes, such as strong genome-wide sex bias in recombination rates. PMID:26668116

  6. Multitracer CMB delensing maps from Planck and WISE data

    NASA Astrophysics Data System (ADS)

    Yu, Byeonghee; Hill, J. Colin; Sherwin, Blake D.

    2017-12-01

    Delensing, the removal of the limiting lensing B-mode background, is crucial for the success of future cosmic microwave background (CMB) surveys in constraining inflationary gravitational waves (IGWs). In recent work, delensing with large-scale structure tracers has emerged as a promising method both for improving constraints on IGWs and for testing delensing methods for future use. However, the delensing fractions (i.e., the fraction of the lensing B-mode power removed) achieved by recent efforts have been only 20%-30%. In this work, we provide a detailed characterization of a full-sky, dust-cleaned cosmic infrared background (CIB) map for delensing and construct a further-improved delensing template by adding additional tracers to increase delensing performance. In particular, we build a multitracer delensing template by combining the dust-cleaned Planck CIB map with a reconstructed CMB lensing map from Planck and a galaxy number density map from the Wide-field Infrared Survey Explorer (WISE) satellite. For this combination, we calculate the relevant weightings by fitting smooth templates to measurements of all the cross-spectra and autospectra of these maps. On a large fraction of the sky (fsky = 0.43), we demonstrate that our maps are capable of providing a delensing factor of 43 ± 1%; using a more restrictive mask (fsky = 0.11), the delensing factor reaches 48 ± 1%. For low-noise surveys, our delensing maps, which cover much of the sky, can thus improve constraints on the tensor-to-scalar ratio (r) by nearly a factor of 2. The delensing tracer maps are made publicly available, and we encourage their use in ongoing and upcoming B-mode surveys.

  7. Molecular surface mesh generation by filtering electron density map.

    PubMed

    Giard, Joachim; Macq, Benoît

    2010-01-01

    Bioinformatics methods applied to macromolecules are now widespread and expanding rapidly. In this context, representing external molecular surfaces such as the Van der Waals surface or the solvent-excluded surface can be useful for several applications. We propose a fast and parameterizable algorithm giving good visual quality meshes representing molecular surfaces. The mesh is obtained by isosurfacing a filtered electron density map. The density map is the result of the maximum of Gaussian functions placed around atom centers. This map is filtered by an ideal low-pass filter applied to the Fourier transform of the density map. Applying the marching cubes algorithm to the inverse transform provides a mesh representation of the molecular surface.
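
    A minimal sketch of this pipeline is shown below: a Gaussian density is built around atom centres, low-pass filtered in Fourier space, and isosurfaced. It assumes scikit-image's marching_cubes routine and illustrative grid, cutoff and isovalue settings rather than the authors' parameters.

    ```python
    import numpy as np
    from skimage.measure import marching_cubes  # assumes scikit-image >= 0.19

    def molecular_surface(centers, radii, grid_step=0.5, pad=4.0,
                          cutoff=0.35, iso=0.5):
        """Isosurface a low-pass-filtered Gaussian density built around atoms."""
        centers = np.asarray(centers, float)
        origin = centers.min(axis=0) - pad
        top = centers.max(axis=0) + pad
        axes = [np.arange(a, b, grid_step) for a, b in zip(origin, top)]
        X, Y, Z = np.meshgrid(*axes, indexing="ij")
        density = np.zeros(X.shape)
        for (cx, cy, cz), r in zip(centers, radii):
            g = np.exp(-((X - cx)**2 + (Y - cy)**2 + (Z - cz)**2) / (2.0 * r**2))
            density = np.maximum(density, g)   # maximum of Gaussians around atoms
        # ideal low-pass filter applied to the Fourier transform of the density map
        F = np.fft.fftn(density)
        freqs = np.meshgrid(*[np.fft.fftfreq(n, d=grid_step) for n in density.shape],
                            indexing="ij")
        k = np.sqrt(sum(f**2 for f in freqs))
        F[k > cutoff] = 0.0
        smooth = np.fft.ifftn(F).real
        verts, faces, _, _ = marching_cubes(smooth, level=iso,
                                            spacing=(grid_step,) * 3)
        return verts + origin, faces
    ```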

  8. Updated sesame genome assembly and fine mapping of plant height and seed coat color QTLs using a new high-density genetic map.

    PubMed

    Wang, Linhai; Xia, Qiuju; Zhang, Yanxin; Zhu, Xiaodong; Zhu, Xiaofeng; Li, Donghua; Ni, Xuemei; Gao, Yuan; Xiang, Haitao; Wei, Xin; Yu, Jingyin; Quan, Zhiwu; Zhang, Xiurong

    2016-01-05

    Sesame is an important high-quality oil seed crop. The sesame genome was de novo sequenced and assembled in 2014 (version 1.0); however, the number of anchored pseudomolecules was higher than the chromosome number (2n = 2x = 26) due to the lack of a high-density genetic map with 13 linkage groups. We resequenced a permanent population consisting of 430 recombinant inbred lines and constructed a genetic map to improve the sesame genome assembly. We successfully anchored 327 scaffolds onto 13 pseudomolecules. The new genome assembly (version 2.0) included 97.5 % of the scaffolds greater than 150 kb in size present in assembly version 1.0 and increased the total pseudomolecule length from 233.7 to 258.4 Mb, with 94.3 % of the genome assembled and 97.2 % of the predicted gene models anchored. Based on the new genome assembly, a bin map including 1,522 bins spanning 1090.99 cM was generated and used to identify 41 quantitative trait loci (QTLs) for sesame plant height and 9 for seed coat color. The plant height-related QTLs explained 3-24 % of the phenotypic variation (mean value, 8 %), and 29 of them were detected in at least two field trials. Two major loci (qPH-8.2 and qPH-3.3) that contributed 23 and 18 % of the plant height variation were located in 350-kb and 928-kb intervals on Chr8 and Chr3, respectively. qPH-3.3 is predicted to be responsible for the semi-dwarf sesame plant phenotype and contains 102 candidate genes. This is the first report of a sesame semi-dwarf locus and provides an interesting opportunity for plant architecture studies of sesame. For sesame seed coat color, QTLs for the color-space values L*, a*, and b* were detected with contribution rates of 3-46 %. qSCb-4.1 contributed approximately 39 % of the b* value and was located on Chr4 in a 199.9-kb interval. A list of 32 candidate genes for the locus, including a predicted black seed coat-related gene, was determined by screening the newly anchored genome. This study offers a high-density genetic map and an improved assembly of the sesame genome. The number of linkage groups and pseudomolecules in this assembly equals the number of sesame chromosomes for the first time. The map and updated genome assembly are expected to serve as a platform for future comparative genomics and genetic studies.

  9. Earth mapping - aerial or satellite imagery comparative analysis

    NASA Astrophysics Data System (ADS)

    Fotev, Svetlin; Jordanov, Dimitar; Lukarski, Hristo

    Revising existing map products and creating new maps requires choosing a source of land cover imagery. The trade-off between the effectiveness and cost of aerial mapping systems and the efficiency and cost of very-high-resolution satellite imagery is topical [1, 2, 3, 4]. The price of any remotely sensed image depends on the product (panchromatic or multispectral), resolution, processing level, scale, urgency of the task and on whether the needed image is available in the archive or has to be requested. The purpose of the present work is twofold: to make a comparative analysis of the two approaches to mapping the Earth with respect to quality and cost, and to suggest an approach for selecting the source of map information, that is, airplane-based or spacecraft-based imaging systems with very high spatial resolution. Two cases are considered: an area approximately equal to one satellite scene and an area approximately equal to the territory of Bulgaria.

  10. The NOD3 software package: A graphical user interface-supported reduction package for single-dish radio continuum and polarisation observations

    NASA Astrophysics Data System (ADS)

    Müller, Peter; Krause, Marita; Beck, Rainer; Schmidt, Philip

    2017-10-01

    Context. The venerable NOD2 data reduction software package for single-dish radio continuum observations, which was developed for use at the 100-m Effelsberg radio telescope, has been successfully applied over many decades. Modern computing facilities, however, call for a new design. Aims: We aim to develop an interactive software tool with a graphical user interface for the reduction of single-dish radio continuum maps. We make a special effort to reduce the distortions along the scanning direction (scanning effects) by combining maps scanned in orthogonal directions or dual- or multiple-horn observations that need to be processed in a restoration procedure. The package should also process polarisation data and offer the possibility to include special tasks written by the individual user. Methods: Based on the ideas of the NOD2 package we developed NOD3, which includes all necessary tasks from the raw maps to the final maps in total intensity and linear polarisation. Furthermore, plot routines and several methods for map analysis are available. The NOD3 package is written in Python, which allows the extension of the package via additional tasks. The required data format for the input maps is FITS. Results: The NOD3 package is a sophisticated tool to process and analyse maps from single-dish observations that are affected by scanning effects from clouds, receiver instabilities, or radio-frequency interference. The "basket-weaving" tool combines orthogonally scanned maps into a final map that is almost free of scanning effects. The new restoration tool for dual-beam observations reduces the noise by a factor of about two compared to the NOD2 version. Combining single-dish with interferometer data in the map plane ensures the full recovery of the total flux density. Conclusions: This software package is available under the open source license GPL for free use at other single-dish radio telescopes of the astronomical community. The NOD3 package is designed to be extendable to multi-channel data represented by data cubes in Stokes I, Q, and U.

  11. The Magnetic Field Structure of W3(OH)

    NASA Astrophysics Data System (ADS)

    El-Batal, Adham M.; Clemens, Dan P.; Montgomery, Jordan

    2018-06-01

    Situated in the Perseus arm of the Galaxy, the W3 molecular cloud is a high-mass star-forming region with low foreground optical extinction. Near-infrared H- and K-band polarimetric observations of a 10' × 10' field of view of W3 were obtained using the Mimir instrument on the 1.8 m Perkins Telescope. This field of view encompasses W3(OH), a region of OH and H2O masers as well as an HII region. The H-band data were used in conjunction with Spitzer M-band data to map extinction via the H-M color excess. In total, 2654 stellar objects were found in the Mimir field of view, of which 1013 had H and M magnitudes with low errors. Using the extinction map and the individual stellar H-M color excess values, 429 stars with polarized signals were found to be background to the molecular cloud. These were useful for mapping the magnetic field structure and estimating the magnetic field strength of the cloud. Mid- to far-infrared (70 - 870 μm) archival Herschel and Planck data were used to map dust extinction at 850 μm and create an H2 column density map. Combined, maps of magnetic field strength and hydrogen column density can be used to infer the mass-to-magnetic-flux ratio (M/Φ). Findings are discussed in the context of the M/Φ ratio in models and the stability of high-mass star-forming regions. This work has been supported by NSF AST14-12269 and NASA NNX15AE51G grants.

  12. The ATLASGAL survey: distribution of cold dust in the Galactic plane. Combination with Planck data

    NASA Astrophysics Data System (ADS)

    Csengeri, T.; Weiss, A.; Wyrowski, F.; Menten, K. M.; Urquhart, J. S.; Leurini, S.; Schuller, F.; Beuther, H.; Bontemps, S.; Bronfman, L.; Henning, Th.; Schneider, N.

    2016-01-01

    Context. Sensitive ground-based submillimeter surveys, such as ATLASGAL, provide a global view on the distribution of cold dense gas in the Galactic plane at up to two times better angular resolution than recent space-based surveys with Herschel. However, a drawback of ground-based continuum observations is that they intrinsically filter emission, at angular scales larger than a fraction of the field-of-view of the array, when subtracting the sky noise in the data processing. The lost information on the distribution of diffuse emission can, however, be recovered from space-based all-sky surveys with Planck. Aims: Here we aim to demonstrate how this information can be used to complement ground-based bolometer data, and we present reprocessed maps of the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL). Methods: We use the maps at 353 GHz from the Planck/HFI instrument, which performed a high-sensitivity all-sky survey at a frequency close to that of the APEX/LABOCA array, which is centred on 345 GHz. Complementing the ground-based observations with information on larger angular scales, the resulting maps reveal the distribution of cold dust in the inner Galaxy with a larger spatial dynamic range. We visually describe the observed features and assess the global properties of the dust distribution. Results: Adding information from large angular scales helps to better identify the global properties of the cold Galactic interstellar medium. To illustrate this, we provide mass estimates from the dust towards the W43 star-forming region and estimate a column density contrast of at least a factor of five between a low-intensity halo and the star-forming ridge. We also show examples of elongated structures extending over angular scales of 0.5°, which we refer to as thin giant filaments. Corresponding to > 30 pc structures in projection at a distance of 3 kpc, these dust lanes are very extended and show large aspect ratios. We assess the fraction of dense gas by determining the contribution of the APEX/LABOCA maps to the combined maps, and estimate 2-5% for the dense gas fraction (corresponding to A_V > 7 mag) on average in the Galactic plane. We also show probability distribution functions of the column density (N-PDF), which reveal the typically observed log-normal distribution at low column density and exhibit an excess at high column densities. As a reference for extragalactic studies, we show the line-of-sight integrated N-PDF of the inner Galaxy, and derive a contribution of this excess to the total column density of ~2.2%, corresponding to N_H2 = 2.92 × 10^22 cm^-2. Taking the total flux density observed in the maps, we provide an independent estimate of the mass of molecular gas in the inner Galaxy of ~1 × 10^9 M⊙, which is consistent with previous estimates using CO emission. From the mass and dense gas fraction (fDG), we estimate a Galactic SFR of Ṁ = 1.3 M⊙ yr^-1. Conclusions: Retrieving the extended emission helps to better identify massive giant filaments, which are elongated and confined structures. We show that the log-normal distribution of low column density gas is ubiquitous in the inner Galaxy. While the distribution of diffuse gas is relatively homogeneous in the inner Galaxy, the central molecular zone (CMZ) stands out with a higher dense gas fraction despite its low star formation efficiency. Altogether, our findings explain the observed low star formation efficiency of the Milky Way by the low fDG in the Galactic ISM. In contrast, the high fDG observed towards the CMZ, despite its low star formation activity, suggests that, in that particular region of our Galaxy, high density gas is not the bottleneck for star formation.

  13. The CARMA-NRO Orion Survey

    NASA Astrophysics Data System (ADS)

    Kong, Shuo; Arce, Héctor G.; Feddersen, Jesse R.; Carpenter, John M.; Nakamura, Fumitaka; Shimajiri, Yoshito; Isella, Andrea; Ossenkopf-Okada, Volker; Sargent, Anneila I.; Sánchez-Monge, Álvaro; Suri, Sümeyye T.; Kauffmann, Jens; Pillai, Thushara; Pineda, Jaime E.; Koda, Jin; Bally, John; Lis, Dariusz C.; Padoan, Paolo; Klessen, Ralf; Mairs, Steve; Goodman, Alyssa; Goldsmith, Paul; McGehee, Peregrine; Schilke, Peter; Teuben, Peter J.; José Maureira, María; Hara, Chihomi; Ginsburg, Adam; Burkhart, Blakesley; Smith, Rowan J.; Schmiedeke, Anika; Pineda, Jorge L.; Ishii, Shun; Sasaki, Kazushige; Kawabe, Ryohei; Urasawa, Yumiko; Oyamada, Shuri; Tanabe, Yoshihiro

    2018-06-01

    We present the first results from a new, high-resolution 12CO(1–0), 13CO(1–0), and C18O(1–0) molecular-line survey of the Orion A cloud, hereafter referred to as the CARMA-NRO Orion Survey. CARMA observations have been combined with single-dish data from the Nobeyama 45 m telescope to provide extended images at about 0.01 pc resolution, with a dynamic range of approximately 1200 in spatial scale. Here we describe the practical details of the data combination in uv space, including flux scale matching, the conversion of single-dish data to visibilities, and joint deconvolution of single-dish and interferometric data. A Δ-variance analysis indicates that no artifacts are caused by combining data from the two instruments. Initial analysis of the data cubes, including moment maps, average spectra, channel maps, position–velocity diagrams, excitation temperature, column density, and line ratio maps, provides evidence of complex and interesting structures such as filaments, bipolar outflows, shells, bubbles, and photo-eroded pillars. The implications for star formation processes are profound, and follow-up scientific studies by the CARMA-NRO Orion team are now underway. We plan to make all the data products described here generally accessible; some are already available at https://dataverse.harvard.edu/dataverse/CARMA-NRO-Orion.
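
    The survey's actual combination is done in uv space with joint deconvolution; the sketch below is only a simplified, image-plane "feathering" analogue, assuming a Gaussian single-dish beam, to show how large angular scales from a single-dish map and small scales from an interferometer map can be merged in Fourier space.

    ```python
    import numpy as np

    def feather(single_dish, interferometer, beam_fwhm_pix):
        """Blend a low-resolution and a high-resolution map in Fourier space.

        Weights follow the Fourier transform of the single-dish Gaussian beam:
        large angular scales are taken from the single dish, small scales from
        the interferometer map. Both inputs must share the same grid and units.
        """
        ny, nx = single_dish.shape
        ky = np.fft.fftfreq(ny)[:, None]
        kx = np.fft.fftfreq(nx)[None, :]
        k2 = kx**2 + ky**2
        sigma = beam_fwhm_pix / 2.3548                     # FWHM -> sigma (pixels)
        beam_ft = np.exp(-2.0 * (np.pi * sigma)**2 * k2)   # FT of the Gaussian beam
        combined = beam_ft * np.fft.fft2(single_dish) + \
                   (1.0 - beam_ft) * np.fft.fft2(interferometer)
        return np.fft.ifft2(combined).real
    ```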

  14. Pyroclastic density current hazard maps at Campi Flegrei caldera (Italy): the effects of event scale, vent location and time forecasts.

    NASA Astrophysics Data System (ADS)

    Bevilacqua, Andrea; Neri, Augusto; Esposti Ongaro, Tomaso; Isaia, Roberto; Flandoli, Franco; Bisson, Marina

    2016-04-01

    Today hundreds of thousands of people live inside the Campi Flegrei caldera (Italy) and in the adjacent part of the city of Naples, making a future eruption of this volcano an event with huge consequences. Very high risks are associated with the occurrence of pyroclastic density currents (PDCs). Mapping the background, or long-term, PDC hazard in the area is a great challenge due to the unknown eruption time, scale and vent location of the next event, as well as the complex dynamics of the flow over the caldera topography. This is further complicated by the remarkable epistemic uncertainty on the eruptive record, affecting the timing of past events, the location of vents, and the estimates of PDC areal extent. First probability maps of PDC invasion were produced by combining a vent-opening probability map, statistical estimates of eruptive scales, and a Cox-type temporal model including self-excitement effects, based on the eruptive record of the last 15 kyr. The maps were produced using a Monte Carlo approach and a simplified inundation model based on the "box model" integral approximation, tested against 2D transient numerical simulations of flow dynamics. In this presentation we illustrate the independent effects of the eruption scale, vent location and forecast time of the next event. Specific focus was given to the remarkable differences between the eastern and western sectors of the caldera and their effects on the hazard maps. The analysis allowed us to identify areas with elevated probability of flow invasion as a function of the assumptions made. By quantifying some of the sources of uncertainty, we were also able to provide mean and percentile maps of PDC hazard levels.
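
    A minimal Monte Carlo sketch of how such invasion-probability maps can be assembled is shown below: each trial samples a vent location from a vent-opening probability grid and a runout distance standing in for the box-model event scale. The circular footprint and the example scale distribution are illustrative, not the authors' calibrated model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pdc_invasion_probability(vent_prob, runout_km_sampler,
                                 cell_km=0.5, n_sims=20_000):
        """Monte Carlo map of PDC invasion probability on a regular grid.

        vent_prob         : 2D array of vent-opening probabilities
        runout_km_sampler : callable returning a random runout distance (km)
        """
        ny, nx = vent_prob.shape
        flat = vent_prob.ravel() / vent_prob.sum()
        yy, xx = np.mgrid[0:ny, 0:nx]
        hits = np.zeros((ny, nx))
        for _ in range(n_sims):
            vent = rng.choice(flat.size, p=flat)          # sample vent location
            vy, vx = divmod(vent, nx)
            runout_cells = runout_km_sampler() / cell_km  # sample event scale
            hits += np.hypot(yy - vy, xx - vx) <= runout_cells
        return hits / n_sims

    # illustrative scale distribution: log-normal runout in km
    # prob_map = pdc_invasion_probability(vent_prob, lambda: rng.lognormal(1.2, 0.5))
    ```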

  15. Predicting and mapping soil available water capacity in Korea.

    PubMed

    Hong, Suk Young; Minasny, Budiman; Han, Kyung Hwa; Kim, Yihyun; Lee, Kyungdo

    2013-01-01

    Knowledge of the spatial distribution of soil available water capacity at a regional or national extent is essential, as soil water capacity is a component of the water and energy balances in the terrestrial ecosystem. It controls the evapotranspiration rate and has a major impact on climate. This paper demonstrates a protocol for mapping soil available water capacity in South Korea at a fine scale using data available from surveys. The procedure combined digital soil mapping technology with the available 1:25,000 soil map. We used the modal profile data from the Taxonomical Classification of Korean Soils. The data consist of profile descriptions along with physical and chemical analyses for the modal profiles of the 380 soil series. However, not all soil samples have measured bulk density and water content at -10 and -1500 kPa, so these need to be predicted using pedotransfer functions. Furthermore, water content at -10 kPa was measured on ground samples, so a correction factor was derived to take into account the effect of bulk density. Results showed that Andisols have the highest mean water storage capacity, followed by Entisols and Inceptisols with loamy texture; the lowest water retention is found in Entisols dominated by sandy materials. Profile available water capacity to a depth of 1 m was calculated and mapped for Korea. The western part of the country shows higher available water capacity than the eastern part, which is mountainous and has shallower soils. The soils with the highest water storage capacity are the Ultisols and Alfisols (means of 206 and 205 mm, respectively). Validation of the maps showed promising results. The map produced can be used as an indication of the physical quality of Korean soils.
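
    For reference, the profile available water capacity mapped here is essentially the water held between -10 kPa (field capacity) and -1500 kPa (permanent wilting point) summed over horizons to the target depth; a minimal sketch with illustrative horizon values follows (the pedotransfer and bulk-density corrections of the paper are omitted).

    ```python
    def profile_awc_mm(horizons):
        """Profile available water capacity (mm) summed over soil horizons.

        Each horizon dict holds thickness_cm and volumetric water contents
        (cm3/cm3) at -10 kPa (theta_fc) and -1500 kPa (theta_pwp).
        """
        return sum((h["theta_fc"] - h["theta_pwp"]) * h["thickness_cm"] * 10.0
                   for h in horizons)   # cm of water * 10 -> mm

    # illustrative loamy profile to 1 m depth
    profile = [
        {"thickness_cm": 30, "theta_fc": 0.32, "theta_pwp": 0.14},
        {"thickness_cm": 70, "theta_fc": 0.28, "theta_pwp": 0.12},
    ]
    print(profile_awc_mm(profile))   # 166 mm, the same order as the mapped values
    ```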

  16. Mapping the integrated Sachs-Wolfe effect

    NASA Astrophysics Data System (ADS)

    Manzotti, A.; Dodelson, S.

    2014-12-01

    On large scales, the anisotropies in the cosmic microwave background (CMB) reflect not only the primordial density field but also the energy gain when photons traverse decaying gravitational potentials of large-scale structure, the so-called integrated Sachs-Wolfe (ISW) effect. Decomposing the anisotropy signal into a primordial piece and an ISW component, the main secondary effect on large scales, is more urgent than ever as cosmologists strive to understand the Universe on those scales. We present a likelihood technique for extracting the ISW signal by combining measurements of the CMB, the distribution of galaxies, and maps of gravitational lensing. We test this technique with simulated data, showing that we can successfully reconstruct the ISW map using all the data sets together. We then present the ISW map obtained from a combination of real data: the NRAO VLA Sky Survey (NVSS) galaxy survey, and temperature anisotropies and lensing maps made by the Planck satellite. This map shows that, with the data sets used and assuming linear physics, there is no evidence from the reconstructed ISW signal in the Cold Spot region for an entirely ISW origin of this large-scale anomaly in the CMB. However, a large-scale structure origin from low-redshift voids outside the NVSS redshift range is still possible. Finally, we show that future surveys, thanks to better large-scale lensing reconstruction, will be able to improve the reconstruction signal-to-noise ratio, which currently comes mainly from galaxy surveys.

  17. Experimental Research on Selective Laser Melting AlSi10Mg Alloys: Process, Densification and Performance

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Wei, Zhengying; Wei, Pei; Chen, Shenggui; Lu, Bingheng; Du, Jun; Li, Junfeng; Zhang, Shuzhe

    2017-12-01

    In this work, a set of experiments was designed to investigate the effect of process parameters on the relative density of AlSi10Mg parts manufactured by SLM. The influence of the dominant parameters, namely laser scan speed v, laser power P and hatch spacing H, on powder melting and densification behavior was studied experimentally. In addition, the laser energy density was introduced to evaluate the combined effect of these parameters and thereby control the SLM process as a whole. As a result, a high relative density (> 97%) was obtained by SLM at an optimized laser energy density of 3.5-5.5 J/mm^2. Moreover, a parameter-densification map was established to visually select the optimum process parameters for SLM-processed AlSi10Mg parts with elevated density and the required mechanical properties. The results provide important experimental guidance for obtaining AlSi10Mg components with full density and gradient functional porosity by SLM.
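
    The areal laser energy density referred to above is commonly taken as E = P / (v * H); the small sketch below, with illustrative parameter values, checks a parameter set against the reported 3.5-5.5 J/mm^2 optimum window.

    ```python
    def areal_energy_density(power_w, scan_speed_mm_s, hatch_mm):
        """Areal laser energy density E = P / (v * H) in J/mm^2."""
        return power_w / (scan_speed_mm_s * hatch_mm)

    # illustrative values: 350 W laser power, 1150 mm/s scan speed, 0.08 mm hatch spacing
    E = areal_energy_density(350, 1150, 0.08)
    print(f"{E:.2f} J/mm^2 -> inside optimum window: {3.5 <= E <= 5.5}")
    ```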

  18. Terrain Correction on the moving equal area cylindrical map projection of the surface of a reference ellipsoid

    NASA Astrophysics Data System (ADS)

    Ardalan, A.; Safari, A.; Grafarend, E.

    2003-04-01

    An operational algorithm has been developed for computing the ellipsoidal terrain correction based on the closed-form solution of the Newton integral in terms of Cartesian coordinates on the cylindrical equal-area map projection of the surface of a reference ellipsoid. As a first step, the mapping of points on the surface of a reference ellipsoid onto the cylindrical equal-area projection of a cylinder tangent to a point on the ellipsoid surface was closely studied and the map projection formulas were derived. Ellipsoidal mass elements of various sizes on the surface of the reference ellipsoid were considered, and the gravitational potential and the gravitational intensity vector of these mass elements were computed via the solution of the Newton integral in terms of ellipsoidal coordinates. The geographical cross-section areas of the selected ellipsoidal mass elements were transferred into the cylindrical equal-area map projection, and based on the transformed area elements, Cartesian mass elements with the same height as the ellipsoidal mass elements were constructed. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the potential of the Cartesian mass elements was computed and compared with the corresponding results based on the ellipsoidal Newton integral over the ellipsoidal mass elements. The numerical computations show that the difference between the computed gravitational potential of the ellipsoidal mass elements and that of the Cartesian mass elements in the cylindrical equal-area map projection is of the order of 1.6 × 10^-8 m^2/s^2 for a mass element with a cross-section of 10 km × 10 km and a height of 1000 m. For a 1 km × 1 km mass element with the same height, the difference is less than 1.5 × 10^-4 m^2/s^2. The numerical computations thus indicate that a new method for computing the terrain correction has been achieved, based on the closed-form solution of the Newton integral in terms of Cartesian coordinates while retaining the accuracy of the ellipsoidal terrain correction. In this way one can enjoy the simplicity of the solution of the Newton integral in terms of Cartesian coordinates and at the same time the accuracy of the ellipsoidal terrain correction, which is needed for the modern theory of geoid computations.
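
    For orientation, the normal-aspect cylindrical equal-area projection is sketched below in its simpler spherical approximation (not the ellipsoidal formulas derived in the paper); the radius R and the standard parallel are illustrative parameters.

    ```python
    import numpy as np

    R = 6371.0  # mean Earth radius in km (spherical approximation)

    def cea_forward(lon_deg, lat_deg, lon0_deg=0.0, lat_ts_deg=0.0):
        """Cylindrical equal-area projection of the sphere.

        x = R (lon - lon0) cos(lat_ts),  y = R sin(lat) / cos(lat_ts):
        equal areas on the sphere map to equal areas in the plane.
        """
        lon, lat = np.radians(lon_deg), np.radians(lat_deg)
        lon0, lat_ts = np.radians(lon0_deg), np.radians(lat_ts_deg)
        return R * (lon - lon0) * np.cos(lat_ts), R * np.sin(lat) / np.cos(lat_ts)

    def cea_inverse(x, y, lon0_deg=0.0, lat_ts_deg=0.0):
        lat_ts = np.radians(lat_ts_deg)
        lat = np.arcsin(np.clip(y * np.cos(lat_ts) / R, -1.0, 1.0))
        lon = np.radians(lon0_deg) + x / (R * np.cos(lat_ts))
        return np.degrees(lon), np.degrees(lat)
    ```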

  19. Ab initio study of magnetocrystalline anisotropy, magnetostriction, and Fermi surface of L10 FeNi (tetrataenite)

    NASA Astrophysics Data System (ADS)

    Werwiński, Mirosław; Marciniak, Wojciech

    2017-12-01

    We present results of ab initio calculations of several L10 FeNi characteristics, including a summary of the magnetocrystalline anisotropy energies (MAEs), full-potential calculations of the anisotropy constant K3, and a combined analysis of the Fermi surface and the 3D k-resolved MAE. Other calculated parameters are the spin and orbital magnetic moments, the magnetostrictive coefficient λ001, the bulk modulus B0, and the lattice parameters. The MAE summary shows rather large discrepancies among the experimental MAEs from the literature and also among the calculated MAEs. The MAEs calculated in this work with the full potential and the generalized gradient approximation (GGA) are equal to 0.47 MJ m^-3 from WIEN2k, 0.34 MJ m^-3 from FPLO, and 0.23 MJ m^-3 from the FP-SPR-KKR code. These results suggest that the MAE in GGA is below 0.5 MJ m^-3. It is expected that, due to the limitations of the GGA, this value is underestimated. L10 FeNi has further potential to improve its MAE by modifications such as tetragonal strain or alloying. The presented 3D k-resolved map of the MAE combined with the Fermi surface gives a complete picture of the MAE contributions in the Brillouin zone. The magnetocrystalline anisotropy constants K2 and K3 obtained from the full-potential FP-SPR-KKR method are several orders of magnitude smaller than the MAE/K1 and equal to -2.0 kJ m^-3 and 110 J m^-3, respectively. The calculated spin and orbital magnetic moments of L10 FeNi are equal to 2.72 and 0.054 μB for Fe and 0.53 and 0.039 μB for Ni atoms, respectively. The geometry optimization leads to a c/a ratio equal to 1.0036, B0 equal to 194 GPa, and λ001 equal to 9.4 × 10^-6.

  20. Flood Detection/Monitoring Using Adjustable Histogram Equalization Technique

    PubMed Central

    Riaz, Muhammad Mohsin; Ghafoor, Abdul

    2014-01-01

    A flood monitoring technique using adjustable histogram equalization is proposed. The technique overcomes the limitations of the existing technique (over-enhancement, artifacts, and an unnatural look) by adjusting the contrast of images. The proposed technique takes pre- and post-event images and applies different processing steps to generate a flood map without user interaction. The resulting flood maps can be used for flood monitoring and detection. Simulation results show that the proposed technique provides better output quality than the existing state-of-the-art technique. PMID:24558332
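
    A bare-bones sketch of the overall idea follows: equalize the pre- and post-event images, difference them, and threshold the change into a binary flood map. The adjustable equalization of the paper is replaced here by plain histogram equalization, and the threshold value is illustrative.

    ```python
    import numpy as np

    def equalize(img, bins=256):
        """Plain histogram equalization to the [0, 1] range."""
        hist, edges = np.histogram(img.ravel(), bins=bins)
        cdf = hist.cumsum().astype(float)
        cdf /= cdf[-1]
        return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)

    def flood_map(pre, post, threshold=0.3):
        """Binary flood map: strong drops in equalized brightness between images."""
        change = equalize(pre) - equalize(post)   # water typically darkens the scene
        return change > threshold
    ```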

  1. Construction and Annotation of a High Density SNP Linkage Map of the Atlantic Salmon (Salmo salar) Genome.

    PubMed

    Tsai, Hsin Y; Robledo, Diego; Lowe, Natalie R; Bekaert, Michael; Taggart, John B; Bron, James E; Houston, Ross D

    2016-07-07

    High density linkage maps are useful tools for fine-scale mapping of quantitative trait loci, and characterization of the recombination landscape of a species' genome. Genomic resources for Atlantic salmon (Salmo salar) include a well-assembled reference genome, and high density single nucleotide polymorphism (SNP) arrays. Our aim was to create a high density linkage map, and to align it with the reference genome assembly. Over 96,000 SNPs were mapped and ordered on the 29 salmon linkage groups using a pedigreed population comprising 622 fish from 60 nuclear families, all genotyped with the 'ssalar01' high density SNP array. The number of SNPs per group showed a high positive correlation with physical chromosome length (r = 0.95). While the order of markers on the genetic and physical maps was generally consistent, areas of discrepancy were identified. Approximately 6.5% of the previously unmapped reference genome sequence was assigned to chromosomes using the linkage map. The male recombination rate was lower than the female rate across the vast majority of the genome, but with a notable peak in subtelomeric regions. Finally, using RNA-Seq data to annotate the reference genome, the mapped SNPs were categorized according to their predicted function, including annotation of ∼2500 putative nonsynonymous variants. The highest density SNP linkage map for any salmonid species has been created, annotated, and integrated with the Atlantic salmon reference genome assembly. This map highlights the marked heterochiasmy of salmon, and provides a useful resource for salmonid genetics and genomics research. Copyright © 2016 Tsai et al.

  2. Effect of Etomidate Versus Combination of Propofol-Ketamine and Thiopental-Ketamine on Hemodynamic Response to Laryngoscopy and Intubation: A Randomized Double Blind Clinical Trial.

    PubMed

    Gholipour Baradari, Afshin; Firouzian, Abolfazl; Zamani Kiasari, Alieh; Aarabi, Mohsen; Emadi, Seyed Abdollah; Davanlou, Ali; Motamed, Nima; Yousefi Abdolmaleki, Ensieh

    2016-02-01

    Laryngoscopy and intubation, frequently used for airway management during general anesthesia, are often associated with undesirable hemodynamic disturbances. The aim of this study was to compare the effects of etomidate, a combination of propofol-ketamine, and thiopental-ketamine as induction agents on the hemodynamic response to laryngoscopy and intubation. In a double-blind, randomized clinical trial, a total of 120 adult patients of both sexes, aged 18 - 45 years, scheduled for elective surgery under general anesthesia were randomly assigned to three equally sized groups. Patients in group A received etomidate (0.3 mg/kg) plus normal saline as placebo. Patients in groups B and C received propofol (1.5 mg/kg) plus ketamine (0.5 mg/kg) and thiopental sodium (3 mg/kg) plus ketamine (0.5 mg/kg), respectively, for anesthesia induction. Before laryngoscopy and tracheal intubation, immediately after, and also one and three minutes after the procedures, hemodynamic values (SBP, DBP, MAP and HR) were measured. A repeated-measures ANOVA showed significant changes in mean SBP and DBP between the time points (P < 0.05). In addition, the main effects for MAP and HR were statistically significant during the course of the study (P < 0.05). Furthermore, after induction of anesthesia, the three study groups had significantly different SBP, DBP and MAP changes over time (P < 0.05). However, HR changes over time were not statistically significant (P > 0.05). The combination of propofol-ketamine showed superior hemodynamic stability compared to the other induction agents and may be recommended as an effective and safe choice for attenuating hemodynamic responses to laryngoscopy and intubation. However, further well-designed randomized clinical trials to confirm the safety and efficacy of this combination, especially in critically ill patients or patients with cardiovascular disease, are warranted.

  3. Single Marker and Haplotype-Based Association Analysis of Semolina and Pasta Colour in Elite Durum Wheat Breeding Lines Using a High-Density Consensus Map.

    PubMed

    N'Diaye, Amidou; Haile, Jemanesh K; Cory, Aron T; Clarke, Fran R; Clarke, John M; Knox, Ron E; Pozniak, Curtis J

    2017-01-01

    Association mapping is usually performed by testing the correlation between a single marker and phenotypes. However, because patterns of variation within genomes are inherited as blocks, clustering markers into haplotypes for genome-wide scans could be a worthwhile approach to improve statistical power to detect associations. The availability of high-density molecular data allows the possibility to assess the potential of both approaches to identify marker-trait associations in durum wheat. In the present study, we used single marker- and haplotype-based approaches to identify loci associated with semolina and pasta colour in durum wheat, the main objective being to evaluate the potential benefits of haplotype-based analysis for identifying quantitative trait loci. One hundred sixty-nine durum lines were genotyped using the Illumina 90K Infinium iSelect assay, and 12,234 polymorphic single nucleotide polymorphism (SNP) markers were generated and used to assess the population structure and the linkage disequilibrium (LD) patterns. A total of 8,581 SNPs previously localized to a high-density consensus map were clustered into 406 haplotype blocks based on the average LD distance of 5.3 cM. Combining multiple SNPs into haplotype blocks increased the average polymorphism information content (PIC) from 0.27 per SNP to 0.50 per haplotype. The haplotype-based analysis identified 12 loci associated with grain pigment colour traits, including the five loci identified by the single marker-based analysis. Furthermore, the haplotype-based analysis resulted in an increase of the phenotypic variance explained (50.4% on average) and the allelic effect (33.7% on average) when compared to single marker analysis. The presence of multiple allelic combinations within each haplotype locus offers potential for screening the most favorable haplotype series and may facilitate marker-assisted selection of grain pigment colour in durum wheat. These results suggest a benefit of haplotype-based analysis over single marker analysis to detect loci associated with colour traits in durum wheat.
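
    The PIC gain from grouping markers into haplotype blocks can be reproduced with the standard Botstein et al. (1980) formula; the sketch below computes PIC from observed allele or haplotype-block labels and illustrates why multi-allelic blocks score higher than biallelic SNPs.

    ```python
    import numpy as np
    from collections import Counter

    def pic(labels):
        """Polymorphism information content (Botstein et al. 1980)."""
        counts = Counter(labels)
        p = np.array(list(counts.values()), float)
        p /= p.sum()
        heterozygosity = 1.0 - np.sum(p**2)
        pair_term = sum(2.0 * p[i]**2 * p[j]**2
                        for i in range(len(p)) for j in range(i + 1, len(p)))
        return heterozygosity - pair_term

    # a 50/50 biallelic SNP gives PIC = 0.375; four equally frequent haplotypes
    # give PIC ~ 0.70, mirroring the 0.27 -> 0.50 average gain reported above
    print(pic(list("AB" * 50)), pic(list("ABCD" * 25)))
    ```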

  4. Variants for HDL-C, LDL-C, and triglycerides identified from admixture mapping and fine-mapping analysis in African American families.

    PubMed

    Shetty, Priya B; Tang, Hua; Feng, Tao; Tayo, Bamidele; Morrison, Alanna C; Kardia, Sharon L R; Hanis, Craig L; Arnett, Donna K; Hunt, Steven C; Boerwinkle, Eric; Rao, Dabeeru C; Cooper, Richard S; Risch, Neil; Zhu, Xiaofeng

    2015-02-01

    Admixture mapping of lipids was followed up by family-based association analysis to identify variants for cardiovascular disease in African Americans. The present study conducted admixture mapping analysis for total cholesterol, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, and triglycerides. The analysis was performed in 1905 unrelated African American subjects from the National Heart, Lung and Blood Institute's Family Blood Pressure Program (FBPP). Regions showing admixture evidence were followed up with family-based association analysis in 3556 African American subjects from the FBPP. The admixture mapping and family-based association analyses were adjusted for age, age^2, sex, body mass index, and genome-wide mean ancestry to minimize the confounding caused by population stratification. Regions that were suggestive of local ancestry association evidence were found on chromosomes 7 (low-density lipoprotein cholesterol), 8 (high-density lipoprotein cholesterol), 14 (triglycerides), and 19 (total cholesterol and triglycerides). In the fine-mapping analysis, 52,939 single-nucleotide polymorphisms (SNPs) were tested and 11 SNPs (8 independent SNPs) showed nominally significant association with high-density lipoprotein cholesterol (2 SNPs), low-density lipoprotein cholesterol (4 SNPs), and triglycerides (5 SNPs). The family data were used in the fine-mapping to identify SNPs that showed novel associations with lipids, and the regions include genes with known associations for cardiovascular disease. This study identified regions on chromosomes 7, 8, 14, and 19 and 11 SNPs from the fine-mapping analysis that were associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, and triglycerides for further studies of cardiovascular disease in African Americans. © 2014 American Heart Association, Inc.

  5. Contribution of Equal-Sign Instruction beyond Word-Problem Tutoring for Third-Grade Students with Mathematics Difficulty.

    PubMed

    Powell, Sarah R; Fuchs, Lynn S

    2010-05-01

    Elementary school students often misinterpret the equal sign (=) as an operational rather than a relational symbol. Such misunderstanding is problematic because solving equations with missing numbers may be important for higher-order mathematics skills including word problems. Research indicates equal-sign instruction can alter how typically-developing students use the equal sign, but no study has examined effects for students with mathematics difficulty (MD) or how equal-sign instruction contributes to word-problem skill for students with or without MD. The present study assessed the efficacy of equal-sign instruction within word-problem tutoring. Third-grade students with MD (n = 80) were assigned to word-problem tutoring, word-problem tutoring plus equal-sign instruction (combined) tutoring, or no-tutoring control. Combined tutoring produced better improvement on equal sign tasks and open equations compared to the other 2 conditions. On certain forms of word problems, combined tutoring but not word-problem tutoring alone produced better improvement than control. When compared at posttest to 3rd-grade students without MD on equal sign tasks and open equations, only combined tutoring students with MD performed comparably.

  6. Maps to estimate average streamflow and headwater limits for streams in U.S. Army Corps of Engineers, Mobile District, Alabama and adjacent states

    USGS Publications Warehouse

    Nelson, George H.

    1984-01-01

    U.S. Army Corps of Engineers permits are required for discharges of dredged or fill material downstream from the 'headwaters' of specified streams. The term 'headwaters' is defined as the point of a freshwater (non-tidal) stream above which the average flow is less than 5 cu ft/s. Maps of the Mobile District area showing (1) lines of equal average streamflow, and (2) lines of equal drainage areas required to produce an average flow of 5 cu ft/s are contained in this report. These maps are for use by the Corps of Engineers in their permitting program. (USGS)

  7. How will Mahanarva spectabilis (Hemiptera: Cercopidae) Respond to Global Warming?

    PubMed

    Fonseca, M G; Auad, A M; Resende, T T; Hott, M C; Borges, C A V

    2016-01-01

    The aim of this study was to determine the favorable constant temperature range for the development of Mahanarva spectabilis (Distant) (Hemiptera: Cercopidae) as well as to generate geographic distribution maps of this insect pest for future climate scenarios. M. spectabilis eggs were reared on two host plants (Brachiaria ruziziensis (Germain and Edvard) and Pennisetum purpureum (Schumach)), with individual plants kept at temperatures of 16, 20, 24, 28, and 32 °C. Nymphal stage duration, nymphal survival, adult longevity, and egg production were recorded for each temperature × host plant combination. Using the favorable temperature ranges for M. spectabilis development, it was possible to generate geographic distribution maps. Nymphal survival was highest at 24.4 °C, with estimates of 44 and 8% on Pennisetum and Brachiaria, respectively. Nymphal stage duration was greater on Brachiaria than on Pennisetum at 20 and 24 °C but equal at 28 °C. Egg production was higher on Pennisetum at 24 and 28 °C than at 20 °C, and adult longevity on Pennisetum was higher at 28 °C than at 20 °C, whereas adult longevity at 24 °C did not differ from that at 20 and 28 °C. With these results, it was possible to predict a reduction in M. spectabilis densities in most regions of Brazil under future climate scenarios. © The Author 2016. Published by Oxford University Press on behalf of the Entomological Society of America.

  8. The EDGE-CALIFA Survey: Interferometric Observations of 126 Galaxies with CARMA

    NASA Astrophysics Data System (ADS)

    Bolatto, Alberto D.; Wong, Tony; Utomo, Dyas; Blitz, Leo; Vogel, Stuart N.; Sánchez, Sebastián F.; Barrera-Ballesteros, Jorge; Cao, Yixian; Colombo, Dario; Dannerbauer, Helmut; García-Benito, Rubén; Herrera-Camus, Rodrigo; Husemann, Bernd; Kalinova, Veselina; Leroy, Adam K.; Leung, Gigi; Levy, Rebecca C.; Mast, Damián; Ostriker, Eve; Rosolowsky, Erik; Sandstrom, Karin M.; Teuben, Peter; van de Ven, Glenn; Walter, Fabian

    2017-09-01

    We present interferometric CO observations, made with the Combined Array for Millimeter-wave Astronomy (CARMA) interferometer, of galaxies from the Extragalactic Database for Galaxy Evolution survey (EDGE). These galaxies are selected from the Calar Alto Legacy Integral Field Area (CALIFA) sample, mapped with optical integral field spectroscopy. EDGE provides good-quality CO data (3σ sensitivity Σ_mol ~ 11 M⊙ pc^-2 before inclination correction, resolution ~1.4 kpc) for 126 galaxies, constituting the largest interferometric CO survey of galaxies in the nearby universe. We describe the survey and data characteristics and products, then present initial science results. We find that the exponential scale lengths of the molecular, stellar, and star-forming disks are approximately equal, and galaxies that are more compact in molecular gas than in stars tend to show signs of interaction. We characterize the molecular-to-stellar ratio as a function of Hubble type and stellar mass and present preliminary results on the resolved relations between the molecular gas, stars, and star-formation rate. We then discuss the dependence of the resolved molecular depletion time on stellar surface density, nebular extinction, and gas metallicity. EDGE provides a key data set to address outstanding topics regarding gas and its role in star formation and galaxy evolution, which will be publicly available on completion of the quality assessment.

  9. Whole brain analysis of postmortem density changes of grey and white matter on computed tomography by statistical parametric mapping.

    PubMed

    Nishiyama, Yuichi; Kanayama, Hidekazu; Mori, Hiroshi; Tada, Keiji; Yamamoto, Yasushi; Katsube, Takashi; Takeshita, Haruo; Kawakami, Kazunori; Kitagaki, Hajime

    2017-06-01

    This study examined the usefulness of statistical parametric mapping (SPM) for investigating postmortem changes on brain computed tomography (CT). This retrospective study included 128 patients (23 - 100 years old) without cerebral abnormalities who underwent unenhanced brain CT before and after death. The antemortem CT (AMCT) scans and postmortem CT (PMCT) scans were spatially normalized using our original brain CT template, and postmortem changes of CT values (in Hounsfield units; HU) were analysed by the SPM technique. Compared with AMCT scans, 58.6 % and 98.4 % of PMCT scans showed loss of the cerebral sulci and an unclear grey matter (GM)-white matter (WM) interface, respectively. SPM analysis revealed a significant decrease in cortical GM density within 70 min after death on PMCT scans, suggesting cytotoxic brain oedema. Furthermore, there was a significant increase in the density of the WM, lenticular nucleus and thalamus more than 120 min after death. The SPM technique demonstrated typical postmortem changes on brain CT scans, and revealed that the unclear GM-WM interface on early PMCT scans is caused by a rapid decrease in cortical GM density combined with a delayed increase in WM density. SPM may be useful for assessment of whole brain postmortem changes. • The original brain CT template achieved successful normalization of brain morphology. • Postmortem changes in the brain were independent of sex. • Cortical GM density decreased rapidly after death. • WM and deep GM densities increased following cortical GM density change. • SPM could be useful for assessment of whole brain postmortem changes.
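
    SPM fits a general linear model with random-field corrections; as a much simpler stand-in, the sketch below runs a voxel-wise paired t-test between spatially normalized antemortem and postmortem volumes, with an illustrative HU threshold as a crude brain mask.

    ```python
    import numpy as np
    from scipy.stats import ttest_rel

    def voxelwise_paired_ttest(am_scans, pm_scans, min_hu=-100):
        """Voxel-wise paired t-test between normalized AM and PM CT volumes.

        am_scans, pm_scans : arrays of shape (n_subjects, nx, ny, nz), already
        warped to a common template so that voxels correspond across subjects.
        """
        t, p = ttest_rel(pm_scans, am_scans, axis=0)   # PM - AM density change
        mask = am_scans.mean(axis=0) > min_hu          # crude in-brain mask (HU)
        return np.where(mask, t, np.nan), np.where(mask, p, np.nan)
    ```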

  10. Mapping low- and high-density clouds in astrophysical nebulae by imaging forbidden line emission

    NASA Astrophysics Data System (ADS)

    Steiner, J. E.; Menezes, R. B.; Ricci, T. V.; Oliveira, A. S.

    2009-06-01

    Emission line ratios have been essential for determining physical parameters such as gas temperature and density in astrophysical gaseous nebulae. With the advent of panoramic spectroscopic devices, images of regions with emission lines related to these physical parameters can, in principle, also be produced. We show that, with observations from modern instruments, it is possible to transform images taken from density-sensitive forbidden lines into images of emission from high- and low-density clouds by applying a transformation matrix. In order to achieve this, images of the pairs of density-sensitive lines as well as the adjacent continuum have to be observed and combined. We have computed the critical densities for a series of pairs of lines in the infrared, optical, ultraviolet and X-ray bands, and calculated the pair line intensity ratios in the high- and low-density limits using a four- and five-level atom approximation. In order to illustrate the method, we applied it to Gemini Multi-Object Spectrograph (GMOS) Integral Field Unit (GMOS-IFU) data of two galactic nuclei. We conclude that this method provides new information of astrophysical interest, especially for mapping low- and high-density clouds; for this reason, we call it 'the ld/hd imaging method'. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the National Science Foundation on behalf of the Gemini partnership: the National Science Foundation (United States); the Science and Technology Facilities Council (United Kingdom); the National Research Council (Canada), CONICYT (Chile); the Australian Research Council (Australia); Ministério da Ciência e Tecnologia (Brazil) and Secretaria de Ciencia y Tecnologia (Argentina). E-mail: steiner@astro.iag.usp.br
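
    At its core the method solves, pixel by pixel, a 2 x 2 linear system in which the two observed line images are modelled as mixtures of emission from gas in the low- and high-density limits; a minimal sketch follows, with the [S II] 6716/6731 limiting ratios quoted only as approximate illustrative values.

    ```python
    import numpy as np

    def split_low_high(I1, I2, r_low, r_high):
        """Split a density-sensitive line pair into low- and high-density parts.

        Per pixel, assume I1 = r_low*L + r_high*H and I2 = L + H, where L and H
        are the line-2 intensities from gas in the low- and high-density limits
        and r_low, r_high are the theoretical I1/I2 ratios in those limits.
        """
        det = r_low - r_high
        L = (I1 - r_high * I2) / det     # low-density component (line-2 units)
        H = (r_low * I2 - I1) / det      # high-density component
        return np.clip(L, 0, None), np.clip(H, 0, None)

    # e.g. for [S II] 6716/6731 the ratio runs from roughly 1.45 (low-density
    # limit) to roughly 0.45 (high-density limit):
    # low, high = split_low_high(I6716, I6731, 1.45, 0.45)
    ```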

  11. Ponderomotive force on solitary structures created during radiation pressure acceleration of thin foils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Vipin K.; Sharma, Anamika

    2013-05-15

    We estimate the ponderomotive force on an expanded inhomogeneous electron density profile, created in the later phase of a laser-irradiated diamond-like ultrathin foil. When ions are uniformly distributed along the plasma slab and the electron density obeys Poisson's equation with the space charge potential equal to the negative of the ponderomotive potential, φ = −φ_p = −(mc^2/e)(γ − 1), where γ = (1 + |a|^2)^(1/2) and |a| is the normalized local laser amplitude inside the slab, the net ponderomotive force on the slab per unit area is demonstrated analytically to be equal to the radiation pressure force for both overdense and underdense plasmas. In case the electron density is taken to be frozen as a Gaussian profile with peak density close to the relativistic critical density, the ponderomotive force has a non-monotonic spatial variation and sums over all electrons per unit area to equal the radiation pressure force at all laser intensities. The same result is obtained for the case of a Gaussian ion density profile and a self-consistent electron density profile obeying Poisson's equation with φ = −φ_p.

  12. Model-based local density sharpening of cryo-EM maps

    PubMed Central

    Jakobi, Arjen J; Wilmanns, Matthias

    2017-01-01

    Atomic models based on high-resolution density maps are the ultimate result of the cryo-EM structure determination process. Here, we introduce a general procedure for local sharpening of cryo-EM density maps based on prior knowledge of an atomic reference structure. The procedure optimizes contrast of cryo-EM densities by amplitude scaling against the radially averaged local falloff estimated from a windowed reference model. By testing the procedure using six cryo-EM structures of TRPV1, β-galactosidase, γ-secretase, ribosome-EF-Tu complex, 20S proteasome and RNA polymerase III, we illustrate how local sharpening can increase interpretability of density maps in particular in cases of resolution variation and facilitates model building and atomic model refinement. PMID:29058676
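
    As a simplified stand-in for the local, windowed procedure, the sketch below rescales the radially averaged Fourier amplitudes of an experimental map to match those of a reference-model map computed on the same grid; the cubic-box assumption and shell count are illustrative.

    ```python
    import numpy as np

    def _radial_mean(amps, bins):
        """Mean Fourier amplitude in shells of |k| (assumes a cubic box)."""
        n = amps.shape[0]
        f = np.fft.fftfreq(n)
        k = np.sqrt(sum(g**2 for g in np.meshgrid(f, f, f, indexing="ij")))
        idx = np.digitize(k.ravel(), bins)
        sums = np.bincount(idx, weights=amps.ravel(), minlength=len(bins) + 1)
        counts = np.bincount(idx, minlength=len(bins) + 1)
        return sums / np.maximum(counts, 1), idx

    def scale_to_reference(exp_map, ref_map, n_shells=50):
        """Rescale the experimental map's radial amplitude falloff to the reference's."""
        bins = np.linspace(0.0, 0.5 * np.sqrt(3.0), n_shells)
        F_exp, F_ref = np.fft.fftn(exp_map), np.fft.fftn(ref_map)
        a_exp, idx = _radial_mean(np.abs(F_exp), bins)
        a_ref, _ = _radial_mean(np.abs(F_ref), bins)
        scale = np.where(a_exp > 0, a_ref / np.maximum(a_exp, 1e-12), 1.0)
        return np.fft.ifftn(F_exp * scale[idx].reshape(exp_map.shape)).real
    ```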

  13. An ultra-high density linkage map and QTL mapping for sex and growth-related traits of common carp (Cyprinus carpio)

    PubMed Central

    Peng, Wenzhu; Xu, Jian; Zhang, Yan; Feng, Jianxin; Dong, Chuanju; Jiang, Likun; Feng, Jingyan; Chen, Baohua; Gong, Yiwen; Chen, Lin; Xu, Peng

    2016-01-01

    High density genetic linkage maps are essential for QTL fine mapping, comparative genomics and high quality genome sequence assembly. In this study, we constructed a high-density and high-resolution genetic linkage map with 28,194 SNP markers on 14,146 distinct loci for common carp based on high-throughput genotyping with the carp 250 K single nucleotide polymorphism (SNP) array in a mapping family. The genetic length of the consensus map was 10,595.94 cM with an average locus interval of 0.75 cM and an average marker interval of 0.38 cM. Comparative genomic analysis revealed high level of conserved syntenies between common carp and the closely related model species zebrafish and medaka. The genome scaffolds were anchored to the high-density linkage map, spanning 1,357 Mb of common carp reference genome. QTL mapping and association analysis identified 22 QTLs for growth-related traits and 7 QTLs for sex dimorphism. Candidate genes underlying growth-related traits were identified, including important regulators such as KISS2, IGF1, SMTLB, NPFFR1 and CPE. Candidate genes associated with sex dimorphism were also identified including 3KSR and DMRT2b. The high-density and high-resolution genetic linkage map provides an important tool for QTL fine mapping and positional cloning of economically important traits, and improving common carp genome assembly. PMID:27225429

  14. Compiling Mercury relief map using several data sources

    NASA Astrophysics Data System (ADS)

    Zakharova, M.

    2015-12-01

    Several datasets on Mercury's topography exist, obtained by processing materials collected by two spacecraft, Mariner 10 and MESSENGER, during their Mercury flybys. The history of visual mapping of Mercury is short: the first significant observations were made during the latter half of the 20th century, and today the only dataset with 100% coverage of the surface is the global mosaic composed of images acquired by MESSENGER. The main objective of this work is to produce the first Mercury relief map using all the existing elevation data. The workflow included collecting, combining and processing the existing data, then merging them consistently to compile a single map. Preference was given to topography data, while the global mosaic was used to fill the gaps where topography was insufficient. The Mercury relief map has been created from four different types of data: - the global mosaic with 100% coverage of Mercury's surface created from MESSENGER orbital images (36% of the final map); - Digital Terrain Models obtained by processing stereo images from the Mariner 10 flybys (15% of the map) (Cook and Robinson, 2000); - Digital Terrain Models obtained from images acquired during the MESSENGER flybys (24% of the map) (F. Preusker et al., 2011); - the data sets produced by the MESSENGER Mercury Laser Altimeter (MLA) (25% of the map). The final map is in the Lambert azimuthal equal-area projection at a scale of 1:18 000 000. It represents the western and eastern hemispheres, separated by the zero meridian, and mainly shows the hypsometric features of the planet and craters with diameters greater than 200 kilometers.

  15. Dark Energy and Key Physical Parameters of Clusters of Galaxies

    NASA Astrophysics Data System (ADS)

    Chernin, A. D.; Bisnovatyi-Kogan, G. S.

    We discuss the physics of clusters of galaxies embedded in the cosmic dark energy background and show that 1) the halo cut-off radius of a cluster like the Virgo cluster is practically, if not exactly, equal to the zero-gravity radius at which the dark matter gravity is balanced by the dark energy antigravity; 2) the halo averaged density is equal to two densities of dark energy; 3) the halo edge (cut-off) density is the dark energy density with a numerical factor of the unity order slightly depending on the halo profile.
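
    The two statements about the halo density follow from a short balance between Newtonian attraction and the effective repulsion of dark energy treated as a cosmological constant of density ρ_Λ; a standard sketch of the argument (consistent with, but not copied from, the paper):

    ```latex
    \frac{GM}{R_{\mathrm{ZG}}^{2}} \;=\; \frac{8\pi G}{3}\,\rho_{\Lambda}\,R_{\mathrm{ZG}}
    \;\;\Longrightarrow\;\;
    R_{\mathrm{ZG}} \;=\; \left(\frac{3M}{8\pi\rho_{\Lambda}}\right)^{1/3},
    \qquad
    \bar{\rho} \;=\; \frac{M}{\tfrac{4}{3}\pi R_{\mathrm{ZG}}^{3}} \;=\; 2\rho_{\Lambda},
    ```

    i.e. the mean density inside the zero-gravity radius is exactly twice the dark energy density, which is statement 2) above.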

  16. Low density, resorcinol-formaldehyde aerogels

    DOEpatents

    Pekala, R.W.

    1988-05-26

    The polycondensation of resorcinol with formaldehyde under alkaline conditions results in the formation of surface-functionalized polymer "clusters". The covalent crosslinking of these "clusters" produces gels which, when processed under supercritical conditions, produce low-density, organic aerogels (density ≤ 100 mg/cc; cell size ≤ 0.1 microns). The aerogels are transparent, dark red in color and consist of interconnected colloidal-like particles with diameters of about 100 Å. These aerogels may be further carbonized to form low-density carbon foams with cell size of about 0.1 micron. 1 fig., 1 tab.

  17. Fluctuation diagrams for hot-wire anemometry in subsonic compressible flows

    NASA Technical Reports Server (NTRS)

    Stainback, P. C.; Nagabushana, K. A.

    1991-01-01

    The concept of using 'fluctuation diagrams' for describing basic fluctuations in compressible flows was reported by Kovasznay in the 1950's. The application of this technique, for the most part, was restricted to supersonic flows. Recently, Zinovev and Lebiga published reports where they considered the fluctuation diagrams in subsonic compressible flows. For the above studies, the velocity and density sensitivities of the heated wires were equal. However, there are considerable data, much taken in the 1950's, which indicate that under some conditions the velocity and density sensitivities are not equal in subsonic compressible flows. Therefore, possible fluctuation diagrams are described for the cases where the velocity and density sensitivities are equal and the more general cases where they are unequal.

  18. The Mass of Saturn's B ring from hidden density waves

    NASA Astrophysics Data System (ADS)

    Hedman, M. M.; Nicholson, P. D.

    2015-12-01

    The B ring is Saturn's brightest and most opaque ring, but many of its fundamental parameters, including its total mass, are not well constrained. Elsewhere in the rings, the best mass density estimates come from spiral waves driven by mean-motion resonances with Saturn's various moons, but such waves have been hard to find in the B ring. We have developed a new wavelet-based technique for combining data from multiple stellar occultations that allows us to isolate the density-wave signals from other ring structures. This method has been applied to 5 density waves using 17 occultations of the star gamma Crucis observed by the Visual and Infrared Mapping Spectrometer (VIMS) onboard the Cassini spacecraft. Two of these waves (generated by the Janus 2:1 and Mimas 5:2 Inner Lindblad Resonances) are visible in individual occultation profiles, but the other three wave signatures (associated with the Janus 3:2, Enceladus 3:1 and Pandora 3:2 Inner Lindblad Resonances) are not visible in individual profiles and can only be detected in the combined dataset. Estimates of the ring's surface mass density derived from these five waves fall between 40 and 140 g/cm². Surprisingly, these mass density estimates show no obvious correlation with the ring's optical depth. Furthermore, these data indicate that the total mass of the B ring is probably between one-third and two-thirds the mass of Saturn's moon Mimas.

  19. Statistical parametric mapping of LORETA using high density EEG and individual MRI: application to mismatch negativities in schizophrenia.

    PubMed

    Park, Hae-Jeong; Kwon, Jun Soo; Youn, Tak; Pae, Ji Soo; Kim, Jae-Jin; Kim, Myung-Sun; Ha, Kyoo-Seob

    2002-11-01

    We describe a method for the statistical parametric mapping of low resolution electromagnetic tomography (LORETA) using high-density electroencephalography (EEG) and individual magnetic resonance images (MRI) to investigate the characteristics of the mismatch negativity (MMN) generators in schizophrenia. LORETA, using a realistic head model of the boundary element method derived from the individual anatomy, estimated the current density maps from the scalp topography of the 128-channel EEG. From the current density maps that covered the whole cortical gray matter (up to 20,000 points), volumetric current density images were reconstructed. Intensity normalization of the smoothed current density images was used to reduce the confounding effect of subject-specific global activity. After transforming each image into a standard stereotaxic space, we carried out statistical parametric mapping of the normalized current density images. We applied this method to the source localization of MMN in schizophrenia. The MMN generators, produced by a deviant tone of 1,200 Hz (5% of 1,600 trials) under the standard tone of 1,000 Hz, 80 dB binaural stimuli with 300 msec of inter-stimulus interval, were measured in 14 right-handed schizophrenic subjects and 14 age-, gender-, and handedness-matched controls. We found that the schizophrenic group exhibited significant current density reductions of MMN in the left superior temporal gyrus and the left inferior parietal gyrus (P < 0.0005). This study is the first voxel-by-voxel statistical mapping of current density using individual MRI and high-density EEG. Copyright 2002 Wiley-Liss, Inc.

  20. Hotspot detection in pancreatic neuroendocrine tumors: density approximation by α-shape maps

    NASA Astrophysics Data System (ADS)

    Niazi, M. K. K.; Hartman, Douglas J.; Pantanowitz, Liron; Gurcan, Metin N.

    2016-03-01

    The grading of neuroendocrine tumors of the digestive system is dependent on accurate and reproducible assessment of the proliferation with the tumor, either by counting mitotic figures or counting Ki-67 positive nuclei. At the moment, most pathologists manually identify the hotspots, a practice which is tedious and irreproducible. To better help pathologists, we present an automatic method to detect all potential hotspots in neuroendocrine tumors of the digestive system. The method starts by segmenting Ki-67 positive nuclei by entropy based thresholding, followed by detection of centroids for all Ki-67 positive nuclei. Based on geodesic distance, approximated by the nuclei centroids, we compute two maps: an amoeba map and a weighted amoeba map. These maps are later combined to generate the heat map, the segmentation of which results in the hotspots. The method was trained on three and tested on nine whole slide images of neuroendocrine tumors. When evaluated by two expert pathologists, the method reached an accuracy of 92.6%. The current method does not discriminate between tumor, stromal and inflammatory nuclei. The results show that α-shape maps may represent how hotspots are perceived.
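
    A minimal sketch of the first two steps described above, entropy-based thresholding of the Ki-67-positive channel followed by centroid extraction, is given below. The maximum-entropy (Kapur) criterion and the function and array names are illustrative assumptions, not the authors' exact implementation:

    ```python
    import numpy as np
    from scipy import ndimage

    def kapur_threshold(image, n_bins=256):
        """Maximum-entropy (Kapur) threshold for a grayscale image scaled to [0, 1]."""
        hist, edges = np.histogram(image, bins=n_bins, range=(0.0, 1.0))
        p = hist.astype(float) / hist.sum()
        best_t, best_h = 1, -np.inf
        for t in range(1, n_bins - 1):
            p0, p1 = p[:t].sum(), p[t:].sum()
            if p0 <= 0 or p1 <= 0:
                continue
            q0, q1 = p[:t] / p0, p[t:] / p1
            h = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0])) \
                - np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
            if h > best_h:
                best_h, best_t = h, t
        return edges[best_t]

    def positive_nuclei_centroids(ki67_channel):
        """Binarize the Ki-67 channel and return centroids of connected components."""
        mask = ki67_channel > kapur_threshold(ki67_channel)
        labels, n = ndimage.label(mask)
        return np.array(ndimage.center_of_mass(mask, labels, range(1, n + 1)))
    ```

    The centroid list would then feed the geodesic-distance and amoeba-map steps described in the abstract.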

  1. Insights Into Upland Cotton (Gossypium hirsutum L.) Genetic Recombination Based on 3 High-Density Single-Nucleotide Polymorphism and a Consensus Map Developed Independently With Common Parents. Genomics Insights

    USDA-ARS?s Scientific Manuscript database

    High-density linkage maps are vital to supporting the correct placement of scaffolds and gene sequences on chromosomes and fundamental to contemporary organismal research and scientific approaches to genetic improvement; high-density linkage maps are especially important in paleopolyploids with exce...

  2. Mapping the Baby Universe

    NASA Technical Reports Server (NTRS)

    Wanjek, Christopher

    2003-01-01

    In June, NASA plans to launch the Microwave Anisotropy Probe (MAP) to survey the ancient radiation in unprecedented detail. MAP will map slight temperature fluctuations within the microwave background that vary by only 0.00001 C across a chilly radiation that now averages 2.73 C above absolute zero. The temperature differences today point back to density differences in the fiery baby universe, in which there was a little more matter here and a little less matter there. Areas of slightly enhanced density had stronger gravity than low-density areas. The high-density areas pulled back on the background radiation, making it appear slightly cooler in those directions.

  3. Time-efficient high-resolution whole-brain three-dimensional macromolecular proton fraction mapping

    PubMed Central

    Yarnykh, Vasily L.

    2015-01-01

    Purpose Macromolecular proton fraction (MPF) mapping is a quantitative MRI method that reconstructs parametric maps of a relative amount of macromolecular protons causing the magnetization transfer (MT) effect and provides a biomarker of myelination in neural tissues. This study aimed to develop a high-resolution whole-brain MPF mapping technique utilizing a minimal possible number of source images for scan time reduction. Methods The described technique is based on replacement of an actually acquired reference image without MT saturation by a synthetic one reconstructed from R1 and proton density maps, thus requiring only three source images. This approach enabled whole-brain three-dimensional MPF mapping with isotropic 1.25×1.25×1.25 mm3 voxel size and scan time of 20 minutes. The synthetic reference method was validated against standard MPF mapping with acquired reference images based on data from 8 healthy subjects. Results Mean MPF values in segmented white and gray matter appeared in close agreement with no significant bias and small within-subject coefficients of variation (<2%). High-resolution MPF maps demonstrated sharp white-gray matter contrast and clear visualization of anatomical details including gray matter structures with high iron content. Conclusions Synthetic reference method improves resolution of MPF mapping and combines accurate MPF measurements with unique neuroanatomical contrast features. PMID:26102097

  4. Resonant pairing between fermions with unequal masses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Shin-Tza; Pao, C.-H.; Yip, S.-K.

    We study via mean-field theory the pairing between fermions of different masses, especially at the unitary limit. At equal populations, the thermodynamic properties are identical to those of the equal-mass case provided an appropriate rescaling is made. At unequal populations, for sufficiently light majority species, the system does not phase separate. For sufficiently heavy majority species, the phase-separated normal phase has a density larger than that of the superfluid. For atoms in harmonic traps, the density profiles for unequal-mass fermions can be drastically different from their equal-mass counterparts.

  5. Vision 20/20: Magnetic resonance imaging-guided attenuation correction in PET/MRI: Challenges, solutions, and opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib, E-mail: habib.zaidi@hcuge.ch

    Attenuation correction is an essential component of the long chain of data correction techniques required to achieve the full potential of quantitative positron emission tomography (PET) imaging. The development of combined PET/magnetic resonance imaging (MRI) systems mandated the widespread interest in developing novel strategies for deriving accurate attenuation maps with the aim to improve the quantitative accuracy of these emerging hybrid imaging systems. The attenuation map in PET/MRI should ideally be derived from anatomical MR images; however, MRI intensities reflect proton density and relaxation time properties of biological tissues rather than their electron density and photon attenuation properties. Therefore, in contrast to PET/computed tomography, there is a lack of standardized global mapping between the intensities of MRI signal and linear attenuation coefficients at 511 keV. Moreover, in standard MRI sequences, bones and lung tissues do not produce measurable signals owing to their low proton density and short transverse relaxation times. MR images are also inevitably subject to artifacts that degrade their quality, thus compromising their applicability for the task of attenuation correction in PET/MRI. MRI-guided attenuation correction strategies can be classified in three broad categories: (i) segmentation-based approaches, (ii) atlas-registration and machine learning methods, and (iii) emission/transmission-based approaches. This paper summarizes past and current state-of-the-art developments and latest advances in PET/MRI attenuation correction. The advantages and drawbacks of each approach for addressing the challenges of MR-based attenuation correction are comprehensively described. The opportunities brought by both MRI and PET imaging modalities for deriving accurate attenuation maps and improving PET quantification will be elaborated. Future prospects and potential clinical applications of these techniques and their integration in commercial systems will also be discussed.

  6. Vision 20/20: Magnetic resonance imaging-guided attenuation correction in PET/MRI: Challenges, solutions, and opportunities.

    PubMed

    Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib

    2016-03-01

    Attenuation correction is an essential component of the long chain of data correction techniques required to achieve the full potential of quantitative positron emission tomography (PET) imaging. The development of combined PET/magnetic resonance imaging (MRI) systems mandated the widespread interest in developing novel strategies for deriving accurate attenuation maps with the aim to improve the quantitative accuracy of these emerging hybrid imaging systems. The attenuation map in PET/MRI should ideally be derived from anatomical MR images; however, MRI intensities reflect proton density and relaxation time properties of biological tissues rather than their electron density and photon attenuation properties. Therefore, in contrast to PET/computed tomography, there is a lack of standardized global mapping between the intensities of MRI signal and linear attenuation coefficients at 511 keV. Moreover, in standard MRI sequences, bones and lung tissues do not produce measurable signals owing to their low proton density and short transverse relaxation times. MR images are also inevitably subject to artifacts that degrade their quality, thus compromising their applicability for the task of attenuation correction in PET/MRI. MRI-guided attenuation correction strategies can be classified in three broad categories: (i) segmentation-based approaches, (ii) atlas-registration and machine learning methods, and (iii) emission/transmission-based approaches. This paper summarizes past and current state-of-the-art developments and latest advances in PET/MRI attenuation correction. The advantages and drawbacks of each approach for addressing the challenges of MR-based attenuation correction are comprehensively described. The opportunities brought by both MRI and PET imaging modalities for deriving accurate attenuation maps and improving PET quantification will be elaborated. Future prospects and potential clinical applications of these techniques and their integration in commercial systems will also be discussed.

  7. HP2 survey. III. The California Molecular Cloud: A sleeping giant revisited

    NASA Astrophysics Data System (ADS)

    Lada, Charles J.; Lewis, John A.; Lombardi, Marco; Alves, João

    2017-10-01

    We present new high-resolution, high-dynamic-range dust column density and temperature maps of the California Molecular Cloud derived from a combination of Planck and Herschel dust-emission maps, and 2MASS NIR dust-extinction maps. We used these data to determine the ratio of the 2.2 μm extinction coefficient to the 850 μm opacity and found the value to be close to that found in similar studies of the Orion B and Perseus clouds but higher than that characterizing the Orion A cloud, indicating that variations in the fundamental optical properties of dust may exist between local clouds. We show that over a wide range of extinction, the column density probability distribution function (pdf) of the cloud can be well described by a simple power law (i.e., PDF_N ∝ A_K^−n) with an index (n = 4.0 ± 0.1) that represents a steeper decline with A_K than found (n ≈ 3) in similar studies of the Orion and Perseus clouds. Using only the protostellar population of the cloud and our extinction maps, we investigate the Schmidt relation, that is, the relation between the protostellar surface density, Σ∗, and extinction, A_K, within the cloud. We show that Σ∗ is directly proportional to the ratio of the protostellar and cloud pdfs, i.e., PDF∗(A_K)/PDF_N(A_K). We use the cumulative distribution of protostars to infer the functional forms for both Σ∗ and PDF∗. We find that Σ∗ is best described by two power-law functions. At extinctions A_K ≲ 2.5 mag, Σ∗ ∝ A_K^β with β = 3.3, while at higher extinctions β = 2.5, both values steeper than those (≈2) found in other local giant molecular clouds (GMCs). We find that PDF∗ is a declining function of extinction, also best described by two power laws whose behavior mirrors that of Σ∗. Our observations suggest that variations both in the slope of the Schmidt relation and in the sizes of the protostellar populations between GMCs are largely driven by variations in the slope, n, of PDF_N(A_K). This confirms earlier studies suggesting that cloud structure plays a major role in setting the global star formation rates in GMCs. The HP2 (Herschel-Planck-2MASS) survey is a continuation of the series originally entitled "Herschel-Planck dust opacity and column density maps" (Lombardi et al. 2014, Zari et al. 2016). The reduced Herschel and Planck maps and the column density and temperature maps are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/606/A100
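
    In compact notation, the relations quoted above are (proportionality constants omitted; symbols as in the abstract):

    ```latex
    \mathrm{PDF}_{N}(A_{K}) \propto A_{K}^{-n}, \quad n = 4.0 \pm 0.1;
    \qquad
    \Sigma_{*}(A_{K}) \propto \frac{\mathrm{PDF}_{*}(A_{K})}{\mathrm{PDF}_{N}(A_{K})};
    \qquad
    \Sigma_{*}(A_{K}) \propto A_{K}^{\beta},\;
    \beta \simeq
    \begin{cases}
    3.3, & A_{K} \lesssim 2.5\ \mathrm{mag},\\
    2.5, & A_{K} \gtrsim 2.5\ \mathrm{mag}.
    \end{cases}
    ```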

  8. A Kennicutt-Schmidt relation at molecular cloud scales and beyond

    NASA Astrophysics Data System (ADS)

    Khoperskov, Sergey A.; Vasiliev, Evgenii O.

    2017-06-01

    Using N-body/gasdynamic simulations of a Milky Way-like galaxy, we analyse the Kennicutt-Schmidt (KS) relation, Σ_SFR ∝ Σ_gas^N, at different spatial scales. We simulate synthetic observations in CO lines and in the ultraviolet (UV) band. We adopt the star formation rate (SFR) defined in two ways: based on the free-fall collapse of a molecular cloud, Σ_SFR,cl, and calculated using a UV flux calibration, Σ_SFR,UV. We study the KS relation for spatially smoothed maps with effective spatial resolution ranging from molecular-cloud scales to several hundred parsecs. We find that for spatially and kinematically resolved molecular clouds the Σ_SFR,cl ∝ Σ_gas^N relation follows a power law with index N ≈ 1.4. Using UV flux as the SFR calibrator, we confirm a systematic offset between the Σ_SFR,UV and Σ_gas distributions on scales comparable to molecular-cloud sizes. Degrading the resolution of our simulated maps of gas and SFR surface densities, we establish that there is no Σ_SFR,UV-Σ_gas relation below a resolution of ~50 pc. We find a transition range at scales of ~50-120 pc, where the power-law index N increases from 0 to 1-1.8 and saturates for scales larger than ~120 pc. The saturated value of the index depends on the gas surface-density threshold and becomes steeper for higher Σ_gas thresholds. Averaging over scales of ≳150 pc, the power-law index N equals 1.3-1.4 for a gas surface-density threshold of ~5 M⊙ pc⁻². At scales ≳120 pc the ratio of the SFR surface densities determined from UV flux and from CO data, Σ_SFR,UV/Σ_SFR,cl, shows a discrepancy of about a factor of 3. We argue that this may originate from overestimated (constant) values of the conversion factor, star formation efficiency or UV calibration used in our analysis.
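
    The KS index N quoted above is, in practice, the slope of a straight-line fit in log-log space after applying a gas surface-density threshold; a minimal sketch with synthetic arrays (the numbers and names are illustrative, not taken from the paper):

    ```python
    import numpy as np

    # Hypothetical per-pixel surface densities after smoothing to a given scale.
    sigma_gas = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # Msun / pc^2
    sigma_sfr = 2.5e-4 * sigma_gas ** 1.4                  # Msun / yr / kpc^2 (synthetic)

    # Keep only pixels above the gas surface-density threshold, then fit
    # log10(Sigma_SFR) = N * log10(Sigma_gas) + const.
    mask = sigma_gas >= 5.0
    N, const = np.polyfit(np.log10(sigma_gas[mask]), np.log10(sigma_sfr[mask]), 1)
    print(f"fitted KS index N = {N:.2f}")  # recovers ~1.4 for this synthetic example
    ```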

  9. An ultra-high-density bin map facilitates high-throughput QTL mapping of horticultural traits in pepper (Capsicum annuum).

    PubMed

    Han, Koeun; Jeong, Hee-Jin; Yang, Hee-Bum; Kang, Sung-Min; Kwon, Jin-Kyung; Kim, Seungill; Choi, Doil; Kang, Byoung-Cheorl

    2016-04-01

    Most agricultural traits are controlled by quantitative trait loci (QTLs); however, there are few studies on QTL mapping of horticultural traits in pepper (Capsicum spp.) due to the lack of high-density molecular maps and sequence information. In this study, an ultra-high-density map and 120 recombinant inbred lines (RILs) derived from a cross between C. annuum 'Perennial' and C. annuum 'Dempsey' were used for QTL mapping of horticultural traits. Parental lines and RILs were resequenced at 18× and 1× coverage, respectively. Using a sliding-window approach, an ultra-high-density bin map containing 2,578 bins was constructed. The total length of the map was 1,372 cM, and the average interval between bins was 0.53 cM. A total of 86 significant QTLs controlling 17 horticultural traits were detected. Among these, 32 QTLs controlling 13 traits were major QTLs. Our research shows that the construction of bin maps using low-coverage sequence is a powerful method for QTL mapping, and that the short intervals between bins are helpful for fine-mapping of QTLs. Furthermore, bin maps can be used to improve the quality of reference genomes by elucidating the genetic order of unordered regions and anchoring unassigned scaffolds to linkage groups. © The Author 2016. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  10. Ellipsoidal terrain correction based on multi-cylindrical equal-area map projection of the reference ellipsoid

    NASA Astrophysics Data System (ADS)

    Ardalan, A. A.; Safari, A.

    2004-09-01

    An operational algorithm for computation of terrain correction (or local gravity field modeling) based on application of the closed-form solution of the Newton integral in terms of Cartesian coordinates in a multi-cylindrical equal-area map projection of the reference ellipsoid is presented. The multi-cylindrical equal-area map projection of the reference ellipsoid has been derived and is described in detail for the first time. Ellipsoidal mass elements with various sizes on the surface of the reference ellipsoid are selected and the gravitational potential and vector of gravitational intensity (i.e. gravitational acceleration) of the mass elements are computed via numerical solution of the Newton integral in terms of geodetic coordinates {λ,ϕ,h}. Four base-edge points of the ellipsoidal mass elements are transformed into a multi-cylindrical equal-area map projection surface to build Cartesian mass elements by associating the height of the corresponding ellipsoidal mass elements to the transformed area elements. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the gravitational potential and vector of gravitational intensity of the transformed Cartesian mass elements are computed and compared with those of the numerical solution of the Newton integral for the ellipsoidal mass elements in terms of geodetic coordinates. Numerical tests indicate that the difference between the two computations, i.e. the numerical solution of the Newton integral for ellipsoidal mass elements in terms of geodetic coordinates and the closed-form solution of the Newton integral in terms of Cartesian coordinates in a multi-cylindrical equal-area map projection, is less than 1.6×10⁻⁸ m²/s² for a mass element with a cross-section area of 10×10 m and a height of 10,000 m. For a mass element with a cross-section area of 1×1 km and a height of 10,000 m the difference is less than 1.5×10⁻⁴ m²/s². Since 1.5×10⁻⁴ m²/s² is equivalent to 1.5×10⁻⁵ m in the vertical direction, it can be concluded that a method for terrain correction (or local gravity field modeling) based on the closed-form solution of the Newton integral in terms of Cartesian coordinates of a multi-cylindrical equal-area map projection of the reference ellipsoid has been developed which has the accuracy of terrain correction (or local gravity field modeling) based on the Newton integral in terms of ellipsoidal coordinates.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahoo, Satiprasad; Dhar, Anirban, E-mail: anirban.dhar@gmail.com; Kar, Amlanjyoti

    Environmental management of an area describes a policy for its systematic and sustainable environmental protection. In the present study, regional environmental vulnerability assessment in the Hirakud command area of Odisha, India is envisaged based on the Grey Analytic Hierarchy Process method (Grey–AHP) using integrated remote sensing (RS) and geographic information system (GIS) techniques. Grey–AHP combines the advantages of the classical analytic hierarchy process (AHP) and the grey clustering method for accurate estimation of weight coefficients. It is a new method for environmental vulnerability assessment. The environmental vulnerability index (EVI) uses natural, environmental and human-impact-related factors, e.g., soil, geology, elevation, slope, rainfall, temperature, wind speed, normalized difference vegetation index, drainage density, crop intensity, agricultural DRASTIC value, population density and road density. The EVI map has been classified into four environmental vulnerability zones (EVZs), namely 'low', 'moderate', 'high', and 'extreme', encompassing 17.87%, 44.44%, 27.81% and 9.88% of the study area, respectively. The EVI map indicates that the northern part of the study area is more vulnerable from an environmental point of view. The EVI map shows close correlation with elevation. The effectiveness of the zone classification is evaluated using the grey clustering method. General effectiveness is between the "better" and "common" classes. This analysis demonstrates the potential applicability of the methodology. - Highlights: • Environmental vulnerability zone identification based on Grey Analytic Hierarchy Process (AHP) • The effectiveness evaluation by means of a grey clustering method with support from AHP • Use of grey approach eliminates the excessive dependency on the experience of experts.

  12. Covariance and correlation estimation in electron-density maps.

    PubMed

    Altomare, Angela; Cuocci, Corrado; Giacovazzo, Carmelo; Moliterni, Anna; Rizzi, Rosanna

    2012-03-01

    Quite recently two papers have been published [Giacovazzo & Mazzone (2011). Acta Cryst. A67, 210-218; Giacovazzo et al. (2011). Acta Cryst. A67, 368-382] which calculate the variance in any point of an electron-density map at any stage of the phasing process. The main aim of the papers was to associate a standard deviation to each pixel of the map, in order to obtain a better estimate of the map reliability. This paper deals with the covariance estimate between points of an electron-density map in any space group, centrosymmetric or non-centrosymmetric, no matter the correlation between the model and target structures. The aim is as follows: to verify if the electron density in one point of the map is amplified or depressed as an effect of the electron density in one or more other points of the map. High values of the covariances are usually connected with undesired features of the map. The phases are the primitive random variables of our probabilistic model; the covariance changes with the quality of the model and therefore with the quality of the phases. The conclusive formulas show that the covariance is also influenced by the Patterson map. Uncertainty on measurements may influence the covariance, particularly in the final stages of the structure refinement; a general formula is obtained taking into account both phase and measurement uncertainty, valid at any stage of the crystal structure solution.

  13. Meteoroid, and debris special investigation group preliminary results: Size-frequency distribution and spatial density of large impact features on LDEF

    NASA Technical Reports Server (NTRS)

    See, Thomas H.; Hoerz, Friedrich; Zolensky, Michael E.; Allbrooks, Martha K.; Atkinson, Dale R.; Simon, Charles G.

    1992-01-01

    All craters greater than or equal to 500 microns and penetration holes greater than or equal to 300 microns in diameter on the entire Long Duration Exposure Facility (LDEF) were documented. Summarized here are the observations on the LDEF frame, which exposed aluminum 6061-T6 in 26 specific directions relative to LDEF's velocity vector. In addition, the opportunity arose to characterize the penetration holes in the A0178 thermal blankets, which pointed in nine directions. For each of the 26 directions, LDEF provided time-area products that approach those afforded by all previous space-retrieved materials combined. The objective here is to provide a factual database pertaining to the largest collisional events on the entire LDEF spacecraft with a minimum of interpretation. This database may serve to encourage and guide more interpretative efforts and modeling attempts.

  14. Building the atomic model of a boreal lake virus of unknown fold in a 3.9 Å cryo-EM map.

    PubMed

    De Colibus, Luigi; Stuart, David I

    2018-04-01

    We report here the protocol adopted to build the atomic model of the newly discovered virus FLiP (Flavobacterium infecting, lipid-containing phage) into 3.9 Å cryo-electron microscopy (cryo-EM) maps. In particular, this report discusses the combination of density modification procedures, automatic model building and bioinformatics tools applied to guide the tracing of the major capsid protein (MCP) of this virus. The protocol outlined here may serve as a reference for future structural determination by cryo-EM of viruses lacking detectable structural homologues. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Construction of a high-density high-resolution genetic map and its integration with BAC-based physical map in channel catfish

    USDA-ARS?s Scientific Manuscript database

    Construction of genetic linkage map is essential for genetic and genomic studies. Recent advances in sequencing and genotyping technologies made it possible to generate high-density and high-resolution genetic linkage maps, especially for the organisms lacking extensive genomic resources. In the pre...

  16. Improved quantitation and reproducibility in multi-PET/CT lung studies by combining CT information.

    PubMed

    Holman, Beverley F; Cuplov, Vesna; Millner, Lynn; Endozo, Raymond; Maher, Toby M; Groves, Ashley M; Hutton, Brian F; Thielemans, Kris

    2018-06-05

    Matched attenuation maps are vital for obtaining accurate and reproducible kinetic and static parameter estimates from PET data. With increased interest in PET/CT imaging of diffuse lung diseases for assessing disease progression and treatment effectiveness, understanding the extent of the effect of respiratory motion and establishing methods for correction are becoming more important. In a previous study, we have shown that using the wrong attenuation map leads to large errors due to density mismatches in the lung, especially in dynamic PET scans. Here, we extend this work to the case where the study is sub-divided into several scans, e.g. for patient comfort, each with its own CT (cine-CT and 'snap shot' CT). A method to combine multi-CT information into a combined-CT has then been developed, which averages the CT information from each study section to produce composite CT images with the lung density more representative of that in the PET data. This combined-CT was applied to nine patients with idiopathic pulmonary fibrosis, imaged with dynamic ¹⁸F-FDG PET/CT to determine the improvement in the precision of the parameter estimates. Using XCAT simulations, errors in the influx rate constant were found to be as high as 60% in multi-PET/CT studies. Analysis of patient data identified displacements between study sections in the time activity curves, which led to an average standard error in the estimates of the influx rate constant of 53% with conventional methods. This reduced to within 5% after use of combined-CTs for attenuation correction of the study sections. Use of combined-CTs to reconstruct the sections of a multi-PET/CT study, as opposed to using the individually acquired CTs at each study stage, produces more precise parameter estimates and may improve discrimination between diseased and normal lung.
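
    A minimal sketch of the combined-CT idea described above: co-registered CT volumes from the individual study sections are averaged voxel-wise so that the lung density used for attenuation correction better matches the time-averaged PET data. Registration, HU-to-attenuation conversion and gating are omitted, and the names are assumptions:

    ```python
    import numpy as np

    def combined_ct(ct_volumes):
        """Average a list of co-registered CT volumes (in HU) into one composite volume.

        ct_volumes : list of 3D numpy arrays with identical shape, already aligned.
        The voxel-wise mean is intended as a lung-density map more representative of
        the whole multi-section PET acquisition than any single CT.
        """
        stack = np.stack(ct_volumes, axis=0).astype(float)
        return stack.mean(axis=0)

    # Hypothetical usage: mu_map = hu_to_mu(combined_ct([ct_sec1, ct_sec2, ct_sec3])),
    # where hu_to_mu is whatever HU-to-511 keV attenuation conversion the scanner applies.
    ```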

  17. Development and validation of a critical gradient energetic particle driven Alfven eigenmode transport model for DIII-D tilted neutral beam experiments

    DOE PAGES

    Waltz, Ronald E.; Bass, Eric M.; Heidbrink, William W.; ...

    2015-10-30

    Recent experiments with the DIII-D tilted neutral beam injection (NBI) varying the beam energetic particle (EP) source profiles have provided strong evidence that unstable Alfven eigenmodes (AE) drive stiff EP transport at a critical EP density gradient. Here the critical gradient is identified by the local AE growth rate being equal to the local ITG/TEM growth rate at the same low toroidal mode number. The growth rates are taken from the gyrokinetic code GYRO. Simulations show that the slowing-down, beam-like EP distribution has a slightly lower critical gradient than the Maxwellian. The ALPHA EP density transport code, used to validate the model, combines the low-n stiff EP critical-density-gradient AE mid-core transport with the energy-independent high-n ITG/TEM density transport model controlling the central-core EP density profile. For the on-axis NBI-heated DIII-D shot 146102, while the net loss to the edge is small, about half the birth fast ions are transported from the central core r/a < 0.5 and the central density is about half the slowing-down density. Lastly, these results are in good agreement with experimental fast-ion pressure profiles inferred from MSE-constrained EFIT equilibria.

  18. Study of Equatorial Ionospheric irregularities and Mapping of Electron Density Profiles and Ionograms

    DTIC Science & Technology

    2012-03-09

    equation is a product of a complex basis vector in Jackson and a linear combination of plane wave functions. We convert both the amplitudes and the...wave function arguments from complex scalars to complex vectors. This conversion allows us to separate the electric field vector and the imaginary...magnetic field vector, because exponentials of imaginary scalars convert vectors to imaginary vectors and vice versa, while exponentials of imaginary

  19. A high-density genetic map and growth related QTL mapping in bighead carp (Hypophthalmichthys nobilis)

    PubMed Central

    Fu, Beide; Liu, Haiyang; Yu, Xiaomu; Tong, Jingou

    2016-01-01

    Growth-related traits in fish are controlled by quantitative trait loci (QTL), but no QTL for growth have been detected in bighead carp (Hypophthalmichthys nobilis) due to the lack of a high-density genetic map. In this study, an ultra-high density genetic map was constructed with 3,121 SNP markers by sequencing 117 individuals in an F1 family using 2b-RAD technology. The total length of the map was 2341.27 cM, with an average marker interval of 0.75 cM. A high level of genomic synteny between our map and zebrafish was detected. Based on this genetic map, one genome-wide significant and 37 suggestive QTL for five growth-related traits were identified in 6 linkage groups (i.e. LG3, LG11, LG15, LG18, LG19, LG22). The phenotypic variance explained (PVE) by these QTL varied from 15.4% to 38.2%. The marker within the significant QTL region was surrounded by CRP1 and CRP2, which play an important role in muscle cell division. This high-density map and the QTL information provide a solid base for QTL fine mapping and comparative genomics in bighead carp. PMID:27345016

  20. Contribution of Equal-Sign Instruction beyond Word-Problem Tutoring for Third-Grade Students with Mathematics Difficulty

    PubMed Central

    Powell, Sarah R.; Fuchs, Lynn S.

    2010-01-01

    Elementary school students often misinterpret the equal sign (=) as an operational rather than a relational symbol. Such misunderstanding is problematic because solving equations with missing numbers may be important for higher-order mathematics skills including word problems. Research indicates equal-sign instruction can alter how typically-developing students use the equal sign, but no study has examined effects for students with mathematics difficulty (MD) or how equal-sign instruction contributes to word-problem skill for students with or without MD. The present study assessed the efficacy of equal-sign instruction within word-problem tutoring. Third-grade students with MD (n = 80) were assigned to word-problem tutoring, word-problem tutoring plus equal-sign instruction (combined) tutoring, or no-tutoring control. Combined tutoring produced better improvement on equal sign tasks and open equations compared to the other 2 conditions. On certain forms of word problems, combined tutoring but not word-problem tutoring alone produced better improvement than control. When compared at posttest to 3rd-grade students without MD on equal sign tasks and open equations, only combined tutoring students with MD performed comparably. PMID:20640240

  1. Combining global positioning system and accelerometer data to determine the locations of physical activity in children.

    PubMed

    Oreskovic, Nicolas M; Blossom, Jeff; Field, Alison E; Chiang, Sylvia R; Winickoff, Jonathan P; Kleinman, Ronald E

    2012-05-01

    National trends indicate that children and adolescents are not achieving sufficient levels of physical activity. Combining global positioning system (GPS) technology with accelerometers has the potential to provide an objective determination of the locations where youth engage in physical activity. The aim of this study was to identify the optimal methods for collecting combined accelerometer and GPS data in youth, to best locate where children spend time and are physically active. A convenience sample of 24 middle-school children in Massachusetts was included. Accelerometers and GPS units were used to quantify and locate childhood physical activity over 5 weekdays and 2 weekend days. Accelerometer and GPS data were joined by time and mapped with a geographical information system (GIS) using ArcGIS software. Data were collected in winter, spring and summer of 2009-2010, yielding a total of 26,406 matched datapoints overall. Matched data yield was low (19.1% total), regardless of season (winter, 12.8%; spring, 30.1%; summer, 14.3%). Teacher-provided, pre-charged equipment yielded the most matched data (30.1%; range: 10.1-52.3%) and the greatest average number of days of data (6.1 days). Across all seasons, children spent most of their time at home. Outdoor use patterns appeared to vary by season, with street use increasing in spring, and park and playground use increasing in summer. Children spent equal amounts of physical activity time at home and walking in the streets. Overall, the various methods for combining GPS and accelerometer data provided similarly low amounts of combined data. No combined GPS and accelerometer data collection method proved superior in every data return category, but use of GIS to map joined accelerometer and GPS data can demarcate childhood physical activity locations.
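
    The central data step described above, joining accelerometer epochs and GPS fixes on timestamps before mapping them in a GIS, can be sketched with pandas. The column names and the 30-second matching tolerance are illustrative assumptions, not the study's protocol:

    ```python
    import pandas as pd

    accel = pd.DataFrame({
        "time": pd.to_datetime(["2010-06-01 10:00:00", "2010-06-01 10:00:30"]),
        "counts": [512, 1480],                       # accelerometer activity counts
    }).sort_values("time")

    gps = pd.DataFrame({
        "time": pd.to_datetime(["2010-06-01 10:00:05", "2010-06-01 10:00:33"]),
        "lat": [42.37, 42.38],
        "lon": [-71.11, -71.12],
    }).sort_values("time")

    # Match each accelerometer epoch to the nearest GPS fix within 30 s;
    # epochs without a fix are dropped, which is what produces the low
    # matched-data yield discussed in the abstract.
    matched = pd.merge_asof(accel, gps, on="time",
                            direction="nearest",
                            tolerance=pd.Timedelta("30s")).dropna()
    print(matched)  # time, counts, lat, lon rows ready for GIS mapping
    ```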

  2. [Beneficial effect of thyrotropin-releasing hormone in combination with HSD on hemorrhagic shock with pulmonary edema at high altitude in the rat].

    PubMed

    Hu, De-yao; Liu, Liang-ming; Li, Ping; Liu, Jian-cang; Liu, Hou-dong; He, Yan-mei; Huo, Xiao-ping; Tian, Kun-lun; Shi, Quan-gui; Xiao, Nan; Zhou, Xue-wu

    2003-05-01

    To study the effects of thyrotropin-releasing hormone (TRH) in combination with hypertonic saline/dextran (7.5% NaCl + 6% Dextran 40, HSD) on hemorrhagic shock with pulmonary edema in rats recently brought to high altitude. Forty-nine SD rats, transported to Lhasa, Tibet, 3,760 meters above sea level, were anesthetized one week later with sodium pentobarbital (30 mg/kg, intraperitoneal). Hemorrhagic shock with pulmonary edema was induced by hemorrhage (50 mm Hg maintained for 1 hour; 1 mm Hg = 0.133 kPa) plus intravenous injection of oleic acid (50 microl/kg). The rats were equally divided into seven groups (n=7): normal control, hemorrhagic shock, hemorrhagic shock with pulmonary edema (HSPE), HSPE plus TRH (5 mg/kg), HSPE plus HSD (4 ml/kg), and HSPE plus TRH and HSD in combination. Hemodynamic parameters including mean arterial blood pressure (MAP), left intraventricular systolic pressure (LVSP) and the maximal rate of intraventricular pressure rise or decline (+/- dp/dt max) were observed at 15, 30, 60 and 120 minutes, blood gases were analyzed at 30 and 120 minutes, and the water content of lung and brain was determined at 120 minutes after drug administration. TRH or HSD used alone or in combination significantly increased MAP, LVSP and +/- dp/dt max (P<0.05 or P<0.01), ameliorated acid-base imbalance, and decreased the water content of lung and brain. The effect of the two in combination was superior to either drug used alone. TRH in combination with HSD can be used in the treatment of hemorrhagic shock with pulmonary edema at high altitude.

  3. Fast flow-based algorithm for creating density-equalizing map projections

    PubMed Central

    Gastner, Michael T.; Seguy, Vivien; More, Pratyush

    2018-01-01

    Cartograms are maps that rescale geographic regions (e.g., countries, districts) such that their areas are proportional to quantitative demographic data (e.g., population size, gross domestic product). Unlike conventional bar or pie charts, cartograms can represent correctly which regions share common borders, resulting in insightful visualizations that can be the basis for further spatial statistical analysis. Computer programs can assist data scientists in preparing cartograms, but developing an algorithm that can quickly transform every coordinate on the map (including points that are not exactly on a border) while generating recognizable images has remained a challenge. Methods that translate the cartographic deformations into physics-inspired equations of motion have become popular, but solving these equations with sufficient accuracy can still take several minutes on current hardware. Here we introduce a flow-based algorithm whose equations of motion are numerically easier to solve compared with previous methods. The equations allow straightforward parallelization so that the calculation takes only a few seconds even for complex and detailed input. Despite the speedup, the proposed algorithm still keeps the advantages of previous techniques: With comparable quantitative measures of shape distortion, it accurately scales all areas, correctly fits the regions together, and generates a map projection for every point. We demonstrate the use of our algorithm with applications to the 2016 US election results, the gross domestic products of Indian states and Chinese provinces, and the spatial distribution of deaths in the London borough of Kensington and Chelsea between 2011 and 2014. PMID:29463721
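
    For a concrete sense of how density-equalizing projections work, the sketch below implements the older diffusion-based cartogram idea (Gastner-Newman style): the density field is relaxed by diffusion and map points are advected with the velocity v = -∇ρ/ρ, so dense regions expand and sparse regions shrink. It is not the fast flow-based solver introduced in the abstract, and the grid handling is deliberately crude:

    ```python
    import numpy as np

    def diffusion_cartogram(density, points, n_steps=200, dt=1e-3):
        """Crude density-equalizing transform on a periodic grid.

        density : 2D array of strictly positive values (e.g., rasterized population).
        points  : (N, 2) array of (row, col) coordinates to displace.
        Returns the displaced coordinates after n_steps diffusion/advection steps.
        """
        rho = density.astype(float).copy()
        pts = points.astype(float).copy()
        ny, nx = rho.shape
        ky = 2 * np.pi * np.fft.fftfreq(ny)
        kx = 2 * np.pi * np.fft.fftfreq(nx)
        k2 = ky[:, None] ** 2 + kx[None, :] ** 2

        for _ in range(n_steps):
            # implicit diffusion step in Fourier space: rho_hat /= (1 + dt * k^2)
            rho = np.real(np.fft.ifft2(np.fft.fft2(rho) / (1.0 + dt * k2)))
            # velocity field of the diffusing density: v = -grad(rho) / rho
            gy, gx = np.gradient(rho)
            vy, vx = -gy / rho, -gx / rho
            # advect the tracked points using the velocity of the nearest grid cell
            iy = np.clip(np.round(pts[:, 0]).astype(int), 0, ny - 1)
            ix = np.clip(np.round(pts[:, 1]).astype(int), 0, nx - 1)
            pts[:, 0] += dt * vy[iy, ix]
            pts[:, 1] += dt * vx[iy, ix]
        return pts
    ```

    In a real cartogram the density grid would be rasterized from the regional statistic being visualized and the displaced points would be the polygon vertices of the map regions; the flow-based algorithm of the paper achieves the same end with equations of motion that are cheaper to integrate.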

  4. Construction of a High-Density Genetic Map from RNA-Seq Data for an Arabidopsis Bay-0 × Shahdara RIL Population

    PubMed Central

    Serin, Elise A. R.; Snoek, L. B.; Nijveen, Harm; Willems, Leo A. J.; Jiménez-Gómez, Jose M.; Hilhorst, Henk W. M.; Ligterink, Wilco

    2017-01-01

    High-density genetic maps are essential for high-resolution mapping of quantitative traits. Here, we present a new genetic map for an Arabidopsis Bayreuth × Shahdara recombinant inbred line (RIL) population, built on RNA-seq data. RNA-seq analysis on 160 RILs of this population identified 30,049 single-nucleotide polymorphisms (SNPs) covering the whole genome. Based on a 100-kbp window SNP binning method, 1059 bin-markers were identified, physically anchored on the genome. The total length of the RNA-seq genetic map spans 471.70 centimorgans (cM) with an average marker distance of 0.45 cM and a maximum marker distance of 4.81 cM. This high-resolution genotyping revealed new recombination breakpoints in the population. To highlight the advantages of such a high-density map, we compared it to two publicly available genetic maps for the same population, comprising 69 PCR-based markers and 497 gene expression markers derived from microarray data, respectively. In this study, we show that SNP markers can effectively be derived from RNA-seq data. The new RNA-seq map closes many existing gaps in marker coverage, saturating the previously available genetic maps. Quantitative trait locus (QTL) analysis for published phenotypes using the available genetic maps showed increased QTL mapping resolution and reduced QTL confidence interval using the RNA-seq map. The new high-density map is a valuable resource that facilitates the identification of candidate genes and map-based cloning approaches. PMID:29259624
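
    A minimal sketch of the 100-kbp window binning step described above: SNP calls are grouped by chromosome and 100-kbp window, and each window is collapsed to a single bin genotype per RIL (here by majority vote across the SNPs in the window). The column names and the majority-vote rule are assumptions for illustration:

    ```python
    import pandas as pd

    def bin_genotypes(snps, window_bp=100_000):
        """Collapse per-SNP genotype calls into bin markers.

        snps : DataFrame with columns 'chrom', 'pos', 'line', 'genotype', where
               genotype is coded 0 (Bay-0-like) or 1 (Shahdara-like).
        Returns one row per (chrom, bin, line) carrying the majority genotype.
        """
        snps = snps.assign(bin=snps["pos"] // window_bp)
        return (snps.groupby(["chrom", "bin", "line"])["genotype"]
                    .agg(lambda g: int(g.mean() >= 0.5))
                    .reset_index(name="bin_genotype"))
    ```

    The resulting bin markers, one per 100-kbp window with data, are what get placed on the genetic map.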

  5. Effective electron-density map improvement and structure validation on a Linux multi-CPU web cluster: The TB Structural Genomics Consortium Bias Removal Web Service.

    PubMed

    Reddy, Vinod; Swanson, Stanley M; Segelke, Brent; Kantardjieff, Katherine A; Sacchettini, James C; Rupp, Bernhard

    2003-12-01

    Anticipating a continuing increase in the number of structures solved by molecular replacement in high-throughput crystallography and drug-discovery programs, a user-friendly web service for automated molecular replacement, map improvement, bias removal and real-space correlation structure validation has been implemented. The service is based on an efficient bias-removal protocol, Shake&wARP, and implemented using EPMR and the CCP4 suite of programs, combined with various shell scripts and Fortran90 routines. The service returns improved maps, converted data files and real-space correlation and B-factor plots. User data are uploaded through a web interface and the CPU-intensive iteration cycles are executed on a low-cost Linux multi-CPU cluster using the Condor job-queuing package. Examples of map improvement at various resolutions are provided and include model completion and reconstruction of absent parts, sequence correction, and ligand validation in drug-target structures.

  6. Mapping Physiological Suitability Limits for Malaria in Africa Under Climate Change.

    PubMed

    Ryan, Sadie J; McNally, Amy; Johnson, Leah R; Mordecai, Erin A; Ben-Horin, Tal; Paaijmans, Krijn; Lafferty, Kevin D

    2015-12-01

    We mapped current and future temperature suitability for malaria transmission in Africa using a published model that incorporates nonlinear physiological responses to temperature of the mosquito vector Anopheles gambiae and the malaria parasite Plasmodium falciparum. We found that a larger area of Africa currently experiences the ideal temperature for transmission than previously supposed. Under future climate projections, we predicted a modest increase in the overall area suitable for malaria transmission, but a net decrease in the most suitable area. Combined with human population density projections, our maps suggest that areas with temperatures suitable for year-round, highest-risk transmission will shift from coastal West Africa to the Albertine Rift between the Democratic Republic of Congo and Uganda, whereas areas with seasonal transmission suitability will shift toward sub-Saharan coastal areas. Mapping temperature suitability places important bounds on malaria transmissibility and, along with local level demographic, socioeconomic, and ecological factors, can indicate where resources may be best spent on malaria control.

  7. Mapping physiological suitability limits for malaria in Africa under climate change

    USGS Publications Warehouse

    Ryan, Sadie J.; McNally, Amy; Johnson, Leah R.; Mordecai, Erin A.; Ben-Horin, Tal; Paaijmans, Krijn P.; Lafferty, Kevin D.

    2015-01-01

    We mapped current and future temperature suitability for malaria transmission in Africa using a published model that incorporates nonlinear physiological responses to temperature of the mosquito vector Anopheles gambiae and the malaria parasite Plasmodium falciparum. We found that a larger area of Africa currently experiences the ideal temperature for transmission than previously supposed. Under future climate projections, we predicted a modest increase in the overall area suitable for malaria transmission, but a net decrease in the most suitable area. Combined with human population density projections, our maps suggest that areas with temperatures suitable for year-round, highest-risk transmission will shift from coastal West Africa to the Albertine Rift between the Democratic Republic of Congo and Uganda, whereas areas with seasonal transmission suitability will shift toward sub-Saharan coastal areas. Mapping temperature suitability places important bounds on malaria transmissibility and, along with local level demographic, socioeconomic, and ecological factors, can indicate where resources may be best spent on malaria control.

  8. Theory of Parabolic Arcs in Interstellar Scintillation Spectra

    NASA Astrophysics Data System (ADS)

    Cordes, James M.; Rickett, Barney J.; Stinebring, Daniel R.; Coles, William A.

    2006-01-01

    Interstellar scintillation (ISS), observed as time variation in the intensity of a compact radio source, is caused by small-scale structure in the electron density of the interstellar plasma. Dynamic spectra of ISS show modulation in radio frequency and time. Here we relate the (two-dimensional) power spectrum of the dynamic spectrum, the secondary spectrum, to the scattered image of the source. Recent work has identified remarkable parabolic arcs in secondary spectra. Each point in a secondary spectrum corresponds to interference between points in the scattered image with a certain Doppler shift and a certain delay. The parabolic arc corresponds to the quadratic relation between differential Doppler shift and delay through their common dependence on scattering angle. We show that arcs will occur in all media that scatter significant power at angles larger than the rms angle. Thus, effects such as source diameter, steep spectra, and dissipation scales, which truncate high-angle scattering, also truncate arcs. Arcs are equally visible in simulations of nondispersive scattering. They are enhanced by anisotropic scattering when the spatial structure is elongated perpendicular to the velocity. In weak scattering the secondary spectrum is directly mapped from the scattered image, and this mapping can be inverted. We discuss additional observed phenomena including multiple arcs and reverse arclets oriented oppositely to the main arc. These phenomena persist for many refractive scattering times, suggesting that they are due to large-scale density structures, rather than low-frequency components of Kolmogorov turbulence.
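
    The quadratic relation behind the arcs can be written in one line for a thin scattering screen at effective distance D, with effective transverse velocity V and wavelength λ; a standard small-angle sketch, consistent with but not copied from the paper:

    ```latex
    f_{\mathrm{D}} \simeq \frac{V\,\theta_{x}}{\lambda},
    \qquad
    \tau \simeq \frac{D\,\theta^{2}}{2c}
    \;\;\Longrightarrow\;\;
    \tau \;\approx\; \frac{D\,\lambda^{2}}{2\,c\,V^{2}}\, f_{\mathrm{D}}^{2}
    \quad (\theta \approx \theta_{x}),
    ```

    so interference between the undeviated core and power scattered to angle θ traces a parabola in the (f_D, τ) plane.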

  9. Identification of QTLs associated with oil content in a high-oil Brassica napus cultivar and construction of a high-density consensus map for QTLs comparison in B. napus.

    PubMed

    Wang, Xiaodong; Wang, Hao; Long, Yan; Li, Dianrong; Yin, Yongtai; Tian, Jianhua; Chen, Li; Liu, Liezhao; Zhao, Weiguo; Zhao, Yajun; Yu, Longjiang; Li, Maoteng

    2013-01-01

    Increasing seed oil content is one of the most important goals in breeding of rapeseed (B. napus L.). To dissect the genetic basis of oil content in B. napus, a large and new double haploid (DH) population containing 348 lines was obtained from a cross between 'KenC-8' and 'N53-2', two varieties with >10% difference in seed oil content, and this population was named the KN DH population. A genetic linkage map consisting of 403 markers was constructed, which covered a total length of 1783.9 cM with an average marker interval of 4.4 cM. The KN DH population was phenotyped in eight natural environments and subjected to quantitative trait loci (QTL) analysis for oil content. A total of 63 QTLs explaining 2.64-17.88% of the phenotypic variation were identified, and these QTLs were further integrated into 24 consensus QTLs located on 11 chromosomes using meta-analysis. A high-density consensus map with 1335 marker loci was constructed by combining the KN DH map with seven other published maps based on the common markers. Of the 24 consensus QTLs in the KN DH population, 14 were new QTLs, including five new QTLs in the A genome and nine in the C genome. The analysis revealed that a larger population with significant differences in oil content gave higher power for detecting new QTLs for oil content, and the construction of the consensus map provided a new clue for comparing the QTLs detected in different populations. These findings enriched our knowledge of QTLs for oil content and should have potential for use in marker-assisted breeding of B. napus.

  10. Identification of QTLs Associated with Oil Content in a High-Oil Brassica napus Cultivar and Construction of a High-Density Consensus Map for QTLs Comparison in B. napus

    PubMed Central

    Long, Yan; Li, Dianrong; Yin, Yongtai; Tian, Jianhua; Chen, Li; Liu, Liezhao; Zhao, Weiguo; Zhao, Yajun; Yu, Longjiang; Li, Maoteng

    2013-01-01

    Increasing seed oil content is one of the most important goals in breeding of rapeseed (B. napus L.). To dissect the genetic basis of oil content in B. napus, a large and new double haploid (DH) population containing 348 lines was obtained from a cross between ‘KenC-8’ and ‘N53-2’, two varieties with >10% difference in seed oil content, and this population was named the KN DH population. A genetic linkage map consisting of 403 markers was constructed, which covered a total length of 1783.9 cM with an average marker interval of 4.4 cM. The KN DH population was phenotyped in eight natural environments and subjected to quantitative trait loci (QTL) analysis for oil content. A total of 63 QTLs explaining 2.64–17.88% of the phenotypic variation were identified, and these QTLs were further integrated into 24 consensus QTLs located on 11 chromosomes using meta-analysis. A high-density consensus map with 1335 marker loci was constructed by combining the KN DH map with seven other published maps based on the common markers. Of the 24 consensus QTLs in the KN DH population, 14 were new QTLs, including five new QTLs in the A genome and nine in the C genome. The analysis revealed that a larger population with significant differences in oil content gave higher power for detecting new QTLs for oil content, and the construction of the consensus map provided a new clue for comparing the QTLs detected in different populations. These findings enriched our knowledge of QTLs for oil content and should have potential for use in marker-assisted breeding of B. napus. PMID:24312482

  11. Improving experimental phases for strong reflections prior to density modification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.

    Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.

  12. Improving experimental phases for strong reflections prior to density modification

    DOE PAGES

    Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.; ...

    2013-09-20

    Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
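
    The optimization step described above can be illustrated with a toy sketch: score trial phases for a few strong reflections by the skewness of the resulting density map. This is not the SISA program; the 1-D lattice, amplitudes and the crude random search below are stand-ins for the real 3-D maps and genetic algorithm.

```python
# Toy sketch of a skewness-targeted phase search (assumed setup, not SISA).
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
n_grid, n_refl = 128, 20                 # hypothetical 1-D grid and reflection count
h = np.arange(1, n_refl + 1)             # "Miller indices" of the toy lattice
amp = rng.uniform(0.5, 2.0, n_refl)      # observed amplitudes (toy values)
amp[:3] *= 5.0                           # the first three act as the strong reflections

def density_map(phases):
    """Synthesize a real-space density map from amplitudes and trial phases."""
    F = np.zeros(n_grid, dtype=complex)
    F[h] = amp * np.exp(1j * phases)
    F[-h] = np.conj(F[h])                # Friedel symmetry so the map is real
    return np.fft.ifft(F).real

def skewness_target(strong_phases, base_phases):
    """Score a candidate: skewness of the map with trial strong-reflection phases."""
    trial = base_phases.copy()
    trial[:3] = strong_phases
    return skew(density_map(trial))

# Crude stochastic search standing in for the genetic algorithm: keep the best
# of random perturbations of the strong-reflection phases.
base = rng.uniform(0, 2 * np.pi, n_refl)       # centroid-like starting phases
best, best_score = base[:3].copy(), skewness_target(base[:3], base)
for _ in range(200):
    cand = best + rng.normal(0, 0.5, 3)
    score = skewness_target(cand, base)
    if score > best_score:
        best, best_score = cand, score
print(f"best map skewness after search: {best_score:.3f}")
```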

  13. Urban local climate zone mapping and apply in urban environment study

    NASA Astrophysics Data System (ADS)

    He, Shan; Zhang, Yunwei; Zhang, Jili

    2018-02-01

    The city’s local climate zone (LCZ) is considered a powerful tool for urban climate mapping. However, for cities in different countries and regions, LCZ division methods and results differ, so targeted research is needed. In the current work, an LCZ mapping method is proposed that is convenient in operation and oriented towards city planning. In this method, the local climate zoning types were first adjusted according to the characteristics of Chinese cities, which have more tall buildings and higher building density. Then the remote sensing-based classification method proposed by WUDAPT was applied to the city of Xi’an, as an example, for LCZ mapping. Combined with the city road network, a reasonable expression of the zoning results was provided to suit city planning practice, in which land parcels are usually treated as the basic unit. The proposed method was validated against actual land use and construction data surveyed in Xi’an, with results indicating its feasibility for urban LCZ mapping in China.
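
    The WUDAPT-style classification step mentioned above is, at heart, a supervised per-pixel classification of stacked satellite bands. The sketch below assumes training areas have already been digitized and rasterized to labels; the band values, label codes and random-forest settings are placeholders, not the Xi'an data or the exact WUDAPT workflow.

```python
# Minimal per-pixel LCZ classification sketch (assumed inputs, illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_pixels, n_bands = 5000, 6             # e.g. stacked Landsat reflectance bands
X = rng.random((n_pixels, n_bands))     # placeholder spectral features
y = rng.integers(0, 10, n_pixels)       # placeholder LCZ class labels (10 types)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:4000], y[:4000])             # train on the labelled pixels
lcz_map = clf.predict(X[4000:])         # classify the remaining pixels
print("predicted LCZ classes:", np.unique(lcz_map))
```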

  14. Genotyping-by-Sequencing derived High-Density Linkage Map and its Application to QTL Mapping of Flag Leaf Traits in Bread Wheat

    USDA-ARS?s Scientific Manuscript database

    Hard red winter wheat parents ‘Harry’ (drought tolerant) and ‘Wesley’ (drought susceptible) were used to develop a recombinant inbred population to identify genomic regions associated with drought and adaptation. To precisely map genomic regions, high-density linkage maps are a prerequisite. In this s...

  15. Case studies combined with or without concept maps improve critical thinking in hospital-based nurses: a randomized-controlled trial.

    PubMed

    Huang, Yu-Chuan; Chen, Hsing-Hsia; Yeh, Mei-Ling; Chung, Yu-Chu

    2012-06-01

    Critical thinking (CT) is essential to the exercise of professional judgment. As nurses face increasingly complex health-care situations, critical thinking can promote appropriate clinical decision-making and improve the quality of nursing care. This study aimed to evaluate the effects of a program of case studies, alone (CS) or combined with concept maps (CSCM), on improving CT in clinical nurses. The study was a randomized controlled trial. The experimental group participated in a 16-week CSCM program, whereas the control group participated in a CS program of equal duration. A multistage randomization process was used to select and assign participants, ultimately resulting in 67 nurses in each group. Data were collected before and after the program using the California Critical Thinking Skill Test (CCTST) and the California Critical Thinking Disposition Inventory (CCTDI). After the programs, there were significant differences between the two groups in the critical thinking skills of analysis, evaluation, inference, deduction, and induction. There was also an overall significant difference, and a significant difference in the specific disposition of open-mindedness. This study supports the application of case studies combined with concept maps as a hospital-based teaching strategy to promote the development of critical thinking skills and dispositions in nurses. The CSCM program resulted in greater improvements in all critical thinking skills, as well as in the overall and open-minded affective dispositions toward critical thinking, compared with case studies alone. The most obvious improvement in the CSCM participants was in analytic skill and disposition. Further longitudinal studies and data collection from multisite evaluations in a range of geographic locales are warranted. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Depth Estimation of Submerged Aquatic Vegetation in Clear Water Streams Using Low-Altitude Optical Remote Sensing

    PubMed Central

    Visser, Fleur; Buis, Kerst; Verschoren, Veerle; Meire, Patrick

    2015-01-01

    UAVs and other low-altitude remote sensing platforms are proving very useful tools for remote sensing of river systems. Currently consumer grade cameras are still the most commonly used sensors for this purpose. In particular, progress is being made to obtain river bathymetry from the optical image data collected with such cameras, using the strong attenuation of light in water. No studies have yet applied this method to map submergence depth of aquatic vegetation, which has rather different reflectance characteristics from river bed substrate. This study therefore looked at the possibilities to use the optical image data to map submerged aquatic vegetation (SAV) depth in shallow clear water streams. We first applied the Optimal Band Ratio Analysis method (OBRA) of Legleiter et al. (2009) to a dataset of spectral signatures from three macrophyte species in a clear water stream. The results showed that for each species the ratio of certain wavelengths were strongly associated with depth. A combined assessment of all species resulted in equally strong associations, indicating that the effect of spectral variation in vegetation is subsidiary to spectral variation due to depth changes. Strongest associations (R2-values ranging from 0.67 to 0.90 for different species) were found for combinations including one band in the near infrared (NIR) region between 825 and 925 nm and one band in the visible light region. Currently data of both high spatial and spectral resolution is not commonly available to apply the OBRA results directly to image data for SAV depth mapping. Instead a novel, low-cost data acquisition method was used to obtain six-band high spatial resolution image composites using a NIR sensitive DSLR camera. A field dataset of SAV submergence depths was used to develop regression models for the mapping of submergence depth from image pixel values. Band (combinations) providing the best performing models (R2-values up to 0.77) corresponded with the OBRA findings. A 10% error was achieved under sub-optimal data collection conditions, which indicates that the method could be suitable for many SAV mapping applications. PMID:26437410

  17. Depth Estimation of Submerged Aquatic Vegetation in Clear Water Streams Using Low-Altitude Optical Remote Sensing.

    PubMed

    Visser, Fleur; Buis, Kerst; Verschoren, Veerle; Meire, Patrick

    2015-09-30

    UAVs and other low-altitude remote sensing platforms are proving very useful tools for remote sensing of river systems. Currently consumer grade cameras are still the most commonly used sensors for this purpose. In particular, progress is being made to obtain river bathymetry from the optical image data collected with such cameras, using the strong attenuation of light in water. No studies have yet applied this method to map submergence depth of aquatic vegetation, which has rather different reflectance characteristics from river bed substrate. This study therefore looked at the possibilities to use the optical image data to map submerged aquatic vegetation (SAV) depth in shallow clear water streams. We first applied the Optimal Band Ratio Analysis method (OBRA) of Legleiter et al. (2009) to a dataset of spectral signatures from three macrophyte species in a clear water stream. The results showed that for each species the ratio of certain wavelengths were strongly associated with depth. A combined assessment of all species resulted in equally strong associations, indicating that the effect of spectral variation in vegetation is subsidiary to spectral variation due to depth changes. Strongest associations (R²-values ranging from 0.67 to 0.90 for different species) were found for combinations including one band in the near infrared (NIR) region between 825 and 925 nm and one band in the visible light region. Currently data of both high spatial and spectral resolution is not commonly available to apply the OBRA results directly to image data for SAV depth mapping. Instead a novel, low-cost data acquisition method was used to obtain six-band high spatial resolution image composites using a NIR sensitive DSLR camera. A field dataset of SAV submergence depths was used to develop regression models for the mapping of submergence depth from image pixel values. Band (combinations) providing the best performing models (R²-values up to 0.77) corresponded with the OBRA findings. A 10% error was achieved under sub-optimal data collection conditions, which indicates that the method could be suitable for many SAV mapping applications.
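
    The core of the OBRA step described above is a regression of depth against the logarithm of a band ratio. The sketch below uses synthetic reflectances with an assumed exponential attenuation; the real analysis searches all band pairs over measured spectra, which is omitted here.

```python
# Band-ratio-versus-depth regression sketch (synthetic data, not the field spectra).
import numpy as np

rng = np.random.default_rng(2)
depth = rng.uniform(0.1, 1.0, 100)                      # submergence depth in metres
# Synthetic reflectances: NIR attenuates strongly with depth, green only weakly.
r_nir = 0.40 * np.exp(-2.0 * depth) + rng.normal(0, 0.005, 100)
r_grn = 0.20 * np.exp(-0.3 * depth) + rng.normal(0, 0.005, 100)

x = np.log(r_nir / r_grn)                               # OBRA-style predictor: log band ratio
slope, intercept = np.polyfit(x, depth, 1)              # linear depth model
pred = slope * x + intercept
r2 = 1 - np.sum((depth - pred) ** 2) / np.sum((depth - depth.mean()) ** 2)
print(f"depth = {slope:.2f} * ln(NIR/green) + {intercept:.2f},  R^2 = {r2:.2f}")
```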

  18. Multivariate η-μ fading distribution with arbitrary correlation model

    NASA Astrophysics Data System (ADS)

    Ghareeb, Ibrahim; Atiani, Amani

    2018-03-01

    An extensive analysis of the multivariate η-μ distribution with arbitrary correlation is presented, where novel analytical expressions for the multivariate probability density function, cumulative distribution function and moment generating function (MGF) of arbitrarily correlated and not necessarily identically distributed η-μ power random variables are derived. Also, this paper provides an exact-form expression for the MGF of the instantaneous signal-to-noise ratio at the combiner output in a diversity reception system with maximal-ratio combining and post-detection equal-gain combining operating in slow, frequency-nonselective, arbitrarily correlated, not necessarily identically distributed η-μ fading channels. The average bit error probability of differentially detected quadrature phase shift keying signals with post-detection diversity reception over η-μ fading channels with arbitrarily correlated and not necessarily identical fading parameters is determined using the MGF-based approach. The effect of fading correlation between diversity branches, fading severity parameters and diversity level is studied.
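
    For reference, the MGF-based approach mentioned above starts from the standard definition of the moment generating function of the combiner-output SNR; the paper's closed-form η-μ expressions are not reproduced here.

```latex
% Standard MGF definition of the combiner-output SNR gamma (notation assumed,
% not the paper's closed-form eta-mu result); the average bit error probability
% is then obtained by integrating this MGF against the conditional-error kernel
% of the chosen modulation.
M_{\gamma}(s) \;=\; \mathbb{E}\!\left[e^{s\gamma}\right]
             \;=\; \int_{0}^{\infty} e^{s\gamma}\, f_{\gamma}(\gamma)\, \mathrm{d}\gamma
```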

  19. Ventilation requirements in buildings—I. Control of occupancy odor and tobacco smoke odor

    NASA Astrophysics Data System (ADS)

    Cain, William S.; Leaderer, Brian P.; Isseroff, Ruth; Berglund, Larry G.; Huey, Raymond J.; Lipsitt, Eric D.; Perlman, Dan

    Psychophysical measurements of odor, supplemented with certain physical measurements, were taken to examine ventilation requirements during smoking and nonsmoking occupancy in an environmental chamber. The facility provided the means to compare impressions of visitors (persons who inhaled air from the chamber only briefly) with impressions of occupants. For nonsmoking occupancy, 47 combinations of temperature, humidity, ventilation rate and occupancy density were examined. Odor level depended entirely on ventilation rate per person irrespective of the number of persons in the chamber. The ventilation necessary to satisfy 75 % of visitors equalled only about 4 ℓ s⁻¹ per person. Occupants, however, were satisfied with far less. In an array of 38 conditions of smoking occupancy, the ventilation deemed necessary to satisfy 75 % of visitors under customary conditions of occupancy equalled 17.5 ℓ s⁻¹ per person. For both smoking and nonsmoking conditions, a combination of high temperature (25.5°C) and humidity (r.h. > 70 %) exacerbated the odor problem. During smoking, carbon monoxide rarely reached dangerous levels, but suspended particulate matter often reached levels considered unacceptable outdoors. The results highlight the energy penalty incurred in ventilation for smoking occupancy.

  20. A high-resolution map of the H1 locus harbouring resistance to the potato cyst nematode Globodera rostochiensis.

    PubMed

    Bakker, Erin; Achenbach, Ute; Bakker, Jeroen; van Vliet, Joke; Peleman, Johan; Segers, Bart; van der Heijden, Stefan; van der Linde, Piet; Graveland, Robert; Hutten, Ronald; van Eck, Herman; Coppoolse, Eric; van der Vossen, Edwin; Bakker, Jaap; Goverse, Aska

    2004-06-01

    The resistance gene H1 confers resistance to the potato cyst nematode Globodera rostochiensis and is located at the distal end of the long arm of chromosome V of potato. For marker enrichment of the H1 locus, a bulked segregant analysis (BSA) was carried out using 704 AFLP primer combinations. A second source of markers tightly linked to H1 is the ultra-high-density (UHD) genetic map of the potato cross SH x RH. This map has been produced with 387 AFLP primer combinations and consists of 10,365 AFLP markers in 1,118 bins (http://www.dpw.wageningen-ur.nl/uhd/). Comparing these two methods revealed that BSA resulted in one marker/cM and the UHD map in four markers/cM in the H1 interval. Subsequently, a high-resolution genetic map of the H1 locus has been developed using a segregating F(1) SH x RH population consisting of 1,209 genotypes. Two PCR-based markers were designed at either side of the H1 gene to screen the 1,209 genotypes for recombination events. In the high-resolution genetic map, two of the four co-segregating AFLP markers could be separated from the H1 gene. Marker EM1 is located at a distance of 0.2 cM, and marker EM14 is located at a distance of 0.8 cM. The other two co-segregating markers CM1 (in coupling) and EM15 (in repulsion) could not be separated from the H1 gene.

  1. Investigation on effect of Populus alba stands distance on density of pests and their natural enemies population under poplar/alfalfa agroforestry system.

    PubMed

    Khabir, Z H; Sadeghi, S E; Hanifeh, S; Eivazi, A

    2009-01-15

    This study was carried out to determine the effect of an agroforestry system (a combination of agriculture and forestry) on pest and natural enemy populations at a poplar research station. Wood is one of the first natural materials used by humans over a long period of time, and forage is likewise an important product of natural resources. Factors such as a shortage of suitable land, poor economics, and pest and disease attacks have confronted the production of these materials with serious challenges, and agroforestry is one approach to reducing these problems. The poplar stands were planted in a completely randomized design with four poplar/alfalfa spacing treatments (3x4, 3x6.7, 3x8 and 3x10 m) and two control treatments (alfalfa alone and poplar alone). The results showed that Chaitophorus populeti had the highest density in the poplar-only and 3x10 m treatments. Monosteira unicostata, another insect pest, had its highest density in the 3x10 m treatment, and alfalfa had a high density of Chrysoperla carnea. The density of Coccinella septempunctata was almost equal in all treatments.

  2. Qualitative landslide susceptibility assessment by multicriteria analysis: A case study from San Antonio del Sur, Guantánamo, Cuba

    NASA Astrophysics Data System (ADS)

    Castellanos Abella, Enrique A.; Van Westen, Cees J.

    Geomorphological information can be combined with decision-support tools to assess landslide hazard and risk. A heuristic model was applied to a rural municipality in eastern Cuba. The study is based on a terrain mapping units (TMU) map, generated at 1:50,000 scale by interpretation of aerial photos, satellite images and field data. Information describing 603 terrain units was collected in a database. Landslide areas were mapped in detail to classify the different failure types and parts. Three major landslide regions are recognized in the study area: coastal hills with rockfalls, shallow debris flows and old rotational rockslides; denudational slopes in limestone, with very large deep-seated rockslides related to tectonic activity; and the Sierra de Caujerí scarp, with large rockslides. The Caujerí scarp presents the highest hazard, with recent landslides and various signs of active processes. The different landforms and the causative factors for landslides were analyzed and used to develop the heuristic model. The model is based on weights assigned by expert judgment and organized in a number of components such as slope angle, internal relief, slope shape, geological formation, active faults, distance to drainage, distance to springs, geomorphological subunits and existing landslide zones. From these variables a hierarchical heuristic model was applied in which three levels of weights were designed for classes, variables, and criteria. The model combines all weights into a single hazard value for each pixel of the landslide hazard map. The hazard map was then presented at two scales, one with three classes for disaster managers and one with 10 detailed hazard classes for technical staff. The range of weight values and the number of existing landslides are registered for each class. The resulting increase in landslide density with higher hazard classes indicates that the output map is reliable. The landslide hazard map was used in combination with existing information on buildings and infrastructure to prepare a qualitative risk map. The complete lack of historical landslide information and geotechnical data precludes the development of quantitative deterministic or probabilistic models.
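
    The three-level weighting scheme described above amounts to a per-pixel linear combination of class weights, variable weights and criterion weights. The sketch below uses placeholder rasters and made-up weights to show the structure of such a heuristic model, not the values used for San Antonio del Sur.

```python
# Hierarchical heuristic hazard model sketch (illustrative weights and rasters).
import numpy as np

rng = np.random.default_rng(3)
shape = (100, 100)                                   # toy raster grid

# Placeholder class-weight rasters (class already mapped to a 0-1 weight per pixel).
slope_angle     = rng.random(shape)
internal_relief = rng.random(shape)
geology         = rng.random(shape)
dist_drainage   = rng.random(shape)

criteria = {
    # criterion: (criterion weight, {variable: (variable weight, raster)})
    "topography": (0.5, {"slope_angle": (0.6, slope_angle),
                         "internal_relief": (0.4, internal_relief)}),
    "geology":    (0.3, {"geology": (1.0, geology)}),
    "hydrology":  (0.2, {"dist_drainage": (1.0, dist_drainage)}),
}

hazard = np.zeros(shape)
for c_weight, variables in criteria.values():
    for v_weight, raster in variables.values():
        hazard += c_weight * v_weight * raster       # combine into one value per pixel

# Reclassify into three classes, e.g. for disaster managers.
three_class = np.digitize(hazard, np.quantile(hazard, [1 / 3, 2 / 3]))
print("pixels per hazard class:", np.bincount(three_class.ravel()))
```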

  3. Multidate, multisensor remote sensing reveals high density of carbon-rich mountain peatlands in the páramo of Ecuador.

    PubMed

    Hribljan, John A; Suarez, Esteban; Bourgeau-Chavez, Laura; Endres, Sarah; Lilleskov, Erik A; Chimbolema, Segundo; Wayson, Craig; Serocki, Eleanor; Chimner, Rodney A

    2017-12-01

    Tropical peatlands store a significant portion of the global soil carbon (C) pool. However, tropical mountain peatlands contain extensive peat soils that have yet to be mapped or included in global C estimates. This lack of data hinders our ability to inform policy and apply sustainable management practices to these peatlands that are experiencing unprecedented high rates of land use and land cover change. Rapid large-scale mapping activities are urgently needed to quantify tropical wetland extent and rate of degradation. We tested a combination of multidate, multisensor radar and optical imagery (Landsat TM/PALSAR/RADARSAT-1/TPI image stack) for detecting peatlands in a 2715 km² area in the high elevation mountains of the Ecuadorian páramo. The map was combined with an extensive soil coring data set to produce the first estimate of regional peatland soil C storage in the páramo. Our map displayed a high coverage of peatlands (614 km²) containing an estimated 128.2 ± 9.1 Tg of peatland belowground soil C within the mapping area. Scaling-up to the country level, páramo peatlands likely represent less than 1% of the total land area of Ecuador but could contain as much as ~23% of the above- and belowground vegetation C stocks in Ecuadorian forests. These mapping approaches provide an essential methodological improvement applicable to mountain peatlands across the globe, facilitating mapping efforts in support of effective policy and sustainable management, including national and global C accounting and C management efforts. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  4. A Combined Approach to Cartographic Displacement for Buildings Based on Skeleton and Improved Elastic Beam Algorithm

    PubMed Central

    Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya

    2014-01-01

    Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on constraint Delaunay triangulation (CDT) skeleton and improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727

  5. The influence of environmental and lithologic factors on rockfall at a regional scale: an evaluation using GIS

    NASA Astrophysics Data System (ADS)

    Menéndez Duarte, Rosana; Marquínez, Jorge

    2002-02-01

    Analysis of the spatial distribution of rockfall deposits at a regional scale (over an area of 250 km² in northern Spain), using a cartographic database supported by a Geographic Information System (GIS), reveals several relationships between rockfall activity and environmental variables. Recent rockfall activity is inferred when recent scree is preserved at the bottom of the rock slopes. In order to identify the slope source areas of the scree, we mapped each deposit's drainage basin applying topographic criteria and combined these basins with the rock-slope map. A method for setting the basin boundaries automatically can replace manual cartography; it is based on algorithms available within many commercial software programs and originally designed to analyse the behaviour of fluids over a topographic surface. The results obtained by combining the rockfall source-area map with the geology and DTM show the relationships between the distribution of rockfall deposits and the lithology, elevation and slope of the rockwall, and a strong control by joint type and density. The influence of elevation on rockfall has been associated with climatic variation with elevation. Other variables, such as orientation, show complex influences that are difficult to interpret.

  6. Cosmology from Cosmic Microwave Background and large-scale structure

    NASA Astrophysics Data System (ADS)

    Xu, Yongzhong

    2003-10-01

    This dissertation consists of a series of studies, constituting four published papers, involving the Cosmic Microwave Background and the large-scale structure, which help constrain cosmological parameters and potential systematic errors. First, we present a method for comparing and combining maps with different resolutions and beam shapes, and apply it to the Saskatoon, QMAP and COBE/DMR data sets. Although the Saskatoon and QMAP maps detect signal at the 21σ and 40σ levels, respectively, their difference is consistent with pure noise, placing strong limits on possible systematic errors. In particular, we obtain quantitative upper limits on relative calibration and pointing errors. Splitting the combined data by frequency shows similar consistency between the Ka- and Q-bands, placing limits on foreground contamination. The visual agreement between the maps is equally striking. Our combined QMAP+Saskatoon map, nicknamed QMASK, is publicly available at www.hep.upenn.edu/˜xuyz/qmask.html together with its 6495 x 6495 noise covariance matrix. This thoroughly tested data set covers a large enough area (648 square degrees, at the time the largest degree-scale map available) to allow a statistical comparison with COBE/DMR, showing good agreement. By band-pass-filtering the QMAP and Saskatoon maps, we are also able to spatially compare them scale-by-scale to check for beam- and pointing-related systematic errors. Using the QMASK map, we then measure the cosmic microwave background (CMB) power spectrum on angular scales ℓ ~ 30–200 (1°–6°), and we test it for non-Gaussianity using morphological statistics known as Minkowski functionals. We conclude that the QMASK map is neither a very typical nor a very exceptional realization of a Gaussian random field: at least about 20% of the 1000 Gaussian Monte Carlo maps differ more than the QMASK map from the mean morphological parameters of the Gaussian fields. Finally, we compute the real-space power spectrum and the redshift-space distortions of galaxies in the 2dF 100k galaxy redshift survey using pseudo-Karhunen-Loève eigenmodes and the stochastic bias formalism. Our results agree well with those published by the 2dFGRS team, and have the added advantage of producing easy-to-interpret uncorrelated minimum-variance measurements of the galaxy-galaxy, galaxy-velocity and velocity-velocity power spectra in 27 k-bands, with narrow and well-behaved window functions in the range 0.01 h/Mpc < k < 0.8 h/Mpc. We find no significant detection of baryonic wiggles. We measure the galaxy-matter correlation coefficient r > 0.4 and the redshift-distortion parameter β = 0.49 ± 0.16 for r = 1.
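
    The map-combination idea summarized above can be illustrated with a much simpler toy case: inverse-noise weighting of two overlapping maps with diagonal (uncorrelated) pixel noise. The real QMASK analysis uses full noise covariance matrices and handles different beams, which this sketch does not attempt.

```python
# Inverse-variance combination of two toy CMB maps (assumed diagonal noise).
import numpy as np

rng = np.random.default_rng(4)
true_sky = rng.normal(0, 50, 1000)                   # toy sky signal in uK

sigma_a, sigma_b = 30.0, 60.0                        # per-pixel noise of the two maps
map_a = true_sky + rng.normal(0, sigma_a, 1000)
map_b = true_sky + rng.normal(0, sigma_b, 1000)

w_a, w_b = 1 / sigma_a**2, 1 / sigma_b**2
combined = (w_a * map_a + w_b * map_b) / (w_a + w_b)
sigma_comb = (w_a + w_b) ** -0.5                     # expected combined noise level
print(f"rms residual: {np.std(combined - true_sky):.1f} uK "
      f"(expected ~{sigma_comb:.1f} uK)")

# A difference map consistent with pure noise indicates no large systematic errors,
# which is the kind of test applied to the Saskatoon and QMAP maps above.
print(f"difference-map rms: {np.std(map_a - map_b):.1f} uK "
      f"(expected ~{np.hypot(sigma_a, sigma_b):.1f} uK)")
```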

  7. Combined modified atmosphere packaging and low temperature storage delay lignification and improve the defense response of minimally processed water bamboo shoot.

    PubMed

    Song, Lili; Chen, Hangjun; Gao, Haiyan; Fang, Xiangjun; Mu, Honglei; Yuan, Ya; Yang, Qian; Jiang, Yueming

    2013-09-04

    Minimally processed water bamboo shoot (WBS) lignifies and deteriorates rapidly at room temperature, which limits greatly its marketability. This study was to investigate the effect of modified atmosphere packaging (MAP) on the sensory quality index, lignin formation, production of radical oxygen species (ROS) and activities of scavenging enzymes, membrane integrity and energy status of minimally processed WBS when packaged with or without the sealed low-density polyethylene (LDPE) bags, and then stored at 20°C for 9 days or 2°C for 60 days. The sensory quality of minimally processed WBS decreased quickly after 6 days of storage at 20°C. Low temperature storage maintained a higher sensory quality index within the first 30 days, but exhibited higher contents of lignin and hydrogen peroxide (H2O2) as compared with non-MAP shoots at 20°C. Combined MAP and low temperature storage not only maintained good sensory quality after 30 days, but also reduced significantly the increases in lignin content, superoxide anion (O2.-) production rate, H2O2 content and membrane permeability, maintained high activities of superoxide dismutase (SOD), catalase (CAT) and ascorbate peroxidase (APX), and reduced the increase in activities of lipase, phospholipase D (PLD) and lipoxygenase (LOX). Furthermore, the minimally processed WBS under MAP condition exhibited higher energy charge (EC) and lower adenosine monophosphate (AMP) content by the end of storage (60 days) at 2°C than those without MAP or stored for 9 days at 20°C. These results indicated that MAP in combination with low temperature storage reduced lignification of minimally processed WBS, which was closely associated with maintenance of energy status and enhanced activities of antioxidant enzymes, as well as reduced alleviation of membrane damage caused by ROS.

  8. Combined modified atmosphere packaging and low temperature storage delay lignification and improve the defense response of minimally processed water bamboo shoot

    PubMed Central

    2013-01-01

    Background Minimally processed water bamboo shoot (WBS) lignifies and deteriorates rapidly at room temperature, which limits greatly its marketability. This study was to investigate the effect of modified atmosphere packaging (MAP) on the sensory quality index, lignin formation, production of radical oxygen species (ROS) and activities of scavenging enzymes, membrane integrity and energy status of minimally processed WBS when packaged with or without the sealed low-density polyethylene (LDPE) bags, and then stored at 20°C for 9 days or 2°C for 60 days. Results The sensory quality of minimally processed WBS decreased quickly after 6 days of storage at 20°C. Low temperature storage maintained a higher sensory quality index within the first 30 days, but exhibited higher contents of lignin and hydrogen peroxide (H2O2) as compared with non-MAP shoots at 20°C. Combined MAP and low temperature storage not only maintained good sensory quality after 30 days, but also reduced significantly the increases in lignin content, superoxide anion (O2.-) production rate, H2O2 content and membrane permeability, maintained high activities of superoxide dismutase (SOD), catalase (CAT) and ascorbate peroxidase (APX), and reduced the increase in activities of lipase, phospholipase D (PLD) and lipoxygenase (LOX). Furthermore, the minimally processed WBS under MAP condition exhibited higher energy charge (EC) and lower adenosine monophosphate (AMP) content by the end of storage (60 days) at 2°C than those without MAP or stored for 9 days at 20°C. Conclusion These results indicated that MAP in combination with low temperature storage reduced lignification of minimally processed WBS, which was closely associated with maintenance of energy status and enhanced activities of antioxidant enzymes, as well as reduced alleviation of membrane damage caused by ROS. PMID:24006941

  9. Lack of small-scale clustering in 21-cm intensity maps crossed with 2dF galaxy densities at z ~ 0.08

    NASA Astrophysics Data System (ADS)

    Anderson, Christopher; Luciw, Nicholas; Li, Yi-Chao; Kuo, Cheng-Yu; Yadav, Jaswant; Masui, Kiyoshi; Chang, Tzu-Ching; Chen, Xuelei; Oppermann, Niels; Pen, Ue-Li; Timbie, Peter T.

    2017-06-01

    I report results from 21-cm intensity maps acquired from the Parkes radio telescope and cross-correlated with galaxy maps from the 2dF galaxy survey. The data span the redshift range 0.057

  10. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    PubMed

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
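
    The capture-recapture application mentioned above can be made concrete with the simplest member of the family, a zero-truncated Poisson: fit it by maximum likelihood to the observed (nonzero) counts and form the Horvitz-Thompson estimate of population size. The sketch below uses synthetic data and ignores mixtures and the NPMLE machinery.

```python
# Zero-truncated Poisson fit and Horvitz-Thompson population-size estimate (toy data).
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(5)
lam_true, N_true = 1.2, 500
counts = rng.poisson(lam_true, N_true)
observed = counts[counts > 0]                 # zero counts are never observed
n, xbar = len(observed), observed.mean()

# MLE of lambda for the zero-truncated Poisson solves
#   lambda / (1 - exp(-lambda)) = mean of the observed counts.
lam_hat = brentq(lambda lam: lam / (1 - np.exp(-lam)) - xbar, 1e-6, 50)

p_detect = 1 - np.exp(-lam_hat)               # probability a unit is ever counted
N_hat = n / p_detect                          # Horvitz-Thompson estimator
print(f"observed {n} units, estimated population size {N_hat:.0f} (true {N_true})")
```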

  11. High-density genetic map using whole-genome re-sequencing for fine mapping and candidate gene discovery for disease resistance in peanut

    USDA-ARS?s Scientific Manuscript database

    High-density genetic linkage maps are essential for fine mapping QTLs controlling disease resistance traits, such as early leaf spot (ELS), late leaf spot (LLS), and Tomato spotted wilt virus (TSWV). With completion of the genome sequences of two diploid ancestors of cultivated peanut, we could use ...

  12. UGbS-Flex, a novel bioinformatics pipeline for imputation-free SNP discovery in polyploids without a reference genome: finger millet as a case study.

    PubMed

    Qi, Peng; Gimode, Davis; Saha, Dipnarayan; Schröder, Stephan; Chakraborty, Debkanta; Wang, Xuewen; Dida, Mathews M; Malmberg, Russell L; Devos, Katrien M

    2018-06-15

    Research on orphan crops is often hindered by a lack of genomic resources. With the advent of affordable sequencing technologies, genotyping an entire genome or, for large-genome species, a representative fraction of the genome has become feasible for any crop. Nevertheless, most genotyping-by-sequencing (GBS) methods are geared towards obtaining large numbers of markers at low sequence depth, which excludes their application in heterozygous individuals. Furthermore, bioinformatics pipelines often lack the flexibility to deal with paired-end reads or to be applied in polyploid species. UGbS-Flex combines publicly available software with in-house python and perl scripts to efficiently call SNPs from genotyping-by-sequencing reads irrespective of the species' ploidy level, breeding system and availability of a reference genome. Noteworthy features of the UGbS-Flex pipeline are an ability to use paired-end reads as input, an effective approach to cluster reads across samples with enhanced outputs, and maximization of SNP calling. We demonstrate use of the pipeline for the identification of several thousand high-confidence SNPs with high representation across samples in an F3-derived F2 population in the allotetraploid finger millet. Robust high-density genetic maps were constructed using the time-tested mapping program MAPMAKER, which we upgraded to run efficiently and in a semi-automated manner in a Windows Command Prompt environment. We exploited comparative GBS with one of the diploid ancestors of finger millet to assign linkage groups to subgenomes and demonstrate the presence of chromosomal rearrangements. The paper combines GBS protocol modifications, a novel flexible GBS analysis pipeline, UGbS-Flex, recommendations to maximize SNP identification, updated genetic mapping software, and the first high-density maps of finger millet. The modules used in the UGbS-Flex pipeline and for genetic mapping were applied to finger millet, an allotetraploid selfing species without a reference genome, as a case study. The UGbS-Flex modules, which can be run independently, are easily transferable to species with other breeding systems or ploidy levels.

  13. Depression and Suicide Publication Analysis, Using Density Equalizing Mapping and Output Benchmarking

    PubMed Central

    Vogelzang, B. H.; Scutaru, C.; Mache, S.; Vitzthum, K.; Quarcoo, David; Groneberg, D. A.

    2011-01-01

    Background: Depression is a major cause of suicide worldwide. This association has been reflected by numerous scientific publications reporting studies on this theme. There is currently no overall evaluation of the global research activities in this field. Aim: The aim of the current study was to analyze long-term developments and recent research trends in this area. Material and Methods: We searched the Web of Science databases developed by the Thomson Institute for Scientific Information for items concerning depression and suicide published between 1900 and 2007 and analyzed the results using scientometric methods and density-equalizing calculations. Results: We found that publications on this topic increased dramatically in the period 1990 to 2007. The comparison of different journals showed that the Archives of General Psychiatry had the highest average citation rate (more than twice that of any other journal). When comparing authors, we found that not all the authors with high h-indexes cooperated much with other authors. The analysis of countries publishing papers on this topic showed that output was related to Gross Domestic Product and Purchasing Power Parity. Among the G8 countries, Russia had the highest male suicide rate in 1999 (more than twice that of any of the other G8 countries), despite having published the fewest papers and cooperating least with other countries among the G8. Conclusion: We conclude that, although there has been an increase in publications on this topic from 1990 to 2006, this increase has a lower gradient than that for psoriasis and rheumatoid arthritis. PMID:22021955

  14. Mapping Students' Spoken Conceptions of Equality

    ERIC Educational Resources Information Center

    Anakin, Megan

    2013-01-01

    This study expands contemporary theorising about students' conceptions of equality. A nationally representative sample of New Zealand students were asked to provide a spoken numerical response and an explanation as they solved an arithmetic additive missing-number problem. Students' responses were conceptualised as acts of communication and…

  15. Visualizing the Logistic Map with a Microcontroller

    ERIC Educational Resources Information Center

    Serna, Juan D.; Joshi, Amitabh

    2012-01-01

    The logistic map is one of the simplest nonlinear dynamical systems that clearly exhibits the route to chaos. In this paper, we explore the evolution of the logistic map using an open-source microcontroller connected to an array of light-emitting diodes (LEDs). We divide the one-dimensional domain interval [0,1] into ten equal parts, and associate…
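
    The binning idea is simple enough to sketch in a few lines: iterate the logistic map and light the LED whose tenth of [0,1] contains the current value. Here the LED array is just printed to the console; on a microcontroller the bin index would drive an output pin instead.

```python
# Logistic-map LED visualization sketch (console stand-in for the LED array).
def logistic_led_demo(r=3.9, x0=0.2, steps=20, n_leds=10):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)                        # logistic map iteration
        led = min(int(x * n_leds), n_leds - 1)     # which tenth of [0, 1] x falls in
        print("".join("*" if i == led else "." for i in range(n_leds)), f"x = {x:.3f}")

logistic_led_demo()
```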

  16. Experimental Phasing: Substructure Solution and Density Modification as Implemented in SHELX.

    PubMed

    Thorn, Andrea

    2017-01-01

    This chapter describes experimental phasing methods as implemented in SHELX. After introducing fundamental concepts underlying all experimental phasing approaches, the methods used by SHELXC/D/E are described in greater detail, such as dual-space direct methods, Patterson seeding and density modification with the sphere of influence algorithm. Intensity differences from data for experimental phasing can also be used for the generation and usage of difference maps with ANODE for validation and phasing purposes. A short section describes how molecular replacement can be combined with experimental phasing methods. The second half covers practical challenges, such as prerequisites for successful experimental phasing, evaluation of potential solutions, and what to do if substructure search or density modification fails. It is also shown how auto-tracing in SHELXE can improve automation and how it ties in with automatic model building after phasing.

  17. Inferring physical properties of galaxies from their emission-line spectra

    NASA Astrophysics Data System (ADS)

    Ucci, G.; Ferrara, A.; Gallerani, S.; Pallottini, A.

    2017-02-01

    We present a new approach based on Supervised Machine Learning algorithms to infer key physical properties of galaxies (density, metallicity, column density and ionization parameter) from their emission-line spectra. We introduce a numerical code (called GAME, GAlaxy Machine learning for Emission lines) implementing this method and test it extensively. GAME delivers excellent predictive performances, especially for estimates of metallicity and column densities. We compare GAME with the most widely used diagnostics (e.g. R23, [N II] λ6584/Hα indicators) showing that it provides much better accuracy and wider applicability range. GAME is particularly suitable for use in combination with Integral Field Unit spectroscopy, both for rest-frame optical/UV nebular lines and far-infrared/sub-millimeter lines arising from photodissociation regions. Finally, GAME can also be applied to the analysis of synthetic galaxy maps built from numerical simulations.
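
    The supervised-learning idea behind this approach can be sketched generically: train a regressor on a library of model spectra whose physical properties are known, then apply it to observed line fluxes. The regressor choice, line list and synthetic grid below are placeholders and not the actual GAME code.

```python
# Generic emission-line-to-property regression sketch (placeholder grid, not GAME).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_models, n_lines = 2000, 8                      # synthetic model grid
line_fluxes = rng.random((n_models, n_lines))    # placeholder line fluxes/ratios
metallicity = line_fluxes @ rng.random(n_lines) + rng.normal(0, 0.05, n_models)

X_tr, X_te, y_tr, y_te = train_test_split(line_fluxes, metallicity, random_state=0)
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out R^2: {reg.score(X_te, y_te):.2f}")
```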

  18. A procedure for combining acoustically induced and mechanically induced loads (first passage failure design criterion)

    NASA Technical Reports Server (NTRS)

    Crowe, D. R.; Henricks, W.

    1983-01-01

    The combined load statistics are developed by taking the acoustically induced load to be a random population, assumed to be stationary. Each element of this ensemble of acoustically induced loads is assumed to have the same power spectral density (PSD), obtained previously from a random response analysis employing the given acoustic field in the STS cargo bay as a stationary random excitation. The mechanically induced load is treated as either (1) a known deterministic transient, or (2) a nonstationary random variable of known first and second statistical moments which vary with time. A method is then shown for determining the probability that the combined load would, at any time, have a value equal to or less than a certain level. Having obtained a statistical representation of how the acoustic and mechanical loads are expected to combine, an analytical approximation for defining design levels for these loads is presented using the First Passage failure criterion.
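
    The instantaneous part of that calculation can be sketched under a strong simplifying assumption: treat the acoustic load at any instant as zero-mean Gaussian with a known rms and the mechanical load as a known transient. First-passage statistics over time are not treated in this sketch, and all numbers are illustrative.

```python
# Instantaneous P(combined load <= level) under an assumed Gaussian acoustic load.
import numpy as np
from scipy.stats import norm

t = np.linspace(0.0, 2.0, 200)                  # time in seconds (illustrative)
m_t = 5.0 * np.exp(-t) * np.sin(4 * np.pi * t)  # hypothetical mechanical transient
sigma_a = 2.0                                   # acoustic load rms (illustrative)
level = 8.0                                     # load level of interest

p_below = norm.cdf((level - m_t) / sigma_a)     # per-instant probability
print(f"minimum instantaneous P(load <= {level}) = {p_below.min():.3f}")
```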

  19. Likelihood-based modification of experimental crystal structure electron density maps

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM]

    2005-04-16

    A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if structure factor set {F_h} were correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize the likelihood of {F_h} for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.
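
    In schematic form (notation mine, not the patent's), the quantity being maximized combines the two likelihood terms described above:

```latex
% Schematic log-likelihood of a trial structure-factor set {F_h}: agreement with
% the observed amplitudes plus plausibility of the resulting density map rho
% under the selected prior knowledge.
\log L\bigl(\{F_h\}\bigr)
  \;=\; \log P\bigl(\{F_h^{\mathrm{OBS}}\} \mid \{F_h\}\bigr)
  \;+\; \log P_{\mathrm{map}}\bigl(\rho(\{F_h\})\bigr)
```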

  20. A Study on the Priority Selection of Sediment-related Disaster Evacuation Using Debris Flow Combination Degree of Risk

    NASA Astrophysics Data System (ADS)

    Woo, C.; Kang, M.; Seo, J.; Kim, D.; Lee, C.

    2017-12-01

    As urbanization of mountainous areas has increased concern about landslides in residential areas, it is essential to develop technology that minimizes damage through rapid identification and sharing of information on disaster occurrence. In this study, to establish an effective alert and evacuation system for residents, we used the debris flow combination degree of risk to predict disaster risk and damage levels and to select evacuation priorities. Based on GIS information, physical hazard and social vulnerability were determined following the debris flow combination degree of risk formula, and the physical hazard rating was classified through a normalization process. The residential population within the damage range of the debris flow damage prediction map was estimated from area and unit-size data. An occupant prediction formula applied different weights to residents and users, and the result was classified into five classes, as was the debris flow physical hazard. The debris flow physical hazard and the social and psychological vulnerability classes were then combined into a debris flow integrated risk map using a matrix technique. In addition, to supplement the debris flow risk assessment, extra weight was given to disaster-vulnerable facilities that require considerable time and manpower to evacuate. The basic model for welfare facilities was supplemented with data on population density, employment density and GDP. Areas with a high integrated risk level are evacuated first; where management areas have the same or similar grade, differences in physical hazard class are considered. When physical hazard classes are also similar, the population of the area containing welfare facilities is considered first, and priority is then decided in order of age distribution, population density by time period, and class differences among residential facilities. The results of this study are expected to serve as basic data for establishing evacuation systems as a safety net against landslide disasters. Keywords: Landslide, Debris flow, Early warning system, Evacuation

  1. Construction of a high-density, high-resolution genetic map and its integration with BAC-based physical map in channel catfish

    PubMed Central

    Li, Yun; Liu, Shikai; Qin, Zhenkui; Waldbieser, Geoff; Wang, Ruijia; Sun, Luyang; Bao, Lisui; Danzmann, Roy G.; Dunham, Rex; Liu, Zhanjiang

    2015-01-01

    Construction of genetic linkage map is essential for genetic and genomic studies. Recent advances in sequencing and genotyping technologies made it possible to generate high-density and high-resolution genetic linkage maps, especially for the organisms lacking extensive genomic resources. In the present work, we constructed a high-density and high-resolution genetic map for channel catfish with three large resource families genotyped using the catfish 250K single-nucleotide polymorphism (SNP) array. A total of 54,342 SNPs were placed on the linkage map, which to our knowledge had the highest marker density among aquaculture species. The estimated genetic size was 3,505.4 cM with a resolution of 0.22 cM for sex-averaged genetic map. The sex-specific linkage maps spanned a total of 4,495.1 cM in females and 2,593.7 cM in males, presenting a ratio of 1.7 : 1 between female and male in recombination fraction. After integration with the previously established physical map, over 87% of physical map contigs were anchored to the linkage groups that covered a physical length of 867 Mb, accounting for ∼90% of the catfish genome. The integrated map provides a valuable tool for validating and improving the catfish whole-genome assembly and facilitates fine-scale QTL mapping and positional cloning of genes responsible for economically important traits. PMID:25428894

  2. Convection in an ideal gas at high Rayleigh numbers.

    PubMed

    Tilgner, A

    2011-08-01

    Numerical simulations of convection in a layer filled with ideal gas are presented. The control parameters are chosen such that there is a significant variation of density of the gas in going from the bottom to the top of the layer. The relations between the Rayleigh, Peclet, and Nusselt numbers depend on the density stratification. It is proposed to use a data reduction which accounts for the variable density by introducing into the scaling laws an effective density. The relevant density is the geometric mean of the maximum and minimum densities in the layer. A good fit to the data is then obtained with power laws with the same exponent as for fluids in the Boussinesq limit. Two relations connect the top and bottom boundary layers: The kinetic energy densities computed from free fall velocities are equal at the top and bottom, and the products of free fall velocities and maximum horizontal velocities are equal for both boundaries.
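
    The proposed effective density is simply the geometric mean of the extreme densities in the layer; for instance, a 4:1 density stratification gives an effective density twice the minimum density.

```latex
% Effective density used in the data reduction described above.
\rho_{\mathrm{eff}} \;=\; \sqrt{\rho_{\max}\,\rho_{\min}}
```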

  3. Information Processing Capacity of Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge

    2012-07-01

    Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory.

  4. Information Processing Capacity of Dynamical Systems

    PubMed Central

    Dambre, Joni; Verstraeten, David; Schrauwen, Benjamin; Massar, Serge

    2012-01-01

    Many dynamical systems, both natural and artificial, are stimulated by time dependent external signals, somehow processing the information contained therein. We demonstrate how to quantify the different modes in which information can be processed by such systems and combine them to define the computational capacity of a dynamical system. This is bounded by the number of linearly independent state variables of the dynamical system, equaling it if the system obeys the fading memory condition. It can be interpreted as the total number of linearly independent functions of its stimuli the system can compute. Our theory combines concepts from machine learning (reservoir computing), system modeling, stochastic processes, and functional analysis. We illustrate our theory by numerical simulations for the logistic map, a recurrent neural network, and a two-dimensional reaction diffusion system, uncovering universal trade-offs between the non-linearity of the computation and the system's short-term memory. PMID:22816038
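
    One slice of this capacity, the linear memory part, is easy to estimate numerically for an input-driven logistic map: measure how well each delayed input can be reconstructed linearly from the state. The single-variable reservoir, input coupling and clipping below are assumptions for the sake of a short sketch; the full theory also sums capacities over higher-degree functions of past inputs.

```python
# Linear memory capacities of an input-driven logistic map (illustrative setup).
import numpy as np

rng = np.random.default_rng(7)
T, washout = 10000, 100
u = rng.uniform(-1, 1, T)                      # i.i.d. input signal

x = np.zeros(T)
x[0] = 0.3
for t in range(1, T):
    x[t] = 3.7 * x[t - 1] * (1 - x[t - 1]) + 0.1 * u[t]
    x[t] = np.clip(x[t], 0.0, 1.0)             # keep the single state variable bounded

def capacity(delay):
    """Squared correlation between the state and the input delayed by `delay`."""
    xs = x[washout:]
    us = u[washout - delay:T - delay]
    return np.corrcoef(xs, us)[0, 1] ** 2

caps = [capacity(d) for d in range(6)]
print("linear capacities for delays 0..5:", np.round(caps, 3))
print("summed linear capacity:", round(sum(caps), 3))
```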

  5. Imaging electron wave functions inside open quantum rings.

    PubMed

    Martins, F; Hackens, B; Pala, M G; Ouisse, T; Sellier, H; Wallart, X; Bollaert, S; Cappy, A; Chevrier, J; Bayot, V; Huant, S

    2007-09-28

    Combining scanning gate microscopy (SGM) experiments and simulations, we demonstrate low temperature imaging of the electron probability density |Ψ|²(x,y) in embedded mesoscopic quantum rings. The tip-induced conductance modulations share the same temperature dependence as the Aharonov-Bohm effect, indicating that they originate from electron wave function interferences. Simulations of both |Ψ|²(x,y) and SGM conductance maps reproduce the main experimental observations and link fringes in SGM images to |Ψ|²(x,y).

  6. ISO deep far-infrared survey in the Lockman Hole

    NASA Astrophysics Data System (ADS)

    Kawara, K.; Sato, Y.; Matsuhara, H.; Taniguchi, Y.; Okuda, H.; Sofue, Y.; Matsumoto, T.; Wakamatsu, K.; Cowie, L. L.; Joseph, R. D.; Sanders, D. B.

    1999-03-01

    Two 44 arcmin x 44 arcmin fields in the Lockman Hole were mapped at 95 and 175 μm using ISOPHOT. A simple program code combined with PIA works well to correct for the drift in the detector responsivity. The number density of 175 μm sources is 3 - 10 times higher than expected from the no-evolution model. The source counts at 95 and 175 μm are consistent with the cosmic infrared background.

  7. Comparative hi-density intraspecific linkage mapping using three elite populations from common parents

    USDA-ARS?s Scientific Manuscript database

    High-density linkage maps are fundamental to contemporary organismal research and scientific approaches to genetic improvement, especially in paleopolyploids with exceptionally complex genomes, e.g., Upland cotton (Gossypium hirsutum L., 2n=52). Using 3 full-sib intra-specific mapping populations fr...

  8. Extended shelf life of soy bread using modified atmosphere packaging.

    PubMed

    Fernandez, Ursula; Vodovotz, Yael; Courtney, Polly; Pascall, Melvin A

    2006-03-01

    This study investigated the use of modified atmosphere packaging (MAP) to extend the shelf life of soy bread with and without calcium propionate as a chemical preservative. The bread samples were packaged in pouches made from low-density polyethylene (LDPE) as the control (film 1), high-barrier laminated linear low-density polyethylene (LLDPE)-nylon-ethylene vinyl alcohol-nylon-LLDPE (film 2), and medium-barrier laminated LLDPE-nylon-LLDPE (film 3). The headspace gases used were atmosphere (air) as control, 50% CO2-50% N2, or 20% CO2-80% N2. The shelf life was determined by monitoring mold and yeast (M+Y) and aerobic plate counts (APC) in soy bread samples stored at 21 degrees C +/- 3 degrees C and 38% +/- 2% relative humidity. At 0, 2, 4, 6, 8, 10, and 12 days of storage, soy bread samples were removed, and the M+Y and APC were determined. The preservative, the films, and the headspace gases had significant effects on both the M+Y counts and the APC of soy bread samples. The combination of film 2 in the 50% CO2-50% N2 or 20% CO2-80% N2 headspace gases without calcium propionate as the preservative inhibited the M+Y growth by 6 days and the APC by 4 days. It was thus concluded that MAP using film 2 with either the 50% CO2-50% N2 or 20% CO2-80% N2 was the best combination for shelf-life extension of the soy bread without the need for a chemical preservative. These MAP treatments extended the shelf life by at least 200%.

  9. Unisensory processing and multisensory integration in schizophrenia: A high-density electrical mapping study

    PubMed Central

    Stone, David B.; Urrea, Laura J.; Aine, Cheryl J.; Bustillo, Juan R.; Clark, Vincent P.; Stephen, Julia M.

    2011-01-01

    In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. PMID:21807011
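
    The facilitation comparison described above (the multisensory response versus the sum of the unisensory responses) can be made concrete with a toy calculation on synthetic evoked potentials; the waveforms, amplitudes and the mean-absolute-amplitude index below are illustrative stand-ins, not the study's analysis pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    t = np.linspace(0, 0.5, 256)                  # seconds post-stimulus

    def toy_erp(amplitude, latency):
        """Synthetic evoked potential: one Gaussian component plus noise (microvolts)."""
        return amplitude * np.exp(-((t - latency) / 0.05) ** 2) + rng.normal(0, 0.2, t.size)

    erp_a, erp_v = toy_erp(4.0, 0.10), toy_erp(3.0, 0.15)   # unisensory responses
    erp_av = toy_erp(8.5, 0.12)                             # multisensory (audio-visual) response

    summed = erp_a + erp_v
    facilitation = np.mean(np.abs(erp_av)) / np.mean(np.abs(summed))
    print(f"facilitation index (AV / (A+V)): {facilitation:.2f}")   # > 1 indicates facilitation
    ```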

  10. Noise characteristics of CT perfusion imaging: how does noise propagate from source images to final perfusion maps?

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2016-03-01

    Cerebral CT perfusion (CTP) imaging is playing an important role in the diagnosis and treatment of acute ischemic strokes. Meanwhile, the reliability of CTP-based ischemic lesion detection has been challenged due to the noisy appearance and low signal-to-noise ratio of CTP maps. To reduce noise and improve image quality, a rigorous study on the noise transfer properties of CTP systems is highly desirable to provide the needed scientific guidance. This paper concerns how noise in the CTP source images propagates to the final CTP maps. Both theoretical derivations and subsequent validation experiments demonstrated that the noise level of background frames plays a dominant role in the noise of the cerebral blood volume (CBV) maps. This is in direct contradiction with the general belief that noise of non-background image frames is of greater importance in CTP imaging. The study found that when the radiation doses delivered to the background frames and to all non-background frames are equal, the lowest noise variance is achieved in the final CBV maps. This novel equality condition provides a practical means to optimize radiation dose delivery in CTP data acquisition: radiation exposures should be modulated between background frames and non-background frames so that the above equality condition is satisfied. For several typical CTP acquisition protocols, numerical simulations and an in vivo canine experiment demonstrated that noise of CBV can be effectively reduced using the proposed exposure modulation method.
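
    The equality condition quoted above can be illustrated with a toy noise-propagation model (a simplification for illustration, not the paper's derivation): if CBV is proportional to the summed enhancement of the non-background frames over the averaged background frames, and per-frame noise variance scales inversely with the dose spent on that frame, then Var(CBV) is proportional to 1/D_background + 1/D_rest, which is minimized when the two dose shares are equal.

    ```python
    import numpy as np

    def cbv_noise_var(d_bg, d_rest, n_rest=30, k=1.0):
        """Toy model: Var(CBV) for background dose d_bg and total non-background dose d_rest.

        Background mean variance ~ k / d_bg, each non-background frame variance ~ k * n_rest / d_rest,
        and CBV ~ sum_t (frame_t - background_mean), so
        Var(CBV) ~ n_rest**2 * k / d_bg + n_rest * (k * n_rest / d_rest)
                 = k * n_rest**2 * (1/d_bg + 1/d_rest).
        """
        return k * n_rest**2 * (1.0 / d_bg + 1.0 / d_rest)

    total_dose = 100.0
    splits = np.linspace(5, 95, 181)          # dose assigned to the background frames
    variances = [cbv_noise_var(d, total_dose - d) for d in splits]
    best = splits[int(np.argmin(variances))]
    print(f"noise-optimal background dose share: {best:.1f} of {total_dose}")  # ~50, i.e. an equal split
    ```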

  11. Construction and analysis of a high-density genetic linkage map in cabbage (Brassica oleracea L. var. capitata)

    PubMed Central

    2012-01-01

    Background Brassica oleracea encompasses a family of vegetables, including cabbage, that are among the most widely cultivated crops. In 2009, the B. oleracea Genome Sequencing Project was launched using next-generation sequencing technology. None of the available maps were detailed enough to anchor the sequence scaffolds for the Genome Sequencing Project. This report describes the development of a large number of SSR and SNP markers from the whole-genome shotgun sequence data of B. oleracea, and the construction of a high-density genetic linkage map using a doubled haploid mapping population. Results The B. oleracea high-density genetic linkage map that was constructed includes 1,227 markers in nine linkage groups spanning a total of 1197.9 cM with an average of 0.98 cM between adjacent loci. There were 602 SSR markers and 625 SNP markers on the map. The chromosome with the highest number of markers (186) was C03, and the chromosome with the smallest number of markers (99) was C09. Conclusions This first high-density map allowed the assembled scaffolds to be anchored to pseudochromosomes. The map also provides useful information for positional cloning, molecular breeding, and integration of information of genes and traits in B. oleracea. All the markers on the map will be transferable and could be used for the construction of other genetic maps. PMID:23033896

  12. Genome-Wide QTL Mapping for Wheat Processing Quality Parameters in a Gaocheng 8901/Zhoumai 16 Recombinant Inbred Line Population.

    PubMed

    Jin, Hui; Wen, Weie; Liu, Jindong; Zhai, Shengnan; Zhang, Yan; Yan, Jun; Liu, Zhiyong; Xia, Xianchun; He, Zhonghu

    2016-01-01

    Dough rheological and starch pasting properties play an important role in determining processing quality in bread wheat (Triticum aestivum L.). In the present study, a recombinant inbred line (RIL) population derived from a Gaocheng 8901/Zhoumai 16 cross grown in three environments was used to identify quantitative trait loci (QTLs) for dough rheological and starch pasting properties evaluated by Mixograph, Rapid Visco-Analyzer (RVA), and Mixolab parameters using the wheat 90 and 660 K single nucleotide polymorphism (SNP) chip assays. A high-density linkage map constructed with 46,961 polymorphic SNP markers from the wheat 90 and 660 K SNP assays spanned a total length of 4121 cM, with an average chromosome length of 196.2 cM and marker density of 0.09 cM/marker; 6596 new SNP markers were anchored to the bread wheat linkage map, with 1046 and 5550 markers from the 90 and 660 K SNP assays, respectively. Composite interval mapping identified 119 additive QTLs on 20 chromosomes except 4D; among them, 15 accounted for more than 10% of the phenotypic variation across two or three environments. Twelve QTLs for Mixograph parameters, 17 for RVA parameters and 55 for Mixolab parameters were new. Eleven QTL clusters were identified. The closely linked SNP markers can be used in marker-assisted wheat breeding in combination with the Kompetitive Allele Specific PCR (KASP) technique for improvement of processing quality in bread wheat.

  13. Genome-Wide QTL Mapping for Wheat Processing Quality Parameters in a Gaocheng 8901/Zhoumai 16 Recombinant Inbred Line Population

    PubMed Central

    Jin, Hui; Wen, Weie; Liu, Jindong; Zhai, Shengnan; Zhang, Yan; Yan, Jun; Liu, Zhiyong; Xia, Xianchun; He, Zhonghu

    2016-01-01

    Dough rheological and starch pasting properties play an important role in determining processing quality in bread wheat (Triticum aestivum L.). In the present study, a recombinant inbred line (RIL) population derived from a Gaocheng 8901/Zhoumai 16 cross grown in three environments was used to identify quantitative trait loci (QTLs) for dough rheological and starch pasting properties evaluated by Mixograph, Rapid Visco-Analyzer (RVA), and Mixolab parameters using the wheat 90 and 660 K single nucleotide polymorphism (SNP) chip assays. A high-density linkage map constructed with 46,961 polymorphic SNP markers from the wheat 90 and 660 K SNP assays spanned a total length of 4121 cM, with an average chromosome length of 196.2 cM and marker density of 0.09 cM/marker; 6596 new SNP markers were anchored to the bread wheat linkage map, with 1046 and 5550 markers from the 90 and 660 K SNP assays, respectively. Composite interval mapping identified 119 additive QTLs on 20 chromosomes except 4D; among them, 15 accounted for more than 10% of the phenotypic variation across two or three environments. Twelve QTLs for Mixograph parameters, 17 for RVA parameters and 55 for Mixolab parameters were new. Eleven QTL clusters were identified. The closely linked SNP markers can be used in marker-assisted wheat breeding in combination with the Kompetitive Allele Specific PCR (KASP) technique for improvement of processing quality in bread wheat. PMID:27486464

  14. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.
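
    As a rough sketch of combining global and local adaptation in tone compression (a generic operator with made-up parameters, not the authors' 3D rendering model), the code below blends the global mean luminance with a Gaussian-blurred local luminance and uses that blend as the adaptation level in a simple compressive nonlinearity.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def tone_compress(luminance, alpha=0.5, sigma=15.0, eps=1e-6):
        """Compress luminance with a mixed global/local adaptation level.

        alpha = 1 -> purely global adaptation, alpha = 0 -> purely local adaptation.
        """
        global_adapt = np.full_like(luminance, luminance.mean())
        local_adapt = gaussian_filter(luminance, sigma)
        adapt = alpha * global_adapt + (1.0 - alpha) * local_adapt
        return luminance / (luminance + adapt + eps)     # output in [0, 1)

    # Synthetic high-dynamic-range test image: a bright patch on a dark ramp.
    lum = np.tile(np.linspace(0.01, 1.0, 256), (256, 1))
    lum[96:160, 96:160] *= 50.0
    ldr = tone_compress(lum)
    print(ldr.min(), ldr.max())
    ```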

  15. AGM2015: Antineutrino Global Map 2015

    PubMed Central

    Usman, S.M.; Jocher, G.R.; Dye, S.T.; McDonough, W.F.; Learned, J.G.

    2015-01-01

    Every second, more than 10^25 antineutrinos radiate to space from Earth, shining like a faint antineutrino star. Underground antineutrino detectors have revealed the rapidly decaying fission products inside nuclear reactors, verified the long-lived radioactivity inside our planet, and informed sensitive experiments for probing fundamental physics. Mapping the anisotropic antineutrino flux and energy spectrum advances geoscience by defining the amount and distribution of radioactive power within Earth while critically evaluating competing compositional models of the planet. We present the Antineutrino Global Map 2015 (AGM2015), an experimentally informed model of Earth’s surface antineutrino flux over the 0 to 11 MeV energy spectrum, along with an assessment of systematic errors. The open source AGM2015 provides fundamental predictions for experiments, assists in strategic detector placement to determine neutrino mass hierarchy, and aids in identifying undeclared nuclear reactors. We use cosmochemically and seismologically informed models of the radiogenic lithosphere/mantle combined with the estimated antineutrino flux, as measured by KamLAND and Borexino, to determine the Earth’s total antineutrino luminosity at . We find a dominant flux of geo-neutrinos, predict sub-equal crust and mantle contributions, with ~1% of the total flux from man-made nuclear reactors. PMID:26323507

  16. AGM2015: Antineutrino Global Map 2015.

    PubMed

    Usman, S M; Jocher, G R; Dye, S T; McDonough, W F; Learned, J G

    2015-09-01

    Every second, more than 10^25 antineutrinos radiate to space from Earth, shining like a faint antineutrino star. Underground antineutrino detectors have revealed the rapidly decaying fission products inside nuclear reactors, verified the long-lived radioactivity inside our planet, and informed sensitive experiments for probing fundamental physics. Mapping the anisotropic antineutrino flux and energy spectrum advances geoscience by defining the amount and distribution of radioactive power within Earth while critically evaluating competing compositional models of the planet. We present the Antineutrino Global Map 2015 (AGM2015), an experimentally informed model of Earth's surface antineutrino flux over the 0 to 11 MeV energy spectrum, along with an assessment of systematic errors. The open source AGM2015 provides fundamental predictions for experiments, assists in strategic detector placement to determine neutrino mass hierarchy, and aids in identifying undeclared nuclear reactors. We use cosmochemically and seismologically informed models of the radiogenic lithosphere/mantle combined with the estimated antineutrino flux, as measured by KamLAND and Borexino, to determine the Earth's total antineutrino luminosity at . We find a dominant flux of geo-neutrinos, predict sub-equal crust and mantle contributions, with ~1% of the total flux from man-made nuclear reactors.

  17. Inter-track interference mitigation with two-dimensional variable equalizer for bit patterned media recording

    NASA Astrophysics Data System (ADS)

    Wang, Yao; Vijaya Kumar, B. V. K.

    2017-05-01

    The increased track density in bit patterned media recording (BPMR) causes increased inter-track interference (ITI), which degrades the bit error rate (BER) performance. In order to mitigate the effect of the ITI, signals from multiple tracks can be equalized by a 2D equalizer with 1D target. Usually, the 2D fixed equalizer coefficients are obtained by using a pseudo-random bit sequence (PRBS) for training. In this study, a 2D variable equalizer is proposed, where various sets of 2D equalizer coefficients are predetermined and stored for different ITI patterns besides the usual PRBS training. For data detection, as the ITI patterns are unknown in the first global iteration, the main and adjacent tracks are equalized with the conventional 2D fixed equalizer, detected with Bahl-Cocke-Jelinek-Raviv (BCJR) detector and decoded with low-density parity-check (LDPC) decoder. Then using the estimated bit information from main and adjacent tracks, the ITI pattern for each island of the main track can be estimated and the corresponding 2D variable equalizers are used to better equalize the bits on the main track. This process is executed iteratively by feeding back the main track information. Simulation results indicate that for both single-track and two-track detection, the proposed 2D variable equalizer can achieve better BER and frame error rate (FER) compared to that with the 2D fixed equalizer.
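
    A least-squares sketch of training a 2D fixed equalizer from a known training pattern (hypothetical channel response and window sizes; the paper's variable-equalizer bank, BCJR detection and LDPC decoding are not reproduced here): readback samples from three tracks are gathered into sliding 2D windows, and the tap weights that best map each window onto the corresponding main-track bit are solved for directly.

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    rng = np.random.default_rng(1)

    # Known training bits on three adjacent tracks (+1/-1), standing in for a PRBS.
    n_bits = 4000
    bits = rng.choice([-1.0, 1.0], size=(3, n_bits))

    # Hypothetical 2D channel response (cross-track x down-track) plus additive noise.
    h = np.array([[0.1, 0.3, 0.1],
                  [0.3, 1.0, 0.3],
                  [0.1, 0.3, 0.1]])
    readback = convolve2d(bits, h, mode="same") + 0.1 * rng.normal(size=bits.shape)

    # Build sliding 3x5 windows of readback samples centred on each main-track bit.
    half = 2
    rows, targets = [], []
    for t in range(half, n_bits - half):
        rows.append(readback[:, t - half:t + half + 1].ravel())
        targets.append(bits[1, t])                 # 1D target: the centre-track bit itself
    A, y = np.array(rows), np.array(targets)

    # Least-squares 2D equalizer taps (the "fixed" equalizer trained on the PRBS).
    taps, *_ = np.linalg.lstsq(A, y, rcond=None)
    detected = np.sign(A @ taps)
    print("training BER:", np.mean(detected != y))
    ```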

  18. The Cancer Cluster - An unbound collection of groups

    NASA Technical Reports Server (NTRS)

    Geller, M. J.; Beers, T. C.; Bothun, G. D.; Huchra, J. P.

    1983-01-01

    A surface density contour map of the Cancer Cluster derived from galaxy counts in the Zwicky catalog is presented. The contour map shows that the galaxy distribution is clumpy. When this spatial distribution is combined with nearly complete velocity information, the clumps stand out more clearly; there are significant differences in the mean velocities of the clumps which exceed their internal velocity dispersions. The Cancer Cluster is not a proper 'cluster' but is a collection of discrete groups, each with a velocity dispersion of approximately 300 km/s, separating from one another with the cosmological flow. The mass-to-light ratio for galaxies in the main concentration is approximately 320 solar masses/solar luminosities (H₀ = 100 km/s/Mpc).

  19. N-(6-Methylpyridin-2-yl)mesitylenesulfonamide and acetic acid--a salt, a cocrystal or both?

    PubMed

    Pan, Fangfang; Kalf, Irmgard; Englert, Ulli

    2015-08-01

    In the solid obtained from N-(6-methylpyridin-2-yl)mesitylenesulfonamide and acetic acid, the constituents interact via two N-H···O hydrogen bonds. The H atom situated in one of these short contacts is disordered over two positions: one of these positions is formally associated with an adduct of the neutral sulfonamide molecule and the neutral acetic acid molecule, and corresponds to a cocrystal, while the alternative site is associated with salt formation between a protonated sulfonamide molecule and deprotonated acetic acid molecule. Site-occupancy refinements and electron densities from difference Fourier maps suggest a trend with temperature, albeit of limited significance; the cocrystal is more relevant at 100 K, whereas the intensity data collected at room temperature match the description as cocrystal and salt equally well.

  20. A wood density and aboveground biomass variability assessment using pre-felling inventory data in Costa Rica.

    PubMed

    Svob, Sienna; Arroyo-Mora, J Pablo; Kalacska, Margaret

    2014-12-01

    The high spatio-temporal variability of aboveground biomass (AGB) in tropical forests is a large source of uncertainty in forest carbon stock estimation. Due to their spatial distribution and sampling intensity, pre-felling inventories are a potential source of ground level data that could help reduce this uncertainty at larger spatial scales. Further, exploring the factors known to influence tropical forest biomass, such as wood density and large tree density, will improve our knowledge of biomass distribution across tropical regions. Here, we evaluate (1) the variability of wood density and (2) the variability of AGB across five ecosystems of Costa Rica. Using forest management (pre-felling) inventories we found that, of the regions studied, Huetar Norte had the highest mean wood density of trees with a diameter at breast height (DBH) greater than or equal to 30 cm, 0.623 ± 0.182 g cm⁻³ (mean ± standard deviation). Although the greatest wood density was observed in Huetar Norte, the highest mean estimated AGB (EAGB) of trees with a DBH greater than or equal to 30 cm was observed in Osa peninsula (173.47 ± 60.23 Mg ha⁻¹). The density of large trees explained approximately 50% of EAGB variability across the five ecosystems studied. Comparing our study's EAGB to published estimates reveals that, in the regions of Costa Rica where AGB has been previously sampled, our forest management data produced similar values. This study presents the most spatially rich analysis of ground level AGB data in Costa Rica to date. Using forest management data, we found that EAGB within and among five Costa Rican ecosystems is highly variable. Combining commercial logging inventories with ecological plots will provide a more representative ground level dataset for the calibration of the models and remotely sensed data used to estimate AGB at regional and national scales. Additionally, because the non-protected areas of the tropics offer the greatest opportunity to reduce rates of deforestation and forest degradation, logging inventories offer a promising source of data to support mechanisms such as the United Nations REDD + (Reducing Emissions from Tropical Deforestation and Degradation) program.
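
    The reported "fraction of EAGB variability explained by large-tree density" is simply the R² of a regression of plot-level biomass on large-tree density; a minimal sketch with made-up plot data (not the Costa Rican inventories) is shown below.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # Hypothetical plot-level data: density of large trees (stems/ha with DBH >= 30 cm)
    # and estimated aboveground biomass (Mg/ha).
    large_tree_density = rng.uniform(20, 120, 80)
    eagb = 1.4 * large_tree_density + rng.normal(0, 25, 80)

    slope, intercept = np.polyfit(large_tree_density, eagb, 1)
    predicted = slope * large_tree_density + intercept
    r2 = 1 - np.sum((eagb - predicted) ** 2) / np.sum((eagb - eagb.mean()) ** 2)
    print(f"variance in EAGB explained by large-tree density: R^2 = {r2:.2f}")
    ```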

  1. In vivo mapping of current density distribution in brain tissues during deep brain stimulation (DBS)

    NASA Astrophysics Data System (ADS)

    Sajib, Saurav Z. K.; Oh, Tong In; Kim, Hyung Joong; Kwon, Oh In; Woo, Eung Je

    2017-01-01

    New methods for in vivo mapping of brain responses during deep brain stimulation (DBS) are indispensable to secure clinical applications. Assessment of current density distribution, induced by internally injected currents, may provide an alternative method for understanding the therapeutic effects of electrical stimulation. The current flow and pathway are affected by internal conductivity, and can be imaged using magnetic resonance-based conductivity imaging methods. Magnetic resonance electrical impedance tomography (MREIT) is an imaging method that can enable highly resolved mapping of electromagnetic tissue properties such as current density and conductivity of living tissues. In the current study, we experimentally imaged current density distribution of in vivo canine brains by applying MREIT to electrical stimulation. The current density maps of three canine brains were calculated from the measured magnetic flux density data. The absolute current density values of brain tissues, including gray matter, white matter, and cerebrospinal fluid were compared to assess the active regions during DBS. The resulting current density in different tissue types may provide useful information about current pathways and volume activation for adjusting surgical planning and understanding the therapeutic effects of DBS.
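
    If the full magnetic flux density vector B were known on a regular grid, the current density would follow from Ampère's law, J = (1/μ0) ∇ × B. The sketch below evaluates that curl with finite differences purely as an illustration of the underlying relation; practical MREIT reconstructions typically measure a single component of B and use more elaborate algorithms.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

    def current_density_from_B(Bx, By, Bz, dx, dy, dz):
        """J = curl(B) / mu0 on a regular grid (arrays indexed as [x, y, z])."""
        dBz_dy = np.gradient(Bz, dy, axis=1)
        dBy_dz = np.gradient(By, dz, axis=2)
        dBx_dz = np.gradient(Bx, dz, axis=2)
        dBz_dx = np.gradient(Bz, dx, axis=0)
        dBy_dx = np.gradient(By, dx, axis=0)
        dBx_dy = np.gradient(Bx, dy, axis=1)
        Jx = (dBz_dy - dBy_dz) / MU0
        Jy = (dBx_dz - dBz_dx) / MU0
        Jz = (dBy_dx - dBx_dy) / MU0
        return Jx, Jy, Jz

    # Sanity check: for a uniform current density Jz0 along z, the field
    # B = (mu0*Jz0/2) * (-y, x, 0) has curl(B)/mu0 = (0, 0, Jz0).
    n, L, Jz0 = 32, 0.1, 5.0
    x = y = z = np.linspace(-L / 2, L / 2, n)
    X, Y, _ = np.meshgrid(x, y, z, indexing="ij")
    Bx, By, Bz = -MU0 * Jz0 / 2 * Y, MU0 * Jz0 / 2 * X, np.zeros_like(X)
    d = L / (n - 1)
    Jx, Jy, Jz = current_density_from_B(Bx, By, Bz, d, d, d)
    print(Jz[n // 2, n // 2, n // 2])   # ~5.0 A/m^2
    ```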

  2. In situ electrical and thermal monitoring of printed electronics by two-photon mapping.

    PubMed

    Pastorelli, Francesco; Accanto, Nicolò; Jørgensen, Mikkel; van Hulst, Niek F; Krebs, Frederik C

    2017-06-19

    Printed electronics is emerging as a new, large scale and cost effective technology that will be disruptive in fields such as energy harvesting, consumer electronics and medical sensors. The performance of printed electronic devices relies principally on the carrier mobility and molecular packing of the polymer semiconductor material. Unfortunately, the analysis of such materials is generally performed with destructive techniques, which are hard to make compatible with in situ measurements, and pose a great obstacle for the mass production of printed electronics devices. A rapid, in situ, non-destructive and low-cost testing method is needed. In this study, we demonstrate that nonlinear optical microscopy is a promising technique to achieve this goal. Using ultrashort laser pulses we stimulate two-photon absorption in a roll coated polymer semiconductor and map the resulting two-photon induced photoluminescence and second harmonic response. We show that, in our experimental conditions, it is possible to relate the total amount of photoluminescence detected to important material properties such as the charge carrier density and the molecular packing of the printed polymer material, all with a spatial resolution of 400 nm. Importantly, this technique can be extended to the real time mapping of the polymer semiconductor film, even during the printing process, in which the high printing speed poses the need for equally high acquisition rates.

  3. Plastic Responses of a Sessile Prey to Multiple Predators: A Field and Experimental Study

    PubMed Central

    Hirsch, Philipp Emanuel; Cayon, David; Svanbäck, Richard

    2014-01-01

    Background Theory predicts that prey facing a combination of predators with different feeding modes have two options: to express a response against the feeding mode of the most dangerous predator, or to express an intermediate response. Intermediate phenotypes protect equally well against several feeding modes, rather than providing specific protection against a single predator. Anti-predator traits that protect against a common feeding mode displayed by all predators should be expressed regardless of predator combination, as there is no need for trade-offs. Principal Findings We studied phenotypic anti-predator responses of zebra mussels to predation threat from a handling-time-limited (crayfish) and a gape-size-limited (roach) predator. Both predators dislodge mussels from the substrate but diverge in their further feeding modes. Mussels increased expression of a non-specific defense trait (attachment strength) against all combinations of predators relative to a control. In response to roach alone, mussels showed a tendency to develop a weaker and more elongated shell. In response to crayfish, mussels developed a harder and rounder shell. When exposed to either a combination of predators or no predator, mussels developed an intermediate phenotype. Mussel growth rate was positively correlated with an elongated weaker shell and negatively correlated with a round strong shell, indicating a trade-off between anti-predator responses. Field observations of prey phenotypes revealed the presence of both anti-predator phenotypes and the trade-off with growth, but intra-specific population density and bottom substrate had a greater influence than predator density. Conclusions Our results show that two different predators can exert both functionally equivalent and inverse selection pressures on a single prey. Our field study suggests that abiotic factors and prey population density should be considered when attempting to explain phenotypic diversity in the wild. PMID:25517986

  4. A comparison of spatial analysis methods for the construction of topographic maps of retinal cell density.

    PubMed

    Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P

    2014-01-01

    Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to counting neurons, a more rigorous analysis of the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps but the smoothing parameters used may affect the outcome.
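
    For a concrete feel for the contrast drawn above, here is a minimal sketch (in Python, not the published R script) comparing linear interpolation of sampled cell counts with Gaussian kernel (Nadaraya-Watson) smoothing on a synthetic retina; the bandwidth sigma stands in for the smoothing parameter whose choice the authors caution about.

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(2)

    # Synthetic sampling sites (mm) and cell densities (cells/mm^2): central peak plus noise.
    xy = rng.uniform(-5, 5, size=(300, 2))
    density = 8000 * np.exp(-np.sum(xy**2, axis=1) / 8) + rng.normal(0, 400, 300)

    # Regular grid on which the topographic map is evaluated.
    gx, gy = np.meshgrid(np.linspace(-5, 5, 101), np.linspace(-5, 5, 101))
    grid = np.column_stack([gx.ravel(), gy.ravel()])

    # 1) Linear interpolation: honours every sample, noise and all.
    interp_map = griddata(xy, density, (gx, gy), method="linear")

    # 2) Gaussian kernel smoothing (Nadaraya-Watson): averages out sampling noise.
    def gaussian_smooth(points, values, targets, sigma=1.0):
        d2 = np.sum((targets[:, None, :] - points[None, :, :]) ** 2, axis=2)
        w = np.exp(-d2 / (2 * sigma**2))
        return (w @ values) / w.sum(axis=1)

    smooth_map = gaussian_smooth(xy, density, grid, sigma=1.0).reshape(gx.shape)
    print(np.nanmax(interp_map), smooth_map.max())
    ```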

  5. Dynamics of total electron content distribution during strong geomagnetic storms

    NASA Astrophysics Data System (ADS)

    Astafyeva, E. I.; Afraimovich, E. L.; Kosogorov, E. A.

    We have developed a new method for mapping the displacement velocity of total electron content (TEC) equal lines. The method is based on the technique of global absolute vertical TEC mapping (Global Ionospheric Maps, GIM). GIMs with 2-hour time resolution are available from the Internet (ftp://cddisa.gsfc.nasa.gov) in the standard IONEX file format. We determine the absolute value of the displacement velocity, as well as its wave-vector orientation, from increments of the TEC x and y derivatives and the TEC time derivative for each standard GIM cell (5° in longitude by 2.5° in latitude). Thus we not only observe the global travel of TEC equal lines but can also estimate the velocity of this travel. Using the new method we observed an anomalously rapid accumulation of ionospheric plasma in a confined area, due to the depletion of ionization over other, more extensive territories. During the main phase of the geomagnetic storm on 29-30 October 2003, very large TEC enhancements appeared in the southwest of North America, where the TEC value reached up to 200 TECU (1 TECU = 10^16 m^-2). It was found that the maximal velocity of the motion of TEC equal lines exceeded 1500 m/s, with a mean velocity of about 400 m/s. The azimuths of the wave vectors of the TEC equal lines were oriented toward the center of the region with anomalously high TEC values, the southwest of North America. It should be noted that the maximal TEC value during geomagnetically quiet conditions is about 60-80 TECU.
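
    The displacement speed of an iso-TEC line follows from the level-set relation: treating TEC(x, y, t) as a scalar field, a contour of constant TEC moves with normal speed v = -(∂TEC/∂t)/|∇TEC| along the gradient direction. A small numpy sketch of that bookkeeping is given below (synthetic field and grid spacing; real GIM cells are 5° × 2.5° and 2 h apart).

    ```python
    import numpy as np

    def equal_line_velocity(tec_t0, tec_t1, dx_m, dy_m, dt_s, eps=1e-9):
        """Normal speed (m/s) and unit direction of iso-TEC contour motion between two maps."""
        dtec_dt = (tec_t1 - tec_t0) / dt_s
        gy, gx = np.gradient(0.5 * (tec_t0 + tec_t1), dy_m, dx_m)   # rows = y, cols = x
        grad_mag = np.hypot(gx, gy)
        speed = -dtec_dt / (grad_mag + eps)
        direction = np.stack([gx, gy]) / (grad_mag + eps)   # speed > 0: motion along +gradient
        return speed, direction

    # Toy example: a TEC "bump" translating eastward by 50 km between two maps 2 h apart.
    x = np.linspace(0, 5e6, 100)        # metres
    y = np.linspace(0, 2.5e6, 50)
    X, Y = np.meshgrid(x, y)
    bump = lambda x0: 50 * np.exp(-((X - x0) ** 2 + (Y - 1.25e6) ** 2) / (5e5) ** 2)
    speed, _ = equal_line_velocity(bump(2.0e6), bump(2.05e6), x[1] - x[0], y[1] - y[0], 7200.0)
    print("speed on the leading edge (m/s):", abs(speed[25, 50]))   # ~ 50 km / 7200 s ~ 7 m/s
    ```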

  6. A method to derive vegetation distribution maps for pollen dispersion models using birch as an example

    NASA Astrophysics Data System (ADS)

    Pauling, A.; Rotach, M. W.; Gehrig, R.; Clot, B.

    2012-09-01

    Detailed knowledge of the spatial distribution of sources is a crucial prerequisite for the application of pollen dispersion models such as, for example, COSMO-ART (COnsortium for Small-scale MOdeling - Aerosols and Reactive Trace gases). However, this input is not available for the allergy-relevant species such as hazel, alder, birch, grass or ragweed. Hence, plant distribution datasets need to be derived from suitable sources. We present an approach to produce such a dataset from existing sources using birch as an example. The basic idea is to construct a birch dataset using a region with good data coverage for calibration and then to extrapolate this relationship to a larger area by using land use classes. We use the Swiss forest inventory (1 km resolution) in combination with a 74-category land use dataset that covers the non-forested areas of Switzerland as well (resolution 100 m). Then we assign birch density categories of 0%, 0.1%, 0.5% and 2.5% to each of the 74 land use categories. The combination of this derived dataset with the birch distribution from the forest inventory yields a fairly accurate birch distribution encompassing entire Switzerland. The land use categories of the Global Land Cover 2000 (GLC2000; Global Land Cover 2000 database, 2003, European Commission, Joint Research Centre; resolution 1 km) are then calibrated with the Swiss dataset in order to derive a Europe-wide birch distribution dataset and aggregated onto the 7 km COSMO-ART grid. This procedure thus assumes that a certain GLC2000 land use category has the same birch density wherever it may occur in Europe. In order to reduce the strict application of this crucial assumption, the birch density distribution as obtained from the previous steps is weighted using the mean Seasonal Pollen Index (SPI; yearly sums of daily pollen concentrations). For future improvement, region-specific birch densities for the GLC2000 categories could be integrated into the mapping procedure.
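
    The calibration step described above amounts to a lookup from land-use class to an assumed birch cover fraction, followed by a weighting with the mean Seasonal Pollen Index; the sketch below shows that bookkeeping with invented classes, fractions and SPI values rather than the Swiss or GLC2000 data.

    ```python
    import numpy as np

    # Hypothetical calibration: birch cover fraction assigned to a few land-use classes
    # (the paper uses the 74-class Swiss dataset and GLC2000; classes and values here are examples).
    birch_fraction_by_class = {"broadleaf_forest": 0.025, "mixed_forest": 0.005,
                               "shrubland": 0.001, "urban": 0.0, "water": 0.0}

    classes = np.array([["broadleaf_forest", "mixed_forest", "urban"],
                        ["shrubland", "broadleaf_forest", "water"]])
    birch_density = np.vectorize(birch_fraction_by_class.get)(classes).astype(float)

    # Weight each grid cell by its mean Seasonal Pollen Index relative to the regional mean.
    spi = np.array([[1.2, 0.8, 1.0],
                    [0.9, 1.5, 1.0]])
    weighted_density = birch_density * spi / spi.mean()
    print(weighted_density)
    ```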

  7. Intensive Training Course on Microplanning and School Mapping (Arusha, United Republic of Tanzania, March 8-26, 1982). Report.

    ERIC Educational Resources Information Center

    Caillods, F.; Heyman, S.

    This manual contains documentation of a 3-week course conducted jointly in March 1982 by the Tanzanian Ministry of Education and the International Institute for Educational Planning on the subject of the school map (or micro-plan). Prepared at the regional or subregional level, the school map aims at equalizing educational opportunities and…

  8. A hot topic: the genetics of adaptation to geothermal vents in Mimulus guttatus.

    PubMed

    Ferris, Kathleen G

    2016-11-01

    Identifying the individual loci and mutations that underlie adaptation to extreme environments has long been a goal of evolutionary biology. However, finding the genes that underlie adaptive traits is difficult for several reasons. First, because many traits and genes evolve simultaneously as populations diverge, it is difficult to disentangle adaptation from neutral demographic processes. Second, finding the individual loci involved in any trait is challenging given the respective limitations of quantitative and population genetic methods. In this issue of Molecular Ecology, Hendrick et al. (2016) overcome these difficulties and determine the genetic basis of microgeographic adaptation between geothermal vent and nonthermal populations of Mimulus guttatus in Yellowstone National Park. The authors accomplish this by combining population and quantitative genetic techniques, a powerful, but labour-intensive, strategy for identifying individual causative adaptive loci that few studies have used (Stinchcombe & Hoekstra ). In a previous common garden experiment (Lekberg et al. 2012), thermal M. guttatus populations were found to differ from their closely related nonthermal neighbours in various adaptive phenotypes including trichome density. Hendrick et al. (2016) combine quantitative trait loci (QTL) mapping, population genomic scans for selection and admixture mapping to identify a single genetic locus underlying differences in trichome density between thermal and nonthermal M. guttatus. The candidate gene, R2R3 MYB, is homologous to genes involved in trichome development across flowering plants. The major trichome QTL, Tr14, is also involved in trichome density differences in an independent M. guttatus population comparison (Holeski et al. 2010) making this an example of parallel genetic evolution. © 2016 John Wiley & Sons Ltd.

  9. Connecting the dots: a correlation between ionizing radiation and cloud mass-loss rate traced by optical integral field spectroscopy

    NASA Astrophysics Data System (ADS)

    McLeod, A. F.; Gritschneder, M.; Dale, J. E.; Ginsburg, A.; Klaassen, P. D.; Mottram, J. C.; Preibisch, T.; Ramsay, S.; Reiter, M.; Testi, L.

    2016-11-01

    We present an analysis, based on data from the integral field spectrograph Multi Unit Spectroscopic Explorer (MUSE) mounted on the Very Large Telescope, of the effect of feedback from O- and B-type stars on pillar-like structures in the Carina Nebular Complex, one of the most massive star-forming regions in the Galaxy. For the observed pillars, we compute gas electron density and temperature maps, produce integrated line and velocity maps of the ionized gas, study the ionization fronts at the pillar tips, analyse the properties of the single regions, and detect two ionized jets originating from two distinct pillar tips. For each pillar tip, we determine the incident ionizing photon flux Q0,pil originating from the nearby massive O- and B-type stars and compute the mass-loss rate Ṁ of the pillar tips due to photoevaporation caused by the incident ionizing radiation. We combine the results of the Carina data set with archival MUSE data of a pillar in NGC 3603 and with previously published MUSE data of the Pillars of Creation in M16, and, with a total of 10 analysed pillars, find tight correlations between the ionizing photon flux and the electron density, the electron density and the distance from the ionizing sources, and the ionizing photon flux and the mass-loss rate. The combined MUSE data sets of pillars in regions with different physical conditions and stellar content therefore yield an empirical quantification of the feedback effects of ionizing radiation. In agreement with models, we find that Ṁ ∝ Q0,pil^(1/2).
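
    A correlation of the form Ṁ ∝ Q^(1/2) is conveniently checked by a straight-line fit in log-log space, since log Ṁ = 0.5 log Q + const. A tiny sketch with made-up numbers (the real measurements are in the paper) is shown below.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical ionizing photon fluxes and mass-loss rates (arbitrary consistent units).
    Q = np.logspace(47, 51, 10)                               # photons / s
    M_dot = 1e-30 * np.sqrt(Q) * rng.lognormal(0, 0.2, 10)    # built to follow a 1/2 power law + scatter

    slope, intercept = np.polyfit(np.log10(Q), np.log10(M_dot), 1)
    print(f"fitted power-law index: {slope:.2f}")             # ~0.5 for M_dot ∝ Q^(1/2)
    ```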

  10. Isostatic gravity map with simplified geology of the Los Angeles 30 x 60 minute quadrangle

    USGS Publications Warehouse

    Wooley, R.J.; Yerkes, R.F.; Langenheim, V.E.; Chuang, F.C.

    2003-01-01

    This isostatic residual gravity map is part of the Southern California Areal Mapping Project (SCAMP) and is intended to promote further understanding of the geology in the Los Angeles 30 x 60 minute quadrangle, California, by serving as a basis for geophysical interpretations and by supporting both geological mapping and topical (especially earthquake) studies. Local spatial variations in the Earth's gravity field (after various corrections for elevation, terrain, and deep crustal structure explained below) reflect the lateral variation in density in the mid- to upper crust. Densities often can be related to rock type, and abrupt spatial changes in density commonly mark lithologic boundaries. The map shows contours of isostatic gravity overlain on a simplified geology including faults and rock types. The map is draped over shaded-relief topography to show landforms.

  11. The equal combination synchronization of a class of chaotic systems with discontinuous output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Runzi; Zeng, Yanhui

    This paper investigates the equal combination synchronization of a class of chaotic systems. It is assumed that only the output state variable of the chaotic systems is available, and that this output may be a discontinuous state variable. By constructing proper observers, some novel criteria for the equal combination synchronization are proposed. The Lorenz chaotic system is taken as an example to demonstrate the efficiency of the proposed approach.

  12. Effect of Co-segregating Markers on High-Density Genetic Maps and Prediction of Map Expansion Using Machine Learning Algorithms.

    PubMed

    N'Diaye, Amidou; Haile, Jemanesh K; Fowler, D Brian; Ammar, Karim; Pozniak, Curtis J

    2017-01-01

    Advances in sequencing and genotyping methods have enabled cost-effective production of high-throughput single nucleotide polymorphism (SNP) markers, making them the markers of choice for linkage mapping. As a result, many laboratories have developed high-throughput SNP assays and built high-density genetic maps. However, the number of markers may, by orders of magnitude, exceed the resolution of recombination for a given population size so that only a minority of markers can accurately be ordered. Another issue attached to the so-called 'large p, small n' problem is that high-density genetic maps inevitably result in many markers clustering at the same position (co-segregating markers). While there are a number of related papers, none have addressed the impact of co-segregating markers on genetic maps. In the present study, we investigated the effects of co-segregating markers on high-density genetic map length and marker order using empirical data from two populations of wheat, Mohawk × Cocorit (durum wheat) and Norstar × Cappelle Desprez (bread wheat). The maps of both populations consisted of 85% co-segregating markers. Our study clearly showed that an excess of co-segregating markers can lead to map expansion, but has little effect on marker order. To estimate the inflation factor (IF), we generated a total of 24,473 linkage maps (8,203 maps for Mohawk × Cocorit and 16,270 maps for Norstar × Cappelle Desprez). Using seven machine learning algorithms, we were able to predict with an accuracy of 0.7 the map expansion due to the proportion of co-segregating markers. For example, in Mohawk × Cocorit, with 10 and 80% co-segregating markers the length of the map inflated by 4.5 and 16.6%, respectively. Similarly, the map of Norstar × Cappelle Desprez expanded by 3.8 and 11.7% with 10 and 80% co-segregating markers. With the increasing number of markers on SNP chips, the proportion of co-segregating markers in high-density maps will continue to increase, making map expansion unavoidable. Therefore, we suggest developers improve linkage mapping algorithms for efficient analysis of high-throughput data. This study outlines a practical strategy to estimate the IF due to the proportion of co-segregating markers and outlines a method to scale the length of the map accordingly.
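
    The proportion of co-segregating markers in a dataset of this kind can be computed directly by grouping markers with identical genotype vectors across the population; the sketch below does this for a toy 0/1-coded genotype matrix (not the wheat data) and reports the fraction of markers that share a map position with at least one other marker.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy genotype matrix: rows = markers, columns = individuals, coded 0/1 (two parental alleles).
    n_individuals = 90
    unique_patterns = rng.integers(0, 2, size=(120, n_individuals))   # 120 distinct segregation patterns
    copies = rng.integers(1, 12, size=120)                            # how many markers share each pattern
    genotypes = np.repeat(unique_patterns, copies, axis=0)
    rng.shuffle(genotypes, axis=0)

    # Group markers by identical genotype vector (co-segregating markers collapse to one bin).
    _, inverse, counts = np.unique(genotypes, axis=0, return_inverse=True, return_counts=True)
    cosegregating = counts[inverse.ravel()] > 1     # marker shares its pattern with at least one other
    proportion = cosegregating.mean()
    print(f"{genotypes.shape[0]} markers, {len(counts)} unique positions, "
          f"{100 * proportion:.1f}% co-segregating")
    ```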

  13. Effect of Co-segregating Markers on High-Density Genetic Maps and Prediction of Map Expansion Using Machine Learning Algorithms

    PubMed Central

    N’Diaye, Amidou; Haile, Jemanesh K.; Fowler, D. Brian; Ammar, Karim; Pozniak, Curtis J.

    2017-01-01

    Advances in sequencing and genotyping methods have enabled cost-effective production of high-throughput single nucleotide polymorphism (SNP) markers, making them the markers of choice for linkage mapping. As a result, many laboratories have developed high-throughput SNP assays and built high-density genetic maps. However, the number of markers may, by orders of magnitude, exceed the resolution of recombination for a given population size so that only a minority of markers can accurately be ordered. Another issue attached to the so-called ‘large p, small n’ problem is that high-density genetic maps inevitably result in many markers clustering at the same position (co-segregating markers). While there are a number of related papers, none have addressed the impact of co-segregating markers on genetic maps. In the present study, we investigated the effects of co-segregating markers on high-density genetic map length and marker order using empirical data from two populations of wheat, Mohawk × Cocorit (durum wheat) and Norstar × Cappelle Desprez (bread wheat). The maps of both populations consisted of 85% co-segregating markers. Our study clearly showed that an excess of co-segregating markers can lead to map expansion, but has little effect on marker order. To estimate the inflation factor (IF), we generated a total of 24,473 linkage maps (8,203 maps for Mohawk × Cocorit and 16,270 maps for Norstar × Cappelle Desprez). Using seven machine learning algorithms, we were able to predict with an accuracy of 0.7 the map expansion due to the proportion of co-segregating markers. For example, in Mohawk × Cocorit, with 10 and 80% co-segregating markers the length of the map inflated by 4.5 and 16.6%, respectively. Similarly, the map of Norstar × Cappelle Desprez expanded by 3.8 and 11.7% with 10 and 80% co-segregating markers. With the increasing number of markers on SNP chips, the proportion of co-segregating markers in high-density maps will continue to increase, making map expansion unavoidable. Therefore, we suggest developers improve linkage mapping algorithms for efficient analysis of high-throughput data. This study outlines a practical strategy to estimate the IF due to the proportion of co-segregating markers and outlines a method to scale the length of the map accordingly. PMID:28878789

  14. Hydration of Li+ -ion in atom-bond electronegativity equalization method-7P water: a molecular dynamics simulation study.

    PubMed

    Li, Xin; Yang, Zhong-Zhi

    2005-02-22

    We have carried out molecular dynamics simulations of a Li(+) ion in water over a wide range of temperatures (from 248 to 368 K). The simulations make use of the atom-bond electronegativity equalization method-7P water model, a seven-site flexible model with fluctuating charges, which has accurately reproduced many bulk water properties. The recently constructed Li(+)-water interaction potential, obtained by fitting to the experimental and ab initio gas-phase binding energies and to the measured structures of Li(+)-water clusters, is adopted in the simulations. ABEEM was proposed and developed by partitioning the electron density into atom and bond regions and by using the electronegativity equalization method (EEM) and density functional theory (DFT). Based on a combination of the atom-bond electronegativity equalization method and molecular mechanics (ABEEM/MM), a new set of water-water and Li(+)-water potentials, successfully applied to the ionic clusters Li(+)(H(2)O)(n) (n=1-6, 8), is further investigated for an aqueous solution of Li(+) in the present paper. Two points must be emphasized in the simulations: first, the model allows the charges on the interacting sites to fluctuate as a function of time; second, the ABEEM-7P model applies the parameter k(lp,H)(R(lp,H)) to explicitly describe the short-range hydrogen-bond interaction in the hydrogen-bond interaction region, providing a new description of the hydrogen bond. The static, dynamic, and thermodynamic properties have been studied in detail. In addition, at different temperatures, the structural properties such as radial distribution functions, and the dynamical properties such as diffusion coefficients and residence times of the water molecules in the first hydration shell of Li(+), are also simulated well. These simulation results show that the ABEEM/MM-based water-water and Li(+)-water potentials appear to be robust, giving overall characteristic hydration properties in excellent agreement with experiments and with other molecular dynamics simulations of similar systems.
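
    One of the dynamical properties mentioned, the diffusion coefficient, is routinely obtained from the slope of the mean-squared displacement via the Einstein relation D = lim_t MSD(t)/(6t). A generic sketch of that post-processing step is given below, using a random walk as a stand-in for the simulated Li(+) trajectory.

    ```python
    import numpy as np

    def diffusion_coefficient(positions, dt, fit_window=(0.2, 0.8)):
        """Einstein-relation estimate of D from a single 3D trajectory (positions: shape (n_steps, 3))."""
        n = len(positions)
        lags = np.arange(1, n // 2)
        msd = np.array([np.mean(np.sum((positions[lag:] - positions[:-lag]) ** 2, axis=1))
                        for lag in lags])
        t = lags * dt
        lo, hi = (int(f * len(lags)) for f in fit_window)   # fit the roughly linear part of MSD(t)
        slope = np.polyfit(t[lo:hi], msd[lo:hi], 1)[0]
        return slope / 6.0

    # Stand-in trajectory: a 3D random walk with known step statistics.
    rng = np.random.default_rng(5)
    dt, step_sigma = 1e-3, 0.05                     # ns, nm
    traj = np.cumsum(rng.normal(0, step_sigma, size=(5000, 3)), axis=0)
    D_true = step_sigma**2 / (2 * dt)               # per dimension: sigma^2 = 2 * D * dt
    print(f"estimated D = {diffusion_coefficient(traj, dt):.2f} nm^2/ns (expected ~ {D_true:.2f})")
    ```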

  15. Direct phase selection of initial phases from single-wavelength anomalous dispersion (SAD) for the improvement of electron density and ab initio structure determination.

    PubMed

    Chen, Chung-De; Huang, Yen-Chieh; Chiang, Hsin-Lin; Hsieh, Yin-Cheng; Guan, Hong-Hsiang; Chuankhayan, Phimonphan; Chen, Chun-Jung

    2014-09-01

    Optimization of the initial phasing has been a decisive factor in the success of the subsequent electron-density modification, model building and structure determination of biological macromolecules using the single-wavelength anomalous dispersion (SAD) method. Two possible phase solutions (φ1 and φ2) generated from two symmetric phase triangles in the Harker construction for the SAD method cause the well known phase ambiguity. A novel direct phase-selection method utilizing the θ(DS) list as a criterion to select optimized phases φ(am) from φ1 or φ2 of a subset of reflections with a high percentage of correct phases to replace the corresponding initial SAD phases φ(SAD) has been developed. Based on this work, reflections with an angle θ(DS) in the range 35-145° are selected for an optimized improvement, where θ(DS) is the angle between the initial phase φ(SAD) and a preliminary density-modification (DM) phase φ(DM)(NHL). The results show that utilizing the additional direct phase-selection step prior to simple solvent flattening without phase combination using existing DM programs, such as RESOLVE or DM from CCP4, significantly improves the final phases in terms of increased correlation coefficients of electron-density maps and diminished mean phase errors. With the improved phases and density maps from the direct phase-selection method, the completeness of residues of protein molecules built with main chains and side chains is enhanced for efficient structure determination.
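
    A schematic of the selection step, under the assumption (made here for illustration; the paper defines the exact rule) that the replacement phase φam is whichever SAD candidate φ1 or φ2 lies closer to the preliminary density-modification phase: compute θDS as the angular difference between φSAD and φDM, and swap phases only for reflections with 35° ≤ θDS ≤ 145°.

    ```python
    import numpy as np

    def angular_diff(a, b):
        """Smallest absolute difference between two phases in degrees, in [0, 180]."""
        d = np.abs((a - b) % 360.0)
        return np.minimum(d, 360.0 - d)

    def select_phases(phi_sad, phi_dm, phi_1, phi_2, lo=35.0, hi=145.0):
        """Replace phi_SAD by the SAD candidate closest to phi_DM, but only where theta_DS is in [lo, hi]."""
        theta_ds = angular_diff(phi_sad, phi_dm)
        closer_1 = angular_diff(phi_1, phi_dm) <= angular_diff(phi_2, phi_dm)
        phi_am = np.where(closer_1, phi_1, phi_2)
        selected = (theta_ds >= lo) & (theta_ds <= hi)
        return np.where(selected, phi_am, phi_sad), selected

    # Tiny synthetic reflection list (phases in degrees).
    rng = np.random.default_rng(6)
    n = 1000
    phi_1 = rng.uniform(0, 360, n)
    phi_2 = (phi_1 + rng.uniform(60, 300, n)) % 360        # the symmetric alternative solution
    phi_sad = np.where(rng.random(n) < 0.5, phi_1, phi_2)  # toy choice of initial SAD phase
    phi_dm = rng.uniform(0, 360, n)
    phi_new, mask = select_phases(phi_sad, phi_dm, phi_1, phi_2)
    print(f"{mask.sum()} of {n} reflections re-phased")
    ```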

  16. A predator equalizes rate of capture of a schooling prey in a patchy environment.

    PubMed

    Vijayan, Sundararaj; Kotler, Burt P; Abramsky, Zvika

    2017-05-01

    Prey individuals are often distributed heterogeneously in the environment, and their abundances and relative availabilities vary among patches. A foraging predator should maximize energetic gains by selectively choosing patches with higher prey density. However, catching behaviorally responsive and group-forming prey in patchy environments can be a challenge for predators. First, they have to identify the profitable patches, and second, they must manage the prey's sophisticated anti-predator behavior. Thus, the forager and its prey have to continuously adjust their behavior to that of their opponent. Given these conditions, the foraging predator's behavior should be dynamic with time in terms of foraging effort and prey capture rates across different patches. Theoretically, the allocation of its time among patches of behaviorally responsive prey should be such that it equalizes its prey capture rates across patches through time. We tested this prediction in a model system containing a predator (little egret) and group-forming prey (common goldfish) in two sets of experiments in which (1) patches (pools) contained equal numbers of prey, or in which (2) patches contained unequal densities of prey. The egret equalized the prey capture rate through time in both equal and different density experiments. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Near equality of ion phase space densities at earth, Jupiter, and Saturn

    NASA Technical Reports Server (NTRS)

    Cheng, A. F.; Krimigis, S. M.; Armstrong, T. P.

    1985-01-01

    Energetic-ion phase-space density profiles are strikingly similar in the inner magnetospheres of earth, Jupiter, and Saturn for ions of first adiabatic invariant near 100 MeV/G and small mirror latitudes. Losses occur inside L approximately equal to 7 for Jupiter and Saturn and inside L approximately equal to 5 at earth. At these L values there exist steep plasma-density gradients at all three planets, associated with the Io plasma torus at Jupiter, the Rhea-Dione-Tethys torus at Saturn, and the plasmasphere at earth. Measurements of ion flux-tube contents at Jupiter and Saturn by the low-energy charged-particle experiment show that these are similar (for O ions at L = 5-9) to those at earth (for protons at L = 2-6). Furthermore, the thermal-ion flux-tube contents from Voyager plasma-science data at Jupiter and Saturn are also very nearly equal, and again similar to those at earth, differing by less than a factor of 3 at the respective L values. The near equality of energetic and thermal ion flux-tube contents at earth, Jupiter, and Saturn suggests the possibility of strong physical analogies in the interaction between plasma and energetic particles at the plasma tori/plasma sheets of Jupiter and Saturn and the plasmasphere of earth.

  18. Laryngeal cancer: quantitative and qualitative assessment of research output, 1945-2010.

    PubMed

    Glynn, Ronan W; Lowery, Aoife J; Scutaru, Cristian; O'Dwyer, Tadhg; Keogh, Ivan

    2012-09-01

    To provide an in-depth evaluation of research yield in laryngeal cancer from 1945 to 2010, using large-scale data analysis, employment of bibliometric indicators of production and quality, and density-equalizing mapping. Bibliometric analysis incorporating the Web of Science database. The search strategy employed was as follows: "TS = ((Laryngeal Neoplasm$) OR (Larynx Neoplasm$) OR (Larynx Cancer$) OR (Laryngeal Cancer$))." Author and journal data and cooperation networks were computed following analysis of combinations of countries and institutions that registered cooperation during the study period. Mapping was performed as described by Groneberg-Kloft in 2004. A total of 8,658 items relating to laryngeal cancer were published over the study period, accounting for 139,700 citations. The United States was the most prolific country, accounting for 28.83% (n = 2,496) of total output. Other prolific nations included Italy (n = 794) and Germany (n = 792). There were 973 items published as a consequence of international cooperation; this practice increased steadily over time and accounted for 15.58% (88 of 565) of output in 2010. There were 1,073 different journals publishing articles on laryngeal cancer, although the top 20 (1.8%) most prolific titles were together responsible for more than 43% of the total output; these were led by Laryngoscope (n = 368) and Head and Neck, Journal of the Scientific Specialties (n = 364). A total of 24,682 authors contributed to the literature on laryngeal cancer; the leading author by output was Alfio Ferlito (n = 120); Carlo La Vecchia recorded the highest h-index (h = 32). This work represents the first attempt to provide quantitative and qualitative analysis of laryngeal cancer research output, whilst in tandem identifying the key bibliometric benchmarks to which those involved in the production of that output might aspire. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.
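
    Of the bibliometric indicators mentioned, the h-index is the simplest to make concrete: an author has index h if h of their papers have at least h citations each. A small sketch with illustrative citation counts:

    ```python
    def h_index(citations):
        """Largest h such that at least h papers have >= h citations each."""
        cites = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(cites, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    # Illustrative citation counts for one author's papers.
    print(h_index([120, 80, 45, 33, 33, 20, 9, 4, 1, 0]))   # -> 7
    ```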

  19. Mapping the benefit-cost ratios of interventions against bovine trypanosomosis in Eastern Africa.

    PubMed

    Shaw, A P M; Wint, G R W; Cecchi, G; Torr, S J; Mattioli, R C; Robinson, T P

    2015-12-01

    This study builds upon earlier work mapping the potential benefits from bovine trypanosomosis control and analysing the costs of different approaches. Updated costs were derived for five intervention techniques: trypanocides, targets, insecticide-treated cattle, aerial spraying and the release of sterile males. Two strategies were considered: continuous control and elimination. For mapping the costs, cattle densities, environmental constraints, and the presence of savannah or riverine tsetse species were taken into account. These were combined with maps of potential benefits to produce maps of benefit-cost ratios. The results illustrate a diverse picture, and they clearly indicate that no single technique or strategy is universally profitable. For control using trypanocide prophylaxis, returns are modest, even without accounting for the risk of drug resistance but, in areas of low cattle densities, this is the only approach that yields a positive return. Where cattle densities are sufficient to support it, the use of insecticide-treated cattle stands out as the most consistently profitable technique, widely achieving benefit-cost ratios above 5. In parts of the high-potential areas such as the mixed farming, high-oxen-use zones of western Ethiopia, the fertile crescent north of Lake Victoria and the dairy production areas in western and central Kenya, all tsetse control strategies achieve benefit-cost ratios from 2 to over 15, and for elimination strategies, ratios from 5 to over 20. By contrast, in some areas, notably where cattle densities are below 20per km(2), the costs of interventions against tsetse match or even outweigh the benefits, especially for control scenarios using aerial spraying or the deployment of targets where both savannah and riverine flies are present. If the burden of human African trypanosomosis were factored in, the benefit-cost ratios of some of the low-return areas would be considerably increased. Comparatively, elimination strategies give rise to higher benefit-cost ratios than do those for continuous control. However, the costs calculated for elimination assume problem-free, large scale operations, and they rest on the outputs of entomological models that are difficult to validate in the field. Experience indicates that the conditions underlying successful and sustained elimination campaigns are seldom met. By choosing the most appropriate thresholds for benefit-cost ratios, decision-makers and planners can use the maps to define strategies, assist in prioritising areas for intervention, and help choose among intervention techniques and approaches. The methodology would have wider applicability in analysing other disease constraints with a strong spatial component. Copyright © 2015 A.P.M Shaw. Published by Elsevier B.V. All rights reserved.
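
    The mapping step itself reduces to an element-wise ratio of a benefits raster and a costs raster, with cells masked out where an intervention is not applicable. A minimal numpy sketch with synthetic rasters follows; the grids and units are placeholders, and the 20 head per km² cut-off simply echoes the cattle-density threshold mentioned above.

    ```python
    import numpy as np

    def benefit_cost_ratio_map(benefits, costs, cattle_density, min_cattle=20.0):
        """Element-wise benefit-cost ratio; cells with too few cattle or zero cost become NaN."""
        bcr = np.full(benefits.shape, np.nan)
        valid = (cattle_density >= min_cattle) & (costs > 0)
        bcr[valid] = benefits[valid] / costs[valid]
        return bcr

    # Synthetic rasters (benefits and costs in USD/km^2 over the planning horizon, cattle in head/km^2).
    rng = np.random.default_rng(7)
    shape = (40, 60)
    benefits = rng.gamma(2.0, 2000.0, shape)
    costs = rng.gamma(2.0, 600.0, shape)
    cattle = rng.gamma(2.0, 15.0, shape)

    bcr = benefit_cost_ratio_map(benefits, costs, cattle)
    valid_bcr = bcr[np.isfinite(bcr)]
    print("cells with BCR >= 5:", int((valid_bcr >= 5).sum()))
    ```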

  20. A high-density, SNP-based consensus map of tetraploid wheat as a bridge to integrate durum and bread wheat genomics and breeding

    USDA-ARS?s Scientific Manuscript database

    Consensus linkage maps are important tools in crop genomics. We have assembled a high-density tetraploid wheat consensus map by integrating 13 datasets from independent biparental populations involving durum wheat cultivars (Triticum turgidum ssp. durum), cultivated emmer (T. turgidum ssp. dicoccum...

  1. Genome survey and high-density genetic map construction provide genomic and genetic resources for the Pacific White Shrimp Litopenaeus vannamei

    PubMed Central

    Yu, Yang; Zhang, Xiaojun; Yuan, Jianbo; Li, Fuhua; Chen, Xiaohan; Zhao, Yongzhen; Huang, Long; Zheng, Hongkun; Xiang, Jianhai

    2015-01-01

    The Pacific white shrimp Litopenaeus vannamei is the dominant crustacean species in global seafood mariculture. Understanding the genome and genetic architecture is useful for deciphering complex traits and accelerating the breeding program in shrimp. In this study, a genome survey was conducted and a high-density linkage map was constructed using a next-generation sequencing approach. The genome survey was used to identify preliminary genome characteristics and to generate a rough reference for linkage map construction. De novo SNP discovery resulted in 25,140 polymorphic markers. A total of 6,359 high-quality markers were selected for linkage map construction based on marker coverage among individuals and read depths. For the linkage map, a total of 6,146 markers spanning 4,271.43 cM were mapped to 44 sex-averaged linkage groups, with an average marker distance of 0.7 cM. An integration analysis linked 5,885 genome scaffolds and 1,504 BAC clones to the linkage map. Based on the high-density linkage map, several QTLs for body weight and body length were detected. This high-density genetic linkage map reveals basic genomic architecture and will be useful for comparative genomics research, genome assembly and genetic improvement of L. vannamei and other penaeid shrimp species. PMID:26503227

  2. Nimbus-7 Total Ozone Mapping Spectrometer (TOMS) data products user's guide

    NASA Technical Reports Server (NTRS)

    Mcpeters, Richard D.; Krueger, Arlin J.; Bhartia, P. K.; Herman, Jay R.; Oaks, Arnold; Ahmad, Ziuddin; Cebula, Richard P.; Schlesinger, Barry M.; Swissler, Tom; Taylor, Steven L.

    1993-01-01

    Two tape products from the Total Ozone Mapping Spectrometer (TOMS) aboard the Nimbus-7 have been archived at the National Space Science Data Center. The instrument measures backscattered Earth radiance and incoming solar irradiance; their ratio -- the albedo -- is used in ozone retrievals. In-flight measurements are used to monitor changes in the instrument sensitivity. The algorithm to retrieve total column ozone compares the observed ratios of albedos at pairs of wavelengths with pair ratios calculated for different ozone values, solar zenith angles, and optical paths. The initial error in the absolute scale for TOMS total ozone is 3 percent, the one standard-deviation random error is 2 percent, and the drift is +/- 1.5 percent over 14.5 years. The High Density TOMS (HDTOMS) tape contains the measured albedos, the derived total ozone amount, reflectivity, and cloud-height information for each scan position. It also contains an index of SO2 contamination for each position. The Gridded TOMS (GRIDTOMS) tape contains daily total ozone and reflectivity in roughly equal area grids (110 km in latitude by about 100-150 km in longitude). Detailed descriptions of the tape structure and record formats are provided.
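
    The pair-ratio retrieval described above can be caricatured as a one-dimensional table lookup: pre-compute the expected wavelength-pair albedo ratio as a function of total ozone for a given viewing geometry, then invert the observed ratio by interpolation. In the toy sketch below the forward table is a made-up monotonic function standing in for the radiative-transfer calculation.

    ```python
    import numpy as np

    # Hypothetical forward table: pair ratio vs. total column ozone (Dobson units)
    # for one solar zenith angle / optical path; in reality this comes from radiative transfer.
    ozone_grid = np.linspace(100.0, 600.0, 51)
    pair_ratio_table = np.exp(-0.004 * ozone_grid)          # monotonically decreasing stand-in

    def retrieve_ozone(observed_ratio):
        """Invert the (monotonic) forward table by linear interpolation."""
        # np.interp needs increasing x, so flip the decreasing table.
        return np.interp(observed_ratio, pair_ratio_table[::-1], ozone_grid[::-1])

    true_ozone = 325.0
    observed = np.exp(-0.004 * true_ozone) * (1 + 0.002)    # forward model plus a small measurement error
    print(f"retrieved ozone: {retrieve_ozone(observed):.1f} DU (true {true_ozone} DU)")
    ```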

  3. AMOBH: Adaptive Multiobjective Black Hole Algorithm.

    PubMed

    Wu, Chong; Wu, Tao; Fu, Kaiyuan; Zhu, Yuan; Li, Yongbo; He, Wangyong; Tang, Shengwen

    2017-01-01

    This paper proposes a new multiobjective evolutionary algorithm based on the black hole algorithm with a new individual density assessment (cell density), called the "adaptive multiobjective black hole algorithm" (AMOBH). Cell density has low computational complexity and maintains a good balance between convergence and diversity of the Pareto front. The framework of AMOBH can be divided into three steps. First, the Pareto front is mapped to a new objective space called the parallel cell coordinate system. Then, to adjust the evolutionary strategies adaptively, Shannon entropy is employed to estimate the evolution status. Finally, the cell density is combined with a dominance strength assessment called cell dominance to evaluate the fitness of solutions. Compared with the state-of-the-art methods SPEA-II, PESA-II, NSGA-II, and MOEA/D, experimental results show that AMOBH performs well in terms of convergence rate, population diversity, population convergence and obtaining subpopulations in different Pareto regions, and, relative to the latter method, time complexity in most cases.
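
    The paper's parallel cell coordinate construction and cell-dominance measure are not reproduced here. The sketch below only illustrates, under the simplifying assumption of a uniform grid over the normalized objective space, how a cell-occupancy count and its Shannon entropy can summarize how evenly a population covers the front; function and parameter names are illustrative.

    ```python
    import numpy as np

    def cell_entropy(objectives, n_cells=10):
        """Grid the normalized objective space, count solutions per cell, and
        return the Shannon entropy of the occupancy distribution.
        High entropy ~ solutions spread over many cells (good diversity)."""
        objs = np.asarray(objectives, dtype=float)
        lo, hi = objs.min(axis=0), objs.max(axis=0)
        span = np.where(hi > lo, hi - lo, 1.0)          # avoid division by zero
        cells = np.floor((objs - lo) / span * (n_cells - 1e-9)).astype(int)
        _, counts = np.unique(cells, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    # Toy two-objective population
    rng = np.random.default_rng(0)
    front = rng.random((50, 2))
    print(cell_entropy(front))
    ```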

  4. SNP discovery by high-throughput sequencing in soybean

    PubMed Central

    2010-01-01

    Background With the advance of new massively parallel genotyping technologies, quantitative trait loci (QTL) fine mapping and map-based cloning become more achievable in identifying genes for important and complex traits. Development of high-density genetic markers in the QTL regions of specific mapping populations is essential for fine-mapping and map-based cloning of economically important genes. Single nucleotide polymorphisms (SNPs) are the most abundant form of genetic variation existing between any diverse genotypes that are usually used for QTL mapping studies. The massively parallel sequencing technologies (Roche GS/454, Illumina GA/Solexa, and ABI/SOLiD) have been widely applied to identify genome-wide sequence variations. However, it still remains unclear whether sequence data at a low sequencing depth are enough to detect the variations existing in any QTL regions of interest in a crop genome, and how to prepare sequencing samples for a complex genome such as soybean. Therefore, with the aims of identifying SNP markers in a cost-effective way for fine-mapping several QTL regions, and testing the validation rate of the putative SNPs predicted with Solexa short sequence reads at a low sequencing depth, we evaluated a pooled DNA fragment reduced representation library and SNP detection methods applied to short read sequences generated by Solexa high-throughput sequencing technology. Results A total of 39,022 putative SNPs were identified by the Illumina/Solexa sequencing system using a reduced representation DNA library of two parental lines of a mapping population. The validation rates of these putative SNPs predicted with low and high stringency were 72% and 85%, respectively. One hundred sixty-four SNP markers resulted from the validation of putative SNPs and were selectively chosen to target a known QTL, thereby increasing the marker density of the targeted region to one marker per 42 kbp. Conclusions We have demonstrated how to quickly identify large numbers of SNPs for fine mapping of QTL regions by applying massively parallel sequencing combined with genome complexity reduction techniques. This SNP discovery approach is more efficient for targeting multiple QTL regions in the same genetic population and can be applied to other crops. PMID:20701770

  5. Optimization of Empirical Force Fields by Parameter Space Mapping: A Single-Step Perturbation Approach.

    PubMed

    Stroet, Martin; Koziara, Katarzyna B; Malde, Alpeshkumar K; Mark, Alan E

    2017-12-12

    A general method for parametrizing atomic interaction functions is presented. The method is based on an analysis of surfaces corresponding to the difference between calculated and target data as a function of alternative combinations of parameters (parameter space mapping). The consideration of surfaces in parameter space as opposed to local values or gradients leads to a better understanding of the relationships between the parameters being optimized and a given set of target data. This in turn enables a range of target data from multiple molecules to be combined in a robust manner and the optimal region of parameter space to be trivially identified. The effectiveness of the approach is illustrated by using the method to refine the chlorine 6-12 Lennard-Jones parameters against experimental solvation free enthalpies in water and hexane as well as the density and heat of vaporization of the liquid at atmospheric pressure for a set of 10 aromatic-chloro compounds simultaneously. Single-step perturbation is used to efficiently calculate solvation free enthalpies for a wide range of parameter combinations. The capacity of this approach to parametrize accurate and transferable force fields is discussed.
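
    A schematic of the parameter-space-mapping idea, not the authors' implementation: evaluate an error surface for each target property over a grid of two Lennard-Jones parameters and combine the surfaces to locate the common low-error region. The two "calculated property" functions below are invented analytic stand-ins for what would in practice be simulation or single-step-perturbation estimates, and all target values are illustrative.

    ```python
    import numpy as np

    # Hypothetical, analytic stand-ins for "calculated property as a function of
    # (sigma, epsilon)"; real applications would run simulations here.
    def calc_hydration_fe(sigma, eps):
        return -8.0 + 300.0 * (sigma - 0.34) - 12.0 * eps      # kJ/mol, toy model

    def calc_density(sigma, eps):
        return 1100.0 - 9000.0 * (sigma - 0.34) + 150.0 * eps  # kg/m^3, toy model

    targets = {"dG_hyd": (-9.0, calc_hydration_fe), "rho": (1106.0, calc_density)}

    sigmas = np.linspace(0.30, 0.38, 81)     # nm
    epsilons = np.linspace(0.5, 1.5, 101)    # kJ/mol
    S, E = np.meshgrid(sigmas, epsilons, indexing="ij")

    # One error surface per target property, combined as a normalized sum.
    combined = np.zeros_like(S)
    for name, (target, model) in targets.items():
        err = np.abs(model(S, E) - target)
        combined += err / err.max()

    i, j = np.unravel_index(np.argmin(combined), combined.shape)
    print(f"lowest combined error near sigma={sigmas[i]:.3f} nm, eps={epsilons[j]:.3f} kJ/mol")
    ```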

  6. A Hybrid Approach to Composite Damage and Failure Analysis Combining Synergistic Damage Mechanics and Peridynamics

    DTIC Science & Technology

    2017-06-30

    along the intermetallic component or at the interface between the two components of the composite. The availability of microscale experimental data in...obtained with the PD model; (c) map of strain energy density; (d) the new quasi-damage index is a predictor of failure. As in the case of FRCs, one...which points are most likely to fail, before actual failure happens. The "quasi-damage index", shown in the formula below, is a point-wise measure

  7. Strain induced adatom correlations

    NASA Astrophysics Data System (ADS)

    Kappus, Wolfgang

    2012-12-01

    A Born-Green-Yvon type model for adatom density correlations is combined with a model for adatom interactions mediated by the strain in elastic anisotropic substrates. The resulting nonlinear integral equation is solved numerically for coverages from zero to a limit given by stability constraints. W, Nb, Ta and Au surfaces are taken as examples to show the effects of different elastic anisotropy regions. Results of the calculation are shown by appropriate plots and discussed. A mapping to superstructures is tried. Corresponding adatom configurations from Monte Carlo simulations are shown.

  8. Interstitial diffuse radiance spectroscopy of gold nanocages and nanorods in bulk muscle tissues

    PubMed Central

    Grabtchak, Serge; Montgomery, Logan G; Pang, Bo; Wang, Yi; Zhang, Chao; Li, Zhiyuan; Xia, Younan; Whelan, William M

    2015-01-01

    Radiance spectroscopy was applied to the interstitial detection of localized inclusions containing Au nanocages or nanorods with various concentrations embedded in porcine muscle phantoms. The radiance was quantified using a perturbation approach, which enabled the separation of contributions from the porcine phantom and the localized inclusion, with the inclusion serving as a perturbation probe of photon distributions in the turbid medium. Positioning the inclusion at various places in the phantom allowed for tracking of photons that originated from a light source, passed through the inclusion’s location, and reached a detector. The inclusions with high extinction coefficients were able to absorb nearly all photons in the range of 650–900 nm, leading to a spectrally flat radiance signal. This signal could be converted to the relative density of photons incident on the inclusion. Finally, the experimentally measured quantities were expressed via the relative perturbation and arranged into the classical Beer–Lambert law that allowed one to extract the extinction coefficients of various types of Au nanoparticles in both the transmission and back reflection geometries. It was shown that the spatial variation of perturbation could be described as 1/r dependence, where r is the distance between the inclusion and the detector. Due to a larger absorption cross section, Au nanocages produced greater perturbations than Au nanorods of equal particle concentration, indicating a better suitability of Au nanocages as contrast agents for optical measurements in turbid media. Individual measurements from different inclusions were combined into detectability maps. PMID:25709450
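
    A minimal numerical illustration of the Beer-Lambert rearrangement mentioned above, assuming a simple transmission geometry with invented numbers; the study's actual quantities come from radiance-perturbation measurements rather than a direct transmission experiment.

    ```python
    import numpy as np

    def extinction_coefficient(relative_transmission, path_length_cm, concentration_M):
        """Classical Beer-Lambert: A = -log10(T) = eps * c * L  ->  eps = A / (c * L)."""
        absorbance = -np.log10(relative_transmission)
        return absorbance / (concentration_M * path_length_cm)

    # Toy numbers: 5% of photons transmitted through a 0.5 cm inclusion at 1e-10 M.
    print(extinction_coefficient(0.05, 0.5, 1e-10))   # ~2.6e10 M^-1 cm^-1
    ```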

  9. Interstitial diffuse radiance spectroscopy of gold nanocages and nanorods in bulk muscle tissues.

    PubMed

    Grabtchak, Serge; Montgomery, Logan G; Pang, Bo; Wang, Yi; Zhang, Chao; Li, Zhiyuan; Xia, Younan; Whelan, William M

    2015-01-01

    Radiance spectroscopy was applied to the interstitial detection of localized inclusions containing Au nanocages or nanorods with various concentrations embedded in porcine muscle phantoms. The radiance was quantified using a perturbation approach, which enabled the separation of contributions from the porcine phantom and the localized inclusion, with the inclusion serving as a perturbation probe of photon distributions in the turbid medium. Positioning the inclusion at various places in the phantom allowed for tracking of photons that originated from a light source, passed through the inclusion's location, and reached a detector. The inclusions with high extinction coefficients were able to absorb nearly all photons in the range of 650-900 nm, leading to a spectrally flat radiance signal. This signal could be converted to the relative density of photons incident on the inclusion. Finally, the experimentally measured quantities were expressed via the relative perturbation and arranged into the classical Beer-Lambert law that allowed one to extract the extinction coefficients of various types of Au nanoparticles in both the transmission and back reflection geometries. It was shown that the spatial variation of perturbation could be described as 1/r dependence, where r is the distance between the inclusion and the detector. Due to a larger absorption cross section, Au nanocages produced greater perturbations than Au nanorods of equal particle concentration, indicating a better suitability of Au nanocages as contrast agents for optical measurements in turbid media. Individual measurements from different inclusions were combined into detectability maps.

  10. Ensemble Learning of QTL Models Improves Prediction of Complex Traits

    PubMed Central

    Bian, Yang; Holland, James B.

    2015-01-01

    Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability but are less useful for genetic prediction because of the difficulty in including the effects of numerous small effect loci without overfitting. Tight linkage between markers introduces near collinearity among marker genotypes, complicating the detection of QTL and estimation of QTL effects in linkage mapping, and this problem is exacerbated by very high density linkage maps. Here we developed a thinning and aggregating (TAGGING) method as a new ensemble learning approach to QTL mapping. TAGGING reduces collinearity problems by thinning dense linkage maps, maintains aspects of marker selection that characterize standard QTL mapping, and, by ensembling, incorporates information from many more marker-trait associations than traditional QTL mapping. The objective of TAGGING was to improve prediction power compared with QTL mapping while also providing more specific insights into genetic architecture than genome-wide prediction models. TAGGING was compared with standard QTL mapping using cross validation of empirical data from the maize (Zea mays L.) nested association mapping population. TAGGING-assisted QTL mapping substantially improved prediction ability for both biparental and multifamily populations by reducing both the variance and bias in prediction. Furthermore, an ensemble model combining predictions from TAGGING-assisted QTL and infinitesimal models improved prediction abilities over the component models, indicating some complementarity between model assumptions and suggesting that some trait genetic architectures involve a mixture of a few major QTL and polygenic effects. PMID:26276383
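
    A schematic sketch of the thinning-and-aggregating idea on synthetic data, not the authors' implementation: the dense marker map is thinned at several offsets, a simple correlation-based marker selection and least-squares fit stands in for QTL model building on each thinned map, and the out-of-sample predictions are averaged. All sizes, effect loci and helper names are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_lines, n_markers = 300, 400
    X = rng.integers(0, 2, size=(n_lines, n_markers)).astype(float)     # toy marker matrix
    beta = np.zeros(n_markers); beta[[50, 180, 300]] = [1.0, 0.7, 0.5]  # a few true QTL
    y = X @ beta + rng.normal(0, 1.0, n_lines)                          # toy phenotype

    train, test = np.arange(0, 200), np.arange(200, 300)

    def fit_thinned(offset, step=10, keep=5):
        """Thin the map (every `step`-th marker starting at `offset`), keep the
        `keep` markers most correlated with the phenotype, fit least squares."""
        cols = np.arange(offset, n_markers, step)
        r = np.abs([np.corrcoef(X[train, c], y[train])[0, 1] for c in cols])
        sel = cols[np.argsort(r)[-keep:]]
        Xs = np.column_stack([np.ones(len(train)), X[np.ix_(train, sel)]])
        coef, *_ = np.linalg.lstsq(Xs, y[train], rcond=None)
        Xt = np.column_stack([np.ones(len(test)), X[np.ix_(test, sel)]])
        return Xt @ coef

    # Ensemble: average predictions over all thinning offsets.
    preds = np.mean([fit_thinned(o) for o in range(10)], axis=0)
    print("ensemble prediction correlation:",
          np.corrcoef(preds, y[test])[0, 1].round(2))
    ```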

  11. Modelling the distribution of chickens, ducks, and geese in China

    USGS Publications Warehouse

    Prosser, Diann J.; Wu, Junxi; Ellis, Erle C.; Gale, Fred; Van Boeckel, Thomas P.; Wint, William; Robinson, Tim; Xiao, Xiangming; Gilbert, Marius

    2011-01-01

    Global concerns over the emergence of zoonotic pandemics emphasize the need for high-resolution population distribution mapping and spatial modelling. Ongoing efforts to model disease risk in China have been hindered by a lack of available species level distribution maps for poultry. The goal of this study was to develop 1 km resolution population density models for China's chickens, ducks, and geese. We used an information theoretic approach to predict poultry densities based on statistical relationships between poultry census data and high-resolution agro-ecological predictor variables. Model predictions were validated by comparing goodness of fit measures (root mean square error and correlation coefficient) for observed and predicted values for 1/4 of the sample data which were not used for model training. Final output included mean and coefficient of variation maps for each species. We tested the quality of models produced using three predictor datasets and 4 regional stratification methods. For predictor variables, a combination of traditional predictors for livestock mapping and land use predictors produced the best goodness of fit scores. Comparison of regional stratifications indicated that for chickens and ducks, a stratification based on livestock production systems produced the best results; for geese, an agro-ecological stratification produced best results. However, for all species, each method of regional stratification produced significantly better goodness of fit scores than the global model. Here we provide descriptive methods, analytical comparisons, and model output for China's first high resolution, species level poultry distribution maps. Output will be made available to the scientific and public community for use in a wide range of applications from epidemiological studies to livestock policy and management initiatives.
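
    The hold-out validation step described above (goodness of fit on the quarter of the samples withheld from training) amounts to the following few lines; the observed and predicted densities here are synthetic placeholders, not the study's poultry data.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    observed = rng.gamma(2.0, 500.0, 1000)                    # toy census densities (birds/km^2)
    predicted = observed * rng.normal(1.0, 0.3, 1000) + 50.0  # toy model output

    # Hold out 1/4 of the samples for validation, as in the study design.
    idx = rng.permutation(observed.size)
    val = idx[: observed.size // 4]

    rmse = np.sqrt(np.mean((predicted[val] - observed[val]) ** 2))
    corr = np.corrcoef(predicted[val], observed[val])[0, 1]
    print(f"validation RMSE = {rmse:.0f}, correlation = {corr:.2f}")
    ```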

  12. The Flint Food Store Survey: combining spatial analysis with a modified Nutrition Environment Measures Survey in Stores (NEMS-S) to measure the community and consumer nutrition environments.

    PubMed

    Shaver, Erika R; Sadler, Richard C; Hill, Alex B; Bell, Kendall; Ray, Myah; Choy-Shin, Jennifer; Lerner, Joy; Soldner, Teresa; Jones, Andrew D

    2018-06-01

    The goal of the present study was to use a methodology that accurately and reliably describes the availability, price and quality of healthy foods at both the store and community levels using the Nutrition Environment Measures Survey in Stores (NEMS-S), and to propose a spatial methodology for integrating these store and community data into measures for defining objective food access. Two hundred and sixty-five retail food stores in and within 2 miles (3·2 km) of Flint, Michigan, USA, were mapped using ArcGIS mapping software. A survey based on the validated NEMS-S was conducted at each retail food store. Scores were assigned to each store based on a modified version of the NEMS-S scoring system and linked to the mapped locations of stores. Neighbourhood characteristics (race and socio-economic distress) were appended to each store. Finally, spatial and kernel density analyses were run on the mapped store scores to obtain healthy food density metrics. Regression analyses revealed that neighbourhoods with higher socio-economic distress had significantly lower dairy sub-scores compared with their lower-distress counterparts (β coefficient=-1·3; P=0·04). Additionally, supermarkets were present only in neighbourhoods with <60 % African-American population and low socio-economic distress. Two areas in Flint had an overall NEMS-S score of 0. By identifying areas with poor access to healthy foods via a validated metric, this research can be used to help local government and organizations target interventions to high-need areas. Furthermore, the methodology used for the survey and the mapping exercise can be replicated in other cities to provide comparable results.

  13. Modelling the distribution of chickens, ducks, and geese in China

    PubMed Central

    Prosser, Diann J.; Wu, Junxi; Ellis, Erle C.; Gale, Fred; Van Boeckel, Thomas P.; Wint, William; Robinson, Tim; Xiao, Xiangming; Gilbert, Marius

    2011-01-01

    Global concerns over the emergence of zoonotic pandemics emphasize the need for high-resolution population distribution mapping and spatial modelling. Ongoing efforts to model disease risk in China have been hindered by a lack of available species level distribution maps for poultry. The goal of this study was to develop 1 km resolution population density models for China’s chickens, ducks, and geese. We used an information theoretic approach to predict poultry densities based on statistical relationships between poultry census data and high-resolution agro-ecological predictor variables. Model predictions were validated by comparing goodness of fit measures (root mean square error and correlation coefficient) for observed and predicted values for ¼ of the sample data which was not used for model training. Final output included mean and coefficient of variation maps for each species. We tested the quality of models produced using three predictor datasets and 4 regional stratification methods. For predictor variables, a combination of traditional predictors for livestock mapping and land use predictors produced the best goodness of fit scores. Comparison of regional stratifications indicated that for chickens and ducks, a stratification based on livestock production systems produced the best results; for geese, an agro-ecological stratification produced best results. However, for all species, each method of regional stratification produced significantly better goodness of fit scores than the global model. Here we provide descriptive methods, analytical comparisons, and model output for China’s first high resolution, species level poultry distribution maps. Output will be made available to the scientific and public community for use in a wide range of applications from epidemiological studies to livestock policy and management initiatives. PMID:21765567

  14. Crystal step edges can trap electrons on the surfaces of n-type organic semiconductors.

    PubMed

    He, Tao; Wu, Yanfei; D'Avino, Gabriele; Schmidt, Elliot; Stolte, Matthias; Cornil, Jérôme; Beljonne, David; Ruden, P Paul; Würthner, Frank; Frisbie, C Daniel

    2018-05-30

    Understanding relationships between microstructure and electrical transport is an important goal for the materials science of organic semiconductors. Combining high-resolution surface potential mapping by scanning Kelvin probe microscopy (SKPM) with systematic field effect transport measurements, we show that step edges can trap electrons on the surfaces of single crystal organic semiconductors. n-type organic semiconductor crystals exhibiting positive step edge surface potentials display threshold voltages that increase and carrier mobilities that decrease with increasing step density, characteristic of trapping, whereas crystals that do not have positive step edge surface potentials do not have strongly step density dependent transport. A device model and microelectrostatics calculations suggest that trapping can be intrinsic to step edges for crystals of molecules with polar substituents. The results provide a unique example of a specific microstructure-charge trapping relationship and highlight the utility of surface potential imaging in combination with transport measurements as a productive strategy for uncovering microscopic structure-property relationships in organic semiconductors.

  15. Combination of High-density Microelectrode Array and Patch Clamp Recordings to Enable Studies of Multisynaptic Integration.

    PubMed

    Jäckel, David; Bakkum, Douglas J; Russell, Thomas L; Müller, Jan; Radivojevic, Milos; Frey, Urs; Franke, Felix; Hierlemann, Andreas

    2017-04-20

    We present a novel, all-electric approach to record and to precisely control the activity of tens of individual presynaptic neurons. The method allows for parallel mapping of the efficacy of multiple synapses and of the resulting dynamics of postsynaptic neurons in a cortical culture. For the measurements, we combine an extracellular high-density microelectrode array, featuring 11,000 electrodes for extracellular recording and stimulation, with intracellular patch-clamp recording. We are able to identify the contributions of individual presynaptic neurons - including inhibitory and excitatory synaptic inputs - to postsynaptic potentials, which enables us to study dendritic integration. Since the electrical stimuli can be controlled at microsecond resolution, our method makes it possible to evoke action potentials at tens of presynaptic cells in precisely orchestrated sequences of high reliability and minimum jitter. We demonstrate the potential of this method by evoking short- and long-term synaptic plasticity through manipulation of multiple synaptic inputs to a specific neuron.

  16. Phase imaging using highly coherent X-rays: radiography, tomography, diffraction topography.

    PubMed

    Baruchel, J; Cloetens, P; Härtwig, J; Ludwig, W; Mancini, L; Pernot, P; Schlenker, M

    2000-05-01

    Several hard X-ray imaging techniques greatly benefit from the coherence of the beams delivered by the modern synchrotron radiation sources. This is illustrated with examples recorded on the 'long' (145 m) ID19 'imaging' beamline of the ESRF. Phase imaging is directly related to the small angular size of the source as seen from one point of the sample ('effective divergence' of approximately microradians). When using the 'propagation' technique, phase radiography and tomography are instrumentally very simple. They are often used in the 'edge detection' regime, where the jumps of density are clearly observed. The in situ damage assessment of micro-heterogeneous materials is one example of the many applications. Recently a more quantitative approach has been developed, which provides a three-dimensional density mapping of the sample ('holotomography'). The combination of diffraction topography and phase-contrast imaging constitutes a powerful tool. The observation of holes of discrete sizes in quasicrystals, and the investigation of poled ferroelectric materials, result from this combination.

  17. Nonlinear data assimilation using synchronization in a particle filter

    NASA Astrophysics Data System (ADS)

    Rodrigues-Pinheiro, Flavia; Van Leeuwen, Peter Jan

    2017-04-01

    Current data assimilation methods still face problems in strongly nonlinear cases. A promising solution is a particle filter, which provides a representation of the model probability density function by a discrete set of particles. However, the basic particle filter does not work in high-dimensional cases. The performance can be improved by exploiting the freedom in the choice of proposal density. A potential choice of proposal density might come from synchronisation theory, in which one tries to synchronise the model with the true evolution of a system using one-way coupling via the observations. In practice, an extra term is added to the model equations that damps the growth of instabilities on the synchronisation manifold. When only part of the system is observed, synchronisation can be achieved via a time embedding, similar to smoothers in data assimilation. In this work, two new ideas are tested. First, ensemble-based time embedding, similar to an ensemble smoother or 4DEnsVar, is used on each particle, avoiding the need for tangent-linear models and adjoint calculations. Tests were performed using the Lorenz96 model for 20-, 100- and 1000-dimensional systems. Results show state-averaged synchronisation errors smaller than observation errors even in partly observed systems, suggesting that the scheme is a promising tool to steer model states to the truth. Next, we combine these efficient particles using an extension of the Implicit Equal-Weights Particle Filter, a particle filter that ensures equal weights for all particles, avoiding filter degeneracy by construction. Promising results will be shown on low- and high-dimensional Lorenz96 models, and the pros and cons of these new ideas will be discussed.
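
    A minimal sketch of the one-way synchronisation (nudging) term on the Lorenz96 model, assuming a fixed relaxation strength and observations of every second variable; the ensemble time embedding and the Implicit Equal-Weights Particle Filter of the abstract are not reproduced, and the chosen strength, noise level and integration settings are illustrative only.

    ```python
    import numpy as np

    def lorenz96(x, forcing=8.0):
        """Standard Lorenz-96 tendencies."""
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

    def rk4(x, dt, nudge=0.0):
        """One RK4 step of dx/dt = lorenz96(x) + nudge (nudge held fixed over the step)."""
        f = lambda z: lorenz96(z) + nudge
        k1 = f(x); k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
        return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    rng = np.random.default_rng(0)
    n, dt, obs_err, strength = 40, 0.01, 0.5, 8.0
    observed = np.arange(0, n, 2)                # only every second variable observed
    truth = rng.normal(0, 1, n)
    model = rng.normal(0, 1, n)                  # one "particle", started from elsewhere

    for _ in range(5000):
        truth = rk4(truth, dt)
        obs = truth[observed] + rng.normal(0, obs_err, observed.size)
        nudge = np.zeros(n)
        nudge[observed] = strength * (obs - model[observed])   # one-way coupling term
        model = rk4(model, dt, nudge)

    print("state-averaged absolute error:", np.abs(model - truth).mean().round(2),
          "(observation error sigma:", obs_err, ")")
    ```

    Whether the final error falls below the observation error depends on the coupling strength and on how densely the system is observed, which is exactly the gap the time-embedding idea is meant to close.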

  18. PRIMORDIAL GRAVITATIONAL WAVE DETECTABILITY WITH DEEP SMALL-SKY COSMIC MICROWAVE BACKGROUND EXPERIMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farhang, M.; Bond, J. R.; Netterfield, C. B.

    2013-07-01

    We use the Bayesian estimation on direct T-Q-U cosmic microwave background (CMB) polarization maps to forecast errors on the tensor-to-scalar power ratio r, and hence on primordial gravitational waves, as a function of sky coverage f_sky. This map-based likelihood filters the information in the pixel-pixel space into the optimal combinations needed for r detection for cut skies, providing enhanced information over a first-step linear separation into a combination of E, B, and mixed modes, and ignoring the latter. With current computational power and for typical resolutions appropriate for r detection, the large matrix inversions required are accurate and fast. Our simulations explore two classes of experiments, with differing bolometric detector numbers, sensitivities, and observational strategies. One is motivated by a long duration balloon experiment like Spider, with pixel noise ∝ √f_sky for a specified observing period. This analysis also applies to ground-based array experiments. We find that, in the absence of systematic effects and foregrounds, an experiment with Spider-like noise concentrating on f_sky ≈ 0.02-0.2 could place a 2σ_r ≈ 0.014 boundary (≈95% confidence level), which rises to 0.02 with an ℓ-dependent foreground residual left over from an assumed efficient component separation. We contrast this with a Planck-like fixed instrumental noise as f_sky varies, which gives a Galaxy-masked (f_sky = 0.75) 2σ_r ≈ 0.015, rising to ≈0.05 with the foreground residuals. Using as the figure of merit the (marginalized) one-dimensional Shannon entropy of r, taken relative to the first 2003 WMAP CMB-only constraint, gives -2.7 bits from the 2012 WMAP9+ACT+SPT+LSS data, and forecasts of -6 bits from Spider (+ Planck); this compares with up to -11 bits for CMBPol, COrE, and PIXIE post-Planck satellites and -13 bits for a perfectly noiseless cosmic variance limited experiment. We thus confirm the wisdom of the current strategy for r detection of deeply probed patches covering the f_sky minimum-error trough with balloon and ground experiments.

  19. Evolution of probability densities in stochastic coupled map lattices

    NASA Astrophysics Data System (ADS)

    Losson, Jérôme; Mackey, Michael C.

    1995-08-01

    This paper describes the statistical properties of coupled map lattices subjected to the influence of stochastic perturbations. The stochastic analog of the Perron-Frobenius operator is derived for various types of noise. When the local dynamics satisfy rather mild conditions, this equation is shown to possess either stable, steady state solutions (i.e., a stable invariant density) or density limit cycles. Convergence of the phase space densities to these limit cycle solutions explains the nonstationary behavior of statistical quantifiers at equilibrium. Numerical experiments performed on various lattices of tent, logistic, and shift maps with diffusivelike interelement couplings are examined in light of these theoretical results.
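
    A minimal sketch of a stochastically perturbed, diffusively coupled logistic-map lattice, with the invariant single-site density estimated by a histogram as a crude stand-in for the Perron-Frobenius analysis; lattice size, coupling and noise level are arbitrary choices for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_sites, n_steps, eps, noise = 64, 5000, 0.1, 0.01

    def local_map(x, a=4.0):
        """Fully chaotic logistic map on [0, 1]."""
        return a * x * (1.0 - x)

    x = rng.random(n_sites)
    samples = []
    for t in range(n_steps):
        fx = local_map(x)
        # Diffusive nearest-neighbour coupling plus additive noise, kept inside [0, 1].
        x = (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))
        x = np.clip(x + rng.normal(0.0, noise, n_sites), 0.0, 1.0)
        if t > 500:                       # discard transient
            samples.append(x.copy())

    density, edges = np.histogram(np.concatenate(samples), bins=50, range=(0, 1), density=True)
    print("estimated single-site density (first 5 bins):", density[:5].round(2))
    ```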

  20. Single Marker and Haplotype-Based Association Analysis of Semolina and Pasta Colour in Elite Durum Wheat Breeding Lines Using a High-Density Consensus Map

    PubMed Central

    Haile, Jemanesh K.; Cory, Aron T.; Clarke, Fran R.; Clarke, John M.; Knox, Ron E.; Pozniak, Curtis J.

    2017-01-01

    Association mapping is usually performed by testing the correlation between a single marker and phenotypes. However, because patterns of variation within genomes are inherited as blocks, clustering markers into haplotypes for genome-wide scans could be a worthwhile approach to improve statistical power to detect associations. The availability of high-density molecular data allows the possibility to assess the potential of both approaches to identify marker-trait associations in durum wheat. In the present study, we used single marker- and haplotype-based approaches to identify loci associated with semolina and pasta colour in durum wheat, the main objective being to evaluate the potential benefits of haplotype-based analysis for identifying quantitative trait loci. One hundred sixty-nine durum lines were genotyped using the Illumina 90K Infinium iSelect assay, and 12,234 polymorphic single nucleotide polymorphism (SNP) markers were generated and used to assess the population structure and the linkage disequilibrium (LD) patterns. A total of 8,581 SNPs previously localized to a high-density consensus map were clustered into 406 haplotype blocks based on the average LD distance of 5.3 cM. Combining multiple SNPs into haplotype blocks increased the average polymorphism information content (PIC) from 0.27 per SNP to 0.50 per haplotype. The haplotype-based analysis identified 12 loci associated with grain pigment colour traits, including the five loci identified by the single marker-based analysis. Furthermore, the haplotype-based analysis resulted in an increase of the phenotypic variance explained (50.4% on average) and the allelic effect (33.7% on average) when compared to single marker analysis. The presence of multiple allelic combinations within each haplotype locus offers potential for screening the most favorable haplotype series and may facilitate marker-assisted selection of grain pigment colour in durum wheat. These results suggest a benefit of haplotype-based analysis over single marker analysis to detect loci associated with colour traits in durum wheat. PMID:28135299
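
    The PIC gain from combining SNPs into haplotype blocks can be illustrated with the commonly used Botstein et al. (1980) definition; the allele and haplotype frequencies below are invented for illustration, not taken from the durum panel.

    ```python
    import numpy as np
    from itertools import combinations

    def pic(freqs):
        """Polymorphism information content (Botstein et al. 1980) for one locus,
        given its allele (or haplotype) frequencies."""
        p = np.asarray(freqs, dtype=float)
        assert np.isclose(p.sum(), 1.0)
        return 1.0 - np.sum(p**2) - 2.0 * sum((p[i] * p[j])**2
                                              for i, j in combinations(range(len(p)), 2))

    # A biallelic SNP vs. a four-allele haplotype block (frequencies are illustrative).
    print("SNP PIC:      ", round(pic([0.6, 0.4]), 2))           # ~0.36
    print("haplotype PIC:", round(pic([0.4, 0.3, 0.2, 0.1]), 2)) # ~0.65
    ```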

  1. Mars Gravity Field and Upper Atmosphere from MGS, Mars Odyssey, and MRO

    NASA Astrophysics Data System (ADS)

    Genova, A.; Goossens, S. J.; Lemoine, F. G.; Mazarico, E.; Neumann, G. A.; Smith, D. E.; Zuber, M. T.

    2015-12-01

    The NASA orbital missions Mars Global Surveyor (MGS), Mars Odyssey (ODY), and Mars Reconnaissance Orbiter (MRO) have been exploring and monitoring the planet Mars since 1997. MGS executed its mapping mission between 1999 and 2006 in a frozen sun-synchronous, near-circular, polar orbit with the periapsis altitude at ~370 km and the dayside equatorial crossing at 2 pm Local Solar Time (LST). The spacecraft was equipped with onboard instrumentation to acquire radio science data and to measure spacecraft ranges to the Martian surface (Mars Orbiter Laser Altimeter). These measurements resulted in static and time-varying gravity field and high-resolution global topography of the planet. ODY and MRO are still orbiting about Mars in two different sun-synchronous orbits, providing radio tracking data that indirectly measure both the static and time-varying gravity field and the atmospheric density. The orbit of ODY has its periapsis at ~390 km altitude and descending node at 4-5 pm LST. However, the spacecraft also collected measurements at lower altitudes (~220 km) in 2002 prior to the mapping phase. Since November 2006, MRO is in a low-altitude orbit with a periapsis altitude of 255 km and descending node at 3 pm LST. Radio data from MRO help improve the resolution of the static gravity field and measure the mass distribution of the polar caps, but the atmospheric drag at those altitudes may limit the benefits of these radio tracking observations. We present a combined solution of the Martian gravity field to degree and order 110 and atmospheric density profiles with radio tracking data from MGS, ODY and MRO. The gravity field solution is combined with the MOLA topography yielding an updated map of Mars crustal thickness. We also show our solution of the Love number k2 and time-variable gravity zonal harmonics (C20 and C30, in particular). The recovered atmospheric density profiles may be used in atmospheric models to constrain the long-term variability of the constituents in the upper atmosphere.

  2. The Nonsubsampled Contourlet Transform Based Statistical Medical Image Fusion Using Generalized Gaussian Density

    PubMed Central

    Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie

    2015-01-01

    We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using a generalized Gaussian density (GGD) and the similarity of two subbands is computed as the Jensen-Shannon divergence of two GGDs. To preserve more useful information from the source images, new fusion rules are developed to combine subbands of different frequencies. That is, the low-frequency subbands are fused using two activity measures based on the regional standard deviation and Shannon entropy, and the high-frequency subbands are merged via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871
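
    The NSCT decomposition and fusion rules are not reproduced here; the sketch below only shows a zero-mean GGD model and a numerical Jensen-Shannon divergence between two GGDs with made-up parameters, which is the similarity measure the scheme relies on.

    ```python
    import numpy as np
    from scipy.special import gamma as G

    def ggd_pdf(x, alpha, beta):
        """Zero-mean generalized Gaussian density with scale alpha and shape beta."""
        return beta / (2.0 * alpha * G(1.0 / beta)) * np.exp(-(np.abs(x) / alpha) ** beta)

    def js_divergence(p, q, dx):
        """Jensen-Shannon divergence between two densities sampled on the same grid."""
        m = 0.5 * (p + q)
        def kl(a, b):
            mask = a > 0
            return np.sum(a[mask] * np.log(a[mask] / b[mask])) * dx
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    x = np.linspace(-10, 10, 4001)
    dx = x[1] - x[0]
    p = ggd_pdf(x, alpha=1.0, beta=2.0)    # Gaussian-like subband statistics
    q = ggd_pdf(x, alpha=1.5, beta=0.8)    # heavier-tailed subband
    print("JS divergence:", round(js_divergence(p, q, dx), 4))
    ```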

  3. ALMA deep field in SSA22: Survey design and source catalog of a 20 arcmin2 survey at 1.1 mm

    NASA Astrophysics Data System (ADS)

    Umehata, Hideki; Hatsukade, Bunyo; Smail, Ian; Alexander, David M.; Ivison, Rob J.; Matsuda, Yuichi; Tamura, Yoichi; Kohno, Kotaro; Kato, Yuta; Hayatsu, Natsuki H.; Kubo, Mariko; Ikarashi, Soh

    2018-06-01

    To search for dust-obscured star-formation activity in the early Universe, it is essential to obtain a deep and wide submillimeter/millimeter map. The advent of the Atacama Large Millimeter/submillimeter Array (ALMA) has enabled us to obtain such maps with sufficiently high spatial resolution to be free from source confusion. We present a new 1.1 mm-wave map obtained by ALMA in the SSA22 field. The field contains a remarkable proto-cluster at z = 3.09; therefore, it is an ideal region to investigate the role of a large-scale cosmic web on dust-obscured star formation. The typical 1σ depth of our map is 73 μJy beam⁻¹ with a 0″.5 resolution. Combining the present survey with earlier, archived observations, we map an area of 20 arcmin² (71 comoving Mpc² at z = 3.09). Within the combined survey area we have detected 35 sources at a signal-to-noise ratio (S/N) >5, with flux densities of S_1.1mm = 0.43-5.6 mJy, equivalent to star-formation rates of ≳100-1000 M⊙ yr⁻¹ at z = 3.09, for a Chabrier initial mass function: 17 sources out of 35 are new detections. The cumulative number counts show an excess by a factor of three to five compared to blank fields. The excess suggests enhanced, dust-enshrouded star-formation activity in the proto-cluster on a 10 comoving Mpc scale, indicating accelerated galaxy evolution in this overdense region.

  4. Voltage gradient mapping and electrophysiologically guided cryoablation in children with AVNRT.

    PubMed

    Drago, Fabrizio; Battipaglia, Irma; Russo, Mario Salvatore; Remoli, Romolo; Pazzano, Vincenzo; Grifoni, Gino; Allegretti, Greta; Silvetti, Massimo Stefano

    2018-04-01

    Recently, voltage gradient mapping of Koch's triangle to find low-voltage connections, or 'voltage bridges', corresponding to the anatomic position of the slow pathway, has been introduced as a method to ablate atrioventricular nodal reentry tachycardia (AVNRT) in children. Thus, we aimed to assess the effectiveness of voltage mapping of Koch's triangle, combined with the search for the slow potential signal in 'low-voltage bridges', to guide cryoablation of AVNRT in children. From June 2015 to May 2016, 35 consecutive paediatric patients (mean age 12.1 ± 4.5 years) underwent 3D-guided cryoablation of AVNRT at our Institution. Fifteen children were enrolled as control group (mean age 14 ± 4 years). A voltage gradient mapping of Koch's triangle was obtained in all patients, showing low-voltage connections in all children with AVNRT but not in controls. Prior to performing cryoablation, we looked for the typical 'hump and spike' electrogram, generally considered to be representative of slow pathway potential within a low-voltage bridge. In all patients the 'hump and spike' electrogram was found inside bridges of low voltage. Focal or high-density linear lesions, extended or not, were delivered guided by low-voltage bridge visualization. Acute success rate was 100%, and no recurrence was reported at a mean follow-up of 8 ± 3 months. Voltage gradient mapping of Koch's triangle, combined with the search for the slow potential signal in low-voltage bridges, is effective in guiding cryoablation of AVNRT in paediatric patients, with a complete acute success rate and no AVNRT recurrences at mid-term follow-up.

  5. Volume and density changes of biological fluids with temperature

    NASA Technical Reports Server (NTRS)

    Hinghofer-Szalkay, H.

    1985-01-01

    The thermal expansion of human blood, plasma, ultrafiltrate, and erythrocyte concentrate at temperatures in the range of 4-48 °C is studied. The mechanical oscillator technique, which has an accuracy of 1 × 10⁻⁵ g/ml, is utilized to measure fluid density. The relationship between thermal expansion, density, and temperature is analyzed. The study reveals that: (1) thermal expansion increases with increasing temperature; (2) the magnitude of the increase declines with increasing temperature; (3) thermal expansion increases with density at temperatures below 40 °C; and (4) the thermal expansion of intracellular fluid is greater than that of extracellular fluid in the temperature range of 4-10 °C, but the two are equal at temperatures of 40 °C and above.

  6. Gravity, aeromagnetic and rock-property data of the central California Coast Ranges

    USGS Publications Warehouse

    Langenheim, V.E.

    2014-01-01

    Gravity, aeromagnetic, and rock-property data were collected to support geologic-mapping, water-resource, and seismic-hazard studies for the central California Coast Ranges. These data are combined with existing data to provide gravity, aeromagnetic, and physical-property datasets for this region. The gravity dataset consists of approximately 18,000 measurements. The aeromagnetic dataset consists of total-field anomaly values from several detailed surveys that have been merged and gridded at an interval of 200 m. The physical property dataset consists of approximately 800 density measurements and 1,100 magnetic-susceptibility measurements from rock samples, in addition to previously published borehole gravity surveys from Santa Maria Basin, density logs from Salinas Valley, and intensities of natural remanent magnetization.

  7. Vortex energy landscape from real space imaging analysis of YBa2Cu3O7 with different defect structures

    NASA Astrophysics Data System (ADS)

    Luccas, R. F.; Granados, X.; Obradors, X.; Puig, T.

    2014-10-01

    A methodology based on real-space vortex image analysis is presented that is able to estimate semi-quantitatively the relevant energy densities of an arbitrary array of vortices, map the interaction energy distributions and evaluate the pinning energy associated with particular defects. The combined study using nanostructuration tools, a vortex visualization technique and the energy method is seen as an opportunity to estimate vortex pinning potential strengths. In particular, spatial distributions of vortex energy densities induced by surface nanoindented scratches are evaluated and compared to those of twin boundaries. This comparative study underlines the remarkable role of surface nanoscratches in pinning vortices and their potential in the design of novel devices for pinning and guiding vortex motion.

  8. Site-occupation embedding theory using Bethe ansatz local density approximations

    NASA Astrophysics Data System (ADS)

    Senjean, Bruno; Nakatani, Naoki; Tsuchiizu, Masahisa; Fromager, Emmanuel

    2018-06-01

    Site-occupation embedding theory (SOET) is an alternative formulation of density functional theory (DFT) for model Hamiltonians where the fully interacting Hubbard problem is mapped, in principle exactly, onto an impurity-interacting (rather than a noninteracting) one. It provides a rigorous framework for combining wave-function (or Green function)-based methods with DFT. In this work, exact expressions for the per-site energy and double occupation of the uniform Hubbard model are derived in the context of SOET. As readily seen from these derivations, the so-called bath contribution to the per-site correlation energy is, in addition to the latter, the key density functional quantity to model in SOET. Various approximations based on Bethe ansatz and perturbative solutions to the Hubbard and single-impurity Anderson models are constructed and tested on a one-dimensional ring. The self-consistent calculation of the embedded impurity wave function has been performed with the density-matrix renormalization group method. It has been shown that promising results are obtained in specific regimes of correlation and density. Possible further developments have been proposed in order to provide reliable embedding functionals and potentials.

  9. High-density genetic map using whole-genome resequencing for fine mapping and candidate gene discovery for disease resistance in peanut.

    PubMed

    Agarwal, Gaurav; Clevenger, Josh; Pandey, Manish K; Wang, Hui; Shasidhar, Yaduru; Chu, Ye; Fountain, Jake C; Choudhary, Divya; Culbreath, Albert K; Liu, Xin; Huang, Guodong; Wang, Xingjun; Deshmukh, Rupesh; Holbrook, C Corley; Bertioli, David J; Ozias-Akins, Peggy; Jackson, Scott A; Varshney, Rajeev K; Guo, Baozhu

    2018-04-10

    Whole-genome resequencing (WGRS) of mapping populations has facilitated development of high-density genetic maps essential for fine mapping and candidate gene discovery for traits of interest in crop species. Leaf spots, including early leaf spot (ELS) and late leaf spot (LLS), and Tomato spotted wilt virus (TSWV) are devastating diseases in peanut causing significant yield loss. We generated WGRS data on a recombinant inbred line population, developed a SNP-based high-density genetic map, and conducted fine mapping, candidate gene discovery and marker validation for ELS, LLS and TSWV. The first sequence-based high-density map was constructed with 8869 SNPs assigned to 20 linkage groups, representing 20 chromosomes, for the 'T' population (Tifrunner × GT-C20) with a map length of 3120 cM and an average distance of 1.45 cM. The quantitative trait locus (QTL) analysis using high-density genetic map and multiple season phenotyping data identified 35 main-effect QTLs with phenotypic variation explained (PVE) from 6.32% to 47.63%. Among major-effect QTLs mapped, there were two QTLs for ELS on B05 with 47.42% PVE and B03 with 47.38% PVE, two QTLs for LLS on A05 with 47.63% and B03 with 34.03% PVE and one QTL for TSWV on B09 with 40.71% PVE. The epistasis and environment interaction analyses identified significant environmental effects on these traits. The identified QTL regions had disease resistance genes including R-genes and transcription factors. KASP markers were developed for major QTLs and validated in the population and are ready for further deployment in genomics-assisted breeding in peanut. © 2018 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.

  10. The Effect of Tutoring With Nonstandard Equations for Students With Mathematics Difficulty.

    PubMed

    Powell, Sarah R; Driver, Melissa K; Julian, Tyler E

    2015-01-01

    Students often misinterpret the equal sign (=) as operational instead of relational. Research indicates misinterpretation of the equal sign occurs because students receive relatively little exposure to equations that promote relational understanding of the equal sign. No study, however, has examined effects of nonstandard equations on the equation solving and equal-sign understanding of students with mathematics difficulty (MD). In the present study, second-grade students with MD (n = 51) were randomly assigned to standard equations tutoring, combined tutoring (standard and nonstandard equations), and no-tutoring control. Combined tutoring students demonstrated greater gains on equation-solving assessments and equal-sign tasks compared to the other two conditions. Standard tutoring students demonstrated improved skill on equation solving over control students, but combined tutoring students' performance gains were significantly larger. Results indicate that exposure to and practice with nonstandard equations positively influence student understanding of the equal sign. © Hammill Institute on Disabilities 2013.

  11. Modeling folding related multi-scale deformation of sedimentary rock using ALSM and fracture characterization at Raplee Ridge, UT

    NASA Astrophysics Data System (ADS)

    Mynatt, I.; Hilley, G. E.; Pollard, D. D.

    2006-12-01

    Understanding and predicting the characteristics of folding-induced fracturing is an important and intriguing structural problem. Folded sequences of sedimentary rock at depth are common traps for hydrocarbons and water, and fractures can strongly affect (both positively and negatively) this trapping capability. For these reasons fold-fracture relationships are well studied, but due to the complex interactions between the remote tectonic stress, rheologic properties, underlying fault geometry and slip, and pre-existing fractures, fracture characteristics can vary greatly from fold to fold. Additionally, examination of the relationships between fundamental characteristics such as fold geometry and fracture density is difficult even in thoroughly studied producing fields, as measurements of fold shape are hampered by the low resolution of seismic surveying and measurements of fractures are limited to sparse well-bore locations. Due to the complexity of the system, the limitations of available data and the small number of detailed case studies, predictions of fracture characteristics, e.g. the distribution of fracture density, are often difficult to make for a particular fold. We suggest that a combination of mechanical and numerical modeling and analysis combined with detailed field mapping can lead to important insights into fold-fracture relationships. We develop methods to quantify both fold geometry and fracture characteristics, and summarize their relationships for an exhumed analogue reservoir case study. The field area is Raplee Monocline, a Laramide-aged, N-S oriented, ~14-km long fold exposed in the Monument Upwarp of south-eastern Utah and part of the larger Colorado Plateau geologic province. The investigation involves three distinct parts: 1) Field-based characterization and mapping of the fractures on and near the fold; 2) Development of accurate models of the fold geometry using high resolution data including ~3.5 × 10⁷ x, y, z topographic points collected using Airborne Laser Swath Mapping (ALSM); and 3) Analysis of the fold shape and fracture patterns using the concepts of differential geometry and fracture mechanics. Field documentation of fracture characteristics enables the classification of distinct pre- and syn-folding fracture sets and the development of conceptual models of multiple stages of fracture evolution. Numerical algorithms, visual methods and field mapping techniques are used to extract the geometry of specific stratigraphic bedding surfaces and interpolate fold geometry between topographic exposures, thereby creating models of the fold geometry at several stratigraphic levels. Geometric characteristics of the fold models, such as magnitudes and directions of maximum and minimum normal curvature and fold limb dip, are compared to the observed fracture characteristics to identify the following relationships: 1) Initiation of folding-related fractures at ten degrees of limb dip and increasing fracture density with increasing dip; and 2) No correlation between absolute maximum fold curvature and fracture density.

  12. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  13. Can we detect Galactic spiral arms? 3D dust distribution in the Milky Way

    NASA Astrophysics Data System (ADS)

    Rezaei Kh., Sara; Bailer-Jones, Coryn A. L.; Fouesneau, Morgan; Hanson, Richard

    2018-04-01

    We present a model to map the 3D distribution of dust in the Milky Way. Although dust is just a tiny fraction of what comprises the Galaxy, it plays an important role in various processes. In recent years various maps of dust extinction have been produced, but we still lack a good knowledge of the dust distribution. Our presented approach leverages line-of-sight extinctions towards stars in the Galaxy at measured distances. Since extinction is proportional to the integral of the dust density towards a given star, it is possible to reconstruct the 3D distribution of dust by combining many lines-of-sight in a model accounting for the spatial correlation of the dust. Such a technique can be used to infer the most probable 3D distribution of dust in the Galaxy even in regions which have not been observed. This contribution provides one of the first maps which does not show the "fingers of God" effect. Furthermore, we show that expected high precision measurements of distances and extinctions offer the possibility of mapping the spiral arms in the Galaxy.
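
    A minimal 1D illustration of the principle that extinction is the line-of-sight integral of dust density: a single sight line is discretized into distance bins and a density profile is recovered from noisy cumulative extinctions by ridge-regularized least squares. The spatial-correlation (Gaussian-process-like) model of the actual work is not reproduced, and all numbers are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_bins, bin_pc = 50, 20.0                       # 50 bins of 20 pc along one sight line
    true_density = 0.002 + 0.01 * np.exp(-0.5 * ((np.arange(n_bins) - 30) / 4.0) ** 2)

    # Stars at random distances; each extinction is the integral (sum) of density
    # over the bins in front of the star, plus measurement noise.
    n_stars = 400
    star_bin = rng.integers(5, n_bins, n_stars)
    A = np.zeros((n_stars, n_bins))
    for k, b in enumerate(star_bin):
        A[k, :b] = bin_pc                            # path length through each crossed bin
    ext = A @ true_density + rng.normal(0, 0.02, n_stars)

    # Ridge-regularized least squares keeps the under-constrained far bins stable.
    lam = 1.0
    rho = np.linalg.solve(A.T @ A + lam * np.eye(n_bins), A.T @ ext)
    print("peak of recovered profile at bin", int(np.argmax(rho)), "(true peak at 30)")
    ```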

  14. Functional materials discovery using energy-structure-function maps

    NASA Astrophysics Data System (ADS)

    Pulido, Angeles; Chen, Linjiang; Kaczorowski, Tomasz; Holden, Daniel; Little, Marc A.; Chong, Samantha Y.; Slater, Benjamin J.; McMahon, David P.; Bonillo, Baltasar; Stackhouse, Chloe J.; Stephenson, Andrew; Kane, Christopher M.; Clowes, Rob; Hasell, Tom; Cooper, Andrew I.; Day, Graeme M.

    2017-03-01

    Molecular crystals cannot be designed in the same manner as macroscopic objects, because they do not assemble according to simple, intuitive rules. Their structures result from the balance of many weak interactions, rather than from the strong and predictable bonding patterns found in metal-organic frameworks and covalent organic frameworks. Hence, design strategies that assume a topology or other structural blueprint will often fail. Here we combine computational crystal structure prediction and property prediction to build energy-structure-function maps that describe the possible structures and properties that are available to a candidate molecule. Using these maps, we identify a highly porous solid, which has the lowest density reported for a molecular crystal so far. Both the structure of the crystal and its physical properties, such as methane storage capacity and guest-molecule selectivity, are predicted using the molecular structure as the only input. More generally, energy-structure-function maps could be used to guide the experimental discovery of materials with any target function that can be calculated from predicted crystal structures, such as electronic structure or mechanical properties.

  15. Functional materials discovery using energy-structure-function maps.

    PubMed

    Pulido, Angeles; Chen, Linjiang; Kaczorowski, Tomasz; Holden, Daniel; Little, Marc A; Chong, Samantha Y; Slater, Benjamin J; McMahon, David P; Bonillo, Baltasar; Stackhouse, Chloe J; Stephenson, Andrew; Kane, Christopher M; Clowes, Rob; Hasell, Tom; Cooper, Andrew I; Day, Graeme M

    2017-03-30

    Molecular crystals cannot be designed in the same manner as macroscopic objects, because they do not assemble according to simple, intuitive rules. Their structures result from the balance of many weak interactions, rather than from the strong and predictable bonding patterns found in metal-organic frameworks and covalent organic frameworks. Hence, design strategies that assume a topology or other structural blueprint will often fail. Here we combine computational crystal structure prediction and property prediction to build energy-structure-function maps that describe the possible structures and properties that are available to a candidate molecule. Using these maps, we identify a highly porous solid, which has the lowest density reported for a molecular crystal so far. Both the structure of the crystal and its physical properties, such as methane storage capacity and guest-molecule selectivity, are predicted using the molecular structure as the only input. More generally, energy-structure-function maps could be used to guide the experimental discovery of materials with any target function that can be calculated from predicted crystal structures, such as electronic structure or mechanical properties.

  16. U.S.A. National Surface Rock Density Map - Part 2

    NASA Astrophysics Data System (ADS)

    Winester, D.

    2016-12-01

    A map of surface rock densities over the USA has been developed by the NOAA-National Geodetic Survey (NGS) as part of its Gravity for the Redefinition of the American Vertical Datum (GRAV-D) Program. GRAV-D is part of an international effort to generate a North American gravimetric geoid for use as the vertical datum reference surface. As part of the modeling process, it is necessary to eliminate from the observed gravity data the topographic and density effects of all masses above the geoid. However, the long-standing tradition in geoid modeling, which is to use an average rock density (e.g. 2.67 g/cm³), does not adequately represent the variety of lithologies in the USA. The U.S. Geological Survey has assembled a downloadable set of surface geologic formation maps (typically 1:100,000 to 1:500,000 scale in NAD27) in GIS format. The lithologies were assigned densities typical of their rock type (Part 1), and the resulting densities were then rasterized and averaged over one-arc-minute areas. All were then transformed into the WGS84 datum. Thin layers of alluvium and some water bodies (interpreted to be less than 40 m thick) have been ignored in deference to underlying rocks. Deep alluvial basins have not been removed, since they represent a significant fraction of the local mass. The initial assumption for modeling densities will be that the surface rock densities extend down to the geoid. If this results in poor modeling, variable lithologies with depth can be attempted. Initial modeling will use elevations from the SRTM DEM. A map of CONUS densities is presented (denser lithologies are shown brighter). While a visual map at this scale does show detailed features, digital versions are available upon request. Also presented are some pitfalls of using source GIS maps digitized from variable reference sources, including the infamous 'state line faults'.
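
    The "rasterize and average over one-arc-minute areas" step can be sketched as a weighted 2D binning; the scattered sample points and assigned densities below are synthetic stand-ins for the USGS lithology polygons.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    # Toy "lithology polygons" reduced to scattered sample points with assigned densities.
    lon = rng.uniform(-100.0, -99.0, 20000)
    lat = rng.uniform(35.0, 36.0, 20000)
    density = rng.choice([2.3, 2.67, 2.9], size=20000, p=[0.3, 0.5, 0.2])   # g/cm^3

    # Average the assigned densities on a one-arc-minute grid (60 cells per degree).
    bins_lon = np.arange(-100.0, -99.0 + 1e-9, 1.0 / 60.0)
    bins_lat = np.arange(35.0, 36.0 + 1e-9, 1.0 / 60.0)
    sum_d, _, _ = np.histogram2d(lon, lat, bins=[bins_lon, bins_lat], weights=density)
    count, _, _ = np.histogram2d(lon, lat, bins=[bins_lon, bins_lat])
    mean_density = np.where(count > 0, sum_d / np.maximum(count, 1), np.nan)
    print("grid shape:", mean_density.shape, " mean of cell means:",
          np.nanmean(mean_density).round(2), "g/cm^3")
    ```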

  17. A tool for the estimation of the distribution of landslide area in R

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.

    2012-04-01

    We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) gave very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
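
    The sketch below illustrates the two non-parametric estimators named in the abstract (HDE and KDE) applied to synthetic landslide areas in Python; it is not the authors' R/WPS tool, and the log-area working space and bandwidth defaults are my assumptions.

        # Hedged sketch: histogram and kernel density estimates of landslide area,
        # computed on log10(area) as is common for heavy-tailed size distributions.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(0)
        area_m2 = 10 ** rng.normal(3.5, 0.6, size=500)   # synthetic landslide areas (m^2)
        log_a = np.log10(area_m2)

        # Histogram density estimation (HDE): probability density per unit log10(area)
        hist, edges = np.histogram(log_a, bins=30, density=True)

        # Kernel density estimation (KDE) with a Gaussian kernel
        kde = gaussian_kde(log_a)
        grid = np.linspace(log_a.min(), log_a.max(), 200)
        pdf_log = kde(grid)

        # Convert the density in log10(area) back to a density in area if needed:
        # p(A) = p(log10 A) / (A * ln 10)
        pdf_area = pdf_log / (10 ** grid * np.log(10))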

  18. Use of total electron content data to analyze ionosphere electron density gradients

    NASA Astrophysics Data System (ADS)

    Nava, B.; Radicella, S. M.; Leitinger, R.; Coïsson, P.

    In the presence of electron density gradients, the thin shell approximation for the ionosphere, used together with a simple mapping function to convert slant total electron content (TEC) to vertical TEC, can lead to TEC conversion errors. These "mapping function errors" can therefore be used to detect electron density gradients in the ionosphere. In the present work, GPS-derived slant TEC data have been used to investigate the effects of electron density gradients in the middle- and low-latitude ionosphere under geomagnetically quiet and disturbed conditions. In particular, the data corresponding to the geographic area of the American Sector for the days 5-7 April 2000 have been used to perform a complete analysis of mapping function errors based on the "coinciding pierce point" technique. The results clearly illustrate the electron density gradient effects according to the locations considered and the actual levels of disturbance of the ionosphere. In addition, the possibility of determining an ionospheric shell height that minimizes the mapping function errors has been verified.
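
    For readers unfamiliar with the thin shell approximation, the following minimal sketch implements the standard single-layer mapping function that converts slant TEC to vertical TEC; the 450 km shell height is an assumed, commonly quoted value and is exactly the kind of parameter the paper proposes to tune.

        # Hedged sketch of the standard thin-shell (single-layer) mapping function.
        import numpy as np

        R_E = 6371.0  # mean Earth radius, km

        def mapping_function(elevation_deg, shell_height_km=450.0):
            """Secant of the zenith angle at the ionospheric pierce point."""
            z = np.radians(90.0 - elevation_deg)                   # zenith angle at receiver
            sin_zp = R_E / (R_E + shell_height_km) * np.sin(z)     # geometry of the thin shell
            return 1.0 / np.sqrt(1.0 - sin_zp ** 2)

        def slant_to_vertical(stec_tecu, elevation_deg, shell_height_km=450.0):
            return stec_tecu / mapping_function(elevation_deg, shell_height_km)

        # Example: a 50 TECU slant measurement at 30 degrees elevation
        vtec = slant_to_vertical(50.0, 30.0)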

  19. Earliest phases of star formation (EPoS). Dust temperature distributions in isolated starless cores

    NASA Astrophysics Data System (ADS)

    Lippok, N.; Launhardt, R.; Henning, Th.; Balog, Z.; Beuther, H.; Kainulainen, J.; Krause, O.; Linz, H.; Nielbock, M.; Ragan, S. E.; Robitaille, T. P.; Sadavoy, S. I.; Schmiedeke, A.

    2016-07-01

    Context. Stars form by the gravitational collapse of cold and dense molecular cloud cores. Constraining the temperature and density structure of such cores is fundamental for understanding the initial conditions of star formation. We use Herschel observations of the thermal far-infrared (FIR) dust emission from nearby and isolated molecular cloud cores and combine them with ground-based submillimeter continuum data to derive observational constraints on their temperature and density structure. Aims: The aim of this study is to verify the validity of a ray-tracing inversion technique developed to derive the dust temperature and density structure of nearby and isolated starless cores directly from the dust emission maps and to test if the resulting temperature and density profiles are consistent with physical models. Methods: We have developed a ray-tracing inversion technique that can be used to derive the temperature and density structure of starless cores directly from the observed dust emission maps without the need to make assumptions about the physical conditions. Using this ray-tracing inversion technique, we derive the dust temperature and density structure of six isolated starless molecular cloud cores from dust emission maps in the wavelength range 100 μm-1.2 mm. We then apply self-consistent radiative transfer modeling to the density profiles derived with the ray-tracing inversion method. In this model, the interstellar radiation field (ISRF) is the only heating source. The local strength of the ISRF as well as the total extinction provided by the outer envelope are treated as semi-free parameters which we scale within defined limits. The best-fit values of both parameters are derived by comparing the self-consistently calculated temperature profiles with those derived by the ray-tracing method. Results: We confirm earlier results and show that all starless cores are significantly colder inside than outside, with central core temperatures in the range 7.5-11.9 K and envelope temperatures that are 2.4-9.6 K higher. The core temperatures show a strong negative correlation with peak column density, which suggests that the thermal structure of the cores is dominated by external heating from the ISRF and shielding by dusty envelopes. We find that temperature profiles derived with the ray-tracing inversion method can be well reproduced with self-consistent radiative transfer models if the cores have a geometry that is not too complex and good data coverage, with spatially resolved maps at five or more wavelengths in the range between 100 μm and 1.2 mm. We also confirm results from earlier studies that found that the usually adopted canonical value of the total strength of the ISRF in the solar neighbourhood is incompatible with the most widely used dust opacity models for dense cores. However, with the data available for this study, we cannot uniquely resolve the degeneracy between the dust opacity law and the strength of the ISRF. Final T maps (FITS format) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/cgi-bin/qcat?J/A+A/592/A61
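
    The sketch below is not the ray-tracing inversion of the paper; it only illustrates the simpler, widely used alternative of fitting a single-temperature modified blackbody to multi-wavelength fluxes along one line of sight. The dust emissivity index, reference wavelength and synthetic fluxes are all assumptions.

        # Hedged sketch: single-temperature modified-blackbody fit to FIR/submm fluxes.
        import numpy as np
        from scipy.optimize import curve_fit

        h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23

        def planck(nu, T):
            return 2 * h * nu**3 / c**2 / (np.exp(h * nu / (k_B * T)) - 1.0)

        def greybody(wavelength_um, T, log_tau0, beta=1.8):
            nu = c / (wavelength_um * 1e-6)
            nu0 = c / 250e-6                          # assumed reference wavelength, 250 um
            tau = 10**log_tau0 * (nu / nu0) ** beta   # optically thin power-law opacity
            return tau * planck(nu, T)                # surface brightness, arbitrary scaling

        rng = np.random.default_rng(3)
        wavelengths = np.array([100., 160., 250., 350., 500., 1200.])   # um
        fluxes = greybody(wavelengths, 11.0, -3.0) * (1 + 0.05 * rng.standard_normal(6))

        popt, pcov = curve_fit(greybody, wavelengths, fluxes, p0=(15.0, -3.0))
        T_dust, log_tau0 = popt    # recovered dust temperature and opacity normalisation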

  20. A computer program for converting rectangular coordinates to latitude-longitude coordinates

    USGS Publications Warehouse

    Rutledge, A.T.

    1989-01-01

    A computer program was developed for converting the coordinates of any rectangular grid on a map to coordinates on a grid that is parallel to lines of equal latitude and longitude. Using this program in conjunction with groundwater flow models, the user can extract data and results from models with varying grid orientations and place these data into a grid structure that is oriented parallel to lines of equal latitude and longitude. All cells in the rectangular grid must have equal dimensions, and all cells in the latitude-longitude grid measure one minute by one minute. This program is applicable if the map used shows lines of equal latitude as arcs and lines of equal longitude as straight lines, and it assumes that the Earth's surface can be approximated as a sphere. The program user enters the row number, column number, and latitude and longitude of the midpoint of the cell for three test cells on the rectangular grid. The latitude and longitude of boundaries of the rectangular grid also are entered. By solving sets of simultaneous linear equations, the program calculates coefficients that are used for making the conversion. As an option in the program, the user may build a groundwater model file based on a grid that is parallel to lines of equal latitude and longitude. The program reads a data file based on the rectangular coordinates and automatically forms the new data file. (USGS)
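
    A minimal sketch of the core calculation, under the simplifying assumption of a purely affine relation between grid indices and geographic coordinates: the (row, column) midpoints of three test cells and their latitudes and longitudes define a 3x3 linear system whose solution gives the conversion coefficients. The actual USGS program also handles the spherical-earth and curved-latitude geometry described above; all numbers below are illustrative.

        # Hedged sketch: solve simultaneous linear equations relating three test cells'
        # (row, column) midpoints to their (latitude, longitude).
        import numpy as np

        rc = np.array([[10.0,  5.0],
                       [40.0, 12.0],
                       [25.0, 30.0]])            # (row, col) of three test cells
        ll = np.array([[38.50, -90.25],
                       [38.10, -90.10],
                       [38.30, -89.80]])         # (lat, lon) of the same cells

        A = np.column_stack([rc, np.ones(3)])    # [row, col, 1] design matrix
        coef_lat = np.linalg.solve(A, ll[:, 0])  # lat = a*row + b*col + c
        coef_lon = np.linalg.solve(A, ll[:, 1])  # lon = d*row + e*col + f

        def cell_to_latlon(row, col):
            v = np.array([row, col, 1.0])
            return v @ coef_lat, v @ coef_lon

        lat, lon = cell_to_latlon(20.0, 18.0)    # convert any other cell midpoint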

  1. A new complete sample of submillijansky radio sources: An optical and near-infrared study

    NASA Technical Reports Server (NTRS)

    Masci, F.; Condon, J.; Barlow, T.; Lonsdale, C.; Xu, C.; Shupe, D.; Pevunova, O.; Fang, F.; Cutri, R.

    2001-01-01

    The Very Large Array has been used in C configuration to map an area of ≃0.3 deg² at 1.4 GHz with 5σ sensitivities of 0.305, 0.325, 0.380, and 0.450 mJy beam⁻¹ over four equal subareas.

  2. Dietary saturated triacylglycerols suppress hepatic low density lipoprotein receptor activity in the hamster.

    PubMed

    Spady, D K; Dietschy, J M

    1985-07-01

    The liver plays a key role in the regulation of circulating levels of low density lipoproteins (LDL) because it is both the site for the production of and the major organ for the degradation of this class of lipoproteins. In this study, the effects of feeding polyunsaturated or saturated triacylglycerols on receptor-dependent and receptor-independent hepatic LDL uptake were measured in vivo in the hamster. In control animals, receptor-dependent LDL transport manifested an apparent Km value of 85 mg/dl (plasma LDL-cholesterol concentration) and reached a maximum transport velocity of 131 micrograms of LDL-cholesterol/hr per g, whereas receptor-independent uptake increased as a linear function of plasma LDL levels. Thus, at normal plasma LDL-cholesterol concentrations, the hepatic clearance rate of LDL equaled 120 and 9 microliter/hr per g by receptor-dependent and receptor-independent mechanisms, respectively. As the plasma LDL-cholesterol was increased, the receptor-dependent (but not the receptor-independent) component declined. When cholesterol (0.12%) alone or in combination with polyunsaturated triacylglycerols was fed for 30 days, receptor-dependent clearance was reduced to 36-42 microliter/hr per g, whereas feeding of cholesterol plus saturated triacylglycerols essentially abolished receptor-dependent LDL uptake (5 microliter/hr per g). When compared to the appropriate kinetic curves, these findings indicated that receptor-mediated LDL transport was suppressed approximately equal to 30% by cholesterol feeding alone and this was unaffected by the addition of polyunsaturated triacylglycerols to the diet. In contrast, receptor-dependent uptake was suppressed approximately equal to 90% by the intake of saturated triacylglycerols. As compared to polyunsaturated triacylglycerols, the intake of saturated lipids was also associated with significantly higher plasma LDL-cholesterol concentrations and lower levels of cholesteryl esters in the liver.
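
    The kinetic picture described above can be summarised in a few lines: receptor-dependent uptake is saturable (Michaelis-Menten) while receptor-independent uptake is linear in plasma LDL. The sketch below uses the Km and Vmax quoted in the abstract; the linear slope is back-calculated from the quoted 9 microliter/hr per g clearance and is therefore an assumption about units rather than a value taken from the paper.

        # Hedged sketch of the LDL uptake kinetics described in the abstract.
        Km = 85.0                 # mg/dl, apparent Km of receptor-dependent transport
        Vmax = 131.0              # ug LDL-cholesterol per hr per g liver
        slope_indep = 9.0 * 0.01  # ug/hr/g per (mg/dl); assumes 1 mg/dl = 0.01 ug/ul

        def hepatic_ldl_uptake(ldl_mg_dl):
            """Total hepatic uptake (ug LDL-cholesterol / hr / g) at a given plasma LDL level."""
            receptor_dependent = Vmax * ldl_mg_dl / (Km + ldl_mg_dl)   # saturable component
            receptor_independent = slope_indep * ldl_mg_dl             # linear component
            return receptor_dependent + receptor_independent

        for conc in (25, 85, 170, 340):
            print(conc, "mg/dl ->", round(hepatic_ldl_uptake(conc), 1), "ug/hr/g")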

  3. Detection of enhancement in number densities of background galaxies due to magnification by massive galaxy clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiu, I.; Dietrich, J. P.; Mohr, J.

    2016-02-18

    We present a detection of the enhancement in the number densities of background galaxies induced from lensing magnification and use it to test the Sunyaev-Zel'dovich effect (SZE)-inferred masses in a sample of 19 galaxy clusters with median redshift z ≃ 0.42 selected from the South Pole Telescope SPT-SZ survey. These clusters are observed by the Megacam on the Magellan Clay Telescope through gri filters. Two background galaxy populations are selected for this study through their photometric colours; they have median redshifts z_median ≃ 0.9 (low-z background) and z_median ≃ 1.8 (high-z background). Stacking these populations, we detect the magnification bias effect at 3.3 sigma and 1.3 sigma for the low- and high-z backgrounds, respectively. We fit Navarro, Frenk and White models simultaneously to all observed magnification bias profiles to estimate the multiplicative factor eta that describes the ratio of the weak lensing mass to the mass inferred from the SZE observable-mass relation. We further quantify systematic uncertainties in eta resulting from the photometric noise and bias, the cluster galaxy contamination and the estimations of the background properties. The resulting eta for the combined background populations with 1 sigma uncertainties is 0.83 +/- 0.24 (stat) +/- 0.074 (sys), indicating good consistency between the lensing and the SZE-inferred masses. We use our best-fitting eta to predict the weak lensing shear profiles and compare these predictions with observations, showing agreement between the magnification and shear mass constraints. This work demonstrates the promise of using the magnification as a complementary method to estimate cluster masses in large surveys.
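
    As background to the measurement described above, the sketch below evaluates the standard magnification-bias relation, in which the observed background number density scales as n_obs = n_0 mu^(2.5 s - 1) for magnification mu and cumulative count slope s; it is only the textbook relation, not the authors' NFW-profile fit, and the numbers are illustrative.

        # Hedged sketch of the standard magnification-bias scaling of number densities.
        import numpy as np

        def number_density_ratio(mu, s):
            """n_obs / n_0 for magnification mu and cumulative count slope s = dlog10 N(<m)/dm."""
            return mu ** (2.5 * s - 1.0)

        mu = np.linspace(1.0, 1.5, 6)
        print(number_density_ratio(mu, s=0.5))   # s > 0.4 -> density enhancement
        print(number_density_ratio(mu, s=0.3))   # s < 0.4 -> depletion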

  4. Electronic properties of RDX and HMX: Compton scattering experiment and first-principles calculation.

    PubMed

    Ahuja, B L; Jain, Pradeep; Sahariya, Jagrati; Heda, N L; Soni, Pramod

    2013-07-11

    The first-ever electron momentum density (EMD) measurements of explosive materials, namely, RDX (1,3,5-trinitro-1,3,5-triazacyclohexane, (CH2-N-NO2)3) and HMX (1,3,5,7-tetranitro-1,3,5,7-tetraazacyclooctane, (CH2-N-NO2)4), have been reported using a 740 GBq (137)Cs Compton spectrometer. Experimental Compton profiles (CPs) are compared with the EMDs derived from linear combination of atomic orbitals with density functional theory. It is found that the CPs deduced from generalized gradient approximation (GGA) with Wu-Cohen exchange energies give a better agreement with the corresponding experimental profiles than those from local density approximation and other schemes of GGA. Further, Mulliken population, energy bands, partial and total density of states, and band gap have also been reported using GGA calculations. Present ground state calculations unambiguously show large band gap semiconductor nature of both RDX and HMX. A similar type of bonding in these materials is uniquely established using Compton data and density of states. It is also outstandingly consistent with the Mulliken population, which predicts almost equal amount of charge transfer (0.84 and 0.83 e(-)) from H1 + H2 + N2 to C1 + N1 + O1 + O2 in both the explosives.

  5. Spatial and Temporal Analysis of Eruption Locations, Compositions, and Styles in Northern Harrat Rahat, Kingdom of Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Dietterich, H. R.; Stelten, M. E.; Downs, D. T.; Champion, D. E.

    2017-12-01

    Harrat Rahat is a predominantly mafic, 20,000 km2 volcanic field in western Saudi Arabia with an elongate volcanic axis extending 310 km north-south. Prior mapping suggests that the youngest eruptions were concentrated in northernmost Harrat Rahat, where our new geologic mapping and geochronology reveal >300 eruptive vents with ages ranging from 1.2 Ma to a historic eruption in 1256 CE. Eruption compositions and styles vary spatially and temporally within the volcanic field, where extensive alkali basaltic lavas dominate, but more evolved compositions erupted episodically as clusters of trachytic domes and small-volume pyroclastic flows. Analysis of vent locations, compositions, and eruption styles shows the evolution of the volcanic field and allows assessment of the spatio-temporal probabilities of vent opening and eruption styles. We link individual vents and fissures to eruptions and their deposits using field relations, petrography, geochemistry, paleomagnetism, and 40Ar/39Ar and 36Cl geochronology. Eruption volumes and deposit extents are derived from geologic mapping and topographic analysis. Spatial density analysis with kernel density estimation captures vent densities of up to 0.2 %/km2 along the north-south running volcanic axis, decaying quickly away to the east but reaching a second, lower high along a secondary axis to the west. Temporal trends show slight younging of mafic eruption ages to the north in the past 300 ka, as well as clustered eruptions of trachytes over the past 150 ka. Vent locations, timing, and composition are integrated through spatial probability weighted by eruption age for each compositional range to produce spatio-temporal models of vent opening probability. These show that the next mafic eruption is most probable within the north end of the main (eastern) volcanic axis, whereas more evolved compositions are most likely to erupt within the trachytic centers further to the south. These vent opening probabilities, combined with corresponding eruption properties, can be used as the basis for lava flow and tephra fall hazard maps.
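
    The spatial-density step mentioned above can be sketched with a two-dimensional Gaussian kernel density estimate of vent coordinates, as below; the coordinates are synthetic and the age and composition weighting used in the study is omitted.

        # Hedged sketch: 2D kernel density estimate of vent locations (synthetic km coordinates).
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(1)
        easting = rng.normal(0.0, 3.0, 300)          # vents clustered along a N-S axis
        northing = rng.uniform(-40.0, 40.0, 300)

        kde = gaussian_kde(np.vstack([easting, northing]))

        x = np.linspace(-15, 15, 61)
        y = np.linspace(-45, 45, 181)
        X, Y = np.meshgrid(x, y)
        density = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)

        percent_per_km2 = 100.0 * density            # the KDE integrates to 1 over the map area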

  6. Polder maps: Improving OMIT maps by excluding bulk solvent

    DOE PAGES

    Liebschner, Dorothee; Afonine, Pavel V.; Moriarty, Nigel W.; ...

    2017-02-01

    The crystallographic maps that are routinely used during the structure-solution workflow are almost always model-biased because model information is used for their calculation. As these maps are also used to validate the atomic models that result from model building and refinement, this constitutes an immediate problem: anything added to the model will manifest itself in the map and thus hinder the validation. OMIT maps are a common tool to verify the presence of atoms in the model. The simplest way to compute an OMIT map is to exclude the atoms in question from the structure, update the corresponding structure factors and compute a residual map. It is then expected that if these atoms are present in the crystal structure, the electron density for the omitted atoms will be seen as positive features in this map. This, however, is complicated by the flat bulk-solvent model which is almost universally used in modern crystallographic refinement programs. This model postulates constant electron density at any voxel of the unit-cell volume that is not occupied by the atomic model. Consequently, if the density arising from the omitted atoms is weak then the bulk-solvent model may obscure it further. A possible solution to this problem is to prevent bulk solvent from entering the selected OMIT regions, which may improve the interpretative power of residual maps. This approach is called a polder (OMIT) map. Polder OMIT maps can be particularly useful for displaying weak densities of ligands, solvent molecules, side chains, alternative conformations and residues both in terminal regions and in loops. As a result, the tools described in this manuscript have been implemented and are available in PHENIX.

  7. Construction of Reference Chromosome-Scale Pseudomolecules for Potato: Integrating the Potato Genome with Genetic and Physical Maps

    PubMed Central

    Sharma, Sanjeev Kumar; Bolser, Daniel; de Boer, Jan; Sønderkær, Mads; Amoros, Walter; Carboni, Martin Federico; D’Ambrosio, Juan Martín; de la Cruz, German; Di Genova, Alex; Douches, David S.; Eguiluz, Maria; Guo, Xiao; Guzman, Frank; Hackett, Christine A.; Hamilton, John P.; Li, Guangcun; Li, Ying; Lozano, Roberto; Maass, Alejandro; Marshall, David; Martinez, Diana; McLean, Karen; Mejía, Nilo; Milne, Linda; Munive, Susan; Nagy, Istvan; Ponce, Olga; Ramirez, Manuel; Simon, Reinhard; Thomson, Susan J.; Torres, Yerisf; Waugh, Robbie; Zhang, Zhonghua; Huang, Sanwen; Visser, Richard G. F.; Bachem, Christian W. B.; Sagredo, Boris; Feingold, Sergio E.; Orjeda, Gisella; Veilleux, Richard E.; Bonierbale, Merideth; Jacobs, Jeanne M. E.; Milbourne, Dan; Martin, David Michael Alan; Bryan, Glenn J.

    2013-01-01

    The genome of potato, a major global food crop, was recently sequenced. The work presented here details the integration of the potato reference genome (DM) with a new sequence-tagged site marker−based linkage map and other physical and genetic maps of potato and the closely related species tomato. Primary anchoring of the DM genome assembly was accomplished by the use of a diploid segregating population, which was genotyped with several types of molecular genetic markers to construct a new ~936 cM linkage map comprising 2469 marker loci. In silico anchoring approaches used genetic and physical maps from the diploid potato genotype RH89-039-16 (RH) and tomato. This combined approach has allowed 951 superscaffolds to be ordered into pseudomolecules corresponding to the 12 potato chromosomes. These pseudomolecules represent 674 Mb (~93%) of the 723 Mb genome assembly and 37,482 (~96%) of the 39,031 predicted genes. The superscaffold order and orientation within the pseudomolecules are closely collinear with independently constructed high density linkage maps. Comparisons between marker distribution and physical location reveal regions of greater and lesser recombination, as well as regions exhibiting significant segregation distortion. The work presented here has led to a greatly improved ordering of the potato reference genome superscaffolds into chromosomal “pseudomolecules”. PMID:24062527

  8. Spiral model of pitch

    NASA Astrophysics Data System (ADS)

    Miller, James D.

    2003-10-01

    A spiral model of pitch interrelates tone chroma, tone height, equal temperament scales, and a cochlear map. Donkin suggested in 1870 that the pitch of tones could be well represented by an equiangular spiral. More recently, the cylindrical helix has been popular for representing tone chroma and tone height. Here it is shown that tone chroma, tone height, and cochlear position can be conveniently related to tone frequency via a planar spiral. For this "equal-temperament spiral" (ET Spiral), tone chroma is conceived as a circular array with semitones at 30° intervals. The frequency of sound on the cent scale (re 16.351 Hz) is represented by the radius of the spiral defined by r = (1200/2π)θ, where θ is in radians. By these definitions, one revolution represents one octave, 1200 cents, 30° represents a semitone, the radius relates θ to cents in accordance with equal temperament (ET) tuning, and the arclength of the spiral matches the mapping of sound frequency to the basilar membrane. Thus, the ET Spiral gives tone chroma as θ, tone height as the cent scale, and the cochlear map as the arclength. The possible implications and directions for further work are discussed.
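
    A minimal sketch of the ET Spiral construction as described: a tone's frequency is converted to cents re 16.351 Hz, the angle advances one revolution per octave (30° per semitone), and the radius follows r = (1200/2π)θ so that it is numerically equal to the cent value. The Cartesian conversion is my addition for plotting.

        # Hedged sketch of the equal-temperament spiral described in the abstract.
        import numpy as np

        F_REF = 16.351  # Hz, reference for the cent scale

        def spiral_point(freq_hz):
            cents = 1200.0 * np.log2(freq_hz / F_REF)    # tone height on the cent scale
            theta = 2.0 * np.pi * cents / 1200.0         # chroma angle, radians (one turn per octave)
            r = (1200.0 / (2.0 * np.pi)) * theta         # radius; numerically equal to cents
            return r * np.cos(theta), r * np.sin(theta), cents

        for f in (261.63, 440.0, 523.25):                # C4, A4, C5
            print(f, spiral_point(f))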

  9. Classification and assessment of retrieved electron density maps in coherent X-ray diffraction imaging using multivariate analysis.

    PubMed

    Sekiguchi, Yuki; Oroguchi, Tomotaka; Nakasako, Masayoshi

    2016-01-01

    Coherent X-ray diffraction imaging (CXDI) is one of the techniques used to visualize structures of non-crystalline particles of micrometer to submicrometer size from materials and biological science. In the structural analysis of CXDI, the electron density map of a sample particle can theoretically be reconstructed from a diffraction pattern by using phase-retrieval (PR) algorithms. However, in practice, the reconstruction is difficult because diffraction patterns are affected by Poisson noise and missing data in small-angle regions due to the beam stop and the saturation of detector pixels. In contrast to X-ray protein crystallography, in which the phases of diffracted waves are experimentally estimated, phase retrieval in CXDI relies entirely on the computational procedure driven by the PR algorithms. Thus, objective criteria and methods to assess the accuracy of retrieved electron density maps are necessary in addition to conventional parameters monitoring the convergence of PR calculations. Here, a data analysis scheme, named ASURA, is proposed which selects the most probable electron density maps from a set of maps retrieved from 1000 different random seeds for a diffraction pattern. Each electron density map composed of J pixels is expressed as a point in a J-dimensional space. Principal component analysis is applied to describe characteristics in the distribution of the maps in the J-dimensional space. When the distribution is characterized by a small number of principal components, the distribution is classified using the k-means clustering method. The classified maps are evaluated by several parameters to assess the quality of the maps. Using the proposed scheme, structure analysis of a diffraction pattern from a non-crystalline particle is conducted in two stages: estimation of the overall shape and determination of the fine structure inside the support shape. In each stage, the most accurate and probable density maps are objectively selected. The validity of the proposed scheme is examined by application to diffraction data that were obtained from an aggregate of metal particles and a biological specimen at the XFEL facility SACLA using custom-made diffraction apparatus.
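
    The classification stage described above (flatten each map to a J-dimensional point, reduce with principal component analysis, then cluster with k-means) can be sketched as follows; this is not the ASURA implementation, and the maps below are synthetic.

        # Hedged sketch: PCA followed by k-means clustering of retrieved maps.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        n_maps, side = 1000, 32                       # 1000 retrieved maps of 32x32 pixels
        base_a = rng.random((side, side))
        base_b = rng.random((side, side))
        maps = np.array([(base_a if i % 2 else base_b) + 0.1 * rng.standard_normal((side, side))
                         for i in range(n_maps)])

        X = maps.reshape(n_maps, -1)                  # each map is a point in J = 1024 dimensions

        pca = PCA(n_components=5)
        scores = pca.fit_transform(X)                 # distribution described by a few components

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
        # candidate "most probable" maps could then be drawn from the most populated cluster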

  10. The 4 Ms CHANDRA Deep Field-South Number Counts Apportioned by Source Class: Pervasive Active Galactic Nuclei and the Ascent of Normal Galaxies

    NASA Technical Reports Server (NTRS)

    Lehmer, Bret D.; Xue, Y. Q.; Brandt, W. N.; Alexander, D. M.; Bauer, F. E.; Brusa, M.; Comastri, A.; Gilli, R.; Hornschemeier, A. E.; Luo, B.; hide

    2012-01-01

    We present 0.5-2 keV, 2-8 keV, 4-8 keV, and 0.5-8 keV (hereafter soft, hard, ultra-hard, and full bands, respectively) cumulative and differential number-count (log N-log S) measurements for the recently completed ≈4 Ms Chandra Deep Field-South (CDF-S) survey, the deepest X-ray survey to date. We implement a new Bayesian approach, which allows reliable calculation of number counts down to flux limits that are factors of ≈1.9-4.3 times fainter than the previously deepest number-count investigations. In the soft band (SB), the most sensitive bandpass in our analysis, the ≈4 Ms CDF-S reaches a maximum source density of ≈27,800 deg⁻². By virtue of the exquisite X-ray and multiwavelength data available in the CDF-S, we are able to measure the number counts from a variety of source populations (active galactic nuclei (AGNs), normal galaxies, and Galactic stars) and subpopulations (as a function of redshift, AGN absorption, luminosity, and galaxy morphology) and test models that describe their evolution. We find that AGNs still dominate the X-ray number counts down to the faintest flux levels for all bands and reach a limiting SB source density of ≈14,900 deg⁻², the highest reliable AGN source density measured at any wavelength. We find that the normal-galaxy counts rise rapidly near the flux limits and, at the limiting SB flux, reach source densities of ≈12,700 deg⁻² and make up 46% ± 5% of the total number counts. The rapid rise of the galaxy counts toward faint fluxes, as well as significant normal-galaxy contributions to the overall number counts, indicates that normal galaxies will overtake AGNs just below the ≈4 Ms SB flux limit and will provide a numerically significant new X-ray source population in future surveys that reach below the ≈4 Ms sensitivity limit. We show that a future ≈10 Ms CDF-S would allow for a significant increase in X-ray-detected sources, with many of the new sources being cosmologically distant (z ≳ 0.6) normal galaxies.

  11. Making Sense of 'Big Data' in Provenance Studies

    NASA Astrophysics Data System (ADS)

    Vermeesch, P.

    2014-12-01

    Huge online databases can be 'mined' to reveal previously hidden trends and relationships in society. One could argue that sedimentary geology has entered a similar era of 'Big Data', as modern provenance studies routinely apply multiple proxies to dozens of samples. Just like the Internet, sedimentary geology now requires specialised statistical tools to interpret such large datasets. These can be organised on three levels of progressively higher order.
    A single sample: The most effective way to reveal the provenance information contained in a representative sample of detrital zircon U-Pb ages is to use probability density estimators such as histograms and kernel density estimates. The widely popular 'probability density plots' implemented in IsoPlot and AgeDisplay compound analytical uncertainty with geological scatter and are therefore invalid.
    Several samples: Multi-panel diagrams comprising many detrital age distributions or compositional pie charts quickly become unwieldy and uninterpretable. For example, if there are N samples in a study, then the number of pairwise comparisons between samples increases quadratically as N(N-1)/2. This is simply too much information for the human eye to process. To solve this problem, it is necessary to (a) express the 'distance' between two samples as a simple scalar and (b) combine all N(N-1)/2 such values in a single two-dimensional 'map', grouping similar and pulling apart dissimilar samples. This can be easily achieved using simple statistics-based dissimilarity measures and a standard statistical method called Multidimensional Scaling (MDS).
    Several methods: Suppose that we use four provenance proxies: bulk petrography, chemistry, heavy minerals and detrital geochronology. This will result in four MDS maps, each of which likely shows slightly different trends and patterns. To deal with such cases, it may be useful to use a related technique called 'three way multidimensional scaling'. This results in two graphical outputs: an MDS map, and a map with 'weights' showing to what extent the different provenance proxies influence the horizontal and vertical axes of the MDS map. Thus, detrital data can not only inform the user about the provenance of sediments, but also about the causal relationships between the mineralogy, geochronology and chemistry.
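
    A minimal sketch of the 'several samples' step: pairwise dissimilarities between detrital age samples (here the two-sample Kolmogorov-Smirnov statistic, one common choice) are embedded in two dimensions with multidimensional scaling. The dissimilarity measure, library and synthetic ages are assumptions, not the author's exact workflow.

        # Hedged sketch: dissimilarity matrix + metric MDS for detrital age samples.
        import numpy as np
        from scipy.stats import ks_2samp
        from sklearn.manifold import MDS

        rng = np.random.default_rng(0)
        samples = [rng.normal(loc, 50, 100) for loc in (300, 320, 600, 610, 1000)]  # ages, Ma

        n = len(samples)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                D[i, j] = D[j, i] = ks_2samp(samples[i], samples[j]).statistic

        mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
        coords = mds.fit_transform(D)   # one (x, y) point per sample; similar samples plot close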

  12. The Three-dimensional Spatial Distribution of Interstellar Gas in the Milky Way: Implications for Cosmic Rays and High-energy Gamma-ray Emissions

    NASA Astrophysics Data System (ADS)

    Jóhannesson, Guđlaugur; Porter, Troy A.; Moskalenko, Igor V.

    2018-03-01

    Direct measurements of cosmic ray (CR) species combined with observations of their associated γ-ray emissions can be used to constrain models of CR propagation, trace the structure of the Galaxy, and search for signatures of new physics. The spatial density distribution of interstellar gas is a vital element for all these studies. So far, models have employed the 2D cylindrically symmetric geometry, but their accuracy is well behind that of the available data. In this paper, 3D spatial density models for neutral and molecular hydrogen are constructed based on empirical model fitting to gas line-survey data. The developed density models incorporate spiral arms and account for the warping of the disk, and the increasing gas scale height with radial distance from the Galactic center. They are employed together with the GALPROP CR propagation code to investigate how the new 3D gas models affect calculations of CR propagation and high-energy γ-ray intensity maps. The calculations reveal non-trivial features that are directly related to the new gas models. The best-fit values for propagation model parameters employing 3D gas models are presented and they differ significantly from those derived with the 2D gas density models that have been widely used. The combination of 3D CR and gas density models provide a more realistic basis for the interpretation of non-thermal emissions from the Galaxy.

  13. A first linkage map of globe artichoke (Cynara cardunculus var. scolymus L.) based on AFLP, S-SAP, M-AFLP and microsatellite markers.

    PubMed

    Lanteri, S; Acquadro, A; Comino, C; Mauro, R; Mauromicale, G; Portis, E

    2006-05-01

    We present the first genetic maps of globe artichoke (Cynara cardunculus var. scolymus L. 2n=2x=34), constructed with a two-way pseudo-testcross strategy. An F1 mapping population of 94 individuals was generated between a late-maturing, non-spiny type and an early-maturing spiny type. The 30 AFLP, 13 M-AFLP and 9 S-SAP primer combinations chosen identified, respectively, 352, 38 and 41 polymorphic markers. Of 32 microsatellite primer pairs tested, 12 identified heterozygous loci in one or other parent, and 7 were fully informative as they segregated in both parents. The female parent map comprised 204 loci, spread over 18 linkage groups and spanned 1330.5 cM with a mean marker density of 6.5 cM. The equivalent figures for the male parent map were 180 loci, 17 linkage groups, 1239.4 and 6.9 cM. About 3% of the AFLP and AFLP-derived markers displayed segregation distortion with a P value below 0.01, and were not used for map construction. All the SSR loci were included in the linkage analysis, although one locus did show some segregation distortion. The presence of 78 markers in common to both maps allowed the alignment of 16 linkage groups. The maps generated provide a firm basis for the mapping of agriculturally relevant traits, which will then open the way for the application of a marker-assisted selection breeding strategy in this species.

  14. Improving experimental phases for strong reflections prior to density modification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uervirojnangkoorn, Monarin; University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck; Hilgenfeld, Rolf, E-mail: hilgenfeld@biochem.uni-luebeck.de

    A genetic algorithm has been developed to optimize the phases of the strongest reflections in SIR/SAD data. This is shown to facilitate density modification and model building in several test cases. Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. A computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
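
    The following is a deliberately small toy version of the idea, not the SISA program: an evolutionary-style search (selection plus mutation only, no crossover) adjusts the phases of a handful of strong 'reflections' of a one-dimensional synthetic density and uses the skewness of the resulting map as the target function.

        # Hedged toy sketch: optimize a few strong phases with skewness as the fitness.
        import numpy as np
        from scipy.stats import skew

        rng = np.random.default_rng(0)
        N = 128
        true_density = np.zeros(N)
        true_density[[20, 50, 90]] = (5.0, 3.0, 4.0)              # sharp positive peaks
        F = np.fft.fft(true_density)                              # complex "structure factors"
        strong = np.argsort(-np.abs(F[1:N // 2]))[:8] + 1         # indices of strong reflections

        def map_from_phases(phases):
            Fmod = F.copy()
            Fmod[strong] = np.abs(F[strong]) * np.exp(1j * phases)  # keep amplitudes, swap phases
            Fmod[-strong] = np.conj(Fmod[strong])                   # keep the map real
            return np.fft.ifft(Fmod).real

        def fitness(phases):
            return skew(map_from_phases(phases))                  # skewness as the target function

        pop = rng.uniform(0, 2 * np.pi, size=(40, strong.size))   # initial random phase sets
        for generation in range(200):
            scores = np.array([fitness(p) for p in pop])
            parents = pop[np.argsort(-scores)[:10]]               # keep the fittest
            children = parents[rng.integers(0, 10, size=30)] + rng.normal(0, 0.3, (30, strong.size))
            pop = np.vstack([parents, children])                  # selection + mutation

        best_phases = pop[np.argmax([fitness(p) for p in pop])]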

  15. Spatiotemporal Built-up Land Density Mapping Using Various Spectral Indices in Landsat-7 ETM+ and Landsat-8 OLI/TIRS (Case Study: Surakarta City)

    NASA Astrophysics Data System (ADS)

    Risky, Yanuar S.; Aulia, Yogi H.; Widayani, Prima

    2017-12-01

    Variations in spectral indices support rapid and accurate extraction of information such as built-up density. However, the optimal choice of spectral wavelengths for built-up density extraction has not been established. This study explains and compares the capabilities of five spectral indices for spatiotemporal built-up density mapping using Landsat-7 ETM+ and Landsat-8 OLI/TIRS imagery of Surakarta City in 2002 and 2015. The indices comprise three mid-infrared (MIR)-based indices, namely the Normalized Difference Built-up Index (NDBI), the Urban Index (UI) and the Built-up index, and two visible-based indices, VrNIR-BI (visible red) and VgNIR-BI (visible green). Linear regression between ground-reference samples taken from Google Earth imagery of 2002 and 2015 and the spectral indices was used to estimate built-up land density. The reference data comprised 27 samples for model building and 7 samples for the accuracy test. The built-up density maps are divided into 9 classes: unclassified, 0-12.5%, 12.5-25%, 25-37.5%, 37.5-50%, 50-62.5%, 62.5-75%, 75-87.5% and 87.5-100%. The accuracies of built-up land density mapping in 2002 and 2015 were, respectively, 81.823% and 73.235% for VrNIR-BI, 78.934% and 69.028% for VgNIR-BI, 34.870% and 74.365% for NDBI, 43.273% and 64.398% for UI, and 59.755% and 72.664% for the Built-up index. According to all spectral indices, the built-up land density of Surakarta City increased over the study period. VgNIR-BI shows the better capability for built-up land density mapping with Landsat-7 ETM+ and Landsat-8 OLI/TIRS.
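
    The index-plus-regression workflow described above can be sketched as follows; the use of SWIR1 and NIR reflectance for NDBI follows the usual definition, while the synthetic imagery, sample locations and regression form are assumptions for illustration only.

        # Hedged sketch: NDBI followed by a linear regression against reference built-up density.
        import numpy as np

        def ndbi(swir, nir):
            return (swir - nir) / (swir + nir + 1e-9)

        rng = np.random.default_rng(0)
        swir = rng.uniform(0.1, 0.4, size=(100, 100))        # synthetic reflectance images
        nir = rng.uniform(0.1, 0.4, size=(100, 100))
        index = ndbi(swir, nir)

        # 27 calibration samples: index value vs. reference density (percent built-up)
        rows, cols = rng.integers(0, 100, 27), rng.integers(0, 100, 27)
        x = index[rows, cols]
        reference_density = np.clip(50 + 120 * x + rng.normal(0, 5, 27), 0, 100)

        slope, intercept = np.polyfit(x, reference_density, 1)    # linear regression
        density_map = np.clip(slope * index + intercept, 0, 100)  # percent built-up, 0-100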

  16. Building perceptual color maps for visualizing interval data

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Rogowitz, Bernice E.; Pelah, Adar; Cohen, Aron

    2000-06-01

    In visualization, a 'color map' maps a range of data values onto a scale of colors. However, unless a color map is carefully constructed, visual artifacts can be produced. This problem has stimulated considerable interest in creating perceptually based color maps, that is, color maps where equal steps in data value are perceived as equal steps in the color map [Robertson (1988); Pizer (1981); Green (1992); Lefkowitz and Herman (1992)]. In Rogowitz and Treinish (1996, 1998) and in Bergman, Treinish and Rogowitz (1995), we demonstrated that color maps based on luminance or saturation could be good candidates for satisfying this requirement. This work is based on the seminal work of S.S. Stevens (1966), who measured the perceived magnitude of physical stimuli of different intensities. He found that for many physical scales, including luminance (cd/m2) and saturation (the 'redness' of a long-wavelength light source), equal ratios in stimulus value produced equal ratios in perceptual magnitude. He interpreted this as indicating that there exists in human cognition a common scale for representing magnitude, and we scale the effects of different physical stimuli to this internal scale. In Rogowitz, Kalvin, Pelah and Cohen (1999), we used a psychophysical technique to test this hypothesis as it applies to the creation of perceptually uniform color maps. We constructed color maps as trajectories through three color spaces: a common computer graphics standard (uncalibrated HSV), a common perceptually-based engineering standard for creating visual stimuli (L*a*b*), and a space commonly used in the graphic arts (Munsell). For each space, we created color scales that varied linearly in hue, saturation, or luminance and measured the detectability of increments in hue, saturation or luminance for each of these color scales. We measured the amplitude of the just-detectable Gaussian increments at 20 different values along the range of each color map. For all three color spaces, we found that luminance-based color maps provided the most perceptually uniform representations of the data. The just-detectable increment was constant at all points in the color map, with the exception of the lowest-luminance values, where a larger increment was required. The saturation-based color maps provided less sensitivity than the luminance-based color maps, requiring much larger increments for detection. For the hue-based color maps, the size of the increment required for detection varied across the range. For example, for the standard 'rainbow' color map (uncalibrated HSV, hue-varying map), a step in the 'green' region required an increment 16 times the size of the increment required in the 'cyan' part of the range. That is, the rainbow color map would not successfully represent changes in the data in the 'green' region of this color map. In this paper, we extend this research by studying the detectability of spatially-modulated Gabor targets based on these hue, saturation and luminance scales. Since, in visualization, the user is called upon to detect and identify patterns that vary in their spatial characteristics, it is important to study how different types of color maps represent data with varying spatial properties. To do so, we measured modulation thresholds for low- (0.2 c/deg) and high-spatial-frequency (4.0 c/deg) Gabor patches and compared them with the Gaussian results. As before, we measured increment thresholds for hue, saturation, and luminance modulations.
These color scales were constructed as trajectories along the three perceptual dimensions of color (hue, saturation, and luminance) in two color spaces, uncalibrated HSV and calibrated L*a*b. This allowed us to study how the three perceptual dimensions represent magnitude information for test patterns varying in spatial frequency. This design also allowed us to test the hypothesis that the luminance channel best carries high-spatial frequency information while the saturation channel best represents low spatial-frequency information (Mullen 1985; DeValois and DeValois 1988).
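
    A minimal sketch of how a luminance-based, perceptually motivated color scale can be built: L* is varied in equal steps in CIELAB at a fixed (a*, b*) and the result is converted to sRGB. The use of scikit-image for the Lab-to-sRGB conversion, and the particular chroma values, are my choices rather than the authors'.

        # Hedged sketch: a luminance ramp with equal L* steps, converted from CIELAB to sRGB.
        import numpy as np
        from skimage.color import lab2rgb

        def luminance_colormap(n=256, a=20.0, b=-30.0):
            """n sRGB colors with equal steps in L* (5..95) at a constant (a*, b*)."""
            L = np.linspace(5.0, 95.0, n)
            lab = np.stack([L, np.full(n, a), np.full(n, b)], axis=-1)
            rgb = lab2rgb(lab.reshape(1, n, 3)).reshape(n, 3)   # out-of-gamut values are clipped
            return rgb

        cmap = luminance_colormap()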

  17. Adaptive noise correction of dual-energy computed tomography images.

    PubMed

    Maia, Rafael Simon; Jacob, Christian; Hara, Amy K; Silva, Alvin C; Pavlicek, William; Mitchell, J Ross

    2016-04-01

    Noise reduction in material density images is a necessary preprocessing step for the correct interpretation of dual-energy computed tomography (DECT) images. In this paper we describe a new method based on local adaptive processing to reduce noise in DECT images. An adaptive neighborhood Wiener (ANW) filter was implemented and customized to use local characteristics of material density images. The ANW filter employs a three-level wavelet approach, combined with the application of an anisotropic diffusion filter. Material density images and virtual monochromatic images are noise corrected with two resulting noise maps. The algorithm was applied and quantitatively evaluated in a set of 36 images. From that set of images, three are shown here, and nine more are shown in the online supplementary material. Processed images had a higher signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) than the raw material density images. The average improvements in SNR and CNR for the material density images were 56.5% and 54.75%, respectively. We developed a new DECT noise reduction algorithm. We demonstrate through a series of quantitative analyses that the algorithm improves the quality of material density images and virtual monochromatic images.
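
    As a much-simplified stand-in for the pipeline described above (the published method combines an adaptive-neighborhood Wiener filter with wavelets and anisotropic diffusion), the sketch below applies SciPy's local Wiener filter to a synthetic image and computes ROI-based SNR and CNR before and after.

        # Hedged sketch: local Wiener denoising plus ROI-based SNR/CNR measurements.
        import numpy as np
        from scipy.signal import wiener

        rng = np.random.default_rng(0)
        image = np.full((128, 128), 100.0)
        image[32:96, 32:96] = 140.0                      # a "material" region of interest
        noisy = image + rng.normal(0, 15.0, image.shape)

        denoised = wiener(noisy, mysize=5)               # locally adaptive Wiener filtering

        def snr(img, roi):
            vals = img[roi]
            return vals.mean() / vals.std()

        def cnr(img, roi, background):
            return abs(img[roi].mean() - img[background].mean()) / img[background].std()

        roi = np.s_[40:88, 40:88]
        bg = np.s_[0:24, 0:24]
        print("SNR raw/denoised:", snr(noisy, roi), snr(denoised, roi))
        print("CNR raw/denoised:", cnr(noisy, roi, bg), cnr(denoised, roi, bg))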

  18. VizieR Online Data Catalog: Star-forming potential in the Perseus complex (Mercimek+, 2017)

    NASA Astrophysics Data System (ADS)

    Mercimek, S.; Myers, P. C.; Lee, K. I.; Sadavoy, S. I.

    2018-05-01

    We used published catalogs of cores and YSOs at different wavelengths ranging from sub-millimeter (850 μm) to infrared (1.25 μm). We focus on seven clumps in Perseus, which Sadavoy et al. (2014ApJ...787L..18S) showed in their Figure 1. They defined these clumps and their boundaries using a fitted Herschel-derived column density map. The column density threshold of AV~7 mag is proposed as a star formation threshold by Andre et al. (2010A&A...518L.102A), Lada et al. (2010ApJ...724..687L), and Evans et al. (2014ApJ...782..114E) and is equal to N(H2) ~ 5×10²¹ cm⁻² (see also, Kirk et al. 2006, J/ApJ/646/1009; Andre et al. 2010A&A...518L.102A). We considered a core or YSO to be associated with a clump if it is located within the AV=7 mag contour of that clump from Sadavoy et al. (2014ApJ...787L..18S). We define a "source" to be a starless core or a YSO. (7 data files).

  19. New advantages of the combined GPS and GLONASS observations for high-latitude ionospheric irregularities monitoring: case study of June 2015 geomagnetic storm

    NASA Astrophysics Data System (ADS)

    Cherniak, Iurii; Zakharenkova, Irina

    2017-05-01

    Monitoring, tracking and nowcasting of ionospheric plasma density disturbances using dual-frequency measurements of Global Positioning System (GPS) signals have been carried out effectively for several decades. The recent rapid growth and modernization of the ground-based segment provide an opportunity to establish a large database consisting of more than 6000 stations worldwide that provide openly accessible GPS signal measurements. Apart from the GPS signals, at least two-thirds of these stations simultaneously receive signals transmitted by another Global Navigation Satellite System (GNSS), the Russian GLONASS system. Today, GLONASS signal measurements are used mainly in navigation and geodesy and only rarely for ionospheric research. We present the first results demonstrating the advantages of using several independent but compatible GNSS systems such as GPS and GLONASS to improve the permanent monitoring of high-latitude ionospheric irregularities. For the first time, high-resolution two-dimensional maps of the ROTI (rate of TEC index) perturbation were made using not only GPS but also GLONASS measurements. We extend the use of the ROTI maps for analyzing the distribution of ionospheric irregularities. We demonstrate that meridional slices of the ROTI maps can be used effectively to study the occurrence and temporal evolution of ionospheric irregularities. Meridional slices of geographical sectors with a high density of GPS and GLONASS measurements can represent the spatio-temporal dynamics of intense ionospheric plasma density irregularities with very high resolution, and they can be used for detailed study of the effects of space weather drivers on the generation, development and lifetimes of ionospheric irregularities. Using a representative database of 5800 ground-based GNSS stations located worldwide, we have investigated the occurrence of high-latitude ionospheric plasma density irregularities during the geomagnetic storm of June 22-23, 2015.
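
    For context, ROTI is conventionally computed as the standard deviation, over a 5-minute window, of the rate of change of TEC expressed in TECU per minute; the sketch below implements that standard definition on a synthetic 30 s TEC series and is not the authors' processing chain.

        # Hedged sketch of the conventional ROT/ROTI computation from a TEC time series.
        import numpy as np

        def roti(tec_tecu, dt_s=30.0, window_s=300.0):
            rot = np.diff(tec_tecu) / (dt_s / 60.0)           # rate of TEC, TECU per minute
            n = int(window_s / dt_s)                          # samples per 5-minute window
            out = np.full(rot.size, np.nan)
            for i in range(n, rot.size + 1):
                out[i - 1] = np.std(rot[i - n:i])             # running standard deviation
            return out

        t = np.arange(0, 3600, 30.0)                          # one hour, 30 s sampling
        rng = np.random.default_rng(2)
        tec = 20 + 2 * np.sin(2 * np.pi * t / 3600) + 0.3 * rng.standard_normal(t.size)
        roti_series = roti(tec)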

  20. A high density linkage map of the ancestral diploid strawberry F. iinumae using SNP markers from the ISTRAW90 array and GBS

    USDA-ARS?s Scientific Manuscript database

    Fragaria iinumae is recognized as an ancestor of the octoploid strawberry species, including the cultivated strawberry, Fragaria ×ananassa. Here we report the construction of the first high density linkage map for F. iinumae. The map is based on two high-throughput techniques of single nucleotide p...

  1. Demonstration of a Strategy to Perform Two-Dimensional Diode Laser Tomography

    DTIC Science & Technology

    2008-03-01

    training set allows interpolation between beam paths resulting in temperature and density maps. Finally, the TDLAS temperature and density maps are...

  2. Use of Satellite Remote Sensing Data in the Mapping of Global Landslide Susceptibility

    NASA Technical Reports Server (NTRS)

    Hong, Yang; Adler, Robert F.; Huffman, George J.

    2007-01-01

    Satellite remote sensing data have significant potential use in the analysis of natural hazards such as landslides. Relying on recent advances in satellite remote sensing and geographic information system (GIS) techniques, this paper aims to map landslide susceptibility over most of the globe using a GIS-based weighted linear combination method. First, six relevant landslide-controlling factors are derived from geospatial remote sensing data and coded into a GIS system. Next, continuous susceptibility values from low to high are assigned to each of the six factors. Then, a continuous scale of a global landslide susceptibility index is derived using a GIS weighted linear combination based on each factor's relative significance to the process of landslide occurrence (e.g., slope is the most important factor, soil types and soil texture are also primary-level parameters, while elevation, land cover types, and drainage density are secondary in importance). Finally, the continuous index map is further classified into six susceptibility categories. Results show that the hot spots of landslide-prone regions include the Pacific Rim, the Himalayas and South Asia, the Rocky Mountains, the Appalachian Mountains, the Alps, and parts of the Middle East and Africa. India, China, Nepal, Japan, the USA, and Peru are shown to have landslide-prone areas. This first-cut global landslide susceptibility map forms a starting point to provide a global view of landslide risks and may be used in conjunction with satellite-based precipitation information to potentially detect areas with significant landslide potential due to heavy rainfall.
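
    The weighted linear combination itself reduces to a per-pixel weighted sum of factor rasters followed by classification, as in the sketch below; the factor rasters are synthetic and the weights are illustrative stand-ins for the relative significance ranking described above (slope weighted most heavily).

        # Hedged sketch: GIS-style weighted linear combination of susceptibility factors.
        import numpy as np

        rng = np.random.default_rng(0)
        shape = (200, 200)
        factors = {                                    # each factor scaled to 0-1 susceptibility
            "slope": rng.random(shape),
            "soil_type": rng.random(shape),
            "soil_texture": rng.random(shape),
            "elevation": rng.random(shape),
            "land_cover": rng.random(shape),
            "drainage_density": rng.random(shape),
        }
        weights = {"slope": 0.30, "soil_type": 0.20, "soil_texture": 0.20,
                   "elevation": 0.10, "land_cover": 0.10, "drainage_density": 0.10}

        susceptibility = sum(weights[k] * factors[k] for k in factors)   # continuous index

        # classify the continuous index into six susceptibility categories
        thresholds = np.quantile(susceptibility, [1/6, 2/6, 3/6, 4/6, 5/6])
        categories = np.digitize(susceptibility, thresholds)             # values 0..5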

  3. Face Value: Towards Robust Estimates of Snow Leopard Densities.

    PubMed

    Alexander, Justine S; Gopalaswamy, Arjun M; Shi, Kun; Riordan, Philip

    2015-01-01

    When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total number of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the Spatial Capture Recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge in achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality.

  4. Face Value: Towards Robust Estimates of Snow Leopard Densities

    PubMed Central

    2015-01-01

    When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge as they occur in low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total number of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the Spatial Capture Recapture models were not optimal with respect to bias and precision (RMSEs for density parameters less than or equal to 0.87). Our results underline the critical challenge in achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally by optimising effective camera capture and photographic data quality. PMID:26322682

  5. Unisensory processing and multisensory integration in schizophrenia: a high-density electrical mapping study.

    PubMed

    Stone, David B; Urrea, Laura J; Aine, Cheryl J; Bustillo, Juan R; Clark, Vincent P; Stephen, Julia M

    2011-10-01

    In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. CAD-DRASTIC: chloride application density combined with DRASTIC for assessing groundwater vulnerability to road salt application

    NASA Astrophysics Data System (ADS)

    Salek, Mansour; Levison, Jana; Parker, Beth; Gharabaghi, Bahram

    2018-06-01

    Road salt is pervasively used throughout Canada and in other cold regions during winter. For cities relying exclusively on groundwater, it is important to plan and minimize the application of salt accordingly to mitigate the adverse effects of high chloride concentrations in water supply aquifers. The use of geospatial data (road network, land use, Quaternary and bedrock geology, average annual recharge, water-table depth, soil distribution, topography) in the DRASTIC methodology provides an efficient way of distinguishing salt-vulnerable areas associated with groundwater supply wells, to aid in the implementation of appropriate management practices for road salt application in urban areas. This research presents a GIS-based methodology to accomplish a vulnerability analysis for 12 municipal water supply wells within the City of Guelph, Ontario, Canada. The chloride application density (CAD) value at each supply well is calculated and related to the measured groundwater chloride concentrations and further combined with soil media and aquifer vadose- and saturated-zone properties used in DRASTIC. This combined approach, CAD-DRASTIC, is more accurate than existing groundwater vulnerability mapping methods and can be used by municipalities and other water managers to further improve groundwater protection related to road salt application.

  7. The interstellar medium and the highly ionized species observed in the spectrum of the nearby white dwarf G191-B2B

    NASA Technical Reports Server (NTRS)

    Bruhweiler, F. C.; Kondo, Y.

    1981-01-01

    High-resolution spectra of the nearby (48 pc) white dwarf G191-B2B, obtained with the International Ultraviolet Explorer, reveal sharp resonance lines of N V, C IV, and Si IV. The origin of these features is most likely linked to the white dwarf, possibly being formed in an expanding halo around the star. Interstellar lines of C II, N I, Mg II, Si II, and Fe II are also seen in the spectrum. Analysis of these features indicates an average neutral hydrogen number density of 0.064 for this line of sight. In combination with the recent EUV and soft X-ray results, this is interpreted to mean that the interstellar medium in the most immediate solar vicinity has the normal density n approximately equal to 0.1/cu cm and lower ionization, while just beyond it, at least in some directions, lies a hot, lower-density plasma. These results are apparently in conflict with the model of the interstellar medium by McKee and Ostriker (1977) in its present form.

  8. Gravitational body forces focus North American intraplate earthquakes

    USGS Publications Warehouse

    Levandowski, William Brower; Zellman, Mark; Briggs, Richard

    2017-01-01

    Earthquakes far from tectonic plate boundaries generally exploit ancient faults, but not all intraplate faults are equally active. The North American Great Plains exemplify such intraplate earthquake localization, with both natural and induced seismicity generally clustered in discrete zones. Here we use seismic velocity, gravity and topography to generate a 3D lithospheric density model of the region; subsequent finite-element modelling shows that seismicity focuses in regions of high-gravity-derived deviatoric stress. Furthermore, predicted principal stress directions generally align with those observed independently in earthquake moment tensors and borehole breakouts. Body forces therefore appear to control the state of stress and thus the location and style of intraplate earthquakes in the central United States with no influence from mantle convection or crustal weakness necessary. These results show that mapping where gravitational body forces encourage seismicity is crucial to understanding and appraising intraplate seismic hazard.

  9. Gravitational body forces focus North American intraplate earthquakes

    PubMed Central

    Levandowski, Will; Zellman, Mark; Briggs, Rich

    2017-01-01

    Earthquakes far from tectonic plate boundaries generally exploit ancient faults, but not all intraplate faults are equally active. The North American Great Plains exemplify such intraplate earthquake localization, with both natural and induced seismicity generally clustered in discrete zones. Here we use seismic velocity, gravity and topography to generate a 3D lithospheric density model of the region; subsequent finite-element modelling shows that seismicity focuses in regions of high-gravity-derived deviatoric stress. Furthermore, predicted principal stress directions generally align with those observed independently in earthquake moment tensors and borehole breakouts. Body forces therefore appear to control the state of stress and thus the location and style of intraplate earthquakes in the central United States with no influence from mantle convection or crustal weakness necessary. These results show that mapping where gravitational body forces encourage seismicity is crucial to understanding and appraising intraplate seismic hazard. PMID:28211459

  10. A Comparison of Spatial Analysis Methods for the Construction of Topographic Maps of Retinal Cell Density

    PubMed Central

    Garza-Gisholt, Eduardo; Hemmi, Jan M.; Hart, Nathan S.; Collin, Shaun P.

    2014-01-01

    Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed ‘by eye’. With the use of a stereological approach to counting neuronal distributions, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation ‘respects’ the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the ‘noise’ caused by artefacts and permits a clearer representation of the dominant, ‘real’ distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome. PMID:24747568
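    As a rough illustration of the interpolation-versus-smoothing contrast discussed above, the sketch below builds a topographic surface from synthetic cell counts in two ways: exact thin plate spline interpolation (scipy's RBFInterpolator) and Gaussian kernel smoothing (a hand-rolled weighted average). The sampling sites, counts and bandwidth are invented; the published R-script is not reproduced here.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
sites = rng.uniform(0, 10, size=(60, 2))          # sampling locations (mm)
counts = 500 + 300 * np.exp(-((sites - 5) ** 2).sum(1) / 4) + rng.normal(0, 40, 60)

# Regular grid on which the topographic map is evaluated.
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])

# 1) Thin plate spline interpolation: honours the observed counts exactly.
tps = RBFInterpolator(sites, counts, kernel="thin_plate_spline")
tps_map = tps(grid).reshape(gx.shape)

# 2) Gaussian kernel smoothing: a weighted average that suppresses noise.
def gaussian_smooth(grid_pts, pts, values, sigma=1.0):
    d2 = ((grid_pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ values) / w.sum(axis=1)

smooth_map = gaussian_smooth(grid, sites, counts, sigma=1.0).reshape(gx.shape)
```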

  11. Using the Mean Shift Algorithm to Make Post Hoc Improvements to the Accuracy of Eye Tracking Data Based on Probable Fixation Locations

    DTIC Science & Technology

    2010-08-01

    astigmatism and other sources, and stay constant from time to time (LC Technologies, 2000). Systematic errors can sometimes reach many degrees of visual angle...Taking the average of all disparities would mean treating each as equally important regardless of whether they are from correct or incorrect mappings. In...likely stop somewhere near the centroid because the large hM basically treats every point equally (or nearly equally if using the multivariate
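    The excerpt above is fragmentary, so for orientation the sketch below shows a generic mean-shift update with a Gaussian kernel: the estimate is pulled toward the local mode of the fixation points, and a large bandwidth (the role played by hM in the excerpt) pulls it toward the overall centroid instead. The gaze samples and bandwidth are synthetic, and this is the textbook algorithm rather than the report's implementation.

```python
import numpy as np

def mean_shift(points, start, h=1.0, iters=50, tol=1e-4):
    """Shift `start` toward the local density mode of `points`
    using a Gaussian kernel of bandwidth h."""
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        w = np.exp(-((points - x) ** 2).sum(axis=1) / (2 * h ** 2))
        new_x = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(new_x - x) < tol:
            break
        x = new_x
    return x

# Synthetic gaze samples clustered near (2, 3); a very large h would pull the
# result toward the overall centroid, as the excerpt above notes.
rng = np.random.default_rng(2)
gaze = np.vstack([rng.normal([2, 3], 0.2, (80, 2)),
                  rng.normal([6, 1], 0.5, (20, 2))])
print(mean_shift(gaze, start=[4.0, 2.0], h=0.8))
```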

  12. Construction of an ultrahigh-density genetic linkage map for Jatropha curcas L. and identification of QTL for fruit yield.

    PubMed

    Xia, Zhiqiang; Zhang, Shengkui; Wen, Mingfu; Lu, Cheng; Sun, Yufang; Zou, Meiling; Wang, Wenquan

    2018-01-01

    As an important biofuel plant, Jatropha curcas L. faces rapidly increasing demand for higher yield. However, genetic analysis of Jatropha and molecular breeding for higher yield have been hampered by the limited number of molecular markers available. An ultrahigh-density linkage map for a Jatropha mapping population of 153 individuals was constructed and covered 1380.58 cM of the Jatropha genome, with an average marker density of 0.403 cM. The genetic linkage map consisted of 3422 SNP and indel markers, which clustered into 11 linkage groups. With this map, 13 repeatable QTLs (reQTLs) for fruit yield traits were identified. Ten reQTLs, qNF-1, qNF-2a, qNF-2b, qNF-2c, qNF-3, qNF-4, qNF-6, qNF-7a, qNF-7b and qNF-8, controlling the number of fruits (NF) mapped to LGs 1, 2, 3, 4, 6, 7 and 8, whereas three reQTLs, qTWF-1, qTWF-2 and qTWF-3, controlling the total weight of fruits (TWF) mapped to LGs 1, 2 and 3, respectively. Interestingly, two critical candidate genes that may regulate Jatropha fruit yield were also identified. We also identified three pleiotropic reQTL pairs associated with both the NF and TWF traits. This study is the first to report the construction of an ultrahigh-density Jatropha genetic linkage map, and the markers used in this study showed great potential for QTL mapping. Thirteen fruit-yield reQTLs and two important candidate genes were identified based on this linkage map. This genetic linkage map will be a useful tool for the localization of other economically important QTLs and candidate genes for Jatropha.

  13. The construction of a high-density linkage map for identifying SNP markers that are tightly linked to a nuclear-recessive major gene for male sterility in Cryptomeria japonica D. Don

    PubMed Central

    2012-01-01

    Background High-density linkage maps facilitate the mapping of target genes and the construction of partial linkage maps around target loci to develop markers for marker-assisted selection (MAS). MAS is quite challenging in conifers because of their large, complex, and poorly-characterized genomes. Our goal was to construct a high-density linkage map to facilitate the identification of markers that are tightly linked to a major recessive male-sterile gene (ms1) for MAS in C. japonica, a species that is important in Japanese afforestation but which causes serious social pollinosis problems. Results We constructed a high-density saturated genetic linkage map for C. japonica using expressed sequence-derived co-dominant single nucleotide polymorphism (SNP) markers, most of which were genotyped using the GoldenGate genotyping assay. A total of 1261 markers were assigned to 11 linkage groups with an observed map length of 1405.2 cM and a mean distance between two adjacent markers of 1.1 cM; the number of linkage groups matched the basic chromosome number in C. japonica. Using this map, we located ms1 on the 9th linkage group and constructed a partial linkage map around the ms1 locus. This enabled us to identify a marker (hrmSNP970_sf) that is closely linked to the ms1 gene, being separated from it by only 0.5 cM. Conclusions Using the high-density map, we located the ms1 gene on the 9th linkage group and constructed a partial linkage map around the ms1 locus. The map distance between the ms1 gene and the tightly linked marker was only 0.5 cM. The identification of markers that are tightly linked to the ms1 gene will facilitate the early selection of male-sterile trees, which should expedite C. japonica breeding programs aimed at alleviating pollinosis problems without harming productivity. PMID:22424262

  14. Identification of early-stage usual interstitial pneumonia from low-dose chest CT scans using fractional high-density lung distribution

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Salvatore, Mary; Liu, Shuang; Jirapatnakul, Artit; Yankelevitz, David F.; Henschke, Claudia I.; Reeves, Anthony P.

    2017-03-01

    A fully-automated computer algorithm has been developed to identify early-stage Usual Interstitial Pneumonia (UIP) using features computed from low-dose CT scans. In each scan, the pre-segmented lung region is divided into N subsections (N = 1, 8, 27, 64) by partitioning the lung along the anterior/posterior, left/right and superior/inferior directions in 3D space. Each subsection has approximately the same volume. In each subsection, a classic density measurement (fractional high-density volume h) is evaluated to characterize the disease severity in that subsection, resulting in a feature vector of length N for each lung. Features are then combined in two different ways: concatenation (2*N features) and taking the maximum in each of the two corresponding subsections in the two lungs (N features). The algorithm was evaluated on a dataset consisting of 51 UIP and 56 normal cases, a combined feature vector was computed for each case and an SVM classifier (RBF kernel) was used to classify them into UIP or normal using ten-fold cross-validation. The receiver operating characteristic (ROC) area under the curve (AUC) was used for evaluation. The highest AUC of 0.95 was achieved by using concatenated features and an N of 27. Using lung partitions (N = 27, 64) with concatenated features gave significantly better results than not using partitions (N = 1) (p-value < 0.05). Therefore, this equal-volume partition fractional high-density volume method is useful in distinguishing early-stage UIP from normal cases.
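    A minimal sketch of the evaluation pipeline described above (concatenated per-subsection fractional high-density features, an RBF-kernel SVM and ten-fold cross-validated ROC AUC), using scikit-learn. The feature values are synthetic stand-ins; only the case counts (51 UIP, 56 normal) and N = 27 follow the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

N = 27                              # subsections per lung
rng = np.random.default_rng(3)

# Synthetic concatenated features: 2*N fractional high-density volumes
# per case (51 UIP-like and 56 normal-like cases, mirroring the dataset size).
X_uip = rng.beta(2, 5, size=(51, 2 * N)) + 0.1
X_nor = rng.beta(2, 8, size=(56, 2 * N))
X = np.vstack([X_uip, X_nor])
y = np.r_[np.ones(51), np.zeros(56)]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"mean ROC AUC: {auc.mean():.2f}")
```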

  15. Distribution of PAHs and trace metals in urban stormwater sediments: combination of density fractionation, mineralogy and microanalysis.

    PubMed

    El-Mufleh, Amelène; Béchet, Béatrice; Basile-Doelsch, Isabelle; Geffroy-Rodier, Claude; Gaudin, Anne; Ruban, Véronique

    2014-01-01

    Sediment management from stormwater infiltration basins represents a real environmental and economic issue for stakeholders due to the pollution load and important tonnages of these by-products. To reduce the sediment volumes to treat, organic and metal micropollutant-bearing phases should be identified. A combination of density fractionation procedure and microanalysis techniques was used to evaluate the distribution of polycyclic aromatic hydrocarbons (PAHs) and trace metals (Cd, Cr, Cu, Ni, Pb, and Zn) within variable density fractions for three urban stormwater basin sediments. The results confirm that PAHs are found in the lightest fractions (d < 1.9, 1.9 < d < 2.3 g cm⁻³) whereas trace metals are equally distributed within the light, intermediary, and highest fractions (d < 1.9, 1.9 < d < 2.3, 2.3 < d < 2.6, and d > 2.8 g cm⁻³) and are mostly in the 2.3 < d < 2.6 g cm⁻³ fraction. The characterization of the five fractions by global analyses and microanalysis techniques (XRD and MEB-EDX) allowed us to identify pollutant-bearing phases. PAHs are bound to the organic matter (OM) and trace metals to OM, clays, carbonates and dense particles. Moreover, the microanalysis study underlines that OM is the main constituent responsible for the aggregation, particularly for microaggregation. In terms of sediment management, it was shown that density fractionation is not suitable for trace metals but could be adapted to separate PAH-enriched phases.

  16. The Grism Lens-Amplified Survey from Space (GLASS). VI. Comparing the Mass and Light in MACS J0416.1-2403 Using Frontier Field Imaging and GLASS Spectroscopy

    NASA Astrophysics Data System (ADS)

    Hoag, A.; Huang, K.-H.; Treu, T.; Bradač, M.; Schmidt, K. B.; Wang, X.; Brammer, G. B.; Broussard, A.; Amorin, R.; Castellano, M.; Fontana, A.; Merlin, E.; Schrabback, T.; Trenti, M.; Vulcani, B.

    2016-11-01

    We present a model using both strong and weak gravitational lensing of the galaxy cluster MACS J0416.1-2403, constrained using spectroscopy from the Grism Lens-Amplified Survey from Space (GLASS) and Hubble Frontier Fields (HFF) imaging data. We search for emission lines in known multiply imaged sources in the GLASS spectra, obtaining secure spectroscopic redshifts of 30 multiple images belonging to 15 distinct source galaxies. The GLASS spectra provide the first spectroscopic measurements for five of the source galaxies. The weak lensing signal is acquired from 884 galaxies in the F606W HFF image. By combining the weak lensing constraints with 15 multiple image systems with spectroscopic redshifts and nine multiple image systems with photometric redshifts, we reconstruct the gravitational potential of the cluster on an adaptive grid. The resulting map of total mass density is compared with a map of stellar mass density obtained from the deep Spitzer Frontier Fields imaging data to study the relative distribution of stellar and total mass in the cluster. We find that the projected stellar mass to total mass ratio, f⋆, varies considerably with the stellar surface mass density. The mean projected stellar mass to total mass ratio is ⟨f⋆⟩ = 0.009 ± 0.003 (stat.), but with a systematic error as large as 0.004-0.005, dominated by the choice of the initial mass function. We find agreement with several recent measurements of f⋆ in massive cluster environments. The lensing maps of convergence, shear, and magnification are made available to the broader community in the standard HFF format.

  17. Mapping Findspots of Roman Military Brickstamps in Mogontiacum (Mainz) and Archaeometrical Analysis

    NASA Astrophysics Data System (ADS)

    Dolata, Jens; Mucha, Hans-Joachim; Bartel, Hans-Georg

    Mainz was a Roman settlement that was established as an important military outpost in 13 BC. Almost 100 years later Mainz, the ancient Mogontiacum, became the seat of the administrative centre of the Roman Province of Germania Superior. About 3,500 brickstamps dating to the period up to the fall of the Roman Empire in the fifth century AD have been found in archaeological excavations. These documents have to be investigated with several methods for a better understanding of the history. Here the focus is on an application of spatial statistical analysis in archaeology. Concretely, about 250 sites have to be investigated, so we compare maps of different periods graphically by nonparametric density estimation, taking into account different weights for the sites according to the radius of the finding area. Moreover, we can test whether archaeological segmentation is statistically significant or not. By combining smooth mapping, statistical testing and the search for dated brickstamps, there is a good chance of obtaining new sources for the Roman history of Mainz.

  18. Correlation between land cover and ground vulnerability in Alexandria City (Egypt) using time series SAR interferometry and optical Earth observation data

    NASA Astrophysics Data System (ADS)

    Seleem, T.; Stergiopoulos, V.; Kourkouli, P.; Perrou, T.; Parcharidis, Is.

    2017-10-01

    The main scope of this study is to investigate the potential correlation between land cover and ground vulnerability over Alexandria city, Egypt. Two different datasets for generating ground deformation and land cover maps were used. Hence, two different approaches were followed: a PSI approach for surface displacement mapping and a supervised classification algorithm for land cover/use mapping. The interferometric results show a gradual qualitative and quantitative differentiation of ground deformation from east to west across the Alexandria governorate. We selected three regions of interest in order to compare the interferometric results with the different land cover types. The ground deformation may result from different geomorphic and geologic factors, including the proximity to the active deltaic plain of the Nile River, the expansion of the urban network into arid areas of recent deposits, the increase in urban density, and combinations of these factors.

  19. BOREAS RSS-15 SIR-C and Landsat TM Biomass and Landcover Maps of the NSA

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Ranson, K. Jon

    2000-01-01

    As part of BOREAS, the RSS-15 team conducted an investigation using SIR-C, X-SAR, and Landsat TM data for estimating total above-ground dry biomass for the SSA and NSA modeling grids and component biomass for the SSA. Relationships of backscatter to total biomass and total biomass to foliage, branch, and bole biomass were used to estimate biomass density across the landscape. The procedure involved image classification with SAR and Landsat TM data and development of simple mapping techniques using combinations of SAR channels. For the SSA, the SIR-C data used were acquired on 06-Oct-1994, and the Landsat TM data used were acquired on 02-Sep-1995. The maps of the NSA were developed from SIR-C data acquired on 13-Apr-1994. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).

  20. Towards quantitative off-axis electron holographic mapping of the electric field around the tip of a sharp biased metallic needle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beleggia, M.; Helmholtz-Zentrum Berlin für Materialien und Energie, Berlin; Kasama, T.

    We apply off-axis electron holography and Lorentz microscopy in the transmission electron microscope to map the electric field generated by a sharp biased metallic tip. A combination of experimental data and modelling provides quantitative information about the potential and the field around the tip. Close to the tip apex, we measure a maximum field intensity of 82 MV/m, corresponding to a field k factor of 2.5, in excellent agreement with theory. In order to verify the validity of the measurements, we use the inferred charge density distribution in the tip region to generate simulated phase maps and Fresnel (out-of-focus) images for comparison with experimental measurements. While the overall agreement is excellent, the simulations also highlight the presence of an unexpected astigmatic contribution to the intensity in a highly defocused Fresnel image, which is thought to result from the geometry of the applied field.

  1. High-Throughput Phenotyping and QTL Mapping Reveals the Genetic Architecture of Maize Plant Growth.

    PubMed

    Zhang, Xuehai; Huang, Chenglong; Wu, Di; Qiao, Feng; Li, Wenqiang; Duan, Lingfeng; Wang, Ke; Xiao, Yingjie; Chen, Guoxing; Liu, Qian; Xiong, Lizhong; Yang, Wanneng; Yan, Jianbing

    2017-03-01

    With increasing demand for novel traits in crop breeding, the plant research community faces the challenge of quantitatively analyzing the structure and function of large numbers of plants. A clear goal of high-throughput phenotyping is to bridge the gap between genomics and phenomics. In this study, we quantified 106 traits from a maize ( Zea mays ) recombinant inbred line population ( n = 167) across 16 developmental stages using the automatic phenotyping platform. Quantitative trait locus (QTL) mapping with a high-density genetic linkage map, including 2,496 recombinant bins, was used to uncover the genetic basis of these complex agronomic traits, and 988 QTLs have been identified for all investigated traits, including three QTL hotspots. Biomass accumulation and final yield were predicted using a combination of dissected traits in the early growth stage. These results reveal the dynamic genetic architecture of maize plant growth and enhance ideotype-based maize breeding and prediction. © 2017 American Society of Plant Biologists. All Rights Reserved.

  2. High-Throughput Phenotyping and QTL Mapping Reveals the Genetic Architecture of Maize Plant Growth1[OPEN

    PubMed Central

    Huang, Chenglong; Wu, Di; Qiao, Feng; Li, Wenqiang; Duan, Lingfeng; Wang, Ke; Xiao, Yingjie; Chen, Guoxing; Liu, Qian; Yang, Wanneng

    2017-01-01

    With increasing demand for novel traits in crop breeding, the plant research community faces the challenge of quantitatively analyzing the structure and function of large numbers of plants. A clear goal of high-throughput phenotyping is to bridge the gap between genomics and phenomics. In this study, we quantified 106 traits from a maize (Zea mays) recombinant inbred line population (n = 167) across 16 developmental stages using the automatic phenotyping platform. Quantitative trait locus (QTL) mapping with a high-density genetic linkage map, including 2,496 recombinant bins, was used to uncover the genetic basis of these complex agronomic traits, and 988 QTLs have been identified for all investigated traits, including three QTL hotspots. Biomass accumulation and final yield were predicted using a combination of dissected traits in the early growth stage. These results reveal the dynamic genetic architecture of maize plant growth and enhance ideotype-based maize breeding and prediction. PMID:28153923

  3. Estimation of Monthly Near Surface Air Temperature Using Geographically Weighted Regression in China

    NASA Astrophysics Data System (ADS)

    Wang, M. M.; He, G. J.; Zhang, Z. M.; Zhang, Z. J.; Liu, X. G.

    2018-04-01

    Near surface air temperature (NSAT) is a primary descriptor of terrestrial environmental conditions. The availability of NSAT with high spatial resolution is deemed necessary for several applications such as hydrology, meteorology and ecology. In this study, a regression-based NSAT mapping method is proposed. This method combines remote sensing variables with geographical variables and uses geographically weighted regression to estimate NSAT. Altitude was selected as the geographical variable, and the remote sensing variables include land surface temperature (LST) and the normalized difference vegetation index (NDVI). The performance of the proposed method was assessed by predicting monthly minimum, mean, and maximum NSAT from point station measurements in China, a domain with a large area, complex topography, and highly variable station density, and the NSAT maps were validated against meteorological observations. Validation against meteorological data shows that the proposed method achieved an accuracy of 1.58 °C. It is concluded that the proposed method for mapping NSAT is operational and has good precision.
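    A minimal geographically weighted regression sketch in the spirit of the method described above: at each prediction location, stations are weighted by a Gaussian kernel of distance and a local weighted least-squares fit of NSAT on LST, NDVI and altitude is solved. The station data, kernel bandwidth and coefficients are synthetic assumptions, not the authors' implementation.

```python
import numpy as np

def gwr_predict(xy_obs, X_obs, y_obs, xy_pred, X_pred, bandwidth):
    """Fit a separate distance-weighted least-squares model at each
    prediction location (Gaussian kernel on station distance)."""
    preds = np.empty(len(xy_pred))
    for i, (p, xp) in enumerate(zip(xy_pred, X_pred)):
        d2 = ((xy_obs - p) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))
        A = np.column_stack([np.ones(len(X_obs)), X_obs]) * np.sqrt(w)[:, None]
        b = y_obs * np.sqrt(w)
        beta, *_ = np.linalg.lstsq(A, b, rcond=None)
        preds[i] = beta[0] + xp @ beta[1:]
    return preds

rng = np.random.default_rng(5)
xy = rng.uniform(0, 100, (200, 2))                    # station coordinates (km)
X = np.column_stack([rng.normal(20, 8, 200),          # LST (deg C)
                     rng.uniform(0.1, 0.8, 200),      # NDVI
                     rng.uniform(0, 3000, 200)])      # altitude (m)
y = 0.8 * X[:, 0] - 2.0 * X[:, 1] - 0.006 * X[:, 2] + rng.normal(0, 1, 200)

pred = gwr_predict(xy, X, y, xy[:5], X[:5], bandwidth=30.0)
print(pred)
```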

  4. THE VIRUS-P EXPLORATION OF NEARBY GALAXIES (VENGA): THE X {sub CO} GRADIENT IN NGC 628

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanc, Guillermo A.; Schruba, Andreas; Evans, Neal J. II

    2013-02-20

    We measure the radial profile of the ¹²CO(1-0) to H₂ conversion factor (X_CO) in NGC 628. The Hα emission from the VENGA integral field spectroscopy is used to map the star formation rate (SFR) surface density (Σ_SFR). We estimate the molecular gas surface density (Σ_H2) from Σ_SFR by inverting the molecular star formation law (SFL), and compare it to the CO intensity to measure X_CO. We study the impact of systematic uncertainties by changing the slope of the SFL, using different SFR tracers (Hα versus far-UV plus 24 µm), and CO maps from different telescopes (single-dish and interferometers). The observed X_CO profile is robust against these systematics, drops by a factor of two from R ≈ 7 kpc to the center of the galaxy, and is well fit by a gradient Δlog(X_CO) = 0.06 ± 0.02 dex kpc⁻¹. We study how changes in X_CO follow changes in metallicity, gas density, and ionization parameter. Theoretical models show that the gradient in X_CO can be explained by a combination of decreasing metallicity and decreasing Σ_H2 with radius. Photoelectric heating from the local UV radiation field appears to contribute to the decrease of X_CO in higher-density regions. Our results show that the galactic environment plays an important role in setting the physical conditions in star-forming regions, in particular the chemistry of carbon in molecular complexes and the radiative transfer of CO emission. We caution against adopting a single X_CO value when large changes in gas surface density or metallicity are present.
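    The measurement logic described above can be sketched as follows: invert an assumed molecular star formation law Sigma_SFR = A * Sigma_H2^N to recover Sigma_H2, convert it to an H2 column density, and divide by the CO intensity to get X_CO. The coefficients A and N are placeholders, helium is ignored in the column-density conversion, and the example profiles are illustrative only.

```python
import numpy as np

MSUN_G = 1.989e33          # solar mass in g
PC_CM = 3.086e18           # parsec in cm
M_H2_G = 2 * 1.674e-24     # mass of an H2 molecule, g

def xco_from_sfl(sigma_sfr, i_co, A=2.5e-4, N=1.0):
    """Estimate X_CO [cm^-2 (K km/s)^-1] from the SFR surface density
    [Msun yr^-1 kpc^-2... here treated generically] and the CO intensity
    [K km/s] by inverting Sigma_SFR = A * Sigma_H2**N (A, N are
    placeholder star-formation-law parameters, not the fitted values)."""
    sigma_h2 = (sigma_sfr / A) ** (1.0 / N)          # Msun pc^-2
    n_h2 = sigma_h2 * MSUN_G / (M_H2_G * PC_CM**2)   # H2 column density, cm^-2
    return n_h2 / i_co

# Illustrative radial profiles: if the CO intensity falls off more steeply
# with radius than Sigma_SFR, the inferred X_CO rises outward, i.e. it is
# lower in the centre, qualitatively like the gradient reported above.
r = np.linspace(0.5, 7.0, 5)                              # kpc
print(xco_from_sfl(5e-3 * np.exp(-r / 4), 2.0 * np.exp(-r / 2.5)))
```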

  5. StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.

    PubMed

    Li, Chenhui; Baciu, George; Han, Yu

    2018-03-01

    Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
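    The paper's "super kernel density estimation" aggregates streaming points with an adaptive kernel; the sketch below shows one common adaptive-bandwidth scheme (an Abramson-style pilot estimate that shrinks the kernel in dense regions), which is only an approximation of the published method. The point batch, bandwidths and grid are synthetic.

```python
import numpy as np

def adaptive_kde(grid, pts, h0=0.5, alpha=0.5):
    """2D KDE whose per-point bandwidth shrinks in dense regions
    (Abramson-style adaptivity), reducing overlap of nearby points."""
    # Pilot estimate with a fixed bandwidth.
    d2_pp = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    pilot = np.exp(-d2_pp / (2 * h0**2)).sum(axis=1)
    # Local bandwidths: smaller where the pilot density is high.
    lam = (pilot / np.exp(np.log(pilot).mean())) ** (-alpha)
    h_i = h0 * lam
    # Final estimate on the grid.
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * h_i[None, :] ** 2)) / (2 * np.pi * h_i[None, :] ** 2)
    return k.mean(axis=1)

rng = np.random.default_rng(6)
frame_a = rng.normal([0, 0], 0.6, (300, 2))          # one batch of streaming points
gx, gy = np.meshgrid(np.linspace(-3, 3, 60), np.linspace(-3, 3, 60))
grid = np.column_stack([gx.ravel(), gy.ravel()])
density_a = adaptive_kde(grid, frame_a).reshape(gx.shape)
# A later frame's density could be blended with density_a to approximate
# intermediate frames, in the spirit of the morphing step described above.
```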

  6. Comparison of RS/GIS analysis with classic mapping approaches for siting low-yield boreholes for hand pumps in crystalline terrains. An application to rural communities of the Caimbambo province, Angola

    NASA Astrophysics Data System (ADS)

    Martín-Loeches, Miguel; Reyes-López, Jaime; Ramírez-Hernández, Jorge; Temiño-Vela, Javier; Martínez-Santos, Pedro

    2018-02-01

    In poverty-stricken regions of Sub-Saharan Africa, groundwater for supply is often obtained by means of hand pumps, which means that low-yield boreholes are acceptable. However, boreholes are often sited without sufficient hydrogeological information due to budget constraints, which leads to high failure rates. Cost-effective techniques for borehole siting need to be developed in order to maximize the success rate. In regions underlain by granite, weathered formations are usually targeted for drilling, as these generally present a better cost-benefit ratio than the fractured basement. Within this context, this research focuses on a granite region of Angola. A comparison of two mapping techniques for borehole siting and groundwater prospecting is presented. A classic hydrogeomorphological map was developed first, based on aerial photographs, field mapping and a geophysical survey. This map represents a considerable time investment and was developed by qualified technicians. The second map (RS/GIS) is considerably simpler and more cost-effective. It was developed by integrating in a GIS platform six maps of equal importance (slope, drainage density, vegetation vigor, presence of clay in the soil, lineaments and rock outcrops) prepared from Landsat 8 imagery and a Digital Elevation Model (DEM). Similar results were obtained in both cases. By means of a supervised classification of Landsat images, RS/GIS analysis allows for the identification of granitic outcrops, house clusters and sandy alluvial valleys. This in turn allows for the delineation of low-interest or contamination-prone areas, thus contributing additional qualitative information. The position of a well that is going to be equipped with a hand pump is also chosen on social and local grounds, such as the distance to the stakeholders, information that is not difficult to integrate into the GIS. Although the second map needs some field inputs (i.e. surveys to determine the thickness of the weathered pack), results show that RS/GIS analyses such as this one provide a valuable and cost-effective alternative for siting low-yield boreholes in remote regions.
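    The RS/GIS integration step lends itself to a simple raster overlay: each of the six thematic layers is reclassified to a common favourability scale and the prospect score is their unweighted mean, reflecting the "equal importance" assumption. The sketch below uses synthetic rasters and a hypothetical exclusion rule; the study's actual reclassification rules are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
shape = (100, 100)

# Six thematic layers, each already reclassified to a 0-1 favourability score
# (1 = favourable for siting a low-yield borehole). Values are synthetic.
layers = {
    "slope": rng.random(shape),
    "drainage_density": rng.random(shape),
    "vegetation_vigour": rng.random(shape),
    "soil_clay": rng.random(shape),
    "lineament_density": rng.random(shape),
    "outcrop": rng.random(shape),
}

# Equal-importance overlay: the prospect score is the plain mean of the layers.
prospect = np.mean(np.stack(list(layers.values())), axis=0)

# Mask areas flagged as low interest (hypothetical exclusion rule, e.g.
# granite outcrops or built-up areas identified by supervised classification).
mask = layers["outcrop"] > 0.9
prospect = np.where(mask, np.nan, prospect)
```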

  7. Construction of a high-density genetic map for grape using specific length amplified fragment (SLAF) sequencing

    PubMed Central

    Guo, Yinshan; Xing, Huiyang; Zhao, Yuhui; Liu, Zhendong; Li, Kun; Guo, Xiuwu

    2017-01-01

    Genetic maps are important tools in plant genomics and breeding. We report a large-scale discovery of single nucleotide polymorphisms (SNPs) using the specific length amplified fragment sequencing (SLAF-seq) technique for the construction of high-density genetic maps for two elite wine grape cultivars, ‘Chardonnay’ and ‘Beibinghong’, and their 130 F1 plants. A total of 372.53 M paired-end reads were obtained after preprocessing. The average sequencing depth was 33.81 for ‘Chardonnay’ (the female parent), 48.20 for ‘Beibinghong’ (the male parent), and 12.66 for the F1 offspring. We detected 202,349 high-quality SLAFs of which 144,972 were polymorphic; 10,042 SNPs were used to construct a genetic map that spanned 1,969.95 cM, with an average genetic distance of 0.23 cM between adjacent markers. This genetic map contains the largest molecular marker number of the grape maps so far reported. We thus demonstrate that SLAF-seq is a promising strategy for the construction of high-density genetic maps; the map that we report here is a good potential resource for QTL mapping of genes linked to major economic and agronomic traits, map-based cloning, and marker-assisted selection of grape. PMID:28746364

  8. Hydrology of the Reelfoot Lake basin, Obion and Lake counties, northwestern Tennessee

    USGS Publications Warehouse

    Robbins, C.H.

    1985-01-01

    Nine maps describe the following water resources aspects of the Reelfoot Lake watershed: Map 1-Surface water gaging stations, lake level, and locations of observation wells, rainfall stations and National Weather Service rainfall stations; Maps 2 and 3-water level contours, river stage, groundwater movement; Maps 4 and 5-grid blocks simulating constant head on the Mississippi River, Reelfoot Lake, Running Reelfoot Bayou, Reelfoot Creek, and Running Slough; Maps 6 and 7-difference between model calculated and observed water levels; and Maps 8 and 9-line of equal groundwater level increase and approximate lake area at pool elevation. (Lantz-PTT)

  9. Effect of strain and deformation route on grain boundary characteristics and recrystallization behavior of aluminum

    NASA Astrophysics Data System (ADS)

    Sakai, Tetsuo; Utsunomiya, Hiroshi; Takahashi, Yasuo

    2014-08-01

    The effect of strain and deformation route on the recrystallization behavior of aluminum sheets has been investigated using well-lubricated cold rolling and continuous equal channel angular extrusion. Three different deformation routes in plane strain corresponding to (1) simple shear, (2) compression, and (3) the combination of simple shear and compression were performed on 1100 aluminum sheet. Fixed amounts of the equivalent strain of 1.28 and 1.06 were accumulated in each route. In the case of the combined deformation route, the ratio of shear strain to the total equivalent strain was varied. The recrystallized grain size was finer if the combined deformation route was employed instead of the monotonic route under the same amount of equivalent strain at either strain level. The density of high angle grain boundaries that act as nucleation sites for recrystallization was higher in materials deformed by the combined route. The orientation imaging micrographs revealed that the change in deformation route is effective for introducing a larger number of new high angle grain boundaries with relatively low misorientation angles.

  10. The Heat Exposure Integrated Deprivation Index (HEIDI): A data-driven approach to quantifying neighborhood risk during extreme hot weather.

    PubMed

    Krstic, Nikolas; Yuchi, Weiran; Ho, Hung Chak; Walker, Blake B; Knudby, Anders J; Henderson, Sarah B

    2017-12-01

    Mortality attributable to extreme hot weather is a growing concern in many urban environments, and spatial heat vulnerability indexes are often used to identify areas at relatively higher and lower risk. Three indexes were developed for greater Vancouver, Canada using a pool of 20 potentially predictive variables categorized to reflect social vulnerability, population density, temperature exposure, and urban form. One variable was chosen from each category: an existing deprivation index, senior population density, apparent temperature, and road density, respectively. The three indexes were constructed from these variables using (1) unweighted, (2) weighted, and (3) data-driven Heat Exposure Integrated Deprivation Index (HEIDI) approaches. The performance of each index was assessed using mortality data from 1998-2014, and the maps were compared with respect to the spatial patterns identified. The population-weighted spatial correlation between the three indexes ranged from 0.68 to 0.89. The HEIDI approach produced a graduated map of vulnerability, whereas the other approaches primarily identified areas of highest risk. All indexes performed best under extreme temperatures, but HEIDI was more useful at lower thresholds. Each of the indexes in isolation provides valuable information for public health protection, but combining the HEIDI approach with unweighted and weighted methods provides richer information about areas most vulnerable to heat. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Observing gas in Cosmic Web filaments to constrain simulations of cosmic structure formation

    NASA Astrophysics Data System (ADS)

    Wakker, Bart

    2016-10-01

    Cosmological simulations predict that dark matter and baryons condense into multi-Mpc filamentary structures, making up the Cosmic Web. This is outlined by dark matter halos, inside which 10% of baryons are concentrated to make stars in galaxies. The other 90% of the baryons remain gaseous, with about half located outside galaxy halos. They can be traced by Lyman alpha absorbers, whose HI column density is determined by a combination of gas density and the intensity of the extragalactic ionizing background (EGB). About 1000 HST orbits have been expended to map the 50% of baryons in galaxy halos. This contrasts with 37 orbits explicitly allocated to map the other 50% (our Cycle 18 program to observe 17 AGN projected onto a single filament at cz 3500 km/s). We propose a 68-orbit program to observe 40 AGN, creating a sample of 56 sightlines covering a second filament at cz 2500 km/s. Using this dataset we will do the following: (1) measure the intensity of the EGB to within about 50%; (2) confirm that the linewidth of Lya absorbers increases near the filament axis, suggesting increasing temperature or turbulence; (3) check our earlier finding that simulations predict a transverse density HI profile (which scales with the dark-matter profile) that is much broader than is indicated by the observations.

  12. First Results from the Dense Extragalactic GBT+ARGUS Survey (DEGAS): A Direct, Quantitative Test of the Role of Gas Density in Star Formation

    NASA Astrophysics Data System (ADS)

    Kepley, Amanda; Bigiel, Frank; Bolatto, Alberto; Church, Sarah; Cleary, Kieran; Frayer, David; Gallagher, Molly; Gundersen, Joshua; Harris, Andrew; Hughes, Annie; Jimenez-Donaire, Maria Jesus; Kessler, Sarah; Lee, Cheoljong; Leroy, Adam; Li, Jialu; Donovan Meyer, Jennifer; Rosolowsky, Erik; Sandstrom, Karin; Schinnener, Eva; Schruba, Andreas; Sieth, Matt; Usero, Antonio

    2018-01-01

    Gas density plays a central role in all modern theories of star formation. A key test of these theories involves quantifying the resolved gas density distribution and its relationship to star formation within a wide range of galactic environments. Until recently, this experiment has been difficult to perform owing to the faint nature of key molecular gas tracers like HCN and HCO+, but the superior sensitivity of modern millimeter instruments like ALMA and the IRAM 30m make these types of experiments feasible. In particular, the sensitivity and resolution provided by large aperture of the GBT combined with fast mapping speeds made possible by its new 16-pixel, 3mm focal plane array (Argus) make the GBT an almost-ideal instrument for this type of study. The Dense Extragalactic GBT+Argus Survey (DEGAS) will leverage these capabilities to perform the largest, resolved survey of molecular gas tracers in nearby galaxies, ultimately mapping a suite of four molecular gas tracers in the inner 2’ by 2’ of 36 nearby galaxies. When complete in 2020, DEGAS will be the largest resolved survey of dense molecular gas tracers in nearby galaxies. This talk will present early results from the first observations for this Green Bank Telescope large survey and highlight some exciting future possibilities for this survey.

  13. Beneficial impact of aerobic exercises on bone mineral density in obese premenopausal women under caloric restriction.

    PubMed

    Hosny, Iman Abbas; Elghawabi, Hamed Samir; Younan, Wael Bahat Fahmy; Sabbour, Adly Aly; Gobrial, Mona Abdel Messih

    2012-04-01

    The aim of this study was to assess the impact of a caloric restriction diet versus a caloric restriction diet combined with aerobic exercises on bone mineral density (BMD) in obese premenopausal women. Forty premenopausal obese women were classified randomly into two groups equal in number. The first group (group A) received a caloric restriction diet, while the second (group B) received a caloric restriction diet combined with a program of aerobic exercises, over 3 months. The variables measured in this study included age, weight, height, body mass index, fat weight, lean mass, fat percent, basal metabolic rate, and BMD. The comparison between group A and group B showed significantly higher post-treatment lean mass, basal metabolic rate, and BMD in weight-bearing bones (L2-L4 lumbar spine and total hip) in group B compared to group A. In contrast to the BMD of the weight-bearing bones, the BMD of the radius showed a significant decrease between the pre- and post-treatment results in groups A and B with no significant differences between the two groups. A greater improvement in the BMD of weight-bearing bones was observed in obese premenopausal women undergoing caloric restriction combined with exercise than in those not undergoing exercise. Aerobic exercises incorporated into weight loss programs help offset the adverse effects of dietary restriction on bone.

  14. Mapping fractional woody cover in semi-arid savannahs using multi-seasonal composites from Landsat data

    NASA Astrophysics Data System (ADS)

    Higginbottom, Thomas P.; Symeonakis, Elias; Meyer, Hanna; van der Linden, Sebastian

    2018-05-01

    Increasing attention is being directed at mapping the fractional woody cover of savannahs using Earth-observation data. In this study, we test the utility of Landsat TM/ ETM-based spectral-temporal variability metrics for mapping regional-scale woody cover in the Limpopo Province of South Africa, for 2010. We employ a machine learning framework to compare the accuracies of Random Forest models derived using metrics calculated from different seasons. We compare these results to those from fused Landsat-PALSAR data to establish if seasonal metrics can compensate for structural information from the PALSAR signal. Furthermore, we test the applicability of a statistical variable selection method, the recursive feature elimination (RFE), in the automation of the model building process in order to reduce model complexity and processing time. All of our tests were repeated at four scales (30, 60, 90, and 120 m-pixels) to investigate the role of spatial resolution on modelled accuracies. Our results show that multi-seasonal composites combining imagery from both the dry and wet seasons produced the highest accuracies (R2 = 0.77, RMSE = 9.4, at the 120 m scale). When using a single season of observations, dry season imagery performed best (R2 = 0.74, RMSE = 9.9, at the 120 m resolution). Combining Landsat and radar imagery was only marginally beneficial, offering a mean relative improvement of 1% in accuracy at the 120 m scale. However, this improvement was concentrated in areas with lower densities of woody coverage (<30%), which are areas of concern for environmental monitoring. At finer spatial resolutions, the inclusion of SAR data actually reduced accuracies. Overall, the RFE was able to produce the most accurate model (R2 = 0.8, RMSE = 8.9, at the 120 m pixel scale). For mapping savannah woody cover at the 30 m pixel scale, we suggest that monitoring methodologies continue to exploit the Landsat archive, but should aim to use multi-seasonal derived information. When the coarser 120 m pixel scale is adequate, integration of Landsat and SAR data should be considered, especially in areas with lower woody cover densities. The use of multiple seasonal compositing periods offers promise for large-area mapping of savannahs, even in regions with a limited historical Landsat coverage.
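    A hedged sketch of the model-building framework described above: a Random Forest regression of fractional woody cover on spectral-temporal metrics, with recursive feature elimination used to prune the metric set. It relies on scikit-learn's RandomForestRegressor and RFECV; the pixel samples, metric count and accuracy are synthetic, not the paper's results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFECV
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(8)
n_pixels, n_metrics = 500, 30            # training pixels x seasonal metrics
X = rng.normal(size=(n_pixels, n_metrics))
woody_cover = 40 + 10 * X[:, 0] - 8 * X[:, 3] + rng.normal(0, 9, n_pixels)
woody_cover = np.clip(woody_cover, 0, 100)   # percent woody cover

rf = RandomForestRegressor(n_estimators=100, random_state=0)

# Recursive feature elimination with cross-validation picks the metric subset.
selector = RFECV(rf, step=2, cv=KFold(5, shuffle=True, random_state=0), scoring="r2")
selector.fit(X, woody_cover)
print("metrics retained:", selector.n_features_)

# Accuracy of the reduced model (cross-validated R^2).
r2 = cross_val_score(rf, X[:, selector.support_], woody_cover, cv=5, scoring="r2")
print("mean R^2:", r2.mean())
```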

  15. A journey from a SSR-based low density map to a SNP-based high density map for identification of disease resistance quantitative trait loci in peanut

    USDA-ARS?s Scientific Manuscript database

    Mapping and identification of quantitative trait loci (QTLs) are important for efficient marker-assisted breeding. Diseases such as leaf spots and Tomato spotted wilt virus (TSWV) cause significant loses to peanut growers. The U.S. Peanut Genome Initiative (PGI) was launched in 2004, and expanded to...

  16. An Ultra-High-Density, Transcript-Based, Genetic Map of Lettuce

    PubMed Central

    Truco, Maria José; Ashrafi, Hamid; Kozik, Alexander; van Leeuwen, Hans; Bowers, John; Wo, Sebastian Reyes Chin; Stoffel, Kevin; Xu, Huaqin; Hill, Theresa; Van Deynze, Allen; Michelmore, Richard W.

    2013-01-01

    We have generated an ultra-high-density genetic map for lettuce, an economically important member of the Compositae, consisting of 12,842 unigenes (13,943 markers) mapped in 3696 genetic bins distributed over nine chromosomal linkage groups. Genomic DNA was hybridized to a custom Affymetrix oligonucleotide array containing 6.4 million features representing 35,628 unigenes of Lactuca spp. Segregation of single-position polymorphisms was analyzed using 213 F7:8 recombinant inbred lines that had been generated by crossing cultivated Lactuca sativa cv. Salinas and L. serriola acc. US96UC23, the wild progenitor species of L. sativa. The high level of replication of each allele in the recombinant inbred lines was exploited to identify single-position polymorphisms that were assigned to parental haplotypes. Marker information has been made available using GBrowse to facilitate access to the map. This map has been anchored to the previously published integrated map of lettuce providing candidate genes for multiple phenotypes. The high density of markers achieved in this ultradense map allowed syntenic studies between lettuce and Vitis vinifera as well as other plant species. PMID:23550116

  17. An Ultra-High-Density, Transcript-Based, Genetic Map of Lettuce.

    PubMed

    Truco, Maria José; Ashrafi, Hamid; Kozik, Alexander; van Leeuwen, Hans; Bowers, John; Wo, Sebastian Reyes Chin; Stoffel, Kevin; Xu, Huaqin; Hill, Theresa; Van Deynze, Allen; Michelmore, Richard W

    2013-04-09

    We have generated an ultra-high-density genetic map for lettuce, an economically important member of the Compositae, consisting of 12,842 unigenes (13,943 markers) mapped in 3696 genetic bins distributed over nine chromosomal linkage groups. Genomic DNA was hybridized to a custom Affymetrix oligonucleotide array containing 6.4 million features representing 35,628 unigenes of Lactuca spp. Segregation of single-position polymorphisms was analyzed using 213 F7:8 recombinant inbred lines that had been generated by crossing cultivated Lactuca sativa cv. Salinas and L. serriola acc. US96UC23, the wild progenitor species of L. sativa. The high level of replication of each allele in the recombinant inbred lines was exploited to identify single-position polymorphisms that were assigned to parental haplotypes. Marker information has been made available using GBrowse to facilitate access to the map. This map has been anchored to the previously published integrated map of lettuce providing candidate genes for multiple phenotypes. The high density of markers achieved in this ultradense map allowed syntenic studies between lettuce and Vitis vinifera as well as other plant species. Copyright © 2013 Truco et al.

  18. Development and validation of a critical gradient energetic particle driven Alfven eigenmode transport model for DIII-D tilted neutral beam experiments

    NASA Astrophysics Data System (ADS)

    Waltz, R. E.; Bass, E. M.; Heidbrink, W. W.; VanZeeland, M. A.

    2015-11-01

    Recent experiments with the DIII-D tilted neutral beam injection (NBI) varying the beam energetic particle (EP) source profiles have provided strong evidence that unstable Alfven eigenmodes (AE) drive stiff EP transport at a critical EP density gradient [Heidbrink et al 2013 Nucl. Fusion 53 093006]. Here the critical gradient is identified by the local AE growth rate being equal to the local ITG/TEM growth rate at the same low toroidal mode number. The growth rates are taken from the gyrokinetic code GYRO. Simulations show that the slowing-down beam-like EP distribution has a slightly lower critical gradient than the Maxwellian. The ALPHA EP density transport code [Waltz and Bass 2014 Nucl. Fusion 54 104006], used to validate the model, combines the low-n stiff EP critical density gradient AE mid-core transport with the Angioni et al (2009 Nucl. Fusion 49 055013) energy-independent high-n ITG/TEM density transport model controlling the central core EP density profile. For the on-axis NBI-heated DIII-D shot 146102, while the net loss to the edge is small, about half the birth fast ions are transported from the central core r/a < 0.5 and the central density is about half the slowing-down density. These results are in good agreement with experimental fast ion pressure profiles inferred from MSE-constrained EFIT equilibria.

  19. Official crime data versus collaborative crime mapping at a Brazilian city

    NASA Astrophysics Data System (ADS)

    Brito, P. L.; Jesus, E. G. V.; Sant'Ana, R. M. S.; Martins, C.; Delgado, J. P. M.; Fernandes, V. O.

    2014-11-01

    In July 2013 a group of undergraduate students from the Federal University of Bahia, Brazil, published a collaborative web map called "Where I Was Robbed". Their initial efforts in publicizing the web map were restricted to announcing it on a local radio station as a tool of social interest. In two months the map had almost 10,000 reports, 155 reports per day, and people from more than 350 cities had already reported a crime. The present study investigates the spatial correlation between this collaborative web map and official robbery data registered in the Secretary of Public Safety database for the city of Salvador, Bahia. A kernel density estimator combined with map algebra was used for the investigation. Spatial correlations with official robbery data for the city of Salvador were not found initially, but after standardizing the collaborative data and mining the official registers, both datasets pointed to very similar areas as the main hot spots for pedestrian robbery. These areas are located in two of the most economically active parts of the city, although the web map crime reports were more concentrated in an area with a higher-income population. These results indicate that the collaborative application is being used mainly by the middle- and upper-class segments of the city population, but it can still provide significant information on public safety priority areas. Therefore, wider publicity for the collaborative crime map application in local papers and on radio and TV, together with partnerships with official agencies, is strongly recommended.
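    A rough sketch of the kind of comparison described above: each point dataset is rasterized, smoothed with a Gaussian kernel (a grid-based stand-in for the kernel density estimator), standardized, and then compared cell-wise and by overall correlation. The coordinates, grid extent and smoothing width are synthetic; the study's actual map-algebra workflow is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_raster(points, bins=100, extent=((0, 10), (0, 10)), sigma=2.0):
    """Grid-based kernel density surface: 2D histogram + Gaussian smoothing."""
    hist, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                bins=bins, range=extent)
    return gaussian_filter(hist, sigma=sigma)

def standardize(r):
    return (r - r.mean()) / r.std()

rng = np.random.default_rng(9)
official = rng.normal([4, 4], 1.0, (2000, 2))        # official robbery records
collab = rng.normal([4.3, 4.2], 1.2, (800, 2))       # collaborative reports

z_official = standardize(density_raster(official))
z_collab = standardize(density_raster(collab))

# Map algebra: cell-wise difference and overall spatial correlation.
difference = z_collab - z_official
corr = np.corrcoef(z_official.ravel(), z_collab.ravel())[0, 1]
print(f"spatial correlation of the two density surfaces: {corr:.2f}")
```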

  20. Particle visualization in high-power impulse magnetron sputtering. I. 2D density mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Britun, Nikolay, E-mail: nikolay.britun@umons.ac.be; Palmucci, Maria; Konstantinidis, Stephanos

    2015-04-28

    Time-resolved characterization of an Ar-Ti high-power impulse magnetron sputtering discharge has been performed. This paper deals with two-dimensional density mapping in the discharge volume obtained by laser-induced fluorescence imaging. The time-resolved density evolution of Ti neutrals, singly ionized Ti atoms (Ti⁺), and Ar metastable atoms (Ar^met) in the area above the sputtered cathode is mapped for the first time in this type of discharge. The energetic characteristics of the discharge species are additionally studied by Doppler-shift laser-induced fluorescence imaging. The questions related to the propagation of both the neutral and ionized discharge particles, as well as to their spatial density distributions, are discussed.

  1. Mapping Chinese tallow with color-infrared photography

    USGS Publications Warehouse

    Ramsey, Elijah W.; Nelson, G.A.; Sapkota, S.K.; Seeger, E.B.; Martella, K.D.

    2002-01-01

    Airborne color-infrared photography (CIR) (1:12,000 scale) was used to map localized occurrences of the widespread and aggressive Chinese tallow (Sapium sebiferum), an invasive species. Photography was collected during senescence when Chinese tallow's bright red leaves presented a high spectral contrast within the native bottomland hardwood and upland forests and marsh land-cover types. Mapped occurrences were conservative because not all senescing tallow leaves are bright red simultaneously. To simulate low spectral but high spatial resolution satellite/airborne image and digital video data, the CIR photography was transformed into raster images at spatial resolutions approximating 0.5 m and 1.0 m. The image data were then spectrally classified for the occurrence of bright red leaves associated with senescing Chinese tallow. Classification accuracies were greater than 95 percent at both spatial resolutions. There was no significant difference in either forest in the detection of tallow or inclusion of non-tallow trees associated with the two spatial resolutions. In marshes, slightly more tallow occurrences were mapped with the lower spatial resolution, but there were also more misclassifications of native land covers as tallow. Combining all land covers, there was no difference in detecting tallow occurrences (equal omission errors) between the two resolutions, but the higher spatial resolution was associated with less inclusion of non-tallow land covers as tallow (lower commission error). Overall, these results confirm that high spatial (≤1 m) but low spectral resolution remote sensing data can be used for mapping Chinese tallow trees in dominant environments found in coastal and adjacent upland landscapes.

  2. An optimized protocol for generation and analysis of Ion Proton sequencing reads for RNA-Seq.

    PubMed

    Yuan, Yongxian; Xu, Huaiqian; Leung, Ross Ka-Kit

    2016-05-26

    Previous studies compared running cost, time and other performance measures of popular sequencing platforms. However, a comprehensive assessment of library construction and analysis protocols for the Proton sequencing platform remains unexplored. Unlike Illumina sequencing platforms, Proton reads are heterogeneous in length and quality. When sequencing data from different platforms are combined, this can result in reads of various lengths. Whether the performance of the commonly used software for handling such data is satisfactory is unknown. Using universal human reference RNA as the initial material, RNaseIII and chemical fragmentation methods in library construction showed similar results in gene and junction discovery numbers and expression-level estimation accuracy. In contrast, sequencing quality, read length and the choice of software affected the mapping rate to a much larger extent. The unspliced aligner TMAP attained the highest mapping rate (97.27 % to the genome, 86.46 % to the transcriptome), though 47.83 % of mapped reads were clipped. Long reads could paradoxically reduce mapping at junctions. With a reference annotation guide, the mapping rate of TopHat2 significantly increased from 75.79 to 92.09 %, especially for long (>150 bp) reads. Sailfish, a k-mer-based gene expression quantifier, attained results highly consistent with those of the TaqMan array and the highest sensitivity. We provide, for the first time, reference statistics of library preparation methods, gene detection and quantification, and junction discovery for RNA-Seq on the Ion Proton platform. Chemical fragmentation performed as well as the enzyme-based method. The optimal Ion Proton sequencing options and analysis software have been evaluated.

  3. High-Density Genetic Linkage Map Construction and Quantitative Trait Locus Mapping for Hawthorn (Crataegus pinnatifida Bunge).

    PubMed

    Zhao, Yuhui; Su, Kai; Wang, Gang; Zhang, Liping; Zhang, Jijun; Li, Junpeng; Guo, Yinshan

    2017-07-14

    Genetic linkage maps are an important tool in genetic and genomic research. In this study, two hawthorn cultivars, Qiujinxing and Damianqiu, and 107 progenies from a cross between them were used for constructing a high-density genetic linkage map using the 2b-restriction site-associated DNA (2b-RAD) sequencing method, as well as for mapping quantitative trait loci (QTL) for flavonoid content. In total, 206,411,693 single-end reads were obtained, with an average sequencing depth of 57× in the parents and 23× in the progeny. After quality trimming, 117,896 high-quality 2b-RAD tags were retained, of which 42,279 were polymorphic; of these, 12,951 markers were used for constructing the genetic linkage map. The map contained 17 linkage groups and 3,894 markers, with a total map length of 1,551.97 cM and an average marker interval of 0.40 cM. QTL mapping identified 21 QTLs associated with flavonoid content in 10 linkage groups, which explained 16.30-59.00% of the variance. This is the first high-density linkage map for hawthorn, which will serve as a basis for fine-scale QTL mapping and marker-assisted selection of important traits in hawthorn germplasm and will facilitate chromosome assignment for hawthorn whole-genome assemblies in the future.

  4. Effects of oxide additions and temperature on sinterability of milled silicon nitride

    NASA Technical Reports Server (NTRS)

    Arias, A.

    1980-01-01

    Specimens of milled alpha-Si3N4 with 0 to 5.07 equivalent percent of oxide additions were pressureless sintered at 1650 to 1820 C for 4 hours in nitrogen while covered with powdered Si3N4 + SiO2. Densities of less than or equal to 97.5 percent resulted with approximately 2.5 equivalent percent of MgO, CeO2, Y2O3, and three mixtures involving these oxides. Densities of greater than or equal to 94 percent were obtained with approximately 0.62 equivalent percent of the same additives. At most temperatures, best sinterability (density maxima) was obtained with 1.2 to 2.5 equivalent percent additive.

  5. Trail Orienteering: An Effective Way To Practice Map Interpretation.

    ERIC Educational Resources Information Center

    Horizons, 1999

    1999-01-01

    Discusses a type of orienteering developed in Great Britain to allow people with physical disabilities to compete on equal terms. Sites are viewed from a wheelchair-accessible main route. The main skill is interpreting the maps at each site, not finding the sites. Describes differences from standard orienteering, how sites work, and essential…

  6. A high-density genetic map reveals variation in recombination rate across the genome of Daphnia magna.

    PubMed

    Dukić, Marinela; Berner, Daniel; Roesti, Marius; Haag, Christoph R; Ebert, Dieter

    2016-10-13

    Recombination rate is an essential parameter for many genetic analyses. Recombination rates are highly variable across species, populations, individuals and different genomic regions. Because of the profound influence that recombination can have on intraspecific diversity and interspecific divergence, characterization of recombination rate variation emerges as a key resource for population genomic studies and emphasises the importance of high-density genetic maps as tools for studying genome biology. Here we present such a high-density genetic map for Daphnia magna and analyse patterns of recombination rate across the genome. An F2 intercross panel was genotyped by restriction-site-associated DNA sequencing to construct the third-generation linkage map of D. magna. The resulting high-density map included 4037 markers covering 813 scaffolds and contigs that sum to 77 % of the currently available genome draft sequence (v2.4) and 55 % of the estimated genome size (238 Mb). The total genetic length of the map presented here is 1614.5 cM and the genome-wide recombination rate is estimated at 6.78 cM/Mb. Merging genetic and physical information, we consistently found that recombination rate estimates are high towards the peripheral parts of the chromosomes, while chromosome centres, harbouring the centromeres in D. magna, show very low recombination rate estimates. Due to its high density, the third-generation linkage map for D. magna can be coupled with the draft genome assembly, providing an essential tool for genome investigation in this model organism. Thus, our linkage map can be used for the ongoing improvements of the genome assembly but, more importantly, it has enabled us to characterize variation in recombination rate across the genome of D. magna for the first time. These new insights can provide valuable assistance in future studies of genome evolution, mapping of quantitative traits and population genetic studies.
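
    As a rough illustration of the genome-wide figure quoted above, the average recombination rate follows from dividing the total genetic map length by the physical genome length. The sketch below uses only the numbers reported in the abstract; the windowed, per-chromosome estimates in the study itself are derived from marker positions and are more involved.

    ```python
    # Genome-wide recombination rate as genetic map length over physical length.
    # Figures are those quoted in the abstract; this is an illustrative back-of-
    # the-envelope check, not the study's windowed estimation procedure.

    def recombination_rate_cM_per_Mb(genetic_length_cM: float, physical_length_mb: float) -> float:
        """Average recombination rate in cM/Mb."""
        return genetic_length_cM / physical_length_mb

    total_map_cM = 1614.5    # total genetic length of the D. magna map
    genome_size_mb = 238.0   # estimated genome size

    print(round(recombination_rate_cM_per_Mb(total_map_cM, genome_size_mb), 2))  # ~6.78 cM/Mb
    ```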

  7. An Investigation of the Relationship Between fMRI and ERP Source Localized Measurements of Brain Activity during Face Processing

    PubMed Central

    Richards, Todd; Webb, Sara Jane; Murias, Michael; Merkle, Kristen; Kleinhans, Natalia M.; Johnson, L. Clark; Poliakov, Andrew; Aylward, Elizabeth; Dawson, Geraldine

    2013-01-01

    Brain activity patterns during face processing have been extensively explored with functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs). ERP source localization adds a spatial dimension to the ERP time series recordings, which allows for a more direct comparison and integration with fMRI findings. The goals for this study were (1) to compare the spatial descriptions of neuronal activity during face processing obtained with fMRI and ERP source localization using low-resolution electro-magnetic tomography (LORETA), and (2) to use the combined information from source localization and fMRI to explore how the temporal sequence of brain activity during face processing is summarized in fMRI activation maps. fMRI and high-density ERP data were acquired in separate sessions for 17 healthy adult males for a face and object processing task. LORETA statistical maps for the comparison of viewing faces and viewing houses were coregistered and compared to fMRI statistical maps for the same conditions. The spatial locations of face processing-sensitive activity measured by fMRI and LORETA were found to overlap in a number of areas including the bilateral fusiform gyri, the right superior, middle and inferior temporal gyri, and the bilateral precuneus. Both the fMRI and LORETA solutions additionally demonstrated activity in regions that did not overlap. fMRI and LORETA statistical maps of face processing-sensitive brain activity were found to converge spatially primarily at LORETA solution latencies that were within 18 ms of the N170 latency. The combination of data from these techniques suggested that electrical brain activity at the latency of the N170 is highly represented in fMRI statistical maps. PMID:19322649

  8. A Catalog of Soft X-Ray Shadows, and More Contemplation of the 1/4 KeV Background

    NASA Technical Reports Server (NTRS)

    Snowden, S. L.; Freyberg, M. J.; Kuntz, K. D.; Sanders, W. T.

    1999-01-01

    This paper presents a catalog of shadows in the 1/4 keV soft X-ray diffuse background (SXRB) that were identified by a comparison between ROSAT All-Sky Survey maps and DIRBE-corrected IRAS 100 micron maps. These "shadows" are the negative correlations between the surface brightness of the SXRB and the column density of the Galactic interstellar medium (ISM) over limited angular regions (a few degrees in extent). We have compiled an extensive but not exhaustive set of 378 shadows in the polar regions of the Galaxy (|b| ≳ 20°) and determined their foreground and background X-ray intensities (relative to the absorbing features), and the respective hardness ratios of that emission. The portion of the sky examined to find these shadows was restricted in general to regions where the minimum column density is ≲ 4 × 10^20 H cm^-2, i.e., relatively high Galactic latitudes, and to regions away from distinct extended features in the SXRB such as supernova remnants and superbubbles. The results for the foreground intensities agree well with the recent results of a general analysis of the local 1/4 keV emission, while the background intensities show additional, but not unexpected, scatter. The results also confirm the existence of a gradient in the hardness of the local 1/4 keV emission along a Galactic center/anticenter axis, with a temperature that varies from 10^6.13 K to 10^6.02 K, respectively. The average temperature of the foreground component from this analysis is 10^6.08 K, compared to 10^6.06 K in the previous analysis. Likewise, the average temperatures of the distant component for the current and previous analyses are 10^6.06 K and 10^6.02 K, respectively. Finally, the results for the 1/4 keV halo emission are compared to the observed fluxes at 3/4 keV, where the lack of correlation suggests that the Galactic halo's 1/4 keV and 3/4 keV fluxes are likely produced by separate emission regions.

  9. Damage Assessment for Disaster Relief Efforts in Urban Areas Using Optical Imagery and LiDAR Data

    NASA Astrophysics Data System (ADS)

    Bahr, Thomas

    2014-05-01

    Imagery combined with LiDAR data and LiDAR-derived products provides a significant source of geospatial data which is of use in disaster mitigation planning. Feature-rich building inventories can be constructed from tools with 3D rooftop extraction capabilities, and two-dimensional outputs such as DSMs and DTMs can be used to generate layers to support routing efforts in Spatial Analyst and Network Analyst workflows. This allows us to leverage imagery and LiDAR tools for disaster mitigation or other scenarios. Software such as ENVI, ENVI LiDAR, and ArcGIS® Spatial and Network Analyst can therefore be used in conjunction to help emergency responders route ground teams in support of disaster relief efforts. This is exemplified by a case study against the background of the magnitude 7.0 earthquake that struck Haiti's capital city of Port-au-Prince on January 12, 2010. Soon after, both LiDAR data and an 8-band WorldView-2 scene were collected to map the disaster zone. The WorldView-2 scene was orthorectified and atmospherically corrected in ENVI prior to use. ENVI LiDAR was used to extract the DSM, DTM, buildings, and debris from the LiDAR point cloud. These datasets provide a foundation for the 2D portion of the analysis. As the data were acquired over an area of dense urbanization, the majority of ground surfaces are roads, and standing buildings and debris are largely separable on the basis of elevation classes. To extract the road network of Port-au-Prince, the LiDAR-based feature height information was fused with the WorldView-2 scene, using ENVI's object-based feature extraction approach. This road network was converted to a network dataset for further analysis in the ArcGIS Network Analyst. For the specific case of Haiti, the distribution of blue tarps, used as accommodations for refugees, provided a spectrally distinct target. Pure blue tarp pixel spectra were selected from the WorldView-2 scene and input as a reference into ENVI's Spectral Angle Mapper (SAM) classification routine, together with a water-shadow mask to prevent false positives. The resulting blue tarp shapefile was input into the ArcGIS Point Density tool, a feature of the Spatial Analyst toolbox. The final distribution map shows the density of blue tarps in Port-au-Prince and can be used to roughly delineate refugee camps. Analogously, a debris density map was generated after separating the debris elevation class. Combining this debris density map with the road network made it possible to construct an intact road network of Port-au-Prince within the ArcGIS Network Analyst. Moderate-density debris was used as a cost-increase barrier feature of the network dataset, and high-density debris was used as a total obstruction barrier feature. Based on this information, two hypothetical routing scenarios were analyzed. One involved routing a ground team between two different refugee concentration zones. For the other, potential helicopter landing zones were computed from the LiDAR-derived products and added as facility features to the Network Analyst. Routes from the helicopter landing zones to refugee concentration access points were solved using closest facility logic, again making use of the obstructed network.
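
    The barrier logic described above (moderate-density debris raises the traversal cost of a road segment, high-density debris blocks it entirely) is implemented in the study with the ArcGIS Network Analyst. The following is a minimal, library-agnostic sketch of the same idea using networkx; the toy road network, debris classes and cost multiplier are hypothetical values chosen only for illustration.

    ```python
    import networkx as nx

    # Toy road network: nodes are intersections, edges carry a length in metres
    # and a debris class ("none", "moderate", "high"). All values are made up.
    edges = [
        ("camp_A", "n1", 400, "none"),
        ("n1", "n2", 300, "moderate"),   # cost-increase barrier
        ("n1", "n3", 650, "none"),
        ("n2", "camp_B", 200, "none"),
        ("n3", "camp_B", 250, "high"),   # total obstruction barrier
    ]

    G = nx.Graph()
    for u, v, length_m, debris in edges:
        if debris == "high":
            continue                     # impassable: drop the edge entirely
        cost = length_m * (3.0 if debris == "moderate" else 1.0)  # penalise moderate debris
        G.add_edge(u, v, weight=cost)

    route = nx.shortest_path(G, "camp_A", "camp_B", weight="weight")
    print(route)  # ['camp_A', 'n1', 'n2', 'camp_B']
    ```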

  10. New Clinically Feasible 3T MRI Protocol to Discriminate Internal Brain Stem Anatomy.

    PubMed

    Hoch, M J; Chung, S; Ben-Eliezer, N; Bruno, M T; Fatterpekar, G M; Shepherd, T M

    2016-06-01

    Two new 3T MR imaging contrast methods, track density imaging and echo modulation curve T2 mapping, were combined with simultaneous multisection acquisition to reveal exquisite anatomic detail at 7 canonical levels of the brain stem. Compared with conventional MR imaging contrasts, many individual brain stem tracts and nuclear groups were directly visualized for the first time at 3T. This new approach is clinically practical and feasible (total scan time = 20 minutes), allowing better brain stem anatomic localization and characterization. © 2016 by American Journal of Neuroradiology.

  11. Material and device properties of superacid-treated monolayer molybdenum disulfide

    DOE PAGES

    Alharbi, Abdullah; Zahl, Percy; Shahrjerdi, Davood

    2017-01-16

    Here, we study the effects of chemical treatment with bis(trifluoromethane) sulfonimide superacid on material and device properties of monolayer molybdenum disulfide grown by chemical vapor deposition. Our spatially resolved photoluminescence (PL) measurements and device studies reveal two key findings due to the chemical treatment: (1) noticeable transformation of trions to neutral excitons, and (2) over 7-fold reduction in the density of mid-gap trap states. Specifically, a combination of scanning Auger microscopy and PL mapping reveals that the superacid treatment is effective in passivating the sulfur-deficient regions.

  12. A New, Large-scale Map of Interstellar Reddening Derived from H I Emission

    NASA Astrophysics Data System (ADS)

    Lenz, Daniel; Hensley, Brandon S.; Doré, Olivier

    2017-09-01

    We present a new map of interstellar reddening, covering the 39% of the sky with low H I column densities (N_HI < 4 × 10^20 cm^-2, or E(B-V) ≈ 45 mmag) at 16.1' resolution, based on all-sky observations of Galactic H I emission by the HI4PI Survey. In this low-column-density regime, we derive a characteristic value of N_HI/E(B-V) = 8.8 × 10^21 cm^-2 mag^-1 for gas with |v_LSR| < 90 km s^-1 and find no significant reddening associated with gas at higher velocities. We compare our H I-based reddening map with the Schlegel et al. (SFD) reddening map and find them consistent to within a scatter of ≃ 5 mmag. Further, the differences between our map and the SFD map are in excellent agreement with the low-resolution (4.5°) corrections to the SFD map derived by Peek and Graves based on observed reddening toward passive galaxies. We therefore argue that our H I-based map provides the most accurate interstellar reddening estimates in the low-column-density regime to date. Our reddening map is made publicly available at doi.org/10.7910/DVN/AFJNWJ.
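
    A minimal sketch of how the quoted ratio converts an H I column density into a reddening estimate in the low-column-density regime; the ratio and validity threshold are the values given above, while the helper function name and the example input are illustrative.

    ```python
    # Convert an H I column density to E(B-V) using the ratio quoted in the abstract,
    # valid only in the low-column-density regime that the map covers.

    NHI_PER_EBV = 8.8e21   # cm^-2 mag^-1, derived for |v_LSR| < 90 km/s gas
    NHI_LIMIT = 4.0e20     # cm^-2, upper limit of the mapped regime (E(B-V) ~ 45 mmag)

    def ebv_from_nhi(nhi_cm2: float) -> float:
        """Reddening E(B-V) in magnitudes from an H I column density in cm^-2."""
        if nhi_cm2 > NHI_LIMIT:
            raise ValueError("column density outside the low-N_HI regime of the map")
        return nhi_cm2 / NHI_PER_EBV

    print(f"{ebv_from_nhi(2.0e20) * 1e3:.1f} mmag")  # ~22.7 mmag
    ```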

  13. A numerical exercise in musical scales

    NASA Astrophysics Data System (ADS)

    Hartmann, George C.

    1987-03-01

    This paper investigates why the 12-note scale, having equal intervals, seems to be the best representation of scales constructed from purely harmonic intervals. Is it possible that other equal-temperament scales with more or fewer than 12 notes would serve just as well? The investigation is done by displaying the difference between a set of harmonic notes and equal-interval scales having n notes per octave. The difference is small when n equals 12, but also when n equals 19 and 29. The number density of notes per unit frequency interval is also investigated.
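
    The comparison described above can be reproduced in outline: express each purely harmonic interval in cents and measure its distance to the nearest step of an n-note equal-temperament scale. The sketch below is an illustration under stated assumptions, not the paper's exact procedure; in particular the set of harmonic ratios and the worst-case error measure are choices made here.

    ```python
    import math

    # Just-intonation ratios to approximate (an assumed set: fifth, fourth,
    # major/minor third, major sixth); the paper's exact harmonic set may differ.
    JUST_RATIOS = [3/2, 4/3, 5/4, 6/5, 5/3]

    def max_error_cents(n: int) -> float:
        """Worst-case deviation (in cents) of the just ratios from an n-note
        equal-temperament scale, where one octave spans 1200 cents."""
        worst = 0.0
        for r in JUST_RATIOS:
            cents = 1200 * math.log2(r)   # position of the just interval
            step = 1200 / n               # size of one equal-tempered step
            error = abs(cents - step * round(cents / step))
            worst = max(worst, error)
        return worst

    for n in (7, 12, 19, 24, 29, 31):
        print(n, round(max_error_cents(n), 1))
    ```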

  14. Updating Mars-GRAM to Increase the Accuracy of Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hiliary L.; Justus, C. G.; Badger, Andrew M.

    2010-01-01

    The Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model widely used for diverse mission applications. Mars-GRAM's perturbation modeling capability is commonly used, in a Monte-Carlo mode, to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). During the Mars Science Laboratory (MSL) site selection process, it was discovered that Mars-GRAM, when used for sensitivity studies for MapYear=0 and large optical depth values such as tau=3, is less than realistic. From the surface to 80 km altitude, Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM). MGCM results that were used for Mars-GRAM with MapYear set to 0 were from a MGCM run with a fixed value of tau=3 for the entire year at all locations. This has resulted in an imprecise atmospheric density at all altitudes. As a preliminary fix to this pressure-density problem, density factor values were determined for tau=0.3, 1 and 3 that will adjust the input values of MGCM MapYear 0 pressure and density to achieve a better match of Mars-GRAM MapYear 0 with Thermal Emission Spectrometer (TES) observations for MapYears 1 and 2 at comparable dust loading. Currently, these density factors are fixed values for all latitudes and Ls. Results will be presented from work being done to derive better multipliers by including variation with latitude and/or Ls by comparison of MapYear 0 output directly against TES limb data. The addition of these more precise density factors to Mars-GRAM 2005 Release 1.4 will improve the results of the sensitivity studies done for large optical depths.
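
    A minimal sketch of how latitude- and Ls-dependent density factors could be applied as multipliers to MGCM MapYear 0 densities; the lookup table values, grid spacing and interpolation scheme below are hypothetical placeholders, not the factors actually derived from the TES comparison.

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical table of density correction factors on a coarse (latitude, Ls)
    # grid for one optical depth (e.g. tau = 3); illustrative values only.
    lat_grid = np.array([-90.0, -45.0, 0.0, 45.0, 90.0])   # degrees
    ls_grid = np.array([0.0, 90.0, 180.0, 270.0, 360.0])   # areocentric longitude, degrees
    factors = np.array([
        [0.90, 0.92, 0.95, 0.93, 0.90],
        [0.94, 0.97, 1.00, 0.98, 0.94],
        [0.98, 1.02, 1.05, 1.03, 0.98],
        [0.94, 0.97, 1.00, 0.98, 0.94],
        [0.90, 0.92, 0.95, 0.93, 0.90],
    ])  # rows: latitude, columns: Ls

    interp = RegularGridInterpolator((lat_grid, ls_grid), factors)

    def corrected_density(rho_mgcm: float, lat_deg: float, ls_deg: float) -> float:
        """Scale an MGCM MapYear 0 density by a latitude/Ls-dependent factor."""
        return rho_mgcm * interp([[lat_deg, ls_deg]])[0]

    print(corrected_density(1.5e-2, 20.0, 135.0))  # kg/m^3, illustrative input
    ```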

  15. Plant pigment types, distributions, and influences on shallow water submerged aquatic vegetation mapping

    NASA Astrophysics Data System (ADS)

    Hall, Carlton R.; Bostater, Charles R., Jr.; Virnstein, Robert

    2004-11-01

    Development of robust protocols for mapping shallow water habitats using hyperspectral imagery requires knowledge of the absorbing and scattering features present in the environment. These include, but are not limited to, water quality parameters, phytoplankton concentrations and species, submerged aquatic vegetation (SAV) species and densities, epiphytic growth on SAV, benthic microalgae and substrate reflectance characteristics. In the Indian River Lagoon, FL, USA, we conceptualize the system as having three basic layers: the water column, the SAV bed, and the bottom. Each layer is occupied by plants with their associated light-absorbing pigments, which occur in varying proportions and concentrations. Phytoplankton communities are composed primarily of diatoms, dinoflagellates, and picoplanktonic cyanobacteria. SAV beds, including flowering plants and green, red, and brown macroalgae, exist along density gradients ranging in coverage from 0-100%. SAV beds may be monotypic or, more typically, mixtures of several species that may or may not be covered in epiphytes. Shallow water benthic substrates are colonized by periphyton communities that include diatoms, dinoflagellates, chlorophytes and cyanobacteria. Inflection spectra created from AISA hyperspectral data display a combination of features related to water and select plant pigment absorption peaks.

  16. Path and site effects deduced from merged transfrontier internet macroseismic data of two recent M4 earthquakes in northwest Europe using a grid cell approach

    NASA Astrophysics Data System (ADS)

    Van Noten, Koen; Lecocq, Thomas; Sira, Christophe; Hinzen, Klaus-G.; Camelbeeck, Thierry

    2017-04-01

    The online collection of earthquake reports in Europe is strongly fragmented across numerous seismological agencies. This paper demonstrates how collecting and merging online institutional macroseismic data strongly improves the density of observations and the quality of intensity shaking maps. Instead of using ZIP code Community Internet Intensity Maps, we geocode individual response addresses for location improvement, assign intensities to grouped answers within 100 km2 grid cells, and generate intensity attenuation relations from the grid cell intensities. Grid cell intensity maps are less subjective and illustrate a more homogeneous intensity distribution than communal ZIP code intensity maps. Using grid cells for ground motion analysis offers an advanced method for exchanging transfrontier equal-area intensity data without sharing any personal information. The applicability of the method is demonstrated on the felt responses of two clearly felt earthquakes: the 8 September 2011 ML 4.3 (Mw 3.7) Goch (Germany) and the 22 May 2015 ML 4.2 (Mw 3.7) Ramsgate (UK) earthquakes. Both events resulted in a non-circular distribution of intensities which is not explained by geometrical amplitude attenuation alone but illustrates an important low-pass filtering due to the sedimentary cover above the Anglo-Brabant Massif and in the Lower Rhine Graben. Our study illustrates the effect of increasing bedrock depth on intensity attenuation and the importance of the WNW-ESE Caledonian structural axis of the Anglo-Brabant Massif for seismic wave propagation. Seismic waves are less attenuated - high Q - along the strike of a tectonic structure but are more strongly attenuated - low Q - perpendicular to this structure, particularly when they cross rheologically different seismotectonic units separated by crustal-rooted faults.
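
    A minimal sketch of the grid-cell aggregation step: geocoded felt reports are binned into 10 km × 10 km cells (100 km²) and each cell receives one representative intensity. The metric coordinates, the example report data and the use of an arithmetic mean as the aggregation rule are assumptions made here for illustration, not necessarily the authors' exact procedure.

    ```python
    from collections import defaultdict

    # Assign geocoded felt reports to 10 km x 10 km grid cells (100 km^2) and
    # compute one representative intensity per cell. Coordinates are assumed to
    # already be in a metric projection (easting/northing in metres).
    CELL_SIZE_M = 10_000

    def cell_of(easting_m: float, northing_m: float) -> tuple[int, int]:
        """Integer grid-cell index of a point."""
        return (int(easting_m // CELL_SIZE_M), int(northing_m // CELL_SIZE_M))

    def grid_cell_intensities(reports):
        """reports: iterable of (easting_m, northing_m, intensity)."""
        cells = defaultdict(list)
        for e, n, intensity in reports:
            cells[cell_of(e, n)].append(intensity)
        return {cell: sum(vals) / len(vals) for cell, vals in cells.items()}

    reports = [(651_230, 5_612_480, 4), (652_900, 5_613_100, 3), (671_500, 5_640_200, 2)]
    print(grid_cell_intensities(reports))  # {(65, 561): 3.5, (67, 564): 2.0}
    ```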

  17. Regional Characterization of Soil Properties via a Combination of Methods from Remote Sensing, Geophysics and Geopedology

    NASA Astrophysics Data System (ADS)

    Meyer, Uwe; Fries, Elke; Frei, Michaela

    2016-04-01

    Soil is one of the most precious resources on Earth. Preserving, using and enriching soils are highly complex processes that fundamentally need a sound regional data base. Many countries lack this sort of extensive data, or the existing data urgently need updating where land use has recently changed in major ways. The project "RECHARBO" (Regional Characterization of Soil Properties) aims at combining methods from remote sensing, geophysics and geopedology in order to develop a new system for mapping soils on a regional scale in a quick and efficient manner. First tests will be performed on existing soil monitoring districts, using newly available sensing systems as well as established techniques. In particular, hyperspectral and infrared data measured from satellites or airborne platforms shall be combined. Moreover, a systematic correlation between hyperspectral imagery and gamma-ray spectroscopy shall be established. These recordings will be compared and correlated with measurements on the ground and on soil samples to capture properties such as soil moisture, soil density and specific resistance, plus analytical properties such as clay content, inorganic background and organic matter. The goal is to generate a system that enables users to map soil patterns on a regional scale using airborne or satellite data and to determine their characteristics with only a limited number of soil samples.

  18. Mapping Tree Density at the Global Scale

    NASA Astrophysics Data System (ADS)

    Covey, K. R.; Crowther, T. W.; Glick, H.; Bettigole, C.; Bradford, M.

    2015-12-01

    The global extent and distribution of forest trees is central to our understanding of the terrestrial biosphere. We provide the first spatially continuous map of forest tree density at a global scale. This map reveals that the global number of trees is approximately 3.04 trillion, an order of magnitude higher than the previous estimate. Of these trees, approximately 1.39 trillion exist in tropical and subtropical regions, with 0.74 and 0.61 trillion in boreal and temperate regions, respectively. Biome-level trends in tree density demonstrate the importance of climate and topography in controlling local tree densities at finer scales, as well as the overwhelming impact of humans across most of the world. Based on our projected tree densities, we estimate that deforestation is currently responsible for removing over 15 billion trees each year, and the global number of trees has fallen by approximately 46% since the start of human civilization.

  19. Mapping tree density at a global scale

    NASA Astrophysics Data System (ADS)

    Crowther, T. W.; Glick, H. B.; Covey, K. R.; Bettigole, C.; Maynard, D. S.; Thomas, S. M.; Smith, J. R.; Hintler, G.; Duguid, M. C.; Amatulli, G.; Tuanmu, M.-N.; Jetz, W.; Salas, C.; Stam, C.; Piotto, D.; Tavani, R.; Green, S.; Bruce, G.; Williams, S. J.; Wiser, S. K.; Huber, M. O.; Hengeveld, G. M.; Nabuurs, G.-J.; Tikhonova, E.; Borchardt, P.; Li, C.-F.; Powrie, L. W.; Fischer, M.; Hemp, A.; Homeier, J.; Cho, P.; Vibrans, A. C.; Umunay, P. M.; Piao, S. L.; Rowe, C. W.; Ashton, M. S.; Crane, P. R.; Bradford, M. A.

    2015-09-01

    The global extent and distribution of forest trees is central to our understanding of the terrestrial biosphere. We provide the first spatially continuous map of forest tree density at a global scale. This map reveals that the global number of trees is approximately 3.04 trillion, an order of magnitude higher than the previous estimate. Of these trees, approximately 1.39 trillion exist in tropical and subtropical forests, with 0.74 trillion in boreal regions and 0.61 trillion in temperate regions. Biome-level trends in tree density demonstrate the importance of climate and topography in controlling local tree densities at finer scales, as well as the overwhelming effect of humans across most of the world. Based on our projected tree densities, we estimate that over 15 billion trees are cut down each year, and the global number of trees has fallen by approximately 46% since the start of human civilization.

  20. Mapping tree density at a global scale.

    PubMed

    Crowther, T W; Glick, H B; Covey, K R; Bettigole, C; Maynard, D S; Thomas, S M; Smith, J R; Hintler, G; Duguid, M C; Amatulli, G; Tuanmu, M-N; Jetz, W; Salas, C; Stam, C; Piotto, D; Tavani, R; Green, S; Bruce, G; Williams, S J; Wiser, S K; Huber, M O; Hengeveld, G M; Nabuurs, G-J; Tikhonova, E; Borchardt, P; Li, C-F; Powrie, L W; Fischer, M; Hemp, A; Homeier, J; Cho, P; Vibrans, A C; Umunay, P M; Piao, S L; Rowe, C W; Ashton, M S; Crane, P R; Bradford, M A

    2015-09-10

    The global extent and distribution of forest trees is central to our understanding of the terrestrial biosphere. We provide the first spatially continuous map of forest tree density at a global scale. This map reveals that the global number of trees is approximately 3.04 trillion, an order of magnitude higher than the previous estimate. Of these trees, approximately 1.39 trillion exist in tropical and subtropical forests, with 0.74 trillion in boreal regions and 0.61 trillion in temperate regions. Biome-level trends in tree density demonstrate the importance of climate and topography in controlling local tree densities at finer scales, as well as the overwhelming effect of humans across most of the world. Based on our projected tree densities, we estimate that over 15 billion trees are cut down each year, and the global number of trees has fallen by approximately 46% since the start of human civilization.
