Sample records for local scale setting

  1. How complexity science can inform scale-up and spread in health care: understanding the role of self-organization in variation across local contexts.

    PubMed

    Lanham, Holly Jordan; Leykum, Luci K; Taylor, Barbara S; McCannon, C Joseph; Lindberg, Curt; Lester, Richard T

    2013-09-01

    Health care systems struggle to scale-up and spread effective practices across diverse settings. Failures in scale-up and spread (SUS) are often attributed to a lack of consideration for variation in local contexts among different health care delivery settings. We argue that SUS occurs within complex systems and that self-organization plays an important role in the success, or failure, of SUS. Self-organization is a process whereby local interactions give rise to patterns of organizing. These patterns may be stable or unstable, and they evolve over time. Self-organization is a major contributor to local variations across health care delivery settings. Thus, better understanding of self-organization in the context of SUS is needed. We re-examine two cases of successful SUS: 1) the application of a mobile phone short message service intervention to improve adherence to medications during HIV treatment scale up in resource-limited settings, and 2) MRSA prevention in hospital inpatient settings in the United States. Based on insights from these cases, we discuss the role of interdependencies and sensemaking in leveraging self-organization in SUS initiatives. We argue that self-organization, while not completely controllable, can be influenced, and that improving interdependencies and sensemaking among SUS stakeholders is a strategy for facilitating self-organization processes that increase the probability of spreading effective practices across diverse settings. Published by Elsevier Ltd.

  2. Correspondence: Reply to ‘Phantom phonon localization in relaxors’

    DOE PAGES

    Manley, Michael E.; Abernathy, Douglas L.; Budai, John D.

    2017-12-05

    The Correspondence by Gehring et al. mistakes Anderson phonon localization for the concept of an atomic-scale local mode. An atomic-scale local mode refers to a single atom vibrating on its own within a crystal. Such a local mode will have an almost flat intensity profile, but this is not the same as phonon localization. Anderson localization is a wave interference effect in a disordered system that results in waves becoming spatially localized. The length scale of the localized waves is set by the wavelength, which is approximately 2 nm in this case. This larger length scale in real space means narrower intensity profiles in reciprocal space. Here, we conclude that the claims in the Correspondence by Gehring et al. are incorrect because they mistakenly assume that the length scale for Anderson localization is atomic, and because the experimental observations rule out multiple scattering as the origin.

  3. Correspondence: Reply to ‘Phantom phonon localization in relaxors’

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manley, Michael E.; Abernathy, Douglas L.; Budai, John D.

    The Correspondence by Gehring et al. mistakes Anderson phonon localization for the concept of an atomic-scale local mode. An atomic-scale local mode refers to a single atom vibrating on its own within a crystal. Such a local mode will have an almost flat intensity profile, but this is not the same as phonon localization. Anderson localization is a wave interference effect in a disordered system that results in waves becoming spatially localized. The length scale of the localized waves is set by the wavelength, which is approximately 2 nm in this case. This larger length scale in real space means narrower intensity profiles in reciprocal space. Here, we conclude that the claims in the Correspondence by Gehring et al. are incorrect because they mistakenly assume that the length scale for Anderson localization is atomic, and because the experimental observations rule out multiple scattering as the origin.

  4. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.

    PubMed

    Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai

    2008-03-15

    A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. Starting from randomly set positions, the virtual springs force the particles, and correspondingly the node position estimates, to move toward the original positions. Therefore, a blind node position can be determined from the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network size. Three patches are proposed to avoid local optimization, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite the increase of the network size. The time consumption has also been shown to remain almost constant, since the calculation steps are almost unrelated to the network size.
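
    As a rough illustration of the spring-model idea described above, the sketch below relaxes a single blind node's position against virtual springs to neighbors with known positions; the step size, iteration count, and toy anchors are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of a spring-model position update for one blind node,
# assuming known neighbor positions and measured inter-node distances
# (names and the step size `eta` are illustrative, not from the paper).
import numpy as np

def spring_update(pos, neighbor_pos, measured_dist, eta=0.1, iters=200):
    """Relax one node's 2D position against virtual springs to its neighbors."""
    pos = np.asarray(pos, dtype=float)
    neighbor_pos = np.asarray(neighbor_pos, dtype=float)
    measured_dist = np.asarray(measured_dist, dtype=float)
    for _ in range(iters):
        diff = neighbor_pos - pos                      # vectors to neighbors
        cur = np.linalg.norm(diff, axis=1)             # current distances
        cur = np.where(cur == 0, 1e-9, cur)
        # Hooke-like force: pull/push along each link by the distance error
        force = ((cur - measured_dist) / cur)[:, None] * diff
        pos = pos + eta * force.sum(axis=0)
    return pos

# Example: three anchors at known positions, noisy range measurements
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [7.1, 7.0, 7.2]                               # true node near (5, 5)
print(spring_update((1.0, 1.0), anchors, ranges))
```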

  5. A machine learning approach for efficient uncertainty quantification using multiscale methods

    NASA Astrophysics Data System (ADS)

    Chan, Shing; Elsheikh, Ahmed H.

    2018-02-01

    Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.
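
    A schematic of the surrogate idea in this abstract: a small regressor is fitted on (permeability patch, basis function) pairs obtained from a few solved local problems and then reused to generate subsequent basis functions cheaply. The dimensions, the MLP size, and the synthetic stand-in data below are assumptions, not the authors' setup.

```python
# Illustrative sketch: learn a map from a local permeability patch to its
# coarse-scale basis function, then reuse the regressor instead of solving
# each local problem.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, patch, basis = 500, 8 * 8, 9 * 9            # toy dimensions
K = rng.lognormal(size=(n_samples, patch))             # permeability patches
# Stand-in "training labels": in practice these come from solving the
# local problems (e.g., dual-grid cell problems) for a subset of samples.
Phi = np.tanh(K @ rng.normal(size=(patch, basis)) * 0.1)

model = MLPRegressor(hidden_layer_sizes=(128,), max_iter=2000, random_state=0)
model.fit(K[:400], Phi[:400])                          # fit on solved samples
basis_pred = model.predict(K[400:])                    # cheap surrogate for the rest
print(basis_pred.shape)                                # (100, 81)
```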

  6. DEVELOPMENT OF RIPARIAN ZONE INDICATORS (INT. GRANT)

    EPA Science Inventory

    Landscape features (e.g., land use) influence water quality characteristics on a variety of spatial scales. For example, while land use is controlled by anthropogenic features at a local scale, geologic features are set at larger spatial, and longer temporal scales. Individual ...

  7. Adaptive Spot Detection With Optimal Scale Selection in Fluorescence Microscopy Images.

    PubMed

    Basset, Antoine; Boulanger, Jérôme; Salamero, Jean; Bouthemy, Patrick; Kervrann, Charles

    2015-11-01

    Accurately detecting subcellular particles in fluorescence microscopy is of primary interest for further quantitative analysis such as counting, tracking, or classification. Our primary goal is to segment vesicles likely to share nearly the same size in fluorescence microscopy images. Our method termed adaptive thresholding of Laplacian of Gaussian (LoG) images with autoselected scale (ATLAS) automatically selects the optimal scale corresponding to the most frequent spot size in the image. Four criteria are proposed and compared to determine the optimal scale in a scale-space framework. Then, the segmentation stage amounts to thresholding the LoG of the intensity image. In contrast to other methods, the threshold is locally adapted given a probability of false alarm (PFA) specified by the user for the whole set of images to be processed. The local threshold is automatically derived from the PFA value and local image statistics estimated in a window whose size is not a critical parameter. We also propose a new data set for benchmarking, consisting of six collections of one hundred images each, which exploits backgrounds extracted from real microscopy images. We have carried out an extensive comparative evaluation on several data sets with ground-truth, which demonstrates that ATLAS outperforms existing methods. ATLAS does not need any fine parameter tuning and requires very low computation time. Convincing results are also reported on real total internal reflection fluorescence microscopy images.
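
    The sketch below shows the general shape of the LoG-filter-then-threshold pipeline, assuming a simplified scale-selection rule and a local mean-plus-k-sigma threshold in place of the paper's four criteria and PFA-derived threshold.

```python
# A minimal sketch of the LoG-and-threshold idea: filter at several scales,
# pick the scale with the strongest overall spot response, then threshold
# using local statistics. The scale-selection criterion and threshold rule
# here are simplifications, not the exact ATLAS criteria.
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter

def detect_spots(image, sigmas=(1, 2, 3, 4), k=3.0, win=31):
    image = image.astype(float)
    # scale selection: strongest scale-normalized LoG response (simplified)
    responses = [(-s**2 * gaussian_laplace(image, s)) for s in sigmas]
    best = max(responses, key=lambda r: r.max())
    # locally adaptive threshold from windowed mean and variance of the response
    mu = uniform_filter(best, win)
    var = uniform_filter(best**2, win) - mu**2
    thresh = mu + k * np.sqrt(np.clip(var, 0, None))   # k plays the role of the PFA setting
    return best > thresh

mask = detect_spots(np.random.rand(128, 128))
print(mask.sum(), "pixels flagged as spots")
```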

  8. Tailoring Healthy Workplace Interventions to Local Healthcare Settings: A Complexity Theory-Informed Workplace of Well-Being Framework

    PubMed Central

    Brand, Sarah L.; Fleming, Lora E.; Wyatt, Katrina M.

    2015-01-01

    Many healthy workplace interventions have been developed for healthcare settings to address the consistently low scores of healthcare professionals on assessments of mental and physical well-being. Complex healthcare settings present challenges for the scale-up and spread of successful interventions from one setting to another. Despite general agreement regarding the importance of the local setting in affecting intervention success across different settings, there is no consensus on what it is about a local setting that needs to be taken into account to design healthy workplace interventions appropriate for different local settings. Complexity theory principles were used to understand a workplace as a complex adaptive system and to create a framework of eight domains (system characteristics) that affect the emergence of system-level behaviour. This Workplace of Well-being (WoW) framework is responsive and adaptive to local settings and allows a shared understanding of the enablers and barriers to behaviour change by capturing local information for each of the eight domains. We use the results of applying the WoW framework to one workplace, a UK National Health Service ward, to describe the utility of this approach in informing design of setting-appropriate healthy workplace interventions that create workplaces conducive to healthy behaviour change. PMID:26380358

  9. Tailoring Healthy Workplace Interventions to Local Healthcare Settings: A Complexity Theory-Informed Workplace of Well-Being Framework.

    PubMed

    Brand, Sarah L; Fleming, Lora E; Wyatt, Katrina M

    2015-01-01

    Many healthy workplace interventions have been developed for healthcare settings to address the consistently low scores of healthcare professionals on assessments of mental and physical well-being. Complex healthcare settings present challenges for the scale-up and spread of successful interventions from one setting to another. Despite general agreement regarding the importance of the local setting in affecting intervention success across different settings, there is no consensus on what it is about a local setting that needs to be taken into account to design healthy workplace interventions appropriate for different local settings. Complexity theory principles were used to understand a workplace as a complex adaptive system and to create a framework of eight domains (system characteristics) that affect the emergence of system-level behaviour. This Workplace of Well-being (WoW) framework is responsive and adaptive to local settings and allows a shared understanding of the enablers and barriers to behaviour change by capturing local information for each of the eight domains. We use the results of applying the WoW framework to one workplace, a UK National Health Service ward, to describe the utility of this approach in informing design of setting-appropriate healthy workplace interventions that create workplaces conducive to healthy behaviour change.

  10. Non-Gaussianity and Excursion Set Theory: Halo Bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adshead, Peter; Baxter, Eric J.; Dodelson, Scott

    2012-09-01

    We study the impact of primordial non-Gaussianity generated during inflation on the bias of halos using excursion set theory. We recapture the familiar result that the bias scales as $k^{-2}$ on large scales for local type non-Gaussianity but explicitly identify the approximations that go into this conclusion and the corrections to it. We solve the more complicated problem of non-spherical halos, for which the collapse threshold is scale dependent.
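
    For reference, the $k^{-2}$ scaling quoted above is usually written in the literature, for local-type non-Gaussianity of amplitude $f_{\rm NL}$ and Gaussian linear bias $b_1$, as the scale-dependent bias correction below; this standard form is supplied for context and is not taken verbatim from the paper.

```latex
\Delta b(k) \;\simeq\; \frac{3\, f_{\rm NL}\,(b_1 - 1)\,\delta_c\,\Omega_m H_0^2}{c^2\, k^2\, T(k)\, D(z)},
\qquad \delta_c \approx 1.686,
```

    where $T(k)$ is the matter transfer function and $D(z)$ the linear growth factor; on large scales $T(k) \to 1$ and the correction grows as $k^{-2}$.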

  11. Out of the net: An agent-based model to study human movements influence on local-scale malaria transmission.

    PubMed

    Pizzitutti, Francesco; Pan, William; Feingold, Beth; Zaitchik, Ben; Álvarez, Carlos A; Mena, Carlos F

    2018-01-01

    Though malaria control initiatives have markedly reduced malaria prevalence in recent decades, global eradication is far from actuality. Recent studies show that environmental and social heterogeneities in low-transmission settings have an increased weight in shaping malaria micro-epidemiology. New integrated and more localized control strategies should be developed and tested. Here we present a set of agent-based models designed to study the influence of local scale human movements on local scale malaria transmission in a typical Amazon environment, where malaria transmission is low and strongly connected with seasonal riverine flooding. The agent-based simulations show that the overall malaria incidence is essentially not influenced by local scale human movements. In contrast, the locations of malaria high-risk spatial hotspots heavily depend on human movements, because simulated malaria hotspots are mainly centered on farms, where laborers work during the day. The agent-based models are then used to test the effectiveness of two different malaria control strategies, both designed to reduce local scale malaria incidence by targeting hotspots. In the first control scenario, protection against mosquito bites is given to people who, during the simulation, enter at least once into hotspots identified from the actual sites where individuals were infected. The second scenario involves treating people entering hotspots calculated under the assumption that the infection site of every infected individual is located in the household where that individual lives. Simulations show that both scenarios perform better in controlling malaria than a randomized treatment, although targeting household hotspots shows slightly better performance.
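
    A deliberately minimal agent-based sketch of the hotspot effect described above: individuals who commute into a high-transmission cell (e.g., a farm) accumulate infections faster than those who stay home. All parameters are invented for illustration.

```python
# Toy sketch: commuters into a "hotspot" cell face a higher daily infection
# hazard than those who stay home. Parameters and structure are illustrative.
import random

random.seed(1)
P_HOME, P_HOTSPOT, DAYS, N = 0.001, 0.02, 180, 1000
commuters = set(random.sample(range(N), 300))          # workers who visit the farm hotspot
infected = set()

for day in range(DAYS):
    for person in range(N):
        if person in infected:
            continue
        p = P_HOTSPOT if person in commuters else P_HOME
        if random.random() < p:
            infected.add(person)

inc_commuters = len(infected & commuters) / len(commuters)
inc_others = len(infected - commuters) / (N - len(commuters))
print(f"attack rate, commuters: {inc_commuters:.2f}; non-commuters: {inc_others:.2f}")
```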

  12. Estuarine abandoned channel sedimentation rates record peak fluvial discharge magnitudes

    NASA Astrophysics Data System (ADS)

    Gray, A. B.; Pasternack, G. B.; Watson, E. B.

    2018-04-01

    Fluvial sediment deposits can provide useful records of integrated watershed expressions including flood event magnitudes. However, floodplain and estuarine sediment deposits evolve through the interaction of watershed/marine sediment supply and transport characteristics with the local depositional environment. Thus extraction of watershed scale signals depends upon accounting for local scale effects on sediment deposition rates and character. This study presents an examination of the balance of fluvial sediment dynamics and local scale hydro-geomorphic controls on alluviation of an abandoned channel in the Salinas River Lagoon, CA. A set of three sediment cores contained discrete flood deposits that corresponded to the largest flood events over the period of accretion from 1969 to 2007. Sedimentation rates scaled with peak flood discharge and event scale sediment flux, but were not influenced by longer scale hydro-meteorological activities such as annual precipitation and water yield. Furthermore, the particle size distributions of flood deposits showed no relationship to event magnitudes. Both the responsiveness of sedimentation and unresponsiveness of particle size distributions to hydro-sedimentological event magnitudes appear to be controlled by aspects of local geomorphology that influence the connectivity of the abandoned channel to the Salinas River mainstem. Well-developed upstream plug bar formation precluded the entrainment of coarser bedload into the abandoned channel, while Salinas River mouth conditions (open/closed) in conjunction with tidal and storm surge conditions may play a role in influencing the delivery of coarser suspended load fractions. Channel-adjacent sediment deposits can be valuable records of hydro-meteorological and sedimentological regimes, but local depositional settings may dominate the character of short-term (interdecadal) signatures.

  13. Scaling depth-induced wave-breaking in two-dimensional spectral wave models

    NASA Astrophysics Data System (ADS)

    Salmon, J. E.; Holthuijsen, L. H.; Zijlema, M.; van Vledder, G. Ph.; Pietrzak, J. D.

    2015-03-01

    Wave breaking in shallow water is still poorly understood and needs to be better parameterized in 2D spectral wave models. Significant wave heights over horizontal bathymetries are typically under-predicted in locally generated wave conditions and over-predicted in non-locally generated conditions. A joint scaling dependent on both local bottom slope and normalized wave number is presented and is shown to resolve these issues. Compared to the 12 wave breaking parameterizations considered in this study, this joint scaling demonstrates significant improvements, up to ∼50% error reduction, over 1D horizontal bathymetries for both locally and non-locally generated waves. In order to account for the inherent differences between uni-directional (1D) and directionally spread (2D) wave conditions, an extension of the wave breaking dissipation models is presented. By including the effects of wave directionality, rms-errors for the significant wave height are reduced for the best performing parameterizations in conditions with strong directional spreading. With this extension, our joint scaling improves modeling skill for significant wave heights over a verification data set of 11 different 1D laboratory bathymetries, 3 shallow lakes and 4 coastal sites. The corresponding averaged normalized rms-error for significant wave height in the 2D cases varied between 8% and 27%. In comparison, using the default setting with a constant scaling, as used in most presently operating 2D spectral wave models, gave equivalent errors between 15% and 38%.

  14. Relations of Water Quality to Agricultural Chemical Use and Environmental Setting at Various Scales - Results from Selected Studies of the National Water-Quality Assessment Program

    USGS Publications Warehouse

    ,

    2008-01-01

    In 1991, the U.S. Geological Survey (USGS) began studies of 51 major river basins and aquifers across the United States as part of the National Water-Quality Assessment (NAWQA) Program to provide scientifically sound information for managing the Nation's water resources. The major goals of the NAWQA Program are to assess the status and long-term trends of the Nation's surface- and ground-water quality and to understand the natural and human factors that affect it (Gilliom and others, 1995). In 2001, the NAWQA Program began a second decade of intensive water-quality assessments. The 42 study units for this second decade were selected to represent a wide range of important hydrologic environments and potential contaminant sources. These NAWQA studies continue to address the goals of the first decade of the assessments to determine how water-quality conditions are changing over time. In addition to local- and regional-scale studies, NAWQA began to analyze and synthesize water-quality status and trends at the principal aquifer and major river-basin scales. This fact sheet summarizes results from four NAWQA studies that relate water quality to agricultural chemical use and environmental setting at these various scales: * Comparison of ground-water quality in northern and southern High Plains agricultural settings (principal aquifer scale); * Distribution patterns of pesticides and degradates in rain (local scale); * Occurrence of pesticides in shallow ground water underlying four agricultural areas (local and regional scales); and * Trends in nutrients and sediment over time in the Missouri River and its tributaries (major river-basin scale).

  15. REGIONAL-SCALE FISH ECOLOGY IN NORTHEASTERN USA LAKES USING A PROBABILITY-BASED SURVEY DESIGN

    EPA Science Inventory

    Historically, most fish ecology has been done at local scales. As these data accumulate, the need to set this knowledge into landscape, regional, and historical context grows. There are important broad-scale issues (e.g., non-point source pollution, biodiversity loss, alien spe...

  16. MULTI-SCALED VULNERABILITY ANALYSES: IMPROVING DECISION-MAKING AT REGIONAL TO LOCAL LEVELS THROUGH PARTNERSHIP

    EPA Science Inventory

    Decision-makers at all scales are faced with setting priorities for both use of limited resources and for risk management. While there are all kinds of monitoring data and models to project conditions at different spatial and temporal scales, synthesized information to establish ...

  17. Development process of an assessment tool for disruptive behavior problems in cross-cultural settings: the Disruptive Behavior International Scale – Nepal version (DBIS-N)

    PubMed Central

    Burkey, Matthew D.; Ghimire, Lajina; Adhikari, Ramesh P.; Kohrt, Brandon A.; Jordans, Mark J. D.; Haroz, Emily; Wissow, Lawrence

    2017-01-01

    Systematic processes are needed to develop valid measurement instruments for disruptive behavior disorders (DBDs) in cross-cultural settings. We employed a four-step process in Nepal to identify and select items for a culturally valid assessment instrument: 1) We extracted items from validated scales and local free-list interviews. 2) Parents, teachers, and peers (n=30) rated the perceived relevance and importance of behavior problems. 3) Highly rated items were piloted with children (n=60) in Nepal. 4) We evaluated internal consistency of the final scale. We identified 49 symptoms from 11 scales, and 39 behavior problems from free-list interviews (n=72). After dropping items for low ratings of relevance and severity and for poor item-test correlation, low frequency, and/or poor acceptability in pilot testing, 16 items remained for the Disruptive Behavior International Scale—Nepali version (DBIS-N). The final scale had good internal consistency (α=0.86). A 4-step systematic approach to scale development including local participation yielded an internally consistent scale that included culturally relevant behavior problems. PMID:28093575
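
    For readers unfamiliar with the internal-consistency statistic reported above, the snippet below computes Cronbach's alpha for an arbitrary respondents-by-items score matrix; the simulated data are only a stand-in, not the DBIS-N responses.

```python
# Quick sketch of an internal-consistency check (Cronbach's alpha).
import numpy as np

def cronbach_alpha(scores):
    """scores: 2D array, rows = respondents, columns = scale items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(60, 1))                      # 60 respondents, one latent trait
items = latent + 0.7 * rng.normal(size=(60, 16))       # 16 correlated items
print(round(cronbach_alpha(items), 2))
```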

  18. Networks and landscapes: a framework for setting goals and evaluating performance at the large landscape scale

    Treesearch

    R Patrick Bixler; Shawn Johnson; Kirk Emerson; Tina Nabatchi; Melly Reuling; Charles Curtin; Michele Romolini; Morgan Grove

    2016-01-01

    The objective of large landscape conservation is to mitigate complex ecological problems through interventions at multiple and overlapping scales. Implementation requires coordination among a diverse network of individuals and organizations to integrate local-scale conservation activities with broad-scale goals. This requires an understanding of the governance options...

  19. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package.

    PubMed

    Womack, James C; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-28

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.

  20. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package

    NASA Astrophysics Data System (ADS)

    Womack, James C.; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-01

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.

  1. Large-scale weakly supervised object localization via latent category learning.

    PubMed

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) in large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts, or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve detection precision that outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.

  2. Time-averaged aerodynamic loads on the vane sets of the 40- by 80-foot and 80- by 120-foot wind tunnel complex

    NASA Technical Reports Server (NTRS)

    Aoyagi, Kiyoshi; Olson, Lawrence E.; Peterson, Randall L.; Yamauchi, Gloria K.; Ross, James C.; Norman, Thomas R.

    1987-01-01

    Time-averaged aerodynamic loads are estimated for each of the vane sets in the National Full-Scale Aerodynamic Complex (NFAC). The methods used to compute global and local loads are presented. Experimental inputs used to calculate these loads are based primarily on data obtained from tests conducted in the NFAC 1/10-Scale Vane-Set Test Facility and from tests conducted in the NFAC 1/50-Scale Facility. For those vane sets located directly downstream of either the 40- by 80-ft test section or the 80- by 120-ft test section, aerodynamic loads caused by the impingement of model-generated wake vortices and model-generated jet and propeller wakes are also estimated.

  3. Anisotropic-Scale Junction Detection and Matching for Indoor Images.

    PubMed

    Xue, Nan; Xia, Gui-Song; Bai, Xiang; Zhang, Liangpei; Shen, Weiming

    Junctions play an important role in characterizing local geometrical structures of images, and the detection of which is a longstanding but challenging task. Existing junction detectors usually focus on identifying the location and orientations of junction branches while ignoring their scales, which, however, contain rich geometries of images. This paper presents a novel approach for junction detection and characterization, which especially exploits the locally anisotropic geometries of a junction and estimates its scales by relying on an a-contrario model. The output junctions are with anisotropic scales, saying that a scale parameter is associated with each branch of a junction and are thus named as anisotropic-scale junctions (ASJs). We then apply the new detected ASJs for matching indoor images, where there are dramatic changes of viewpoints and the detected local visual features, e.g., key-points, are usually insufficient and lack distinctive ability. We propose to use the anisotropic geometries of our junctions to improve the matching precision of indoor images. The matching results on sets of indoor images demonstrate that our approach achieves the state-of-the-art performance on indoor image matching.

  4. Sustainability at the local scale: defining highly aggregated indices for assessing environmental performance. The province of Reggio Emilia (Italy) as a case study.

    PubMed

    Clerici, Nicola; Bodini, Antonio; Ferrarini, Alessandro

    2004-10-01

    In order to achieve improved sustainability, local authorities need to use tools that adequately describe and synthesize environmental information. This article illustrates a methodological approach that organizes a wide suite of environmental indicators into few aggregated indices, making use of correlation, principal component analysis, and fuzzy sets. Furthermore, a weighting system, which includes stakeholders' priorities and ambitions, is applied. As a case study, the described methodology is applied to the Reggio Emilia Province in Italy, by considering environmental information from 45 municipalities. Principal component analysis is used to condense an initial set of 19 indicators into 6 fundamental dimensions that highlight patterns of environmental conditions at the provincial scale. These dimensions are further aggregated in two indices of environmental performance through fuzzy sets. The simple form of these indices makes them particularly suitable for public communication, as they condensate a wide set of heterogeneous indicators. The main outcomes of the analysis and the potential applications of the method are discussed.
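
    A schematic version of the indicator-aggregation pipeline described above, assuming equal weights and a simple linear membership function; the paper's actual weights come from stakeholder priorities and its fuzzy sets are more elaborate.

```python
# Sketch: reduce many municipal indicators with PCA, map component scores to
# [0, 1] fuzzy memberships, and combine them with weights into one index.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
indicators = rng.normal(size=(45, 19))                 # 45 municipalities x 19 indicators

pca = PCA(n_components=6)
dims = pca.fit_transform(indicators)                   # six "fundamental dimensions"

def fuzzy_membership(x):
    """Linear membership: 0 at the worst observed value, 1 at the best."""
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

weights = np.full(6, 1 / 6)                            # stakeholder weights would go here
index = fuzzy_membership(dims) @ weights               # one aggregated index per municipality
print(index.round(2))
```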

  5. Large-Scale Point-Cloud Visualization through Localized Textured Surface Reconstruction.

    PubMed

    Arikan, Murat; Preiner, Reinhold; Scheiblauer, Claus; Jeschke, Stefan; Wimmer, Michael

    2014-09-01

    In this paper, we introduce a novel scene representation for the visualization of large-scale point clouds accompanied by a set of high-resolution photographs. Many real-world applications deal with very densely sampled point-cloud data, which are augmented with photographs that often reveal lighting variations and inaccuracies in registration. Consequently, the high-quality representation of the captured data, i.e., both point clouds and photographs together, is a challenging and time-consuming task. We propose a two-phase approach, in which the first (preprocessing) phase generates multiple overlapping surface patches and handles the problem of seamless texture generation locally for each patch. The second phase stitches these patches at render-time to produce a high-quality visualization of the data. As a result of the proposed localization of the global texturing problem, our algorithm is more than an order of magnitude faster than equivalent mesh-based texturing techniques. Furthermore, since our preprocessing phase requires only a minor fraction of the whole data set at once, we provide maximum flexibility when dealing with growing data sets.

  6. What Shapes the Phylogenetic Structure of Anuran Communities in a Seasonal Environment? The Influence of Determinism at Regional Scale to Stochasticity or Antagonistic Forces at Local Scale

    PubMed Central

    Ferreira, Vanda Lúcia; Strüssmann, Christine; Tomas, Walfrido Moraes

    2015-01-01

    Ecological communities are structured by both deterministic and stochastic processes. We investigated phylogenetic patterns at regional and local scales to understand the influences of seasonal processes in shaping the structure of anuran communities in the southern Pantanal wetland, Brazil. We assessed the phylogenetic structure at different scales, using the Net Relatedness Index (NRI), the Nearest Taxon Index (NTI), and phylobetadiversity indexes, as well as a permutation test, to evaluate the effect of seasonality. The anuran community was represented by a non-random set of species with a high degree of phylogenetic relatedness at the regional scale. However, at the local scale the phylogenetic structure of the community was weakly related with the seasonality of the system, indicating that oriented stochastic processes (e.g. colonization, extinction and ecological drift) and/or antagonist forces drive the structure of such communities in the southern Pantanal. PMID:26102202
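
    The Net Relatedness Index used in this study compares the observed mean pairwise phylogenetic distance (MPD) of a local community with a null distribution of communities drawn at random from the regional pool; the sketch below applies that standard definition to a random stand-in distance matrix rather than real phylogenetic data.

```python
# Minimal sketch of the Net Relatedness Index (NRI).
import numpy as np

rng = np.random.default_rng(0)
n_pool = 40
D = rng.random((n_pool, n_pool)); D = (D + D.T) / 2; np.fill_diagonal(D, 0)

def mpd(species, dist):
    sub = dist[np.ix_(species, species)]
    return sub[np.triu_indices(len(species), k=1)].mean()

community = list(range(8))                             # observed local community
obs = mpd(community, D)
null = [mpd(rng.choice(n_pool, size=8, replace=False), D) for _ in range(999)]
nri = -(obs - np.mean(null)) / np.std(null)
print(round(nri, 2))                                   # > 0 indicates phylogenetic clustering
```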

  7. What Shapes the Phylogenetic Structure of Anuran Communities in a Seasonal Environment? The Influence of Determinism at Regional Scale to Stochasticity or Antagonistic Forces at Local Scale.

    PubMed

    Martins, Clarissa de Araújo; Roque, Fabio de Oliveira; Santos, Bráulio A; Ferreira, Vanda Lúcia; Strüssmann, Christine; Tomas, Walfrido Moraes

    2015-01-01

    Ecological communities are structured by both deterministic and stochastic processes. We investigated phylogenetic patterns at regional and local scales to understand the influences of seasonal processes in shaping the structure of anuran communities in the southern Pantanal wetland, Brazil. We assessed the phylogenetic structure at different scales, using the Net Relatedness Index (NRI), the Nearest Taxon Index (NTI), and phylobetadiversity indexes, as well as a permutation test, to evaluate the effect of seasonality. The anuran community was represented by a non-random set of species with a high degree of phylogenetic relatedness at the regional scale. However, at the local scale the phylogenetic structure of the community was weakly related with the seasonality of the system, indicating that oriented stochastic processes (e.g. colonization, extinction and ecological drift) and/or antagonist forces drive the structure of such communities in the southern Pantanal.

  8. Assimilation of global versus local data sets into a regional model of the Gulf Stream system. 1. Data effectiveness

    NASA Astrophysics Data System (ADS)

    Malanotte-Rizzoli, Paola; Young, Roberta E.

    1995-12-01

    The primary objective of this paper is to assess the relative effectiveness of data sets with different space coverage and time resolution when they are assimilated into an ocean circulation model. We focus on obtaining realistic numerical simulations of the Gulf Stream system typically of the order of 3-month duration by constructing a "synthetic" ocean simultaneously consistent with the model dynamics and the observations. The model used is the Semispectral Primitive Equation Model. The data sets are the "global" Optimal Thermal Interpolation Scheme (OTIS) 3 of the Fleet Numerical Oceanography Center providing temperature and salinity fields with global coverage and with bi-weekly frequency, and the localized measurements, mostly of current velocities, from the central and eastern array moorings of the Synoptic Ocean Prediction (SYNOP) program, with daily frequency but with a very small spatial coverage. We use a suboptimal assimilation technique ("nudging"). Even though this technique has already been used in idealized data assimilation studies, to our knowledge this is the first study in which the effectiveness of nudging is tested by assimilating real observations of the interior temperature and salinity fields. This is also the first work in which a systematic assimilation is carried out of the localized, high-quality SYNOP data sets in numerical experiments longer than 1-2 weeks, that is, not aimed to forecasting. We assimilate (1) the global OTIS 3 alone, (2) the local SYNOP observations alone, and (3) both OTIS 3 and SYNOP observations. We assess the success of the assimilations with quantitative measures of performance, both on the global and local scale. The results can be summarized as follows. The intermittent assimilation of the global OTIS 3 is necessary to keep the model "on track" over 3-month simulations on the global scale. As OTIS 3 is assimilated at every model grid point, a "gentle" weight must be prescribed to it so as not to overconstrain the model. However, in these assimilations the predicted velocity fields over the SYNOP arrays are greatly in error. The continuous assimilation of the localized SYNOP data sets with a strong weight is necessary to obtain local realistic evolutions. Then assimilation of velocity measurements alone recovers the density structure over the array area. However, the spatial coverage of the SYNOP measurements is too small to constrain the model on the global scale. Thus the blending of both types of datasets is necessary in the assimilation as they constrain different time and space scales. Our choice of "gentle" nudging weight for the global OTIS 3 and "strong" weight for the local SYNOP data provides for realistic simulations of the Gulf Stream system, both globally and locally, on the 3- to 4-month-long timescale, the one governed by the Gulf Stream jet internal dynamics.
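
    A bare-bones illustration of the nudging idea, assuming a toy scalar model and invented weights: a "gentle" relaxation toward the global data everywhere plus a "strong" relaxation toward the local data only over the array footprint.

```python
# One forward-Euler step of dx/dt = f(x) + G_weak*(obs_g - x) + G_strong*(obs_l - x),
# with the strong term applied only where local observations exist.
import numpy as np

def step(x, dt, G_weak, G_strong, obs_global, obs_local, local_mask):
    f = -0.1 * x                                        # stand-in model dynamics
    nudge = G_weak * (obs_global - x)                   # "gentle" weight, everywhere
    nudge += G_strong * local_mask * (obs_local - x)    # "strong" weight, array region only
    return x + dt * (f + nudge)

x = np.zeros(100)
mask = np.zeros(100); mask[40:50] = 1                  # the moored-array footprint
for _ in range(1000):
    x = step(x, dt=0.1, G_weak=0.01, G_strong=0.5,
             obs_global=np.ones(100), obs_local=2 * np.ones(100), local_mask=mask)
print(x[45].round(2), x[5].round(2))                   # state sits much closer to the local obs inside the array
```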

  9. Design of Availability-Dependent Distributed Services in Large-Scale Uncooperative Settings

    ERIC Educational Resources Information Center

    Morales, Ramses Victor

    2009-01-01

    Thesis Statement: "Availability-dependent global predicates can be efficiently and scalably realized for a class of distributed services, in spite of specific selfish and colluding behaviors, using local and decentralized protocols". Several types of large-scale distributed systems spanning the Internet have to deal with availability variations…

  10. Estimating local scaling properties for the classification of interstitial lung disease patterns

    NASA Astrophysics Data System (ADS)

    Huber, Markus B.; Nagarajan, Mahesh B.; Leinsinger, Gerda; Ray, Lawrence A.; Wismueller, Axel

    2011-03-01

    Local scaling properties of texture regions were compared in their ability to classify morphological patterns known as 'honeycombing' that are considered indicative for the presence of fibrotic interstitial lung diseases in high-resolution computed tomography (HRCT) images. For 14 patients with known occurrence of honeycombing, a stack of 70 axial, lung kernel reconstructed images were acquired from HRCT chest exams. 241 regions of interest of both healthy and pathological (89) lung tissue were identified by an experienced radiologist. Texture features were extracted using six properties calculated from gray-level co-occurrence matrices (GLCM), Minkowski Dimensions (MDs), and the estimation of local scaling properties with Scaling Index Method (SIM). A k-nearest-neighbor (k-NN) classifier and a Multilayer Radial Basis Functions Network (RBFN) were optimized in a 10-fold cross-validation for each texture vector, and the classification accuracy was calculated on independent test sets as a quantitative measure of automated tissue characterization. A Wilcoxon signed-rank test was used to compare two accuracy distributions including the Bonferroni correction. The best classification results were obtained by the set of SIM features, which performed significantly better than all the standard GLCM and MD features (p < 0.005) for both classifiers with the highest accuracy (94.1%, 93.7%; for the k-NN and RBFN classifier, respectively). The best standard texture features were the GLCM features 'homogeneity' (91.8%, 87.2%) and 'absolute value' (90.2%, 88.5%). The results indicate that advanced texture features using local scaling properties can provide superior classification performance in computer-assisted diagnosis of interstitial lung diseases when compared to standard texture analysis methods.
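
    A sketch of the baseline GLCM-plus-k-NN pipeline this study compares against, using scikit-image and scikit-learn with synthetic patches in place of the HRCT regions of interest; the function is named graycomatrix in recent scikit-image releases (greycomatrix in older ones), and the stand-in data are not meant to mimic lung tissue.

```python
# GLCM texture features (e.g., homogeneity) per patch, fed to a k-NN classifier.
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # 'greycomatrix' in older scikit-image
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def glcm_features(patch):
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=64, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean() for p in ("homogeneity", "contrast", "energy")]

# Stand-in data: "class 0" patches are smooth, "class 1" patches are noisy
smooth = [rng.integers(20, 28, (32, 32), dtype=np.uint8) for _ in range(50)]
noisy = [rng.integers(0, 64, (32, 32), dtype=np.uint8) for _ in range(50)]
X = np.array([glcm_features(p) for p in smooth + noisy])
y = np.array([0] * 50 + [1] * 50)

clf = KNeighborsClassifier(n_neighbors=5).fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```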

  11. Phased Array Noise Source Localization Measurements of an F404 Nozzle Plume at Both Full and Model Scale

    NASA Technical Reports Server (NTRS)

    Podboy, Gary G.; Bridges, James E.; Henderson, Brenda S.

    2010-01-01

    A 48-microphone planar phased array system was used to acquire jet noise source localization data on both a full-scale F404-GE-F400 engine and on a 1/4th scale model of a F400 series nozzle. The full-scale engine test data show the location of the dominant noise sources in the jet plume as a function of frequency for the engine in both baseline (no chevron) and chevron configurations. Data are presented for the engine operating both with and without afterburners. Based on lessons learned during this test, a set of recommendations is provided regarding how the phased array measurement system could be modified in order to obtain more useful acoustic source localization data on high-performance military engines in the future. The data obtained on the 1/4th scale F400 series nozzle provide useful insights regarding the full-scale engine jet noise source mechanisms, and document some of the differences associated with testing at model scale versus full scale.

  12. The Partners in Flight species prioritization scheme

    Treesearch

    William C. Hunter; Michael F. Carter; David N. Pashley; Keith Barker

    1993-01-01

    The prioritization scheme identifies those birds at any locality on several geographic scales most in need of conservation action. Further, it suggests some of those actions that ought to be taken. Ranking criteria used to set priorities for Neotropical migratory landbirds measure characteristics of species that make them vulnerable to local and global extinction....

  13. Poiseuille flow of soft glasses in narrow channels: from quiescence to steady state.

    PubMed

    Chaudhuri, Pinaki; Horbach, Jürgen

    2014-10-01

    Using numerical simulations, the onset of Poiseuille flow in a confined soft glass is investigated. Starting from the quiescent state, steady flow sets in at a time scale which increases with a decrease in applied forcing. At this onset time scale, a rapid transition occurs via the simultaneous fluidization of regions having different local stresses. In the absence of steady flow at long times, creep is observed even in regions where the local stress is larger than the bulk yielding threshold. Finally, we show that the time scale to attain steady flow depends strongly on the history of the initial state.

  14. Bridging Scales: Developing a Framework to Build a City-Scale Environmental Scenario for Japanese Municipalities

    NASA Astrophysics Data System (ADS)

    Hashimoto, S.; Fujita, T.; Nakayama, T.; Xu, K.

    2007-12-01

    There is an ongoing project on establishing environmental scenarios in Japan to evaluate middle to long-term environmental policy and technology options toward a low carbon society. In this project, the time horizon of the scenarios is set for 2050 on the ground that a large part of social infrastructure in Japan is likely to be renovated by that time, and cities are supposed to play important roles in building a low carbon society in Japan. This belief is held because cities or local governments could implement various policies and programs, such as land use planning and promotion of new technologies with low GHG emissions, which produce an effect in a nonuniform manner, taking local socio-economic conditions into account, while higher governments, either national or prefectural, could impose an environmental tax on electricity and gas to alleviate ongoing GHG emissions, which uniformly covers their jurisdictions. In order for local governments to devise and implement concrete administrative actions equipped with rational policies and technologies, referring to the environmental scenarios developed for the entire nation, we need to localize the national scenarios, both in terms of spatial and temporal extent, so that they can better reflect local socio-economic and institutional conditions. In localizing the national scenarios, the participation of stakeholders is significant because they play major roles in shaping future society. Stakeholder participation in the localization process would bring both creative and realistic inputs on how the future unfolds on a city scale. In this research, 1) we reviewed recent efforts on international and domestic scenario development to set a practical time horizon for a city-scale environmental scenario, which would lead to concrete environmental policies and programs, 2) designed a participatory scenario development/localization process, drawing on the framework of the 'Story-and-Simulation' (SAS) approach proposed by Alcamo (2001), and 3) started implementing it in the city of Kawasaki, Kanagawa, Japan, in cooperation with municipal officials and stakeholders. The participatory process is to develop city-scale environmental scenarios toward a low carbon society, referring to international and domestic environmental scenarios. Though the scenario development is still in progress, it has already brought practical knowledge about and experience on how to bridge scenarios developed for different temporal and spatial scales.

  15. Is environmental legislation conserving tropical stream faunas? A large-scale assessment of local, riparian and catchment-scale influences on Amazonian fishes

    EPA Science Inventory

    Tropical agriculture is a major threat to biodiversity worldwide. In addition to the direct impacts of converting native vegetation to agriculture, this process is accompanied by a wider set of human-induced disturbances, many of which are poorly addressed by existing environment...

  16. Biocultural approaches to well-being and sustainability indicators across scales.

    PubMed

    Sterling, Eleanor J; Filardi, Christopher; Toomey, Anne; Sigouin, Amanda; Betley, Erin; Gazit, Nadav; Newell, Jennifer; Albert, Simon; Alvira, Diana; Bergamini, Nadia; Blair, Mary; Boseto, David; Burrows, Kate; Bynum, Nora; Caillon, Sophie; Caselle, Jennifer E; Claudet, Joachim; Cullman, Georgina; Dacks, Rachel; Eyzaguirre, Pablo B; Gray, Steven; Herrera, James; Kenilorea, Peter; Kinney, Kealohanuiopuna; Kurashima, Natalie; Macey, Suzanne; Malone, Cynthia; Mauli, Senoveva; McCarter, Joe; McMillen, Heather; Pascua, Pua'ala; Pikacha, Patrick; Porzecanski, Ana L; de Robert, Pascale; Salpeteur, Matthieu; Sirikolo, Myknee; Stege, Mark H; Stege, Kristina; Ticktin, Tamara; Vave, Ron; Wali, Alaka; West, Paige; Winter, Kawika B; Jupiter, Stacy D

    2017-12-01

    Monitoring and evaluation are central to ensuring that innovative, multi-scale, and interdisciplinary approaches to sustainability are effective. The development of relevant indicators for local sustainable management outcomes, and the ability to link these to broader national and international policy targets, are key challenges for resource managers, policymakers, and scientists. Sets of indicators that capture both ecological and social-cultural factors, and the feedbacks between them, can underpin cross-scale linkages that help bridge local and global scale initiatives to increase resilience of both humans and ecosystems. Here we argue that biocultural approaches, in combination with methods for synthesizing across evidence from multiple sources, are critical to developing metrics that facilitate linkages across scales and dimensions. Biocultural approaches explicitly start with and build on local cultural perspectives - encompassing values, knowledges, and needs - and recognize feedbacks between ecosystems and human well-being. Adoption of these approaches can encourage exchange between local and global actors, and facilitate identification of crucial problems and solutions that are missing from many regional and international framings of sustainability. Resource managers, scientists, and policymakers need to be thoughtful about not only what kinds of indicators are measured, but also how indicators are designed, implemented, measured, and ultimately combined to evaluate resource use and well-being. We conclude by providing suggestions for translating between local and global indicator efforts.

  17. Use of survey data to define regional and local priorities for management on National Wildlife Refuges

    USGS Publications Warehouse

    Sauer, J.R.; Casey, J.; Laskowski, H.; Taylor, J.D.; Fallon, J.; Ralph, C. John; Rich, Terrell D.

    2005-01-01

    National Wildlife Refuges must manage habitats to support a variety of species that often have conflicting needs. To make reasonable management decisions, managers must know what species are priorities for their refuges and the relative importance of the species. Unfortunately, species priorities are often set regionally, but refuges must develop local priorities that reconcile regional priorities with constraints imposed by refuge location and local management options. Some species cannot be managed on certain refuges, and the relative benefit of management to regional populations of species can vary greatly among refuges. We describe a process of 'stepping down' regional priorities to local priorities for bird species of management interest. We define three primary scales of management interest: regional (at which overall priority species are set); 'Sepik Blocks' (30 min blocks of latitude and longitude, which provide a landscape level context for a refuge); and the refuge. Regional surveys, such as the North American Breeding Bird Survey, provide information that can be summarized at regional and Sepik Block scales, permitting regional priorities to be focused to landscapes near refuges. However, refuges manage habitats, and managers need information about how the habitat management is likely to collectively influence the priority species. The value of the refuge for a species is also influenced by the availability of habitats within refuges and the relative amounts of those habitats at each scale. We use remotely-sensed data to assess proportions of habitats at the three geographic scales. These data provide many possible approaches for developing local priorities for management. Once these are defined, managers can use the priorities, in conjunction with predictions of the consequences of management for each species, to assess the overall benefit of alternative management actions for the priority species.

  18. Going Deeper With Contextual CNN for Hyperspectral Image Classification.

    PubMed

    Lee, Hyungtae; Kwon, Heesung

    2017-10-01

    In this paper, we describe a novel deep convolutional neural network (CNN) that is deeper and wider than other existing deep networks for hyperspectral image classification. Unlike current state-of-the-art approaches in CNN-based hyperspectral image classification, the proposed network, called contextual deep CNN, can optimally explore local contextual interactions by jointly exploiting local spatio-spectral relationships of neighboring individual pixel vectors. The joint exploitation of the spatio-spectral information is achieved by a multi-scale convolutional filter bank used as an initial component of the proposed CNN pipeline. The initial spatial and spectral feature maps obtained from the multi-scale filter bank are then combined together to form a joint spatio-spectral feature map. The joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through a fully convolutional network that eventually predicts the corresponding label of each pixel vector. The proposed approach is tested on three benchmark data sets: the Indian Pines data set, the Salinas data set, and the University of Pavia data set. Performance comparison shows enhanced classification performance of the proposed approach over the current state-of-the-art on the three data sets.
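
    A compact sketch of a multi-scale convolutional filter bank of the kind described above, written in PyTorch; the channel counts, kernel sizes, and input shape are illustrative and not the paper's exact architecture.

```python
# Apply filters of several spatial sizes to a hyperspectral patch and
# concatenate the maps into one joint spatio-spectral feature map.
import torch
import torch.nn as nn

class MultiScaleBank(nn.Module):
    def __init__(self, bands=200, out_per_branch=32):
        super().__init__()
        # one branch per spatial scale; the 1x1 branch acts on the spectrum alone
        self.branches = nn.ModuleList([
            nn.Conv2d(bands, out_per_branch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])

    def forward(self, x):                              # x: (batch, bands, H, W)
        return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)

x = torch.randn(2, 200, 17, 17)                        # two 17x17 patches, 200 bands
print(MultiScaleBank()(x).shape)                       # torch.Size([2, 96, 17, 17])
```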

  19. Action detection by double hierarchical multi-structure space-time statistical matching model

    NASA Astrophysics Data System (ADS)

    Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang

    2018-03-01

To address the complexity of video content and the low efficiency of existing detectors, an action detection model based on a neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) for temporal action localization. First, a neighboring Gaussian structure is introduced to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to compute two similarity matrices at both large and small scales, combining double hierarchical structural constraints from both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarities are fused and analyzed to detect actions. In addition, a multi-scale composite template extends the model to multi-view applications. Experimental results of DMSM on complex visual tracking benchmark data sets and the THUMOS 2014 data set show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.

  1. The Thick Level-Set model for dynamic fragmentation

    DOE PAGES

    Stershic, Andrew J.; Dolbow, John E.; Moës, Nicolas

    2017-01-04

    The Thick Level-Set (TLS) model is implemented to simulate brittle media undergoing dynamic fragmentation. This non-local model is discretized by the finite element method with damage represented as a continuous field over the domain. A level-set function defines the extent and severity of damage, and a length scale is introduced to limit the damage gradient. Numerical studies in one dimension demonstrate that the proposed method reproduces the rate-dependent energy dissipation and fragment length observations from analytical, numerical, and experimental approaches. In conclusion, additional studies emphasize the importance of appropriate bulk constitutive models and sufficient spatial resolution of the length scale.

  2. Interaction quench dynamics in the Kondo model in the presence of a local magnetic field.

    PubMed

    Heyl, M; Kehrein, S

    2010-09-01

    In this work we investigate the quench dynamics in the Kondo model on the Toulouse line in the presence of a local magnetic field. It is shown that this setup can be realized by either applying the local magnetic field directly or by preparing the system in a macroscopically spin-polarized initial state. In the latter case, the magnetic field results from a subtlety in applying the bosonization technique where terms that are usually referred to as finite-size corrections become important in the present non-equilibrium setting. The transient dynamics are studied by analyzing exact analytical results for the local spin dynamics. The timescale for the relaxation of the local dynamical quantities turns out to be exclusively determined by the Kondo scale. In the transient regime, one observes damped oscillations in the local correlation functions with a frequency set by the magnetic field.

  3. MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields

    NASA Astrophysics Data System (ADS)

    Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria

    2015-08-01

We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to studying real-world potential fields, which are inhomogeneous and so do not show scale invariance, except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they may be better studied at a set of scales than at a single scale and that a multihomogeneous model is needed to explain their complex scaling behaviour. In order to perform this task, we first introduce fractional-degree homogeneous fields, to show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source distributions for a fractional degree are not confined to a bounded region, similarly to some integer-degree models, such as the infinite line mass; and (iii) differently from the integer-degree case, the fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales by a simple search, in any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity degrees and depth estimates which we call the multihomogeneous model. This defines a new technique of source-parameter estimation (Multi-HOmogeneity Depth Estimation, MHODE), permitting retrieval of the parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness also in a real-case example. These applications show the usefulness of the new concepts, multihomogeneity and fractional homogeneity degree, for obtaining valid estimates of the source parameters in a consistent theoretical framework, thereby overcoming the limitations imposed by global homogeneity on widespread methods such as Euler deconvolution.

  4. Linkages and feedbacks in orogenic systems: An introduction

    USGS Publications Warehouse

    Thigpen, J. Ryan; Law, Richard D.; Merschat, Arthur J.; Stowell, Harold

    2017-01-01

    Orogenic processes operate at scales ranging from the lithosphere to grain-scale, and are inexorably linked. For example, in many orogens, fault and shear zone architecture controls distribution of heat advection along faults and also acts as the primary mechanism for redistribution of heat-producing material. This sets up the thermal structure of the orogen, which in turn controls lithospheric rheology, the nature and distribution of deformation and strain localization, and ultimately, through localized mechanical strengthening and weakening, the fundamental shape of the developing orogenic wedge (Fig. 1). Strain localization establishes shear zone and fault geometry, and it is the motion on these structures, in conjunction with climate, that often focuses erosional and exhumational processes. This climatic focusing effect can even drive development of asymmetry at the scale of the entire wedge (Willett et al., 1993).

  5. Can I solve my structure by SAD phasing? Planning an experiment, scaling data and evaluating the useful anomalous correlation and anomalous signal.

    PubMed

    Terwilliger, Thomas C; Bunkóczi, Gábor; Hung, Li Wei; Zwart, Peter H; Smith, Janet L; Akey, David L; Adams, Paul D

    2016-03-01

    A key challenge in the SAD phasing method is solving a structure when the anomalous signal-to-noise ratio is low. Here, algorithms and tools for evaluating and optimizing the useful anomalous correlation and the anomalous signal in a SAD experiment are described. A simple theoretical framework [Terwilliger et al. (2016), Acta Cryst. D72, 346-358] is used to develop methods for planning a SAD experiment, scaling SAD data sets and estimating the useful anomalous correlation and anomalous signal in a SAD data set. The phenix.plan_sad_experiment tool uses a database of solved and unsolved SAD data sets and the expected characteristics of a SAD data set to estimate the probability that the anomalous substructure will be found in the SAD experiment and the expected map quality that would be obtained if the substructure were found. The phenix.scale_and_merge tool scales unmerged SAD data from one or more crystals using local scaling and optimizes the anomalous signal by identifying the systematic differences among data sets, and the phenix.anomalous_signal tool estimates the useful anomalous correlation and anomalous signal after collecting SAD data and estimates the probability that the data set can be solved and the likely figure of merit of phasing.

  6. Can I solve my structure by SAD phasing? Planning an experiment, scaling data and evaluating the useful anomalous correlation and anomalous signal

    DOE PAGES

    Terwilliger, Thomas C.; Bunkóczi, Gábor; Hung, Li-Wei; ...

    2016-03-01

A key challenge in the SAD phasing method is solving a structure when the anomalous signal-to-noise ratio is low. Here, we describe algorithms and tools for evaluating and optimizing the useful anomalous correlation and the anomalous signal in a SAD experiment. A simple theoretical framework [Terwilliger et al. (2016), Acta Cryst. D72, 346–358] is used to develop methods for planning a SAD experiment, scaling SAD data sets and estimating the useful anomalous correlation and anomalous signal in a SAD data set. The phenix.plan_sad_experiment tool uses a database of solved and unsolved SAD data sets and the expected characteristics of a SAD data set to estimate the probability that the anomalous substructure will be found in the SAD experiment and the expected map quality that would be obtained if the substructure were found. The phenix.scale_and_merge tool scales unmerged SAD data from one or more crystals using local scaling and optimizes the anomalous signal by identifying the systematic differences among data sets, and the phenix.anomalous_signal tool estimates the useful anomalous correlation and anomalous signal after collecting SAD data and estimates the probability that the data set can be solved and the likely figure of merit of phasing.

  7. Can I solve my structure by SAD phasing? Planning an experiment, scaling data and evaluating the useful anomalous correlation and anomalous signal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terwilliger, Thomas C.; Bunkóczi, Gábor; Hung, Li-Wei

A key challenge in the SAD phasing method is solving a structure when the anomalous signal-to-noise ratio is low. Here, we describe algorithms and tools for evaluating and optimizing the useful anomalous correlation and the anomalous signal in a SAD experiment. A simple theoretical framework [Terwilliger et al. (2016), Acta Cryst. D72, 346–358] is used to develop methods for planning a SAD experiment, scaling SAD data sets and estimating the useful anomalous correlation and anomalous signal in a SAD data set. The phenix.plan_sad_experiment tool uses a database of solved and unsolved SAD data sets and the expected characteristics of a SAD data set to estimate the probability that the anomalous substructure will be found in the SAD experiment and the expected map quality that would be obtained if the substructure were found. The phenix.scale_and_merge tool scales unmerged SAD data from one or more crystals using local scaling and optimizes the anomalous signal by identifying the systematic differences among data sets, and the phenix.anomalous_signal tool estimates the useful anomalous correlation and anomalous signal after collecting SAD data and estimates the probability that the data set can be solved and the likely figure of merit of phasing.

  8. Comparative analysis of semantic localization accuracies between adult and pediatric DICOM CT images

    NASA Astrophysics Data System (ADS)

    Robertson, Duncan; Pathak, Sayan D.; Criminisi, Antonio; White, Steve; Haynor, David; Chen, Oliver; Siddiqui, Khan

    2012-02-01

    Existing literature describes a variety of techniques for semantic annotation of DICOM CT images, i.e. the automatic detection and localization of anatomical structures. Semantic annotation facilitates enhanced image navigation, linkage of DICOM image content and non-image clinical data, content-based image retrieval, and image registration. A key challenge for semantic annotation algorithms is inter-patient variability. However, while the algorithms described in published literature have been shown to cope adequately with the variability in test sets comprising adult CT scans, the problem presented by the even greater variability in pediatric anatomy has received very little attention. Most existing semantic annotation algorithms can only be extended to work on scans of both adult and pediatric patients by adapting parameters heuristically in light of patient size. In contrast, our approach, which uses random regression forests ('RRF'), learns an implicit model of scale variation automatically using training data. In consequence, anatomical structures can be localized accurately in both adult and pediatric CT studies without the need for parameter adaptation or additional information about patient scale. We show how the RRF algorithm is able to learn scale invariance from a combined training set containing a mixture of pediatric and adult scans. Resulting localization accuracy for both adult and pediatric data remains comparable with that obtained using RRFs trained and tested using only adult data.
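
    As a rough illustration of the regression-forest idea described above, the sketch below regresses per-voxel intensity-context features onto offsets to an organ bounding box and aggregates the votes. The feature construction, array shapes, and variable names are placeholders, not the exact method of the paper.

```python
# Illustrative sketch of regression-forest anatomy localization: each voxel's
# long-range intensity features are regressed onto the offset from that voxel to
# an organ's bounding box. Feature construction and names are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy training data: n voxels, each with d intensity-context features, and a
# 6-vector target = offsets to the organ box (x/y/z min and max).
n_voxels, n_features = 5000, 20
X_train = rng.normal(size=(n_voxels, n_features))
y_train = rng.normal(size=(n_voxels, 6))

forest = RandomForestRegressor(n_estimators=50, max_depth=12, random_state=0)
forest.fit(X_train, y_train)

# At test time every voxel votes for the box; averaging the votes gives a
# scale-adaptive localization without hand-tuned patient-size parameters.
X_test = rng.normal(size=(1000, n_features))
voxel_positions = rng.uniform(0, 100, size=(1000, 3))
offsets = forest.predict(X_test)                       # (1000, 6)
box_votes = np.hstack([voxel_positions, voxel_positions]) + offsets
predicted_box = box_votes.mean(axis=0)                 # aggregated bounding box
```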

  9. Ecosystems effects 25 years after Chernobyl: pollinators, fruit set and recruitment.

    PubMed

    Møller, Anders Pape; Barnier, Florian; Mousseau, Timothy A

    2012-12-01

    Animals are assumed to play a key role in ecosystem functioning through their effects on seed set, seed consumption, seed dispersal, and maintenance of plant communities. However, there are no studies investigating the consequences of animal scarcity on seed set, seed consumption and seed dispersal at large geographical scales. We exploited the unprecedented scarcity of pollinating bumblebees and butterflies in the vicinity of Chernobyl, Ukraine, linked to the effects of radiation on pollinator abundance, to test for effects of pollinator abundance on the ecosystem. There were considerably fewer pollinating insects in areas with high levels of radiation. Fruit trees and bushes (apple Malus domestica, pear Pyrus communis, rowan Sorbus aucuparia, wild rose Rosa rugosa, twistingwood Viburnum lantana, and European cranberry bush Viburnum opulus) that are all pollinated by insects produced fewer fruit in highly radioactively contaminated areas, partly linked to the local reduction in abundance of pollinators. This was the case even when controlling for the fact that fruit trees were generally smaller in more contaminated areas. Fruit-eating birds like thrushes and warblers that are known seed dispersers were less numerous in areas with lower fruit abundance, even after controlling for the effects of radiation, providing a direct link between radiation, pollinator abundance, fruit abundance and abundance of frugivores. Given that the Chernobyl disaster happened 25 years ago, one would predict reduced local recruitment of fruit trees if fruit set has been persistently depressed during that period; indeed, local recruitment was negatively related to the level of radiation and positively to the local level of fruit set. The patterns at the level of trees were replicated at the level of villages across the study site. This study provides the first large-scale study of the effects of a suppressed pollinator community on ecosystem functioning.

  10. Does environmental policy affect scaling laws between population and pollution? Evidence from American metropolitan areas

    PubMed Central

    2017-01-01

    Modern cities are engines of production, innovation, and growth. However, urbanization also increases both local and global pollution from household consumption and firms’ production. Do emissions change proportionately to city size or does pollution tend to outpace or lag urbanization? Do emissions scale differently with population versus economic growth or are emissions, population, and economic growth inextricably linked? How are the scaling relationships between emissions, population, and economic growth affected by environmental regulation? This paper examines the link between urbanization, economic growth and pollution using data from Metropolitan Statistical Areas (MSAs) in the United States between 1999 and 2011. We find that the emissions of local air pollution in these MSAs scale according to a ¾ power law with both population size and gross domestic product (GDP). However, the monetary damages from these local emissions scale linearly with both population and GDP. Counties that have previously been out of attainment with the local air quality standards set by the Clean Air Act show an entirely different relationship: local emissions scale according to the square root of population, while the monetary damages from local air pollution follow a 2/3rds power law with population. Counties out of attainment are subject to more stringent emission controls; we argue based on this that enforcement of the Clean Air Act induces sublinear scaling between emissions, damages, and city size. In contrast, we find that metropolitan GDP scales super-linearly with population in all MSAs regardless of attainment status. Summarizing, our findings suggest that environmental policy limits the adverse effects of urbanization without interfering with the productivity benefits that manifest in cities. PMID:28792949

  11. Does environmental policy affect scaling laws between population and pollution? Evidence from American metropolitan areas.

    PubMed

    Muller, Nicholas Z; Jha, Akshaya

    2017-01-01

    Modern cities are engines of production, innovation, and growth. However, urbanization also increases both local and global pollution from household consumption and firms' production. Do emissions change proportionately to city size or does pollution tend to outpace or lag urbanization? Do emissions scale differently with population versus economic growth or are emissions, population, and economic growth inextricably linked? How are the scaling relationships between emissions, population, and economic growth affected by environmental regulation? This paper examines the link between urbanization, economic growth and pollution using data from Metropolitan Statistical Areas (MSAs) in the United States between 1999 and 2011. We find that the emissions of local air pollution in these MSAs scale according to a ¾ power law with both population size and gross domestic product (GDP). However, the monetary damages from these local emissions scale linearly with both population and GDP. Counties that have previously been out of attainment with the local air quality standards set by the Clean Air Act show an entirely different relationship: local emissions scale according to the square root of population, while the monetary damages from local air pollution follow a 2/3rds power law with population. Counties out of attainment are subject to more stringent emission controls; we argue based on this that enforcement of the Clean Air Act induces sublinear scaling between emissions, damages, and city size. In contrast, we find that metropolitan GDP scales super-linearly with population in all MSAs regardless of attainment status. Summarizing, our findings suggest that environmental policy limits the adverse effects of urbanization without interfering with the productivity benefits that manifest in cities.
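
    The scaling relationships above are power laws of the form E ~ P^beta, estimated by regression in log-log space. The following minimal sketch shows that arithmetic on synthetic data generated with beta = 0.75; the data are purely illustrative and are not the MSA data analysed in the paper.

```python
# Minimal sketch of estimating a scaling exponent E ~ P^beta by ordinary least
# squares in log-log space; the data below are synthetic, generated with
# beta = 0.75 only to illustrate the fit.
import numpy as np

rng = np.random.default_rng(1)
population = rng.lognormal(mean=12, sigma=1.0, size=300)        # city sizes
emissions = population ** 0.75 * rng.lognormal(0, 0.2, size=300)

logP, logE = np.log(population), np.log(emissions)
beta, intercept = np.polyfit(logP, logE, deg=1)                 # slope = exponent
print(f"estimated scaling exponent: {beta:.3f}")                # ~0.75
# beta < 1 indicates sublinear (economies-of-scale) behaviour;
# beta > 1 indicates super-linear scaling such as the GDP-population relation.
```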

  12. "It's Just More in the Real World Really": How Can a Local Project Support Early Years Practitioners from Different Settings in Working and Learning Together?

    ERIC Educational Resources Information Center

    Cotton, Lizzie

    2013-01-01

    This article describes how early years practitioners working in different settings, with different experiences and qualifications, can work and learn together. It is a small-scale case study of an eight-month project, with a grass-roots approach, involving early years settings within the reach area of an inner-London Children's Centre. The data…

  13. Developing micro-level urban ecosystem indicators for sustainability assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dizdaroglu, Didem, E-mail: dizdaroglu@bilkent.edu.tr

Sustainability assessment is increasingly being viewed as an important tool to aid in the shift towards sustainable urban ecosystems. An urban ecosystem is a dynamic system and requires regular monitoring and assessment through a set of relevant indicators. An indicator is a parameter which provides information about the state of the environment by producing a quantitative value. Indicator-based sustainability assessment needs to be considered on all spatial scales to provide effective information on urban ecosystem sustainability. Detailed data are necessary to assess environmental change in urban ecosystems at the local scale and to transfer this information readily to the national and global scales. This paper proposes a set of key micro-level urban ecosystem indicators for monitoring the sustainability of residential developments. The proposed indicator framework measures the sustainability performance of the urban ecosystem in three main categories: natural environment, built environment, and socio-economic environment, which are made up of 9 sub-categories consisting of 23 indicators. This paper also describes theoretical foundations for the selection of each indicator with reference to the literature. Highlights: • As the impacts of environmental problems have multi-scale characteristics, sustainability assessment needs to be considered on all scales. • Detailed data are necessary to assess local environmental change in urban ecosystems to provide insights into the national and global scales. • This paper proposes a set of key micro-level urban ecosystem indicators for monitoring the sustainability of residential developments. • This paper also describes theoretical foundations for the selection of each indicator with reference to the literature.

  14. Crack surface roughness in three-dimensional random fuse networks

    NASA Astrophysics Data System (ADS)

    Nukala, Phani Kumar V. V.; Zapperi, Stefano; Šimunović, Srđan

    2006-08-01

Using large system sizes with extensive statistical sampling, we analyze the scaling properties of crack roughness and damage profiles in the three-dimensional random fuse model. The analysis of damage profiles indicates that damage accumulates in a diffusive manner up to the peak load, and localization sets in abruptly at the peak load, starting from a uniform damage landscape. The global crack width scales as W ∼ L^0.5 and is consistent with the scaling of the localization length ξ ∼ L^0.5 used in the data collapse of damage profiles in the postpeak regime. This consistency between the global crack roughness exponent and the postpeak damage profile localization length supports the idea that the postpeak damage profile is predominantly due to the localization produced by the catastrophic failure, which at the same time results in the formation of the final crack. Finally, the crack width distributions can be collapsed for different system sizes and follow a log-normal distribution.

  15. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    PubMed

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms that are only applicable to isotropic networks, and therefore adapts well to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
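
    A hedged sketch of the modeling stage follows: a regularized extreme learning machine (ELM) with a random hidden layer and ridge-regularized output weights that maps hop-count vectors to physical distances. The hidden-layer size, anchor count, regularization constant, and toy data are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of a regularized ELM mapping hop counts to physical distances.
import numpy as np

rng = np.random.default_rng(2)

def elm_train(H_hops, d_true, n_hidden=100, lam=1e-2):
    """Fit random-hidden-layer weights by ridge regression (regularized ELM)."""
    n_anchors = H_hops.shape[1]
    W = rng.normal(size=(n_anchors, n_hidden))        # random input weights
    b = rng.normal(size=n_hidden)                     # random biases
    G = np.tanh(H_hops @ W + b)                       # hidden-layer activations
    # Output weights: (G'G + lam*I)^-1 G'd -- the regularized least-squares fit.
    beta = np.linalg.solve(G.T @ G + lam * np.eye(n_hidden), G.T @ d_true)
    return W, b, beta

def elm_predict(H_hops, W, b, beta):
    return np.tanh(H_hops @ W + b) @ beta

# Toy data: hop counts from 200 training nodes to 8 anchors, plus noisy distances.
hops = rng.integers(1, 15, size=(200, 8)).astype(float)
dist = hops * 25.0 + rng.normal(0, 5.0, size=(200, 8))   # roughly 25 m per hop
W, b, beta = elm_train(hops, dist)
est = elm_predict(rng.integers(1, 15, size=(10, 8)).astype(float), W, b, beta)
```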

  16. Time scales of tunneling decay of a localized state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ban, Yue; Muga, J. G.; Sherman, E. Ya.

    2010-12-15

Motivated by recent time-domain experiments on ultrafast atom ionization, we analyze the transients and time scales that characterize, aside from the relatively long lifetime, the decay of a localized state by tunneling. While the tunneling starts immediately, some time is required for the outgoing flux to develop. This short-term behavior depends strongly on the initial state. For an initial state tightly localized enough that the initial transients are dominated by over-the-barrier motion, the time scale for flux propagation through the barrier is close to the Buettiker-Landauer traversal time. Then a quasistationary, slow-decay process follows, which sets ideal conditions for observing diffraction in time at longer times and distances. To define operationally a tunneling time at the barrier edge, we extrapolate backward the propagation of the wave packet that escaped from the potential. This extrapolated time is considerably longer than the time scale of the flux and density buildup at the barrier edge.

  17. The ML Scale in Norway

    DTIC Science & Technology

    1990-05-31

instead of the earlier value of 2800. The new ML values have also been related by regression to a data set of Ms values, yielding the relation Ms ... reported until Båth et al. (1976) developed a new local magnitude scale (hereafter for simplicity called Båth's ML scale) based on Swedish data. The ... the data. Originally, a much larger data base was considered, based on available earthquake catalogues since 1971 (Bungum et al., 1990). In doing this

  18. Coupling Aeolian Stratigraphic Architecture to Paleo-Boundary Conditions: The Scour-Fill Dominated Jurassic Page Sandstone

    NASA Astrophysics Data System (ADS)

    Cardenas, B. T.; Kocurek, G.; Mohrig, D. C.; Swanson, T.

    2017-12-01

    The stratigraphic architecture of aeolian sandstones is thought to encode signals originating from both autogenic dune behavior and allogenic boundary conditions within which the dune field evolves. Mapping of outcrop-scale bounding surfaces and sets of cross-strata between these surfaces for the Jurassic Page Sandstone near Page, AZ, USA, demonstrates that dune autogenic behavior manifested in variable dune scour depth, whereas the dominant boundary conditions were antecedent topography and water-table elevation. At the study area, the Page Sandstone is 60 m thick and is separated from the underlying Navajo Sandstone by the J-2 regional unconformity, which shows meters of relief. Filling J-2 depressions are thin, climbing sets of cross-strata. In contrast, the overlying Page consists of packages of one to a few, meter-scale sets of cross-strata between the outcrop-scale bounding surfaces. These surfaces, marked by polygonal fractures and local overlying sabkha deposits, are regional in scale and correlated to high stands of the adjacent Carmel sea. Over the km-scale outcrop, the surfaces show erosional relief and packages of cross-strata are locally truncated. Notably absent within these cross-strata packages are early dune-field accumulations, interdune deposits, and apparent dune-climbing. These strata are interpreted to represent a scour-fill architecture created by migrating large dunes within a mature dry aeolian sand sea, in which early phases of dune-field construction have been cannibalized and dune fill of the deepest scours is recorded. At low angles of climb, set thickness is dominated by the component of scour-depth variation over the component resulting from the angle of climb. After filling of J-2 depressions, the Page consists of scour-fill accumulations formed during low stands. Carmel transgressions limited sediment availability, causing deflation to the water table and development of the regional bounding surfaces. Each subsequent fall of the water table with Carmel regressions renewed sediment availability, including local breaching of the resistant surfaces and cannibalization of Page accumulations. The Page record exists because of preservation associated with Carmel transgressions and subsidence, without which the Page would be represented by an erosional surface.

  19. State Enabling Legislation for Commercial-Scale Wind Power Siting and the Local Government Role

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McElfish, J.M.; Gersen, S.

Siting of commercial-scale wind facilities (>5 MW) is determined primarily by state laws. State laws either leave siting regulation to local governments, prescribe and constrain the role for local governments, establish state standards, or preempt local governance by having state institutions govern siting. Siting regulation is extremely important to the advancement of wind generation in the United States. Major siting decisions lie ahead for state and local governments as the nation diversifies its energy portfolio. An increase in the number of new wind facilities, siting in more locations and in more heavily populated areas, will require attention to the laws and regulations that govern siting. Local governments exercise some authority over commercial-scale wind facility siting in 48 of the 50 states. In 34 states, local governments have substantial autonomy to regulate the siting of most or all commercial-scale wind facilities. A few states authorize local governments to regulate wind facility siting, but make the scope of local regulation subject to limitations defined by state law. Eleven states set size thresholds for state regulatory involvement, with local governments in these states regulating smaller facilities and state boards regulating larger ones (either exclusively or concurrently with local governments). In just under a third of the states, siting of most or all commercial-scale wind facilities requires approval by both state and local government bodies. Only a few states reserve the regulation of siting of all or virtually all commercial-scale wind facilities to state boards and commissions. The content of the applicable regulations is more important, in general, than the level of government responsible for the decision. Several states that assign siting responsibilities to local governments have specified some of the content and the limits of local regulation. About one fifth of the states have directed boards and commissions to develop statewide regulations to deal with wind facility siting issues subject to state approval. These requirements most often specify standards for setbacks, wildlife, noise, decommissioning, and other issues.

  20. A methodology to link national and local information for spatial targeting of ammonia mitigation efforts

    NASA Astrophysics Data System (ADS)

    Carnell, E. J.; Misselbrook, T. H.; Dore, A. J.; Sutton, M. A.; Dragosits, U.

    2017-09-01

    The effects of atmospheric nitrogen (N) deposition are evident in terrestrial ecosystems worldwide, with eutrophication and acidification leading to significant changes in species composition. Substantial reductions in N deposition from nitrogen oxides emissions have been achieved in recent decades. By contrast, ammonia (NH3) emissions from agriculture have not decreased substantially and are typically highly spatially variable, making efficient mitigation challenging. One solution is to target NH3 mitigation measures spatially in source landscapes to maximize the benefits for nature conservation. The paper develops an approach to link national scale data and detailed local data to help identify suitable measures for spatial targeting of local sources near designated Special Areas of Conservation (SACs). The methodology combines high-resolution national data on emissions, deposition and source attribution with local data on agricultural management and site conditions. Application of the methodology for the full set of 240 SACs in England found that agriculture contributes ∼45 % of total N deposition. Activities associated with cattle farming represented 54 % of agricultural NH3 emissions within 2 km of the SACs, making them a major contributor to local N deposition, followed by mineral fertiliser application (21 %). Incorporation of local information on agricultural management practices at seven example SACs provided the means to correct outcomes compared with national-scale emission factors. The outcomes show how national scale datasets can provide information on N deposition threats at landscape to national scales, while local-scale information helps to understand the feasibility of mitigation measures, including the impact of detailed spatial targeting on N deposition rates to designated sites.

  1. Spectroscopic measurement of spin-dependent resonant tunneling through a 3D disorder: the case of MnAs/GaAs/MnAs junctions.

    PubMed

    Garcia, V; Jaffrès, H; George, J-M; Marangolo, M; Eddrief, M; Etgens, V H

    2006-12-15

We propose an analytical model of spin-dependent resonant tunneling through a 3D assembly of localized states (spread out in energy and in space) in a barrier. An inhomogeneous distribution of localized states leads to resonant tunneling magnetoresistance inversion and asymmetric bias dependence, as evidenced by a set of experiments with MnAs/GaAs(7-10 nm)/MnAs tunnel junctions. One of the key parameters of our theory is a dimensionless critical exponent beta scaling the typical extension of the localized states over the characteristic length scale of the spatial distribution function. Furthermore, we demonstrate, through experiments with localized states introduced preferentially in the middle of the barrier, the influence of a homogeneous distribution on the spin-dependent transport properties.

  2. Can I solve my structure by SAD phasing? Planning an experiment, scaling data and evaluating the useful anomalous correlation and anomalous signal

    PubMed Central

    Terwilliger, Thomas C.; Bunkóczi, Gábor; Hung, Li-Wei; Zwart, Peter H.; Smith, Janet L.; Akey, David L.; Adams, Paul D.

    2016-01-01

A key challenge in the SAD phasing method is solving a structure when the anomalous signal-to-noise ratio is low. Here, algorithms and tools for evaluating and optimizing the useful anomalous correlation and the anomalous signal in a SAD experiment are described. A simple theoretical framework [Terwilliger et al. (2016), Acta Cryst. D72, 346–358] is used to develop methods for planning a SAD experiment, scaling SAD data sets and estimating the useful anomalous correlation and anomalous signal in a SAD data set. The phenix.plan_sad_experiment tool uses a database of solved and unsolved SAD data sets and the expected characteristics of a SAD data set to estimate the probability that the anomalous substructure will be found in the SAD experiment and the expected map quality that would be obtained if the substructure were found. The phenix.scale_and_merge tool scales unmerged SAD data from one or more crystals using local scaling and optimizes the anomalous signal by identifying the systematic differences among data sets, and the phenix.anomalous_signal tool estimates the useful anomalous correlation and anomalous signal after collecting SAD data and estimates the probability that the data set can be solved and the likely figure of merit of phasing. PMID:26960123

  3. Improving depth maps of plants by using a set of five cameras

    NASA Astrophysics Data System (ADS)

    Kaczmarek, Adam L.

    2015-03-01

Obtaining high-quality depth maps and disparity maps with the use of a stereo camera is a challenging task for some kinds of objects. The quality of these maps can be improved by taking advantage of a larger number of cameras. Research on the use of a set of five cameras to obtain disparity maps is presented. The set consists of a central camera and four side cameras. An algorithm for making disparity maps called multiple similar areas (MSA) is introduced. The algorithm was specially designed for the set of five cameras. Experiments were performed with the MSA algorithm and the stereo matching algorithm based on the sum of sum of squared differences (sum of SSD, SSSD) measure. Moreover, the following measures were included in the experiments: sum of absolute differences (SAD), zero-mean SAD (ZSAD), zero-mean SSD (ZSSD), locally scaled SAD (LSAD), locally scaled SSD (LSSD), normalized cross correlation (NCC), and zero-mean NCC (ZNCC). The algorithms presented were applied to images of plants. Making depth maps of plants is difficult because parts of leaves are similar to each other. The potential usability of the described algorithms is especially high in agricultural applications such as robotic fruit harvesting.
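
    For reference, the sketch below implements several of the patch-similarity measures named above (SAD, SSD, and their zero-mean and normalized variants) as plain NumPy functions. This is generic block-matching arithmetic, not the MSA algorithm itself, and the patch sizes are arbitrary.

```python
# Small sketch of common patch-similarity measures used in stereo matching.
import numpy as np

def sad(p, q):   return np.abs(p - q).sum()              # sum of absolute differences
def ssd(p, q):   return ((p - q) ** 2).sum()             # sum of squared differences
def zsad(p, q):  return sad(p - p.mean(), q - q.mean())  # zero-mean SAD
def zssd(p, q):  return ssd(p - p.mean(), q - q.mean())  # zero-mean SSD
def ncc(p, q):   return (p * q).sum() / np.sqrt((p**2).sum() * (q**2).sum())
def zncc(p, q):  return ncc(p - p.mean(), q - q.mean())  # zero-mean NCC

# For disparity search, the reference patch is compared against candidate
# patches along the epipolar line and the best-scoring shift is kept.
ref  = np.random.rand(9, 9)
cand = np.random.rand(9, 9)
print(sad(ref, cand), zncc(ref, cand))
```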

  4. Local morphologic scale: application to segmenting tumor infiltrating lymphocytes in ovarian cancer TMAs

    NASA Astrophysics Data System (ADS)

    Janowczyk, Andrew; Chandran, Sharat; Feldman, Michael; Madabhushi, Anant

    2011-03-01

In this paper we present the concept and associated methodological framework for a novel locally adaptive scale notion called local morphological scale (LMS). Broadly speaking, the LMS at every spatial location is defined as the set of spatial locations, with associated morphological descriptors, which characterize the local structure or heterogeneity for the location under consideration. More specifically, the LMS is obtained as the union of all pixels in the polygon obtained by linking the final locations of trajectories of particles emanating from the location under consideration, where the path traveled by originating particles is a function of the local gradients and heterogeneity that they encounter along the way. As these particles proceed on their trajectory away from the location under consideration, the velocity of each particle (i.e., whether the particles stop, slow down, or simply continue around the object) is modeled using a physics-based system. At some time point the particle velocity goes to zero (potentially on account of encountering (a) repeated obstructions, (b) an insurmountable image gradient, or (c) timing out) and the particle comes to a halt. By using a Monte-Carlo sampling technique, LMS is efficiently determined through parallelized computations. LMS is different from previous local-scale formulations in that it is (a) not a locally connected set of pixels satisfying some pre-defined intensity homogeneity criterion (generalized-scale), nor is it (b) constrained by any prior shape criterion (ball-scale, tensor-scale). Shape descriptors quantifying the morphology of the particle paths are used to define a tensor LMS signature associated with every spatial image location. These features include the number of object collisions per particle, the average velocity of a particle, and the length of the individual particle paths. These features can be used in conjunction with a supervised classifier to correctly differentiate between two different object classes based on local structural properties. In this paper, we apply LMS to the specific problem of classifying regions of interest in Ovarian Cancer (OCa) histology images as either tumor or stroma. This approach is used to classify lymphocytes as either tumor infiltrating lymphocytes (TILs) or non-TILs; the presence of TILs has been identified as an important prognostic indicator for disease outcome in patients with OCa. We present preliminary results on the tumor/stroma classification of 11,000 randomly selected locations of interest, across 11 images obtained from 6 patient studies. Using a Probabilistic Boosting Tree (PBT), our supervised classifier yielded an area under the receiver operating characteristic curve (AUC) of 0.8341 +/- 0.0059 over 5 runs of randomized cross validation. The average LMS computation time for an image patch comprising 2000 pixels, with 24 particles at every location, was only 18 s.

  5. The Portland Basin: A (big) river runs through it

    USGS Publications Warehouse

    Evarts, Russell C.; O'Connor, Jim E.; Wells, Ray E.; Madin, Ian P.

    2009-01-01

    Metropolitan Portland, Oregon, USA, lies within a small Neogene to Holocene basin in the forearc of the Cascadia subduction system. Although the basin owes its existence and structural development to its convergent-margin tectonic setting, the stratigraphic architecture of basin-fill deposits chiefly reflects its physiographic position along the lower reaches of the continental-scale Columbia River system. As a result of this globally unique setting, the basin preserves a complex record of aggradation and incision in response to distant as well as local tectonic, volcanic, and climatic events. Voluminous flood basalts, continental and locally derived sediment and volcanic debris, and catastrophic flood deposits all accumulated in an area influenced by contemporaneous tectonic deformation and variations in regional and local base level.

  6. Cascade phenomenon against subsequent failures in complex networks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhong-Yuan; Liu, Zhi-Quan; He, Xuan; Ma, Jian-Feng

    2018-06-01

Cascade phenomena may lead to catastrophic disasters which severely imperil network safety or security in various complex systems such as communication networks, power grids, social networks and so on. In some flow-based networks, the load of failed nodes can be redistributed locally to their neighboring nodes so as to avoid, as far as possible, traffic oscillations or large-scale cascading failures. However, in such a local flow redistribution model, a small set of key nodes attacked subsequently can result in network collapse. It is therefore a critical problem to effectively find the set of key nodes in the network. To our best knowledge, this work is the first to study this problem comprehensively. We first introduce an extra capacity for every node to put up with flow fluctuations from neighbors, and two extra capacity distributions, a degree-based distribution and an average distribution, are employed. Four heuristic key-node discovery methods, High-Degree-First (HDF), Low-Degree-First (LDF), Random and Greedy Algorithms (GA), are presented. Extensive simulations are carried out in both scale-free networks and random networks. The results show that the greedy algorithm can efficiently find the set of key nodes in both scale-free and random networks. Our work studies network robustness against cascading failures from a novel perspective, and the methods and results are useful for network robustness evaluation and protection.
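
    The following is a hedged sketch of this kind of model: initial load set to node degree, extra capacity proportional to load, failed load split evenly among surviving neighbors, and a greedy search for the node set whose subsequent removal maximizes the cascade. The capacity parameter, network size, and exact redistribution rule are illustrative assumptions rather than the paper's specification.

```python
# Hedged sketch of local load redistribution with a greedy key-node search.
import networkx as nx

def cascade_size(G, seeds, alpha=0.3):
    load = {v: float(G.degree(v)) for v in G}
    cap  = {v: (1 + alpha) * load[v] for v in G}   # extra capacity = alpha * load
    failed, frontier = set(seeds), set(seeds)
    while frontier:
        nxt = set()
        for v in frontier:
            nbrs = [u for u in G.neighbors(v) if u not in failed]
            if not nbrs:
                continue
            share = load[v] / len(nbrs)            # local redistribution of load
            for u in nbrs:
                load[u] += share
                if load[u] > cap[u]:               # overloaded neighbor fails too
                    nxt.add(u)
        failed |= nxt
        frontier = nxt
    return len(failed)

def greedy_key_nodes(G, k=3, alpha=0.3):
    chosen = []
    for _ in range(k):
        # Greedily add the node whose removal enlarges the cascade the most.
        best = max((v for v in G if v not in chosen),
                   key=lambda v: cascade_size(G, chosen + [v], alpha))
        chosen.append(best)
    return chosen

G = nx.barabasi_albert_graph(200, 3, seed=0)       # toy scale-free network
print(greedy_key_nodes(G, k=3))
```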

  7. Energy Decomposition Analysis Based on Absolutely Localized Molecular Orbitals for Large-Scale Density Functional Theory Calculations in Drug Design.

    PubMed

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2016-07-12

    We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.

  8. The Local Bubble: a magnetic veil to our Galaxy

    NASA Astrophysics Data System (ADS)

    Alves, M. I. R.; Boulanger, F.; Ferrière, K.; Montier, L.

    2018-04-01

    The magnetic field in the local interstellar medium does not follow the large-scale Galactic magnetic field. The local magnetic field has probably been distorted by the Local Bubble, a cavity of hot ionized gas extending all around the Sun and surrounded by a shell of cold neutral gas and dust. However, so far no conclusive association between the local magnetic field and the Local Bubble has been established. Here we develop an analytical model for the magnetic field in the shell of the Local Bubble, which we represent as an inclined spheroid, off-centred from the Sun. We fit the model to Planck dust polarized emission observations within 30° of the Galactic poles. We find a solution that is consistent with a highly deformed magnetic field, with significantly different directions towards the north and south Galactic poles. This work sets a methodological framework for modelling the three-dimensional (3D) structure of the magnetic field in the local interstellar medium, which is a most awaited input for large-scale Galactic magnetic field models.

  9. Ecological role and services of tropical mangrove ecosystems: a reassessment

    USGS Publications Warehouse

    Lee, Shing Yip; Primavera, Jurgene H.; Dahdouh-Guebas, Farid; McKee, Karen; Bosire, Jared O.; Cannicci, Stefano; Diele, Karen; Fromard, Francois; Koedam, Nico; Marchand, Cyril; Mendelssohn, Irving; Mukherjee, Nibedita; Record, Sydne

    2014-01-01

    Knowledge of thresholds, spatio-temporal scaling and variability due to geographic, biogeographic and socio-economic settings will improve the management of mangrove ecosystem services. Many drivers respond to global trends in climate change and local changes such as urbanization. While mangroves have traditionally been managed for subsistence, future governance models must involve partnerships between local custodians of mangroves and offsite beneficiaries of the services.

  10. Defining Face Perception Areas in the Human Brain: A Large-Scale Factorial fMRI Face Localizer Analysis

    ERIC Educational Resources Information Center

    Rossion, Bruno; Hanseeuw, Bernard; Dricot, Laurence

    2012-01-01

    A number of human brain areas showing a larger response to faces than to objects from different categories, or to scrambled faces, have been identified in neuroimaging studies. Depending on the statistical criteria used, the set of areas can be overextended or minimized, both at the local (size of areas) and global (number of areas) levels. Here…

  11. Strain localization in models and nature: bridging the gaps.

    NASA Astrophysics Data System (ADS)

    Burov, E.; Francois, T.; Leguille, J.

    2012-04-01

Mechanisms of strain localization and their role in tectonic evolution are still largely debated. Indeed, laboratory data on strain localization processes are not abundant, they do not cover the entire range of possible mechanisms, and they have to be extrapolated, sometimes with great uncertainty, to geological scales, while observations of localization processes at outcrop scale are scarce, not always representative, and usually difficult to quantify. Numerical thermo-mechanical models allow us to investigate the relative importance of some of the localization processes, whether they are hypothesized or observed at laboratory or outcrop scale. The numerical models can test different observationally or analytically derived laws in terms of their applicability to natural scales and tectonic processes. The models are limited, however, in their capacity to reproduce physical mechanisms, and necessarily simplify the softening laws leading to "numerical" localization. Numerical strain localization is also limited by grid resolution and the ability of specific numerical codes to handle large strains and the complexity of the associated physical phenomena. Hence, multiple iterations between observations and models are needed to elucidate the causes of strain localization in nature. We here investigate the relative impact of different weakening laws on localization of deformation using large-strain thermo-mechanical models. We test, using several "generic" rifting and collision settings, the implications of structural softening, tectonic heritage, shear heating, friction angle and cohesion softening, ductile softening (mimicking grain-size reduction), as well as a number of other mechanisms such as fluid-assisted phase changes. The results suggest that different mechanisms of strain localization may interfere in nature, yet in most cases it is not evident how to establish quantifiable links between the laboratory data and the best-fitting parameters of the effective softening laws that allow large-scale tectonic evolution to be reproduced. For example, one of the most effective and widely used mechanisms of "numerical" strain localization is friction angle softening. Yet, precisely this law appears to be the most difficult to justify on physical and observational grounds.

  12. The Thick Level-Set model for dynamic fragmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stershic, Andrew J.; Dolbow, John E.; Moës, Nicolas

    The Thick Level-Set (TLS) model is implemented to simulate brittle media undergoing dynamic fragmentation. This non-local model is discretized by the finite element method with damage represented as a continuous field over the domain. A level-set function defines the extent and severity of damage, and a length scale is introduced to limit the damage gradient. Numerical studies in one dimension demonstrate that the proposed method reproduces the rate-dependent energy dissipation and fragment length observations from analytical, numerical, and experimental approaches. In conclusion, additional studies emphasize the importance of appropriate bulk constitutive models and sufficient spatial resolution of the length scale.

  13. Local magnitude scale for earthquakes in Turkey

    NASA Astrophysics Data System (ADS)

    Kılıç, T.; Ottemöller, L.; Havskov, J.; Yanık, K.; Kılıçarslan, Ö.; Alver, F.; Özyazıcıoğlu, M.

    2017-01-01

Based on the earthquake event data accumulated by the Turkish National Seismic Network between 2007 and 2013, the local magnitude (Richter, Ml) scale is calibrated for Turkey and its close neighborhood. A total of 137 earthquakes (Mw > 3.5) are used for the Ml inversion for the whole country. Three Ml scales, for the whole country, East Turkey, and West Turkey, are developed, and the scales also include station correction terms. Since the scales for the two parts of the country are very similar, it is concluded that a single Ml scale is suitable for the whole country. Available data indicate that the new scale suffers from saturation beyond magnitude 6.5. For this data set, the horizontal amplitudes are on average larger than the vertical amplitudes by a factor of 1.8. The recommendation made is to measure Ml amplitudes on the vertical channels and then add the logarithm of this scale factor to obtain a measure of the maximum amplitude on the horizontal components. The new Ml is compared to Mw from EMSC, and there is almost a 1:1 relationship, indicating that the new scale gives reliable magnitudes for Turkey.
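
    For orientation, the sketch below evaluates a generic Hutton-Boore-style local magnitude relation with a distance correction, a station term, and the optional log of a horizontal/vertical amplitude ratio. The coefficients are the classic southern California placeholder values, not the calibrated coefficients of the Turkish scale described above.

```python
# Generic sketch of a Richter-type local magnitude,
#   Ml = log10(A) + c1*log10(r/100) + c2*(r - 100) + 3.0 + S,
# where A is the Wood-Anderson amplitude (mm), r the hypocentral distance (km)
# and S a station correction. Coefficients below are placeholders only.
import math

def local_magnitude(amp_mm, dist_km, station_corr=0.0,
                    c1=1.11, c2=0.00189, horizontal_factor=None):
    ml = (math.log10(amp_mm) + c1 * math.log10(dist_km / 100.0)
          + c2 * (dist_km - 100.0) + 3.0 + station_corr)
    # If the amplitude was measured on the vertical component, add the log of
    # the empirical horizontal/vertical ratio (about 1.8 in the study above).
    if horizontal_factor is not None:
        ml += math.log10(horizontal_factor)
    return ml

print(local_magnitude(amp_mm=2.0, dist_km=80.0, horizontal_factor=1.8))
```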

  14. Disentangling multiple drivers of pollination in a landscape-scale experiment

    PubMed Central

    Schüepp, Christof; Herzog, Felix; Entling, Martin H.

    2014-01-01

    Animal pollination is essential for the reproductive success of many wild and crop plants. Loss and isolation of (semi-)natural habitats in agricultural landscapes can cause declines of plants and pollinators and endanger pollination services. We investigated the independent effects of these drivers on pollination of young cherry trees in a landscape-scale experiment. We included (i) isolation of study trees from other cherry trees (up to 350 m), (ii) the amount of cherry trees in the landscape, (iii) the isolation from other woody habitats (up to 200 m) and (iv) the amount of woody habitats providing nesting and floral resources for pollinators. At the local scale, we considered effects of (v) cherry flower density and (vi) heterospecific flower density. Pollinators visited flowers more often in landscapes with high amount of woody habitat and at sites with lower isolation from the next cherry tree. Fruit set was reduced by isolation from the next cherry tree and by a high local density of heterospecific flowers but did not directly depend on pollinator visitation. These results reveal the importance of considering the plant's need for conspecific pollen and its pollen competition with co-flowering species rather than focusing only on pollinators’ habitat requirements and flower visitation. It proved to be important to disentangle habitat isolation from habitat loss, local from landscape-scale effects, and direct effects of pollen availability on fruit set from indirect effects via pollinator visitation to understand the delivery of an agriculturally important ecosystem service. PMID:24225465

  15. Vocal local versus pharmacological treatments for pain management in tubal ligation procedures in rural Kenya: a non-inferiority trial.

    PubMed

    Keogh, Sarah C; Fry, Kenzo; Mbugua, Edwin; Ayallo, Mark; Quinn, Heidi; Otieno, George; Ngo, Thoai D

    2014-02-04

    Vocal local (VL) is a non-pharmacological pain management technique for gynecological procedures. In Africa, it is usually used in combination with pharmacological analgesics. However, analgesics are associated with side-effects, and can be costly and subject to frequent stock-outs, particularly in remote rural settings. We compared the effectiveness of VL + local anesthesia + analgesics (the standard approach), versus VL + local anesthesia without analgesics, on pain and satisfaction levels for women undergoing tubal ligations in rural Kenya. We conducted a site-randomised non-inferiority trial of 884 women receiving TLs from 40 Marie Stopes mobile outreach sites in Kisii and Machakos Districts. Twenty sites provided VL + local anesthesia + analgesics (control), while 20 offered VL + local anesthesia without additional analgesics (intervention). Pain was measured using a validated 11-point Numeric Rating Scale; satisfaction was measured using 11-point scales. A total of 461 women underwent tubal ligations with VL + local anesthesia, while 423 received tubal ligations with VL + local anesthesia + analgesics. The majority were aged ≥30 years (78%), and had >3 children (99%). In a multivariate analysis, pain during the procedure was not significantly different between the two groups. The pain score after the procedure was significantly lower in the intervention group versus the control group (by 0.40 points; p = 0.041). Satisfaction scores were equally high in both groups; 96% would recommend the procedure to a friend. VL + local anesthesia is as effective as VL + local anesthesia + analgesics for pain management during tubal ligation in rural Kenya. Avoiding analgesics is associated with numerous benefits including cost savings and fewer issues related to the maintenance, procurement and monitoring of restricted opioid drugs, particularly in remote low-resource settings where these systems are weak. Pan-African Clinical Trials Registry PACTR201304000495942.

  16. Neutrinos help reconcile Planck measurements with the local universe.

    PubMed

    Wyman, Mark; Rudd, Douglas H; Vanderveld, R Ali; Hu, Wayne

    2014-02-07

    Current measurements of the low and high redshift Universe are in tension if we restrict ourselves to the standard six-parameter model of flat ΛCDM. This tension has two parts. First, the Planck satellite data suggest a higher normalization of matter perturbations than local measurements of galaxy clusters. Second, the expansion rate of the Universe today, H0, derived from local distance-redshift measurements is significantly higher than that inferred using the acoustic scale in galaxy surveys and the Planck data as a standard ruler. The addition of a sterile neutrino species changes the acoustic scale and brings the two into agreement; meanwhile, adding mass to the active neutrinos or to a sterile neutrino can suppress the growth of structure, bringing the cluster data into better concordance as well. For our fiducial data set combination, with statistical errors for clusters, a model with a massive sterile neutrino shows 3.5σ evidence for a nonzero mass and an even stronger rejection of the minimal model. A model with massive active neutrinos and a massless sterile neutrino is similarly preferred. An eV-scale sterile neutrino mass--of interest for short baseline and reactor anomalies--is well within the allowed range. We caution that (i) unknown astrophysical systematic errors in any of the data sets could weaken this conclusion, but they would need to be several times the known errors to eliminate the tensions entirely; (ii) the results we find are at some variance with analyses that do not include cluster measurements; and (iii) some tension remains among the data sets even when new neutrino physics is included.

  17. Status of the FLARE (Facility for Laboratory Reconnection Experiments) Construction Project and Plans as a User Facility

    NASA Astrophysics Data System (ADS)

    Ji, H.; Bhattacharjee, A.; Prager, S.; Daughton, W.; Chen, Y.; Cutler, R.; Fox, W.; Hoffmann, F.; Kalish, M.; Jara-Almonte, J.; Myers, C.; Ren, Y.; Yamada, M.; Yoo, J.; Bale, S. D.; Carter, T.; Dorfman, S.; Drake, J.; Egedal, J.; Sarff, J.; Wallace, J.

    2016-10-01

    The FLARE device (flare.pppl.gov) is a new intermediate-scale plasma experiment under construction at Princeton for studies of magnetic reconnection in the multiple X-line regimes directly relevant to space, solar, astrophysical, and fusion plasmas, as guided by a reconnection phase diagram [Ji & Daughton, (2011)]. Most of the major components have either already been fabricated or are nearing completion, including the two most crucial magnets, called flux cores. Hardware assembly and installation begin this summer, followed by commissioning in 2017. An initial comprehensive set of research diagnostics will also be constructed and installed in 2017. The main diagnostic is an extensive set of magnetic probe arrays covering multiple scales, from local electron scales, through intermediate ion scales, to global MHD scales. The planned procedures and example topics as a user facility will be discussed.

  18. Scale-dependent portfolio effects explain growth inflation and volatility reduction in landscape demography

    PubMed Central

    2017-01-01

    Population demography is central to fundamental ecology and for predicting range shifts, decline of threatened species, and spread of invasive organisms. There is a mismatch between most demographic work, carried out on few populations and at local scales, and the need to predict dynamics at landscape and regional scales. Inspired by concepts from landscape ecology and Markowitz’s portfolio theory, we develop a landscape portfolio platform to quantify and predict the behavior of multiple populations, scaling up the expectation and variance of the dynamics of an ensemble of populations. We illustrate this framework using a 35-y time series on gypsy moth populations. We demonstrate the demography accumulation curve in which the collective growth of the ensemble depends on the number of local populations included, highlighting a minimum but adequate number of populations for both regional-scale persistence and cross-scale inference. The attainable set of landscape portfolios further suggests tools for regional population management for both threatened and invasive species. PMID:29109261
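
    The variance-reduction logic behind the landscape portfolio is essentially Markowitz's: the variance of the summed, landscape-level abundance depends on the covariances among local populations, so weakly correlated populations buffer one another. The sketch below illustrates this scaling of the ensemble mean and variance with synthetic count data; it is an illustration of the general portfolio calculation under assumed parameters, not the authors' landscape demography platform, and all names and values in it are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical abundance time series: 20 local populations x 35 years.
        n_pops, n_years = 20, 35
        log_growth = rng.normal(0.02, 0.3, size=(n_pops, n_years))
        abundance = 100.0 * np.exp(np.cumsum(log_growth, axis=1))

        # Landscape "portfolio": sum over local populations each year.
        portfolio = abundance.sum(axis=0)
        portfolio_growth = np.diff(np.log(portfolio))

        # Portfolio variance vs. the average variance of single populations.
        local_growth = np.diff(np.log(abundance), axis=1)
        mean_local_var = local_growth.var(axis=1).mean()
        portfolio_var = portfolio_growth.var()

        print(f"mean growth (portfolio): {portfolio_growth.mean():.3f}")
        print(f"variance, single populations: {mean_local_var:.3f}")
        print(f"variance, landscape portfolio: {portfolio_var:.3f}")

    Re-running the sketch with fewer populations included in the sum gives a crude analogue of the demography accumulation curve described above.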

  19. Rapid insights from remote sensing in the geosciences

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio

    2015-03-01

    The growing availability of capacity computing for atomistic materials modeling has encouraged the use of high-accuracy computationally intensive interatomic potentials, such as SNAP. These potentials also happen to scale well on petascale computing platforms. SNAP has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected on to a basis of hyperspherical harmonics in four dimensions. The computational cost per atom is much greater than that of simpler potentials such as Lennard-Jones or EAM, while the communication cost remains modest. We discuss a variety of strategies for implementing SNAP in the LAMMPS molecular dynamics package. We present scaling results obtained running SNAP on three different classes of machine: a conventional Intel Xeon CPU cluster; the Titan GPU-based system; and the combined Sequoia and Vulcan BlueGene/Q. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. Dept. of Energy's National Nuclear Security Admin. under Contract DE-AC04-94AL85000.
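
    Schematically, SNAP-type potentials express the total energy as a linear, machine-learned model in the per-atom bispectrum components. The form below is a hedged sketch of that idea, with β the fitted coefficients and B^i the bispectrum descriptor vector of atom i; it is not a statement of the exact parameterization used in any particular LAMMPS release.

        \[
          E_{\mathrm{SNAP}}(\mathbf{r}^N) \;\approx\; N\,\beta_0 \;+\; \sum_{i=1}^{N} \boldsymbol{\beta} \cdot \mathbf{B}^{i},
          \qquad
          \mathbf{F}_j \;=\; -\nabla_{\mathbf{r}_j} E_{\mathrm{SNAP}}
          \;=\; -\,\boldsymbol{\beta} \cdot \sum_{i=1}^{N} \frac{\partial \mathbf{B}^{i}}{\partial \mathbf{r}_j} .
        \]

    The expensive step is thus evaluating the bispectrum components and their derivatives for each atom's local neighborhood, which is a local operation and therefore parallelizes with only modest communication.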

  20. Optimization of selected molecular orbitals in group basis sets.

    PubMed

    Ferenczy, György G; Adams, William H

    2009-04-07

    We derive a local basis equation which may be used to determine the orbitals of a group of electrons in a system when the orbitals of that group are represented by a group basis set, i.e., not the basis set one would normally use but a subset suited to a specific electronic group. The group orbitals determined by the local basis equation minimize the energy of a system when a group basis set is used and the orbitals of other groups are frozen. In contrast, under the constraint of a group basis set, the group orbitals satisfying the Huzinaga equation do not minimize the energy. In a test of the local basis equation on HCl, the group basis set included only 12 of the 21 functions in a basis set one might ordinarily use, but the calculated active orbital energies were within 0.001 hartree of the values obtained by solving the Hartree-Fock-Roothaan (HFR) equation using all 21 basis functions. The total energy found was just 0.003 hartree higher than the HFR value. The errors with the group basis set approximation to the Huzinaga equation were larger by over two orders of magnitude. Similar results were obtained for PCl3 with the group basis approximation. Retaining more basis functions allows even higher accuracy, as shown by the perfect reproduction of the HFR energy of HCl with 16 out of 21 basis functions in the valence basis set. When the core basis set was also truncated, no additional error was introduced in the calculations performed for HCl with various basis sets. The same calculations with fixed core orbitals taken from isolated heavy atoms added a small error of about 10⁻⁴ hartree. This offers a practical way to calculate wave functions with predetermined fixed core and reduced-basis valence orbitals at reduced computational costs. The local basis equation can also be used to combine the above approximations with the assignment of local basis sets to groups of localized valence molecular orbitals and to derive a priori localized orbitals. An appropriately chosen localization and basis set assignment allowed a reproduction of the energy of n-hexane with an error of 10⁻⁵ hartree, while the energy difference between its two conformers was reproduced with similar accuracy for several combinations of localizations and basis set assignments. These calculations include localized orbitals extending to 4-5 heavy atoms and thus require the solution of reduced-dimension secular equations. The dimensions are not expected to increase with increasing system size, and thus the local basis equation may find use in linear scaling electronic structure calculations.

  1. A new method of automatic landmark tagging for shape model construction via local curvature scale

    NASA Astrophysics Data System (ADS)

    Rueda, Sylvia; Udupa, Jayaram K.; Bai, Li

    2008-03-01

    Segmentation of organs in medical images is a difficult task requiring very often the use of model-based approaches. To build the model, we need an annotated training set of shape examples with correspondences indicated among shapes. Manual positioning of landmarks is a tedious, time-consuming, and error prone task, and almost impossible in the 3D space. To overcome some of these drawbacks, we devised an automatic method based on the notion of c-scale, a new local scale concept. For each boundary element b, the arc length of the largest homogeneous curvature region connected to b is estimated as well as the orientation of the tangent at b. With this shape description method, we can automatically locate mathematical landmarks selected at different levels of detail. The method avoids the use of landmarks for the generation of the mean shape. The selection of landmarks on the mean shape is done automatically using the c-scale method. Then, these landmarks are propagated to each shape in the training set, defining this way the correspondences among the shapes. Altogether 12 strategies are described along these lines. The methods are evaluated on 40 MRI foot data sets, the object of interest being the talus bone. The results show that, for the same number of landmarks, the proposed methods are more compact than manual and equally spaced annotations. The approach is applicable to spaces of any dimensionality, although we have focused in this paper on 2D shapes.
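
    As a rough illustration of curvature-driven landmark selection (not the authors' c-scale construction itself), the sketch below computes the curvature along a closed 2D contour and keeps the points where the absolute curvature is locally maximal. The toy contour, the number of landmarks kept, and the function names are hypothetical.

        import numpy as np

        def curvature_landmarks(x, y, n_keep=10):
            """Pick landmark indices at local maxima of |curvature| on a closed contour."""
            dx, dy = np.gradient(x), np.gradient(y)
            ddx, ddy = np.gradient(dx), np.gradient(dy)
            kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
            k = np.abs(kappa)
            # Local maxima on a periodic (closed) boundary.
            is_max = (k > np.roll(k, 1)) & (k >= np.roll(k, -1))
            candidates = np.flatnonzero(is_max)
            return candidates[np.argsort(k[candidates])[::-1][:n_keep]]

        # Toy closed contour: a bumpy ellipse.
        t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
        x = 2.0 * np.cos(t) + 0.2 * np.cos(5 * t)
        y = 1.0 * np.sin(t) + 0.2 * np.sin(3 * t)

        print("landmark indices:", curvature_landmarks(x, y))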

  2. Scaling theory of tunneling diffusion of a heavy particle interacting with phonons

    NASA Astrophysics Data System (ADS)

    Itai, K.

    1988-05-01

    The author discusses the motion of a heavy particle in a d-dimensional lattice interacting with phonons through different couplings. The models discussed are characterized by the dimension (d) and a set of two indices (λ, ν) which specify the momentum dependence of the phonon dispersion (ω ~ k^ν) and of the particle-phonon coupling (~ k^λ). Scaling equations are derived by eliminating the short-time behavior in a renormalization-group scheme using Feynman's path-integral method and the technique developed by Anderson, Yuval, and Hamann for the Kondo problem. The scaling equations show that the particle is localized in the strict sense when (2λ+d+2)/ν < 2 and is not localized when (2λ+d+2)/ν > 2. In the marginal case, i.e., (2λ+d+2)/ν = 2, localization occurs for couplings larger than a critical value. This marginal case shows Ohmic dissipation and is closely analogous to the Caldeira-Leggett model for macroscopic quantum tunneling and to hopping models of Schmid's type. For large enough (2λ+d+2)/ν, the particle is considered practically localized, but the origin of the localization is quite different from that for (2λ+d+2)/ν ≤ 2.

  3. Generic evolution of mixing in heterogeneous media

    NASA Astrophysics Data System (ADS)

    De Dreuzy, J.; Carrera, J.; Dentz, M.; Le Borgne, T.

    2011-12-01

    Mixing in heterogeneous media results from the competition between flow fluctuations and local-scale diffusion. Flow fluctuations quickly create concentration contrasts, and thus heterogeneity of the concentration field, which is slowly homogenized by local-scale diffusion. Mixing first deviates from Gaussian mixing, which represents the potential mixing induced by spreading, before approaching it. This deviation fundamentally expresses the evolution of the interaction between spreading and local-scale diffusion. We characterize it by the ratio γ of the non-Gaussian to the Gaussian mixing state. We define the Gaussian mixing state as the integrated squared concentration of the Gaussian plume that has the same longitudinal dispersion as the real plume. The non-Gaussian mixing state is the difference between the overall mixing state, defined as the integrated squared concentration, and the Gaussian mixing state. The main advantage of this definition is that it uses the full knowledge previously acquired on dispersion to characterize mixing even when the solute concentration field is highly non-Gaussian. Using high-precision numerical simulations, we show that γ quickly increases, peaks, and slowly decreases. γ can be derived from two scales characterizing spreading and local mixing, at least for large flux-weighted solute injections into classically log-normal, Gaussian-correlated permeability fields. The spreading scale is directly related to the longitudinal dispersion. The local mixing scale is the largest scale over which solute concentrations can be considered locally uniform. More generally, beyond the characteristics of its maximum, γ turns out to have a highly generic scaling form. Its fast increase and slow decrease depend neither on the heterogeneity level, nor on the ratio of diffusion to advection, nor on the injection conditions. They might not even depend on the particularities of the flow field, as the same generic features also prevail for Taylor dispersion. This generic characterization of mixing can offer new ways to set up transport equations that honor not only advection and spreading (dispersion), but also mixing.
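
    In the notation of the abstract, a compact way to write the mixing descriptor is via the integrated squared concentration; the expressions below simply restate the definitions given above (overall mixing state, Gaussian reference plume with the same longitudinal dispersion σ_L(t), and their ratio γ).

        \[
          M(t) = \int c^2(\mathbf{x},t)\,d\mathbf{x},
          \qquad
          M_G(t) = \int c_G^2(\mathbf{x},t)\,d\mathbf{x}
          \;\;\text{with } \sigma_L^{(G)}(t) = \sigma_L(t),
          \qquad
          \gamma(t) = \frac{M(t) - M_G(t)}{M_G(t)} ,
        \]

    so γ = 0 for a plume that is as well mixed as its Gaussian counterpart, and γ > 0 measures the excess concentration heterogeneity created by the flow fluctuations.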

  4. Small-scale behavior in distorted turbulent boundary layers at low Reynolds number

    NASA Technical Reports Server (NTRS)

    Saddoughi, Seyed G.

    1994-01-01

    During the last three years we have conducted high- and low-Reynolds-number experiments, including hot-wire measurements of the velocity fluctuations, in the test-section-ceiling boundary layer of the 80- by 120-foot Full-Scale Aerodynamics Facility at NASA Ames Research Center, to test the local-isotropy predictions of Kolmogorov's universal equilibrium theory. This hypothesis, which states that at sufficiently high Reynolds numbers the small-scale structures of turbulent motions are independent of large-scale structures and mean deformations, has been used in theoretical studies of turbulence and computational methods such as large-eddy simulation; however, its range of validity in shear flows has been a subject of controversy. The present experiments were planned to enhance our understanding of the local-isotropy hypothesis. Our experiments were divided into two sets. First, measurements were taken at different Reynolds numbers in a plane boundary layer, which is a 'simple' shear flow. Second, experiments were designed to address this question: will our criteria for the existence of local isotropy hold for 'complex' nonequilibrium flows in which extra rates of mean strain are added to the basic mean shear?

  5. Local-scale topoclimate effects on treeline elevations: a country-wide investigation of New Zealand's southern beech treelines.

    PubMed

    Case, Bradley S; Buckley, Hannah L

    2015-01-01

    Although treeline elevations are limited globally by growing season temperature, at regional scales treelines frequently deviate below their climatic limit. The causes of these deviations relate to a host of climatic, disturbance, and geomorphic factors that operate at multiple scales. The ability to disentangle the relative effects of these factors is currently hampered by the lack of reliable topoclimatic data, which describe how regional climatic characteristics are modified by topographic effects in mountain areas. In this study we present an analysis of the combined effects of local- and regional-scale factors on southern beech treeline elevation variability at 28 study areas across New Zealand. We apply a mesoscale atmospheric model to generate local-scale (200 m) meteorological data at these treelines and, from these data, we derive a set of topoclimatic indices that reflect possible detrimental and ameliorative influences on tree physiological functioning. Principal components analysis of meteorological data revealed geographic structure in how study areas were situated in multivariate space along gradients of topoclimate. Random forest and conditional inference tree modelling enabled us to tease apart the relative effects of 17 explanatory factors on local-scale treeline elevation variability. Overall, modelling explained about 50% of the variation in treeline elevation variability across the 28 study areas, with local landform and topoclimatic effects generally outweighing those from regional-scale factors. Further, the relationships between treeline elevation variability and the explanatory variables were complex, frequently non-linear, and consistent with the treeline literature. To our knowledge, this is the first study where model-generated meteorological data, and derived topoclimatic indices, have been developed and applied to explain treeline variation. Our results demonstrate the potential of such an approach for ecological research in mountainous environments.
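
    For readers unfamiliar with the modelling step, the sketch below shows the general pattern of fitting a random forest to topoclimatic and landform predictors of treeline elevation and reading off variable importances. It uses synthetic data and hypothetical predictor names; it is not the authors' model specification, which also used conditional inference trees and 17 specific explanatory factors.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(1)

        # Hypothetical predictors at treeline sites (rows = sites).
        n_sites = 500
        X = np.column_stack([
            rng.normal(8.0, 2.0, n_sites),     # growing-season temperature (degC)
            rng.normal(25.0, 10.0, n_sites),   # slope (degrees)
            rng.uniform(0.0, 360.0, n_sites),  # aspect (degrees)
            rng.normal(3.0, 1.5, n_sites),     # wind exposure index
        ])
        names = ["gs_temp", "slope", "aspect", "wind_exposure"]

        # Synthetic response: treeline elevation deviation from the regional limit (m).
        y = 15.0 * X[:, 0] - 2.0 * X[:, 1] - 40.0 * X[:, 3] + rng.normal(0, 30, n_sites)

        model = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
        model.fit(X, y)

        print("out-of-bag R^2:", round(model.oob_score_, 2))
        for name, imp in sorted(zip(names, model.feature_importances_), key=lambda p: -p[1]):
            print(f"{name:>14s}: {imp:.2f}")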

  6. Local-scale topoclimate effects on treeline elevations: a country-wide investigation of New Zealand’s southern beech treelines

    PubMed Central

    Buckley, Hannah L.

    2015-01-01

    Although treeline elevations are limited globally by growing season temperature, at regional scales treelines frequently deviate below their climatic limit. The causes of these deviations relate to a host of climatic, disturbance, and geomorphic factors that operate at multiple scales. The ability to disentangle the relative effects of these factors is currently hampered by the lack of reliable topoclimatic data, which describe how regional climatic characteristics are modified by topographic effects in mountain areas. In this study we present an analysis of the combined effects of local- and regional-scale factors on southern beech treeline elevation variability at 28 study areas across New Zealand. We apply a mesoscale atmospheric model to generate local-scale (200 m) meteorological data at these treelines and, from these data, we derive a set of topoclimatic indices that reflect possible detrimental and ameliorative influences on tree physiological functioning. Principal components analysis of meteorological data revealed geographic structure in how study areas were situated in multivariate space along gradients of topoclimate. Random forest and conditional inference tree modelling enabled us to tease apart the relative effects of 17 explanatory factors on local-scale treeline elevation variability. Overall, modelling explained about 50% of the variation in treeline elevation variability across the 28 study areas, with local landform and topoclimatic effects generally outweighing those from regional-scale factors. Further, the relationships between treeline elevation variability and the explanatory variables were complex, frequently non-linear, and consistent with the treeline literature. To our knowledge, this is the first study where model-generated meteorological data, and derived topoclimatic indices, have been developed and applied to explain treeline variation. Our results demonstrate the potential of such an approach for ecological research in mountainous environments. PMID:26528407

  7. A simple scaling approach to produce climate scenarios of local precipitation extremes for the Netherlands

    NASA Astrophysics Data System (ADS)

    Lenderink, Geert; Attema, Jisk

    2015-08-01

    Scenarios of future changes in small scale precipitation extremes for the Netherlands are presented. These scenarios are based on a new approach whereby changes in precipitation extremes are set proportional to the change in water vapor amount near the surface as measured by the 2m dew point temperature. This simple scaling framework allows the integration of information derived from: (i) observations, (ii) a new unprecedentedly large 16 member ensemble of simulations with the regional climate model RACMO2 driven by EC-Earth, and (iii) short term integrations with a non-hydrostatic model Harmonie. Scaling constants are based on subjective weighting (expert judgement) of the three different information sources taking also into account previously published work. In all scenarios local precipitation extremes increase with warming, yet with broad uncertainty ranges expressing incomplete knowledge of how convective clouds and the atmospheric mesoscale circulation will react to climate change.
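
    The scaling step itself is simple arithmetic: a present-day extreme intensity is multiplied by a fixed percentage change per degree of dew-point rise. The sketch below shows that calculation for a hypothetical hourly extreme; the scaling rates are placeholders spanning roughly the Clausius-Clapeyron range discussed in this literature, not the scenario values adopted by the authors.

        def scaled_extreme(p_now_mm, dtd_kelvin, pct_per_k):
            """Scale a precipitation extreme by pct_per_k (%) per kelvin of dew-point change."""
            return p_now_mm * (1.0 + pct_per_k / 100.0) ** dtd_kelvin

        p_now = 30.0       # hypothetical present-day hourly extreme (mm)
        dtd = 2.0          # assumed rise in 2 m dew point temperature (K)

        for rate in (7.0, 10.0, 14.0):   # placeholder scaling rates (% per K)
            print(f"{rate:4.1f} %/K -> {scaled_extreme(p_now, dtd, rate):.1f} mm")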

  8. Preface: Introductory Remarks: Linear Scaling Methods

    NASA Astrophysics Data System (ADS)

    Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.

    2008-07-01

    It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop is that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up implementation questions relating to parallelization (particularly with multi-core processors starting to dominate the market) and inherent scaling and basis sets (in both normal and linear scaling codes). For now, the answer seems to lie between 100-1,000 atoms, though this depends on the type of simulation used among other factors. Basis sets are still a problematic question in the area of electronic structure calculations. The linear scaling community has largely split into two camps: those using relatively small basis sets based on local atomic-like functions (where systematic convergence to the full basis set limit is hard to achieve); and those that use necessarily larger basis sets which allow convergence systematically and therefore are the localised equivalent of plane waves. Related to basis sets is the study of Wannier functions, on which some linear scaling methods are based and which give a good point of contact with traditional techniques; they are particularly interesting for modelling unoccupied states with linear scaling methods. There are, of course, as many approaches to linear scaling solution for the density matrix as there are groups in the area, though there are various broad areas: McWeeny-based methods, fragment-based methods, recursion methods, and combinations of these. While many ideas have been in development for several years, there are still improvements emerging, as shown by the rich variety of the talks below. 
Applications using O(N) DFT methods are now starting to emerge, though they are still clearly not trivial. Once systems to be simulated cross the 10,000 atom barrier, only linear scaling methods can be applied, even with the most efficient standard techniques. One of the most challenging problems remaining, now that ab initio methods can be applied to large systems, is the long timescale problem. Although much of the work presented was concerned with improving the performance of the codes, and applying them to scientifically important problems, there was another important theme: extending functionality. The search for greater accuracy has given an implementation of density functionals designed to model van der Waals interactions accurately, as well as local correlation, TDDFT, QMC and GW methods which, while not explicitly O(N), take advantage of localisation. All speakers at the workshop were invited to contribute to this issue, but not all were able to do this. Hence it is useful to give a complete list of the talks presented, with the names of the sessions; however, many talks fell within more than one area. This is an exciting time for linear scaling methods, which are already starting to contribute significantly to important scientific problems.

Applications to nanostructures and biomolecules:
A DFT study on the structural stability of Ge 3D nanostructures on Si(001) using CONQUEST (Tsuyoshi Miyazaki, D R Bowler, M J Gillan, T Otsuka and T Ohno)
Large scale electronic structure calculation theory and several applications (Takeo Fujiwara and Takeo Hoshi)
ONETEP: Linear-scaling DFT with plane waves (Chris-Kriton Skylaris, Peter D Haynes, Arash A Mostofi, Mike C Payne)
Maximally-localised Wannier functions as building blocks for large-scale electronic structure calculations (Arash A Mostofi and Nicola Marzari)
A linear scaling three dimensional fragment method for ab initio calculations (Lin-Wang Wang, Zhengji Zhao, Juan Meza)
Peta-scalable reactive molecular dynamics simulation of mechanochemical processes (Aiichiro Nakano, Rajiv K. Kalia, Ken-ichi Nomura, Fuyuki Shimojo and Priya Vashishta)
Recent developments and applications of the real-space multigrid (RMG) method (Jerzy Bernholc, M Hodak, W Lu, and F Ribeiro)

Energy minimisation functionals and algorithms:
CONQUEST: A linear scaling DFT code (David R Bowler, Tsuyoshi Miyazaki, Antonio Torralba, Veronika Brazdova, Milica Todorovic, Takao Otsuka and Mike Gillan)
Kernel optimisation and the physical significance of optimised local orbitals in the ONETEP code (Peter Haynes, Chris-Kriton Skylaris, Arash Mostofi and Mike Payne)
A miscellaneous overview of SIESTA algorithms (Jose M Soler)
Wavelets as a basis set for electronic structure calculations and electrostatic problems (Stefan Goedecker)
Wavelets as a basis set for linear scaling electronic structure calculations (Mark Rayson)
O(N) Krylov subspace method for large-scale ab initio electronic structure calculations (Taisuke Ozaki)
Linear scaling calculations with the divide-and-conquer approach and with non-orthogonal localized orbitals (Weitao Yang)
Toward efficient wavefunction based linear scaling energy minimization (Valery Weber)
Accurate O(N) first-principles DFT calculations using finite differences and confined orbitals (Jean-Luc Fattebert)

Linear-scaling methods in dynamics simulations or beyond DFT and ground state properties:
An O(N) time-domain algorithm for TDDFT (Guan Hua Chen)
Local correlation theory and electronic delocalization (Joseph Subotnik)
Ab initio molecular dynamics with linear scaling: foundations and applications (Eiji Tsuchida)
Towards a linear scaling Car-Parrinello-like approach to Born-Oppenheimer molecular dynamics (Thomas Kühne, Michele Ceriotti, Matthias Krack and Michele Parrinello)
Partial linear scaling for quantum Monte Carlo calculations on condensed matter (Mike Gillan)
Exact embedding of local defects in crystals using maximally localized Wannier functions (Eric Cancès)
Faster GW calculations in larger model structures using ultralocalized nonorthogonal Wannier functions (Paolo Umari)

Other approaches for linear-scaling, including methods for metals:
Partition-of-unity finite element method for large, accurate electronic-structure calculations of metals (John E Pask and Natarajan Sukumar)
Semiclassical approach to density functional theory (Kieron Burke)
Ab initio transport calculations in defected carbon nanotubes using O(N) techniques (Blanca Biel, F J Garcia-Vidal, A Rubio and F Flores)
Large-scale calculations with the tight-binding (screened) KKR method (Rudolf Zeller)

Acknowledgments: We gratefully acknowledge funding for the workshop from the UK CCP9 network, CECAM and the ESF through the PsiK network. DRB, PDH and CKS are funded by the Royal Society.

References:
[1] Car R and Parrinello M 1985 Phys. Rev. Lett. 55 2471
[2] Kühne T D, Krack M, Mohamed F R and Parrinello M 2007 Phys. Rev. Lett. 98 066401
[3] Goedecker S 1999 Rev. Mod. Phys. 71 1085
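
The crossover question raised above can be made concrete with a toy cost model: if a conventional diagonalization-based calculation costs roughly a·N^3 per step and a linear-scaling one b·N, the break-even system size is N* = sqrt(b/a). The prefactors below are entirely hypothetical; the point is only that a large linear-scaling overhead pushes N* into the hundreds-to-thousands-of-atoms range quoted in the text.

        import math

        def crossover_atoms(a_cubic, b_linear):
            """System size where a*N**3 equals b*N for toy cost models."""
            return math.sqrt(b_linear / a_cubic)

        # Hypothetical per-SCF-step cost prefactors (arbitrary time units).
        a = 1.0e-4    # cubic-scaling method
        for b in (10.0, 100.0, 1000.0):   # linear-scaling method, varying overhead
            print(f"b = {b:7.1f}  ->  crossover at ~{crossover_atoms(a, b):.0f} atoms")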

  9. Multi-level multi-task learning for modeling cross-scale interactions in nested geospatial data

    USGS Publications Warehouse

    Yuan, Shuai; Zhou, Jiayu; Tan, Pang-Ning; Fergus, Emi; Wagner, Tyler; Sorrano, Patricia

    2017-01-01

    Predictive modeling of nested geospatial data is a challenging problem as the models must take into account potential interactions among variables defined at different spatial scales. These cross-scale interactions, as they are commonly known, are particularly important to understand relationships among ecological properties at macroscales. In this paper, we present a novel, multi-level multi-task learning framework for modeling nested geospatial data in the lake ecology domain. Specifically, we consider region-specific models to predict lake water quality from multi-scaled factors. Our framework enables distinct models to be developed for each region using both its local and regional information. The framework also allows information to be shared among the region-specific models through their common set of latent factors. Such information sharing helps to create more robust models especially for regions with limited or no training data. In addition, the framework can automatically determine cross-scale interactions between the regional variables and the local variables that are nested within them. Our experimental results show that the proposed framework outperforms all the baseline methods in at least 64% of the regions for 3 out of 4 lake water quality datasets evaluated in this study. Furthermore, the latent factors can be clustered to obtain a new set of regions that is more aligned with the response variables than the original regions that were defined a priori from the ecology domain.

  10. Adaptive local basis set for Kohn–Sham density functional theory in a discontinuous Galerkin framework II: Force, vibration, and molecular dynamics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Gaigong; Lin, Lin, E-mail: linlin@math.berkeley.edu; Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720

    Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn–Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann–Feynman forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann–Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H2 and liquid Al–Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.
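
    For orientation, the force decomposition referred to here can be written schematically as follows, with the first term the Hellmann–Feynman contribution and the second the Pulay term arising because the basis functions φ_μ move with the atoms; this is the generic textbook form, not the specific discontinuous Galerkin expressions derived in the paper.

        \[
          \mathbf{F}_I \;=\; -\frac{dE}{d\mathbf{R}_I}
          \;=\; \underbrace{-\Big\langle \Psi \Big|\, \frac{\partial \hat{H}}{\partial \mathbf{R}_I} \,\Big|\Psi \Big\rangle}_{\text{Hellmann–Feynman}}
          \;\; \underbrace{-\;\sum_{\mu} \frac{\partial E}{\partial \varphi_{\mu}}\,\frac{\partial \varphi_{\mu}}{\partial \mathbf{R}_I}}_{\text{Pulay}} ,
        \]

    and the numerical finding quoted above is that the second term shrinks rapidly as the adaptive local basis approaches completeness.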

  11. Adaptive local basis set for Kohn–Sham density functional theory in a discontinuous Galerkin framework II: Force, vibration, and molecular dynamics calculations

    DOE PAGES

    Zhang, Gaigong; Lin, Lin; Hu, Wei; ...

    2017-01-27

    Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn–Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann–Feynman forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann–Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H2 and liquid Al–Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.

  12. Adaptive local basis set for Kohn–Sham density functional theory in a discontinuous Galerkin framework II: Force, vibration, and molecular dynamics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Gaigong; Lin, Lin; Hu, Wei

    Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn–Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann–Feynman forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann–Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H2 and liquid Al–Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.

  13. Adaptive local basis set for Kohn-Sham density functional theory in a discontinuous Galerkin framework II: Force, vibration, and molecular dynamics calculations

    NASA Astrophysics Data System (ADS)

    Zhang, Gaigong; Lin, Lin; Hu, Wei; Yang, Chao; Pask, John E.

    2017-04-01

    Recently, we have proposed the adaptive local basis set for electronic structure calculations based on Kohn-Sham density functional theory in a pseudopotential framework. The adaptive local basis set is efficient and systematically improvable for total energy calculations. In this paper, we present the calculation of atomic forces, which can be used for a range of applications such as geometry optimization and molecular dynamics simulation. We demonstrate that, under mild assumptions, the computation of atomic forces can scale nearly linearly with the number of atoms in the system using the adaptive local basis set. We quantify the accuracy of the Hellmann-Feynman forces for a range of physical systems, benchmarked against converged planewave calculations, and find that the adaptive local basis set is efficient for both force and energy calculations, requiring at most a few tens of basis functions per atom to attain accuracies required in practice. Since the adaptive local basis set has implicit dependence on atomic positions, Pulay forces are in general nonzero. However, we find that the Pulay force is numerically small and systematically decreasing with increasing basis completeness, so that the Hellmann-Feynman force is sufficient for basis sizes of a few tens of basis functions per atom. We verify the accuracy of the computed forces in static calculations of quasi-1D and 3D disordered Si systems, vibration calculation of a quasi-1D Si system, and molecular dynamics calculations of H2 and liquid Al-Si alloy systems, where we show systematic convergence to benchmark planewave results and results from the literature.

  14. Nano-indentation creep properties of the S2 cell wall lamina and compound corner middle lamella [abstract]

    Treesearch

    Joseph E. Jakes; Charles R. Frihart; James F. Beecher; Donald S. Stone

    2010-01-01

    Bulk wood properties are derived from an ensemble of processes taking place at the micron-scale, and at this level the properties differ dramatically in going from cell wall layers to the middle lamella. To better understand the properties of these micron-scaled regions of wood, we have developed a unique set of nano-indentation tools that allow us to measure local...

  15. Analysis of Fiber Clustering in Composite Materials Using High-Fidelity Multiscale Micromechanics

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.

    2015-01-01

    A new multiscale micromechanical approach is developed for the prediction of the behavior of fiber reinforced composites in the presence of fiber clustering. The developed method is based on a coupled two-scale implementation of the High-Fidelity Generalized Method of Cells theory, wherein both the local and global scales are represented using this micromechanical method. Concentration tensors and effective constitutive equations are established on both scales and linked to establish the required coupling, thus providing the local fields throughout the composite as well as the global properties and effective nonlinear response. Two nondimensional parameters, in conjunction with actual composite micrographs, are used to characterize the clustering of fibers in the composite. Based on the predicted local fields, initial yield and damage envelopes are generated for various clustering parameters for a polymer matrix composite with both carbon and glass fibers. Nonlinear epoxy matrix behavior is also considered, with results in the form of effective nonlinear response curves, with varying fiber clustering and for two sets of nonlinear matrix parameters.

  16. A hierarchical framework for investigating epiphyte assemblages: networks, meta-communities, and scale.

    PubMed

    Burns, K C; Zotz, G

    2010-02-01

    Epiphytes are an important component of many forested ecosystems, yet our understanding of epiphyte communities lags far behind that of terrestrial-based plant communities. This discrepancy is exacerbated by the lack of a theoretical context to assess patterns in epiphyte community structure. We attempt to fill this gap by developing an analytical framework to investigate epiphyte assemblages, which we then apply to a data set on epiphyte distributions in a Panamanian rain forest. On a coarse scale, interactions between epiphyte species and host tree species can be viewed as bipartite networks, similar to pollination and seed dispersal networks. On a finer scale, epiphyte communities on individual host trees can be viewed as meta-communities, or suites of local epiphyte communities connected by dispersal. Similar analytical tools are typically employed to investigate species interaction networks and meta-communities, thus providing a unified analytical framework to investigate coarse-scale (network) and fine-scale (meta-community) patterns in epiphyte distributions. Coarse-scale analysis of the Panamanian data set showed that most epiphyte species interacted with fewer host species than expected by chance. Fine-scale analyses showed that epiphyte species richness on individual trees was lower than null model expectations. Therefore, epiphyte distributions were clumped at both scales, perhaps as a result of dispersal limitations. Scale-dependent patterns in epiphyte species composition were observed. Epiphyte-host networks showed evidence of negative co-occurrence patterns, which could arise from adaptations among epiphyte species to avoid competition for host species, while most epiphyte meta-communities were distributed at random. Application of our "meta-network" analytical framework in other locales may help to identify general patterns in the structure of epiphyte assemblages and their variation in space and time.

  17. Optimizing Implementation of Obesity Prevention Programs: A Qualitative Investigation Within a Large-Scale Randomized Controlled Trial.

    PubMed

    Kozica, Samantha L; Teede, Helena J; Harrison, Cheryce L; Klein, Ruth; Lombard, Catherine B

    2016-01-01

    The prevalence of obesity in rural and remote areas is elevated in comparison to urban populations, highlighting the need for interventions targeting obesity prevention in these settings. Implementing evidence-based obesity prevention programs is challenging. This study aimed to investigate factors influencing the implementation of obesity prevention programs, including adoption, program delivery, community uptake, and continuation, specifically within rural settings. Nested within a large-scale randomized controlled trial, a qualitative exploratory approach was adopted, with purposive sampling techniques utilized, to recruit stakeholders from 41 small rural towns in Australia. In-depth semistructured interviews were conducted with clinical health professionals, health service managers, and local government employees. Open coding was completed independently by 2 investigators and thematic analysis undertaken. In-depth interviews revealed that obesity prevention programs were valued by the rural workforce. Program implementation is influenced by interrelated factors across: (1) contextual factors and (2) organizational capacity. Key recommendations to manage the challenges of implementing evidence-based programs focused on reducing program delivery costs, aided by the provision of a suite of implementation and evaluation resources. Informing the scale-up of future prevention programs, stakeholders highlighted the need to build local rural capacity through developing supportive university partnerships, generating local program ownership and promoting active feedback to all program partners. We demonstrate that the rural workforce places a high value on obesity prevention programs. Our results inform the future scale-up of obesity prevention programs, providing an improved understanding of strategies to optimize implementation of evidence-based prevention programs. © 2015 National Rural Health Association.

  18. Chebyshev polynomial filtered subspace iteration in the discontinuous Galerkin method for large-scale electronic structure calculations

    DOE PAGES

    Banerjee, Amartya S.; Lin, Lin; Hu, Wei; ...

    2016-10-21

    The Discontinuous Galerkin (DG) electronic structure method employs an adaptive local basis (ALB) set to solve the Kohn-Sham equations of density functional theory in a discontinuous Galerkin framework. The adaptive local basis is generated on-the-fly to capture the local material physics and can systematically attain chemical accuracy with only a few tens of degrees of freedom per atom. A central issue for large-scale calculations, however, is the computation of the electron density (and subsequently, ground state properties) from the discretized Hamiltonian in an efficient and scalable manner. We show in this work how Chebyshev polynomial filtered subspace iteration (CheFSI) can be used to address this issue and push the envelope in large-scale materials simulations in a discontinuous Galerkin framework. We describe how the subspace filtering steps can be performed in an efficient and scalable manner using a two-dimensional parallelization scheme, thanks to the orthogonality of the DG basis set and block-sparse structure of the DG Hamiltonian matrix. The on-the-fly nature of the ALB functions requires additional care in carrying out the subspace iterations. We demonstrate the parallel scalability of the DG-CheFSI approach in calculations of large-scale two-dimensional graphene sheets and bulk three-dimensional lithium-ion electrolyte systems. In conclusion, employing 55,296 computational cores, the time per self-consistent field iteration for a sample of the bulk 3D electrolyte containing 8,586 atoms is 90 s, and the time for a graphene sheet containing 11,520 atoms is 75 s.
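
    The core of CheFSI is easy to state: apply a Chebyshev polynomial of the shifted and scaled Hamiltonian to a block of trial vectors so that the occupied part of the spectrum is amplified relative to the rest, then orthonormalize and perform a Rayleigh-Ritz step. The dense-matrix sketch below shows that logic with NumPy on a small random symmetric matrix; it ignores the DG block sparsity, the two-dimensional parallelization, and every other implementation detail discussed in the paper, and all sizes and degrees are hypothetical.

        import numpy as np

        def chebyshev_filter(H, X, degree, lo, hi):
            """Apply T_degree((H - c)/e) to the block X, damping eigenpairs in [lo, hi]."""
            e, c = (hi - lo) / 2.0, (hi + lo) / 2.0
            Y = (H @ X - c * X) / e
            for _ in range(2, degree + 1):
                Y_new = 2.0 * (H @ Y - c * Y) / e - X
                X, Y = Y, Y_new
            return Y

        rng = np.random.default_rng(0)
        n, n_states = 200, 10
        A = rng.standard_normal((n, n))
        H = (A + A.T) / 2.0                      # toy symmetric "Hamiltonian"

        evals = np.linalg.eigvalsh(H)
        lo, hi = evals[n_states], evals[-1]      # bounds of the unwanted spectrum

        X = rng.standard_normal((n, n_states))
        for _ in range(20):                      # filtered subspace iterations
            X = chebyshev_filter(H, X, degree=12, lo=lo, hi=hi)
            Q, _ = np.linalg.qr(X)               # orthonormalize the filtered block
            Hs = Q.T @ H @ Q                     # Rayleigh-Ritz projection
            w, V = np.linalg.eigh(Hs)
            X = Q @ V

        print("max error vs exact lowest eigenvalues:", np.abs(w - evals[:n_states]).max())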

  19. Forensic Assessment on Ground Instability Using Electrical Resistivity Imaging (ERI)

    NASA Astrophysics Data System (ADS)

    Hazreek, Z. A. M.; Azhar, A. T. S.; Aziman, M.; Fauzan, S. M. S. A.; Ikhwan, J. M.; Aishah, M. A. N.

    2017-02-01

    Electrical resistivity imaging (ERI) was used to evaluate ground settlement at a local scale in housing areas. ERI and borehole results were used to interpret the condition of the problematic subsurface profile due to its differential stiffness. The electrical resistivity of the subsurface profile was measured using an ABEM SAS4000 equipment set. The ERI results show that the subsurface profile exhibited low (1 - 100 Ωm) and medium (> 100 Ωm) electrical resistivity values (ERV), representing weak to firm materials. The occurrence of soft to medium cohesive material (SPT N value = 2 - 7) alongside stiff cohesive material (SPT N ≥ 8) at the local scale has created an inconsistent ground stability condition. Moreover, it was found that a layer of organic decayed wood (ERV = 43 - 29 Ωm and SPT N = 15 - 9) had been buried within the subsurface profile, weakening the ground structure and ultimately promoting ground settlement. The heterogeneity of the subsurface material, presented through integrated analysis of ERI and borehole data, enabled the ground settlement in this area to be evaluated and was identified as the major factor in the local-scale ground instability. The results can assist in planning a strategy for sustainable ground improvement at the local scale that is fast, low cost, and provides large data coverage.

  20. A Review of Feature Extraction Software for Microarray Gene Expression Data

    PubMed Central

    Tan, Ching Siang; Ting, Wai Soon; Mohamad, Mohd Saberi; Chan, Weng Howe; Deris, Safaai; Ali Shah, Zuraini

    2014-01-01

    When gene expression data are too large to be processed, they are transformed into a reduced representation set of genes. Transforming large-scale gene expression data into a set of genes is called feature extraction. If the genes extracted are carefully chosen, this gene set can extract the relevant information from the large-scale gene expression data, allowing further analysis by using this reduced representation instead of the full size data. In this paper, we review numerous software applications that can be used for feature extraction. The software reviewed is mainly for Principal Component Analysis (PCA), Independent Component Analysis (ICA), Partial Least Squares (PLS), and Local Linear Embedding (LLE). A summary and sources of the software are provided in the last section for each feature extraction method. PMID:25250315
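
    As a minimal illustration of the PCA-style feature extraction surveyed here (using scikit-learn rather than any of the specific packages reviewed), the sketch below projects a synthetic expression matrix onto a handful of principal components; the matrix sizes and variable names are hypothetical.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)

        # Hypothetical expression matrix: 60 samples x 5000 genes.
        n_samples, n_genes = 60, 5000
        expression = rng.normal(size=(n_samples, n_genes))
        expression[:30, :50] += 2.0                # fake signal shared by half the samples

        pca = PCA(n_components=10)
        reduced = pca.fit_transform(expression)    # 60 x 10 reduced representation

        print("reduced shape:", reduced.shape)
        print("variance explained by first 3 PCs:",
              np.round(pca.explained_variance_ratio_[:3], 3))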

  1. Spatial Variation of Pressure in the Lyophilization Product Chamber Part 2: Experimental Measurements and Implications for Scale-up and Batch Uniformity.

    PubMed

    Sane, Pooja; Varma, Nikhil; Ganguly, Arnab; Pikal, Michael; Alexeenko, Alina; Bogner, Robin H

    2017-02-01

    Product temperature during the primary drying step of freeze-drying is controlled by a set point chamber pressure and shelf temperature. However, recent computational modeling suggests a possible variation in local chamber pressure. The current work presents an experimental verification of the local chamber pressure gradients in a lab-scale freeze-dryer. Pressure differences between the center and the edges of a lab-scale freeze-dryer shelf were measured as a function of sublimation flux and clearance between the sublimation front and the shelf above. A modest 3-mTorr difference in pressure was observed as the sublimation flux was doubled from 0.5 to 1.0 kg·h⁻¹·m⁻² at a clearance of 2.6 cm. Further, at a constant sublimation flux of 1.0 kg·h⁻¹·m⁻², an 8-fold increase in the pressure drop was observed across the shelf as the clearance was decreased from 4 to 1.6 cm. Scale-up of the pressure variation from lab- to a manufacturing-scale freeze-dryer predicted an increased uniformity in drying rates across the batch for two frequently used pharmaceutical excipients (mannitol and sucrose at 5% w/w). However, at an atypical condition of shelf temperature of +10°C and chamber pressure of 50 mTorr, the product temperature in the center vials was calculated to be a degree higher than the edge vial for a low resistance product, thus reversing the typical edge and center vial behavior. Thus, the effect of local pressure variation is more significant at the manufacturing-scale than at a lab-scale and accounting for the contribution of variations in the local chamber pressures can improve success in scale-up.

  2. Saliency image of feature building for image quality assessment

    NASA Astrophysics Data System (ADS)

    Ju, Xinuo; Sun, Jiyin; Wang, Peng

    2011-11-01

    The purpose and methods of image quality assessment are quite different for automatic target recognition (ATR) and traditional applications. Local invariant feature detectors, mainly including corner detectors, blob detectors and region detectors, are widely applied for ATR. A saliency model of features is proposed in this paper to evaluate the feasibility of ATR. The first step consists of computing the first-order derivatives in the horizontal and vertical orientations, and computing DoG maps at different scales. Next, saliency images of features are built from the auto-correlation matrix at each scale. Then, the saliency images of features from different scales are amalgamated. Experiments were performed on a large test set, including infrared images and optical images, and the results showed that the salient regions computed by this model were consistent with the real feature regions computed by most local invariant feature extraction algorithms.
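
    A stripped-down version of the pipeline described above (first-order derivatives feeding a multi-scale auto-correlation, i.e. Harris-style, response that is combined across scales) can be written in a few lines with NumPy and SciPy. The kernel widths and the Harris constant are placeholders, and this is an illustrative stand-in, not the authors' saliency model.

        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def harris_response(img, sigma, k=0.05):
            """Auto-correlation (structure tensor) response at one scale."""
            ix = sobel(img, axis=1, mode="reflect")
            iy = sobel(img, axis=0, mode="reflect")
            sxx = gaussian_filter(ix * ix, sigma)
            syy = gaussian_filter(iy * iy, sigma)
            sxy = gaussian_filter(ix * iy, sigma)
            det = sxx * syy - sxy**2
            trace = sxx + syy
            return det - k * trace**2

        def saliency_map(img, sigmas=(1.0, 2.0, 4.0)):
            """Combine normalized per-scale responses into a single saliency image."""
            out = np.zeros_like(img, dtype=float)
            for s in sigmas:
                r = harris_response(img, s)
                out += (r - r.min()) / (np.ptp(r) + 1e-12)
            return out / len(sigmas)

        img = np.random.default_rng(0).random((128, 128))
        sal = saliency_map(img)
        print("saliency range:", sal.min(), sal.max())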

  3. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs.

    PubMed

    Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon

    2018-01-01

    Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of the data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance, with better recall than "vanilla" LSH, even when using the same amount of space.
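
    The sketch below shows the basic random-hyperplane (cosine) LSH construction plus a simple multi-probe step that also inspects buckets whose signatures differ from the query's by one bit. It is a single-machine toy in NumPy with hypothetical sizes throughout, not the Hadoop pipeline evaluated in the paper.

        import numpy as np
        from collections import defaultdict

        rng = np.random.default_rng(0)
        dim, n_bits, n_items = 64, 16, 10000

        planes = rng.standard_normal((n_bits, dim))      # random hyperplanes
        data = rng.standard_normal((n_items, dim))

        def signature(v):
            return tuple((planes @ v > 0).astype(np.int8))

        # Index every item by its bit signature.
        buckets = defaultdict(list)
        for i, v in enumerate(data):
            buckets[signature(v)].append(i)

        def query(v, multiprobe=True):
            """Return candidate ids from the query bucket, plus 1-bit-flip probes."""
            sig = signature(v)
            probes = [sig]
            if multiprobe:
                for b in range(n_bits):
                    flipped = list(sig)
                    flipped[b] ^= 1
                    probes.append(tuple(flipped))
            cands = set()
            for p in probes:
                cands.update(buckets.get(p, ()))
            return cands

        q = data[42] + 0.1 * rng.standard_normal(dim)    # near-duplicate of item 42
        cands = query(q)
        print("candidates:", len(cands), " contains item 42:", 42 in cands)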

  4. Consensus for linear multi-agent system with intermittent information transmissions using the time-scale theory

    NASA Astrophysics Data System (ADS)

    Taousser, Fatima; Defoort, Michael; Djemai, Mohamed

    2016-01-01

    This paper investigates the consensus problem for linear multi-agent system with fixed communication topology in the presence of intermittent communication using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens at a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. The time-scale theory provides a powerful tool to combine continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
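
    To make the intermittent-communication setting concrete, the toy simulation below integrates single-integrator consensus dynamics dx/dt = -Lx only during prescribed communication windows and holds the states constant in between, a crude Euler stand-in for the mixed continuous/discrete analysis carried out with the time-scale theory in the paper. The graph, the duty cycle, and the step size are all hypothetical.

        import numpy as np

        # Line graph on 4 agents: Laplacian L = D - A.
        A = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        L = np.diag(A.sum(axis=1)) - A

        x = np.array([4.0, -1.0, 2.0, 7.0])      # initial agent states
        dt, t_end = 0.01, 12.0

        def communicating(t):
            """Hypothetical duty cycle: agents exchange data during 40% of each 1 s period."""
            return (t % 1.0) < 0.4

        t = 0.0
        while t < t_end:
            if communicating(t):
                x = x - dt * (L @ x)              # local interaction rule (Euler step)
            # otherwise: no information exchange, states held constant
            t += dt

        print("final states:", np.round(x, 3), " average:", round(x.mean(), 3))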

  5. Magnetohydrodynamics Carreau nanofluid flow over an inclined convective heated stretching cylinder with Joule heating

    NASA Astrophysics Data System (ADS)

    Khan, Imad; Shafquatullah; Malik, M. Y.; Hussain, Arif; Khan, Mair

    The current work highlights the computational aspects of MHD Carreau nanofluid flow over an inclined stretching cylinder with convective boundary conditions and Joule heating. The mathematical modeling of the physical problem yields a nonlinear set of partial differential equations. A suitable scaling group of variables is employed on the modeled equations to convert them into non-dimensional form. The Runge-Kutta-Fehlberg integration scheme, combined with a shooting technique, is utilized to solve the resulting set of equations. The interesting aspects of the physical problem (linear momentum, energy and nanoparticle concentration) are elaborated under different parametric conditions in graphical and tabular form. Additionally, the quantities that characterize the physical phenomena in the vicinity of the stretched surface (local skin friction coefficient, local Nusselt number and local Sherwood number) are computed and delineated by varying the controlling flow parameters.
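    The shooting idea coupled with an adaptive Runge-Kutta integrator (SciPy's RK45, used here as a stand-in for a Runge-Kutta-Fehlberg scheme) can be sketched on a toy two-point boundary value problem; the equations below are illustrative and not the Carreau nanofluid model.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    # Toy two-point BVP: f'' = -f, f(0) = 0, f(1) = 1 (exact: f = sin(x)/sin(1)).
    # The shooting technique guesses the missing slope f'(0) and integrates
    # until the far-end boundary condition f(1) = 1 is matched.

    def rhs(x, y):
        f, fp = y
        return [fp, -f]

    def boundary_mismatch(slope):
        sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope], method="RK45", rtol=1e-8)
        return sol.y[0, -1] - 1.0   # error in the far-end boundary condition

    slope = brentq(boundary_mismatch, 0.1, 5.0)    # root-find the correct slope
    print("recovered f'(0) =", slope, "exact =", 1.0 / np.sin(1.0))
    ```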

  6. The influence of environment on the properties of galaxies

    NASA Astrophysics Data System (ADS)

    Hashimoto, Yasuhiro

    1999-11-01

    I will present the result of the evaluation of the environmental influences on three important galactic properties: morphology, star formation rate, and interaction in the local universe. I have used a very large and homogeneous sample of 15749 galaxies drawn from the Las Campanas Redshift Survey (Shectman et al. 1996). This data set consists of galaxies inhabiting the entire range of galactic environments, from the sparsest field to the densest clusters, thus allowing me to study environmental variations without combining multiple data sets with inhomogeneous characteristics. Furthermore, I can also extend the research to a ``general'' environmental investigation by, for the first time, decoupling the very local environment, as characterized by local galaxy density, from the effects of larger-scale environments, such as membership in a cluster. The star formation rate is characterized by the strength of EW(OII), while the galactic morphology is characterized by the automatically-measured concentration index (e.g. Okamura, Kodaira, & Watanabe 1984), which is more closely related to the bulge-to-disk ratio of galaxies than Hubble type, and is therefore expected to behave more independently of star formation activity in a galaxy. On the other hand, the first systematic quantitative investigation of the environmental influence on the interaction of galaxies is made by using two automatically-determined objective measures: the asymmetry index and the existence of companions. The principal conclusions of this work are: (1) The concentration of the galactic light profile (characterized by the concentration index) is predominantly correlated with the relatively small-scale environment, which is characterized by the local galaxy density. (2) The star formation rate of galaxies (characterized by the EW(OII)) is correlated both with the small-scale environment (the local galaxy density) and the larger-scale environment, which is characterized by the cluster membership. For weakly star forming galaxies, the star formation rate is correlated both with the local galaxy density and rich cluster membership. It also shows a correlation with poor cluster membership. For strongly star forming galaxies, the star formation rate is correlated with the local density and the poor cluster membership. (3) Interacting galaxies (characterized by the asymmetry index and/or the existence of apparent companions) show no correlation with rich cluster membership, but show a fair to strong correlation with the poor cluster membership.

  7. Target detection and localization in shallow water: an experimental demonstration of the acoustic barrier problem at the laboratory scale.

    PubMed

    Marandet, Christian; Roux, Philippe; Nicolas, Barbara; Mars, Jérôme

    2011-01-01

    This study demonstrates experimentally at the laboratory scale the detection and localization of a wavelength-sized target in a shallow ultrasonic waveguide between two source-receiver arrays at 3 MHz. In the framework of the acoustic barrier problem, at the 1/1000 scale, the waveguide represents a 1.1-km-long, 52-m-deep ocean acoustic channel in the kilohertz frequency range. The two coplanar arrays record in the time-domain the transfer matrix of the waveguide between each pair of source-receiver transducers. Invoking the reciprocity principle, a time-domain double-beamforming algorithm is simultaneously performed on the source and receiver arrays. This array processing projects the multireverberated acoustic echoes into an equivalent set of eigenrays, which are defined by their launch and arrival angles. Comparison is made between the intensity of each eigenray without and with a target for detection in the waveguide. Localization is performed through tomography inversion of the acoustic impedance of the target, using all of the eigenrays extracted from double beamforming. The use of the diffraction-based sensitivity kernel for each eigenray provides both the localization and the signature of the target. Experimental results are shown in the presence of surface waves, and methodological issues are discussed for detection and localization.
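    A schematic of time-domain double beamforming, assuming plane-wave delay-and-sum applied simultaneously to the source and receiver arrays; the geometry, units and absence of windowing are simplifications, not the experimental processing chain.

    ```python
    import numpy as np

    def double_beamform(data, src_z, rcv_z, c, fs, launch_angles, arrival_angles):
        """Delay-and-sum on both arrays at once (time-domain double beamforming).

        data[i, j, :] is the recorded transfer function between source i and
        receiver j.  Plane-wave delays are applied across both arrays and the
        traces are stacked, giving an intensity map over (launch, arrival) angle.
        """
        n_src, n_rcv, n_t = data.shape
        out = np.zeros((len(launch_angles), len(arrival_angles), n_t))
        t = np.arange(n_t) / fs
        for a, th_s in enumerate(launch_angles):
            d_src = src_z * np.sin(th_s) / c          # per-source delays (s)
            for b, th_r in enumerate(arrival_angles):
                d_rcv = rcv_z * np.sin(th_r) / c      # per-receiver delays (s)
                acc = np.zeros(n_t)
                for i in range(n_src):
                    for j in range(n_rcv):
                        # shift each trace by its total delay, then stack
                        acc += np.interp(t + d_src[i] + d_rcv[j], t, data[i, j])
                out[a, b] = acc / (n_src * n_rcv)
        return out
    ```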

  8. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry

    2017-07-01

    Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies throughout spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes the mentioned problems. The approach is based on Miller-Miller scaling in the relaxed form by Warrick, which fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information of sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based on the ROSETTA PTF of Schaap et al. (2001) applied to the SoilGrids1km data set of Hengl et al. (2014). The example data set is provided at a global resolution of 0.25° at https://doi.org/10.1594/PANGAEA.870605.
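    A small sketch of the core steps, assuming a van Genuchten water retention curve, synthetic sub-grid parameters, and a single effective curve fitted through all sub-grid curves with simple scaling factors; the parameter values are illustrative and unrelated to the published data set.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
        """Water retention curve theta(h), with h the suction head (positive)."""
        m = 1.0 - 1.0 / n
        return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

    # Pretend sub-grid cells: same shape parameters, different alpha (illustrative)
    h = np.logspace(-1, 4, 60)                       # suction heads (cm)
    subgrid_alphas = [0.005, 0.01, 0.02, 0.04]
    curves = [van_genuchten_theta(h, 0.05, 0.40, a, 1.6) for a in subgrid_alphas]

    # Fit one effective WRC through all sub-grid WRCs (grid-cell parameterization)
    h_all = np.tile(h, len(curves))
    theta_all = np.concatenate(curves)
    popt, _ = curve_fit(van_genuchten_theta, h_all, theta_all,
                        p0=[0.05, 0.40, 0.01, 1.5],
                        bounds=([0, 0.2, 1e-4, 1.05], [0.2, 0.6, 1.0, 10]))
    theta_r, theta_s, alpha_eff, n_eff = popt

    # Miller-type scaling factors relating each sub-grid curve to the effective one
    scaling = [a / alpha_eff for a in subgrid_alphas]
    print("effective alpha:", alpha_eff, "sub-grid scaling factors:", scaling)
    ```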

  9. Atmospheric mechanisms governing the spatial and temporal variability of phenological phases in central Europe

    NASA Astrophysics Data System (ADS)

    Scheifinger, Helfried; Menzel, Annette; Koch, Elisabeth; Peter, Christian; Ahas, Rein

    2002-11-01

    A data set of 17 phenological phases from Germany, Austria, Switzerland and Slovenia spanning the time period from 1951 to 1998 has been made available for analysis together with a gridded temperature data set (1° × 1° grid) and the North Atlantic Oscillation (NAO) index time series. The disturbances of the westerlies constitute the main atmospheric source for the temporal variability of phenological events in Europe. The trend, the standard deviation and the discontinuity of the phenological time series at the end of the 1980s can, to a great extent, be explained by the NAO. A number of factors modulate the influence of the NAO in time and space. The seasonal northward shift of the westerlies overlaps with the sequence of phenological spring phases, thereby gradually reducing its influence on the temporal variability of phenological events with progression of spring (temporal loss of influence). This temporal process is reflected by a pronounced decrease in trend and standard deviation values and common variability with the NAO with increasing year-day. The reduced influence of the NAO with increasing distance from the Atlantic coast is not only apparent in studies based on the data set of the International Phenological Gardens, but also in the data set of this study with a smaller spatial extent (large-scale loss of influence). The common variance between phenological and NAO time series displays a discontinuous drop from the European Atlantic coast towards the Alps. On a local and regional scale, mountainous terrain reduces the influence of the large-scale atmospheric flow from the Atlantic via a proposed decoupling mechanism. Valleys in mountainous terrain have the inclination to harbour temperature inversions over extended periods of time during the cold season, which isolate the valley climate from the large-scale atmospheric flow at higher altitudes. Most phenological stations reside at valley bottoms and are thus largely decoupled in their temporal variability from the influence of the westerly flow regime (local-scale loss of influence). This study corroborates an increasing number of similar investigations that find that vegetation does react in a sensitive way to variations of its atmospheric environment across various temporal and spatial scales.

  10. Influence of a large-scale field on energy dissipation in magnetohydrodynamic turbulence

    NASA Astrophysics Data System (ADS)

    Zhdankin, Vladimir; Boldyrev, Stanislav; Mason, Joanne

    2017-07-01

    In magnetohydrodynamic (MHD) turbulence, the large-scale magnetic field sets a preferred local direction for the small-scale dynamics, altering the statistics of turbulence from the isotropic case. This happens even in the absence of a total magnetic flux, since MHD turbulence forms randomly oriented large-scale domains of strong magnetic field. It is therefore customary to study small-scale magnetic plasma turbulence by assuming a strong background magnetic field relative to the turbulent fluctuations. This is done, for example, in reduced models of plasmas, such as reduced MHD, reduced-dimension kinetic models, gyrokinetics, etc., which make theoretical calculations easier and numerical computations cheaper. Recently, however, it has become clear that the turbulent energy dissipation is concentrated in the regions of strong magnetic field variations. A significant fraction of the energy dissipation may be localized in very small volumes corresponding to the boundaries between strongly magnetized domains. In these regions, the reduced models are not applicable. This has important implications for studies of particle heating and acceleration in magnetic plasma turbulence. The goal of this work is to systematically investigate the relationship between local magnetic field variations and magnetic energy dissipation, and to understand its implications for modelling energy dissipation in realistic turbulent plasmas.

  11. Spatially disaggregated population estimates in the absence of national population and housing census data

    PubMed Central

    Wardrop, N. A.; Jochem, W. C.; Bird, T. J.; Chamberlain, H. R.; Clarke, D.; Kerr, D.; Bengtsson, L.; Juran, S.; Seaman, V.; Tatem, A. J.

    2018-01-01

    Population numbers at local levels are fundamental data for many applications, including the delivery and planning of services, election preparation, and response to disasters. In resource-poor settings, recent and reliable demographic data at subnational scales can often be lacking. National population and housing census data can be outdated, inaccurate, or missing key groups or areas, while registry data are generally lacking or incomplete. Moreover, at local scales accurate boundary data are often limited, and high rates of migration and urban growth make existing data quickly outdated. Here we review past and ongoing work aimed at producing spatially disaggregated local-scale population estimates, and discuss how new technologies are now enabling robust and cost-effective solutions. Recent advances in the availability of detailed satellite imagery, geopositioning tools for field surveys, statistical methods, and computational power are enabling the development and application of approaches that can estimate population distributions at fine spatial scales across entire countries in the absence of census data. We outline the potential of such approaches as well as their limitations, emphasizing the political and operational hurdles for acceptance and sustainable implementation of new approaches, and the continued importance of traditional sources of national statistical data. PMID:29555739

  12. Scaling issues in sustainable river basin management

    NASA Astrophysics Data System (ADS)

    Timmerman, Jos; Froebich, Jochen

    2014-05-01

    Sustainable river basin management implies considering the whole river basin when managing the water resources. Management measures target the division of the water over different uses (nature, agriculture, industry, households), thereby avoiding calamities like having too much, too little or bad-quality water. Water management measures are taken at the local level, usually considering the sub-national and sometimes national effects of such measures. A large part of the world's freshwater resources, however, is contained in river basins and groundwater systems that are shared by two or more countries. Sustainable river basin management consequently has to encompass local, regional, national and international scales. This requires coordination over and cooperation between these levels, which is currently compressed into the term 'water governance'. Governance takes into account that a large number of stakeholders in different regimes (the principles, rules and procedures that steer management) contribute to policy and management of a resource. Governance also reflects the increasing importance of essentially non-hierarchical modes of governing, where non-state actors (formal organizations like NGOs, private companies, consumer associations, etc.) participate in the formulation and implementation of public policy. Land use determines run-off generation and the use of irrigation water. Land use is increasingly determined by private sector initiatives at the local scale. This complicates governance because, compared with earlier developments of large-scale irrigation systems, planning institutions at the state level now have less insight into actual water consumption. The water management regime of a basin consequently has to account for the different scales of water management and, within these scales, for both state and non-state actors. The central elements of regimes include the policy setting (the policies and water management strategies), legal setting (national and international laws and agreements), the institutional setting (the formal networks), information management (the information collection and dissemination system), and financing systems (the public and private sources that cover the water management costs). These elements are usually designed for a specific level and are ideally aligned with the other levels. The presentation will go into detail on connecting the different elements of the water management regime between different levels as well as on the overarching governance issues that play a role, and will present opportunities and limitations of the linking options.

  13. Multi-scale environmental accounting: methodological lessons from the application of NAMEA at sub-national levels.

    PubMed

    Dalmazzone, Silvana; La Notte, Alessandra

    2013-11-30

    Extending the application of integrated environmental and economic accounts from the national to the local level of government serves several purposes. They can be used not only as an instrument for communicating on the state of the environment and reporting the results of policies, but also as an operational tool - for setting the objectives and designing policies - if made available to the local authorities who have responsibility over the administration of natural resources, land use and conservation policies. The aim of the paper is to test the feasibility of applying hybrid flow accounts at the intermediate and local government levels. As an illustration, NAMEA for air emissions and wastes is applied to a Region, a Province and a Municipality, thus covering the three nested levels of local government in Italy. The study identifies the main issues raised by multi-scale environmental accounting and provides an applied discussion of feasible solutions. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Mercury in Slovenian soils: High, medium and low sample density geochemical maps

    NASA Astrophysics Data System (ADS)

    Gosar, Mateja; Šajn, Robert; Teršič, Tamara

    2017-04-01

    A regional geochemical survey was conducted over the whole territory of Slovenia (20273 km2). High, medium and low sample density surveys were compared. The high sample density survey comprised the regional geochemical data set supplemented by local high-density sampling data (irregular grid, n=2835). Medium-density soil sampling was performed on a 5 x 5 km grid (n=817) and the low-density geochemical survey was conducted on a 25 x 25 km sampling grid (n=54). The mercury distribution in Slovenian soils was determined with models built from all three data sets. A distinct Hg anomaly in the western part of Slovenia is evident in all three models. It is a consequence of 500 years of mining and ore processing at the second largest mercury mine in the world, the Idrija mine. The determined mercury concentrations revealed an important difference between the western and eastern parts of the country. For the medium-scale geochemical mapping, the median value for western Slovenia (0.151 mg/kg) is almost 2-fold higher than the median value in eastern Slovenia (0.083 mg/kg). Moreover, the Hg median for the western part of Slovenia exceeds the Hg median for European soil by a factor of 4 (Gosar et al., 2016). Comparing these sample density surveys, it was shown that high sampling density allows the identification and characterization of anthropogenic influences on a local scale, while medium- and low-density sampling reveal general trends in the mercury spatial distribution, but are not appropriate for identifying local contamination in industrial regions and urban areas. The resolution of the generated pattern is best when the high-density survey on a regional scale is supplemented with the geochemical data of high-density surveys on a local scale. References: Gosar, M., Šajn, R., Teršič, T. Distribution pattern of mercury in the Slovenian soil: geochemical mapping based on multiple geochemical datasets. Journal of Geochemical Exploration, 2016, 167, 38-48.

  15. Fast evaluation of scaled opposite spin second-order Møller-Plesset correlation energies using auxiliary basis expansions and exploiting sparsity.

    PubMed

    Jung, Yousung; Shao, Yihan; Head-Gordon, Martin

    2007-09-01

    The scaled opposite spin Møller-Plesset method (SOS-MP2) is an economical way of obtaining correlation energies that are computationally cheaper, and yet, in a statistical sense, of higher quality than standard MP2 theory, by introducing one empirical parameter. But SOS-MP2 still has a fourth-order scaling step that makes the method inapplicable to very large molecular systems. We reduce the scaling of SOS-MP2 by exploiting the sparsity of expansion coefficients and local integral matrices, by performing local auxiliary basis expansions for the occupied-virtual product distributions. To exploit sparsity of 3-index local quantities, we use a blocking scheme in which entire zero-rows and columns, for a given third global index, are deleted by comparison against a numerical threshold. This approach minimizes sparse matrix book-keeping overhead, and also provides sufficiently large submatrices after blocking, to allow efficient matrix-matrix multiplies. The resulting algorithm is formally cubic scaling, and requires only moderate computational resources (quadratic memory and disk space) and, in favorable cases, is shown to yield effective quadratic scaling behavior in the size regime we can apply it to. Errors associated with local fitting using the attenuated Coulomb metric and numerical thresholds in the blocking procedure are found to be insignificant in terms of the predicted relative energies. A diverse set of test calculations shows that the size of system where significant computational savings can be achieved depends strongly on the dimensionality of the system, and the extent of localizability of the molecular orbitals. Copyright 2007 Wiley Periodicals, Inc.
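    The blocking idea can be illustrated with dense NumPy arrays: for each value of the third (global) index, rows and columns whose entries all fall below a threshold are deleted before the matrix-matrix multiply. This is only a schematic of the screening step, not the SOS-MP2 integral code.

    ```python
    import numpy as np

    def blocked_multiply(A, B, threshold=1e-10):
        """Contract two 3-index quantities slice by slice, screening sparsity.

        A has shape (n_i, n_mid, n_aux) and B has shape (n_mid, n_j, n_aux).
        For each global third index P, rows/columns whose largest entry is below
        `threshold` are deleted, so only the surviving sub-blocks are multiplied.
        """
        n_i, n_mid, n_aux = A.shape
        n_j = B.shape[1]
        C = np.zeros((n_i, n_j))
        for P in range(n_aux):
            a = A[:, :, P]
            b = B[:, :, P]
            rows = np.where(np.abs(a).max(axis=1) > threshold)[0]   # surviving rows
            cols = np.where(np.abs(b).max(axis=0) > threshold)[0]   # surviving cols
            mid = np.where((np.abs(a).max(axis=0) > threshold) &
                           (np.abs(b).max(axis=1) > threshold))[0]  # shared index
            if rows.size and cols.size and mid.size:
                C[np.ix_(rows, cols)] += a[np.ix_(rows, mid)] @ b[np.ix_(mid, cols)]
        return C
    ```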

  16. Universities scale like cities.

    PubMed

    van Raan, Anthony F J

    2013-01-01

    Recent studies of urban scaling show that important socioeconomic city characteristics such as wealth and innovation capacity exhibit nonlinear, in particular power law, scaling with population size. These nonlinear effects are common to all cities, with similar power law exponents. These findings mean that the larger the city, the more disproportionately it is a place of wealth and innovation. Local properties of cities cause a deviation from the expected behavior as predicted by the power law scaling. In this paper we demonstrate that universities show behavior similar to that of cities in the distribution of the 'gross university income' in terms of total number of citations over 'size' in terms of total number of publications. Moreover, the power law exponents for university scaling are comparable to those for urban scaling. We find that deviations from the expected behavior can indeed be explained by specific local properties of universities, particularly the field-specific composition of a university, and its quality in terms of field-normalized citation impact. By studying both the set of the 500 largest universities worldwide and a specific subset of these 500 universities--the top-100 European universities--we are also able to distinguish between properties of universities with and without selection on one specific local property: the quality of a university in terms of its average field-normalized citation impact. This also reveals an interesting observation concerning the working of a crucial property of networked systems, preferential attachment.
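    A minimal sketch of the scaling analysis, assuming synthetic university data: the exponent comes from an ordinary least-squares fit in log-log space, and the fit residuals play the role of local deviations from the expected scaling behavior.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic universities: publications P and citations C ~ P^beta (illustrative)
    P = rng.lognormal(mean=8.0, sigma=0.6, size=500)
    true_beta = 1.25
    C = 0.5 * P ** true_beta * rng.lognormal(0.0, 0.3, size=500)

    # Power-law exponent from ordinary least squares in log-log space
    logP, logC = np.log(P), np.log(C)
    beta, log_prefactor = np.polyfit(logP, logC, 1)
    print(f"fitted scaling exponent: {beta:.3f}")

    # Local deviation of each university from the expected scaling behaviour
    expected = log_prefactor + beta * logP
    residual = logC - expected           # > 0: above expectation ("over-performing")
    ```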

  17. A New Paradigm For Modeling Fault Zone Inelasticity: A Multiscale Continuum Framework Incorporating Spontaneous Localization and Grain Fragmentation.

    NASA Astrophysics Data System (ADS)

    Elbanna, A. E.

    2015-12-01

    The brittle portion of the crust contains structural features such as faults, jogs, joints, bends and cataclastic zones that span a wide range of length scales. These features may have a profound effect on earthquake nucleation, propagation and arrest. Incorporating these existing features in models, together with the ability to spontaneously generate new ones in response to earthquake loading, is crucial for predicting seismicity patterns, distribution of aftershocks and nucleation sites, earthquake arrest mechanisms, and topological changes in the seismogenic zone structure. Here, we report on our efforts in modeling two important mechanisms contributing to the evolution of fault zone topology: (1) Grain comminution at the submeter scale, and (2) Secondary faulting/plasticity at the scale of a few to hundreds of meters. We use the finite element software Abaqus to model the dynamic rupture. The constitutive response of the fault zone is modeled using the Shear Transformation Zone theory, a non-equilibrium statistical thermodynamic framework for modeling plastic deformation and localization in amorphous materials such as fault gouge. The gouge layer is modeled as a 2D plane strain region with a finite thickness and a heterogeneous distribution of porosity. By coupling the amorphous gouge with the surrounding elastic bulk, the model introduces a set of novel features that go beyond the state of the art. These include: (1) self-consistent rate dependent plasticity with a physically-motivated set of internal variables, (2) non-locality that alleviates mesh dependence of shear band formation, (3) spontaneous evolution of fault roughness and its strike which affects ground motion generation and the local stress fields, and (4) spontaneous evolution of grain size and fault zone fabric.

  18. Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.

    PubMed

    Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A

    2015-12-01

    We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected, learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that under the MLF framework the large-scale data model significantly improves the segmentation over the small-scale model, and (5) indicate that the MLF framework has performance comparable to state-of-the-art multi-atlas segmentation algorithms without using non-local information. Copyright © 2015 Elsevier B.V. All rights reserved.
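    A toy sketch of the learner step, assuming per-voxel features that include a weak initial segmentation and an AdaBoost classifier trained to reproduce multi-atlas labels (here synthetic); scikit-learn is used as a stand-in for the authors' pipeline, and all names and sizes are illustrative.

    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    rng = np.random.default_rng(0)

    # Per-voxel features: intensity, weak initial label, and two context features
    n_voxels = 5000
    X = rng.normal(size=(n_voxels, 4))
    weak_label = (X[:, 0] + 0.5 * rng.normal(size=n_voxels)) > 0
    X[:, 1] = weak_label                       # weak initial segmentation as a feature
    y = (X[:, 0] + 0.3 * X[:, 2]) > 0          # stand-in for the multi-atlas label

    # One locally selected learner; the full framework trains many and fuses them
    learner = AdaBoostClassifier(n_estimators=50)
    learner.fit(X[:4000], y[:4000])
    print("held-out agreement with multi-atlas labels:",
          learner.score(X[4000:], y[4000:]))
    ```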

  19. Local Kernel for Brains Classification in Schizophrenia

    NASA Astrophysics Data System (ADS)

    Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.

    In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning-by-example method is introduced by combining local measurements with a non-linear Support Vector Machine. Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points which are characterized by local region descriptors, namely the Scale Invariant Feature Transform (SIFT). Then, matching is obtained by introducing the local kernel, for which the samples are represented by unordered sets of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed on a set of 54 patients with schizophrenia and 54 normal controls, on which regions of interest (ROIs) have been manually traced by experts. Preliminary results on the Dorso-lateral PreFrontal Cortex (DLPFC) region are promising, since a successful classification rate of up to 75% has been obtained with this technique, and the performance has improved to up to 85% when the subjects have been stratified by sex.
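    The local kernel over unordered feature sets can be illustrated with a simple sum-match kernel used as a precomputed kernel in an SVM; the RBF form, the toy 16-dimensional descriptors, and the omission of the proposed weighting scheme are simplifications, not the paper's exact formulation.

    ```python
    import numpy as np
    from sklearn.svm import SVC


    def match_kernel(set_a, set_b, gamma=1.0 / 16):
        """Sum-match kernel between two unordered sets of local descriptors."""
        d2 = ((set_a[:, None, :] - set_b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2).mean()   # average over all descriptor pairs


    def gram(sets_a, sets_b):
        return np.array([[match_kernel(a, b) for b in sets_b] for a in sets_a])


    # Toy data: each subject is an unordered set of 16-D "SIFT-like" descriptors
    rng = np.random.default_rng(0)
    subjects = [rng.normal(loc=l, size=(rng.integers(20, 40), 16))
                for l in np.repeat([0.0, 1.0], 20)]
    labels = np.repeat([0, 1], 20)

    K_train = gram(subjects, subjects)                 # kernel over unordered sets
    clf = SVC(kernel="precomputed").fit(K_train, labels)
    print("training accuracy:", clf.score(K_train, labels))
    ```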

  20. Beyond scenario planning: projecting the future using models at Wind Cave National Park (USA)

    NASA Astrophysics Data System (ADS)

    King, D. A.; Bachelet, D. M.; Symstad, A. J.

    2011-12-01

    Scenario planning has been used by the National Park Service as a tool for natural resource management planning in the face of climate change. Sets of plausible but divergent future scenarios are constructed from available information and expert opinion and serve as a starting point for deriving climate-smart management strategies. However, qualitative hypotheses about how systems would react to a particular set of conditions assumed from coarse-scale climate projections may lack the scientific rigor expected from a federal agency. In an effort to better assess the range of likely futures at Wind Cave National Park, a project was conceived to 1) generate high resolution historic and future climate time series to identify local weather patterns that may or may not persist, 2) simulate the hydrological cycle in this geologically varied landscape and its response to future climate, 3) project vegetation dynamics and ensuing changes in the biogeochemical cycles given grazing and fire disturbances under new climate conditions, and 4) synthesize and compare results with those from the scenario planning exercise. In this framework, we tested a dynamic global vegetation model against local information on vegetation cover, disturbance history and stream flow to better understand the potential resilience of these ecosystems to climate change. We discuss the tradeoffs between a coarse-scale application of the model showing regional trends with limited ability to project the fine-scale mosaic of vegetation at Wind Cave, and a finer-scale approach that can account for local slope effects on water balance and better assess the vulnerability of landscape facets, but requires more intensive data acquisition. We elaborate on the potential for sharing information between models to mitigate the often-limited treatment of biological feedbacks in the physical representations of soil and atmospheric processes.

  1. Standardized Evaluation for Multi-National Development Programs.

    ERIC Educational Resources Information Center

    Farrell, W. Timothy

    This paper takes the position that standardized evaluation formats and procedures for multi-national development programs are not only desirable but possible in diverse settings. The key is the localization of standard systems, which involves not only the technical manipulation of items and scales, but also the contextual interpretation of…

  2. How to maximally support local and regional biodiversity in applied conservation? Insights from pond management.

    PubMed

    Lemmens, Pieter; Mergeay, Joachim; De Bie, Tom; Van Wichelen, Jeroen; De Meester, Luc; Declerck, Steven A J

    2013-01-01

    Biodiversity and nature values in anthropogenic landscapes often depend on land use practices and management. Evaluations of the association between management and biodiversity remain, however, comparatively scarce, especially in aquatic systems. Furthermore, studies also tend to focus on a limited set of organism groups at the local scale, whereas a multi-group approach at the landscape scale is to be preferred. This study aims to investigate the effect of pond management on the diversity of multiple aquatic organism groups (e.g. phytoplankton, zooplankton, several groups of macro-invertebrates, submerged and emergent macrophytes) at local and regional spatial scales. For this purpose, we performed a field study of 39 shallow man-made ponds representing five different management types. Our results indicate that fish stock management and periodic pond drainage are crucial drivers of pond biodiversity. Furthermore, this study provides insight into how the management of eutrophied ponds can contribute to aquatic biodiversity. A combination of regular draining of ponds with efforts to keep ponds free of fish seems to be highly beneficial for the biodiversity of many groups of aquatic organisms at local and regional scales. Regular draining combined with a stocking of fish at low biomass is also preferable to infrequent draining and lack of fish stock control. These insights are essential for the development of conservation programs that aim at the long-term maintenance of regional biodiversity in pond areas across Europe.

  3. Operationalizing ecological resilience at a landscape scale: A framework and case study from Silicon Valley

    NASA Astrophysics Data System (ADS)

    Beller, E.; Robinson, A.; Grossinger, R.; Grenier, L.; Davenport, A.

    2015-12-01

    Adaptation to climate change requires redesigning our landscapes and watersheds to maximize ecological resilience at large scales and integrated across urban areas, wildlands, and a diversity of ecosystem types. However, it can be difficult for environmental managers and designers to access, interpret, and apply resilience concepts at meaningful scales and across a range of settings. To address this gap, we produced a Landscape Resilience Framework that synthesizes the latest science on the qualitative mechanisms that drive resilience of ecological functions to climate change and other large-scale stressors. The framework is designed to help translate resilience science into actionable ecosystem conservation and restoration recommendations and adaptation strategies by providing a concise but comprehensive list of considerations that will help integrate resilience concepts into urban design, conservation planning, and natural resource management. The framework is composed of seven principles that represent core attributes which determine the resilience of ecological functions within a landscape. These principles are: setting, process, connectivity, redundancy, diversity/complexity, scale, and people. For each principle we identify several key operationalizable components that help illuminate specific recommendations and actions that are likely to contribute to landscape resilience for locally appropriate species, habitats, and biological processes. We are currently using the framework to develop landscape-scale recommendations for ecological resilience in the heavily urbanized Silicon Valley, California, in collaboration with local agencies, companies, and regional experts. The resilience framework is being applied across the valley, including urban, suburban, and wildland areas and terrestrial and aquatic ecosystems. Ultimately, the framework will underpin the development of strategies that can be implemented to bolster ecological resilience from a site to landscape scale.

  4. Dirichlet Process Gaussian-mixture model: An application to localizing coalescing binary neutron stars with gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Del Pozzo, W.; Berry, C. P. L.; Ghosh, A.; Haines, T. S. F.; Singer, L. P.; Vecchio, A.

    2018-06-01

    We reconstruct posterior distributions for the position (sky area and distance) of a simulated set of binary neutron-star gravitational-wave signals observed with Advanced LIGO and Advanced Virgo. We use a Dirichlet Process Gaussian-mixture model, a fully Bayesian non-parametric method that can be used to estimate probability density functions with a flexible set of assumptions. The ability to reliably reconstruct the source position is important for multimessenger astronomy, as recently demonstrated with GW170817. We show that for detector networks comparable to the early operation of Advanced LIGO and Advanced Virgo, typical localization volumes are ~10^4-10^5 Mpc^3, corresponding to ~10^2-10^3 potential host galaxies. The localization volume is a strong function of the network signal-to-noise ratio, scaling roughly as ϱ_net^-6. Fractional localizations improve with the addition of further detectors to the network. Our Dirichlet Process Gaussian-mixture model can be adopted for localizing events detected during future gravitational-wave observing runs, and used to facilitate prompt multimessenger follow-up.
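    As a stand-in for the paper's own implementation, scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior can illustrate the density-estimation step on synthetic posterior samples of (RA, Dec, distance); all numbers below are illustrative.

    ```python
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(42)

    # Stand-in for posterior samples of a source position: (RA, Dec, distance).
    # Two clusters mimic a typical multimodal sky-localization posterior.
    samples = np.vstack([
        rng.normal([1.0, 0.2, 150.0], [0.05, 0.03, 20.0], size=(3000, 3)),
        rng.normal([2.5, -0.4, 220.0], [0.08, 0.04, 30.0], size=(1500, 3)),
    ])

    # Dirichlet-process mixture: the effective number of components is inferred,
    # up to the truncation level n_components.
    dpgmm = BayesianGaussianMixture(
        n_components=10,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        max_iter=500,
    ).fit(samples)

    # Density estimate at arbitrary positions, e.g. candidate host galaxies
    galaxies = np.array([[1.02, 0.21, 155.0], [2.0, 0.0, 300.0]])
    print("relative log posterior density at candidate hosts:",
          dpgmm.score_samples(galaxies))
    ```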

  5. A statistical model to estimate the local vulnerability to severe weather

    NASA Astrophysics Data System (ADS)

    Pardowitz, Tobias

    2018-06-01

    We present a spatial analysis of weather-related fire brigade operations in Berlin. By comparing operation occurrences to insured losses for a set of severe weather events, we demonstrate the representativeness and usefulness of such data in the analysis of weather impacts on local scales. We investigate factors influencing the local rate of operation occurrence. While this rate depends on multiple factors - many of which are not available - we focus on publicly available quantities. These include topographic features, land use information based on satellite data, and information on urban structure based on data from the OpenStreetMap project. After identifying suitable predictors such as housing coverage or local density of the road network, we set up a statistical model to predict the average occurrence frequency of local fire brigade operations. Such a model can be used to determine potential hotspots for weather impacts even in areas or cities where no systematic records are available, and can thus serve as a basis for a broad range of tools or applications in emergency management and planning.
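    One plausible form for such a model is a Poisson regression of operation counts on publicly available predictors; the sketch below uses statsmodels with synthetic data, and the predictor names and coefficients are purely illustrative, not those of the study.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)

    # Illustrative grid cells with publicly available predictors
    n = 400
    housing_coverage = rng.uniform(0, 1, n)      # fraction of built-up area
    road_density = rng.uniform(0, 10, n)         # km of roads per km^2
    tree_cover = rng.uniform(0, 1, n)

    # Synthetic "observed" operation counts following a Poisson law
    lam = np.exp(-1.0 + 1.5 * housing_coverage + 0.1 * road_density + 0.5 * tree_cover)
    counts = rng.poisson(lam)

    X = sm.add_constant(np.column_stack([housing_coverage, road_density, tree_cover]))
    model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    print(model.summary())

    # Predicted average occurrence frequency for a new cell (dense housing, few trees)
    new_cell = sm.add_constant(np.array([[0.8, 6.0, 0.1]]), has_constant="add")
    print("expected operations:", model.predict(new_cell))
    ```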

  6. Tail-scope: Using friends to estimate heavy tails of degree distributions in large-scale complex networks

    NASA Astrophysics Data System (ADS)

    Eom, Young-Ho; Jo, Hang-Hyun

    2015-05-01

    Many complex networks in natural and social phenomena have often been characterized by heavy-tailed degree distributions. However, due to the rapidly growing size of network data and privacy concerns about using these data, it becomes more difficult to analyze complete data sets. Thus, it is crucial to devise effective and efficient estimation methods for heavy tails of degree distributions in large-scale networks using only local information from a small fraction of sampled nodes. Here we propose a tail-scope method based on the local observational bias of the friendship paradox. We show that the tail-scope method outperforms uniform node sampling for estimating heavy tails of degree distributions, while the opposite tendency is observed in the range of small degrees. In order to take advantage of both sampling methods, we devise a hybrid method that successfully recovers the whole range of degree distributions. Our tail-scope method shows how structural heterogeneities of large-scale complex networks can be used to effectively reveal the network structure with only limited local information.
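    The friendship-paradox bias behind the tail-scope idea can be demonstrated on a synthetic scale-free graph: degrees of randomly chosen neighbours probe the heavy tail far better than degrees of uniformly sampled nodes. This is a simplified illustration, not the paper's estimator.

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    G = nx.barabasi_albert_graph(50_000, 3, seed=0)    # heavy-tailed toy network

    # Sample a small fraction of nodes uniformly, using only local information
    sampled = rng.choice(G.number_of_nodes(), size=1000, replace=False)
    uniform_degrees = [G.degree(v) for v in sampled]

    # Friendship paradox: a random neighbour of a random node is biased towards
    # high degree, so neighbour probes reach the tail far more efficiently.
    neighbour_degrees = []
    for v in sampled:
        nbrs = list(G.neighbors(v))
        if nbrs:
            neighbour_degrees.append(G.degree(rng.choice(nbrs)))

    print("max degree seen, uniform sample :", max(uniform_degrees))
    print("max degree seen, neighbour probe:", max(neighbour_degrees))
    # A hybrid estimator combines both views: neighbour samples for the tail,
    # uniform samples for the small-degree part of the distribution.
    ```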

  7. Geology of the Icy Galilean Satellites: Understanding Crustal Processes and Geologic Histories Through the JIMO Mission

    NASA Technical Reports Server (NTRS)

    Figueredo, P. H.; Tanaka, K.; Senske, D.; Greeley, R.

    2003-01-01

    Knowledge of the geology, style and time history of crustal processes on the icy Galilean satellites is necessary to understanding how these bodies formed and evolved. Data from the Galileo mission have provided a basis for detailed geologic and geophysical analysis. Due to constrained downlink, Galileo Solid State Imaging (SSI) data consisted of global coverage at a ~1 km/pixel ground sampling and representative, widely spaced regional maps at ~200 m/pixel. These two data sets provide a general means to extrapolate units identified at higher resolution to lower resolution data. A sampling of key sites at much higher resolution (10s of m/pixel) allows evaluation of processes on local scales. We are currently producing the first global geological map of Europa using Galileo global and regional-scale data. This work is demonstrating the necessity and utility of planet-wide contiguous image coverage at global, regional, and local scales.

  8. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs

    PubMed Central

    2018-01-01

    Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users’ queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with “vanilla” LSH, even when using the same amount of space. PMID:29346410

  9. Nitrate (NO2+NO3-N) in ground water of the Upper Snake River basin, Idaho and western Wyoming, 1991-95

    USGS Publications Warehouse

    Rupert, Michael G.

    1997-01-01

    Factors related to contamination of ground water by dissolved nitrite plus nitrate as nitrogen (NO2+NO3-N) in parts of the upper Snake River Basin were evaluated at regional and local scales. Regional-scale relations between NO2+NO3-N concentrations and depth to first-encountered ground water, land use, precipitation, and soils were evaluated using a geographic information system. Local-scale relations between NO2+NO3-N concentrations and other nutrients, major ions, nitrogen isotopes, stable isotopes, and tritium in five areas with different hydrogeologic settings, land use, and sources of irrigation water were evaluated to determine the factors causing differences in NO2+NO3-N. Data were collected and analyzed as part of the U.S. Geological Survey's National Water-Quality Assessment Program, which began in 1991.

  10. Multiscale 3D Shape Analysis using Spherical Wavelets

    PubMed Central

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2013-01-01

    Shape priors attempt to represent biological variations within a population. When variations are global, Principal Component Analysis (PCA) can be used to learn major modes of variation, even from a limited training set. However, when significant local variations exist, PCA typically cannot represent such variations from a small training set. To address this issue, we present a novel algorithm that learns shape variations from data at multiple scales and locations using spherical wavelets and spectral graph partitioning. Our results show that when the training set is small, our algorithm significantly improves the approximation of shapes in a testing set over PCA, which tends to oversmooth data. PMID:16685992

  11. Multiscale 3D shape analysis using spherical wavelets.

    PubMed

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen R

    2005-01-01

    Shape priors attempt to represent biological variations within a population. When variations are global, Principal Component Analysis (PCA) can be used to learn major modes of variation, even from a limited training set. However, when significant local variations exist, PCA typically cannot represent such variations from a small training set. To address this issue, we present a novel algorithm that learns shape variations from data at multiple scales and locations using spherical wavelets and spectral graph partitioning. Our results show that when the training set is small, our algorithm significantly improves the approximation of shapes in a testing set over PCA, which tends to oversmooth data.

  12. Voxel classification based airway tree segmentation

    NASA Astrophysics Data System (ADS)

    Lo, Pechin; de Bruijne, Marleen

    2008-03-01

    This paper presents a voxel classification based method for segmenting the human airway tree in volumetric computed tomography (CT) images. In contrast to standard methods that use only voxel intensities, our method uses a more complex appearance model based on a set of local image appearance features and Kth nearest neighbor (KNN) classification. The optimal set of features for classification is selected automatically from a large set of features describing the local image structure at several scales. The use of multiple features enables the appearance model to differentiate between airway tree voxels and other voxels of similar intensities in the lung, thus making the segmentation robust to pathologies such as emphysema. The classifier is trained on imperfect segmentations that can easily be obtained using region growing with a manual threshold selection. Experiments show that the proposed method results in a more robust segmentation that can grow into the smaller airway branches without leaking into emphysematous areas, and is able to segment many branches that are not present in the training set.
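    A compact sketch of the classification step, assuming a 2-D toy image instead of volumetric CT, a small fixed set of multi-scale Gaussian-derivative appearance features instead of the automatically selected feature set, and imperfect training labels from a crude segmentation; all names and sizes are illustrative.

    ```python
    import numpy as np
    from scipy import ndimage
    from sklearn.neighbors import KNeighborsClassifier

    def voxel_features(image, scales=(1, 2, 4)):
        """Stack local appearance features (smoothed value, gradient magnitude,
        Laplacian) computed at several scales, one row per voxel."""
        feats = []
        for s in scales:
            feats.append(ndimage.gaussian_filter(image, s))
            feats.append(ndimage.gaussian_gradient_magnitude(image, s))
            feats.append(ndimage.gaussian_laplace(image, s))
        return np.stack([f.ravel() for f in feats], axis=1)

    rng = np.random.default_rng(0)
    image = rng.normal(size=(64, 64))
    image[20:40, 20:40] += 3.0                     # bright "airway-like" structure

    X = voxel_features(image)
    labels = np.zeros(image.shape, dtype=int)
    labels[22:38, 22:38] = 1                       # imperfect training segmentation

    knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels.ravel())
    segmentation = knn.predict(X).reshape(image.shape)
    ```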

  13. Multiscale moment-based technique for object matching and recognition

    NASA Astrophysics Data System (ADS)

    Thio, HweeLi; Chen, Liya; Teoh, Eam-Khwang

    2000-03-01

    A new method is proposed to extract features from an object for matching and recognition. The features proposed are a combination of local and global characteristics -- local characteristics from the 1-D signature function that is defined at each pixel on the object boundary, global characteristics from the moments that are generated from the signature function. The boundary of the object is first extracted, then the signature function is generated by computing the angle between two lines from every point on the boundary as a function of position along the boundary. This signature function is position, scale and rotation invariant (PSRI). The shape of the signature function is then described quantitatively by using moments. The moments of the signature function are the global characteristics of a local feature set. Using moments as the eventual features instead of the signature function reduces the time and complexity of an object matching application. Multiscale moments are implemented to produce several sets of moments that will generate more accurate matching. Basically, the multiscale technique is a coarse-to-fine procedure and makes the proposed method more robust to noise. This method is proposed to match and recognize objects under simple transformations, such as translation, scale changes, rotation and skewing. A simple logo indexing system is implemented to illustrate the performance of the proposed method.
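    A minimal sketch of the signature function and its moments, assuming the angle is measured between the lines to points a fixed number of samples behind and ahead of each boundary point; the boundary, span and moment orders are illustrative choices, not the authors' exact definitions.

    ```python
    import numpy as np


    def signature_function(boundary, span=5):
        """Angle at each boundary point between the lines to the points `span`
        samples behind and ahead of it (1-D signature along the boundary)."""
        n = len(boundary)
        angles = np.empty(n)
        for i in range(n):
            back = boundary[(i - span) % n] - boundary[i]
            ahead = boundary[(i + span) % n] - boundary[i]
            cos_a = np.dot(back, ahead) / (
                np.linalg.norm(back) * np.linalg.norm(ahead) + 1e-12)
            angles[i] = np.arccos(np.clip(cos_a, -1.0, 1.0))
        return angles


    def signature_moments(sig, orders=(1, 2, 3, 4)):
        """Global central moments of the signature, used as the eventual features."""
        mean = sig.mean()
        return np.array([((sig - mean) ** k).mean() for k in orders])


    # Toy boundary: a unit square sampled at 200 points (illustrative)
    t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    circle = np.c_[np.cos(t), np.sin(t)]
    square = circle / np.maximum(np.abs(circle[:, 0]), np.abs(circle[:, 1]))[:, None]

    print("signature moments:", signature_moments(signature_function(square)))
    ```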

  14. Classification of event location using matched filters via on-floor accelerometers

    NASA Astrophysics Data System (ADS)

    Woolard, Americo G.; Malladi, V. V. N. Sriram; Alajlouni, Sa'ed; Tarazaga, Pablo A.

    2017-04-01

    Recent years have shown prolific advancements in smart infrastructures, allowing buildings of the modern world to interact with their occupants. One of the sought-after attributes of smart buildings is the ability to provide unobtrusive, indoor localization of occupants. The ability to locate occupants indoors can provide a broad range of benefits in areas such as security, emergency response, and resource management. Recent research has shown promising results in occupant building localization, although there is still significant room for improvement. This study presents a passive, small-scale localization system using accelerometers placed around the edges of a small area in an active building environment. The area is discretized into a grid of small squares, and vibration measurements are processed using a pattern matching approach that estimates the location of the source. Vibration measurements are produced with ball-drops, hammer-strikes, and footsteps as the sources of the floor excitation. The developed approach uses matched filters based on a reference data set, and the location is classified using a nearest-neighbor search. This approach detects the appropriate location of impact-like sources, i.e. the ball-drops and hammer-strikes, with 100% accuracy. However, this accuracy reduces to 56% for footsteps, with the average localization results being within 0.6 m (α = 0.05) of the true source location. While requiring a reference data set can make this method difficult to implement on a large scale, it may be used to provide accurate localization abilities in areas where training data is readily obtainable. This exploratory work seeks to examine the feasibility of the matched filter and nearest neighbor search approach for footstep and event localization in a small, instrumented area within a multi-story building.
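    The pattern-matching step can be sketched as a normalized matched filter against one reference template per grid square, followed by a nearest-neighbour (best-match) decision; the signals below are synthetic stand-ins, not the measured floor vibrations.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def matched_filter_score(signal, template):
        """Peak of the normalized cross-correlation between signal and template."""
        s = (signal - signal.mean()) / (signal.std() + 1e-12)
        t = (template - template.mean()) / (template.std() + 1e-12)
        return np.max(np.correlate(s, t, mode="full")) / len(t)

    # Reference data set: one template waveform per grid square (here 3 squares)
    templates = {loc: rng.normal(size=500) for loc in ["A1", "A2", "B1"]}

    # A new event: the "A2" template delayed, scaled and corrupted by noise
    event = 0.8 * np.roll(templates["A2"], 40) + 0.3 * rng.normal(size=500)

    # Nearest-neighbour search over templates = classify the event's grid square
    scores = {loc: matched_filter_score(event, tpl) for loc, tpl in templates.items()}
    print("estimated location:", max(scores, key=scores.get), scores)
    ```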

  15. Towards the chemometric dissection of peptide - HLA-A*0201 binding affinity: comparison of local and global QSAR models

    NASA Astrophysics Data System (ADS)

    Doytchinova, Irini A.; Walshe, Valerie; Borrow, Persephone; Flower, Darren R.

    2005-03-01

    The affinities of 177 nonameric peptides binding to the HLA-A*0201 molecule were measured using a FACS-based MHC stabilisation assay and analysed using chemometrics. Their structures were described by global and local descriptors, QSAR models were derived by genetic algorithm, stepwise regression and PLS. The global molecular descriptors included molecular connectivity χ indices, κ shape indices, E-state indices, molecular properties like molecular weight and log P, and three-dimensional descriptors like polarizability, surface area and volume. The local descriptors were of two types. The first used a binary string to indicate the presence of each amino acid type at each position of the peptide. The second was also position-dependent but used five z-scales to describe the main physicochemical properties of the amino acids forming the peptides. The models were developed using a representative training set of 131 peptides and validated using an independent test set of 46 peptides. It was found that the global descriptors could not explain the variance in the training set nor predict the affinities of the test set accurately. Both types of local descriptors gave QSAR models with better explained variance and predictive ability. The results suggest that, in their interactions with the MHC molecule, the peptide acts as a complicated ensemble of multiple amino acids mutually potentiating each other.
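    A sketch of the first type of local descriptor (position-dependent binary encoding) combined with PLS regression; the peptides and affinities below are synthetic, so the example only illustrates the workflow, not the reported QSAR results.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def one_hot_peptide(peptide):
        """Position-dependent binary descriptor: 9 positions x 20 amino acids."""
        x = np.zeros((9, 20))
        for pos, aa in enumerate(peptide):
            x[pos, AMINO_ACIDS.index(aa)] = 1.0
        return x.ravel()

    rng = np.random.default_rng(0)
    peptides = ["".join(rng.choice(list(AMINO_ACIDS), size=9)) for _ in range(131)]
    X = np.array([one_hot_peptide(p) for p in peptides])

    # Synthetic affinities: anchor positions 2 and 9 dominate (illustrative only)
    weights = rng.normal(size=180) * np.repeat([0, 1, 0, 0, 0, 0, 0, 0, 1], 20)
    y = X @ weights + 0.1 * rng.normal(size=len(peptides))

    pls = PLSRegression(n_components=5).fit(X, y)
    print("explained variance on the training set:", pls.score(X, y))
    ```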

  16. A self-organizing Lagrangian particle method for adaptive-resolution advection-diffusion simulations

    NASA Astrophysics Data System (ADS)

    Reboux, Sylvain; Schrader, Birte; Sbalzarini, Ivo F.

    2012-05-01

    We present a novel adaptive-resolution particle method for continuous parabolic problems. In this method, particles self-organize in order to adapt to local resolution requirements. This is achieved by pseudo forces that are designed so as to guarantee that the solution is always well sampled and that no holes or clusters develop in the particle distribution. The particle sizes are locally adapted to the length scale of the solution. Differential operators are consistently evaluated on the evolving set of irregularly distributed particles of varying sizes using discretization-corrected operators. The method does not rely on any global transforms or mapping functions. After presenting the method and its error analysis, we demonstrate its capabilities and limitations on a set of two- and three-dimensional benchmark problems. These include advection-diffusion, the Burgers equation, the Buckley-Leverett five-spot problem, and curvature-driven level-set surface refinement.

  17. Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2015-12-01

    In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.

  18. Predicting habitat suitability for rare plants at local spatial scales using a species distribution model.

    PubMed

    Gogol-Prokurat, Melanie

    2011-01-01

    If species distribution models (SDMs) can rank habitat suitability at a local scale, they may be a valuable conservation planning tool for rare, patchily distributed species. This study assessed the ability of Maxent, an SDM reported to be appropriate for modeling rare species, to rank habitat suitability at a local scale for four edaphic endemic rare plants of gabbroic soils in El Dorado County, California, and examined the effects of grain size, spatial extent, and fine-grain environmental predictors on local-scale model accuracy. Models were developed using species occurrence data mapped on public lands and were evaluated using an independent data set of presence and absence locations on surrounding lands, mimicking a typical conservation-planning scenario that prioritizes potential habitat on unsurveyed lands surrounding known occurrences. Maxent produced models that were successful at discriminating between suitable and unsuitable habitat at the local scale for all four species, and predicted habitat suitability values were proportional to likelihood of occurrence or population abundance for three of four species. Unfortunately, models with the best discrimination (i.e., AUC) were not always the most useful for ranking habitat suitability. The use of independent test data showed metrics that were valuable for evaluating which variables and model choices (e.g., grain, extent) to use in guiding habitat prioritization for conservation of these species. A goodness-of-fit test was used to determine whether habitat suitability values ranked habitat suitability on a continuous scale. If they did not, a minimum acceptable error predicted area criterion was used to determine the threshold for classifying habitat as suitable or unsuitable. I found a trade-off between model extent and the use of fine-grain environmental variables: goodness of fit was improved at larger extents, and fine-grain environmental variables improved local-scale accuracy, but fine-grain variables were not available at large extents. No single model met all habitat prioritization criteria, and the best models were overlaid to identify consensus areas of high suitability. Although the four species modeled here co-occur and are treated together for conservation planning, model accuracy and predicted suitable areas varied among species.

  19. Continental-scale, data-driven predictive assessment of eliminating the vector-borne disease, lymphatic filariasis, in sub-Saharan Africa by 2020.

    PubMed

    Michael, Edwin; Singh, Brajendra K; Mayala, Benjamin K; Smith, Morgan E; Hampton, Scott; Nabrzyski, Jaroslaw

    2017-09-27

    There are growing demands for predicting the prospects of achieving the global elimination of neglected tropical diseases as a result of the institution of large-scale nation-wide intervention programs by the WHO-set target year of 2020. Such predictions will be uncertain due to the impacts that spatial heterogeneity and scaling effects will have on parasite transmission processes, which will introduce significant aggregation errors into any attempt aiming to predict the outcomes of interventions at the broader spatial levels relevant to policy making. We describe a modeling platform that addresses this problem of upscaling from local settings to facilitate predictions at regional levels by the discovery and use of locality-specific transmission models, and we illustrate the utility of using this approach to evaluate the prospects for eliminating the vector-borne disease, lymphatic filariasis (LF), in sub-Saharan Africa by the WHO target year of 2020 using currently applied or newly proposed intervention strategies. METHODS AND RESULTS: We show how a computational platform that couples site-specific data discovery with model fitting and calibration can allow both learning of local LF transmission models and simulations of the impact of interventions that take a fuller account of the fine-scale heterogeneous transmission of this parasitic disease within endemic countries. We highlight how such a spatially hierarchical modeling tool that incorporates actual data regarding the roll-out of national drug treatment programs and spatial variability in infection patterns into the modeling process can produce more realistic predictions of timelines to LF elimination at coarse spatial scales, ranging from district to country to continental levels. Our results show that when locally applicable extinction thresholds are used, only three countries are likely to meet the goal of LF elimination by 2020 using currently applied mass drug treatments, and that switching to more intensive drug regimens, increasing the frequency of treatments, or switching to new triple drug regimens will be required if LF elimination is to be accelerated in Africa. The proportion of countries that would meet the goal of eliminating LF by 2020 may, however, reach up to 24/36 if the WHO 1% microfilaremia prevalence threshold is used and sequential mass drug deliveries are applied in countries. We have developed and applied a data-driven spatially hierarchical computational platform that uses the discovery of locally applicable transmission models in order to predict the prospects for eliminating the macroparasitic disease, LF, at the coarser country level in sub-Saharan Africa. We show that fine-scale spatial heterogeneity in local parasite transmission and extinction dynamics, as well as the exact nature of intervention roll-outs in countries, will impact the timelines to achieving national LF elimination on this continent.

  20. Global Swath and Gridded Data Tiling

    NASA Technical Reports Server (NTRS)

    Thompson, Charles K.

    2012-01-01

    This software generates cylindrically projected "tiles" of swath-based or gridded satellite data for the purpose of dynamically producing high-resolution global images covering various time periods, scaling ranges, and colors. It reconstructs a global image given a set of tiles covering a particular time range, scaling values, and a color table. The program is configurable in terms of tile size, spatial resolution, format of input data, location of input data (local or distributed), number of processes run in parallel, and data conditioning.
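
    A minimal sketch of the tiling idea follows: cut a cylindrically projected global grid into fixed-size tiles keyed by row/column index, then reassemble the global image from the tile set. The tile size, grid shape, and function names are arbitrary stand-ins, not the software's actual configuration.

      import numpy as np

      TILE = 256  # tile edge in pixels (configurable in the real software; arbitrary here)

      def make_tiles(grid):
          # cut a global grid (rows = latitude, cols = longitude) into TILE x TILE tiles
          ny, nx = grid.shape
          return {(i // TILE, j // TILE): grid[i:i+TILE, j:j+TILE]
                  for i in range(0, ny, TILE) for j in range(0, nx, TILE)}

      def assemble(tiles, ny, nx):
          # reconstruct the global image from a set of tiles
          out = np.full((ny, nx), np.nan)
          for (ti, tj), t in tiles.items():
              out[ti*TILE:ti*TILE+t.shape[0], tj*TILE:tj*TILE+t.shape[1]] = t
          return out

      grid = np.random.rand(1024, 2048)            # stand-in for one time slice of gridded data
      tiles = make_tiles(grid)
      assert np.allclose(assemble(tiles, *grid.shape), grid)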

  1. Hydrology or biology? Modeling simplistic physical constraints on lake carbon biogeochemistry to identify when and where biology is likely to matter

    NASA Astrophysics Data System (ADS)

    Jones, S.; Zwart, J. A.; Solomon, C.; Kelly, P. T.

    2017-12-01

    Current efforts to scale lake carbon biogeochemistry rely heavily on empirical observations and rarely consider physical or biological inter-lake heterogeneity that is likely to regulate terrestrial dissolved organic carbon (tDOC) decomposition in lakes. This may in part result from a traditional focus of lake ecologists on in-lake biological processes OR physical-chemical pattern across lake regions, rather than on process AND pattern across scales. To explore the relative importance of local biological processes and physical processes driven by lake hydrologic setting, we created a simple, analytical model of tDOC decomposition in lakes that focuses on the regulating roles of lake size and catchment hydrologic export. Our simple model can generally recreate both local- and regional-scale patterns in tDOC concentration and decomposition. We also see that variation in lake hydrologic setting, including the importance of evaporation as a hydrologic export, generates significant, emergent variation in tDOC decomposition at a given hydrologic residence time, and creates patterns that have been historically attributed to variation in tDOC quality. Comparing predictions of this 'biologically null model' to field observations and more biologically complex models could indicate when and where biology is likely to matter most.
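
    To make the roles of residence time and evaporative export concrete, here is a minimal well-mixed, first-order-decay mass balance. It is not the authors' analytical model, and all rates and volumes below are illustrative only.

      def steady_state_tdoc(c_in, q_in, volume, k, evap=0.0):
          # c_in: inflow tDOC (g C m^-3); q_in: inflow (m^3 yr^-1); k: decay rate (yr^-1)
          # evap: evaporative water loss (m^3 yr^-1), which exports water but no carbon
          # mass balance at steady state: q_in*c_in = (q_in - evap)*c + k*volume*c
          q_out = q_in - evap
          c = q_in * c_in / (q_out + k * volume)
          frac_decomposed = k * volume * c / (q_in * c_in)
          return c, frac_decomposed

      # a small lake with a one-year residence time and 20% evaporative loss (illustrative numbers)
      c, f = steady_state_tdoc(c_in=10.0, q_in=1e6, volume=1e6, k=0.2, evap=2e5)
      print(f"in-lake tDOC = {c:.2f} g C m^-3, fraction decomposed = {f:.2f}")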

  2. Prediction of Severe Disease in Children with Diarrhea in a Resource-Limited Setting

    PubMed Central

    Levine, Adam C.; Munyaneza, Richard M.; Glavis-Bloom, Justin; Redditt, Vanessa; Cockrell, Hannah C.; Kalimba, Bantu; Kabemba, Valentin; Musavuli, Juvenal; Gakwerere, Mathias; Umurungi, Jean Paul de Charles; Shah, Sachita P.; Drobac, Peter C.

    2013-01-01

    Objective: To investigate the accuracy of three clinical scales for predicting severe disease (severe dehydration or death) in children with diarrhea in a resource-limited setting. Methods: Participants included 178 children admitted to three Rwandan hospitals with diarrhea. A local physician or nurse assessed each child on arrival using the World Health Organization (WHO) severe dehydration scale and the Centers for Disease Control (CDC) scale. Children were weighed on arrival and daily until they achieved a stable weight, with a 10% increase between admission weight and stable weight considered severe dehydration. The Clinical Dehydration Scale was then constructed post-hoc using the data collected for the other two scales. Receiver Operating Characteristic (ROC) curves were constructed for each scale compared to the composite outcome of severe dehydration or death. Results: The WHO severe dehydration scale, CDC scale, and Clinical Dehydration Scale had areas under the ROC curves (AUCs) of 0.72 (95% CI 0.60, 0.85), 0.73 (95% CI 0.62, 0.84), and 0.80 (95% CI 0.71, 0.89), respectively, in the full cohort. Only the Clinical Dehydration Scale was a significant predictor of severe disease when used in infants, with an AUC of 0.77 (95% CI 0.61, 0.93), and when used by nurses, with an AUC of 0.78 (95% CI 0.63, 0.93). Conclusions: While all three scales were moderate predictors of severe disease in children with diarrhea, scale accuracy varied based on provider training and age of the child. Future research should focus on developing or validating clinical tools that can be used accurately by nurses and other less-skilled providers to assess all children with diarrhea in resource-limited settings. PMID:24349271
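
    The AUC-with-confidence-interval comparison reported above can be sketched as follows, using a percentile bootstrap on hypothetical data. This is illustrative only and not the study's exact statistical procedure.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      def auc_with_ci(y, score, n_boot=2000, seed=0):
          # AUC with a percentile-bootstrap 95% CI
          rng = np.random.default_rng(seed)
          auc = roc_auc_score(y, score)
          boots, n = [], len(y)
          while len(boots) < n_boot:
              idx = rng.integers(0, n, n)
              if y[idx].min() == y[idx].max():      # resample must contain both outcomes
                  continue
              boots.append(roc_auc_score(y[idx], score[idx]))
          lo, hi = np.percentile(boots, [2.5, 97.5])
          return auc, lo, hi

      # toy data: 1 = severe disease, score = points on a dehydration scale
      rng = np.random.default_rng(3)
      y = rng.integers(0, 2, 178)
      score = y * 2 + rng.normal(0, 2, 178)
      print(auc_with_ci(y, score))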

  3. Approximate scaling properties of RNA free energy landscapes

    NASA Technical Reports Server (NTRS)

    Baskaran, S.; Stadler, P. F.; Schuster, P.

    1996-01-01

    RNA free energy landscapes are analysed by means of "time-series" that are obtained from random walks restricted to excursion sets. The power spectra, the scaling of the jump size distribution, and the scaling of the curve length measured with different yard stick lengths are used to describe the structure of these "time series". Although they are stationary by construction, we find that their local behavior is consistent with both AR(1) and self-affine processes. Random walks confined to excursion sets (i.e., with the restriction that the fitness value exceeds a certain threshold at each step) exhibit essentially the same statistics as free random walks. We find that an AR(1) time series is in general approximately self-affine on timescales up to approximately the correlation length. We present an empirical relation between the correlation parameter rho of the AR(1) model and the exponents characterizing self-affinity.
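
    The AR(1) description above can be made concrete with a short simulation: estimate the correlation parameter rho from the lag-1 autocorrelation and convert it to an e-folding correlation length, the scale below which the abstract reports approximately self-affine behavior. This is a generic sketch, not the authors' landscape construction.

      import numpy as np

      def ar1(n, rho, seed=0):
          # simulate a stationary AR(1) series x_t = rho*x_{t-1} + eps_t
          rng = np.random.default_rng(seed)
          x, eps = np.zeros(n), rng.standard_normal(n)
          for t in range(1, n):
              x[t] = rho * x[t-1] + eps[t]
          return x

      x = ar1(100_000, rho=0.9)
      rho_hat = np.corrcoef(x[:-1], x[1:])[0, 1]    # lag-1 autocorrelation estimates rho
      corr_len = -1.0 / np.log(rho_hat)             # e-folding correlation length in steps
      print(f"rho_hat = {rho_hat:.3f}, correlation length ~ {corr_len:.1f} steps")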

  4. Simple and Multiple Endmember Mixture Analysis in the Boreal Forest

    NASA Technical Reports Server (NTRS)

    Roberts, Dar A.; Gamon, John A.; Qiu, Hong-Lie

    2000-01-01

    A key scientific objective of the original Boreal Ecosystem-Atmospheric Study (BOREAS) field campaign (1993-1996) was to obtain the baseline data required for modeling and predicting fluxes of energy, mass, and trace gases in the boreal forest biome. These data sets are necessary to determine the sensitivity of the boreal forest biome to potential climatic changes and potential biophysical feedbacks on climate. A considerable volume of remotely sensed and supporting field data were acquired by numerous researchers to meet this objective. By design, remote sensing and modeling were considered critical components for scaling efforts, extending point measurements from flux towers and field sites over larger spatial and longer temporal scales. A major focus of the BOREAS Follow-on program was concerned with integrating the diverse remotely sensed and ground-based data sets to address specific questions such as carbon dynamics at local to regional scales.

  5. Integrating multisource land use and land cover data

    USGS Publications Warehouse

    Wright, Bruce E.; Tait, Mike; Lins, K.F.; Crawford, J.S.; Benjamin, S.P.; Brown, Jesslyn F.

    1995-01-01

    As part of the U.S. Geological Survey's (USGS) land use and land cover (LULC) program, the USGS in cooperation with the Environmental Systems Research Institute (ESRI) is collecting and integrating LULC data for a standard USGS 1:100,000-scale product. The LULC data collection techniques include interpreting spectrally clustered Landsat Thematic Mapper (TM) images; interpreting 1-meter resolution digital panchromatic orthophoto images; and, for comparison, aggregating locally available large-scale digital data of urban areas. The area selected is the Vancouver, WA-OR quadrangle, which has a mix of urban, rural agriculture, and forest land. Anticipated products include an integrated LULC prototype data set in a standard classification scheme referenced to the USGS digital line graph (DLG) data of the area and prototype software to develop digital LULC data sets. This project will evaluate a draft standard LULC classification system developed by the USGS for use with various source material and collection techniques. Federal, State, and local governments, and private sector groups will have an opportunity to evaluate the resulting prototype software and data sets and to provide recommendations. It is anticipated that this joint research endeavor will increase future collaboration among interested organizations, public and private, for LULC data collection using common standards and tools.

  6. A Multi-Scale Settlement Matching Algorithm Based on ARG

    NASA Astrophysics Data System (ADS)

    Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia

    2016-06-01

    Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating, and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. Then, it ascertains candidate sets by merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  7. Amplitude Scaling of Active Separation Control

    NASA Technical Reports Server (NTRS)

    Stalnov, Oksana; Seifert, Avraham

    2010-01-01

    Three existing and two new excitation magnitude scaling options for active separation control at Reynolds numbers below one million are examined. The physical background for the scaling options is discussed and their relevance is evaluated using two different sets of experimental data. For F+ approx. 1, 2D excitation: (a) the traditional VR and C(mu) do not scale the data; (b) only Re*C(mu) is valid, and this conclusion is also limited to positive lift increment. For F+ > 10, 3D excitation, the Re-corrected C(mu), the St-corrected velocity ratio, and the vorticity flux coefficient all scale the amplitudes equally well. Therefore, the Reynolds-weighted C(mu) is the preferred choice, relevant to both excitation modes. Incidence is also considered, using Ue from the local Cp.

  8. Daubechies wavelets for linear scaling density functional theory.

    PubMed

    Mohr, Stephan; Ratcliff, Laura E; Boulanger, Paul; Genovese, Luigi; Caliste, Damien; Deutsch, Thierry; Goedecker, Stefan

    2014-05-28

    We demonstrate that Daubechies wavelets can be used to construct a minimal set of optimized localized adaptively contracted basis functions in which the Kohn-Sham orbitals can be represented with an arbitrarily high, controllable precision. Ground state energies and the forces acting on the ions can be calculated in this basis with the same accuracy as if they were calculated directly in a Daubechies wavelets basis, provided that the amplitude of these adaptively contracted basis functions is sufficiently small on the surface of the localization region, which is guaranteed by the optimization procedure described in this work. This approach reduces the computational costs of density functional theory calculations, and can be combined with sparse matrix algebra to obtain linear scaling with respect to the number of electrons in the system. Calculations on systems of 10,000 atoms or more thus become feasible in a systematic basis set with moderate computational resources. Further computational savings can be achieved by exploiting the similarity of the adaptively contracted basis functions for closely related environments, e.g., in geometry optimizations or combined calculations of neutral and charged systems.
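
    As a loose analogy to the controllable-precision representation described above (not the adaptively contracted basis functions or the linear-scaling DFT machinery themselves), the sketch below expands a localized 1-D function in a Daubechies basis with PyWavelets and shows that the reconstruction error is controlled systematically by the fraction of coefficients kept. Test function, wavelet order, and thresholds are arbitrary choices.

      import numpy as np
      import pywt

      x = np.linspace(0, 1, 4096)
      f = np.exp(-((x - 0.5) / 0.05) ** 2) * np.sin(40 * np.pi * x)   # localized, oscillatory test function

      coeffs = pywt.wavedec(f, 'db8', level=8)                        # Daubechies-8 decomposition
      arr, slices = pywt.coeffs_to_array(coeffs)

      for frac in (0.05, 0.02, 0.01):                                 # keep only the largest coefficients
          thresh = np.quantile(np.abs(arr), 1 - frac)
          kept = np.where(np.abs(arr) >= thresh, arr, 0.0)
          f_rec = pywt.waverec(pywt.array_to_coeffs(kept, slices, output_format='wavedec'), 'db8')
          err = np.max(np.abs(f_rec[:len(f)] - f))
          print(f"{frac:.0%} of coefficients kept -> max error {err:.2e}")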

  9. Mapping the microvascular and the associated absolute values of oxy-hemoglobin concentration through turbid media via local off-set diffuse optical imaging

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Klämpfl, Florian; Stelzle, Florian; Schmidt, Michael

    2014-11-01

    Micron-scale imaging resolution, with the superficial response suppressed, has not yet been achieved by diffuse optical imaging (DOI). In this work, we report on a new DOI approach with a local off-set alignment that subverts the common boundary conditions of the modified Beer-Lambert law (MBLL). It can resolve a superficial target at the micron scale beneath a turbid medium. To validate both major breakthroughs, this system was used to recover a subsurface microvascular-mimicking structure under a skin-equivalent phantom. The microvascular structure was filled with oxy-hemoglobin solution at varying concentrations to distinguish the absolute values of CtRHb and CtHbO2. Experimental results confirmed the feasibility of recovering a target vessel of 50 µm in diameter and of grading the absolute oxy-hemoglobin concentrations from 10 g/L to 50 g/L. Ultimately, this approach could evolve into a non-invasive imaging system to map the microvascular pattern and the associated oximetry under human skin in vivo.
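
    The two-wavelength MBLL inversion that underlies recovering oxy- and deoxy-hemoglobin values can be sketched as below. The extinction coefficients, pathlength, and differential pathlength factors are placeholders rather than tabulated constants, and this is not the authors' off-set reconstruction itself.

      import numpy as np

      # Modified Beer-Lambert law: dA(lambda) = sum_i eps_i(lambda) * dC_i * L * DPF(lambda)
      # eps and dpf below are placeholder values, not tabulated spectra.
      eps = np.array([[1.0, 3.0],        # wavelength 1: [eps_HbO2, eps_HHb]
                      [2.5, 0.8]])       # wavelength 2: [eps_HbO2, eps_HHb]
      L, dpf = 3.0, np.array([6.0, 5.5]) # source-detector separation (cm), differential pathlength factors

      def concentrations_from_attenuation(dA):
          # invert the two-wavelength MBLL system for [dC_HbO2, dC_HHb]
          A = eps * (L * dpf)[:, None]   # pathlength-weighted extinction matrix
          return np.linalg.solve(A, dA)

      dA = np.array([0.12, 0.08])        # attenuation changes at the two wavelengths (toy values)
      dC_hbo2, dC_hhb = concentrations_from_attenuation(dA)
      print(dC_hbo2, dC_hhb)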

  10. Variability of temperature sensitivity of extreme precipitation from a regional-to-local impact scale perspective

    NASA Astrophysics Data System (ADS)

    Schroeer, K.; Kirchengast, G.

    2016-12-01

    Relating precipitation intensity to temperature is a popular approach to assess potential changes of extreme events in a warming climate. Potential increases in extreme-rainfall-induced hazards, such as flash flooding, serve as motivation. It has not been addressed whether the temperature-precipitation scaling approach is meaningful on a regional to local level, where the risk of climate and weather impact is dealt with. Substantial variability of temperature sensitivity of extreme precipitation has been found that results from differing methodological assumptions as well as from varying climatological settings of the study domains. Two aspects are consistently found: First, temperature sensitivities beyond the expected consistency with the Clausius-Clapeyron (CC) equation are a feature of short-duration, convective, sub-daily to sub-hourly high-percentile rainfall intensities at mid-latitudes. Second, exponential growth ceases or reverts at threshold temperatures that vary from region to region, as moisture supply becomes limited. Analyses of pooled data, or of single or dispersed stations over large areas, make it difficult to estimate the consequences in terms of local climate risk. In this study we test the meaningfulness of the scaling approach from an impact-scale perspective. Temperature sensitivities are assessed using quantile regression on hourly and sub-hourly precipitation data from 189 stations in the Austrian south-eastern Alpine region. The observed scaling rates vary substantially, but distinct regional and seasonal patterns emerge. High sensitivity exceeding CC-scaling is seen on the 10-minute scale more than on the hourly scale, in storms shorter than 2 hours in duration, and in shoulder seasons, but it is not necessarily a significant feature of the extremes. To be impact-relevant, change rates need to be linked to absolute rainfall amounts. We show that high scaling rates occur in lower temperature conditions and thus have a smaller effect on absolute precipitation intensities. While reporting of mere percentage numbers can be misleading, scaling studies can add value to process understanding on the local scale, if the factors that influence scaling rates are considered from both a methodological and a physical perspective.
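
    A minimal version of the quantile-regression scaling estimate follows: regress log intensity on temperature at a high quantile and convert the slope to a percentage change per degree. The data are synthetic, constructed to sit near the CC rate, and this is not the authors' station analysis.

      import numpy as np
      import statsmodels.api as sm

      # synthetic hourly intensities that grow ~7% per degC at the 95th percentile (illustrative only)
      rng = np.random.default_rng(7)
      temp = rng.uniform(0, 25, 5000)
      intensity = np.exp(0.068 * temp) * rng.lognormal(mean=0.0, sigma=0.6, size=temp.size)

      X = sm.add_constant(temp)
      res = sm.QuantReg(np.log(intensity), X).fit(q=0.95)
      beta = res.params[1]
      print(f"scaling rate at the 95th percentile: {100 * (np.exp(beta) - 1):.1f} % per degC "
            f"(Clausius-Clapeyron is ~6-7 % per degC)")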

  11. Regional- and local-scale variations in benthic megafaunal composition at the Arctic deep-sea observatory HAUSGARTEN

    NASA Astrophysics Data System (ADS)

    Taylor, J.; Krumpen, T.; Soltwedel, T.; Gutt, J.; Bergmann, M.

    2016-02-01

    The Long-Term Ecological Research (LTER) observatory HAUSGARTEN, in the eastern Fram Strait, provides us with a valuable opportunity to study the composition of benthic megafaunal communities through the analysis of seafloor photographs. This, in combination with extensive sampling campaigns, which have yielded a unique data set on faunal, bacterial, biogeochemical and geological properties, as well as on hydrography and sedimentation patterns, allows us to address the question of why variations in megafaunal community structure and species distribution exist at regional (60-110 km) and local (<4 km) scales. Here, we present first results from the latitudinal HAUSGARTEN gradient, consisting of three different stations (N3, HG-IV, S3) between 78°30‧N and 79°45‧N (2351-2788 m depth), obtained via the analysis of images acquired by a towed camera (OFOS-Ocean Floor Observation System) in 2011. We assess variability in megafaunal densities, species composition and diversity as well as biotic and biogenic habitat features, which may cause the patterns observed. While there were significant regional-scale differences in megafaunal composition and densities between the stations (N3=26.74±0.63; HG-IV=11.21±0.25; S3=18.34±0.39 individuals m⁻²), significant local differences were only found at HG-IV. Regional-scale variations may be due to the significant differences in ice coverage at each station as well as the different quantities of protein available, whereas local-scale differences at HG-IV may be a result of variation in bottom topography or factors not yet identified.

  12. An efficient linear-scaling CCSD(T) method based on local natural orbitals.

    PubMed

    Rolik, Zoltán; Szegedy, Lóránt; Ladjánszki, István; Ladóczki, Bence; Kállay, Mihály

    2013-09-07

    An improved version of our general-order local coupled-cluster (CC) approach [Z. Rolik and M. Kállay, J. Chem. Phys. 135, 104111 (2011)] and its efficient implementation at the CC singles and doubles with perturbative triples [CCSD(T)] level is presented. The method combines the cluster-in-molecule approach of Li and co-workers [J. Chem. Phys. 131, 114109 (2009)] with frozen natural orbital (NO) techniques. To break down the unfavorable fifth-power scaling of our original approach, a two-level domain construction algorithm has been developed. First, an extended domain of localized molecular orbitals (LMOs) is assembled based on the spatial distance of the orbitals. The necessary integrals are evaluated and transformed in these domains invoking the density fitting approximation. In the second step, for each occupied LMO of the extended domain a local subspace of occupied and virtual orbitals is constructed including approximate second-order Møller-Plesset NOs. The CC equations are solved and the perturbative corrections are calculated in the local subspace for each occupied LMO using a highly-efficient CCSD(T) code, which was optimized for the typical sizes of the local subspaces. The total correlation energy is evaluated as the sum of the individual contributions. The computation time of our approach scales linearly with the system size, while its memory and disk space requirements are independent thereof. Test calculations demonstrate that currently our method is one of the most efficient local CCSD(T) approaches and can be routinely applied to molecules of up to 100 atoms with reasonable basis sets.

  13. FLARE: A New User Facility for Studies of Multiple-Scale Physics of Magnetic Reconnection and Related Phenomena Through in-situ Measurements

    NASA Astrophysics Data System (ADS)

    Ji, Hantao; Bhattacharjee, A.; Goodman, A.; Prager, S.; Daughton, W.; Cutler, R.; Fox, W.; Hoffmann, F.; Kalish, M.; Kozub, T.; Jara-Almonte, J.; Myers, C.; Ren, Y.; Sloboda, P.; Yamada, M.; Yoo, J.; Bale, S. D.; Carter, T.; Dorfman, S.; Drake, J.; Egedal, J.; Sarff, J.; Wallace, J.

    2017-10-01

    The FLARE device (Facility for Laboratory Reconnection Experiments; flare.pppl.gov) is a new laboratory experiment under construction at Princeton for studies of magnetic reconnection in the multiple X-line regimes directly relevant to space, solar, astrophysical, and fusion plasmas, as guided by a reconnection phase diagram. The whole device has been assembled, with first plasmas expected in the fall of 2017. The main diagnostic is an extensive set of magnetic probe arrays, currently under construction, to cover multiple scales from local electron scales (~2 mm), to intermediate ion scales (~10 cm), and global MHD scales (~1 m), simultaneously providing in-situ measurements over all these relevant scales. The planned procedures and example topics as a user facility will be discussed.

  14. SMALL-SCALE ANISOTROPIES OF COSMIC RAYS FROM RELATIVE DIFFUSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahlers, Markus; Mertsch, Philipp

    2015-12-10

    The arrival directions of multi-TeV cosmic rays show significant anisotropies at small angular scales. It has been argued that this small-scale structure can naturally arise from cosmic ray scattering in local turbulent magnetic fields that distort a global dipole anisotropy set by diffusion. We study this effect in terms of the power spectrum of cosmic ray arrival directions and show that the strength of small-scale anisotropies is related to properties of relative diffusion. We provide a formalism for how these power spectra can be inferred from simulations and motivate a simple analytic extension of the ensemble-averaged diffusion equation that can account for the effect.

  15. Universities Scale Like Cities

    PubMed Central

    van Raan, Anthony F. J.

    2013-01-01

    Recent studies of urban scaling show that important socioeconomic city characteristics such as wealth and innovation capacity exhibit nonlinear, in particular power-law, scaling with population size. These nonlinear effects are common to all cities, with similar power law exponents. These findings mean that the larger the city, the more disproportionately it is a place of wealth and innovation. Local properties of cities cause a deviation from the expected behavior as predicted by the power law scaling. In this paper we demonstrate that universities show a similar behavior to cities in the distribution of the ‘gross university income’ in terms of total number of citations over ‘size’ in terms of total number of publications. Moreover, the power law exponents for university scaling are comparable to those for urban scaling. We find that deviations from the expected behavior can indeed be explained by specific local properties of universities, particularly the field-specific composition of a university, and its quality in terms of field-normalized citation impact. By studying both the set of the 500 largest universities worldwide and a specific subset of these 500 universities (the top-100 European universities) we are also able to distinguish between properties of universities with as well as without selection of one specific local property, the quality of a university in terms of its average field-normalized citation impact. It also reveals an interesting observation concerning the working of a crucial property in networked systems, preferential attachment. PMID:23544062
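
    The power-law fit underlying this kind of scaling analysis can be sketched as an ordinary least-squares regression in log-log space; the residuals then stand in for the deviations attributed to local university properties. The data below are hypothetical, with the exponent built in for illustration.

      import numpy as np

      # hypothetical data: total publications (size) and total citations ('gross income') per university
      rng = np.random.default_rng(11)
      pubs = rng.lognormal(mean=8, sigma=0.8, size=500)
      cites = 5 * pubs ** 1.25 * rng.lognormal(0, 0.3, 500)       # built-in exponent 1.25

      # power-law fit: log(cites) = log(a) + beta * log(pubs)
      beta, log_a = np.polyfit(np.log(pubs), np.log(cites), 1)
      residuals = np.log(cites) - (log_a + beta * np.log(pubs))   # deviations due to 'local' properties
      print(f"scaling exponent beta = {beta:.2f}; residual spread = {residuals.std():.2f}")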

  16. Local thermal sensation modeling-a review on the necessity and availability of local clothing properties and local metabolic heat production.

    PubMed

    Veselá, S; Kingma, B R M; Frijns, A J H

    2017-03-01

    Local thermal sensation modeling gained importance due to developments in personalized and locally applied heating and cooling systems in office environments. The accuracy of these models depends on skin temperature prediction by thermophysiological models, which in turn rely on accurate environmental and personal input data. Environmental parameters are measured or prescribed, but personal factors such as clothing properties and metabolic rates have to be estimated. Data for estimating the overall values of clothing properties and metabolic rates are available in several papers and standards. However, local values are more difficult to retrieve. For local clothing, this study revealed that full and consistent data sets are not available in the published literature for typical office clothing sets. Furthermore, the values for local heat production were not verified for characteristic office activities, but were adapted empirically. Further analyses showed that variations in input parameters can lead to local skin temperature differences (∆T_skin,loc = 0.4-4.4°C). These differences can affect the local sensation output, where ∆T_skin,loc = 1°C is approximately one step on a 9-point thermal sensation scale. In conclusion, future research should include a systematic study of local clothing properties and the development of feasible methods for measuring and validating local heat production. © 2016 The Authors. Indoor Air published by John Wiley & Sons Ltd.

  17. Local Chain Segregation and Entanglements in a Confined Polymer Melt

    NASA Astrophysics Data System (ADS)

    Lee, Nam-Kyung; Diddens, Diddo; Meyer, Hendrik; Johner, Albert

    2017-02-01

    The reptation mechanism, introduced by de Gennes and Edwards, where a polymer diffuses along a fluffy tube, defined by the constraints imposed by its surroundings, convincingly describes the relaxation of long polymers in concentrated solutions and melts. We propose that the scale for the tube diameter is set by local chain segregation, which we study analytically. We show that the concept of local segregation is especially operational for confined geometries, where segregation extends over mesoscopic domains, drastically reducing binary contacts, and provide an estimate of the entanglement length. Our predictions are quantitatively supported by extensive molecular dynamics simulations on systems consisting of long, entangled chains.

  18. Lessons Learned from a Decade of Sudden Oak Death in California: Evaluating Local Management

    NASA Astrophysics Data System (ADS)

    Alexander, Janice; Lee, Christopher A.

    2010-09-01

    Sudden Oak Death has been impacting California’s coastal forests for more than a decade. In that time, and in the absence of a centrally organized and coordinated set of mandatory management actions for this disease in California’s wildlands and open spaces, many local communities have initiated their own management programs. We present five case studies to explore how local-level management has attempted to control this disease. From these case studies, we glean three lessons: connections count, scale matters, and building capacity is crucial. These lessons may help management, research, and education planning for future pest and disease outbreaks.

  19. Lessons Learned from a Decade of Sudden Oak Death in California: Evaluating Local Management

    PubMed Central

    Alexander, Janice

    2010-01-01

    Sudden Oak Death has been impacting California’s coastal forests for more than a decade. In that time, and in the absence of a centrally organized and coordinated set of mandatory management actions for this disease in California’s wildlands and open spaces, many local communities have initiated their own management programs. We present five case studies to explore how local-level management has attempted to control this disease. From these case studies, we glean three lessons: connections count, scale matters, and building capacity is crucial. These lessons may help management, research, and education planning for future pest and disease outbreaks. PMID:20559634

  20. Visible Motion Blur

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor); Ahumada, Albert J. (Inventor)

    2014-01-01

    A method of measuring motion blur is disclosed, comprising obtaining a moving edge temporal profile r_1(k) of an image of a high-contrast moving edge, calculating the masked local contrast m_1(k) for r_1(k) and the masked local contrast m_2(k) for an ideal step edge waveform r_2(k) with the same amplitude as r_1(k), and calculating the measure of motion blur Psi as a difference function. The masked local contrasts are calculated using a set of convolution kernels scaled to simulate the performance of the human visual system, and Psi is measured in units of just-noticeable differences.

  1. Spatially disaggregated population estimates in the absence of national population and housing census data.

    PubMed

    Wardrop, N A; Jochem, W C; Bird, T J; Chamberlain, H R; Clarke, D; Kerr, D; Bengtsson, L; Juran, S; Seaman, V; Tatem, A J

    2018-04-03

    Population numbers at local levels are fundamental data for many applications, including the delivery and planning of services, election preparation, and response to disasters. In resource-poor settings, recent and reliable demographic data at subnational scales can often be lacking. National population and housing census data can be outdated, inaccurate, or missing key groups or areas, while registry data are generally lacking or incomplete. Moreover, at local scales accurate boundary data are often limited, and high rates of migration and urban growth make existing data quickly outdated. Here we review past and ongoing work aimed at producing spatially disaggregated local-scale population estimates, and discuss how new technologies are now enabling robust and cost-effective solutions. Recent advances in the availability of detailed satellite imagery, geopositioning tools for field surveys, statistical methods, and computational power are enabling the development and application of approaches that can estimate population distributions at fine spatial scales across entire countries in the absence of census data. We outline the potential of such approaches as well as their limitations, emphasizing the political and operational hurdles for acceptance and sustainable implementation of new approaches, and the continued importance of traditional sources of national statistical data. Copyright © 2018 the Author(s). Published by PNAS.

  2. Analysis of Surface Electric Field Measurements from an Array of Electric Field Mills

    NASA Astrophysics Data System (ADS)

    Lucas, G.; Thayer, J. P.; Deierling, W.

    2016-12-01

    Kennedy Space Center (KSC) has operated a distributed array of over 30 electric field mills over the past 18 years, providing a unique data set of surface electric field measurements over a very long timespan. In addition to the electric field instruments there are many meteorological towers around KSC that monitor the local meteorological conditions. Utilizing these data sets, we have investigated and found unique spatial and temporal signatures in the electric field data that are attributed to local meteorological effects and the global electric circuit. The local and global scale influences on the atmospheric electric field will be discussed, including the generation of space charge from the ocean surf, local cloud cover, and a local enhancement in the electric field that is seen at sunrise.

  3. National Earthquake Information Center Seismic Event Detections on Multiple Scales

    NASA Astrophysics Data System (ADS)

    Patton, J.; Yeck, W. L.; Benz, H.; Earle, P. S.; Soto-Cordero, L.; Johnson, C. E.

    2017-12-01

    The U.S. Geological Survey National Earthquake Information Center (NEIC) monitors seismicity on local, regional, and global scales using automatic picks from more than 2,000 near-real-time seismic stations. This presents unique challenges in automated event detection due to the high variability in data quality, network geometries and density, and distance-dependent variability in observed seismic signals. To lower the overall detection threshold while minimizing false detection rates, NEIC has begun to test the incorporation of new detection and picking algorithms, including multiband (Lomax et al., 2012) and kurtosis (Baillard et al., 2014) pickers, and a new Bayesian associator (Glass 3.0). The Glass 3.0 associator allows for simultaneous processing of variably scaled detection grids, each with a unique set of nucleation criteria (e.g., nucleation threshold, minimum associated picks, nucleation phases) to meet specific monitoring goals. We test the efficacy of these new tools on event detection in networks of various scales and geometries, compare our results with previous catalogs, and discuss lessons learned. For example, we find that on local and regional scales, rapid nucleation of small events may require event nucleation with both P and higher-amplitude secondary phases (e.g., S or Lg). We provide examples of the implementation of a scale-independent associator for an induced seismicity sequence (local-scale), a large aftershock sequence (regional-scale), and for monitoring global seismicity. Baillard, C., Crawford, W. C., Ballu, V., Hibert, C., & Mangeney, A. (2014). An automatic kurtosis-based P- and S-phase picker designed for local seismic networks. Bulletin of the Seismological Society of America, 104(1), 394-409. Lomax, A., Satriano, C., & Vassallo, M. (2012). Automatic picker developments and optimization: FilterPicker - a robust, broadband picker for real-time seismic monitoring and earthquake early-warning, Seism. Res. Lett., 83, 531-540, doi: 10.1785/gssrl.83.3.531.
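
    In the spirit of a kurtosis-based picker (a simplified sketch, not the cited Baillard et al. implementation), a sliding-window kurtosis can serve as a characteristic function whose steepest rise marks a phase onset. The trace below is synthetic.

      import numpy as np
      from scipy.stats import kurtosis

      def kurtosis_cf(trace, win):
          # sliding-window kurtosis characteristic function; onsets appear as sharp increases
          cf = np.zeros(len(trace))
          for i in range(win, len(trace)):
              cf[i] = kurtosis(trace[i - win:i])
          return cf

      # synthetic trace: noise with an impulsive arrival at sample 2000
      rng = np.random.default_rng(5)
      trace = rng.normal(0, 1, 4000)
      trace[2000:2100] += 8 * np.exp(-np.arange(100) / 20) * rng.normal(0, 1, 100)

      cf = kurtosis_cf(trace, win=200)
      pick = np.argmax(np.diff(cf))        # crude pick at the steepest rise of the characteristic function
      print("picked sample:", pick)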

  4. Sensitivity of extreme precipitation to temperature: the variability of scaling factors from a regional to local perspective

    NASA Astrophysics Data System (ADS)

    Schroeer, K.; Kirchengast, G.

    2018-06-01

    Potential increases in extreme-rainfall-induced hazards in a warming climate have motivated studies to link precipitation intensities to temperature. Increases exceeding the Clausius-Clapeyron (CC) rate of 6-7% per °C are seen in short-duration, convective, high-percentile rainfall at mid-latitudes, but the rates of change cease or revert at regionally variable threshold temperatures due to moisture limitations. It is unclear, however, what these findings mean in terms of the actual risk of extreme precipitation on a regional to local scale. When conditioning precipitation intensities on local temperatures, key influences on the scaling relationship such as from the annual cycle and regional weather patterns need better understanding. Here we analyze these influences, using sub-hourly to daily precipitation data from a dense network of 189 stations in south-eastern Austria. We find that the temperature sensitivities in the mountainous western region are lower than in the eastern lowlands. This is due to the different weather patterns that cause extreme precipitation in these regions. Sub-hourly and hourly intensities intensify at super-CC and CC-rates, respectively, up to temperatures of about 17 °C. However, we also find that, because of the regional and seasonal variability of the precipitation intensities, a smaller scaling factor can imply a larger absolute change in intensity. Our insights underline that temperature-precipitation scaling requires careful interpretation of the intent and setting of the study. When this is considered, conditional scaling factors can help to better understand which influences control the intensification of rainfall with temperature on a regional scale.

  5. Long-wave instabilities of two interlaced helical vortices

    NASA Astrophysics Data System (ADS)

    Quaranta, H. U.; Brynjell-Rahkola, M.; Leweke, T.; Henningson, D. S.

    2016-09-01

    We present a comparison between experimental observations and theoretical predictions concerning long-wave displacement instabilities of the helical vortices in the wake of a two-bladed rotor. Experiments are performed with a small-scale rotor in a water channel, using a set-up that allows the individual triggering of various instability modes at different azimuthal wave numbers, leading to local or global pairing of successive vortex loops. The initial development of the instability and the measured growth rates are in good agreement with the predictions from linear stability theory, based on an approach where the helical vortex system is represented by filaments. At later times, local pairing develops into large-scale distortions of the vortices, whereas for global pairing the non-linear evolution returns the system almost to its initial geometry.

  6. Discovery of Localized Regions of Excess 10-TeV Cosmic Rays

    NASA Astrophysics Data System (ADS)

    Abdo, A. A.; Allen, B.; Aune, T.; Berley, D.; Blaufuss, E.; Casanova, S.; Chen, C.; Dingus, B. L.; Ellsworth, R. W.; Fleysher, L.; Fleysher, R.; Gonzalez, M. M.; Goodman, J. A.; Hoffman, C. M.; Hüntemeyer, P. H.; Kolterman, B. E.; Lansdell, C. P.; Linnemann, J. T.; McEnery, J. E.; Mincer, A. I.; Nemethy, P.; Noyes, D.; Pretz, J.; Ryan, J. M.; Parkinson, P. M. Saz; Shoup, A.; Sinnis, G.; Smith, A. J.; Sullivan, G. W.; Vasileiou, V.; Walker, G. P.; Williams, D. A.; Yodh, G. B.

    2008-11-01

    The 7-year data set of the Milagro TeV observatory contains 2.2×10^11 events, of which most are due to hadronic cosmic rays. These data are searched for evidence of intermediate scale structure. Excess emission on angular scales of ~10° has been found in two localized regions of unknown origin with greater than 12σ significance. Both regions are inconsistent with pure gamma-ray emission with high confidence. One of the regions has a different energy spectrum than the isotropic cosmic-ray flux at a level of 4.6σ, and it is consistent with hard spectrum protons with an exponential cutoff, with the most significant excess at ~10 TeV. Potential causes of these excesses are explored, but no compelling explanations are found.

  7. Discovery of localized regions of excess 10-TeV cosmic rays.

    PubMed

    Abdo, A A; Allen, B; Aune, T; Berley, D; Blaufuss, E; Casanova, S; Chen, C; Dingus, B L; Ellsworth, R W; Fleysher, L; Fleysher, R; Gonzalez, M M; Goodman, J A; Hoffman, C M; Hüntemeyer, P H; Kolterman, B E; Lansdell, C P; Linnemann, J T; McEnery, J E; Mincer, A I; Nemethy, P; Noyes, D; Pretz, J; Ryan, J M; Parkinson, P M Saz; Shoup, A; Sinnis, G; Smith, A J; Sullivan, G W; Vasileiou, V; Walker, G P; Williams, D A; Yodh, G B

    2008-11-28

    The 7-year data set of the Milagro TeV observatory contains 2.2 × 10^11 events, of which most are due to hadronic cosmic rays. These data are searched for evidence of intermediate scale structure. Excess emission on angular scales of approximately 10 degrees has been found in two localized regions of unknown origin with greater than 12σ significance. Both regions are inconsistent with pure gamma-ray emission with high confidence. One of the regions has a different energy spectrum than the isotropic cosmic-ray flux at a level of 4.6σ, and it is consistent with hard spectrum protons with an exponential cutoff, with the most significant excess at approximately 10 TeV. Potential causes of these excesses are explored, but no compelling explanations are found.

  8. Climate Change and Conservation Planning in California: The San Francisco Bay Area Upland Habitat Goals Approach

    NASA Astrophysics Data System (ADS)

    Branciforte, R.; Weiss, S. B.; Schaefer, N.

    2008-12-01

    Climate change threatens California's vast and unique biodiversity. The Bay Area Upland Habitat Goals is a comprehensive regional biodiversity assessment of the 9 counties surrounding San Francisco Bay, and is designing conservation land networks that will serve to protect, manage, and restore that biodiversity. Conservation goals for vegetation, rare plants, mammals, birds, fish, amphibians, reptiles, and invertebrates are set, and those goals are met using the optimization algorithm MARXAN. Climate change issues are being considered in the assessment and network design in several ways. The high spatial variability at mesoclimatic and topoclimatic scales in California creates high local biodiversity, and provides some degree of local resiliency to macroclimatic change. Mesoclimatic variability from 800 m scale PRISM climatic norms is used to assess "mesoclimate spaces" in distinct mountain ranges, so that high mesoclimatic variability, especially local extremes that likely support range limits of species and potential climatic refugia, can be captured in the network. Quantitative measures of network resiliency to climate change include the spatial range of key temperature and precipitation variables within planning units. Topoclimatic variability provides a finer-grained spatial patterning. Downscaling to the topoclimatic scale (10-50 m scale) includes modeling solar radiation across DEMs for predicting maximum temperature differentials, and topographic position indices for modeling minimum temperature differentials. PRISM data are also used to differentiate grasslands into distinct warm and cool types. The overall conservation strategy includes local and regional connectivity so that range shifts can be accommodated.

  9. Characterizing multi-scale self-similar behavior and non-statistical properties of fluctuations in financial time series

    NASA Astrophysics Data System (ADS)

    Ghosh, Sayantan; Manimaran, P.; Panigrahi, Prasanta K.

    2011-11-01

    We make use of the wavelet transform to study the multi-scale, self-similar behavior, and deviations thereof, in the stock prices of large companies belonging to different economic sectors. The stock market returns exhibit multi-fractal characteristics, with some of the companies showing deviations at small and large scales. The fact that wavelets belonging to the Daubechies (Db) basis enable one to isolate local polynomial trends of different degrees plays the key role in isolating fluctuations at different scales. One of the primary motivations of this work is to study the emergence of the k^-3 behavior [X. Gabaix, P. Gopikrishnan, V. Plerou, H. Stanley, A theory of power law distributions in financial market fluctuations, Nature 423 (2003) 267-270] of the fluctuations starting with high-frequency fluctuations. We make use of Db4 and Db6 basis sets to respectively isolate local linear and quadratic trends at different scales in order to study the statistical characteristics of these financial time series. The fluctuations reveal fat-tailed, non-Gaussian behavior and unstable periodic modulations at finer scales, from which the characteristic k^-3 power-law behavior emerges at sufficiently large scales. We further identify stable periodic behavior through the continuous Morlet wavelet.
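
    A simplified version of this wavelet fluctuation analysis is sketched below, assuming the paper's Db4 refers to the four-tap Daubechies wavelet (two vanishing moments), which PyWavelets names 'db2'. The series is a random-walk stand-in for a log-price record, and the scale-by-scale rms fluctuation is only a rough proxy for the statistics the authors compute.

      import numpy as np
      import pywt

      rng = np.random.default_rng(13)
      series = np.cumsum(rng.standard_normal(4096))     # stand-in for a log-price series

      # 'db2' (four taps, two vanishing moments) filters out local linear trends,
      # leaving the fluctuations at each scale in the detail coefficients.
      coeffs = pywt.wavedec(series, 'db2')
      for j, d in enumerate(coeffs[:0:-1], start=1):    # detail coefficients, finest to coarsest
          print(f"level {j} (scale ~{2**j:5d} samples): rms fluctuation {np.sqrt(np.mean(d**2)):.3f}")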

  10. Strain Localization on Different Scales and their Related Microstructures - Comparison of Microfabrics of Calcite Mylonites from Naxos (Greece) and Helvetic Nappes (Switzerland)

    NASA Astrophysics Data System (ADS)

    Ebert, A.; Herwegh, M.; Karl, R.; Edwin, G.; Decrouez, D.

    2007-12-01

    In the upper crust, shear zones are widespread and appear at different scales. Although deformation conditions, shear zone history, and displacements vary in time and space between shear zones and also within them, in all shear zones similar trends in the evolution of large- to micro-scale fabrics can be observed. The microstructural analyses of calcite mylonites from Naxos and various Helvetic nappes show that microstructures from different metamorphic zones vary considerably at the outcrop scale and even at the sample scale. However, grain sizes tend to increase with metamorphic degree in the case of Naxos and the Helvetic nappes. Although deformation conditions (e.g. deformation temperature, strain rate, and shear zone geometry, i.e. shear zone width and rock type above/below thrust) vary between the different tectonic settings, microstructural trends (e.g. grain size) correlate with each other. This is in contrast to many previous studies, where no corrections for second phase contents have been applied. In an Arrhenius-type diagram, the grain growth trends of calcite of all studied shear zones fit on a single trend, independent of the dimensions of localized large-scale structures, which are in the dm-to-m and km range in the case of the Helvetic thrusts and the marble suite of Naxos, respectively. The calcite grain size increases continuously from a few μm to >2 mm with a temperature increase from <300°C to >700°C. From a field geologist's point of view, this is an important observation because it shows that natural dynamically stabilized steady state microfabrics can be used to estimate temperature conditions during deformation, although the tectonic settings are different (e.g. strain rate, fluid flow). The reason for this agreement might be related to a scale-dependence of the shear zone dimensions, where the widths increase with increasing metamorphic conditions. In this sense, the deformation volumes affected by localization must be closely linked to the strength of the affected rocks. In comparison to experiments, similar microstructural trends are observed. Here, however, shifts of these trends occur due to the higher strain rates.

  11. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.

    2014-01-01

    High-resolution gridded daily data sets are essential for natural resource management and the analyses of climate changes and their effects. This study aims to evaluate the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested considering two different sets of observation densities and different rainfall amounts. We used rainfall data that were recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses utilizing geographical copredictors and neighboring interpolation techniques were evaluated both in isolation and combined. Linear multiple regression (LMR) and geographically weighted regression methods (GWR) were tested. These included a step-wise selection of covariables, as well as inverse distance weighting (IDW), kriging, and 3D-thin plate splines (TPS). The relative rank of the different techniques changes with different station density and rainfall amounts. Our results indicate that TPS performs well for low station density and large-scale events and also when coupled with regression models. It performs poorly for high station density. The opposite is observed when using IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that the use of step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.
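
    The simplest of the interpolation techniques compared above, inverse distance weighting, can be sketched as follows on synthetic station data; coordinates, values, and grid spacing are toy choices and not the Cyprus data set.

      import numpy as np

      def idw(xy_obs, z_obs, xy_grid, power=2.0):
          # inverse-distance-weighted interpolation of station values onto grid points
          d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
          d = np.maximum(d, 1e-9)                     # avoid division by zero at station locations
          w = 1.0 / d ** power
          return (w @ z_obs) / w.sum(axis=1)

      rng = np.random.default_rng(17)
      xy_obs = rng.uniform(0, 100, (74, 2))           # station coordinates (km)
      z_obs = rng.gamma(2.0, 5.0, 74)                 # daily precipitation (mm)
      gx, gy = np.meshgrid(np.arange(0, 100, 5.0), np.arange(0, 100, 5.0))
      xy_grid = np.column_stack([gx.ravel(), gy.ravel()])
      z_grid = idw(xy_obs, z_obs, xy_grid).reshape(gx.shape)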

  12. Temporal length-scale cascade and expansion rate on planar liquid jet instability

    NASA Astrophysics Data System (ADS)

    Sirignano, William; Zandian, Arash; Hussain, Fazle

    2016-11-01

    Using the local radius of curvature of the surface and the local transverse dimension of the two-phase (i.e., spray) domain as length scales, we obtained two PDFs over a wide range of length-scales at different times and for different Reynolds and Weber (We) numbers. The PDFs were developed via post-processing of DNS Navier-Stokes results for a 3D planar liquid sheet segment with level-set and Volume-of-Fluid surface tracking, giving better statistical data for the length scales compared to the former methods. The radius PDF shows that, with increasing We , the average radius of curvature decreases, number of small droplets increases, and cascade occurs at a faster rate. In time, the mean of the radius PDF decreases while the rms increases. The other PDF represents the spray expansion in a more realistic and meaningful form, showing that the spray angle is larger at higher We and density-ratios. Both the mean and the rms of the spray-size PDF increase with time. The PDFs also track the transitions between symmetric and anti-symmetric modes.

  13. The watershed-scale optimized and rearranged landscape design (WORLD) model and local biomass processing depots for sustainable biofuel production: Integrated life cycle assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eranki, Pragnya L.; Manowitz, David H.; Bals, Bryan D.

    An array of feedstocks is being evaluated as potential raw materials for cellulosic biofuel production. Thorough assessments are required in regional landscape settings before these feedstocks can be cultivated and sustainable management practices can be implemented. On the processing side, a potential solution to the logistical challenges of large biorefineries is provided by a network of distributed processing facilities called local biomass processing depots. A large-scale cellulosic ethanol industry is likely to emerge soon in the United States. We have the opportunity to influence the sustainability of this emerging industry. The watershed-scale optimized and rearranged landscape design (WORLD) model estimates land allocations for different cellulosic feedstocks at biorefinery scale without displacing current animal nutrition requirements. This model also incorporates a network of the aforementioned depots. An integrated life cycle assessment is then conducted over the unified system of optimized feedstock production, processing, and associated transport operations to evaluate net energy yields (NEYs) and environmental impacts.

  14. Hydrochemical buffer assessment in agricultural landscapes: from local to catchment scale.

    PubMed

    Viaud, Valérie; Merot, Philippe; Baudry, Jacques

    2004-10-01

    Non-point-source pollution of surface and groundwater is a prominent environmental issue in rural catchments, with major consequences on water supply and aquatic ecosystem quality. Among surface-water protection measures, environmental or landscape management policies support the implementation and the management of buffer zones. Although a great number of studies have focused on buffer zones, quantification of the buffer effect is still a recurring question. The purpose of this article is a critical review of the assessment of buffer-zone functioning. Our objective is to provide land planners and managers with a set of variables to assess the limits and possibilities for quantifying buffer impact at the catchment scale. We first consider the scale of the local landscape feature. The most commonly used empirical method for assessing buffers is to calculate water/nutrient budgets from inflow-outflow monitoring at the level of landscape structures. We show that several other parameters apart from mean depletion of flux can be used to describe buffer functions. Such parameters include variability, with major implication for water management. We develop a theoretical framework to clarify the assessment of the buffer effect and propose a systematic analysis taking account of temporal variability. Second, we review the current assessment of buffer effects at the catchment scale according to the theoretical framework established at the local scale. Finally, we stress the limits of direct empirical assessment at the catchment scale and, in particular, we emphasize the hierarchy in hydrological processes involved at the catchment scale: The landscape feature function is constrained by other factors (climate and geology) that are of importance at a broader spatial and temporal scale.

  15. Are We "Experienced Listeners"? A Review of the Musical Capacities that Do Not Depend on Formal Musical Training

    ERIC Educational Resources Information Center

    Bigand, E.; Poulin-Charronnat, B.

    2006-01-01

    The present paper reviews a set of studies designed to investigate different aspects of the capacity for processing Western music. This includes perceiving the relationships between a theme and its variations, perceiving musical tensions and relaxations, generating musical expectancies, integrating local structures in large-scale structures,…

  16. Changing Cognitions in Parents of Two-Year-Olds Attending Scottish Sure Start Centres

    ERIC Educational Resources Information Center

    Woolfson, Lisa Marks; Durkin, Kevin; King, Julia

    2010-01-01

    The study examined how preschool intervention programmes set up by three Scottish local authorities changed parents' cognitions. Quantitative parent outcomes were measured using Parenting Daily Hassles Scales (N = 88). A matched comparison group of parents (N = 55) recruited from the same areas of disadvantage but whose children did not attend the…

  17. "Developing culturally sensitive affect scales for global mental health research and practice: Emotional balance, not named syndromes, in Indian Adivasi subjective well-being".

    PubMed

    Snodgrass, Jeffrey G; Lacy, Michael G; Upadhyay, Chakrapani

    2017-08-01

    We present a perspective to analyze mental health without either a) imposing Western illness categories or b) adopting local or "native" categories of mental distress. Our approach takes as axiomatic only that locals within any culture share a cognitive and verbal lexicon of salient positive and negative emotional experiences, which an appropriate and repeatable set of ethnographic procedures can elicit. Our approach is provisionally agnostic with respect to either Western or native nosological categories, and instead focuses on persons' relative frequency of experiencing emotions. Putting this perspective into practice in India, our ethnographic fieldwork (2006-2014) and survey analysis (N = 219) resulted in a 40-item Positive and Negative Affect Scale (PANAS), which we used to assess the mental well-being of Indigenous persons (the tribal Sahariya) in the Indian states of Rajasthan and Madhya Pradesh. Generated via standard cognitive anthropological procedures that can be replicated elsewhere, measures such as this possess features of psychiatric scales favored by leaders in global mental health initiatives. Though not capturing locally named distress syndromes, our scale is nonetheless sensitive to local emotional experiences, frames of meaning, and "idioms of distress." By sharing traits of both global and also locally-derived diagnoses, approaches like ours can help identify synergies between them. For example, employing data reduction techniques such as factor analysis-where diagnostic and screening categories emerge inductively ex post facto from emotional symptom clusters, rather than being deduced or assigned a priori by either global mental health experts or locals themselves-reveals hidden overlaps between local wellness idioms and global ones. Practically speaking, our perspective, which assesses both emotional frailty and also potential sources of emotional resilience and balance, while eschewing all named illness categories, can be deployed in mental health initiatives in ways that minimize stigma and increase both the acceptability and validity of assessment instruments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Effectiveness of reactive case detection for malaria elimination in three archetypical transmission settings: a modelling study.

    PubMed

    Gerardin, Jaline; Bever, Caitlin A; Bridenbecker, Daniel; Hamainza, Busiku; Silumbe, Kafula; Miller, John M; Eisele, Thomas P; Eckhoff, Philip A; Wenger, Edward A

    2017-06-12

    Reactive case detection could be a powerful tool in malaria elimination, as it selectively targets transmission pockets. However, field operations have yet to demonstrate under which conditions, if any, reactive case detection is best poised to push a region to elimination. This study uses mathematical modelling to assess how baseline transmission intensity and local interconnectedness affect the impact of reactive activities in the context of other possible intervention packages. Communities in Southern Province, Zambia, where elimination operations are currently underway, were used as representatives of three archetypes of malaria transmission: low-transmission, high household density; high-transmission, low household density; and high-transmission, high household density. Transmission at the spatially-connected household level was simulated with a dynamical model of malaria transmission, and local variation in vectorial capacity and intervention coverage were parameterized according to data collected from the area. Various potential intervention packages were imposed on each of the archetypical settings and the resulting likelihoods of elimination by the end of 2020 were compared. Simulations predict that success of elimination campaigns in both low- and high-transmission areas is strongly dependent on stemming the flow of imported infections, underscoring the need for regional-scale strategies capable of reducing transmission concurrently across many connected areas. In historically low-transmission areas, treatment of clinical malaria should form the cornerstone of elimination operations, as most malaria infections in these areas are symptomatic and onward transmission would be mitigated through health system strengthening; reactive case detection has minimal impact in these settings. In historically high-transmission areas, vector control and case management are crucial for limiting outbreak size, and the asymptomatic reservoir must be addressed through reactive case detection or mass drug campaigns. Reactive case detection is recommended only for settings where transmission has recently been reduced rather than all low-transmission settings. This is demonstrated in a modelling framework with strong out-of-sample accuracy across a range of transmission settings while including methodologies for understanding the most resource-effective allocations of health workers. This approach generalizes to providing a platform for planning rational scale-up of health systems based on locally-optimized impact according to simplified stratification.

  19. Thresholds of understanding: Exploring assumptions of scale invariance vs. scale dependence in global biogeochemical models

    NASA Astrophysics Data System (ADS)

    Wieder, W. R.; Bradford, M.; Koven, C.; Talbot, J. M.; Wood, S.; Chadwick, O.

    2016-12-01

    High uncertainty and low confidence in terrestrial carbon (C) cycle projections reflect the incomplete understanding of how best to represent biologically-driven C cycle processes at global scales. Ecosystem theories, and consequently biogeochemical models, are based on the assumption that different belowground communities function similarly and interact with the abiotic environment in consistent ways. This assumption of "Scale Invariance" posits that environmental conditions will change the rate of ecosystem processes, but the biotic response will be consistent across sites. Indeed, cross-site comparisons and global-scale analyses suggest that climate strongly controls rates of litter mass loss and soil organic matter turnover. Alternatively, activities of belowground communities are shaped by particular local environmental conditions, such as climate and edaphic conditions. Under this assumption of "Scale Dependence", relationships generated by evolutionary trade-offs in acquiring resources and withstanding environmental stress dictate the activities of belowground communities and their functional response to environmental change. Similarly, local edaphic conditions (e.g. permafrost soils or reactive minerals that physicochemically stabilize soil organic matter on mineral surfaces) may strongly constrain the availability of substrates that biota decompose—altering the trajectory of soil biogeochemical response to perturbations. Identifying when scale invariant assumptions hold vs. where local variation in biotic communities or edaphic conditions must be considered is critical to advancing our understanding and representation of belowground processes in the face of environmental change. Here we introduce data sets that support assumptions of scale invariance and scale dependent processes and discuss their application in global-scale biogeochemical models. We identify particular domains over which assumptions of scale invariance may be appropriate and potential thresholds where shifts in ecosystem function may be expected. Finally, we discuss the mechanistic insight that can be applied in process-based models and datasets that can evaluate models across spatial and temporal scales.

  20. DMI's Baltic Sea Coastal operational forecasting system

    NASA Astrophysics Data System (ADS)

    Murawski, Jens; Berg, Per; Weismann Poulsen, Jacob

    2017-04-01

    Operational forecasting is challenged with bridging the gap between the large scales of the driving weather systems and the local, human scales of the model applications. The limit of what can be represented by local models has been shifted continuously to higher and higher spatial resolution, with the aim of better resolving the local dynamics and of describing processes that could only be parameterised in older versions, with the ultimate goal of improving the quality of the forecast. Current hardware trends demand a stronger focus on the development of efficient, highly parallelised software and require a refactoring of the code with a solid focus on portable performance. The gained performance can be used for running high-resolution models with a larger coverage. Together with the development of efficient two-way nesting routines, this has made it possible to approach the near-coastal zone with model applications that can run in a time-effective way. Denmark's Meteorological Institute uses the HBM(1) ocean circulation model for applications that cover the entire Baltic Sea and North Sea with an integrated model set-up that spans the range of horizontal resolution from 1 nm for the entire Baltic Sea to approx. 200 m resolution in local fjords (Limfjord). For the next model generation, the high-resolution set-ups are going to be extended, and new high-resolution domains in coastal zones are either implemented or tested for operational use. For the first time it will be possible to cover large stretches of the Baltic coastal zone with sufficiently high resolution to model the local hydrodynamics adequately. (1) HBM stands for HIROMB-BOOS-Model, where HIROMB stands for "High Resolution Model for the Baltic Sea" and BOOS stands for "Baltic Operational Oceanography System".

  1. Validation of the Edinburgh Postnatal Depression Scale (EPDS) on the Thai–Myanmar border

    PubMed Central

    Ing, Harriet; Fellmeth, Gracia; White, Jitrachote; Stein, Alan; Simpson, Julie A; McGready, Rose

    2017-01-01

    Postnatal depression is common and may have severe consequences for women and their children. Locally validated screening tools are required to identify at-risk women in marginalised populations. The Edinburgh Postnatal Depression Scale (EPDS) is one of the most frequently used tools globally. This cross-sectional study assessed the validity and acceptability of the EPDS in Karen and Burmese among postpartum migrant and refugee women on the Thai–Myanmar border. The EPDS was administered to participants and results compared with a diagnostic interview. Local staff provided feedback on the acceptability of the EPDS through a focus group discussion. Results from 670 women showed high accuracy and reasonable internal consistency of the EPDS. However, acceptability to local staff was low, limiting the utility of the EPDS in this setting despite its good psychometrics. Further work is required to identify a tool that is acceptable and sensitive to cultural manifestations of depression in this vulnerable population. PMID:28699396
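
    A typical way to quantify the "high accuracy" reported above is to tabulate sensitivity and specificity of the screening score against the diagnostic interview at every candidate cutoff. The sketch below illustrates that calculation on made-up data; the variable names and the Youden-index criterion are assumptions, not details taken from the study.

    ```python
    # Minimal sketch (not the authors' code): screening-tool validation against a
    # diagnostic interview, as commonly done for scales such as the EPDS.
    # `scores` and `has_depression` are hypothetical inputs.
    import numpy as np

    def validate_cutoffs(scores, has_depression):
        """Return (cutoff, sensitivity, specificity, Youden's J) for each candidate cutoff."""
        scores = np.asarray(scores, dtype=float)
        truth = np.asarray(has_depression, dtype=bool)
        results = []
        for cutoff in np.unique(scores):
            positive = scores >= cutoff          # screen-positive at this cutoff
            sens = np.mean(positive[truth])      # true positives / all cases
            spec = np.mean(~positive[~truth])    # true negatives / all non-cases
            results.append((cutoff, sens, spec, sens + spec - 1.0))
        return results

    # Example with made-up data: pick the cutoff maximising Youden's J.
    rng = np.random.default_rng(0)
    truth = rng.random(670) < 0.15
    scores = rng.normal(14, 4, 670) * truth + rng.normal(7, 4, 670) * ~truth
    best = max(validate_cutoffs(scores, truth), key=lambda r: r[3])
    print("cutoff %.1f  sensitivity %.2f  specificity %.2f" % best[:3])
    ```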

  2. The f(R) halo mass function in the cosmic web

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun-Bates, F. von; Winther, H.A.; Alonso, D.

    An important indicator of modified gravity is the effect of the local environment on halo properties. This paper examines the influence of the local tidal structure on the halo mass function, the halo orientation, spin, and the concentration-mass relation. We use the excursion set formalism to produce a halo mass function conditional on large-scale structure. Our simple model agrees well with simulations on large scales at which the density field is linear or weakly non-linear. Beyond this, our principal result is that f(R) does affect halo abundances, the halo spin parameter and the concentration-mass relationship in an environment-independent way, whereas we find no appreciable deviation from ΛCDM for the mass function with fixed environment density, nor for the alignment of the orientation and spin vectors of the halo to the eigenvectors of the local cosmic web. There is a general trend for greater deviation from ΛCDM in underdense environments and for high-mass haloes, as expected from chameleon screening.

  3. Geographic variation in opinions on climate change at state and local scales in the USA

    NASA Astrophysics Data System (ADS)

    Howe, Peter D.; Mildenberger, Matto; Marlon, Jennifer R.; Leiserowitz, Anthony

    2015-06-01

    Addressing climate change in the United States requires enactment of national, state and local mitigation and adaptation policies. The success of these initiatives depends on public opinion, policy support and behaviours at appropriate scales. Public opinion, however, is typically measured with national surveys that obscure geographic variability across regions, states and localities. Here we present independently validated high-resolution opinion estimates using a multilevel regression and poststratification model. The model accurately predicts climate change beliefs, risk perceptions and policy preferences at the state, congressional district, metropolitan and county levels, using a concise set of demographic and geographic predictors. The analysis finds substantial variation in public opinion across the nation. Nationally, 63% of Americans believe global warming is happening, but county-level estimates range from 43 to 80%, leading to a diversity of political environments for climate policy. These estimates provide an important new source of information for policymakers, educators and scientists to more effectively address the challenges of climate change.
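
    The county-level estimates described above come from a multilevel regression and poststratification (MRP) workflow. The regression step is model-specific, but the poststratification step is simple to illustrate: weight cell-level predictions by census counts. The sketch below is a minimal illustration with hypothetical numbers; the column names and values are not from the study.

    ```python
    # Minimal sketch (assumptions, not the authors' model): the poststratification
    # step of MRP. A multilevel model would supply `p_cell`, the predicted share
    # holding an opinion for each demographic cell; census counts weight the cells.
    import pandas as pd

    # Hypothetical cell table: county, demographic cell, census count, model prediction.
    cells = pd.DataFrame({
        "county":   ["A", "A", "A", "B", "B", "B"],
        "cell":     ["18-34", "35-64", "65+", "18-34", "35-64", "65+"],
        "n_census": [4000, 6000, 2000, 1000, 3000, 2500],
        "p_cell":   [0.71, 0.62, 0.55, 0.58, 0.49, 0.44],
    })

    # Poststratified county estimate: population-weighted average of cell predictions.
    cells["weighted"] = cells["n_census"] * cells["p_cell"]
    grouped = cells.groupby("county")
    county_est = grouped["weighted"].sum() / grouped["n_census"].sum()
    print(county_est)  # county A ~0.64, county B ~0.48
    ```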

  4. Cascade heterogeneous face sketch-photo synthesis via dual-scale Markov Network

    NASA Astrophysics Data System (ADS)

    Yao, Saisai; Chen, Zhenxue; Jia, Yunyi; Liu, Chengyun

    2018-03-01

    Heterogeneous face sketch-photo synthesis is an important and challenging task in computer vision, which has been widely applied in law enforcement and digital entertainment. Motivated by the different synthesis results obtained at different scales, this paper proposes a cascade sketch-photo synthesis method via a dual-scale Markov Network. Firstly, a Markov Network at the larger scale is used to synthesise the initial sketches, and the local vertical and horizontal neighbour search (LVHNS) method is used to search for the neighbour patches of test patches in the training set. Then, the initial sketches and test photos are jointly entered into the smaller-scale Markov Network. Finally, the fine sketches are obtained after the cascade synthesis process. Extensive experimental results on various databases demonstrate the superiority of the proposed method compared with several state-of-the-art methods.

  5. Sustainable wastewater treatment of temporary events: the Dranouter Music Festival case study.

    PubMed

    Van Hulle, S W H; Audenaert, W; Decostere, B; Hogie, J; Dejans, P

    2008-01-01

    Music festivals and other temporary events, such as bicycle races, lay a heavy burden on the surrounding environment. Treatment of the wastewater originating from such events is necessary if no municipal treatment plant is available. This study demonstrated that activated carbon is an effective technique for the treatment of wastewaters originating from these temporary events. Freundlich isotherms and the maximum operational linear velocity (6 m/h) were determined on a lab-scale set-up. A pilot-scale set-up was used to treat part (5%) of the total volume of the Dranouter Music Festival shower wastewater. On average, 90% removal of COD and suspended solids concentration was obtained. Application of the activated carbon filter meant that the local discharge limits were met without operational problems. IWA Publishing 2008.
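
    The Freundlich isotherm mentioned above relates the adsorbed amount to the equilibrium concentration as q_e = K_F · C_e^(1/n). The sketch below shows how such an isotherm could be fitted to batch adsorption data; the data points and starting values are hypothetical, not the study's measurements.

    ```python
    # Minimal sketch (made-up data, not the study's measurements): fitting the
    # Freundlich isotherm q_e = K_F * C_e**(1/n) to batch adsorption results.
    import numpy as np
    from scipy.optimize import curve_fit

    def freundlich(c_e, k_f, n):
        """Equilibrium adsorbed amount q_e (mg/g) vs. equilibrium concentration c_e (mg/L)."""
        return k_f * c_e ** (1.0 / n)

    # Hypothetical equilibrium data from jar tests with activated carbon.
    c_e = np.array([5.0, 10.0, 25.0, 50.0, 100.0])   # mg COD / L remaining in solution
    q_e = np.array([12.0, 18.0, 30.0, 42.0, 61.0])   # mg COD / g carbon adsorbed

    (k_f, n), _ = curve_fit(freundlich, c_e, q_e, p0=(5.0, 2.0))
    print(f"K_F = {k_f:.2f}, 1/n = {1.0 / n:.2f}")
    ```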

  6. Travel determinants and multi-scale transferability of national activity patterns to local populations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henson, Kriste M; Goulias, Konstadinos G

    The ability to transfer national travel patterns to a local population is of interest when attempting to model megaregions or areas that exceed metropolitan planning organization (MPO) boundaries. At the core of this research are questions about the connection between travel behavior and land use, urban form, and accessibility. As a part of this process, a group of land use variables has been identified to define activity and travel patterns for individuals and households. The 2001 National Household Travel Survey (NHTS) participants are divided into categories comprised of a set of latent cluster models representing persons, travel, and land use. These are compared to two sets of cluster models constructed for two local travel surveys. Comparison-of-means statistical tests are used to assess differences among sociodemographic groups residing in localities with similar land uses. The results show that the NHTS and the local surveys share mean population activity and travel characteristics. However, these similarities mask behavioral heterogeneity that is revealed when distributions of activity and travel behavior are examined. Therefore, data from a national household travel survey cannot be used to model local population travel characteristics if the goal is to model the actual distributions rather than mean travel behavior characteristics.

  7. Small lakes in big landscape: Multi-scale drivers of littoral ecosystem in alpine lakes.

    PubMed

    Zaharescu, Dragos G; Burghelea, Carmen I; Hooda, Peter S; Lester, Richard N; Palanca-Soler, Antonio

    2016-05-01

    In low-nutrient alpine lakes, the littoral zone is the most productive part of the ecosystem, and it is a biodiversity hotspot. It is not entirely clear how the scale and physical heterogeneity of the surrounding catchment, its ecological composition, and larger landscape gradients work together to sustain littoral communities. A total of 113 alpine lakes from the central Pyrenees were surveyed to evaluate the functional connectivity between littoral zoobenthos and landscape physical and ecological elements at geographical, catchment and local scales, and to ascertain how they affect the formation of littoral communities. At each lake, the zoobenthic composition was assessed together with geolocation, catchment hydrodynamics, geomorphology and topography, riparian vegetation composition, the presence of trout and frogs, water pH and conductivity. Multidimensional fuzzy set models integrating benthic biota and environmental variables revealed that at the geographical scale, longitude unexpectedly surpassed altitude and latitude in its effect on the littoral ecosystem. This reflects a sharp transition between Atlantic and Mediterranean climates and suggests a potentially high horizontal vulnerability to climate change. Topography (controlling catchment type, snow coverage and lake connectivity) was the most influential catchment-scale driver, followed by hydrodynamics (waterbody size, type and volume of inflow/outflow). Locally, riparian plant composition significantly related to littoral community structure, richness and diversity. These variables, directly and indirectly, create habitats for aquatic and terrestrial stages of invertebrates, and control nutrient and water cycles. Three benthic associations characterised distinct lakes. Vertebrate predation, water conductivity and pH had no major influence on littoral taxa. This work provides exhaustive information from relatively pristine sites, and unveils a strong connection between the littoral ecosystem and catchment heterogeneity at scales beyond the local environment. This underpins the role of alpine lakes as sensors of local and large-scale environmental changes, which can be used in monitoring networks to evaluate further impacts. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Tone mapping infrared images using conditional filtering-based multi-scale retinex

    NASA Astrophysics Data System (ADS)

    Luo, Haibo; Xu, Lingyun; Hui, Bin; Chang, Zheng

    2015-10-01

    Tone mapping can be used to compress the dynamic range of image data so that it fits within the range of the reproduction media and human vision. The original infrared images captured with infrared focal plane arrays (IFPA) are high-dynamic-range images, so tone mapping infrared images is an important component of infrared imaging systems, and it has become an active topic in recent years. In this paper, we present a tone mapping framework using multi-scale retinex. Firstly, a Conditional Gaussian Filter (CGF) was designed to suppress the "halo" effect. Secondly, the original infrared image is decomposed into a set of images that represent the mean of the image at different spatial resolutions by applying the CGF at different scales. Then, a set of images that represent the multi-scale details of the original image is produced by dividing the original image pointwise by the decomposed images. Thirdly, the final detail image is reconstructed as a weighted sum of the multi-scale detail images. Finally, histogram scaling and clipping are adopted to remove outliers and scale the detail image; 0.1% of the pixels are clipped at both extremities of the histogram. Experimental results show that the proposed algorithm efficiently increases the local contrast while preventing the "halo" effect and provides a good rendition of the visual effect.
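
    A minimal sketch of the decomposition/ratio/weighted-sum pipeline described above is given below. It substitutes a plain Gaussian filter for the paper's Conditional Gaussian Filter and uses made-up scales and weights, so it illustrates the structure of the algorithm rather than the authors' implementation.

    ```python
    # Minimal sketch of the multi-scale decomposition described above, substituting a
    # standard Gaussian filter for the paper's Conditional Gaussian Filter (CGF) and
    # using made-up scales/weights; an illustration, not the authors' implementation.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multiscale_detail(image, sigmas=(2.0, 8.0, 32.0), weights=(0.4, 0.35, 0.25)):
        image = image.astype(np.float64) + 1e-6          # avoid division by zero
        detail = np.zeros_like(image)
        for sigma, w in zip(sigmas, weights):
            mean_s = gaussian_filter(image, sigma)       # local mean at this scale
            detail += w * (image / mean_s)               # pointwise ratio = detail layer
        return detail

    def clip_and_scale(detail, clip_fraction=0.001, out_max=255.0):
        lo, hi = np.quantile(detail, [clip_fraction, 1.0 - clip_fraction])
        detail = np.clip(detail, lo, hi)                 # drop 0.1% outliers at each tail
        return (detail - lo) / (hi - lo) * out_max       # rescale to the display range

    # Usage with a synthetic 14-bit "infrared" frame:
    frame = np.random.default_rng(1).integers(0, 2**14, size=(256, 320)).astype(float)
    display = clip_and_scale(multiscale_detail(frame))
    ```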

  9. Mapping regional livelihood benefits from local ecosystem services assessments in rural Sahel

    PubMed Central

    Sinare, Hanna; Enfors Kautsky, Elin; Ouedraogo, Issa; Gordon, Line J.

    2018-01-01

    Most current approaches to landscape scale ecosystem service assessments rely on detailed secondary data. This type of data is seldom available in regions with high levels of poverty and strong local dependence on provisioning ecosystem services for livelihoods. We develop a method to extrapolate results from a previously published village scale ecosystem services assessment to a higher administrative level, relevant for land use decision making. The method combines remote sensing (using a hybrid classification method) and interviews with community members. The resulting landscape scale maps show the spatial distribution of five different livelihood benefits (nutritional diversity, income, insurance/saving, material assets and energy, and crops for consumption) that illustrate the strong multifunctionality of the Sahelian landscapes. The maps highlight the importance of a diverse set of sub-units of the landscape in supporting Sahelian livelihoods. We see a large potential in using the resulting type of livelihood benefit maps for guiding future land use decisions in the Sahel. PMID:29389965

  10. Mapping regional livelihood benefits from local ecosystem services assessments in rural Sahel.

    PubMed

    Malmborg, Katja; Sinare, Hanna; Enfors Kautsky, Elin; Ouedraogo, Issa; Gordon, Line J

    2018-01-01

    Most current approaches to landscape scale ecosystem service assessments rely on detailed secondary data. This type of data is seldom available in regions with high levels of poverty and strong local dependence on provisioning ecosystem services for livelihoods. We develop a method to extrapolate results from a previously published village scale ecosystem services assessment to a higher administrative level, relevant for land use decision making. The method combines remote sensing (using a hybrid classification method) and interviews with community members. The resulting landscape scale maps show the spatial distribution of five different livelihood benefits (nutritional diversity, income, insurance/saving, material assets and energy, and crops for consumption) that illustrate the strong multifunctionality of the Sahelian landscapes. The maps highlight the importance of a diverse set of sub-units of the landscape in supporting Sahelian livelihoods. We see a large potential in using the resulting type of livelihood benefit maps for guiding future land use decisions in the Sahel.

  11. Coastal hazards in a changing world: projecting and communicating future coastal flood risk at the local-scale using the Coastal Storm Modeling System (CoSMoS)

    NASA Astrophysics Data System (ADS)

    O'Neill, Andrea; Barnard, Patrick; Erikson, Li; Foxgrover, Amy; Limber, Patrick; Vitousek, Sean; Fitzgibbon, Michael; Wood, Nathan

    2017-04-01

    The risk of coastal flooding will increase for many low-lying coastal regions as predominant contributions to flooding, including sea level, storm surge, wave setup, and storm-related fluvial discharge, are altered with climate change. Community leaders and local governments therefore look to science to provide insight into how climate change may affect their areas. Many studies of future coastal flooding vulnerability consider sea level and tides, but ignore other important factors that elevate flood levels during storm events, such as waves, surge, and discharge. Here we present a modelling approach that considers a broad range of relevant processes contributing to elevated storm water levels for open coast and embayment settings along the U.S. West Coast. Additionally, we present online tools for communicating community-relevant projected vulnerabilities. The Coastal Storm Modeling System (CoSMoS) is a numerical modeling system developed to predict coastal flooding due to both sea-level rise (SLR) and plausible 21st century storms for active-margin settings like the U.S. West Coast. CoSMoS applies a predominantly deterministic framework of multi-scale models encompassing large geographic scales (100s to 1000s of kilometers) to small-scale features (10s to 1000s of meters), resulting in flood extents that can be projected at a local resolution (2 meters). In the latest iteration of CoSMoS applied to Southern California, U.S., efforts were made to incorporate water level fluctuations in response to regional storm impacts, locally wind-generated waves, coastal river discharge, and decadal-scale shoreline and cliff changes. Coastal hazard projections are available in a user-friendly web-based tool (www.prbo.org/ocof), where users can view variations in flood extent, maximum flood depth, current speeds, and wave heights in response to a range of potential SLR and storm combinations, providing direct support to adaptation and management decisions. In order to capture the societal aspect of the hazard, projections are combined with socioeconomic exposure to produce clear, actionable information (https://www.usgs.gov/apps/hera/); this integrated approach to hazard displays provides an example of how to effectively translate complex climate impacts projections into simple, societally-relevant information.

  12. Global evolutionary isolation measures can capture key local conservation species in Nearctic and Neotropical bird communities

    PubMed Central

    Redding, David W.; Mooers, Arne O.; Şekercioğlu, Çağan H.; Collen, Ben

    2015-01-01

    Understanding how to prioritize among the most deserving imperilled species has been a focus of biodiversity science for the past three decades. Though global metrics that integrate evolutionary history and likelihood of loss have been successfully implemented, conservation is typically carried out at sub-global scales on communities of species rather than among members of complete taxonomic assemblages. Whether and how global measures map to a local scale has received little scrutiny. At a local scale, conservation-relevant assemblages of species are likely to be made up of relatively few species spread across a large phylogenetic tree, and as a consequence there are potentially relatively large amounts of evolutionary history at stake. We ask to what extent global metrics of evolutionary history are useful for conservation priority setting at the community level by evaluating the extent to which three global measures of evolutionary isolation (evolutionary distinctiveness (ED), average pairwise distance (APD) and the pendant edge or unique phylogenetic diversity (PD) contribution) capture community-level phylogenetic and trait diversity for a large sample of Neotropical and Nearctic bird communities. We find that prioritizing the most ED species globally safeguards more than twice the total PD of local communities on average, but that this does not translate into increased local trait diversity. By contrast, global APD is strongly related to the APD of those same species at the community level, and prioritizing these species also safeguards local PD and trait diversity. The next step for biologists is to understand the variation in the concordance of global and local level scores and what this means for conservation priorities: we need more directed research on the use of different measures of evolutionary isolation to determine which might best capture desirable aspects of biodiversity. PMID:25561674
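
    Of the isolation measures compared above, evolutionary distinctiveness (ED) under the "fair proportion" definition is easy to state: each branch length is divided equally among the leaves descending from it, and a species' ED is the sum of its shares along the path to the root. The sketch below computes this on a toy three-species tree; the tree and its encoding are illustrative assumptions, not the study's data.

    ```python
    # Minimal sketch (toy tree, not the study's phylogeny): evolutionary
    # distinctiveness as the "fair proportion" measure.
    # Tree encoded as node -> (parent, branch_length); None marks the root.
    tree = {
        "root": (None, 0.0),
        "X":    ("root", 2.0),
        "A":    ("X", 1.0),
        "B":    ("X", 1.0),
        "C":    ("root", 5.0),
    }
    leaves = ["A", "B", "C"]

    def ancestors(node):
        out = []
        while tree[node][0] is not None:
            node = tree[node][0]
            out.append(node)
        return out

    def descendant_leaf_count(node):
        """Number of leaves at or below a node."""
        return sum(1 for leaf in leaves if leaf == node or node in ancestors(leaf))

    def evolutionary_distinctiveness(leaf):
        """Fair-proportion ED: branch_length / n_descendant_leaves summed along the path to root."""
        ed = tree[leaf][1]                  # the leaf's own pendant edge
        for anc in ancestors(leaf):
            if tree[anc][0] is None:        # skip the root's (zero-length) edge
                continue
            ed += tree[anc][1] / descendant_leaf_count(anc)
        return ed

    for leaf in leaves:
        print(leaf, evolutionary_distinctiveness(leaf))  # A=2.0, B=2.0, C=5.0
    ```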

  13. From bird's eye views to molecular communities: two-layered visualization of structure-activity relationships in large compound data sets

    NASA Astrophysics Data System (ADS)

    Kayastha, Shilva; Kunimoto, Ryo; Horvath, Dragos; Varnek, Alexandre; Bajorath, Jürgen

    2017-11-01

    The analysis of structure-activity relationships (SARs) becomes rather challenging when large and heterogeneous compound data sets are studied. In such cases, many different compounds and their activities need to be compared, which quickly goes beyond the capacity of subjective assessments. For a comprehensive large-scale exploration of SARs, computational analysis and visualization methods are required. Herein, we introduce a two-layered SAR visualization scheme specifically designed for increasingly large compound data sets. The approach combines a new compound pair-based variant of generative topographic mapping (GTM), a machine learning approach for nonlinear mapping, with chemical space networks (CSNs). The GTM component provides a global view of the activity landscapes of large compound data sets, in which informative local SAR environments are identified, augmented by a numerical SAR scoring scheme. Prioritized local SAR regions are then projected into CSNs that resolve these regions at the level of individual compounds and their relationships. Analysis of CSNs makes it possible to distinguish between regions having different SAR characteristics and select compound subsets that are rich in SAR information.

  14. Newspaper coverage of controversies about large-scale swine facilities in rural communities in Illinois.

    PubMed

    Reisner, A E

    2005-11-01

    The building and expansion of large-scale swine facilities has created considerable controversy in many neighboring communities, but to date, no systematic analysis has been done of the types of claims made during these conflicts. This study examined how local newspapers in one state covered the transition from the dominance of smaller, diversified swine operations to large, single-purpose pig production facilities. To look at publicly made statements concerning large-scale swine facilities (LSSF), the study collected all articles related to LSSF from 22 daily Illinois newspapers over a 3-yr period (a total of 1,737 articles). The most frequent sets of claims used by proponents of LSSF were that the environment was not harmed, that state regulations were sufficiently strict, and that the state economically needed this type of agriculture. The most frequent claims made by opponents were that LSSF harmed the environment and neighboring communities and that stricter regulations were needed. Proponents' claims were primarily defensive and, to some degree, underplayed the advantages of LSSF. Pro- and anti-LSSF groups were talking at cross-purposes, to some degree. Even across similar themes, those in favor of LSSF and those opposed were addressing different sets of concerns. The newspaper claims did not indicate any effective alliances forming between local anti-LSSF groups and national environmental or animal rights groups.

  15. A multiscale numerical study into the cascade of kinetic energy leading to severe local storms

    NASA Technical Reports Server (NTRS)

    Paine, D. A.; Kaplan, M. L.

    1977-01-01

    The cascade of kinetic energy from macro- through mesoscales is studied on the basis of a nested grid system used to solve a set of nonlinear differential equations. The kinetic energy cascade and the concentration of vorticity through the hydrodynamic spectrum provide a means for predicting the location and intensity of severe weather from large-scale data sets. A mechanism described by the surface pressure tendency equation proves to be important in explaining how initial middle-tropospheric mass-momentum imbalances alter the low-level pressure field.

  16. Spatiotemporal correlation structure of the Earth's surface temperature

    NASA Astrophysics Data System (ADS)

    Fredriksen, Hege-Beate; Rypdal, Kristoffer; Rypdal, Martin

    2015-04-01

    We investigate the spatiotemporal temperature variability for several gridded instrumental and climate model data sets. The temporal variability is analysed by estimating the power spectral density and studying the differences between local and global temperatures, land and sea, and among local temperature records at different locations. The spatiotemporal correlation structure is analysed through cross-spectra that allow us to compute frequency-dependent spatial autocorrelation functions (ACFs). Our results are then compared to theoretical spectra and frequency-dependent spatial ACFs derived from a fractional stochastic-diffusive energy balance model (FEBM). From the FEBM we expect both local and global temperatures to exhibit long-range persistent temporal behaviour, and the spectral exponent (β) is expected to increase by a factor of two when going from local to global scales. Our comparison of the average local spectrum and the global spectrum shows good agreement with this model, although the FEBM has so far only been studied for a pure land planet and a pure ocean planet, with no seasonal forcing. Hence it cannot capture the substantial variability among the local spectra, in particular between the spectra for land and sea, and for equatorial and non-equatorial temperatures. Both models and observational data show that land temperatures in general have a low persistence, while sea surface temperatures show a higher, and also more variable, degree of persistence. Near the equator the spectra deviate from the power-law shape expected from the FEBM. Instead we observe large variability at time scales of a few years due to ENSO, and a flat spectrum at longer time scales, making the spectrum more reminiscent of that of a red noise process. From the frequency-dependent spatial ACFs we observe that the spatial correlation length increases with increasing time scale, which is also consistent with the FEBM. One consequence of this is that longer-lasting structures must also be wider in space. The spatial correlation length is also observed to be longer for land than for sea. The climate model simulations studied are mainly CMIP5 control runs of length 500-1000 yr. On time scales up to several centuries we do not observe that the difference between the local and global spectral exponents vanishes. This also follows from the FEBM and shows that the dynamics is spatiotemporal (not just temporal) even on these time scales.
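
    The spectral exponent β referred to above is the slope of the power spectral density in log-log space, S(f) ∝ f^(−β). The sketch below shows one simple way to estimate it from a single time series via a periodogram fit; the synthetic series and the plain least-squares fit are assumptions, not the authors' estimation procedure.

    ```python
    # Minimal sketch (synthetic series, not the study's data): estimating the
    # spectral exponent beta from S(f) ~ f**(-beta) via a log-log periodogram fit.
    import numpy as np

    def spectral_exponent(series, dt=1.0):
        series = np.asarray(series, dtype=float)
        series = series - series.mean()
        spectrum = np.abs(np.fft.rfft(series)) ** 2      # periodogram (unnormalised)
        freqs = np.fft.rfftfreq(series.size, d=dt)
        mask = freqs > 0                                  # drop the zero frequency
        slope, _ = np.polyfit(np.log(freqs[mask]), np.log(spectrum[mask]), 1)
        return -slope                                     # beta

    # Example: a random-walk-like series should give beta near 2.
    rng = np.random.default_rng(2)
    series = np.cumsum(rng.standard_normal(4096))
    print(round(spectral_exponent(series), 2))
    ```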

  17. Study of solar wind spectra by nonlinear waves interaction

    NASA Astrophysics Data System (ADS)

    Dwivedi, Navin; Sharma, Rampal; Narita, Yasuhito

    2014-05-01

    The nature of small-scale turbulent fluctuations in solar wind (SW) turbulence is a topic that is being investigated extensively nowadays, both theoretically and observationally. Recent observations point to the dominance of kinetic Alfvén waves (KAW) at sub-ion scales, with frequencies below the ion cyclotron frequency, while other studies suggest that the KAW mode cannot carry the turbulence cascade down to electron scales and that the whistler mode is more relevant. In the present work, the nonlinear interaction of a kinetic Alfvén wave with a whistler wave is considered as one of the possible causes of solar wind turbulence. A set of coupled dimensionless equations is derived for intermediate-beta plasmas, and the nonlinear interaction between these two wave modes has been studied. As a consequence of ponderomotive nonlinearity, the pump KAW becomes filamented when its power exceeds the threshold for the filamentation instability. The whistler is considered to be weak and thus does not have enough intensity to initiate its own localization; it becomes localized while propagating through the density channel created by the KAW localization. In addition, the spectral scalings of the power spectra of the KAW and the whistler are calculated. Steeper spectra are found, with spectral indices steeper than -5/3. This type of nonlinear interaction between different wave modes and the resulting steeper spectra are among the causes of solar wind turbulence and particle acceleration. This work is partially supported by DST (India) and FP7/STORM (313038).

  18. Participatory approach: from problem identification to setting strategies for increased productivity and sustainability in small scale irrigated agriculture

    NASA Astrophysics Data System (ADS)

    Habtu, Solomon; Ludi, Eva; Jamin, Jean Yves; Oates, Naomi; Fissahaye Yohannes, Degol

    2014-05-01

    Practicing various innovations pertinent to irrigated farming at the local field scale is instrumental in increasing productivity and yield for smallholder farmers in Africa. However, the translation of innovations from the local scale to the scale of a jointly operated irrigation scheme is far from trivial. It requires insight into the drivers for adoption of local innovations within the wider farmer communities. Participatory methods are expected not only to improve the acceptance of locally developed innovations within the wider farmer communities, but also to allow an estimation of the extent to which changes will occur within the entire irrigation scheme. On such a basis, more realistic scenarios of future water productivity within an irrigation scheme operated by smallholder farmers can be estimated. An initial participatory problem and innovation appraisal was conducted in the Gumselassa small-scale irrigation scheme, Ethiopia, from Feb 27 to March 3, 2012, as part of the EAU4FOOD project funded by the EC. The objective was to identify and appraise problems which hinder sustainable water management to enhance production and productivity, and to identify future research strategies. Workshops were conducted both at local (Community of Practices) and regional (Learning Practice Alliance) levels. At the local level, intensive collaboration with farmers using participatory methods produced problem trees, and a "Photo Safari" documented a range of problems that negatively impact productive irrigated farming. A range of participatory methods was also used to identify local innovations. At the regional level, a Learning Platform was established that includes a wide range of stakeholders (technical experts from various government ministries, policy makers, farmers, extension agents, researchers). This stakeholder group carried out a range of exercises as well to identify major problems related to irrigated smallholder farming and already identified innovations. Both groups identified similar problems affecting productive smallholder irrigation: soil nutrient depletion, salinization, disease and pests resulting from inefficient irrigation practices, and infrastructure problems leading to a reduction of the size of the command area and a decrease in reservoir volume. The major causes have been poor irrigation infrastructure, poor on-farm soil and water management, prevalence of various crop pests and diseases, lack of inputs, and reservoir siltation. On-farm participatory research focusing on soil, crop and water management issues, including technical, institutional and managerial aspects, to identify best-performing innovations while taking care of the environment was recommended. Currently, a range of interlinked activities is implemented at multiple scales, combining participatory and scientific approaches towards innovation development and up-scaling of promising technologies and institutional and managerial approaches from local to regional scales. Key words: Irrigation scheme, productivity, innovation, participatory method, Gumselassa, Ethiopia

  19. DISPATCH: a numerical simulation framework for the exa-scale era - I. Fundamentals

    NASA Astrophysics Data System (ADS)

    Nordlund, Åke; Ramsey, Jon P.; Popovas, Andrius; Küffmeier, Michael

    2018-06-01

    We introduce a high-performance simulation framework that permits the semi-independent, task-based solution of sets of partial differential equations, typically manifesting as updates to a collection of `patches' in space-time. A hybrid MPI/OpenMP execution model is adopted, where work tasks are controlled by a rank-local `dispatcher' which selects, from a set of tasks generally much larger than the number of physical cores (or hardware threads), tasks that are ready for updating. The definition of a task can vary, for example, with some solving the equations of ideal magnetohydrodynamics (MHD), others non-ideal MHD, radiative transfer, or particle motion, and yet others applying particle-in-cell (PIC) methods. Tasks do not have to be grid based, while tasks that are, may use either Cartesian or orthogonal curvilinear meshes. Patches may be stationary or moving. Mesh refinement can be static or dynamic. A feature of decisive importance for the overall performance of the framework is that time-steps are determined and applied locally; this allows potentially large reductions in the total number of updates required in cases when the signal speed varies greatly across the computational domain, and therefore a corresponding reduction in computing time. Another feature is a load balancing algorithm that operates `locally' and aims to simultaneously minimize load and communication imbalance. The framework generally relies on already existing solvers, whose performance is augmented when run under the framework, due to more efficient cache usage, vectorization, local time-stepping, plus near-linear and, in principle, unlimited OpenMP and MPI scaling.
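
    The decisive feature named above, locally determined time-steps, can be illustrated with a toy dispatcher: each patch advances with its own time-step and the scheduler always updates whichever patch lags furthest behind. The sketch below is a deliberately simplified illustration of that idea, not the DISPATCH framework's actual task model.

    ```python
    # Toy illustration (not the DISPATCH code): local time-stepping via a task queue.
    # Each "patch" carries its own time and time-step; the dispatcher always updates
    # the patch that lags furthest behind, so fast regions take many small steps
    # while slow regions take few large ones.
    import heapq

    patches = [
        {"name": "shock_region", "t": 0.0, "dt": 0.01},   # signal-speed-limited: small step
        {"name": "quiet_region", "t": 0.0, "dt": 0.10},   # slowly evolving: large step allowed
    ]
    t_end = 0.5

    queue = [(p["t"], i) for i, p in enumerate(patches)]
    heapq.heapify(queue)
    updates = 0

    while queue:
        t, i = heapq.heappop(queue)            # patch furthest behind in time
        patch = patches[i]
        if patch["t"] >= t_end:                # this patch is done; drop it
            continue
        patch["t"] += patch["dt"]              # "solve" the patch over its own step
        updates += 1
        heapq.heappush(queue, (patch["t"], i))

    global_updates = len(patches) * round(t_end / min(p["dt"] for p in patches))
    print(f"{updates} local updates vs. {global_updates} with one global time-step")
    ```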

  20. Airframe Noise Results from the QTD II Flight Test Program

    NASA Technical Reports Server (NTRS)

    Elkoby, Ronen; Brusniak, Leon; Stoker, Robert W.; Khorrami, Mehdi R.; Abeysinghe, Amal; Moe, Jefferey W.

    2007-01-01

    With continued growth in air travel, sensitivity to community noise intensifies and materializes in the form of increased monitoring, regulations, and restrictions. Accordingly, realization of quieter aircraft is imperative, albeit only achievable with reduction of both engine and airframe components of total aircraft noise. Model-scale airframe noise testing has aided in this pursuit; however, the results are somewhat limited due to lack of fidelity of model hardware, particularly in simulating full-scale landing gear. Moreover, simulation of true in-flight conditions is non-trivial if not infeasible. This paper reports on an investigation of full-scale landing gear noise measured as part of the 2005 Quiet Technology Demonstrator 2 (QTD2) flight test program. Conventional Boeing 777-300ER main landing gear were tested, along with two noise reduction concepts, namely a toboggan fairing and gear alignment with the local flow, both of which were down-selected from various other noise reduction devices evaluated in model-scale testing at Virginia Tech. The full-scale toboggan fairings were designed by Goodrich Aerostructures as add-on devices allowing for complete retraction of the main gear. The baseline-conventional gear, faired gear, and aligned gear were all evaluated with the high-lift system in the retracted position and deployed at various flap settings, all at engine idle power setting. Measurements were taken with flyover community noise microphones and a large aperture acoustic phased array, yielding far-field spectra, and localized sources (beamform maps). The results were utilized to evaluate qualitatively and quantitatively the merit of each noise reduction concept. Complete similarity between model-scale and full-scale noise reduction levels was not found and requires further investigation. Far-field spectra exhibited no noise reduction for both concepts across all angles and frequencies. Phased array beamform maps show inconclusive evidence of noise reduction at selective frequencies (1500 to 3000 Hz) but are otherwise in general agreement with the far-field spectra results (within measurement uncertainty).

  1. Radiometric & Geometric normalization of Sentinel optical data and VHR data to build-up time-series, an example in Tonga for the monitoring of mangrove health vs. climate change

    NASA Astrophysics Data System (ADS)

    Serra, Romain; Valette, Anne; Taji, Amine; Emsley, Stephen

    2017-04-01

    Building climate resilience (i.e. climate change adaptation or self-renew of ecosystems) or planning environment rehabilitations and nature-based solutions to address their vulnerabilities to disturbances has prerequisites: 1- identify the disorder, i.e. stresses caused by events such as hurricanes, tsunamis, heavy rains, hailstone falls, smog… or piled-up along-time such as warming, rainfalls, ocean acidification, soil salinization… and measured by trends; and 2- qualify its impact on the ecosystems, i.e. the resulting strains. Mitigation of threats is accordingly twofold, i. on locally temporal scales for protection, ii. on long scale for prevention and sustainability. For assessment and evaluation prior to design future scenarios, it requires concomitant acquisition of (a) climate data at global and local spatial scale which describe the changes at the various temporal scales of phenomena without signal aliasing, and of (b) the ecosystems' status at the scales of the forcing and of relaxation times, hysteresis lags, periodicities of orbits in chaotic systems, shifts from one attractor in ecosystems to the others, etc. Dissociating groups of timescales and spatial scales facilitates the analysis and help set-up monitoring schemes. The Sentinel-2 mission, with a revisit of the earth every few days and a 10m resolution on-ground is a good automatic spectro-analytical monitoring system because detecting changes in numerous optical & IR bands at proper spatial scales for the description of land parcels. Combined with photo-interpreted VHR data which describe the environment more crudely but with high precision of land parcels' border locations, it helps find the relationship between stress and strains to empirically understand the relationships. An example is provided for Tonga, courtesy of ESA support and ADB request, with a focus on time-series' consistency that requires radiometric and geometric normalisation of EO data sets. Methodologies have been developed in the frame of ESA programs and EC program (H2020 Co-Resyf).

  2. Landscape- and local-scale habitat influences on occupancy and detection probability of stream-dwelling crayfish: Implications for conservation

    USGS Publications Warehouse

    Magoulick, Daniel D.; DiStefano, Robert J.; Imhoff, Emily M.; Nolen, Matthew S.; Wagner, Brian K.

    2017-01-01

    Crayfish are ecologically important in freshwater systems worldwide and are imperiled in North America and globally. We sought to examine landscape- to local-scale environmental variables related to occupancy and detection probability of a suite of stream-dwelling crayfish species. We used a quantitative kickseine method to sample crayfish presence at 102 perennial stream sites with eight surveys per site. We modeled occupancy (ψ) and detection probability (P) in relation to local- and landscape-scale environmental covariates. We developed a set of a priori candidate models for each species and ranked models using (Q)AICc. Detection probabilities and occupancy estimates differed among crayfish species, with Orconectes eupunctus, O. marchandi, and Cambarus hubbsi being relatively rare (ψ < 0.20) with moderate (0.46–0.60) to high (0.81) detection probability, and O. punctimanus and O. ozarkae being relatively common (ψ > 0.60) with high detection probability (0.81). Detection probability was often related to the local habitat variables current velocity, depth, or substrate size. Important environmental variables for crayfish occupancy were species dependent but were mainly landscape variables such as stream order, geology, slope, topography, and land use. Landscape variables strongly influenced crayfish occupancy and should be considered in future studies and conservation plans.
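
    The core of the occupancy modelling referred to above is a likelihood that separates true absence from non-detection. The sketch below maximises that likelihood for the simplest case of constant ψ and p on made-up detection histories; the study itself uses covariates and (Q)AICc model ranking, which are not reproduced here.

    ```python
    # Minimal sketch (made-up detection histories): the single-season occupancy
    # likelihood with constant occupancy (psi) and detection probability (p).
    import numpy as np
    from scipy.optimize import minimize

    # Rows = sites, columns = repeat surveys (1 = detected, 0 = not detected).
    histories = np.array([
        [1, 0, 1, 0, 1, 0, 0, 1],
        [0, 0, 0, 0, 0, 0, 0, 0],
        [1, 1, 0, 1, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0, 0, 0],
    ])

    def neg_log_lik(params):
        psi, p = 1 / (1 + np.exp(-params))      # logit-scale parameters -> (0, 1)
        k = histories.shape[1]
        detections = histories.sum(axis=1)
        # Site detected at least once: occupied and that exact history observed.
        lik_detected = psi * p ** detections * (1 - p) ** (k - detections)
        # Site never detected: either occupied but always missed, or unoccupied.
        lik_none = psi * (1 - p) ** k + (1 - psi)
        lik = np.where(detections > 0, lik_detected, lik_none)
        return -np.log(lik).sum()

    fit = minimize(neg_log_lik, x0=np.array([0.0, 0.0]))
    psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
    print(f"psi = {psi_hat:.2f}, p = {p_hat:.2f}")
    ```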

  3. Intrinsic anomalous surface roughening of TiN films deposited by reactive sputtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auger, M. A.; Vazquez, L.

    2006-01-15

    We study surface kinetic roughening of TiN films grown on Si(100) substrates by dc reactive sputtering. The surface morphology of films deposited for different growth times under the same experimental conditions was analyzed by atomic force microscopy. The TiN films exhibit intrinsic anomalous scaling and multiscaling. The film kinetic roughening is characterized by a set of local exponent values α_loc = 1.0 and β_loc = 0.39, and global exponent values α = 1.7 and β = 0.67, with a coarsening exponent of 1/z = 0.39. These properties are correlated with the local height-difference distribution function obeying power-law statistics. We associate this intrinsic anomalous scaling with the instability due to nonlocal shadowing effects that take place during thin-film growth by sputtering.
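
    The local roughness exponent α_loc quoted above is commonly extracted from the height-difference (structure) function, S(r) = ⟨(h(x+r) − h(x))²⟩ ∝ r^(2α_loc) at small r. The sketch below estimates it for a synthetic one-dimensional profile; the profile and lag range are assumptions, not the AFM data.

    ```python
    # Minimal sketch (synthetic profile, not the AFM data): local roughness exponent
    # alpha_loc from the second-order height-difference (structure) function,
    # S(r) = <(h(x + r) - h(x))**2> ~ r**(2 * alpha_loc) at small r.
    import numpy as np

    def structure_function(h, max_lag):
        lags = np.arange(1, max_lag + 1)
        s = np.array([np.mean((h[lag:] - h[:-lag]) ** 2) for lag in lags])
        return lags, s

    def local_roughness_exponent(h, max_lag=32):
        lags, s = structure_function(h, max_lag)
        slope, _ = np.polyfit(np.log(lags), np.log(s), 1)
        return 0.5 * slope                     # since S(r) ~ r**(2*alpha_loc)

    # Synthetic self-affine profile (integrated noise gives alpha_loc near 0.5).
    rng = np.random.default_rng(3)
    profile = np.cumsum(rng.standard_normal(8192))
    print(round(local_roughness_exponent(profile), 2))
    ```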

  4. Development and validation of a socioculturally competent trust in physician scale for a developing country setting.

    PubMed

    Gopichandran, Vijayaprasad; Wouters, Edwin; Chetlapalli, Satish Kumar

    2015-05-03

    Trust in physicians is the unwritten covenant between the patient and the physician that the physician will do what is in the best interest of the patient. This forms the undercurrent of all healthcare relationships. Several scales exist for assessment of trust in physicians in developed healthcare settings, but to our knowledge none of these have been developed in a developing country context. To develop and validate a new trust in physician scale for a developing country setting. Dimensions of trust in physicians, which were identified in a previous qualitative study in the same setting, were used to develop a scale. This scale was administered to 616 adults selected from urban and rural areas of Tamil Nadu, south India, using a multistage-sampling, cross-sectional survey method. The individual items were analysed using a classical test approach as well as item response theory. Cronbach's α was calculated and the item-to-total correlation of each item was assessed. After testing for unidimensionality and absence of local dependence, a two-parameter logistic Samejima graded response model was fit and item characteristics assessed. Competence, assurance of treatment, respect for the physician and loyalty to the physician were important dimensions of trust. A total of 31 items were developed using these dimensions. Of these, 22 were selected for final analysis. The Cronbach's α was 0.928. The item-to-total correlations were acceptable for all the 22 items. The item response analysis revealed good item characteristic curves and item information for all the items. Based on the item parameters and item information, a final 12-item scale was developed. The scale performs optimally in the low to moderate trust range. The final 12-item trust in physician scale has good construct validity and internal consistency. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
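
    Two of the classical-test-theory quantities reported above, Cronbach's α and item-to-total correlations, are straightforward to compute from a respondents-by-items score matrix. The sketch below does so on randomly generated responses; the data and the 616-by-22 shape are only placeholders echoing the study's dimensions.

    ```python
    # Minimal sketch (random data, not the survey responses): Cronbach's alpha and
    # corrected item-to-total correlations, the classical-test-theory checks above.
    import numpy as np

    def cronbach_alpha(items):
        """items: respondents x items matrix of scored responses."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)

    def item_total_correlations(items):
        """Correlation of each item with the total score of the remaining items."""
        items = np.asarray(items, dtype=float)
        total = items.sum(axis=1)
        return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                         for j in range(items.shape[1])])

    # Example with hypothetical 5-point responses from 616 respondents to 22 items.
    rng = np.random.default_rng(4)
    latent = rng.normal(size=(616, 1))
    responses = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(616, 22))), 1, 5)
    print(round(cronbach_alpha(responses), 2))
    print(item_total_correlations(responses).round(2))
    ```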

  5. Development and validation of a socioculturally competent trust in physician scale for a developing country setting

    PubMed Central

    Gopichandran, Vijayaprasad; Wouters, Edwin; Chetlapalli, Satish Kumar

    2015-01-01

    Trust in physicians is the unwritten covenant between the patient and the physician that the physician will do what is in the best interest of the patient. This forms the undercurrent of all healthcare relationships. Several scales exist for assessment of trust in physicians in developed healthcare settings, but to our knowledge none of these have been developed in a developing country context. Objectives To develop and validate a new trust in physician scale for a developing country setting. Methods Dimensions of trust in physicians, which were identified in a previous qualitative study in the same setting, were used to develop a scale. This scale was administered to 616 adults selected from urban and rural areas of Tamil Nadu, south India, using a multistage-sampling, cross-sectional survey method. The individual items were analysed using a classical test approach as well as item response theory. Cronbach's α was calculated and the item-to-total correlation of each item was assessed. After testing for unidimensionality and absence of local dependence, a two-parameter logistic Samejima graded response model was fit and item characteristics assessed. Results Competence, assurance of treatment, respect for the physician and loyalty to the physician were important dimensions of trust. A total of 31 items were developed using these dimensions. Of these, 22 were selected for final analysis. The Cronbach's α was 0.928. The item-to-total correlations were acceptable for all the 22 items. The item response analysis revealed good item characteristic curves and item information for all the items. Based on the item parameters and item information, a final 12-item scale was developed. The scale performs optimally in the low to moderate trust range. Conclusions The final 12-item trust in physician scale has good construct validity and internal consistency. PMID:25941182

  6. Dynamic subfilter-scale stress model for large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Rouhi, A.; Piomelli, U.; Geurts, B. J.

    2016-08-01

    We present a modification of the integral length-scale approximation (ILSA) model originally proposed by Piomelli et al. [Piomelli et al., J. Fluid Mech. 766, 499 (2015), 10.1017/jfm.2015.29] and apply it to plane channel flow and a backward-facing step. In the ILSA models the length scale is expressed in terms of the integral length scale of turbulence and is determined by the flow characteristics, decoupled from the simulation grid. In the original formulation the model coefficient was constant, determined by requiring a desired global contribution of the unresolved subfilter scales (SFSs) to the dissipation rate, known as SFS activity; its value was found by a set of coarse-grid calculations. Here we develop two modifications. We define a measure of SFS activity (based on turbulent stresses), which adds to the robustness of the model, particularly at high Reynolds numbers, and removes the need for the prior coarse-grid calculations: the model coefficient can be computed dynamically and adapt to large-scale unsteadiness. Furthermore, the desired level of SFS activity is now enforced locally (and not integrated over the entire volume, as in the original model), providing better control over model activity and also improving the near-wall behavior of the model. Application of the local ILSA to channel flow and a backward-facing step and comparison with the original ILSA and with the dynamic model of Germano et al. [Germano et al., Phys. Fluids A 3, 1760 (1991), 10.1063/1.857955] show better control over the model contribution in the local ILSA, while the positive properties of the original formulation (including its higher accuracy compared to the dynamic model on coarse grids) are maintained. The backward-facing step also highlights the advantage of the decoupling of the model length scale from the mesh.

  7. Multiscale 3-D shape representation and segmentation using spherical wavelets.

    PubMed

    Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen

    2007-04-01

    This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details.

  8. Multiscale 3-D Shape Representation and Segmentation Using Spherical Wavelets

    PubMed Central

    Nain, Delphine; Haker, Steven; Bobick, Aaron

    2013-01-01

    This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details. PMID:17427745

  9. Assessing future vent opening locations at the Somma-Vesuvio volcanic complex: 1. A new information geodatabase with uncertainty characterizations

    NASA Astrophysics Data System (ADS)

    Tadini, A.; Bisson, M.; Neri, A.; Cioni, R.; Bevilacqua, A.; Aspinall, W. P.

    2017-06-01

    This study presents new and revised data sets about the spatial distribution of past volcanic vents, eruptive fissures, and regional/local structures of the Somma-Vesuvio volcanic system (Italy). The innovative features of the study are the identification and quantification of important sources of uncertainty affecting interpretations of the data sets. In this regard, the spatial uncertainty of each feature is modeled by an uncertainty area, i.e., a geometric element typically represented by a polygon drawn around points or lines. The new data sets have been assembled as an updatable geodatabase that integrates and complements existing databases for Somma-Vesuvio. The data are organized into 4 data sets and stored as 11 feature classes (points and lines for feature locations and polygons for the associated uncertainty areas), totaling more than 1700 elements. More specifically, volcanic vent and eruptive fissure elements are subdivided into feature classes according to their associated eruptive styles: (i) Plinian and sub-Plinian eruptions (i.e., large- or medium-scale explosive activity); (ii) violent Strombolian and continuous ash emission eruptions (i.e., small-scale explosive activity); and (iii) effusive eruptions (including eruptions from both parasitic vents and eruptive fissures). Regional and local structures (i.e., deep faults) are represented as linear feature classes. To support interpretation of the eruption data, additional data sets are provided for Somma-Vesuvio geological units and caldera morphological features. In the companion paper, the data presented here, and the associated uncertainties, are used to develop a first vent opening probability map for the Somma-Vesuvio caldera, with specific attention focused on large or medium explosive events.
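
    To make the notion of an "uncertainty area" concrete, the sketch below buffers a vent (point) or fissure (line) by a radius representing its location uncertainty, which is one simple way to generate such polygons in a GIS workflow. The coordinates and radii are hypothetical, and the geodatabase itself was built with dedicated GIS tooling rather than this toy code.

```python
from shapely.geometry import Point, LineString

def uncertainty_polygon(feature, radius_m):
    """Polygon enclosing a vent (Point) or fissure (LineString) with a
    buffer representing location uncertainty (metres, projected CRS)."""
    return feature.buffer(radius_m)

vent = Point(451_000, 4_519_000)                      # hypothetical UTM coords
fissure = LineString([(450_500, 4_518_800), (451_200, 4_519_400)])
print(uncertainty_polygon(vent, 250.0).area)
print(uncertainty_polygon(fissure, 400.0).area)
```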

  10. Shifting Scales of Education Politics in a Vernacular of Disruption and Dislocation

    ERIC Educational Resources Information Center

    Mawhinney, Hanne B.

    2010-01-01

    Article comments on contributions to an issue of Educational Policy that focuses on glocal politics of education in multiple national and international arenas. Commentary offered considers the ways in which the set of articles in this issue of EP require readers to take scalar leaps across the semiotic landscape of the local into the global. The…

  11. A landscape inventory framework: scenic analyses of the Northern Great Plains

    Treesearch

    Litton R. Burton Jr.; Robert J. Tetlow

    1978-01-01

    A set of four visual inventories are proposed. They are designed to document scenic resources for varied scales of application, from regional and general to local and specific. The Northern Great Plains is used as a case study. Scenic analysis and identification of criteria extend earlier work. The inventory is based on (1) study of previously developed landscape...

  12. A comparison of three methods for measuring local urban tree canopy cover

    Treesearch

    Kristen L. King; Dexter H. Locke

    2013-01-01

    Measurements of urban tree canopy cover are crucial for managing urban forests and required for the quantification of the benefits provided by trees. These types of data are increasingly used to secure funding and justify large-scale planting programs in urban areas. Comparisons of tree canopy measurement methods have been conducted before, but a rapidly evolving set...

  13. A Validity Agenda for Growth Models: One Size Doesn't Fit All!

    ERIC Educational Resources Information Center

    Patelis, Thanos

    2012-01-01

    This is a keynote presentation given at AERA on developing a validity agenda for growth models in a large scale (e.g., state) setting. The emphasis of this presentation was to indicate that growth models and the validity agenda designed to provide evidence in supporting the claims to be made need to be personalized to meet the local or…

  14. Spatiotemporal Dependency of Age-Related Changes in Brain Signal Variability

    PubMed Central

    McIntosh, A. R.; Vakorin, V.; Kovacevic, N.; Wang, H.; Diaconescu, A.; Protzner, A. B.

    2014-01-01

    Recent theoretical and empirical work has focused on the variability of network dynamics in maturation. Such variability seems to reflect the spontaneous formation and dissolution of different functional networks. We sought to extend these observations into healthy aging. Two different data sets, one EEG (total n = 48, ages 18–72) and one magnetoencephalography (n = 31, ages 20–75) were analyzed for such spatiotemporal dependency using multiscale entropy (MSE) from regional brain sources. In both data sets, the changes in MSE were timescale dependent, with higher entropy at fine scales and lower at more coarse scales with greater age. The signals were parsed further into local entropy, related to information processed within a regional source, and distributed entropy (information shared between two sources, i.e., functional connectivity). Local entropy increased for most regions, whereas the dominant change in distributed entropy was age-related reductions across hemispheres. These data further the understanding of changes in brain signal variability across the lifespan, suggesting an inverted U-shaped curve, but with an important qualifier. Unlike earlier in maturation, where the changes are more widespread, changes in adulthood show strong spatiotemporal dependence. PMID:23395850
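
    Multiscale entropy as used here follows a standard two-step recipe: coarse-grain the signal at each timescale, then compute sample entropy of the coarse-grained series. The sketch below is a minimal NumPy implementation of that recipe with typical parameter choices (m = 2, r = 0.15 SD); the study's exact preprocessing and parameters may differ.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D signal (template length m, tolerance r)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * x.std()
    n = len(x)
    emb_m = np.array([x[i:i + m] for i in range(n - m)])
    emb_m1 = np.array([x[i:i + m + 1] for i in range(n - m)])

    def count_matches(templates):
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b, a = count_matches(emb_m), count_matches(emb_m1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def multiscale_entropy(x, scales=range(1, 21), m=2, r_factor=0.15):
    """MSE curve: sample entropy of the coarse-grained signal at each scale."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()            # tolerance fixed from the original series
    mse = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        mse.append(sample_entropy(coarse, m, r))
    return np.array(mse)
```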

  15. Total-energy Assisted Tight-binding Method Based on Local Density Approximation of Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Fujiwara, Takeo; Nishino, Shinya; Yamamoto, Susumu; Suzuki, Takashi; Ikeda, Minoru; Ohtani, Yasuaki

    2018-06-01

    A novel tight-binding method is developed, based on the extended Hückel approximation and charge self-consistency, with reference to the band structure and total energy obtained from the local density approximation of density functional theory. The parameters are adjusted automatically so that the results reproduce the reference band structure and total energy, and an algorithm for determining the parameters is established. The resulting set of parameters is applicable to a variety of crystalline compounds and to changes of lattice constants; in other words, it is transferable. Examples are demonstrated for Si in several crystal structures with varying lattice constants. Since the parameter set is transferable, the present tight-binding method may also be applicable to molecular dynamics simulations of large-scale systems and long-time dynamical processes.
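
    The parameter adjustment described above is essentially a fitting problem: choose tight-binding parameters that minimize the mismatch with reference LDA band energies and total energies. The sketch below shows one generic way to set that up with a least-squares solver; tb_bands and tb_etot are placeholder model functions, and the weighting is an assumption, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_tb_parameters(p0, k_points, ref_bands, ref_etot,
                      tb_bands, tb_etot, w_band=1.0, w_etot=10.0):
    """Adjust tight-binding parameters p so that band energies and the
    total energy reproduce DFT reference data.

    tb_bands(p, k_points) -> array shaped like ref_bands
    tb_etot(p)            -> float
    (user-supplied model functions; placeholders here)."""
    def residuals(p):
        band_res = w_band * (tb_bands(p, k_points) - ref_bands).ravel()
        etot_res = np.atleast_1d(w_etot * (tb_etot(p) - ref_etot))
        return np.concatenate([band_res, etot_res])

    return least_squares(residuals, p0).x
```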

  16. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization.

    PubMed

    Zhang, Yong-Feng; Chiang, Hsiao-Dong

    2017-09-01

    A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
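
    The consensus-based PSO and Trust-Tech stages of the methodology are beyond a short example, but the PSO building block itself is compact. The sketch below is a plain, minimal particle swarm minimizer (not the consensus-based variant), with conventional parameter values; it is included only to illustrate the kind of exploratory search that feeds candidate solutions to the later local-optimization stage.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=40, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer; returns best position and value."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g, g_val = pbest[np.argmin(pbest_val)].copy(), pbest_val.min()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        if vals.min() < g_val:
            g_val, g = vals.min(), x[np.argmin(vals)].copy()
    return g, g_val

# Example: 10-dimensional sphere function
best_x, best_f = pso_minimize(lambda z: float(np.sum(z ** 2)),
                              bounds=[(-5.0, 5.0)] * 10)
```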

  17. Global and local scale flood discharge simulations in the Rhine River basin for flood risk reduction benchmarking in the Flagship Project

    NASA Astrophysics Data System (ADS)

    Gädeke, Anne; Gusyev, Maksym; Magome, Jun; Sugiura, Ai; Cullmann, Johannes; Takeuchi, Kuniyoshi

    2015-04-01

    Global flood risk assessment is a prerequisite for setting the measurable global targets of the post-Hyogo Framework for Action (HFA) that mobilize international cooperation and national coordination towards disaster risk reduction (DRR), and it requires the establishment of a uniform flood risk assessment methodology at various scales. To address these issues, the International Flood Initiative (IFI) launched a Flagship Project in 2013 to support flood risk reduction benchmarking at global, national and local levels. In the Flagship Project road map, it is planned to (1) identify the original risk, (2) identify the reduced risk, and (3) facilitate risk reduction actions. To achieve this goal at global, regional and local scales, international research collaboration is essential, involving domestic and international institutes, academia and research networks such as UNESCO International Centres. The joint collaboration between ICHARM and BfG was the first attempt to produce the first-step (1a) results on flood discharge estimates, with inundation maps under way. As a result of this collaboration, we demonstrate the outcomes of the first step of the IFI Flagship Project to identify flood hazard in the Rhine river basin at the global and local scales. In our assessment, we used a distributed hydrological Block-wise TOP (BTOP) model at 20-km and 0.5-km scales with local precipitation and temperature input data between 1980 and 2004. We used the existing 20-km BTOP model, which is applied globally, and constructed a local-scale 0.5-km BTOP model for the Rhine River basin. Both the calibrated 20-km and 0.5-km BTOP models had similar statistical performance and reproduced observed flood discharges, especially for the 1993 and 1995 floods. From the 20-km and 0.5-km BTOP simulations, flood discharges for selected return periods were estimated using flood frequency analysis and were comparable to river gauging station data for the German part of the Rhine river basin. An important finding is that the 0.5-km and 20-km BTOP models produce similar flood peak discharges, although the 0.5-km results indicate the importance of scale in local flood hazard assessment. In summary, this study serves as a demonstrative example of institutional collaboration and is a stepping stone for the next implementation step of the IFI Flagship Project.
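
    The abstract mentions estimating discharges for selected return periods via flood frequency analysis but does not state the distribution used. As an illustration only, the sketch below fits a Gumbel (EV1) distribution, a common default for annual maxima, to hypothetical simulated annual peak discharges and reads off the return-period quantiles.

```python
import numpy as np
from scipy.stats import gumbel_r

def return_period_discharge(annual_max_q, return_periods=(10, 50, 100)):
    """Fit a Gumbel distribution to annual maximum discharges and return the
    discharge for each return period T (non-exceedance quantile 1 - 1/T)."""
    loc, scale = gumbel_r.fit(annual_max_q)
    return {T: float(gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale))
            for T in return_periods}

# Hypothetical annual maxima (m^3/s) from a 25-year simulation:
q_max = np.array([4200, 5100, 3900, 6100, 4800, 5300, 4500, 7200, 3800, 5600,
                  4900, 5200, 6800, 4400, 5000, 5900, 4700, 6300, 5400, 4600,
                  5800, 5100, 4300, 6600, 5500], dtype=float)
print(return_period_discharge(q_max))
```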

  18. Feasibility of a shorter Goal Attainment Scaling method for a pediatric spasticity clinic - The 3-milestones GAS.

    PubMed

    Krasny-Pacini, A; Pauly, F; Hiebel, J; Godon, S; Isner-Horobeti, M-E; Chevignard, M

    2017-07-01

    Goal Attainment Scaling (GAS) is a method for writing personalized evaluation scales to quantify progress toward defined goals. It is useful in rehabilitation but is hampered by the experience required to adequately "predict" the possible outcomes relating to a particular goal before treatment and the time needed to describe all 5 levels of the scale. Here we aimed to investigate the feasibility of using GAS in a clinical setting of a pediatric spasticity clinic with a shorter method, the "3-milestones" GAS (goal setting with 3 levels and goal rating with the classical 5 levels). Secondary aims were to (1) analyze the types of goals children's therapists set for botulinum toxin treatment and (2) compare the score distribution (and therefore the ability to predict outcome) by goal type. Therapists were trained in GAS writing and prepared GAS scales in the regional spasticity-management clinic they attended with their patients and families. The study included all GAS scales written during a 2-year period. GAS score distribution across the 5 GAS levels was examined to assess whether the therapist could reliably predict outcome and whether the 3-milestones GAS yielded similar distributions as the original GAS method. In total, 541 GAS scales were written and showed the expected score distribution. Most scales (55%) referred to movement quality goals and fewer (29%) to family goals and activity domains. The 3-milestones GAS method was feasible within the time constraints of the spasticity clinic and could be used by local therapists in cooperation with the hospital team. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
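
    GAS scores from individual goals are often aggregated into a single T-score using the standard Kiresuk-Sherman formula, which is what the sketch below computes; whether the clinic described above used this exact aggregation is an assumption, and the default inter-goal correlation rho = 0.3 is the conventional value rather than one taken from the paper.

```python
import math

def gas_t_score(levels, weights=None, rho=0.3):
    """Aggregate Goal Attainment Scaling T-score (Kiresuk & Sherman).

    levels  : attainment per goal on the -2..+2 scale
    weights : relative goal importance (defaults to equal weights)"""
    if weights is None:
        weights = [1.0] * len(levels)
    num = 10.0 * sum(w * x for w, x in zip(weights, levels))
    den = math.sqrt((1.0 - rho) * sum(w * w for w in weights)
                    + rho * sum(weights) ** 2)
    return 50.0 + num / den

print(gas_t_score([0, +1, -1]))   # three goals attained at 0, +1 and -1
```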

  19. Multi-criteria decision analysis of breast cancer control in low- and middle- income countries: development of a rating tool for policy makers.

    PubMed

    Venhorst, Kristie; Zelle, Sten G; Tromp, Noor; Lauer, Jeremy A

    2014-01-01

    The objective of this study was to develop a rating tool for policy makers to prioritize breast cancer interventions in low- and middle- income countries (LMICs), based on a simple multi-criteria decision analysis (MCDA) approach. The definition and identification of criteria play a key role in MCDA, and our rating tool could be used as part of a broader priority setting exercise in a local setting. This tool may contribute to a more transparent priority-setting process and fairer decision-making in future breast cancer policy development. First, an expert panel (n = 5) discussed key considerations for tool development. A literature review followed to inventory all relevant criteria and construct an initial set of criteria. A Delphi study was then performed and questionnaires used to discuss a final list of criteria with clear definitions and potential scoring scales. For this Delphi study, multiple breast cancer policy and priority-setting experts from different LMICs were selected and invited by the World Health Organization. Fifteen international experts participated in all three Delphi rounds to assess and evaluate each criterion. This study resulted in a preliminary rating tool for assessing breast cancer interventions in LMICs. The tool consists of 10 carefully crafted criteria (effectiveness, quality of the evidence, magnitude of individual health impact, acceptability, cost-effectiveness, technical complexity, affordability, safety, geographical coverage, and accessibility), with clear definitions and potential scoring scales. This study describes the development of a rating tool to assess breast cancer interventions in LMICs. Our tool can offer supporting knowledge for the use or development of rating tools as part of a broader (MCDA based) priority setting exercise in local settings. Further steps for improving the tool are proposed and should lead to its useful adoption in LMICs.
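
    A minimal way to turn ratings on the 10 criteria into a single priority score is a weighted sum, sketched below; the 1-5 scoring scale, equal default weights and criterion keys are placeholders, since the actual tool defines its own scoring scales and the broader MCDA exercise involves more than a weighted sum.

```python
CRITERIA = ["effectiveness", "quality_of_evidence", "individual_health_impact",
            "acceptability", "cost_effectiveness", "technical_complexity",
            "affordability", "safety", "geographical_coverage", "accessibility"]

def mcda_score(ratings, weights=None):
    """Weighted-sum score for one intervention.

    ratings : dict criterion -> score (e.g., 1-5)
    weights : dict criterion -> weight (defaults to equal weights)"""
    if weights is None:
        weights = {c: 1.0 for c in CRITERIA}
    total_w = sum(weights[c] for c in CRITERIA)
    return sum(weights[c] * ratings[c] for c in CRITERIA) / total_w

example = {c: 3 for c in CRITERIA}
example["effectiveness"] = 5
example["cost_effectiveness"] = 4
print(round(mcda_score(example), 2))
```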

  20. Validity and reliability of the Malay version of the Hill-Bone compliance to high blood pressure therapy scale for use in primary healthcare settings in Malaysia: A cross-sectional study.

    PubMed

    Cheong, A T; Tong, S F; Sazlina, S G

    2015-01-01

    The Hill-Bone compliance to high blood pressure therapy scale (HBTS) is one of the useful scales in primary care settings. It has been tested in America, Africa and Turkey with variable validity and reliability. The aim of this paper was to determine the validity and reliability of the Malay version of the HBTS (HBTS-M) for the Malaysian population. The HBTS comprises three subscales assessing compliance with medication, appointments and salt intake. The content validity of the HBTS for the local population was agreed through consensus of an expert panel. The 14 items used in the HBTS were adapted to reflect local situations. The scale was translated into Malay and then back-translated into English. The translated version was piloted in 30 participants. This was followed by structural and predictive validity and internal consistency testing in 262 patients with hypertension who had been on antihypertensive agent(s) for at least 1 year in two primary healthcare clinics in Kuala Lumpur, Malaysia. Exploratory factor analyses and the correlation between the HBTS-M total score and blood pressure were performed. Cronbach's alpha was calculated accordingly. Factor analysis revealed a three-component structure represented by two components on medication adherence and one on salt intake adherence. The Kaiser-Meyer-Olkin statistic was 0.764. The variance explained by each factor was 23.6%, 10.4% and 9.8%, respectively. However, the internal consistency of each component was suboptimal, with Cronbach's alpha of 0.64, 0.55 and 0.29, respectively. Although there were two components representing medication adherence, the theoretical constructs underlying them could not be differentiated. In addition, there was no correlation between the HBTS-M total score and blood pressure. The HBTS-M did not conform to the structural and predictive validity of the original scale. Its reliability for assessing medication and salt intake adherence would most probably be suboptimal in the Malaysian primary care setting.
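
    For readers unfamiliar with the reliability statistic reported above, Cronbach's alpha for a component is computed from the item-score matrix as sketched below; the data shown are hypothetical and the function is a generic implementation, not the study's analysis script.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical 5 respondents x 4 items on a 1-4 agreement scale:
scores = [[4, 3, 4, 3], [2, 2, 3, 2], [3, 3, 3, 4], [1, 2, 1, 2], [4, 4, 3, 4]]
print(round(cronbach_alpha(scores), 2))
```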

  1. Observation of Anisotropy in the Arrival Directions of Galactic Cosmic Rays at Multiple Angular Scales with IceCube

    NASA Astrophysics Data System (ADS)

    Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Altmann, D.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; BenZvi, S.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Bose, D.; Böser, S.; Botner, O.; Brown, A. M.; Buitink, S.; Caballero-Mora, K. S.; Carson, M.; Chirkin, D.; Christy, B.; Clem, J.; Clevermann, F.; Cohen, S.; Colnard, C.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Daughhetee, J.; Davis, J. C.; De Clercq, C.; Demirörs, L.; Denger, T.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; DeYoung, T.; Díaz-Vélez, J. C.; Dierckxsens, M.; Dreyer, J.; Dumm, J. P.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Fedynitch, A.; Feintzeig, J.; Feusels, T.; Filimonov, K.; Finley, C.; Fischer-Wasels, T.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Goodman, J. A.; Gora, D.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gurtner, M.; Ha, C.; Hajismail, A.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Heinen, D.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Homeier, A.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Köhne, J.-H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Krings, T.; Kroll, G.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Larson, M. J.; Lauer, R.; Lünemann, J.; Madajczyk, B.; Madsen, J.; Majumdar, P.; Marotta, A.; Maruyama, R.; Mase, K.; Matis, H. S.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Middell, E.; Milke, N.; Miller, J.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Naumann, U.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; O'Murchadha, A.; Ono, M.; Panknin, S.; Paul, L.; Pérez de los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Porrata, R.; Posselt, J.; Price, C. C.; Price, P. B.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Ruhe, T.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Santander, M.; Sarkar, S.; Schatto, K.; Schmidt, T.; Schönwald, A.; Schukraft, A.; Schultes, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stössl, A.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Stür, M.; Sullivan, G. W.; Swillens, Q.; Taavola, H.; Taboada, I.; Tamburro, A.; Tepe, A.; Ter-Antonyan, S.; Tilav, S.; Toale, P. A.; Toscano, S.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; van Santen, J.; Vehring, M.; Voge, M.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Williams, D. 
R.; Wischnewski, R.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, C.; Xu, X. W.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; Zoll, M.; IceCube Collaboration

    2011-10-01

    Between 2009 May and 2010 May, the IceCube neutrino detector at the South Pole recorded 32 billion muons generated in air showers produced by cosmic rays with a median energy of 20 TeV. With a data set of this size, it is possible to probe the southern sky for per-mil anisotropy on all angular scales in the arrival direction distribution of cosmic rays. Applying a power spectrum analysis to the relative intensity map of the cosmic ray flux in the southern hemisphere, we show that the arrival direction distribution is not isotropic, but shows significant structure on several angular scales. In addition to previously reported large-scale structure in the form of a strong dipole and quadrupole, the data show small-scale structure on scales between 15° and 30°. The skymap exhibits several localized regions of significant excess and deficit in cosmic ray intensity. The relative intensity of the smaller-scale structures is about a factor of five weaker than that of the dipole and quadrupole structure. The most significant structure, an excess localized at (right ascension α = 122.4° and declination δ = -47.4°), extends over at least 20° in right ascension and has a post-trials significance of 5.3σ. The origin of this anisotropy is still unknown.

  2. A new ionospheric storm scale based on TEC and foF2 statistics

    NASA Astrophysics Data System (ADS)

    Nishioka, Michi; Tsugawa, Takuya; Jin, Hidekatsu; Ishii, Mamoru

    2017-01-01

    In this paper, we propose the I-scale, a new ionospheric storm scale for general users in various regions in the world. With the I-scale, ionospheric storms can be classified at any season, local time, and location. Since the ionospheric condition largely depends on many factors such as solar irradiance, energy input from the magnetosphere, and lower atmospheric activity, it had been difficult to scale ionospheric storms, which are mainly caused by solar and geomagnetic activities. In this study, statistical analysis was carried out for total electron content (TEC) and F2 layer critical frequency (foF2) in Japan for 18 years from 1997 to 2014. Seasonal, local time, and latitudinal dependences of TEC and foF2 variabilities are excluded by normalizing each percentage variation using their statistical standard deviations. The I-scale is defined by setting thresholds to the normalized numbers to seven categories: I0, IP1, IP2, IP3, IN1, IN2, and IN3. I0 represents a quiet state, and IP1 (IN1), IP2 (IN2), and IP3 (IN3) represent moderate, strong, and severe positive (negative) storms, respectively. The proposed I-scale can be used for other locations, such as polar and equatorial regions. It is considered that the proposed I-scale can be a standardized scale to help the users to assess the impact of space weather on their systems.
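
    The classification logic described above, normalizing a percentage deviation by its statistical standard deviation and thresholding the result into seven categories, can be sketched as follows. The threshold values used here are placeholders for illustration; the paper defines the actual thresholds.

```python
def i_scale(percent_deviation, sigma, thresholds=(1.0, 2.0, 3.0)):
    """Classify an ionospheric deviation into I0, IP1-IP3 or IN1-IN3.

    percent_deviation : TEC or foF2 percentage departure from its quiet-time level
    sigma             : statistical standard deviation for the same season,
                        local time and location
    thresholds        : multiples of sigma separating moderate/strong/severe
                        storms (placeholder values)."""
    z = percent_deviation / sigma
    t1, t2, t3 = thresholds
    if abs(z) < t1:
        return "I0"
    sign = "P" if z > 0 else "N"
    level = 3 if abs(z) >= t3 else 2 if abs(z) >= t2 else 1
    return f"I{sign}{level}"

print(i_scale(35.0, 14.0))   # hypothetical +2.5 sigma deviation -> "IP2"
```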

  3. Density Functional O(N) Calculations

    NASA Astrophysics Data System (ADS)

    Ordejón, Pablo

    1998-03-01

    We have developed a scheme for performing Density Functional Theory calculations with O(N) scaling (P. Ordejón, E. Artacho and J. M. Soler, Phys. Rev. B 53, 10441 (1996)). The method uses arbitrarily flexible and complete Atomic Orbitals (AO) basis sets. This gives a wide range of choice, from extremely fast calculations with minimal basis sets, to highly accurate calculations with complete sets. The size-efficiency of AO bases, together with the O(N) scaling of the algorithm, allow the application of the method to systems with many hundreds of atoms on single-processor workstations. I will present the SIESTA code (D. Sanchez-Portal, P. Ordejón, E. Artacho and J. M. Soler, Int. J. Quantum Chem. 65, 453 (1997)), in which the method is implemented, with several LDA, LSD and GGA functionals available, and using norm-conserving, non-local pseudopotentials (in the Kleinman-Bylander form) to eliminate the core electrons. The calculation of static properties such as energies, forces, pressure, stress and magnetic moments, as well as molecular dynamics (MD) simulation capabilities (including variable cell shape, constant temperature and constant pressure MD), are fully implemented. I will also show examples of the accuracy of the method, and applications to large-scale materials and biomolecular systems.

  4. A policy analysis of the implementation of a Reproductive Health Vouchers Program in Kenya.

    PubMed

    Abuya, Timothy; Njuki, Rebecca; Warren, Charlotte E; Okal, Jerry; Obare, Francis; Kanya, Lucy; Askew, Ian; Bellows, Ben

    2012-07-23

    Innovative financing strategies such as those that integrate supply and demand elements like the output-based approach (OBA) have been implemented to reduce financial barriers to maternal health services. The Kenyan government with support from the German Development Bank (KfW) implemented an OBA voucher program to subsidize priority reproductive health services. Little evidence exists on the experience of implementing such programs in different settings. We describe the implementation process of the Kenyan OBA program and draw implications for scale up. Policy analysis using document review and qualitative data from 10 in-depth interviews with facility in-charges and 18 with service providers from the contracted facilities, local administration, health and field managers in Kitui, Kiambu and Kisumu districts as well as Korogocho and Viwandani slums in Nairobi. The OBA implementation process was designed in phases providing an opportunity for learning and adapting the lessons to local settings; the design consisted of five components: a defined benefit package, contracting and quality assurance; marketing and distribution of vouchers and claims processing and reimbursement. Key implementation challenges included limited feedback to providers on the outcomes of quality assurance and accreditation and budgetary constraints that limited effective marketing leading to inadequate information to clients on the benefit package. Claims processing and reimbursement was sophisticated but required adherence to time consuming procedures and in some cases private providers complained of low reimbursement rates for services provided. OBA voucher schemes can be implemented successfully in similar settings. For effective scale up, strong partnership will be required between the public and private entities. The government's role is key and should include provision of adequate funding, stewardship and looking for opportunities to utilize existing platforms to scale up such strategies.

  5. A Policy Analysis of the implementation of a Reproductive Health Vouchers Program in Kenya

    PubMed Central

    2012-01-01

    Background Innovative financing strategies such as those that integrate supply and demand elements like the output-based approach (OBA) have been implemented to reduce financial barriers to maternal health services. The Kenyan government with support from the German Development Bank (KfW) implemented an OBA voucher program to subsidize priority reproductive health services. Little evidence exists on the experience of implementing such programs in different settings. We describe the implementation process of the Kenyan OBA program and draw implications for scale up. Methods Policy analysis using document review and qualitative data from 10 in-depth interviews with facility in-charges and 18 with service providers from the contracted facilities, local administration, health and field managers in Kitui, Kiambu and Kisumu districts as well as Korogocho and Viwandani slums in Nairobi. Results The OBA implementation process was designed in phases providing an opportunity for learning and adapting the lessons to local settings; the design consisted of five components: a defined benefit package, contracting and quality assurance; marketing and distribution of vouchers and claims processing and reimbursement. Key implementation challenges included limited feedback to providers on the outcomes of quality assurance and accreditation and budgetary constraints that limited effective marketing leading to inadequate information to clients on the benefit package. Claims processing and reimbursement was sophisticated but required adherence to time consuming procedures and in some cases private providers complained of low reimbursement rates for services provided. Conclusions OBA voucher schemes can be implemented successfully in similar settings. For effective scale up, strong partnership will be required between the public and private entities. The government’s role is key and should include provision of adequate funding, stewardship and looking for opportunities to utilize existing platforms to scale up such strategies. PMID:22823923

  6. [Evaluation of the learning curve of residents in localizing a phantom target with ultrasonography].

    PubMed

    Dessieux, T; Estebe, J-P; Bloc, S; Mercadal, L; Ecoffey, C

    2008-10-01

    Little information is available regarding the learning curve in ultrasonography, and even less for ultrasound-guided regional anesthesia. This study aimed to evaluate, within a training program, the learning curve of 12 residents novice in ultrasonography using a phantom. The twelve trainees were given introductory training consisting of didactic instruction on the various components of the portable ultrasound machine (i.e. on/off button, gain, depth, resolution, and image storage). The students then performed three trials, in two sets of increasing difficulty, at executing predefined tasks: adjustment of the machine, then localization of a small plastic piece inserted into roasting pork (3 cm below the surface). At the end of the evaluation, the residents were asked to insert a 22 G needle into an exact predetermined target (i.e. a point of fascia intersection). The progression of the needle was continuously monitored by ultrasound visualization using injection of a small volume of water (needle perpendicular to the longitudinal plane of the ultrasound beam). Two groups of two different examiners evaluated the residents' skill for each of the three trials (quality, time to perform the machine adjustments, to localize the plastic target, and to hydrolocalize, and volume used for hydrolocalization). After each trial, residents rated their performance using a difficulty scale (0: easy to 10: difficult). All residents performed the adjustments by the last trial of each set, with a learning curve observed in terms of duration. Localization of the plastic piece was achieved by all residents at the 6th trial, with a shorter duration of localization. Hydrolocalization was achieved after the 4th trial by all subjects. The difficulty score was correlated with the number of trials. All these results were independent of the residents' experience in regional anesthesia. Four trials were necessary to correctly adjust the machine, localize a target, and complete hydrolocalization. Ultrasonography in regional anesthesia appears to be a quickly learned technique when using this kind of practical training.

  7. Modular GIS Framework for National Scale Hydrologic and Hydraulic Modeling Support

    NASA Astrophysics Data System (ADS)

    Djokic, D.; Noman, N.; Kopp, S.

    2015-12-01

    Geographic information systems (GIS) have been extensively used for pre- and post-processing of hydrologic and hydraulic models at multiple scales. An extensible GIS-based framework was developed for characterization of drainage systems (stream networks, catchments, floodplain characteristics) and model integration. The framework is implemented as a set of free, open source, Python tools and builds on core ArcGIS functionality and uses geoprocessing capabilities to ensure extensibility. Utilization of COTS GIS core capabilities allows immediate use of model results in a variety of existing online applications and integration with other data sources and applications.The poster presents the use of this framework to downscale global hydrologic models to local hydraulic scale and post process the hydraulic modeling results and generate floodplains at any local resolution. Flow forecasts from ECMWF or WRF-Hydro are downscaled and combined with other ancillary data for input into the RAPID flood routing model. RAPID model results (stream flow along each reach) are ingested into a GIS-based scale dependent stream network database for efficient flow utilization and visualization over space and time. Once the flows are known at localized reaches, the tools can be used to derive the floodplain depth and extent for each time step in the forecast at any available local resolution. If existing rating curves are available they can be used to relate the flow to the depth of flooding, or synthetic rating curves can be derived using the tools in the toolkit and some ancillary data/assumptions. The results can be published as time-enabled spatial services to be consumed by web applications that use floodplain information as an input. Some of the existing online presentation templates can be easily combined with available online demographic and infrastructure data to present the impact of the potential floods on the local community through simple, end user products. This framework has been successfully used in both the data rich environments as well as in locales with minimum available spatial and hydrographic data.

  8. Nonlinear time series analysis of normal and pathological human walking

    NASA Astrophysics Data System (ADS)

    Dingwell, Jonathan B.; Cusumano, Joseph P.

    2000-12-01

    Characterizing locomotor dynamics is essential for understanding the neuromuscular control of locomotion. In particular, quantifying dynamic stability during walking is important for assessing people who have a greater risk of falling. However, traditional biomechanical methods of defining stability have not quantified the resistance of the neuromuscular system to perturbations, suggesting that more precise definitions are required. For the present study, average maximum finite-time Lyapunov exponents were estimated to quantify the local dynamic stability of human walking kinematics. Local scaling exponents, defined as the local slopes of the correlation sum curves, were also calculated to quantify the local scaling structure of each embedded time series. Comparisons were made between overground and motorized treadmill walking in young healthy subjects and between diabetic neuropathic (NP) patients and healthy controls (CO) during overground walking. A modification of the method of surrogate data was developed to examine the stochastic nature of the fluctuations overlying the nominally periodic patterns in these data sets. Results demonstrated that having subjects walk on a motorized treadmill artificially stabilized their natural locomotor kinematics by small but statistically significant amounts. Furthermore, a paradox previously present in the biomechanical literature that resulted from mistakenly equating variability with dynamic stability was resolved. By slowing their self-selected walking speeds, NP patients adopted more locally stable gait patterns, even though they simultaneously exhibited greater kinematic variability than CO subjects. Additionally, the loss of peripheral sensation in NP patients was associated with statistically significant differences in the local scaling structure of their walking kinematics at those length scales where it was anticipated that sensory feedback would play the greatest role. Lastly, stride-to-stride fluctuations in the walking patterns of all three subject groups were clearly distinguishable from linearly autocorrelated Gaussian noise. As a collateral benefit of the methodological approach taken in this study, some of the first steps at characterizing the underlying structure of human locomotor dynamics have been taken. Implications for understanding the neuromuscular control of locomotion are discussed.
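
    As a rough sketch of the central quantity above, the code below estimates the maximum finite-time Lyapunov exponent from a delay-embedded time series in the spirit of Rosenstein's nearest-neighbour divergence method. Embedding dimension, delay and fit range are placeholders, and this simplified version is not the exact procedure used in the study.

```python
import numpy as np

def delay_embed(x, dim=5, tau=10):
    """Delay-embed a 1-D time series into an (n, dim) state-space trajectory."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def max_lyapunov(x, dim=5, tau=10, k_max=50, min_sep=50):
    """Rosenstein-style estimate: slope of the mean log-divergence curve.
    Suitable only for modest series lengths (full pairwise distance matrix)."""
    Y = delay_embed(np.asarray(x, dtype=float), dim, tau)
    n = len(Y)
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    idx = np.arange(n)
    d[np.abs(idx[:, None] - idx[None, :]) < min_sep] = np.inf  # skip near-in-time pairs
    nn = d.argmin(axis=1)                                      # nearest neighbours
    logdiv = []
    for k in range(k_max):
        valid = (idx + k < n) & (nn + k < n)
        sep = np.linalg.norm(Y[idx[valid] + k] - Y[nn[valid] + k], axis=1)
        logdiv.append(np.mean(np.log(sep[sep > 0])))
    slope = np.polyfit(np.arange(k_max), logdiv, 1)[0]
    return slope   # exponent per sample; divide by the sampling interval for 1/s
```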

  9. And young child feeding practices in different country settings.

    PubMed

    Sanghvi, Tina; Jimerson, Ann; Hajeebhoy, Nemat; Zewale, Medhanit; Nguyen, Giang Huong

    2013-09-01

    Alive & Thrive aims to increase exclusive breastfeeding and complementary feeding practices in Bangladesh, Ethiopia, and Vietnam. To develop and execute comprehensive communication strategies adapted to each context. We documented how three countries followed an established iterative planning process, with research steps followed by key decisions, to develop a communication strategy in each country. Secondary analysis and formative research identified the priority practices to focus on, and locally specific constraints to proper infant and young child feeding (IYCF). Communication strategies were then developed based on the social, cultural, economic, epidemiological, media use, and programmatic contexts of each country. There were widespread gaps between recommended and actual feeding practices, and these varied by country. Gaps were identified in household, community, and institutional levels of awareness and skills. Strategies were designed that would enable mothers in each specific setting to adopt practices. To improve priority behaviors, messaging and media strategies addressed the most salient behavioral determinants through face-to-face communication, social mobilization, and mass media. Trials of improved practices (TIPs), concept testing, and pretesting of materials proved useful to verify the relevance and likely effectiveness of communication messages and materials tailored for different audiences in each setting. Coordination and collaboration with multiple stakeholders from the start was important to harmonize messages and approaches, expand geographic coverage to national scale, and sustain the interventions. Our experience with designing large-scale communication strategies for behavior change confirms that systematic analysis and local planning cannot be omitted from the critical process of strategic design tailored to each context. Multiple communication channels matched to media habits in each setting can reach a substantial proportion of mothers and others who influence their IYCF practices. Preliminary data suggest that exposure to mass media plays a critical role in rapidly reaching mothers, household members, community influentials, and health workers on a large scale. Combining face-to-face interventions for mothers with social mobilization and mass media was effective in improving IYCF practices.

  10. Scale dependence of halo bispectrum from non-Gaussian initial conditions in cosmological N-body simulations

    NASA Astrophysics Data System (ADS)

    Nishimichi, Takahiro; Taruya, Atsushi; Koyama, Kazuya; Sabiu, Cristiano

    2010-07-01

    We study the halo bispectrum from non-Gaussian initial conditions. Based on a set of large N-body simulations starting from initial density fields with local-type non-Gaussianity, we find that the halo bispectrum exhibits a strong dependence on the shape and scale of Fourier space triangles near squeezed configurations at large scales. The amplitude of the halo bispectrum roughly scales as f_NL^2. The resultant scaling on the triangular shape is consistent with that predicted by Jeong & Komatsu based on perturbation theory. We systematically investigate this dependence with varying redshifts and halo mass thresholds. It is shown that the f_NL dependence of the halo bispectrum is stronger for more massive haloes at higher redshifts. This feature can be a useful discriminator of inflation scenarios in future deep and wide galaxy redshift surveys.

  11. FLARE: A New User Facility for Laboratory Studies of Multiple-Scale Physics of Magnetic Reconnection and Related Phenomena in Heliophysics and Astrophysics

    NASA Astrophysics Data System (ADS)

    Ji, H.; Bhattacharjee, A.; Goodman, A.; Prager, S.; Daughton, W.; Cutler, R.; Fox, W.; Hoffmann, F.; Kalish, M.; Kozub, T.; Jara-Almonte, J.; Myers, C.; Ren, Y.; Sloboda, P.; Yamada, M.; Yoo, J.; Bale, S. D.; Carter, T.; Dorfman, S.; Drake, J.; Egedal, J.; Sarff, J.; Wallace, J.

    2017-10-01

    The FLARE device (Facility for Laboratory Reconnection Experiments; flare.pppl.gov) is a new laboratory experiment under construction at Princeton with first plasmas expected in the fall of 2017, based on the design of the Magnetic Reconnection Experiment (MRX; mrx.pppl.gov) with much extended parameter ranges. Its main objective is to provide an experimental platform for the studies of magnetic reconnection and related phenomena in the multiple X-line regimes directly relevant to space, solar, astrophysical and fusion plasmas. The main diagnostic is an extensive set of magnetic probe arrays, simultaneously covering multiple scales from local electron scales (~2 mm), to intermediate ion scales (~10 cm), and global MHD scales (~1 m). Specific example space physics topics which can be studied on FLARE will be discussed.

  12. Ab initio molecular simulations with numeric atom-centered orbitals

    NASA Astrophysics Data System (ADS)

    Blum, Volker; Gehrke, Ralf; Hanke, Felix; Havu, Paula; Havu, Ville; Ren, Xinguo; Reuter, Karsten; Scheffler, Matthias

    2009-11-01

    We describe a complete set of algorithms for ab initio molecular simulations based on numerically tabulated atom-centered orbitals (NAOs) to capture a wide range of molecular and materials properties from quantum-mechanical first principles. The full algorithmic framework described here is embodied in the Fritz Haber Institute "ab initio molecular simulations" (FHI-aims) computer program package. Its comprehensive description should be relevant to any other first-principles implementation based on NAOs. The focus here is on density-functional theory (DFT) in the local and semilocal (generalized gradient) approximations, but an extension to hybrid functionals, Hartree-Fock theory, and MP2/GW electron self-energies for total energies and excited states is possible within the same underlying algorithms. An all-electron/full-potential treatment that is both computationally efficient and accurate is achieved for periodic and cluster geometries on equal footing, including relaxation and ab initio molecular dynamics. We demonstrate the construction of transferable, hierarchical basis sets, allowing the calculation to range from qualitative tight-binding like accuracy to meV-level total energy convergence with the basis set. Since all basis functions are strictly localized, the otherwise computationally dominant grid-based operations scale as O(N) with system size N. Together with a scalar-relativistic treatment, the basis sets provide access to all elements from light to heavy. Both low-communication parallelization of all real-space grid based algorithms and a ScaLapack-based, customized handling of the linear algebra for all matrix operations are possible, guaranteeing efficient scaling (CPU time and memory) up to massively parallel computer systems with thousands of CPUs.

  13. Maintaining a Local Data Integration System in Support of Weather Forecast Operations

    NASA Technical Reports Server (NTRS)

    Watson, Leela R.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian

    2010-01-01

    Since 2000, both the National Weather Service in Melbourne, FL (NWS MLB) and the Spaceflight Meteorology Group (SMG) at Johnson Space Center in Houston, TX have used a local data integration system (LDIS) as part of their forecast and warning operations. The original LDIS was developed by NASA's Applied Meteorology Unit (AMU; Bauman et al., 2004) in 1998 (Manobianco and Case 1998) and has undergone subsequent improvements. Each office has benefited from three-dimensional (3-D) analyses that are delivered to forecasters every 15 minutes across the peninsula of Florida. The intent is to generate products that enhance short-range weather forecasts issued in support of NWS MLB and SMG operational requirements within East Central Florida. The current LDIS uses the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS) package as its core, which integrates a wide variety of national, regional, and local observational data sets. It assimilates all available real-time data within its domain and is run at a finer spatial and temporal resolution than current national- or regional-scale analysis packages. As such, it provides local forecasters with a more comprehensive understanding of evolving fine-scale weather features.

  14. Putting the public (back) into public health: leadership, evidence and action.

    PubMed

    South, J; Connolly, A M; Stansfield, J A; Johnstone, P; Henderson, G; Fenton, K A

    2018-03-13

    There is a strong evidence-based rationale for community capacity building and community empowerment as part of a strategic response to reduce health inequalities. Within the current UK policy context, there are calls for increased public engagement in prevention and local decision-making in order to give people greater control over the conditions that determine health. With reference to the challenges and opportunities within the English public health system, this essay seeks to open debate about what is required to mainstream community-centred approaches and ensure that the public is central to public health. The essay sets out the case for a reorientation of public health practice in order to build impactful action with communities at scale leading to a reduction in the health gap. National frameworks that support local practice are described. Four areas of challenge that could potentially drive an implementation gap are discussed: (i) achieving integration and scale, (ii) effective community mobilization, (iii) evidencing impact and (iv) achieving a shift in power. The essay concludes with a call to action for developing a contemporary public health practice that is rooted in communities and offers local leadership to strengthen local assets, increase community control and reduce health inequalities.

  15. Meteorological Controls on Local and Regional Volcanic Ash Dispersal.

    PubMed

    Poulidis, Alexandros P; Phillips, Jeremy C; Renfrew, Ian A; Barclay, Jenni; Hogg, Andrew; Jenkins, Susanna F; Robertson, Richard; Pyle, David M

    2018-05-02

    Volcanic ash has the capacity to impact human health, livestock, crops and infrastructure, including international air traffic. For recent major eruptions, information on the volcanic ash plume has been combined with relatively coarse-resolution meteorological model output to provide simulations of regional ash dispersal, with reasonable success on the scale of hundreds of kilometres. However, to predict and mitigate these impacts locally, significant improvements in modelling capability are required. Here, we present results from a dynamic meteorological-ash-dispersion model configured with sufficient resolution to represent local topographic and convectively-forced flows. We focus on an archetypal volcanic setting, Soufrière, St Vincent, and use the exceptional historical records of the 1902 and 1979 eruptions to challenge our simulations. We find that the evolution and characteristics of ash deposition on St Vincent and nearby islands can be accurately simulated when the wind shear associated with the trade wind inversion and topographically-forced flows are represented. The wind shear plays a primary role and topographic flows a secondary role on ash distribution on local to regional scales. We propose a new explanation for the downwind ash deposition maxima, commonly observed in volcanic eruptions, as resulting from the detailed forcing of mesoscale meteorology on the ash plume.

  16. The impact of local surface changes in Borneo on atmospheric composition at wider spatial scales: coastal processes, land-use change and air quality

    PubMed Central

    Pyle, J. A.; Warwick, N. J.; Harris, N. R. P.; Abas, Mohd Radzi; Archibald, A. T.; Ashfold, M. J.; Ashworth, K.; Barkley, Michael P.; Carver, G. D.; Chance, K.; Dorsey, J. R.; Fowler, D.; Gonzi, S.; Gostlow, B.; Hewitt, C. N.; Kurosu, T. P.; Lee, J. D.; Langford, S. B.; Mills, G.; Moller, S.; MacKenzie, A. R.; Manning, A. J.; Misztal, P.; Nadzir, Mohd Shahrul Mohd; Nemitz, E.; Newton, H. M.; O'Brien, L. M.; Ong, Simon; Oram, D.; Palmer, P. I.; Peng, Leong Kok; Phang, Siew Moi; Pike, R.; Pugh, T. A. M.; Rahman, Noorsaadah Abdul; Robinson, A. D.; Sentian, J.; Samah, Azizan Abu; Skiba, U.; Ung, Huan Eng; Yong, Sei Eng; Young, P. J.

    2011-01-01

    We present results from the OP3 campaign in Sabah during 2008 that allow us to study the impact of local emission changes over Borneo on atmospheric composition at the regional and wider scale. OP3 constituent data provide an important constraint on model performance. Treatment of boundary layer processes is highlighted as an important area of model uncertainty. Model studies of land-use change confirm earlier work, indicating that further changes to intensive oil palm agriculture in South East Asia, and the tropics in general, could have important impacts on air quality, with the biggest factor being the concomitant changes in NOx emissions. With the model scenarios used here, local increases in ozone of around 50 per cent could occur. We also report measurements of short-lived brominated compounds around Sabah suggesting that oceanic (and, especially, coastal) emission sources dominate locally. The concentration of bromine in short-lived halocarbons measured at the surface during OP3 amounted to about 7 ppt, setting an upper limit on the amount of these species that can reach the lower stratosphere. PMID:22006963

  17. The impact of local surface changes in Borneo on atmospheric composition at wider spatial scales: coastal processes, land-use change and air quality.

    PubMed

    Pyle, J A; Warwick, N J; Harris, N R P; Abas, Mohd Radzi; Archibald, A T; Ashfold, M J; Ashworth, K; Barkley, Michael P; Carver, G D; Chance, K; Dorsey, J R; Fowler, D; Gonzi, S; Gostlow, B; Hewitt, C N; Kurosu, T P; Lee, J D; Langford, S B; Mills, G; Moller, S; MacKenzie, A R; Manning, A J; Misztal, P; Nadzir, Mohd Shahrul Mohd; Nemitz, E; Newton, H M; O'Brien, L M; Ong, Simon; Oram, D; Palmer, P I; Peng, Leong Kok; Phang, Siew Moi; Pike, R; Pugh, T A M; Rahman, Noorsaadah Abdul; Robinson, A D; Sentian, J; Samah, Azizan Abu; Skiba, U; Ung, Huan Eng; Yong, Sei Eng; Young, P J

    2011-11-27

    We present results from the OP3 campaign in Sabah during 2008 that allow us to study the impact of local emission changes over Borneo on atmospheric composition at the regional and wider scale. OP3 constituent data provide an important constraint on model performance. Treatment of boundary layer processes is highlighted as an important area of model uncertainty. Model studies of land-use change confirm earlier work, indicating that further changes to intensive oil palm agriculture in South East Asia, and the tropics in general, could have important impacts on air quality, with the biggest factor being the concomitant changes in NO(x) emissions. With the model scenarios used here, local increases in ozone of around 50 per cent could occur. We also report measurements of short-lived brominated compounds around Sabah suggesting that oceanic (and, especially, coastal) emission sources dominate locally. The concentration of bromine in short-lived halocarbons measured at the surface during OP3 amounted to about 7 ppt, setting an upper limit on the amount of these species that can reach the lower stratosphere.

  18. Correlation Between Fracture Network Properties and Stress Variability in Geological Media

    NASA Astrophysics Data System (ADS)

    Lei, Qinghua; Gao, Ke

    2018-05-01

    We quantitatively investigate the stress variability in fractured geological media under tectonic stresses. The fracture systems studied include synthetic fracture networks following power law length scaling and natural fracture patterns based on outcrop mapping. The stress field is derived from a finite-discrete element model, and its variability is analyzed using a set of mathematical formulations that honor the tensorial nature of stress data. We show that local stress perturbation, quantified by the Euclidean distance of a local stress tensor to the mean stress tensor, has a positive, linear correlation with local fracture intensity, defined as the total fracture length per unit area within a local sampling window. We also evaluate the stress dispersion of the entire stress field using the effective variance, that is, a scalar-valued measure of the overall stress variability. The results show that a well-connected fracture system under a critically stressed state exhibits strong local and global stress variabilities.
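
    The two variability measures mentioned above can be sketched as follows: the per-point perturbation is taken as the Euclidean (Frobenius) distance between a local stress tensor and the mean tensor, and the overall dispersion is summarized by a single "effective variance" scalar, here taken as the generalized variance of the independent tensor components raised to 1/d. The exact formulations in the paper may differ; this is an illustrative assumption.

```python
import numpy as np

def stress_variability(stress_tensors):
    """stress_tensors: (n, 2, 2) or (n, 3, 3) array of local stress tensors.
    Returns per-point distances to the mean tensor and a scalar measure
    of overall stress dispersion."""
    S = np.asarray(stress_tensors, dtype=float)
    n, d, _ = S.shape
    mean_tensor = S.mean(axis=0)
    # Euclidean (Frobenius) distance of each local tensor from the mean tensor
    distances = np.linalg.norm((S - mean_tensor).reshape(n, -1), axis=1)
    # "Effective variance": generalized variance of the independent components
    iu = np.triu_indices(d)
    vec = S[:, iu[0], iu[1]]
    cov = np.atleast_2d(np.cov(vec, rowvar=False))
    eff_var = np.linalg.det(cov) ** (1.0 / vec.shape[1])
    return distances, eff_var
```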

  19. Integrating geo-referenced multiscale and multidisciplinary data for the management of biodiversity in livestock genetic resources.

    PubMed

    Joost, S; Colli, L; Baret, P V; Garcia, J F; Boettcher, P J; Tixier-Boichard, M; Ajmone-Marsan, P

    2010-05-01

    In livestock genetic resource conservation, decision making about conservation priorities is based on the simultaneous analysis of several different criteria that may contribute to long-term sustainable breeding conditions, such as genetic and demographic characteristics, environmental conditions, and the role of the breed in the local or regional economy. Here we address methods to integrate different data sets and highlight problems related to interdisciplinary comparisons. Data integration is based on the use of geographic coordinates and Geographic Information Systems (GIS). In addition to technical problems related to projection systems, GIS have to face the challenging issue of the non-homogeneous scale of their data sets. We give examples of the successful use of GIS for data integration and examine the risk of obtaining biased results when integrating datasets that have been captured at different scales.

  20. Map projections for global and continental data sets and an analysis of pixel distortion caused by reprojection

    USGS Publications Warehouse

    Steinwand, Daniel R.; Hutchinson, John A.; Snyder, J.P.

    1995-01-01

    In global change studies the effects of map projection properties on data quality are apparent, and the choice of projection is significant. To aid compilers of global and continental data sets, six equal-area projections were chosen: the interrupted Goode Homolosine, the interrupted Mollweide, the Wagner IV, and the Wagner VII for global maps; the Lambert Azimuthal Equal-Area for hemisphere maps; and the Oblated Equal-Area and the Lambert Azimuthal Equal-Area for continental maps. Distortions in small-scale maps caused by reprojection, and the additional distortions incurred when reprojecting raster images, were quantified and graphically depicted. For raster images, the errors caused by the usual resampling methods (pixel brightness level interpolation) were responsible for much of the additional error where the local resolution and scale change were the greatest.

  1. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.
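
    To make the sparse-map idea concrete, the toy Python sketch below stores a map from one index set to another as short target lists (conceptually a row-wise boolean sparse matrix, as in compressed sparse row storage) and implements two of the elementary operations mentioned in the abstract, inversion and chaining. The class and the atom/basis-function example are illustrative assumptions, not the code library described in the paper.

    ```python
    from collections import defaultdict

    class SparseMap:
        """Minimal sketch of a sparse map between two index sets.

        Each domain index is sent to the (typically short) list of codomain
        indices with which it interacts. Illustrative toy only.
        """
        def __init__(self, mapping):
            self.mapping = {k: sorted(set(v)) for k, v in mapping.items()}

        def invert(self):
            """Swap domain and codomain: j is in inv[i] iff i is in map[j]."""
            inv = defaultdict(list)
            for k, targets in self.mapping.items():
                for t in targets:
                    inv[t].append(k)
            return SparseMap(inv)

        def chain(self, other):
            """Compose maps: chained[k] = union over m in self[k] of other[m]."""
            out = {}
            for k, mids in self.mapping.items():
                targets = set()
                for m in mids:
                    targets.update(other.mapping.get(m, []))
                out[k] = sorted(targets)
            return SparseMap(out)

    if __name__ == "__main__":
        # Hypothetical example: atoms -> basis functions centered on that atom,
        # and basis functions -> basis functions with non-negligible overlap.
        atom_to_bf = SparseMap({0: [0, 1], 1: [2, 3]})
        bf_to_bf = SparseMap({0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]})
        atom_to_overlapping_bf = atom_to_bf.chain(bf_to_bf)
        print(atom_to_overlapping_bf.mapping)            # {0: [0, 1, 2], 1: [1, 2, 3]}
        print(bf_to_bf.invert().mapping == bf_to_bf.mapping)  # symmetric map is self-inverse
    ```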

  2. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals.

    PubMed

    Pinski, Peter; Riplinger, Christoph; Valeev, Edward F; Neese, Frank

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.

  3. A major locus controls local adaptation and adaptive life history variation in a perennial plant.

    PubMed

    Wang, Jing; Ding, Jihua; Tan, Biyue; Robinson, Kathryn M; Michelson, Ingrid H; Johansson, Anna; Nystedt, Björn; Scofield, Douglas G; Nilsson, Ove; Jansson, Stefan; Street, Nathaniel R; Ingvarsson, Pär K

    2018-06-04

    The initiation of growth cessation and dormancy represent critical life-history trade-offs between survival and growth and have important fitness effects in perennial plants. Such adaptive life-history traits often show strong local adaptation along environmental gradients but, despite their importance, the genetic architecture of these traits remains poorly understood. We integrate whole genome re-sequencing with environmental and phenotypic data from common garden experiments to investigate the genomic basis of local adaptation across a latitudinal gradient in European aspen (Populus tremula). A single genomic region containing the PtFT2 gene mediates local adaptation in the timing of bud set and explains 65% of the observed genetic variation in bud set. This locus is the likely target of a recent selective sweep that originated right before or during colonization of northern Scandinavia following the last glaciation. Field and greenhouse experiments confirm that variation in PtFT2 gene expression affects the phenotypic variation in bud set that we observe in wild natural populations. Our results reveal a major effect locus that determines the timing of bud set and that has facilitated rapid adaptation to shorter growing seasons and colder climates in European aspen. The discovery of a single locus explaining a substantial fraction of the variation in a key life-history trait is remarkable, given that such traits are generally considered to be highly polygenic. These findings provide a dramatic illustration of how loci of large-effect for adaptive traits can arise and be maintained over large geographical scales in natural populations.

  4. A nationwide quality improvement project to accelerate Ghana's progress toward Millennium Development Goal Four: design and implementation progress.

    PubMed

    Twum-Danso, Nana A Y; Akanlu, George B; Osafo, Enoch; Sodzi-Tettey, Sodzi; Boadu, Richard O; Atinbire, Solomon; Adondiwo, Ane; Amenga-Etego, Isaac; Ashagbley, Francis; Boadu, Eric A; Dasoberi, Ireneous; Kanyoke, Ernest; Yabang, Elma; Essegbey, Ivan T; Adjei, George A; Buckle, Gilbert B; Awoonor-Williams, J Koku; Nang-Beifubah, Alexis; Twumasi, Akwasi; McCannon, C Joseph; Barker, Pierre M

    2012-12-01

    The gap between evidence-based guidelines and practice of care is reflected, in low- and middle-income countries, by high rates of maternal and child mortality and limited effectiveness of large-scale programing to decrease those rates. We designed a phased, rapid, national scale-up quality improvement (QI) intervention to accelerate the achievement of Millennium Development Goal Four in Ghana. Our intervention promoted systems thinking, active participation of managers and frontline providers, generation and testing of local change ideas using iterative learning from transparent district and local data, local ownership and sustainability. After 50 months of implementation, we have completed two prototype learning phases and have begun regional spread phases to all health facilities in all 38 districts of the three northernmost regions and all 29 Catholic hospitals in the remaining regions of the country. To accelerate the spread of improvement, we developed 'change packages' of rigorously tested process changes along the continuum of care from pregnancy to age 5 in both inpatient and outpatient settings. The primary successes for the project so far include broad and deep adoption of QI by local stakeholders for improving system performance, widespread capacitation of leaders, managers and frontline providers in QI methods, incorporation of local ideas into change packages and successful scale-up to approximately 25% of the country's districts in 3 years. Implementation challenges include variable leadership uptake and commitment at the district level, delays due to recruiting and scheduling barriers, weak data systems and repeated QI training due to high staff turnover.

  5. Cosmicflows Constrained Local UniversE Simulations

    NASA Astrophysics Data System (ADS)

    Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M.; Steinmetz, Matthias; Tully, R. Brent; Pomarède, Daniel; Carlesi, Edoardo

    2016-01-01

    This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h-1 Mpc scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s-1, i.e. the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h-1 Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h-1 Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.
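
    A minimal sketch of the cell-to-cell comparison described above: given two co-registered gridded fields, the one-sigma scatter of their cell-by-cell difference is computed. The grid size and noise levels below are invented for illustration; the paper's estimator may differ in detail (e.g. smoothing scale and masking).

    ```python
    import numpy as np

    def cell_to_cell_scatter(field_a, field_b):
        """One-sigma scatter of the cell-by-cell difference of two gridded fields.

        field_a, field_b: same-shaped arrays, e.g. smoothed velocity fields on a
        regular grid of cells.
        """
        return np.std(field_a - field_b)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        grid = (32, 32, 32)                                            # toy cell grid
        constrained = rng.normal(scale=300.0, size=grid)               # mock velocity field (km/s)
        reconstructed = constrained + rng.normal(scale=100.0, size=grid)  # mock reconstruction
        print(f"cell-to-cell scatter: {cell_to_cell_scatter(constrained, reconstructed):.0f} km/s")
    ```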

  6. 3D robust Chan-Vese model for industrial computed tomography volume data segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Linghui; Zeng, Li; Luan, Xiao

    2013-11-01

    Industrial computed tomography (CT) has been widely applied in many areas of non-destructive testing (NDT) and non-destructive evaluation (NDE). In practice, CT volume data to be dealt with may be corrupted by noise. This paper addresses the segmentation of noisy industrial CT volume data. Motivated by the research on the Chan-Vese (CV) model, we present a region-based active contour model that draws upon intensity information in local regions with a controllable scale. In the presence of noise, a local energy is firstly defined according to the intensity difference within a local neighborhood. Then a global energy is defined to integrate local energy with respect to all image points. In a level set formulation, this energy is represented by a variational level set function, where a surface evolution equation is derived for energy minimization. Comparative analysis with the CV model indicates the comparable performance of the 3D robust Chan-Vese (RCV) model. The quantitative evaluation also shows the segmentation accuracy of 3D RCV. In addition, the efficiency of our approach is validated under several types of noise, such as Poisson noise, Gaussian noise, salt-and-pepper noise and speckle noise.
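
    The sketch below illustrates, in 2-D for brevity, the kind of local intensity-fitting energy the abstract describes: region means are estimated within a Gaussian window of controllable scale and the squared deviations are summed into a global energy. This is a hedged reading of the model, not the authors' 3-D level-set implementation; function names and parameters are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def local_region_energy(image, mask, sigma=3.0):
        """Local intensity-fitting energy in the spirit of a localized Chan-Vese model.

        For every pixel, foreground/background means are estimated inside a
        Gaussian window of scale `sigma`; the energy penalizes the squared
        deviation of the pixel intensity from the mean of its assigned region.
        """
        m = mask.astype(float)
        w = lambda x: gaussian_filter(x, sigma)
        eps = 1e-8
        mean_in = w(image * m) / (w(m) + eps)              # local foreground mean
        mean_out = w(image * (1 - m)) / (w(1 - m) + eps)   # local background mean
        local_energy = m * (image - mean_in) ** 2 + (1 - m) * (image - mean_out) ** 2
        return local_energy.sum()                          # global energy = sum of local energies

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
        noisy = img + rng.normal(scale=0.3, size=img.shape)   # noisy "CT slice"
        good = img > 0.5                                      # mask aligned with the object
        bad = np.zeros_like(good); bad[10:30, 10:30] = True   # misaligned mask
        print(local_region_energy(noisy, good) < local_region_energy(noisy, bad))  # True
    ```

    In the actual model this energy is written in a level-set formulation, and the segmentation surface evolves to minimize it.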

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    LeBaron, Robin; Saul-Rinaldi, Kara

    There has never been a better time to launch initiatives to promote residential energy efficiency savings. Over the past several decades, residential retrofit programs have demonstrated that energy efficiency measures contribute to achieving multiple benefits, including but not limited to reductions in home energy consumption, stabilization improvements for the grid by shaving peak loads, saving consumers millions on utility bills, and significantly reducing carbon emissions. Although a number of barriers to widespread uptake of home energy upgrades persist, the lessons learned as a result of the 2009 stimulus funding have resulted in a set of policy approaches that create new strategies for taking residential energy efficiency to scale. The identification of these approaches is well timed; energy efficiency is often the least expensive and most cost effective way to comply with a variety of federal, state and local policies. This Guide is designed to help state and local policymakers to take full advantage of new policy developments by providing them with a comprehensive set of tools to support launching or accelerating residential energy efficiency programs. It is written primarily for state and local policymakers, including state and local executives, legislators, public utility commissioners, and the staff who advise them.

  8. Moral parochialism and contextual contingency across seven societies

    PubMed Central

    Fessler, Daniel M. T.; Barrett, H. Clark; Kanovsky, Martin; Stich, Stephen; Holbrook, Colin; Henrich, Joseph; Bolyanatz, Alexander H.; Gervais, Matthew M.; Gurven, Michael; Kushnick, Geoff; Pisor, Anne C.; von Rueden, Christopher; Laurence, Stephen

    2015-01-01

    Human moral judgement may have evolved to maximize the individual's welfare given parochial culturally constructed moral systems. If so, then moral condemnation should be more severe when transgressions are recent and local, and should be sensitive to the pronouncements of authority figures (who are often arbiters of moral norms), as the fitness pay-offs of moral disapproval will primarily derive from the ramifications of condemning actions that occur within the immediate social arena. Correspondingly, moral transgressions should be viewed as less objectionable if they occur in other places or times, or if local authorities deem them acceptable. These predictions contrast markedly with those derived from prevailing non-evolutionary perspectives on moral judgement. Both classes of theories predict purportedly species-typical patterns, yet to our knowledge, no study to date has investigated moral judgement across a diverse set of societies, including a range of small-scale communities that differ substantially from large highly urbanized nations. We tested these predictions in five small-scale societies and two large-scale societies, finding substantial evidence of moral parochialism and contextual contingency in adults' moral judgements. Results reveal an overarching pattern in which moral condemnation reflects a concern with immediate local considerations, a pattern consistent with a variety of evolutionary accounts of moral judgement. PMID:26246545

  9. Moral parochialism and contextual contingency across seven societies.

    PubMed

    Fessler, Daniel M T; Barrett, H Clark; Kanovsky, Martin; Stich, Stephen; Holbrook, Colin; Henrich, Joseph; Bolyanatz, Alexander H; Gervais, Matthew M; Gurven, Michael; Kushnick, Geoff; Pisor, Anne C; von Rueden, Christopher; Laurence, Stephen

    2015-08-22

    Human moral judgement may have evolved to maximize the individual's welfare given parochial culturally constructed moral systems. If so, then moral condemnation should be more severe when transgressions are recent and local, and should be sensitive to the pronouncements of authority figures (who are often arbiters of moral norms), as the fitness pay-offs of moral disapproval will primarily derive from the ramifications of condemning actions that occur within the immediate social arena. Correspondingly, moral transgressions should be viewed as less objectionable if they occur in other places or times, or if local authorities deem them acceptable. These predictions contrast markedly with those derived from prevailing non-evolutionary perspectives on moral judgement. Both classes of theories predict purportedly species-typical patterns, yet to our knowledge, no study to date has investigated moral judgement across a diverse set of societies, including a range of small-scale communities that differ substantially from large highly urbanized nations. We tested these predictions in five small-scale societies and two large-scale societies, finding substantial evidence of moral parochialism and contextual contingency in adults' moral judgements. Results reveal an overarching pattern in which moral condemnation reflects a concern with immediate local considerations, a pattern consistent with a variety of evolutionary accounts of moral judgement. © 2015 The Authors.

  10. Concordance cosmology without dark energy

    NASA Astrophysics Data System (ADS)

    Rácz, Gábor; Dobos, László; Beck, Róbert; Szapudi, István; Csabai, István

    2017-07-01

    According to the separate universe conjecture, spherically symmetric sub-regions in an isotropic universe behave like mini-universes with their own cosmological parameters. This is an excellent approximation in both Newtonian and general relativistic theories. We estimate local expansion rates for a large number of such regions, and use a scale parameter calculated from the volume-averaged increments of local scale parameters at each time step in an otherwise standard cosmological N-body simulation. The particle mass, corresponding to a coarse graining scale, is an adjustable parameter. This mean field approximation neglects tidal forces and boundary effects, but it is the first step towards a non-perturbative statistical estimation of the effect of non-linear evolution of structure on the expansion rate. Using our algorithm, a simulation with an initial Ωm = 1 Einstein-de Sitter setting closely tracks the expansion and structure growth history of the Λ cold dark matter (ΛCDM) cosmology. Due to small but characteristic differences, our model can be distinguished from the ΛCDM model by future precision observations. Moreover, our model can resolve the emerging tension between local Hubble constant measurements and the Planck best-fitting cosmology. Further improvements to the simulation are necessary to investigate light propagation and confirm full consistency with cosmic microwave background observations.
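
    A toy illustration of the averaging step described above: local scale-factor increments of the spherical sub-regions are combined into a single volume-weighted increment at each time step. All numbers and the function name below are invented for illustration; the simulation code itself is far more involved.

    ```python
    import numpy as np

    def average_scale_increment(local_da_over_a, region_volumes):
        """Volume-weighted average of local scale-factor increments.

        local_da_over_a: fractional increment da/a computed for each sub-region
        over one time step (each region treated as a mini-universe).
        region_volumes: comoving volume of each sub-region, used as weights.
        """
        w = np.asarray(region_volumes, dtype=float)
        return np.average(np.asarray(local_da_over_a, dtype=float), weights=w)

    if __name__ == "__main__":
        # Hypothetical example: underdense regions expand faster than overdense ones.
        increments = [0.012, 0.010, 0.007, 0.011]   # da/a per region in one step
        volumes = [2.0, 1.5, 1.0, 2.5]              # relative region volumes
        a = 1.0
        a *= 1.0 + average_scale_increment(increments, volumes)
        print(f"updated global scale factor: {a:.4f}")
    ```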

  11. Where Are All of the Gas-bearing Local Dwarf Galaxies? Quantifying Possible Impacts of Reionization

    NASA Astrophysics Data System (ADS)

    Tollerud, Erik J.; Peek, J. E. G.

    2018-04-01

    We present an approach for comparing the detections and non-detections of Local Group (LG) dwarf galaxies in large H I surveys to the predictions of a suite of n-body simulations of the LG. This approach depends primarily on a set of empirical scaling relations to connect the simulations to the observations, rather than making strong theoretical assumptions. We then apply this methodology to the Galactic Arecibo L-band Feed Array HI (GALFA-HI) Compact Cloud Catalog (CCC), and compare it to the Exploring the Local Volume In Simulations (ELVIS) suite of simulations. This approach reveals a strong tension between the naïve results of the model and the observations: while there are no LG dwarfs in the GALFA-HI CCC, the simulations predict ∼10. Applying a simple model of reionization can resolve this tension by preventing low-mass halos from forming gas. However, if this effect operates as expected, the observations provide a constraint on the mass scale at which reionization impacts dwarf galaxies. Combined with the observed properties of Leo T, the halo virial mass scale at which reionization impacts dwarf galaxy gas content is constrained to be ∼10^8.5 M⊙, independent of any assumptions about star formation.
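
    The reionization argument can be sketched as a simple mass cutoff applied to a mock halo sample: halos below an assumed virial mass scale are removed before counting expected detections. The cutoff value, detectable fraction, and halo sample below are placeholders, not the calibrated quantities used in the paper.

    ```python
    import numpy as np

    def predicted_hi_detections(halo_masses, m_reion=10**8.5, detectable_fraction=0.5):
        """Toy version of the comparison described in the abstract.

        halo_masses: virial masses (solar masses) of simulated low-mass halos.
        m_reion: halos below this mass are assumed to have lost (or never formed)
        their gas because of reionization.
        detectable_fraction: stand-in for the empirical scaling relations and
        survey selection that map surviving halos onto HI detections.
        """
        survivors = halo_masses[halo_masses >= m_reion]
        return detectable_fraction * len(survivors)

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        masses = 10 ** rng.uniform(7.5, 9.5, size=40)   # mock ELVIS-like halo sample
        print("expected detections without reionization:",
              predicted_hi_detections(masses, m_reion=0.0))
        print("expected detections with a 10^8.5 Msun cutoff:",
              predicted_hi_detections(masses))
    ```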

  12. A Biologically Plausible Transform for Visual Recognition that is Invariant to Translation, Scale, and Rotation.

    PubMed

    Sountsov, Pavel; Santucci, David M; Lisman, John E

    2011-01-01

    Visual object recognition occurs easily despite differences in position, size, and rotation of the object, but the neural mechanisms responsible for this invariance are not known. We have found a set of transforms that achieve invariance in a neurally plausible way. We find that a transform based on local spatial frequency analysis of oriented segments and on logarithmic mapping, when applied twice in an iterative fashion, produces an output image that is unique to the object and that remains constant as the input image is shifted, scaled, or rotated.
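
    A classical construction in the same spirit (Fourier-Mellin-style, not the authors' neural model) applies a spectral magnitude plus log-polar mapping twice: the first pass removes translation and converts scale and rotation into shifts, and the second pass removes those shifts. The sketch below is a rough numerical illustration with nearest-neighbour resampling; all parameter values are arbitrary.

    ```python
    import numpy as np

    def log_polar_magnitude(img, n_r=64, n_theta=64):
        """|FFT| followed by log-polar resampling (nearest neighbour for brevity)."""
        spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
        cy, cx = (np.array(spec.shape) - 1) / 2.0
        r_max = min(cy, cx)
        rs = np.exp(np.linspace(0, np.log(r_max), n_r))        # logarithmic radius
        ts = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
        yy = np.clip((cy + rs[:, None] * np.sin(ts)[None, :]).round().astype(int), 0, spec.shape[0] - 1)
        xx = np.clip((cx + rs[:, None] * np.cos(ts)[None, :]).round().astype(int), 0, spec.shape[1] - 1)
        return spec[yy, xx]

    def invariant_signature(img):
        """Apply the magnitude + log-polar step twice: the first pass removes
        translation and turns scale/rotation into shifts, the second pass
        removes those shifts, leaving a (roughly) invariant signature."""
        return log_polar_magnitude(log_polar_magnitude(img))

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        img = np.zeros((128, 128)); img[40:80, 50:90] = 1.0
        shifted = np.roll(img, (10, -7), axis=(0, 1))
        a, b = invariant_signature(img), invariant_signature(shifted)
        corr = np.corrcoef(a.ravel(), b.ravel())[0, 1]
        print(f"correlation between signatures of shifted copies: {corr:.3f}")
    ```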

  13. A Biologically Plausible Transform for Visual Recognition that is Invariant to Translation, Scale, and Rotation

    PubMed Central

    Sountsov, Pavel; Santucci, David M.; Lisman, John E.

    2011-01-01

    Visual object recognition occurs easily despite differences in position, size, and rotation of the object, but the neural mechanisms responsible for this invariance are not known. We have found a set of transforms that achieve invariance in a neurally plausible way. We find that a transform based on local spatial frequency analysis of oriented segments and on logarithmic mapping, when applied twice in an iterative fashion, produces an output image that is unique to the object and that remains constant as the input image is shifted, scaled, or rotated. PMID:22125522

  14. Surface topography and electrical properties in Sr2FeMoO6 films studied at cryogenic temperatures

    NASA Astrophysics Data System (ADS)

    Angervo, I.; Saloaro, M.; Mäkelä, J.; Lehtiö, J.-P.; Huhtinen, H.; Paturi, P.

    2018-03-01

    Pulsed laser deposited Sr2FeMoO6 thin films were investigated for the first time with scanning tunneling microscopy and spectroscopy. The results confirm atomic scale layer growth, with a step-terrace structure corresponding to a single lattice cell scale. The spectroscopy reveals a distribution of local electrical properties linked to structural deformation in the initial thin film layers at the film-substrate interface. A significant hole structure, giving rise to electrically distinct regions in the thinner films, also seems to set a thickness limit for the thinnest films to be used in applications.

  15. The Hubble constant.

    PubMed

    Huchra, J P

    1992-04-17

    The Hubble constant is the constant of proportionality between recession velocity and distance in the expanding universe. It is a fundamental property of cosmology that sets both the scale and the expansion age of the universe. It is determined by measurement of galaxy recession velocities and distances. Despite the development of new techniques for the measurement of galaxy distances, both calibration uncertainties and debates over systematic errors remain. Current determinations still range over nearly a factor of 2; the higher values favored by most local measurements are not consistent with many theories of the origin of large-scale structure and stellar evolution.
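
    For readers who want the arithmetic, Hubble's law v = H0·d makes the consequence of a factor-of-two spread in H0 immediate: inferred distances and the naive expansion age 1/H0 both change by the same factor. The velocity and H0 values below are illustrative only.

    ```python
    # Illustrative arithmetic only: the roughly factor-of-two spread in H0
    # propagates directly into distance and age estimates.
    def hubble_distance_mpc(velocity_km_s, h0_km_s_mpc):
        """Distance from Hubble's law v = H0 * d (valid for nearby galaxies)."""
        return velocity_km_s / h0_km_s_mpc

    def hubble_time_gyr(h0_km_s_mpc):
        """Naive expansion age 1/H0, converted from (km/s/Mpc)^-1 to Gyr."""
        mpc_in_km = 3.0857e19
        seconds_per_gyr = 3.156e16
        return mpc_in_km / h0_km_s_mpc / seconds_per_gyr

    for h0 in (50.0, 100.0):
        d = hubble_distance_mpc(7000.0, h0)   # a galaxy receding at 7000 km/s
        print(f"H0 = {h0:.0f}: distance = {d:.0f} Mpc, 1/H0 = {hubble_time_gyr(h0):.1f} Gyr")
    ```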

  16. Geological modeling of submeter scale heterogeneity and its influence on tracer transport in a fluvial aquifer

    NASA Astrophysics Data System (ADS)

    Ronayne, Michael J.; Gorelick, Steven M.; Zheng, Chunmiao

    2010-10-01

    We developed a new model of aquifer heterogeneity to analyze data from a single-well injection-withdrawal tracer test conducted at the Macrodispersion Experiment (MADE) site on the Columbus Air Force Base in Mississippi (USA). The physical heterogeneity model is a hybrid that combines 3-D lithofacies to represent submeter scale, highly connected channels within a background matrix based on a correlated multivariate Gaussian hydraulic conductivity field. The modeled aquifer architecture is informed by a variety of field data, including geologic core sampling. Geostatistical properties of this hybrid heterogeneity model are consistent with the statistics of the hydraulic conductivity data set based on extensive borehole flowmeter testing at the MADE site. The representation of detailed, small-scale geologic heterogeneity allows for explicit simulation of local preferential flow and slow advection, processes that explain the complex tracer response from the injection-withdrawal test. Based on the new heterogeneity model, advective-dispersive transport reproduces key characteristics of the observed tracer recovery curve, including a delayed concentration peak and a low-concentration tail. Importantly, our results suggest that intrafacies heterogeneity is responsible for local-scale mass transfer.

  17. Development, implementation and evaluation of a clinical research engagement and leadership capacity building program in a large Australian health care service.

    PubMed

    Misso, Marie L; Ilic, Dragan; Haines, Terry P; Hutchinson, Alison M; East, Christine E; Teede, Helena J

    2016-01-14

    Health professionals need to be integrated more effectively in clinical research to ensure that research addresses clinical needs and provides practical solutions at the coal face of care. In light of limited evidence on how best to achieve this, evaluation of strategies to introduce, adapt and sustain evidence-based practices across different populations and settings is required. This project aims to address this gap through the co-design, development, implementation, evaluation, refinement and ultimately scale-up of a clinical research engagement and leadership capacity building program in a clinical setting with little to no co-ordinated approach to clinical research engagement and education. The protocol is based on principles of research capacity building and on a six-step framework, which have previously led to successful implementation and long-term sustainability. A mixed methods study design will be used. Methods will include: (1) a review of the literature about strategies that engage health professionals in research through capacity building and/or education in research methods; (2) a review of existing local research education and support elements; (3) a needs assessment in the local clinical setting, including an online cross-sectional survey and semi-structured interviews; (4) co-design and development of an educational and support program; (5) implementation of the program in the clinical environment; and (6) pre- and post-implementation evaluation and ultimately program scale-up. The evaluation focuses on research activity, and on knowledge, attitudes and preferences about clinical research, evidence-based practice and leadership, and, post-implementation, on participants' satisfaction with the program. The investigators will evaluate the feasibility and effect of the program according to capacity building measures and will revise where appropriate prior to scale-up. It is anticipated that this clinical research engagement and leadership capacity building program will enable and enhance clinically relevant research to be led and conducted by health professionals in the health setting. This approach will also encourage identification of areas of clinical uncertainty and need that can be addressed through clinical research within the health setting.

  18. Comment on "Worldwide evidence of a unimodal relationship between productivity and plant species richness"

    USGS Publications Warehouse

    Tredennick, Andrew T.; Adler, Peter B.; Grace, James B.; Harpole, W Stanley; Borer, Elizabeth T.; Seabloom, Eric W.; Anderson, T. Michael; Bakker, Jonathan D.; Biederman, Lori A.; Brown, Cynthia S.; Buckley, Yvonne M.; Chu, Cheng-Jin; Collins, Scott L.; Crawley, Michael J.; Fay, Philip A.; Firn, Jennifer; Gruner, Daniel S.; Hagenah, Nicole; Hautier, Yann; Hector, Andy; Hillebrand, Helmut; Kirkman, Kevin P.; Knops, Johannes M. H.; Laungani, Ramesh; Lind, Eric M.; MacDougall, Andrew S.; McCulley, Rebecca L.; Mitchell, Charles E.; Moore, Joslin L.; Morgan, John W.; Orrock, John L.; Peri, Pablo L.; Prober, Suzanne M.; Risch, Anita C.; Schuetz, Martin; Speziale, Karina L.; Standish, Rachel J.; Sullivan, Lauren L.; Wardle, Glenda M.; Williams, Ryan J.; Yang, Louie H.

    2016-01-01

    Fraser et al. (Reports, 17 July 2015, p. 302) report a unimodal relationship between productivity and species richness at regional and global scales, which they contrast with the results of Adler et al. (Reports, 23 September 2011, p. 1750). However, both data sets, when analyzed correctly, show clearly and consistently that productivity is a poor predictor of local species richness.

  19. Subalpine vegetation pattern three decades after stand-replacing fire: Effects of landscape context and topography on plant community composition, tree regeneration, and diversity

    Treesearch

    Jonathan D. Coop; Robert T. Massatti; Anna W. Schoettle

    2010-01-01

    These subalpine wildfires generated considerable, persistent increases in plant species richness at local and landscape scales, and a diversity of plant communities. The findings suggest that fire suppression in such systems must lead to reduced diversity. Concerns about post-fire invasion by exotic plants appear unwarranted in high-elevation wilderness settings.

  20. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection

    PubMed Central

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
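
    The distinction drawn above between local and global sensitivity analysis can be illustrated on a deliberately simple stand-in model: a one-at-a-time analysis around a baseline misses a parameter that only acts through an interaction, while a crude global sampling measure does not. The toy model, parameter ranges, and correlation-based measure below are illustrative assumptions, not the experimental design used for ENISI.

    ```python
    import numpy as np

    def model(params):
        """Toy stand-in for an expensive agent-based simulation outcome."""
        a, b, c = params
        return a + 10.0 * b * c          # b and c only matter through their interaction

    def local_oat(baseline, step=0.01):
        """Local one-at-a-time sensitivity: finite differences around a baseline."""
        base = model(baseline)
        sens = []
        for i in range(len(baseline)):
            p = list(baseline); p[i] += step
            sens.append(abs(model(p) - base) / step)
        return sens

    def global_sampling(bounds, n=2000, seed=5):
        """Crude global measure: |correlation| between each parameter and the
        output under random sampling of the whole parameter space."""
        rng = np.random.default_rng(seed)
        lows = [lo for lo, _ in bounds]
        highs = [hi for _, hi in bounds]
        X = rng.uniform(lows, highs, size=(n, len(bounds)))
        y = np.array([model(x) for x in X])
        return [abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(len(bounds))]

    if __name__ == "__main__":
        print("local OAT at baseline (b=0):", local_oat([1.0, 0.0, 0.5]))
        print("global correlations:       ", global_sampling([(0, 2), (0, 1), (0, 1)]))
        # The local analysis at b=0 misses parameter c entirely; the global one does not.
    ```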

  1. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection.

    PubMed

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close "neighborhood" of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa.

  2. Large Scale Flood Risk Analysis using a New Hyper-resolution Population Dataset

    NASA Astrophysics Data System (ADS)

    Smith, A.; Neal, J. C.; Bates, P. D.; Quinn, N.; Wing, O.

    2017-12-01

    Here we present the first national scale flood risk analyses, using high resolution Facebook Connectivity Lab population data and data from a hyper resolution flood hazard model. In recent years the field of large scale hydraulic modelling has been transformed by new remotely sensed datasets, improved process representation, highly efficient flow algorithms and increases in computational power. These developments have allowed flood risk analysis to be undertaken in previously unmodeled territories and from continental to global scales. Flood risk analyses are typically conducted via the integration of modelled water depths with an exposure dataset. Over large scales and in data poor areas, these exposure data typically take the form of a gridded population dataset, estimating population density using remotely sensed data and/or locally available census data. The local nature of flooding dictates that for robust flood risk analysis to be undertaken both hazard and exposure data should sufficiently resolve local scale features. Global flood frameworks are enabling flood hazard data to be produced at 90 m resolution, resulting in a mismatch with available population datasets which are typically more coarsely resolved. Moreover, these exposure data are typically focused on urban areas and struggle to represent rural populations. In this study we integrate a new population dataset with a global flood hazard model. The population dataset was produced by the Connectivity Lab at Facebook, providing gridded population data at 5 m resolution, representing a resolution increase over previous countrywide data sets of multiple orders of magnitude. Flood risk analyses undertaken over a number of developing countries are presented, along with a comparison of flood risk analyses undertaken using pre-existing population datasets.
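
    The integration step described above (modelled water depths combined with a gridded exposure dataset) reduces, in its simplest form, to summing population over cells where depth exceeds a threshold, once the two grids are co-registered. The grids, threshold, and resolutions in the sketch are invented for illustration; real analyses also handle return periods, defended areas, and resampling between hazard and exposure grids.

    ```python
    import numpy as np

    def exposed_population(depth_m, population, threshold_m=0.1):
        """Sum the population in cells where modelled water depth exceeds a threshold.

        depth_m and population are co-registered grids (the hazard grid has been
        resampled onto the population grid, or vice versa).
        """
        return population[depth_m > threshold_m].sum()

    if __name__ == "__main__":
        rng = np.random.default_rng(6)
        depth = np.maximum(rng.normal(loc=-0.3, scale=0.5, size=(200, 200)), 0.0)  # mock hazard grid
        pop = rng.poisson(lam=2.0, size=(200, 200)).astype(float)                  # mock population grid
        print(f"people exposed above 0.1 m: {exposed_population(depth, pop):.0f}"
              f" of {pop.sum():.0f}")
    ```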

  3. Slope-scale dynamic states of rockfalls

    NASA Astrophysics Data System (ADS)

    Agliardi, F.; Crosta, G. B.

    2009-04-01

    Rockfalls are common earth surface phenomena characterised by complex dynamics at the slope scale, depending on local block kinematics and slope geometry. We investigated the nature of this slope-scale dynamics by parametric 3D numerical modelling of rockfalls over synthetic slopes with different inclination, roughness and spatial resolution. Simulations were performed through an original code specifically designed for rockfall modeling, incorporating kinematic and hybrid algorithms with different damping functions available to model local energy loss by impact and pure rolling. Modelling results in terms of average velocity profiles suggest that three dynamic regimes (i.e. decelerating, steady-state and accelerating), previously recognized in the literature through laboratory experiments on granular flows, can set up at the slope scale depending on slope average inclination and roughness. Sharp changes in rock fall kinematics, including motion type and lateral dispersion of trajectories, are associated with the transition among different regimes. Associated threshold conditions, portrayed in "phase diagrams" as slope-roughness critical lines, were analysed depending on block size, impact/rebound angles, velocity and energy, and model spatial resolution. Motion in regime B (i.e. steady state) is governed by a slope-scale "viscous friction" with average velocity linearly related to the sine of slope inclination. This suggests an analogy between rockfall motion in regime B and Newtonian flow, whereas in regime C (i.e. accelerating) an analogy with a dilatant flow was observed. Thus, although local behavior of single falling blocks is well described by rigid body dynamics, the slope scale dynamics of rockfalls seem to statistically approach that of granular media. Possible outcomes of these findings include a discussion of the transition from rockfall to granular flow, the evaluation of the reliability of predictive models, and the implementation of criteria for a preliminary evaluation of hazard assessment and countermeasure planning.

  4. Mobilizing women's groups for improved maternal and newborn health: evidence for impact, and challenges for sustainability and scale up.

    PubMed

    Nair, Nirmala; Tripathy, Prasanta; Costello, Anthony; Prost, Audrey

    2012-10-01

    Research conducted over the past decade has shown that community-based interventions can improve the survival and health of mothers and newborns in low- and middle-income countries. Interventions engaging women's groups in participatory learning and action meetings and other group activities, for example, have led to substantial increases in neonatal survival in high-mortality settings. Participatory interventions with women's groups work by providing a forum for communities to develop a common understanding of maternal and neonatal problems, as well as locally acceptable and sustainable strategies to address these. Potential partners for scaling up interventions with women's groups include government community health workers and volunteers, as well as organizations working with self-help groups. It is important to tailor scale-up efforts to local contexts, while retaining fidelity to the intervention, by ensuring that the mobilization of women's groups complements other local programs (e.g. home visits), and by providing capacity building for participatory learning and action methods across a range of nongovernmental organizations and government stakeholders. Research into scale-up mechanisms and effectiveness is needed to inform further implementation, and prospective surveillance of maternal and neonatal mortality in key scale-up sites can provide valuable data for measuring impact and for advocacy. There is a need for further research into the role of participatory interventions with women's groups to improve the quality of health services, health, and nutrition beyond the perinatal period, as well as the role of groups in influencing non-health issues, such as women's decision-making power. Copyright © 2012. Published by Elsevier Ireland Ltd.

  5. CONEDEP: COnvolutional Neural network based Earthquake DEtection and Phase Picking

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Huang, Y.; Yue, H.; Zhou, S.; An, S.; Yun, N.

    2017-12-01

    We developed an automatic local earthquake detection and phase picking algorithm based on a Fully Convolutional Neural network (FCN). The FCN algorithm detects and segments certain features (phases) in 3-component seismograms to realize efficient picking. We use the STA/LTA algorithm and a template matching algorithm to construct the training set from seismograms recorded 1 month before and after the Wenchuan earthquake. Precise P and S phases are identified and labeled to construct the training set. Noise data are produced by combining background noise and artificial synthetic noise to form a noise set of the same size as the signal set. Training is performed on GPUs to achieve efficient convergence. Our algorithm has significantly improved performance in terms of the detection rate and precision in comparison with STA/LTA and template matching algorithms.
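
    For reference, a minimal version of the STA/LTA characteristic function used to build training sets like this: the ratio of a short-term to a long-term moving average of signal energy, with a detection declared where the ratio crosses a threshold. Window lengths, the threshold, and the synthetic trace below are illustrative; this is not the project's implementation.

    ```python
    import numpy as np

    def moving_average(x, n):
        """Causal moving average over the previous n samples."""
        c = np.cumsum(np.insert(x, 0, 0.0))
        out = np.zeros_like(x, dtype=float)
        out[n - 1:] = (c[n:] - c[:-n]) / n
        out[:n - 1] = out[n - 1]          # pad the warm-up region with the first full value
        return out

    def sta_lta(trace, sta_len=20, lta_len=400):
        """Classic STA/LTA characteristic function on a 1-D seismogram."""
        energy = np.asarray(trace, dtype=float) ** 2
        return moving_average(energy, sta_len) / (moving_average(energy, lta_len) + 1e-12)

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        trace = rng.normal(scale=1.0, size=2000)
        trace[1200:1300] += 8.0 * np.sin(np.linspace(0, 40 * np.pi, 100))  # synthetic "event"
        ratio = sta_lta(trace)
        print("trigger sample:", int(np.argmax(ratio > 4.0)))   # roughly where the event starts
    ```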

  6. Wearable Technology for Global Surgical Teleproctoring.

    PubMed

    Datta, Néha; MacQueen, Ian T; Schroeder, Alexander D; Wilson, Jessica J; Espinoza, Juan C; Wagner, Justin P; Filipi, Charles J; Chen, David C

    2015-01-01

    In underserved communities around the world, inguinal hernias represent a significant burden of surgically-treatable disease. With traditional models of international surgical assistance limited to mission trips, a standardized framework to strengthen local healthcare systems is lacking. We established a surgical education model using web-based tools and wearable technology to allow for long-term proctoring and assessment in a resource-poor setting. This is a feasibility study examining wearable technology and web-based performance rating tools for long-term proctoring in an international setting. Using the Lichtenstein inguinal hernia repair as the index surgical procedure, local surgeons in Paraguay and Brazil were trained in person by visiting international expert trainers using a formal, standardized teaching protocol. Surgeries were captured in real-time using Google Glass and transmitted wirelessly to an online video stream, permitting real-time observation and proctoring by mentoring surgeon experts in remote locations around the world. A system for ongoing remote evaluation and support by experienced surgeons was established using the Lichtenstein-specific Operative Performance Rating Scale. Data were collected from 4 sequential training operations for surgeons trained in both Paraguay and Brazil. With continuous internet connectivity, live streaming of the surgeries was successful. The Operative Performance Rating Scale was immediately used after each operation. Both surgeons demonstrated proficiency at the completion of the fourth case. A sustainable model for surgical training and proctoring to empower local surgeons in resource-poor locations and "train trainers" is feasible with wearable technology and web-based communication. Capacity building by maximizing use of local resources and expertise offers a long-term solution to reducing the global burden of surgically-treatable disease. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  7. Anderson transition in a three-dimensional kicked rotor

    NASA Astrophysics Data System (ADS)

    Wang, Jiao; García-García, Antonio M.

    2009-03-01

    We investigate Anderson localization in a three-dimensional (3D) kicked rotor. By a finite-size scaling analysis we identify a mobility edge for a certain value of the kicking strength k = kc. For k > kc dynamical localization does not occur, all eigenstates are delocalized and the spectral correlations are well described by Wigner-Dyson statistics. This can be understood by mapping the kicked rotor problem onto a 3D Anderson model (AM) where a band of metallic states exists for sufficiently weak disorder. Around the critical region k ≈ kc we carry out a detailed study of the level statistics and quantum diffusion. In agreement with the predictions of the one parameter scaling theory (OPT) and with previous numerical simulations, the number variance is linear, level repulsion is still observed, and quantum diffusion is anomalous with ⟨p²⟩ ∝ t^(2/3). We note that in the 3D kicked rotor the dynamics is not random but deterministic. In order to estimate the differences between these two situations we have studied a 3D kicked rotor in which the kinetic term of the associated evolution matrix is random. A detailed numerical comparison shows that the differences between the two cases are relatively small. However, in the deterministic case only a small set of irrational periods was used. A qualitative analysis of a much larger set suggests that deviations between the random and the deterministic kicked rotor can be important for certain choices of periods. Heuristically it is expected that localization effects will be weaker in a nonrandom potential since destructive interference will be less effective at arresting quantum diffusion. However, we have found that certain choices of irrational periods enhance Anderson localization effects.
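
    The numerical scheme behind such studies is a split-step Floquet evolution: a kick applied in the angle representation alternates with free rotation applied in the momentum representation, the two being connected by FFTs. The hedged 1-D sketch below only illustrates this scheme and the measurement of ⟨p²⟩(t); the 1-D rotor dynamically localizes, whereas the critical t^(2/3) growth quoted above belongs to the 3-D model studied in the paper. Parameter values are arbitrary.

    ```python
    import numpy as np

    def kicked_rotor_p2(n_kicks=300, k=5.0, hbar=1.0, tau=1.0, dim=2**11):
        """Split-step Floquet evolution of the 1-D quantum kicked rotor.

        One period = kick (diagonal in angle) followed by free rotation
        (diagonal in momentum); returns <p^2> after each period.
        """
        theta = 2 * np.pi * np.arange(dim) / dim
        p = hbar * (np.arange(dim) - dim // 2)          # momentum grid
        p = np.fft.ifftshift(p)                         # match numpy FFT ordering
        kick_phase = np.exp(-1j * k * np.cos(theta) / hbar)
        free_phase = np.exp(-1j * tau * p ** 2 / (2 * hbar))
        psi_theta = np.ones(dim) / np.sqrt(dim)         # start in the p = 0 state
        p2 = []
        for _ in range(n_kicks):
            psi_theta = psi_theta * kick_phase          # kick in angle representation
            psi_p = np.fft.fft(psi_theta) / np.sqrt(dim)
            psi_p *= free_phase                         # free rotation in momentum representation
            p2.append(float(np.sum(np.abs(psi_p) ** 2 * p ** 2)))
            psi_theta = np.fft.ifft(psi_p) * np.sqrt(dim)
        return np.array(p2)

    if __name__ == "__main__":
        p2 = kicked_rotor_p2()
        print(f"<p^2> after 10 kicks: {p2[9]:.1f}, after 300 kicks: {p2[-1]:.1f}")
        # In 1-D, <p^2> saturates (dynamical localization); the anomalous
        # <p^2> ~ t^(2/3) growth is the 3-D critical behavior reported above.
    ```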

  8. Dressed photons from the viewpoint of photon localization: the entrance to the off-shell science

    NASA Astrophysics Data System (ADS)

    Saigo, Hayato; Ojima, Izumi; Ohtsu, Motoichi

    2017-12-01

    In the present paper, a new aspect of the interplay between mathematical-physical arguments and light-matter fusion technologies is examined in terms of the concept of "effective mass", starting from a question: Who has seen a free photon? Owing to the general results due to Newton-Wigner and to Wightman, a position operator is absent for massless free particles with non-zero finite spins, and hence, we cannot observe free photons in any local space regions. To solve this paradox of "photon localization", the effective mass of a photon needs to be generated through the couplings of photons with matter. Here the "polaritons" picture, a basic notion in optical and solid-state physics, is shown to support this viewpoint, which is seen to apply also to more general settings. Focusing on the role played by nanoparticles, we arrive at a new view of the notion of "dressed photons" as off-shell particles. This perspective shows that the essential mathematical structure of quantum field theory for the so-called elementary particles at the subatomic scale can also be applied to certain phenomena at the nano-scale.

  9. A user-friendly, open-source tool to project impact and cost of diagnostic tests for tuberculosis

    PubMed Central

    Dowdy, David W; Andrews, Jason R; Dodd, Peter J; Gilman, Robert H

    2014-01-01

    Most models of infectious diseases, including tuberculosis (TB), do not provide results customized to local conditions. We created a dynamic transmission model to project TB incidence, TB mortality, multidrug-resistant (MDR) TB prevalence, and incremental costs over 5 years after scale-up of nine alternative diagnostic strategies. A corresponding web-based interface allows users to specify local costs and epidemiology. In settings with little capacity for up-front investment, same-day microscopy had the greatest impact on TB incidence and became cost-saving within 5 years if delivered at $10/test. With greater initial investment, population-level scale-up of Xpert MTB/RIF or microcolony-based culture often averted 10 times more TB cases than narrowly-targeted strategies, at minimal incremental long-term cost. Xpert for smear-positive TB had reasonable impact on MDR-TB incidence, but at substantial price and little impact on overall TB incidence and mortality. This user-friendly modeling framework improves decision-makers' ability to evaluate the local impact of TB diagnostic strategies. DOI: http://dx.doi.org/10.7554/eLife.02565.001 PMID:24898755

  10. Spatial rule-based assessment of habitat potential to predict impact of land use changes on biodiversity at municipal scale.

    PubMed

    Scolozzi, Rocco; Geneletti, Davide

    2011-03-01

    In human dominated landscapes, ecosystems are under increasing pressures caused by urbanization and infrastructure development. In Alpine valleys remnant natural areas are increasingly affected by habitat fragmentation and loss. In these contexts, there is a growing risk of local extinction for wildlife populations; hence assessing the consequences on biodiversity of proposed land use changes is extremely important. The article presents a methodology to assess the impacts of land use changes on target species at a local scale. The approach relies on the application of ecological profiles of target species for habitat potential (HP) assessment, using high resolution GIS-data within a multiple level framework. The HP, in this framework, is based on a species-specific assessment of the suitability of a site, as well of surrounding areas. This assessment is performed through spatial rules, structured as sets of queries on landscape objects. We show that by considering spatial dependencies in habitat assessment it is possible to perform better quantification of impacts of local-level land use changes on habitats.

  11. Avoiding and tolerating latency in large-scale next-generation shared-memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Probst, David K.

    1993-01-01

    A scalable solution to the memory-latency problem is necessary to prevent the large latencies of synchronization and memory operations inherent in large-scale shared-memory multiprocessors from reducing high performance. We distinguish latency avoidance and latency tolerance. Latency is avoided when data is brought to nearby locales for future reference. Latency is tolerated when references are overlapped with other computation. Latency-avoiding locales include: processor registers, data caches used temporally, and nearby memory modules. Tolerating communication latency requires parallelism, allowing the overlap of communication and computation. Latency-tolerating techniques include: vector pipelining, data caches used spatially, prefetching in various forms, and multithreading in various forms. Relaxing the consistency model permits increased use of avoidance and tolerance techniques. Each model is a mapping from the program text to sets of partial orders on program operations; it is a convention about which temporal precedences among program operations are necessary. Information about temporal locality and parallelism constrains the use of avoidance and tolerance techniques. Suitable architectural primitives and compiler technology are required to exploit the increased freedom to reorder and overlap operations in relaxed models.

  12. Including local rainfall dynamics and uncertain boundary conditions into a 2-D regional-local flood modelling cascade

    NASA Astrophysics Data System (ADS)

    Bermúdez, María; Neal, Jeffrey C.; Bates, Paul D.; Coxon, Gemma; Freer, Jim E.; Cea, Luis; Puertas, Jerónimo

    2016-04-01

    Flood inundation models require appropriate boundary conditions to be specified at the limits of the domain, which commonly consist of upstream flow rate and downstream water level. These data are usually acquired from gauging stations on the river network where measured water levels are converted to discharge via a rating curve. Derived streamflow estimates are therefore subject to uncertainties in this rating curve, including extrapolating beyond the maximum observed ratings magnitude. In addition, the limited number of gauges in reach-scale studies often requires flow to be routed from the nearest upstream gauge to the boundary of the model domain. This introduces additional uncertainty, derived not only from the flow routing method used, but also from the additional lateral rainfall-runoff contributions downstream of the gauging point. Although generally assumed to have a minor impact on discharge in fluvial flood modeling, this local hydrological input may become important in a sparse gauge network or in events with significant local rainfall. In this study, a method to incorporate rating curve uncertainty and the local rainfall-runoff dynamics into the predictions of a reach-scale flood inundation model is proposed. Discharge uncertainty bounds are generated by applying a non-parametric local weighted regression approach to stage-discharge measurements for two gauging stations, while measured rainfall downstream from these locations is cascaded into a hydrological model to quantify additional inflows along the main channel. A regional simplified-physics hydraulic model is then applied to combine these inputs and generate an ensemble of discharge and water elevation time series at the boundaries of a local-scale high complexity hydraulic model. Finally, the effect of these rainfall dynamics and uncertain boundary conditions are evaluated on the local-scale model. Improvements in model performance when incorporating these processes are quantified using observed flood extent data and measured water levels from a 2007 summer flood event on the river Severn. The area of interest is a 7 km reach in which the river passes through the city of Worcester, a low water slope, subcritical reach in which backwater effects are significant. For this domain, the catchment area between flow gauging stations extends over 540 km2. Four hydrological models from the FUSE framework (Framework for Understanding Structural Errors) were set up to simulate the rainfall-runoff process over this area. At this regional scale, a 2-dimensional hydraulic model that solves the local inertial approximation of the shallow water equations was applied to route the flow, whereas the full form of these equations was solved at the local scale to predict the urban flow field. This nested approach hence allows an examination of water fluxes from the catchment to the building scale, while requiring short setup and computational times. An accurate prediction of the magnitude and timing of the flood peak was obtained with the proposed method, in spite of the unusual structure of the rain episode and the complexity of the River Severn system. The findings highlight the importance of estimating boundary condition uncertainty and local rainfall contribution for accurate prediction of river flows and inundation.
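
    The rating-curve step can be sketched as a locally weighted (LOWESS-style) regression of discharge on stage, with a crude residual-based uncertainty band. This is a generic stand-in under stated assumptions, not the specific non-parametric estimator or the FUSE/hydraulic model chain used in the study; all data and parameters below are synthetic.

    ```python
    import numpy as np

    def local_weighted_fit(stage_obs, q_obs, stage_eval, span=0.5):
        """Locally weighted linear regression (LOWESS-style) of discharge on stage.

        For each evaluation stage, a weighted least-squares line is fitted using
        a tricube kernel over the `span` fraction of nearest observations.
        """
        stage_obs, q_obs = np.asarray(stage_obs, float), np.asarray(q_obs, float)
        n_local = max(2, int(span * len(stage_obs)))
        out = []
        for s in np.atleast_1d(stage_eval):
            d = np.abs(stage_obs - s)
            idx = np.argsort(d)[:n_local]
            w = (1 - (d[idx] / (d[idx].max() + 1e-12)) ** 3) ** 3   # tricube weights
            sw = np.sqrt(w)                                         # sqrt-weights for weighted LS
            A = np.vstack([np.ones(n_local), stage_obs[idx]]).T
            coef, *_ = np.linalg.lstsq(A * sw[:, None], q_obs[idx] * sw, rcond=None)
            out.append(coef[0] + coef[1] * s)
        return np.array(out)

    if __name__ == "__main__":
        rng = np.random.default_rng(8)
        stage = np.sort(rng.uniform(0.5, 4.0, 60))                     # observed stages (m)
        q = 15.0 * stage ** 1.6 * rng.lognormal(sigma=0.08, size=60)   # noisy ratings (m3/s)
        grid = np.linspace(0.6, 3.9, 5)
        fit = local_weighted_fit(stage, q, grid)
        resid_sd = np.std(q - local_weighted_fit(stage, q, stage))     # crude in-sample residual spread
        for s, f in zip(grid, fit):
            print(f"stage {s:.2f} m: Q = {f:6.1f} m3/s  (±{1.96 * resid_sd:.1f} crude 95% band)")
    ```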

  13. Disjunctive Normal Shape and Appearance Priors with Applications to Image Segmentation.

    PubMed

    Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga

    2015-10-01

    The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. Active shape and appearance models require landmark points and assume unimodal shape and appearance distributions. Level set based shape priors are limited to global shape similarity. In this paper, we present novel shape and appearance priors for image segmentation based on an implicit parametric shape representation called the disjunctive normal shape model (DNSM). DNSM is formed by disjunction of conjunctions of half-spaces defined by discriminants. We learn shape and appearance statistics at varying spatial scales using nonparametric density estimation. Our method can generate a rich set of shape variations by locally combining training shapes. Additionally, by studying the intensity and texture statistics around each discriminant of our shape model, we construct a local appearance probability map. Experiments carried out on both medical and natural image datasets show the potential of the proposed method.
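
    The core DNSM representation, a disjunction of conjunctions of half-spaces defined by linear discriminants, can be sketched with relaxed (sigmoid) indicators: each polytope is the product of its half-space indicators and the shape is their noisy-OR. The 2-D example and the sharpness factor below are illustrative assumptions, not the paper's parameterization.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def dnsm(points, discriminants):
        """Disjunctive normal shape model evaluated at a set of 2-D points.

        discriminants: array of shape (n_polytopes, n_halfspaces, 3) holding the
        (a, b, c) of linear discriminants a*x + b*y + c. Each polytope is the
        conjunction (product) of its relaxed half-space indicators, and the
        shape is the disjunction (noisy-OR) of the polytopes.
        """
        x, y = points[:, 0], points[:, 1]
        conj = []
        for poly in discriminants:
            h = sigmoid(poly[:, 0, None] * x + poly[:, 1, None] * y + poly[:, 2, None])
            conj.append(np.prod(h, axis=0))          # conjunction of half-spaces
        conj = np.array(conj)
        return 1.0 - np.prod(1.0 - conj, axis=0)     # disjunction of conjunctions

    if __name__ == "__main__":
        # Two axis-aligned boxes as polytopes; a sharp scale factor makes the
        # relaxed half-space indicators nearly binary.
        s = 20.0
        box = lambda x0, x1, y0, y1: np.array(
            [[s, 0, -s * x0], [-s, 0, s * x1], [0, s, -s * y0], [0, -s, s * y1]])
        shape = np.stack([box(0, 1, 0, 1), box(2, 3, 0, 1)])
        pts = np.array([[0.5, 0.5], [2.5, 0.5], [1.5, 0.5]])
        print(np.round(dnsm(pts, shape), 3))          # ~[1, 1, 0]: inside, inside, outside
    ```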

  14. Fast and efficient indexing approach for object recognition

    NASA Astrophysics Data System (ADS)

    Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi

    1999-08-01

    This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme uses a unified image feature detection approach based on Zernike moments. A set of low-level features, e.g. high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then presented by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.

  15. Comparative analysis and visualization of multiple collinear genomes

    PubMed Central

    2012-01-01

    Background Genome browsers are a common tool used by biologists to visualize genomic features including genes, polymorphisms, and many others. However, existing genome browsers and visualization tools are not well-suited to perform meaningful comparative analysis among a large number of genomes. With the increasing quantity and availability of genomic data, there is an increased burden to provide useful visualization and analysis tools for comparison of multiple collinear genomes such as the large panels of model organisms which are the basis for much of the current genetic research. Results We have developed a novel web-based tool for visualizing and analyzing multiple collinear genomes. Our tool illustrates genome-sequence similarity through a mosaic of intervals representing local phylogeny, subspecific origin, and haplotype identity. Comparative analysis is facilitated through reordering and clustering of tracks, which can vary throughout the genome. In addition, we provide local phylogenetic trees as an alternate visualization to assess local variations. Conclusions Unlike previous genome browsers and viewers, ours allows for simultaneous and comparative analysis. Our browser provides intuitive selection and interactive navigation about features of interest. Dynamic visualizations adjust to scale and data content making analysis at variable resolutions and of multiple data sets more informative. We demonstrate our genome browser for an extensive set of genomic data sets composed of almost 200 distinct mouse laboratory strains. PMID:22536897

  16. Modeling habitat for Marbled Murrelets on the Siuslaw National Forest, Oregon, using lidar data

    USGS Publications Warehouse

    Hagar, Joan C.; Aragon, Ramiro; Haggerty, Patricia; Hollenbeck, Jeff P.

    2018-03-28

    Habitat models using lidar-derived variables that quantify fine-scale variation in vegetation structure can improve the accuracy of occupancy estimates for canopy-dwelling species over models that use variables derived from other remote sensing techniques. However, the ability of models developed at such a fine spatial scale to maintain accuracy at regional or larger spatial scales has not been tested. We tested the transferability of a lidar-based habitat model for the threatened Marbled Murrelet (Brachyramphus marmoratus) between two management districts within a larger regional conservation zone in coastal western Oregon. We compared the performance of the transferred model against models developed with data from the application location. The transferred model had good discrimination (AUC = 0.73) at the application location, and model performance was further improved by fitting the original model with coefficients from the application location dataset (AUC = 0.79). However, the model selection procedure indicated that neither of these transferred models was competitive with a model trained on local data. The new model trained on data from the application location resulted in the selection of a slightly different set of lidar metrics from the original model, but both transferred and locally trained models consistently indicated positive relationships between the probability of occupancy and lidar measures of canopy structural complexity. We conclude that while the locally trained model had superior performance for local application, the transferred model could reasonably be applied to the entire conservation zone.
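
    The transfer test has a simple generic skeleton, sketched below with simulated occupancy data and hypothetical lidar metrics (canopy height and rumple): fit an occupancy model in one district, then compare its discrimination (AUC) at the application district against a locally trained benchmark. This is not the study's model or data.

```python
# Schematic model-transfer check on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_district(n, shift=0.0):
    # Hypothetical lidar metrics: canopy height (m) and rumple (complexity)
    X = np.column_stack([rng.normal(40 + shift, 10, n), rng.normal(2.0, 0.5, n)])
    logit = -6.0 + 0.08 * X[:, 0] + 1.2 * X[:, 1]  # occupancy rises with structure
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))
    return X, y.astype(int)

X_train, y_train = make_district(400)            # training district
X_apply, y_apply = make_district(400, shift=5)   # application district

model = LogisticRegression().fit(X_train, y_train)

# Transferred model evaluated at the application location
auc_transfer = roc_auc_score(y_apply, model.predict_proba(X_apply)[:, 1])
# Locally trained benchmark (evaluated in-sample here, for brevity)
local = LogisticRegression().fit(X_apply, y_apply)
auc_local = roc_auc_score(y_apply, local.predict_proba(X_apply)[:, 1])
print(f"transferred AUC = {auc_transfer:.2f}, locally trained AUC = {auc_local:.2f}")
```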

  17. Landscape drivers of regional variation in the relationship between total phosphorus and chlorophyll in lakes

    USGS Publications Warehouse

    Wagner, Tyler; Soranno, Patricia A.; Webster, Katherine E.; Cheruvelil, Kendra Spence

    2011-01-01

    1. For north temperate lakes, the well-studied empirical relationship between phosphorus (as measured by total phosphorus, TP), the most commonly limiting nutrient, and algal biomass (as measured by chlorophyll a, CHL) has been found to vary across a wide range of landscape settings. Variation in the parameters of these TP–CHL regressions has been attributed to such lake variables as nitrogen/phosphorus ratios, organic carbon and alkalinity, all of which are strongly related to catchment characteristics (e.g. natural land cover and human land use). Although this suggests that landscape setting can help to explain much of the variation in ecoregional TP–CHL regression parameters, few studies have attempted to quantify relationships at an ecoregional spatial scale. 2. We tested the hypothesis that lake algal biomass and its predicted response to changes in phosphorus are related to both local-scale features (e.g. lake and catchment) and ecoregional-scale features, all of which affect the availability and transport of covarying solutes such as nitrogen, organic carbon and alkalinity. Specifically, we expected that land use and cover, acting at both local and ecoregional scales, would partially explain the spatial pattern in parameters of the TP–CHL regression. 3. We used a multilevel modelling framework and data from 2105 inland lakes spanning 35 ecoregions in six US states to test our hypothesis and identify specific local and ecoregional features that explain spatial heterogeneity in TP–CHL relationships. We include variables such as lake depth, natural land cover (for instance, wetland cover in the catchment of lakes and in the ecoregions) and human land use (for instance, agricultural land use in the catchment of lakes and in the ecoregions). 4. There was substantial heterogeneity in TP–CHL relationships across the 35 ecoregions. At the local scale, CHL was negatively and positively related to lake mean depth and percentage of wooded wetlands in the catchment, respectively. At the ecoregional scale, the slope parameter was positively related to the percentage of pasture in an ecoregion, indicating that CHL tends to respond more rapidly to changes in TP where there are high levels of agricultural pasture than where there is little. The intercept (i.e. the ecoregion-average CHL) was negatively related to the percentage of wooded wetlands in the ecoregion. 5. By explicitly accounting for the hierarchical nature of lake–landscape interactions, we quantified the effects of landscape characteristics on the response of CHL to TP at two spatial scales. We provide new insight into ecoregional drivers of the rate at which algal biomass responds to changes in nutrient concentrations. Our results also indicate that the direction and magnitude of the effects of certain land use and cover characteristics on lake nutrient dynamics may be scale dependent and thus likely to represent different underlying mechanisms regulating lake productivity.
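
    The multilevel structure described in point 3 can be written down compactly. The sketch below uses simulated lakes and ecoregions with hypothetical covariate names; it fits random TP slopes and intercepts by ecoregion with a cross-level pasture interaction, in the spirit of, but much simpler than, the study's model.

```python
# Schematic two-level (lake within ecoregion) regression on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_eco, n_lake = 35, 60
rows = []
for e in range(n_eco):
    pasture = rng.uniform(0, 40)                       # % pasture in ecoregion
    slope = 0.8 + 0.01 * pasture + rng.normal(0, 0.1)  # CHL responds faster to TP
    intercept = 0.5 + rng.normal(0, 0.2)
    for _ in range(n_lake):
        log_tp = rng.normal(1.2, 0.5)
        depth = rng.uniform(2, 30)
        log_chl = intercept + slope * log_tp - 0.02 * depth + rng.normal(0, 0.3)
        rows.append(dict(eco=e, log_tp=log_tp, depth=depth,
                         pasture=pasture, log_chl=log_chl))
df = pd.DataFrame(rows)

# Random intercepts and random log(TP) slopes by ecoregion, with a
# cross-level interaction: ecoregion pasture modifies the TP-CHL slope.
md = smf.mixedlm("log_chl ~ log_tp * pasture + depth", df,
                 groups="eco", re_formula="~log_tp")
print(md.fit().summary())
```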

  18. Precision Scaling Relations for Disk Galaxies in the Local Universe

    NASA Astrophysics Data System (ADS)

    Lapi, A.; Salucci, P.; Danese, L.

    2018-05-01

    We build templates of rotation curves as a function of the I-band luminosity via the mass modeling (by the sum of a thin exponential disk and a cored halo profile) of suitably normalized, stacked data from wide samples of local spiral galaxies. We then exploit such templates to determine fundamental stellar and halo properties for a sample of about 550 local disk-dominated galaxies with high-quality measurements of the optical radius R_opt and of the corresponding rotation velocity V_opt. Specifically, we determine the stellar M_⋆ and halo M_H masses, the halo size R_H and velocity scale V_H, and the specific angular momenta of the stellar j_⋆ and dark matter j_H components. We derive global scaling relationships involving such stellar and halo properties both for the individual galaxies in our sample and for their mean within bins; the latter are found to be in pleasing agreement with previous determinations by independent methods (e.g., abundance matching techniques, weak-lensing observations, and individual rotation curve modeling). Remarkably, the size of our sample and the robustness of our statistical approach allow us to attain an unprecedented level of precision over an extended range of mass and velocity scales, with 1σ dispersion around the mean relationships of less than 0.1 dex. We thus set new standard local relationships that must be reproduced by detailed physical models, which offer a basis for improving the subgrid recipes in numerical simulations, that provide a benchmark to gauge independent observations and check for systematics, and that constitute a basic step toward the future exploitation of the spiral galaxy population as a cosmological probe.
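
    A minimal sketch of the mass decomposition named above, with illustrative parameter values only: the circular velocity is modeled as a thin exponential (Freeman) disk plus a cored halo (a pseudo-isothermal profile is used here as one common choice), added in quadrature. This is a generic textbook decomposition, not the paper's fitting code.

```python
# Schematic disk + cored-halo rotation curve; parameters are illustrative.
import numpy as np
from scipy.special import i0, i1, k0, k1

G = 4.301e-6  # kpc (km/s)^2 / Msun

def v_disk(r, m_disk, r_d):
    """Freeman thin exponential disk circular velocity [km/s]."""
    y = r / (2.0 * r_d)
    return np.sqrt(2.0 * G * m_disk / r_d * y**2
                   * (i0(y) * k0(y) - i1(y) * k1(y)))

def v_halo(r, rho0, r_c):
    """Cored pseudo-isothermal halo circular velocity [km/s]."""
    return np.sqrt(4.0 * np.pi * G * rho0 * r_c**2
                   * (1.0 - (r_c / r) * np.arctan(r / r_c)))

def v_total(r, m_disk=3e10, r_d=3.0, rho0=1e7, r_c=5.0):
    # Contributions add in quadrature for the combined potential
    return np.sqrt(v_disk(r, m_disk, r_d)**2 + v_halo(r, rho0, r_c)**2)

r_opt = 3.2 * 3.0  # a conventional optical radius for a 3 kpc disk scale length
print(f"V at r_opt ~ {v_total(r_opt):.0f} km/s")
```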

  19. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.
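
    For reference, the linear-theory relation behind such dipole and velocity comparisons can be stated schematically (a textbook form, not an equation quoted from the paper):

```latex
% Schematic linear-theory relation: peculiar velocity of the Local Group induced
% by mass fluctuations delta within a sphere of radius R centred on it,
\[
  \mathbf{v}(R) \;=\; \frac{H_0\, f(\Omega)}{4\pi}
  \int_{|\mathbf{r}|<R} \mathrm{d}^3 r \;
  \delta(\mathbf{r})\, \frac{\hat{\mathbf{r}}}{r^{2}},
  \qquad f(\Omega)\simeq\Omega^{0.6}.
\]
% When delta is traced by galaxies with linear bias b, f is replaced by
% beta = f/b. Convergence of v(R) with increasing R probes the large-scale
% power in the fluctuation spectrum, which is what the survey comparison tests.
```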

  20. Natural disasters and population mobility in Bangladesh.

    PubMed

    Gray, Clark L; Mueller, Valerie

    2012-04-17

    The consequences of environmental change for human migration have gained increasing attention in the context of climate change and recent large-scale natural disasters, but as yet relatively few large-scale and quantitative studies have addressed this issue. We investigate the consequences of climate-related natural disasters for long-term population mobility in rural Bangladesh, a region particularly vulnerable to environmental change, using longitudinal survey data from 1,700 households spanning a 15-y period. Multivariate event history models are used to estimate the effects of flooding and crop failures on local population mobility and long-distance migration while controlling for a large set of potential confounders at various scales. The results indicate that flooding has modest effects on mobility that are most visible at moderate intensities and for women and the poor. However, crop failures unrelated to flooding have strong effects on mobility in which households that are not directly affected but live in severely affected areas are the most likely to move. These results point toward an alternate paradigm of disaster-induced mobility that recognizes the significant barriers to migration for vulnerable households as well as their substantial local adaptive capacity.

  1. Estimation of Fractional Plant Lifeform Cover Using Landsat and Airborne LiDAR/hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Parra, A. S.; Xu, Q.; Dilts, T.; Weisberg, P.; Greenberg, J. A.

    2017-12-01

    Land-cover change has generally been understood as the result of local, landscape or regional-scale processes with most studies focusing on case-study landscapes or smaller regions. However, as we observe similar types of land-cover change occurring across different biomes worldwide, it becomes clear that global-scale processes such as climate change and CO2 fertilization, in interaction with local influences, are underlying drivers in land-cover change patterns. Prior studies on global land-cover change may not have had a suitable spatial, temporal and thematic resolution for allowing the identification of such patterns. Furthermore, the lack of globally consistent spatial data products also constitutes a limiting factor in evaluating both proximate and ultimate causes of land-cover change. In this study, we derived a global model for broadleaf tree, needleleaf tree, shrub, herbaceous, and "other" fractional cover using Landsat imagery. Combined LiDAR/hyperspectral data sets were used for calibration and validation of the Landsat-derived products. Spatially explicit uncertainties were also created as part of the data products. Our results highlight the potential for large-scale studies that model local and global influences on land-cover transition types and rates at fine thematic, spatial, and temporal resolutions. These spatial data products are relevant for identifying patterns in land-cover change due to underlying global-scale processes and can provide valuable insights into climatic and land-use factors determining vegetation distributions.
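
    A minimal sketch of the calibration step described above, using simulated reference fractions in place of the airborne LiDAR/hyperspectral labels and generic band predictors: a multi-output regression predicts per-lifeform fractional cover and is renormalized so the fractions sum to one. The study's actual estimator and predictors may differ.

```python
# Schematic fractional-cover calibration on simulated data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000
X = rng.uniform(0, 1, size=(n, 6))        # stand-in for 6 Landsat band values

# Reference fractions (broadleaf, needleleaf, shrub, herb, other); here
# simulated so that each pixel's fractions sum to 1.
raw = np.exp(X @ rng.normal(size=(6, 5)))
y = raw / raw.sum(axis=1, keepdims=True)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

pred = np.clip(model.predict(X_te), 0, None)
pred /= pred.sum(axis=1, keepdims=True)   # renormalize to valid fractions
rmse = np.sqrt(((pred - y_te) ** 2).mean(axis=0))
print("per-lifeform RMSE:", np.round(rmse, 3))
```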

  2. Resilience and Professional Quality of Life in Staff Working with People with Intellectual Disabilities and Offending Behavior in Community Based and Institutional Settings.

    PubMed

    Søndenaa, Erik; Lauvrud, Christian; Sandvik, Marita; Nonstad, Kåre; Whittington, Richard

    2013-01-02

    Staff in forensic services for people with intellectual disabilities (ID) are expected to deal with a wide range of emotional challenges when providing care. The potential impact of this demanding work has not been systematically explored previously. This article explores the professional quality of life (QoL) and the resilience (hardiness) of the staff in this setting. The Professional QoL questionnaire and the Dispositional Resilience Scale were completed by staff (n=85, 80% response rate) in the Norwegian forensic service for ID offenders. Responses from staff working in institutional settings were compared to those from staff in local community services. Staff in the local community services had higher resilience scores compared to the staff in the institutional setting (t=2.19; P<0.05). However, in the other QoL and resilience domains there were no differences between the staff in the two settings. The greater sense of resilient control among community staff may be a function of both the number of service users they work with and the institutional demands they face. Even though these participants worked with relatively high-risk clients, they did not report significantly impaired quality of life compared to other occupations.

  3. Resilience and Professional Quality of Life in Staff Working with People with Intellectual Disabilities and Offending Behavior in Community Based and Institutional Settings

    PubMed Central

    Søndenaa, Erik; Lauvrud, Christian; Sandvik, Marita; Nonstad, Kåre; Whittington, Richard

    2013-01-01

    Staff in forensic services for people with intellectual disabilities (ID) are expected to deal with a wide range of emotional challenges when providing care. The potential impact of this demanding work has not been systematically explored previously. This article explores the professional quality of life (QoL) and the resilience (hardiness) of the staff in this setting. The Professional QoL questionnaire and the Dispositional Resilience Scale were completed by staff (n=85, 80% response rate) in the Norwegian forensic service for ID offenders. Responses from staff working in institutional settings were compared to those from staff in local community services. Staff in the local community services had higher resilience scores compared to the staff in the institutional setting (t=2.19; P<0.05). However, in the other QoL and resilience domains there were no differences between the staff in the two settings. The greater sense of resilient control among community staff may be a function of both the number of service users they work with and the institutional demands they face. Even though these participants worked with relatively high-risk clients, they did not report significantly impaired quality of life compared to other occupations. PMID:26973892

  4. Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review.

    PubMed

    Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J; Mojaza, Matin

    2015-12-01

    A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme--this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale setting approaches suggested in the literature. As a step forward, in the present review, we present a discussion in depth of two well-established scale-setting methods based on RGI. One is the 'principle of maximum conformality' (PMC) in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the 'principle of minimum sensitivity' (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables R(e+e-) and Γ(H → bb̄) up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically-motivated scale, and each prediction shows small scale dependences. However, the convergence of the pQCD series at high orders behaves quite differently: the PMC displays the best pQCD convergence since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on the choice of initial scale is highly suppressed even for low-order predictions. Thus the PMC, based on the standard RGI, has a rigorous foundation; it eliminates an unnecessary systematic error for high precision pQCD predictions and can be widely applied to virtually all high-energy hadronic processes, including multi-scale problems.

  5. Renormalization group invariance and optimal QCD renormalization scale-setting: a key issues review

    NASA Astrophysics Data System (ADS)

    Wu, Xing-Gang; Ma, Yang; Wang, Sheng-Quan; Fu, Hai-Bing; Ma, Hong-Hao; Brodsky, Stanley J.; Mojaza, Matin

    2015-12-01

    A valid prediction for a physical observable from quantum field theory should be independent of the choice of renormalization scheme—this is the primary requirement of renormalization group invariance (RGI). Satisfying scheme invariance is a challenging problem for perturbative QCD (pQCD), since a truncated perturbation series does not automatically satisfy the requirements of the renormalization group. In a previous review, we provided a general introduction to the various scale setting approaches suggested in the literature. As a step forward, in the present review, we present a discussion in depth of two well-established scale-setting methods based on RGI. One is the ‘principle of maximum conformality’ (PMC) in which the terms associated with the β-function are absorbed into the scale of the running coupling at each perturbative order; its predictions are scheme and scale independent at every finite order. The other approach is the ‘principle of minimum sensitivity’ (PMS), which is based on local RGI; the PMS approach determines the optimal renormalization scale by requiring the slope of the approximant of an observable to vanish. In this paper, we present a detailed comparison of the PMC and PMS procedures by analyzing two physical observables R(e+e-) and Γ(H → bb̄) up to four-loop order in pQCD. At the four-loop level, the PMC and PMS predictions for both observables agree within small errors with those of conventional scale setting assuming a physically-motivated scale, and each prediction shows small scale dependences. However, the convergence of the pQCD series at high orders behaves quite differently: the PMC displays the best pQCD convergence since it eliminates divergent renormalon terms; in contrast, the convergence of the PMS prediction is questionable, often even worse than the conventional prediction based on an arbitrary guess for the renormalization scale. PMC predictions also have the property that any residual dependence on the choice of initial scale is highly suppressed even for low-order predictions. Thus the PMC, based on the standard RGI, has a rigorous foundation; it eliminates an unnecessary systematic error for high precision pQCD predictions and can be widely applied to virtually all high-energy hadronic processes, including multi-scale problems.

  6. Understanding the spatial complexity of surface hoar from slope to range scale

    NASA Astrophysics Data System (ADS)

    Hendrikx, J.

    2015-12-01

    Surface hoar, once buried, is a common weak layer type in avalanche accidents in continental and intermountain snowpacks around the world. Despite this, there is still limited understanding of the spatial variability in both the formation and the eventual burial of surface hoar at the spatial scales which are of critical importance to avalanche forecasters. While it is relatively well understood that aspect plays an important role in the spatial location of the formation and burial of these grain forms, due to the unequal distribution of incoming radiation, this factor alone does not explain the complex and often confusing spatial pattern of these grain forms throughout the landscape at different spatial scales. In this paper we present additional data from a unique data set including over two hundred days of manual observations of surface hoar at sixteen locations on Pioneer Mountain at the Yellowstone Club in southwestern Montana. Using this wealth of observational data located on different aspects, elevations and exposures, coupled with detailed meteorological observations and detailed slope-scale observations, we examine the spatial variability of surface hoar at this scale, and examine the factors that control its spatial distribution. Our results further support our preliminary work, which shows that small-scale slope conditions, meteorological differences, and local-scale lapse rates can greatly influence the spatial variability of surface hoar, over and above that which aspect alone can explain. These results highlight our incomplete understanding of the processes at both the slope and range scale, and are likely to have implications for both regional and local-scale avalanche forecasting in environments where surface hoar causes ongoing instabilities.

  7. Developing partnerships for implementing continental-scale citizen science programs at the local-level

    NASA Astrophysics Data System (ADS)

    Newman, S. J.; Henderson, S.; Ward, D.

    2012-12-01

    Project BudBurst is a citizen science project focused on monitoring plant phenology that resides at the National Ecological Observatory Network (NEON, Inc). A central question for Project BudBurst and other national outreach programs is: what are the most effective means of engaging and connecting with diverse communities throughout the country? How can continental-scale programs like NEON's Project BudBurst engage audiences in such a way as to be relevant at both the local and continental scales? Staff with Project BudBurst pursued partnerships with several continental-scale organizations: the National Wildlife Refuge System, the National Park Service, and botanic gardens to address these questions. The distributed nature of wildlife refuges, national parks, and botanic gardens around the country provided the opportunity to connect with participants locally while working with leadership at multiple scales. Project BudBurst staff talked with hundreds of staff and volunteers prior to setting a goal of obtaining and developing resources for several Refuge Partners, a pilot National Park partner, and an existing botanic garden partner during 2011. We were especially interested in learning best practices for future partnerships. The partnership efforts resulted in resource development for 12 Refuge partners, a pilot National Park partner, and 2 botanic garden partners. Early on, the importance of working with national-level leaders to develop ownership of the partner program and input about resource needs became apparent. Once a framework for the partnership program was laid out, it became critical to work closely with staff and volunteers on the ground to ensure needs were met. In 2012 we began to develop an online assessment to allow our current and potential partners to provide feedback about whether or not the partnership program was meeting their needs and how the program could be improved. As the year progressed, the timeline for resource development became more of a suggestion than a set schedule. Maintaining flexibility was critical to the success of the partnerships. Unanticipated fieldwork, new priorities within organizations, and differing levels of involvement from partner staff, advisory boards, or Friends groups, led to varying resource development timelines. The distributed nature of the partner organizations, and the willingness of partner staff and volunteers to implement Project BudBurst at their facilities, have broadened the participation of the public in this program more than could have been accomplished alone. The new partners benefit from the free and customized education and outreach materials provided by Project BudBurst, while Project BudBurst benefits from the local knowledge and contacts with the public from the partner organizations.

  8. Effect of deformation induced nucleation and phase mixing, a two phase model for the ductile deformation of rocks.

    NASA Astrophysics Data System (ADS)

    Bevillard, Benoit; Richard, Guillaume; Raimbourg, Hugues

    2017-04-01

    Rocks are complex materials, and their rheological behavior under geological stresses in particular remains a long-standing question in geodynamics. Numerical modeling is the main tool for testing large-scale lithosphere dynamics, but it encounters substantial difficulties in accounting for this complexity. One major unknown is the origin and development of the localization of deformation. This localization is observed within a large range of scales and is commonly characterized by sharp grain size reduction. These considerations argue for a control of the microscopic scale over the largest ones through one predominant variable: the mean grain size. However, the presence of a second phase and a broad grain-size distribution may also have an important impact on this phenomenon. To address this question, we built a model for ductile rock deformation based on the two-phase damage theory of Bercovici & Ricard 2012. We aim to investigate the role of grain-size reduction, but also of phase mixing, on strain localization. Instead of considering a Zener-pinning effect on damage evolution, we propose to take into account the effect of the grain-boundary sliding (GBS)-induced nucleation mechanism, which is better supported by experimental and natural observations (Precigout et al 2016). This continuum theory allows a two-mineral-phase aggregate to be represented with an explicit log-normal grain-size distribution, a reasonable approximation for polymineralic rocks. Quantifying microscopic variables using a statistical approach may allow for calibration at small (experimental) scale. The general set of evolution equations remains up-scalable provided some conditions on the homogenization scale. Using the interface density as a measure of mixture quality, we assume, unlike Bercovici & Ricard 2012, that it may depend in part on grain size. The grain-size-independent part is represented by a "contact fraction" variable, whose evolution may be constrained by the dominant deformation mechanism. To derive the related evolution equations and account for the interdependence of thermodynamic state variables, we use Onsager's thermodynamic extremum principle. Finally, we solve our set of equations using an Anorthite/Pyroxene gabbroic composition. The results are used to discuss the interaction between grain-size reduction and phase mixing on strain localization in several simple cases. Bercovici D, Ricard Y (2012) Mechanisms for the generation of plate tectonics by two phase grain damage and pinning. Physics of the Earth and Planetary Interiors 202-203:27-55 Precigout J, Stunitz H (2016) Evidence of phase nucleation during olivine diffusion creep: A new perspective for mantle strain localisation. Earth and Planetary Science Letters 405:94-105

  9. Vanishing-Overhead Linear-Scaling Random Phase Approximation by Cholesky Decomposition and an Attenuated Coulomb-Metric.

    PubMed

    Luenser, Arne; Schurkus, Henry F; Ochsenfeld, Christian

    2017-04-11

    A reformulation of the random phase approximation within the resolution-of-the-identity (RI) scheme is presented that is competitive with canonical molecular orbital RI-RPA already for small- to medium-sized molecules. For electronically sparse systems, drastic speedups due to the reduced scaling behavior compared to the molecular orbital formulation are demonstrated. Our reformulation is based on two ideas, which are independently useful: First, a Cholesky decomposition of density matrices that reduces the scaling with basis set size for a fixed-size molecule by one order, leading to massive performance improvements. Second, replacement of the overlap RI metric used in the original AO-RPA by an attenuated Coulomb metric. Accuracy is significantly improved compared to the overlap metric, while locality and sparsity of the integrals are retained, as is the effective linear scaling behavior.
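
    The first ingredient, a rank-revealing Cholesky factorization, is easy to illustrate. The sketch below is a dense, purely pedagogical pivoted Cholesky applied to a toy low-rank positive semidefinite matrix; the paper's method exploits this idea for sparse atomic-orbital density matrices at much larger scale.

```python
# Toy pivoted (rank-revealing) Cholesky factorization, for illustration only.
import numpy as np

def pivoted_cholesky(A, tol=1e-8):
    """Return L (n x k) with A ~= L @ L.T for a symmetric PSD matrix A."""
    n = A.shape[0]
    d = np.diag(A).copy()        # running residual diagonal
    L = np.zeros((n, n))
    rank = 0
    for k in range(n):
        j = int(np.argmax(d))
        if d[j] <= tol:          # remaining diagonal is negligible: stop early
            break
        L[:, k] = (A[:, j] - L[:, :k] @ L[j, :k]) / np.sqrt(d[j])
        d -= L[:, k] ** 2
        rank += 1
    return L[:, :rank]

# Toy PSD matrix of low numerical rank (rank 5 in a 50-dimensional space)
rng = np.random.default_rng(3)
B = rng.normal(size=(50, 5))
A = B @ B.T
L = pivoted_cholesky(A)
print(L.shape, np.allclose(A, L @ L.T, atol=1e-6))
```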

  10. Woody plant encroachment of grasslands: a comparison of terrestrial and wetland settings.

    PubMed

    Saintilan, Neil; Rogers, Kerrylee

    2015-02-01

    A global trend of woody plant encroachment of terrestrial grasslands is coincident with woody plant encroachment of wetlands in freshwater and saline intertidal settings. There are several arguments for considering tree encroachment of wetlands in the context of woody shrub encroachment of grassland biomes. In both cases, delimitation of woody shrubs at regional scales is set by temperature thresholds for poleward extent, and by aridity within temperature limits. Latitudinal expansion has been observed for terrestrial woody shrubs and mangroves, following recent warming, but most expansion and thickening has been due to the occupation of previously water-limited grassland/saltmarsh environments. Increases in atmospheric CO₂ may facilitate the recruitment of trees in terrestrial and wetland settings. Improved water relations, a mechanism that would predict higher soil moisture in grasslands and saltmarshes, and also an enhanced capacity to survive arid conditions, reinforce local mechanisms of change. The expansion of woody shrubs and mangroves provides a negative feedback on elevated atmospheric CO₂ by increasing carbon sequestration in grassland and saltmarsh, and is a significant carbon sink globally. These broad-scale vegetation shifts may represent a new stable state, reinforced by positive feedbacks between global change drivers and endogenic mechanisms of persistence in the landscape.

  11. A comparison of remote vs. local influence of El Niño on the coastal circulation of the northeast Pacific

    NASA Astrophysics Data System (ADS)

    Hermann, Albert J.; Curchitser, Enrique N.; Haidvogel, Dale B.; Dobbins, Elizabeth L.

    2009-12-01

    A set of spatially nested circulation models is used to explore interannual change in the northeast Pacific (NEP) during 1997-2002, and remote vs. local influence of the 1997-1998 El Niño on this region. Our nested set is based on the primitive equations of motion, and includes a basin-scale model of the north Pacific at ˜40-km resolution (NPac), and a regional model of the Northeast Pacific at ˜10-km resolution. The NEP model spans an area from Baja California through the Bering Sea, from the coast to ˜2000-km offshore. In this context, "remote influence" refers to effects driven by changes in ocean velocity and temperature outside of the NEP domain; "local influence" refers to direct forcing by winds and runoff within the NEP domain. A base run of this model using hindcast winds and runoff for 1996-2002 replicates the dominant spatial modes of sea-surface height anomalies from satellite data, and coastal sea level from tide gauges. We have performed a series of sensitivity runs with the NEP model for 1997-1998, which analyze the response of coastal sea level to: (1) hindcast winds and coastal runoff, as compared to their monthly climatologies and (2) hindcast boundary conditions (from the NPac model), as compared to their monthly climatologies. Results indicate penetration of sea-surface height (SSH) from the basin-scale model into the NEP domain (e.g., remote influence), with propagation as coastal trapped waves from Baja up through Alaska. Most of the coastal sea-level anomaly off Alaska in El Niño years appears due to direct forcing by local winds and runoff (local influence), and such anomalies are much stronger than those produced off California. We quantify these effects as a function of distance along the coastline, and consider how they might impact the coastal ecosystems of the NEP.

  12. Meteorological impact assessment of possible large scale irrigation in Southwest Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Ter Maat, H. W.; Hutjes, R. W. A.; Ohba, R.; Ueda, H.; Bisselink, B.; Bauer, T.

    2006-11-01

    On continental to regional scales, feedbacks between land use and land cover change and climate have been widely documented over the past 10-15 years. In the present study we explore the possibility that vegetation changes over much smaller areas may also affect local precipitation regimes. Large-scale (~10⁵ ha) irrigated plantations in semi-arid environments may, under particular conditions, affect local circulations and induce additional rainfall. Capturing this rainfall 'surplus' could then reduce the need for external irrigation sources and eventually lead to self-sustained water cycling. This concept is studied in the coastal plains in South West Saudi Arabia, where the mountains of the Asir region exhibit the highest rainfall of the peninsula due to orographic lifting and condensation of moisture imported with the Indian Ocean monsoon and with disturbances from the Mediterranean Sea. We use a regional atmospheric modeling system (RAMS) forced by ECMWF analysis data to resolve the effect of complex surface conditions at high resolution (Δx = 4 km). After validation, these simulations are analysed with a focus on the role of local processes (sea breezes, orographic lifting and the formation of fog in the coastal mountains) in generating rainfall, and on how these will be affected by large-scale irrigated plantations in the coastal desert. The validation showed that the model simulates the regional and local weather reasonably well. The simulations exhibit a slightly larger diurnal temperature range than that captured by the observations, but seem to capture daily sea-breeze phenomena well. Monthly rainfall is well reproduced at coarse resolutions, but appears more localized at high resolutions. The hypothetical irrigated plantation (3.25 × 10⁵ ha) has significant effects on atmospheric moisture, but due to weakened sea breezes this leads to only limited increases in rainfall. In terms of recycling of the applied irrigation water, the rainfall enhancement in this particular setting is rather insignificant.

  13. Reimagining community health psychology: maps, journeys and new terrains.

    PubMed

    Campbell, Catherine; Cornish, Flora

    2014-01-01

    This special issue celebrates and maps out the 'coming of age' of community health psychology, demonstrating its confident and productive expansion beyond its roots in the theory and practice of small-scale collective action in local settings. Articles demonstrate the field's engagement with the growing complexity of local and global inequalities, contemporary forms of collective social protest and developments in critical social science. These open up novel problem spaces for the application and extension of its theories and methods, deepening our understandings of power, identity, community, knowledge and social change - in the context of evolving understandings of the spatial, embodied, relational, collaborative and historical dimensions of health.

  14. Lieb-Robinson bound and locality for general Markovian quantum dynamics.

    PubMed

    Poulin, David

    2010-05-14

    The Lieb-Robinson bound shows the existence of a maximum speed of signal propagation in discrete quantum mechanical systems with local interactions. This generalizes the concept of relativistic causality beyond field theory, and provides a powerful tool in theoretical condensed matter physics and quantum information science. Here, we extend the scope of this seminal result by considering general Markovian quantum evolution, where we prove that an equivalent bound holds. In addition, we use the generalized bound to demonstrate that correlations in the stationary state of a Markov process decay on a length scale set by the Lieb-Robinson velocity and the system's relaxation time.
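
    For orientation, a Lieb-Robinson bound has the generic form below (a schematic statement, not the paper's precise theorem):

```latex
% Schematic form of a Lieb-Robinson bound for observables A_X, B_Y supported
% on regions X and Y of a lattice:
\[
  \bigl\|\, [\, A_X(t),\, B_Y \,] \,\bigr\|
  \;\le\;
  C \,\|A_X\|\,\|B_Y\|\,
  \exp\!\left[-\,\frac{d(X,Y) - v\,|t|}{\xi}\right],
\]
% so commutators (and hence signals) are exponentially small outside the
% "light cone" d(X,Y) <= v|t|, with v the Lieb-Robinson velocity and \xi a
% model-dependent length scale. The paper shows a bound of this kind also
% holds when A_X(t) evolves under Markovian (Lindblad) dynamics, and uses it
% to bound stationary-state correlation lengths by v times the relaxation time.
```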

  15. Fermion systems in discrete space-time

    NASA Astrophysics Data System (ADS)

    Finster, Felix

    2007-05-01

    Fermion systems in discrete space-time are introduced as a model for physics on the Planck scale. We set up a variational principle which describes a non-local interaction of all fermions. This variational principle is symmetric under permutations of the discrete space-time points. We explain how for minimizers of the variational principle, the fermions spontaneously break this permutation symmetry and induce on space-time a discrete causal structure.

  16. How do local and remote processes affect the distribution of iron in the Atlantic Ocean?

    NASA Astrophysics Data System (ADS)

    Tagliabue, A.; Boyd, P.; Rijkenberg, M. J. A.; Williams, R. G.

    2016-02-01

    Iron (Fe) plays an important role in governing the magnitudes and patterns of primary productivity, nitrogen fixation and phytoplankton community composition across the Atlantic Ocean. Variations in the supply of Fe to surface waters across the mixed layer interface, over seasonal to annual to decadal scales, are underpinned by its vertical profile. Traditionally, nutrient profiles are understood in terms of surface depletion and subsurface regeneration, but for Fe this is more complicated due to the role of scavenging and organic complexation by ligands, as well as subsurface sources. This means that the Fe profile may be controlled locally, by sinking, regeneration and scavenging, or remotely, by the upstream conditions of subducted water masses. Subduction drives the transfer of Fe across the interface between the winter mixed layer and the ocean interior, but has received little attention thus far. Via the subduction of water masses with distinct biogeochemical signatures to low latitudes, remote processes can regulate the Atlantic Ocean Fe distribution at local scales. Specifically, the formation of mode waters with excess Fe-binding ligands (positive L*) enables these waters to stabilise any Fe flux from regeneration that would otherwise be lost by scavenging. The pattern of mode water ventilation then highlights those regions of the ocean where local processes are able to influence the Fe profile. Local processes that augment L*, such as the production of ligands during particle regeneration, can also interact with the larger-scale ventilation signature but do not alter the main trends. By applying our framework to recent GEOTRACES datasets over the Atlantic Ocean we are able to highlight regions where the Fe profile is forced locally or remotely, thereby providing an important process-based constraint on the biogeochemical models we rely on for future projections. Furthermore, we are able to appraise how the varying influence of local and remote processes drives the degree of agreement in the vertical profiles of Fe and macronutrients, which then sets the degree of surface water Fe limitation.

  17. Jena Reference Air Set (JRAS): a multi-point scale anchor for isotope measurements of CO2 in air

    NASA Astrophysics Data System (ADS)

    Wendeberg, M.; Richter, J. M.; Rothe, M.; Brand, W. A.

    2013-03-01

    The need for a unifying scale anchor for isotopes of CO2 in air was brought to light at the 11th WMO/IAEA Meeting of Experts on Carbon Dioxide in Tokyo 2001. During discussions about persistent discrepancies in isotope measurements between the world's leading laboratories, it was concluded that a unifying scale anchor for Vienna Pee Dee Belemnite (VPDB) of CO2 in air was desperately needed. Ten years later, at the 2011 Meeting of Experts on Carbon Dioxide in Wellington, it was recommended that the Jena Reference Air Set (JRAS) become the official scale anchor for isotope measurements of CO2 in air (Brailsford, 2012). The source of CO2 used for JRAS is two calcites. After the CO2 is released by reaction with phosphoric acid, the gases are mixed into CO2-free air. This procedure ensures both isotopic stability and longevity of the CO2. That the reference CO2 is generated from calcites and supplied as an air mixture is unique to JRAS. This is done to ensure that any measurement bias arising from the extraction procedure is eliminated. As every laboratory has its own procedure for extracting the CO2, this is of paramount importance if the local scales are to be unified with a common anchor. For a period of four years, JRAS has been evaluated through the IMECC1 program, which made it possible to distribute sets of JRAS gases to 13 laboratories worldwide. A summary of data from the six laboratories that have reported the full set of results is given here, along with a description of the production and maintenance of the JRAS scale anchors. 1 IMECC refers to the EU project "Infrastructure for Measurements of the European Carbon Cycle" (http://imecc.ipsl.jussieu.fr/).

  18. Prediction of resource volumes at untested locations using simple local prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
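
    The overall workflow (local nonparametric prediction at undrilled sites, jackknife prediction errors, bootstrap bounds on the regional total) can be sketched generically. The example below uses synthetic data and a k-nearest-neighbour predictor as a stand-in for the paper's local prediction model; it is a schematic, not the published procedure.

```python
# Schematic jackknife + bootstrap workflow on synthetic "drilled" data.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(11)
xy_train = rng.uniform(0, 10, size=(150, 2))             # drilled locations
vol_train = 50 + 10 * np.sin(xy_train[:, 0]) + rng.normal(0, 5, 150)
xy_target = rng.uniform(0, 10, size=(60, 2))             # undrilled locations

def predict_total(xy, vol, targets, k=5):
    # Local prediction: average of the k nearest drilled sites, summed over targets
    return KNeighborsRegressor(n_neighbors=k).fit(xy, vol).predict(targets).sum()

# Jackknife: leave out one drilled site at a time, re-predict the regional total
jack = np.array([predict_total(np.delete(xy_train, i, axis=0),
                               np.delete(vol_train, i), xy_target)
                 for i in range(len(vol_train))])

# Bootstrap resampling of the jackknife replicates for confidence bounds
boot = np.array([rng.choice(jack, size=jack.size, replace=True).mean()
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [5, 95])
print(f"estimated total: {jack.mean():.0f}, 90% bounds: [{lo:.0f}, {hi:.0f}]")
```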

  19. Seasonality and phenology alter functional leaf traits.

    PubMed

    McKown, Athena D; Guy, Robert D; Azam, M Shofiul; Drewes, Eric C; Quamme, Linda K

    2013-07-01

    In plant ecophysiology, functional leaf traits are generally not assessed in relation to the phenological phase of the canopy. Leaf traits measured in deciduous perennial species are known to vary between spring and summer seasons, but there is a knowledge gap relating to the late-summer phase marked by growth cessation and bud set occurring well before fall leaf senescence. The effects of phenology on canopy physiology were tested using a common garden of over 2,000 black cottonwood (Populus trichocarpa) individuals originating from a wide geographical range (44-60°N). Annual phenological events and 12 leaf-based functional trait measurements were collected spanning the entire summer season prior to, and following, bud set. Patterns of seasonal trait change emerged by synchronizing trees using their date of bud set. In particular, photosynthetic, mass, and N-based traits increased substantially following bud set. Most traits were significantly different between pre-bud set and post-bud set phase trees, with many traits showing at least 25% alteration in mean value. Post-bud set, both the significance and direction of trait-trait relationships could be modified, with many relating directly to changes in leaf mass. In Populus, these dynamics in leaf traits throughout the summer season reflected a shift in whole plant physiology, but occurred long before the onset of leaf senescence. The marked shifts in measured trait values following bud set underscore the necessity to include phenology in trait-based ecological studies or large-scale phenotyping efforts, both at the local level and at larger geographical scales.

  20. Noncollinear magnetic ordering in a frustrated magnet: Metallic regime and the role of frustration

    NASA Astrophysics Data System (ADS)

    Shahzad, Munir; Sengupta, Pinaki

    2017-12-01

    We explore the magnetic phases in a Kondo lattice model on the geometrically frustrated Shastry-Sutherland lattice at metallic electron densities, searching for noncollinear and noncoplanar spin textures. Motivated by experimental observations in many rare-earth-based frustrated metallic magnets, we treat the local moments as classical spins and set the coupling between the itinerant electrons and local moments as the largest energy scale in the problem. Our results show that a noncollinear flux state is stabilized over an extended range of Hamiltonian parameters. These spin states can be quenched efficiently by external parameters such as temperature and magnetic field, as well as by varying the degree of frustration in the electronic itinerancy and exchange coupling between local moments. Interestingly, unlike at the insulating electron densities that we discussed in paper I of this sequence, a Dzyaloshinskii-Moriya interaction between the local moments is not essential for the emergence of their noncollinear ordering.

  1. Testing deep-sea biodiversity paradigms on abyssal nematode genera and Acantholaimus species

    NASA Astrophysics Data System (ADS)

    Lins, Lidia; da Silva, Maria Cristina; Neres, Patrícia; Esteves, André Morgado; Vanreusel, Ann

    2018-02-01

    Biodiversity patterns in the deep sea have been extensively studied in recent decades. In this study, we investigated whether well-established concepts in deep-sea ecology also explain diversity and distribution patterns of nematode genera and species in the abyss. Among them, three paradigms were tackled: (1) the deep sea is a highly diverse environment at a local scale, while on a regional and even larger geographical scale, species and genus turnover is limited; (2) the biodiversity of deep-sea nematode communities changes with the nature and amount of organic matter input from the surface; and (3) patch-mosaic dynamics of the deep-sea environment drive local diversity. To test these hypotheses, diversity and density of nematode assemblages and of species of the genus Acantholaimus were studied along two abyssal E-W transects. These two transects were situated in the Southern Ocean (~50°S) and the North Atlantic (~10°N). Four different hierarchical scales were used to compare biodiversity: at the scale of cores, between stations from the same region, and between regions. Results revealed that the deep sea harbours a high diversity at a local scale (alpha diversity), but that turnover can be shaped by different environmental drivers. Therefore, these results question the second part of the paradigm about limited species turnover in the deep sea. Higher surface primary productivity was correlated with greater nematode densities, whereas diversity responses to increases in surface productivity showed no trend. Areas subjected to a constant and low food input revealed similar nematode communities to other oligotrophic abyssal areas, while stations under high productivity were characterized by different dominant genera and Acantholaimus species, and by a generally low local diversity. Our results corroborate the species-energy hypothesis, where productivity can set a limit to the richness of an ecosystem. Finally, we observed no correlation between sediment variability and local diversity. Although differences in sediment variability were significant across stations, these had to be considered without effect on the nematode community structure in the studied abyssal areas.

  2. A dynamic regularized gradient model of the subgrid-scale stress tensor for large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Vollant, A.; Balarac, G.; Corre, C.

    2016-02-01

    Large-eddy simulation (LES) solves only the large-scale part of turbulent flows by using a scale separation based on a filtering operation. The solution of the filtered Navier-Stokes equations then requires modeling the subgrid-scale (SGS) stress tensor to take into account the effect of scales smaller than the filter size. In this work, a new model is proposed for the SGS stress tensor. The model formulation is based on a regularization procedure of the gradient model to correct its unstable behavior. The model is developed based on a priori tests to improve the accuracy of the modeling for both structural and functional performances, i.e., the model's ability to locally approximate the SGS unknown term and to reproduce enough global SGS dissipation, respectively. LES is then performed for a posteriori validation. This work is an extension to the SGS stress tensor of the regularization procedure proposed by Balarac et al. ["A dynamic regularized gradient model of the subgrid-scale scalar flux for large eddy simulations," Phys. Fluids 25(7), 075107 (2013)] to model the SGS scalar flux. A set of dynamic regularized gradient (DRG) models is thus made available for both the momentum and the scalar equations. The second objective of this work is to compare this new set of DRG models with direct numerical simulations (DNS), with filtered DNS in the case of classic flows simulated with a pseudo-spectral solver, and with the standard set of models based on the dynamic Smagorinsky model. Various flow configurations are considered: decaying homogeneous isotropic turbulence, turbulent plane jet, and turbulent channel flows. These tests demonstrate the stable behavior provided by the regularization procedure, along with substantial improvement for velocity and scalar statistics predictions.
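
    For context, the classical gradient model that the DRG formulation regularizes can be evaluated directly from a filtered velocity field. The sketch below does so on a toy periodic field; it is the un-regularized, non-dynamic baseline, not the DRG model itself, and the filter width and field are illustrative.

```python
# Classical gradient-model SGS stress, tau_ij ~ (Delta^2/12) du_i/dx_k du_j/dx_k.
import numpy as np

def gradient_model_sgs(u, dx, delta):
    """u: (3, nx, ny, nz) filtered velocity on a uniform grid; returns tau (3,3,...)."""
    # grad[i][k] = d u_i / d x_k, via second-order central differences
    grad = [np.gradient(u[i], dx, dx, dx) for i in range(3)]
    tau = np.empty((3, 3) + u.shape[1:])
    for i in range(3):
        for j in range(3):
            tau[i, j] = (delta**2 / 12.0) * sum(
                grad[i][k] * grad[j][k] for k in range(3))
    return tau

# Toy filtered field: a Taylor-Green-like velocity on a periodic box
n, L = 32, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u = np.stack([np.sin(X) * np.cos(Y) * np.cos(Z),
              -np.cos(X) * np.sin(Y) * np.cos(Z),
              np.zeros_like(X)])

tau = gradient_model_sgs(u, dx=L / n, delta=2 * L / n)   # filter width ~ 2 dx
k_sgs = 0.5 * np.mean(tau[0, 0] + tau[1, 1] + tau[2, 2]) # SGS kinetic energy proxy
print("mean SGS kinetic energy proxy:", float(k_sgs))
```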

  3. Improving health aid for a better planet: The planning, monitoring and evaluation tool (PLANET)

    PubMed Central

    Sridhar, Devi; Car, Josip; Chopra, Mickey; Campbell, Harry; Woods, Ngaire; Rudan, Igor

    2015-01-01

    Background International development assistance for health (DAH) quadrupled between 1990 and 2012, from US$ 5.6 billion to US$ 28.1 billion. This generates an increasing need for transparent and replicable tools that could be used to set investment priorities, monitor the distribution of funding in real time, and evaluate the impact of those investments. Methods In this paper we present a methodology that addresses these three challenges. We call this approach PLANET, which stands for planning, monitoring and evaluation tool. Fundamentally, PLANET is based on a crowdsourcing approach to obtaining information relevant to the deployment of large-scale programs. Information is contributed in real time by a diverse group of participants involved in the program delivery. Findings PLANET relies on real-time information from three levels of participants in large-scale programs: funders, managers and recipients. At each level, information is solicited to assess five key risks that are most relevant to each level of operations. The risks at the level of funders involve systematic neglect of certain areas, focus on donors' interests over those of program recipients, ineffective co-ordination between donors, questionable mechanisms of delivery and excessive loss of funding to "middle men". At the level of managers, the risks are corruption, lack of capacity and/or competence, lack of information and/or communication, undue avoidance of governmental structures / preference to non-governmental organizations, and exclusion of local expertise. At the level of primary recipients, the risks are corruption, parallel operations / "verticalization", misalignment with local priorities and lack of community involvement, issues with ethics, equity and/or acceptability, and low likelihood of sustainability beyond the end of the program's implementation. Interpretation PLANET is intended as an additional tool available to policy-makers to prioritize, monitor and evaluate large-scale development programs. In this, it should complement tools such as LiST (for health care/interventions), EQUIST (for health care/interventions) and CHNRI (for health research), which also rely on information from local experts and on local context to set priorities in a transparent, user-friendly, replicable, quantifiable and specific, algorithmic-like manner. PMID:26322228

  4. Improving health aid for a better planet: The planning, monitoring and evaluation tool (PLANET).

    PubMed

    Sridhar, Devi; Car, Josip; Chopra, Mickey; Campbell, Harry; Woods, Ngaire; Rudan, Igor

    2015-12-01

    International development assistance for health (DAH) quadrupled between 1990 and 2012, from US$ 5.6 billion to US$ 28.1 billion. This generates an increasing need for transparent and replicable tools that could be used to set investment priorities, monitor the distribution of funding in real time, and evaluate the impact of those investments. In this paper we present a methodology that addresses these three challenges. We call this approach PLANET, which stands for planning, monitoring and evaluation tool. Fundamentally, PLANET is based on a crowdsourcing approach to obtaining information relevant to the deployment of large-scale programs. Information is contributed in real time by a diverse group of participants involved in the program delivery. PLANET relies on real-time information from three levels of participants in large-scale programs: funders, managers and recipients. At each level, information is solicited to assess five key risks that are most relevant to each level of operations. The risks at the level of funders involve systematic neglect of certain areas, focus on donors' interests over those of program recipients, ineffective co-ordination between donors, questionable mechanisms of delivery and excessive loss of funding to "middle men". At the level of managers, the risks are corruption, lack of capacity and/or competence, lack of information and/or communication, undue avoidance of governmental structures / preference to non-governmental organizations, and exclusion of local expertise. At the level of primary recipients, the risks are corruption, parallel operations / "verticalization", misalignment with local priorities and lack of community involvement, issues with ethics, equity and/or acceptability, and low likelihood of sustainability beyond the end of the program's implementation. PLANET is intended as an additional tool available to policy-makers to prioritize, monitor and evaluate large-scale development programs. In this, it should complement tools such as LiST (for health care/interventions), EQUIST (for health care/interventions) and CHNRI (for health research), which also rely on information from local experts and on local context to set priorities in a transparent, user-friendly, replicable, quantifiable and specific, algorithmic-like manner.

  5. Applying a framework for assessing the health system challenges to scaling up mHealth in South Africa

    PubMed Central

    2012-01-01

    Background Mobile phone technology has demonstrated the potential to improve health service delivery, but there is little guidance to inform decisions about acquiring and implementing mHealth technology at scale in health systems. Using the case of community-based health services (CBS) in South Africa, we apply a framework to appraise the opportunities and challenges to effective implementation of mHealth at scale in health systems. Methods A qualitative study reviewed the benefits and challenges of mHealth in community-based services in South Africa, through a combination of key informant interviews, site visits to local projects and document reviews. Using a framework adapted from three approaches to reviewing sustainable information and communication technology (ICT), the lessons from local experience and elsewhere formed the basis of a wider consideration of scale up challenges in South Africa. Results Four key system dimensions were identified and assessed: government stewardship and the organisational, technological and financial systems. In South Africa, the opportunities for successful implementation of mHealth include the high prevalence of mobile phones, a supportive policy environment for eHealth, successful use of mHealth for CBS in a number of projects and a well-developed ICT industry. However there are weaknesses in other key health systems areas such as organisational culture and capacity for using health information for management, and the poor availability and use of ICT in primary health care. The technological challenges include the complexity of ensuring interoperability and integration of information systems and securing privacy of information. Finally, there are the challenges of sustainable financing required for large scale use of mobile phone technology in resource limited settings. Conclusion Against a background of a health system with a weak ICT environment and limited implementation capacity, it remains uncertain that the potential benefits of mHealth for CBS would be retained with immediate large-scale implementation. Applying a health systems framework facilitated a systematic appraisal of potential challenges to scaling up mHealth for CBS in South Africa and may be useful for policy and practice decision-making in other low- and middle-income settings. PMID:23126370

  6. Muscle activation described with a differential equation model for large ensembles of locally coupled molecular motors.

    PubMed

    Walcott, Sam

    2014-10-01

    Molecular motors, by turning chemical energy into mechanical work, are responsible for active cellular processes. Often groups of these motors work together to perform their biological role. Motors in an ensemble are coupled and exhibit complex emergent behavior. Although large motor ensembles can be modeled with partial differential equations (PDEs) by assuming that molecules function independently of their neighbors, this assumption is violated when motors are coupled locally. It is therefore unclear how to describe the ensemble behavior of the locally coupled motors responsible for biological processes such as calcium-dependent skeletal muscle activation. Here we develop a theory to describe locally coupled motor ensembles and apply the theory to skeletal muscle activation. The central idea is that a muscle filament can be divided into two phases: an active and an inactive phase. Dynamic changes in the relative size of these phases are described by a set of linear ordinary differential equations (ODEs). As the dynamics of the active phase are described by PDEs, muscle activation is governed by a set of coupled ODEs and PDEs, building on previous PDE models. With comparison to Monte Carlo simulations, we demonstrate that the theory captures the behavior of locally coupled ensembles. The theory also plausibly describes and predicts muscle experiments from molecular to whole muscle scales, suggesting that a micro- to macroscale muscle model is within reach.
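
    As a rough illustration of the two-phase idea described above, the Python sketch below integrates a single linear ODE for the fraction of a filament in the active phase (the inactive fraction is its complement). The rate constants are hypothetical, and the coupled PDE describing cross-bridge dynamics within the active phase is omitted.

        from scipy.integrate import solve_ivp

        def phase_fraction(t, y, k_act, k_deact):
            """y = [active fraction]; the inactive fraction is 1 - active.
            Linear kinetics: activation from the inactive phase, deactivation back."""
            active = y[0]
            return [k_act * (1.0 - active) - k_deact * active]

        # Hypothetical calcium-dependent activation rate and constant deactivation rate
        sol = solve_ivp(phase_fraction, t_span=(0.0, 2.0), y0=[0.0], args=(5.0, 1.0))
        print(sol.y[0, -1])  # approaches k_act / (k_act + k_deact) = 5/6, about 0.83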

  7. Modeling Future Land Use Scenarios in South Korea: Applying the IPCC Special Report on Emissions Scenarios and the SLEUTH Model on a Local Scale

    NASA Astrophysics Data System (ADS)

    Han, Haejin; Hwang, YunSeop; Ha, Sung Ryong; Kim, Byung Sik

    2015-05-01

    This study developed three scenarios of future land use/land cover on a local level for the Kyung-An River Basin and its vicinity in South Korea at a 30-m resolution based on the two scenario families of the Intergovernmental Panel on Climate Change (IPCC) Special Report Emissions Scenarios (SRES): A2 and B1, as well as a business-as-usual scenario. The IPCC SRES A2 and B1 were used to define future local development patterns and associated land use change. We quantified the population-driven demand for urban land use for each qualitative storyline and allocated the urban demand in geographic space using the SLEUTH model. The model results demonstrate the possible land use/land cover change scenarios for the years from 2000 to 2070 by examining the broad narrative of each SRES within the context of a local setting, such as the Kyoungan River Basin, constructing narratives of local development shifts and modeling a set of `best guess' approximations of the future land use conditions in the study area. This study found substantial differences in demands and patterns of land use changes among the scenarios, indicating compact development patterns under the SRES B1 compared to the rapid and dispersed development under the SRES A2.

  8. Modeling future land use scenarios in South Korea: applying the IPCC special report on emissions scenarios and the SLEUTH model on a local scale.

    PubMed

    Han, Haejin; Hwang, YunSeop; Ha, Sung Ryong; Kim, Byung Sik

    2015-05-01

    This study developed three scenarios of future land use/land cover on a local level for the Kyung-An River Basin and its vicinity in South Korea at a 30-m resolution based on the two scenario families of the Intergovernmental Panel on Climate Change (IPCC) Special Report Emissions Scenarios (SRES): A2 and B1, as well as a business-as-usual scenario. The IPCC SRES A2 and B1 were used to define future local development patterns and associated land use change. We quantified the population-driven demand for urban land use for each qualitative storyline and allocated the urban demand in geographic space using the SLEUTH model. The model results demonstrate the possible land use/land cover change scenarios for the years from 2000 to 2070 by examining the broad narrative of each SRES within the context of a local setting, such as the Kyoungan River Basin, constructing narratives of local development shifts and modeling a set of 'best guess' approximations of the future land use conditions in the study area. This study found substantial differences in demands and patterns of land use changes among the scenarios, indicating compact development patterns under the SRES B1 compared to the rapid and dispersed development under the SRES A2.

  9. Local X-ray Computed Tomography Imaging for Mineralogical and Pore Characterization

    NASA Astrophysics Data System (ADS)

    Mills, G.; Willson, C. S.

    2015-12-01

    Sample size, material properties and image resolution are all tradeoffs that must be considered when imaging porous media samples with X-ray computed tomography. In many natural and engineered samples, pore and throat sizes span several orders of magnitude and are often correlated with the material composition. Local tomography is a nondestructive technique that images a subvolume, within a larger specimen, at high resolution and uses low-resolution tomography data from the larger specimen to reduce reconstruction error. The high-resolution subvolume data can be used to extract important fine-scale properties, but the additional noise associated with the truncated dataset makes segmentation of different materials and mineral phases a challenge. The low-resolution data of the larger specimen are typically of much higher quality, making material characterization much easier. In addition, the imaging of a larger domain allows for mm-scale bulk properties and heterogeneities to be determined. In this research, a sandstone core, 7 mm in diameter and ~15 mm in length, was scanned twice. The first scan was performed to cover the entire diameter and length of the specimen at an image voxel resolution of 4.1 μm. The second scan was performed on a subvolume, ~1.3 mm in length and ~2.1 mm in diameter, at an image voxel resolution of 1.08 μm. After image processing and segmentation, the pore network structure and mineralogical features were extracted from the low-resolution dataset. Due to the noise in the truncated high-resolution dataset, several image processing approaches were applied prior to image segmentation and extraction of the pore network structure and mineralogy. Results from the different truncated tomography segmented data sets are compared to each other to evaluate the potential of each approach in identifying the different solid phases from the original 16-bit data set. The truncated tomography segmented data sets were also compared to the whole-core tomography segmented data set in two ways: (1) assessment of the porosity and pore size distribution at different scales; and (2) comparison of the mineralogical composition and distribution. Finally, registration of the two datasets will be used to show how the pore structure and mineralogy details at the two scales can be used to supplement each other.

  10. Chapter B: Regional Geologic Setting of Late Cenozoic Lacustrine Diatomite Deposits, Great Basin and Surrounding Region: Overview and Plans for Investigation

    USGS Publications Warehouse

    Wallace, Alan R.

    2003-01-01

    Freshwater diatomite deposits are present in all of the Western United States, including the Great Basin and surrounding regions. These deposits are important domestic sources of diatomite, and a better understanding of their formation and geologic settings may aid diatomite exploration and land-use management. Diatomite deposits in the Great Basin are the products of two stages: (1) formation in Late Cenozoic lacustrine basins and (2) preservation after formation. Processes that favored long-lived diatom activity and diatomite formation range in decreasing scale from global to local. The most important global process was climate, which became increasingly cool and dry from 15 Ma to the present. Regional processes included tectonic setting and volcanism, which varied considerably both spatially and temporally in the Great Basin region. Local processes included basin formation, sedimentation, hydrology, and rates of processes, including diatom growth and accumulation; basin morphology and nutrient and silica sources were important for robust activity of different diatom genera. Only optimum combinations of these processes led to the formation of large diatomite deposits, and less than optimum combinations resulted in lakebeds that contained little to no diatomite. Postdepositional processes can destroy, conceal, or preserve a diatomite deposit. These processes, which most commonly are local in scale, include uplift, with related erosion and changes in hydrology; burial beneath sedimentary deposits or volcanic flows and tuffs; and alteration during diagenesis and hydrothermal activity. Some sedimentary basins that may have contained diatomite deposits have largely been destroyed or significantly modified, whereas others, such as those in western Nevada, have been sufficiently preserved along with their contained diatomite deposits. Future research on freshwater diatomite deposits in the Western United States and Great Basin region should concentrate on the regional and local processes that led to the formation and preservation of the deposits. Major questions that need to be answered include (1) why were some basins favorable for diatomite formation, whereas others were not; (2) what post-depositional conditions are needed for diatomite preservation; and (3) what were the optimum process combinations that led to the formation and preservation of economic diatomite deposits?

  11. Generating High Resolution Climate Scenarios Through Regional Climate Modelling Over Southern Africa

    NASA Astrophysics Data System (ADS)

    Ndhlovu, G. Z.; Woyessa, Y. E.; Vijayaraghavan, S.

    2017-12-01

    Climate change has impacted the global environment and the continent of Africa; Southern Africa in particular, regarded as one of the most vulnerable regions in Africa, has not been spared from these impacts. Global Climate Models (GCMs) with coarse horizontal resolutions of 150-300 km do not provide sufficient details at the local basin scale due to the mismatch between the size of river basins and the grid cell of the GCM. This makes it difficult to apply the outputs of GCMs directly to impact studies such as hydrological modelling. This necessitates the use of regional climate modelling at high resolutions that provide detailed information at regional and local scales to study both climate change and its impacts. To this end, an experiment was set up and conducted with PRECIS, a regional climate model, to generate climate scenarios at a high resolution of 25 km for the local region in the Zambezi River basin of Southern Africa. The major input data used included lateral and surface boundary conditions based on the GCMs. The data are processed, analysed and compared with CORDEX climate change project data generated for Africa. This paper highlights the major differences between the climate scenarios generated by the PRECIS model and the CORDEX project for Africa and gives recommendations for further research on the generation of climate scenarios. Climatic variables such as precipitation and temperature have been analysed for floods and droughts in the region. The paper also describes the setting up and running of an experiment using a high-resolution PRECIS model. In addition, the running of the model and the generation of output variables at a sub-basin scale are described. Regional climate modelling, which provides information on climate change impacts, may lead to enhanced understanding of adaptive water resources management. Understanding the regional climate modelling results at the sub-basin scale is the first step in analysing complex hydrological processes and a basis for the design of adaptation and mitigation strategies in the region. Key words: Climate change, regional climate modelling, hydrological processes, extremes, scenarios. Corresponding author: gndhlovu@cut.ac.za, Tel: +27 (0) 51 507 3072

  12. Context-dependent colonization dynamics: Regional reward contagion drives local compression in aquatic beetles.

    PubMed

    Pintar, Matthew R; Resetarits, William J

    2017-09-01

    Habitat selection by colonizing organisms is an important factor in determining species abundance and community dynamics at multiple spatial scales. Many organisms select habitat patches based on intrinsic patch quality, but patches exist in complex landscapes linked by dispersal and colonization, forming metapopulations and metacommunities. Perceived patch quality can be influenced by neighbouring patches through spatial contagion, wherein perceived quality of one patch can extend beyond its borders and either increase or decrease the colonization of neighbouring patches and localities. These spatially explicit colonization dynamics can result in habitat compression, wherein more colonists occupy a patch or locality than in the absence of spatial context dependence. Previous work on contagion/compression focused primarily on the role of predators in driving colonization patterns. Our goal was to determine whether resource abundance can drive multi-scale colonization dynamics of aquatic beetles through the processes of contagion and compression in naturally colonized experimental pools. We established two levels (high/low quality) of within-patch resource abundances (leaf litter) using an experimental landscape of mesocosms, and assayed colonization by 35 species of aquatic beetles. Patches were arranged in localities (sets of two patches), which consisted of a combination of two patch-level resource levels in a 2 × 2 factorial design, allowing us to assay colonization at both locality and patch levels. We demonstrate that patterns of species abundance and richness of colonizing aquatic beetles are determined by patch quality and context-dependent processes at multiple spatial scales. Localities that consisted of at least one high-quality patch were colonized at equivalent rates that were higher than localities containing only low-quality patches, displaying regional reward contagion. In localities that consisted of one high- and one low-quality patch, reward contagion produced by higher leaf litter levels resulted in greater abundance of beetles in such localities, which then compressed into the highest quality patches. Our results provide further support for the critical roles of habitat selection and spatial context, particularly the quality of neighbouring habitat patches, in generating patterns of species abundances and community structure across landscapes. © 2017 The Authors. Journal of Animal Ecology © 2017 British Ecological Society.

  13. Water, ecology and health: ecosystems as settings for promoting health and sustainability.

    PubMed

    Parkes, Margot W; Horwitz, Pierre

    2009-03-01

    Despite the proposed ecological and systems-based perspectives of the settings-based approach to health promotion, most initiatives have tended to overlook the fundamental nature of ecosystems. This paper responds to this oversight by proposing an explicit re-integration of ecosystems within the healthy settings approach. We make this case by focusing on water as an integrating unit of analysis. Water, on which all life depends, is not only an integral consideration for the existing healthy settings (schools, hospitals, workplaces) but also highlights the ecosystem context of health and sustainability. A focus on catchments (also known as watersheds and river basins) exemplifies the scaled and upstream/downstream nature of ecosystems and draws into sharp focus the cross-sectoral and transdisciplinary context of the social and environmental determinants of health. We position this work in relation to the converging agendas of health promotion and ecosystem management at the local, regional and global scales, and draw on evidence from international initiatives as diverse as the WHO Commission on Social Determinants of Health, and the Millennium Ecosystem Assessment. Using water as a vehicle for understanding the systemic context for human wellbeing, health promotion and disease prevention draws inevitable attention to key challenges of scale, intersectoral governance and the complementary themes of promoting resilience and preventing vulnerability. We conclude by highlighting the importance of building individual and institutional capacity for this kind of integration: equipping a new generation of researchers, practitioners and decision-makers to be conversant with the language of ecosystems, capable of systemic thought and focused on settings that can promote both health and sustainability.

  14. Developing a holistic strategy for integrated waste management within municipal planning: challenges, policies, solutions and perspectives for Hellenic municipalities in the zero-waste, low-cost direction.

    PubMed

    Zotos, G; Karagiannidis, A; Zampetoglou, S; Malamakis, A; Antonopoulos, I-S; Kontogianni, S; Tchobanoglous, G

    2009-05-01

    The present position paper addresses contemporary waste management options, weaknesses and opportunities faced by Hellenic local authorities. It focuses on state-of-the-art, tested as well as innovative, environmental management tools on a municipal scale and identifies a range of different collaboration schemes between local authorities and related service providers. Currently, a policy implementation gap is still experienced among Hellenic local authorities; it appears that administration at the local level is inadequate to manage and implement many of the general policies proposed; identify, collect, monitor and assess relevant data; and safeguard efficient and effective implementation of MSWM practices in the framework of integrated environmental management as well. This shortfall is partly due to the decentralisation of waste management issues to local authorities without a parallel substantial budgetary and capacity support, thus resulting in local activity remaining often disoriented and isolated from national strategies, therefore yielding significant planning and implementation problems and delays against pressing issues at hand as well as loss or poor use of available funds. This paper develops a systemic approach for MSWM at both the household and the non-household level, summarizes state-of-the-art available tools and compiles a set of guidelines for developing waste management master plans at the municipal level. It aims to provide a framework in the MSWM field for municipalities in Greece as well as other countries facing similar problems under often comparable socioeconomic settings.

  15. Small-Scale Habitat Structure Modulates the Effects of No-Take Marine Reserves for Coral Reef Macroinvertebrates

    PubMed Central

    Dumas, Pascal; Jimenez, Haizea; Peignon, Christophe; Wantiez, Laurent; Adjeroud, Mehdi

    2013-01-01

    No-take marine reserves are one of the oldest and most versatile tools used across the Pacific for the conservation of reef resources, in particular for invertebrates traditionally targeted by local fishers. Assessing their actual efficiency is still a challenge in complex ecosystems such as coral reefs, where reserve effects are likely to be obscured by high levels of environmental variability. The goal of this study was to investigate the potential interference of small-scale habitat structure on the efficiency of reserves. The spatial distribution of widely harvested macroinvertebrates was surveyed in a large set of protected vs. unprotected stations from eleven reefs located in New Caledonia. Abundance, density and individual size data were collected along random, small-scale (20×1 m) transects. Fine habitat typology was derived with a quantitative photographic method using 17 local habitat variables. Marine reserves substantially augmented the local density, size structure and biomass of the target species. Density of Trochus niloticus and Tridacna maxima doubled globally inside the reserve network; average size was greater by 10 to 20% for T. niloticus. We demonstrated that the apparent success of protection could be obscured by marked variations in population structure occurring over short distances, resulting from small-scale heterogeneity in the reef habitat. The efficiency of reserves appeared to be modulated by the availability of suitable habitats at the decimetric scale (“microhabitats”) for the considered sessile/low-mobile macroinvertebrate species. Incorporating microhabitat distribution could significantly enhance the efficiency of habitat surrogacy, a valuable approach in the case of conservation targets focusing on endangered or emblematic macroinvertebrate or relatively sedentary fish species. PMID:23554965

  16. FLARE: a New User Facility to Study Multiple-Scale Physics of Magnetic Reconnection Through in-situ Measurements

    NASA Astrophysics Data System (ADS)

    Ji, H.; Bhattacharjee, A.; Prager, S.; Daughton, W. S.; Chen, Y.; Cutler, R.; Fox, W.; Hoffmann, F.; Kalish, M.; Jara-Almonte, J.; Myers, C. E.; Ren, Y.; Yamada, M.; Yoo, J.; Bale, S. D.; Carter, T.; Dorfman, S. E.; Drake, J. F.; Egedal, J.; Sarff, J.; Wallace, J.

    2016-12-01

    The FLARE device (Facility for Laboratory Reconnection Experiments; http://flare.pppl.gov) is a new intermediate-scale plasma experiment under construction at Princeton for studies of magnetic reconnection in the multiple X-line regimes directly relevant to space, solar, astrophysical, and fusion plasmas, as guided by a reconnection phase diagram [Ji & Daughton, Physics of Plasmas 18, 111207 (2011)]. Most of the major components have either already been fabricated or are near completion, including the two most crucial magnets called flux cores. Hardware assembly and installation begin this summer, followed by commissioning in 2017. An initial comprehensive set of research diagnostics will also be constructed and installed in 2017. The main diagnostic is an extensive set of magnetic probe arrays, covering multiple scales from local electron scales (~2 mm), to intermediate ion scales (~10 cm), and global MHD scales (~1 m). The main advantage of this facility for the magnetospheric community is the ability to simultaneously provide in-situ measurements over all of these relevant scales. By using these laboratory data, not only are the detailed spatial profiles around each reconnecting X-line available for direct comparisons with spacecraft data, but also the global conditions and consequences of magnetic reconnection, which are often difficult to quantify in space, can be controlled or studied systematically. The planned procedures and example topics as a user facility will be discussed in detail.

  17. Linear-scaling explicitly correlated treatment of solids: periodic local MP2-F12 method.

    PubMed

    Usvyat, Denis

    2013-11-21

    Theory and implementation of the periodic local MP2-F12 method in the 3*A fixed-amplitude ansatz is presented. The method is formulated in the direct space, employing local representation for the occupied, virtual, and auxiliary orbitals in the form of Wannier functions (WFs), projected atomic orbitals (PAOs), and atom-centered Gaussian-type orbitals, respectively. Local approximations are introduced, restricting the list of the explicitly correlated pairs, as well as occupied, virtual, and auxiliary spaces in the strong orthogonality projector to the pair-specific domains on the basis of spatial proximity of respective orbitals. The 4-index two-electron integrals appearing in the formalism are approximated via the direct-space density fitting technique. In this procedure, the fitting orbital spaces are also restricted to local fit-domains surrounding the fitted densities. The formulation of the method and its implementation exploits the translational symmetry and the site-group symmetries of the WFs. Test calculations are performed on LiH crystal. The results show that the periodic LMP2-F12 method substantially accelerates basis set convergence of the total correlation energy, and even more so the correlation energy differences. The resulting energies are quite insensitive to the resolution-of-the-identity domain sizes and the quality of the auxiliary basis sets. The convergence with the orbital domain size is somewhat slower, but still acceptable. Moreover, inclusion of slightly more diffuse functions, than those usually used in the periodic calculations, improves the convergence of the LMP2-F12 correlation energy with respect to both the size of the PAO-domains and the quality of the orbital basis set. At the same time, the essentially diffuse atomic orbitals from standard molecular basis sets, commonly utilized in molecular MP2-F12 calculations, but problematic in the periodic context, are not necessary for LMP2-F12 treatment of crystals.

  18. Validation of soil hydraulic pedotransfer functions at the local and catchment scale for an Indonesian basin

    NASA Astrophysics Data System (ADS)

    Booij, Martijn J.; Oldhoff, Ruben J. J.; Rustanto, Andry

    2016-04-01

    In order to accurately model the hydrological processes in a catchment, information on the soil hydraulic properties is of great importance. These data can be obtained by conducting field work, which is costly and time consuming, or by using pedotransfer functions (PTFs). A PTF is an empirical relationship between easily obtainable soil characteristics and a soil hydraulic parameter. In this study, PTFs for the saturated hydraulic conductivity (Ks) and the available water content (AWC) are investigated. PTFs are area-specific, since for instance tropical soils often have a different composition and hydraulic behaviour compared to temperate soils. Application of temperate soil PTFs on tropical soils might result in poor performance, which is a problem as few tropical soil PTFs are available. The objective of this study is to determine whether Ks and AWC can be accurately approximated using PTFs, by analysing their performance at both the local scale and the catchment scale. Four published PTFs for Ks and AWC are validated on a data set of 91 soil samples collected in the Upper Bengawan Solo catchment on Java, Indonesia. The AWC is predicted very poorly, with Nash-Sutcliffe Efficiency (NSE) values below zero for all selected PTFs. For Ks PTFs better results were found. The Wösten and Rosetta-3 PTFs predict the Ks moderately accurate, with NSE values of 0.28 and 0.39, respectively. New PTFs for both AWC and Ks were developed using multiple linear regression and NSE values of 0.37 (AWC) and 0.55 (Ks) were obtained. Although these values are not very high, they are significantly higher than for the published PTFs. The hydrological SWAT model was set up for the Keduang, a sub-catchment of the Upper Bengawan Solo River, to simulate monthly catchment streamflow. Eleven cases were defined to validate the PTFs at the catchment scale. For the Ks-PTF cases NSE values of around 0.84 were obtained for the validation period. The use of AWC PTFs resulted in slightly lower NSE values, although the differences in model accuracy are low. The small differences between the cases are caused by the soil homogeneity in the Keduang catchment. Without model calibration an NSE value of 0.51 was found. At the local scale, the Wösten and Rosetta-3 PTFs can be used to predict Ks. AWC PTFs show insufficient accuracy at the local scale. At the catchment scale, the Wösten and Rosetta-3 Ks PTFs and the developed AWC and Ks PTFs are validated. It is recommended to use the PTFs developed in this study for the Upper Bengawan Solo catchment. More research is needed on the effect of PTF input on simulated hydrological state variables, such as soil moisture content, and the effect of catchment soil heterogeneity on the validation and application of PTFs.
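
    The Nash-Sutcliffe Efficiency (NSE) used above to score the pedotransfer functions has a standard definition, sketched below in Python; the measured and predicted Ks values are hypothetical stand-ins for the study's samples.

        import numpy as np

        def nash_sutcliffe(observed, simulated):
            """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
            1 is a perfect fit, 0 matches the mean of the observations,
            and negative values are worse than simply using that mean."""
            observed = np.asarray(observed, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

        # Hypothetical measured vs. PTF-predicted saturated conductivity (cm/day)
        ks_measured = [12.0, 45.0, 8.5, 30.0, 22.0]
        ks_predicted = [15.0, 40.0, 10.0, 26.0, 25.0]
        print(round(nash_sutcliffe(ks_measured, ks_predicted), 2))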

  19. Assessing local instrument reliability and validity: a field-based example from northern Uganda.

    PubMed

    Betancourt, Theresa S; Bass, Judith; Borisova, Ivelina; Neugebauer, Richard; Speelman, Liesbeth; Onyango, Grace; Bolton, Paul

    2009-08-01

    This paper presents an approach for evaluating the reliability and validity of mental health measures in non-Western field settings. We describe this approach using the example of our development of the Acholi psychosocial assessment instrument (APAI), which is designed to assess depression-like (two tam, par and kumu), anxiety-like (ma lwor) and conduct problems (kwo maraco) among war-affected adolescents in northern Uganda. To examine the criterion validity of this measure in the absence of a traditional gold standard, we derived local syndrome terms from qualitative data and used self reports of these syndromes by indigenous people as a reference point for determining caseness. Reliability was examined using standard test-retest and inter-rater methods. Each of the subscale scores for the depression-like syndromes exhibited strong internal reliability ranging from alpha = 0.84-0.87. Internal reliability was good for anxiety (0.70), conduct problems (0.83), and the pro-social attitudes and behaviors (0.70) subscales. Combined inter-rater reliability and test-retest reliability were good for most subscales except for the conduct problem scale and prosocial scales. The pattern of significant mean differences in the corresponding APAI problem scale score between self-reported cases vs. noncases on local syndrome terms was confirmed in the data for all of the three depression-like syndromes, but not for the anxiety-like syndrome ma lwor or the conduct problem kwo maraco.
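
    The internal-reliability figures quoted above are Cronbach's alpha values; a minimal Python sketch of the usual item-variance formula follows, with a hypothetical item-response matrix in place of the APAI data.

        import numpy as np

        def cronbach_alpha(items):
            """items: 2-D array, rows = respondents, columns = items of one subscale.
            alpha = k/(k-1) * (1 - sum(item variances) / variance of summed scale)."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)      # variance of each item
            total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
            return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

        # Hypothetical responses (5 adolescents x 4 items, 0-3 severity ratings)
        responses = [[0, 1, 1, 0],
                     [2, 2, 3, 2],
                     [1, 1, 2, 1],
                     [3, 2, 3, 3],
                     [0, 0, 1, 0]]
        print(round(cronbach_alpha(responses), 2))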

  20. Applying Best Practices to Florida Local Government Retrofit Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McIlvaine, J.; Sutherland, K.

    In some communities, local government and non-profit entities have funds to purchase and renovate distressed, foreclosed homes for resale in the affordable housing market. Numerous opportunities to improve whole house energy efficiency are inherent in these comprehensive renovations. BA-PIRC worked together in a multi-year field study making recommendations in individual homes, meanwhile compiling improvement costs, projected energy savings, practical challenges, and labor force factors surrounding common energy-related renovation measures. The field study, Phase 1 of this research, resulted in a set of best practices appropriate to the current labor pool and market conditions in central Florida to achieve projected annual energy savings of 15-30% and higher. This report describes Phase 2 of the work where researchers worked with a local government partner to implement and refine the 'current best practices.' A simulation study was conducted to characterize savings potential under three sets of conditions representing varying replacement needs for energy-related equipment and envelope components. The three scenarios apply readily to the general remodeling industry as for renovation of foreclosed homes for the affordable housing market. Our new local government partner, the City of Melbourne, implemented the best practices in a community-scale renovation program that included ten homes in 2012.

  1. Existing Whole-House Solutions Case Study: Applying Best Practices to Florida Local Government Retrofit Programs - Central Florida

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    In some communities, local government and non-profit entities have funds to purchase and renovate distressed, foreclosed homes for resale in the affordable housing market. Numerous opportunities to improve whole house energy efficiency are inherent in these comprehensive renovations. BA-PIRC worked together in a multiyear field study making recommendations in individual homes, meanwhile compiling improvement costs, projected energy savings, practical challenges, and labor force factors surrounding common energy-related renovation measures. The field study, Phase 1 of this research, resulted in a set of best practices appropriate to the current labor pool and market conditions in central Florida to achieve projected annual energy savings of 15%-30% and higher. This case study describes Phase 2 of the work where researchers worked with a local government partner to implement and refine the "current best practices". A simulation study was conducted to characterize savings potential under three sets of conditions representing varying replacement needs for energy-related equipment and envelope components. The three scenarios apply readily to the general remodeling industry as for renovation of foreclosed homes for the affordable housing market. The new local government partner, the City of Melbourne, implemented the best practices in a community-scale renovation program that included ten homes in 2012.

  2. Applying Best Practices to Florida Local Government Retrofit Programs, Central Florida (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    In some communities, local government and non-profit entities have funds to purchase and renovate distressed, foreclosed homes for resale in the affordable housing market. Numerous opportunities to improve whole house energy efficiency are inherent in these comprehensive renovations. BA-PIRC worked together in a multi-year field study making recommendations in individual homes, meanwhile compiling improvement costs, projected energy savings, practical challenges, and labor force factors surrounding common energy-related renovation measures. The field study, Phase 1 of this research, resulted in a set of best practices appropriate to the current labor pool and market conditions in central Florida to achieve projected annual energy savings of 15-30% and higher. This report describes Phase 2 of the work where researchers worked with a local government partner to implement and refine the "current best practices". A simulation study was conducted to characterize savings potential under three sets of conditions representing varying replacement needs for energy-related equipment and envelope components. The three scenarios apply readily to the general remodeling industry as for renovation of foreclosed homes for the affordable housing market. Our new local government partner, the City of Melbourne, implemented the best practices in a community-scale renovation program that included ten homes in 2012.

  3. Applying Best Practices to Florida Local Government Retrofit Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McIlvaine, J.; Sutherland, K.

    In some communities, local government and non-profit entities have funds to purchase and renovate distressed, foreclosed homes for resale in the affordable housing market. Numerous opportunities to improve whole house energy efficiency are inherent in these comprehensive renovations. BA-PIRC worked together in a multiyear field study making recommendations in individual homes, meanwhile compiling improvement costs, projected energy savings, practical challenges, and labor force factors surrounding common energy-related renovation measures. The field study, Phase 1 of this research, resulted in a set of best practices appropriate to the current labor pool and market conditions in central Florida to achieve projected annual energy savings of 15%-30% and higher. This report describes Phase 2 of the work where researchers worked with a local government partner to implement and refine the "current best practices". A simulation study was conducted to characterize savings potential under three sets of conditions representing varying replacement needs for energy-related equipment and envelope components. The three scenarios apply readily to the general remodeling industry as for renovation of foreclosed homes for the affordable housing market. The new local government partner, the City of Melbourne, implemented the best practices in a community-scale renovation program that included ten homes in 2012.

  4. Local Perspectives on Environmental Insecurity and Its Influence on Illegal Biodiversity Exploitation

    PubMed Central

    Gore, Meredith L.; Lute, Michelle L.; Ratsimbazafy, Jonah H.; Rajaonson, Andry

    2016-01-01

    Environmental insecurity is a source and outcome of biodiversity declines and social conflict. One challenge to scaling insecurity reduction policies is that empirical evidence about local attitudes is overwhelmingly missing. We set three objectives: determine how local people rank risk associated with different sources of environmental insecurity; assess perceptions of environmental insecurity, biodiversity exploitation, myths of nature and risk management preferences; and explore relationships between perceptions and biodiversity exploitation. We conducted interviews (N = 88) with residents of Madagascar’s Torotorofotsy Protected Area, 2014. Risk perceptions had a moderate effect on perceptions of environmental insecurity. We found no effects of environmental insecurity on biodiversity exploitation. Results offer one of the first, if not the first, explorations of local perceptions of illegal biodiversity exploitation and environmental security. Local people’s perception of risk seriousness associated with illegal biodiversity exploitation such as lemur hunting (low overall) may not reflect perceptions of policy-makers (considered to be high). Discord is a key entry point for attention. PMID:27082106

  5. Critical Infrastructure Vulnerability to Spatially Localized Failures with Applications to Chinese Railway System.

    PubMed

    Ouyang, Min; Tian, Hui; Wang, Zhenghua; Hong, Liu; Mao, Zijun

    2017-01-17

    This article studies a general type of initiating events in critical infrastructures, called spatially localized failures (SLFs), which are defined as the failure of a set of infrastructure components distributed in a spatially localized area due to damage sustained, while other components outside the area do not directly fail. These failures can be regarded as a special type of intentional attack, such as bomb or explosive assault, or a generalized modeling of the impact of localized natural hazards on large-scale systems. This article introduces three SLF models: node-centered SLFs, district-based SLFs, and circle-shaped SLFs, and proposes an SLFs-induced vulnerability analysis method from three aspects: identification of critical locations, comparisons of infrastructure vulnerability to random failures, topologically localized failures and SLFs, and quantification of infrastructure information value. The proposed SLFs-induced vulnerability analysis method is finally applied to the Chinese railway system and can also be easily adapted to analyze other critical infrastructures for valuable protection suggestions. © 2017 Society for Risk Analysis.
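
    A minimal sketch of a circle-shaped SLF on a toy spatial network follows, using the Python networkx library: every node within a chosen radius of a centre point fails, and the share of nodes left in the largest surviving component serves as a crude vulnerability measure. The grid "railway", radius and metric are illustrative assumptions, not the article's data or exact method.

        import networkx as nx

        def circle_shaped_slf(graph, center, radius):
            """Remove every node whose (x, y) position lies within `radius` of `center`,
            then report the fraction of all nodes left in the largest connected component."""
            cx, cy = center
            failed = [n for n, d in graph.nodes(data=True)
                      if (d["x"] - cx) ** 2 + (d["y"] - cy) ** 2 <= radius ** 2]
            damaged = graph.copy()
            damaged.remove_nodes_from(failed)
            if damaged.number_of_nodes() == 0:
                return 0.0
            giant = max(nx.connected_components(damaged), key=len)
            return len(giant) / graph.number_of_nodes()

        # Hypothetical 10 x 10 grid network with unit spacing between stations
        g = nx.convert_node_labels_to_integers(nx.grid_2d_graph(10, 10), label_attribute="pos")
        for n, d in g.nodes(data=True):
            d["x"], d["y"] = d.pop("pos")
        print(circle_shaped_slf(g, center=(5, 5), radius=2.5))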

  6. Local Perspectives on Environmental Insecurity and Its Influence on Illegal Biodiversity Exploitation.

    PubMed

    Gore, Meredith L; Lute, Michelle L; Ratsimbazafy, Jonah H; Rajaonson, Andry

    2016-01-01

    Environmental insecurity is a source and outcome of biodiversity declines and social conflict. One challenge to scaling insecurity reduction policies is that empirical evidence about local attitudes is overwhelmingly missing. We set three objectives: determine how local people rank risk associated with different sources of environmental insecurity; assess perceptions of environmental insecurity, biodiversity exploitation, myths of nature and risk management preferences; and explore relationships between perceptions and biodiversity exploitation. We conducted interviews (N = 88) with residents of Madagascar's Torotorofotsy Protected Area, 2014. Risk perceptions had a moderate effect on perceptions of environmental insecurity. We found no effects of environmental insecurity on biodiversity exploitation. Results offer one of the first, if not the first, explorations of local perceptions of illegal biodiversity exploitation and environmental security. Local people's perception of risk seriousness associated with illegal biodiversity exploitation such as lemur hunting (low overall) may not reflect perceptions of policy-makers (considered to be high). Discord is a key entry point for attention.

  7. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    PubMed

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2018-04-01

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-the-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
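
    The sketch below illustrates atrous (dilated) convolution and a much simplified ASPP block in PyTorch; the framework choice, channel counts, rates and input size are assumptions made for the example, and the full DeepLab architecture and CRF post-processing are omitted.

        import torch
        import torch.nn as nn

        class SimpleASPP(nn.Module):
            """Parallel atrous convolutions at several sampling rates, concatenated
            and fused by a 1x1 convolution; a toy version of the ASPP idea."""
            def __init__(self, in_ch=256, out_ch=256, rates=(1, 6, 12, 18)):
                super().__init__()
                self.branches = nn.ModuleList([
                    # dilation=r with padding=r keeps the spatial size unchanged for 3x3 kernels
                    nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
                    for r in rates
                ])
                self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

            def forward(self, x):
                return self.project(torch.cat([branch(x) for branch in self.branches], dim=1))

        features = torch.randn(1, 256, 33, 33)  # a hypothetical DCNN feature map
        print(SimpleASPP()(features).shape)     # torch.Size([1, 256, 33, 33])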

  8. Distributed resource allocation under communication constraints

    NASA Astrophysics Data System (ADS)

    Dodin, Pierre; Nimier, Vincent

    2001-03-01

    This paper deals with a study of the multi-sensor management problem for multi-target tracking. The collaboration between many sensors observing the same target means that they are able to fuse their data during the information process. One must therefore take this possibility into account when computing the optimal sensor-target association at each time step. In order to solve this problem for a real large-scale system, one must consider both the information aspect and the control aspect of the problem. To unify these problems, one possibility is to use a decentralized filtering algorithm locally driven by an assignment algorithm. The decentralized filtering algorithm we use in our model is the filtering algorithm of Grime, which relaxes the usual full-connected hypothesis. By full-connected, one means that the information in a full-connected system is totally distributed everywhere at the same moment, which is unacceptable for a real large-scale system. We model the distributed assignment decision with the help of a greedy algorithm. Each sensor performs a global optimization in order to estimate the other sensors' information sets. A consequence of relaxing the full-connected hypothesis is that the sensors' information sets are not the same at each time step, producing an information dissymmetry in the system. The assignment algorithm uses local knowledge of this dissymmetry. By testing the reactions and the coherence of the local assignment decisions of our system against maneuvering targets, we show that it is still possible to manage with decentralized assignment control even though the system is not full-connected.
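
    The abstract describes the assignment step only at a high level; the Python sketch below shows one simple greedy one-to-one sensor-to-target matching over a hypothetical information-gain matrix, as an illustration of a greedy assignment rather than the authors' algorithm (which also allows several sensors to fuse data on one target).

        def greedy_assignment(gain):
            """gain[s][t]: expected information gain if sensor s observes target t.
            Repeatedly pick the best remaining (sensor, target) pair."""
            pairs = sorted(((g, s, t) for s, row in enumerate(gain)
                            for t, g in enumerate(row)), reverse=True)
            used_sensors, used_targets, assignment = set(), set(), {}
            for g, s, t in pairs:
                if s not in used_sensors and t not in used_targets:
                    assignment[s] = t
                    used_sensors.add(s)
                    used_targets.add(t)
            return assignment

        # Hypothetical 3 sensors x 2 targets gain matrix
        print(greedy_assignment([[0.9, 0.2],
                                 [0.4, 0.8],
                                 [0.5, 0.7]]))  # {0: 0, 1: 1}; sensor 2 stays idle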

  9. Fault-slip inversions: Their importance in terms of strain, heterogeneity, and kinematics of brittle deformation

    NASA Astrophysics Data System (ADS)

    Riller, U.; Clark, M. D.; Daxberger, H.; Doman, D.; Lenauer, I.; Plath, S.; Santimano, T.

    2017-08-01

    Heterogeneous deformation is intrinsic to natural deformation, but it is often underestimated in the analysis and interpretation of mesoscopic brittle shear faults. Based on the analysis of 11,222 faults from two distinct tectonic settings (the Central Andes in Argentina and the Sudbury area in Canada), interpolation of principal strain directions, and scaled analogue modelling, we revisit controversial issues of fault-slip inversions, all of which relate to heterogeneous deformation. These issues include the significance of inversion solutions in terms of (1) strain or paleo-stress; (2) displacement, notably plate convergence; (3) local versus far-field deformation; (4) strain perturbations; and (5) spacing between stations of fault-slip data acquisition. Furthermore, we highlight the value of inversions for identifying the kinematics of master fault zones in the absence of displaced geological markers. A key result of our assessment is that fault-slip inversions relate to local strain, not paleo-stress, and thus can aid in inferring the kinematics of master faults. Moreover, strain perturbations caused by mechanical anomalies of the deforming upper crust significantly influence local principal strain directions. Thus, differently oriented principal strain axes inferred from fault-slip inversions in a given region may not point to regional deformation caused by successive and distinct deformation regimes. This outcome calls into question the common practice of separating heterogeneous fault-slip data sets into apparently homogeneous subsets. Finally, the fact that displacement vectors and principal strains are rarely co-linear defies the use of brittle fault data as a proxy for estimating directions of plate-scale motions.

  10. Searching for a Cosmological Preferred Direction with 147 Rotationally Supported Galaxies

    NASA Astrophysics Data System (ADS)

    Zhou, Yong; Zhao, Zhi-Chao; Chang, Zhe

    2017-10-01

    It is well known that Milgrom’s modified Newtonian dynamics (MOND) explains well the mass discrepancy problem in galaxy rotation curves. MOND predicts a universal acceleration scale below which Newtonian dynamics is no longer valid. We get the universal acceleration scale of 1.02 × 10⁻¹⁰ m s⁻² by using the Spitzer Photometry and Accurate Rotation Curves (SPARC) data set. Milgrom suggested that the acceleration scale may be a fingerprint of cosmology on local dynamics and related to the Hubble constant, g† ~ cH₀. In this paper, we use the hemisphere comparison method with the SPARC data set to investigate possible spatial anisotropy on the acceleration scale. It is found that the hemisphere of the maximum acceleration scale is in the direction (l, b) = (175.5° +6°/-10°, -6.5° +9°/-3°) with g†,max = 1.10 × 10⁻¹⁰ m s⁻², while the hemisphere of the minimum acceleration scale is in the opposite direction (l, b) = (355.5° +6°/-10°, 6.5° +3°/-9°) with g†,min = 0.76 × 10⁻¹⁰ m s⁻². The level of anisotropy reaches up to 0.37 ± 0.04. Robust tests show that such an anisotropy cannot be reproduced by a statistically isotropic data set. We also show that the spatial anisotropy on the acceleration scale is less correlated with the non-uniform distribution of the SPARC data points in the sky. In addition, we confirm that the anisotropy of the acceleration scale does not depend significantly on other physical parameters of the SPARC galaxies. It is interesting to note that the maximum anisotropy direction found in this paper is close to other cosmological preferred directions, particularly the direction of the “Australia dipole” for the fine structure constant.
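
    A small consistency check on the quoted numbers, assuming the anisotropy level is the difference between the two hemispheres' acceleration scales normalized by their mean (a common hemisphere-comparison convention; the authors' exact estimator may differ):

        # Acceleration scales of the two opposite hemispheres, in units of 1e-10 m/s^2
        g_max = 1.10
        g_min = 0.76

        # Normalized difference between the two hemispheres
        anisotropy_level = (g_max - g_min) / ((g_max + g_min) / 2.0)
        print(round(anisotropy_level, 2))  # 0.37, matching the level quoted in the abstract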

  11. Empowering districts to target priorities for improving child health service in Uganda using change management and rapid assessment methods.

    PubMed

    Odaga, John; Henriksson, Dorcus K; Nkolo, Charles; Tibeihaho, Hector; Musabe, Richard; Katusiime, Margaret; Sinabulya, Zaccheus; Mucunguzi, Stephen; Mbonye, Anthony K; Valadez, Joseph J

    2016-01-01

    Local health system managers in low- and middle-income countries have the responsibility to set health priorities and allocate resources accordingly. Although tools exist to aid this process, they are not widely applied for various reasons including non-availability, poor knowledge of the tools, and poor adaptability into the local context. In Uganda, delivery of basic services is devolved to the District Local Governments through the District Health Teams (DHTs). The Community and District Empowerment for Scale-up (CODES) project aims to provide a set of management tools that aid contextualised priority setting, fund allocation, and problem-solving in a systematic way to improve effective coverage and quality of child survival interventions. Although the various tools have previously been used at the national level, the project aims to combine them in an integral way for implementation at the district level. These tools include Lot Quality Assurance Sampling (LQAS) surveys to generate local evidence, Bottleneck analysis and Causal analysis as analytical tools, Continuous Quality Improvement, and Community Dialogues based on Citizen Report Cards and U reports. The tools enable identification of gaps, prioritisation of possible solutions, and allocation of resources accordingly. This paper presents some of the tools used by the project in five districts in Uganda during the proof-of-concept phase of the project. All five districts were trained and participated in LQAS surveys and readily adopted the tools for priority setting and resource allocation. All districts developed health operational work plans, which were based on the evidence and each of the districts implemented more than three of the priority activities which were included in their work plans. In the five districts, the CODES project demonstrated that DHTs can adopt and integrate these tools in the planning process by systematically identifying gaps and setting priority interventions for child survival.

  12. Empowering districts to target priorities for improving child health service in Uganda using change management and rapid assessment methods

    PubMed Central

    Odaga, John; Henriksson, Dorcus K.; Nkolo, Charles; Tibeihaho, Hector; Musabe, Richard; Katusiime, Margaret; Sinabulya, Zaccheus; Mucunguzi, Stephen; Mbonye, Anthony K.; Valadez, Joseph J.

    2016-01-01

    Background Local health system managers in low- and middle-income countries have the responsibility to set health priorities and allocate resources accordingly. Although tools exist to aid this process, they are not widely applied for various reasons including non-availability, poor knowledge of the tools, and poor adaptability into the local context. In Uganda, delivery of basic services is devolved to the District Local Governments through the District Health Teams (DHTs). The Community and District Empowerment for Scale-up (CODES) project aims to provide a set of management tools that aid contextualised priority setting, fund allocation, and problem-solving in a systematic way to improve effective coverage and quality of child survival interventions. Design Although the various tools have previously been used at the national level, the project aims to combine them in an integral way for implementation at the district level. These tools include Lot Quality Assurance Sampling (LQAS) surveys to generate local evidence, Bottleneck analysis and Causal analysis as analytical tools, Continuous Quality Improvement, and Community Dialogues based on Citizen Report Cards and U reports. The tools enable identification of gaps, prioritisation of possible solutions, and allocation of resources accordingly. This paper presents some of the tools used by the project in five districts in Uganda during the proof-of-concept phase of the project. Results All five districts were trained and participated in LQAS surveys and readily adopted the tools for priority setting and resource allocation. All districts developed health operational work plans, which were based on the evidence and each of the districts implemented more than three of the priority activities which were included in their work plans. Conclusions In the five districts, the CODES project demonstrated that DHTs can adopt and integrate these tools in the planning process by systematically identifying gaps and setting priority interventions for child survival. PMID:27225791

  13. Birds of a Feather: Social Bases of Neighborhood Formation in Newark, New Jersey, 1880.

    PubMed

    Logan, John R; Shin, Hyoung-Jin

    2016-08-01

    This study examines the bases of residential segregation in a late nineteenth century American city, recognizing the strong tendency toward homophily within neighborhoods. Our primary question is how ethnicity, social class, nativity, and family composition affect where people live. Segregation is usually studied one dimension at a time, but these social differences are interrelated, and thus a multivariate approach is needed to understand their effects. We find that ethnicity is the main basis of local residential sorting, while occupational standing and, to a lesser degree, family life cycle and nativity also are significant. A second concern is the geographic scale of neighborhoods: in this study, the geographic area within which the characteristics of potential neighbors matter in locational outcomes of individuals. Studies of segregation typically use a single spatial scale, often one determined by the availability of administrative data. We take advantage of a unique data set containing the address and geo-referenced location of every resident. We conclude that it is the most local scale that offers the best prediction of people's similarity to their neighbors. Adding information at larger scales minimally improves prediction of the person's location. The 1880 neighborhoods of Newark, New Jersey, were formed as individuals located themselves among similar neighbors on a single street segment.

  14. Production regimes in four eastern boundary current systems

    NASA Technical Reports Server (NTRS)

    Carr, M. E.; Kearns, E. J.

    2003-01-01

    High productivity (maxima 3 g C m⁻² day⁻¹) of the Eastern Boundary Currents (EBCs), i.e. the California, Peru-Humboldt, Canary and Benguela Currents, is driven by a combination of local forcing and large-scale circulation. The characteristics of the deep water brought to the surface by upwelling favorable winds depend on the large-scale circulation patterns. Here we use a new hydrographic and nutrient climatology together with satellite measurements of the wind vector, sea-surface temperature (SST), chlorophyll concentration, and primary production modeled from ocean color to quantify the meridional and seasonal patterns of upwelling dynamics and biological response. The unprecedented combination of data sets allows us to describe objectively the variability for small regions within each current and to characterize the governing factors for biological production. The temporal and spatial environmental variability was due in most regions to large-scale circulation, alone or in combination with offshore transport (local forcing). The observed meridional and seasonal patterns of biomass and primary production were most highly correlated to components representing large-scale circulation. The biomass sustained by a given nutrient concentration in the Atlantic EBCs was twice as large as that of the Pacific EBCs. This apparent greater efficiency may be due to availability of iron, physical retention, or differences in planktonic community structure.

  15. Features in the primordial spectrum from WMAP: A wavelet analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafieloo, Arman; Souradeep, Tarun; Manimaran, P.

    2007-06-15

    Precise measurements of the anisotropies in the cosmic microwave background enable us to do an accurate study on the form of the primordial power spectrum for a given set of cosmological parameters. In a previous paper [A. Shafieloo and T. Souradeep, Phys. Rev. D 70, 043523 (2004)], we implemented an improved (error sensitive) Richardson-Lucy deconvolution algorithm on the measured angular power spectrum from the first year of WMAP data to determine the primordial power spectrum assuming a concordance cosmological model. This recovered spectrum has a likelihood far better than a scale-invariant or 'best fit' scale-free spectrum (ΔlnL ≈ 25 with respect to the Harrison-Zeldovich spectrum, and ΔlnL ≈ 11 with respect to the power-law spectrum with n_s = 0.95). In this paper we use the discrete wavelet transform (DWT) to decompose the local features of the recovered spectrum individually to study their effect and significance on the recovered angular power spectrum and hence the likelihood. We show that besides the infrared cutoff at the horizon scale, the associated features of the primordial power spectrum around the horizon have a significant effect on improving the likelihood. The strong features are localized at the horizon scale.
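
    As an illustration of the decomposition step described above, the following sketch applies a discrete wavelet transform to a synthetic 1D spectrum and suppresses one detail band; the grid, wavelet ("db4") and the synthetic "horizon-scale" bump are assumptions for illustration and are not the authors' recovered WMAP spectrum or wavelet choice.

```python
# Illustrative sketch only: decompose a synthetic 1D "primordial spectrum"
# with a discrete wavelet transform, zero one localized detail band, and
# reconstruct. Not the actual recovered WMAP spectrum or the paper's setup.
import numpy as np
import pywt

k = np.linspace(1e-4, 0.1, 1024)                        # wavenumber grid (arbitrary units)
spectrum = np.ones_like(k)                              # nearly scale-invariant baseline
spectrum += 0.2 * np.exp(-((k - 0.004) / 0.001) ** 2)   # localized "horizon-scale" bump

coeffs = pywt.wavedec(spectrum, "db4", level=5)         # multi-level DWT

# Zero one detail band to suppress features at that scale, then reconstruct.
modified = [c.copy() for c in coeffs]
modified[1][:] = 0.0
reconstructed = pywt.waverec(modified, "db4")[: len(spectrum)]

print("max change after removing one band:", np.max(np.abs(reconstructed - spectrum)))
```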

  16. Experimental and theoretical studies of light-to-heat conversion and collective heating effects in metal nanoparticle solutions.

    PubMed

    Richardson, Hugh H; Carlson, Michael T; Tandler, Peter J; Hernandez, Pedro; Govorov, Alexander O

    2009-03-01

    We perform a set of experiments on photoheating in a water droplet containing gold nanoparticles (NPs). Using photocalorimetric methods, we determine the efficiency of light-to-heat conversion (eta), which turns out to be remarkably close to 1 (0.97 < eta < 1.03). Detailed studies reveal a complex character of heat transfer in an optically stimulated droplet. The main mechanism of equilibration is convective flow. Theoretical modeling is performed to describe thermal effects at both nano- and millimeter scales. Theory shows that collective photoheating is the main mechanism. For a large concentration of NPs and small laser intensity, the averaged temperature increase (at the millimeter scale) is significant (approximately 7 degrees C), whereas on the nanometer scale the temperature increase at the surface of a single NP is small (approximately 0.02 degrees C). In the opposite regime, that is, a small NP concentration and intense laser irradiation, we find the opposite picture: the temperature increase at the millimeter scale is small (0.1 degrees C), but the local, nanoscale temperature has strong spikes at the surfaces of NPs (approximately 3 degrees C). These studies are crucial for the understanding of photothermal effects in NPs and for their potential and current applications in nano- and biotechnologies.
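
    The order of magnitude of the single-particle temperature rise quoted above can be illustrated with the standard steady-state heat-conduction result for a spherical source, ΔT(R) = Q/(4πκR). The particle radius, absorbed power and thermal conductivity in the sketch below are illustrative assumptions, not values taken from the paper.

```python
# Back-of-the-envelope estimate of the steady-state temperature rise at the
# surface of a single heated nanoparticle, dT = Q / (4*pi*kappa*R).
# The numbers (radius, absorbed power, conductivity of water) are assumed.
import math

def surface_temperature_rise(absorbed_power_w, radius_m, kappa_w_per_m_k=0.6):
    """Steady-state temperature increase at the surface of a spherical heat
    source embedded in an infinite medium (water, kappa ~ 0.6 W/m/K)."""
    return absorbed_power_w / (4.0 * math.pi * kappa_w_per_m_k * radius_m)

# Example: a 10 nm-radius gold particle absorbing 10 nW of optical power.
dT = surface_temperature_rise(absorbed_power_w=10e-9, radius_m=10e-9)
print(f"single-particle surface temperature rise ~ {dT:.3f} K")
```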

  17. Multi-scale coupled modelling of waves and currents on the Catalan shelf.

    NASA Astrophysics Data System (ADS)

    Grifoll, M.; Warner, J. C.; Espino, M.; Sánchez-Arcilla, A.

    2012-04-01

    Catalan shelf circulation is characterized by a background along-shelf flow to the southwest (including some meso-scale features) plus episodic storm driven patterns. To investigate these dynamics, a coupled multi-scale modeling system is applied to the Catalan shelf (North-western Mediterranean Sea). The implementation consists of a set of increasing-resolution nested models, based on the circulation model ROMS and the wave model SWAN as part of the COAWST modeling system, covering from the slope and shelf region (~1 km horizontal resolution) down to a local area around Barcelona city (~40 m). The system is initialized with MyOcean products in the coarsest outer domain, and uses atmospheric forcing from other sources for the increasing resolution inner domains. Results of the finer resolution domains exhibit improved agreement with observations relative to the coarser model results. Several hydrodynamic configurations were simulated to determine dominant forcing mechanisms and hydrodynamic processes that control coastal scale processes. The numerical results reveal that the short term (hours to days) inner-shelf variability is strongly influenced by local wind variability, while sea-level slope, baroclinic effects, radiation stresses and regional circulation constitute second-order processes. Additional analysis identifies the significance of shelf/slope exchange fluxes, river discharge and the effect of the spatial resolution of the atmospheric fluxes.

  18. Statistical Downscaling in Multi-dimensional Wave Climate Forecast

    NASA Astrophysics Data System (ADS)

    Camus, P.; Méndez, F. J.; Medina, R.; Losada, I. J.; Cofiño, A. S.; Gutiérrez, J. M.

    2009-04-01

    Wave climate at a particular site is defined by the statistical distribution of sea state parameters, such as significant wave height, mean wave period, mean wave direction, wind velocity, wind direction and storm surge. Nowadays, long-term time series of these parameters are available from reanalysis databases obtained by numerical models. The Self-Organizing Map (SOM) technique is applied to characterize the multi-dimensional wave climate, obtaining the relevant "wave types" spanning the historical variability. This technique summarizes the multi-dimensional wave climate as a set of clusters projected onto a low-dimensional, spatially organized lattice, providing Probability Density Functions (PDFs) on that lattice. On the other hand, local wind and storm surge depend on the instantaneous large-scale sea level pressure (SLP) fields, while waves depend on the recent history of these fields (say, 1 to 5 days). Thus, these variables are associated with large-scale atmospheric circulation patterns. In this work, a nearest-neighbors analog method is used to predict monthly multi-dimensional wave climate. This method relates the large-scale atmospheric circulation patterns from numerical models (SLP fields as predictors) to local wave databases of observations (monthly wave climate SOM PDFs as predictand) to set up statistical models. A wave reanalysis database, developed by Puertos del Estado (Ministerio de Fomento), is used as the historical time series of local variables. The simultaneous SLP fields calculated by the NCEP atmospheric reanalysis are used as predictors. Several applications with different sizes of the sea level pressure grid and different temporal resolutions are compared to obtain the statistical model that best represents the monthly wave climate at a particular site. In this work we examine the potential skill of this downscaling approach under perfect-model conditions, but we also analyze the suitability of this methodology for seasonal forecasting and for long-term climate change scenario projections of wave climate.
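
    A minimal sketch of the nearest-neighbors analog step follows: for a target month's SLP field, find the most similar historical SLP fields and average their associated wave-climate summaries. The array shapes, neighbor count and synthetic data are assumptions, and a simple vector of wave parameters stands in for the SOM-based PDFs used in the study.

```python
# Minimal sketch of a nearest-neighbour analog predictor: for a target SLP
# field, find the k most similar historical SLP fields and average the
# associated wave-climate summaries. Shapes, k and data are assumed.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n_months, n_gridpoints, n_wave_params = 240, 500, 6

slp_history = rng.normal(size=(n_months, n_gridpoints))    # predictors
wave_climate = rng.normal(size=(n_months, n_wave_params))  # predictands

model = NearestNeighbors(n_neighbors=5).fit(slp_history)

def predict_wave_climate(slp_field):
    """Average the wave-climate summaries of the k nearest SLP analogs."""
    dist, idx = model.kneighbors(slp_field.reshape(1, -1))
    weights = 1.0 / (dist[0] + 1e-9)                        # inverse-distance weights
    return np.average(wave_climate[idx[0]], axis=0, weights=weights)

print(predict_wave_climate(rng.normal(size=n_gridpoints)))
```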

  19. A Big Data Approach for Situation-Aware estimation, correction and prediction of aerosol effects, based on MODIS Joint Atmosphere product (collection 6) time series data

    NASA Astrophysics Data System (ADS)

    Singh, A. K.; Toshniwal, D.

    2017-12-01

    The MODIS Joint Atmosphere products MODATML2 and MYDATML2 (L2/L3), provided by LAADS DAAC (Level-1 and Atmosphere Archive & Distribution System Distributed Active Archive Center) and re-sampled from medium-resolution MODIS Terra/Aqua satellite data at 5 km scale, contain Cloud Reflectance, Cloud Top Temperature, Water Vapor, Aerosol Optical Depth/Thickness and Humidity data. These re-sampled data, when used for deriving the climatic effects of aerosols (particularly the cooling effect), still expose limitations in the presence of uncertainty in atmospheric artifacts such as aerosol, cloud and cirrus cloud. This uncertainty poses an important challenge for the estimation of aerosol effects and ultimately affects precise regional weather modeling and prediction: forecasting and recommendation applications depend largely on these short-term local conditions (e.g., city/locality-based recommendations to citizens and farmers built on local weather models). Our approach incorporates artificial intelligence techniques for representing heterogeneous data (satellite data together with air quality data from local weather stations, i.e. in situ data) to learn, correct and predict aerosol effects in the presence of cloud and other atmospheric artifacts, using spatio-temporal correlations and regressions. The Big Data processing pipeline, consisting of correlation and regression techniques developed on the Apache Spark platform, can easily scale to large data sets including many tiles (scenes) and a widened time scale. Keywords: Climatic Effects of Aerosols, Situation-Aware, Big Data, Apache Spark, MODIS Terra/Aqua, Time Series
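
    As a hedged sketch of one regression stage that such a Spark-based pipeline might contain, the example below fits a linear regression relating MODIS-style columns to a station-measured target. The input path, column names and model choice are hypothetical and are not the authors' actual pipeline.

```python
# Hedged sketch of a single regression stage on Apache Spark; the parquet
# path, column names and model choice are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("aerosol-regression-sketch").getOrCreate()

# Hypothetical joined table of satellite retrievals and in situ observations.
df = spark.read.parquet("joined_modis_station_data.parquet")

assembler = VectorAssembler(
    inputCols=["aerosol_optical_depth", "cloud_top_temperature", "water_vapor"],
    outputCol="features",
)
train, test = assembler.transform(df).randomSplit([0.8, 0.2], seed=42)

lr = LinearRegression(featuresCol="features", labelCol="station_pm25")
model = lr.fit(train)
print("test RMSE:", model.evaluate(test).rootMeanSquaredError)

spark.stop()
```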

  20. The Medical Science DMZ.

    PubMed

    Peisert, Sean; Barnett, William; Dart, Eli; Cuff, James; Grossman, Robert L; Balas, Edward; Berman, Ari; Shankar, Anurag; Tierney, Brian

    2016-11-01

    We describe use cases and an institutional reference architecture for maintaining high-capacity, data-intensive network flows (e.g., 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. The approach combines high-end networking, packet-filter firewalls, and network intrusion detection systems. We describe a "Medical Science DMZ" concept as an option for secure, high-volume transport of large, sensitive data sets between research institutions over national research networks. The exponentially increasing amounts of "omics" data, the rapid increase of high-quality imaging, and other rapidly growing clinical data sets have resulted in the rise of biomedical research "big data." The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large data sets. Maintaining data-intensive flows that comply with HIPAA and other regulations presents a new challenge for biomedical research. Recognizing this, we describe a strategy that marries performance and security by borrowing from and redefining the concept of a "Science DMZ"-a framework that is used in physical sciences and engineering research to manage high-capacity data flows. By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  1. The Medical Science DMZ

    PubMed Central

    Barnett, William; Dart, Eli; Cuff, James; Grossman, Robert L; Balas, Edward; Berman, Ari; Shankar, Anurag; Tierney, Brian

    2016-01-01

    Objective We describe use cases and an institutional reference architecture for maintaining high-capacity, data-intensive network flows (e.g., 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. Materials and Methods High-end networking, packet filter firewalls, network intrusion detection systems. Results We describe a “Medical Science DMZ” concept as an option for secure, high-volume transport of large, sensitive data sets between research institutions over national research networks. Discussion The exponentially increasing amounts of “omics” data, the rapid increase of high-quality imaging, and other rapidly growing clinical data sets have resulted in the rise of biomedical research “big data.” The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large data sets. Maintaining data-intensive flows that comply with HIPAA and other regulations presents a new challenge for biomedical research. Recognizing this, we describe a strategy that marries performance and security by borrowing from and redefining the concept of a “Science DMZ”—a framework that is used in physical sciences and engineering research to manage high-capacity data flows. Conclusion By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements. PMID:27136944

  2. Navigation API Route Fuel Saving Opportunity Assessment on Large-Scale Real-World Travel Data for Conventional Vehicles and Hybrid Electric Vehicles: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lei; Holden, Jacob; Gonder, Jeffrey D

    The green routing strategy instructing a vehicle to select a fuel-efficient route benefits the current transportation system with fuel-saving opportunities. This paper introduces a navigation API route fuel-saving evaluation framework for estimating fuel advantages of alternative API routes based on large-scale, real-world travel data for conventional vehicles (CVs) and hybrid electric vehicles (HEVs). The navigation APIs, such as the Google Directions API, integrate traffic conditions and provide feasible alternative routes for origin-destination pairs. This paper develops two link-based fuel-consumption models stratified by link-level speed, road grade, and functional class (local/non-local), one for CVs and the other for HEVs. The link-based fuel-consumption models are built by assigning travel from a large number of GPS driving traces to the links in TomTom MultiNet as the underlying road network layer and road grade data from a U.S. Geological Survey elevation data set. Fuel consumption on a link is calculated by the proposed fuel consumption model. This paper envisions two kinds of applications: 1) identifying alternate routes that save fuel, and 2) quantifying the potential fuel savings for large amounts of travel. An experiment based on a large-scale California Household Travel Survey GPS trajectory data set is conducted. The fuel consumption and savings of CVs and HEVs are investigated. At the same time, the trade-off between fuel saving and time saving for choosing different routes is also examined for both powertrains.
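
    To make the link-based idea concrete, here is a minimal sketch that looks up per-link fuel rates stratified by speed bin, grade bin and functional class and sums them over a route. The rate values, bin edges and routes are made-up placeholders, not the paper's fitted models.

```python
# Sketch of applying a link-based fuel-consumption model: each link's fuel use
# is looked up from a rate table stratified by speed bin, grade bin and
# functional class (local / non-local), then summed over the route.
# All rate values and bin edges below are placeholder assumptions.
import numpy as np

SPEED_BINS = [0, 30, 50, 70, 90, 130]          # km/h bin edges (assumed)
GRADE_BINS = [-10, -2, 2, 10]                  # percent grade bin edges (assumed)

# fuel_rate[speed_bin][grade_bin][is_local] in litres per km (placeholders)
FUEL_RATE = np.full((5, 3, 2), 0.08)
FUEL_RATE[:, 2, :] += 0.03                     # uphill links cost more
FUEL_RATE[0, :, :] += 0.02                     # stop-and-go speeds cost more

def link_fuel(length_km, speed_kmh, grade_pct, is_local):
    s = np.clip(np.digitize(speed_kmh, SPEED_BINS) - 1, 0, 4)
    g = np.clip(np.digitize(grade_pct, GRADE_BINS) - 1, 0, 2)
    return length_km * FUEL_RATE[s, g, int(is_local)]

def route_fuel(links):
    """links: iterable of (length_km, speed_kmh, grade_pct, is_local) tuples."""
    return sum(link_fuel(*link) for link in links)

route_a = [(1.2, 45, 0.5, True), (4.0, 90, -1.0, False), (2.5, 60, 3.0, False)]
route_b = [(1.0, 25, 0.0, True), (6.0, 100, 1.0, False)]
print("route A fuel (L):", round(route_fuel(route_a), 3))
print("route B fuel (L):", round(route_fuel(route_b), 3))
```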

  3. Hypogeal geological survey in the "Grotta del Re Tiberio" natural cave (Apennines, Italy): a valid tool for reconstructing the structural setting

    NASA Astrophysics Data System (ADS)

    Ghiselli, Alice; Merazzi, Marzio; Strini, Andrea; Margutti, Roberto; Mercuriali, Michele

    2011-06-01

    As karst systems are natural windows to the underground, speleology, combined with geological surveys, can be a useful tool for helping understand the geological evolution of karst areas. In order to enhance the reconstruction of the structural setting in a gypsum karst area (Vena del Gesso, Romagna Apennines), a detailed analysis has been carried out on hypogeal data. Structural features (faults, fractures, tectonic foliations, bedding) have been mapped in the "Grotta del Re Tiberio" cave, in the nearby gypsum quarry tunnels and open pit benches. Five fracture systems and six fault systems have been identified. The fault systems have been further analyzed through stereographic projections and geometric-kinematic evaluations in order to reconstruct the relative chronology of these structures. This analysis led to the detection of two deformation phases. The results permitted linking the hypogeal data with the surface data at both local and regional scales. At the local scale, fracture data collected underground have been compared with previous authors' surface data coming from the quarry area. The two data sets show a very good correspondence, as every underground fracture system matches one of the surface fracture systems. Moreover, in the cave, a larger number of fractures belonging to each system could be mapped. At the regional scale, the two deformation phases detected can be integrated in the structural setting of the study area, thereby enhancing the tectonic interpretation of the area (e.g., structures belonging to a new deformation phase, not reported before, have been identified underground). The detailed structural hypogeal survey has thus provided very useful data, both by integrating the existing information and revealing new data not detected at the surface. In particular, some small structures (e.g., displacement markers and short fractures) are better preserved in the hypogeal environment than on the surface, where the outcropping gypsum is more exposed to dissolution and recrystallization. The hypogeal geological survey, therefore, can be considered a powerful tool for integrating the surface and log data in order to enhance the reconstruction of the deformational history and to get a three-dimensional model of the bedrock in karst areas.

  4. Classification tree models for predicting distributions of michigan stream fish from landscape variables

    USGS Publications Warehouse

    Steen, P.J.; Zorn, T.G.; Seelbach, P.W.; Schaeffer, J.S.

    2008-01-01

    Traditionally, fish habitat requirements have been described from local-scale environmental variables. However, recent studies have shown that studying landscape-scale processes improves our understanding of what drives species assemblages and distribution patterns across the landscape. Our goal was to learn more about constraints on the distribution of Michigan stream fish by examining landscape-scale habitat variables. We used classification trees and landscape-scale habitat variables to create and validate presence-absence models and relative abundance models for Michigan stream fishes. We developed 93 presence-absence models that on average were 72% correct in making predictions for an independent data set, and we developed 46 relative abundance models that were 76% correct in making predictions for independent data. The models were used to create statewide predictive distribution and abundance maps that have the potential to be used for a variety of conservation and scientific purposes. ?? Copyright by the American Fisheries Society 2008.
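
    A hedged sketch of the core modelling step follows: a classification tree fitted to landscape-scale predictors and validated on held-out sites. The predictor names and the synthetic presence/absence data are placeholders, not the Michigan data set or the authors' fitted models.

```python
# Sketch of a presence/absence classification tree from landscape-scale
# predictors, validated on a held-out set. Predictors and data are synthetic
# placeholders for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_sites = 800
X = np.column_stack([
    rng.uniform(0, 2000, n_sites),    # catchment area (km^2), assumed predictor
    rng.uniform(5, 25, n_sites),      # mean July air temperature (deg C)
    rng.uniform(0, 100, n_sites),     # % agricultural land use in catchment
])
# Synthetic "presence" rule so the tree has something to learn.
y = ((X[:, 1] < 18) & (X[:, 2] < 60)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

print("independent-data accuracy:", accuracy_score(y_test, tree.predict(X_test)))
```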

  5. Linear scaling computation of the Fock matrix. II. Rigorous bounds on exchange integrals and incremental Fock build

    NASA Astrophysics Data System (ADS)

    Schwegler, Eric; Challacombe, Matt; Head-Gordon, Martin

    1997-06-01

    A new linear scaling method for computation of the Cartesian Gaussian-based Hartree-Fock exchange matrix is described, which employs a method numerically equivalent to standard direct SCF, and which does not enforce locality of the density matrix. With a previously described method for computing the Coulomb matrix [J. Chem. Phys. 106, 5526 (1997)], linear scaling incremental Fock builds are demonstrated for the first time. Microhartree accuracy and linear scaling are achieved for restricted Hartree-Fock calculations on sequences of water clusters and polyglycine α-helices with the 3-21G and 6-31G basis sets. Eightfold speedups are found relative to our previous method. For systems with a small ionization potential, such as graphitic sheets, the method naturally reverts to the expected quadratic behavior. Also, benchmark 3-21G calculations attaining microhartree accuracy are reported for the P53 tetramerization monomer involving 698 atoms and 3836 basis functions.

  6. Maintaining a Local Data Integration System in Support of Weather Forecast Operations

    NASA Technical Reports Server (NTRS)

    Watson, Leela R.; Blottman, Peter F.; Sharp, David W.; Hoeth, Brian

    2010-01-01

    Since 2000, both the National Weather Service in Melbourne, FL (NWS MLB) and the Spaceflight Meteorology Group (SMG) have used a local data integration system (LDIS) as part of their forecast and warning operations. Each has benefited from 3-dimensional analyses that are delivered to forecasters every 15 minutes across the peninsula of Florida. The intent is to generate products that enhance short-range weather forecasts issued in support of NWS MLB and SMG operational requirements within East Central Florida. The current LDIS uses the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS) package as its core, which integrates a wide variety of national, regional, and local observational data sets. It assimilates all available real-time data within its domain and is run at a finer spatial and temporal resolution than current national- or regional-scale analysis packages. As such, it provides local forecasters with a more comprehensive and complete understanding of evolving fine-scale weather features. Recent efforts have been undertaken to update the LDIS through the formal tasking process of NASA's Applied Meteorology Unit. The goals include upgrading LDIS with the latest version of ADAS, incorporating new sources of observational data, and making adjustments to shell scripts written to govern the system. A series of scripts run a complete modeling system consisting of the preprocessing step, the main model integration, and the post-processing step. The preprocessing step prepares the terrain, surface characteristics data sets, and the objective analysis for model initialization. Data ingested through ADAS include (but are not limited to) Level II Weather Surveillance Radar-1988 Doppler (WSR-88D) data from six Florida radars, Geostationary Operational Environmental Satellites (GOES) visible and infrared satellite imagery, surface and upper air observations throughout Florida from NOAA's Earth System Research Laboratory/Global Systems Division/Meteorological Assimilation Data Ingest System (MADIS), as well as the Kennedy Space Center/Cape Canaveral Air Force Station wind tower network. The scripts provide NWS MLB and SMG with several options for setting a desirable runtime configuration of the LDIS to account for adjustments in grid spacing, domain location, choice of observational data sources, and selection of background model fields, among others. The utility of an improved LDIS will be demonstrated through postanalysis warm and cool season case studies that compare high-resolution model output with and without the ADAS analyses. Operationally, these upgrades will result in more accurate depictions of the current local environment to help with short-range weather forecasting applications, while also offering an improved initialization for local versions of the Weather Research and Forecasting model.

  7. Downscaling climate model output for water resources impacts assessment (Invited)

    NASA Astrophysics Data System (ADS)

    Maurer, E. P.; Pierce, D. W.; Cayan, D. R.

    2013-12-01

    Water agencies in the U.S. and around the globe are beginning to wrap climate change projections into their planning procedures, recognizing that ongoing human-induced changes to hydrology can affect water management in significant ways. Future hydrology changes are derived using global climate model (GCM) projections, though their output is at a spatial scale that is too coarse to meet the needs of those concerned with local and regional impacts. Those investigating local impacts have employed a range of techniques for downscaling, the process of translating GCM output to a more locally-relevant spatial scale. Recent projects have produced libraries of publicly-available downscaled climate projections, enabling managers, researchers and others to focus on impacts studies, drawing from a shared pool of fine-scale climate data. Besides the obvious advantage to data users, who no longer need to develop expertise in downscaling prior to examining impacts, the use of the downscaled data by hundreds of people has allowed a crowdsourcing approach to examining the data. The wide variety of applications employed by different users has revealed characteristics not discovered during the initial data set production. This has led to a deeper look at the downscaling methods, including the assumptions and effect of bias correction of GCM output. Here new findings are presented related to the assumption of stationarity in the relationships between large- and fine-scale climate, as well as the impact of quantile mapping bias correction on precipitation trends. The validity of these assumptions can influence the interpretations of impacts studies using data derived using these standard statistical methods and help point the way to improved methods.
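
    A generic sketch of the quantile-mapping bias correction discussed above is given below: each raw model value is mapped through the calibration-period model and observed distributions. This is an illustrative empirical implementation with synthetic data, not the exact procedure used to produce any particular downscaled archive.

```python
# Generic empirical quantile-mapping bias correction: map each raw model value
# to the observed value at the same quantile of the calibration-period CDFs.
# Synthetic gamma-distributed "precipitation" stands in for real data.
import numpy as np

def quantile_map(model_future, model_hist, obs_hist):
    """Bias-correct model_future using calibration-period model and observations."""
    # Quantile of each future value within the historical model distribution.
    quantiles = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    # Observed value at the same quantile.
    return np.quantile(obs_hist, quantiles)

rng = np.random.default_rng(0)
obs_hist = rng.gamma(2.0, 3.0, 5000)           # e.g. observed daily precipitation
model_hist = rng.gamma(2.0, 4.0, 5000)         # model is biased wet
model_future = rng.gamma(2.2, 4.0, 1000)       # future simulation

corrected = quantile_map(model_future, model_hist, obs_hist)
print("raw future mean:", model_future.mean(), "corrected mean:", corrected.mean())
```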

  8. Discontinuities Characteristics of the Upper Jurassic Arab-D Reservoir Equivalent Tight Carbonates Outcrops, Central Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Abdlmutalib, Ammar; Abdullatif, Osman

    2017-04-01

    Jurassic carbonates represent an important part of the Mesozoic petroleum system in the Arabian Peninsula in terms of source rocks, reservoirs, and seals. Jurassic Outcrop equivalents are well exposed in central Saudi Arabia and which allow examining and measuring different scales of geological heterogeneities that are difficult to collect from the subsurface due to limitations of data and techniques. Identifying carbonates Discontinuities characteristics at outcrops might help to understand and predict their properties and behavior in the subsurface. The main objective of this study is to identify the lithofacies and the discontinuities properties of the upper Jurassic carbonates of the Arab D member and the Jubaila Formation (Arab-D reservoir) based on their outcrop equivalent strata in central Saudi Arabia. The sedimentologic analysis revealed several lithofacies types that vary in their thickness, abundances, cyclicity and vertical and lateral stacking patterns. The carbonates lithofacies included mudstone, wackestone, packstone, and grainstone. These lithofacies indicate deposition within tidal flat, skeletal banks and shallow to deep lagoonal paleoenvironmental settings. Field investigations of the outcrops revealed two types of discontinuities within Arab D Member and Upper Jubaila. These are depositional discontinuities and tectonic fractures and which all vary in their orientation, intensity, spacing, aperture and displacements. It seems that both regional and local controls have affected the fracture development within these carbonate rocks. On the regional scale, the fractures seem to be structurally controlled by the Central Arabian Graben System, which affected central Saudi Arabia. While, locally, at the outcrop scale, stratigraphic, depositional and diagenetic controls appear to have influenced the fracture development and intensity. The fracture sets and orientations identified on outcrops show similarity to those fracture sets revealed in the upper Jurassic carbonates in the subsurface which suggest inter-relationships. Therefore, the integration of discontinuities characteristics revealed from the Arab-D outcrop with subsurface data might help to understand and predict discontinuity properties and patterns of the Arab-D reservoir in the subsurface.

  9. Many-body localization in disorder-free systems: The importance of finite-size constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papić, Z., E-mail: zpapic@perimeterinstitute.ca; Perimeter Institute for Theoretical Physics, Waterloo, ON N2L 2Y5; Stoudenmire, E. Miles

    2015-11-15

    Recently it has been suggested that many-body localization (MBL) can occur in translation-invariant systems, and candidate 1D models have been proposed. We find that such models, in contrast to MBL systems with quenched disorder, typically exhibit much more severe finite-size effects due to the presence of two or more vastly different energy scales. In a finite system, this can artificially split the density of states (DOS) into bands separated by large gaps. We argue that for such models to faithfully represent the thermodynamic-limit behavior, the ratio of the relevant couplings must exceed a certain system-size-dependent cutoff, chosen such that the various bands in the DOS overlap one another. Setting the parameters this way to minimize finite-size effects, we study several translation-invariant MBL candidate models using exact diagonalization. Based on diagnostics including entanglement and local observables, we observe thermal (ergodic), rather than MBL-like behavior. Our results suggest that MBL in translation-invariant systems with two or more very different energy scales is less robust than perturbative arguments suggest, possibly pointing to the importance of non-perturbative effects which induce delocalization in the thermodynamic limit.
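
    The finite-size band splitting described above can be illustrated numerically by exact diagonalization of a small translation-invariant chain with two widely separated coupling scales. The sketch below uses a generic transverse-field Ising chain (chosen for simplicity) rather than the specific candidate models analyzed in the paper; the chain length and couplings are assumptions.

```python
# Illustration of finite-size band splitting: exact diagonalization of a small
# translation-invariant transverse-field Ising chain with two very different
# energy scales (J >> h). A generic toy model, not the paper's candidate models.
import numpy as np

def ising_chain_hamiltonian(L, J, h):
    """H = J * sum sz_i sz_{i+1} + h * sum sx_i on an open chain of L spins."""
    sx = np.array([[0, 1], [1, 0]], dtype=float)
    sz = np.array([[1, 0], [0, -1]], dtype=float)
    iden = np.eye(2)

    def site_op(op, site):
        mats = [op if i == site else iden for i in range(L)]
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    H = np.zeros((2**L, 2**L))
    for i in range(L - 1):
        H += J * site_op(sz, i) @ site_op(sz, i + 1)
    for i in range(L):
        H += h * site_op(sx, i)
    return H

L = 8
for ratio in (1.0, 20.0):           # comparable vs. widely separated energy scales
    energies = np.linalg.eigvalsh(ising_chain_hamiltonian(L, J=ratio, h=1.0))
    gaps = np.diff(np.sort(energies))
    # For J/h >> 1 the spectrum clusters into bands separated by gaps ~ 2J.
    print(f"J/h = {ratio:5.1f}: largest spectral gap = {gaps.max():.2f}")
```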

  10. Basis set limit and systematic errors in local-orbital based all-electron DFT

    NASA Astrophysics Data System (ADS)

    Blum, Volker; Behler, Jörg; Gehrke, Ralf; Reuter, Karsten; Scheffler, Matthias

    2006-03-01

    With the advent of efficient integration schemes,^1,2 numeric atom-centered orbitals (NAO's) are an attractive basis choice in practical density functional theory (DFT) calculations of nanostructured systems (surfaces, clusters, molecules). Though all-electron, the efficiency of practical implementations promises to be on par with the best plane-wave pseudopotential codes, while having a noticeably higher accuracy if required: Minimal-sized effective tight-binding like calculations and chemically accurate all-electron calculations are both possible within the same framework; non-periodic and periodic systems can be treated on equal footing; and the localized nature of the basis allows in principle for O(N)-like scaling. However, converging an observable with respect to the basis set is less straightforward than with competing systematic basis choices (e.g., plane waves). We here investigate the basis set limit of optimized NAO basis sets in all-electron calculations, using as examples small molecules and clusters (N2, Cu2, Cu4, Cu10). meV-level total energy convergence is possible using <=50 basis functions per atom in all cases. We also find a clear correlation between the errors which arise from underconverged basis sets, and the system geometry (interatomic distance). ^1 B. Delley, J. Chem. Phys. 92, 508 (1990), ^2 J.M. Soler et al., J. Phys.: Condens. Matter 14, 2745 (2002).

  11. Acoustic localization at large scales: a promising method for grey wolf monitoring.

    PubMed

    Papin, Morgane; Pichenot, Julian; Guérold, François; Germain, Estelle

    2018-01-01

    The grey wolf ( Canis lupus ) is naturally recolonizing its former habitats in Europe where it was extirpated during the previous two centuries. The management of this protected species is often controversial and its monitoring is a challenge for conservation purposes. However, this elusive carnivore can disperse over long distances in various natural contexts, making its monitoring difficult. Moreover, methods used for collecting signs of presence are usually time-consuming and/or costly. Currently, new acoustic recording tools are contributing to the development of passive acoustic methods as alternative approaches for detecting, monitoring, or identifying species that produce sounds in nature, such as the grey wolf. In the present study, we conducted field experiments to investigate the possibility of using a low-density microphone array to localize wolves at a large scale in two contrasting natural environments in north-eastern France. For scientific and social reasons, the experiments were based on a synthetic sound with similar acoustic properties to howls. This sound was broadcast at several sites. Then, localization estimates and the accuracy were calculated. Finally, linear mixed-effects models were used to identify the factors that influenced the localization accuracy. Among 354 nocturnal broadcasts in total, 269 were recorded by at least one autonomous recorder, thereby demonstrating the potential of this tool. Besides, 59 broadcasts were recorded by at least four microphones and used for acoustic localization. The broadcast sites were localized with an overall mean accuracy of 315 ± 617 (standard deviation) m. After setting a threshold for the temporal error value associated with the estimated coordinates, some unreliable values were excluded and the mean accuracy decreased to 167 ± 308 m. The number of broadcasts recorded was higher in the lowland environment, but the localization accuracy was similar in both environments, although it varied significantly among different nights in each study area. Our results confirm the potential of using acoustic methods to localize wolves with high accuracy, in different natural environments and at large spatial scales. Passive acoustic methods are suitable for monitoring the dynamics of grey wolf recolonization and so, will contribute to enhance conservation and management plans.
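
    As a hedged illustration of how arrival times at several recorders can be turned into a position estimate, the sketch below solves a generic time-difference-of-arrival problem by nonlinear least squares. The recorder layout, sound speed and timing noise are assumed values, and this is not the authors' actual processing chain.

```python
# Generic time-difference-of-arrival (TDOA) localization sketch: estimate a
# sound source position from arrival times at four recorders by nonlinear
# least squares. Geometry, sound speed and noise are assumed values.
import numpy as np
from scipy.optimize import least_squares

C = 343.0                                          # speed of sound in air (m/s), assumed
mics = np.array([[0, 0], [1500, 0], [0, 1500], [1500, 1500]], dtype=float)  # metres
true_source = np.array([400.0, 900.0])

rng = np.random.default_rng(3)
arrival_times = np.linalg.norm(mics - true_source, axis=1) / C
arrival_times += rng.normal(scale=0.002, size=len(mics))   # ~2 ms timing noise

def residuals(params):
    """Residuals of pairwise arrival-time differences relative to recorder 0."""
    x, y = params
    predicted = np.linalg.norm(mics - np.array([x, y]), axis=1) / C
    return (predicted - predicted[0]) - (arrival_times - arrival_times[0])

fit = least_squares(residuals, x0=[750.0, 750.0])
error = np.linalg.norm(fit.x - true_source)
print(f"estimated source: {fit.x}, error: {error:.1f} m")
```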

  12. Analysis of Influenza and RSV dynamics in the community using a ‘Local Transmission Zone’ approach

    NASA Astrophysics Data System (ADS)

    Almogy, Gal; Stone, Lewi; Bernevig, B. Andrei; Wolf, Dana G.; Dorozko, Marina; Moses, Allon E.; Nir-Paz, Ran

    2017-02-01

    Understanding the dynamics of pathogen spread within urban areas is critical for the effective prevention and containment of communicable diseases. At these relatively small geographic scales, short-distance interactions and tightly knit sub-networks dominate the dynamics of pathogen transmission; yet, the effective boundaries of these micro-scale groups are generally not known and often ignored. Using clinical test results from hospital admitted patients we analyze the spatio-temporal distribution of Influenza Like Illness (ILI) in the city of Jerusalem over a period of three winter seasons. We demonstrate that this urban area is not a single, perfectly mixed ecology, but is in fact comprised of a set of more basic, relatively independent pathogen transmission units, which we term here Local Transmission Zones, LTZs. By identifying these LTZs, and using the dynamic pathogen-content information contained within them, we are able to differentiate between disease-causes at the individual patient level often with near-perfect predictive accuracy.

  13. Stability of Local Quantum Dissipative Systems

    NASA Astrophysics Data System (ADS)

    Cubitt, Toby S.; Lucia, Angelo; Michalakis, Spyridon; Perez-Garcia, David

    2015-08-01

    Open quantum systems weakly coupled to the environment are modeled by completely positive, trace preserving semigroups of linear maps. The generators of such evolutions are called Lindbladians. In the setting of quantum many-body systems on a lattice it is natural to consider Lindbladians that decompose into a sum of local interactions with decreasing strength with respect to the size of their support. For both practical and theoretical reasons, it is crucial to estimate the impact that perturbations in the generating Lindbladian, arising as noise or errors, can have on the evolution. These local perturbations are potentially unbounded, but constrained to respect the underlying lattice structure. We show that even for polynomially decaying errors in the Lindbladian, local observables and correlation functions are stable if the unperturbed Lindbladian has a unique fixed point and a mixing time that scales logarithmically with the system size. The proof relies on Lieb-Robinson bounds, which describe a finite group velocity for propagation of information in local systems. As a main example, we prove that classical Glauber dynamics is stable under local perturbations, including perturbations in the transition rates, which may not preserve detailed balance.

  14. A template-based approach for parallel hexahedral two-refinement

    DOE PAGES

    Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.

    2016-10-17

    Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than scaled Jacobian 0.3 prior to smoothing.

  15. A template-based approach for parallel hexahedral two-refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.

    Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than scaled Jacobian 0.3 prior to smoothing.
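
    Both records above describe subdividing a refined hex into eight children. The sketch below shows only that basic operation, splitting one hexahedral cell into eight children via trilinear interpolation of its corners; the marked-node templates and parallel bookkeeping of the actual algorithm are not reproduced, and the corner ordering is an assumption.

```python
# Sketch of the basic operation behind the refinement described above:
# splitting one hexahedral cell into eight children by trilinear interpolation
# of its corner coordinates. Templates and parallel communication omitted.
import numpy as np

def trilinear(corners, u, v, w):
    """corners[i,j,k] holds the (x,y,z) of the hex corner at parametric (i,j,k)."""
    c = corners
    return ((1-u)*(1-v)*(1-w)*c[0,0,0] + u*(1-v)*(1-w)*c[1,0,0]
            + (1-u)*v*(1-w)*c[0,1,0] + u*v*(1-w)*c[1,1,0]
            + (1-u)*(1-v)*w*c[0,0,1] + u*(1-v)*w*c[1,0,1]
            + (1-u)*v*w*c[0,1,1] + u*v*w*c[1,1,1])

def refine_hex(corners):
    """Return the 8 child hexes of a parent hex, each as a 2x2x2 corner array."""
    # 3x3x3 lattice of nodes at parametric coordinates 0, 1/2, 1.
    nodes = np.empty((3, 3, 3, 3))
    for i in range(3):
        for j in range(3):
            for k in range(3):
                nodes[i, j, k] = trilinear(corners, i/2, j/2, k/2)
    children = []
    for i in range(2):
        for j in range(2):
            for k in range(2):
                children.append(nodes[i:i+2, j:j+2, k:k+2].copy())
    return children

unit_hex = np.array([[[[i, j, k] for k in (0., 1.)] for j in (0., 1.)] for i in (0., 1.)])
kids = refine_hex(unit_hex)
print(len(kids), "children; first child corners:\n", kids[0].reshape(-1, 3))
```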

  16. POSEIDON: An integrated system for analysis and forecast of hydrological, meteorological and surface marine fields in the Mediterranean area

    NASA Astrophysics Data System (ADS)

    Speranza, A.; Accadia, C.; Casaioli, M.; Mariani, S.; Monacelli, G.; Inghilesi, R.; Tartaglione, N.; Ruti, P. M.; Carillo, A.; Bargagli, A.; Pisacane, G.; Valentinotti, F.; Lavagnini, A.

    2004-07-01

    The Mediterranean area is characterized by relevant hydrological, meteorological and marine processes developing at horizontal space scales of the order of 1-100 km. In the recent past, several international programs (ALPEX, POEM, MAP, etc.) have been devoted to "resolving" the dynamics of such motions. Other projects (INTERREG-Flooding, MEDEX, etc.) are at present being developed with special emphasis on catastrophic events with major impact on human society, which are quite often characterized by processes at the above-mentioned scales of motion. In the dynamical evolution of such events, however, the interaction of the local (and sometimes very damaging) processes with others developing at larger scales of motion is equally important. In fact, some of the most catastrophic events in the history of Mediterranean countries are associated with dynamical processes covering the whole range of space-time scales from planetary to local. The Prevision Operational System for the mEditerranean basIn and the Defence of the lagOon of veNice (POSEIDON) is an integrated system for the analysis and forecast of hydrological, meteorological and oceanic fields, specifically designed and set up to bridge the gap between global and local scales of motion by explicitly modeling the dynamical processes referred to above in the range of scales from the Mediterranean to the local. The core of POSEIDON consists of a "cascade" of numerical models that, starting from global-scale numerical analysis-forecast, goes all the way to very local phenomena, like tidal propagation in the Venice Lagoon. The large computational load imposed by such an operational design necessarily requires parallel computing technology: the first model in the cascade is a parallelised version of the BOlogna Limited Area Model (BOLAM) running on a Quadrics 128-processor computer (also known as QBOLAM). POSEIDON, developed in the context of a co-operation between the Italian Agency for New technologies, Energy and Environment (Ente per le Nuove tecnologie, l'Energia e l'Ambiente, ENEA) and the Italian Agency for Environmental Protection and Technical Services (Agenzia per la Protezione dell'Ambiente e per i Servizi Tecnici, APAT), became operational in 2000, and we are now in a position to draw some preliminary conclusions about its performance. In this paper we describe the scientific concepts that were at the basis of the original planning, the structure of the system, its operational cycle and some preliminary scientific and technical evaluations after two years of experimentation.

  17. Local and transient nanoscale strain mapping during in situ deformation

    DOE PAGES

    Gammer, C.; Kacher, J.; Czarnik, C.; ...

    2016-08-26

    The mobility of defects such as dislocations controls the mechanical properties of metals. This mobility is determined both by the characteristics of the defect and the material, as well as the local stress and strain applied to the defect. Therefore, the knowledge of the stress and strain during deformation at the scale of defects is important for understanding fundamental deformation mechanisms. In this paper, we demonstrate a method of measuring local stresses and strains during continuous in situ deformation with a resolution of a few nanometers using nanodiffraction strain mapping. Finally, our results demonstrate how large multidimensional data sets captured with high-speed electron detectors can be analyzed in multiple ways after an in situ TEM experiment, opening the door for true multimodal analysis from a single electron scattering experiment.

  18. Optimized circulation and weather type classifications relating large-scale atmospheric conditions to local PM10 concentrations in Bavaria

    NASA Astrophysics Data System (ADS)

    Weitnauer, C.; Beck, C.; Jacobeit, J.

    2013-12-01

    In the last decades the critical increase of the emission of air pollutants like nitrogen dioxide, sulfur oxides and particulate matter especially in urban areas has become a problem for the environment as well as human health. Several studies confirm a risk of high concentration episodes of particulate matter with an aerodynamic diameter < 10 μm (PM10) for the respiratory tract or cardiovascular diseases. Furthermore, it is known that local meteorological and large scale atmospheric conditions are important influencing factors on local PM10 concentrations. With climate changing rapidly, these connections need to be better understood in order to provide estimates of climate change related consequences for air quality management purposes. For quantifying the link between large-scale atmospheric conditions and local PM10 concentrations, circulation- and weather-type classifications are used in a number of studies employing different statistical approaches. Thus far, only a few systematic attempts have been made to modify existing or to develop new weather- and circulation-type classifications in order to improve their ability to resolve local PM10 concentrations. In this contribution, existing weather- and circulation-type classifications, performed on daily 2.5° x 2.5° gridded parameters of the NCEP/NCAR reanalysis data set, are optimized with regard to their discriminative power for local PM10 concentrations at 49 Bavarian measurement sites for the period 1980 to 2011. Most of the PM10 stations are situated in urban areas covering urban background, traffic and industry related pollution regimes. The range of regimes is extended by a few rural background stations. To characterize the correspondence between the PM10 measurements of the different stations by spatial patterns, a regionalization by an s-mode principal component analysis is realized on the high-pass filtered data. The optimization of the circulation- and weather types is implemented using two representative classification approaches, a k-means cluster analysis and an objective version of the Grosswetter types. They have been run with varying spatial and temporal settings as well as modified numbers of classes. As an evaluation metric for their performance several skill scores are used. Taking this outcome into account, further attempts towards the optimization of circulation-type classifications are made. These are varying meteorological input parameters (e.g. geopotential height, zonal and meridional wind, specific humidity, temperature) on several pressure levels (1000, 850 and 500 hPa) and combinations of these variables. All classification variants are again evaluated. Based on these analyses it is further intended to develop robust downscaling models for estimating possible future - climate change induced - variations of local PM10 concentrations in Bavaria from scenario runs of global CMIP5 climate models.
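
    A hedged sketch of the k-means circulation-type step follows: standardized daily SLP fields are clustered into a fixed number of types, and the discriminative power for local PM10 is summarized here by the between-type share of variance. The data are synthetic stand-ins, the number of types is an assumption, and this single metric is one simple choice among the several skill scores mentioned above.

```python
# Sketch of a k-means circulation-type classification and a simple measure of
# its discriminative power for local PM10 (between-type share of variance).
# Synthetic data stand in for gridded SLP fields and station PM10.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n_days, n_gridpoints = 3000, 400
slp = rng.normal(size=(n_days, n_gridpoints))                   # daily SLP anomaly fields
pm10 = 25 + 5 * slp[:, 0] + rng.normal(scale=8, size=n_days)    # station PM10 (synthetic link)

X = StandardScaler().fit_transform(slp)
types = KMeans(n_clusters=9, n_init=10, random_state=0).fit_predict(X)

def explained_variance(values, labels):
    """Share of PM10 variance explained by circulation-type membership."""
    grand = values.mean()
    between = sum((values[labels == t].size) * (values[labels == t].mean() - grand) ** 2
                  for t in np.unique(labels))
    total = ((values - grand) ** 2).sum()
    return between / total

print("between-type explained variance:", round(explained_variance(pm10, types), 3))
```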

  19. Bulk flow in the combined 2MTF and 6dFGSv surveys

    NASA Astrophysics Data System (ADS)

    Qin, Fei; Howlett, Cullan; Staveley-Smith, Lister; Hong, Tao

    2018-07-01

    We create a combined sample of 10 904 late- and early-type galaxies from the 2MTF and 6dFGSv surveys in order to accurately measure bulk flow in the local Universe. Galaxies and groups of galaxies common between the two surveys are used to verify that the difference in zero-points is <0.02 dex. We introduce a maximum likelihood estimator (ηMLE) for bulk flow measurements that allows for more accurate measurement in the presence of non-Gaussian measurement errors. To calibrate out residual biases due to the subtle interaction of selection effects, Malmquist bias and anisotropic sky distribution, the estimator is tested on mock catalogues generated from 16 independent large-scale GiggleZ and SURFS simulations. The bulk flow of the local Universe using the combined data set, corresponding to a scale size of 40 h-1 Mpc, is 288 ± 24 km s-1 in the direction (l, b) = (296 ± 6°, 21 ± 5°). This is the most accurate bulk flow measurement to date, and the amplitude of the flow is consistent with the Λ cold dark matter expectation for similar size scales.

  20. Bulk flow in the combined 2MTF and 6dFGSv surveys

    NASA Astrophysics Data System (ADS)

    Qin, Fei; Howlett, Cullan; Staveley-Smith, Lister; Hong, Tao

    2018-04-01

    We create a combined sample of 10,904 late- and early-type galaxies from the 2MTF and 6dFGSv surveys in order to accurately measure bulk flow in the local Universe. Galaxies and groups of galaxies common between the two surveys are used to verify that the difference in zero-points is <0.02 dex. We introduce a new maximum likelihood estimator (ηMLE) for bulk flow measurements which allows for more accurate measurement in the presence of non-Gaussian measurement errors. To calibrate out residual biases due to the subtle interaction of selection effects, Malmquist bias and anisotropic sky distribution, the estimator is tested on mock catalogues generated from 16 independent large-scale GiggleZ and SURFS simulations. The bulk flow of the local Universe using the combined data set, corresponding to a scale size of 40 h-1 Mpc, is 288 ± 24 km s-1 in the direction (l, b) = (296 ± 6°, 21 ± 5°). This is the most accurate bulk flow measurement to date, and the amplitude of the flow is consistent with the ΛCDM expectation for similar size scales.

  1. Time-localized wavelet multiple regression and correlation

    NASA Astrophysics Data System (ADS)

    Fernández-Macho, Javier

    2018-02-01

    This paper extends wavelet methodology to handle comovement dynamics of multivariate time series via moving weighted regression on wavelet coefficients. The concept of wavelet local multiple correlation is used to produce one single set of multiscale correlations along time, in contrast with the large number of wavelet correlation maps that need to be compared when using standard pairwise wavelet correlations with rolling windows. Also, the spectral properties of weight functions are investigated and it is argued that some common time windows, such as the usual rectangular rolling window, are not satisfactory on these grounds. The method is illustrated with a multiscale analysis of the comovements of Eurozone stock markets during this century. It is shown how the evolution of the correlation structure in these markets has been far from homogeneous both along time and across timescales featuring an acute divide across timescales at about the quarterly scale. At longer scales, evidence from the long-term correlation structure can be interpreted as stable perfect integration among Euro stock markets. On the other hand, at intramonth and intraweek scales, the short-term correlation structure has been clearly evolving along time, experiencing a sharp increase during financial crises which may be interpreted as evidence of financial 'contagion'.
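
    A simplified sketch of the basic ingredients follows: decompose two return series by scale with a (stationary) wavelet transform and compute a moving-window correlation of the coefficients at each scale. This pairwise, rectangular-window version with synthetic data only illustrates the multiscale/rolling idea; the paper's method uses a multivariate local regression with smooth weight functions, which is not reproduced here.

```python
# Simplified sketch: scale-by-scale moving-window correlation of two series
# from their stationary wavelet coefficients. Synthetic returns, "db4" wavelet,
# window length and level count are illustrative assumptions.
import numpy as np
import pywt

rng = np.random.default_rng(5)
n, levels = 1024, 4                               # n must be a multiple of 2**levels
rx = rng.normal(size=n)                           # synthetic returns, market 1
ry = 0.7 * rx + 0.5 * rng.normal(size=n)          # partially co-moving returns, market 2

cx = pywt.swt(rx, "db4", level=levels)            # one (approx, detail) pair per level
cy = pywt.swt(ry, "db4", level=levels)

def rolling_corr(a, b, window=128):
    out = np.full(len(a), np.nan)
    for t in range(window, len(a)):
        out[t] = np.corrcoef(a[t - window:t], b[t - window:t])[0, 1]
    return out

for band, ((_, dx), (_, dy)) in enumerate(zip(cx, cy), start=1):
    corr = rolling_corr(dx, dy)                   # detail coefficients at this band
    print(f"band {band}: mean rolling correlation = {np.nanmean(corr):.2f}")
```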

  2. Predictive model for local scour downstream of hydrokinetic turbines in erodible channels

    NASA Astrophysics Data System (ADS)

    Musa, Mirko; Heisel, Michael; Guala, Michele

    2018-02-01

    A modeling framework is derived to predict the scour induced by marine hydrokinetic turbines installed on fluvial or tidal erodible bed surfaces. Following recent advances in bridge scour formulation, the phenomenological theory of turbulence is applied to describe the flow structures that dictate the equilibrium scour depth condition at the turbine base. Using scaling arguments, we link the turbine operating conditions to the flow structures and scour depth through the drag force exerted by the device on the flow. The resulting theoretical model predicts scour depth using dimensionless parameters and considers two potential scenarios depending on the proximity of the turbine rotor to the erodible bed. The model is validated at the laboratory scale with experimental data comprising the two sediment mobility regimes (clear water and live bed), different turbine configurations, hydraulic settings, bed material compositions, and migrating bedform types. The present work provides future developers of flow energy conversion technologies with a physics-based predictive formula for local scour depth beneficial to feasibility studies and anchoring system design. A potential prototype-scale deployment in a large sandy river is also considered with our model to quantify how the expected scour depth varies as a function of the flow discharge and rotor diameter.

  3. Natural disasters and population mobility in Bangladesh

    PubMed Central

    Gray, Clark L.; Mueller, Valerie

    2012-01-01

    The consequences of environmental change for human migration have gained increasing attention in the context of climate change and recent large-scale natural disasters, but as yet relatively few large-scale and quantitative studies have addressed this issue. We investigate the consequences of climate-related natural disasters for long-term population mobility in rural Bangladesh, a region particularly vulnerable to environmental change, using longitudinal survey data from 1,700 households spanning a 15-y period. Multivariate event history models are used to estimate the effects of flooding and crop failures on local population mobility and long-distance migration while controlling for a large set of potential confounders at various scales. The results indicate that flooding has modest effects on mobility that are most visible at moderate intensities and for women and the poor. However, crop failures unrelated to flooding have strong effects on mobility in which households that are not directly affected but live in severely affected areas are the most likely to move. These results point toward an alternate paradigm of disaster-induced mobility that recognizes the significant barriers to migration for vulnerable households as well their substantial local adaptive capacity. PMID:22474361

  4. Directional semivariogram analysis to identify and rank controls on the spatial variability of fracture networks

    NASA Astrophysics Data System (ADS)

    Hanke, John R.; Fischer, Mark P.; Pollyea, Ryan M.

    2018-03-01

    In this study, the directional semivariogram is deployed to investigate the spatial variability of map-scale fracture network attributes in the Paradox Basin, Utah. The relative variability ratio (R) is introduced as the ratio of integrated anisotropic semivariogram models, and R is shown to be an effective metric for quantifying the magnitude of spatial variability for any two azimuthal directions. R is applied to a GIS-based data set comprising roughly 1200 fractures, in an area which is bounded by a map-scale anticline and a km-scale normal fault. This analysis reveals that proximity to the fault strongly influences the magnitude of spatial variability for both fracture intensity and intersection density within 1-2 km. Additionally, there is significant anisotropy in the spatial variability, which is correlated with trends of the anticline and fault. The direction of minimum spatial correlation is normal to the fault at proximal distances, and gradually rotates and becomes subparallel to the fold axis over the same 1-2 km distance away from the fault. We interpret these changes to reflect varying scales of influence of the fault and the fold on fracture network development: the fault locally influences the magnitude and variability of fracture network attributes, whereas the fold sets the background level and structure of directional variability.
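
    A hedged sketch of the directional semivariogram and the ratio R follows: experimental semivariograms are binned by lag distance and azimuth tolerance for two directions, and R is taken as the ratio of their integrals. Synthetic anisotropic point data and simple binning stand in for the GIS-based fracture-attribute data, and the lag/tolerance choices are assumptions.

```python
# Sketch of a directional experimental semivariogram and the relative
# variability ratio R (ratio of integrated semivariograms for two azimuths).
# Synthetic anisotropic data; lags and angular tolerance are assumed.
import numpy as np

rng = np.random.default_rng(11)
n = 800
xy = rng.uniform(0, 10_000, size=(n, 2))                        # point locations (m)
values = np.sin(xy[:, 0] / 800.0) + 0.1 * rng.normal(size=n)    # anisotropic attribute

def directional_semivariogram(xy, values, azimuth_deg, tol_deg=22.5,
                              lags=np.arange(250, 3000, 250)):
    d = xy[:, None, :] - xy[None, :, :]
    dist = np.hypot(d[..., 0], d[..., 1])
    ang = np.degrees(np.arctan2(d[..., 1], d[..., 0])) % 180.0
    dv2 = 0.5 * (values[:, None] - values[None, :]) ** 2
    in_dir = np.abs((ang - azimuth_deg + 90.0) % 180.0 - 90.0) <= tol_deg
    gamma = []
    for lag in lags:
        mask = in_dir & (dist > lag - 125) & (dist <= lag + 125)
        gamma.append(dv2[mask].mean() if mask.any() else np.nan)
    return lags, np.array(gamma)

lags, g_ew = directional_semivariogram(xy, values, azimuth_deg=0.0)    # E-W pairs
_,    g_ns = directional_semivariogram(xy, values, azimuth_deg=90.0)   # N-S pairs

R = np.trapz(g_ew, lags) / np.trapz(g_ns, lags)   # relative variability ratio
print("R (E-W vs N-S):", round(R, 2))
```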

  5. Chemical analyses (raw laboratory data) and locality index maps of the Confederate Gulch area, Broadwater and Meagher Counties, Montana

    USGS Publications Warehouse

    ,

    1975-01-01

    Analysis of the side-looking airborne radar imagery of Massachusetts, Connecticut and Rhode Island indicates that radar shows the topography in great detail. Since bedrock geologic features are frequently expressed in the topography, the radar lends itself to geologic interpretation. The radar was studied by comparisons with field-mapped geologic data, first at a scale of approximately 1:125,000 and then at a scale of 1:500,000. The larger scale comparison revealed that faults, minor faults, joint sets, bedding and foliation attitudes, lithology and lithologic contacts all have a topographic expression interpretable on the imagery. Surficial geologic features were far less visible on the imagery over most of the area studied. The smaller scale comparisons revealed a pervasive, near-orthogonal fracture set cutting all types and ages of rock and trending roughly N40°E and N30°W. In certain places the strike of bedding and foliation attitudes and some lithologic contacts were visible in addition to the fractures. Fracturing in southern New England is apparently far more important than has been previously recognized. This new information, together with the visibility of many bedding and foliation attitudes and lithologic contacts, indicates the importance of radar imagery in improving the geologic interpretation of an area.

  6. The use of Goal Attainment Scaling in a community health promotion initiative with seniors.

    PubMed

    Kloseck, Marita

    2007-07-03

    Evaluating collaborative community health promotion initiatives presents unique challenges, including engaging community members and other stakeholders in the evaluation process, and measuring the attainment of goals at the collective community level. Goal Attainment Scaling (GAS) is a versatile, under-utilized evaluation tool adaptable to a wide range of situations. GAS actively involves all partners in the evaluation process and has many benefits when used in community health settings. The purpose of this paper is to describe the use of GAS as a potential means of measuring progress and outcomes in community health promotion and community development projects. GAS methodology was used in a local community of seniors (n = 2500; mean age = 76 ± 8.06 SD; 77% female, 23% male) to a) collaboratively set health promotion and community partnership goals and b) objectively measure the degree of achievement, over-achievement, or under-achievement of the established health promotion goals. Goal attainment was measured in a variety of areas including operationalizing a health promotion centre in a local mall, developing a sustainable mechanism for recruiting and training volunteers to operate the health promotion centre, and developing and implementing community health education programs. Goal attainment was evaluated at 3-month intervals for one year, then re-evaluated at year 2. GAS was found to be a feasible and responsive method of measuring community health promotion and community development progress. All project goals were achieved at one year or sooner. The overall GAS score for the total health promotion project increased from 16.02 at baseline (sum of scale scores = -30, average scale score = -2) to 54.53 at one year (sum of scale scores = +4, average scale score = +0.27), showing that project goals were achieved above the expected level. With GAS methodology an amalgamated score of 50 represents the achievement of goals at the expected level. GAS provides a "participatory", flexible evaluation approach that involves community members, research partners and other stakeholders in the evaluation process. GAS was found to be "user-friendly" and readily understandable by seniors and other community partners not familiar with program evaluation.
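
    Aggregated GAS scores of this kind are conventionally computed with the Kiresuk-Sherman T-score transformation. The sketch below assumes equally weighted scales and the conventional inter-scale correlation ρ = 0.3; with 15 scales these assumptions reproduce the reported values of 16.02 and 54.53, but the study's actual number of goals and weights are not stated here.

        import math

        def gas_t_score(attainment, weights=None, rho=0.3):
            """Kiresuk-Sherman Goal Attainment Scaling T-score.
            attainment: per-goal scores on the -2..+2 scale.
            weights: per-goal importance weights (equal weights assumed if omitted).
            rho: assumed correlation between goal scales (0.3 by convention)."""
            if weights is None:
                weights = [1.0] * len(attainment)
            wx = sum(w * x for w, x in zip(weights, attainment))
            denom = math.sqrt((1 - rho) * sum(w * w for w in weights)
                              + rho * sum(weights) ** 2)
            return 50.0 + 10.0 * wx / denom

        # 15 equally weighted goals: all -2 at baseline, summing to +4 at one year.
        baseline = [-2] * 15
        one_year = [0] * 11 + [1] * 4
        print(round(gas_t_score(baseline), 2))   # ~16.03
        print(round(gas_t_score(one_year), 2))   # ~54.53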

  7. Microwave Remote Sensing and the Cold Land Processes Field Experiment

    NASA Technical Reports Server (NTRS)

    Kim, Edward J.; Cline, Don; Davis, Bert; Hildebrand, Peter H. (Technical Monitor)

    2001-01-01

    The Cold Land Processes Field Experiment (CLPX) has been designed to advance our understanding of the terrestrial cryosphere. Developing a more complete understanding of fluxes, storage, and transformations of water and energy in cold land areas is a critical focus of the NASA Earth Science Enterprise Research Strategy, the NASA Global Water and Energy Cycle (GWEC) Initiative, the Global Energy and Water Cycle Experiment (GEWEX), and the GEWEX Americas Prediction Project (GAPP). The movement of water and energy through cold regions in turn plays a large role in ecological activity and biogeochemical cycles. Quantitative understanding of cold land processes over large areas will require synergistic advancements in 1) understanding how cold land processes, most comprehensively understood at local or hillslope scales, extend to larger scales, 2) improved representation of cold land processes in coupled and uncoupled land-surface models, and 3) a breakthrough in large-scale observation of hydrologic properties, including snow characteristics, soil moisture, the extent of frozen soils, and the transition between frozen and thawed soil conditions. The CLPX Plan has been developed through the efforts of over 60 interested scientists who have participated in the NASA Cold Land Processes Working Group (CLPWG). This group is charged with the task of assessing, planning and implementing the required background science, technology, and application infrastructure to support successful land surface hydrology remote sensing space missions. A major product of the experiment will be a comprehensive, legacy data set that will energize many aspects of cold land processes research. The CLPX will focus on developing the quantitative understanding, models, and measurements necessary to extend our local-scale understanding of water fluxes, storage, and transformations to regional and global scales. The experiment will particularly emphasize developing a strong synergism between process-oriented understanding, land surface models and microwave remote sensing. The experimental design is a multi-sensor, multi-scale (1 ha to 160,000 km²) approach to providing the comprehensive data set necessary to address several experiment objectives. A description focusing on the microwave remote sensing components (ground, airborne, and spaceborne) of the experiment will be presented.

  8. Act local, think global: how the Malawi experience of scaling up antiretroviral treatment has informed global policy.

    PubMed

    Harries, Anthony D; Ford, Nathan; Jahn, Andreas; Schouten, Erik J; Libamba, Edwin; Chimbwandira, Frank; Maher, Dermot

    2016-09-06

    The scale-up of antiretroviral therapy (ART) in Malawi was based on a public health approach adapted to its resource-poor setting, with principles and practices borrowed from the successful tuberculosis control framework. From 2004 to 2015, the number of new patients started on ART increased from about 3000 to over 820,000. Despite being a small country, Malawi has made a significant contribution to the 15 million people globally on ART and has also contributed policy and service delivery innovations that have supported international guidelines and scale up in other countries. The first set of global guidelines for scaling up ART released by the World Health Organization (WHO) in 2002 focused on providing clinical guidance. In Malawi, the ART guidelines adopted from the outset a more operational and programmatic approach with recommendations on health systems and services that were needed to deliver HIV treatment to affected populations. Seven years after the start of national scale-up, Malawi launched a new strategy offering all HIV-infected pregnant women lifelong ART regardless of the CD4-cell count, named Option B+. This strategy was subsequently incorporated into a WHO programmatic guide in 2012 and WHO ART guidelines in 2013, and has since then been adopted by the majority of countries worldwide. In conclusion, the Malawi experience of ART scale-up has become a blueprint for a public health response to HIV and has informed international efforts to end the AIDS epidemic by 2030.

  9. Spatial adaptive sampling in multiscale simulation

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.

    2014-07-01

    In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈ 50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.
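
    A schematic sketch of spatial adaptive sampling in this spirit follows, assuming a generic one-dimensional macro-solver grid; fine_scale_model is a hypothetical stand-in for the expensive fine-scale simulation, and the bisection-based error test is a deliberately simple device rather than the authors' actual interpolation scheme.

        import numpy as np

        def fine_scale_model(x):
            """Hypothetical stand-in for the expensive fine-scale simulation."""
            return np.sin(3.0 * x) + 0.1 * x

        def adaptive_samples(a, b, tol=1e-3, n_init=9, max_iter=20):
            """Pick fine-scale sample locations adaptively: an interval is bisected whenever
            linear interpolation of its endpoint values disagrees with the fine-scale result
            at its midpoint by more than tol. Results are cached so each location runs once."""
            cache = {}
            def f(x):
                if x not in cache:
                    cache[x] = fine_scale_model(x)
                return cache[x]
            xs = list(np.linspace(a, b, n_init))
            for _ in range(max_iter):
                new_x = [0.5 * (x0 + x1) for x0, x1 in zip(xs[:-1], xs[1:])
                         if abs(0.5 * (f(x0) + f(x1)) - f(0.5 * (x0 + x1))) > tol]
                if not new_x:
                    break
                xs = sorted(xs + new_x)
            xs = np.array(xs)
            return xs, np.array([f(x) for x in xs]), len(cache)

        xs, ys, n_runs = adaptive_samples(0.0, 10.0)
        grid = np.linspace(0.0, 10.0, 1000)          # macro-solver grid
        field = np.interp(grid, xs, ys)              # interpolated constitutive data
        print(f"fine-scale runs: {n_runs} for {grid.size} macro grid points")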

  10. Training clinicians treating HIV to diagnose cytomegalovirus retinitis

    PubMed Central

    Tun, NiNi; Maningding, Ernest; Heiden, Matthew; Rose-Nussbaumer, Jennifer; Chan, Khin Nyein; Khizniak, Tamara; Yakubenko, Alexandra; Lewallen, Susan; Keenan, Jeremy D; Saranchuk, Peter

    2014-01-01

    Problem: Acquired immunodeficiency syndrome (AIDS)-related cytomegalovirus (CMV) retinitis continues to be a neglected source of blindness in resource-poor settings. The main issue is lack of capacity to diagnose CMV retinitis in the clinical setting where patients receive care and all other opportunistic infections are diagnosed. Approach: We developed and implemented a four-day workshop to train clinicians working in human immunodeficiency virus (HIV) clinics how to perform binocular indirect ophthalmoscopy and diagnose CMV retinitis. Workshops comprised both classroom didactic instruction and direct clinical eye examinations in patients with advanced AIDS. Between 2007 and 2013, 14 workshops were conducted in China, Myanmar and the Russian Federation. Local setting: Workshops were held with local clinicians at HIV clinics supported by nongovernmental organizations, public-sector municipal hospitals and provincial infectious disease referral hospitals. Each setting had limited or no access to locally trained ophthalmologists, and an HIV-infected population with advanced disease. Relevant changes: Clinicians learnt how to do binocular indirect ophthalmoscopy and to diagnose CMV retinitis. One year after the workshop, 32/38 trainees in Myanmar did systematic eye examination for early diagnosis of CMV retinitis as standard care for at-risk patients. In China and the Russian Federation, the success rates were lower, with 10/15 and 3/5 trainees, respectively, providing follow-up data. Lessons learnt: Skills necessary for screening and diagnosis of CMV retinitis can be taught in a four-day task-oriented training workshop. Successful implementation depends on institutional support, ongoing training and technical support. The next challenge is to scale up this approach in other countries. PMID:25552774

  11. A geometric viewpoint on generalized hydrodynamics

    NASA Astrophysics Data System (ADS)

    Doyon, Benjamin; Spohn, Herbert; Yoshimura, Takato

    2018-01-01

    Generalized hydrodynamics (GHD) is a large-scale theory for the dynamics of many-body integrable systems. It consists of an infinite set of conservation laws for quasi-particles traveling with effective ("dressed") velocities that depend on the local state. We show that these equations can be recast into a geometric dynamical problem. They are conservation equations with state-independent quasi-particle velocities, in a space equipped with a family of metrics, parametrized by the quasi-particles' type and speed, that depend on the local state. In the classical hard rod or soliton gas picture, these metrics measure the free length of space as perceived by quasi-particles; in the quantum picture, they weigh space with the density of states available to them. Using this geometric construction, we find a general solution to the initial value problem of GHD, in terms of a set of integral equations where time appears explicitly. These integral equations are solvable by iteration and provide an extremely efficient solution algorithm for GHD.
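
    For reference, the GHD conservation laws referred to above are usually written as a continuity equation for the quasi-particle density per unit rapidity; the schematic form below is the standard one from the GHD literature (not copied from this paper), with ρ_θ the density of quasi-particles of rapidity θ and v^eff the state-dependent dressed velocity.

        \partial_t \rho_\theta(x,t) + \partial_x \left[ v^{\mathrm{eff}}_\theta(x,t)\, \rho_\theta(x,t) \right] = 0,
        \qquad
        v^{\mathrm{eff}}_\theta = \frac{(E'_\theta)^{\mathrm{dr}}}{(p'_\theta)^{\mathrm{dr}}},

    where (·)^dr denotes dressing of the bare energy and momentum derivatives by the local state through the two-body scattering kernel.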

  12. Quantum Groups, Property (T), and Weak Mixing

    NASA Astrophysics Data System (ADS)

    Brannan, Michael; Kerr, David

    2018-06-01

    For second countable discrete quantum groups, and more generally second countable locally compact quantum groups with trivial scaling group, we show that property (T) is equivalent to every weakly mixing unitary representation not having almost invariant vectors. This is a generalization of a theorem of Bekka and Valette from the group setting and was previously established in the case of low dual by Daws, Skalski, and Viselter. Our approach uses spectral techniques and is completely different from those of Bekka-Valette and Daws-Skalski-Viselter. By a separate argument we furthermore extend the result to second countable nonunimodular locally compact quantum groups, which are shown in particular not to have property (T), generalizing a theorem of Fima from the discrete setting. We also obtain quantum group versions of characterizations of property (T) of Kerr and Pichot in terms of the Baire category theory of weak mixing representations and of Connes and Weiss in terms of the prevalence of strongly ergodic actions.

  13. The influence of control parameter estimation on large scale geomorphological interpretation of pointclouds

    NASA Astrophysics Data System (ADS)

    Dorninger, P.; Koma, Z.; Székely, B.

    2012-04-01

    In recent years, laser scanning, also referred to as LiDAR, has proved to be an important tool for topographic data acquisition. Basically, laser scanning acquires a more or less homogeneously distributed point cloud. These points represent all natural objects like terrain and vegetation as well as man-made objects such as buildings, streets, powerlines, or other constructions. Due to the enormous amount of data provided by current scanning systems capturing up to several hundred thousand points per second, the immediate application of such point clouds for large scale interpretation and analysis is often prohibitive due to restrictions of the hardware and software infrastructure. To overcome this, numerous methods for the determination of derived products exist. Commonly, Digital Terrain Models (DTM) or Digital Surface Models (DSM) are derived to represent the topography using a regular grid as the data structure. The obvious advantages are a significant reduction of the amount of data and the introduction of an implicit neighborhood topology enabling the application of efficient post-processing methods. The major disadvantages are the loss of 3D information (i.e. overhangs) as well as the loss of information due to the interpolation approach used. We introduced a segmentation approach enabling the determination of planar structures within a given point cloud. It was originally developed for the purpose of building modeling but has proven to be well suited for large scale geomorphological analysis as well. The result is an assignment of the original points to a set of planes. Each plane is represented by its plane parameters. Additionally, numerous quality and quantity parameters are determined (e.g. aspect, slope, local roughness, etc.). In this contribution, we investigate the influence of the control parameters required for the plane segmentation on the geomorphological interpretation of the derived product. The respective control parameters may be determined either automatically (i.e. estimated from the given data) or manually (i.e. supervised parameter estimation). Additionally, the result might be influenced if data processing is performed locally (i.e. using tiles) or globally. Local processing of the data has the advantages of generally performing faster, having lower hardware requirements, and enabling the determination of more detailed information. By contrast, especially in geomorphological interpretation, global data processing enables determining large scale relations within the dataset analyzed. We investigated the influence of control parameter settings on the geomorphological interpretation on airborne and terrestrial laser scanning data sets of the landslide at Doren (Vorarlberg, Austria), on airborne laser scanning data of the western cordilleras of the central Andes, and on HRSC terrain data of the Mars surface. Topics discussed are the suitability of automated versus manual determination of control parameters, the influence of the definition of the area of interest (local versus global application) as well as computational performance.
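
    A compact illustration of how such control parameters enter a plane-segmentation step, assuming a generic region-growing scheme (normals from local PCA, growth limited by an angle threshold and a point-to-plane distance threshold). This is a simplified stand-in for the authors' segmentation algorithm; the two thresholds play the role of the control parameters whose choice is discussed above.

        import numpy as np
        from scipy.spatial import cKDTree

        def local_normals(pts, k=12):
            """Unit normals from PCA of the k nearest neighbours of every point."""
            tree = cKDTree(pts)
            _, idx = tree.query(pts, k=k)
            normals = np.empty_like(pts)
            for i, nb in enumerate(idx):
                q = pts[nb] - pts[nb].mean(axis=0)
                # Direction of smallest variance of the local neighbourhood = normal.
                normals[i] = np.linalg.svd(q, full_matrices=False)[2][-1]
            return normals, tree

        def grow_planes(pts, angle_deg=10.0, dist_tol=0.05, k=12, min_size=30):
            """Greedy region growing controlled by an angle and a distance threshold."""
            normals, tree = local_normals(pts, k)
            cos_tol = np.cos(np.radians(angle_deg))
            labels = np.full(len(pts), -1)
            current = 0
            for seed in np.argsort(pts[:, 2]):            # arbitrary seed order
                if labels[seed] != -1:
                    continue
                region, queue = [seed], [seed]
                labels[seed] = current
                n0, p0 = normals[seed], pts[seed]
                while queue:
                    _, nb = tree.query(pts[queue.pop()], k=k)
                    for j in nb:
                        if labels[j] != -1:
                            continue
                        coplanar = abs(np.dot(pts[j] - p0, n0)) < dist_tol
                        parallel = abs(np.dot(normals[j], n0)) > cos_tol
                        if coplanar and parallel:
                            labels[j] = current
                            region.append(j)
                            queue.append(j)
                if len(region) < min_size:
                    labels[np.array(region)] = -1         # reject tiny segments
                else:
                    current += 1
            return labels

        # Synthetic example: two tilted planes with noise; varying dist_tol and angle_deg
        # shows how the control parameters change the resulting segmentation.
        rng = np.random.default_rng(1)
        xy = rng.uniform(0, 10, size=(2000, 2))
        z = np.where(xy[:, 0] < 5, 0.1 * xy[:, 1], 2.0 + 0.3 * xy[:, 0])
        pts = np.column_stack([xy, z + 0.01 * rng.standard_normal(len(xy))])
        print("segments found:", grow_planes(pts).max() + 1)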

  14. 48 CFR 26.202-1 - Local area set-aside.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    48 CFR 26.202-1, Local area set-aside (Federal Acquisition Regulations System, 2010-10-01): The contracting officer may set aside solicitations to allow only local firms within a ... and shall also determine whether a local area set-aside should be further restricted to small business ...

  15. Large-scale diversity of slope fishes: pattern inconsistency between multiple diversity indices.

    PubMed

    Gaertner, Jean-Claude; Maiorano, Porzia; Mérigot, Bastien; Colloca, Francesco; Politou, Chrissi-Yianna; Gil De Sola, Luis; Bertrand, Jacques A; Murenu, Matteo; Durbec, Jean-Pierre; Kallianiotis, Argyris; Mannini, Alessandro

    2013-01-01

    Large-scale studies focused on the diversity of continental slope ecosystems are still rare, usually restricted to a limited number of diversity indices and mainly based on the empirical comparison of heterogeneous local data sets. In contrast, we investigate large-scale fish diversity on the basis of multiple diversity indices and using 1454 standardized trawl hauls collected throughout the upper and middle slope of the whole northern Mediterranean Sea (36°3'- 45°7' N; 5°3'W - 28°E). We have analyzed (1) the empirical relationships between a set of 11 diversity indices in order to assess their degree of complementarity/redundancy and (2) the consistency of spatial patterns exhibited by each of the complementary groups of indices. Regarding species richness, our results contrasted with both the traditional view based on the hump-shaped theory for bathymetric pattern and the commonly admitted hypothesis of a large-scale decreasing trend correlated with a similar gradient of primary production in the Mediterranean Sea. More generally, we found that the components of slope fish diversity we analyzed did not always show a consistent pattern of distribution according either to depth or to spatial areas, suggesting that they are not driven by the same factors. These results, which stress the need to extend the number of indices traditionally considered in diversity monitoring networks, could provide a basis for rethinking not only the methodological approach used in monitoring systems, but also the definition of priority zones for protection. Finally, our results call into question the feasibility of properly investigating large-scale diversity patterns using a widespread approach in ecology, which is based on the compilation of pre-existing heterogeneous and disparate data sets, in particular when focusing on indices that are very sensitive to sampling design standardization, such as species richness.

  16. Fragment approach to constrained density functional theory calculations using Daubechies wavelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ratcliff, Laura E.; Genovese, Luigi; Mohr, Stephan

    2015-06-21

    In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions are optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset where the density matrix of the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions, represented in the underlying wavelet grid, of the template fragments are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the interest of this approach to express highly precise and efficient calculations for preparing diabatic states and for the computational setup of systems in complex environments.

  17. Comment on "Worldwide evidence of a unimodal relationship between productivity and plant species richness".

    PubMed

    Tredennick, Andrew T; Adler, Peter B; Grace, James B; Harpole, W Stanley; Borer, Elizabeth T; Seabloom, Eric W; Anderson, T Michael; Bakker, Jonathan D; Biederman, Lori A; Brown, Cynthia S; Buckley, Yvonne M; Chu, Chengjin; Collins, Scott L; Crawley, Michael J; Fay, Philip A; Firn, Jennifer; Gruner, Daniel S; Hagenah, Nicole; Hautier, Yann; Hector, Andy; Hillebrand, Helmut; Kirkman, Kevin; Knops, Johannes M H; Laungani, Ramesh; Lind, Eric M; MacDougall, Andrew S; McCulley, Rebecca L; Mitchell, Charles E; Moore, Joslin L; Morgan, John W; Orrock, John L; Peri, Pablo L; Prober, Suzanne M; Risch, Anita C; Schütz, Martin; Speziale, Karina L; Standish, Rachel J; Sullivan, Lauren L; Wardle, Glenda M; Williams, Ryan J; Yang, Louie H

    2016-01-29

    Fraser et al. (Reports, 17 July 2015, p. 302) report a unimodal relationship between productivity and species richness at regional and global scales, which they contrast with the results of Adler et al. (Reports, 23 September 2011, p. 1750). However, both data sets, when analyzed correctly, show clearly and consistently that productivity is a poor predictor of local species richness. Copyright © 2016, American Association for the Advancement of Science.

  18. Linking morphodynamic response with sediment mass balance on the Colorado River in Marble Canyon: issues of scale, geomorphic setting, and sampling design

    USGS Publications Warehouse

    Grams, Paul E.; Topping, David J.; Schmidt, John C.; Hazel, Joseph E.; Kaplinski, Matt

    2013-01-01

    Measurements of morphologic change are often used to infer sediment mass balance. Such measurements may, however, result in gross errors when morphologic changes over short reaches are extrapolated to predict changes in sediment mass balance for long river segments. This issue is investigated by examination of morphologic change and sediment influx and efflux for a 100 km segment of the Colorado River in Grand Canyon, Arizona. For each of four monitoring intervals within a 7 year study period, the direction of sand-storage response within short morphologic monitoring reaches was consistent with the flux-based sand mass balance. Both budgeting methods indicate that sand storage was stable or increased during the 7 year period. Extrapolation of the morphologic measurements outside the monitoring reaches does not, however, provide a reasonable estimate of the magnitude of sand-storage change for the 100 km study area. Extrapolation results in large errors, because there is large local variation in site behavior driven by interactions between the flow and local bed topography. During the same flow regime and reach-average sediment supply, some locations accumulate sand while others evacuate sand. The interaction of local hydraulics with local channel geometry exerts more control on local morphodynamic response than sand supply over an encompassing river segment. Changes in the upstream supply of sand modify bed responses but typically do not completely offset the effect of local hydraulics. Thus, accurate sediment budgets for long river segments inferred from reach-scale morphologic measurements must incorporate the effect of local hydraulics in a sampling design or avoid extrapolation altogether.

  19. Diffusion and scaling during early embryonic pattern formation.

    PubMed

    Gregor, Thomas; Bialek, William; de Ruyter van Steveninck, Rob R; Tank, David W; Wieschaus, Eric F

    2005-12-20

    Development of spatial patterns in multicellular organisms depends on gradients in the concentration of signaling molecules that control gene expression. In the Drosophila embryo, Bicoid (Bcd) morphogen controls cell fate along 70% of the anteroposterior axis but is translated from mRNA localized at the anterior pole. Gradients of Bcd and other morphogens are thought to arise through diffusion, but this basic assumption has never been rigorously tested in living embryos. Furthermore, because diffusion sets a relationship between length and time scales, it is hard to see how patterns of gene expression established by diffusion would scale proportionately as egg size changes during evolution. Here, we show that the motion of inert molecules through the embryo is well described by the diffusion equation on the relevant length and time scales, and that effective diffusion constants are essentially the same in closely related dipteran species with embryos of very different size. Nonetheless, patterns of gene expression in these different species scale with egg length. We show that this scaling can be traced back to scaling of the Bcd gradient itself. Our results, together with constraints imposed by the time scales of development, suggest that the mechanism for scaling is a species-specific adaptation of the Bcd lifetime.
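
    The constraint mentioned above can be made explicit with the standard synthesis-diffusion-degradation picture of the Bcd gradient (a textbook relation, not a result reproduced from this paper): a morphogen produced at the anterior pole, diffusing with constant D and degraded with lifetime τ, forms an exponential profile whose length constant couples the two scales,

        C(x) \propto e^{-x/\lambda}, \qquad \lambda = \sqrt{D\,\tau}.

    If D is essentially species-independent, as the measurements above indicate, then scaling of the gradient with egg length L (λ ∝ L) requires a species-specific lifetime τ ∝ L²/D, which is the adaptation the authors propose.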

  20. DichotomY IdentitY: Euler-Bernoulli Numbers, Sets-Multisets, FD-BE Quantum-Statistics, 1/f^0 - 1/f^1 Power-Spectra, Ellipse-Hyperbola Conic-Sections, Local-Global Extent: ``Category-Semantics''

    NASA Astrophysics Data System (ADS)

    Rota, G.-C.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Seminal Apostol[Math.Mag.81,3,178(08);Am.Math.Month.115,9,795(08)]-Rota[Intro.Prob. Thy.(95)-p.50-55] DichotomY equivalence-class: set-theory: sets V multisets; closed V open; to Abromowitz-Stegun[Hdbk.Math.Fns.(64)]-ch.23,p.803!]: numbers/polynomials generating-functions: Euler V Bernoulli; to Siegel[Schrodinger Cent.Symp.(87); Symp.Fractals, MRS Fall Mtg.,(1989)-5-papers!] power-spectrum: 1/ f {0}-White V 1/ f {1}-Zipf/Pink (Archimedes) HYPERBOLICITY INEVITABILITY; to analytic-geometry Conic-Sections: Ellipse V (via Parabola) V Hyperbola; to Extent/Scale/Radius: Locality V Globality, Root-Causes/Ultimate-Origins: Dimensionality: odd-Z V (via fractal) V even-Z, to Symmetries/(Noether's-theorem connected)/Conservation-Laws Dichotomy: restored/conservation/convergence=0- V broken/non-conservation/divergence=/=0: with asymptotic-limit antipodes morphisms/ crossovers: Eureka!!!; "FUZZYICS"=''CATEGORYICS''!!! Connection to Kummer(1850) Bernoulli-numbers proof of FLT is via Siegel(CCNY;1964) < (1994)[AMS Joint Mtg. (2002)-Abs.973-60-124] short succinct physics proof: FLT = Least-Action Principle!!!

  1. From local to national scale DInSAR analysis for the comprehension of Earth's surface dynamics.

    NASA Astrophysics Data System (ADS)

    De Luca, Claudio; Casu, Francesco; Manunta, Michele; Zinno, Ivana; Lanari, Riccardo

    2017-04-01

    Earth Observation techniques can be very helpful for estimating several sources of ground deformation thanks to their large spatial coverage, high resolution and cost effectiveness. In this scenario, Differential Synthetic Aperture Radar Interferometry (DInSAR) is one of the most effective methodologies because of its capability to generate spatially dense deformation maps with centimeter- to millimeter-level accuracy. DInSAR exploits the phase difference (interferogram) between SAR image pairs acquired at different times, but with the same illumination geometry and from sufficiently close flight tracks, whose separation is typically referred to as the baseline. The SBAS algorithm is one of the most widely used DInSAR approaches; it generates displacement time series at multiple scales by exploiting a set of small-baseline interferograms. SBAS, and DInSAR in general, has benefited from the large archives of spaceborne SAR data collected over the years by several satellite systems, in particular the European ERS and ENVISAT sensors, which acquired SAR images worldwide for approximately 20 years. While the application of SBAS to ERS and ENVISAT data at the local scale is widely documented, very few examples of using those archives for analyses at very large spatial scales are available in the literature. This is mainly due to the required processing power (in terms of CPUs, memory and storage) and the limited availability of automatic (unsupervised) processing procedures, which are mandatory requirements for obtaining displacement results in a time-effective way. Accordingly, in this work we present a methodology for generating the Vertical and Horizontal (East-West) components of Earth's surface deformation at very large (national/continental) spatial scales. In particular, it relies on the availability of a set of SAR data collected over an Area of Interest (AoI), which can be some hundreds of thousands of square kilometers wide, from both ascending and descending orbits. The SAR data are processed, on a local basis, through the Parallel SBAS (P-SBAS) approach, thus generating the displacement time series and the corresponding mean deformation velocity maps. Subsequently, starting from the DInSAR results so generated, the proposed methodology relies on a mosaicking procedure to retrieve the mean velocity maps of the Vertical and Horizontal (East-West) deformation components for the overall AoI. This technique makes it possible to account for possible regional (tectonic) trends that are not easily detectable by local-scale DInSAR analyses. We tested the proposed methodology with the ENVISAT ASAR archives acquired, from ascending and descending orbits, over California (US), covering an area of about 100,000 km². The presented methodology can easily be applied to other SAR satellite data. Above all, it is particularly suitable for dealing with the very large data flow provided by the Sentinel-1 constellation, which collects data with a global coverage policy and an acquisition mode specifically designed for interferometric applications.
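
    A minimal sketch of the final combination step described above: given co-located ascending and descending line-of-sight (LOS) mean velocities, the vertical and east-west components are recovered by solving a small linear system per pixel. The per-track projection coefficients (east and up components of the LOS unit vector, which follow from the incidence and heading angles of each geometry) are passed in as assumed inputs, and the north-south component is neglected, as is common; this is a generic illustration rather than the authors' exact mosaicking procedure.

        import numpy as np

        def decompose_los(v_asc, v_desc, los_asc, los_desc):
            """Recover east-west and vertical velocity fields from ascending/descending
            LOS velocities. los_* = (e, u): east and up components of the LOS unit
            vector for that track (assumed known from incidence and heading angles);
            the north component is neglected."""
            A = np.array([[los_asc[0], los_asc[1]],
                          [los_desc[0], los_desc[1]]])           # 2x2 geometry matrix
            rhs = np.stack([np.ravel(v_asc), np.ravel(v_desc)])  # 2 x Npixels
            v_east, v_up = np.linalg.solve(A, rhs)
            return v_east.reshape(v_asc.shape), v_up.reshape(v_asc.shape)

        # Illustrative values only: a ~23 deg incidence angle and near-polar headings give
        # LOS unit vectors with opposite-signed east components on the two tracks.
        los_asc = (-0.39, 0.92)             # (east, up), assumed
        los_desc = (0.39, 0.92)
        v_asc = np.full((100, 100), -2.0)   # mm/yr, synthetic
        v_desc = np.full((100, 100), -3.5)
        v_east, v_up = decompose_los(v_asc, v_desc, los_asc, los_desc)
        print(v_east[0, 0], v_up[0, 0])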

  2. Developing Urban Environment Indicators for Neighborhood Sustainability Assessment in Tripoli-Libya

    NASA Astrophysics Data System (ADS)

    Elgadi, Ahmed. A.; Hakim Ismail, Lokman; Abass, Fatma; Ali, Abdelmuniem

    2016-11-01

    Sustainability assessment frameworks are becoming increasingly important in supporting the transition towards a sustainable urban environment. The urban environment is a system that requires regular monitoring and evaluation through a set of relevant indicators. Indicators provide information about the state of the environment by producing quantitative values. Sustainability assessment therefore needs to be carried out at all spatial scales in order to provide meaningful information about urban environmental sustainability in Tripoli, Libya. Detailed data are necessary to assess environmental change in the urban environment at the local scale and to ease the transfer of this information to the national and global levels. This paper proposes a set of key indicators to monitor the environmental sustainability of Libyan residential neighborhoods. The proposed environmental indicator framework measures the sustainability performance of an urban environment through 13 sub-categories comprising 21 indicators. The paper also explains the theoretical foundations for the selection of each indicator with reference to previous studies.

  3. Spatial fingerprints of community structure in human interaction network for an extensive set of large-scale regions.

    PubMed

    Kallus, Zsófia; Barankai, Norbert; Szüle, János; Vattay, Gábor

    2015-01-01

    Human interaction networks inferred from country-wide telephone activity recordings were recently used to redraw political maps by projecting their topological partitions into geographical space. The results showed remarkable spatial cohesiveness of the network communities and a significant overlap between the redrawn and the administrative borders. Here we present a similar analysis based on one of the most popular online social networks represented by the ties between more than 5.8 million of its geo-located users. The worldwide coverage of their measured activity allowed us to analyze the large-scale regional subgraphs of entire continents and an extensive set of examples for single countries. We present results for North and South America, Europe and Asia. In our analysis we used the well-established method of modularity clustering after an aggregation of the individual links into a weighted graph connecting equal-area geographical pixels. Our results show fingerprints of both of the opposing forces of dividing local conflicts and of uniting cross-cultural trends of globalization.
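
    A small sketch of the clustering step described above, assuming the individual ties have already been aggregated into a weighted graph whose nodes are equal-area geographical pixels; it uses NetworkX's greedy modularity maximization as a readily available stand-in for the modularity-clustering method the authors apply, and the pixel identifiers and weights are purely illustrative.

        import networkx as nx
        from networkx.algorithms import community

        # Aggregated graph: nodes are geographic pixels, edge weights count ties between them.
        G = nx.Graph()
        edges = [("px_00", "px_01", 120), ("px_00", "px_02", 95), ("px_01", "px_02", 80),
                 ("px_10", "px_11", 150), ("px_10", "px_12", 60), ("px_11", "px_12", 110),
                 ("px_02", "px_10", 5)]                  # weak tie bridging two regions
        G.add_weighted_edges_from(edges)

        # Partition the pixel graph by (greedy) modularity maximization.
        parts = community.greedy_modularity_communities(G, weight="weight")
        for i, part in enumerate(parts):
            print(f"community {i}: {sorted(part)}")

        # Each community is a set of pixels; projected back onto the map they give the
        # spatially cohesive regions whose borders are compared with administrative ones.
        print("modularity:", community.modularity(G, parts, weight="weight"))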

  4. The calculations of small molecular conformation energy differences by density functional method

    NASA Astrophysics Data System (ADS)

    Topol, I. A.; Burt, S. K.

    1993-03-01

    The differences in the conformational energies for the gauche (G) and trans (T) conformers of 1,2-difluoroethane and for the myo- and scyllo-conformers of inositol have been calculated by the local density functional method (LDF approximation) with geometry optimization, using different sets of calculation parameters. It is shown that, in contrast to Hartree-Fock methods, density functional calculations reproduce the correct sign and value of the gauche effect for 1,2-difluoroethane and the energy difference between the two conformers of inositol. The results of a normal vibrational analysis for 1,2-difluoroethane showed that harmonic frequencies calculated in the LDF approximation agree with experimental data with an accuracy typical of scaled large-basis-set Hartree-Fock calculations.

  5. Think globally, act locally: the role of local demographics and vaccination coverage in the dynamic response of measles infection to control.

    PubMed

    Ferrari, M J; Grenfell, B T; Strebel, P M

    2013-08-05

    The global reduction of the burden of morbidity and mortality owing to measles has been a major triumph of public health. However, the continued persistence of measles infection probably not only reflects local variation in progress towards vaccination target goals, but may also reflect local variation in dynamic processes of transmission, susceptible replenishment through births and stochastic local extinction. Dynamic models predict that vaccination should increase the mean age of infection and increase inter-annual variability in incidence. Through a comparative approach, we assess national-level patterns in the mean age of infection and measles persistence. We find that while the classic predictions do hold in general, the impact of vaccination on the age distribution of cases and stochastic fadeout are mediated by local birth rate. Thus, broad-scale vaccine coverage goals are unlikely to have the same impact on the interruption of measles transmission in all demographic settings. Indeed, these results suggest that the achievement of further measles reduction or elimination goals is likely to require programmatic and vaccine coverage goals that are tailored to local demographic conditions.
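
    The classic prediction invoked above can be stated compactly with the standard endemic-SIR approximation (a textbook relation in the spirit of Anderson and May, included here for orientation rather than taken from this paper): with life expectancy L, basic reproduction number R0 and vaccination coverage p, the mean age at infection is approximately

        A \approx \frac{L}{R_0\,(1-p) - 1}, \qquad R_0\,(1-p) > 1,

    so raising p pushes A upward; and because susceptible replenishment depends on the local birth rate, the same coverage can sit at very different distances from the elimination threshold in different demographic settings, consistent with the comparative findings above.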

  6. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
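
    A compact sketch of the sampling scheme as described above, assuming a grayscale image given as a 2-D array: each measurement first picks a random center pixel and then includes nearby pixels with a probability that decays with distance from that center. A Gaussian fall-off is assumed here for illustration; the paper's exact distance kernel and reconstruction solver are not reproduced.

        import numpy as np

        def localized_random_measurements(image, n_measurements, radius=4.0, rng=None):
            """Build a compressive-sensing measurement matrix in which each row samples a
            random center pixel plus nearby pixels chosen with distance-dependent probability."""
            rng = np.random.default_rng(rng)
            h, w = image.shape
            yy, xx = np.mgrid[0:h, 0:w]
            Phi = np.zeros((n_measurements, h * w))
            for m in range(n_measurements):
                cy, cx = rng.integers(0, h), rng.integers(0, w)
                dist2 = (yy - cy) ** 2 + (xx - cx) ** 2
                p_include = np.exp(-dist2 / (2.0 * radius ** 2))    # decays with distance
                mask = rng.random((h, w)) < p_include
                mask[cy, cx] = True                                  # center always measured
                Phi[m] = mask.ravel().astype(float)
            y = Phi @ image.ravel()                                  # the measurements
            return Phi, y

        # Illustrative use on a synthetic image.
        img = np.random.default_rng(0).random((32, 32))
        Phi, y = localized_random_measurements(img, n_measurements=200, radius=3.0, rng=1)
        print(Phi.shape, y.shape)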

  7. The impact of covariance localization on the performance of an ocean EnKF system assimilating glider data in the Ligurian Sea

    NASA Astrophysics Data System (ADS)

    Falchetti, Silvia; Alvarez, Alberto

    2018-04-01

    Data assimilation through an ensemble Kalman filter (EnKF) is not exempt from deficiencies, including the generation of long-range unphysical correlations that degrade its performance. The covariance localization technique has been proposed and used in previous research to mitigate this effect. However, an evaluation of its performance is usually hindered by the sparseness and unsustained collection of independent observations. This article assesses the performance of an ocean prediction system composed of a multivariate EnKF coupled with a regional configuration of the Regional Ocean Model System (ROMS) with a covariance localization solution and data assimilation from an ocean glider that operated over a limited region of the Ligurian Sea. Simultaneously with the operation of the forecast system, a high-quality validation data set was repeatedly collected with a CTD sensor on board the NR/V Alliance, every day during the period from 5 to 20 August 2013 (approximately 4 to 5 times the synoptic time scale of the area). Comparisons between the validation data set and the forecasts provide evidence that the performance of the prediction system with covariance localization is superior to that observed using only EnKF assimilation without localization or using a free run ensemble. Furthermore, it is shown that covariance localization also increases the robustness of the model to the location of the assimilated data. Our analysis reveals that improvements are detected with regard to not only preventing the occurrence of spurious correlations but also preserving the spatial coherence in the updated covariance matrix. Covariance localization has been shown to be relevant in operational frameworks where short-term forecasts (on the order of days) are required.
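
    For concreteness, covariance localization is typically applied as an element-wise (Schur) product of the ensemble covariance with a distance-dependent taper. The sketch below uses the widely used Gaspari-Cohn fifth-order compactly supported function, a common choice in EnKF systems but not necessarily the exact taper used in this study; the grid, ensemble and half-width are synthetic.

        import numpy as np

        def gaspari_cohn(dist, c):
            """Gaspari-Cohn (1999) compactly supported correlation function.
            dist: separation distance(s); c: localization half-width (taper is 0 beyond 2c)."""
            r = np.abs(np.asarray(dist, dtype=float)) / c
            taper = np.zeros_like(r)
            inner = r <= 1.0
            outer = (r > 1.0) & (r < 2.0)
            ri, ro = r[inner], r[outer]
            taper[inner] = (-0.25 * ri**5 + 0.5 * ri**4 + 0.625 * ri**3
                            - (5.0 / 3.0) * ri**2 + 1.0)
            taper[outer] = ((1.0 / 12.0) * ro**5 - 0.5 * ro**4 + 0.625 * ro**3
                            + (5.0 / 3.0) * ro**2 - 5.0 * ro + 4.0 - (2.0 / 3.0) / ro)
            return taper

        # Localize an ensemble-estimated covariance between model grid points.
        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 500.0, 60)                 # grid positions, km
        ens = rng.standard_normal((20, x.size))          # 20-member ensemble (synthetic)
        P = np.cov(ens, rowvar=False)                    # raw sample covariance
        L = gaspari_cohn(np.abs(x[:, None] - x[None, :]), c=50.0)
        P_loc = P * L                                    # Schur product damps long-range noise
        print(P_loc.shape)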

  8. Macroecological patterns of phytoplankton in the northwestern North Atlantic Ocean.

    PubMed

    Li, W K W

    2002-09-12

    Many issues in biological oceanography are regional or global in scope; however, there are not many data sets of extensive areal coverage for marine plankton. In microbial ecology, a fruitful approach to large-scale questions is comparative analysis wherein statistical data patterns are sought from different ecosystems, frequently assembled from unrelated studies. A more recent approach termed macroecology characterizes phenomena emerging from large numbers of biological units by emphasizing the shapes and boundaries of statistical distributions, because these reflect the constraints on variation. Here, I use a set of flow cytometric measurements to provide macroecological perspectives on North Atlantic phytoplankton communities. Distinct trends of abundance in picophytoplankton and both small and large nanophytoplankton underlaid two patterns. First, total abundance of the three groups was related to assemblage mean-cell size according to the 3/4 power law of allometric scaling in biology. Second, cytometric diversity (an ataxonomic measure of assemblage entropy) was maximal at intermediate levels of water column stratification. Here, intermediate disturbance shapes diversity through an equitable distribution of cells in size classes, from which arises a high overall biomass. By subsuming local fluctuations, macroecology reveals meaningful patterns of phytoplankton at large scales.

  9. Thermalization and light cones in a model with weak integrability breaking

    DOE PAGES

    Bertini, Bruno; Essler, Fabian H. L.; Groha, Stefan; ...

    2016-12-09

    Here, we employ equation-of-motion techniques to study the nonequilibrium dynamics in a lattice model of weakly interacting spinless fermions. Our model provides a simple setting for analyzing the effects of weak integrability-breaking perturbations on the time evolution after a quantum quench. We establish the accuracy of the method by comparing results at short and intermediate times to time-dependent density matrix renormalization group computations. For sufficiently weak integrability-breaking interactions we always observe prethermalization plateaus, where local observables relax to nonthermal values at intermediate time scales. At later times a crossover towards thermal behavior sets in. We determine the associated time scale, which depends on the initial state, the band structure of the noninteracting theory, and the strength of the integrability-breaking perturbation. Our method allows us to analyze in some detail the spreading of correlations and in particular the structure of the associated light cones in our model. We find that the interior and exterior of the light cone are separated by an intermediate region, the temporal width of which appears to scale with a universal power law t^{1/3}.

  10. An Integrated Ransac and Graph Based Mismatch Elimination Approach for Wide-Baseline Image Matching

    NASA Astrophysics Data System (ADS)

    Hasheminasab, M.; Ebadi, H.; Sedaghat, A.

    2015-12-01

    In this paper we propose an integrated approach to increase the precision of feature point matching. Many different algorithms have been developed to optimize short-baseline image matching, whereas wide-baseline image matching remains difficult to handle because of illumination differences and viewpoint changes. Fortunately, recent developments in the automatic extraction of local invariant features make wide-baseline image matching possible. Matching algorithms based on the local feature similarity principle use a feature descriptor to establish correspondences between feature point sets. To date, the most remarkable descriptor is the scale-invariant feature transform (SIFT) descriptor, which is invariant to image rotation and scale and remains robust across a substantial range of affine distortion, presence of noise, and changes in illumination. The epipolar constraint estimated with RANSAC (random sample consensus) is a conventional model for mismatch elimination, particularly in computer vision. Because only the distance from the epipolar line is considered, a few false matches remain in the matching results selected on the basis of epipolar geometry and RANSAC. Aguilariu et al. proposed the Graph Transformation Matching (GTM) algorithm to remove outliers, which has difficulties when the mismatched points are surrounded by the same local neighbor structure. In this study, to overcome the limitations mentioned above, a new three-step matching scheme is presented in which the SIFT algorithm is used to obtain initial corresponding point sets. In the second step, the RANSAC algorithm is applied to reduce the outliers. Finally, to remove the remaining mismatches, GTM based on the adjacent K-NN graph is implemented. Four different close-range image datasets with changes in viewpoint are used to evaluate the performance of the proposed method, and the experimental results indicate its robustness and capability.
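
    A minimal sketch of the first two steps of such a pipeline (SIFT correspondences followed by RANSAC-based epipolar filtering) using OpenCV; the graph-based GTM refinement stage is not shown, and the image file names are placeholders.

        import cv2
        import numpy as np

        # Placeholder file names for a wide-baseline image pair.
        img1 = cv2.imread("view_left.jpg", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("view_right.jpg", cv2.IMREAD_GRAYSCALE)

        # Step 1: SIFT keypoints/descriptors and putative matches via Lowe's ratio test.
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        putative = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                    if m.distance < 0.75 * n.distance]

        # Step 2: enforce the epipolar constraint with RANSAC (fundamental matrix).
        pts1 = np.float32([kp1[m.queryIdx].pt for m in putative])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in putative])
        F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                                ransacReprojThreshold=1.0, confidence=0.99)
        inliers = [m for m, keep in zip(putative, inlier_mask.ravel()) if keep]
        print(f"putative: {len(putative)}, epipolar inliers: {len(inliers)}")
        # A graph-based step (e.g. GTM on a K-NN graph of the inliers) would follow here.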

  11. Validating the Center for Epidemiological Studies Depression Scale for Children in Rwanda

    PubMed Central

    Betancourt, Theresa; Scorza, Pamela; Meyers-Ohki, Sarah; Mushashi, Christina; Kayiteshonga, Yvonne; Binagwaho, Agnes; Stulac, Sara; Beardslee, William R.

    2017-01-01

    Objective We assessed the validity of the Center for Epidemiological Studies Depression Scale for Children (CES-DC) as a screen for depression in Rwandan children and adolescents. Although the CES-DC is widely used for depression screening in high-income countries, its validity in low-income and culturally diverse settings, including sub-Saharan Africa, is unknown. Method The CES-DC was selected based on alignment with local expressions of depression-like problems in Rwandan children and adolescents. To examine criterion validity, we compared CES-DC scores to depression diagnoses on a structured diagnostic interview, the Mini International Neuropsychiatric Interview for Children (MINI KID), in a sample of 367 Rwandan children and adolescents aged 10 through 17 years. Caregiver and child or adolescent self-reports endorsing the presence of local depression-like problems agahinda kenshi (persistent sorrow) and kwiheba (severe hopelessness) were also examined for agreement with MINI KID diagnosis. Results The CES-DC exhibited good internal reliability (α = .86) and test-retest reliability (r = .85). The area under the receiver operating characteristic curve for the CES-DC was 0.825 when compared to MINI KID diagnoses, indicating a strong ability to distinguish between depressed and nondepressed children and adolescents in Rwanda. A cut point of ≥ 30 corresponded with a sensitivity of 81.9% and a specificity of 71.9% in this referred sample. MINI KID diagnosis was well aligned with local expressions of depression-like problems. Conclusion The CES-DC demonstrates good psychometric properties for clinical screening and evaluation in Rwanda, and should be considered for use in this and other low-resource settings. Population samples are needed to determine a generalizable cut point in nonreferred samples. PMID:23200285

  12. Generation and evolution of anisotropic turbulence and related energy transfer in drifting proton-alpha plasmas

    NASA Astrophysics Data System (ADS)

    Maneva, Y. G.; Poedts, S.

    2018-05-01

    The power spectra of magnetic field fluctuations in the solar wind typically follow a power-law dependence with respect to the observed frequencies and wave-numbers. The background magnetic field often influences the plasma properties, setting a preferential direction for plasma heating and acceleration. At the same time the evolution of the solar-wind turbulence at the ion and electron scales is influenced by the plasma properties through local micro-instabilities and wave-particle interactions. The solar-wind-plasma temperature and the solar-wind turbulence at sub- and sup-ion scales simultaneously show anisotropic features, with different components and fluctuation power in parallel with and perpendicular to the orientation of the background magnetic field. The ratio between the power of the magnetic field fluctuations in parallel and perpendicular direction at the ion scales may vary with the heliospheric distance and depends on various parameters, including the local wave properties and nonthermal plasma features, such as temperature anisotropies and relative drift speeds. In this work we have performed two-and-a-half-dimensional hybrid simulations to study the generation and evolution of anisotropic turbulence in a drifting multi-ion species plasma. We investigate the evolution of the turbulent spectral slopes along and across the background magnetic field for the cases of initially isotropic and anisotropic turbulence. Finally, we show the effect of the various turbulent spectra for the local ion heating in the solar wind.

  13. Variation in carbon isotope discrimination in Cleistogenes squarrosa (Trin.) Keng: patterns and drivers at tiller, local, catchment, and regional scales

    PubMed Central

    Yang, Hao; Auerswald, Karl; Bai, Yongfei; Wittmer, Maximilian H. O. M.; Schnyder, Hans

    2011-01-01

    Understanding the patterns and drivers of carbon isotope discrimination in C4 species is critical for predicting the effects of global change on C3/C4 ratio of plant community and consequently on ecosystem functioning and services. Cleistogenes squarrosa (Trin.) Keng is a dominant C4 perennial bunchgrass of arid and semi-arid ecosystems across the Mongolian plateau of the Eurasian steppe. Its carbon isotope discrimination (13Δ) during photosynthesis is relatively large among C4 species and it is variable. Here the 13Δ of C. squarrosa and its potential drivers at a nested set of scales were examined. Within cohorts of tillers, 13Δ of leaves increased from 5.1‰ to 8.1‰ from old to young leaves. At the local scale, 13Δ of mature leaves varied from 5.8‰ to 8.4‰, increasing with decreasing grazing intensity. At the catchment scale, 13Δ of mature leaves varied from 6.2‰ to 8.5‰ and increased with topsoil silt content. At the regional scale, 13Δ of mature leaves varied from 5.5‰ to 8.9‰, increasing with growing-season precipitation. At all scales, 13Δ decreased with increasing leaf nitrogen content (Nleaf). Nleaf was positively correlated with grazing intensity and leaf position along tillers, but negatively correlated with precipitation. The presence of the correlations across a range of different environmental contexts strongly implicates Nleaf as a major driver of 13Δ in C. squarrosa and, possibly, other C4 species. PMID:21527626

  14. 48 CFR 26.202-1 - Local area set-aside.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    48 CFR 26.202-1, Local area set-aside (Federal Acquisition Regulations System, 2011-10-01; Other Socioeconomic Programs, Disaster or Emergency Assistance Activities): The contracting officer may set aside solicitations to allow only local firms within a ...

  15. Anyonic self-induced disorder in a stabilizer code: Quasi many-body localization in a translational invariant model

    NASA Astrophysics Data System (ADS)

    Yarloo, H.; Langari, A.; Vaezi, A.

    2018-02-01

    We enquire into the quasi many-body localization in topologically ordered states of matter, revolving around the case of Kitaev toric code on the ladder geometry, where different types of anyonic defects carry different masses induced by environmental errors. Our study verifies that the presence of anyons generates a complex energy landscape solely through braiding statistics, which suffices to suppress the diffusion of defects in such clean, multicomponent anyonic liquid. This nonergodic dynamics suggests a promising scenario for investigation of quasi many-body localization. Computing standard diagnostics evidences that a typical initial inhomogeneity of anyons gives birth to a glassy dynamics with an exponentially diverging time scale of the full relaxation. Our results unveil how self-generated disorder ameliorates the vulnerability of topological order away from equilibrium. This setting provides a new platform which paves the way toward impeding logical errors by self-localization of anyons in a generic, high energy state, originated exclusively in their exotic statistics.

  16. An efficient and near linear scaling pair natural orbital based local coupled cluster method.

    PubMed

    Riplinger, Christoph; Neese, Frank

    2013-01-21

    In previous publications, it was shown that an efficient local coupled cluster method with single and double excitations can be based on the concept of pair natural orbitals (PNOs) [F. Neese, A. Hansen, and D. G. Liakos, J. Chem. Phys. 131, 064103 (2009)]. The resulting local pair natural orbital-coupled-cluster single double (LPNO-CCSD) method has since been proven to be highly reliable and efficient. For large molecules, the number of amplitudes to be determined is reduced by a factor of 10^5-10^6 relative to a canonical CCSD calculation on the same system with the same basis set. In the original method, the PNOs were expanded in the set of canonical virtual orbitals and single excitations were not truncated. This led to a number of fifth order scaling steps that eventually rendered the method computationally expensive for large molecules (e.g., >100 atoms). In the present work, these limitations are overcome by a complete redesign of the LPNO-CCSD method. The new method is based on the combination of the concepts of PNOs and projected atomic orbitals (PAOs). Thus, each PNO is expanded in a set of PAOs that in turn belong to a given electron pair specific domain. In this way, it is possible to fully exploit locality while maintaining the extremely high compactness of the original LPNO-CCSD wavefunction. No terms are dropped from the CCSD equations and domains are chosen conservatively. The correlation energy loss due to the domains remains below 0.05%, which typically implies 15-20, but occasionally up to 30, atoms per domain on average. The new method has been given the acronym DLPNO-CCSD ("domain based LPNO-CCSD"). The method is nearly linear scaling with respect to system size. The original LPNO-CCSD method had three adjustable truncation thresholds that were chosen conservatively and do not need to be changed for actual applications. In the present treatment, no additional truncation parameters have been introduced. Any additional truncation is performed on the basis of the three original thresholds. There are no real-space cutoffs. Single excitations are truncated using singles-specific natural orbitals. Pairs are prescreened according to a multipole expansion of a pair correlation energy estimate based on local orbital specific virtual orbitals (LOSVs). Like its LPNO-CCSD predecessor, the method is completely of black box character and does not require any user adjustments. It is shown here that DLPNO-CCSD is as accurate as LPNO-CCSD while leading to computational savings exceeding one order of magnitude for larger systems. The largest calculations reported here featured >8800 basis functions and >450 atoms. In all larger test calculations done so far, the LPNO-CCSD step took less time than the preceding Hartree-Fock calculation, provided no approximations have been introduced in the latter. Thus, based on the present development reliable CCSD calculations on large molecules with unprecedented efficiency and accuracy are realized.

  17. The Backscattering Phase Function for a Sphere with a Two-Scale Relief of Rough Surface

    NASA Astrophysics Data System (ADS)

    Klass, E. V.

    2017-12-01

    The backscattering of light from spherical surfaces characterized by one and two-scale roughness reliefs has been investigated. The analysis is performed using the three-dimensional Monte-Carlo program POKS-RG (geometrical-optics approximation), which makes it possible to take into account the roughness of objects under study by introducing local geometries of different levels. The geometric module of the program is aimed at describing objects by equations of second-order surfaces. One-scale roughness is set as an ensemble of geometric figures (convex or concave halves of ellipsoids or cones). The two-scale roughness is modeled by convex halves of ellipsoids, with surface containing ellipsoidal pores. It is shown that a spherical surface with one-scale convex inhomogeneities has a flatter backscattering phase function than a surface with concave inhomogeneities (pores). For a sphere with two-scale roughness, the dependence of the backscattering intensity is found to be determined mostly by the lower-level inhomogeneities. The influence of roughness on the dependence of the backscattering from different spatial regions of spherical surface is analyzed.

  18. Low Mach number fluctuating hydrodynamics for electrolytes

    NASA Astrophysics Data System (ADS)

    Péraud, Jean-Philippe; Nonaka, Andy; Chaudhri, Anuj; Bell, John B.; Donev, Aleksandar; Garcia, Alejandro L.

    2016-11-01

    We formulate and study computationally the low Mach number fluctuating hydrodynamic equations for electrolyte solutions. We are interested in studying transport in mixtures of charged species at the mesoscale, down to scales below the Debye length, where thermal fluctuations have a significant impact on the dynamics. Continuing our previous work on fluctuating hydrodynamics of multicomponent mixtures of incompressible isothermal miscible liquids [A. Donev et al., Phys. Fluids 27, 037103 (2015), 10.1063/1.4913571], we now include the effect of charged species using a quasielectrostatic approximation. Localized charges create an electric field, which in turn provides additional forcing in the mass and momentum equations. Our low Mach number formulation eliminates sound waves from the fully compressible formulation and leads to a more computationally efficient quasi-incompressible formulation. We demonstrate our ability to model saltwater (NaCl) solutions in both equilibrium and nonequilibrium settings. We show that our algorithm is second order in the deterministic setting and for length scales much greater than the Debye length gives results consistent with an electroneutral approximation. In the stochastic setting, our model captures the predicted dynamics of equilibrium and nonequilibrium fluctuations. We also identify and model an instability that appears when diffusive mixing occurs in the presence of an applied electric field.
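
    As a point of reference for the length scale invoked above, the Debye screening length of an electrolyte is given by the standard expression (textbook electrostatics, not a formula quoted from this paper)

        \lambda_D = \sqrt{\frac{\varepsilon\, k_B T}{\sum_i n_i q_i^2}},

    where ε is the solvent permittivity, k_B T the thermal energy, and n_i, q_i the number density and charge of ionic species i. For typical electrolyte concentrations λ_D ranges from sub-nanometre to tens of nanometres, which is why resolving scales below it calls for a mesoscopic, fluctuating description.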

  19. Newborn Care in the Home and Health Facility: Formative Findings for Intervention Research in Cambodia

    PubMed Central

    Bazzano, Alessandra N.; Taub, Leah; Oberhelman, Richard A.; Var, Chivorn

    2016-01-01

    Global coverage and scale up of interventions to reduce newborn mortality remains low, though progress has been achieved in improving newborn survival in many low-income settings. An important factor in the success of newborn health interventions, and moving to scale, is appropriate design of community-based programs and strategies for local implementation. We report the results of formative research undertaken to inform the design of a newborn health intervention in Cambodia. Information was gathered on newborn care practices over a period of three months using multiple qualitative methods of data collection in the primary health facility and home setting. Analysis of the data indicated important gaps, both at home and facility level, between recommended newborn care practices and those typical in the study area. The results of this formative research have informed strategies for behavior change and improving referral of sick infants in the subsequent implementation study. Collection and dissemination of data on newborn care practices from settings such as these can contribute to efforts to advance survival, growth and development of newborns for intervention research, and for future newborn health programming. PMID:28009812

  20. Estimation of local scale dispersion from local breakthrough curves during a tracer test in a heterogeneous aquifer: the Lagrangian approach.

    PubMed

    Vanderborght, Jan; Vereecken, Harry

    2002-01-01

    The local scale dispersion tensor, D_d, is a controlling parameter for the dilution of concentrations in a solute plume that is displaced by groundwater flow in a heterogeneous aquifer. In this paper, we estimate the local scale dispersion from time series, or breakthrough curves (BTCs), of Br concentrations that were measured at several points in a fluvial aquifer during a natural gradient tracer test at Krauthausen. Locally measured BTCs were characterized by equivalent convection-dispersion parameters: the equivalent velocity, v_eq(x), and the expected equivalent dispersivity, ⟨λ_eq(x)⟩. A Lagrangian framework was used to approximately predict these equivalent parameters in terms of the spatial covariance of the ln-transformed conductivity and the local scale dispersion coefficient. The approximate Lagrangian theory illustrates that ⟨λ_eq(x)⟩ increases with increasing travel distance and is much larger than the local scale dispersivity, λ_d. A sensitivity analysis indicates that ⟨λ_eq(x)⟩ is predominantly determined by the transverse component of the local scale dispersion and by the correlation scale of the hydraulic conductivity in the transverse-to-flow direction, whereas it is relatively insensitive to the longitudinal component of the local scale dispersion. By comparing the predicted ⟨λ_eq(x)⟩ for a range of D_d values with the ⟨λ_eq(x)⟩ obtained from locally measured BTCs, the transverse component of D_d, D_dT, was estimated. The estimated transverse local scale dispersivity, λ_dT = D_dT/U_1 (U_1 = mean advection velocity), is on the order of 10^1-10^2 mm, which is relatively large but realistic for the fluvial gravel sediments at Krauthausen.
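
    A minimal sketch of the first step described here, characterizing a locally measured BTC by equivalent convection-dispersion parameters, is given below; the pulse solution, observation distance and synthetic data are illustrative assumptions, not the paper's actual field data.

        import numpy as np
        from scipy.optimize import curve_fit

        x_obs = 5.0  # m, assumed travel distance from injection to observation point

        def cde_pulse(t, v_eq, lambda_eq):
            """Concentration (arbitrary mass) of a Dirac pulse in 1-D uniform flow."""
            D = lambda_eq * v_eq                     # equivalent dispersion coefficient
            return np.exp(-(x_obs - v_eq * t) ** 2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

        # Synthetic "measured" BTC (in practice: a Br concentration time series at a point).
        t = np.linspace(0.5, 60.0, 120)              # days
        c_obs = cde_pulse(t, v_eq=0.25, lambda_eq=0.4)
        c_obs += 0.02 * c_obs.max() * np.random.default_rng(1).standard_normal(t.size)

        (v_fit, lam_fit), _ = curve_fit(cde_pulse, t, c_obs, p0=(0.2, 0.1),
                                        bounds=([0.01, 0.01], [2.0, 5.0]))
        print(f"equivalent velocity ~ {v_fit:.3f} m/d, equivalent dispersivity ~ {lam_fit:.3f} m")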

  1. Fast response of electron-scale turbulence to auxiliary heating cessation in National Spherical Torus Experiment

    DOE PAGES

    Ren, Y.; Wang, W. X.; LeBlanc, B. P.; ...

    2015-11-03

    In this letter, we report the first observation of the fast response of electron-scale turbulence to auxiliary heating cessation in the National Spherical Torus eXperiment [Ono et al., Nucl. Fusion 40, 557 (2000)]. The observation was made in a set of RF-heated L-mode plasmas with a toroidal magnetic field of 0.55 T and a plasma current of 300 kA. It is observed that the electron-scale turbulence spectral power (measured with a high-k collective microwave scattering system) decreases significantly following the fast cessation of RF heating, which occurs in less than 200 μs. The large drop in the turbulence spectral power has a short time delay of about 1-2 ms relative to the RF cessation and happens on a time scale of 0.5-1 ms, much smaller than the energy confinement time of about 10 ms. Power balance analysis shows a decrease by a factor of about 2 in the electron thermal diffusivity after the sudden drop of turbulence spectral power. The measured small changes in equilibrium profiles across the RF cessation are unlikely to explain this sudden reduction in the measured turbulence and the decrease in electron thermal transport, as supported by local linear stability analysis and by both local and global nonlinear gyrokinetic simulations. Furthermore, the observations imply that a nonlocal flux-driven mechanism may be important for the observed turbulence and electron thermal transport.

  2. Axisymmetric Shearing Box Models of Magnetized Disks

    NASA Astrophysics Data System (ADS)

    Guan, Xiaoyue; Gammie, Charles F.

    2008-01-01

    The local model, or shearing box, has proven a useful model for studying the dynamics of astrophysical disks. Here we consider the evolution of magnetohydrodynamic (MHD) turbulence in an axisymmetric local model in order to evaluate the limitations of global axisymmetric models. An exploration of the model parameter space shows the following: (1) The magnetic energy and α decay approximately exponentially after an initial burst of turbulence. For our code, HAM, the decay time τ ∝ Res, where Res/2 is the number of zones per scale height. (2) In the initial burst of turbulence the magnetic energy is amplified by a factor proportional to Res^(3/4) λ_R, where λ_R is the radial scale of the initial field. This scaling applies only if the most unstable wavelength of the magnetorotational instability is resolved and the final field is subthermal. (3) The shearing box is a resonant cavity and in linear theory exhibits a discrete set of compressive modes. These modes are excited by the MHD turbulence and are visible as quasi-periodic oscillations (QPOs) in temporal power spectra of fluid variables at low spatial resolution. At high resolution the QPOs are hidden by a noise continuum. (4) In axisymmetry, disk turbulence is local. The correlation function of the turbulence is limited in radial extent, and the peak magnetic energy density is independent of the radial extent of the box L_R for L_R > 2H. (5) Similar results are obtained for the HAM, ZEUS, and ATHENA codes; ATHENA has an effective resolution that is nearly double that of HAM and ZEUS. (6) Similar results are obtained for 2D and 3D runs at similar resolution, but only for particular choices of the initial field strength and radial scale of the initial magnetic field.

  3. FLARE: a New User Facility for Studies of Magnetic Reconnection Through Simultaneous, in-situ Measurements on MHD Scales, Ion Scales and Electron Scales

    NASA Astrophysics Data System (ADS)

    Ji, H.; Bhattacharjee, A.; Goodman, A.; Prager, S.; Daughton, W. S.; Cutler, R.; Fox, W.; Hoffmann, F.; Kalish, M.; Kozub, T.; Jara-Almonte, J.; Myers, C. E.; Ren, Y.; Sloboda, P.; Yamada, M.; Yoo, J.; Bale, S. D.; Carter, T.; Dorfman, S. E.; Drake, J. F.; Egedal, J.; Sarff, J.; Wallace, J.

    2017-12-01

    The FLARE device (Facility for Laboratory Reconnection Experiments; flare.pppl.gov) is a new laboratory experiment under construction at Princeton for studies of magnetic reconnection in the multiple X-line regimes directly relevant to space, solar, astrophysical, and fusion plasmas, as guided by a reconnection phase diagram [Ji & Daughton, (2011)]. The whole device has been successfully assembled, with a rough leak check completed. First plasmas are expected in the fall to winter. The main diagnostic is an extensive set of magnetic probe arrays covering multiple scales, from local electron scales (~2 mm) to intermediate ion scales (~10 cm) and global MHD scales (~1 m), simultaneously providing in-situ measurements over all these relevant scales. By using these laboratory data, not only are the detailed spatial profiles around each reconnecting X-line available for direct comparison with spacecraft data, but the global conditions and consequences of magnetic reconnection, which are often difficult to quantify in space, can also be controlled or studied systematically. The planned procedures and example topics as a user facility will be discussed in detail.

  4. Challenges in Upscaling Geomorphic Transport Laws: Scale-dependence of Local vs. Non-local Formalisms and Derivation of Closures (Invited)

    NASA Astrophysics Data System (ADS)

    Foufoula-Georgiou, E.; Ganti, V. K.; Passalacqua, P.

    2010-12-01

    Nonlinear geomorphic transport laws are often derived from mechanistic considerations at a point, and yet they are implemented on 90 m or 30 m DEMs, presenting a mismatch between the scales of derivation and application of the flux laws. Since estimates of local slopes and curvatures are known to depend on the scale of the DEM used in their computation, two questions arise: (1) how to meaningfully compensate for the scale dependence, if any, of local transport laws? and (2) how to formally derive, via upscaling, constitutive laws that are applicable at larger scales? Recently, non-local geomorphic transport laws for sediment transport on hillslopes have been introduced using the concept of an integral flux that depends on topographic attributes in the vicinity of a point of interest. In this paper, we demonstrate the scale dependence of local nonlinear hillslope sediment transport laws and derive a closure term via upscaling (Reynolds averaging). We also show that the non-local hillslope transport laws are inherently scale independent owing to their non-local, scale-free nature. These concepts are demonstrated via an application to a small subbasin of the Oregon Coast Range using 2 m LiDAR topographic data.
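
    The scale dependence argued here can be pictured with a toy DEM experiment: coarsening a synthetic hillslope lowers the local slope estimates, so a nonlinear flux evaluated on the coarse grid differs from the average of the fine-scale fluxes. The surface, noise level and flux-law parameters below are assumptions for illustration, not values from the study.

        import numpy as np

        def slope(z, dx):
            gy, gx = np.gradient(z, dx)
            return np.hypot(gx, gy)

        def flux(S, K=0.01, Sc=1.2):
            # Nonlinear (Roering-type) hillslope flux law, clipped below the critical slope.
            return K * S / (1.0 - (np.clip(S, 0.0, 0.99 * Sc) / Sc) ** 2)

        dx_fine = 2.0                                    # "LiDAR-like" resolution, m
        x = y = np.arange(0, 300, dx_fine)
        X, Y = np.meshgrid(x, y)
        z = 50 * np.exp(-((X - 150) ** 2 + (Y - 150) ** 2) / 8000)
        z += np.random.default_rng(0).normal(0, 0.3, X.shape)   # small-scale roughness

        S_fine = slope(z, dx_fine)
        z_coarse = z.reshape(len(y) // 15, 15, len(x) // 15, 15).mean(axis=(1, 3))  # ~30 m cells
        S_coarse = slope(z_coarse, dx_fine * 15)

        # Mean of fine-scale fluxes vs flux of the coarsened slope field: they differ,
        # which is exactly the closure problem discussed in the abstract.
        print(flux(S_fine).mean(), flux(S_coarse).mean())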

  5. Methodological framework for the probabilistic risk assessment of multi-hazards at a municipal scale: a case study in the Fella river valley, Eastern Italian Alps

    NASA Astrophysics Data System (ADS)

    Hussin, Haydar; van Westen, Cees; Reichenbach, Paola

    2013-04-01

    Local and regional authorities in mountainous areas that deal with hydro-meteorological hazards like landslides and floods try to set aside budgets for emergencies and risk mitigation. However, future losses are often not calculated in a probabilistic manner when allocating budgets or determining how much risk is acceptable. The absence of probabilistic risk estimates can create a lack of preparedness for reconstruction and risk reduction costs and a deficiency in promoting risk mitigation and prevention in an effective way. The probabilistic risk of natural hazards at local scale is usually ignored altogether due to the difficulty of acknowledging, processing and incorporating uncertainties in the estimation of losses (e.g. physical damage, fatalities and monetary loss). This study attempts to set up a working framework for a probabilistic risk assessment (PRA) of landslides and floods at a municipal scale, using the Fella river valley (Eastern Italian Alps) as a multi-hazard case study area. The emphasis is on the evaluation and determination of the uncertainty in the estimation of losses from multi-hazards. Carrying out this framework requires several steps: (1) using physically based stochastic landslide and flood models, we calculate the probability of the physical impact on individual elements at risk; (2) this is then combined with a statistical analysis of the vulnerability and monetary value of the elements at risk in order to include their uncertainty in the risk assessment; (3) finally, the uncertainty from each risk component is propagated into the loss estimation. The combined effect of landslides and floods on the direct risk to communities in narrow alpine valleys is also one of the important aspects that need to be studied.
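
    A schematic of step (3), propagating hazard, vulnerability and value uncertainties into a loss distribution by Monte Carlo sampling, is sketched below; all distributions and parameter values are invented for illustration and are not the study's actual inputs.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000

        # (1) physical impact: annual impact probability and uncertain normalized intensity
        impacted  = rng.random(n) < 0.05                         # assumed impact probability
        intensity = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # assumed intensity spread

        # (2) uncertain vulnerability (damage ratio in [0, 1]) and element value
        damage_ratio = np.clip(rng.normal(0.3, 0.1, n) * intensity, 0.0, 1.0)
        value        = rng.normal(200_000, 20_000, n)            # EUR, assumed distribution

        # (3) propagate into a loss distribution and summarize its tail
        loss = np.where(impacted, damage_ratio * value, 0.0)
        for q in (0.90, 0.99, 0.999):
            print(f"loss not exceeded with probability {q}: {np.quantile(loss, q):,.0f} EUR")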

  6. Large-scale galaxy bias

    NASA Astrophysics Data System (ADS)

    Jeong, Donghui; Desjacques, Vincent; Schmidt, Fabian

    2018-01-01

    Here, we briefly introduce the key results of the recent review (arXiv:1611.09787), whose abstract is as follows. This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy (or halo) statistics. We then review the excursion set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.
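
    For orientation, the leading terms of such a perturbative bias expansion are commonly written (schematically, following the conventions of the review) as

        \delta_g(\mathbf{x},\tau) = b_1\,\delta(\mathbf{x},\tau)
            + \tfrac{1}{2} b_2\,\delta^2(\mathbf{x},\tau)
            + b_{K^2}\,\big(K_{ij}K^{ij}\big)(\mathbf{x},\tau)
            + \ldots + \epsilon(\mathbf{x},\tau)

    where delta is the matter overdensity, K_ij is the tidal field, the b's are the bias parameters, and epsilon denotes a stochastic contribution.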

  7. Large-scale galaxy bias

    NASA Astrophysics Data System (ADS)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.

  8. Seismic probabilistic tsunami hazard: from regional to local analysis and use of geological and historical observations

    NASA Astrophysics Data System (ADS)

    Tonini, R.; Lorito, S.; Orefice, S.; Graziani, L.; Brizuela, B.; Smedile, A.; Volpe, M.; Romano, F.; De Martini, P. M.; Maramai, A.; Selva, J.; Piatanesi, A.; Pantosti, D.

    2016-12-01

    Site-specific probabilistic tsunami hazard analyses demand very high computational effort, which is often reduced by introducing approximations on tsunami sources and/or tsunami modeling. On the one hand, the large variability of source parameters implies the definition of a huge number of potential tsunami scenarios, whose omission could easily introduce important bias into the analysis. On the other hand, detailed inundation maps computed by tsunami numerical simulations require very long running times. When tsunami effects are calculated at regional scale, a common practice is to propagate tsunami waves in deep waters (up to 50-100 m depth) neglecting non-linear effects and using coarse bathymetric meshes. Maximum wave heights on the coast are then empirically extrapolated, saving a significant amount of computational time. However, moving to local scale, such assumptions no longer hold and tsunami modeling requires much greater computational resources. In this work, we perform a local Seismic Probabilistic Tsunami Hazard Analysis (SPTHA) for the 50 km long coastal segment between Augusta and Siracusa, a touristic and commercial area along the south-eastern coast of Sicily, Italy. The procedure consists of using the outcomes of a regional SPTHA as input for a two-step filtering method to select and substantially reduce the number of scenarios contributing to the specific target area. These selected scenarios are modeled using high-resolution topo-bathymetry to produce detailed inundation maps. Results are presented as probabilistic hazard curves and maps, with the goal of analyzing, comparing and highlighting the different results provided by regional and local hazard assessments. Moreover, the analysis is enriched by the use of local observed tsunami data, both geological and historical. Indeed, the tsunami data sets available for the selected target areas are particularly rich compared with the scarce and heterogeneous data sets usually available elsewhere. Therefore, they can represent valuable benchmarks for testing and strengthening the results of such studies. The work is funded by the Italian Flagship Project RITMARE, the two EC FP7 projects ASTARTE (Grant agreement 603839) and STREST (Grant agreement 603389), and the INGV-DPC Agreement.

  9. Simultaneous head tissue conductivity and EEG source location estimation.

    PubMed

    Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott

    2016-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with a simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated the BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.
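
    A one-parameter caricature of this iterative gradient-based idea is sketched below: a scalar conductivity ratio is adjusted by finite-difference gradient descent until a toy forward model reproduces the "measured" scalp pattern. The forward model, step size and iteration count are placeholders for illustration, not the actual SCALE implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        lead = rng.standard_normal(128)                  # fixed toy spatial pattern (128 channels)

        def forward_scalp_map(bscr):
            # Toy forward model: scalp map amplitude shrinks as the skull gets more resistive.
            return lead / np.sqrt(bscr)

        measured = forward_scalp_map(30.0)               # "data" generated with BSCR = 30

        def misfit(bscr):
            d = forward_scalp_map(bscr) - measured
            return float(d @ d)

        bscr = 80.0                                      # deliberately poor initial guess
        for _ in range(200):
            g = (misfit(bscr + 0.1) - misfit(bscr - 0.1)) / 0.2   # finite-difference gradient
            bscr = float(np.clip(bscr - 50.0 * g, 5.0, 100.0))    # fixed step size (assumed)
        print(f"estimated BSCR ~ {bscr:.1f}")            # converges toward 30 in this toy setup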

  10. Countdown to Drawdown: an initial overview of exponential scaling of potential societal tipping points for deep decarbonization of global energy infrastructure by 2050

    NASA Astrophysics Data System (ADS)

    McCaffrey, Mark; Bhowmik, Avit

    2017-04-01

    The 194 signatories to the Paris Agreement range in size from small island nations (Tuvalu, less than 10,000 people) to massive states (India and China, which between them have 2.6 billion people). Their cultural backgrounds, political, economic and social systems vary widely. What they all share is an agreement for climate stabilisation at 1.5-2 °C. A roadmap outlining potential exponential transitions towards a carbon-free economy may benefit from a logarithmic "powers of ten" framework that sets aside backgrounds and systems to examine the relative population concentration scales - from the individual (10^0) to the local/neighborhood (10^3) to the national/transnational scales (10^8) and ultimately the global population of around 10 billion anticipated in 2050 (10^10). What are the related targets and indicators for successful engagement at each level for rapid and radical reductions of carbon emissions and concentrations? What are the possible interventions and barriers that may be applied at different levels of population concentration? What "drawdown" strategies are most appropriate for different scales? Could focusing demonstrations of clean energy and sustainable practices on the local/neighborhood to urban scale (10^3-10^4) provide a leverage that has not been achieved at more complex national and transnational scales? Ultimately, backgrounds and systems are important factors in the equation, but the "powers of 10" scaling framework may provide a compass to assist in identifying the challenges, opportunities and related thresholds and tipping points for achieving deep decarbonization and transformation of the global energy infrastructure at every level of society over the next thirty-three years.

  11. Simultaneous head tissue conductivity and EEG source location estimation

    PubMed Central

    Acar, Can E.; Makeig, Scott

    2015-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality. PMID:26302675

  12. Foundations for a multiscale collaborative Earth model

    NASA Astrophysics Data System (ADS)

    Afanasiev, Michael; Peter, Daniel; Sager, Korbinian; Simutė, Saulė; Ermert, Laura; Krischer, Lion; Fichtner, Andreas

    2016-01-01

    We present a computational framework for the assimilation of local to global seismic data into a consistent model describing Earth structure on all seismically accessible scales. This Collaborative Seismic Earth Model (CSEM) is designed to meet the following requirements: (i) Flexible geometric parametrization, capable of capturing topography and bathymetry, as well as all aspects of potentially resolvable structure, including small-scale heterogeneities and deformations of internal discontinuities. (ii) Independence of any particular wave equation solver, in order to enable the combination of inversion techniques suitable for different types of seismic data. (iii) Physical parametrization that allows for full anisotropy and for variations in attenuation and density. While not all of these parameters are always resolvable, the assimilation of data that constrain any parameter subset should be possible. (iv) Ability to accommodate successive refinements through the incorporation of updates on any scale as new data or inversion techniques become available. (v) Enable collaborative Earth model construction. The structure of the initial CSEM is represented on a variable-resolution tetrahedral mesh. It is assembled from a long-wavelength 3-D global model into which several regional-scale tomographies are embedded. We illustrate the CSEM workflow of successive updating with two examples from Japan and the Western Mediterranean, where we constrain smaller scale structure using full-waveform inversion. Furthermore, we demonstrate the ability of the CSEM to act as a vehicle for the combination of different tomographic techniques with a joint full-waveform and traveltime ray tomography of Europe. This combination broadens the exploitable frequency range of the individual techniques, thereby improving resolution. We perform two iterations of a whole-Earth full-waveform inversion using a long-period reference data set from 225 globally recorded earthquakes. At this early stage of the CSEM development, the broad global updates mostly act to remove artefacts from the assembly of the initial CSEM. During the future evolution of the CSEM, the reference data set will be used to account for the influence of small-scale refinements on large-scale global structure. The CSEM as a computational framework is intended to help bridging the gap between local, regional and global tomography, and to contribute to the development of a global multiscale Earth model. While the current construction serves as a first proof of concept, future refinements and additions will require community involvement, which is welcome at this stage already.

  13. Localization of self-potential sources in volcano-electric effect with complex continuous wavelet transform and electrical tomography methods for an active volcano

    NASA Astrophysics Data System (ADS)

    Saracco, Ginette; Labazuy, Philippe; Moreau, Frédérique

    2004-06-01

    This study concerns the fluid flow circulation associated with magmatic intrusion during volcanic eruptions, investigated through electrical tomography studies. The objective is to localize and characterize the sources responsible for electrical disturbances during a time-evolution survey, between 1993 and 1999, of an active volcano, the Piton de la Fournaise. We have applied a dipolar probability tomography and a multi-scale analysis to synthetic and experimental SP data. We show the advantage of the complex continuous wavelet transform, which makes it possible to obtain directional information from the phase without a priori information on the sources. In both cases, we point out a translation of potential sources through the upper depths, around specific faults or structural features, during periods preceding a volcanic eruption. The set of parameters obtained (vertical and horizontal localization, multipolar degree and inclination) could be taken into account as criteria to define volcanic precursors.

  14. Cross-Cordillera exchange mediated by the Panama Canal increased the species richness of local freshwater fish assemblages.

    PubMed Central

    Smith, Scott A.; Bell, Graham; Bermingham, Eldredge

    2004-01-01

    Completion of the Panama Canal in 1914 breached the continental divide and set into motion a natural experiment of unprecedented magnitude by bringing previously isolated freshwater fish communities into contact. The construction of a freshwater corridor connecting evolutionarily isolated communities in Pacific and Caribbean watersheds dramatically increased the rate of dispersal, without directly affecting species interactions. Here, we report that a large fraction of species have been able to establish themselves on the other side of the continental divide, whereas no species have become extinct, leading to a local increase in species richness. Our results suggest that communities are not saturated and that competitive exclusion does not occur over the time-scale previously envisioned. Moreover, the results of this unintentional experiment demonstrate that community composition and species richness were regulated by the regional process of dispersal, rather than by local processes such as competition and predation. PMID:15347510

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liakh, Dmitry I

    While the formalism of multiresolution analysis (MRA), based on wavelets and adaptive integral representations of operators, is actively progressing in electronic structure theory (mostly on the independent-particle level and, recently, second-order perturbation theory), the concepts of multiresolution and adaptivity can also be utilized within the traditional formulation of correlated (many-particle) theory which is based on second quantization and the corresponding (generally nonorthogonal) tensor algebra. In this paper, we present a formalism called scale-adaptive tensor algebra (SATA) which exploits an adaptive representation of tensors of many-body operators via the local adjustment of the basis set quality. Given a series of locally supported fragment bases of a progressively lower quality, we formulate the explicit rules for tensor algebra operations dealing with adaptively resolved tensor operands. The formalism suggested is expected to enhance the applicability and reliability of local correlated many-body methods of electronic structure theory, especially those directly based on atomic orbitals (or any other localized basis functions).

  16. Dominant factors in controlling marine gas pools in South China

    USGS Publications Warehouse

    Xu, S.; Watney, W.L.

    2007-01-01

    In the marine strata from the Sinian to the Middle Triassic in South China, four sets of regional and six sets of local source rocks, and ten sets of reservoir rocks, have developed. The occurrence of four main formation periods in association with five main reconstruction periods results in a secondary origin for most marine gas pools in South China. To improve the understanding of marine gas pools in South China, with their severely deformed geological background, the dominant control factors are discussed in this paper. The fluid sources, including gas cracked from crude oil, gas dissolved in water, gas of inorganic origin, hydrocarbons generated during the second phase, and mixed pool fluid sources, were the most significant control factors for the types and development stages of the pools. The periods of pool formation and reconstruction controlled the pool evolution and distribution on a regional scale. Owing to the multiple periods of pool formation and reconstruction, the distribution of marine gas pools is complex both in space and in time, and the gas in the pools is heterogeneous. Pool elements, such as preservation conditions, traps and migration paths, and reservoir rocks and facies, also served as important control factors for marine gas pools in South China. In particular, the preservation conditions played a key role in maintaining marine oil and gas accumulations on a regional or local scale. According to the several dominant control factors of a pool, a pool-controlling model can be constructed. As an example, the pool-controlling model of the Sinian gas pool in the Weiyuan gas field in the Sichuan basin is summarized. © Higher Education Press and Springer-Verlag 2007.

  17. From data to wisdom: quality improvement strategies supporting large-scale implementation of evidence-based services.

    PubMed

    Daleiden, Eric L; Chorpita, Bruce F

    2005-04-01

    The Hawaii Department of Health Child and Adolescent Mental Health Division has explored various strategies to promote widespread use of empirical evidence to improve the quality of services and outcomes for youth. This article describes a core set of clinical decisions and how several general and local evidence bases may inform those decisions. Multiple quality improvement strategies are illustrated in the context of a model that outlines four phases of evidence: data, information, knowledge, and wisdom.

  18. A geological and morphological description of Lakshmi planum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pronin, A.A.; Kadnichanskii, S.A.; Kotel'nikov, V.A.

    A morphological description is presented of Lakshmi Planum and its setting, which form a single structure. It is assumed that a mechanism of lifting matter from the planet's interior and spreading it horizontally, accompanied by fold deformation and/or the formation of tectonic slivers, underlies the formation of the structure. This allows one to speak of Lakshmi as a local center of radial spreading. The structural scales indicate the participation of asthenospheric flows in its formation.

  19. The UCLA Design Diversity Experiment (DEDIX) system: A distributed testbed for multiple-version software

    NASA Technical Reports Server (NTRS)

    Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.

    1986-01-01

    To establish a long-term research facility for experimental investigations of design diversity as a means of achieving fault-tolerant systems, a distributed testbed for multiple-version software was designed. It is part of a local network, which utilizes the Locus distributed operating system to operate a set of 20 VAX 11/750 computers. It is used in experiments to measure the efficacy of design diversity and to investigate reliability increases under large-scale, controlled experimental conditions.
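
    The core adjudication step behind a multiple-version (design-diverse) setup of this kind can be sketched in a few lines: run independently developed versions on the same input and accept the majority result. The toy versions and the seeded fault below are hypothetical, not part of the DEDIX experiments.

        from collections import Counter

        def version_a(x): return x * x
        def version_b(x): return x ** 2
        def version_c(x): return x * x + (1 if x == 7 else 0)   # seeded design fault

        def vote(versions, x):
            """Run all versions on the same input and return the majority result."""
            results = [v(x) for v in versions]
            value, count = Counter(results).most_common(1)[0]
            if count <= len(versions) // 2:
                raise RuntimeError("no majority -- versions disagree")
            return value, results

        print(vote([version_a, version_b, version_c], 7))   # majority masks the faulty version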

  20. A continuum theory of edge dislocations

    NASA Astrophysics Data System (ADS)

    Berdichevsky, V. L.

    2017-09-01

    Continuum theory of dislocation aims to describe the behavior of large ensembles of dislocations. This task is far from completion, and, most likely, does not have a "universal solution", which is applicable to any dislocation ensemble. In this regards it is important to have guiding lines set by benchmark cases, where the transition from a discrete set of dislocations to a continuum description is made rigorously. Two such cases have been considered recently: equilibrium of dislocation walls and screw dislocations in beams. In this paper one more case is studied, equilibrium of a large set of 2D edge dislocations placed randomly in a 2D bounded region. The major characteristic of interest is energy of dislocation ensemble, because it determines the structure of continuum equations. The homogenized energy functional is obtained for the periodic dislocation ensembles with a random contents of the periodic cell. Parameters of the periodic structure can change slowly on distances of order of the size of periodic cells. The energy functional is obtained by the variational-asymptotic method. Equilibrium positions are local minima of energy. It is confirmed the earlier assertion that energy density of the system is the sum of elastic energy of averaged elastic strains and microstructure energy, which is elastic energy of the neutralized dislocation system, i.e. the dislocation system placed in a constant dislocation density field making the averaged dislocation density zero. The computation of energy is reduced to solution of a variational cell problem. This problem is solved analytically. The solution is used to investigate stability of simple dislocation arrays, i.e. arrays with one dislocation in the periodic cell. The relations obtained yield two outcomes: First, there is a state parameter of the system, dislocation polarization; averaged stresses affect only dislocation polarization and cannot change other characteristics of the system. Second, the structure of dislocation phase space is strikingly simple. Dislocation phase space is split in a family of subspaces corresponding to constant values of dislocation polarizations; in each equipolarization subspace there are many local minima of energy; for zero external stresses the system is stuck in a local minimum of energy; for non-zero slowly changing external stress, dislocation polarization evolves, while the system moves over local energy minima of equipolarization subspaces. Such a simple picture of dislocation dynamics is due to the presence of two time scales, slow evolution of dislocation polarization and fast motion of the system over local minima of energy. The existence of two time scales is justified for a neutral system of edge dislocations.

  1. Ground penetrating radar imaging of cap rock, caliche and carbonate strata

    USGS Publications Warehouse

    Kruse, S.E.; Schneider, J.C.; Campagna, D.J.; Inman, J.A.; Hickey, T.D.

    2000-01-01

    Field experiments show ground penetrating radar (GPR) can be used to image shallow carbonate stratigraphy effectively in a variety of settings. In south Florida, the position and structure of cap rock cover on limestone can be an important control on surface water flow and vegetation, but larger scale outcrops (tens of meters) of cap rock are sparse. GPR mapping through south Florida prairie, cypress swamp and hardwood hammock resolves variations in thickness and structure of cap rock to ~3 m and holds the potential to test theories for cap rock-vegetation relationships. In other settings, carbonate strata are mapped to test models for the formation of local structural anomalies. A test of GPR imaging capabilities on an arid caliche (calcrete) horizon in southeastern Nevada shows depth penetration to ~2 m with resolution of the base of caliche. GPR profiling also succeeds in resolving more deeply buried (~5 m) limestone discontinuity surfaces that record subaerial exposure in south Florida. (C) 2000 Elsevier Science B.V. All rights reserved.
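
    As a reminder of the depth scaling behind penetration figures like these, two-way travel time converts to depth through the medium's electromagnetic wave speed; the sketch below assumes an illustrative relative permittivity, not a value measured in the study.

        C0 = 0.2998  # speed of light in vacuum, m/ns

        def gpr_depth(twt_ns, eps_r):
            """Reflector depth (m) for a two-way travel time (ns) and relative permittivity."""
            v = C0 / eps_r ** 0.5        # EM wave speed in the medium, m/ns
            return v * twt_ns / 2.0

        # e.g. a reflector at 60 ns in dry limestone (eps_r ~ 7, assumed) lies near 3.4 m depth
        print(round(gpr_depth(60.0, 7.0), 2))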

  2. The LANDFIRE Total Fuel Change Tool (ToFuΔ) user’s guide

    USGS Publications Warehouse

    Smail, Tobin; Martin, Charley; Napoli, Jim

    2011-01-01

    LANDFIRE fuel data were originally developed from coarse-scale existing vegetation type, existing vegetation cover, existing vegetation height, and biophysical setting layers. Fire and fuel specialists from across the country provided input to the original LANDFIRE National (LF_1.0.0) fuel layers to help calibrate fuel characteristics on a more localized scale. The LANDFIRE Total Fuel Change Tool (ToFuΔ) was developed from this calibration process. Vegetation is subject to constant change, and fuels are therefore also dynamic, necessitating a systematic method for reflecting changes spatially so that fire behavior can be accurately assessed. ToFuΔ allows local experts to quickly produce maps that spatially display any proposed changes to fuel characteristics. ToFuΔ works through a Microsoft Access database to produce spatial results in ArcMap based on rule sets devised by the user that take into account the existing vegetation type (EVT), existing vegetation cover (EVC), existing vegetation height (EVH), and biophysical setting (BpS) from the LANDFIRE grid data. There are also options within ToFuΔ to add discrete variables in grid format through use of the wildcard option and to subdivide specific areas for different fuel characteristic assignments through the BpS grid. The ToFuΔ user determines the size of the area for assessment by defining a Management Unit, or "MU." User-defined rule sets made up of EVT, EVC, EVH, and BpS layers, as well as any wildcard selections, are used to change or refine fuel characteristics within the MU. Once these changes have been made to the fuel characteristics, new grids are created for fire behavior analysis or planning. These grids represent the most common ToFuΔ output. ToFuΔ is currently under development and will continue to be updated in the future. The current beta version (0.12), released in March 2011, is compatible with Windows 7 and will be the last release until the fall of 2011.
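
    The rule-set idea can be pictured with a small grid example; the field names, codes and the rule below are hypothetical illustrations, not LANDFIRE's actual attribute values or ToFuΔ's internal format.

        import numpy as np

        # Toy 2x2 grids standing in for the EVT/EVC/EVH and fuel model layers of one MU.
        evt = np.array([[2011, 2011], [2049, 2011]])   # existing vegetation type codes
        evc = np.array([[  65,   30], [  65,   65]])   # canopy cover (%)
        evh = np.array([[  18,   18], [   5,   18]])   # height class
        fm  = np.array([[ 165,  142], [ 183,  165]])   # current fuel model grid

        rules = [
            # hypothetical rule: (EVT code, minimum cover, minimum height) -> new fuel model
            {"evt": 2011, "evc_min": 50, "evh_min": 10, "new_fm": 189},
        ]

        out = fm.copy()
        for r in rules:
            mask = (evt == r["evt"]) & (evc >= r["evc_min"]) & (evh >= r["evh_min"])
            out[mask] = r["new_fm"]          # reassign fuel model where the rule matches
        print(out)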

  3. Influence of spatial resolution on precipitation simulations for the central Andes Mountains

    NASA Astrophysics Data System (ADS)

    Trachte, Katja; Bendix, Jörg

    2013-04-01

    The climate of South America is highly influenced by the north-south oriented Andes Mountains. Their complex structure causes modifications of large-scale atmospheric circulations, resulting in various mesoscale phenomena as well as a high variability in local conditions. Due to their height and length, the terrain generates distinctly different climate conditions on the western and eastern slopes. While in the tropical regions along the western flanks the conditions are cold and arid, the eastern slopes are dominated by warm, moist and rainy air coming from the Amazon basin. Below 35° S the situation reverses, with rather semiarid conditions in the eastern part and a temperate rainy climate along southern Chile. Generally, global circulation models (GCMs) describe the state of the global climate and its changes, but are unable to capture regional or even local features due to their coarse resolution. This is particularly true in heterogeneous regions such as the Andes Mountains, where local driving features, e.g. local circulation systems, vary strongly on small scales and thus lead to a high variability of rainfall distributions. An appropriate technique to overcome this problem and to gain regional and local scale rainfall information is the dynamical downscaling of the global data using a regional climate model (RCM). The poster presents results of the evaluation of the performance of the Weather Research and Forecasting (WRF) model over South America with special focus on the central Andes Mountains of Ecuador. A sensitivity study regarding the cumulus parametrization, microphysics, boundary layer processes and the radiation budget is conducted. With 17 simulations, consisting of 16 parametrization scheme combinations and 1 default run, a suitable model set-up for climate research in this region is to be identified. The simulations were conducted in a two-way nested mode i) to examine the best physics scheme combination for the target region and ii) to analyze the impact of spatial resolution, and thus the representation of the terrain, on the results.
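
    The experimental design (16 scheme combinations plus a default run) amounts to a full factorial over four physics categories with two options each; a sketch of how such a run matrix can be enumerated is given below, with placeholder scheme names rather than the schemes actually tested in the study.

        from itertools import product

        cumulus   = ["CU_A", "CU_B"]        # placeholder cumulus schemes
        microphys = ["MP_A", "MP_B"]        # placeholder microphysics schemes
        pbl       = ["PBL_A", "PBL_B"]      # placeholder boundary layer schemes
        radiation = ["RAD_A", "RAD_B"]      # placeholder radiation schemes

        experiments = [dict(cu=c, mp=m, pbl=p, rad=r)
                       for c, m, p, r in product(cumulus, microphys, pbl, radiation)]
        experiments.append({"default": True})
        print(len(experiments))             # 17 simulations in total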

  4. Think locally, act locally: detection of small, medium-sized, and large communities in large networks.

    PubMed

    Jeub, Lucas G S; Balachandran, Prakash; Porter, Mason A; Mucha, Peter J; Mahoney, Michael W

    2015-01-01

    It is common in the study of networks to investigate intermediate-sized (or "meso-scale") features to try to gain an understanding of network structure and function. For example, numerous algorithms have been developed to try to identify "communities," which are typically construed as sets of nodes with denser connections internally than with the remainder of a network. In this paper, we adopt a complementary perspective that communities are associated with bottlenecks of locally biased dynamical processes that begin at seed sets of nodes, and we employ several different community-identification procedures (using diffusion-based and geodesic-based dynamics) to investigate community quality as a function of community size. Using several empirical and synthetic networks, we identify several distinct scenarios for "size-resolved community structure" that can arise in real (and realistic) networks: (1) the best small groups of nodes can be better than the best large groups (for a given formulation of the idea of a good community); (2) the best small groups can have a quality that is comparable to the best medium-sized and large groups; and (3) the best small groups of nodes can be worse than the best large groups. As we discuss in detail, which of these three cases holds for a given network can make an enormous difference when investigating and making claims about network community structure, and it is important to take this into account to obtain reliable downstream conclusions. Depending on which scenario holds, one may or may not be able to successfully identify "good" communities in a given network (and good communities might not even exist for a given community quality measure), the manner in which different small communities fit together to form meso-scale network structures can be very different, and processes such as viral propagation and information diffusion can exhibit very different dynamics. In addition, our results suggest that, for many large realistic networks, the output of locally biased methods that focus on communities that are centered around a given seed node (or set of seed nodes) might have better conceptual grounding and greater practical utility than the output of global community-detection methods. They also illustrate structural properties that are important to consider in the development of better benchmark networks to test methods for community detection.
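
    One concrete example of the locally biased, diffusion-based procedures discussed here is a personalized-PageRank diffusion from a seed set followed by a conductance sweep cut; the sketch below runs this on a toy two-clique graph (the graph, seed and parameter values are illustrative, not the paper's benchmark networks or its exact algorithms).

        import numpy as np

        def personalized_pagerank(A, seeds, alpha=0.15, iters=200):
            """Power iteration for PageRank with restart to a seed distribution."""
            d = A.sum(axis=1)
            s = np.zeros(A.shape[0]); s[seeds] = 1.0 / len(seeds)
            p = s.copy()
            for _ in range(iters):
                p = alpha * s + (1 - alpha) * (A.T @ (p / d))
            return p

        def sweep_cut(A, p):
            """Best-conductance prefix of the degree-normalized PageRank ordering."""
            d = A.sum(axis=1); vol = d.sum()
            order = np.argsort(-p / d)
            best, best_phi = None, np.inf
            in_set = np.zeros(len(p), bool)
            cut, volume = 0.0, 0.0
            for k, v in enumerate(order[:-1], start=1):
                in_set[v] = True
                volume += d[v]
                cut += d[v] - 2 * A[v, in_set].sum()       # edges leaving the growing set
                phi = cut / min(volume, vol - volume)       # conductance of this prefix
                if phi < best_phi:
                    best_phi, best = phi, order[:k].copy()
            return best, best_phi

        # Toy graph: two 4-cliques joined by one edge; seed node 0 lies in the first clique.
        A = np.zeros((8, 8))
        for block in (range(4), range(4, 8)):
            for i in block:
                for j in block:
                    if i != j:
                        A[i, j] = 1
        A[3, 4] = A[4, 3] = 1

        community, phi = sweep_cut(A, personalized_pagerank(A, seeds=[0]))
        print(sorted(map(int, community)), round(float(phi), 3))   # recovers the seed's clique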

  5. Think locally, act locally: Detection of small, medium-sized, and large communities in large networks

    NASA Astrophysics Data System (ADS)

    Jeub, Lucas G. S.; Balachandran, Prakash; Porter, Mason A.; Mucha, Peter J.; Mahoney, Michael W.

    2015-01-01

    It is common in the study of networks to investigate intermediate-sized (or "meso-scale") features to try to gain an understanding of network structure and function. For example, numerous algorithms have been developed to try to identify "communities," which are typically construed as sets of nodes with denser connections internally than with the remainder of a network. In this paper, we adopt a complementary perspective that communities are associated with bottlenecks of locally biased dynamical processes that begin at seed sets of nodes, and we employ several different community-identification procedures (using diffusion-based and geodesic-based dynamics) to investigate community quality as a function of community size. Using several empirical and synthetic networks, we identify several distinct scenarios for "size-resolved community structure" that can arise in real (and realistic) networks: (1) the best small groups of nodes can be better than the best large groups (for a given formulation of the idea of a good community); (2) the best small groups can have a quality that is comparable to the best medium-sized and large groups; and (3) the best small groups of nodes can be worse than the best large groups. As we discuss in detail, which of these three cases holds for a given network can make an enormous difference when investigating and making claims about network community structure, and it is important to take this into account to obtain reliable downstream conclusions. Depending on which scenario holds, one may or may not be able to successfully identify "good" communities in a given network (and good communities might not even exist for a given community quality measure), the manner in which different small communities fit together to form meso-scale network structures can be very different, and processes such as viral propagation and information diffusion can exhibit very different dynamics. In addition, our results suggest that, for many large realistic networks, the output of locally biased methods that focus on communities that are centered around a given seed node (or set of seed nodes) might have better conceptual grounding and greater practical utility than the output of global community-detection methods. They also illustrate structural properties that are important to consider in the development of better benchmark networks to test methods for community detection.

  6. What's exposed? Mapping elements at risk from space

    NASA Astrophysics Data System (ADS)

    Taubenböck, Hannes; Klotz, Martin; Geiß, Christian

    2014-05-01

    The world has suffered from severe natural disasters over the last decennium. The earthquake in Haiti in 2010 and the typhoon "Haiyan" that hit the Philippines in 2013 are among the most prominent examples of recent years. Especially in developing countries, knowledge of the amount, location or type of the exposed elements or people is often not available. (Geo-)data are mostly inaccurate, generalized, not up-to-date or not available at all. Thus, fast and effective disaster management is often delayed until the necessary geo-data allow an assessment of the affected people, buildings, infrastructure and their respective locations. In the last decade, Earth observation data and methods have developed a product portfolio ranging from low-resolution land cover datasets to high-resolution, spatially accurate building inventories to classify elements at risk or even indirectly assess population densities. This presentation will give an overview of the currently available products and EO-based capabilities from global to local scale. On global to regional scales, remote sensing derived geo-products help to approximate the inventory of elements at risk in its spatial extent and abundance through mapping and modelling approaches of land cover or related spatial attributes such as night-time illumination or fractions of impervious surfaces. The capabilities and limitations of mapping physical exposure will be discussed in detail using the example of DLR's 'Global Urban Footprint' initiative. On the local scale, the potential of remote sensing particularly lies in the generation of spatially and thematically accurate building inventories for the detailed analysis of the building stock's physical exposure. Even vulnerability-related indicators can be derived. Indicators such as building footprint, height, shape characteristics, roof materials, location, and construction age and structure type have already been combined with civil engineering approaches to assess building stability for large areas. Especially the latest generation of optical sensors - often in combination with digital surface models - featuring very high geometric resolutions are perceived as advantageous for operational applications, especially for small to medium scale urban areas. With regard to user-oriented product generation in the FP-7 project SENSUM, a multi-scale and multi-source reference database has been set up to systematically screen available products - global to local ones - with regard to data availability in data-rich and data-poor countries. Thus, the overarching goal of this presentation is to provide a systematic overview of EO-based data sets and their individual capabilities and limitations with respect to spatial, temporal and thematic detail to support decision-making before, during and after natural disasters.

  7. Local and Catchment-Scale Water Storage Changes in Northern Benin Deduced from Gravity Monitoring at Various Time-Scales

    NASA Astrophysics Data System (ADS)

    Hinderer, J.; Hector, B.; Séguis, L.; Descloitres, M.; Cohard, J.; Boy, J.; Calvo, M.; Rosat, S.; Riccardi, U.; Galle, S.

    2013-12-01

    Water storage changes (WSC) are investigated by means of gravity monitoring in Djougou, northern Benin, in the framework of the GHYRAF (Gravity and Hydrology in Africa) project. In this area, WSC are 1) part of the control system for evapotranspiration (ET) processes, a key variable of the West African monsoon cycle, and 2) the state variable for resource management, a critical issue in storage-poor hard-rock basement contexts such as northern Benin. We show the advantages of gravity monitoring for analyzing the different processes of the water cycle involved at various time and space scales, using the main gravity sensors available today (FG5 absolute gravimeter, superconducting gravimeter -SG- and CG5 micro-gravimeter). The study area is also part of the long-term observing system AMMA-Catch, and is thus under intense hydro-meteorological monitoring (rain, soil moisture, water table level, ET ...). Gravity-derived WSC are compared at all frequencies to hydrological data and to hydrological models calibrated on these data. Discrepancies are analyzed to discuss the pros and cons of each approach. Fast gravity changes (a few hours) are significant when rain events occur, and involve different contributions: rainfall itself, runoff, fast subsurface water redistribution, the screening effect of the gravimeter building and local topography. We investigate these effects and present the statistical results of a set of rain events recorded with the SG installed in Djougou since July 2010. The intermediate time scale of gravity changes (a few days) is caused by ET and by both vertical and horizontal water redistribution. The integrative nature of gravity measurements does not allow these different contributions to be separated, and the screening from the shelter reduces our ability to retrieve ET values. Also, atmospheric corrections are critical at such frequencies and deserve specific attention. However, a quick analysis of gravity changes following rain events shows that the values are in accordance with expected ET values (up to about 5 mm/day). Seasonal WSC have been analyzed since 2008 using FG5 absolute gravity measurements four times a year and since 2010 using the continuous SG time series. They can reach up to 12 microGal (≈270 mm) and show a clear interannual variability, as can be expected from rainfall variability in the area. This data set allows estimates of an average specific yield for the local aquifer, together with a scaling factor for the water content derived from Magnetic Resonance Soundings.
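
    A back-of-the-envelope check of the gravity-to-water conversion behind statements like "12 microGal (≈270 mm)" is the Bouguer-plate admittance 2*pi*G*rho_w; the sketch below reproduces the order of magnitude (the exact admittance used in the study may differ slightly).

        import math

        G     = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
        rho_w = 1000.0         # density of water, kg/m^3

        # 1 m/s^2 = 1e8 microGal, and we want the effect per millimeter of water.
        admittance = 2 * math.pi * G * rho_w * 1e8 * 1e-3   # microGal per mm of water

        def water_equivalent_mm(delta_g_microgal):
            return delta_g_microgal / admittance

        print(round(admittance, 4))              # ~0.0419 microGal/mm
        print(round(water_equivalent_mm(12.0)))  # ~286 mm, consistent with the ~270 mm quoted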

  8. Bag-of-features based medical image retrieval via multiple assignment and visual words weighting.

    PubMed

    Wang, Jingyan; Li, Yongping; Zhang, Ying; Wang, Chao; Xie, Honglan; Chen, Guoling; Gao, Xin

    2011-11-01

    Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights.
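
    A minimal sketch of the reconstruction idea behind the QP assignment is given below: a descriptor is approximated by a constrained combination of its nearest visual words, and the resulting weights act as soft assignments to the histogram bins. The vocabulary, descriptor and solver choice (a general SLSQP call rather than a dedicated QP solver) are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        vocab = rng.standard_normal((50, 128))     # 50 visual words in a 128-D (SIFT-like) space
        desc  = rng.standard_normal(128)           # one local descriptor

        k = 5
        nn = np.argsort(np.linalg.norm(vocab - desc, axis=1))[:k]   # k nearest visual words
        B = vocab[nn]                                               # (k, 128) local basis

        def objective(w):                          # squared reconstruction error ||desc - B^T w||^2
            r = desc - B.T @ w
            return float(r @ r)

        res = minimize(objective, x0=np.full(k, 1.0 / k), method="SLSQP",
                       bounds=[(0.0, None)] * k,                    # nonnegative weights
                       constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])

        weights = np.zeros(len(vocab))
        weights[nn] = res.x                        # soft contributions to the k nearest bins
        print(np.round(res.x, 3))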

  9. Diffusion of novel healthcare technologies to resource poor settings.

    PubMed

    Malkin, Robert; von Oldenburg Beer, Kim

    2013-09-01

    A new product has completed clinical trials in a distant, resource-poor hospital using a few dozen prototypes. The data looks great. The novel medical device solves a widely felt problem. The next goal is to integrate the device into the country's healthcare system and spread the device to other countries. But how? In order to be widely used, the device must be manufactured and distributed. One option is to license the intellectual property (IP) to an interested third party, if one can be found. However, it is possible to manage the manufacturing and distribution without licensing. There are at least two common means for manufacturing a novel medical device targeted to resource-poor settings: (a) formal (contract) manufacturing and (b) informal (local) manufacturing. There are three primary routes to diffusion of novel medical devices in the developing world: (1) local distributors, (2) direct international sales, and (3) international donations. Perhaps surprisingly, the least effective mechanism is direct importation through donation. The most successful mechanism, the method used by nearly all working medical devices in resource-poor settings, is the use of contract manufacturing and a local distributor. This article is written for the biomedical innovator and entrepreneur who wishes to make a novel healthcare technology or product available and accessible to healthcare providers and patients in the developing world. There are very few documented cases and little formal research in this area. To this end, this article describes and explores the manufacturing and distribution options in order to provide insights into when and how each can be applied to scale up a novel technology to make a difference in a resource-poor setting.

  10. The Relationship between Media Consumption and Health-Related Anxieties after the Fukushima Daiichi Nuclear Disaster

    PubMed Central

    Sugimoto, Amina; Nomura, Shuhei; Tsubokura, Masaharu; Matsumura, Tomoko; Muto, Kaori; Sato, Mikiko; Gilmour, Stuart

    2013-01-01

    Background The Fukushima Daiichi nuclear disaster caused global panic through the release of harmful radionuclides. In a disaster setting, misuse of the contemporary media sources available today can lead to the dissemination of incorrect information and panic. The study aims to build a scale that examines associations between media consumption and individual anxieties, and to propose effective media usage for future disaster management. Methods The University of Tokyo collaborated with the Fukushima local government to conduct radiation-health seminars for a total of 1560 residents at 12 different locations in Fukushima. A 13-item questionnaire, administered once before and once after a seminar, was analyzed by factor analysis to develop sub-scales, which were entered into multiple regression models to determine relationships between the sub-scales and the media types consumed. A paired t-test was used to examine changes in sub-scale scores between the pre- and post-seminar questionnaires. Results Three sub-scales were revealed, each associated with different media types: the first was associated with rumors, while concern for the future was positively associated with regional newspapers and negatively with national newspapers. Anxiety about social disruption was associated with radio. The seminar had a significant effect on anxiety reduction for all three sub-scales. Conclusion Different media types were associated with different heightened concerns, and a radiation seminar was helpful in reducing anxieties in the post-disaster setting. By tailoring post-disaster messages via specific media types, e.g., radio, it may be possible to effectively convey important information, as well as to calm fears about particular elements of post-disaster recovery and to combat rumors. PMID:23967046

  11. The relationship between media consumption and health-related anxieties after the Fukushima Daiichi nuclear disaster.

    PubMed

    Sugimoto, Amina; Nomura, Shuhei; Tsubokura, Masaharu; Matsumura, Tomoko; Muto, Kaori; Sato, Mikiko; Gilmour, Stuart

    2013-01-01

    The Fukushima Daiichi nuclear disaster caused global panic through the release of harmful radionuclides. In a disaster setting, misuse of the contemporary media sources available today can lead to the dissemination of incorrect information and panic. The study aims to build a scale that examines associations between media consumption and individual anxieties, and to propose effective media usage for future disaster management. The University of Tokyo collaborated with the Fukushima local government to conduct radiation-health seminars for a total of 1560 residents at 12 different locations in Fukushima. A 13-item questionnaire, administered once before and once after a seminar, was analyzed by factor analysis to develop sub-scales, which were entered into multiple regression models to determine relationships between the sub-scales and the media types consumed. A paired t-test was used to examine changes in sub-scale scores between the pre- and post-seminar questionnaires. Three sub-scales were revealed, each associated with different media types: the first was associated with rumors, while concern for the future was positively associated with regional newspapers and negatively with national newspapers. Anxiety about social disruption was associated with radio. The seminar had a significant effect on anxiety reduction for all three sub-scales. Different media types were associated with different heightened concerns, and a radiation seminar was helpful in reducing anxieties in the post-disaster setting. By tailoring post-disaster messages via specific media types, e.g., radio, it may be possible to effectively convey important information, as well as to calm fears about particular elements of post-disaster recovery and to combat rumors.

  12. Interference techniques in fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Dogan, Mehmet

    We developed a set of interference-based optical microscopy techniques to study biological structures through nanometer-scale axial localization of fluorescent biomarkers. Spectral self-interference fluorescence microscopy (SSFM) utilizes interference of direct and reflected waves emitted from fluorescent molecules in the vicinity of planar reflectors to reveal the axial position of the molecules. A comprehensive calculation algorithm based on Green's function formalism is presented to verify the validity of approximations used in a far-field approach that describes the emission of fluorescent markers near interfaces. Using the validated model, theoretical limits of axial localization were determined, with emphasis given to the numerical aperture (NA) dependence of localization uncertainty. SSFM was experimentally demonstrated in conformational analysis of nucleoproteins. In particular, the interaction between surface-tethered 75-mer double-strand DNA and integration host factor (IHF) protein was probed on Si-SiO2 substrates by determining the axial position of fluorescent labels attached to the free ends of the DNA molecules. Despite its sub-nanometer axial localization precision, SSFM lacks high lateral resolution due to the low-NA requirement imposed by the planar reflectors. We developed a second technique, 4Pi-SSFM, which improves the lateral resolution of a conventional SSFM system by an order of magnitude while achieving nanometer-scale axial localization precision. Using two opposing high-NA objectives, the fluorescence signal is collected interferometrically and the spectral interference pattern is recorded. The axial position of the emitters is found from analysis of the spectra. The 4Pi-SSFM technique was experimentally demonstrated by determining the surface profiles of fabricated glass surfaces and the outer membranes of Shigella, a Gram-negative bacterium. A further discussion is presented on localizing the surface O antigen, an important oligosaccharide structure in the virulence mechanism of Gram-negative bacteria, including E. coli and Shigella.

  13. Scale Effects and Expected Savings from Consolidation Policies of Italian Local Healthcare Authorities.

    PubMed

    Di Novi, Cinzia; Rizzi, Dino; Zanette, Michele

    2018-02-01

    Consolidation is often considered by policymakers as a means to reduce service delivery costs and enhance accountability. The aim of this study was to estimate the potential cost savings that may be derived from the consolidation of local health authorities (LHAs), with specific reference to the Italian setting. For our empirical analysis, we use data on the costs of the LHAs as reported in the 2012 LHAs' Income Statements published within the New Health Information System (NSIS) by the Ministry of Health. In contrast to the previous literature on the consolidation of local health departments (LHDs), which is based on ex-post assessments of the impact of LHD consolidation on health spending, we use an ex-ante evaluation design and simulate the potential cost savings that may arise from the consolidation of LHAs. Our results show the existence of economies of scale for a particular subset of the production costs of LHAs, i.e. administrative costs together with the purchasing costs of goods (such as drugs and medical devices) as well as non-healthcare-related services. The findings of our paper provide practical insight into the concerns and challenges of LHA consolidations and may have important implications for NHS organisation and for the containment of public healthcare expenditure.

  14. High-Resolution Air Pollution Mapping with Google Street View Cars: Exploiting Big Data.

    PubMed

    Apte, Joshua S; Messier, Kyle P; Gani, Shahzad; Brauer, Michael; Kirchstetter, Thomas W; Lunden, Melissa M; Marshall, Julian D; Portier, Christopher J; Vermeulen, Roel C H; Hamburg, Steven P

    2017-06-20

    Air pollution affects billions of people worldwide, yet ambient pollution measurements are limited for much of the world. Urban air pollution concentrations vary sharply over short distances (≪1 km) owing to unevenly distributed emission sources, dilution, and physicochemical transformations. Accordingly, even where present, conventional fixed-site pollution monitoring methods lack the spatial resolution needed to characterize heterogeneous human exposures and localized pollution hotspots. Here, we demonstrate a measurement approach to reveal urban air pollution patterns at 4-5 orders of magnitude greater spatial precision than possible with current central-site ambient monitoring. We equipped Google Street View vehicles with a fast-response pollution measurement platform and repeatedly sampled every street in a 30-km² area of Oakland, CA, developing the largest urban air quality data set of its type. Resulting maps of annual daytime NO, NO2, and black carbon at 30-m scale reveal stable, persistent pollution patterns with surprisingly sharp small-scale variability attributable to local sources, up to 5-8× within individual city blocks. Since local variation in air quality profoundly impacts public health and environmental equity, our results have important implications for how air pollution is measured and managed. If validated elsewhere, this readily scalable measurement approach could address major air quality data gaps worldwide.

  15. Peridynamic Multiscale Finite Element Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Timothy; Bond, Stephen D.; Littlewood, David John

    The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The problem can be broken into component problems comprised of communicating across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between a) quantum electronic structure calculations and molecular dynamics and between b) molecular dynamics and local partial differential equation models at the design scale. The second step, b), is particularly challenging since the appropriate scales of molecular dynamics and local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid at a wide range of scales limiting from the classical partial differential equation models valid at the design scale to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method which solves the peridynamic model at multiple scales to include microscale information at the coarse scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse-scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method. Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics, there is a strong desire to couple local and nonlocal models to leverage the speed and state of the art of local models with the flexibility and accuracy of the nonlocal peridynamic model. In the mixed locality method this coupling occurs across scales, so that the nonlocal model can be used to communicate material heterogeneity at scales inappropriate to local partial differential equation models. Additionally, the computational burden of the weak form of the peridynamic model is reduced dramatically by only requiring that the model be solved on local patches of the simulation domain, which may be computed in parallel, taking advantage of the heterogeneous nature of next generation computing platforms. Additionally, we present a novel Galerkin framework, the 'Ambulant Galerkin Method', which represents a first step towards a unified mathematical analysis of local and nonlocal multiscale finite element methods, and whose future extension will allow the analysis of multiscale finite element methods that mix models across scales under certain assumptions of the consistency of those models.
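
    For context, the nonlocal model referred to above replaces the local divergence of stress with an integral over a neighborhood (horizon) of each material point. A standard statement of the bond-based peridynamic equation of motion, in the usual notation (not quoted from this report), is:

    ```latex
    \rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
      = \int_{H_{\mathbf{x}}}
        \mathbf{f}\big(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\;\mathbf{x}'-\mathbf{x}\big)\,
        dV_{\mathbf{x}'} \;+\; \mathbf{b}(\mathbf{x},t)
    ```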

  16. Intensive agriculture erodes β-diversity at large scales.

    PubMed

    Karp, Daniel S; Rominger, Andrew J; Zook, Jim; Ranganathan, Jai; Ehrlich, Paul R; Daily, Gretchen C

    2012-09-01

    Biodiversity is declining from unprecedented land conversions that replace diverse, low-intensity agriculture with vast expanses under homogeneous, intensive production. Despite documented losses of species richness, consequences for β-diversity, changes in community composition between sites, are largely unknown, especially in the tropics. Using a 10-year data set on Costa Rican birds, we find that low-intensity agriculture sustained β-diversity across large scales on a par with forest. In high-intensity agriculture, low local (α) diversity inflated β-diversity as a statistical artefact. Therefore, at small spatial scales, intensive agriculture appeared to retain β-diversity. Unlike in forest or low-intensity systems, however, high-intensity agriculture also homogenised vegetation structure over large distances, thereby decoupling the fundamental ecological pattern of bird communities changing with geographical distance. This ~40% decline in species turnover indicates a significant decline in β-diversity at large spatial scales. These findings point the way towards multi-functional agricultural systems that maintain agricultural productivity while simultaneously conserving biodiversity. © 2012 Blackwell Publishing Ltd/CNRS.

  17. Stability of knotted vortices in wave chaos

    NASA Astrophysics Data System (ADS)

    Taylor, Alexander; Dennis, Mark

    Large scale tangles of disordered filaments occur in many diverse physical systems, from turbulent superfluids to optical volume speckle to liquid crystal phases. They can exhibit particular large scale random statistics despite very different local physics. We have previously used the topological statistics of knotting and linking to characterise the large scale tangling, using the vortices of three-dimensional wave chaos as a universal model system whose physical lengthscales are set only by the wavelength. Unlike geometrical quantities, the statistics of knotting depend strongly on the physical system and boundary conditions. Although knotting patterns characterise different systems, the topology of vortices is highly unstable to perturbation, under which they may reconnect with one another. In systems of constructed knots, these reconnections generally rapidly destroy the knot, but for vortex tangles the topological statistics must be stable. Using large scale simulations of chaotic eigenfunctions, we numerically investigate the prevalence and impact of reconnection events, and their effect on the topology of the tangle.

  18. Large-scale functional models of visual cortex for remote sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex requiring ≈1 petaflop of computation. In a year, the retina delivers ≈1 petapixel to the brain, leading to massively large opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete' along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.

  19. Skeletonization of gray-scale images by gray weighted distance transform

    NASA Astrophysics Data System (ADS)

    Qian, Kai; Cao, Siqi; Bhattacharya, Prabir

    1997-07-01

    In pattern recognition, thinning algorithms are often a useful tool to represent a digital pattern by means of a skeletonized image, consisting of a set of one-pixel-width lines that highlight the significant features of the pattern. There has been interest in applying thinning directly to gray-scale images, motivated by the desire to process images characterized by meaningful information distributed over different levels of gray intensity. In this paper, a new algorithm is presented which can skeletonize both black-and-white and gray-scale pictures. This algorithm is based on the gray-weighted distance transformation, can be used to process gray-scale pictures with non-uniform intensity distributions, and preserves the topology of the original picture. The process includes a preliminary investigation of the 'hollows' in the gray-scale image; these hollows are treated, or not, as topological constraints for the skeleton structure depending on whether their depth is statistically significant. The algorithm can also be executed on a parallel machine, since all operations are local. Some examples are discussed to illustrate the algorithm.
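
    A minimal sketch of a gray-weighted distance transform, computed as a shortest-path (Dijkstra) cost from seed pixels in which each step is weighted by the gray value of the pixel entered; this illustrates the kind of transform the skeletonization builds on, under assumed 4-connectivity, and is not the authors' exact algorithm:

    ```python
    import heapq
    import numpy as np

    def gray_weighted_distance_transform(img, seeds):
        """Shortest-path cost from a set of seed pixels, where stepping onto a
        pixel costs its gray value (illustrative sketch, 4-connected grid)."""
        h, w = img.shape
        dist = np.full((h, w), np.inf)
        heap = []
        for (r, c) in seeds:
            dist[r, c] = float(img[r, c])
            heapq.heappush(heap, (dist[r, c], r, c))
        while heap:
            d, r, c = heapq.heappop(heap)
            if d > dist[r, c]:
                continue                       # stale heap entry
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    nd = d + float(img[rr, cc])
                    if nd < dist[rr, cc]:
                        dist[rr, cc] = nd
                        heapq.heappush(heap, (nd, rr, cc))
        return dist

    # toy usage: distances grow from the top-left corner, weighted by gray level
    img = np.array([[1, 2, 9], [1, 9, 9], [1, 1, 1]], dtype=float)
    print(gray_weighted_distance_transform(img, [(0, 0)]))
    ```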

  20. Critical Song Features for Auditory Pattern Recognition in Crickets

    PubMed Central

    Meckenhäuser, Gundula; Hennig, R. Matthias; Nawrot, Martin P.

    2013-01-01

    Many different invertebrate and vertebrate species use acoustic communication for pair formation. In the cricket Gryllus bimaculatus, females recognize their species-specific calling song and localize singing males by positive phonotaxis. The song pattern of males has a clear structure consisting of brief and regular pulses that are grouped into repetitive chirps. Information is thus present on a short and a long time scale. Here, we ask which structural features of the song critically determine the phonotactic performance. To this end we employed artificial neural networks to analyze a large body of behavioral data that measured females’ phonotactic behavior under systematic variation of artificially generated song patterns. In a first step we used four non-redundant descriptive temporal features to predict the female response. The model prediction showed a high correlation with the experimental results. We used this behavioral model to explore the integration of the two different time scales. Our result suggested that only an attractive pulse structure in combination with an attractive chirp structure reliably induced phonotactic behavior to signals. In a further step we investigated all feature sets, each one consisting of a different combination of eight proposed temporal features. We identified feature sets of size two, three, and four that achieve highest prediction power by using the pulse period from the short time scale plus additional information from the long time scale. PMID:23437054

  1. Analysis and interpretation of the 1985 Sequoia transport experiment. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myrup, L.; Flocchini, R.

    1987-10-01

    An analysis and interpretation is presented of the 1985 Aerosol Transport and Characterization Program at Sequoia National Park, sponsored by the California Air Resources Board. Overall, it was found that the Program produced unique data sets and interesting new results relating particulate air quality and meteorology in the context of complex terrain. The major conclusion is that the meso-scale wind field, as modulated by synoptic-scale fluctuations, is the chief factor acting to cause variation in particulate concentrations in the Park. Areas for future work are discussed. In addition, it was recommended that in future measurement programs, greater effort be made to locate sites completely unaffected by local sources of pollutants.

  2. Hierarchical coarse-graining transform.

    PubMed

    Pancaldi, Vera; King, Peter R; Christensen, Kim

    2009-03-01

    We present a hierarchical transform that can be applied to Laplace-like differential equations such as Darcy's equation for single-phase flow in a porous medium. A finite-difference discretization scheme is used to set the equation in the form of an eigenvalue problem. Within the formalism suggested, the pressure field is decomposed into an average value and fluctuations of different kinds and at different scales. The application of the transform to the equation allows us to calculate the unknown pressure with a varying level of detail. A procedure is suggested to localize important features in the pressure field based only on the fine-scale permeability, and hence we develop a form of adaptive coarse graining. The formalism and method are described and demonstrated using two synthetic toy problems.
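
    As a minimal illustration of the finite-difference eigenvalue setting described above, the following sketch (a toy 1D discretization assumed for illustration, not the authors' code) assembles the operator for d/dx( k(x) dp/dx ) = 0 from fine-scale permeabilities and shows that the zero-eigenvalue mode is the constant (average) pressure, while the remaining eigenvectors describe fluctuations of different kinds and scales:

    ```python
    import numpy as np

    def darcy_matrix(k_link):
        """Finite-difference operator for 1D single-phase Darcy flow with unit grid
        spacing and no-flow ends, assembled from link (inter-node) permeabilities
        (illustrative sketch; the paper's discretization may differ)."""
        n = len(k_link) + 1
        A = np.zeros((n, n))
        for i, k in enumerate(k_link):       # flux between nodes i and i+1
            A[i, i]         -= k
            A[i, i + 1]     += k
            A[i + 1, i + 1] -= k
            A[i + 1, i]     += k
        return A

    # Setting the equation as an eigenvalue problem: the eigenvector with eigenvalue
    # zero is the constant (average-pressure) mode; the others carry fluctuations.
    k_link = np.array([1.0, 0.5, 2.0, 1.0, 0.1])   # toy fine-scale permeabilities
    eigvals, eigvecs = np.linalg.eigh(darcy_matrix(k_link))
    print(np.round(eigvals, 3))                    # one ~0 eigenvalue appears
    ```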

  3. Designing at Scale: Lessons in Relevance, Quality, and Equity from ChangeScale, a Bay Area environmental education collaborative

    NASA Astrophysics Data System (ADS)

    Babcock, E.

    2015-12-01

    The best environmental education equips people with the know-how and drive to create healthy communities and a healthy planet. While there are many wonderful organizations providing environmental learning, ensuring quality, cultural relevance and equity of access remains an elusive goal, especially if environmental education organizations work in isolation. Organizations across 12 counties in the Bay Area have come together to create a different model. They have founded ChangeScale, a regional collaborative dedicated to providing high-quality environmental education to hundreds of thousands of youth by working together. ChangeScale's work involves setting up school district-level partnerships, providing technical assistance to local environmental education networks, and training environmental educators across the region. In this talk, the presenter, who is a founding member and steering committee chair for ChangeScale, will outline the challenges of working at a regional scale with dozens of organizations. She will share the processes ChangeScale has used to develop a business plan and build membership. She will conclude by sharing the short-term and long-term potential impacts of working collectively for environmental literacy in the Bay Area.

  4. Entropic Barriers for Two-Dimensional Quantum Memories

    NASA Astrophysics Data System (ADS)

    Brown, Benjamin J.; Al-Shimary, Abbas; Pachos, Jiannis K.

    2014-03-01

    Comprehensive no-go theorems show that information encoded over local two-dimensional topologically ordered systems cannot support macroscopic energy barriers, and hence will not maintain stable quantum information at finite temperatures for macroscopic time scales. However, it is still well motivated to study low-dimensional quantum memories due to their experimental amenability. Here we introduce a grid of defect lines to Kitaev's quantum double model where different anyonic excitations carry different masses. This setting produces a complex energy landscape which entropically suppresses the diffusion of excitations that cause logical errors. We show numerically that entropically suppressed errors give rise to superexponential inverse temperature scaling and polynomial system size scaling for small system sizes over a low-temperature regime. Curiously, these entropic effects are not present below a certain low temperature. We show that we can vary the system to modify this bound and potentially extend the described effects to zero temperature.

  5. Disturbance alters local-regional richness relationships in Appalachian forests

    USGS Publications Warehouse

    Belote, R.T.; Sanders, N.J.; Jones, R.H.

    2009-01-01

    Whether biological diversity within communities is limited by local interactions or regional species pools remains an important question in ecology. In this paper, we investigate how an experimentally applied tree-harvesting disturbance gradient influenced local-regional richness relationships. Plant species richness was measured at three spatial scales (2 ha = regional; 576 m2 and 1 m2 = local) on three occasions (one year pre-disturbance, one year post-disturbance, and 10 years post-disturbance) across five disturbance treatments (uncut control through clearcut) replicated throughout the southern Appalachian Mountains, USA. We investigated whether species richness in 576-m2 plots and 1-m2 subplots depended on species richness in 2-ha experimental units and whether this relationship changed through time before and after canopy disturbance. We found that, before disturbance, the relationship between local and regional richness was weak or nonexistent. One year after disturbance local richness was a positive function of regional richness, because local sites were colonized from the regional species pool. Ten years after disturbance, the positive relationship persisted, but the slope had decreased by half. These results suggest that disturbance can set the stage for strong influences of regional species pools on local community assembly in temperate forests. However, as time since disturbance increases, local controls on community assembly decouple the relationships between regional and local diversity. © 2009 by the Ecological Society of America.

  6. Recent Trends in Local-Scale Marine Biodiversity Reflect Community Structure and Human Impacts.

    PubMed

    Elahi, Robin; O'Connor, Mary I; Byrnes, Jarrett E K; Dunic, Jillian; Eriksson, Britas Klemens; Hensel, Marc J S; Kearns, Patrick J

    2015-07-20

    The modern biodiversity crisis reflects global extinctions and local introductions. Human activities have dramatically altered rates and scales of processes that regulate biodiversity at local scales. Reconciling the threat of global biodiversity loss with recent evidence of stability at fine spatial scales is a major challenge and requires a nuanced approach to biodiversity change that integrates ecological understanding. With a new dataset of 471 diversity time series spanning from 1962 to 2015 from marine coastal ecosystems, we tested (1) whether biodiversity changed at local scales in recent decades, and (2) whether we can ignore ecological context (e.g., proximate human impacts, trophic level, spatial scale) and still make informative inferences regarding local change. We detected a predominant signal of increasing species richness in coastal systems since 1962 in our dataset, though net species loss was associated with localized effects of anthropogenic impacts. Our geographically extensive dataset is unlikely to be a random sample of marine coastal habitats; impacted sites (3% of our time series) were underrepresented relative to their global presence. These local-scale patterns do not contradict the prospect of accelerating global extinctions but are consistent with local species loss in areas with direct human impacts and increases in diversity due to invasions and range expansions in lower impact areas. Attempts to detect and understand local biodiversity trends are incomplete without information on local human activities and ecological context. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Enhancing ecosystem restoration efficiency through spatial and temporal coordination.

    PubMed

    Neeson, Thomas M; Ferris, Michael C; Diebel, Matthew W; Doran, Patrick J; O'Hanley, Jesse R; McIntyre, Peter B

    2015-05-12

    In many large ecosystems, conservation projects are selected by a diverse set of actors operating independently at spatial scales ranging from local to international. Although small-scale decision making can leverage local expert knowledge, it also may be an inefficient means of achieving large-scale objectives if piecemeal efforts are poorly coordinated. Here, we assess the value of coordinating efforts in both space and time to maximize the restoration of aquatic ecosystem connectivity. Habitat fragmentation is a leading driver of declining biodiversity and ecosystem services in rivers worldwide, and we simultaneously evaluate optimal barrier removal strategies for 661 tributary rivers of the Laurentian Great Lakes, which are fragmented by at least 6,692 dams and 232,068 road crossings. We find that coordinating barrier removals across the entire basin is nine times more efficient at reconnecting fish to headwater breeding grounds than optimizing independently for each watershed. Similarly, a one-time pulse of restoration investment is up to 10 times more efficient than annual allocations totaling the same amount. Despite widespread emphasis on dams as key barriers in river networks, improving road culvert passability is also essential for efficiently restoring connectivity to the Great Lakes. Our results highlight the dramatic economic and ecological advantages of coordinating efforts in both space and time during restoration of large ecosystems.

  8. Effects of local extinction on mixture fraction and scalar dissipation statistics in turbulent nonpremixed flames

    NASA Astrophysics Data System (ADS)

    Attili, Antonio; Bisetti, Fabrizio

    2015-11-01

    Passive scalar and scalar dissipation statistics are investigated in a set of flames achieving a Taylor-scale Reynolds number in the range 100 ≤ Re_λ ≤ 150 [Attili et al. Comb. Flame 161, 2014; Attili et al. Proc. Comb. Inst. 35, 2015]. The three flames simulated show an increasing level of extinction due to the decrease of the Damköhler number. In the case of negligible extinction, the non-dimensional scalar dissipation is expected to be the same in the three cases. In the present case, the deviations from the aforementioned self-similarity manifest themselves as a decrease of the non-dimensional scalar dissipation for increasing levels of local extinction, in agreement with recent experiments [Karpetis and Barlow Proc. Comb. Inst. 30, 2005; Sutton and Driscoll Combust. Flame 160, 2013]. This is caused by the decrease of molecular diffusion due to the lower temperature in the low Damköhler number cases. Probability density functions of the scalar dissipation χ show rather strong deviations from the log-normal distribution. The left tail of the pdf scales as χ^(1/2), while the right tail scales as exp(-cχ^α), in agreement with results for incompressible turbulence [Schumacher et al. J. Fluid Mech. 531, 2005].

  9. Geometric stabilization of the electrostatic ion-temperature-gradient driven instability. I. Nearly axisymmetric systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zocco, A.; Plunk, G. G.; Xanthopoulos, P.

    The effects of a non-axisymmetric (3D) equilibrium magnetic field on the linear ion-temperature-gradient (ITG) driven mode are investigated. We consider the strongly driven, toroidal branch of the instability in a global (on the magnetic surface) setting. Previous studies have focused on particular features of non-axisymmetric systems, such as strong local shear or magnetic ripple, that introduce inhomogeneity in the coordinate along the magnetic field. In contrast, here we include non-axisymmetry explicitly via the dependence of the magnetic drift on the field line label α, i.e., across the magnetic field, but within the magnetic flux surface. We consider the limit where this variation occurs on a scale much larger than that of the ITG mode, and also the case where these scales are similar. Close to axisymmetry, we find that an averaging effect of the magnetic drift on the flux surface causes global (on the surface) stabilization, as compared to the most unstable local mode. In the absence of scale separation, we find destabilization is also possible, but only if a particular resonance occurs between the magnetic drift and the mode, and finite Larmor radius effects are neglected. We discuss the relative importance of surface global effects and known radially global effects.

  10. Enhancing ecosystem restoration efficiency through spatial and temporal coordination

    PubMed Central

    Neeson, Thomas M.; Ferris, Michael C.; Diebel, Matthew W.; Doran, Patrick J.; O’Hanley, Jesse R.; McIntyre, Peter B.

    2015-01-01

    In many large ecosystems, conservation projects are selected by a diverse set of actors operating independently at spatial scales ranging from local to international. Although small-scale decision making can leverage local expert knowledge, it also may be an inefficient means of achieving large-scale objectives if piecemeal efforts are poorly coordinated. Here, we assess the value of coordinating efforts in both space and time to maximize the restoration of aquatic ecosystem connectivity. Habitat fragmentation is a leading driver of declining biodiversity and ecosystem services in rivers worldwide, and we simultaneously evaluate optimal barrier removal strategies for 661 tributary rivers of the Laurentian Great Lakes, which are fragmented by at least 6,692 dams and 232,068 road crossings. We find that coordinating barrier removals across the entire basin is nine times more efficient at reconnecting fish to headwater breeding grounds than optimizing independently for each watershed. Similarly, a one-time pulse of restoration investment is up to 10 times more efficient than annual allocations totaling the same amount. Despite widespread emphasis on dams as key barriers in river networks, improving road culvert passability is also essential for efficiently restoring connectivity to the Great Lakes. Our results highlight the dramatic economic and ecological advantages of coordinating efforts in both space and time during restoration of large ecosystems. PMID:25918378

  11. Discriminating strength: a bona fide measure of non-classical correlations

    NASA Astrophysics Data System (ADS)

    Farace, A.; De Pasquale, A.; Rigovacca, L.; Giovannetti, V.

    2014-07-01

    A new measure of non-classical correlations is introduced and characterized. It tests the ability of using a state ρ of a composite system AB as a probe for a quantum illumination task (e.g. see Lloyd 2008 Science 321 1463), in which one is asked to remotely discriminate between the two following scenarios: (i) either nothing happens to the probe, or (ii) the subsystem A is transformed via a local unitary R_A whose properties are partially unspecified when producing ρ. This new measure can be seen as the discrete version of the recently introduced interferometric power measure (Girolami et al 2013 e-print arXiv:1309.1472) and, at least for the case in which A is a qubit, it is shown to coincide (up to an irrelevant scaling factor) with the local quantum uncertainty measure of Girolami, Tufarelli and Adesso (2013 Phys. Rev. Lett. 110 240402). Analytical expressions are derived which allow us to formally prove that, within the set of separable configurations, the maximum value of our non-classicality measure is achieved over the set of quantum-classical states (i.e. states ρ which admit a statistical unravelling where each element of the associated ensemble is distinguishable via local measures on B).

  12. Synthesis of Joint Volumes, Visualization of Paths, and Revision of Viewing Sequences in a Multi-dimensional Seismic Data Viewer

    NASA Astrophysics Data System (ADS)

    Chen, D. M.; Clapp, R. G.; Biondi, B.

    2006-12-01

    Ricksep is a freely-available interactive viewer for multi-dimensional data sets. The viewer is very useful for simultaneous display of multiple data sets from different viewing angles, animation of movement along a path through the data space, and selection of local regions for data processing and information extraction. Several new viewing features are added to enhance the program's functionality in the following three aspects. First, two new data synthesis algorithms are created to adaptively combine information from a data set with mostly high-frequency content, such as seismic data, and another data set with mainly low-frequency content, such as velocity data. Using the algorithms, these two data sets can be synthesized into a single data set which resembles the high-frequency data set on a local scale and at the same time resembles the low-frequency data set on a larger scale. As a result, the originally separated high- and low-frequency details can now be more accurately and conveniently studied together. Second, a projection algorithm is developed to display paths through the data space. Paths are geophysically important because they represent wells into the ground. Two difficulties often associated with tracking paths are that they normally cannot be seen clearly inside multi-dimensional spaces and that depth information is lost along the direction of projection when ordinary projection techniques are used. The new algorithm projects samples along the path in three orthogonal directions and effectively restores important depth information by using variable projection parameters which are functions of the distance away from the path. Multiple paths in the data space can be generated using different character symbols as positional markers, and users can easily create, modify, and view paths in real time. Third, a viewing history list is implemented which enables Ricksep's users to create, edit and save a recipe for the sequence of viewing states. Then, the recipe can be loaded into an active Ricksep session, after which the user can navigate to any state in the sequence and modify the sequence from that state. Typical uses of this feature are undoing and redoing viewing commands and animating a sequence of viewing states. A theoretical discussion is carried out, and several examples using real seismic data are provided to show how these new Ricksep features provide more convenient, accurate ways to manipulate multi-dimensional data sets.
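
    As an illustration of the first feature (adaptive combination of a high-frequency and a low-frequency data set), the following sketch blends the large-scale trend of one trace with the local detail of another using complementary boxcar filters; this is a simplification of the idea under assumed 1D, co-registered inputs, not Ricksep's actual synthesis algorithms:

    ```python
    import numpy as np

    def moving_average(x, width):
        """Simple boxcar low-pass filter (illustrative)."""
        kernel = np.ones(width) / width
        return np.convolve(x, kernel, mode="same")

    def synthesize(high_freq_data, low_freq_data, width=11):
        """Combine two co-registered traces so the result follows the low-frequency
        data set at large scale and the high-frequency data set locally
        (a sketch of the idea, not Ricksep's algorithms)."""
        trend = moving_average(low_freq_data, width)                       # large-scale part
        detail = high_freq_data - moving_average(high_freq_data, width)    # local part
        return trend + detail

    # toy usage on 1D traces; real seismic and velocity volumes would be 3D
    x = np.linspace(0, 10, 200)
    low = 1500 + 50 * x                      # smooth, velocity-like trend
    high = low + 100 * np.sin(40 * x)        # oscillatory, seismic-like detail
    print(synthesize(high, low).shape)       # (200,)
    ```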

  13. Quantifying Diffuse Contamination: Method and Application to Pb in Soil.

    PubMed

    Fabian, Karl; Reimann, Clemens; de Caritat, Patrice

    2017-06-20

    A new method for detecting and quantifying diffuse contamination at the continental to regional scale is based on the analysis of cumulative distribution functions (CDFs). It uses cumulative probability (CP) plots for spatially representative data sets, preferably containing >1000 determinations. Simulations demonstrate how different types of contamination influence elemental CDFs of different sample media. It is found that diffuse contamination is characterized by a distinctive shift of the low-concentration end of the distribution of the studied element in its CP plot. Diffuse contamination can be detected and quantified via either (1) comparing the distribution of the contaminating element to that of an element with a geochemically comparable behavior but no contamination source (e.g., Pb vs Rb), or (2) comparing the top soil distribution of an element to the distribution of the same element in subsoil samples from the same area, taking soil forming processes into consideration. Both procedures are demonstrated for geochemical soil data sets from Europe, Australia, and the U.S.A. Several different data sets from Europe deliver comparable results at different scales. Diffuse Pb contamination in surface soil is estimated to be <0.5 mg/kg for Australia, 1-3 mg/kg for Europe, and 1-2 mg/kg, or at least <5 mg/kg, for the U.S.A. The analysis presented here also allows recognition of local contamination sources and can be used to efficiently monitor diffuse contamination at the continental to regional scale.
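
    A minimal sketch of procedure (1), building a cumulative probability plot and comparing the low-concentration end of a target element's distribution against that of a geochemically comparable reference element; the element names, the median rescaling step, and the 5% quantile are illustrative assumptions, not the paper's exact procedure:

    ```python
    import numpy as np

    def cumulative_probability(values):
        """Sorted values and their empirical cumulative probabilities (CP plot)."""
        x = np.sort(np.asarray(values, dtype=float))
        p = (np.arange(1, len(x) + 1) - 0.5) / len(x)
        return x, p

    def low_end_shift(target, reference, quantile=0.05):
        """Crude indicator of diffuse contamination: shift of the low-concentration
        end of the target element relative to a rescaled reference element
        (illustrative sketch only)."""
        scale = np.median(target) / np.median(reference)   # match the medians
        return np.quantile(target, quantile) - scale * np.quantile(reference, quantile)

    # toy usage: lognormal "Pb" with a small additive diffuse component vs "Rb"
    rng = np.random.default_rng(1)
    rb = rng.lognormal(mean=3.0, sigma=0.6, size=2000)
    pb = rng.lognormal(mean=1.5, sigma=0.6, size=2000) + 2.0   # +2 mg/kg everywhere
    conc, prob = cumulative_probability(pb)                    # points of the CP plot
    print(round(low_end_shift(pb, rb), 2))   # positive shift suggests a diffuse input
    ```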

  14. Role of local network oscillations in resting-state functional connectivity.

    PubMed

    Cabral, Joana; Hugues, Etienne; Sporns, Olaf; Deco, Gustavo

    2011-07-01

    Spatio-temporally organized low-frequency fluctuations (<0.1 Hz), observed in BOLD fMRI signal during rest, suggest the existence of underlying network dynamics that emerge spontaneously from intrinsic brain processes. Furthermore, significant correlations between distinct anatomical regions-or functional connectivity (FC)-have led to the identification of several widely distributed resting-state networks (RSNs). This slow dynamics seems to be highly structured by anatomical connectivity but the mechanism behind it and its relationship with neural activity, particularly in the gamma frequency range, remains largely unknown. Indeed, direct measurements of neuronal activity have revealed similar large-scale correlations, particularly in slow power fluctuations of local field potential gamma frequency range oscillations. To address these questions, we investigated neural dynamics in a large-scale model of the human brain's neural activity. A key ingredient of the model was a structural brain network defined by empirically derived long-range brain connectivity together with the corresponding conduction delays. A neural population, assumed to spontaneously oscillate in the gamma frequency range, was placed at each network node. When these oscillatory units are integrated in the network, they behave as weakly coupled oscillators. The time-delayed interaction between nodes is described by the Kuramoto model of phase oscillators, a biologically-based model of coupled oscillatory systems. For a realistic setting of axonal conduction speed, we show that time-delayed network interaction leads to the emergence of slow neural activity fluctuations, whose patterns correlate significantly with the empirically measured FC. The best agreement of the simulated FC with the empirically measured FC is found for a set of parameters where subsets of nodes tend to synchronize although the network is not globally synchronized. Inside such clusters, the simulated BOLD signal between nodes is found to be correlated, instantiating the empirically observed RSNs. Between clusters, patterns of positive and negative correlations are observed, as described in experimental studies. These results are found to be robust with respect to a biologically plausible range of model parameters. In conclusion, our model suggests how resting-state neural activity can originate from the interplay between the local neural dynamics and the large-scale structure of the brain. Copyright © 2011 Elsevier Inc. All rights reserved.
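
    A minimal sketch of the node dynamics described above, i.e., time-delayed Kuramoto phase oscillators integrated with a simple Euler scheme and a phase-history buffer; the coupling matrix, delays, and frequencies below are arbitrary toy values rather than the empirically derived connectivity and conduction delays used in the paper:

    ```python
    import numpy as np

    def simulate_delayed_kuramoto(C, delays, omega, K=1.0, dt=1e-3, steps=20000):
        """Euler integration of time-delayed Kuramoto phase oscillators:
            dtheta_i/dt = omega_i + K * sum_j C_ij * sin(theta_j(t - tau_ij) - theta_i(t))
        C: coupling matrix, delays: tau_ij in seconds (illustrative sketch)."""
        n = len(omega)
        lag = np.round(delays / dt).astype(int)          # delays in time steps
        theta = 2 * np.pi * np.random.rand(n)
        hist = np.tile(theta, (lag.max() + 1, 1))        # circular phase-history buffer
        out = np.empty((steps, n))
        for t in range(steps):
            hist[t % hist.shape[0]] = theta              # store theta(t)
            idx = (t - lag) % hist.shape[0]              # delayed buffer indices, per pair
            delayed = hist[idx, np.arange(n)[None, :]]   # theta_j(t - tau_ij)
            coupling = (C * np.sin(delayed - theta[:, None])).sum(axis=1)
            theta = theta + dt * (omega + K * coupling)
            out[t] = theta
        return out

    # toy usage: 4 oscillators at ~60 Hz (gamma range), random coupling and 5-20 ms delays
    rng = np.random.default_rng(2)
    n = 4
    C = rng.uniform(0, 1, (n, n))
    np.fill_diagonal(C, 0)
    delays = rng.uniform(0.005, 0.02, (n, n))
    omega = 2 * np.pi * 60 * np.ones(n)
    phases = simulate_delayed_kuramoto(C, delays, omega, K=5.0)
    print(phases.shape)   # (20000, 4)
    ```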

  15. The air quality and health co-benefits of alternative post-2020 pathways for achieving peak carbon targets in Jiangsu, China

    NASA Astrophysics Data System (ADS)

    Liu, M.; Bi, J.; Huang, Y.; Kinney, P. L.

    2016-12-01

    Jiangsu, which has three national low-carbon pilot cities, is set to be a model province in China for achieving peak carbon targets before 2030. However, according to local planning for responding to climate change, carbon emissions are projected to keep rising before 2020 even if the strictest measures are implemented. In other words, innovative measures must be put in action after 2020. This work aims at assessing the air quality and health co-benefits of alternative post-2020 measures, to help remove barriers to policy implementation by tying it to local incentives for air quality improvement. To achieve this aim, we select 2010 as the baseline year and develop Business As Usual (BAU) and Traditional Carbon Reduction (TCR) scenarios before 2020. Under BAU, only existing climate and air pollution control policies are considered; under TCR, potential climate policies in local planning and existing air pollution control policies are considered. After 2020, integrated gasification combined cycle (IGCC) plants with carbon capture and storage (CCS) technology and large-scale substitution of renewable energy appear to be two promising pathways for achieving peak carbon targets. Therefore, two additional scenarios (TCR-IGCC and TCR-SRE) are set after 2020. Based on projections of future energy balances and industrial production, we estimate pollutant emissions and simulate PM2.5 and ozone concentrations for 2017, 2020, 2030 and 2050 using CMAQ. Then, using a health impact assessment approach, premature deaths are estimated and monetized. Results show that the carbon peak in Jiangsu will be achieved before 2030 only under the TCR-IGCC and TCR-SRE scenarios. Under the three policy scenarios, Jiangsu's carbon emission control targets would have substantial effects on primary air pollutant emissions, far beyond those we estimate would be needed to meet the PM2.5 concentration targets in 2017. Compared with IGCC with CCS, large-scale substitution of renewable energy brings comparable pollutant emission reductions but greater health benefits, because it reduces more emissions from traffic sources, which are more harmful to health. However, large-scale substitution of renewable energy poses challenges for energy supply capacity, which need to be seriously considered in future policy decisions.

  16. DD-HDS: A method for visualization and exploration of high-dimensional data.

    PubMed

    Lespinats, Sylvain; Verleysen, Michel; Giron, Alain; Fertil, Bernard

    2007-09-01

    Mapping high-dimensional data in a low-dimensional space, for example, for visualization, is a problem of increasingly major concern in data analysis. This paper presents data-driven high-dimensional scaling (DD-HDS), a nonlinear mapping method that follows the line of multidimensional scaling (MDS) approach, based on the preservation of distances between pairs of data. It improves the performance of existing competitors with respect to the representation of high-dimensional data, in two ways. It introduces (1) a specific weighting of distances between data taking into account the concentration of measure phenomenon and (2) a symmetric handling of short distances in the original and output spaces, avoiding false neighbor representations while still allowing some necessary tears in the original distribution. More precisely, the weighting is set according to the effective distribution of distances in the data set, with the exception of a single user-defined parameter setting the tradeoff between local neighborhood preservation and global mapping. The optimization of the stress criterion designed for the mapping is realized by "force-directed placement" (FDP). The mappings of low- and high-dimensional data sets are presented as illustrations of the features and advantages of the proposed algorithm. The weighting function specific to high-dimensional data and the symmetric handling of short distances can be easily incorporated in most distance preservation-based nonlinear dimensionality reduction methods.
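
    As a rough sketch of the distance-preservation idea with emphasis on short distances in either space, the following toy stress function weights each pair by how short its distance is in the original or the output space; the actual DD-HDS weighting, its single tradeoff parameter, and the force-directed placement optimizer are more elaborate than this:

    ```python
    import numpy as np

    def weighted_stress(D_high, D_low, sigma=1.0):
        """Distance-preservation stress emphasizing short distances in either space,
        so that false neighbors and torn neighborhoods are both penalized
        (illustrative sketch in the spirit of DD-HDS, not its exact criterion)."""
        w = np.exp(-np.minimum(D_high, D_low) ** 2 / (2 * sigma ** 2))
        return np.sum(w * (D_high - D_low) ** 2)

    # toy usage with pairwise distance matrices of random high- and low-dimensional points
    rng = np.random.default_rng(3)
    X = rng.normal(size=(50, 10))
    Y = rng.normal(size=(50, 2))
    D_high = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    D_low = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    print(weighted_stress(D_high, D_low))
    ```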

  17. Cultural meanings of nature: an analysis of contemporary motion pictures.

    PubMed

    Pollio, Howard R; Anderson, John; Levasseur, Priscilla; Thweatt, Michael

    2003-03-01

    To evaluate current cultural meanings of nature, the authors asked 65 undergraduate students to "list 3 movies in which nature was an important aspect of the film." They also were asked to specify the natural element that stood out to them and describe "how it related to the overall theme of the movie." Two independent groups of raters skilled in interpretive analysis developed thematic meanings from these responses. Following this, in a 2nd study, a different group of participants rated the 18 most frequently mentioned natural elements on thematic scales derived from the initial interpretive analysis. Participants in the 1st study mentioned 33 different movies at least twice and 5 themes that captured the meaning of nature in these films. Correlational results derived from the 2nd study indicated that rating scales reflecting these 5 themes formed 2 distinct groups; the first group described settings in which nature is experienced as adversarial and plays a significant role in dramatic action, and the second group defined settings in which nature is viewed either as a place of refuge or simply as a locale in which ongoing narrative action occurs. The general conclusion reached in both studies concerns the often noted but not always appreciated fact that movies--similar to everyday events and actions--always take place in specific settings and that neither life events nor dramatic narratives can be understood apart from specific settings.

  18. A new test procedure to evaluate the performance of substations for collective heating systems

    NASA Astrophysics Data System (ADS)

    Baetens, Robin; Verhaert, Ivan

    2017-11-01

    The overall heat demand of a single dwelling, consisting of space heating and domestic hot water production, decreases as insulation rates rise. Because of this, investing in efficient and renewable heat generation at the dwelling level becomes less attractive. Therefore, to incorporate renewables or residual heat on a larger scale, district heating and collective heating systems grow in importance. Within this set-up, the substation is responsible for the interaction between the local demand for comfort and the overall energy performance of the collective heating system. Many different configurations of substations exist, which influence both local comfort and central system performance. In addition, hybrid substations with additional local energy input exist. To evaluate the performance of such substations, a new experiment-based test procedure is developed to evaluate these different aspects, characterized by the two roles a substation has, namely as heat generator and as heat consumer. The advantage of this approach is that an objective comparison between individual and central systems regarding their performance in delivering local comfort can be carried out experimentally. The lab set-up consists of three different subsystems, namely the central system, the domestic hot water consumption and the local space heating. The central system can work with different temperature regimes and control strategies, as these aspects have proven to have the largest influence on actual performance. The domestic hot water system is able to generate tap profiles according to the eco-design regulation for domestic hot water generation. The space heating system is able to demand a modular heat load.

  19. Weakly Supervised Segmentation-Aided Classification of Urban Scenes from 3d LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Guinard, S.; Landrieu, L.

    2017-05-01

    We consider the problem of the semantic classification of 3D LiDAR point clouds obtained from urban scenes when the training set is limited. We propose a non-parametric segmentation model for urban scenes composed of anthropic objects of simple shapes, partitioning the scene into geometrically-homogeneous segments whose size is determined by the local complexity. This segmentation can be integrated into a conditional random field classifier (CRF) in order to capture the high-level structure of the scene. For each cluster, this allows us to aggregate the noisy predictions of a weakly-supervised classifier to produce a higher-confidence data term. We demonstrate the improvement provided by our method on two publicly-available large-scale data sets.
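
    A minimal sketch of the aggregation step, averaging noisy per-point class probabilities within each geometric segment and assigning the resulting majority class to all of its points; the paper instead feeds segment-level terms into a CRF, and the function and variable names here are illustrative:

    ```python
    import numpy as np

    def aggregate_predictions(segment_ids, point_probs):
        """Average per-point class probabilities within each segment and assign the
        winning class to all points of that segment (illustrative hard-assignment
        sketch; not the CRF data term of the paper)."""
        segment_ids = np.asarray(segment_ids)
        labels = np.empty(len(segment_ids), dtype=int)
        for seg in np.unique(segment_ids):
            mask = segment_ids == seg
            labels[mask] = point_probs[mask].mean(axis=0).argmax()
        return labels

    # toy usage: 6 points in 2 segments, 3 classes
    probs = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.7, 0.2, 0.1],
                      [0.1, 0.1, 0.8], [0.2, 0.3, 0.5], [0.3, 0.4, 0.3]])
    print(aggregate_predictions([0, 0, 0, 1, 1, 1], probs))   # [0 0 0 2 2 2]
    ```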

  20. High performance computing to support multiscale representation of hydrography for the conterminous United States

    USGS Publications Warehouse

    Stanislawski, Larry V.; Liu, Yan; Buttenfield, Barbara P.; Survila, Kornelijus; Wendel, Jeffrey; Okok, Abdurraouf

    2016-01-01

    The National Hydrography Dataset (NHD) for the United States furnishes a comprehensive set of vector features representing the surface waters in the country (U.S. Geological Survey 2000). The high-resolution (HR) layer of the NHD is largely composed of hydrographic features originally derived from 1:24,000-scale (24K) U.S. topographic maps. However, in recent years (2009 to present), densified hydrographic feature content from sources at scales as large as 1:2,400 has been incorporated into some watersheds of the HR NHD within the conterminous United States to better support the needs of various local and state organizations. As such, the HR NHD is a multiresolution dataset with obvious data density variations because of scale changes. In addition, data density variations exist within the HR NHD that are particularly evident in the surface-water flow network (NHD flowlines) because of natural variations in local geographic conditions, and also because of unintentional compilation inconsistencies due to variations in data collection standards and climate conditions over the many years of 24K hydrographic data collection (U.S. Geological Survey 1955).

  1. Large area sub-micron chemical imaging of magnesium in sea urchin teeth.

    PubMed

    Masic, Admir; Weaver, James C

    2015-03-01

    The heterogeneous and site-specific incorporation of inorganic ions can profoundly influence the local mechanical properties of damage tolerant biological composites. Using the sea urchin tooth as a research model, we describe a multi-technique approach to spatially map the distribution of magnesium in this complex multiphase system. Through the combined use of 16-bit backscattered scanning electron microscopy, multi-channel energy dispersive spectroscopy elemental mapping, and diffraction-limited confocal Raman spectroscopy, we demonstrate a new set of high throughput, multi-spectral, high resolution methods for the large scale characterization of mineralized biological materials. In addition, instrument hardware and data collection protocols can be modified such that several of these measurements can be performed on irregularly shaped samples with complex surface geometries and without the need for extensive sample preparation. Using these approaches, in conjunction with whole animal micro-computed tomography studies, we have been able to spatially resolve micron and sub-micron structural features across macroscopic length scales on entire urchin tooth cross-sections and correlate these complex morphological features with local variability in elemental composition. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Quantification of the Early Small-Scale Fishery in the North-Eastern Baltic Sea in the Late 17th Century

    PubMed Central

    Verliin, Aare; Ojaveer, Henn; Kaju, Katre; Tammiksaar, Erki

    2013-01-01

    Historical perspectives on fisheries and related human behaviour provide valuable information on fishery resources and their exploitation, helping to more appropriately set management targets and determine relevant reference levels. In this study we analyse historical fisheries and fish trade at the north-eastern Baltic Sea coast in the late 17th century. Local consumption and export together amounted to the annual removal of about 200 tonnes of fish from the nearby sea and freshwater bodies. The fishery was very diverse and exploited altogether one cyclostome and 17 fish species with over 90% of the catch being consumed locally. The exported fish consisted almost entirely of high-valued species with Stockholm (Sweden) being the most important export destination. Due to rich political history and natural features of the region, we suggest that the documented evidence of this small-scale fishery should be considered as the first quantitative summary of exploitation of aquatic living resources in the region and can provide a background for future analyses. PMID:23861914

  3. Impact of seaweed beachings on dynamics of δ(15)N isotopic signatures in marine macroalgae.

    PubMed

    Lemesle, Stéphanie; Mussio, Isabelle; Rusig, Anne-Marie; Menet-Nédélec, Florence; Claquin, Pascal

    2015-08-15

    A fine-scale survey of δ(15)N, δ(13)C and tissue-N in seaweeds was conducted using samples from 17 sampling points at two sites (Grandcamp-Maisy (GM), Courseulles/Mer (COU)) along the French coast of the English Channel in 2012 and 2013. Partial triadic analysis was performed on the parameter data sets and revealed the functioning of three areas: one estuary (EstA) and two rocky areas (GM(∗), COU(∗)). In contrast to oceanic and anthropogenic reference points, similar temporal dynamics characterized δ(15)N signatures and N contents at GM(∗) and COU(∗). Nutrient dynamics were also similar: the N-concentrations in seawater originated from the River Seine and local coastal rivers, while P-concentrations originated mainly from these local rivers. δ(15)N signatures at GM(∗) were linked to turbidity, suggesting inputs of autochthonous organic matter from large-scale summer seaweed beachings made up of a mixture of Rhodophyta, Phaeophyta and Chlorophyta species. This study highlights the coupling between seaweed beachings and nitrogen sources of intertidal macroalgae. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Regional and transported aerosols during DRAGON-Japan experiment

    NASA Astrophysics Data System (ADS)

    Sano, I.; Holben, B. N.; Mukai, S.; Nakata, M.; Nakaguchi, Y.; Sugimoto, N.; Hatakeyama, S.; Nishizawa, T.; Takamura, T.; Takemura, T.; Yonemitsu, M.; Fujito, T.; Schafer, J.; Eck, T. F.; Sorokin, M.; Kenny, P.; Goto, M.; Hiraki, T.; Iguchi, N.; Kouzai, K.; KUJI, M.; Muramatsu, K.; Okada, Y.; Sadanaga, Y.; Tohno, S.; Toyazaki, Y.; Yamamoto, K.

    2013-12-01

    Aerosol properties over Japan have been monitored by AERONET sun/sky photometers since 2000. These measurements provide us with long-term information on local aerosols, which are influenced by transported aerosols, such as Asian dust or anthropogenic pollutants resulting from rapidly increasing energy consumption in Asian countries. A new aerosol monitoring experiment, Distributed Regional Aerosol Gridded Observation Networks (DRAGON) - Japan, was operated in the spring of 2012. The main instruments of the DRAGON network are AERONET sun/sky radiometers. Some of them are sparsely set along the Japanese coast and others form a dense network in Osaka, which is the second-largest city in Japan and known as a manufacturing town. Several two-channel NIES-LIDAR systems are also co-located with AERONET instruments to monitor Asian dust throughout the campaign. The objectives of DRAGON-Japan are to characterize local aerosols as well as aerosols transported from continental China, and to acquire detailed aerosol information for validating satellite data at high spatial resolution. This work presents comprehensive results on regional and transported aerosol properties during the DRAGON-Japan experiment.

  5. High-resolution modelling of waves, currents and sediment transport in the Catalan Sea.

    NASA Astrophysics Data System (ADS)

    Sánchez-Arcilla, Agustín; Grifoll, Manel; Pallares, Elena; Espino, Manuel

    2013-04-01

    In order to investigate coastal shelf dynamics, a sequence of high-resolution multi-scale models has been implemented for the Catalan shelf (north-western Mediterranean Sea). The suite consists of a set of increasing-resolution nested models, based on the circulation model ROMS (Regional Ocean Modelling System), the wave model SWAN (Simulating Waves Nearshore) and the sediment transport model CSTM (Community Sediment Transport Model), covering different ranges of spatial scales (from ~1 km at shelf-slope regions to ~40 m around river mouths or local beaches) and temporal scales (from storm events to seasonal variability). Contributions to the understanding of local processes, such as along-shelf dynamics in the inner shelf, sediment dispersal from river discharge, and bi-directional wave-current interactions under different synoptic conditions and resolutions, have been obtained using the Catalan coast as a pilot site. Numerical results have been compared with "ad-hoc" intensive field campaigns, data from observational models and remote sensing products. The results exhibit acceptable agreement with observations, and the investigation has allowed the development of generic knowledge and more efficient (process-based) strategies for coastal and shelf management.

  6. Dynamical heterogeneity in a glass-forming ideal gas.

    PubMed

    Charbonneau, Patrick; Das, Chinmay; Frenkel, Daan

    2008-07-01

    We conduct a numerical study of the dynamical behavior of a system of three-dimensional "crosses," particles that consist of three mutually perpendicular line segments of length sigma rigidly joined at their midpoints. In an earlier study [W. van Ketel, Phys. Rev. Lett. 94, 135703 (2005)] we showed that this model has the structural properties of an ideal gas, yet the dynamical properties of a strong glass former. In the present paper we report an extensive study of the dynamical heterogeneities that appear in this system in the regime where glassy behavior sets in. On the one hand, we find that the propensity of a particle to diffuse is determined by the structure of its local environment. The local density around mobile particles is significantly less than the average density, but there is little clustering of mobile particles, and the clusters observed tend to be small. On the other hand, dynamical susceptibility results indicate that a large dynamical length scale develops even at moderate densities. This suggests that propensity and other mobility measures are an incomplete measure of the dynamical length scales in this system.

  7. Diffusion and scaling during early embryonic pattern formation

    PubMed Central

    Gregor, Thomas; Bialek, William; van Steveninck, Rob R. de Ruyter; Tank, David W.; Wieschaus, Eric F.

    2005-01-01

    Development of spatial patterns in multicellular organisms depends on gradients in the concentration of signaling molecules that control gene expression. In the Drosophila embryo, Bicoid (Bcd) morphogen controls cell fate along 70% of the anteroposterior axis but is translated from mRNA localized at the anterior pole. Gradients of Bcd and other morphogens are thought to arise through diffusion, but this basic assumption has never been rigorously tested in living embryos. Furthermore, because diffusion sets a relationship between length and time scales, it is hard to see how patterns of gene expression established by diffusion would scale proportionately as egg size changes during evolution. Here, we show that the motion of inert molecules through the embryo is well described by the diffusion equation on the relevant length and time scales, and that effective diffusion constants are essentially the same in closely related dipteran species with embryos of very different size. Nonetheless, patterns of gene expression in these different species scale with egg length. We show that this scaling can be traced back to scaling of the Bcd gradient itself. Our results, together with constraints imposed by the time scales of development, suggest that the mechanism for scaling is a species-specific adaptation of the Bcd lifetime. PMID:16352710
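
    The statement that diffusion sets a relationship between length and time scales can be made concrete with the standard synthesis-diffusion-degradation result λ = sqrt(D·τ): with the diffusion constant D fixed across species, rescaling the gradient length λ requires adjusting the morphogen lifetime τ. The numbers below are purely illustrative, not values measured in the study.

```python
import math

def gradient_length(D_um2_per_s, lifetime_s):
    """Characteristic decay length of a synthesis-diffusion-degradation
    gradient: lambda = sqrt(D * tau)."""
    return math.sqrt(D_um2_per_s * lifetime_s)

# illustrative (not measured) numbers: D ~ 5 um^2/s
for tau in (30 * 60, 60 * 60):           # 30 min versus 60 min lifetime
    print(tau, "s ->", round(gradient_length(5.0, tau)), "um")
```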

  8. Efficient Computation of Sparse Matrix Functions for Large-Scale Electronic Structure Calculations: The CheSS Library.

    PubMed

    Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi

    2017-10-10

    We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of matrix powers for arbitrary powers, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well-suited for large-scale calculations. The approach is particularly adapted for setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.
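
    A dense NumPy sketch of the core idea behind a Chebyshev expansion of a matrix function (the approach CheSS is built on); CheSS itself operates on sparse matrices, estimates the spectral bounds differently and is far more efficient, so the function and parameter names here are illustrative only.

```python
import numpy as np

def chebyshev_matrix_function(H, f, order=50, eps=1e-3):
    """Approximate f(H) for a symmetric matrix H by a truncated Chebyshev
    expansion (dense sketch; a production code exploits sparsity)."""
    # estimate spectral bounds and map the spectrum into [-1, 1]
    evals = np.linalg.eigvalsh(H)
    a, b = evals.min() - eps, evals.max() + eps
    n = H.shape[0]
    Hs = (2.0 * H - (a + b) * np.eye(n)) / (b - a)

    # Chebyshev coefficients of f on [a, b] via Chebyshev-Gauss quadrature
    N = order + 1
    theta = (np.arange(N) + 0.5) * np.pi / N
    x = np.cos(theta)
    fx = f(0.5 * (b - a) * x + 0.5 * (a + b))
    c = np.array([2.0 / N * np.sum(fx * np.cos(k * theta)) for k in range(N)])

    # three-term recurrence: T0 = I, T1 = Hs, T_{k+1} = 2 Hs T_k - T_{k-1}
    T_prev, T_curr = np.eye(n), Hs
    result = 0.5 * c[0] * T_prev + c[1] * T_curr
    for k in range(2, N):
        T_prev, T_curr = T_curr, 2.0 * Hs @ T_curr - T_prev
        result += c[k] * T_curr
    return result

# usage: a Fermi-like function of a random symmetric matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 20)); H = 0.5 * (A + A.T)
approx = chebyshev_matrix_function(H, lambda x: 1.0 / (1.0 + np.exp(4.0 * x)))
```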

  9. Methods and apparatus of analyzing electrical power grid data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafen, Ryan P.; Critchlow, Terence J.; Gibson, Tara D.

    Apparatus and methods of processing large-scale data regarding an electrical power grid are described. According to one aspect, a method of processing large-scale data regarding an electrical power grid includes accessing a large-scale data set comprising information regarding an electrical power grid; processing data of the large-scale data set to identify a filter which is configured to remove erroneous data from the large-scale data set; using the filter, removing erroneous data from the large-scale data set; and after the removing, processing data of the large-scale data set to identify an event detector which is configured to identify events of interest in the large-scale data set.

  10. Short-term effect of local muscle vibration treatment versus sham therapy on upper limb in chronic post-stroke patients: a randomized controlled trial.

    PubMed

    Costantino, Cosimo; Galuppo, Laura; Romiti, Davide

    2017-02-01

    In recent years, local muscle vibration has received considerable attention as a useful method for muscle stimulation in clinical therapy. Some studies have described specific vibration training protocols, and few of them were conducted on post-stroke patients; there is therefore general uncertainty regarding the vibration protocol. The aim of this study was to evaluate the effects of local high-frequency mechano-acoustic muscle vibration treatment on grip muscle strength, muscle tonus, disability and pain in post-stroke individuals with upper limb spasticity. Single-blind randomized controlled trial. Outpatient rehabilitation center. Thirty-two chronic post-stroke patients with upper-limb spasticity: 21 males, 11 females, mean age 61.59±15.50 years, time since stroke 37.78±17.72 months. The treatment protocol consisted of the application of local muscle vibration, set to a frequency of 300 Hz, for 30 minutes 3 times per week, for 12 sessions, applied to the skin covering the belly of the triceps brachii and the extensor carpi radialis longus and brevis muscles during voluntary isometric contraction. All participants were randomized into two groups: group A, treated with the vibration protocol, and group B, treated with sham therapy. All participants were evaluated before and after the 4-week treatment with the Hand Grip Strength Test, Modified Ashworth Scale, QuickDASH score, FIM scale, Fugl-Meyer Assessment, Jebsen-Taylor Hand Function Test and Verbal Numerical Rating Scale for pain. Outcomes between groups were compared using repeated-measures ANOVA. Over 4 weeks, the values recorded in group A, compared to group B, showed statistically significant improvement in grip muscle strength, pain and quality of life and a decrease in spasticity; P-values were <0.05 for all tested parameters. Rehabilitation treatment with local high-frequency (300 Hz) muscle vibration for 30 minutes, 3 times a week for 4 weeks, could significantly improve muscle strength and decrease muscle tonus, disability and pain in the upper limb of hemiplegic post-stroke patients. Local muscle vibration treatment might be an additional and safe tool in the management of chronic post-stroke patients, given its therapeutic efficacy, limited cost and short, repeatable protocol of use.

  11. Document image binarization using "multi-scale" predefined filters

    NASA Astrophysics Data System (ADS)

    Saabni, Raid M.

    2018-04-01

    Reading text or searching for keywords within a historical document is a very challenging task. One of the first steps of the complete task is binarization, where we separate foreground, such as text, figures and drawings, from the background. Successful results in this important step can in many cases determine whether subsequent steps succeed or fail, so it is vital to the success of the complete task of reading and analyzing the content of a document image. Generally, historical document images are of poor quality due to their storage conditions and degradation over time, which mostly cause varying contrast, stains, dirt and ink seeping from the reverse side. In this paper, we use banks of anisotropic predefined filters at different scales and orientations to develop a binarization method for degraded documents and manuscripts. Exploiting the fact that handwritten strokes may follow different scales and orientations, we use predefined sets of filter banks with various scales, weights and orientations to seek a compact set of filters and weights that generates different layers of foreground and background. The results of convolving these filters locally on the gray-level image are weighted and accumulated to enhance the original image. Based on the different layers, seeds of components in the gray-level image and a learning process, we present an improved binarization algorithm to separate the background from layers of foreground. Different layers of foreground, which may be caused by seeping ink, degradation or other factors, are also separated from the real foreground in a second phase. Promising experimental results were obtained on the DIBCO2011, DIBCO2013 and H-DIBCO2016 data sets and on a collection of images taken from real historical documents.
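
    A simplified sketch of the multi-scale, multi-orientation filter-bank idea, assuming a grayscale image array and hypothetical parameter choices; the paper's method additionally learns filter weights, extracts component seeds and separates multiple foreground layers, none of which is reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve

def anisotropic_kernel(size, sigma_u, sigma_v, theta):
    """Oriented anisotropic Gaussian kernel, elongated along direction theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-0.5 * ((u / sigma_u) ** 2 + (v / sigma_v) ** 2))
    return k / k.sum()

def filter_bank_enhance(gray, scales=(1.0, 2.0, 4.0), n_orient=8):
    """Accumulate responses of an oriented, multi-scale filter bank to enhance
    stroke-like foreground before a crude global threshold (sketch only)."""
    acc = np.zeros_like(gray, dtype=float)
    for s in scales:
        for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
            k = anisotropic_kernel(int(6 * s) | 1, 3.0 * s, s, theta)
            acc += convolve(gray.astype(float), k, mode="nearest")
    enhanced = acc / (len(scales) * n_orient)
    return enhanced < enhanced.mean()   # dark (low-intensity) pixels -> foreground mask

# toy usage on a random "page"
img = np.random.default_rng(0).uniform(0, 255, (64, 64))
mask = filter_bank_enhance(img)
```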

  12. Drawing Connections Between Local and Global Observations: An Essential Element of Geoscience Education

    NASA Astrophysics Data System (ADS)

    Manduca, C. A.; Mogk, D. W.

    2002-12-01

    One of the hallmarks of geoscience research is the process of moving between observations and interpretations on local and global scales to develop an integrated understanding of Earth processes. Understanding this interplay is an important aspect of student geoscience learning which leads to an understanding of the fundamental principles of science and geoscience and of the connections between local natural phenomena or human activity and global processes. Several techniques that engage students in inquiry and discovery (as recommended in the National Science Education Standards, NRC 1996, Shaping the Future of Undergraduate Earth Science Education, AGU, 1997) hold promise for helping students make these connections. These include the development of global data sets from local observations (e.g. GLOBE); studying small-scale or local phenomena in the context of global models (e.g. carbon storage in local vegetation and its role in the carbon cycle); or an analysis of local environmental issues in a global context (e.g. a comparison of local flooding to flooding in other countries and analysis in the context of weather, geology and development patterns). Research on learning suggests that data-rich activities linking the local and global have excellent potential for enhancing student learning because 1) students have already developed observations and interpretations of their local environment, which can serve as a starting point for constructing new knowledge, and 2) this context may motivate learning and develop understanding that can be transferred to other situations (How People Learn, NRC, 2001). Faculty and teachers at two recent workshops confirm that projects that involve local or global data can engage students in learning by providing real-world context, creating student ownership of the learning process, and developing scientific skills applicable to the complex problems that characterize modern science and society. Workshop participants called for increased dissemination of examples of effective practice, evaluation of the impact of data-rich activities on learning, and further development of data access infrastructure and services. (For additional workshop results and discussion see http://serc.carleton.edu/research_education/usingdata)

  13. 5 CFR 9901.333 - Setting and adjusting local market supplements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Truncated index entry: Title 5, Administrative Personnel; Department of Defense, National Security Personnel System (NSPS); Pay and Pay Administration; Local Market Supplements. "... factors. The Secretary may determine the effective date of newly set or adjusted targeted local market ..."

  14. Influence maximization in complex networks through optimal percolation

    NASA Astrophysics Data System (ADS)

    Morone, Flaviano; Makse, Hernan; CUNY Collaboration

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. Reference: F. Morone, H. A. Makse, Nature 524, 65-68 (2015)
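
    The practical heuristic associated with this mapping is the Collective Influence score CI_l(i) = (k_i - 1) * sum over the frontier of the ball of radius l around node i of (k_j - 1). A hedged NetworkX sketch is given below; the greedy adaptive removal shown is a simplification of the authors' algorithm.

```python
import networkx as nx

def collective_influence(G, node, radius=2):
    """CI_l(i) = (k_i - 1) * sum_{j on frontier of ball(i, l)} (k_j - 1)."""
    lengths = nx.single_source_shortest_path_length(G, node, cutoff=radius)
    frontier = [j for j, d in lengths.items() if d == radius]
    return (G.degree(node) - 1) * sum(G.degree(j) - 1 for j in frontier)

def top_influencers(G, n_remove, radius=2):
    """Greedy, adaptive variant: repeatedly remove the highest-CI node."""
    G = G.copy()
    chosen = []
    for _ in range(n_remove):
        best = max(G.nodes, key=lambda i: collective_influence(G, i, radius))
        chosen.append(best)
        G.remove_node(best)
    return chosen

# usage on a toy scale-free graph
G = nx.barabasi_albert_graph(500, 3, seed=0)
print(top_influencers(G, 5))
```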

  15. LOLAweb: a containerized web server for interactive genomic locus overlap enrichment analysis.

    PubMed

    Nagraj, V P; Magee, Neal E; Sheffield, Nathan C

    2018-06-06

    The past few years have seen an explosion of interest in understanding the role of regulatory DNA. This interest has driven large-scale production of functional genomics data and analytical methods. One popular analysis is to test for enrichment of overlaps between a query set of genomic regions and a database of region sets. In this way, new genomic data can be easily connected to annotations from external data sources. Here, we present an interactive interface for enrichment analysis of genomic locus overlaps using a web server called LOLAweb. LOLAweb accepts a set of genomic ranges from the user and tests it for enrichment against a database of region sets. LOLAweb renders results in an R Shiny application to provide interactive visualization features, enabling users to filter, sort, and explore enrichment results dynamically. LOLAweb is built and deployed in a Linux container, making it scalable to many concurrent users on our servers and also enabling users to download and run LOLAweb locally.
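
    The core statistic behind locus overlap enrichment is a Fisher's exact test on a contingency table built from a query set, a database region set and a background universe. The sketch below is a simplified illustration of that idea with made-up coordinates, not LOLAweb's actual implementation.

```python
from scipy.stats import fisher_exact

def overlaps(region, regions):
    """True if a (chrom, start, end) region overlaps any region in `regions`."""
    c, s, e = region
    return any(c == c2 and s < e2 and s2 < e for c2, s2, e2 in regions)

def enrichment(query, db_set, universe):
    """Fisher's exact test for enrichment of overlaps between a query set and
    one database region set, relative to a background universe (query must be
    a subset of the universe)."""
    a = sum(overlaps(r, db_set) for r in query)               # query regions hitting db
    b = sum(overlaps(r, db_set) for r in universe) - a        # non-query regions hitting db
    c = len(query) - a                                        # query regions missing db
    d = len(universe) - len(query) - b                        # non-query regions missing db
    odds, p = fisher_exact([[a, b], [c, d]], alternative="greater")
    return odds, p

# toy example with made-up coordinates
query = [("chr1", 100, 200), ("chr1", 500, 600)]
db = [("chr1", 150, 400), ("chr1", 1000, 1100)]
universe = query + [("chr1", 700, 800), ("chr1", 2000, 2100)]
print(enrichment(query, db, universe))
```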

  16. Scale-up of a comprehensive harm reduction programme for people injecting opioids: lessons from north-eastern India

    PubMed Central

    Lalmuanpuii, Melody; Biangtung, Langkham; Mishra, Ritu Kumar; Reeve, Matthew J; Tzudier, Sentimoa; Singh, Angom L; Sinate, Rebecca

    2013-01-01

    Abstract Problem Harm reduction packages for people who inject illicit drugs, including those infected with human immunodeficiency virus (HIV), are cost-effective but have not been scaled up globally. In the north-eastern Indian states of Manipur and Nagaland, the epidemic of HIV infection is driven by the injection of illicit drugs, especially opioids. These states needed to scale up harm reduction programmes but faced difficulty doing so. Approach In 2004, the Bill & Melinda Gates Foundation funded Project ORCHID to scale up a harm reduction programme in Manipur and Nagaland. Local setting In 2003, an estimated 10 000 and 16 000 people were injecting drugs in Manipur and Nagaland, respectively. The prevalence of HIV infection among people injecting drugs was 24.5% in Manipur and 8.4% in Nagaland. Relevant changes By 2012, the harm reduction programme had been scaled up to an average of 9011 monthly contacts outside clinics (80% of target); an average of 1709 monthly clinic visits (15% of target, well above the 5% monthly goal) and an average monthly distribution of needles and syringes of 16 each per programme participant. Opioid agonist maintenance treatment coverage was 13.7% and retention 6 months after enrolment was 63%. Antiretroviral treatment coverage for HIV-positive participants was 81%. Lessons learnt A harm reduction model consisting of community-owned, locally relevant innovations and business approaches can result in good harm reduction programme scale-up and influence harm reduction policy. Project ORCHID has influenced national harm reduction policy in India and contributed to the development of harm reduction guidelines. PMID:23599555

  17. Confirmatory factor analysis and recommendations for improvement of the Autonomy-Preference-Index (API).

    PubMed

    Simon, Daniela; Kriston, Levente; Loh, Andreas; Spies, Claudia; Scheibler, Fueloep; Wills, Celia; Härter, Martin

    2010-09-01

    Validation of the German version of the Autonomy-Preference-Index (API), a measure of patients' preferences for decision making and information seeking. Stepwise confirmatory factor analysis was conducted on a sample of patients (n = 1592) treated in primary care for depression (n = 186), surgical and internal medicine inpatients (n = 811) and patients with minor trauma treated in an emergency department (n = 595). An initial test of the model was done on calculation and validation halves of the sample. Both local and global indexes-of-fit suggested modifications to the scale. The scale was modified and re-tested in the calculation sample and confirmed in the validation sample. Subgroup analyses for age, gender and type of treatment setting were also performed. The confirmatory analysis led to a modified version of the API with better local and global indexes-of-fit for samples of German-speaking patients. Two items of the sub-scale, 'preference for decision-making', and one item of the sub-scale, 'preference for information seeking', showed very low reliability scores and were deleted. Thus, several global indexes-of-fit clearly improved significantly. The modified scale was confirmed on the validation sample with acceptable to good indices of fit. Results of subgroup analyses indicated that no adaptations were necessary. This first confirmatory analysis for a German-speaking population showed that the API was improved by the removal of several items. There were theoretically plausible explanations for this improvement suggesting that the modifications might also be appropriate in English and other language versions.

  18. The amplitude of the deep solar convection and the origin of the solar supergranulation

    NASA Astrophysics Data System (ADS)

    Rast, Mark

    2016-10-01

    Recent observations and models have raised questions about our understanding of the dynamics of the deep solar convection. In particular, the amplitude of low-wavenumber convective motions appears to be too high in both local-area radiative magnetohydrodynamic and global spherical-shell magnetohydrodynamic simulations. In global simulations this results in weaker than needed rotational constraints and consequently non-solar-like differential rotation profiles. In deep local-area simulations it yields strong horizontal flows in the photosphere on scales much larger than the observed supergranulation. We have undertaken numerical studies that suggest that the solution to this problem is closely related to the long-standing question of the origin of the solar supergranulation. Two possibilities have emerged. One suggests that small-scale photospherically driven motions dominate convective transport even at depth, descending through a very nearly adiabatic interior (more nearly adiabatic than current convection models achieve). Convection of this form can meet Rossby number constraints set by global-scale motions and implies that the solar supergranulation is the largest buoyantly driven scale of motion in the Sun. The other possibility is that large-scale convection driven deep in the Sun dynamically couples to the near-surface shear layer, perhaps as its origin. In this case supergranulation would be the largest non-coupled convective mode, or only weakly coupled, thus potentially explaining the observed excess power in the prograde direction. Recent helioseismic results lend some support to this. We examine both of these possibilities using carefully designed numerical experiments, and weigh their plausibility in light of recent observations.

  19. Using global Climate Impact Indicators to assess water resource availability in a Mediterranean mountain catchment: the Sierra Nevada study case (Spain) in the SWICCA platform

    NASA Astrophysics Data System (ADS)

    José Pérez-Palazón, María; Pimentel, Rafael; Sáenz de Rodrigáñez, Marta; Gulliver, Zacarias; José Polo, María

    2017-04-01

    Climate services provide water resource managers and users with science-based information on the likely impacts associated with future climate scenarios. Mountainous areas are especially vulnerable to climate variations due to the expected changes in the snow regime, among others; in Mediterranean regions, this shift involves significant effects on the river flow regime and on water resource availability and management. The Guadalfeo River Basin is a 1345 km2 mountainous, coastal catchment in southern Spain, ranging from the Mediterranean Sea coastline to the Sierra Nevada mountains to the north (up to 3450 m a.s.l.) within a 40-km distance. The climate variability adds complexity to this abrupt topography and heterogeneous area. The uncertainty associated with snow occurrence and persistence over the next decades poses a challenge for current and future water resource uses in the area. The development of easy-to-use local climate indicators and derived decision-making variables is key to assessing and facing the economic impact of the potential changes. The SWICCA (Service for Water Indicators in Climate Change Adaptation) platform (http://swicca.climate.copernicus.eu/) has been developed under the Copernicus Climate Change Service (C3S) and provides global climate and hydrology indicators on a Pan-European scale. Different case studies are included to assess the platform development and contents, and to analyse the indicators' performance from a proof-of-concept approach that includes end-user feedback. The Guadalfeo River Basin is one of these case studies. This contribution presents the work developed so far to analyse and use the SWICCA Climate Impact Indicators (CIIs) related to river flow in this mountainous area, and the first set of local indicators specifically designed to help selected end-users assess the potential impact associated with different climate scenarios. Different CIIs were extracted from the SWICCA interface and tested against the local information available in the case study. The essential climate variables used were daily precipitation and flow values, obtained at different spatial scales. The analysis led to the use of SWICCA river flow on a catchment scale as the most suitable global CII in this area. Further treatment included local downscaling by means of transfer functions and a final relative anomaly correction. Three final end-users (clients) were identified within the water resource management framework: 1) mini hydropower facilities in the headwater areas, 2) urban supply in the southern area, and 3) water management decision makers (reservoir operation). From the corrected CIIs, local indicators were defined in interaction with each client, to tailor water services that are easily and readily usable. Knowledge brokering from this interaction resulted in a first identification of a set of 4, 3 and 4 indicators for hydropower generation, urban users and water resource decision-makers, respectively, with different time scales. The projections of three future climate scenarios were assessed for each indicator and presented to each client. Local indicators are an efficient tool to assess the potential range of water allocation possibilities in this area on an annual and decadal basis, and to gain deeper insight into the seasonal future regime of water resource availability. The results are good examples of key information for future decision making, and show how to derive local indicators with impact on short- and medium-term planning in heterogeneous catchments in this region.
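
    One simple form of the "relative anomaly correction" mentioned above is to rescale simulated values by the ratio of observed to simulated reference-period means; the sketch below assumes that form and uses purely illustrative numbers.

```python
def relative_anomaly_correction(sim_future, sim_ref_mean, obs_ref_mean):
    """Scale simulated future values by the ratio of observed to simulated
    reference-period means (one simple form of relative anomaly correction)."""
    return [v * obs_ref_mean / sim_ref_mean for v in sim_future]

# illustrative numbers (m3/s), not values from the study
print(relative_anomaly_correction([12.0, 8.5, 15.2], sim_ref_mean=10.0, obs_ref_mean=7.0))
```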

  20. Policies for reduced deforestation and their impact on agricultural production.

    PubMed

    Angelsen, Arild

    2010-11-16

    Policies to effectively reduce deforestation are discussed within a land rent (von Thünen) framework. The first set of policies attempts to reduce the rent of extensive agriculture, either by neglecting extension, marketing, and infrastructure, generating alternative income opportunities, stimulating intensive agricultural production or by reforming land tenure. The second set aims to increase either extractive or protective forest rent and--more importantly--create institutions (community forest management) or markets (payment for environmental services) that enable land users to capture a larger share of the protective forest rent. The third set aims to limit forest conversion directly by establishing protected areas. Many of these policy options present local win-lose scenarios between forest conservation and agricultural production. Local yield increases tend to stimulate agricultural encroachment, contrary to the logic of the global food equation that suggests yield increases take pressure off forests. At national and global scales, however, policy makers are presented with a more pleasant scenario. Agricultural production in developing countries has increased by 3.3-3.4% annually over the last 2 decades, whereas gross deforestation has increased agricultural area by only 0.3%, suggesting a minor role of forest conversion in overall agricultural production. A spatial delinking of remaining forests and intensive production areas should also help reconcile conservation and production goals in the future.

  1. Filaments from the galaxy distribution and from the velocity field in the local universe

    NASA Astrophysics Data System (ADS)

    Libeskind, Noam I.; Tempel, Elmo; Hoffman, Yehuda; Tully, R. Brent; Courtois, Hélène

    2015-10-01

    The cosmic web that characterizes the large-scale structure of the Universe can be quantified by a variety of methods. For example, large redshift surveys can be used in combination with point process algorithms to extract long curvilinear filaments in the galaxy distribution. Alternatively, given a full 3D reconstruction of the velocity field, kinematic techniques can be used to decompose the web into voids, sheets, filaments and knots. In this Letter, we look at how two such algorithms - the Bisous model and the velocity shear web - compare with each other in the local Universe (within 100 Mpc), finding good agreement. This is both remarkable and comforting, given that the two methods are radically different in ideology and applied to completely independent and different data sets. Unsurprisingly, the methods are in better agreement when applied to unbiased and complete data sets, like cosmological simulations, than when applied to observational samples. We conclude that more observational data is needed to improve on these methods, but that both methods are most likely properly tracing the underlying distribution of matter in the Universe.

  2. Persistent homology and non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Cole, Alex; Shiu, Gary

    2018-03-01

    In this paper, we introduce the topological persistence diagram as a statistic for Cosmic Microwave Background (CMB) temperature anisotropy maps. A central concept in 'Topological Data Analysis' (TDA), the idea of persistence is to represent a data set by a family of topological spaces. One then examines how long topological features 'persist' as the family of spaces is traversed. We compute persistence diagrams for simulated CMB temperature anisotropy maps featuring various levels of primordial non-Gaussianity of local type. Postponing the analysis of observational effects, we show that persistence diagrams are more sensitive to local non-Gaussianity than previous topological statistics including the genus and Betti number curves, and can constrain Δf_NL^loc = 35.8 at the 68% confidence level on the simulation set, compared to Δf_NL^loc = 60.6 for the Betti number curves. Given the resolution of our simulations, we expect applying persistence diagrams to observational data will give constraints competitive with those of the Minkowski Functionals. This is the first in a series of papers where we plan to apply TDA to different shapes of non-Gaussianity in the CMB and Large Scale Structure.
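
    A minimal sketch of how a sub-level-set persistence diagram of a 2D scalar field can be computed, assuming the GUDHI library's cubical-complex API is available; the field used here is a random stand-in, not a simulated CMB map.

```python
import numpy as np
import gudhi  # assumes the GUDHI library is installed

def persistence_diagram(field):
    """Sub-level-set persistence of a 2D scalar field via a cubical complex."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=field)
    return cc.persistence()   # list of (dimension, (birth, death)) pairs

# toy example: a Gaussian random field stand-in for a temperature map patch
rng = np.random.default_rng(0)
field = rng.standard_normal((64, 64))
print(persistence_diagram(field)[:5])
```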

  3. Method and infrastructure for cycle-reproducible simulation on large scale digital circuits on a coordinated set of field-programmable gate arrays (FPGAs)

    DOEpatents

    Asaad, Sameh W; Bellofatto, Ralph E; Brezzo, Bernard; Haymes, Charles L; Kapur, Mohit; Parker, Benjamin D; Roewer, Thomas; Tierno, Jose A

    2014-01-28

    A plurality of target field programmable gate arrays are interconnected in accordance with a connection topology and map portions of a target system. A control module is coupled to the plurality of target field programmable gate arrays. A balanced clock distribution network is configured to distribute a reference clock signal, and a balanced reset distribution network is coupled to the control module and configured to distribute a reset signal to the plurality of target field programmable gate arrays. The control module and the balanced reset distribution network are cooperatively configured to initiate and control a simulation of the target system with the plurality of target field programmable gate arrays. A plurality of local clock control state machines reside in the target field programmable gate arrays. The local clock state machines are configured to generate a set of synchronized free-running and stoppable clocks to maintain cycle-accurate and cycle-reproducible execution of the simulation of the target system. A method is also provided.

  4. Local competition increases people's willingness to harm others

    PubMed Central

    Barker, Jessica L.; Barclay, Pat

    2016-01-01

    Why should organisms incur a cost in order to inflict a (usually greater) cost on others? Such costly harming behavior may be favored when competition for resources occurs locally, because it increases individuals’ fitness relative to close competitors. However, there is no explicit experimental evidence supporting the prediction that people are more willing to harm others under local versus global competition. We illustrate this prediction with a game theoretic model, and then test it in a series of economic games. In these experiments, players could spend money to make others lose more. We manipulated the scale of competition by awarding cash prizes to the players with the highest payoffs per set of social partners (local competition) or in all the participants in a session (global competition). We found that, as predicted, people were more harmful to others when competition was local (Study 1). This result still held when people “earned” (rather than were simply given) their money (Study 2). In addition, when competition was local, people were more willing to harm ingroup members than outgroup members (Study 3), because ingroup members were the relevant competitive targets. Together, our results suggest that local competition in human groups not only promotes willingness to harm others in general, but also causes ingroup hostility. PMID:29805247

  5. Local multifractal detrended fluctuation analysis for non-stationary image's texture segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Zong-shou; Li, Jin-wei

    2014-12-01

    Feature extraction plays an important role in image processing and pattern recognition. As a powerful tool, multifractal theory has recently been employed for this task. However, traditional multifractal methods are designed to analyze objects with a stationary measure and cannot handle non-stationary measures. The work of this paper is twofold. First, the definition of a stationary image and 2D image feature detection methods are proposed. Second, a novel feature extraction scheme for non-stationary images is proposed using local multifractal detrended fluctuation analysis (local MF-DFA), which is based on 2D MF-DFA. A set of new multifractal descriptors, called the local generalized Hurst exponent (Lhq), is defined to characterize the local scaling properties of textures. To test the proposed method, the novel texture descriptor and two other multifractal indicators, namely local Hölder coefficients based on a capacity measure and the multifractal dimension Dq based on the multifractal differential box-counting (MDBC) method, are compared in segmentation experiments. The first experiment indicates that the segmentation results obtained by the proposed Lhq are slightly better than those of the MDBC-based Dq and significantly superior to those of the local Hölder coefficients. The results of the second experiment demonstrate that the Lhq can distinguish texture images more effectively and provides significantly more robust segmentations than the MDBC-based Dq.
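
    For reference, the 1D fluctuation function underlying (MF-)DFA is sketched below; the paper's local 2D MF-DFA applies the same profile/detrend/average construction to image patches. Array names and scale choices are illustrative.

```python
import numpy as np

def dfa_fluctuation(x, scales, q=2):
    """Fluctuation function F_q(s) of (MF-)DFA for a 1D series."""
    profile = np.cumsum(x - np.mean(x))
    Fq = []
    for s in scales:
        n_seg = len(profile) // s
        f2 = []
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrending
            f2.append(np.mean((seg - trend) ** 2))
        Fq.append(np.mean(np.array(f2) ** (q / 2.0)) ** (1.0 / q))
    return np.array(Fq)

def generalized_hurst(x, scales, q=2):
    """h(q): slope of log F_q(s) versus log s."""
    F = dfa_fluctuation(x, scales, q)
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

# white noise should give h(2) close to 0.5
x = np.random.default_rng(0).standard_normal(4096)
print(generalized_hurst(x, scales=[16, 32, 64, 128, 256], q=2))
```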

  6. CO₂ Sequestration Capacity and Associated Aspects of the Most Promising Geologic Formations in the Rocky Mountain Region: Local-Scale Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laes, Denise; Eisinger, Chris; Morgan, Craig

    2013-07-30

    The purpose of this report is to provide a summary of individual local-scale CCS site characterization studies conducted in Colorado, New Mexico and Utah. These site-specific characterization analyses were performed as part of the "Characterization of Most Promising Sequestration Formations in the Rocky Mountain Region" (RMCCS) project. The primary objective of these local-scale analyses is to provide a basis for regional-scale characterization efforts within each state. Specifically, limits on time and funding will typically inhibit CCS projects from conducting high-resolution characterization of a state-sized region, but smaller (<10,000 km²) site analyses are usually possible, and such analyses can provide insight regarding limiting factors for the regional-scale geology. For the RMCCS project, the outcomes of these local-scale studies provide a starting point for future local-scale site characterization efforts in the Rocky Mountain region.

  7. Landscape effects on mallard habitat selection at multiple spatial scales during the non-breeding period

    USGS Publications Warehouse

    Beatty, William S.; Webb, Elisabeth B.; Kesler, Dylan C.; Raedeke, Andrew H.; Naylor, Luke W.; Humburg, Dale D.

    2014-01-01

    Previous studies that evaluated effects of landscape-scale habitat heterogeneity on migratory waterbird distributions were spatially limited and temporally restricted to one major life-history phase. However, effects of landscape-scale habitat heterogeneity on long-distance migratory waterbirds can be studied across the annual cycle using new technologies, including global positioning system satellite transmitters. We used Bayesian discrete choice models to examine the influence of local habitats and landscape composition on habitat selection by a generalist dabbling duck, the mallard (Anas platyrhynchos), in the midcontinent of North America during the non-breeding period. Using a previously published empirical movement metric, we separated the non-breeding period into three seasons, including autumn migration, winter, and spring migration. We defined spatial scales based on movement patterns such that movements >0.25 and <30.00 km were classified as local scale and movements >30.00 km were classified as relocation scale. Habitat selection at the local scale was generally influenced by local and landscape-level variables across all seasons. Variables in top models at the local scale included proximities to cropland, emergent wetland, open water, and woody wetland. Similarly, variables associated with area of cropland, emergent wetland, open water, and woody wetland were also included at the local scale. At the relocation scale, mallards selected resource units based on more generalized variables, including proximity to wetlands and total wetland area. Our results emphasize the role of landscape composition in waterbird habitat selection and provide further support for local wetland landscapes to be considered functional units of waterbird conservation and management.

  8. A megaregion-scale approach for assessing the impacts of climate change and strategic management decisions in the Northeast United States

    NASA Astrophysics Data System (ADS)

    Rosenzweig, B.; Vorosmarty, C. J.; Stewart, R. J.; Miara, A.; Lu, X.; Kicklighter, D. W.; Ehsani, N.; Wollheim, W. M.; Melillo, J. M.; Fekete, B. M.; Dilekli, N.; Duchin, F.; Gross, B.; Bhatt, V.

    2014-12-01

    'Megaregions' have been identified as an important new scale of geography for policy decision-making in the United States. These regions extend beyond local boundaries (i.e., cities, states) to incorporate areas with linked economies, infrastructure and land-use patterns and shared climate and environmental systems, such as watersheds. The corridor of densely connected metropolitan areas and surrounding hinterlands along the U.S. east coast from Maine to Virginia is the archetype of this type of unit: the Northeast Megaregion. The Northeast faces a unique set of policy challenges, including projections of a wetter, more extreme climate; aging and underfunded infrastructure; and economically distressed rural areas. Megaregion-scale policy efforts such as the Regional Greenhouse Gas Initiative (RGGI) and support for a regional food system have been recognized as strategic tools for climate change mitigation and adaptation, but decision-makers have limited information on the potential consequences of these strategies on the complex natural-human system of the Northeast, under various scenarios of global climate change. We have developed a Northeast Regional Earth System Model (NE-RESM) as a framework to provide this type of information. We integrate terrestrial ecosystem, hydrologic, energy system and economic models to investigate scenarios of paired regional socioeconomic pathways and global climate projections. Our initial results suggest that megaregion-scale strategic decisions in the Northeast may have important consequences for both local water management and global climate change mitigation.

  9. An adaptive two-stage analog/regression model for probabilistic prediction of small-scale precipitation in France

    NASA Astrophysics Data System (ADS)

    Chardon, Jérémy; Hingray, Benoit; Favre, Anne-Catherine

    2018-01-01

    Statistical downscaling models (SDMs) are often used to produce local weather scenarios from large-scale atmospheric information. SDMs include transfer functions which are based on a statistical link identified from observations between local weather and a set of large-scale predictors. As the physical processes driving surface weather vary in time, the most relevant predictors and the regression link are likely to vary in time too. This is well known for precipitation, for instance, and the link is thus often estimated after some seasonal stratification of the data. In this study, we present a two-stage analog/regression model where the regression link is estimated from atmospheric analogs of the current prediction day. Atmospheric analogs are identified from fields of geopotential heights at 1000 and 500 hPa. For the regression stage, two generalized linear models are further used to model the probability of precipitation occurrence and the distribution of non-zero precipitation amounts, respectively. The two-stage model is evaluated for the probabilistic prediction of small-scale precipitation over France. It noticeably improves the skill of the prediction for both precipitation occurrence and amount. As the analog days vary from one prediction day to another, the atmospheric predictors selected in the regression stage and the values of the corresponding regression coefficients can vary from one prediction day to another. The model thus allows for day-to-day adaptive and tailored downscaling. It can also reveal specific predictors for peculiar and infrequent weather configurations.
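
    A minimal sketch of the two-stage analog/regression idea for precipitation occurrence, assuming hypothetical predictor arrays and using a scikit-learn logistic regression as the occurrence GLM; the published model also fits a separate GLM for non-zero amounts and uses more elaborate analog and predictor selection.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def analog_regression_occurrence(z_fields, predictors, rain_occ,
                                 z_target, x_target, n_analogs=100):
    """Two-stage sketch: (1) select the archive days whose geopotential fields
    are closest to the target day, (2) fit a logistic regression (GLM for
    precipitation occurrence) on those analog days only.

    z_fields   : (n_days, n_grid) flattened geopotential fields (archive)
    predictors : (n_days, n_pred) large-scale predictors for the regression
    rain_occ   : (n_days,) 0/1 precipitation occurrence at the station
    z_target, x_target : same quantities for the day to predict
    """
    dist = np.linalg.norm(z_fields - z_target, axis=1)
    analogs = np.argsort(dist)[:n_analogs]
    glm = LogisticRegression().fit(predictors[analogs], rain_occ[analogs])
    return glm.predict_proba(x_target.reshape(1, -1))[0, 1]

# toy data (synthetic, for illustration only)
rng = np.random.default_rng(0)
Z = rng.standard_normal((2000, 50)); X = rng.standard_normal((2000, 4))
y = (X[:, 0] + 0.5 * rng.standard_normal(2000) > 0).astype(int)
print(analog_regression_occurrence(Z, X, y, Z[0], X[0]))
```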

  10. Attribution of regional flood changes based on scaling fingerprints

    PubMed Central

    Merz, Bruno; Viet Dung, Nguyen; Parajka, Juraj; Nester, Thomas; Blöschl, Günter

    2016-01-01

    Abstract Changes in the river flood regime may be due to atmospheric processes (e.g., increasing precipitation), catchment processes (e.g., soil compaction associated with land use change), and river system processes (e.g., loss of retention volume in the floodplains). This paper proposes a new framework for attributing flood changes to these drivers based on a regional analysis. We exploit the scaling characteristics (i.e., fingerprints) with catchment area of the effects of the drivers on flood changes. The estimation of their relative contributions is framed in Bayesian terms. Analysis of a synthetic, controlled case suggests that the accuracy of the regional attribution increases with increasing number of sites and record lengths, decreases with increasing regional heterogeneity, increases with increasing difference of the scaling fingerprints, and decreases with an increase of their prior uncertainty. The applicability of the framework is illustrated for a case study set in Austria, where positive flood trends have been observed at many sites in the past decades. The individual scaling fingerprints related to the atmospheric, catchment, and river system processes are estimated from rainfall data and simple hydrological modeling. Although the distributions of the contributions are rather wide, the attribution identifies precipitation change as the main driver of flood change in the study region. Overall, it is suggested that the extension from local attribution to a regional framework, including multiple drivers and explicit estimation of uncertainty, could constitute a similar shift in flood change attribution as the extension from local to regional flood frequency analysis. PMID:27609996

  11. Use of geomorphic regime diagrams in channel restoration

    NASA Astrophysics Data System (ADS)

    Buffington, J. M.; Parker, G.

    2005-12-01

    Regime diagrams can be used to predict channel characteristics (depth, grain size, slope) and reach-scale channel morphology (pool-riffle, plane-bed, etc.) as a function of imposed values of discharge and bedload sediment supply. In terms of stream restoration, these diagrams can be used to set target values for creating or maintaining desired channel types and associated aquatic habitats, or to assess the stable channel morphology for imposed watershed conditions. However, alluvial channels are dynamic and may move toward new states with interannual changes in discharge or sediment supply. These changes may be small-scale adjustments of channel dimensions, grain size, or slope, or they may involve wholesale metamorphosis to a new reach type. The degree of change likely depends on local physiography and the associated characteristic variations of discharge and sediment supply. We propose a framework for assessing the relative degree of channel stability in different physiographic settings using a regime diagram that is explicitly linked to rational equations for discharge and sediment supply. This approach allows a more dynamic representation of the potential channel conditions that can be expected for a given restoration design (or for an existing channel), and links site conditions to the discharge and sediment supply variability imposed by larger-scale basin conditions and physiography.

  12. Criticality as a Set-Point for Adaptive Behavior in Neuromorphic Hardware

    PubMed Central

    Srinivasa, Narayan; Stepp, Nigel D.; Cruz-Albrecht, Jose

    2015-01-01

    Neuromorphic hardware is designed by drawing inspiration from biology to overcome limitations of current computer architectures while forging the development of a new class of autonomous systems that can exhibit adaptive behaviors. Several designs in the recent past are capable of emulating large-scale networks but avoid complexity in network dynamics by minimizing the number of dynamic variables that are supported and tunable in hardware. We believe that this is due to the lack of a clear understanding of how to design self-tuning complex systems. It has been widely demonstrated that criticality appears to be the default state of the brain and manifests in the form of spontaneous scale-invariant cascades of neural activity. Experiment, theory and recent models have shown that neuronal networks at criticality demonstrate optimal information transfer, learning and information processing capabilities that affect behavior. In this perspective article, we argue that understanding how large-scale neuromorphic electronics can be designed to enable emergent adaptive behavior will require an understanding of how networks emulated by such hardware can self-tune local parameters to maintain criticality as a set-point. We believe that such capability will enable the design of truly scalable intelligent systems using neuromorphic hardware that embrace complexity in network dynamics rather than avoiding it. PMID:26648839

  13. NARROW-LINE-WIDTH UV BURSTS IN THE TRANSITION REGION ABOVE SUNSPOTS OBSERVED BY IRIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Zhenyong; Huang, Zhenghua; Xia, Lidong

    Various small-scale structures abound in the solar atmosphere above active regions, playing an important role in the dynamics and evolution therein. We report on a new class of small-scale transition region structures in active regions, characterized by strong emissions but extremely narrow Si iv line profiles, as found in observations taken with the Interface Region Imaging Spectrograph (IRIS). Tentatively named narrow-line-width UV bursts (NUBs), these structures are located above sunspots and comprise one or multiple compact bright cores at sub-arcsecond scales. We found six NUBs in two data sets (a raster and a sit-and-stare data set). Among these, four events are short-lived with a duration of ∼10 minutes, while two last for more than 36 minutes. All NUBs have Doppler shifts of 15–18 km s⁻¹, while the NUB found in the sit-and-stare data possesses an additional component at ∼50 km s⁻¹ found only in the C ii and Mg ii lines. Given that these events are found to play a role in the local dynamics, it is important to further investigate the physical mechanisms that generate these phenomena and their role in the mass transport in sunspots.

  14. Activity and observability of meteor showers throughout the year

    NASA Astrophysics Data System (ADS)

    Zimnikoval, Peter

    2014-02-01

    Diagrams on the poster present the activity periods of meteor showers as well as the rising and setting times of meteor shower radiants. Also plotted are sunrises, sunsets and the period of twilight. The diagrams were constructed from data in the IMO Meteor Shower Working List. More active showers are displayed in red and less active showers in green. The diagrams are calculated for geographic latitudes of 40° N, 0° and 40° S. The time scale is given as local time at the relevant zonal meridian and supplemented by local daylight saving time. The diagrams contain rounded values of solar longitude J2000. The star chart shows the radiant positions and drift of IMO meteor showers, while the other diagrams display shower activity and date of maximum.

  15. Irradiation-hyperthermia in canine hemangiopericytomas: large-animal model for therapeutic response.

    PubMed

    Richardson, R C; Anderson, V L; Voorhees, W D; Blevins, W E; Inskeep, T K; Janas, W; Shupe, R E; Babbs, C F

    1984-11-01

    Results of irradiation-hyperthermia treatment in 11 dogs with naturally occurring hemangiopericytoma were reported. Similarities of canine and human hemangiopericytomas were described. Orthovoltage X-irradiation followed by microwave-induced hyperthermia resulted in a 91% objective response rate. A statistical procedure was given to evaluate quantitatively the clinical behavior of locally invasive, nonmetastatic tumors in dogs that were undergoing therapy for control of local disease. The procedure used a small sample size and demonstrated distribution of the data on a scaled response as well as transformation of the data through classical parametric and nonparametric statistical methods. These statistical methods set confidence limits on the population mean and placed tolerance limits on a population percentage. Application of the statistical methods to human and animal clinical trials was apparent.

  16. The N-terminal Set-β Protein Isoform Induces Neuronal Death*

    PubMed Central

    Trakhtenberg, Ephraim F.; Morkin, Melina I.; Patel, Karan H.; Fernandez, Stephanie G.; Sang, Alan; Shaw, Peter; Liu, Xiongfei; Wang, Yan; Mlacker, Gregory M.; Gao, Han; Velmeshev, Dmitry; Dombrowski, Susan M.; Vitek, Michael P.; Goldberg, Jeffrey L.

    2015-01-01

    Set-β protein plays different roles in neurons, but the diversity of Set-β neuronal isoforms and their functions have not been characterized. The expression and subcellular localization of Set-β are altered in Alzheimer disease, cleavage of Set-β leads to neuronal death after stroke, and the full-length Set-β regulates retinal ganglion cell (RGC) and hippocampal neuron axon growth and regeneration in a subcellular localization-dependent manner. Here we used various biochemical approaches to investigate Set-β isoforms and their role in the CNS, using the same type of neurons, RGCs, across studies. We found multiple alternatively spliced isoforms expressed from the Set locus in purified RGCs. Set transcripts containing the Set-β-specific exon were the most highly expressed isoforms. We also identified a novel, alternatively spliced Set-β transcript lacking the nuclear localization signal and demonstrated that the full-length (∼39-kDa) Set-β is localized predominantly in the nucleus, whereas a shorter (∼25-kDa) Set-β isoform is localized predominantly in the cytoplasm. Finally, we show that an N-terminal Set-β cleavage product can induce neuronal death. PMID:25833944

  17. Capillary pressure heterogeneity and hysteresis for the supercritical CO2/water system in a sandstone

    NASA Astrophysics Data System (ADS)

    Pini, Ronny; Benson, Sally M.

    2017-10-01

    We report results from an experimental investigation on the hysteretic behaviour of the capillary pressure curve for the supercritical CO2-water system in a Berea Sandstone core. Previous observations have highlighted the importance of subcore-scale capillary heterogeneity in developing local saturations during drainage; we show in this study that the same is true for the imbibition process. Spatially distributed drainage and imbibition scanning curves were obtained for mm-scale subsets of the rock sample non-invasively using X-ray CT imaging. Core- and subcore-scale measurements are well described using the Brooks-Corey formalism, which uses a linear trapping model to compute mobile saturations during imbibition. Capillary scaling yields two separate universal drainage and imbibition curves that are representative of the full subcore-scale data set. This enables accurate parameterisation of rock properties at the subcore scale in terms of capillary scaling factors and permeability, which in turn serve as effective indicators of heterogeneity at the same scale even when hysteresis is a factor. As such, the proposed core-analysis workflow is quite general and provides the information required to populate numerical models that can be used to extend core-flooding experiments to conditions prevalent in the subsurface, which would otherwise not be attainable in the laboratory.
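    The Brooks-Corey drainage curve and a linear trapping model can be written compactly: Pc(Sw) = Pe · Se^(-1/λ) with effective saturation Se = (Sw - Swi)/(1 - Swi), and trapped gas saturation taken proportional to the gas saturation reached at flow reversal. The sketch below is a minimal illustration with hypothetical parameter values (entry pressure, pore-size index, irreducible water saturation, trapping coefficient); it is not the parameterisation fitted in the study.

    ```python
    import numpy as np

    def brooks_corey_pc(sw, pe=5.0e3, lam=2.0, swi=0.2):
        """Brooks-Corey drainage capillary pressure [Pa].
        sw  : water saturation
        pe  : entry pressure [Pa]
        lam : pore-size distribution index
        swi : irreducible water saturation
        """
        se = np.clip((sw - swi) / (1.0 - swi), 1e-6, 1.0)
        return pe * se ** (-1.0 / lam)

    def trapped_gas_saturation(sg_initial, c_trap=0.55):
        """Linear trapping model: residual (trapped) CO2 saturation is taken
        proportional to the CO2 saturation at the turning point to imbibition."""
        return c_trap * sg_initial

    sw = np.linspace(0.25, 1.0, 6)
    print(brooks_corey_pc(sw))                 # drainage Pc along the curve
    print(trapped_gas_saturation(1.0 - sw))    # trapped CO2 after imbibition
    ```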

  18. Evidence of a forward energy cascade and Kolmogorov self-similarity in submesoscale ocean surface drifter observations

    NASA Astrophysics Data System (ADS)

    Poje, Andrew C.; Özgökmen, Tamay M.; Bogucki, Darek J.; Kirwan, A. D.

    2017-02-01

    Using two-point velocity and position data from the near-simultaneous release of O(100) GPS-tracked surface drifters in the northern Gulf of Mexico, we examine the applicability of classical turbulent scaling laws to upper ocean velocity fields. The dataset allows direct estimates of both velocity structure functions and the temporal evolution of the distribution of particle-pair separations. On spatial scales of 100 m to 10 km and time scales of order 1-10 days, all metrics of the observed surface fluctuations are consistent with standard Kolmogorov turbulence theory in an energy-cascade inertial-range regime. The third-order structure function is negative, with magnitude proportional to the separation distance, for scales ≲10 km, where local, fluctuating Rossby numbers are found to be larger than 0.1. The scale-independent energy dissipation rate, or downscale spectral flux, estimated from Kolmogorov's 4/5th law in this regime closely matches nearby microscale dissipation measurements near the surface. In contrast, similar statistics derived from a like-sized set of synthetic drifters advected by purely geostrophic velocities from altimetric AVISO data agree well with Kolmogorov-Kraichnan scaling for 2D turbulence in the forward enstrophy cascade range.
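    Kolmogorov's 4/5th law states S3(r) = -(4/5) ε r for the third-order longitudinal structure function in the inertial range, so the downscale energy flux can be estimated as ε ≈ -5 S3(r)/(4r). The sketch below shows one way such an estimate might be binned from drifter-pair data; the function and variable names are assumptions for illustration, not the authors' code, and the inputs would come from observed pair separations and longitudinal velocity differences.

    ```python
    import numpy as np

    def dissipation_from_s3(r, du_l, n_bins=20):
        """Estimate the energy dissipation / downscale flux rate from
        Kolmogorov's 4/5 law, S3(r) = -(4/5) * eps * r, using binned
        third-order longitudinal velocity differences.
        r    : pair separation distances [m]
        du_l : longitudinal velocity differences for each pair [m/s]
        """
        r, du_l = np.asarray(r), np.asarray(du_l)
        edges = np.logspace(np.log10(r.min()), np.log10(r.max()), n_bins + 1)
        centers, eps = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = (r >= lo) & (r < hi)
            if sel.sum() < 10:
                continue                      # skip poorly sampled bins
            s3 = np.mean(du_l[sel] ** 3)      # third-order structure function
            rc = np.sqrt(lo * hi)             # geometric bin centre
            centers.append(rc)
            eps.append(-5.0 * s3 / (4.0 * rc))  # positive for a forward cascade
        return np.array(centers), np.array(eps)
    ```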

  19. Islam and Environmental Consciousness: A New Scale Development.

    PubMed

    Emari, Hossein; Vazifehdoust, Hossein; Nikoomaram, Hashem

    2017-04-01

    This research proposed a new construct, Islamic environmental consciousness (IEC), and developed a measurement scale to support this construct. Churchill's (J Mark Res 16(1):64-73, 1979) paradigm, adapted by Negra and Mzoughi (Internet Res 22(4):426-442, 2012), was utilized. A total of 32 items were generated from verses of the Qur'an and from nine interviews with teachers in an Islamic seminary. This set of items was reduced to 19 after dropping redundant or non-representative items. In a pilot study, factor analysis of the 19-item scale yielded a seven-item scale with a two-factor structure and factor reliabilities ranging from 0.7 to 0.8. The Islamic environmental consciousness scale (IECS) was statistically confirmed and validated in a subsequent investigation. The proposed measurement scale warrants further exploratory study. Future research should assess the IECS's validity across different Muslim countries, locales, and various Islamic schools of thought and practice. IEC is proposed as a new construct that focuses primarily on the Qur'an and seeks to achieve acceptance by both Sunni and Shia denominations. In this study, both cognitive attitudes and behavioral aspects were considered in the design of the IECS.
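    Scale reliabilities in the 0.7-0.8 range are typically reported as Cronbach's alpha. As a hedged illustration only, the sketch below computes alpha for a simulated response matrix; the simulated data, the seed, and the seven-item assumption are placeholders and are not the study's data.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1.0)) * (1.0 - item_var / total_var)

    # Hypothetical responses to seven retained items on a 5-point scale:
    # a shared latent factor plus item-level noise.
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(50, 1))
    responses = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(50, 7))), 1, 5)
    print(round(cronbach_alpha(responses), 2))
    ```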

  20. Validation of the Chinese Challenging Behaviour Scale: clinical correlates of challenging behaviours in nursing home residents with dementia.

    PubMed

    Lam, Chi Leung; Chan, W C; Mok, Cycbie C M; Li, S W; Lam, Linda C W

    2006-08-01

    Behavioural and psychological symptoms of dementia (BPSD) are associated with considerable burden to patients with dementia and their caregivers. Formal caregivers in residential care settings face different challenges when delivering care. This study aimed at assessing the clinical correlates of challenging BPSD using the Chinese version of the Challenging Behaviour Scale (CCBS) designed for residential care settings. One hundred and twenty-five participants were recruited from three care-and-attention homes in Hong Kong. The CCBS was administered together with the Cantonese version of the Mini-Mental State Examination (MMSE), Clinical Dementia Rating (CDR), Disability Assessment for Dementia (DAD) and Neuropsychiatric Inventory (NPI) to explore the relationships between challenging behaviour and important clinical correlates. The CCBS had good internal consistency (alpha = 0.86), inter-rater reliability (ICC = 0.79) and test-retest reliability (ICC = 0.98). Factor analysis demonstrated a four-factor structure: hyperactive behaviours, hypoactive behaviours, verbally aggressive behaviours, and aberrant behaviours. Challenging behaviours were associated with male gender, cognitive impairment, functional disability, neuropsychiatric symptoms, and higher caregiver workload. The CCBS is a valid and reliable measure for assessing BPSD in residential care settings in the local Chinese community. It is useful for evaluating the challenges faced by formal caregivers during the daily care of patients with dementia.
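    The inter-rater and test-retest figures quoted above are intraclass correlations. As a hedged illustration, the sketch below computes a one-way random-effects ICC(1,1) for a simulated test-retest matrix; the exact ICC model used by the authors is not specified in the abstract, and the scores and names here are hypothetical.

    ```python
    import numpy as np

    def icc_oneway(ratings):
        """One-way random-effects intraclass correlation, ICC(1,1), for an
        (n_subjects x k_ratings) matrix, e.g. test vs. retest scores."""
        y = np.asarray(ratings, dtype=float)
        n, k = y.shape
        grand = y.mean()
        subj_means = y.mean(axis=1)
        ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
        ms_within = np.sum((y - subj_means[:, None]) ** 2) / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    # Hypothetical CCBS total scores at two administrations for 20 residents.
    rng = np.random.default_rng(1)
    true_score = rng.normal(25, 8, size=20)
    scores = np.column_stack([true_score + rng.normal(0, 1.5, 20) for _ in range(2)])
    print(round(icc_oneway(scores), 2))
    ```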
