Sample records for large scale randomized

  1. Lessons Learned from Large-Scale Randomized Experiments

    ERIC Educational Resources Information Center

    Slavin, Robert E.; Cheung, Alan C. K.

    2017-01-01

    Large-scale randomized studies provide the best means of evaluating practical, replicable approaches to improving educational outcomes. This article discusses the advantages, problems, and pitfalls of these evaluations, focusing on alternative methods of randomization, recruitment, ensuring high-quality implementation, dealing with attrition, and…

  2. Statistical analysis of mesoscale rainfall: Dependence of a random cascade generator on large-scale forcing

    NASA Technical Reports Server (NTRS)

    Over, Thomas M.; Gupta, Vijay K.

    1994-01-01

    Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
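
    The single-parameter cascade idea can be illustrated with a minimal sketch: a 1D discrete multiplicative cascade whose generator is lognormal with unit mean, so one parameter (sigma) controls the intermittency. The lognormal choice, cascade depth, and parameter value are illustrative assumptions, not the paper's fitted generator.

    ```python
    import numpy as np

    def random_cascade(levels=12, sigma=0.6, rng=None):
        """1D discrete multiplicative cascade with a lognormal generator.

        Each cell splits in two at every level and its density is multiplied
        by an i.i.d. generator W with E[W] = 1, so the single parameter
        sigma controls the intermittency of the simulated rain field.
        """
        rng = np.random.default_rng(rng)
        field = np.ones(1)
        for _ in range(levels):
            field = np.repeat(field, 2)  # subdivide every cell
            w = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=field.size)
            field *= w                   # apply the cascade generator
        return field

    rain = random_cascade(rng=0)
    print(rain.mean(), (rain**2).mean())  # higher moments grow with depth
    ```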

  3. The Relationship of Class Size Effects and Teacher Salary

    ERIC Educational Resources Information Center

    Peevely, Gary; Hedges, Larry; Nye, Barbara A.

    2005-01-01

    The effects of class size on academic achievement have been studied for decades. Although the results of small-scale, randomized experiments and large-scale, econometric studies point to positive effects of small classes, some scholars see the evidence as ambiguous. Recent analyses from a 4-year, large-scale, randomized experiment on the effects…

  4. Band gaps and localization of surface water waves over large-scale sand waves with random fluctuations

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Li, Yan; Shao, Hao; Zhong, Yaozhao; Zhang, Sai; Zhao, Zongxi

    2012-06-01

    Band structure and wave localization are investigated for sea surface water waves over large-scale sand wave topography. Sand wave height, sand wave width, water depth, and water width between adjacent sand waves have a significant impact on band gaps. Random fluctuations of sand wave height, sand wave width, and water depth induce water wave localization. However, random water width produces a perfect transmission tunnel of water waves at a certain frequency, so that localization does not occur no matter how large a disorder level is applied. Together with the theoretical results, field experimental observations in the Taiwan Bank suggest band gaps and wave localization as the physical mechanisms of sea surface water waves propagating over natural large-scale sand waves.

  5. Universal statistics of vortex tangles in three-dimensional random waves

    NASA Astrophysics Data System (ADS)

    Taylor, Alexander J.

    2018-02-01

    The tangled nodal lines (wave vortices) in random, three-dimensional wavefields are studied as an exemplar of a fractal loop soup. Their statistics are a three-dimensional counterpart to the characteristic random behaviour of nodal domains in quantum chaos, but in three dimensions the filaments can wind around one another to give distinctly different large scale behaviours. By tracing numerically the structure of the vortices, their conformations are shown to follow recent analytical predictions for random vortex tangles with periodic boundaries, where the local disorder of the model ‘averages out’ to produce large scale power law scaling relations whose universality classes do not depend on the local physics. These results explain previous numerical measurements in terms of an explicit effect of the periodic boundaries, where the statistics of the vortices are strongly affected by the large scale connectedness of the system even at arbitrarily high energies. The statistics are investigated primarily for static (monochromatic) wavefields, but the analytical results are further shown to directly describe the reconnection statistics of vortices evolving in certain dynamic systems, or occurring during random perturbations of the static configuration.

  6. Large-scale structure of randomly jammed spheres

    NASA Astrophysics Data System (ADS)

    Ikeda, Atsushi; Berthier, Ludovic; Parisi, Giorgio

    2017-05-01

    We numerically analyze the density field of three-dimensional randomly jammed packings of monodisperse soft frictionless spherical particles, paying special attention to fluctuations occurring at large length scales. We study in detail the two-point static structure factor at low wave vectors in Fourier space. We also analyze the nature of the density field in real space by studying the large-distance behavior of the two-point pair correlation function, of density fluctuations in subsystems of increasing sizes, and of the direct correlation function. We show that such real space analysis can be greatly improved by introducing a coarse-grained density field to disentangle genuine large-scale correlations from purely local effects. Our results confirm that both Fourier and real space signatures of vanishing density fluctuations at large scale are absent, indicating that randomly jammed packings are not hyperuniform. In addition, we establish that the pair correlation function displays a surprisingly complex structure at large distances, which is however not compatible with the long-range negative correlation of hyperuniform systems but fully compatible with an analytic form for the structure factor. This implies that the direct correlation function is short ranged, as we also demonstrate directly. Our results reveal that density fluctuations in jammed packings do not follow the behavior expected for random hyperuniform materials, but display instead a more complex behavior.
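
    Hyperuniformity tests of this kind come down to measuring the static structure factor at low wave vectors: a hyperuniform packing has S(k) → 0 as k → 0, while Poisson-like fluctuations give S(k) ≈ 1. A minimal sketch follows, with Poisson points standing in for a real packing and k sampled along one axis of a periodic box for brevity:

    ```python
    import numpy as np

    def structure_factor_1d(pos, box, n_modes=8):
        """S(k) = |sum_j exp(-i k x_j)|^2 / N at the smallest wave vectors
        allowed by a periodic box, sampled along one axis for brevity."""
        n = len(pos)
        results = []
        for m in range(1, n_modes + 1):
            k = 2 * np.pi * m / box
            rho_k = np.exp(-1j * k * pos[:, 0]).sum()
            results.append((k, np.abs(rho_k) ** 2 / n))
        return results

    rng = np.random.default_rng(8)
    pos = rng.uniform(0.0, 10.0, size=(5000, 3))  # stand-in for a packing
    for k, s in structure_factor_1d(pos, box=10.0)[:3]:
        print(f"k={k:.2f}  S(k)={s:.3f}")  # Poisson points: S(k) ~ 1
    ```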

  7. A novel artificial fish swarm algorithm for solving large-scale reliability-redundancy application problem.

    PubMed

    He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi

    2015-11-01

    A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified into three types: foraging behavior, reproductive behavior, and random behavior. Foraging employs two position-updating strategies; selection and crossover operators define the reproductive ability of an artificial fish; and the random behavior, which is essentially a mutation strategy, uses the basic cloud generator as its mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for the large-scale RAP.
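
    A heavily simplified skeleton of the three behavior classes might look as follows; the toy objective, step sizes, and the Gaussian stand-in for the cloud-generator mutation are all assumptions, not the paper's operators.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fitness = lambda x: -np.sum(x**2)  # toy objective to maximize

    def foraging(fish, step=0.1):
        # probe a nearby position, move only on improvement
        # (a stand-in for the paper's two position-updating strategies)
        trial = fish + step * rng.standard_normal(fish.shape)
        return trial if fitness(trial) > fitness(fish) else fish

    def reproduction(school):
        # selection + one-point crossover define reproductive ability
        i, j = rng.choice(len(school), size=2, replace=False)
        cut = rng.integers(1, school.shape[1])
        return np.concatenate([school[i][:cut], school[j][cut:]])

    def random_behavior(fish, scale=0.05):
        # mutation; Gaussian noise replaces the paper's cloud generator
        return fish + scale * rng.standard_normal(fish.shape)

    school = rng.uniform(-1, 1, size=(20, 5))
    for _ in range(200):
        school = np.array([foraging(f) for f in school])
        school[rng.integers(len(school))] = reproduction(school)
        k = rng.integers(len(school))
        school[k] = random_behavior(school[k])
    print(max(fitness(f) for f in school))
    ```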

  8. The topology of large-scale structure. I - Topology and the random phase hypothesis. [galactic formation models]

    NASA Technical Reports Server (NTRS)

    Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.

    1987-01-01

    Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.

  9. Predicting protein functions from redundancies in large-scale protein interaction networks

    NASA Technical Reports Server (NTRS)

    Samanta, Manoj Pratim; Liang, Shoudan

    2003-01-01

    Interpreting data from large-scale protein interaction experiments has been a challenging task because of the widespread presence of random false positives. Here, we present a network-based statistical algorithm that overcomes this difficulty and allows us to derive functions of unannotated proteins from large-scale interaction data. Our algorithm uses the insight that if two proteins share a significantly larger number of common interaction partners than expected at random, they have close functional associations. Analysis of publicly available data from Saccharomyces cerevisiae reveals >2,800 reliable functional associations, 29% of which involve at least one unannotated protein. By further analyzing these associations, we derive tentative functions for 81 unannotated proteins with high certainty. Our method is not overly sensitive to the false positives present in the data. Even after adding 50% randomly generated interactions to the measured data set, we are able to recover almost all (approximately 89%) of the original associations.
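
    The significance of shared partners can be scored with a hypergeometric tail probability, the standard way to ask whether two nodes share more neighbors than expected at random. The sketch below illustrates the idea on a stand-in graph; it is not the paper's exact statistic or data set.

    ```python
    import networkx as nx
    from scipy.stats import hypergeom

    def shared_partner_pvalue(g, u, v):
        """P(two nodes share at least the observed number of neighbors
        by chance), from the hypergeometric tail."""
        n = g.number_of_nodes()
        nu, nv = g.degree(u), g.degree(v)
        m = len(set(g[u]) & set(g[v]))  # observed common partners
        return hypergeom.sf(m - 1, n, nu, nv)  # P(X >= m)

    g = nx.karate_club_graph()  # stand-in for an interaction network
    print(shared_partner_pvalue(g, 0, 33))
    ```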

  10. Randomized central limit theorems: A unified theory.

    PubMed

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic-scaling all ensemble components by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic-scaling the ensemble components by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)-in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes-and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.

  11. Randomized central limit theorems: A unified theory

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles’ aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles’ extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic—scaling all ensemble components by a common deterministic scale. However, there are “random environment” settings in which the underlying scaling schemes are stochastic—scaling the ensemble components by different random scales. Examples of such settings include Holtsmark’s law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)—in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes—and present “randomized counterparts” to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.

  12. An effective medium approach to predict the apparent contact angle of drops on super-hydrophobic randomly rough surfaces.

    PubMed

    Bottiglione, F; Carbone, G

    2015-01-14

    The apparent contact angle of large 2D drops with randomly rough self-affine profiles is numerically investigated. The numerical approach is based upon the assumption of a large separation of length scales, i.e. it is assumed that the roughness length scales are much smaller than the drop size, making it possible to treat the problem through a mean-field-like approach. The apparent contact angle at equilibrium is calculated in all wetting regimes from full wetting (Wenzel state) to partial wetting (Cassie state). It was found that for very large values of the Wenzel roughness parameter (r_W > -1/cos θ_Y, where θ_Y is Young's contact angle), the interface approaches the perfect non-wetting condition and the apparent contact angle is almost equal to 180°. The results are compared with the case of roughness on a single scale (a sinusoidal surface), and it is found that, given the same value of the Wenzel roughness parameter r_W, the apparent contact angle is much larger for the randomly rough surface, proving that the multi-scale character of randomly rough surfaces is a key factor in enhancing superhydrophobicity. Moreover, it is shown that for millimetre-sized drops, the actual drop pressure at static equilibrium weakly affects the wetting regime, which instead seems to be dominated by the roughness parameter. For this reason a methodology to estimate the apparent contact angle is proposed, which relies only upon the micro-scale properties of the rough surface.

  13. The use of single-date MODIS imagery for estimating large-scale urban impervious surface fraction with spectral mixture analysis and machine learning techniques

    NASA Astrophysics Data System (ADS)

    Deng, Chengbin; Wu, Changshan

    2013-12-01

    Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small mean absolute error (MAE) values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
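
    The core of the approach, solving for endmember spectra from sample pixels with known abundances and then unmixing, can be sketched with ordinary least squares plus a non-negativity-constrained inversion. The synthetic data, band count, and NNLS-with-renormalization step are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def estimate_endmembers(A, R):
        # least squares solution of A @ E = R for endmember spectra E,
        # given sample pixels with known abundances A and reflectances R
        E, *_ = np.linalg.lstsq(A, R, rcond=None)
        return E

    def unmix(pixel, E):
        # constrained unmixing: non-negative fractions, renormalized
        frac, _ = nnls(E.T, pixel)
        return frac / frac.sum()

    rng = np.random.default_rng(1)
    A = rng.dirichlet(np.ones(3), size=50)    # 50 samples, 3 endmembers
    E_true = rng.uniform(0, 1, size=(3, 7))   # 7 spectral bands
    R = A @ E_true + 0.01 * rng.standard_normal((50, 7))
    E = estimate_endmembers(A, R)
    print(unmix(R[0], E), A[0])               # recovered vs true fractions
    ```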

  14. The cosmological principle is not in the sky

    NASA Astrophysics Data System (ADS)

    Park, Chan-Gyung; Hyun, Hwasu; Noh, Hyerim; Hwang, Jai-chan

    2017-08-01

    The homogeneity of the matter distribution at large scales, known as the cosmological principle, is a central assumption in the standard cosmological model. The assumption is testable, however, and thus no longer needs to be taken as a principle. Here we perform a test for spatial homogeneity using the Sloan Digital Sky Survey Luminous Red Galaxies (LRG) sample by counting galaxies within a specified volume, with the radius scale varying up to 300 h⁻¹ Mpc. We directly confront the large-scale structure data with the definition of spatial homogeneity by comparing the averages and dispersions of galaxy number counts with the ranges allowed for a random distribution with homogeneity. The LRG sample shows significantly larger dispersions of number counts than the random catalogues up to the 300 h⁻¹ Mpc scale, and even the average is located far outside the range allowed in the random distribution; the deviations are statistically impossible to realize in the random distribution. This implies that the cosmological principle does not hold even at such large scales. The same analysis of mock galaxies derived from an N-body simulation, however, suggests that the LRG sample is consistent with the current paradigm of cosmology; thus the simulation is also not homogeneous on that scale. We conclude that the cosmological principle is neither in the observed sky nor demanded to be there by the standard cosmological world model. This reveals the nature of the cosmological principle adopted in the modern cosmology paradigm, and opens a new field of research in theoretical cosmology.
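
    The test statistic amounts to counts-in-spheres: place spheres of radius R in the survey volume, count galaxies, and compare the mean and dispersion against random (Poisson) catalogues. A minimal sketch on a synthetic catalogue follows; units and geometry are placeholders:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def counts_in_spheres(points, radius, n_centres=500, rng=None):
        """Mean and dispersion of counts in randomly placed spheres."""
        rng = np.random.default_rng(rng)
        tree = cKDTree(points)
        lo = points.min(axis=0) + radius      # keep spheres inside the box
        hi = points.max(axis=0) - radius
        centres = rng.uniform(lo, hi, size=(n_centres, 3))
        counts = np.array([len(tree.query_ball_point(c, radius))
                           for c in centres])
        return counts.mean(), counts.std()

    rng = np.random.default_rng(2)
    random_cat = rng.uniform(0, 1000, size=(20000, 3))  # homogeneous catalogue
    print(counts_in_spheres(random_cat, radius=100.0))
    # a clustered galaxy sample would show a larger dispersion at fixed mean
    ```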

  15. Multimode resource-constrained multiple project scheduling problem under fuzzy random environment and its application to a large scale hydropower construction project.

    PubMed

    Xu, Jiuping; Feng, Cuiying

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.

  16. Multimode Resource-Constrained Multiple Project Scheduling Problem under Fuzzy Random Environment and Its Application to a Large Scale Hydropower Construction Project

    PubMed Central

    Xu, Jiuping

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708

  17. Random access in large-scale DNA data storage.

    PubMed

    Organick, Lee; Ang, Siena Dumas; Chen, Yuan-Jyue; Lopez, Randolph; Yekhanin, Sergey; Makarychev, Konstantin; Racz, Miklos Z; Kamath, Govinda; Gopalan, Parikshit; Nguyen, Bichlien; Takahashi, Christopher N; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Seelig, Georg; Ceze, Luis; Strauss, Karin

    2018-03-01

    Synthetic DNA is durable and can encode digital data with high density, making it an attractive medium for data storage. However, recovering stored data at large scale currently requires all the DNA in a pool to be sequenced, even if only a subset of the information needs to be extracted. Here, we encode and store 35 distinct files (over 200 MB of data) in more than 13 million DNA oligonucleotides, and show that we can recover each file individually and with no errors, using a random access approach. We design and validate a large library of primers that enable individual recovery of all files stored within the DNA. We also develop an algorithm that greatly reduces the sequencing read coverage required for error-free decoding by maximizing information from all sequence reads. These advances demonstrate a viable, large-scale system for DNA data storage and retrieval.
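
    The random access mechanism rests on file-specific primer pairs: only oligos flanked by the right primers are amplified and sequenced. Below is a toy model of that selection step; the sequences and file names are made up, and real PCR selection and error correction are far more involved.

    ```python
    # toy model of primer-based random access: each file's oligos carry a
    # file-specific primer pair, so one file can be amplified (here:
    # filtered) out of the pool without sequencing everything else
    PRIMERS = {"file_a": ("ACGTAC", "TTGCAA"),
               "file_b": ("GGATCC", "CCTAGG")}  # invented sequences

    def synthesize(file_id, payloads):
        fwd, rev = PRIMERS[file_id]
        return [fwd + p + rev for p in payloads]

    def random_access(pool, file_id):
        fwd, rev = PRIMERS[file_id]
        return [o[len(fwd):-len(rev)] for o in pool
                if o.startswith(fwd) and o.endswith(rev)]

    pool = synthesize("file_a", ["AAAA", "CCCC"]) + synthesize("file_b", ["GGGG"])
    print(random_access(pool, "file_a"))  # ['AAAA', 'CCCC']
    ```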

  18. Topological analysis of the CfA redshift survey

    NASA Technical Reports Server (NTRS)

    Vogeley, Michael S.; Park, Changbom; Geller, Margaret J.; Huchra, John P.; Gott, J. Richard, III

    1994-01-01

    We study the topology of large-scale structure in the Center for Astrophysics Redshift Survey, which now includes approximately 12,000 galaxies with limiting magnitude m_B ≤ 15.5. The dense sampling and large volume of this survey allow us to compute the topology on smoothing scales from 6 to 20/h Mpc; we thus examine the topology of structure in both 'nonlinear' and 'linear' regimes. On smoothing scales ≤ 10/h Mpc this sample has 3 times the number of resolution elements of samples examined in previous studies. Isodensity surfaces of the smoothed galaxy density field demonstrate that coherent high-density structures and large voids dominate the galaxy distribution. We compute the genus-threshold density relation for isodensity surfaces of the CfA survey. To quantify phase correlation in these data, we compare the CfA genus with the genus of realizations of Gaussian random fields with the power spectrum measured for the CfA survey. On scales ≤ 10/h Mpc the observed genus amplitude is smaller than random phase (96% confidence level). This decrement reflects the degree of phase coherence in the observed galaxy distribution; in other words, the genus amplitude on these scales is not a good measure of the power spectrum slope. On scales greater than 10/h Mpc, where the galaxy distribution is roughly in the 'linear' regime, the genus amplitude is consistent with the random phase amplitude. The shape of the genus curve reflects the strong coherence in the observed structure; the observed genus curve appears broader than random phase (94% confidence level for smoothing scales ≤ 10/h Mpc) because the topology is spongelike over a very large range of density thresholds. This departure from random phase is consistent with a distribution like a filamentary net of 'walls with holes.' On smoothing scales approaching approximately 20/h Mpc the shape of the CfA genus curve is consistent with random phase. There is very weak evidence for a shift of the genus toward a 'bubble-like' topology. To test cosmological models, we compute the genus for mock CfA surveys drawn from large (L ≳ 400/h Mpc) N-body simulations of three variants of the cold dark matter (CDM) cosmogony. The genus amplitude of the 'standard' CDM model (Ωh = 0.5, b = 1.5) differs from the observations (96% confidence level) on smoothing scales ≲ 10/h Mpc. An open CDM model (Ωh = 0.2) and a CDM model with nonzero cosmological constant (Ωh = 0.24, λ₀ = 0.6) are consistent with the observed genus amplitude over the full range of smoothing scales. All of these models fail (97% confidence level) to match the broadness of the observed genus curve on smoothing scales ≤ 10/h Mpc.

  19. Scale-free characteristics of random networks: the topology of the world-wide web

    NASA Astrophysics Data System (ADS)

    Barabási, Albert-László; Albert, Réka; Jeong, Hawoong

    2000-06-01

    The world-wide web forms a large directed graph, whose vertices are documents and edges are links pointing from one document to another. Here we demonstrate that despite its apparent random character, the topology of this graph has a number of universal scale-free characteristics. We introduce a model that leads to a scale-free network, capturing in a minimal fashion the self-organization processes governing the world-wide web.
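
    The growth model the authors introduce is preferential attachment; networkx implements it directly, and the power-law tail shows up as a roughly straight line in a log-log degree histogram. The sizes and bin choices below are arbitrary:

    ```python
    import networkx as nx
    import numpy as np

    # preferential attachment: each new vertex attaches to m existing
    # vertices with probability proportional to their current degree
    g = nx.barabasi_albert_graph(n=20_000, m=2, seed=42)

    deg = np.array([d for _, d in g.degree()])
    hist, edges = np.histogram(deg, bins=np.logspace(0.3, 3, 25))
    mask = hist > 0
    slope, _ = np.polyfit(np.log10(edges[:-1][mask]), np.log10(hist[mask]), 1)
    print(f"log-log degree histogram slope ~ {slope:.2f}")  # steeply negative
    ```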

  20. Random-field-induced disordering mechanism in a disordered ferromagnet: Between the Imry-Ma and the standard disordering mechanism

    NASA Astrophysics Data System (ADS)

    Andresen, Juan Carlos; Katzgraber, Helmut G.; Schechter, Moshe

    2017-12-01

    Random fields disorder Ising ferromagnets by aligning single spins in the direction of the random field in three space dimensions, or by flipping large ferromagnetic domains in dimensions two and below. While the former requires random fields of typical magnitude similar to the interaction strength, the latter Imry-Ma mechanism only requires infinitesimal random fields. Recently, it has been shown that for dilute anisotropic dipolar systems a third mechanism exists, where the ferromagnetic phase is disordered by finite-size glassy domains at a random field of finite magnitude that is considerably smaller than the typical interaction strength. Using large-scale Monte Carlo simulations and zero-temperature numerical approaches, we show that this mechanism applies to disordered ferromagnets with competing short-range ferromagnetic and antiferromagnetic interactions, suggesting its generality in ferromagnetic systems with competing interactions and an underlying spin-glass phase. A finite-size-scaling analysis of the magnetization distribution suggests that the transition might be first order.

  1. Intervention for First Graders with Limited Number Knowledge: Large-Scale Replication of a Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Gersten, Russell; Rolfhus, Eric; Clarke, Ben; Decker, Lauren E.; Wilkins, Chuck; Dimino, Joseph

    2015-01-01

    Replication studies are extremely rare in education. This randomized controlled trial (RCT) is a scale-up replication of Fuchs et al., which in a sample of 139 found a statistically significant positive impact for Number Rockets, a small-group intervention for at-risk first graders that focused on building understanding of number operations. The…

  2. A large-scale cluster randomized trial to determine the effects of community-based dietary sodium reduction--the China Rural Health Initiative Sodium Reduction Study.

    PubMed

    Li, Nicole; Yan, Lijing L; Niu, Wenyi; Labarthe, Darwin; Feng, Xiangxian; Shi, Jingpu; Zhang, Jianxin; Zhang, Ruijuan; Zhang, Yuhong; Chu, Hongling; Neiman, Andrea; Engelgau, Michael; Elliott, Paul; Wu, Yangfeng; Neal, Bruce

    2013-11-01

    Cardiovascular diseases are the leading cause of death and disability in China. High blood pressure caused by excess intake of dietary sodium is widespread, and an effective sodium reduction program has the potential to improve cardiovascular health. This study is a large-scale, cluster-randomized trial conducted in five northern Chinese provinces. Two counties were selected from each province and 12 townships in each county, making a total of 120 clusters. Within each township one village was selected for participation, with 1:1 randomization stratified by county. The sodium reduction intervention comprises community health education and a food supply strategy based upon providing access to salt substitute. Subsidization of the price of salt substitute was done in 30 intervention villages selected at random. Control villages continued usual practices. The primary outcome for the study is the dietary sodium intake level estimated from assays of 24-hour urine. The trial recruited and randomized 120 townships in April 2011. The sodium reduction program was commenced in the 60 intervention villages between May and June of that year, with outcome surveys scheduled for October to December 2012. Baseline data collection shows that randomisation achieved good balance across groups. The establishment of the China Rural Health Initiative has enabled the launch of this large-scale trial designed to identify a novel, scalable strategy for reduction of dietary sodium and control of blood pressure. If proved effective, the intervention could plausibly be implemented at low cost in large parts of China and other countries worldwide.
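
    The allocation scheme described, 1:1 randomization of villages stratified by county, is straightforward to express in code; the county and village names below are placeholders:

    ```python
    import random

    def randomize_villages(villages_by_county, seed=2011):
        """1:1 intervention/control allocation, stratified by county."""
        rng = random.Random(seed)
        allocation = {}
        for county, villages in villages_by_county.items():
            shuffled = villages[:]
            rng.shuffle(shuffled)
            half = len(shuffled) // 2
            allocation.update({v: "intervention" for v in shuffled[:half]})
            allocation.update({v: "control" for v in shuffled[half:]})
        return allocation

    # 10 counties x 12 villages = 120 clusters, mirroring the design
    counties = {f"county{i}": [f"c{i}_v{j}" for j in range(12)]
                for i in range(10)}
    alloc = randomize_villages(counties)
    print(sum(a == "intervention" for a in alloc.values()))  # 60
    ```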

  3. Cluster Tails for Critical Power-Law Inhomogeneous Random Graphs

    NASA Astrophysics Data System (ADS)

    van der Hofstad, Remco; Kliem, Sandra; van Leeuwaarden, Johan S. H.

    2018-04-01

    Recently, the scaling limit of cluster sizes for critical inhomogeneous random graphs of rank-1 type having finite variance but infinite third moment degrees was obtained in Bhamidi et al. (Ann Probab 40:2299-2361, 2012). It was proved that when the degrees obey a power law with exponent τ ∈ (3, 4), the sequence of clusters ordered in decreasing size and multiplied through by n^{-(τ-2)/(τ-1)} converges as n → ∞ to a sequence of decreasing non-degenerate random variables. Here, we study the tails of the limit of the rescaled largest cluster, i.e., the probability that the scaling limit of the largest cluster takes a large value u, as a function of u. This extends a related result of Pittel (J Combin Theory Ser B 82(2):237-269, 2001) for the Erdős-Rényi random graph to the setting of rank-1 inhomogeneous random graphs with infinite third moment degrees. We make use of delicate large deviations and weak convergence arguments.

  4. A Data Management System Integrating Web-Based Training and Randomized Trials

    ERIC Educational Resources Information Center

    Muroff, Jordana; Amodeo, Maryann; Larson, Mary Jo; Carey, Margaret; Loftin, Ralph D.

    2011-01-01

    This article describes a data management system (DMS) developed to support a large-scale randomized study of an innovative web-course that was designed to improve substance abuse counselors' knowledge and skills in applying a substance abuse treatment method (i.e., cognitive behavioral therapy; CBT). The randomized trial compared the performance…

  5. Implementing Large-Scale Instructional Technology in Kenya: Changing Instructional Practice and Developing Accountability in a National Education System

    ERIC Educational Resources Information Center

    Piper, Benjamin; Oyanga, Arbogast; Mejia, Jessica; Pouezevara, Sarah

    2017-01-01

    Previous large-scale education technology interventions have shown only modest impacts on student achievement. Building on results from an earlier randomized controlled trial of three different applications of information and communication technologies (ICTs) on primary education in Kenya, the Tusome Early Grade Reading Activity developed the…

  6. Effectiveness and cost-effectiveness of telehealthcare for chronic obstructive pulmonary disease: study protocol for a cluster randomized controlled trial.

    PubMed

    Udsen, Flemming Witt; Lilholt, Pernille Heyckendorff; Hejlesen, Ole; Ehlers, Lars Holger

    2014-05-21

    Several feasibility studies show promising results of telehealthcare on health outcomes and health-related quality of life for patients suffering from chronic obstructive pulmonary disease, and some of these studies show that telehealthcare may even lower healthcare costs. However, the only large-scale trial so far - the Whole System Demonstrator Project in England - has raised doubts about these results, since it concluded that telehealthcare as a supplement to usual care is not likely to be cost-effective compared with usual care alone. The present study, known as 'TeleCare North' in Denmark, seeks to address these doubts by implementing a large-scale, pragmatic, cluster-randomized trial with a nested economic evaluation. The purpose of the study is to assess the effectiveness and the cost-effectiveness of a telehealth solution for patients suffering from chronic obstructive pulmonary disease compared to usual practice. General practitioners will be responsible for recruiting eligible participants (1,200 participants are expected) for the trial in the geographical area of the North Denmark Region. Twenty-six municipality districts in the region define the randomization clusters. The primary outcomes are changes in health-related quality of life and the incremental cost-effectiveness ratio measured from baseline to follow-up at 12 months. Secondary outcomes are changes in mortality and physiological indicators (diastolic and systolic blood pressure, pulse, oxygen saturation, and weight). There has been a call for large-scale clinical trials with rigorous cost-effectiveness assessments in telehealthcare research. This study is meant to improve the international evidence base for the effectiveness and cost-effectiveness of telehealthcare for patients suffering from chronic obstructive pulmonary disease by implementing a large-scale pragmatic cluster-randomized clinical trial. Trial registration: ClinicalTrials.gov NCT01984840, November 14, 2013.

  7. Towards large scale multi-target tracking

    NASA Astrophysics Data System (ADS)

    Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus

    2014-06-01

    Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large-scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.

  8. A Large-Scale Cluster Randomized Trial to Determine the Effects of Community-Based Dietary Sodium Reduction – The China Rural Health Initiative Sodium Reduction Study

    PubMed Central

    Li, Nicole; Yan, Lijing L.; Niu, Wenyi; Labarthe, Darwin; Feng, Xiangxian; Shi, Jingpu; Zhang, Jianxin; Zhang, Ruijuan; Zhang, Yuhong; Chu, Hongling; Neiman, Andrea; Engelgau, Michael; Elliott, Paul; Wu, Yangfeng; Neal, Bruce

    2013-01-01

    Background: Cardiovascular diseases are the leading cause of death and disability in China. High blood pressure caused by excess intake of dietary sodium is widespread, and an effective sodium reduction program has the potential to improve cardiovascular health. Design: This study is a large-scale, cluster-randomized trial conducted in five northern Chinese provinces. Two counties were selected from each province and 12 townships in each county, making a total of 120 clusters. Within each township one village was selected for participation, with 1:1 randomization stratified by county. The sodium reduction intervention comprises community health education and a food supply strategy based upon providing access to salt substitute. Subsidization of the price of salt substitute was done in 30 intervention villages selected at random. Control villages continued usual practices. The primary outcome for the study is the dietary sodium intake level estimated from assays of 24-hour urine. Trial status: The trial recruited and randomized 120 townships in April 2011. The sodium reduction program was commenced in the 60 intervention villages between May and June of that year, with outcome surveys scheduled for October to December 2012. Baseline data collection shows that randomisation achieved good balance across groups. Discussion: The establishment of the China Rural Health Initiative has enabled the launch of this large-scale trial designed to identify a novel, scalable strategy for reduction of dietary sodium and control of blood pressure. If proved effective, the intervention could plausibly be implemented at low cost in large parts of China and other countries worldwide. PMID:24176436

  9. On the statistical mechanics of the 2D stochastic Euler equation

    NASA Astrophysics Data System (ADS)

    Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg

    2011-12-01

    The dynamics of vortices and large-scale structures is qualitatively very different in two-dimensional flows compared to their three-dimensional counterparts, due to the presence of multiple integrals of motion. These are believed to be responsible for a variety of phenomena observed in Euler flow, such as the formation of large-scale coherent structures, the existence of meta-stable states, and random abrupt changes in the topology of the flow. In this paper we study the stochastic dynamics of a finite-dimensional approximation of the 2D Euler flow based on the Lie algebra su(N), which preserves all integrals of motion. In particular, we exploit the rich algebraic structure responsible for the existence of Euler's conservation laws to calculate the invariant measures, explore their properties, and study the approach to equilibrium. Unexpectedly, we find deep connections between the equilibrium measures of finite-dimensional su(N) truncations of the stochastic Euler equations and random matrix models. Our work can be regarded as a preparation for addressing the questions of large-scale structures, meta-stability, and the dynamics of random transitions between different flow topologies in stochastic 2D Euler flows.

  10. Large-Scale Cubic-Scaling Random Phase Approximation Correlation Energy Calculations Using a Gaussian Basis.

    PubMed

    Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg

    2016-12-13

    We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring O(N³) operations and O(N²) memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.

  11. Real-time fast physical random number generator with a photonic integrated circuit.

    PubMed

    Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu

    2017-03-20

    Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.
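
    NIST SP 800-22 is a battery of statistical tests; the simplest, the frequency (monobit) test, checks that the ±1 sum of the bit sequence is consistent with a standard normal. A self-contained sketch follows, using Python's PRNG as the bit source rather than the photonic generator:

    ```python
    import math
    import random

    def monobit_pvalue(bits):
        """NIST SP 800-22 frequency (monobit) test: under randomness the
        normalized +/-1 sum is ~ N(0,1), so p = erfc(|S| / sqrt(2n))."""
        n = len(bits)
        s = sum(1 if b else -1 for b in bits)
        return math.erfc(abs(s) / math.sqrt(2 * n))

    bits = [random.getrandbits(1) for _ in range(10**6)]
    p = monobit_pvalue(bits)
    print(p, "pass" if p >= 0.01 else "fail")  # NIST significance level 0.01
    ```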

  12. Spatiotemporal property and predictability of large-scale human mobility

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin

    2018-04-01

    Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied due to their application potential for human behavior prediction and recommendation, and control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of the scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and a high possibility for prediction. Furthermore, a scale-free mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.
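
    The two ingredients named above match the well-known exploration-and-preferential-return (EPR) template, sketched below; the specific parameter values, and the Gaussian draw of the exploration tendency across individuals, are assumptions rather than the paper's calibrated model:

    ```python
    import numpy as np

    def epr_trajectory(steps=10_000, rho=0.6, gamma=0.21, rng=None):
        """Exploration and preferential return: with probability
        rho * S**-gamma (S = locations seen so far) visit a new location,
        otherwise return to a known one proportionally to visit counts."""
        rng = np.random.default_rng(rng)
        visits = {0: 1}
        traj = [0]
        for _ in range(steps):
            S = len(visits)
            if rng.random() < rho * S ** (-gamma):
                loc = S  # new locations are labelled 0, 1, 2, ...
            else:
                locs = np.array(list(visits))
                freq = np.array([visits[l] for l in locs], dtype=float)
                loc = int(rng.choice(locs, p=freq / freq.sum()))
            visits[loc] = visits.get(loc, 0) + 1
            traj.append(loc)
        return traj

    print(len(set(epr_trajectory(rng=0))))  # distinct locations visited
    ```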

  13. Use of electronic healthcare records in large-scale simple randomized trials at the point of care for the documentation of value-based medicine.

    PubMed

    van Staa, T-P; Klungel, O; Smeeth, L

    2014-06-01

    A solid foundation of evidence of the effects of an intervention is a prerequisite of evidence-based medicine. The best source of such evidence is considered to be randomized trials, which are able to avoid confounding. However, they may not always estimate effectiveness in clinical practice. Databases that collate anonymized electronic health records (EHRs) from different clinical centres have been widely used for many years in observational studies. Randomized point-of-care trials have been initiated recently to recruit and follow patients using the data from EHR databases. In this review, we describe how EHR databases can be used for conducting large-scale simple trials and discuss the advantages and disadvantages of their use.

  14. A quantitative approach to the topology of large-scale structure. [for galactic clustering computation]

    NASA Technical Reports Server (NTRS)

    Gott, J. Richard, III; Weinberg, David H.; Melott, Adrian L.

    1987-01-01

    A quantitative measure of the topology of large-scale structure, the genus of density contours in a smoothed density distribution, is described and applied. For random phase (Gaussian) density fields, the mean genus per unit volume exhibits a universal dependence on threshold density, with a normalizing factor that can be calculated from the power spectrum. If large-scale structure formed from the gravitational instability of small-amplitude density fluctuations, the topology observed today on suitable scales should follow the topology in the initial conditions. The technique is illustrated by applying it to simulations of galaxy clustering in a flat universe dominated by cold dark matter. The technique is also applied to a volume-limited sample of the CfA redshift survey and to a model in which galaxies reside on the surfaces of polyhedral 'bubbles'. The topology of the evolved mass distribution and 'biased' galaxy distribution in the cold dark matter models closely matches the topology of the density fluctuations in the initial conditions. The topology of the observational sample is consistent with the random phase, cold dark matter model.
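
    A minimal version of the genus statistic can be computed by smoothing a density field, thresholding it, and measuring the Euler characteristic of each excursion set. Sign and offset conventions for converting χ to genus vary; the one below is an assumption.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.measure import euler_number

    def genus_curve(field, thresholds, smoothing=4.0):
        """Genus-threshold curve of a 3D density field: smooth, threshold,
        and measure the Euler characteristic chi of each excursion set."""
        smooth = gaussian_filter(field, smoothing)
        nu = (smooth - smooth.mean()) / smooth.std()
        return np.array([1 - euler_number(nu > t, connectivity=3)
                         for t in thresholds])  # genus = 1 - chi (one convention)

    rng = np.random.default_rng(3)
    gauss_field = rng.standard_normal((64, 64, 64))  # random phase field
    print(genus_curve(gauss_field, thresholds=[-1.0, 0.0, 1.0]))
    ```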

  15. Parameters affecting the resilience of scale-free networks to random failures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Link, Hamilton E.; LaViolette, Randall A.; Lane, Terran

    2005-09-01

    It is commonly believed that scale-free networks are robust to massive numbers of random node deletions. For example, Cohen et al. in (1) study scale-free networks including some which approximate the measured degree distribution of the Internet. Their results suggest that if each node in this network failed independently with probability 0.99, most of the remaining nodes would still be connected in a giant component. In this paper, we show that a large and important subclass of scale-free networks are not robust to massive numbers of random node deletions. In particular, we study scale-free networks which have a minimum node degree of 1 and a power-law degree distribution beginning with nodes of degree 1 (power-law networks). We show that, in a power-law network approximating the Internet's reported distribution, when the probability of deletion of each node is 0.5 only about 25% of the surviving nodes in the network remain connected in a giant component, and the giant component does not persist beyond a critical failure rate of 0.9. The new result is partially due to improved analytical accommodation of the large number of degree-0 nodes that result after node deletions. Our results apply to power-law networks with a wide range of power-law exponents, including Internet-like networks. We give both analytical and empirical evidence that such networks are not generally robust to massive random node deletions.
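
    The experiment is easy to reproduce in outline: build a configuration-model network from a power-law degree sequence with minimum degree 1, delete each node independently, and measure the giant component among the survivors. The exponent and sizes below are illustrative, not the paper's Internet-fitted values:

    ```python
    import networkx as nx
    import numpy as np

    def giant_fraction_after_failure(g, p_fail, rng):
        """Fraction of surviving nodes in the largest component after
        deleting each node independently with probability p_fail."""
        survivors = [v for v in g if rng.random() > p_fail]
        h = g.subgraph(survivors)
        if h.number_of_nodes() == 0:
            return 0.0
        giant = max(nx.connected_components(h), key=len)
        return len(giant) / h.number_of_nodes()

    rng = np.random.default_rng(4)
    deg = np.clip(rng.zipf(a=2.3, size=10_000), 1, 1000)  # min degree 1
    if deg.sum() % 2:
        deg[0] += 1                      # configuration model needs even sum
    g = nx.Graph(nx.configuration_model(deg))  # collapse multi-edges
    g.remove_edges_from(list(nx.selfloop_edges(g)))
    print(giant_fraction_after_failure(g, p_fail=0.5, rng=rng))
    ```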

  16. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
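
    The sketching idea can be shown on an overdetermined least-squares problem: multiply both sides by a short, fat random matrix S so the reduced system retains the information needed for the fit. A Gaussian sketch is used below as an assumption; the paper couples sketching with the PCGA rather than plain least squares:

    ```python
    import numpy as np

    def sketched_lstsq(G, d, sketch_rows, rng=None):
        """Solve min ||G m - d|| after applying a random sketching matrix S
        that compresses n_obs observation rows down to sketch_rows."""
        rng = np.random.default_rng(rng)
        S = rng.standard_normal((sketch_rows, G.shape[0])) / np.sqrt(sketch_rows)
        m, *_ = np.linalg.lstsq(S @ G, S @ d, rcond=None)
        return m

    rng = np.random.default_rng(5)
    G = rng.standard_normal((100_000, 50))   # many observations, few unknowns
    m_true = rng.standard_normal(50)
    d = G @ m_true + 0.01 * rng.standard_normal(100_000)
    m_est = sketched_lstsq(G, d, sketch_rows=500, rng=rng)
    print(np.linalg.norm(m_est - m_true))    # small despite 200x reduction
    ```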

  17. Effects of Interim Assessments on Student Achievement: Evidence from a Large-Scale Experiment

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros; Miller, Shazia R.; van der Ploeg, Arie; Li, Wei

    2016-01-01

    We use data from a large-scale, school-level randomized experiment conducted in 2010-2011 in public schools in Indiana. Our sample includes more than 30,000 students in 70 schools. We examine the impact of two interim assessment programs (i.e., mCLASS in Grades K-2 and Acuity in Grades 3-8) on mathematics and reading achievement. Two-level models…

  18. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    PubMed

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low-rank approximation approaches with significant memory savings for large-scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low-rank approximation of large-sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high-resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings is up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory-efficient low-rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large-scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018.
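
    The memory saving comes from never forming a full SVD of the dictionary: a Halko-style randomized SVD projects onto a small random subspace first. A minimal sketch on a synthetic low-rank "dictionary" follows; the sizes and rank are arbitrary assumptions:

    ```python
    import numpy as np

    def randomized_svd(D, rank, oversample=10, rng=None):
        """Halko-style randomized SVD: sample the range of D with a random
        matrix, orthonormalize, then do an exact SVD in the small subspace."""
        rng = np.random.default_rng(rng)
        probe = rng.standard_normal((D.shape[1], rank + oversample))
        Q, _ = np.linalg.qr(D @ probe)             # approximate range of D
        U_small, s, Vt = np.linalg.svd(Q.T @ D, full_matrices=False)
        return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

    rng = np.random.default_rng(6)
    # synthetic low-rank "dictionary" plus noise, standing in for MRF signals
    D = (rng.standard_normal((20_000, 20)) @ rng.standard_normal((20, 500))
         + 0.01 * rng.standard_normal((20_000, 500)))
    U, s, Vt = randomized_svd(D, rank=20)
    print(U.shape, s[:3])   # matching proceeds in the compressed space
    ```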

  19. Utilisation of ISA Reverse Genetics and Large-Scale Random Codon Re-Encoding to Produce Attenuated Strains of Tick-Borne Encephalitis Virus within Days.

    PubMed

    de Fabritus, Lauriane; Nougairède, Antoine; Aubry, Fabien; Gould, Ernest A; de Lamballerie, Xavier

    2016-01-01

    Large-scale codon re-encoding is a new method of attenuating RNA viruses. However, the use of infectious clones to generate attenuated viruses has inherent technical problems. We previously developed a bacterium-free reverse genetics protocol, designated ISA, and have now combined it with a large-scale random codon re-encoding method to produce attenuated tick-borne encephalitis virus (TBEV), a pathogenic flavivirus which causes febrile illness and encephalitis in humans. We produced wild-type (WT) and two re-encoded TBEVs, containing 273 or 273+284 synonymous mutations in the NS5 and NS5+NS3 coding regions, respectively. Both re-encoded viruses were attenuated when compared with the WT virus using a laboratory mouse model, and the relative level of attenuation increased with the degree of re-encoding. Moreover, all infected animals produced neutralizing antibodies. This novel, rapid and efficient approach to engineering attenuated viruses could potentially expedite the development of safe and effective new-generation live attenuated vaccines.
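
    Random codon re-encoding itself is simple to express: replace each codon with a randomly chosen synonym, so the protein sequence is unchanged while the nucleotide sequence accumulates silent mutations. A sketch using the standard genetic code follows; the paper's additional constraints (e.g. preserving regulatory elements) are omitted:

    ```python
    import random

    # standard genetic code grouped by amino acid (stop codons excluded)
    SYNONYMS = {
        "F": ["TTT", "TTC"], "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
        "I": ["ATT", "ATC", "ATA"], "M": ["ATG"],
        "V": ["GTT", "GTC", "GTA", "GTG"],
        "S": ["TCT", "TCC", "TCA", "TCG", "AGT", "AGC"],
        "P": ["CCT", "CCC", "CCA", "CCG"], "T": ["ACT", "ACC", "ACA", "ACG"],
        "A": ["GCT", "GCC", "GCA", "GCG"], "Y": ["TAT", "TAC"],
        "H": ["CAT", "CAC"], "Q": ["CAA", "CAG"], "N": ["AAT", "AAC"],
        "K": ["AAA", "AAG"], "D": ["GAT", "GAC"], "E": ["GAA", "GAG"],
        "C": ["TGT", "TGC"], "W": ["TGG"],
        "R": ["CGT", "CGC", "CGA", "CGG", "AGA", "AGG"],
        "G": ["GGT", "GGC", "GGA", "GGG"],
    }
    CODON_TO_AA = {c: aa for aa, cs in SYNONYMS.items() for c in cs}

    def reencode(cds, rng=None):
        """Swap every codon for a random synonym: the encoded protein is
        unchanged, but the RNA accumulates many silent mutations."""
        rng = rng or random.Random(0)
        codons = (cds[i:i + 3] for i in range(0, len(cds), 3))
        return "".join(rng.choice(SYNONYMS[CODON_TO_AA[c]])
                       if c in CODON_TO_AA else c for c in codons)

    print(reencode("ATGGCTGCAAGCGGA"))  # same protein, different codons
    ```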

  20. Evaluating the Health Impact of Large-Scale Public Policy Changes: Classical and Novel Approaches

    PubMed Central

    Basu, Sanjay; Meghani, Ankita; Siddiqi, Arjumand

    2018-01-01

    Large-scale public policy changes are often recommended to improve public health. Despite varying widely—from tobacco taxes to poverty-relief programs—such policies present a common dilemma to public health researchers: how to evaluate their health effects when randomized controlled trials are not possible. Here, we review the state of knowledge and experience of public health researchers who rigorously evaluate the health consequences of large-scale public policy changes. We organize our discussion by detailing approaches to address three common challenges of conducting policy evaluations: distinguishing a policy effect from time trends in health outcomes or preexisting differences between policy-affected and -unaffected communities (using difference-in-differences approaches); constructing a comparison population when a policy affects a population for whom a well-matched comparator is not immediately available (using propensity score or synthetic control approaches); and addressing unobserved confounders by utilizing quasi-random variations in policy exposure (using regression discontinuity, instrumental variables, or near-far matching approaches). PMID:28384086
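
    The first of these approaches, difference-in-differences, has a one-line estimator: the before-after change in the treated group minus the before-after change in the controls. A synthetic-data sketch follows (all numbers invented):

    ```python
    import numpy as np

    def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
        """DiD effect: the change in the treated group minus the change in
        the controls, netting out shared time trends and fixed pre-existing
        differences between the communities."""
        return ((np.mean(treat_post) - np.mean(treat_pre))
                - (np.mean(ctrl_post) - np.mean(ctrl_pre)))

    rng = np.random.default_rng(7)
    trend, effect = 0.5, -2.0                    # shared trend, true effect
    ctrl_pre = rng.normal(10, 1, 1000)
    ctrl_post = rng.normal(10 + trend, 1, 1000)
    treat_pre = rng.normal(12, 1, 1000)          # pre-existing difference
    treat_post = rng.normal(12 + trend + effect, 1, 1000)
    print(diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post))  # ~ -2
    ```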

  1. Flexible sampling large-scale social networks by self-adjustable random walk

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Ke; Zhu, Jonathan J. H.

    2016-12-01

    Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity, and frequent unavailability, of OSN population data. Sampling therefore becomes perhaps the only feasible solution. How to draw samples that represent the underlying OSNs has remained a formidable task for a number of conceptual and methodological reasons. In particular, most empirically driven studies of network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real, complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods: uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We mix both induced-edge and external-edge information of sampled nodes in the same sampling process. Our results show that the SARW sampling method is able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study aids the practice of OSN research by providing a much-needed sampling tool, the methodological development of large-scale network sampling through comparative evaluation of existing sampling methods, and the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge/assumptions and large-scale real OSN data.

  2. Coupled continuous-time random walks in a quenched random environment

    NASA Astrophysics Data System (ADS)

    Magdziarz, M.; Szczotka, W.

    2018-02-01

    We introduce a coupled continuous-time random walk with a coupling characteristic of Lévy walks. Additionally, we assume that the walker moves in a quenched random environment, i.e. the site disorder at each lattice point is fixed in time. We analyze the scaling limit of such a random walk. We show that for large times the behaviour of the analyzed process is exactly the same as in the case of the uncoupled quenched trap model for Lévy flights.

  3. Emergence of Multiscaling in a Random-Force Stirred Fluid

    NASA Astrophysics Data System (ADS)

    Yakhot, Victor; Donzis, Diego

    2017-07-01

    We consider the transition to strong turbulence in an infinite fluid stirred by a Gaussian random force. The transition is defined as the first appearance of anomalous scaling of normalized moments of velocity derivatives (dissipation rates) emerging from the low-Reynolds-number Gaussian background. It is shown that, due to multiscaling, strongly intermittent rare events can be quantitatively described in terms of an infinite number of different "Reynolds numbers" reflecting a multitude of anomalous scaling exponents. The theoretically predicted transition disappears at R_λ ≤ 3. The developed theory is in quantitative agreement with the outcome of large-scale numerical simulations.

  4. Resurrecting hot dark matter - Large-scale structure from cosmic strings and massive neutrinos

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.

    1988-01-01

    These are the results of a numerical simulation of the formation of large-scale structure from cosmic-string loops in a universe dominated by massive neutrinos (hot dark matter). This model has several desirable features. The final matter distribution contains isolated density peaks embedded in a smooth background, producing a natural bias in the distribution of luminous matter. Because baryons can accrete onto the cosmic strings before the neutrinos, the galaxies will have baryon cores and dark neutrino halos. Galaxy formation in this model begins much earlier than in random-phase models. On large scales the distribution of clustered matter visually resembles the CfA survey, with large voids and filaments.

  5. Reference Values of Within-District Intraclass Correlations of Academic Achievement by District Characteristics: Results from a Meta-Analysis of District-Specific Values

    ERIC Educational Resources Information Center

    Hedberg, E. C.; Hedges, Larry V.

    2014-01-01

    Randomized experiments are often considered the strongest designs to study the impact of educational interventions. Perhaps the most prevalent class of designs used in large scale education experiments is the cluster randomized design in which entire schools are assigned to treatments. In cluster randomized trials (CRTs) that assign schools to…

  6. A Bayesian Hierarchical Model for Large-Scale Educational Surveys: An Application to the National Assessment of Educational Progress. Research Report. ETS RR-04-38

    ERIC Educational Resources Information Center

    Johnson, Matthew S.; Jenkins, Frank

    2005-01-01

    Large-scale educational assessments such as the National Assessment of Educational Progress (NAEP) sample examinees to whom an exam will be administered. In most situations the sampling design is not a simple random sample and must be accounted for in the estimating model. After reviewing the current operational estimation procedure for NAEP, this…

  7. Statistical Analysis of Small-Scale Magnetic Flux Emergence Patterns: A Useful Subsurface Diagnostic?

    NASA Astrophysics Data System (ADS)

    Lamb, Derek A.

    2016-10-01

    While sunspots follow a well-defined pattern of emergence in space and time, small-scale flux emergence is assumed to occur randomly at all times in the quiet Sun. HMI's full-disk coverage, high cadence, spatial resolution, and duty cycle allow us to probe that basic assumption. Some case studies of emergence suggest that temporal clustering on spatial scales of 50-150 Mm may occur. If clustering is present, it could serve as a diagnostic of large-scale subsurface magnetic field structures. We present the results of a manual survey of small-scale flux emergence events over a short time period, and a statistical analysis addressing the question of whether these events show spatio-temporal behavior that is anything other than random.

  8. Scale-free Graphs for General Aviation Flight Schedules

    NASA Technical Reports Server (NTRS)

    Alexandov, Natalia M. (Technical Monitor); Kincaid, Rex K.

    2003-01-01

    In the late 1990s a number of researchers noticed that networks in biology, sociology, and telecommunications exhibited similar characteristics, unlike standard random networks. In particular, they found that the cumulative degree distributions of these graphs followed a power law rather than a binomial distribution, and that their clustering coefficients tended to a nonzero constant as the number of nodes, n, became large, rather than O(1/n). Moreover, these networks shared an important property with traditional random graphs: as n becomes large, the average shortest path length scales with log n. This latter property has been coined the small-world property. Taken together, these three properties (small-world, power law, and constant clustering coefficient) describe what are now most commonly referred to as scale-free networks. Since 1997 at least six books and over 400 articles have been written about scale-free networks. In this manuscript an overview of the salient characteristics of scale-free networks is given. Computational experience will be provided for two mechanisms that grow (dynamic) scale-free graphs. Additional computational experience will be given for constructing (static) scale-free graphs via a tabu search optimization approach. Finally, a discussion of potential applications to general aviation networks is given.
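
    One standard mechanism for growing scale-free graphs is preferential attachment. A minimal sketch (illustrative only, not necessarily either of the mechanisms studied in the manuscript):

        import random

        def grow_scale_free(n, m, seed=0):
            """Grow a graph by preferential attachment: each new node attaches
            m edges to existing nodes picked proportionally to their current
            degree, the classic mechanism producing a power-law degree
            distribution."""
            rng = random.Random(seed)
            edges = []
            repeated = []                 # node list with multiplicity ~ degree
            targets = list(range(m))      # the m seed nodes
            for new in range(m, n):
                edges += [(new, t) for t in targets]
                repeated += targets + [new] * m
                picked = set()            # uniform draws from `repeated` are
                while len(picked) < m:    # degree-proportional draws
                    picked.add(rng.choice(repeated))
                targets = list(picked)
            return edges

        print("edges grown:", len(grow_scale_free(50_000, 3)))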

  9. Homogenization analysis of invasion dynamics in heterogeneous landscapes with differential bias and motility.

    PubMed

    Yurk, Brian P

    2018-07-01

    Animal movement behaviors vary spatially in response to environmental heterogeneity. An important problem in spatial ecology is to determine how large-scale population growth and dispersal patterns emerge within highly variable landscapes. We apply the method of homogenization to study the large-scale behavior of a reaction-diffusion-advection model of population growth and dispersal. Our model includes small-scale variation in the directed and random components of movement and growth rates, as well as large-scale drift. Using the homogenized model we derive simple approximate formulas for persistence conditions and asymptotic invasion speeds, which are interpreted in terms of residence index. The homogenization results show good agreement with numerical solutions for environments with a high degree of fragmentation, both with and without periodicity at the fast scale. The simplicity of the formulas, and their connection to residence index make them appealing for studying the large-scale effects of a variety of small-scale movement behaviors.

  10. Convex hulls of random walks in higher dimensions: A large-deviation study

    NASA Astrophysics Data System (ADS)

    Schawe, Hendrik; Hartmann, Alexander K.; Majumdar, Satya N.

    2017-12-01

    The distributions of the hypervolume V and surface ∂V of convex hulls of (multiple) random walks in higher dimensions are determined numerically, resolving probabilities far smaller than P = 10^-1000 in order to estimate large-deviation properties. For arbitrary dimensions and large walk lengths T, we suggest a scaling behavior of the distribution with the length of the walk T similar to the two-dimensional case, as well as the behavior of the distributions in the tails. We underpin both with numerical data in d = 3 and d = 4 dimensions. Further, we confirm the analytically known means of those distributions and calculate their variances for large T.
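
    The basic numerical object is easy to reproduce at small scale; what cannot be reproduced this way are the extreme tail probabilities, which require dedicated large-deviation sampling. A naive-sampling sketch (sizes are illustrative):

        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(0)
        T, d, n_samples = 200, 3, 1000  # walk length, dimension, sample count

        # Naive sampling of hull volumes of d-dimensional Gaussian random
        # walks. Resolving tail probabilities near 10^-1000, as in the paper,
        # requires importance-sampling/Markov-chain large-deviation methods.
        vols = np.empty(n_samples)
        for i in range(n_samples):
            walk = np.cumsum(rng.standard_normal((T, d)), axis=0)
            vols[i] = ConvexHull(walk).volume

        print(f"mean hull volume {vols.mean():.1f}, variance {vols.var():.1f}")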

  11. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs suitable for LPMC studies on FPGA. One of the newly proposed implementations, a parallel version of the additive lagged Fibonacci generator (parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
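
    An additive lagged Fibonacci generator is simple enough to sketch in software; the lag pair, word width, and seeding below are illustrative choices, not the paper's FPGA implementation:

        import random

        class ALFG:
            """Additive lagged Fibonacci generator: x[n] = x[n-j] + x[n-k]
            mod 2^m. The circular lag buffer maps naturally onto FPGA block
            RAM with one adder per instance, which is why many parallel
            copies are cheap. Lags (j, k) = (418, 1279) are one commonly used
            pair; at least one odd seed word is needed for maximal period."""
            def __init__(self, j=418, k=1279, m=32, seed=1):
                rng = random.Random(seed)
                self.j, self.mask = j, (1 << m) - 1
                self.buf = [rng.getrandbits(m) for _ in range(k)]
                self.buf[0] |= 1          # guarantee an odd seed word
                self.i = 0
            def next(self):
                k = len(self.buf)
                x = (self.buf[(self.i - self.j) % k] + self.buf[self.i % k]) & self.mask
                self.buf[self.i % k] = x  # slot i%k held x[n-k]; now holds x[n]
                self.i += 1
                return x

        gen = ALFG()
        print([gen.next() & 0xff for _ in range(8)])  # a few low-order bytes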

  12. Using Propensity Scores in Quasi-Experimental Designs to Equate Groups

    ERIC Educational Resources Information Center

    Lane, Forrest C.; Henson, Robin K.

    2010-01-01

    Education research rarely lends itself to large scale experimental research and true randomization, leaving the researcher to quasi-experimental designs. The problem with quasi-experimental research is that underlying factors may impact group selection and lead to potentially biased results. One way to minimize the impact of non-randomization is…
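
    As a concrete illustration of the propensity-score idea, a minimal sketch of greedy one-to-one matching on estimated scores (the function and variable names are hypothetical):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def propensity_match(X, treated):
            """Fit P(treatment | covariates), then greedily pair each treated
            unit with the comparison unit whose estimated propensity score is
            closest (1:1 matching with replacement)."""
            ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
            t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
            pairs = [(i, c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))]) for i in t_idx]
            return ps, pairs

        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 4))                       # covariates
        treated = rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))  # selection on X
        ps, pairs = propensity_match(X, treated)
        print(f"{treated.sum()} treated units matched to comparison units")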

  13. Application of stochastic processes in random growth and evolutionary dynamics

    NASA Astrophysics Data System (ADS)

    Oikonomou, Panagiotis

    We study the effect of power-law distributed randomness on the dynamical behavior of processes such as stochastic growth patterns and evolution. First, we examine the geometrical properties of random shapes produced by a generalized stochastic Loewner Evolution driven by a superposition of a Brownian motion and a stable Levy process. The situation is defined by the usual stochastic Loewner Evolution parameter, kappa, as well as alpha, which defines the power-law tail of the stable Levy distribution. We show that the properties of these patterns change qualitatively and singularly at critical values of kappa and alpha. It is reasonable to call such changes "phase transitions". These transitions occur as kappa passes through four and as alpha passes through one. Numerical simulations are used to explore the global scaling behavior of these patterns in each "phase". We show both analytically and numerically that the growth continues indefinitely in the vertical direction for alpha greater than 1, proceeds logarithmically with time for alpha equal to 1, and saturates for alpha smaller than 1. The probability density has two different scales corresponding to directions along and perpendicular to the boundary. Scaling functions for the probability density are given for various limiting cases. Second, we study the effect of the architecture of biological networks on their evolutionary dynamics. In recent years, studies of the architecture of large networks have unveiled a common topology, called scale-free, in which a majority of the elements are poorly connected except for a small fraction of highly connected components. We ask how networks with distinct topologies can evolve towards a pre-established target phenotype through a process of random mutations and selection. We use networks of Boolean components as a framework to model a large class of phenotypes. Within this approach, we find that homogeneous random networks and scale-free networks exhibit drastically different evolutionary paths. While homogeneous random networks accumulate neutral mutations and evolve by sparse punctuated steps, scale-free networks evolve rapidly and continuously towards the target phenotype. Moreover, we show that scale-free networks always evolve faster than homogeneous random networks; remarkably, this property does not depend on the precise value of the topological parameter. By contrast, homogeneous random networks require a specific tuning of their topological parameter in order to optimize their fitness. This model suggests that the evolutionary paths of biological networks, punctuated or continuous, may solely be determined by the network topology.

  14. Effects of topology on network evolution

    NASA Astrophysics Data System (ADS)

    Oikonomou, Panos; Cluzel, Philippe

    2006-08-01

    The ubiquity of scale-free topology in nature raises the question of whether this particular network design confers an evolutionary advantage. A series of studies has identified key principles controlling the growth and the dynamics of scale-free networks. Here, we use neuron-based networks of boolean components as a framework for modelling a large class of dynamical behaviours in both natural and artificial systems. Applying a training algorithm, we characterize how networks with distinct topologies evolve towards a pre-established target function through a process of random mutations and selection. We find that homogeneous random networks and scale-free networks exhibit drastically different evolutionary paths. Whereas homogeneous random networks accumulate neutral mutations and evolve by sparse punctuated steps, scale-free networks evolve rapidly and continuously. Remarkably, this latter property is robust to variations of the degree exponent. In contrast, homogeneous random networks require a specific tuning of their connectivity to optimize their ability to evolve. These results highlight an organizing principle that governs the evolution of complex networks and that can improve the design of engineered systems.

  15. Naming games in two-dimensional and small-world-connected random geometric networks.

    PubMed

    Lu, Qiming; Korniss, G; Szymanski, B K

    2008-01-01

    We investigate a prototypical agent-based model, the naming game, on two-dimensional random geometric networks. The naming game [Baronchelli, J. Stat. Mech.: Theory Exp. (2006) P06014] is a minimal model employing local communications that captures the emergence of shared communication schemes (languages) in a population of autonomous semiotic agents. Implementing the naming game with local broadcasts on random geometric graphs serves as a model for agreement dynamics in large-scale, autonomously operating wireless sensor networks. Further, it captures essential features of the scaling properties of the agreement process for spatially embedded autonomous agents. Among the relevant observables capturing the temporal properties of the agreement process, we investigate the cluster-size distribution and the distribution of the agreement times, both exhibiting dynamic scaling. We also present results for the case when a small density of long-range communication links is added on top of the random geometric graph, resulting in a "small-world"-like network and yielding a significantly reduced time to reach global agreement. We construct a finite-size scaling analysis for the agreement times in this case.
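
    A minimal sketch of the naming game dynamics described above, in the simpler pairwise speaker-hearer variant rather than the paper's local-broadcast version (parameters are illustrative):

        import random
        import networkx as nx

        def naming_game(G, steps, rng=random.Random(0)):
            """Pairwise naming game: a random speaker utters a random word
            from its inventory (inventing one if empty); on success both
            inventories collapse to that word, on failure the hearer adds it."""
            vocab = {v: set() for v in G}
            nodes = list(G)
            for t in range(steps):
                s = rng.choice(nodes)
                nbrs = list(G.neighbors(s))
                if not nbrs:
                    continue                       # isolated node: skip
                h = rng.choice(nbrs)
                if not vocab[s]:
                    vocab[s].add(f"w{t}")          # invent a new word
                word = rng.choice(sorted(vocab[s]))
                if word in vocab[h]:
                    vocab[s], vocab[h] = {word}, {word}  # success: agree
                else:
                    vocab[h].add(word)                   # failure: learn
            return vocab

        G = nx.random_geometric_graph(500, radius=0.08, seed=2)
        v = naming_game(G, 200_000)
        print("distinct words remaining:", len(set().union(*v.values())))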

  16. Spatial confinement of active microtubule networks induces large-scale rotational cytoplasmic flow

    PubMed Central

    Suzuki, Kazuya; Miyazaki, Makito; Takagi, Jun; Itabashi, Takeshi; Ishiwata, Shin’ichi

    2017-01-01

    Collective behaviors of motile units through hydrodynamic interactions induce directed fluid flow on a larger length scale than individual units. In cells, active cytoskeletal systems composed of polar filaments and molecular motors drive fluid flow, a process known as cytoplasmic streaming. The motor-driven elongation of microtubule bundles generates turbulent-like flow in purified systems; however, it remains unclear whether and how microtubule bundles induce large-scale directed flow like the cytoplasmic streaming observed in cells. Here, we adopted Xenopus egg extracts as a model system of the cytoplasm and found that microtubule bundle elongation induces directed flow for which the length scale and timescale depend on the existence of geometrical constraints. At low dynein activity, kinesins bundle and slide microtubules, organizing extensile microtubule bundles. In bulk extracts, the extensile bundles connected with each other and formed a random network, and vortex flows with a length scale comparable to the bundle length continually emerged and persisted for 1 min at multiple places. When the extracts were encapsulated in droplets, the extensile bundles pushed the droplet boundary. This pushing force initiated symmetry breaking of the randomly oriented bundle network, leading to bundles aligning into a rotating vortex structure. This vortex induced rotational cytoplasmic flows with a length scale and timescale 10- to 100-fold larger than those of the vortex flows emerging in bulk extracts. Our results suggest that microtubule systems use not only hydrodynamic interactions but also mechanical interactions to induce large-scale temporally stable cytoplasmic flow. PMID:28265076

  17. Fractional Stochastic Field Theory

    NASA Astrophysics Data System (ADS)

    Honkonen, Juha

    2018-02-01

    Models describing evolution of physical, chemical, biological, social and financial processes are often formulated as differential equations with the understanding that they are large-scale equations for averages of quantities describing intrinsically random processes. Explicit account of randomness may lead to significant changes in the asymptotic behaviour (anomalous scaling) in such models especially in low spatial dimensions, which in many cases may be captured with the use of the renormalization group. Anomalous scaling and memory effects may also be introduced with the use of fractional derivatives and fractional noise. Construction of renormalized stochastic field theory with fractional derivatives and fractional noise in the underlying stochastic differential equations and master equations and the interplay between fluctuation-induced and built-in anomalous scaling behaviour is reviewed and discussed.

  18. A fast boosting-based screening method for large-scale association study in complex traits with genetic heterogeneity.

    PubMed

    Wang, Lu-Yong; Fasulo, D

    2006-01-01

    Genome-wide association studies for complex diseases will generate massive amounts of single nucleotide polymorphism (SNP) data. Univariate statistical tests (e.g., the Fisher exact test) are used to single out non-associated SNPs. However, disease-susceptible SNPs may have small marginal effects in the population and are unlikely to be retained after univariate tests. Also, model-based methods are impractical for large-scale datasets. Moreover, genetic heterogeneity makes it harder for traditional methods to identify the genetic causes of diseases. The more recent random forest method provides a more robust way of screening SNPs at the scale of thousands. However, for larger-scale data, e.g., Affymetrix Human Mapping 100K GeneChip data, a faster screening method is required to screen SNPs in whole-genome association analysis with genetic heterogeneity. We propose a boosting-based method for rapid screening in large-scale analysis of complex traits in the presence of genetic heterogeneity. It provides a relatively fast and fairly good tool for screening and limiting the candidate SNPs for further, more complex computational modeling tasks.

  19. The structure of supersonic jet flow and its radiated sound

    NASA Technical Reports Server (NTRS)

    Mankbadi, Reda R.; Hayder, M. E.; Povinelli, Louis A.

    1993-01-01

    Large-eddy simulation of a supersonic jet is presented with emphasis on capturing the unsteady features of the flow pertinent to sound emission. A high-accuracy numerical scheme is used to solve the filtered, unsteady, compressible Navier-Stokes equations while modelling the subgrid-scale turbulence. For random inflow disturbance, the wave-like feature of the large-scale structure is demonstrated. The large-scale structure was then enhanced by imposing harmonic disturbances to the inflow. The limitation of using the full Navier-Stokes equation to calculate the far-field sound is discussed. Application of Lighthill's acoustic analogy is given with the objective of highlighting the difficulties that arise from the non-compactness of the source term.

  20. Screening large-scale association study data: exploiting interactions using random forests.

    PubMed

    Lunetta, Kathryn L; Hayward, L Brooke; Segal, Jonathan; Van Eerdewegh, Paul

    2004-12-10

    Genome-wide association studies for complex diseases will produce genotypes on hundreds of thousands of single nucleotide polymorphisms (SNPs). A logical first approach to dealing with massive numbers of SNPs is to use some test to screen the SNPs, retaining only those that meet some criterion for further study. For example, SNPs can be ranked by p-value, and those with the lowest p-values retained. When SNPs have large interaction effects but small marginal effects in a population, they are unlikely to be retained when univariate tests are used for screening. However, model-based screens that pre-specify interactions are impractical for data sets with thousands of SNPs. Random forest analysis is an alternative method that produces a single measure of importance for each predictor variable that takes into account interactions among variables without requiring model specification. Interactions increase the importance for the individual interacting variables, making them more likely to be given high importance relative to other variables. We test the performance of random forests as a screening procedure to identify small numbers of risk-associated SNPs from among large numbers of unassociated SNPs using complex disease models with up to 32 loci, incorporating both genetic heterogeneity and multi-locus interaction. Keeping other factors constant, if risk SNPs interact, the random forest importance measure significantly outperforms the Fisher Exact test as a screening tool. As the number of interacting SNPs increases, the improvement in performance of random forest analysis relative to Fisher Exact test for screening also increases. Random forests perform similarly to the univariate Fisher Exact test as a screening tool when SNPs in the analysis do not interact. In the context of large-scale genetic association studies where unknown interactions exist among true risk-associated SNPs or SNPs and environmental covariates, screening SNPs using random forest analyses can significantly reduce the number of SNPs that need to be retained for further study compared to standard univariate screening methods.
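
    A minimal sketch of the screening idea with a synthetic two-locus interaction (all sizes, penetrance, and coding choices are illustrative, not the simulation design of the paper):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        n, p = 1000, 500                      # subjects, SNPs (toy sizes)
        X = rng.integers(0, 3, size=(n, p))   # genotypes coded 0/1/2
        # Hypothetical 2-locus interaction: risk requires minor alleles at
        # SNPs 0 AND 1 (with 80% penetrance); neither SNP acts alone.
        y = ((X[:, 0] > 0) & (X[:, 1] > 0) & (rng.random(n) < 0.8)).astype(int)

        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
        top = np.argsort(rf.feature_importances_)[::-1][:10]
        print("top-ranked SNPs:", top)  # SNPs 0 and 1 should rank highly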

  1. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors, including sample heterogeneity, computational and imaging limitations, model inadequacy, and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not fully reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost of computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A fully automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, and extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of Representative Elementary Volume size for arbitrary physics.
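
    The core multilevel Monte Carlo identity is easy to sketch: write the fine-level expectation telescopically and average each level correction with its own sample count, coupling the two levels of each correction through a shared random sample. A minimal sketch (the API below is hypothetical):

        import numpy as np

        def mlmc(draw, P, n_samples, rng=None):
            """Multilevel Monte Carlo: E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)].
            `draw(rng)` generates one random sample (e.g. a random packing);
            `P(l, omega)` evaluates the level-l (mesh-l) approximation of the
            quantity of interest on that same sample, so each correction is
            computed on a coupled pair of levels. Corrections shrink with
            level, so most samples can live on the cheap, coarse levels."""
            rng = rng or np.random.default_rng(0)
            est = 0.0
            for l, n in enumerate(n_samples):
                ys = []
                for _ in range(n):
                    omega = draw(rng)
                    ys.append(P(l, omega) - (P(l - 1, omega) if l > 0 else 0.0))
                est += sum(ys) / n
            return est

        # Toy check: level-l approximation of E[w^2] whose bias halves per level.
        print(mlmc(lambda r: r.standard_normal(),
                   lambda l, w: w * w - 2.0 ** -(l + 1),
                   n_samples=[4000, 400, 40]))  # close to 1 - 2^-3 = 0.875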

  2. Multisite Randomized Controlled Trial Examining Intelligent Tutoring of Structure Strategy for Fifth-Grade Readers

    ERIC Educational Resources Information Center

    Wijekumar, Kausalai; Meyer, Bonnie J. F.; Lei, Pui-Wa; Lin, Yu-Chu; Johnson, Lori A.; Spielvogel, James A.; Shurmatz, Kathryn M.; Ray, Melissa; Cook, Michael

    2014-01-01

    This article reports on a large scale randomized controlled trial to study the efficacy of a web-based intelligent tutoring system for the structure strategy designed to improve content area reading comprehension. The research was conducted with 128 fifth-grade classrooms within 12 school districts in rural and suburban settings. Classrooms within…

  3. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE PAGES

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    2017-10-26

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen-Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.

  4. Efficient design of clinical trials and epidemiological research: is it possible?

    PubMed

    Lauer, Michael S; Gordon, David; Wei, Gina; Pearson, Gail

    2017-08-01

    Randomized clinical trials and large-scale, cohort studies continue to have a critical role in generating evidence in cardiovascular medicine; however, the increasing concern is that ballooning costs threaten the clinical trial enterprise. In this Perspectives article, we discuss the changing landscape of clinical research, and clinical trials in particular, focusing on reasons for the increasing costs and inefficiencies. These reasons include excessively complex design, overly restrictive inclusion and exclusion criteria, burdensome regulations, excessive source-data verification, and concerns about the effect of clinical research conduct on workflow. Thought leaders have called on the clinical research community to consider alternative, transformative business models, including those models that focus on simplicity and leveraging of digital resources. We present some examples of innovative approaches by which some investigators have successfully conducted large-scale, clinical trials at relatively low cost. These examples include randomized registry trials, cluster-randomized trials, adaptive trials, and trials that are fully embedded within digital clinical care or administrative platforms.

  5. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen-Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.

  6. Chaotic gas turbine subject to augmented Lorenz equations.

    PubMed

    Cho, Kenichiro; Miyano, Takaya; Toriyama, Toshiyuki

    2012-09-01

    Inspired by the chaotic waterwheel invented by Malkus and Howard about 40 years ago, we have developed a gas turbine that randomly switches the sense of rotation between clockwise and counterclockwise. The nondimensionalized expressions for the equations of motion of our turbine are represented as a starlike network of many Lorenz subsystems sharing the angular velocity of the turbine rotor as the central node, referred to as augmented Lorenz equations. We show qualitative similarities between the statistical properties of the angular velocity of the turbine rotor and the velocity field of large-scale wind in turbulent Rayleigh-Bénard convection reported by Sreenivasan et al. [Phys. Rev. E 65, 056306 (2002)]. Our equations of motion achieve the random reversal of the turbine rotor through the stochastic resonance of the angular velocity in a double-well potential and the force applied by rapidly oscillating fields. These results suggest that the augmented Lorenz model is applicable as a dynamical model for the random reversal of turbulent large-scale wind through cessation.
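
    An illustrative toy in the spirit of the model described above, not the authors' exact augmented Lorenz equations: several Lorenz-type subsystems star-coupled through one shared "rotor" variable whose sign reversals can be counted (all parameters are the classic Lorenz values):

        import numpy as np
        from scipy.integrate import solve_ivp

        sigma, r, b, N = 10.0, 28.0, 8.0 / 3.0, 5

        def rhs(t, u):
            # u = [x, y_1..y_N, z_1..z_N]; x is the shared angular velocity.
            x, ys, zs = u[0], u[1:N + 1], u[N + 1:]
            dx = sigma * (ys.mean() - x)   # rotor driven by all subsystems
            dys = r * x - ys - x * zs
            dzs = x * ys - b * zs
            return np.concatenate(([dx], dys, dzs))

        u0 = np.concatenate(([1.0], np.random.default_rng(0).standard_normal(2 * N)))
        sol = solve_ivp(rhs, (0.0, 200.0), u0, max_step=0.01)
        reversals = int(np.sum(np.diff(np.sign(sol.y[0])) != 0))
        print("rotor sense reversals observed:", reversals)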

  7. Measuring the topology of large-scale structure in the universe

    NASA Technical Reports Server (NTRS)

    Gott, J. Richard, III

    1988-01-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by walls of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.

  8. Measuring the topology of large-scale structure in the universe

    NASA Astrophysics Data System (ADS)

    Gott, J. Richard, III

    1988-11-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by walls of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.

  9. Sound production due to large-scale coherent structures

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.

    1979-01-01

    The acoustic pressure fluctuations due to large-scale finite amplitude disturbances in a free turbulent shear flow are calculated. The flow is decomposed into three component scales: the mean motion, the large-scale wave-like disturbance, and the small-scale random turbulence. The effect of the large-scale structure on the flow is isolated by applying both a spatial and phase average on the governing differential equations and by initially taking the small-scale turbulence to be in energetic equilibrium with the mean flow. The subsequent temporal evolution of the flow is computed from global energetic rate equations for the different component scales. Lighthill's theory is then applied, with the flowfield as the source and an observer located outside the flowfield in a region of uniform velocity. Since the time history of all flow variables is known, a minimum of simplifying assumptions for the Lighthill stress tensor is required, including no far-field approximations. A phase average is used to isolate the pressure fluctuations due to the large-scale structure, and also to isolate the dynamic process responsible. Variation of mean square pressure with distance from the source is computed to determine the acoustic far-field location and decay rate, and, in addition, spectra at various acoustic field locations are computed and analyzed. Also included are the effects of varying the growth and decay of the large-scale disturbance on the sound produced.

  10. Scale-Up of Safe & Civil Schools' Model for School-Wide Positive Behavioral Interventions and Supports

    ERIC Educational Resources Information Center

    Smolkowski, Keith; Strycker, Lisa; Ward, Bryce

    2016-01-01

    This study evaluated the scale-up of a Safe & Civil Schools "Foundations: Establishing Positive Discipline Policies" positive behavioral interventions and supports initiative through 4 years of "real-world" implementation in a large urban school district. The study extends results from a previous randomized controlled trial…

  11. Site Selection in Experiments: An Assessment of Site Recruitment and Generalizability in Two Scale-Up Studies

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Fellers, Lauren; Caverly, Sarah; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Ruiz de Castilla, Veronica

    2016-01-01

    Recently, statisticians have begun developing methods to improve the generalizability of results from large-scale experiments in education. This work has included the development of methods for improved site selection when random sampling is infeasible, including the use of stratification and targeted recruitment strategies. This article provides…

  12. Robust-yet-fragile nature of interdependent networks

    NASA Astrophysics Data System (ADS)

    Tan, Fei; Xia, Yongxiang; Wei, Zhi

    2015-05-01

    Interdependent networks have been shown to be extremely vulnerable based on the percolation model. Parshani et al. [Europhys. Lett. 92, 68002 (2010), 10.1209/0295-5075/92/68002] further indicated that the more intersimilar networks are, the more robust they are to random failures. When traffic load is considered, how do the coupling patterns impact cascading failures in interdependent networks? This question has been largely unexplored until now. In this paper, we address this question by investigating the robustness of interdependent Erdös-Rényi random graphs and Barabási-Albert scale-free networks under either random failures or intentional attacks. It is found that interdependent Erdös-Rényi random graphs are robust yet fragile under both random failures and intentional attacks. Interdependent Barabási-Albert scale-free networks, however, are robust yet fragile only under random failures and are outright fragile under intentional attacks. We further analyze the interdependent communication network and power grid and obtain similar results. These results advance our understanding of how interdependency shapes network robustness.
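
    The single-network baseline of the robust-yet-fragile phenomenon is straightforward to demonstrate; the full interdependent setting of the paper adds dependency links and failure cascades on top of it. A minimal sketch comparing random failure with degree-targeted attack (sizes are illustrative):

        import random
        import networkx as nx

        def giant_fraction(G, fraction, attack=False, seed=0):
            """Remove a fraction of nodes, either uniformly at random or in
            descending degree order (intentional attack), and return the size
            of the largest surviving component relative to the original N."""
            H, n0 = G.copy(), G.number_of_nodes()
            k = int(fraction * n0)
            victims = (sorted(H, key=H.degree, reverse=True)[:k] if attack
                       else random.Random(seed).sample(list(H), k))
            H.remove_nodes_from(victims)
            if H.number_of_nodes() == 0:
                return 0.0
            return len(max(nx.connected_components(H), key=len)) / n0

        er = nx.gnm_random_graph(5000, 15000, seed=1)   # Erdos-Renyi
        ba = nx.barabasi_albert_graph(5000, 3, seed=1)  # scale-free
        for name, g in (("ER", er), ("BA", ba)):
            print(name, "random:", round(giant_fraction(g, 0.3), 3),
                  "attack:", round(giant_fraction(g, 0.3, attack=True), 3))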

  13. Pain Neurophysiology Education and Therapeutic Exercise for Patients With Chronic Low Back Pain: A Single-Blind Randomized Controlled Trial.

    PubMed

    Bodes Pardo, Gema; Lluch Girbés, Enrique; Roussel, Nathalie A; Gallego Izquierdo, Tomás; Jiménez Penick, Virginia; Pecos Martín, Daniel

    2018-02-01

    To assess the effect of a pain neurophysiology education (PNE) program plus therapeutic exercise (TE) for patients with chronic low back pain (CLBP). Single-blind randomized controlled trial. Private clinic and university. Patients with CLBP for ≥6 months (N=56). Participants were randomized to receive either a TE program consisting of motor control, stretching, and aerobic exercises (n=28) or the same TE program in addition to a PNE program (n=28), conducted in two 30- to 50-minute sessions in groups of 4 to 6 participants. The primary outcome was pain intensity rated on the numerical pain rating scale which was completed immediately after treatment and at 1- and 3-month follow-up. Secondary outcome measures were pressure pain threshold, finger-to-floor distance, Roland-Morris Disability Questionnaire, Pain Catastrophizing Scale, Tampa Scale for Kinesiophobia, and Patient Global Impression of Change. At 3-month follow-up, a large change in pain intensity (numerical pain rating scale: -2.2; -2.93 to -1.28; P<.001; d=1.37) was observed for the PNE plus TE group, and a moderate effect size was observed for the secondary outcome measures. Combining PNE with TE resulted in significantly better results for participants with CLBP, with a large effect size, compared with TE alone.

  14. A topological analysis of large-scale structure, studied using the CMASS sample of SDSS-III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parihar, Prachi; Gott, J. Richard III; Vogeley, Michael S.

    2014-12-01

    We study the three-dimensional genus topology of large-scale structure using the northern region of the CMASS Data Release 10 (DR10) sample of the SDSS-III Baryon Oscillation Spectroscopic Survey. We select galaxies with redshift 0.452 < z < 0.625 and with a stellar mass M_stellar > 10^11.56 M_☉. We study the topology at two smoothing lengths: R_G = 21 h^-1 Mpc and R_G = 34 h^-1 Mpc. The genus topology studied at the R_G = 21 h^-1 Mpc scale results in the highest genus amplitude observed to date. The CMASS sample yields a genus curve that is characteristic of one produced by Gaussian random phase initial conditions. The data thus support the standard model of inflation, where random quantum fluctuations in the early universe produced Gaussian random phase initial conditions. Modest deviations in the observed genus from random phase are as expected from shot noise effects and the nonlinear evolution of structure. We suggest the use of a fitting formula motivated by perturbation theory to characterize the shift and asymmetries in the observed genus curve with a single parameter. We construct 54 mock SDSS CMASS surveys along the past light cone from the Horizon Run 3 (HR3) N-body simulations, where gravitationally bound dark matter subhalos are identified as the sites of galaxy formation. We study the genus topology of the HR3 mock surveys with the same geometry and sampling density as the observational sample and find the observed genus topology to be consistent with ΛCDM as simulated by the HR3 mock samples. We conclude that the topology of the large-scale structure in the SDSS CMASS sample is consistent with cosmological models having primordial Gaussian density fluctuations growing in accordance with general relativity to form galaxies in massive dark matter halos.

  15. Determining Scale-dependent Patterns in Spatial and Temporal Datasets

    NASA Astrophysics Data System (ADS)

    Roy, A.; Perfect, E.; Mukerji, T.; Sylvester, L.

    2016-12-01

    Spatial and temporal datasets of interest to Earth scientists often contain plots of one variable against another, e.g., rainfall magnitude vs. time or fracture aperture vs. spacing. Such data, composed of distributions of events along a transect / timeline along with their magnitudes, can display persistent or antipersistent trends, as well as random behavior, that may contain signatures of underlying physical processes. Lacunarity is a technique that was originally developed for multiscale analysis of data. In a recent study we showed that lacunarity can be used for revealing changes in scale-dependent patterns in fracture spacing data. Here we present a further improvement in our technique, with lacunarity applied to various non-binary datasets comprised of event spacings and magnitudes. We test our technique on a set of four synthetic datasets, three of which are based on an autoregressive model and have magnitudes at every point along the "timeline," thus representing antipersistent, persistent, and random trends. The fourth dataset is made up of five clusters of events, each containing a set of random magnitudes. The concept of lacunarity ratio, LR, is introduced; this is the lacunarity of a given dataset normalized to the lacunarity of its random counterpart. It is demonstrated that LR can successfully delineate scale-dependent changes in terms of antipersistence and persistence in the synthetic datasets. This technique is then applied to three different types of data: a hundred-year rainfall record from Knoxville, TN, USA, a set of varved sediments from Marca Shale, and a set of fracture aperture and spacing data from NE Mexico. While the rainfall data and varved sediments both appear to be persistent at small scales, at larger scales they both become random. On the other hand, the fracture data show antipersistence at small scales (within clusters) and random behavior at large scales. Such differences in scale-dependent behavior, whether from antipersistent to random, from persistent to random, or otherwise, may be related to differences in the physicochemical properties and processes contributing to multiscale datasets.
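
    A minimal sketch of gliding-box lacunarity for a magnitude series, together with an assumed form of the lacunarity ratio LR in which the "random counterpart" is taken to be a random shuffle of the same data:

        import numpy as np

        def lacunarity(signal, box):
            """Gliding-box lacunarity of a 1-D magnitude series: slide a
            window of length `box`, collect the window sums S ("box masses"),
            and return Lambda(box) = <S^2>/<S>^2."""
            s = np.asarray(signal, dtype=float)
            masses = np.convolve(s, np.ones(box), mode="valid")
            return masses.var() / masses.mean() ** 2 + 1.0

        # Lacunarity ratio LR: lacunarity of the data normalized by that of
        # a random counterpart, here a shuffle of the same values (assumed).
        rng = np.random.default_rng(0)
        x = rng.pareto(2.0, 4096)   # synthetic heavy-tailed magnitudes
        for box in (4, 16, 64, 256):
            lr = lacunarity(x, box) / lacunarity(rng.permutation(x), box)
            print(f"box={box:4d}  LR={lr:.3f}")  # LR near 1: random behavior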

  16. Optimal Design in Three-Level Block Randomized Designs with Two Levels of Nesting: An ANOVA Framework with Random Effects

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2013-01-01

    Large-scale experiments that involve nested structures may assign treatment conditions either to subgroups such as classrooms or to individuals such as students within subgroups. Key aspects of the design of such experiments include knowledge of the variance structure in higher levels and the sample sizes necessary to reach sufficient power to…

  17. The Effects of Math Video Games on Learning: A Randomized Evaluation Study with Innovative Impact Estimation Techniques. CRESST Report 841

    ERIC Educational Resources Information Center

    Chung, Gregory K. W. K.; Choi, Kilchan; Baker, Eva L.; Cai, Li

    2014-01-01

    A large-scale randomized controlled trial tested the effects of researcher-developed learning games on a transfer measure of fractions knowledge. The measure contained items similar to standardized assessments. Thirty treatment and 29 control classrooms (~1500 students, 9 districts, 26 schools) participated in the study. Students in treatment…

  18. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization-based algorithms for large-scale linear discrete ill-posed problems with general-form regularization: min ||Lx|| subject to x ∈ {x : ||Ax - b|| = min}, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small- to medium-scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
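
    The RSVD/TRSVD building block these algorithms rest on can be sketched compactly; this is the generic randomized SVD with oversampling and truncation, not the full MTRSVD regularization algorithm:

        import numpy as np

        def trsvd(A, k, q=10, rng=np.random.default_rng(0)):
            """Rank-k truncated randomized SVD: sketch the range of A with a
            Gaussian test matrix of k+q columns (q = oversampling), project,
            take the small dense SVD, and truncate back to rank k."""
            Y = A @ rng.standard_normal((A.shape[1], k + q))  # range sketch
            Q, _ = np.linalg.qr(Y)              # orthonormal basis for range
            U_hat, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
            return (Q @ U_hat)[:, :k], s[:k], Vt[:k]  # rank-(k+q) -> rank-k

        A = np.random.default_rng(1).standard_normal((300, 200))
        U, s, Vt = trsvd(A, k=10)
        print("relative error of rank-10 approximation:",
              np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))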

  19. Cooperation without culture? The null effect of generalized trust on intentional homicide: a cross-national panel analysis, 1995-2009.

    PubMed

    Robbins, Blaine

    2013-01-01

    Sociologists, political scientists, and economists all suggest that culture plays a pivotal role in the development of large-scale cooperation. In this study, I used generalized trust as a measure of culture to explore if and how culture impacts intentional homicide, my operationalization of cooperation. I compiled multiple cross-national data sets and used pooled time-series linear regression, single-equation instrumental-variables linear regression, and fixed- and random-effects estimation techniques on an unbalanced panel of 118 countries and 232 observations spread over a 15-year time period. Results suggest that culture and large-scale cooperation form a tenuous relationship, while economic factors such as development, inequality, and geopolitics appear to drive large-scale cooperation.

  20. Comparison of evidence on harms of medical interventions in randomized and nonrandomized studies

    PubMed Central

    Papanikolaou, Panagiotis N.; Christidi, Georgia D.; Ioannidis, John P.A.

    2006-01-01

    Background Information on major harms of medical interventions comes primarily from epidemiologic studies performed after licensing and marketing. Comparison with data from large-scale randomized trials is occasionally feasible. We compared evidence from randomized trials with that from epidemiologic studies to determine whether they give different estimates of risk for important harms of medical interventions. Methods We targeted well-defined, specific harms of various medical interventions for which data were already available from large-scale randomized trials (> 4000 subjects). Nonrandomized studies involving at least 4000 subjects addressing these same harms were retrieved through a search of MEDLINE. We compared the relative risks and absolute risk differences for specific harms in the randomized and nonrandomized studies. Results Eligible nonrandomized studies were found for 15 harms for which data were available from randomized trials addressing the same harms. Comparisons of relative risks between the study types were feasible for 13 of the 15 topics, and of absolute risk differences for 8 topics. The estimated increase in relative risk differed more than 2-fold between the randomized and nonrandomized studies for 7 (54%) of the 13 topics; the estimated increase in absolute risk differed more than 2-fold for 5 (62%) of the 8 topics. There was no clear predilection for randomized or nonrandomized studies to estimate greater relative risks, but usually (75% [6/8]) the randomized trials estimated larger absolute excess risks of harm than the nonrandomized studies did. Interpretation Nonrandomized studies are often conservative in estimating absolute risks of harms. It would be useful to compare and scrutinize the evidence on harms obtained from both randomized and nonrandomized studies. PMID:16505459

  1. Impact of degree heterogeneity on the behavior of trapping in Koch networks

    NASA Astrophysics Data System (ADS)

    Zhang, Zhongzhi; Gao, Shuyang; Xie, Wenlei

    2010-12-01

    Previous work shows that the mean first-passage time (MFPT) for random walks to a given hub node (the node with maximum degree) in uncorrelated random scale-free networks is closely related to the exponent γ of the power-law degree distribution P(k) ~ k^(-γ), which describes the extent of heterogeneity of the scale-free network structure. However, extensive empirical research indicates that real networked systems also display ubiquitous degree correlations. In this paper, we address the trapping issue on the Koch networks, a special random walk with one trap fixed at a hub node. The Koch networks are power-law with the characteristic exponent γ in the range between 2 and 3, and they can be either assortative or disassortative. We calculate exactly the MFPT, that is, the average of the first-passage times from all other nodes to the trap. The obtained explicit solution shows that in large networks the MFPT varies linearly with the node number N, which is obviously independent of γ and is in sharp contrast to the scaling behavior of the MFPT observed for uncorrelated random scale-free networks, where γ influences qualitatively the MFPT of the trapping problem.
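
    The trapping quantity itself is easy to estimate by simulation. A minimal sketch that uses a Barabási-Albert graph as a stand-in (Koch networks are not a library built-in) and places the trap at the hub:

        import random
        import networkx as nx

        def mean_trapping_time(G, trap, walks=500, rng=random.Random(0)):
            """Estimate the MFPT to a trap node: launch walks from uniformly
            random non-trap nodes and count steps until the trap is hit."""
            nodes = [v for v in G if v != trap]
            total = 0
            for _ in range(walks):
                u = rng.choice(nodes)
                while u != trap:
                    u = rng.choice(list(G.neighbors(u)))
                    total += 1
            return total / walks

        G = nx.barabasi_albert_graph(1000, 4, seed=3)  # stand-in network
        hub = max(G, key=G.degree)                     # trap at the hub
        print("estimated MFPT to hub:", round(mean_trapping_time(G, hub), 1))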

  2. Large-scale data analysis of power grid resilience across multiple US service regions

    NASA Astrophysics Data System (ADS)

    Ji, Chuanyi; Wei, Yun; Mei, Henry; Calzada, Jorge; Carey, Matthew; Church, Steve; Hayes, Timothy; Nugent, Brian; Stella, Gregory; Wallace, Matthew; White, Joe; Wilcox, Robert

    2016-05-01

    Severe weather events frequently result in large-scale power failures, affecting millions of people for extended durations. However, the lack of comprehensive, detailed failure and recovery data has impeded large-scale resilience studies. Here, we analyse data from four major service regions representing Upstate New York during Super Storm Sandy and daily operations. Using non-stationary spatiotemporal random processes that relate infrastructural failures to recoveries and cost, our data analysis shows that local power failures have a disproportionally large non-local impact on people (that is, the top 20% of failures interrupted 84% of services to customers). A large number (89%) of small failures, represented by the bottom 34% of customers and commonplace devices, resulted in 56% of the total cost of 28 million customer interruption hours. Our study shows that extreme weather does not cause, but rather exacerbates, existing vulnerabilities, which are obscured in daily operations.

  3. Implications of Small Samples for Generalization: Adjustments and Rules of Thumb

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Hallberg, Kelly; Hedges, Larry V.; Chan, Wendy

    2015-01-01

    Policy-makers are frequently interested in understanding how effective a particular intervention may be for a specific (and often broad) population. In many fields, particularly education and social welfare, the ideal form of these evaluations is a large-scale randomized experiment. Recent research has highlighted that sites in these large-scale…

  4. Study design of a cluster-randomized controlled trial to evaluate a large-scale distribution of cook stoves and water filters in Western Province, Rwanda.

    PubMed

    Nagel, Corey L; Kirby, Miles A; Zambrano, Laura D; Rosa, Ghislane; Barstow, Christina K; Thomas, Evan A; Clasen, Thomas F

    2016-12-15

    In Rwanda, pneumonia and diarrhea are the first and second leading causes of death, respectively, among children under five. Household air pollution (HAP) resultant from cooking indoors with biomass fuels on traditional stoves is a significant risk factor for pneumonia, while consumption of contaminated drinking water is a primary cause of diarrheal disease. To date, there have been no large-scale effectiveness trials of programmatic efforts to provide either improved cookstoves or household water filters at scale in a low-income country. In this paper we describe the design of a cluster-randomized trial to evaluate the impact of a national-level program to distribute and promote the use of improved cookstoves and advanced water filters to the poorest quarter of households in Rwanda. We randomly allocated 72 sectors (administratively defined units) in Western Province to the intervention, with the remaining 24 sectors in the province serving as controls. In the intervention sectors, roughly 100,000 households received improved cookstoves and household water filters through a government-sponsored program targeting the poorest quarter of households nationally. The primary outcome measures are the incidence of acute respiratory infection (ARI) and diarrhea among children under five years of age. Over a one-year surveillance period, all cases of acute respiratory infection (ARI) and diarrhea identified by health workers in the study area will be extracted from records maintained at health facilities and by community health workers (CHW). In addition, we are conducting intensive, longitudinal data collection among a random sample of households in the study area for in-depth assessment of coverage, use, environmental exposures, and additional health measures. Although previous research has examined the impact of providing household water treatment and improved cookstoves on child health, there have been no studies of national-level programs to deliver these interventions at scale in a developing country. The results of this study, the first RCT of a large-scale programmatic cookstove or household water filter intervention, will inform global efforts to reduce childhood morbidity and mortality from diarrheal disease and pneumonia. This trial is registered at Clinicaltrials.gov (NCT02239250).

  5. Backscattering from a Gaussian distributed, perfectly conducting, rough surface

    NASA Technical Reports Server (NTRS)

    Brown, G. S.

    1977-01-01

    The problem of scattering by random surfaces possessing many scales of roughness is analyzed. The approach is applicable to bistatic scattering from dielectric surfaces; however, this specific analysis is restricted to backscattering from a perfectly conducting surface in order to more clearly illustrate the method. The surface is assumed to be Gaussian distributed so that the surface height can be split into large and small scale components relative to the electromagnetic wavelength. A first order perturbation approach is employed wherein the scattering solution for the large scale structure is perturbed by the small scale diffraction effects. The scattering from the large scale structure is treated via geometrical optics techniques. The effect of the large scale surface structure is shown to be equivalent to a convolution in k-space of the height spectrum with the following: the shadowing function, a polarization and surface slope dependent function, and a Gaussian factor resulting from the unperturbed geometrical optics solution. This solution provides a continuous transition between the near normal incidence geometrical optics and wide angle Bragg scattering results.

  6. Large scale structure in universes dominated by cold dark matter

    NASA Technical Reports Server (NTRS)

    Bond, J. Richard

    1986-01-01

    The theory of Gaussian random density field peaks is applied to a numerical study of the large-scale structure developing from adiabatic fluctuations in models of biased galaxy formation in universes with Omega = 1, h = 0.5 dominated by cold dark matter (CDM). The angular anisotropy of the cross-correlation function demonstrates that the far-field regions of cluster-scale peaks are asymmetric, as recent observations indicate. These regions will generate pancakes or filaments upon collapse. One-dimensional singularities in the large-scale bulk flow should arise in these CDM models, appearing as pancakes in position space. They are too rare to explain the CfA bubble walls, but pancakes that are just turning around now are sufficiently abundant and would appear to be thin walls normal to the line of sight in redshift space. Large scale streaming velocities are significantly smaller than recent observations indicate. To explain the reported 700 km/s coherent motions, mass must be significantly more clustered than galaxies with a biasing factor of less than 0.4 and a nonlinear redshift at cluster scales greater than one for both massive neutrino and cold models.

  7. Fluid limit of nonintegrable continuous-time random walks in terms of fractional differential equations.

    PubMed

    Sánchez, R; Carreras, B A; van Milligen, B Ph

    2005-01-01

    The fluid limit of a recently introduced family of nonintegrable (nonlinear) continuous-time random walks is derived in terms of fractional differential equations. In this limit, it is shown that the formalism allows for the modeling of the interaction between multiple transport mechanisms with not only disparate spatial scales but also different temporal scales. For this reason, the resulting fluid equations may find application in the study of a large number of nonlinear multiscale transport problems, ranging from the study of self-organized criticality to the modeling of turbulent transport in fluids and plasmas.
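    For orientation, the simplest member of the wider CTRW family — uncoupled power-law waiting times with Gaussian jumps, whose fluid limit is a time-fractional diffusion equation — can be simulated directly (exponents and ensemble size are illustrative, and the nonlinear, nonintegrable couplings introduced in the paper are not included in this sketch):

        import numpy as np

        rng = np.random.default_rng(0)
        n_walkers, n_steps, alpha = 10_000, 500, 0.7   # alpha < 1: heavy-tailed waits

        # Pareto-type waiting times ~ t**-(1+alpha) and Gaussian jump lengths.
        waits = rng.pareto(alpha, size=(n_walkers, n_steps)) + 1.0
        jumps = rng.normal(0.0, 1.0, size=(n_walkers, n_steps))
        t = np.cumsum(waits, axis=1)                   # event times per walker
        x = np.cumsum(jumps, axis=1)                   # position after each jump

        # Sample all walkers at one observation time; subdiffusion gives MSD ~ t**alpha.
        t_obs = np.median(t[:, -1]) / 4
        idx = np.clip((t <= t_obs).sum(axis=1) - 1, 0, None)
        msd = np.mean(x[np.arange(n_walkers), idx] ** 2)
        print(f"MSD at t = {t_obs:.0f}: {msd:.1f} (grows like t**{alpha})")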

  8. On supervised graph Laplacian embedding CA model & kernel construction and its application

    NASA Astrophysics Data System (ADS)

    Zeng, Junwei; Qian, Yongsheng; Wang, Min; Yang, Yongzhong

    2017-01-01

    There are many methods to construct a kernel from given data attribute information; the Gaussian radial basis function (RBF) kernel is one of the most popular. The key observation is that real-world data carry not only attribute information but also label information indicating the data class. In order to make use of both, we propose a supervised kernel construction method in which supervised information from the training data is integrated into the standard kernel construction process to improve the discriminative property of the resulting kernel. A supervised Laplacian embedding cellular automaton model is a further application, developed for two-lane heterogeneous traffic flow with safe-distance rules and large-scale trucks. Based on the properties of traffic flow in China, we re-calibrate the cell length, velocity, random slowing mechanism and lane-change conditions and use simulation tests to study the relationships among speed, density and flux. The numerical results show that large-scale trucks have strong effects on the traffic flow, depending on the proportion of large-scale trucks, the random slowing rate and the number of lane changes.
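    One generic way to fold label information into an RBF kernel, in the spirit described though not necessarily the authors' exact construction, is to boost the similarity of same-label training pairs; the Schur product of two positive semidefinite matrices keeps the result a valid kernel:

        import numpy as np

        def supervised_rbf_kernel(X, y, gamma=1.0, boost=2.0):
            """RBF kernel on attributes, with same-label training pairs boosted."""
            sq = np.sum(X**2, axis=1)
            d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
            K = np.exp(-gamma * d2)                          # standard RBF kernel
            same = (y[:, None] == y[None, :]).astype(float)  # 1 where labels agree
            return K * (1.0 + (boost - 1.0) * same)          # boost within-class entries

        X = np.random.default_rng(1).normal(size=(6, 3))
        y = np.array([0, 0, 1, 1, 2, 2])
        print(np.round(supervised_rbf_kernel(X, y)[:4, :4], 2))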

  9. Polymer Dynamics from Synthetic to Biological Macromolecules

    NASA Astrophysics Data System (ADS)

    Richter, D.; Niedzwiedz, K.; Monkenbusch, M.; Wischnewski, A.; Biehl, R.; Hoffmann, B.; Merkel, R.

    2008-02-01

    High-resolution neutron scattering, together with a meticulous choice of contrast conditions, allows access to the large-scale dynamics of soft materials, including biological molecules, in space and time. In this contribution we present two examples: one from the world of synthetic polymers, the other from biomolecules. First, we address the peculiar dynamics of miscible polymer blends with very different component glass transition temperatures. Poly(methyl methacrylate) (PMMA) and poly(ethylene oxide) (PEO) are perfectly miscible but differ in glass transition temperature by 200 K. We present quasielastic neutron scattering investigations of the dynamics of the fast component in the range from ångströms to nanometers over a time frame of five orders of magnitude. All data may be consistently described in terms of a Rouse model with random friction, reflecting the random environment imposed by the nearly frozen PMMA matrix on the fast, mobile PEO. In the second part we touch on new developments relating to the large-scale internal dynamics of proteins by neutron spin echo. We report results of pioneering studies which show the feasibility of such experiments on large-scale protein motion and which will most likely initiate further studies in the future.

  10. Spatial Temporal Mathematics at Scale: An Innovative and Fully Developed Paradigm to Boost Math Achievement among All Learners

    ERIC Educational Resources Information Center

    Rutherford, Teomara; Kibrick, Melissa; Burchinal, Margaret; Richland, Lindsey; Conley, AnneMarie; Osborne, Keara; Schneider, Stephanie; Duran, Lauren; Coulson, Andrew; Antenore, Fran; Daniels, Abby; Martinez, Michael E.

    2010-01-01

    This paper describes the background, methodology, preliminary findings, and anticipated future directions of a large-scale multi-year randomized field experiment addressing the efficacy of ST Math [Spatial-Temporal Math], a fully-developed math curriculum that uses interactive animated software. ST Math's unique approach minimizes the use of…

  11. Building spatially-explicit model predictions for ecological condition of streams in the Pacific Northwest: An assessment of landscape variables, models, endpoints and prediction scale

    EPA Science Inventory

    While large-scale, randomized surveys estimate the percentage of a region’s streams in poor ecological condition, identifying particular stream reaches or watersheds in poor condition is an equally important goal for monitoring and management. We built predictive models of strea...

  12. The global reference atmospheric model, mod 2 (with two scale perturbation model)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Hargraves, W. R.

    1976-01-01

    The Global Reference Atmospheric Model was improved to produce more realistic simulations of vertical profiles of atmospheric parameters. A revised two-scale random perturbation model is described, with perturbation magnitudes adjusted to conform to constraints imposed by the perfect gas law and the hydrostatic condition. The two-scale perturbation model produces appropriately correlated (horizontally and vertically) small-scale and large-scale perturbations. These stochastically simulated perturbations are representative of the magnitudes and wavelengths of perturbations produced by tides and planetary-scale waves (large scale) and by turbulence and gravity waves (small scale). Other new features of the model are: (1) a second-order geostrophic wind relation that, unlike the ordinary geostrophic relation, does not "blow up" at low latitudes; and (2) revised quasi-biennial amplitudes and phases and revised stationary perturbations, based on data through 1972.
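    The two-scale idea can be sketched with two exponentially correlated (first-order autoregressive) vertical profiles, one per scale, added together; the correlation lengths and magnitudes below are illustrative only, and the gas-law/hydrostatic adjustment step is omitted:

        import numpy as np

        def correlated_profile(n, dz, corr_length, sigma, rng):
            """Exponentially correlated (first-order autoregressive) vertical profile."""
            rho = np.exp(-dz / corr_length)
            p = np.empty(n)
            p[0] = rng.normal(0.0, sigma)
            for i in range(1, n):
                p[i] = rho * p[i - 1] + rng.normal(0.0, sigma * np.sqrt(1.0 - rho**2))
            return p

        rng = np.random.default_rng(42)
        z = np.arange(0.0, 120.0, 1.0)  # altitude grid, km
        large = correlated_profile(z.size, 1.0, 20.0, 0.04, rng)  # tides, planetary waves
        small = correlated_profile(z.size, 1.0, 2.0, 0.02, rng)   # gravity waves, turbulence
        fractional_density_perturbation = large + small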

  13. A Randomized Controlled Trial Evaluation of "Time to Read", a Volunteer Tutoring Program for 8- to 9-Year-Olds

    ERIC Educational Resources Information Center

    Miller, Sarah; Connolly, Paul

    2013-01-01

    Tutoring is commonly employed to prevent early reading failure, and evidence suggests that it can have a positive effect. This article presents findings from a large-scale ("n" = 734) randomized controlled trial evaluation of the effect of "Time to Read"--a volunteer tutoring program aimed at children aged 8 to 9 years--on…

  14. Review of Three Recent Randomized Trials of School-Based Mentoring: Making Sense of Mixed Findings. Social Policy Report. Volume 24, Number 3

    ERIC Educational Resources Information Center

    Wheeler, Marc E.; Keller, Thomas E.; DuBois, David L.

    2010-01-01

    Between 2007 and 2009, reports were released on the results of three separate large-scale random assignment studies of the effectiveness of school-based mentoring programs for youth. The studies evaluated programs implemented by Big Brothers Big Sisters of America (BBBSA) affiliates (Herrera et al., 2007), Communities In Schools of San Antonio,…

  15. The role of fanatics in consensus formation

    NASA Astrophysics Data System (ADS)

    Gündüç, Semra

    2015-08-01

    A model of opinion dynamics with two types of agents as social actors is presented, using the Ising thermodynamic model as the dynamics template. The agents are either opportunists, which live at sites and interact with their neighbors, or fanatics/missionaries, which move from site to site randomly in pursuit of converting agents of the opposite opinion with the help of opportunists. Here, the moving agents act as an external influence on the opportunists to convert them to the opposite opinion. It is shown by numerical simulations that such dynamics of opinion formation may explain some details of consensus formation even when one of the opinions is held by a minority. Regardless of the opinion distribution, societies of different sizes exhibit different opinion formation behavior and time scales. In order to understand the general behavior, the scaling relations obtained by comparing opinion formation processes in societies with varying population and number of randomly moving agents are studied. For the proposed model, two types of scaling relations are observed. In fixed-size societies, increasing the number of randomly moving agents gives a scaling relation for the time scale of the opinion formation process. The second type of scaling relation is due to the size-dependent information propagation in finite but large systems, namely finite-size scaling.
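    A minimal sketch of this model class — Ising opportunists on a periodic lattice, with randomly hopping fanatics acting as a local field of fixed sign on the sites they occupy — is given below; lattice size, temperature, and agent counts are illustrative, not the paper's parameters:

        import numpy as np

        rng = np.random.default_rng(0)
        L, T, n_fanatics, steps = 32, 1.8, 20, 200_000
        spins = rng.choice([-1, 1], size=(L, L))           # opportunists' opinions
        fan = rng.integers(0, L, size=(n_fanatics, 2))     # fanatics all push +1

        for _ in range(steps):
            i, j = rng.integers(0, L, size=2)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            field = np.sum((fan[:, 0] == i) & (fan[:, 1] == j))  # fanatics on this site
            dE = 2 * spins[i, j] * (nb + field)            # Ising energy cost of a flip
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
            fan = (fan + rng.integers(-1, 2, size=fan.shape)) % L  # random hops

        print("mean opinion:", spins.mean())               # drifts toward the fanatics' +1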

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bromberger, Seth A.; Klymko, Christine F.; Henderson, Keith A.

    Betweenness centrality is a graph statistic used to find vertices that participate in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems, and its complete form requires the calculation of all-pairs shortest paths for each vertex. This leads to a time complexity of O(|V||E|), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations from a subset of randomly selected vertices. This reduces the complexity of the centrality estimation to O(|S||E|), |S| < |V|, which can be scaled appropriately based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identified a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.
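    As a concrete illustration of the trade-off described (the graph, sample size, and degree-based seed strategy below are illustrative choices, not the report's exact experiments), NetworkX exposes both the exact computation and the k-sample random-seed estimator, and a degree-ranked seed strategy can be emulated with the subset variant:

        import networkx as nx

        G = nx.barabasi_albert_graph(2000, 3, seed=1)  # a common scale-free test graph
        exact = nx.betweenness_centrality(G)           # O(|V||E|) all-pairs baseline

        # Random seed selection: Brandes accumulation from k random source vertices.
        est_random = nx.betweenness_centrality(G, k=100, seed=2)
        err = sum(abs(exact[v] - est_random[v]) for v in G) / len(G)
        print(f"mean abs error with random seeds: {err:.5f}")

        # Alternate strategy: seed from the 100 highest-degree vertices instead
        # (subset scores are on a different scale; rescale before comparing to exact).
        seeds = sorted(G, key=G.degree, reverse=True)[:100]
        est_degree = nx.betweenness_centrality_subset(G, sources=seeds, targets=list(G))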

  17. Effects of coarse-graining on the scaling behavior of long-range correlated and anti-correlated signals.

    PubMed

    Xu, Yinlin; Ma, Qianli D Y; Schmitt, Daniel T; Bernaola-Galván, Pedro; Ivanov, Plamen Ch

    2011-11-01

    We investigate how various coarse-graining (signal quantization) methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and that with increasing the width of the coarse-graining partition interval Δ, this crossover moves to intermediate and small scales. In contrast, the scaling of positively correlated signals is less affected by the coarse-graining, with no observable changes when Δ < 1, while for Δ > 1 a crossover appears at small scales and moves to intermediate and large scales with increasing Δ. For very rough coarse-graining (Δ > 3) based on the Floor and Symmetry methods, the position of the crossover stabilizes, in contrast to the Centro-Symmetry method where the crossover continuously moves across scales and leads to a random behavior at all scales; thus indicating a much stronger effect of the Centro-Symmetry compared to the Floor and the Symmetry method. For coarse-graining in time, where data points are averaged in non-overlapping time windows, we find that the scaling for both anti-correlated and positively correlated signals is practically preserved. The results of our simulations are useful for the correct interpretation of the correlation and scaling properties of symbolic sequences.
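    For reference, the order-1 detrended fluctuation analysis used as the quantifier above fits, in outline, in a few lines; window sizes and the white-noise test signal are illustrative:

        import numpy as np

        def dfa(signal, scales):
            """Order-1 DFA: fluctuation F(n) for each window size n."""
            profile = np.cumsum(signal - np.mean(signal))   # integrated signal
            F = []
            for n in scales:
                n_win = len(profile) // n
                windows = profile[: n_win * n].reshape(n_win, n)
                t = np.arange(n)
                coeffs = np.polyfit(t, windows.T, 1)        # linear fit per window
                trend = np.outer(coeffs[0], t) + coeffs[1][:, None]
                F.append(np.sqrt(np.mean((windows - trend) ** 2)))
            return np.array(F)

        rng = np.random.default_rng(3)
        scales = np.unique(np.logspace(1, 3, 12).astype(int))
        F = dfa(rng.normal(size=10_000), scales)            # white noise: expect ~0.5
        alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
        print(f"estimated scaling exponent: {alpha:.2f}")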

  18. Effects of coarse-graining on the scaling behavior of long-range correlated and anti-correlated signals

    PubMed Central

    Xu, Yinlin; Ma, Qianli D.Y.; Schmitt, Daniel T.; Bernaola-Galván, Pedro; Ivanov, Plamen Ch.

    2014-01-01

    We investigate how various coarse-graining (signal quantization) methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and that with increasing the width of the coarse-graining partition interval Δ, this crossover moves to intermediate and small scales. In contrast, the scaling of positively correlated signals is less affected by the coarse-graining, with no observable changes when Δ < 1, while for Δ > 1 a crossover appears at small scales and moves to intermediate and large scales with increasing Δ. For very rough coarse-graining (Δ > 3) based on the Floor and Symmetry methods, the position of the crossover stabilizes, in contrast to the Centro-Symmetry method where the crossover continuously moves across scales and leads to a random behavior at all scales; thus indicating a much stronger effect of the Centro-Symmetry compared to the Floor and the Symmetry method. For coarse-graining in time, where data points are averaged in non-overlapping time windows, we find that the scaling for both anti-correlated and positively correlated signals is practically preserved. The results of our simulations are useful for the correct interpretation of the correlation and scaling properties of symbolic sequences. PMID:25392599

  19. Topology of large-scale structure. IV - Topology in two dimensions

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Cohen, Alexander P.; Hamilton, Andrew J. S.; Gott, J. Richard, III; Weinberg, David H.

    1989-01-01

    In a recent series of papers, an algorithm was developed for quantitatively measuring the topology of the large-scale structure of the universe and this algorithm was applied to numerical models and to three-dimensional observational data sets. In this paper, it is shown that topological information can be derived from a two-dimensional cross section of a density field, and analytic expressions are given for a Gaussian random field. The application of a two-dimensional numerical algorithm for measuring topology to cross sections of three-dimensional models is demonstrated.

  20. Stability of knotted vortices in wave chaos

    NASA Astrophysics Data System (ADS)

    Taylor, Alexander; Dennis, Mark

    Large scale tangles of disordered filaments occur in many diverse physical systems, from turbulent superfluids to optical volume speckle to liquid crystal phases. They can exhibit particular large scale random statistics despite very different local physics. We have previously used the topological statistics of knotting and linking to characterise the large scale tangling, using the vortices of three-dimensional wave chaos as a universal model system whose physical lengthscales are set only by the wavelength. Unlike geometrical quantities, the statistics of knotting depend strongly on the physical system and boundary conditions. Although knotting patterns characterise different systems, the topology of vortices is highly unstable to perturbation, under which they may reconnect with one another. In systems of constructed knots, these reconnections generally rapidly destroy the knot, but for vortex tangles the topological statistics must be stable. Using large scale simulations of chaotic eigenfunctions, we numerically investigate the prevalence and impact of reconnection events, and their effect on the topology of the tangle.

  1. Cooperation without Culture? The Null Effect of Generalized Trust on Intentional Homicide: A Cross-National Panel Analysis, 1995–2009

    PubMed Central

    Robbins, Blaine

    2013-01-01

    Sociologists, political scientists, and economists all suggest that culture plays a pivotal role in the development of large-scale cooperation. In this study, I used generalized trust as a measure of culture to explore if and how culture impacts intentional homicide, my operationalization of cooperation. I compiled multiple cross-national data sets and used pooled time-series linear regression, single-equation instrumental-variables linear regression, and fixed- and random-effects estimation techniques on an unbalanced panel of 118 countries and 232 observations spread over a 15-year time period. Results suggest that culture and large-scale cooperation form a tenuous relationship, while economic factors such as development, inequality, and geopolitics appear to drive large-scale cooperation. PMID:23527211
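    The fixed- and random-effects specifications can be sketched with statsmodels on a synthetic stand-in panel; all variable names and the data-generating process below are hypothetical placeholders, not the study's data or model:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical unbalanced country-year panel; names are placeholders only.
        rng = np.random.default_rng(7)
        rows = []
        for c in range(40):
            u = rng.normal()                     # country-level random intercept
            years = rng.choice(np.arange(1995, 2010), size=rng.integers(2, 7),
                               replace=False)
            for t in years:
                trust, gini = rng.normal(), rng.normal()
                rows.append((f"c{c}", t, trust, gini,
                             2.0 - 0.1 * trust + 0.5 * gini + u + rng.normal()))
        df = pd.DataFrame(rows, columns=["country", "year", "trust", "gini", "homicide"])

        # Random-intercept (random-effects) model versus dummy-variable fixed effects.
        re_fit = smf.mixedlm("homicide ~ trust + gini", df, groups=df["country"]).fit()
        fe_fit = smf.ols("homicide ~ trust + gini + C(country)", df).fit()
        print(re_fit.params["trust"], fe_fit.params["trust"])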

  2. Community turnover of wood-inhabiting fungi across hierarchical spatial scales.

    PubMed

    Abrego, Nerea; García-Baquero, Gonzalo; Halme, Panu; Ovaskainen, Otso; Salcedo, Isabel

    2014-01-01

    For efficient use of conservation resources it is important to determine how species diversity changes across spatial scales. In many poorly known species groups, little is known about the spatial scales at which conservation efforts should be focused. Here we examined how the community turnover of wood-inhabiting fungi is realised at three hierarchical levels, and how much of community variation is explained by variation in resource composition and spatial proximity. The hierarchical study design consisted of management type (fixed factor), forest site (random factor, nested within management type) and study plots (randomly placed plots within each study site). To examine how species richness varied across the three hierarchical scales, randomized species accumulation curves and additive partitioning of species richness were applied. To analyse variation in wood-inhabiting species and dead wood composition at each scale, linear and Permanova modelling approaches were used. Wood-inhabiting fungal communities were dominated by rare and infrequent species. The similarity of fungal communities was higher within sites and within management categories than among sites or between the two management categories, and it decreased with increasing distance among the sampling plots and with decreasing similarity of dead wood resources. However, only a small part of community variation could be explained by these factors. The species present in managed forests were to a large extent a subset of those present in natural forests. Our results suggest that the protection of rare species in particular requires a large total area. As managed forests add little complementary value to the diversity of natural forests, the conservation of natural forests is the key to ecologically effective conservation. As the dissimilarity of fungal communities increases with distance, conserved natural forest sites should be broadly distributed in space, yet the individual conserved areas should be large enough to ensure local persistence.

  3. Community Turnover of Wood-Inhabiting Fungi across Hierarchical Spatial Scales

    PubMed Central

    Abrego, Nerea; García-Baquero, Gonzalo; Halme, Panu; Ovaskainen, Otso; Salcedo, Isabel

    2014-01-01

    For efficient use of conservation resources it is important to determine how species diversity changes across spatial scales. In many poorly known species groups, little is known about the spatial scales at which conservation efforts should be focused. Here we examined how the community turnover of wood-inhabiting fungi is realised at three hierarchical levels, and how much of community variation is explained by variation in resource composition and spatial proximity. The hierarchical study design consisted of management type (fixed factor), forest site (random factor, nested within management type) and study plots (randomly placed plots within each study site). To examine how species richness varied across the three hierarchical scales, randomized species accumulation curves and additive partitioning of species richness were applied. To analyse variation in wood-inhabiting species and dead wood composition at each scale, linear and Permanova modelling approaches were used. Wood-inhabiting fungal communities were dominated by rare and infrequent species. The similarity of fungal communities was higher within sites and within management categories than among sites or between the two management categories, and it decreased with increasing distance among the sampling plots and with decreasing similarity of dead wood resources. However, only a small part of community variation could be explained by these factors. The species present in managed forests were to a large extent a subset of those present in natural forests. Our results suggest that the protection of rare species in particular requires a large total area. As managed forests add little complementary value to the diversity of natural forests, the conservation of natural forests is the key to ecologically effective conservation. As the dissimilarity of fungal communities increases with distance, conserved natural forest sites should be broadly distributed in space, yet the individual conserved areas should be large enough to ensure local persistence. PMID:25058128

  4. Weak gravitational lensing due to large-scale structure of the universe

    NASA Technical Reports Server (NTRS)

    Jaroszynski, Michal; Park, Changbom; Paczynski, Bohdan; Gott, J. Richard, III

    1990-01-01

    The effect of the large-scale structure of the universe on the propagation of light rays is studied. The development of the large-scale density fluctuations in the omega = 1 universe is calculated within the cold dark matter scenario using a smooth particle approximation. The propagation of about 10^6 random light rays between the redshift z = 5 and the observer was followed. It is found that the effect of shear is negligible, and the amplification of single images is dominated by the matter in the beam. The spread of amplifications is very small. Therefore, the filled-beam approximation is very good for studies of strong lensing by galaxies or clusters of galaxies. In the simulation, the column density was averaged over a comoving area of approximately (1/h Mpc)-squared. No case of strong gravitational lensing was found, i.e., no 'over-focused' image that would suggest that a few images might be present. Therefore, the large-scale structure of the universe as it is presently known does not produce multiple images with gravitational lensing on a scale larger than clusters of galaxies.

  5. Scaling of Device Variability and Subthreshold Swing in Ballistic Carbon Nanotube Transistors

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Tersoff, Jerry; Han, Shu-Jen; Penumatcha, Ashish V.

    2015-08-01

    In field-effect transistors, the inherent randomness of dopants and other charges is a major cause of device-to-device variability. For a quasi-one-dimensional device such as carbon nanotube transistors, even a single charge can drastically change the performance, making this a critical issue for their adoption as a practical technology. Here we calculate the effect of the random charges at the gate-oxide surface in ballistic carbon nanotube transistors, finding good agreement with the variability statistics in recent experiments. A combination of experimental and simulation results further reveals that these random charges are also a major factor limiting the subthreshold swing for nanotube transistors fabricated on thin gate dielectrics. We then establish that the scaling of the nanotube device uniformity with the gate dielectric, fixed-charge density, and device dimension is qualitatively different from conventional silicon transistors, reflecting the very different device physics of a ballistic transistor with a quasi-one-dimensional channel. The combination of gate-oxide scaling and improved control of fixed-charge density should provide the uniformity needed for large-scale integration of such novel one-dimensional transistors even at extremely scaled device dimensions.

  6. Linear velocity fields in non-Gaussian models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields are examined in two types of physically motivated non-Gaussian models for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  7. Evaluation of the effect of Spiritual care on patients with generalized anxiety and depression: a randomized controlled study.

    PubMed

    Sankhe, A; Dalal, K; Save, D; Sarve, P

    2017-12-01

    The present study was conducted to assess the effect of spiritual care on patients with depression, anxiety or both in a randomized controlled design. The participants were randomized either to receive spiritual care or not, and the Hamilton anxiety rating scale (HAM-A), Hamilton depression rating scale (HAM-D), WHO quality of life-Brief (WHOQOL-BREF) and Functional assessment of chronic illness therapy - Spiritual well-being (FACIT-Sp) were assessed before therapy and at two follow-ups at 3 and 6 weeks. In the spiritual care therapy group, statistically significant differences were observed in both the HAM-A and HAM-D scales between baseline and visit 2 (p < 0.001), indicating significantly reduced symptoms of anxiety and depression, respectively. No statistically significant differences were observed for any of the scales during the follow-up periods for the control group of participants. When the scores were compared between the study groups, HAM-A, HAM-D and FACIT-Sp 12 scores were significantly lower in the interventional group than in the control group at both the third and sixth weeks. This suggests a significant improvement in symptoms of anxiety and depression in the spiritual care therapy group compared with the control group; however, large randomized controlled trials with robust design are needed to confirm the same.

  8. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.

    PubMed

    Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai

    2008-03-15

    A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and large network scale are required. However, the computational and communication complexity and time consumption are greatly increased with the increase of the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs force the particles to move from the randomly set positions toward the original positions, i.e., the node positions. Therefore, a blind node position can be determined from the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optimization, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite the increase of the network scale. The time consumption has also been shown to remain almost constant, since the calculation steps are almost unrelated to the network scale.
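    A minimal sketch of the spring-relaxation idea follows (not the authors' implementation; network size, radio range, step size, and anchor count are illustrative, and the three patches are omitted): blind nodes start at random positions and move along the net force of virtual springs whose rest lengths are the measured neighbor distances.

        import numpy as np

        rng = np.random.default_rng(5)
        n, n_anchors, radio_range, step = 60, 8, 0.35, 0.1

        truth = rng.random((n, 2))                 # true positions (unknown in practice)
        dist = np.linalg.norm(truth[:, None] - truth[None, :], axis=2)
        neighbors = (dist < radio_range) & ~np.eye(n, dtype=bool)  # ranging links

        pos = rng.random((n, 2))                   # blind nodes start at random positions
        pos[:n_anchors] = truth[:n_anchors]        # anchors know their positions
        for _ in range(500):                       # spring relaxation iterations
            for i in range(n_anchors, n):
                js = np.where(neighbors[i])[0]
                vec = pos[js] - pos[i]
                length = np.linalg.norm(vec, axis=1) + 1e-12
                # Rest length of each virtual spring = measured neighbor distance.
                force = ((length - dist[i, js]) / length)[:, None] * vec
                pos[i] += step * force.sum(axis=0)

        err = np.linalg.norm(pos[n_anchors:] - truth[n_anchors:], axis=1).mean()
        print(f"mean localization error: {err:.3f}")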

  9. Aromatherapy for the treatment of PONV in children: a pilot RCT.

    PubMed

    Kiberd, Mathew B; Clarke, Suzanne K; Chorney, Jill; d'Eon, Brandon; Wright, Stuart

    2016-11-09

    Postoperative nausea and vomiting (PONV) is one of the most common postoperative complications of general anesthesia in pediatrics. Aromatherapy has been shown to be effective in treating PONV in adults. Given the encouraging results of the adult studies, we planned to determine the feasibility of conducting a large-scale study in the pediatric population. Our group conducted a pilot randomized controlled trial examining the effect of aromatherapy on postoperative nausea and vomiting in patients aged 4-16 years undergoing ambulatory surgery at a single center. Nausea was defined as a score of 4/10 on the Baxter Retching Faces Scale (BARF scale). A clinically significant reduction was defined as a two-point reduction in nausea. Postoperatively, children were administered the BARF scale at 15-min intervals until discharge home or until a nausea score of 4/10 or greater. Children with nausea were randomized to a saline placebo group or aromatherapy with QueaseEase™ (Soothing Scents, Inc, Enterprise, AL: a blend of ginger, lavender, mint and spearmint). Nausea scores were recorded post intervention. A total of 162 subjects were screened for inclusion in the study. Randomization occurred in 41 subjects, of whom 39 were included in the final analysis. For the primary outcome, 14/18 (78 %) of controls reached the primary outcome compared to 19/21 (90 %) in the aromatherapy group (p = 0.39, Eta 0.175). Other outcomes included use of antiemetics in the PACU (control 44 %, aromatherapy 52 %, p = 0.75, Eta 0.08) and emesis (control 11 %, aromatherapy 9 %, p = 0.87, Eta = 0.03). There was a statistically significant difference in whether subjects continued to use the intervention (control 28 %, aromatherapy 66 %, p = 0.048, Eta 0.33). Aromatherapy had a small, non-significant effect size in treating postoperative nausea and vomiting compared with control. A large-scale randomized controlled trial would not be feasible at our institution and would be of doubtful utility. ClinicalTrials.gov NCT02663154.

  10. Diffusion of strongly magnetized cosmic ray particles in a turbulent medium

    NASA Technical Reports Server (NTRS)

    Ptuskin, V. S.

    1985-01-01

    Cosmic ray (CR) propagation in a turbulent medium is usually considered in the diffusion approximation. Here, the diffusion equation is obtained for strongly magnetized particles in the general form. The influence of a large-scale random magnetic field on CR propagation in the interstellar medium is discussed. Cosmic rays are assumed to propagate in a medium with a regular field H and an ensemble of random MHD waves. The energy density of waves on scales smaller than the mean free path l of CR particles is small. The collision integral of the general form, which describes the interaction between relativistic particles and waves in the quasilinear approximation, is used.

  11. A Randomized Trial Examining the Effects of Conjoint Behavioral Consultation in Rural Schools: Student Outcomes and the Mediating Role of the Teacher-Parent Relationship

    ERIC Educational Resources Information Center

    Sheridan, Susan M.; Witte, Amanda L.; Holmes, Shannon R.; Coutts, Michael J.; Dent, Amy L.; Kunz, Gina M.; Wu, ChaoRong

    2017-01-01

    The results of a large-scale randomized controlled trial of Conjoint Behavioral Consultation (CBC) on student outcomes and teacher-parent relationships in rural schools are presented. CBC is an indirect service delivery model that addresses concerns shared by teachers and parents about students. In the present study, the intervention was aimed at…

  12. Comparison of prestellar core elongations and large-scale molecular cloud structures in the Lupus I region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poidevin, Frédérick; Ade, Peter A. R.; Hargrave, Peter C.

    2014-08-10

    Turbulence and magnetic fields are expected to be important for regulating molecular cloud formation and evolution. However, their effects on sub-parsec to 100 parsec scales, leading to the formation of starless cores, are not well understood. We investigate the prestellar core structure morphologies obtained from analysis of the Herschel-SPIRE 350 μm maps of the Lupus I cloud. This distribution is first compared on a statistical basis to the large-scale shape of the main filament. We find the distribution of the elongation position angle of the cores to be consistent with a random distribution, which means no specific orientation of the morphology of the cores is observed with respect to the mean orientation of the large-scale filament in Lupus I, nor relative to a large-scale bent filament model. This distribution is also compared to the mean orientation of the large-scale magnetic fields probed at 350 μm with the Balloon-borne Large Aperture Telescope for Polarimetry during its 2010 campaign. Here again we do not find any correlation between the core morphology distribution and the average orientation of the magnetic fields on parsec scales. Our main conclusion is that the local filament dynamics—including secondary filaments that often run orthogonally to the primary filament—and possibly small-scale variations in the local magnetic field direction could be the dominant factors for explaining the final orientation of each core.

  13. Radiation breakage of DNA: a model based on random-walk chromatin structure

    NASA Technical Reports Server (NTRS)

    Ponomarev, A. L.; Sachs, R. K.

    2001-01-01

    Monte Carlo computer software, called DNAbreak, has recently been developed to analyze observed non-random clustering of DNA double strand breaks in chromatin after exposure to densely ionizing radiation. The software models coarse-grained configurations of chromatin and radiation tracks, small-scale details being suppressed in order to obtain statistical results for larger scales, up to the size of a whole chromosome. We here give an analytic counterpart of the numerical model, useful for benchmarks, for elucidating the numerical results, for analyzing the assumptions of a more general but less mechanistic "randomly-located-clusters" formalism, and, potentially, for speeding up the calculations. The equations characterize multi-track DNA fragment-size distributions in terms of one-track action; an important step in extrapolating high-dose laboratory results to the much lower doses of main interest in environmental or occupational risk estimation. The approach can utilize the experimental information on DNA fragment-size distributions to draw inferences about large-scale chromatin geometry during cell-cycle interphase.

  14. A randomized controlled pilot study of the effectiveness of occupational therapy for children with sensory modulation disorder.

    PubMed

    Miller, Lucy Jane; Coll, Joseph R; Schoen, Sarah A

    2007-01-01

    A pilot randomized controlled trial (RCT) of the effectiveness of occupational therapy using a sensory integration approach (OT-SI) was conducted with children who had sensory modulation disorders (SMDs). This study evaluated the effectiveness of three treatment groups. In addition, sample size estimates for a large scale, multisite RCT were calculated. Twenty-four children with SMD were randomly assigned to one of three treatment conditions; OT-SI, Activity Protocol, and No Treatment. Pretest and posttest measures of behavior, sensory and adaptive functioning, and physiology were administered. The OT-SI group, compared to the other two groups, made significant gains on goal attainment scaling and on the Attention subtest and the Cognitive/Social composite of the Leiter International Performance Scale-Revised. Compared to the control groups, OT-SI improvement trends on the Short Sensory Profile, Child Behavior Checklist, and electrodermal reactivity were in the hypothesized direction. Findings suggest that OT-SI may be effective in ameliorating difficulties of children with SMD.

  15. A random-walk/giant-loop model for interphase chromosomes.

    PubMed Central

    Sachs, R K; van den Engh, G; Trask, B; Yokota, H; Hearst, J E

    1995-01-01

    Fluorescence in situ hybridization data on distances between defined genomic sequences are used to construct a quantitative model for the overall geometric structure of a human chromosome. We suggest that the large-scale geometry during the G0/G1 part of the cell cycle may consist of flexible chromatin loops, averaging approximately 3 million bp, with a random-walk backbone. A fully explicit, three-parametric polymer model of this random-walk/giant-loop structure can account well for the data. More general models consistent with the data are briefly discussed. PMID:7708711

  16. Large-scale modeling of rain fields from a rain cell deterministic model

    NASA Astrophysics Data System (ADS)

    FéRal, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, JoëL.; Cornet, FréDéRic; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
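    The final step described — converting a correlated Gaussian field into a binary raining/not-raining mask with a prescribed occupation rate — can be sketched as follows (grid size, the anisotropic spectrum, and the occupation rate are illustrative, not the calibrated ARAMIS values):

        import numpy as np

        rng = np.random.default_rng(9)
        n, occupation_rate = 256, 0.3               # grid size; fraction of area raining

        # Gaussian field with anisotropic correlation, generated by spectral shaping.
        kx = np.fft.fftfreq(n)[:, None]
        ky = np.fft.fftfreq(n)[None, :]
        power = 1.0 / (1.0 + (50 * kx) ** 2 + (15 * ky) ** 2) ** 1.5
        noise = np.fft.fft2(rng.normal(size=(n, n)))
        field = np.real(np.fft.ifft2(noise * np.sqrt(power)))

        # Threshold so exactly the prescribed fraction of the domain is raining.
        threshold = np.quantile(field, 1.0 - occupation_rate)
        rain_mask = field > threshold
        print(f"raining fraction: {rain_mask.mean():.2f}")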

  17. Statistical model of exotic rotational correlations in emergent space-time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Craig; Kwon, Ohkyung; Richardson, Jonathan

    2017-06-06

    A statistical model is formulated to compute exotic rotational correlations that arise as inertial frames and causal structure emerge on large scales from entangled Planck scale quantum systems. Noncommutative quantum dynamics are represented by random transverse displacements that respect causal symmetry. Entanglement is represented by covariance of these displacements in Planck scale intervals defined by future null cones of events on an observer's world line. Light that propagates in a nonradial direction inherits a projected component of the exotic rotational correlation that accumulates as a random walk in phase. A calculation of the projection and accumulation leads to exact predictions for statistical properties of exotic Planck scale correlations in an interferometer of any configuration. The cross-covariance for two nearly co-located interferometers is shown to depart only slightly from the autocovariance. Specific examples are computed for configurations that approximate realistic experiments, and show that the model can be rigorously tested.

  18. Vertical Descent and Landing Tests of a 0.13-Scale Model of the Convair XFY-1 Vertically Rising Airplane in Still Air, TED No. NACA DE 368

    NASA Technical Reports Server (NTRS)

    Smith, Charlee C., Jr.; Lovell, Powell M., Jr.

    1954-01-01

    An investigation is being conducted to determine the dynamic stability and control characteristics of a 0.13-scale flying model of the Convair XFY-1 vertically rising airplane. This paper presents the results of flight and force tests to determine the stability and control characteristics of the model in vertical descent and landings in still air. The tests indicated that landings, including vertical descent from altitudes representing up to 400 feet for the full-scale airplane and at rates of descent up to 15 or 20 feet per second (full scale), can be performed satisfactorily. Sustained vertical descent in still air will probably be more difficult to perform because of large random trim changes that become greater as the descent velocity is increased. A slight steady head wind or cross wind might be sufficient to eliminate the random trim changes.

  19. Multilingual Literacy Skill Development in Kenya: Results from Medium Scale Randomized Controlled Trials

    ERIC Educational Resources Information Center

    Piper, Benjamin

    2016-01-01

    If children do not learn how to read in the first few years of primary school, they are at greater risk of dropping out. It is therefore crucial to identify and test interventions that have the potential to make a large impact, can be implemented quickly, and are affordable enough to be taken to scale by the Kenyan government. This paper presents the…

  20. Psychometric Properties of Spanish Adaptation of the PDD-MRS Scale in Adults with Intellectual Developmental Disorders: The EVTEA-DI Scale

    ERIC Educational Resources Information Center

    Cortés, Maria José; Orejuela, Carmen; Castellví, Gemma; Folch, Annabel; Rovira, Lluís; Salvador-Carulla, Luis; Irazábal, Marcia; Muñoz, Silvia; Haro, Josep Maria; Vilella, Elisabet; Martínez-Leal, Rafael

    2018-01-01

    Strategies for the early detection of autism spectrum disorders (ASD) in people with intellectual developmental disorder (IDD) are urgently needed, but few specific tools have been developed. The present study examines the psychometric properties of the EVTEA-DI, a Spanish adaptation of the PDD-MRS, in a large randomized sample of 979 adults with…

  1. Model-independent test for scale-dependent non-Gaussianities in the cosmic microwave background.

    PubMed

    Räth, C; Morfill, G E; Rossmanith, G; Banday, A J; Górski, K M

    2009-04-03

    We present a model-independent method to test for scale-dependent non-Gaussianities, in combination with scaling indices as test statistics. To this end, surrogate data sets are generated in which the power spectrum of the original data is preserved, while the higher-order correlations are partly randomized by applying a scale-dependent shuffling procedure to the Fourier phases. We apply this method to the Wilkinson Microwave Anisotropy Probe data of the cosmic microwave background and find signatures of non-Gaussianities on large scales. Further tests are required to elucidate the origin of the detected anomalies.
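    In its simplest, non-scale-dependent 1D form, the surrogate construction reads as below; restricting the phase shuffling to chosen scale bands would recover the scale-dependent variant described, and the test signal here is illustrative:

        import numpy as np

        def phase_randomized_surrogate(data, rng):
            """Same power spectrum as `data`, Fourier phases drawn uniformly at random."""
            spectrum = np.fft.rfft(data)
            phases = rng.uniform(0.0, 2.0 * np.pi, size=spectrum.size)
            phases[0] = phases[-1] = 0.0      # keep DC and Nyquist terms real
            return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(data))

        rng = np.random.default_rng(11)
        x = np.cumsum(rng.normal(size=4096))  # test signal: a random walk
        s = phase_randomized_surrogate(x, rng)
        print(np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(s))))  # True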

  2. Assessing variance components in multilevel linear models using approximate Bayes factors: A case study of ethnic disparities in birthweight

    PubMed Central

    Saville, Benjamin R.; Herring, Amy H.; Kaufman, Jay S.

    2013-01-01

    Racial/ethnic disparities in birthweight are a large source of differential morbidity and mortality worldwide and have remained largely unexplained in epidemiologic models. We assess the impact of maternal ancestry and census tract residence on infant birth weights in New York City and the modifying effects of race and nativity by incorporating random effects in a multilevel linear model. Evaluating the significance of these predictors involves the test of whether the variances of the random effects are equal to zero. This is problematic because the null hypothesis lies on the boundary of the parameter space. We generalize an approach for assessing random effects in the two-level linear model to a broader class of multilevel linear models by scaling the random effects to the residual variance and introducing parameters that control the relative contribution of the random effects. After integrating over the random effects and variance components, the resulting integrals needed to calculate the Bayes factor can be efficiently approximated with Laplace’s method. PMID:24082430

  3. Resonance, criticality, and emergence in city traffic investigated in cellular automaton models.

    PubMed

    Varas, A; Cornejo, M D; Toledo, B A; Muñoz, V; Rogan, J; Zarama, R; Valdivia, J A

    2009-11-01

    The complex behavior that occurs when traffic lights are synchronized is studied for a row of interacting cars. The system is modeled through a cellular automaton. Two strategies are considered: all lights in phase and a "green wave" with a propagating green signal. It is found that the mean velocity near the resonant condition follows a critical scaling law. For the green wave, it is shown that the mean velocity scaling law holds even for random separation between traffic lights and is not dependent on the density. This independence on car density is broken when random perturbations are considered in the car velocity. Random velocity perturbations also have the effect of leading the system to an emergent state, where cars move in clusters, but with an average velocity which is independent of traffic light switching for large injection rates.
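    A minimal sketch of this class of model — Nagel-Schreckenberg-style single-lane dynamics with in-phase traffic lights, offered as a generic stand-in for the authors' automaton — follows; road length, light spacing, switching period, and rates are illustrative:

        import numpy as np

        rng = np.random.default_rng(13)
        L, N, v_max, p_slow = 400, 40, 5, 0.2       # cells, cars, max speed, slow rate
        light_gap, period = 100, 40                 # light spacing, green/red duration
        pos = np.sort(rng.choice(L, size=N, replace=False))  # cars on a ring road
        vel = np.zeros(N, dtype=int)
        lights = np.arange(0, L, light_gap)         # all lights switch in phase

        for t in range(2000):
            green = (t // period) % 2 == 0
            gap = (np.roll(pos, -1) - pos - 1) % L  # empty cells to the car ahead
            for i in range(N):
                v = min(vel[i] + 1, v_max, gap[i])  # accelerate without colliding
                if not green:                       # stop before the nearest red light
                    v = min(v, ((lights - pos[i] - 1) % L).min())
                if v > 0 and rng.random() < p_slow: # random slowdown
                    v -= 1
                vel[i] = v
            pos = (pos + vel) % L                   # synchronous update

        print("mean velocity:", vel.mean())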

  4. A polymer, random walk model for the size-distribution of large DNA fragments after high linear energy transfer radiation

    NASA Technical Reports Server (NTRS)

    Ponomarev, A. L.; Brenner, D.; Hlatky, L. R.; Sachs, R. K.

    2000-01-01

    DNA double-strand breaks (DSBs) produced by densely ionizing radiation are not located randomly in the genome: recent data indicate DSB clustering along chromosomes. Stochastic DSB clustering at large scales, from > 100 Mbp down to < 0.01 Mbp, is modeled using computer simulations and analytic equations. A random-walk, coarse-grained polymer model for chromatin is combined with a simple track structure model in Monte Carlo software called DNAbreak and is applied to data on alpha-particle irradiation of V-79 cells. The chromatin model neglects molecular details but systematically incorporates an increase in average spatial separation between two DNA loci as the number of base-pairs between the loci increases. Fragment-size distributions obtained using DNAbreak match data on large fragments about as well as distributions previously obtained with a less mechanistic approach. Dose-response relations, linear at small doses of high linear energy transfer (LET) radiation, are obtained. They are found to be non-linear when the dose becomes so large that there is a significant probability of overlapping or close juxtaposition, along one chromosome, for different DSB clusters from different tracks. The non-linearity is more evident for large fragments than for small. The DNAbreak results furnish an example of the RLC (randomly located clusters) analytic formalism, which generalizes the broken-stick fragment-size distribution of the random-breakage model that is often applied to low-LET data.
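    The randomly-located-clusters picture can be sketched directly — Poisson track hits along a chromosome, each depositing a local cluster of DSBs — with all rates illustrative rather than fitted to the V-79 data:

        import numpy as np

        rng = np.random.default_rng(17)
        genome_mbp = 250.0           # one chromosome, in Mbp
        tracks_per_mbp = 0.04        # Poisson rate of track hits
        breaks_per_cluster = 3.0     # mean DSBs deposited per hit
        cluster_mbp = 0.5            # spatial extent of one cluster

        def fragment_sizes(rng):
            n_tracks = rng.poisson(tracks_per_mbp * genome_mbp)
            centers = rng.uniform(0.0, genome_mbp, size=n_tracks)
            breaks = [c + rng.uniform(-cluster_mbp / 2, cluster_mbp / 2,
                                      size=rng.poisson(breaks_per_cluster))
                      for c in centers]
            cuts = (np.sort(np.clip(np.concatenate(breaks), 0.0, genome_mbp))
                    if breaks else [])
            return np.diff(np.concatenate(([0.0], cuts, [genome_mbp])))

        sizes = np.concatenate([fragment_sizes(rng) for _ in range(2000)])
        # Clustering produces an excess of very small fragments over random breakage.
        print(f"{(sizes < 0.5).mean():.3f} of fragments are below 0.5 Mbp")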

  5. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes: Scalable hierarchical PDE sampler using nonmatching meshes

    DOE PAGES

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...

    2018-01-30

    This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data between the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10^9 unknowns.
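    The core sampling step — solving a reaction-diffusion (Helmholtz-type) PDE with a white-noise right-hand side to obtain one Gaussian-field realization — can be sketched in 2D with finite differences on a structured mesh; the SPDE parameters and grid are illustrative, and the mixed discretization and nonmatching-mesh transfer are omitted:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve

        n, kappa = 128, 10.0        # grid resolution; kappa ~ inverse correlation length
        h = 1.0 / (n - 1)

        # (kappa^2 - Laplacian) u = W on a structured 2D grid, Dirichlet boundaries.
        I = sp.identity(n)
        T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
        A = (kappa**2 * sp.identity(n * n) - (sp.kron(I, T) + sp.kron(T, I))).tocsc()

        rng = np.random.default_rng(19)
        w = rng.normal(size=n * n) / h              # white-noise load, mesh-scaled
        field = spsolve(A, w).reshape(n, n)         # one Gaussian random field sample
        print(field.std())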

  6. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes: Scalable hierarchical PDE sampler using nonmatching meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas

    This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data between the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10^9 unknowns.

  7. Combined cognitive-strategy and task-specific training improves transfer to untrained activities in sub-acute stroke: An exploratory randomized controlled trial

    PubMed Central

    McEwen, Sara; Polatajko, Helene; Baum, Carolyn; Rios, Jorge; Cirone, Dianne; Doherty, Meghan; Wolf, Timothy

    2014-01-01

    Purpose The purpose of this study was to estimate the effect of the Cognitive Orientation to daily Occupational Performance (CO-OP) approach compared to usual outpatient rehabilitation on activity and participation in people less than 3 months post stroke. Methods An exploratory, single blind, randomized controlled trial with a usual care control arm was conducted. Participants referred to 2 stroke rehabilitation outpatient programs were randomized to receive either Usual Care or CO-OP. The primary outcome was actual performance of trained and untrained self-selected activities, measured using the Performance Quality Rating Scale (PQRS). Additional outcomes included the Canadian Occupational Performance Measure (COPM), the Stroke Impact Scale Participation Domain, the Community Participation Index, and the Self Efficacy Gauge. Results Thirty-five (35) eligible participants were randomized; 26 completed the intervention. Post-intervention, PQRS change scores demonstrated CO-OP had a medium effect over Usual Care on trained self-selected activities (d=0.5) and a large effect on untrained (d=1.2). At a 3 month follow-up, PQRS change scores indicated a large effect of CO-OP on both trained (d=1.6) and untrained activities (d=1.1). CO-OP had a small effect on COPM and a medium effect on the Community Participation Index perceived control and the Self-Efficacy Gauge. Conclusion CO-OP was associated with a large treatment effect on follow up performances of self-selected activities, and demonstrated transfer to untrained activities. A larger trial is warranted. PMID:25416738

  8. Combined Cognitive-Strategy and Task-Specific Training Improve Transfer to Untrained Activities in Subacute Stroke: An Exploratory Randomized Controlled Trial.

    PubMed

    McEwen, Sara; Polatajko, Helene; Baum, Carolyn; Rios, Jorge; Cirone, Dianne; Doherty, Meghan; Wolf, Timothy

    2015-07-01

    The purpose of this study was to estimate the effect of the Cognitive Orientation to daily Occupational Performance (CO-OP) approach compared with usual outpatient rehabilitation on activity and participation in people <3 months poststroke. An exploratory, single-blind, randomized controlled trial, with a usual-care control arm, was conducted. Participants referred to 2 stroke rehabilitation outpatient programs were randomized to receive either usual care or CO-OP. The primary outcome was actual performance of trained and untrained self-selected activities, measured using the Performance Quality Rating Scale (PQRS). Additional outcomes included the Canadian Occupational Performance Measure (COPM), the Stroke Impact Scale Participation Domain, the Community Participation Index, and the Self-Efficacy Gauge. A total of 35 eligible participants were randomized; 26 completed the intervention. Post intervention, PQRS change scores demonstrated that CO-OP had a medium effect over usual care on trained self-selected activities (d = 0.5) and a large effect on untrained activities (d = 1.2). At a 3-month follow-up, PQRS change scores indicated a large effect of CO-OP on both trained (d = 1.6) and untrained activities (d = 1.1). CO-OP had a small effect on COPM and a medium effect on the Community Participation Index perceived control and on the Self-Efficacy Gauge. CO-OP was associated with a large treatment effect on follow-up performances of self-selected activities and demonstrated transfer to untrained activities. A larger trial is warranted.

  9. Fast selection of miRNA candidates based on large-scale pre-computed MFE sets of randomized sequences.

    PubMed

    Warris, Sven; Boymans, Sander; Muiser, Iwe; Noback, Michiel; Krijnen, Wim; Nap, Jan-Peter

    2014-01-13

    Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation, allowing on-the-fly calculation of the normal distribution for any candidate sequence composition. The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification.
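    The selection criterion can be sketched with the ViennaRNA Python bindings (assumed installed; RNA.fold returns a structure/MFE pair), here computing the randomized-MFE distribution on the fly rather than interpolating the pre-computed sets:

        import random
        from scipy.stats import norm
        import RNA  # ViennaRNA Python bindings, assumed installed

        def mfe_p_value(seq, n_shuffles=200, seed=0):
            """P-value of a candidate's MFE against composition-preserving shuffles.
            (The paper replaces this per-candidate loop with interpolation over
            pre-computed normal MFE distributions indexed by composition.)"""
            rng = random.Random(seed)
            mfe = RNA.fold(seq)[1]
            shuffled = []
            for _ in range(n_shuffles):
                s = list(seq)
                rng.shuffle(s)                   # same nucleotide composition
                shuffled.append(RNA.fold("".join(s))[1])
            mu, sd = norm.fit(shuffled)          # MFEs are approximately normal
            return norm.cdf(mfe, mu, sd)         # low P-value: unusually stable fold

        hairpin = "GGGGGAAAACCCCCUUUUGGGGGAAAACCCCC"  # toy sequence, not a real pre-miRNA
        print(mfe_p_value(hairpin))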

  10. Global diffusion of cosmic rays in random magnetic fields

    NASA Astrophysics Data System (ADS)

    Snodin, A. P.; Shukurov, A.; Sarson, G. R.; Bushby, P. J.; Rodrigues, L. F. S.

    2016-04-01

    The propagation of charged particles, including cosmic rays, in a partially ordered magnetic field is characterized by a diffusion tensor whose components depend on the particle's Larmor radius R_L and the degree of order in the magnetic field. Most studies of particle diffusion presuppose a scale separation between the mean and random magnetic fields (e.g. there being a pronounced minimum in the magnetic power spectrum at intermediate scales). Scale separation is often a good approximation in laboratory plasmas, but not in most astrophysical environments such as the interstellar medium (ISM). Modern simulations of the ISM have numerical resolution of the order of 1 pc, so the Larmor radius of the cosmic rays that dominate in energy density is at least 10^6 times smaller than the resolved scales. Large-scale simulations of cosmic ray propagation in the ISM thus rely on oversimplified forms of the diffusion tensor. We take the first steps towards a more realistic description of cosmic ray diffusion for such simulations, obtaining direct estimates of the diffusion tensor from test particle simulations in random magnetic fields (with the Larmor radius scale being fully resolved), for a range of particle energies corresponding to 10^-2 ≲ R_L/l_c ≲ 10^3, where l_c is the magnetic correlation length. We obtain explicit expressions for the cosmic ray diffusion tensor for R_L/l_c ≪ 1, which might be used in a sub-grid model of cosmic ray diffusion. The diffusion coefficients obtained are closely connected with existing transport theories that include the random walk of magnetic lines.
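    In such test-particle studies the running diffusion tensor is commonly estimated from ensemble-averaged displacement products, D_ij(t) = <Δx_i Δx_j>/(2t). A short numpy sketch of that standard estimator (our construction, not the authors' code):

      import numpy as np

      def diffusion_tensor(positions, times):
          """positions: (n_particles, n_times, 3) trajectories;
          times: (n_times,) elapsed times (must be > 0).
          Returns D_ij(t) with shape (n_times, 3, 3)."""
          disp = positions - positions[:, :1, :]      # displacement from start
          # Ensemble average of the displacement outer product at each time.
          outer = np.einsum('pti,ptj->tij', disp, disp) / positions.shape[0]
          return outer / (2.0 * times)[:, None, None]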

  11. Global Behavior in Large Scale Systems

    DTIC Science & Technology

    2013-12-05

    This research attained two main achievements concerning the global behavior that emerges from microscopic random interactions among agents; among the problems considered is large deviation error performance in large-scale systems.

  12. Single-shot stand-off chemical identification of powders using random Raman lasing

    PubMed Central

    Hokr, Brett H.; Bixler, Joel N.; Noojin, Gary D.; Thomas, Robert J.; Rockwell, Benjamin A.; Yakovlev, Vladislav V.; Scully, Marlan O.

    2014-01-01

    We consider the task of identifying explosives, hazardous chemicals, and biological materials from a safe distance. Much of the prior work on stand-off spectroscopy using light has been devoted to generating a backward-propagating beam of light that can be used to drive further spectroscopic processes. The discovery of random lasing and, more recently, random Raman lasing provides a mechanism for remotely generating copious amounts of chemically specific Raman scattered light. The bright nature of random Raman lasing renders directionality unnecessary, allowing for the detection and identification of chemicals from large distances in real time. In this article, the single-shot remote identification of chemicals at kilometer-scale distances is experimentally demonstrated using random Raman lasing. PMID:25114231

  13. Extended self-similarity in the two-dimensional metal-insulator transition

    NASA Astrophysics Data System (ADS)

    Moriconi, L.

    2003-09-01

    We show that extended self-similarity, a scaling phenomenon first observed in classical turbulent flows, holds for a two-dimensional metal-insulator transition that belongs to the universality class of random Dirac fermions. Deviations from multifractality, which in turbulence are due to the dominance of diffusive processes at small scales, appear in the condensed-matter context as a large-scale, finite-size effect related to the imposition of an infrared cutoff in the field theory formulation. We propose a phenomenological interpretation of extended self-similarity in the metal-insulator transition within the framework of the random β-model description of multifractal sets. As a natural step, our discussion is bridged to the analysis of strange attractors, where crossovers between multifractal and nonmultifractal regimes are found and extended self-similarity turns out to be verified as well.

  14. A randomized approach to speed up the analysis of large-scale read-count data in the application of CNV detection.

    PubMed

    Wang, WeiBo; Sun, Wei; Wang, Wei; Szatkiewicz, Jin

    2018-03-01

    The application of high-throughput sequencing in a broad range of quantitative genomic assays (e.g., DNA-seq, ChIP-seq) has created a high demand for the analysis of large-scale read-count data. Typically, the genome is divided into tiling windows and windowed read-count data are generated for the entire genome, from which genomic signals are detected (e.g., copy number changes in DNA-seq, enrichment peaks in ChIP-seq). For accurate analysis of read-count data, many state-of-the-art statistical methods use generalized linear models (GLM) coupled with the negative-binomial (NB) distribution by leveraging its ability for simultaneous bias correction and signal detection. However, although statistically powerful, the GLM+NB method has a quadratic computational complexity and therefore suffers from slow running time when applied to large-scale windowed read-count data. In this study, we aimed to substantially speed up the GLM+NB method by using a randomized algorithm, and we demonstrate the utility of our approach in the application of detecting copy number variants (CNVs) using a real example. We propose an efficient estimator, the randomized GLM+NB coefficients estimator (RGE), for speeding up the GLM+NB method. RGE samples the read-count data and solves the estimation problem on a smaller scale. We first theoretically validated the consistency and the variance properties of RGE. We then applied RGE to GENSENG, a GLM+NB based method for detecting CNVs, and named the resulting method "R-GENSENG". Based on extensive evaluation using both simulated and empirical data, we concluded that R-GENSENG is ten times faster than the original GENSENG while maintaining GENSENG's accuracy in CNV detection. Our results suggest that the RGE strategy developed here could be applied to other GLM+NB based read-count analyses, e.g., ChIP-seq data analysis, to substantially improve their computational efficiency while preserving the analytic power.
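    A hedged sketch of the core subsampling idea (our construction, not the published R-GENSENG implementation; the covariates and subsample fraction are placeholders): fit the NB regression on a random fraction of windows, so each estimation step works on a much smaller problem.

      import numpy as np
      import statsmodels.api as sm

      def subsampled_nb_glm(y, X, frac=0.1, seed=0):
          """y: windowed read counts; X: bias covariates (e.g. GC, mappability)."""
          rng = np.random.default_rng(seed)
          idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
          # NB-family GLM fitted on the subsample only.
          model = sm.GLM(y[idx], sm.add_constant(X[idx]),
                         family=sm.families.NegativeBinomial())
          return model.fit()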

  15. Synthesis of wavelet envelope in 2-D random media having power-law spectra: comparison with FD simulations

    NASA Astrophysics Data System (ADS)

    Sato, Haruo; Fehler, Michael C.

    2016-10-01

    The envelope broadening and the peak delay of the S-wavelet of a small earthquake with increasing travel distance are results of scattering by random velocity inhomogeneities in the earth medium. As a simple mathematical model, Sato proposed a new stochastic synthesis of the scalar wavelet envelope in 3-D von Kármán type random media when the centre wavenumber of the wavelet is in the power-law spectral range of the random velocity fluctuation. The essential idea is to split the random medium spectrum into two components using the centre wavenumber as a reference: the long-scale (low-wavenumber spectral) component produces the peak delay and the envelope broadening by multiple scattering around the forward direction; the short-scale (high-wavenumber spectral) component attenuates wave amplitude by wide angle scattering. The former is calculated by the Markov approximation based on the parabolic approximation and the latter is calculated by the Born approximation. Here, we extend the theory for the envelope synthesis of a wavelet in 2-D random media, which makes it easy to compare with finite difference (FD) simulation results. The synthetic wavelet envelope is analytically written by using the random medium parameters in the angular frequency domain. For the case that the power spectral density function of the random velocity fluctuation has a steep roll-off at large wavenumbers, the envelope broadening is small and frequency independent, and scattering attenuation is weak. For the case of a small roll-off, however, the envelope broadening is large and increases with frequency, and the scattering attenuation is strong and increases with frequency. As a preliminary study, we compare synthetic wavelet envelopes with the average of FD simulation wavelet envelopes in 50 synthesized random media, which are characterized by the RMS fractional velocity fluctuation ε = 0.05, correlation scale a = 5 km and the background wave velocity V_0 = 4 km s^-1. We use the radiation of a 2 Hz Ricker wavelet from a point source. For all the cases of von Kármán order κ = 0.1, 0.5 and 1, we find the synthetic wavelet envelopes are a good match to the characteristics of FD simulation wavelet envelopes in a time window starting from the onset through the maximum peak to the time when the amplitude decreases to half the peak amplitude.

  16. Genetic drift at expanding frontiers promotes gene segregation

    PubMed Central

    Hallatschek, Oskar; Hersen, Pascal; Ramanathan, Sharad; Nelson, David R.

    2007-01-01

    Competition between random genetic drift and natural selection plays a central role in evolution: whereas nonbeneficial mutations often prevail in small populations by chance, mutations that sweep through large populations typically confer a selective advantage. Here, however, we observe chance effects during range expansions that dramatically alter the gene pool even in large microbial populations. Initially well-mixed populations of two fluorescently labeled strains of Escherichia coli develop well-defined, sector-like regions with fractal boundaries in expanding colonies. The formation of these regions is driven by random fluctuations that originate in a thin band of pioneers at the expanding frontier. A comparison of bacterial and yeast colonies (Saccharomyces cerevisiae) suggests that this large-scale genetic sectoring is a generic phenomenon that may provide a detectable footprint of past range expansions. PMID:18056799

  17. Decision aid on breast cancer screening reduces attendance rate: results of a large-scale, randomized, controlled study by the DECIDEO group

    PubMed Central

    Bourmaud, Aurelie; Soler-Michel, Patricia; Oriol, Mathieu; Regnier, Véronique; Tinquaut, Fabien; Nourissat, Alice; Bremond, Alain; Moumjid, Nora; Chauvin, Franck

    2016-01-01

    Controversies regarding the benefits of breast cancer screening programs have led to the promotion of new strategies taking individual preferences into account, such as decision aids. The aim of this study was to assess the impact of a decision aid leaflet on the participation of women invited to participate in a national breast cancer screening program. This was a randomized, multicentre, controlled trial. Women aged 50 to 74 years were randomly assigned to receive either a decision aid or the usual invitation letter. The primary outcome was the participation rate 12 months after the invitation. 16 000 women were randomized and 15 844 included in the modified intention-to-treat analysis. The participation rate in the intervention group was 40.25% (3174/7885 women) compared with 42.13% (3353/7959) in the control group (p = 0.02). Previous attendance for screening (RR = 6.24; [95%CI: 5.75-6.77]; p < 0.0001) and medium household income (RR = 1.05; [95%CI: 1.01-1.09]; p = 0.0074) were independently associated with attendance for screening. This large-scale study demonstrates that the decision aid reduced the participation rate. The decision aid activated the decision-making process of women toward non-attendance at screening. These results show the importance of promoting informed patient choices, especially when those choices cannot be anticipated. PMID:26883201

  18. Mirror Instability in the Turbulent Solar Wind

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hellinger, Petr; Landi, Simone; Verdini, Andrea

    2017-04-01

    The relationship between a decaying strong turbulence and the mirror instability in a slowly expanding plasma is investigated using two-dimensional hybrid expanding box simulations. We impose an initial ambient magnetic field perpendicular to the simulation box, and we start with a spectrum of large-scale, linearly polarized, random-phase Alfvénic fluctuations that have energy equipartition between kinetic and magnetic fluctuations and a vanishing correlation between the two fields. A turbulent cascade rapidly develops; magnetic field fluctuations exhibit a Kolmogorov-like power-law spectrum at large scales and a steeper spectrum at sub-ion scales. The imposed expansion (taking a strictly transverse ambient magnetic field) leads to the generation of an important perpendicular proton temperature anisotropy that eventually drives the mirror instability. This instability generates large-amplitude, nonpropagating, compressible, pressure-balanced magnetic structures in the form of magnetic enhancements/humps that reduce the perpendicular temperature anisotropy.

  19. Modeling the effects of small turbulent scales on the drag force for particles below and above the Kolmogorov scale

    NASA Astrophysics Data System (ADS)

    Gorokhovski, Mikhael; Zamansky, Rémi

    2018-03-01

    Consistent with observations from recent experiments and DNS, we focus on the effects of strong velocity increments at small spatial scales for the simulation of the drag force on particles in high-Reynolds-number flows. In this paper, we decompose the instantaneous particle acceleration into its systematic and residual parts. The first part is given by the steady-drag force obtained from the large-scale energy-containing motions, explicitly resolved by the simulation, while the second denotes the random contribution due to small unresolved turbulent scales. This is in contrast with standard drag models, in which the turbulent microstructures advected by the large-scale eddies are deemed to be filtered by the particle inertia. In our paper, the residual term is introduced as the particle acceleration conditionally averaged on the instantaneous dissipation rate along the particle path. The latter is modeled from a log-normal stochastic process with locally defined parameters obtained from the resolved field. The residual term is supplemented by an orientation model, which is given by a random walk on the unit sphere. We propose specific models for particles with diameters smaller and larger than the Kolmogorov scale. In the case of the small particles, the model is assessed by comparison with direct numerical simulation (DNS). Results showed that with this modeling, the particle acceleration statistics from DNS are predicted fairly well, in contrast with the standard LES approach. For the particles bigger than the Kolmogorov scale, we propose a fluctuating particle response time, based on an eddy viscosity estimated at the particle scale. This model gives stretched tails of the particle acceleration distribution and a variance dependence consistent with experiments.
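    A toy discretization of the two stochastic ingredients named above (our assumptions, not the authors' exact scheme): an Ornstein-Uhlenbeck process for χ = ln ε gives a log-normal dissipation rate along the path, and small tangential kicks give a random walk of the orientation vector on the unit sphere.

      import numpy as np

      def step_log_dissipation(chi, dt, tau, mu, sigma, rng):
          """One OU step for chi = ln(eps); eps = exp(chi) is log-normal."""
          return chi + (mu - chi) * dt / tau \
                 + sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal()

      def step_orientation(e, dt, t_corr, rng):
          """Small-angle random walk of a unit vector on the sphere."""
          kick = np.sqrt(2.0 * dt / t_corr) * rng.standard_normal(3)
          kick -= np.dot(kick, e) * e      # keep the kick tangent to the sphere
          e = e + kick
          return e / np.linalg.norm(e)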

  20. Non-Hookean statistical mechanics of clamped graphene ribbons

    NASA Astrophysics Data System (ADS)

    Bowick, Mark J.; Košmrlj, Andrej; Nelson, David R.; Sknepnek, Rastko

    2017-03-01

    Thermally fluctuating sheets and ribbons provide an intriguing forum in which to investigate strong violations of Hooke's Law: Large distance elastic parameters are in fact not constant but instead depend on the macroscopic dimensions. Inspired by recent experiments on free-standing graphene cantilevers, we combine the statistical mechanics of thin elastic plates and large-scale numerical simulations to investigate the thermal renormalization of the bending rigidity of graphene ribbons clamped at one end. For ribbons of dimensions W ×L (with L ≥W ), the macroscopic bending rigidity κR determined from cantilever deformations is independent of the width when W <ℓth , where ℓth is a thermal length scale, as expected. When W >ℓth , however, this thermally renormalized bending rigidity begins to systematically increase, in agreement with the scaling theory, although in our simulations we were not quite able to reach the system sizes necessary to determine the fully developed power law dependence on W . When the ribbon length L >ℓp , where ℓp is the W -dependent thermally renormalized ribbon persistence length, we observe a scaling collapse and the beginnings of large scale random walk behavior.

  1. An innovative large scale integration of silicon nanowire-based field effect transistors

    NASA Astrophysics Data System (ADS)

    Legallais, M.; Nguyen, T. T. T.; Mouis, M.; Salem, B.; Robin, E.; Chenevier, P.; Ternon, C.

    2018-05-01

    Since the early 2000s, silicon nanowire field effect transistors have been emerging as ultrasensitive biosensors that offer label-free, portable and rapid detection. Nevertheless, their large-scale production remains an ongoing challenge due to time-consuming, complex and costly technology. In order to bypass these issues, we report here on the first integration of silicon nanowire networks, called nanonets, into long-channel field effect transistors using a standard microelectronic process. Special attention is paid to the silicidation of the contacts, which involves a large number of SiNWs. The electrical characteristics of these FETs constituted by randomly oriented silicon nanowires are also studied. Compatible integration on the back-end of CMOS readout and promising electrical performances open new opportunities for sensing applications.

  2. Dynamics Under Location Uncertainty: Model Derivation, Modified Transport and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Resseguier, V.; Memin, E.; Chapron, B.; Fox-Kemper, B.

    2017-12-01

    In order to better observe and predict geophysical flows, ensemble-based data assimilation methods are of high importance. In such methods, an ensemble of random realizations represents the variety of the simulated flow's likely behaviors. For this purpose, randomness needs to be introduced in a suitable way, and physically based stochastic subgrid parametrizations are promising paths. This talk will propose a new kind of such a parametrization, referred to as modeling under location uncertainty. The fluid velocity is decomposed into a resolved large-scale component and an aliased small-scale one. The first component is possibly random but time-correlated, whereas the second is white-in-time but spatially correlated and possibly inhomogeneous and anisotropic. With such a velocity, the material derivative of any - possibly active - tracer is modified. Three new terms appear: a correction of the large-scale advection, a multiplicative noise, and a possibly heterogeneous and anisotropic diffusion. This parameterization naturally ensures attractive properties such as energy conservation for each realization. Additionally, this stochastic material derivative and the associated Reynolds transport theorem offer a systematic method to derive stochastic models. In particular, we will discuss the consequences of the Quasi-Geostrophic assumptions in our framework. Depending on the amount of turbulence, different models with different physical behaviors are obtained. Under strong turbulence assumptions, a simplified diagnosis of frontolysis and frontogenesis at the surface of the ocean is possible in this framework. A Surface Quasi-Geostrophic (SQG) model with a weaker noise influence has also been simulated. A single realization better represents small scales than a deterministic SQG model at the same resolution. Moreover, an ensemble accurately predicts extreme events, bifurcations as well as the amplitudes and the positions of the simulation errors. Figure 1 highlights this last result and compares it to the strong error underestimation of an ensemble simulated from the deterministic dynamics with random initial conditions.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ben-Naim, Eli; Krapivsky, Paul

    Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected, and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that, as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. We also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.
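    A toy Monte Carlo of aggregation with choice as described (a sketch under our own conventions, not the authors' code): pick a target and two candidates at random, then merge the target with the larger candidate.

      import random

      def aggregate_with_choice(n_monomers=100000, n_steps=80000, seed=1):
          sizes = [1] * n_monomers
          rng = random.Random(seed)
          for _ in range(n_steps):
              i, a, b = rng.sample(range(len(sizes)), 3)  # target, two candidates
              j = a if sizes[a] >= sizes[b] else b        # larger candidate wins
              sizes[i] += sizes[j]
              sizes[j] = sizes[-1]                        # swap-remove merged one
              sizes.pop()
          return sizes                                    # cluster-size sample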

  4. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
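    For intuition, the randomized dimension-reduction step can be sketched with the generic randomized range-finder of Halko, Martinsson and Tropp; the paper's randomized generalized SVD inside the iteratively reweighted least-squares loop is more involved, so this shows only the basic building block:

      import numpy as np

      def randomized_svd(A, rank, n_oversample=10, seed=0):
          rng = np.random.default_rng(seed)
          Omega = rng.standard_normal((A.shape[1], rank + n_oversample))
          Q, _ = np.linalg.qr(A @ Omega)    # orthonormal basis for range(A Omega)
          B = Q.T @ A                       # small projected problem
          Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
          return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]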

  5. Randomized Trial of the Effect of Four Second-Generation Antipsychotics and One First-Generation Antipsychotic on Cigarette Smoking, Alcohol, and Drug Use in Chronic Schizophrenia.

    PubMed

    Mohamed, Somaia; Rosenheck, Robert A; Lin, Haiqun; Swartz, Marvin; McEvoy, Joseph; Stroup, Scott

    2015-07-01

    No large-scale randomized trial has compared the effect of different second-generation antipsychotic drugs and any first-generation drug on alcohol, drug and nicotine use in patients with schizophrenia. The Clinical Antipsychotic Trials of Intervention Effectiveness study randomly assigned 1432 patients formally diagnosed with schizophrenia to four second-generation antipsychotic drugs (olanzapine, risperidone, quetiapine, and ziprasidone) and one first-generation antipsychotic (perphenazine) and followed them for up to 18 months. Secondary outcome data documented cigarettes smoked in the past week and alcohol and drug use severity ratings. At baseline, 61% of patients smoked, 35% used alcohol, and 23% used illicit drugs. Although there were significant effects of time showing reduction in substance use over the 18 months (all p < 0.0001), this secondary analysis of data from a large 18-month randomized schizophrenia trial found no evidence that any antipsychotic was robustly superior to any other for substance use outcomes.

  6. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Influence of Blurred Ways on Pattern Recognition of a Scale-Free Hopfield Neural Network

    NASA Astrophysics Data System (ADS)

    Chang, Wen-Li

    2010-01-01

    We investigate the influence of blurred ways on pattern recognition in a Barabási-Albert scale-free Hopfield neural network (SFHN) with a small amount of errors. Pattern recognition is an important function of information processing in the brain. Due to the heterogeneous degree distribution of a scale-free network, different blurred ways have different influences on pattern recognition with the same errors. Simulation shows that for partial recognition, the larger the loading ratio (the number of patterns to the average degree, P/⟨k⟩), the smaller the overlap of the SFHN. The influence of the directed (large) way is largest and that of the directed (small) way is smallest, while the random way is intermediate between them. When the ratio of the number of stored patterns to the size of the network, P/N, is less than 0.1, there are three families of overlap curves corresponding to the directed (small), random and directed (large) blurred ways of patterns, and these curves are not associated with the size of the network or the number of patterns. This phenomenon only occurs in the SFHN. These conclusions are beneficial for understanding the relation between neural network structure and brain function.

  7. Theory of rumour spreading in complex social networks

    NASA Astrophysics Data System (ADS)

    Nekovee, M.; Moreno, Y.; Bianconi, G.; Marsili, M.

    2007-01-01

    We introduce a general stochastic model for the spread of rumours, and derive mean-field equations that describe the dynamics of the model on complex social networks (in particular, those mediated by the Internet). We use analytical and numerical solutions of these equations to examine the threshold behaviour and dynamics of the model on several models of such networks: random graphs, uncorrelated scale-free networks and scale-free networks with assortative degree correlations. We show that in both homogeneous networks and random graphs the model exhibits a critical threshold in the rumour spreading rate below which a rumour cannot propagate in the system. In the case of scale-free networks, on the other hand, this threshold becomes vanishingly small in the limit of infinite system size. We find that the initial rate at which a rumour spreads is much higher in scale-free networks than in random graphs, and that the rate at which the spreading proceeds on scale-free networks is further increased when assortative degree correlations are introduced. The impact of degree correlations on the final fraction of nodes that ever hears a rumour, however, depends on the interplay between network topology and the rumour spreading rate. Our results show that scale-free social networks are prone to the spreading of rumours, just as they are to the spreading of infections. They are relevant to the spreading dynamics of chain emails, viral advertising and large-scale information dissemination algorithms on the Internet.
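    A minimal event-driven simulation of the ignorant/spreader/stifler dynamics on a scale-free network (a sketch with our own parameter names; the paper works with mean-field equations rather than this particular code):

      import random
      import networkx as nx

      def spread_rumour(n=10000, m=3, lam=0.5, alpha=0.5, seed=0):
          rng = random.Random(seed)
          g = nx.barabasi_albert_graph(n, m, seed=seed)
          state = {v: 'ignorant' for v in g}
          state[0] = 'spreader'
          spreaders = {0}
          while spreaders:
              u = rng.choice(list(spreaders))
              v = rng.choice(list(g[u]))                  # random contact
              if state[v] == 'ignorant' and rng.random() < lam:
                  state[v] = 'spreader'
                  spreaders.add(v)
              elif state[v] != 'ignorant' and rng.random() < alpha:
                  state[u] = 'stifler'                    # spreader loses interest
                  spreaders.discard(u)
          return sum(s != 'ignorant' for s in state.values()) / n  # final reach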

  8. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.

  9. Transposon facilitated DNA sequencing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, D.E.; Berg, C.M.; Huang, H.V.

    1990-01-01

    The purpose of this research is to investigate and develop methods that exploit the power of bacterial transposable elements for large scale DNA sequencing. Our premise is that the use of transposons to put primer binding sites randomly in target DNAs should provide access to all portions of large DNA fragments, without the inefficiencies of methods involving random subcloning and attendant repetitive sequencing, or of sequential synthesis of many oligonucleotide primers that are used to march systematically along a DNA molecule. Two unrelated bacterial transposons, Tn5 and γδ, are being used because they have both proven useful for molecular analyses, and because they differ sufficiently in mechanism and specificity of transposition to merit parallel development.

  10. A randomized controlled trial of acupuncture and moxibustion to treat Bell's palsy according to different stages: design and protocol.

    PubMed

    Chen, Xiaoqin; Li, Ying; Zheng, Hui; Hu, Kaming; Zhang, Hongxing; Zhao, Ling; Li, Yan; Liu, Lian; Mang, Lingling; Yu, Shuyuan

    2009-07-01

    Acupuncture to treat Bell's palsy is one of the most commonly used methods in China, and a variety of acupuncture treatment options are used in clinical practice. Since Bell's palsy has three different path-stages (acute stage, resting stage and restoration stage), whether acupuncture is effective in the different path-stages, and which acupuncture treatment is the best method, are major issues in acupuncture clinical trials for Bell's palsy. In this article, we report the design and protocol of a large-sample, multi-center randomized controlled trial of acupuncture to treat Bell's palsy. There are five acupuncture groups, four staged according to path-stage and one not. In total, 900 patients with Bell's palsy are enrolled in this study. These patients are randomly assigned to one of the following four treatment groups staged by path-stage, i.e. 1) staging acupuncture group, 2) staging acupuncture and moxibustion group, 3) staging electro-acupuncture group, 4) staging acupuncture along yangming musculature group, or to a non-staging acupuncture control group. The outcome measurements in this trial are comparisons of effect among these five groups in terms of the House-Brackmann scale (Global Score and Regional Score), Facial Disability Index scale, Classification scale of Facial Paralysis, and WHOQOL-BREF scale, before randomization (baseline phase) and after randomization. The results of this trial will assess the efficacy of staging acupuncture and moxibustion to treat Bell's palsy, and will identify the best acupuncture treatment among these five methods.

  11. Large-scale anisotropy in stably stratified rotating flows

    DOE PAGES

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; ...

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  12. A two-stage model of fracture of rocks

    USGS Publications Warehouse

    Kuksenko, V.; Tomilin, N.; Damaskinskaya, E.; Lockner, D.

    1996-01-01

    In this paper we propose a two-stage model of rock fracture. In the first stage, cracks or local regions of failure are uncorrelated and occur randomly throughout the rock in response to loading of pre-existing flaws. As damage accumulates in the rock, there is a gradual increase in the probability that large clusters of closely spaced cracks or local failure sites will develop. Based on statistical arguments, a critical density of damage will occur where clusters of flaws become large enough to lead to larger-scale failure of the rock (stage two). While crack interaction and cooperative failure is expected to occur within clusters of closely spaced cracks, the initial development of clusters is predicted based on the random variation in pre-existing flaw populations. Thus the onset of the unstable second stage in the model can be computed from the generation of random, uncorrelated damage. The proposed model incorporates notions of the kinetic (and therefore time-dependent) nature of the strength of solids as well as the discrete hierarchic structure of rocks and the flaw populations that lead to damage accumulation. The advantage offered by this model is that its salient features are valid for fracture processes occurring over a wide range of scales, including earthquake processes. A notion of the rank of fracture (fracture size) is introduced, and criteria are presented for both fracture nucleation and the transition of the failure process from one scale to another.

  13. The luminosity function for the CfA redshift survey slices

    NASA Technical Reports Server (NTRS)

    De Lapparent, Valerie; Geller, Margaret J.; Huchra, John P.

    1989-01-01

    The luminosity function for two complete slices of the extension of the CfA redshift survey is calculated. The nonparametric technique of Lynden-Bell (1971) and Turner (1979) is used to determine the shape of the luminosity function for the 12 deg slice of the redshift survey. The amplitude of the luminosity function is determined, taking large-scale inhomogeneities into account. The effects of the Malmquist bias on a magnitude-limited redshift survey are examined, showing that the random errors in the magnitudes for the 12 deg slice affect both the determination of the luminosity function and the spatial density contrast of large-scale structures.

  14. Assessments of the quality of randomized controlled trials published in International Journal of Urology from 1994 to 2011.

    PubMed

    Cho, Hee Ju; Chung, Jae Hoon; Jo, Jung Ki; Kang, Dong Hyuk; Cho, Jeong Man; Yoo, Tag Keun; Lee, Seung Wook

    2013-12-01

    Randomized controlled trials are one of the most reliable resources for assessing the effectiveness and safety of medical treatments. Low-quality randomized controlled trials carry a large bias that can ultimately impair the reliability of their conclusions. The present study aimed to evaluate the quality of randomized controlled trials published in International Journal of Urology by using multiple quality assessment tools. Randomized controlled trial articles published in International Journal of Urology were found using the PubMed MEDLINE database, and qualitative analysis was carried out with three distinct assessment tools: the Jadad scale, the van Tulder scale and the Cochrane Collaboration Risk of Bias Tool. The quality of randomized controlled trials was analyzed by publication year, type of subjects, intervention, presence of funding and whether an institutional review board reviewed the study. A total of 68 randomized controlled trial articles were published among 1399 original articles in International Journal of Urology. Among these randomized controlled trials, 10 (2.70%) were from 1994 to 1999, 23 (4.10%) were from 2000 to 2005 and 35 (4.00%) were from 2006 to 2011 (P = 0.494). On assessment with the Jadad and van Tulder scales, the number and percentage of high-quality randomized controlled trials increased over time. The studies that had institutional review board reviews, funding resources or that were carried out in multiple institutions had a higher percentage of high-quality articles. The number and percentage of high-quality randomized controlled trials published in International Journal of Urology have increased over time. Furthermore, randomized controlled trials with funding resources, institutional review board reviews or carried out in multiple institutions have been found to be of higher quality compared with others not presenting these features. © 2013 The Japanese Urological Association.

  15. Multi-field inflation with a random potential

    NASA Astrophysics Data System (ADS)

    Tye, S.-H. Henry; Xu, Jiajun; Zhang, Yang

    2009-04-01

    Motivated by the possibility of inflation in the cosmic landscape, which may be approximated by a complicated potential, we study the density perturbations in multi-field inflation with a random potential. The random potential causes the inflaton to undergo a Brownian-like motion with a drift in the D-dimensional field space, allowing entropic perturbation modes to continuously and randomly feed into the adiabatic mode. To quantify such an effect, we employ a stochastic approach to evaluate the two-point and three-point functions of primordial perturbations. We find that in the weakly random scenario where the stochastic scatterings are frequent but mild, the resulting power spectrum resembles that of the single field slow-roll case, with up to 2% more red tilt. The strongly random scenario, in which the coarse-grained motion of the inflaton is significantly slowed down by the scatterings, leads to rich phenomenologies. The power spectrum exhibits primordial fluctuations on all angular scales. Such features may already be hiding in the error bars of observed CMB TT (as well as TE and EE) power spectrum and have been smoothed out by binning of data points. With more data coming in the future, we expect these features can be detected or falsified. On the other hand the tensor power spectrum itself is free of fluctuations and the tensor to scalar ratio is enhanced by the large ratio of the Brownian-like motion speed over the drift speed. In addition a large negative running of the power spectral index is possible. Non-Gaussianity is generically suppressed by the growth of adiabatic perturbations on super-horizon scales, and is negligible in the weakly random scenario. However, non-Gaussianity can possibly be enhanced by resonant effects in the strongly random scenario or arise from the entropic perturbations during the onset of (p)reheating if the background inflaton trajectory exhibits particular properties. The formalism developed in this paper can be applied to a wide class of multi-field inflation models including, e.g. the N-flation scenario.

  16. The Variance of Intraclass Correlations in Three- and Four-Level Models

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Hedberg, E. C.; Kuyper, Arend M.

    2012-01-01

    Intraclass correlations are used to summarize the variance decomposition in populations with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…

  17. The Variance of Intraclass Correlations in Three- and Four-Level Models

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Hedberg, Eric C.; Kuyper, Arend M.

    2012-01-01

    Intraclass correlations are used to summarize the variance decomposition in popula- tions with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…

  18. Fast generation of sparse random kernel graphs

    DOE PAGES

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
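    The paper's generator handles general kernels; the standard building block it improves upon can be illustrated with the Batagelj-Brandes geometric-skipping sampler for the homogeneous case G(n, p), which avoids testing all O(n²) vertex pairs:

      import math
      import random

      def sparse_gnp_edges(n, p, seed=0):
          """Sample G(n, p) edges in O(n + |E|) expected time for 0 < p < 1."""
          rng = random.Random(seed)
          edges, v, w = [], 1, -1
          while v < n:
              # Jump over a geometrically distributed run of non-edges.
              w += 1 + int(math.log(1.0 - rng.random()) / math.log(1.0 - p))
              while w >= v and v < n:
                  w -= v
                  v += 1
              if v < n:
                  edges.append((v, w))
          return edges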

  19. Robustness of Controllability for Networks Based on Edge-Attack

    PubMed Central

    Nie, Sen; Wang, Xuwen; Zhang, Haifeng; Li, Qilang; Wang, Binghong

    2014-01-01

    We study the controllability of networks in the process of cascading failures under two different attacking strategies, random and intentional attack. For the highest-load edge attack, it is found that the controllability of the Erdős-Rényi network with moderate average degree is less robust, whereas the scale-free network with moderate power-law exponent shows strong robustness of controllability under the same attack strategy. The vulnerability of controllability under random and intentional attacks behaves differently as the removal fraction increases; in particular, we find that the robustness of control plays an important role in cascades for large removal fractions. The simulation results show that for scale-free networks with various power-law exponents, a larger scale of cascades does not imply a greater increase in the number of driver nodes. Meanwhile, the number of driver nodes in cascading failures is also related to the number of edges in strongly connected components. PMID:24586507

  20. Robustness of controllability for networks based on edge-attack.

    PubMed

    Nie, Sen; Wang, Xuwen; Zhang, Haifeng; Li, Qilang; Wang, Binghong

    2014-01-01

    We study the controllability of networks in the process of cascading failures under two different attacking strategies, random and intentional attack. For the highest-load edge attack, it is found that the controllability of the Erdős-Rényi network with moderate average degree is less robust, whereas the scale-free network with moderate power-law exponent shows strong robustness of controllability under the same attack strategy. The vulnerability of controllability under random and intentional attacks behaves differently as the removal fraction increases; in particular, we find that the robustness of control plays an important role in cascades for large removal fractions. The simulation results show that for scale-free networks with various power-law exponents, a larger scale of cascades does not imply a greater increase in the number of driver nodes. Meanwhile, the number of driver nodes in cascading failures is also related to the number of edges in strongly connected components.

  1. Attention bias modification augments cognitive-behavioral group therapy for social anxiety disorder: a randomized controlled trial.

    PubMed

    Lazarov, Amit; Marom, Sofi; Yahalom, Naomi; Pine, Daniel S; Hermesh, Haggai; Bar-Haim, Yair

    2017-12-20

    Cognitive-behavioral group therapy (CBGT) is a first-line treatment for social anxiety disorder (SAD). However, since many patients remain symptomatic post-treatment, there is a need for augmenting procedures. This randomized controlled trial (RCT) examined the potential augmentation effect of attention bias modification (ABM) for CBGT. Fifty patients with SAD from three therapy groups were randomized to receive an 18-week standard CBGT with either ABM designed to shift attention away from threat (CBGT + ABM), or a placebo protocol not designed to modify threat-related attention (CBGT + placebo). Therapy groups took place in a large mental health center. Clinician and self-report measures of social anxiety and depression were acquired pre-treatment, post-treatment, and at 3-month follow-up. Attention bias was assessed at pre- and post-treatment. Patients randomized to the CBGT + ABM group, relative to those randomized to the CBGT + placebo group, showed greater reductions in clinician-rated SAD symptoms post-treatment, with effects maintained at 3-month follow-up. Group differences were not evident for self-report or attention-bias measures, with similar reductions in both groups. Finally, reduction in attention bias did not mediate the association between group and reduction in Liebowitz Social Anxiety Scale Structured Interview (LSAS) scores. This is the first RCT to examine the possible augmenting effect of ABM added to group-based cognitive-behavioral therapy for adult SAD. Training patients' attention away from threat might augment the treatment response to standard CBGT in SAD, a possibility that could be further evaluated in large-scale RCTs.

  2. Age-related Cataract in a Randomized Trial of Vitamins E and C in Men

    PubMed Central

    Christen, William G.; Glynn, Robert J.; Sesso, Howard D.; Kurth, Tobias; MacFadyen, Jean; Bubes, Vadim; Buring, Julie E.; Manson, JoAnn E.; Michael Gaziano, J.

    2010-01-01

    Objective To test whether supplementation with alternate day vitamin E or daily vitamin C affects the incidence of age-related cataract in a large-scale randomized trial of men. Design Randomized, double-masked, placebo-controlled trial. Participants Eleven thousand five hundred forty-five apparently healthy US male physicians aged 50 years or older who were without a diagnosis of cataract at baseline. Intervention Participants were randomly assigned to receive 400 IU of vitamin E or placebo on alternate days, and 500 mg of vitamin C or placebo daily. Main Outcome Measure Incident cataract responsible for a reduction in best-corrected visual acuity to 20/30 or worse based on self-report confirmed by medical record review. Results After 8 years of treatment and follow-up, a total of 1,174 incident cataracts were confirmed. There were 579 cataracts in the vitamin E treated group and 595 in the vitamin E placebo group (hazard ratio [HR], 0.99; 95 percent confidence interval [CI], 0.88 to 1.11). For vitamin C, there were 593 cataracts in the treated group and 581 in the placebo group (HR, 1.02; CI, 0.91 to 1.14). Conclusions In a large-scale randomized trial of US male physicians, long-term alternate day use of 400 IU of vitamin E and/or daily use of 500 mg of vitamin C had no significant beneficial or harmful effect on the risk of cataract. Application to Clinical Practice Long-term use of vitamin E and/or vitamin C supplements has no appreciable effect on cataract. PMID:21060040

  3. Accurate and Efficient Parallel Implementation of an Effective Linear-Scaling Direct Random Phase Approximation Method.

    PubMed

    Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian

    2018-05-08

    An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations ( Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016 , 144 , 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017 , 13 , 1647 - 1655 ) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.

  4. Comparing vector-based and Bayesian memory models using large-scale datasets: User-generated hashtag and tag prediction on Twitter and Stack Overflow.

    PubMed

    Stanley, Clayton; Byrne, Michael D

    2016-12-01

    The growth of social media and user-created content on online sites provides unique opportunities to study models of human declarative memory. By framing the task of choosing a hashtag for a tweet and tagging a post on Stack Overflow as a declarative memory retrieval problem, 2 cognitively plausible declarative memory models were applied to millions of posts and tweets and evaluated on how accurately they predict a user's chosen tags. An ACT-R based Bayesian model and a random permutation vector-based model were tested on the large data sets. The results show that past user behavior of tag use is a strong predictor of future behavior. Furthermore, past behavior was successfully incorporated into the random permutation model that previously used only context. Also, ACT-R's attentional weight term was linked to an entropy-weighting natural language processing method used to attenuate high-frequency words (e.g., articles and prepositions). Word order was not found to be a strong predictor of tag use, and the random permutation model performed comparably to the Bayesian model without including word order. This shows that the strength of the random permutation model is not in the ability to represent word order, but rather in the way in which context information is successfully compressed. The results of the large-scale exploration show how the architecture of the 2 memory models can be modified to significantly improve accuracy, and may suggest task-independent general modifications that can help improve model fit to human data in a much wider range of domains. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
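    The "past behavior predicts future behavior" finding is what ACT-R's base-level learning equation encodes: B_i = ln Σ_j t_j^(-d), summing over the times since each prior use of item i with decay rate d (commonly 0.5). A sketch of that standard equation (the models in the paper add context and other activation terms on top of this):

      import math

      def base_level_activation(use_times, now, d=0.5):
          """ACT-R base-level activation from past usage timestamps."""
          ages = [now - t for t in use_times if now > t]
          if not ages:
              return float('-inf')        # never used: no base-level strength
          return math.log(sum(age ** -d for age in ages))

      # A tag used often and recently outranks one used once, long ago:
      print(base_level_activation([1.0, 50.0, 99.0], now=100.0))
      print(base_level_activation([1.0], now=100.0))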

  5. Financial Management of a Large Multi-site Randomized Clinical Trial

    PubMed Central

    Sheffet, Alice J.; Flaxman, Linda; Tom, MeeLee; Hughes, Susan E.; Longbottom, Mary E.; Howard, Virginia J.; Marler, John R.; Brott, Thomas G.

    2014-01-01

    Background The Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) received five years’ funding ($21,112,866) from the National Institutes of Health to compare carotid stenting to surgery for stroke prevention in 2,500 randomized participants at 40 sites. Aims Herein we evaluate the change in the CREST budget from a fixed to variable-cost model and recommend strategies for the financial management of large-scale clinical trials. Methods Projections of the original grant’s fixed-cost model were compared to the actual costs of the revised variable-cost model. The original grant’s fixed-cost budget included salaries, fringe benefits, and other direct and indirect costs. For the variable-cost model, the costs were actual payments to the clinical sites and core centers based upon actual trial enrollment. We compared annual direct and indirect costs and per-patient cost for both the fixed and variable models. Differences between clinical site and core center expenditures were also calculated. Results Using a variable-cost budget for clinical sites, funding was extended by no-cost extension from five to eight years. Randomizing sites tripled from 34 to 109. Of the 2,500 targeted sample size, 138 (5.5%) were randomized during the first five years and 1,387 (55.5%) during the no-cost extension. The actual per-patient costs of the variable model were 9% ($13,845) of the projected per-patient costs ($152,992) of the fixed model. Conclusions Performance-based budgets conserve funding, promote compliance, and allow for additional sites at modest additional cost. Costs of large-scale clinical trials can thus be reduced through effective management without compromising scientific integrity. PMID:24661748

  6. Financial management of a large multisite randomized clinical trial.

    PubMed

    Sheffet, Alice J; Flaxman, Linda; Tom, MeeLee; Hughes, Susan E; Longbottom, Mary E; Howard, Virginia J; Marler, John R; Brott, Thomas G

    2014-08-01

    The Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) received five years' funding ($21 112 866) from the National Institutes of Health to compare carotid stenting to surgery for stroke prevention in 2500 randomized participants at 40 sites. Herein we evaluate the change in the CREST budget from a fixed to variable-cost model and recommend strategies for the financial management of large-scale clinical trials. Projections of the original grant's fixed-cost model were compared to the actual costs of the revised variable-cost model. The original grant's fixed-cost budget included salaries, fringe benefits, and other direct and indirect costs. For the variable-cost model, the costs were actual payments to the clinical sites and core centers based upon actual trial enrollment. We compared annual direct and indirect costs and per-patient cost for both the fixed and variable models. Differences between clinical site and core center expenditures were also calculated. Using a variable-cost budget for clinical sites, funding was extended by no-cost extension from five to eight years. Randomizing sites tripled from 34 to 109. Of the 2500 targeted sample size, 138 (5·5%) were randomized during the first five years and 1387 (55·5%) during the no-cost extension. The actual per-patient costs of the variable model were 9% ($13 845) of the projected per-patient costs ($152 992) of the fixed model. Performance-based budgets conserve funding, promote compliance, and allow for additional sites at modest additional cost. Costs of large-scale clinical trials can thus be reduced through effective management without compromising scientific integrity. © 2014 The Authors. International Journal of Stroke © 2014 World Stroke Organization.

  7. Two-dimensional Ising model on random lattices with constant coordination number

    NASA Astrophysics Data System (ADS)

    Schrauth, Manuel; Richter, Julian A. J.; Portela, Jefferson S. E.

    2018-02-01

    We study the two-dimensional Ising model on networks with quenched topological (connectivity) disorder. In particular, we construct random lattices of constant coordination number and perform large-scale Monte Carlo simulations in order to obtain critical exponents using finite-size scaling relations. We find disorder-dependent effective critical exponents, similar to diluted models, showing thus no clear universal behavior. Considering the very recent results for the two-dimensional Ising model on proximity graphs and the coordination number correlation analysis suggested by Barghathi and Vojta [Phys. Rev. Lett. 113, 120602 (2014), 10.1103/PhysRevLett.113.120602], our results indicate that the planarity and connectedness of the lattice play an important role on deciding whether the phase transition is stable against quenched topological disorder.
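    For reference, the Monte Carlo kernel in such studies is typically a Metropolis update; a minimal sketch for an Ising model on an arbitrary graph (our illustration of the method class, not the authors' large-scale code):

      import math
      import random

      def metropolis_sweep(spins, neighbors, beta, rng):
          """spins: list of +/-1; neighbors[i]: sites bonded to site i."""
          n = len(spins)
          for _ in range(n):
              i = rng.randrange(n)
              # Energy change for flipping spin i (ferromagnetic J = 1).
              dE = 2.0 * spins[i] * sum(spins[j] for j in neighbors[i])
              if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                  spins[i] = -spins[i]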

  8. The topology of large-scale structure. V - Two-dimensional topology of sky maps

    NASA Astrophysics Data System (ADS)

    Gott, J. R., III; Mao, Shude; Park, Changbom; Lahav, Ofer

    1992-01-01

    A 2D algorithm is applied to observed sky maps and numerical simulations. It is found that when topology is studied on smoothing scales larger than the correlation length, the topology is approximately in agreement with the random-phase formula for the 2D genus-threshold density relation, G_2(ν) ∝ ν exp(−ν²/2). Some samples show small 'meatball shifts' similar to those seen in corresponding 3D observational samples and similar to those produced by biasing in cold dark matter simulations. The observational results are thus consistent with the standard model in which the structure in the universe today has grown from small fluctuations caused by random quantum noise in the early universe.

  9. Open-label placebo treatment in chronic low back pain: a randomized controlled trial

    PubMed Central

    Carvalho, Cláudia; Caetano, Joaquim Machado; Cunha, Lidia; Rebouta, Paula; Kaptchuk, Ted J.; Kirsch, Irving

    2016-01-01

    Abstract This randomized controlled trial was performed to investigate whether placebo effects in chronic low back pain could be harnessed ethically by adding open-label placebo (OLP) treatment to treatment as usual (TAU) for 3 weeks. Pain severity was assessed on three 0- to 10-point Numeric Rating Scales, scoring maximum pain, minimum pain, and usual pain, and a composite, primary outcome, total pain score. Our other primary outcome was back-related dysfunction, assessed on the Roland–Morris Disability Questionnaire. In an exploratory follow-up, participants on TAU received placebo pills for 3 additional weeks. We randomized 97 adults reporting persistent low back pain for more than 3 months' duration and diagnosed by a board-certified pain specialist. Eighty-three adults completed the trial. Compared to TAU, OLP elicited greater pain reduction on each of the three 0- to 10-point Numeric Rating Scales and on the 0- to 10-point composite pain scale (P < 0.001), with moderate to large effect sizes. Pain reduction on the composite Numeric Rating Scales was 1.5 (95% confidence interval: 1.0-2.0) in the OLP group and 0.2 (−0.3 to 0.8) in the TAU group. Open-label placebo treatment also reduced disability compared to TAU (P < 0.001), with a large effect size. Improvement in disability scores was 2.9 (1.7-4.0) in the OLP group and 0.0 (−1.1 to 1.2) in the TAU group. After being switched to OLP, the TAU group showed significant reductions in both pain (1.5, 0.8-2.3) and disability (3.4, 2.2-4.5). Our findings suggest that OLP pills presented in a positive context may be helpful in chronic low back pain. PMID:27755279

  10. Open-label placebo treatment in chronic low back pain: a randomized controlled trial.

    PubMed

    Carvalho, Cláudia; Caetano, Joaquim Machado; Cunha, Lidia; Rebouta, Paula; Kaptchuk, Ted J; Kirsch, Irving

    2016-12-01

    This randomized controlled trial was performed to investigate whether placebo effects in chronic low back pain could be harnessed ethically by adding open-label placebo (OLP) treatment to treatment as usual (TAU) for 3 weeks. Pain severity was assessed on three 0- to 10-point Numeric Rating Scales, scoring maximum pain, minimum pain, and usual pain, and a composite, primary outcome, total pain score. Our other primary outcome was back-related dysfunction, assessed on the Roland-Morris Disability Questionnaire. In an exploratory follow-up, participants on TAU received placebo pills for 3 additional weeks. We randomized 97 adults reporting persistent low back pain for more than 3 months' duration and diagnosed by a board-certified pain specialist. Eighty-three adults completed the trial. Compared to TAU, OLP elicited greater pain reduction on each of the three 0- to 10-point Numeric Rating Scales and on the 0- to 10-point composite pain scale (P < 0.001), with moderate to large effect sizes. Pain reduction on the composite Numeric Rating Scales was 1.5 (95% confidence interval: 1.0-2.0) in the OLP group and 0.2 (-0.3 to 0.8) in the TAU group. Open-label placebo treatment also reduced disability compared to TAU (P < 0.001), with a large effect size. Improvement in disability scores was 2.9 (1.7-4.0) in the OLP group and 0.0 (-1.1 to 1.2) in the TAU group. After being switched to OLP, the TAU group showed significant reductions in both pain (1.5, 0.8-2.3) and disability (3.4, 2.2-4.5). Our findings suggest that OLP pills presented in a positive context may be helpful in chronic low back pain.

  11. Stochastic Fermi Energization of Coronal Plasma during Explosive Magnetic Energy Release

    NASA Astrophysics Data System (ADS)

    Pisokas, Theophilos; Vlahos, Loukas; Isliker, Heinz; Tsiolis, Vassilis; Anastasiadis, Anastasios

    2017-02-01

    The aim of this study is to analyze the interaction of charged particles (ions and electrons) with randomly formed particle scatterers (e.g., large-scale local “magnetic fluctuations” or “coherent magnetic irregularities”) using the setup proposed initially by Fermi. These scatterers are formed by the explosive magnetic energy release and propagate with the Alfvén speed along the irregular magnetic fields. They are large-scale local fluctuations (δB/B ≈ 1) randomly distributed inside the unstable magnetic topology and will here be called Alfvénic Scatterers (AS). We constructed a 3D grid on which a small fraction of randomly chosen grid points are acting as AS. In particular, we study how a large number of test particles evolves inside a collection of AS, analyzing the evolution of their energy distribution and their escape-time distribution. We use a well-established method to estimate the transport coefficients directly from the trajectories of the particles. Using the estimated transport coefficients and solving the Fokker-Planck equation numerically, we can recover the energy distribution of the particles. We have shown that the stochastic Fermi energization of mildly relativistic and relativistic plasma can heat and accelerate the tail of the ambient particle distribution as predicted by Parker & Tidman and Ramaty. The temperature of the hot plasma and the tail of the energetic particles depend on the mean free path (λsc) of the particles between the scatterers inside the energization volume.

  12. Stochastic Fermi Energization of Coronal Plasma during Explosive Magnetic Energy Release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pisokas, Theophilos; Vlahos, Loukas; Isliker, Heinz

    The aim of this study is to analyze the interaction of charged particles (ions and electrons) with randomly formed particle scatterers (e.g., large-scale local “magnetic fluctuations” or “coherent magnetic irregularities”) using the setup proposed initially by Fermi. These scatterers are formed by the explosive magnetic energy release and propagate with the Alfvén speed along the irregular magnetic fields. They are large-scale local fluctuations (δB/B ≈ 1) randomly distributed inside the unstable magnetic topology and will here be called Alfvénic Scatterers (AS). We constructed a 3D grid on which a small fraction of randomly chosen grid points are acting as AS. In particular, we study how a large number of test particles evolves inside a collection of AS, analyzing the evolution of their energy distribution and their escape-time distribution. We use a well-established method to estimate the transport coefficients directly from the trajectories of the particles. Using the estimated transport coefficients and solving the Fokker–Planck equation numerically, we can recover the energy distribution of the particles. We have shown that the stochastic Fermi energization of mildly relativistic and relativistic plasma can heat and accelerate the tail of the ambient particle distribution as predicted by Parker and Tidman and Ramaty. The temperature of the hot plasma and the tail of the energetic particles depend on the mean free path (λ_sc) of the particles between the scatterers inside the energization volume.
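    As a loose illustration of Fermi-type stochastic energization (a toy model only: the paper's 3D grid of Alfvénic scatterers, transport-coefficient estimation, and Fokker–Planck solution are not reproduced; the multiplicative energy-kick form and the value of beta are assumptions):

    import numpy as np

    # Toy second-order Fermi energization: each scattering multiplies the
    # particle energy by a random factor near unity; head-on collisions
    # (mu < 0) gain energy, tail-on collisions lose, and averaging over the
    # collision angle leaves a net gain of order beta**2 per scattering.
    rng = np.random.default_rng(1)
    n_particles, n_kicks = 100_000, 200
    beta = 0.05                          # scatterer speed / particle speed (assumed)

    E = np.ones(n_particles)             # dimensionless initial energies
    for _ in range(n_kicks):
        mu = rng.uniform(-1, 1, n_particles)   # random collision-angle cosine
        E *= (1.0 - beta * mu) ** 2            # assumed elastic head-on/tail-on kick

    print("mean energy gain:", E.mean())       # > 1: net stochastic heating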

  13. Tortuosity of lightning return stroke channels

    NASA Technical Reports Server (NTRS)

    Levine, D. M.; Gilson, B.

    1984-01-01

    Data obtained from photographs of lightning are presented on the tortuosity of return stroke channels. The data were obtained by making piecewise linear fits to the channels, and recording the cartesian coordinates of the ends of each linear segment. The mean change between ends of the segments was nearly zero in the horizontal direction and about eight meters in the vertical direction. Histograms of these changes are presented. These data were used to create model lightning channels and to predict the electric fields radiated during return strokes. This was done using a computer-generated random walk in which linear segments were placed end-to-end to form a piecewise linear representation of the channel. The computer selected random numbers for the ends of the segments assuming a normal distribution with the measured statistics. Once the channels were simulated, the electric fields radiated during a return stroke were predicted using a transmission line model on each segment. It was found that realistic channels are obtained with this procedure, but only if the model includes two scales of tortuosity: fine-scale irregularities, corresponding to the local channel tortuosity, superimposed on large-scale horizontal drifts. The two scales of tortuosity are also necessary to obtain agreement between the electric fields computed mathematically from the simulated channels and the electric fields radiated from real return strokes. Without the large-scale drifts, the computed electric fields do not have the undulations characteristic of the data.
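    A minimal sketch of this two-scale channel construction (the ~8 m mean vertical step is from the abstract; the standard deviations and the smooth drift profile are illustrative assumptions):

    import numpy as np

    # Two-scale tortuous channel: fine-scale random segments (mean ~0 m
    # horizontal, ~8 m vertical, per the abstract) superimposed on a smooth
    # large-scale horizontal drift. Sigmas and drift shape are assumptions.
    rng = np.random.default_rng(2)
    n_seg = 400
    dx = rng.normal(0.0, 5.0, n_seg)          # horizontal steps (m), assumed sigma
    dz = rng.normal(8.0, 5.0, n_seg)          # vertical steps (m)
    z = np.cumsum(dz)
    drift = 50.0 * np.sin(2 * np.pi * z / z[-1])   # assumed large-scale drift (m)
    x = np.cumsum(dx) + drift                 # piecewise-linear channel x(z)

    print(f"channel top at z = {z[-1]:.0f} m, net horizontal offset {x[-1]:.0f} m")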

  14. Energetic Consistency and Coupling of the Mean and Covariance Dynamics

    NASA Technical Reports Server (NTRS)

    Cohn, Stephen E.

    2008-01-01

    The dynamical state of the ocean and atmosphere is taken to be a large-dimensional random vector in a range of large-scale computational applications, including data assimilation, ensemble prediction, sensitivity analysis, and predictability studies. In each of these applications, numerical evolution of the covariance matrix of the random state plays a central role, because this matrix is used to quantify uncertainty in the state of the dynamical system. Since atmospheric and ocean dynamics are nonlinear, there is no closed evolution equation for the covariance matrix, nor for the mean state; therefore approximate evolution equations must be used. This article studies theoretical properties of the evolution equations for the mean state and covariance matrix that arise in the second-moment closure approximation (third- and higher-order moment discard). This approximation was introduced by Epstein [1969] in an early effort to introduce a stochastic element into deterministic weather forecasting, and was studied further by Fleming [1971a,b], Epstein and Pitcher [1972], and Pitcher [1977], also in the context of atmospheric predictability. It has since fallen into disuse, a simpler approximation being used instead in current large-scale applications. The theoretical results of this article make the case that the second-moment closure approximation should be reconsidered for use in large-scale applications, because it possesses a property of energetic consistency that the approximate equations now in common use do not. A number of properties of solutions of the second-moment closure equations that result from this energetic consistency are established.

  15. Toward a Framework for Learner Segmentation

    ERIC Educational Resources Information Center

    Azarnoush, Bahareh; Bekki, Jennifer M.; Runger, George C.; Bernstein, Bianca L.; Atkinson, Robert K.

    2013-01-01

    Effectively grouping learners in an online environment is a highly useful task. However, datasets used in this task often have large numbers of attributes of disparate types and different scales, which traditional clustering approaches cannot handle effectively. Here, a unique dissimilarity measure based on the random forest, which handles the…

  16. Inquiry in the Physical Geology Classroom: Supporting Students' Conceptual Model Development

    ERIC Educational Resources Information Center

    Miller, Heather R.; McNeal, Karen S.; Herbert, Bruce E.

    2010-01-01

    This study characterizes the impact of an inquiry-based learning (IBL) module versus a traditionally structured laboratory exercise. Laboratory sections were randomized into experimental and control groups. The experimental group was taught using IBL pedagogical techniques and included manipulation of large-scale data-sets, use of multiple…

  17. Partial Identification of Treatment Effects: Applications to Generalizability

    ERIC Educational Resources Information Center

    Chan, Wendy

    2016-01-01

    Results from large-scale evaluation studies form the foundation of evidence-based policy. The randomized experiment is often considered the gold standard among study designs because the causal impact of a treatment or intervention can be assessed without threats of confounding from external variables. Policy-makers have become increasingly…

  18. Sequence analysis reveals genomic factors affecting EST-SSR primer performance and polymorphism

    USDA-ARS?s Scientific Manuscript database

    Search for simple sequence repeat (SSR) motifs and design of flanking primers in expressed sequence tag (EST) sequences can be easily done at a large scale using bioinformatics programs. However, failed amplification and/or detection, along with lack of polymorphism, is often seen among randomly sel...

  19. 61 FR 41385 - Notice of Government-Owned Inventions; Availability for Licensing

    Federal Register 2010, 2011, 2012, 2013, 2014

    1996-08-08

    ... PRESSURE VESSEL; filed 24 February 1995; patented 21 November 1995.// Patent 5,468,356: LARGE SCALE... // Patent 5,477,482: ULTRA HIGH DENSITY, NON-VOLATILE FERROMAGNETIC RANDOM ACCESS MEMORY; filed 1 October 1993. ...// Patent 5,483,017: HIGH TEMPERATURE THERMOSETS AND CERAMICS DERIVED FROM LINEAR CARBORANE-(SILOXANE OR...

  20. Escitalopram treatment of depression in human immunodeficiency virus/acquired immunodeficiency syndrome: a randomized, double-blind, placebo-controlled study.

    PubMed

    Hoare, Jacqueline; Carey, Paul; Joska, John A; Carrara, Henri; Sorsdahl, Katherine; Stein, Dan J

    2014-02-01

    Depression can be a chronic and impairing illness in people with human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome. Large randomized studies of newer selective serotonin reuptake inhibitors such as escitalopram in the treatment of depression in HIV, examining comparative treatment efficacy and safety, have yet to be done in HIV-positive patients. This was a fixed-dose, placebo-controlled, randomized, double-blind study to investigate the efficacy of escitalopram in HIV-seropositive subjects with Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, major depressive disorder. One hundred two participants were randomly assigned to either 10 mg of escitalopram or placebo for 6 weeks. An analysis of covariance of the completers found that there was no advantage for escitalopram over placebo on the Montgomery-Asberg Depression Rating Scale (p = 0.93). Sixty-two percent responded to escitalopram and 59% responded to placebo on the Clinical Global Impression Scale. Given the relatively high placebo response, future trials in this area need to be selective in participant recruitment and to be adequately powered.

  1. On the Feedback Phenomenon of an Impinging Jet

    DTIC Science & Technology

    1979-09-01

    the double-structured nature of turbulent flows: time-dependent quasi-ordered large-scale structures, and fine-scale random structures. Numerous ... [nomenclature fragment from the report's symbol list; recoverable entries: downstream and upstream waves, d = nozzle diameter, f = frequency (Hz), normalized power spectra and cross-spectra of signals i(t) and j(t)] ... (1975) suggested that these quasi-ordered structures are deterministic, in the sense that they have a characteristic shape, size and convection motion

  2. Improving Unipolar Resistive Switching Uniformity with Cone-Shaped Conducting Filaments and Its Logic-In-Memory Application.

    PubMed

    Gao, Shuang; Liu, Gang; Chen, Qilai; Xue, Wuhong; Yang, Huali; Shang, Jie; Chen, Bin; Zeng, Fei; Song, Cheng; Pan, Feng; Li, Run-Wei

    2018-02-21

    Resistive random access memory (RRAM) with inherent logic-in-memory capability exhibits great potential to construct beyond-von-Neumann computers. Particularly, unipolar RRAM is more promising because its single-polarity operation enables large-scale crossbar logic-in-memory circuits with the highest integration density and simpler peripheral control circuits. However, unipolar RRAM usually exhibits poor switching uniformity because of random activation of conducting filaments and consequently cannot meet the strict uniformity requirement for logic-in-memory application. In this contribution, a new methodology that constructs cone-shaped conducting filaments by using a chemically active metal cathode is proposed to improve unipolar switching uniformity. Such a peculiar metal cathode will react spontaneously with the oxide switching layer to form an interfacial layer, which together with the metal cathode itself can act as a load resistor to prevent the overgrowth of conducting filaments and thus make them more cone-like. In this way, the rupture of conducting filaments can be strictly limited to the tip region, making their residual parts favorable locations for subsequent filament growth and thus suppressing their random regeneration. As such, a novel "one switch + one unipolar RRAM cell" hybrid structure is capable of realizing all 16 Boolean logic functions for large-scale logic-in-memory circuits.

  3. Fast selection of miRNA candidates based on large-scale pre-computed MFE sets of randomized sequences

    PubMed Central

    2014-01-01

    Background Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Results Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. This allows on-the-fly calculation of the normal distribution for any candidate sequence composition. Conclusion The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone will not be sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification. PMID:24418292
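    The interpolation-and-scoring step might be sketched as follows (the composition-keyed lookup table, its values, and the GC-content binning are hypothetical placeholders for the paper's pre-computed MFE sets):

    import math

    # Sketch of an MFE-based P-value: look up the mean and standard deviation
    # of MFEs of randomized sequences with the candidate's composition, then
    # score the candidate's MFE against that normal distribution.
    precomputed = {  # GC content (binned to 0.05) -> (mean MFE, std MFE); assumed values
        0.40: (-25.1, 4.2),
        0.45: (-27.9, 4.4),
        0.50: (-30.6, 4.6),
    }

    def mfe_pvalue(mfe, gc_content):
        mu, sigma = precomputed[round(gc_content * 20) / 20]
        z = (mfe - mu) / sigma
        # one-sided P-value: chance a randomized sequence has an MFE this low
        return 0.5 * math.erfc(-z / math.sqrt(2))

    print(mfe_pvalue(-42.0, 0.50))   # low P-value suggests a miRNA-like fold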

  4. A Survey of Residents' Perceptions of the Effect of Large-Scale Economic Developments on Perceived Safety, Violence, and Economic Benefits.

    PubMed

    Fabio, Anthony; Geller, Ruth; Bazaco, Michael; Bear, Todd M; Foulds, Abigail L; Duell, Jessica; Sharma, Ravi

    2015-01-01

    Emerging research highlights the promise of community- and policy-level strategies in preventing youth violence. Large-scale economic developments, such as sports and entertainment arenas and casinos, may improve the living conditions, economics, public health, and overall wellbeing of area residents and may influence rates of violence within communities. To assess the effect of community economic development efforts on neighborhood residents' perceptions of violence, safety, and economic benefits, we conducted a telephone survey in 2011 using a listed sample of randomly selected numbers in six Pittsburgh neighborhoods. Descriptive analyses examined measures of perceived violence and safety and economic benefit. Responses were compared across neighborhoods using chi-square tests for multiple comparisons. Survey results were compared to census and police data. Residents in neighborhoods with the large-scale economic developments reported more casino-specific and arena-specific economic benefits. However, 42% of participants in the neighborhood with the entertainment arena felt there was an increase in crime, and 29% of respondents from the neighborhood with the casino felt there was an increase. In contrast, crime decreased in both neighborhoods. Large-scale economic developments can thus directly influence the perception of violence, regardless of actual violence rates.

  5. Negative probability of random multiplier in turbulence

    NASA Astrophysics Data System (ADS)

    Bai, Xuan; Su, Weidong

    2017-11-01

    The random multiplicative process (RMP), which has been proposed for over 50 years, is a convenient phenomenological ansatz for the turbulence cascade. In the RMP, the fluctuation at a large scale is statistically mapped to the one at a small scale by the linear action of an independent random multiplier (RM). Simple as it is, the RMP is powerful enough, since all of the known scaling laws can be included in this model. So far as we know, however, a direct extraction of the probability density function (PDF) of the RM has been lacking. The reason is that the deconvolution involved is ill-posed. Nevertheless, with progress in the study of inverse problems, the situation can be changed. By using some new regularization techniques, for the first time we recover the PDFs of the RMs in some turbulent flows. All the consistent results from various methods point to an amazing observation: the PDFs can attain negative values in some intervals, and this can also be justified by some properties of infinitely divisible distributions. Despite the conceptual unconventionality, the present study illustrates the implications of negative probability in turbulence in several aspects, with emphasis on its role in describing the interaction between fluctuations at different scales. This work is supported by the NSFC (No. 11221062 and No. 11521091).
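    The ill-posed deconvolution can be written compactly. Under the RMP the small-scale fluctuation is the product of the large-scale one and an independent RM, so in logarithmic variables the observed PDFs convolve (a sketch of the standard product-of-independent-variables relation; the subscript notation is assumed):

    % With u_s = \alpha u_l and \alpha independent of u_l, the log-PDFs convolve;
    % recovering p_\alpha from p_s and p_l is therefore a deconvolution problem.
    p_s(\ln u) = \int p_\alpha(\ln\alpha)\, p_l(\ln u - \ln\alpha)\, d\ln\alpha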

  6. A Randomized Controlled Exploratory Pilot Study to Evaluate the Effect of Rotigotine Transdermal Patch on Parkinson's Disease-Associated Chronic Pain.

    PubMed

    Rascol, Olivier; Zesiewicz, Theresa; Chaudhuri, K Ray; Asgharnejad, Mahnaz; Surmann, Erwin; Dohin, Elisabeth; Nilius, Sigrid; Bauer, Lars

    2016-07-01

    Pain is a troublesome nonmotor symptom of Parkinson's disease (PD). This double-blind exploratory pilot study (NCT01744496) was the first to specifically investigate the effect of a dopamine agonist on PD-associated pain as primary outcome. Patients with advanced PD (ie, receiving levodopa) and at least moderate PD-associated chronic pain (≥3 months, ≥4 points on 11-point Likert pain scale) were randomized to rotigotine (optimal/maximum dose ≤16 mg/24h) or placebo and maintained for 12 weeks. Primary efficacy variable was change in pain severity (Likert pain scale) from baseline to end of maintenance. Secondary variables included percentage of responders (≥2-point Likert pain scale reduction), King's PD Pain Scale (KPPS) domains, and PD Questionnaire (PDQ-8). Statistical analyses were exploratory. Of 68 randomized patients, 60 (rotigotine, 30; placebo, 30) were evaluable for efficacy. A numerical improvement in pain was observed in favor of rotigotine (Likert pain scale: least-squares mean [95%CI] treatment difference, -0.76 [-1.87 to 0.34]; P = .172), and proportion of responders was 18/30 (60%) rotigotine vs 14/30 (47%) placebo. An ∼2-fold numerical improvement in KPPS domain "fluctuation-related pain" was observed with rotigotine vs placebo. Rotigotine improved PDQ-8 vs placebo (-8.01 [-15.56 to -0.46]; P = .038). These results suggest rotigotine may improve PD-associated pain; a large-scale confirmatory study is needed. © 2015, The American College of Clinical Pharmacology.

  7. Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.; Le Doussal, Pierre

    2014-01-01

    Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic, problems in optimization theory, known as the "trust region subproblem" or "constrained least squares problem". When both terms in the cost function are random, this amounts to studying the ground-state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. a reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of order N and the cost function (energy) generically has two almost degenerate minima with Tracy-Widom (TW) statistics. In the second regime the number of critical points is of order unity, with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.

  8. Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin

    2006-01-01

    This study presents the random-effects rating scale model (RE-RSM), which takes into account randomness in the thresholds over persons by treating them as random effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…
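    For reference, the RSM in its usual notation, with the random-effects extension the abstract describes, can be sketched as follows (a sketch only; the paper's exact parameterization may differ):

    % Rating scale model (person n, item i, categories x = 0..m; empty sum = 0):
    P_{nix} = \frac{\exp\sum_{k=1}^{x}\bigl[\theta_n - (\delta_i + \tau_k)\bigr]}
                   {\sum_{h=0}^{m}\exp\sum_{k=1}^{h}\bigl[\theta_n - (\delta_i + \tau_k)\bigr]}
    % RE-RSM sketch: each threshold gains a person-specific random effect,
    % \tau_k \to \tau_k + \varepsilon_{nk}, \qquad \varepsilon_{nk} \sim N(0, \sigma_k^2)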

  9. Promoting Handwashing Behavior: The Effects of Large-scale Community and School-level Interventions.

    PubMed

    Galiani, Sebastian; Gertler, Paul; Ajzenman, Nicolas; Orsola-Vidal, Alexandra

    2016-12-01

    This paper analyzes a randomized experiment that uses novel strategies to promote handwashing with soap at critical points in time in Peru. It evaluates a large-scale comprehensive initiative that involved both community and school activities in addition to communication campaigns. The analysis indicates that the initiative was successful in reaching the target audience and in increasing the treated population's knowledge about appropriate handwashing behavior. These improvements translated into higher self-reported and observed handwashing with soap at critical junctures. However, no significant improvements in the health of children under the age of 5 years were observed. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Universality of accelerating change

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Shlesinger, Michael F.

    2018-03-01

    On large time scales the progress of human technology follows an exponential growth trend that is termed accelerating change. The exponential growth trend is commonly considered to be the amalgamated effect of consecutive technology revolutions, where the progress brought in by each technology revolution follows an S-curve, and where the aging of each technology revolution drives humanity to push for the next one. Thus, as a collective, mankind is the 'intelligent designer' of accelerating change. In this paper we establish that the exponential growth trend, and only this trend, emerges universally, on large time scales, from systems that combine two elements: randomness and amalgamation. Hence, the universal generation of accelerating change can be attained by systems with no 'intelligent designer'.

  11. Large-scale variation in subsurface stream biofilms: a cross-regional comparison of metabolic function and community similarity.

    PubMed

    Findlay, S; Sinsabaugh, R L

    2006-10-01

    We examined bacterial metabolic activity and community similarity in shallow subsurface stream sediments distributed across three regions of the eastern United States to assess whether there were parallel changes in functional and structural attributes at this large scale. Bacterial growth, oxygen consumption, and a suite of extracellular enzyme activities were assayed to describe functional variability. Community similarity was assessed using randomly amplified polymorphic DNA (RAPD) patterns. There were significant differences in streamwater chemistry, metabolic activity, and bacterial growth among regions with, for instance, twofold higher bacterial production in streams near Baltimore, MD, compared to Hubbard Brook, NH. Five of eight extracellular enzymes showed significant differences among regions. Cluster analyses of individual streams by metabolic variables showed clear groups with significant differences in representation of sites from different regions among groups. Clustering of sites based on RAPD banding resulted in groups with generally less internal similarity, although there were still differences in the distribution of regional sites. There was a marginally significant (p = 0.09) association between patterns based on functional and structural variables. There were statistically significant but weak (r² ≈ 30%) associations between landcover and measures of both structure and function. These patterns imply a large-scale organization of biofilm communities, and this structure may be imposed by factor(s) such as landcover and covariates such as nutrient concentrations, which are known to also cause differences in the macrobiota of stream ecosystems.

  12. A 3D model of polarized dust emission in the Milky Way

    NASA Astrophysics Data System (ADS)

    Martínez-Solaeche, Ginés; Karakci, Ata; Delabrouille, Jacques

    2018-05-01

    We present a three-dimensional model of polarized galactic dust emission that takes into account the variation of the dust density, spectral index and temperature along the line of sight, and contains randomly generated small-scale polarization fluctuations. The model is constrained to match observed dust emission on large scales, and match on smaller scales extrapolations of observed intensity and polarization power spectra. This model can be used to investigate the impact of plausible complexity of the polarized dust foreground emission on the analysis and interpretation of future cosmic microwave background polarization observations.

  13. [No relationship between blood type and personality: evidence from large-scale surveys in Japan and the US].

    PubMed

    Nawata, Kengo

    2014-06-01

    Despite the widespread popular belief in Japan about a relationship between personality and ABO blood type, this association has not been empirically substantiated. This study provides more robust evidence that there is no relationship between blood type and personality, through a secondary analysis of large-scale survey data. Recent data (after 2000) were collected using large-scale random sampling from over 10,000 people in total from both Japan and the US. Japanese datasets from 2004 (N = 2,878-2,938) and 2005 (N = 3,618-3,692) were used, as well as one US dataset from 2004 (N = 3,037-3,092), and effect sizes were calculated. In all the datasets, 65 of 68 items yielded non-significant differences between blood groups. Effect sizes (η²) were less than .003, meaning that blood type explained less than 0.3% of the total variance in personality. These results show the non-relevance of blood type for personality.

  14. SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.

    PubMed

    Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P

    2013-12-01

    Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.

  15. Seeded hot dark matter models with inflation

    NASA Technical Reports Server (NTRS)

    Gratsias, John; Scherrer, Robert J.; Steigman, Gary; Villumsen, Jens V.

    1993-01-01

    We examine massive neutrino (hot dark matter) models for large-scale structure in which the density perturbations are produced by randomly distributed relic seeds and by inflation. Power spectra, streaming velocities, and the Sachs-Wolfe quadrupole fluctuation are derived for this model. We find that the pure seeded hot dark matter model without inflation produces Sachs-Wolfe fluctuations far smaller than those seen by COBE. With the addition of inflationary perturbations, fluctuations consistent with COBE can be produced. The COBE results set the normalization of the inflationary component, which determines the large-scale (about 50 h^-1 Mpc) streaming velocities. The normalization of the seed power spectrum is a free parameter, which can be adjusted to obtain the desired fluctuations on small scales. The power spectra produced are very similar to those seen in mixed hot and cold dark matter models.

  16. The Use of CASES-97 Observations to Assess and Parameterize the Impact of Land-Surface Heterogeneity on Area-Average Surface Heat Fluxes for Large-Scale Coupled Atmosphere-Hydrology Models

    NASA Technical Reports Server (NTRS)

    Chen, Fei; Yates, David; LeMone, Margaret

    2001-01-01

    To understand the effects of land-surface heterogeneity and the interactions between the land surface and the planetary boundary layer at different scales, we develop a multiscale data set. This data set, based on the Cooperative Atmosphere-Surface Exchange Study (CASES-97) observations, includes atmospheric, surface, and sub-surface observations obtained from a dense observation network covering a large region on the order of 100 km. We use this data set to drive three land-surface models (LSMs) to generate multi-scale (with three resolutions of 1, 5, and 10 kilometers) gridded surface heat flux maps for the CASES area. Upon validating these flux maps with measurements from surface stations and aircraft, we utilize them to investigate several approaches for estimating the area-integrated surface heat flux for the CASES-97 domain of 71 km × 74 km, which is crucial for land-surface model development/validation and for area water and energy budget studies. This research is aimed at understanding the relative contribution of random turbulence versus organized mesoscale circulations to the area-integrated surface flux at the scale of 100 kilometers, and at identifying the most important effective parameters for characterizing the subgrid-scale variability for large-scale atmosphere-hydrology models.

  17. Similarities between principal components of protein dynamics and random diffusion

    NASA Astrophysics Data System (ADS)

    Hess, Berk

    2000-12-01

    Principal component analysis, also called essential dynamics, is a powerful tool for finding global, correlated motions in atomic simulations of macromolecules. It has become an established technique for analyzing molecular dynamics simulations of proteins. The first few principal components of simulations of large proteins often resemble cosines. We derive the principal components for high-dimensional random diffusion, which are almost perfect cosines. This resemblance between protein simulations and noise implies that for many proteins the time scales of current simulations are too short to obtain convergence of collective motions.
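    The cosine resemblance is easy to reproduce numerically; a minimal sketch (trajectory length and dimensionality are assumptions):

    import numpy as np

    # Leading principal component of high-dimensional random diffusion:
    # cumulative Gaussian steps, centered, then PCA via SVD. The first PC's
    # time course should overlap almost perfectly with a half-period cosine.
    rng = np.random.default_rng(3)
    n_steps, n_dim = 2000, 300
    traj = np.cumsum(rng.normal(size=(n_steps, n_dim)), axis=0)
    traj -= traj.mean(axis=0)

    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    pc1 = u[:, 0]                                    # time course of PC 1
    cosine = np.cos(np.pi * np.arange(n_steps) / n_steps)
    cosine /= np.linalg.norm(cosine)
    print("overlap of PC1 with half-period cosine:", abs(pc1 @ cosine))  # ~1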

  18. Power generation in random diode arrays

    NASA Astrophysics Data System (ADS)

    Shvydka, Diana; Karpov, V. G.

    2005-03-01

    We discuss nonlinear disordered systems, random diode arrays (RDAs), which can represent such objects as large-area photovoltaics and ion channels of biological membranes. Our numerical modeling has revealed several interesting properties of RDAs. In particular, the geometrical distribution of nonuniformities across a RDA has only a minor effect on its integral characteristics, which are determined by the RDA parameter statistics. At the same time, the dispersion of integral characteristics vs. system size exhibits a nontrivial scaling dependence. Our theoretical interpretation here remains limited and is based on the picture of eddy currents flowing through weak diodes in the RDA.

  19. Do Interim Assessments Reduce the Race and SES Achievement Gaps?

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros; Li, Wei; Miller, Shazia R.; van der Ploeg, Arie

    2017-01-01

    The authors examined differential effects of interim assessments on minority and low socioeconomic status students' achievement in Grades K-6. They conducted a large-scale cluster randomized experiment in 2009-2010 to evaluate the impact of Indiana's policy initiative introducing interim assessments statewide. The authors used 2-level models to…

  20. What Have Researchers Learned from Project STAR?

    ERIC Educational Resources Information Center

    Schanzenbach, Diane Whitmore

    2007-01-01

    Project STAR (Student/Teacher Achievement Ratio) was a large-scale randomized trial of reduced class sizes in kindergarten through the third grade. Because of the scope of the experiment, it has been used in many policy discussions. For example, the California statewide class-size-reduction policy was justified, in part, by the successes of…

  1. Designing Large-Scale Multisite and Cluster-Randomized Studies of Professional Development

    ERIC Educational Resources Information Center

    Kelcey, Ben; Spybrook, Jessaca; Phelps, Geoffrey; Jones, Nathan; Zhang, Jiaqi

    2017-01-01

    We develop a theoretical and empirical basis for the design of teacher professional development studies. We build on previous work by (a) developing estimates of intraclass correlation coefficients for teacher outcomes using two- and three-level data structures, (b) developing estimates of the variance explained by covariates, and (c) modifying…

  2. Process and Learning Outcomes from Remotely-Operated, Simulated, and Hands-on Student Laboratories

    ERIC Educational Resources Information Center

    Corter, James E.; Esche, Sven K.; Chassapis, Constantin; Ma, Jing; Nickerson, Jeffrey V.

    2011-01-01

    A large-scale, multi-year, randomized study compared learning activities and outcomes for hands-on, remotely-operated, and simulation-based educational laboratories in an undergraduate engineering course. Students (N = 458) worked in small-group lab teams to perform two experiments involving stress on a cantilever beam. Each team conducted the…

  3. Assessing Student Achievement in Large-Scale Educational Programs Using Hierarchical Propensity Scores

    ERIC Educational Resources Information Center

    Vaughan, Angela L.; Lalonde, Trent L.; Jenkins-Guarnieri, Michael A.

    2014-01-01

    Many researchers assessing the efficacy of educational programs face challenges due to issues with non-randomization and the likelihood of dependence between nested subjects. The purpose of the study was to demonstrate a rigorous research methodology using a hierarchical propensity score matching method that can be utilized in contexts where…

  4. Replicating Experimental Impact Estimates Using a Regression Discontinuity Approach. NCEE 2012-4025

    ERIC Educational Resources Information Center

    Gleason, Philip M.; Resch, Alexandra M.; Berk, Jillian A.

    2012-01-01

    This NCEE Technical Methods Paper compares the estimated impacts of an educational intervention using experimental and regression discontinuity (RD) study designs. The analysis used data from two large-scale randomized controlled trials--the Education Technology Evaluation and the Teach for America Study--to provide evidence on the performance of…

  5. Integration of Technology, Curriculum, and Professional Development for Advancing Middle School Mathematics: Three Large-Scale Studies

    ERIC Educational Resources Information Center

    Roschelle, Jeremy; Shechtman, Nicole; Tatar, Deborah; Hegedus, Stephen; Hopkins, Bill; Empson, Susan; Knudsen, Jennifer; Gallagher, Lawrence P.

    2010-01-01

    The authors present three studies (two randomized controlled experiments and one embedded quasi-experiment) designed to evaluate the impact of replacement units targeting student learning of advanced middle school mathematics. The studies evaluated the SimCalc approach, which integrates an interactive representational technology, paper curriculum,…

  6. Educational Research with Real-World Data: Reducing Selection Bias with Propensity Scores

    ERIC Educational Resources Information Center

    Adelson, Jill L.

    2013-01-01

    Often it is infeasible or unethical to use random assignment in educational settings to study important constructs and questions. Hence, educational research often uses observational data, such as large-scale secondary data sets and state and school district data, and quasi-experimental designs. One method of reducing selection bias in estimations…

  7. Results and Implications of a Problem-Solving Treatment Program for Obesity.

    ERIC Educational Resources Information Center

    Mahoney, B. K.; And Others

    Data are from a large-scale experimental study which was designed to evaluate a multimethod problem-solving approach to obesity. Obese adult volunteers (N=90) were randomly assigned to three groups: maximal treatment, minimal treatment, and no-treatment control. In the two treatment groups, subjects were exposed to bibliographic material and…

  8. The Characteristics and Quality of Pre-School Education in Spain

    ERIC Educational Resources Information Center

    Sandstrom, Heather

    2012-01-01

    We examined 25 four-year-old pre-school classrooms from a random sample of 15 schools within a large urban city in southern Spain. Observational measures of classroom quality included the Early Childhood Environment Rating Scale-Revised, the Classroom Assessment Scoring System and the Observation of Activities in Pre-school. Findings revealed…

  9. Depressive Symptoms Negate the Beneficial Effects of Physical Activity on Mortality Risk

    ERIC Educational Resources Information Center

    Lee, Pai-Lin

    2013-01-01

    The aim of this study is to: (1) compare the association between various levels of physical activity (PA) and mortality; and (2) examine the potential modifying effect of depressive symptoms on the PA-mortality associations. Previous large-scale randomized studies rarely assess the association in conjunction with modifying effects of depressive…

  10. Modeling velocity space-time correlations in wind farms

    NASA Astrophysics Data System (ADS)

    Lukassen, Laura J.; Stevens, Richard J. A. M.; Meneveau, Charles; Wilczek, Michael

    2016-11-01

    Turbulent fluctuations of wind velocities cause power-output fluctuations in wind farms. The statistics of velocity fluctuations can be described by velocity space-time correlations in the atmospheric boundary layer, and it is important to derive simple physics-based models for them. The so-called Tennekes-Kraichnan random sweeping hypothesis states that small-scale velocity fluctuations are passively advected by large-scale velocity perturbations in a random fashion. In the present work, this hypothesis, supplemented with a mean wind velocity, is used to derive a model for the spatial and temporal decorrelation of velocities in wind farms. It turns out that in the framework of this model, space-time correlations are a convolution of the spatial correlation function with a temporal decorrelation kernel. First results of a comparison with large-eddy simulations are presented, and the potential of the approach to characterize power-output fluctuations of wind farms is discussed. Acknowledgements: 'Fellowships for Young Energy Scientists' (YES!) of FOM, the US National Science Foundation Grant IIA 1243482, and support by the Max Planck Society.
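    Schematically, the model structure described here can be written as follows (a sketch with assumed notation: R is the spatial correlation, U the mean advection velocity, and P(v) the distribution of random sweeping velocities):

    % Space-time correlation under random sweeping with a mean wind (sketch):
    % the spatial correlation R is convolved with a temporal decorrelation
    % kernel set by the sweeping-velocity distribution P.
    C(\mathbf{r}, \tau) = \int R\bigl(\mathbf{r} - (\mathbf{U} + \mathbf{v})\tau\bigr)\, P(\mathbf{v})\, d\mathbf{v}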

  11. Kinetics of Aggregation with Choice

    DOE PAGES

    Ben-Naim, Eli; Krapivsky, Paul

    2016-12-01

    Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that, as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. Finally, we also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.
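    The selection rule is simple enough to simulate directly; a minimal Monte Carlo sketch of aggregation with choice (system size and stopping point are assumptions):

    import numpy as np

    # Aggregation with choice: pick one target and two candidate clusters at
    # random; the target merges with the LARGER of the two candidates.
    rng = np.random.default_rng(4)
    clusters = [1] * 10_000                    # start from monomers

    while len(clusters) > 100:
        target, c1, c2 = rng.choice(len(clusters), size=3, replace=False)
        chosen = c1 if clusters[c1] >= clusters[c2] else c2   # the choice step
        clusters[target] += clusters[chosen]
        clusters.pop(chosen)

    print("largest cluster:", max(clusters), "of", sum(clusters), "total mass")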

  12. Signatures of bifurcation on quantum correlations: Case of the quantum kicked top

    NASA Astrophysics Data System (ADS)

    Bhosale, Udaysinh T.; Santhanam, M. S.

    2017-01-01

    Quantum correlations reflect the quantumness of a system and are useful resources for quantum information and computational processes. Measures of quantum correlations do not have a classical analog and yet are influenced by classical dynamics. In this work, by modeling the quantum kicked top as a multiqubit system, the effect of classical bifurcations on measures of quantum correlations such as the quantum discord, geometric discord, and Meyer and Wallach Q measure is studied. The quantum correlation measures change rapidly in the vicinity of a classical bifurcation point. If the classical system is largely chaotic, time averages of the correlation measures are in good agreement with the values obtained by considering the appropriate random matrix ensembles. The quantum correlations scale with the total spin of the system, representing its semiclassical limit. In the vicinity of trivial fixed points of the kicked top, the scaling function decays as a power law. In the chaotic limit, for large total spin, quantum correlations saturate to a constant, which we obtain analytically, based on random matrix theory, for the Q measure. We also suggest that it can have experimental consequences.

  13. Development and Validation of a Spanish Version of the Grit-S Scale

    PubMed Central

    Arco-Tirado, Jose L.; Fernández-Martín, Francisco D.; Hoyle, Rick H.

    2018-01-01

    This paper describes the development and initial validation of a Spanish version of the Short Grit (Grit-S) Scale. The Grit-S Scale was adapted and translated into Spanish using the Translation, Review, Adjudication, Pre-testing, and Documentation model and responses to a preliminary set of items from a large sample of university students (N = 1,129). The resultant measure was validated using data from a large stratified random sample of young adults (N = 1,826). Initial validation involved evaluating the internal consistency of the adapted scale and its subscales and comparing the factor structure of the adapted version to that of the original scale. The results were comparable to results from similar analyses of the English version of the scale. Although the internal consistency of the subscales was low, the internal consistency of the full scale was well within the acceptable range. A two-factor model offered an acceptable account of the data; however, when a single correlated error involving two highly similar items was included, a single-factor model fit the data very well. The results support the use of overall scores from the Spanish Grit-S Scale in future research. PMID:29467705
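    Internal consistency of the kind evaluated here is conventionally summarized with Cronbach's alpha; a minimal sketch on simulated item responses (the data and item count are hypothetical, not the Grit-S items):

    import numpy as np

    # Cronbach's alpha, the usual internal-consistency summary used in scale
    # validations like the one above; the response matrix is simulated.
    def cronbach_alpha(items):
        """items: (n_respondents, k_items) array of item scores."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(5)
    latent = rng.normal(size=(500, 1))                      # common trait
    items = latent + rng.normal(scale=1.0, size=(500, 8))   # 8 noisy items
    print(f"alpha = {cronbach_alpha(items):.2f}")           # ~0.89 here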

  14. Creation of current filaments in the solar corona

    NASA Technical Reports Server (NTRS)

    Mikic, Z.; Schnack, D. D.; Van Hoven, G.

    1989-01-01

    It has been suggested that the solar corona is heated by the dissipation of electric currents. The low value of the resistivity requires the magnetic field to have structure at very small length scales if this mechanism is to work. In this paper it is demonstrated that the coronal magnetic field acquires small-scale structure through the braiding produced by smooth, randomly phased, photospheric flows. The current density develops a filamentary structure and grows exponentially in time. Nonlinear processes in the ideal magnetohydrodynamic equations produce a cascade effect, in which the structure introduced by the flow at large length scales is transferred to smaller scales. If this process continues down to the resistive dissipation length scale, it would provide an effective mechanism for coronal heating.

  15. Large Scale Gaussian Processes for Atmospheric Parameter Retrieval and Cloud Screening

    NASA Astrophysics Data System (ADS)

    Camps-Valls, G.; Gomez-Chova, L.; Mateo, G.; Laparra, V.; Perez-Suay, A.; Munoz-Mari, J.

    2017-12-01

    Current Earth-observation (EO) applications for image classification have to deal with an unprecedentedly large amount of heterogeneous and complex data sources. Spatio-temporally explicit classification methods are a requirement in a variety of Earth system data processing applications. Upcoming missions such as the Copernicus Sentinels and the super-spectral EnMAP and FLEX will soon provide unprecedented data streams. Very high resolution (VHR) sensors like WorldView-3 also pose big challenges to data processing. The challenge is not only attached to optical sensors but also to infrared sounders and radar images, which have increased in spectral, spatial and temporal resolution. Besides, we should not forget the availability of the extremely large remote sensing data archives already collected by several past missions, such as ENVISAT, Cosmo-SkyMed, Landsat, SPOT, or Seviri/MSG. These large-scale data problems require enhanced processing techniques that should be accurate, robust and fast. Standard parameter retrieval and classification algorithms cannot cope with this new scenario efficiently. In this work, we review the field of large-scale kernel methods for both atmospheric parameter retrieval and cloud detection using infrared sounding IASI data and optical Seviri/MSG imagery. We propose novel Gaussian processes (GPs) to train problems with millions of instances and a high number of input features. The algorithms cope with non-linearities efficiently, accommodate multi-output problems, and provide confidence intervals for the predictions. Several strategies to speed up the algorithms are devised: random Fourier features and variational approaches for cloud classification using IASI data and Seviri/MSG, and engineered randomized kernel functions and emulation in temperature, moisture and ozone atmospheric profile retrieval from IASI as a proxy to the upcoming MTG-IRS sensor. Excellent compromises between accuracy and scalability are obtained in all applications.
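    Random Fourier features, one of the speed-up strategies named here, follow the standard Rahimi-Recht construction; a minimal sketch for an RBF kernel (sizes and kernel width are assumptions, unrelated to the IASI setup):

    import numpy as np

    # Random Fourier features approximating the RBF kernel
    # k(x, y) = exp(-gamma * ||x - y||^2); error shrinks roughly as 1/sqrt(D).
    rng = np.random.default_rng(6)
    n, d, D, gamma = 500, 10, 2000, 0.5     # samples, input dim, features, width
    X = rng.normal(size=(n, d))

    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))  # spectral samples
    b = rng.uniform(0, 2 * np.pi, D)
    Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)               # explicit feature map

    K_exact = np.exp(-gamma * ((X[:, None] - X[None, :]) ** 2).sum(-1))
    K_approx = Z @ Z.T                       # linear-cost kernel approximation
    print("max abs error:", np.abs(K_exact - K_approx).max())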

  16. Solitaire™ with the Intention for Thrombectomy as Primary Endovascular Treatment for Acute Ischemic Stroke (SWIFT PRIME) trial: protocol for a randomized, controlled, multicenter study comparing the Solitaire revascularization device with IV tPA with IV tPA alone in acute ischemic stroke.

    PubMed

    Saver, Jeffrey L; Goyal, Mayank; Bonafe, Alain; Diener, Hans-Christoph; Levy, Elad I; Pereira, Vitor M; Albers, Gregory W; Cognard, Christophe; Cohen, David J; Hacke, Werner; Jansen, Olav; Jovin, Tudor G; Mattle, Heinrich P; Nogueira, Raul G; Siddiqui, Adnan H; Yavagal, Dileep R; Devlin, Thomas G; Lopes, Demetrius K; Reddy, Vivek; du Mesnil de Rochemont, Richard; Jahan, Reza

    2015-04-01

    Early reperfusion in patients experiencing acute ischemic stroke is critical, especially for patients with large vessel occlusion, who have a poor prognosis without revascularization. Solitaire™ stent retriever devices have been shown to immediately restore vascular perfusion safely, rapidly, and effectively in acute ischemic stroke patients with large vessel occlusions. The aim of the study was to demonstrate that, among patients with large vessel, anterior circulation occlusion who have received intravenous tissue plasminogen activator, treatment with Solitaire revascularization devices reduces the degree of disability 3 months post-stroke. The study is a global multicenter, two-arm, prospective, randomized, open, blinded end-point trial comparing functional outcomes in acute ischemic stroke patients who are treated with either intravenous tissue plasminogen activator alone or intravenous tissue plasminogen activator in combination with the Solitaire device. Up to 833 patients will be enrolled. Patients who have received intravenous tissue plasminogen activator are randomized to either continue with intravenous tissue plasminogen activator alone or additionally proceed to neurothrombectomy using the Solitaire device within six hours of symptom onset. The primary end-point is 90-day global disability, assessed with the modified Rankin Scale (mRS). Secondary outcomes include mortality at 90 days, functional independence (mRS ≤ 2) at 90 days, change in National Institutes of Health Stroke Scale at 27 h, reperfusion at 27 h, and thrombolysis in cerebral infarction 2b/3 flow at the end of the procedure. Statistical analysis will be conducted using simultaneous success criteria on the overall distribution of modified Rankin Scale (Rankin shift) and proportions of subjects achieving functional independence (mRS 0-2). © 2015 The Authors. International Journal of Stroke published by John Wiley & Sons Ltd on behalf of World Stroke Organization.

  17. Semantic classification of urban buildings combining VHR image and GIS data: An improved random forest approach

    NASA Astrophysics Data System (ADS)

    Du, Shihong; Zhang, Fangli; Zhang, Xiuyuan

    2015-07-01

    While most existing studies have focused on extracting geometric information on buildings, only a few have concentrated on semantic information. The lack of semantic information cannot satisfy many demands on resolving environmental and social issues. This study presents an approach to semantically classify buildings into much finer categories than those of existing studies by learning a random forest (RF) classifier from a large number of imbalanced samples with high-dimensional features. First, a two-level segmentation mechanism combining GIS and VHR imagery produces single image objects at a large scale and intra-object components at a small scale. Second, a semi-supervised method chooses a large number of unbiased samples by considering the spatial proximity and intra-cluster similarity of buildings. Third, two important improvements are made to the RF classifier: a voting-distribution ranked rule for reducing the influence of imbalanced samples on classification accuracy, and a feature importance measurement for evaluating each feature's contribution to the recognition of each category. Fourth, the semantic classification of urban buildings is conducted in practice in Beijing, and the results demonstrate that the proposed approach is effective and accurate. The seven categories used in the study are finer than those in existing work and more helpful for studying many environmental and social problems.

  18. Not a Copernican observer: biased peculiar velocity statistics in the local Universe

    NASA Astrophysics Data System (ADS)

    Hellwing, Wojciech A.; Nusser, Adi; Feix, Martin; Bilicki, Maciej

    2017-05-01

    We assess the effect of the local large-scale structure on the estimation of two-point statistics of the observed radial peculiar velocities of galaxies. A large N-body simulation is used to examine these statistics from the perspective of random observers as well as 'Local Group-like' observers conditioned to reside in an environment resembling the observed Universe within 20 Mpc. The local environment systematically distorts the shape and amplitude of velocity statistics with respect to ensemble-averaged measurements made by a Copernican (random) observer. The Virgo cluster has the most significant impact, introducing large systematic deviations in all the statistics. For a simple 'top-hat' selection function, an idealized survey extending to ~160 h⁻¹ Mpc or deeper is needed to completely mitigate the effects of the local environment. Using shallower catalogues leads to systematic deviations of the order of 50-200 per cent, depending on the scale considered. For a flat redshift distribution similar to that of the CosmicFlows-3 survey, the deviations are even more prominent in both shape and amplitude at all separations considered (≲100 h⁻¹ Mpc). Conclusions based on statistics calculated without taking into account the impact of the local environment should be revisited.

  19. Fragmentation under the Scaling Symmetry and Turbulent Cascade with Intermittency

    NASA Technical Reports Server (NTRS)

    Gorokhovski, M.

    2003-01-01

    Fragmentation plays an important role in a variety of physical, chemical, and geological processes. Examples include atomization in sprays, crushing of rocks, explosion and impact of solids, polymer degradation, etc. Although each individual action of fragmentation is a complex process, the number of these elementary actions is large. It is natural to abstract a simple 'effective' scenario of fragmentation and to represent its essential features. One such model is fragmentation under the scaling symmetry: each breakup action reduces the typical length of fragments, r → αr, by an independent random multiplier α (0 < α < 1), which is governed by the fragmentation intensity spectrum q(α), with ∫₀¹ q(α) dα = 1. This scenario was proposed by Kolmogorov (1941) in his consideration of the breakup of solid carbon particles. Describing the breakup as a random discrete process, Kolmogorov showed that at late times such a process leads to the log-normal distribution. In Gorokhovski & Saveliev, fragmentation under the scaling symmetry was revisited as a continuous evolution process, and new features were established. The objective of this paper is twofold. First, the paper synthesizes and completes the theoretical part of Gorokhovski & Saveliev. Second, the paper shows a new application of the fragmentation theory under scale invariance, concerning the turbulent cascade with intermittency. We formulate a model describing the evolution of the velocity increment distribution along the progressively decreasing length scale. The model shows that as the turbulent length scale gets smaller, the velocity increment distribution develops a growing central peak and stretched tails. The intermittency in turbulence is manifested in the same way: large fluctuations of velocity provoke the highest strain in narrow (dissipative) regions of the flow.
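
    The log-normal limit arises because log r is a sum of independent log α increments, so the central limit theorem applies. A minimal simulation sketch (the Beta(2, 2) choice of q(α) is an arbitrary illustrative assumption, not Kolmogorov's spectrum):

    ```python
    import numpy as np
    from scipy.stats import kurtosis, skew

    rng = np.random.default_rng(1)
    n_fragments, n_breakups = 100_000, 40

    # Each breakup multiplies a fragment's length by an independent alpha in
    # (0, 1) drawn from the intensity spectrum q(alpha); Beta(2, 2) is an
    # arbitrary illustrative choice.
    alphas = rng.beta(2.0, 2.0, size=(n_fragments, n_breakups))
    r = alphas.prod(axis=1)  # initial length r0 = 1

    # log r is a sum of i.i.d. log(alpha) terms, so the central limit theorem
    # drives fragment sizes toward a log-normal distribution.
    log_r = np.log(r)
    print("skewness of log r:       ", skew(log_r))
    print("excess kurtosis of log r:", kurtosis(log_r))
    # Both statistics are near zero, as expected for a normal (hence
    # log-normal in r) limit.
    ```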

  20. Why do Scale-Free Networks Emerge in Nature? From Gradient Networks to Transport Efficiency

    NASA Astrophysics Data System (ADS)

    Toroczkai, Zoltan

    2004-03-01

    It has recently been recognized [1,2,3] that a large number of complex networks are scale-free (having a power-law degree distribution). Examples include citation networks [4], the internet [5], the world-wide-web [6], cellular metabolic networks [7], protein interaction networks [8], the sex-web [9] and alliance networks in the U.S. biotechnology industry [10]. The existence of scale-free networks in such diverse systems suggests that there is a simple underlying common reason for their development. Here, we propose that scale-free networks emerge because they ensure efficient transport of some entity. We show that for flows generated by gradients of a scalar "potential" distributed on a network, non-scale-free networks, e.g., random graphs [11], become maximally congested, while scale-free networks ensure efficient transport in the large network size limit. [1] R. Albert and A.-L. Barabási, Rev.Mod.Phys. 74, 47 (2002). [2] M.E.J. Newman, SIAM Rev. 45, 167 (2003). [3] S.N. Dorogovtsev and J.F.F. Mendes, Evolution of Networks: From Biological Nets to the Internet and WWW, Oxford Univ. Press, Oxford, 2003. [4] S. Redner, Eur.Phys.J. B, 4, 131 (1998). [5] M. Faloutsos, P. Faloutsos and C. Faloutsos Comp.Comm.Rev. 29, 251 (1999). [6] R. Albert, H. Jeong, and A.L. Barabási, Nature 401, 130 (1999). [7] H. Jeong et.al. Nature 407, 651 (2000). [8] H. Jeong, S. Mason, A.-L. Barabási and Z. N. Oltvai, Nature 411, 41 (2001). [9] F. Liljeros et. al. Nature 411 907 (2000). [10] W. W. Powell, D. R. White, K. W. Koput and J. Owen-Smith Am.J.Soc. in press. [11] B. Bollobás, Random Graphs, Second Edition, Cambridge University Press (2001).
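
    The gradient-network construction itself is simple to sketch: assign a random scalar potential to each node of a substrate graph and let every node direct an edge toward its lowest-potential neighbour. The following is a hedged illustration on an Erdős–Rényi substrate (all parameters are arbitrary, and the in-degree summary is only a crude proxy for the congestion analysis in the work above):

    ```python
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(2)
    G = nx.erdos_renyi_graph(n=2000, p=0.005, seed=2)  # substrate network
    h = {v: rng.random() for v in G}                   # random scalar potential

    # Gradient network: each node points to its lowest-potential neighbour,
    # provided that neighbour lies below the node itself.
    grad = nx.DiGraph()
    grad.add_nodes_from(G)
    for v in G:
        nbrs = list(G[v])
        if nbrs:
            t = min(nbrs, key=h.get)
            if h[t] < h[v]:
                grad.add_edge(v, t)

    # In-degree distribution of the gradient network; the fraction of nodes
    # receiving no flow is one crude congestion indicator.
    indeg = [d for _, d in grad.in_degree()]
    print("max in-degree:", max(indeg))
    print("fraction of non-receiving nodes:", round(indeg.count(0) / len(indeg), 3))
    ```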

  1. Bilateral robotic priming before task-oriented approach in subacute stroke rehabilitation: a pilot randomized controlled trial.

    PubMed

    Hsieh, Yu-Wei; Wu, Ching-Yi; Wang, Wei-En; Lin, Keh-Chung; Chang, Ku-Chou; Chen, Chih-Chi; Liu, Chien-Ting

    2017-02-01

    To investigate the treatment effects of bilateral robotic priming combined with the task-oriented approach on motor impairment, disability, daily function, and quality of life in patients with subacute stroke. A randomized controlled trial. Occupational therapy clinics in medical centers. Thirty-one subacute stroke patients were recruited. Participants were randomly assigned to receive bilateral priming combined with the task-oriented approach (i.e., primed group) or the task-oriented approach alone (i.e., unprimed group) for 90 minutes/day, 5 days/week, for 4 weeks. The primed group began with the bilateral priming technique, delivered with a bimanual robot-aided device. Motor impairments were assessed by the Fugl-Meyer Assessment, grip strength, and the Box and Block Test. Disability and daily function were measured by the modified Rankin Scale, the Functional Independence Measure, and actigraphy. Quality of life was examined by the Stroke Impact Scale. The primed and unprimed groups improved significantly on most outcomes over time. The primed group demonstrated significantly better improvement on the Stroke Impact Scale strength subscale (p = 0.012) and a trend toward greater improvement on the modified Rankin Scale (p = 0.065) than the unprimed group. Bilateral priming combined with the task-oriented approach elicited greater improvements in self-reported strength and degree of disability than the task-oriented approach by itself. Further large-scale research with at least 31 participants in each intervention group is suggested to confirm the study findings.

  2. Quantifying Biomass from Point Clouds by Connecting Representations of Ecosystem Structure

    NASA Astrophysics Data System (ADS)

    Hendryx, S. M.; Barron-Gafford, G.

    2017-12-01

    Quantifying terrestrial ecosystem biomass is an essential part of monitoring carbon stocks and fluxes within the global carbon cycle and optimizing natural resource management. Point cloud data, such as from lidar and structure from motion, can be effective for quantifying biomass over large areas, but significant challenges remain in developing effective models that allow for such predictions. Inference models that estimate biomass from point clouds are established in many environments yet are often scale-dependent, needing to be fitted and applied at the same spatial scale and grid size at which they were developed. Furthermore, training such models typically requires large in situ datasets that are often prohibitively costly or time-consuming to obtain. We present here a scale- and sensor-invariant framework for efficiently estimating biomass from point clouds. Central to this framework, we present a new algorithm, assignPointsToExistingClusters, developed for finding matches between in situ data and clusters in remotely sensed point clouds. The algorithm can be used for assessing canopy segmentation accuracy and for training and validating machine learning models for predicting biophysical variables. We demonstrate the algorithm's efficacy by using it to train a random forest model of above-ground biomass in a shrubland environment in Southern Arizona. We show that by learning a nonlinear function to estimate biomass from segmented canopy features we can reduce error, especially in the presence of inaccurate clusterings, compared to a traditional, deterministic technique for estimating biomass from remotely measured canopies. Our random-forest-on-cluster-features model extends established methods of training random forest regressions to predict biomass of subplots but requires significantly less training data and is scale invariant. The model reduced mean absolute error, evaluated on all test data in leave-one-out cross-validation, by 40.6% relative to deterministic mesquite allometry and 35.9% relative to the inferred ecosystem-state allometric function. Our framework should allow for the inference of biomass more efficiently than common subplot methods and more accurately than individual tree segmentation methods in densely vegetated environments.
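
    The assignPointsToExistingClusters algorithm itself is not spelled out in the abstract; the sketch below illustrates the general workflow it supports, under stated assumptions: in situ points are matched to the nearest segmented-canopy centroid within a distance cutoff (a stand-in for the actual matching rule), and a random forest is then trained on the matched clusters' features using toy allometric data:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(3)

    # Hypothetical segmented canopies: centroid (x, y) plus height and area.
    n_clusters = 300
    centroids = rng.uniform(0, 100, size=(n_clusters, 2))
    features = np.column_stack([rng.uniform(0.5, 4.0, n_clusters),    # height, m
                                rng.uniform(1.0, 30.0, n_clusters)])  # area, m^2

    # In situ biomass measurements taken near (not exactly at) some canopies,
    # generated here from a toy allometry plus noise.
    n_insitu = 120
    true_idx = rng.choice(n_clusters, n_insitu, replace=False)
    insitu_xy = centroids[true_idx] + rng.normal(0, 0.5, size=(n_insitu, 2))
    biomass = (0.8 * features[true_idx, 0] * features[true_idx, 1]
               + rng.normal(0, 2, n_insitu))

    # Match each in situ point to its nearest cluster centroid, with a
    # distance cutoff standing in for the paper's matching algorithm.
    dist, idx = cKDTree(centroids).query(insitu_xy)
    ok = dist < 2.0  # metres

    rf = RandomForestRegressor(n_estimators=300, random_state=3)
    rf.fit(features[idx[ok]], biomass[ok])
    print("matched:", int(ok.sum()), "of", n_insitu)
    print("training R^2 (use cross-validation in practice):",
          round(rf.score(features[idx[ok]], biomass[ok]), 3))
    ```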

  3. Energy transfer, pressure tensor, and heating of kinetic plasma

    NASA Astrophysics Data System (ADS)

    Yang, Yan; Matthaeus, William H.; Parashar, Tulasi N.; Haggerty, Colby C.; Roytershteyn, Vadim; Daughton, William; Wan, Minping; Shi, Yipeng; Chen, Shiyi

    2017-07-01

    The kinetic plasma turbulence cascade spans multiple scales, ranging from macroscopic fluid flow to sub-electron scales. The mechanisms that dissipate large-scale energy, terminate the inertial-range cascade, and convert kinetic energy into heat are hotly debated. Here, we revisit these puzzles using fully kinetic simulation. By performing scale-dependent spatial filtering on the Vlasov equation, we extract information at prescribed scales and introduce several energy transfer functions. This approach allows the highly inhomogeneous energy cascade to be quantified as it proceeds down to kinetic scales. The pressure work, −(P · ∇) · u, opens a channel of energy conversion between fluid flow and random motions, which contains a collision-free generalization of the viscous dissipation in collisional fluids. Both the energy transfer and the pressure work are strongly correlated with velocity gradients.
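
    The pressure-work density can be evaluated pointwise from gridded simulation output with finite differences. A hedged sketch on a synthetic two-dimensional field, taking (P · ∇) · u to mean P_ij ∂_i u_j (the index convention, fields, and grid are all illustrative assumptions):

    ```python
    import numpy as np

    n = 128
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    dx = x[1] - x[0]

    # Toy velocity field u_j(x, y) and symmetric pressure tensor P_ij.
    u = np.stack([np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y)])
    P = np.empty((2, 2, n, n))
    P[0, 0] = 1.0 + 0.1 * np.cos(X)
    P[1, 1] = 1.0 + 0.1 * np.sin(Y)
    P[0, 1] = P[1, 0] = 0.05 * np.sin(X + Y)

    # grad_u[i, j] = d u_j / d x_i, by centered differences via np.gradient.
    grad_u = np.stack([np.stack(np.gradient(u[j], dx, dx)) for j in range(2)],
                      axis=1)

    # Pressure-work density: -(P . grad) . u = -P_ij d_i u_j.
    pw = -np.einsum("ijxy,ijxy->xy", P, grad_u)
    print("domain-averaged pressure work:", pw.mean())
    ```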

  4. Applications of random forest feature selection for fine-scale genetic population assignment.

    PubMed

    Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G

    2018-02-01

    Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest, and guided regularized random forest) compared with FST ranking for selection of single nucleotide polymorphisms (SNPs) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon (Salmo salar) and a published SNP data set for Alaskan Chinook salmon (Oncorhynchus tshawytscha). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90%, using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than FST-selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for the two data sets, respectively, a level of accuracy never reached for these species using FST-selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for management and conservation of exploited populations.
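
    A minimal sketch of the general workflow, assuming toy genotype data and plain random forest importance in place of the regularized and guided variants used in the study (note the deliberate simplification: ranking on the full data set before cross-validation leaks information, so a rigorous analysis would re-rank within each training fold):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)

    # Toy genotypes: 600 individuals, 3 populations, 2000 SNPs coded 0/1/2;
    # only the first 60 loci carry population signal.
    n, n_snp, n_info = 600, 2000, 60
    pop = rng.integers(0, 3, n)
    X = rng.integers(0, 3, size=(n, n_snp)).astype(float)
    X[:, :n_info] += 0.8 * pop[:, None]  # informative loci drift with population

    # Rank SNPs by random forest importance on the full panel.
    rf = RandomForestClassifier(n_estimators=500, random_state=4).fit(X, pop)
    order = np.argsort(rf.feature_importances_)[::-1]

    # Self-assignment accuracy as a function of panel size.
    for k in (50, 100, 200, 400):
        clf = RandomForestClassifier(n_estimators=300, random_state=4)
        acc = cross_val_score(clf, X[:, order[:k]], pop, cv=5).mean()
        print(f"panel of {k:3d} SNPs: self-assignment accuracy {acc:.3f}")
    ```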

  5. Programmability of nanowire networks

    NASA Astrophysics Data System (ADS)

    Bellew, A. T.; Bell, A. P.; McCarthy, E. K.; Fairfield, J. A.; Boland, J. J.

    2014-07-01

    Electrical connectivity in networks of nanoscale junctions must be better understood if nanowire devices are to be scaled up from single wires to functional material systems. We show that the natural connectivity behaviour found in random nanowire networks presents a new paradigm for creating multi-functional, programmable materials. In devices made from networks of Ni/NiO core-shell nanowires at different length scales, we discover the emergence of distinct behavioural regimes when networks are electrically stressed. We show that a small network, with few nanowire-nanowire junctions, acts as a unipolar resistive switch, demonstrating very high ON/OFF current ratios (>10⁵). However, large networks of nanowires distribute an applied bias across a large number of junctions, and thus respond not by switching but instead by evolving connectivity. We demonstrate that these emergent properties lead to fault-tolerant materials whose resistance may be tuned, and which are capable of adaptively reconfiguring under stress. By combining these two behavioural regimes, we demonstrate that the same nanowire network may be programmed to act both as a metallic interconnect and as a resistive switch device with high ON/OFF ratio. These results enable the fabrication of programmable, multi-functional materials from random nanowire networks.

  6. Random multispace quantization as an analytic mechanism for BioHashing of biometric and random identity inputs.

    PubMed

    Teoh, Andrew B J; Goh, Alwyn; Ngo, David C L

    2006-12-01

    Biometric analysis for identity verification is becoming a widespread reality. Such implementations necessitate large-scale capture and storage of biometric data, which raises serious issues in terms of data privacy and (if such data is compromised) identity theft. These problems stem from the essential permanence of biometric data, which (unlike secret passwords or physical tokens) cannot be refreshed or reissued if compromised. Our previously presented biometric-hash framework prescribes the integration of external (password or token-derived) randomness with user-specific biometrics, resulting in bitstring outputs with security characteristics (i.e., noninvertibility) comparable to cryptographic ciphers or hashes. The resultant BioHashes are hence cancellable, i.e., straightforwardly revoked and reissued (via refreshed password or reissued token) if compromised. BioHashing furthermore enhances recognition effectiveness, which is explained in this paper as arising from the Random Multispace Quantization (RMQ) of biometric and external random inputs.
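
    The core of the RMQ construction can be sketched as a token-seeded random projection followed by sign quantization; the sketch below is a simplified illustration under assumed parameters (feature dimension, bit length, and threshold are all arbitrary), not the authors' exact formulation:

    ```python
    import numpy as np

    def biohash(feature_vec, token_seed, n_bits=64, tau=0.0):
        """Project onto a token-seeded random orthonormal basis, threshold to bits."""
        rng = np.random.default_rng(token_seed)
        R = rng.normal(size=(len(feature_vec), n_bits))
        Q, _ = np.linalg.qr(R)          # orthonormalize the random directions
        return (feature_vec @ Q > tau).astype(np.uint8)

    rng = np.random.default_rng(5)
    user_template = rng.normal(size=256)                      # enrolled features
    probe_same = user_template + 0.3 * rng.normal(size=256)   # noisy re-capture
    probe_other = rng.normal(size=256)                        # different person

    h_enroll = biohash(user_template, token_seed=1234)
    # Hamming distances: low for the genuine user, near 0.5 otherwise.
    print("same user, same token:  ", np.mean(h_enroll != biohash(probe_same, 1234)))
    print("other user, same token: ", np.mean(h_enroll != biohash(probe_other, 1234)))
    # Revocation: a new token seed yields an uncorrelated, reissued BioHash.
    print("same user, new token:   ", np.mean(h_enroll != biohash(user_template, 999)))
    ```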

  7. Crack surface roughness in three-dimensional random fuse networks

    NASA Astrophysics Data System (ADS)

    Nukala, Phani Kumar V. V.; Zapperi, Stefano; Šimunović, Srđan

    2006-08-01

    Using large system sizes with extensive statistical sampling, we analyze the scaling properties of crack roughness and damage profiles in the three-dimensional random fuse model. The analysis of damage profiles indicates that damage accumulates in a diffusive manner up to the peak load, and localization sets in abruptly at the peak load, starting from a uniform damage landscape. The global crack width scales as W ~ L^0.5 and is consistent with the scaling of the localization length ξ ~ L^0.5 used in the data collapse of damage profiles in the postpeak regime. This consistency between the global crack roughness exponent and the postpeak damage profile localization length supports the idea that the postpeak damage profile is predominantly due to the localization produced by the catastrophic failure, which at the same time results in the formation of the final crack. Finally, the crack width distributions can be collapsed for different system sizes and follow a log-normal distribution.

  8. Void statistics, scaling, and the origins of large-scale structure

    NASA Technical Reports Server (NTRS)

    Fry, J. N.; Giovanelli, Riccardo; Haynes, Martha P.; Melott, Adrian L.; Scherrer, Robert J.

    1989-01-01

    The probability that a volume of the universe of given size and shape spaced at random will be void of galaxies is used here to study various models of the origin of cosmological structures. Numerical simulations are conducted on hot-particle and cold-particle-modulated inflationary models with and without biasing, on isothermal or initially Poisson models, and on models where structure is seeded by loops of cosmic string. For the Pisces-Perseus redshift compilation of Giovanelli and Haynes (1985), it is found that hierarchical scaling is obeyed for subsamples constructed with different limiting magnitudes and subsamples taken at random. This result confirms that the hierarchical ansatz holds valid to high order and supports the idea that structure in the observed universe evolves by a regular process from an almost Gaussian primordial state. Neutrino models without biasing show the effect of a strong feature in the initial power spectrum. Cosmic string models do not agree well with the galaxy data.

  9. Scaling of flow distance in random self-similar channel networks

    USGS Publications Warehouse

    Troutman, B.M.

    2005-01-01

    Natural river channel networks have been shown in empirical studies to exhibit power-law scaling behavior characteristic of self-similar and self-affine structures. Of particular interest is to describe how the distribution of distance to the outlet changes as a function of network size. In this paper, networks are modeled as random self-similar rooted tree graphs and scaling of distance to the root is studied using methods in stochastic branching theory. In particular, the asymptotic expectation of the width function (number of nodes as a function of distance to the outlet) is derived under conditions on the replacement generators. It is demonstrated further that the branching number describing rate of growth of node distance to the outlet is identical to the length ratio under a Horton-Strahler ordering scheme as order gets large, again under certain restrictions on the generators. These results are discussed in relation to drainage basin allometry and an application to an actual drainage network is presented. © World Scientific Publishing Company.

  10. Quantitative Serum Nuclear Magnetic Resonance Metabolomics in Large-Scale Epidemiology: A Primer on -Omic Technologies

    PubMed Central

    Kangas, Antti J; Soininen, Pasi; Lawlor, Debbie A; Davey Smith, George; Ala-Korpela, Mika

    2017-01-01

    Detailed metabolic profiling in large-scale epidemiologic studies has uncovered novel biomarkers for cardiometabolic diseases and clarified the molecular associations of established risk factors. A quantitative metabolomics platform based on nuclear magnetic resonance spectroscopy has found widespread use, already profiling over 400,000 blood samples. Over 200 metabolic measures are quantified per sample; in addition to many biomarkers routinely used in epidemiology, the method simultaneously provides fine-grained lipoprotein subclass profiling and quantification of circulating fatty acids, amino acids, gluconeogenesis-related metabolites, and many other molecules from multiple metabolic pathways. Here we focus on applications of magnetic resonance metabolomics for quantifying circulating biomarkers in large-scale epidemiology. We highlight the molecular characterization of risk factors, use of Mendelian randomization, and the key issues of study design and analyses of metabolic profiling for epidemiology. We also detail how integration of metabolic profiling data with genetics can enhance drug development. We discuss why quantitative metabolic profiling is becoming widespread in epidemiology and biobanking. Although large-scale applications of metabolic profiling are still novel, it seems likely that comprehensive biomarker data will contribute to etiologic understanding of various diseases and abilities to predict disease risks, with the potential to translate into multiple clinical settings. PMID:29106475

  11. A Survey of Residents' Perceptions of the Effect of Large-Scale Economic Developments on Perceived Safety, Violence, and Economic Benefits

    PubMed Central

    Geller, Ruth; Bear, Todd M.; Foulds, Abigail L.; Duell, Jessica; Sharma, Ravi

    2015-01-01

    Background. Emerging research highlights the promise of community- and policy-level strategies in preventing youth violence. Large-scale economic developments, such as sports and entertainment arenas and casinos, may improve the living conditions, economics, public health, and overall wellbeing of area residents, and may influence rates of violence within communities. Objective. To assess the effect of community economic development efforts on neighborhood residents' perceptions of violence, safety, and economic benefits. Methods. Telephone survey in 2011 using a listed sample of randomly selected numbers in six Pittsburgh neighborhoods. Descriptive analyses examined measures of perceived violence, safety, and economic benefit. Responses were compared across neighborhoods using chi-square tests for multiple comparisons. Survey results were compared to census and police data. Results. Residents of neighborhoods with large-scale economic developments reported more casino-specific and arena-specific economic benefits. However, 42% of participants in the neighborhood with the entertainment arena felt there was an increase in crime, and 29% of respondents from the neighborhood with the casino felt there was an increase. In contrast, crime decreased in both neighborhoods. Conclusions. Large-scale economic developments have a direct influence on the perception of violence, despite actual violence rates. PMID:26273310

  12. How Can the Evidence from Global Large-scale Clinical Trials for Cardiovascular Diseases be Improved?

    PubMed

    Sawata, Hiroshi; Tsutani, Kiichiro

    2011-06-29

    Clinical investigations are important for obtaining evidence to improve medical treatment. Large-scale clinical trials with thousands of participants are particularly important for this purpose in cardiovascular diseases. Conducting large-scale clinical trials entails high research costs. This study sought to investigate global trends in large-scale clinical trials in cardiovascular diseases. We searched for trials using clinicaltrials.gov (URL: http://www.clinicaltrials.gov/) using the key words 'cardio' and 'event' in all fields on 10 April 2010. We then selected trials with 300 or more participants examining cardiovascular diseases. The search revealed 344 trials that met our criteria. Of the 344 trials, 71% were randomized controlled trials, 15% involved more than 10,000 participants, and 59% were funded by industry. Among randomized controlled trials whose results were disclosed, 55% of industry-funded trials and 25% of non-industry-funded trials reported statistically significant superiority over control (p = 0.012, two-sided Fisher's exact test). Our findings highlight concerns regarding potential bias related to funding sources and indicate that researchers should be aware of the importance of trial information disclosure and conflicts of interest. Continued attention to the management of, and training in, information disclosure and conflicts of interest is needed; this could lead to better clinical evidence and further improvements in the development of medical treatment worldwide.

  13. Scaling earthquake ground motions for performance-based assessment of buildings

    USGS Publications Warehouse

    Huang, Y.-N.; Whittaker, A.S.; Luco, N.; Hamburger, R.O.

    2011-01-01

    The impact of alternate ground-motion scaling procedures on the distribution of displacement responses in simplified structural systems is investigated. Recommendations are provided for selecting and scaling ground motions for performance-based assessment of buildings. Four scaling methods are studied, namely, (1) geometric-mean scaling of pairs of ground motions, (2) spectrum matching of ground motions, (3) first-mode-period scaling to a target spectral acceleration, and (4) scaling of ground motions per the distribution of spectral demands. Data were developed by nonlinear response-history analysis of a large family of nonlinear single degree-of-freedom (SDOF) oscillators that could represent fixed-base and base-isolated structures. The advantages and disadvantages of each scaling method are discussed. The relationship between spectral shape and a ground-motion randomness parameter is presented. A scaling procedure that explicitly considers spectral shape is proposed. © 2011 American Society of Civil Engineers.

  14. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
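
    The benefit of robust regression is easy to demonstrate in miniature: a small fraction of gross, structured artifacts can drag an ordinary least-squares slope far from the truth, while a Huber-type estimator largely resists. A hedged sketch with synthetic data (scikit-learn's HuberRegressor stands in for the specific robust estimator used in the study):

    ```python
    import numpy as np
    from sklearn.linear_model import HuberRegressor, LinearRegression

    rng = np.random.default_rng(6)

    # Toy brain-behavior regression: one covariate with a true effect of 0.5,
    # plus a small fraction of gross, structured artifacts.
    n = 1500
    x = rng.normal(size=(n, 1))
    y = 0.5 * x[:, 0] + rng.normal(size=n)
    bad = rng.choice(n, size=60, replace=False)
    y[bad] -= 12.0 * x[bad, 0]          # corrupted observations

    ols = LinearRegression().fit(x, y)
    huber = HuberRegressor().fit(x, y)
    print(f"true slope 0.50 | OLS {ols.coef_[0]:+.3f} | Huber {huber.coef_[0]:+.3f}")
    # The Huber fit largely resists the 4% contamination that drags OLS
    # far from the true slope.
    ```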

  15. Random sphere packing model of heterogeneous propellants

    NASA Astrophysics Data System (ADS)

    Kochevets, Sergei Victorovich

    It is well recognized that combustion of heterogeneous propellants is strongly dependent on the propellant morphology. Recent developments in computing systems make it possible to start three-dimensional modeling of heterogeneous propellant combustion. A key component of such large scale computations is a realistic model of industrial propellants which retains the true morphology, a goal never achieved before. The research presented develops the Random Sphere Packing Model of heterogeneous propellants and generates numerical samples of actual industrial propellants. This is done by developing a sphere packing algorithm which randomly packs a large number of spheres with a polydisperse size distribution within a rectangular domain. First, the packing code is developed, optimized for performance, and parallelized using the OpenMP shared memory architecture. Second, the morphology and packing fraction of two simple cases of unimodal and bimodal packs are investigated computationally and analytically. It is shown that both the Loose Random Packing and Dense Random Packing limits are not well defined, and the growth rate of the spheres is identified as the key parameter controlling the efficiency of the packing. For a properly chosen growth rate, computational results are found to be in excellent agreement with experimental data. Third, two strategies are developed to define numerical samples of polydisperse heterogeneous propellants: the Deterministic Strategy and the Random Selection Strategy. Using these strategies, numerical samples of industrial propellants are generated. The packing fraction is investigated and it is shown that the experimental values of the packing fraction can be achieved computationally. It is strongly believed that this Random Sphere Packing Model of propellants is a major step forward in the realistic computational modeling of heterogeneous propellant combustion. In addition, a method of analysis of the morphology of heterogeneous propellants is developed which uses the concept of multi-point correlation functions. A set of intrinsic length scales of local density fluctuations in random heterogeneous propellants is identified by performing a Monte-Carlo study of the correlation functions. This method of analysis shows great promise for understanding the origins of the combustion instability of heterogeneous propellants, and is believed to become a valuable tool for the development of safe and reliable rocket engines.
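
    A minimal random-sequential-addition sketch conveys the flavor of the packing problem, though not the thesis's growth-rate mechanics or parallelization: draw polydisperse radii, attempt random placements in a rectangular domain, and accept only non-overlapping spheres (all parameters are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    box = np.array([10.0, 10.0, 10.0])      # rectangular domain
    radii_pool = rng.uniform(0.2, 0.6, 600) # polydisperse size distribution
    radii_pool[::-1].sort()                 # place larger spheres first

    centers, radii = [], []
    for r in radii_pool:
        for _ in range(50):                 # placement attempts per sphere
            c = rng.uniform(r, box - r)     # keep the sphere inside the box
            if not centers or np.all(
                    np.linalg.norm(np.array(centers) - c, axis=1)
                    >= np.array(radii) + r):
                centers.append(c)
                radii.append(r)
                break

    volume = (4 / 3) * np.pi * np.sum(np.array(radii) ** 3)
    print(f"placed {len(radii)} of {len(radii_pool)} spheres; "
          f"packing fraction {volume / box.prod():.3f}")
    ```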

  16. Large-scale randomized clinical trials of bioactives and nutrients in relation to human health and disease prevention - Lessons from the VITAL and COSMOS trials.

    PubMed

    Rautiainen, Susanne; Sesso, Howard D; Manson, JoAnn E

    2017-12-29

    Several bioactive compounds and nutrients in foods have physiological properties that are beneficial for human health. While nutrients typically have clear definitions with established levels of recommended intakes, bioactive compounds often lack such a definition. Although a food-based approach is often the optimal approach to ensure adequate intake of bioactives and nutrients, these components are also often produced as dietary supplements. However, many of these supplements are not sufficiently studied and have an unclear role in chronic disease prevention. Randomized trials are considered the gold standard of study designs, but have not been fully applied to understand the effects of bioactives and nutrients. We review the specific role of large-scale trials to test whether bioactives and nutrients have an effect on health outcomes through several crucial components of trial design, including selection of intervention, recruitment, compliance, outcome selection, and interpretation and generalizability of study findings. We will discuss these components in the context of two randomized clinical trials, the VITamin D and OmegA-3 TriaL (VITAL) and the COcoa Supplement and Multivitamin Outcomes Study (COSMOS). We will mainly focus on dietary supplements of bioactives and nutrients while also emphasizing the need for translation and integration with food-based trials that are of vital importance within nutritional research. Copyright © 2017. Published by Elsevier Ltd.

  17. Large scale Brownian dynamics of confined suspensions of rigid particles

    NASA Astrophysics Data System (ADS)

    Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A.; Donev, Aleksandar

    2017-12-01

    We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose height above the wall is set by a combination of thermal noise and active flows. We find the existence of two populations of active particles, slower ones closer to the bottom and faster ones above them, and demonstrate that our method provides quantitative accuracy even with relatively coarse resolutions of the particle geometry.
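
    The random finite difference (RFD) idea is easiest to see in one dimension, where the configuration-dependent drift reduces to M′(q): averaging a centered difference of the mobility along random displacements recovers the derivative without resistance solves or analytic differentiation. A toy sketch with an assumed scalar mobility (the RPY machinery of the paper is far richer):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def M(q):                 # toy scalar "mobility", positive and q-dependent
        return 1.0 + 0.5 * np.sin(q)

    q, delta, n_samples = 0.7, 1e-4, 200_000
    W = rng.normal(size=n_samples)

    # Random finite difference: E[(M(q + dW/2) - M(q - dW/2)) W] / d -> M'(q)
    # as d -> 0, the 1-D analogue of the divergence-of-mobility drift term.
    rfd = np.mean((M(q + delta * W / 2) - M(q - delta * W / 2)) * W) / delta
    print("RFD estimate:", rfd, "| exact M'(q):", 0.5 * np.cos(q))
    ```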

  18. Quantum Entanglement in Random Physical States

    NASA Astrophysics Data System (ADS)

    Hamma, Alioscia; Santra, Siddhartha; Zanardi, Paolo

    2012-07-01

    Most states in the Hilbert space are maximally entangled. This fact has proven useful to investigate, among other things, the foundations of statistical mechanics. Unfortunately, most states in the Hilbert space of a quantum many-body system are not physically accessible. We define physical ensembles of states obtained by acting on random factorized states with a circuit of length k of random and independent unitaries with local support. We study the typicality of entanglement by means of the purity of the reduced state. We find that for a time k=O(1), the typical purity obeys the area law. Thus, the upper bounds for the area law are actually saturated, on average, with a variance that goes to zero for large systems. Similarly, we prove that by means of local evolution a subsystem of linear dimensions L is typically entangled with a volume law when the time scales with the size of the subsystem. Moreover, we show that for large values of k the reduced state becomes very close to the completely mixed state.
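
    The crossover of the purity under a random local circuit can be reproduced numerically for a small chain. The sketch below (system size, depth, and the brickwork layout are illustrative choices) applies Haar-random two-qubit gates to a factorized state and tracks the half-chain purity, which decays from 1 toward its Haar value as depth grows:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n = 8                                  # qubits; half-chain dimension 2**4

    def haar_unitary(d):
        z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        q, r = np.linalg.qr(z)
        return q * (np.diag(r) / np.abs(np.diag(r)))  # phase fix for Haar measure

    def apply_2q(psi, u, i):
        """Apply a two-qubit gate u to qubits (i, i+1) of the state tensor."""
        psi = np.tensordot(u.reshape(2, 2, 2, 2), psi, axes=([2, 3], [i, i + 1]))
        return np.moveaxis(psi, [0, 1], [i, i + 1])

    def half_chain_purity(psi):
        m = psi.reshape(2 ** (n // 2), -1)
        rho = m @ m.conj().T               # reduced state of the first n/2 qubits
        return float(np.real(np.trace(rho @ rho)))

    psi = np.zeros((2,) * n, dtype=complex)
    psi[(0,) * n] = 1.0                    # factorized initial state |00...0>

    for depth in range(1, 9):              # brickwork circuit of random gates
        for i in range((depth - 1) % 2, n - 1, 2):
            psi = apply_2q(psi, haar_unitary(4), i)
        print(f"depth {depth}: half-chain purity {half_chain_purity(psi):.4f}")
    # Purity drops from 1 toward the Haar (Page) value as depth grows,
    # tracing the crossover from area-law to volume-law entanglement.
    ```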

  19. Variational approach to probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1991-01-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  20. Variational approach to probabilistic finite elements

    NASA Astrophysics Data System (ADS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1991-08-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  1. Variational approach to probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, T.; Liu, W. K.; Mani, A.; Besterfield, G.

    1987-01-01

    Probabilistic finite element methods (PFEM), synthesizing the power of finite element methods with second-moment techniques, are formulated for various classes of problems in structural and solid mechanics. Time-invariant random materials, geometric properties, and loads are incorporated in terms of their fundamental statistics viz. second-moments. Analogous to the discretization of the displacement field in finite element methods, the random fields are also discretized. Preserving the conceptual simplicity, the response moments are calculated with minimal computations. By incorporating certain computational techniques, these methods are shown to be capable of handling large systems with many sources of uncertainties. By construction, these methods are applicable when the scale of randomness is not very large and when the probabilistic density functions have decaying tails. The accuracy and efficiency of these methods, along with their limitations, are demonstrated by various applications. Results obtained are compared with those of Monte Carlo simulation and it is shown that good accuracy can be obtained for both linear and nonlinear problems. The methods are amenable to implementation in deterministic FEM based computer codes.

  2. Estimating the impact of mineral aerosols on crop yields in food insecure regions using statistical crop models

    NASA Astrophysics Data System (ADS)

    Hoffman, A.; Forest, C. E.; Kemanian, A.

    2016-12-01

    A significant number of food-insecure nations exist in regions of the world where dust plays a large role in the climate system. While the impacts of common climate variables (e.g. temperature, precipitation, ozone, and carbon dioxide) on crop yields are relatively well understood, the impact of mineral aerosols on yields has not yet been thoroughly investigated. This research aims to develop the data and tools to advance our understanding of mineral aerosol impacts on crop yields. Suspended dust affects crop yields by altering the amount and type of radiation reaching the plant and by modifying local temperature and precipitation. Dust events (i.e. dust storms), by contrast, affect crop yields by depleting the soil of nutrients or by defoliation via particle abrasion. The impact of dust on yields is modeled statistically because we are uncertain which impacts will dominate the response on the national and regional scales considered in this study. Multiple linear regression is used in a number of large-scale statistical crop modeling studies to estimate yield responses to various climate variables. In alignment with previous work, we develop linear crop models, but build upon this simple method of regression with machine-learning techniques (e.g. random forests) to identify important statistical predictors and isolate how dust affects yields on the scales of interest. To perform this analysis, we develop a crop-climate dataset for maize, soybean, groundnut, sorghum, rice, and wheat for the regions of West Africa, East Africa, South Africa, and the Sahel. Random forest regression models consistently model historic crop yields better than the linear models. In several instances, the random forest models accurately capture the temperature and precipitation threshold behavior in crops. Additionally, improving agricultural technology has caused a well-documented positive trend that dominates time series of global and regional yields. This trend is often removed before regression with traditional crop models, but likely at the cost of removing climate information. Our random forest models consistently discover the positive trend without removing any additional data. The application of random forests as a statistical crop model provides insight into understanding the impact of dust on yields in marginal food producing regions.

  3. Self-avoiding walks on scale-free networks

    NASA Astrophysics Data System (ADS)

    Herrero, Carlos P.

    2005-01-01

    Several kinds of walks on complex networks are currently used to analyze search and navigation in different systems. Many analytical and computational results are known for random walks on such networks. Self-avoiding walks (SAWs) are expected to be more suitable than unrestricted random walks to explore various kinds of real-life networks. Here we study long-range properties of random SAWs on scale-free networks, characterized by a degree distribution P(k) ~ k^(−γ). In the limit of large networks (system size N → ∞), the average number s_n of SAWs starting from a generic site increases as μ^n, with μ = ⟨k²⟩/⟨k⟩ − 1. For finite N, s_n is reduced due to the presence of loops in the network, which causes the emergence of attrition of the paths. For kinetic growth walks, the average maximum length increases as a power of the system size, ~ N^α, with an exponent α increasing as the parameter γ is raised. We discuss the dependence of α on the minimum allowed degree in the network. A similar power-law dependence is found for the mean self-intersection length of nonreversal random walks. Simulation results support our approximate analytical calculations.
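
    A kinetic growth walk is straightforward to simulate: hop to a uniformly random unvisited neighbour until trapped. A hedged sketch on Barabási–Albert graphs (a convenient scale-free substrate with γ ≈ 3; network sizes and trial counts are arbitrary), from which α can be estimated by fitting log mean length against log N:

    ```python
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(10)

    def kinetic_growth_walk(G, start):
        """Self-avoiding walk moving to a random unvisited neighbour
        until trapped; returns the walk length."""
        visited = {start}
        node, length = start, 0
        while True:
            choices = [v for v in G[node] if v not in visited]
            if not choices:
                return length
            node = choices[rng.integers(len(choices))]
            visited.add(node)
            length += 1

    for N in (500, 1000, 2000, 4000):
        G = nx.barabasi_albert_graph(N, m=3, seed=0)  # scale-free substrate
        lengths = [kinetic_growth_walk(G, int(rng.integers(N)))
                   for _ in range(400)]
        print(f"N={N:5d}: mean trapping length {np.mean(lengths):7.1f}")
    # Fitting log<L> against log N estimates the exponent alpha in <L> ~ N^alpha.
    ```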

  4. Dynamics of hot random quantum spin chains: from anyons to Heisenberg spins

    NASA Astrophysics Data System (ADS)

    Parameswaran, Siddharth; Potter, Andrew; Vasseur, Romain

    2015-03-01

    We argue that the dynamics of the random-bond Heisenberg spin chain are ergodic at infinite temperature, in contrast to the many-body localized behavior seen in its random-field counterpart. First, we show that excited-state real-space renormalization group (RSRG-X) techniques suffer from a fatal breakdown of perturbation theory due to the proliferation of large effective spins that grow without bound. We repair this problem by deforming the SU(2) symmetry of the Heisenberg chain to its 'anyonic' version, SU(2)_k, where the growth of effective spins is truncated at spin S = k/2. This enables us to construct a self-consistent RSRG-X scheme that is particularly simple at infinite temperature. Solving the flow equations, we compute the excited-state entanglement and show that it crosses over from volume-law to logarithmic scaling at a length scale ξ_k ~ exp(αk³). This reveals that (a) anyon chains have random-singlet-like excited states for any finite k; and (b) ergodicity is restored in the Heisenberg limit k → ∞. We acknowledge support from the Quantum Materials program of LBNL (RV), the Gordon and Betty Moore Foundation (ACP), and UC Irvine startup funds (SAP).

  5. Research on the impacts of large-scale electric vehicles integration into power grid

    NASA Astrophysics Data System (ADS)

    Su, Chuankun; Zhang, Jian

    2018-06-01

    Because of their distinctive energy supply mode, electric vehicles can improve the efficiency of energy utilization and reduce environmental pollution, and they are therefore attracting growing attention. Their charging behavior, however, is random and intermittent. Uncoordinated charging of electric vehicles on a large scale places great stress on the structure and operation of the power grid and compromises its safe and economic operation. With the development of vehicle-to-grid (V2G) technology, the study of the charging and discharging characteristics of electric vehicles is of great significance for improving the safe operation of the power grid and the efficiency of energy utilization.

  6. Nucleation versus percolation: Scaling criterion for failure in disordered solids

    NASA Astrophysics Data System (ADS)

    Biswas, Soumyajyoti; Roy, Subhadeep; Ray, Purusattam

    2015-05-01

    One of the major factors governing the mode of failure in disordered solids is the effective range R over which the stress field is modified following a local rupture event. In a random fiber bundle model, considered as a prototype of disordered solids, we show that the failure mode is nucleation dominated in the large system size limit, as long as R scales slower than L^ζ, with ζ = 2/3. For a faster increase in R, the failure properties are dominated by the mean-field critical point, where the damages are uncorrelated in space. In that limit, precursory avalanches of all sizes are obtained even in the large system size limit. We expect these results to be valid for systems with finite (normalizable) disorder.

  7. Production of large-scale, freestanding vanadium pentoxide nanobelt porous structures

    NASA Astrophysics Data System (ADS)

    Yun, Yong Ju; Kim, Byung Hoon; Hong, Won G.; Kim, Chang Hee; Kim, Yark Yeon; Jeong, Eun-Ju; Jang, Won Ick; Yu, Han Young

    2012-02-01

    Large-scale, freestanding, porous structures of vanadium pentoxide nanobelts (VPNs) were successfully prepared using a template-free freeze-drying method. The porous, multi-layered VPN macrostructures are composed of randomly oriented long nanobelts (over 100 μm), and their side length can be controlled up to a few tens of centimetres. The bulk density and surface area of these macrostructures are 3-5 mg cm⁻³ and 40-80 m² g⁻¹, respectively, values similar to those of excellent adsorbents. In addition, removal efficiency measurements revealed that the VPN porous structures can adsorb ammonia molecules through a combination of van der Waals forces and strong chemical bonding by functional groups on the VPN surface.

  8. Production of large-scale, freestanding vanadium pentoxide nanobelt porous structures.

    PubMed

    Yun, Yong Ju; Kim, Byung Hoon; Hong, Won G; Kim, Chang Hee; Kim, Yark Yeon; Jeong, Eun-ju; Jang, Won Ick; Yu, Han Young

    2012-03-07

    Large-scale, freestanding, porous structures of vanadium pentoxide nanobelts (VPNs) were successfully prepared using a template-free freeze-drying method. The porous, multi-layered VPN macrostructures are composed of randomly oriented long nanobelts (over 100 μm), and their side length can be controlled up to a few tens of centimetres. The bulk density and surface area of these macrostructures are 3-5 mg cm(-3) and 40-80 m(2) g(-1), respectively, values similar to those of excellent adsorbents. In addition, removal efficiency measurements revealed that the VPN porous structures can adsorb ammonia molecules through a combination of van der Waals forces and strong chemical bonding by functional groups on the VPN surface.

  9. Scaling Laws for NanoFET Sensors

    NASA Astrophysics Data System (ADS)

    Wei, Qi-Huo; Zhou, Fu-Shan

    2008-03-01

    In this paper, we report numerical studies of the scaling laws for nanoplate field-effect transistor (FET) sensors, modeling the nanoplates as random resistor networks. Nanowire/tube FETs are included as the limiting cases where the device width becomes small. Computer simulations show that the field-effect strength exerted by the binding molecules has a significant impact on the scaling behavior. When the field-effect strength is small, nanoFETs show little size and shape dependence. In contrast, when the field-effect strength becomes stronger, there exists a lower detection threshold for charge-accumulation FETs and an upper detection threshold for charge-depletion FET sensors. At these thresholds, the nanoFET devices undergo a transition between low and high sensitivity. These thresholds may set the detection limits of nanoFET sensors. We propose to eliminate these detection thresholds by employing devices with a very short source-drain distance and large width.

  10. The alignment of molecular cloud magnetic fields with the spiral arms in M33.

    PubMed

    Li, Hua-bai; Henning, Thomas

    2011-11-16

    The formation of molecular clouds, which serve as stellar nurseries in galaxies, is poorly understood. A class of cloud formation models suggests that a large-scale galactic magnetic field is irrelevant at the scale of individual clouds, because the turbulence and rotation of a cloud may randomize the orientation of its magnetic field. Alternatively, galactic fields could be strong enough to impose their direction upon individual clouds, thereby regulating cloud accumulation and fragmentation, and affecting the rate and efficiency of star formation. Our location in the disk of the Galaxy makes an assessment of the situation difficult. Here we report observations of the magnetic field orientation of six giant molecular cloud complexes in the nearby, almost face-on, galaxy M33. The fields are aligned with the spiral arms, suggesting that the large-scale field in M33 anchors the clouds. ©2011 Macmillan Publishers Limited. All rights reserved

  11. The two-point correlation function for groups of galaxies in the Center for Astrophysics redshift survey

    NASA Technical Reports Server (NTRS)

    Ramella, Massimo; Geller, Margaret J.; Huchra, John P.

    1990-01-01

    The large-scale distribution of groups of galaxies selected from complete slices of the CfA redshift survey extension is examined. The survey is used to reexamine the contribution of group members to the galaxy correlation function. The relationship between the correlation function for groups and those calculated for rich clusters is discussed, and the results for groups are examined as an extension of the relation between correlation function amplitude and richness. The group correlation function indicates that groups and individual galaxies are equivalent tracers of the large-scale matter distribution. The distribution of group centers is equivalent to random sampling of the galaxy distribution. The amplitude of the correlation function for groups is consistent with an extrapolation of the amplitude-richness relation for clusters. The amplitude scaled by the mean intersystem separation is also consistent with results for richer clusters.
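
    The simplest two-point statistic behind such analyses is the natural estimator ξ(r) = DD/RR − 1, comparing pair counts in the catalogue with those in an unclustered random catalogue of equal size. A toy sketch on synthetic clustered points in a periodic box (geometry and clustering recipe are assumptions; survey work uses edge-corrected estimators such as Landy–Szalay):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(11)
    box = 100.0  # periodic toy volume

    # Toy "group" catalogue: members scattered around parent centres
    # (clustered), versus an unclustered random catalogue of equal size.
    parents = rng.uniform(0, box, size=(60, 3))
    data = (parents[rng.integers(60, size=3000)]
            + rng.normal(0, 2.0, size=(3000, 3))) % box
    randoms = rng.uniform(0, box, size=(3000, 3))

    bins = np.linspace(1.0, 25.0, 13)

    def binned_pair_counts(points, bins):
        tree = cKDTree(points, boxsize=box)     # periodic distances
        return np.diff(tree.count_neighbors(tree, bins))

    dd = binned_pair_counts(data, bins)
    rr = binned_pair_counts(randoms, bins)
    xi = dd / rr - 1.0  # natural estimator xi(r) = DD/RR - 1
    for r, x in zip(0.5 * (bins[1:] + bins[:-1]), xi):
        print(f"r = {r:5.1f}: xi = {x:7.2f}")
    ```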

  12. Effect of inventory method on niche models: random versus systematic error

    Treesearch

    Heather E. Lintz; Andrew N. Gray; Bruce McCune

    2013-01-01

    Data from large-scale biological inventories are essential for understanding and managing Earth's ecosystems. The Forest Inventory and Analysis Program (FIA) of the U.S. Forest Service is the largest biological inventory in North America; however, the FIA inventory recently changed from an amalgam of different approaches to a nationally-standardized approach in...

  13. Ethical Behaviors and Wealth: Generation Y's Experience

    ERIC Educational Resources Information Center

    Zagorsky, Jay L.

    2017-01-01

    This research investigates if ethical behaviors and personal finances are related using a large scale U.S. random survey called the National Longitudinal Survey of Youth 1997 (NLSY97). Fifteen indicators covering both ethical and unethical behaviors are compared to net worth for people in their 20s and 30s, who are called Generation Y. Breaking…

  14. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    ERIC Educational Resources Information Center

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  15. Fidelity of Implementation in a Large-Scale, Randomized, Field Trial: Identifying the Critical Components of Values Affirmation

    ERIC Educational Resources Information Center

    Bradley, Dominique; Crawford, Evan; Dahill-Brown, Sara E.

    2015-01-01

    Several studies suggest that values-affirmation can serve as a simple, yet powerful, tool for dramatically reducing achievement gaps. Because subtle variations in implementation procedures may explain some of the variation in these findings, it is crucial for researchers to measure the fidelity with which interventions are implemented. The authors…

  16. Impact of a Large-Scale Science Intervention Focused on English Language Learners

    ERIC Educational Resources Information Center

    Llosa, Lorena; Lee, Okhee; Jiang, Feng; Haas, Alison; O'Connor, Corey; Van Booven, Christopher D.; Kieffer, Michael J.

    2016-01-01

    The authors evaluated the effects of P-SELL, a science curricular and professional development intervention for fifth-grade students with a focus on English language learners (ELLs). Using a randomized controlled trial design with 33 treatment and 33 control schools across three school districts in one state, we found significant and meaningfully…

  17. Annealed Scaling for a Charged Polymer

    NASA Astrophysics Data System (ADS)

    Caravenna, F.; den Hollander, F.; Pétrélis, N.; Poisat, J.

    2016-03-01

    This paper studies an undirected polymer chain living on the one-dimensional integer lattice and carrying i.i.d. random charges. Each self-intersection of the polymer chain contributes to the interaction Hamiltonian an energy that is equal to the product of the charges of the two monomers that meet. The joint probability distribution for the polymer chain and the charges is given by the Gibbs distribution associated with the interaction Hamiltonian. The focus is on the annealed free energy per monomer in the limit as the length of the polymer chain tends to infinity. We derive a spectral representation for the free energy and use this to prove that there is a critical curve in the parameter plane of charge bias versus inverse temperature separating a ballistic phase from a subballistic phase. We show that the phase transition is first order. We prove large deviation principles for the laws of the empirical speed and the empirical charge, and derive a spectral representation for the associated rate functions. Interestingly, in both phases both rate functions exhibit flat pieces, which correspond to an inhomogeneous strategy for the polymer to realise a large deviation. The large deviation principles in turn lead to laws of large numbers and central limit theorems. We identify the scaling behaviour of the critical curve for small and for large charge bias. In addition, we identify the scaling behaviour of the free energy for small charge bias and small inverse temperature. Both are linked to an associated Sturm-Liouville eigenvalue problem. A key tool in our analysis is the Ray-Knight formula for the local times of the one-dimensional simple random walk. This formula is exploited to derive a closed form expression for the generating function of the annealed partition function, and for several related quantities. This expression in turn serves as the starting point for the derivation of the spectral representation for the free energy, and for the scaling theorems. What happens for the quenched free energy per monomer remains open. We state two modest results and raise a few questions.

  18. Experimental Study of the Effect of the Initial Spectrum Width on the Statistics of Random Wave Groups

    NASA Astrophysics Data System (ADS)

    Shemer, L.; Sergeeva, A.

    2009-12-01

    The statistics of a random water wave field determine the probability of appearance of extremely high (freak) waves. This probability is strongly related to the spectral wave field characteristics. Laboratory investigation of the spatial variation of the random wave-field statistics for various initial conditions is thus of substantial practical importance. Unidirectional nonlinear random wave groups are investigated experimentally in the 300 m long Large Wave Channel (GWK) in Hannover, Germany, the biggest facility of its kind in Europe. Numerous realizations of a wave field with the prescribed frequency power spectrum, yet randomly distributed initial phases of each harmonic, were generated by a computer-controlled piston-type wavemaker. Several initial spectral shapes with identical dominant wavelength but different width were considered. For each spectral shape, the total duration of sampling in all realizations was long enough to yield a sufficient sample size for reliable statistics. Throughout the experiments, an effort was made to retain the characteristic wave height value and thus the degree of nonlinearity of the wave field. Spatial evolution of numerous statistical wave field parameters (skewness, kurtosis and probability distributions) is studied using about 25 wave gauges distributed along the tank. It is found that, depending on the initial spectral shape, the frequency spectrum of the wave field may undergo significant modification in the course of its evolution along the tank; the values of all statistical wave parameters are strongly related to the local spectral width. A sample of the measured wave height probability functions (scaled by the variance of surface elevation) is plotted in Fig. 1 for the initially narrow rectangular spectrum. The results in Fig. 1 resemble findings obtained in [1] for the initial Gaussian spectral shape. The probability of large waves notably surpasses that predicted by the Rayleigh distribution and is highest at a distance of about 100 m. Acknowledgement: This study was carried out in the framework of the EC-supported project "Transnational access to large-scale tests in the Large Wave Channel (GWK) of Forschungszentrum Küste" (Contract HYDRALAB III - No. 022441). [1] L. Shemer and A. Sergeeva, J. Geophys. Res. Oceans 114, C01015 (2009). Figure 1. Variation along the tank of the measured wave height distribution for the rectangular initial spectral shape; carrier wave period T0 = 1.5 s.

  19. The statistical overlap theory of chromatography using power law (fractal) statistics.

    PubMed

    Schure, Mark R; Davis, Joe M

    2011-12-30

    The chromatographic dimensionality was recently proposed as a measure of retention time spacing based on a power law (fractal) distribution. Using this model, a statistical overlap theory (SOT) for chromatographic peaks is developed that estimates the number of peak maxima as a function of the chromatographic dimension, saturation and scale. Power law models exhibit a threshold region whereby below a critical saturation value no loss of peak maxima due to peak fusion occurs as saturation increases. At moderate saturation, behavior is similar to the random (Poisson) peak model. At still higher saturation, the power law model shows loss of peaks nearly independent of the scale and dimension of the model. The physicochemical meaning of the power law scale parameter is discussed and shown to be equal to the Boltzmann-weighted free energy of transfer over the scale limits. The role of the scale parameter β is also discussed: a small scale range (small β) is shown to generate more uniform chromatograms, whereas a large scale range (large β) gives occasional large excursions of retention times, a property of power laws where "wild" behavior occasionally occurs. Both cases are shown to be useful depending on the chromatographic saturation. A scale-invariant model of the SOT shows very simple relationships between the fraction of peak maxima and the saturation, peak width and number of theoretical plates. These equations provide much insight into separations that follow power law statistics. Copyright © 2011 Elsevier B.V. All rights reserved.
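
    As a toy illustration of the SOT logic (not the paper's actual equations), the sketch below counts surviving peak maxima when peaks closer than one resolution width fuse, comparing uniformly random (Poisson) retention times with Pareto-distributed spacings as a stand-in for power-law statistics; all parameter values are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def count_maxima(centers, width):
        """Count surviving peak maxima: adjacent peaks closer than
        `width` are assumed to fuse into a single maximum."""
        centers = np.sort(centers)
        return 1 + int(np.sum(np.diff(centers) > width))

    # Poisson (random) peak model: uniform retention times.
    m, span, width = 200, 1000.0, 2.0        # peaks, time span, resolution width
    poisson_peaks = rng.uniform(0, span, m)

    # Power-law spacing: Pareto-distributed gaps between successive peaks.
    beta = 1.5                               # assumed shape parameter
    gaps = rng.pareto(beta, m - 1) + 1.0
    powerlaw_peaks = np.concatenate([[0.0], np.cumsum(gaps)])
    powerlaw_peaks *= span / powerlaw_peaks[-1]   # rescale to the same span

    print("Poisson model maxima:  ", count_maxima(poisson_peaks, width))
    print("Power-law model maxima:", count_maxima(powerlaw_peaks, width))
    ```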

  20. Influence of a large-scale field on energy dissipation in magnetohydrodynamic turbulence

    NASA Astrophysics Data System (ADS)

    Zhdankin, Vladimir; Boldyrev, Stanislav; Mason, Joanne

    2017-07-01

    In magnetohydrodynamic (MHD) turbulence, the large-scale magnetic field sets a preferred local direction for the small-scale dynamics, altering the statistics of turbulence from the isotropic case. This happens even in the absence of a total magnetic flux, since MHD turbulence forms randomly oriented large-scale domains of strong magnetic field. It is therefore customary to study small-scale magnetic plasma turbulence by assuming a strong background magnetic field relative to the turbulent fluctuations. This is done, for example, in reduced models of plasmas, such as reduced MHD, reduced-dimension kinetic models, gyrokinetics, etc., which make theoretical calculations easier and numerical computations cheaper. Recently, however, it has become clear that the turbulent energy dissipation is concentrated in the regions of strong magnetic field variations. A significant fraction of the energy dissipation may be localized in very small volumes corresponding to the boundaries between strongly magnetized domains. In these regions, the reduced models are not applicable. This has important implications for studies of particle heating and acceleration in magnetic plasma turbulence. The goal of this work is to systematically investigate the relationship between local magnetic field variations and magnetic energy dissipation, and to understand its implications for modelling energy dissipation in realistic turbulent plasmas.

  1. PLASMA TURBULENCE AND KINETIC INSTABILITIES AT ION SCALES IN THE EXPANDING SOLAR WIND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hellinger, Petr; Trávnícek, Pavel M.; Matteini, Lorenzo

    The relationship between a decaying strong turbulence and kinetic instabilities in a slowly expanding plasma is investigated using two-dimensional (2D) hybrid expanding box simulations. We impose an initial ambient magnetic field perpendicular to the simulation box, and we start with a spectrum of large-scale, linearly polarized, random-phase Alfvénic fluctuations that have energy equipartition between kinetic and magnetic fluctuations and vanishing correlation between the two fields. A turbulent cascade rapidly develops; magnetic field fluctuations exhibit a power-law spectrum at large scales and a steeper spectrum at ion scales. The turbulent cascade leads to an overall anisotropic proton heating; protons are heated in the perpendicular direction and, initially, also in the parallel direction. The imposed expansion leads to generation of a large parallel proton temperature anisotropy which is at later stages partly reduced by turbulence. The turbulent heating is not sufficient to overcome the expansion-driven perpendicular cooling and the system eventually drives the oblique firehose instability in a form of localized nonlinear wave packets which efficiently reduce the parallel temperature anisotropy. This work demonstrates that kinetic instabilities may coexist with strong plasma turbulence even in a constrained 2D regime.

  2. Random codon re-encoding induces stable reduction of replicative fitness of Chikungunya virus in primate and mosquito cells.

    PubMed

    Nougairede, Antoine; De Fabritus, Lauriane; Aubry, Fabien; Gould, Ernest A; Holmes, Edward C; de Lamballerie, Xavier

    2013-02-01

    Large-scale codon re-encoding represents a powerful method of attenuating viruses to generate safe and cost-effective vaccines. In contrast to specific approaches of codon re-encoding which modify genome-scale properties, we evaluated the effects of random codon re-encoding on the re-emerging human pathogen Chikungunya virus (CHIKV), and assessed the stability of the resultant viruses during serial in cellulo passage. Using different combinations of three 1.4 kb randomly re-encoded regions located throughout the CHIKV genome, six codon re-encoded viruses were obtained. Introducing a large number of slightly deleterious synonymous mutations reduced the replicative fitness of CHIKV in both primate and arthropod cells, demonstrating the impact of synonymous mutations on fitness. The decrease in replicative fitness correlated with the extent of re-encoding, an observation that may assist in the modulation of viral attenuation. The wild-type and two re-encoded viruses were passaged 50 times either in primate or insect cells, or in each cell line alternately. These viruses were analyzed using detailed fitness assays, complete genome sequences and the analysis of intra-population genetic diversity. The response to codon re-encoding and adaptation to culture conditions occurred simultaneously, resulting in significant replicative fitness increases for both re-encoded and wild type viruses. Importantly, however, the most re-encoded virus failed to recover its replicative fitness. Evolution of these viruses in response to codon re-encoding was largely characterized by the emergence of both synonymous and non-synonymous mutations, sometimes located in genomic regions other than those involving re-encoding, and multiple convergent and compensatory mutations. However, there was a striking absence of codon reversion (<0.4%). Finally, multiple mutations were rapidly fixed in primate cells, whereas mosquito cells acted as a brake on evolution. In conclusion, random codon re-encoding provides important information on the evolution and genetic stability of CHIKV viruses and could be exploited to develop a safe, live attenuated CHIKV vaccine.
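
    Random synonymous re-encoding itself is straightforward to sketch: each codon is swapped for a random synonym, leaving the encoded protein untouched. The fragment below is a hypothetical illustration with a deliberately truncated codon table; it is not the pipeline used in the study.

    ```python
    import random

    # Minimal synonymous-codon table for a few amino acids (illustrative only;
    # a full standard-genetic-code table would be used in practice).
    SYNONYMS = {
        "GCU": ["GCU", "GCC", "GCA", "GCG"],   # Ala
        "GCC": ["GCU", "GCC", "GCA", "GCG"],
        "CGU": ["CGU", "CGC", "CGA", "CGG"],   # Arg (partial)
        "CGC": ["CGU", "CGC", "CGA", "CGG"],
        "AAA": ["AAA", "AAG"],                 # Lys
        "AAG": ["AAA", "AAG"],
    }

    def reencode(seq, rng=random.Random(42)):
        """Randomly replace each codon with a synonymous one, leaving the
        encoded protein unchanged; codons not in the table are kept as-is."""
        codons = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
        return "".join(rng.choice(SYNONYMS.get(c, [c])) for c in codons)

    region = "GCUAAACGUGCCAAG"
    print(reencode(region))   # same amino acids, different codon usage
    ```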

  3. Effects of a lifestyle intervention on psychosocial well-being of severe mentally ill residential patients: ELIPS, a cluster randomized controlled pragmatic trial.

    PubMed

    Stiekema, Annemarie P M; Looijmans, Anne; van der Meer, Lisette; Bruggeman, Richard; Schoevers, Robert A; Corpeleijn, Eva; Jörg, Frederike

    2018-03-01

    Large studies investigating the psychosocial effects of lifestyle interventions in patients with a severe mental illness (SMI) are scarce, especially in residential patients. This large, randomized controlled, multicentre pragmatic trial assessed the psychosocial effects of a combined diet-and-exercise lifestyle intervention targeting the obesogenic environment of SMI residential patients. Twenty-nine sheltered and clinical care teams were randomized into intervention (n=15) or control (n=14) arm. Team tailored diet-and-exercise lifestyle plans were set up to change the obesogenic environment into a healthier setting, and team members were trained in supporting patients to make healthier choices. The control group received care-as-usual. The Calgary Depression Scale for Schizophrenia (CDSS), Positive and Negative Syndrome Scale (PANSS), Health of the Nation Outcome Scales (HoNOS) and the Manchester Short Assessment of Quality of Life (MANSA) were assessed at baseline and after three and twelve months. Data were available for 384 intervention and 386 control patients (48.6 ± 12.5 years old, 62.7% males, 73.7% psychotic disorder). Linear mixed model analysis showed no psychosocial improvements in the intervention group compared to care-as-usual; the intervention group showed a slightly reduced quality of life (overall) and a small increase in depressive symptoms (clinical care facilities) and psychotic symptoms (sheltered facilities). This may be due to difficulties with implementation, the intervention not being specifically designed for improvements in mental well-being, or the small change approach, which may take longer to reach an effect. Further research might elucidate what type of lifestyle intervention under what circumstances positively affects psychosocial outcomes in this population. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Stochastic dynamics of genetic broadcasting networks

    NASA Astrophysics Data System (ADS)

    Potoyan, Davit; Wolynes, Peter

    The complex genetic programs of eukaryotic cells are often regulated by key transcription factors occupying or clearing out of a large number of genomic locations. Orchestrating the residence times of these factors is therefore important for the well organized functioning of a large network. The classic models of genetic switches sidestep this timing issue by assuming the binding of transcription factors to be governed entirely by thermodynamic protein-DNA affinities. Here we show that relying on passive thermodynamics and random release times can lead to a "time-scale crisis" for master genes that broadcast their signals to a large number of binding sites. We demonstrate that this "time-scale crisis" can be resolved by actively regulating residence times through molecular stripping. We illustrate these ideas by studying the stochastic dynamics of the genetic network of the central eukaryotic master regulator NFκB, which broadcasts its signals to many downstream genes that regulate immune response, apoptosis etc.

  5. A Low Collision and High Throughput Data Collection Mechanism for Large-Scale Super Dense Wireless Sensor Networks.

    PubMed

    Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Gaura, Elena; Brusey, James; Zhang, Xuekun; Dutkiewicz, Eryk

    2016-07-18

    Super dense wireless sensor networks (WSNs) have become popular with the development of the Internet of Things (IoT), Machine-to-Machine (M2M) communications and Vehicle-to-Vehicle (V2V) networks. While highly dense wireless networks provide efficient and sustainable solutions to collect precise environmental information, a new channel access scheme is needed to solve the channel collision problem caused by the large number of competing nodes accessing the channel simultaneously. In this paper, we propose a space-time random access method based on a directional data transmission strategy, by which collisions in the wireless channel are significantly decreased and channel utilization efficiency is greatly enhanced. Simulation results show that our proposed method can decrease the packet loss rate to less than 2% in large-scale WSNs, and in comparison with other channel access schemes for WSNs, the average network throughput can be doubled.

  6. Aging in the three-dimensional random-field Ising model

    NASA Astrophysics Data System (ADS)

    von Ohr, Sebastian; Manssen, Markus; Hartmann, Alexander K.

    2017-07-01

    We studied the nonequilibrium aging behavior of the random-field Ising model in three dimensions for various values of the disorder strength. This allowed us to investigate how the aging behavior changes across the ferromagnetic-paramagnetic phase transition. We investigated a large system size of N = 256³ spins and up to 10⁸ Monte Carlo sweeps. To reach these long simulation times, we employed an implementation running on Intel Xeon Phi coprocessors, reaching single-spin-flip times as short as 6 ps. We measured typical correlation functions in space and time to extract a growing length scale and corresponding exponents.
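
    The single-spin-flip dynamics referred to above can be sketched in a few lines. Below is a minimal Metropolis update for the 3D random-field Ising model on a small lattice; the study used N = 256³ spins and coprocessor-optimized code, so the lattice size, temperature and disorder strength here are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L, h, T = 16, 1.0, 1.5                 # lattice size, disorder strength, temperature
    spins = rng.choice([-1, 1], size=(L, L, L))
    fields = h * rng.standard_normal((L, L, L))   # quenched random fields

    def sweep(spins, fields, T):
        """One Metropolis sweep of single-spin-flip updates, periodic boundaries."""
        for _ in range(spins.size):
            i, j, k = rng.integers(0, L, size=3)
            nn = (spins[(i+1) % L, j, k] + spins[(i-1) % L, j, k]
                + spins[i, (j+1) % L, k] + spins[i, (j-1) % L, k]
                + spins[i, j, (k+1) % L] + spins[i, j, (k-1) % L])
            dE = 2.0 * spins[i, j, k] * (nn + fields[i, j, k])  # flip energy cost
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j, k] *= -1

    for t in range(10):
        sweep(spins, fields, T)
    print("magnetization:", spins.mean())
    ```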

  7. Stochastic dynamics of genetic broadcasting networks

    NASA Astrophysics Data System (ADS)

    Potoyan, Davit A.; Wolynes, Peter G.

    2017-11-01

    The complex genetic programs of eukaryotic cells are often regulated by key transcription factors occupying or clearing out of a large number of genomic locations. Orchestrating the residence times of these factors is therefore important for the well organized functioning of a large network. The classic models of genetic switches sidestep this timing issue by assuming the binding of transcription factors to be governed entirely by thermodynamic protein-DNA affinities. Here we show that relying on passive thermodynamics and random release times can lead to a "time-scale crisis" for master genes that broadcast their signals to a large number of binding sites. We demonstrate that this time-scale crisis for clearance in a large broadcasting network can be resolved by actively regulating residence times through molecular stripping. We illustrate these ideas by studying a model of the stochastic dynamics of the genetic network of the central eukaryotic master regulator NFκB, which broadcasts its signals to many downstream genes that regulate immune response, apoptosis, etc.

  8. Cascade phenomenon against subsequent failures in complex networks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhong-Yuan; Liu, Zhi-Quan; He, Xuan; Ma, Jian-Feng

    2018-06-01

    Cascade phenomena may lead to catastrophic disasters that severely imperil network safety or security in various complex systems such as communication networks, power grids, social networks and so on. In some flow-based networks, the load of failed nodes can be redistributed locally to their neighboring nodes to prevent traffic oscillations or large-scale cascading failures. However, in such a local flow redistribution model, a small set of key nodes attacked in sequence can result in network collapse. It is therefore a critical problem to effectively find the set of key nodes in the network. To the best of our knowledge, this work is the first to study this problem comprehensively. We first introduce extra capacity for every node to absorb flow fluctuations from neighbors, and two extra capacity distributions, a degree-based distribution and an average distribution, are employed. Four heuristic key-node discovery methods, High-Degree-First (HDF), Low-Degree-First (LDF), Random and Greedy Algorithms (GA), are presented. Extensive simulations are performed in both scale-free networks and random networks. The results show that the greedy algorithm can efficiently find the set of key nodes in both scale-free and random networks. Our work studies network robustness against cascading failures from a novel perspective, and the methods and results are useful for network robustness evaluation and protection.
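
    A minimal sketch of a local flow-redistribution cascade and a greedy key-node search of the kind described above, assuming degree-based initial loads and a uniform tolerance factor; networkx is used for the graph, and all parameters are illustrative rather than taken from the paper.

    ```python
    import networkx as nx

    def cascade_size(G, seeds, capacity):
        """Local flow redistribution: a failed node's load is split evenly
        among its surviving neighbours; any neighbour pushed over capacity
        fails in turn."""
        load = {v: G.degree(v) for v in G}      # assumed: initial load = degree
        failed, frontier = set(seeds), list(seeds)
        while frontier:
            v = frontier.pop()
            alive = [u for u in G.neighbors(v) if u not in failed]
            for u in alive:
                load[u] += load[v] / len(alive)
                if load[u] > capacity[u]:
                    failed.add(u)
                    frontier.append(u)
        return len(failed)

    G = nx.barabasi_albert_graph(200, 2, seed=0)
    capacity = {v: 1.5 * G.degree(v) for v in G}   # uniform extra-capacity factor

    keys = []                                      # greedy key-node discovery
    for _ in range(3):
        best = max((v for v in G if v not in keys),
                   key=lambda v: cascade_size(G, keys + [v], capacity))
        keys.append(best)
    print("key nodes:", keys, "-> cascade size:", cascade_size(G, keys, capacity))
    ```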

  9. Limited accessibility to designs and results of Japanese large-scale clinical trials for cardiovascular diseases.

    PubMed

    Sawata, Hiroshi; Ueshima, Kenji; Tsutani, Kiichiro

    2011-04-14

    Clinical evidence is important for improving the treatment of patients by health care providers. In the study of cardiovascular diseases, large-scale clinical trials involving thousands of participants are required to evaluate the risks of cardiac events and/or death. The problems encountered in conducting the Japanese Acute Myocardial Infarction Prospective (JAMP) study highlighted the difficulties involved in obtaining the financial and infrastructural resources necessary for conducting large-scale clinical trials. The objectives of the current study were: 1) to clarify the current funding and infrastructural environment surrounding large-scale clinical trials in cardiovascular and metabolic diseases in Japan, and 2) to find ways to improve the environment surrounding clinical trials in Japan more generally. We examined clinical trials examining cardiovascular diseases that evaluated true endpoints and involved 300 or more participants using PubMed, Ichushi (by the Japan Medical Abstracts Society, a non-profit organization), websites of related medical societies, the University Hospital Medical Information Network (UMIN) Clinical Trials Registry, and clinicaltrials.gov at three points in time: 30 November 2004, 25 February 2007 and 25 July 2009. We found a total of 152 trials that met our criteria for 'large-scale clinical trials' examining cardiovascular diseases in Japan. Of these, 72.4% were randomized controlled trials (RCTs); 9.2% examined more than 10,000 participants, and 42.8% examined between 1,000 and 10,000 participants. The number of large-scale clinical trials markedly increased from 2001 to 2004, but suddenly decreased in 2007, then began to increase again. Ischemic heart disease (39.5%) was the most common target disease. Most of the larger-scale trials were funded by private organizations such as pharmaceutical companies. The designs and results of 13 trials were not disclosed. To improve the quality of clinical trials, all sponsors should register trials and disclose the funding sources before the enrolment of participants, and publish their results after the completion of each study.

  10. Hierarchical random walks in trace fossils and the origin of optimal search behavior

    PubMed Central

    Sims, David W.; Reynolds, Andrew M.; Humphries, Nicolas E.; Southall, Emily J.; Wearmouth, Victoria J.; Metcalfe, Brett; Twitchett, Richard J.

    2014-01-01

    Efficient searching is crucial for timely location of food and other resources. Recent studies show that diverse living animals use a theoretically optimal scale-free random search for sparse resources known as a Lévy walk, but little is known of the origins and evolution of foraging behavior and the search strategies of extinct organisms. Here, using simulations of self-avoiding trace fossil trails, we show that randomly introduced strophotaxis (U-turns)—initiated by obstructions such as self-trail avoidance or innate cueing—leads to random looping patterns with clustering across increasing scales that is consistent with the presence of Lévy walks. This predicts that optimal Lévy searches may emerge from simple behaviors observed in fossil trails. We then analyzed fossilized trails of benthic marine organisms by using a novel path analysis technique and find the first evidence, to our knowledge, of Lévy-like search strategies in extinct animals. Our results show that simple search behaviors of extinct animals in heterogeneous environments give rise to hierarchically nested Brownian walk clusters that converge to optimal Lévy patterns. Primary productivity collapse and large-scale food scarcity characterizing mass extinctions evident in the fossil record may have triggered adaptation of optimal Lévy-like searches. The findings suggest that Lévy-like behavior has been used by foragers since at least the Eocene but may have a more ancient origin, which might explain recent widespread observations of such patterns among modern taxa. PMID:25024221
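
    The optimal scale-free search mentioned above is easy to sketch: a Lévy walk draws step lengths from a power law P(l) ~ l^(-mu), with mu ≈ 2 theoretically optimal for sparse targets. The sketch below uses inverse-CDF sampling; it is a generic illustration, not the path-analysis technique of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def levy_walk(n_steps, mu=2.0):
        """2-D random walk whose step lengths follow a power law
        P(l) ~ l^(-mu) with l >= 1, sampled by inverting the CDF."""
        u = 1.0 - rng.random(n_steps)                  # uniform on (0, 1]
        lengths = u ** (1.0 / (1.0 - mu))              # l = u^(1/(1-mu)) >= 1
        angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
        steps = lengths[:, None] * np.column_stack([np.cos(angles), np.sin(angles)])
        return np.cumsum(steps, axis=0)

    path = levy_walk(10_000)
    print("net displacement:", np.linalg.norm(path[-1]))
    ```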

  11. The effects of streamwise concave curvature on turbulent boundary layer structure

    NASA Astrophysics Data System (ADS)

    Jeans, A. H.; Johnston, J. P.

    1982-06-01

    Concave curvature has a relatively large, unpredictable effect on turbulent boundary layers. Some, but not all, previous studies suggest that a large-scale, stationary array of counter-rotating vortices exists within the turbulent boundary layer on a concave wall. The objective of the present study was to obtain a qualitative model of the flow field in order to increase our understanding of the underlying physics. A large free-surface water channel was constructed in order to perform a visual study of the flow. Streamwise components of mean velocity and turbulence intensity were measured using a hot-film anemometer. The upstream boundary layer was spanwise uniform, with a momentum thickness to radius of curvature ratio of 0.05. Compared to flat-wall flow, large-scale, randomly distributed sweeps and ejections were seen in the boundary layer on the concave wall. The sweeps appear to suppress the normal mechanism for turbulence production near the wall by inhibiting the bursting process. The ejections appear to enhance turbulence production in the outer layers as the low speed fluid convected from regions near the wall interacts with the higher speed fluid farther out. The large-scale structures did not occur at fixed spanwise locations, and could not be called roll cells or vortices.

  12. Anomalies in the GRBs' distribution

    NASA Astrophysics Data System (ADS)

    Bagoly, Zsolt; Horvath, Istvan; Hakkila, Jon; Toth, Viktor

    2015-08-01

    Gamma-ray bursts (GRBs) are the most luminous objects known: they outshine their host galaxies, making them ideal candidates for probing large-scale structure. Earlier, the angular distribution of the different GRB groups (long, intermediate and short) was studied in detail with different methods, and it was found that the short and intermediate groups deviate from full randomness at different significance levels (e.g. Vavrek, R., et al. 2008). However, these results were based only on angular measurements from the BATSE experiment, without any spatial distance indicator. Currently more than 361 GRBs have precisely measured positions, optical afterglows and redshifts, mainly due to the observations of the Swift mission. This sample is now large enough that its homogeneity and isotropy on large scales can be checked. We have recently (Horvath, I. et al., 2014) identified a large clustering of gamma-ray bursts at redshift z ~ 2 in the general direction of the constellations of Hercules and Corona Borealis. This angular excess cannot be entirely attributed to known selection biases, making its existence due to chance unlikely. The scale on which the clustering occurs is disturbingly large, about 2-3 Gpc: the underlying distribution of matter suggested by this cluster is big enough to question standard assumptions about Universal homogeneity and isotropy.

  13. Evaluating large-scale health programmes at a district level in resource-limited countries.

    PubMed

    Svoronos, Theodore; Mate, Kedar S

    2011-11-01

    Recent experience in evaluating large-scale global health programmes has highlighted the need to consider contextual differences between sites implementing the same intervention. Traditional randomized controlled trials are ill-suited for this purpose, as they are designed to identify whether an intervention works, not how, when and why it works. In this paper we review several evaluation designs that attempt to account for contextual factors that contribute to intervention effectiveness. Using these designs as a base, we propose a set of principles that may help to capture information on context. Finally, we propose a tool, called a driver diagram, traditionally used in implementation, that would allow evaluators to systematically monitor changing dynamics in project implementation and identify contextual variation across sites. We describe an implementation-related example from South Africa to underline the strengths of the tool. If used across multiple sites and multiple projects, the resulting driver diagrams could be pooled together to form a generalized theory for how, when and why a widely used intervention works. Mechanisms similar to the driver diagram are urgently needed to complement existing evaluations of large-scale implementation efforts.

  14. Learning Traffic as Images: A Deep Convolutional Neural Network for Large-Scale Transportation Network Speed Prediction.

    PubMed

    Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng

    2017-04-10

    This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
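
    The core idea, treating a time-space speed matrix as an image and regressing next-interval speeds with a CNN, can be sketched as follows. PyTorch and all dimensions are assumptions for the demo; the paper's architecture, data and training setup are not reproduced here.

    ```python
    import torch
    import torch.nn as nn

    # Toy time-space "image": 1 channel, T time steps x S road segments.
    T, S = 64, 32
    x = torch.rand(8, 1, T, S)          # batch of 8 normalized speed matrices
    y = torch.rand(8, S)                # next-interval speed per segment (targets)

    model = nn.Sequential(              # feature extraction, then network-wide regression
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * (T // 4) * (S // 4), S),
    )

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for epoch in range(5):              # minimal training loop on the toy batch
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print("final MSE:", loss.item())
    ```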

  15. Learning Traffic as Images: A Deep Convolutional Neural Network for Large-Scale Transportation Network Speed Prediction

    PubMed Central

    Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng

    2017-01-01

    This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks. PMID:28394270

  16. The effects of clinical aromatherapy for anxiety and depression in the high risk postpartum woman - a pilot study.

    PubMed

    Conrad, Pam; Adams, Cindy

    2012-08-01

    The aim of this study was to determine if aromatherapy improves anxiety and/or depression in the high risk postpartum woman and to provide a complementary therapy tool for healthcare practitioners. The pilot study was observational with repeated measures. The setting was a private consultation room in a women's center of a large Indianapolis hospital; participants were 28 women, 0-18 months postpartum. The treatment groups were randomized to either inhalation or the aromatherapy hand 'M' technique. Treatment consisted of 15 min sessions, twice a week for four consecutive weeks. An essential oil blend of rose otto and Lavandula angustifolia at 2% dilution was used in all treatments. The non-randomized control group, comprised of volunteers, was instructed to avoid aromatherapy use during the 4 week study period. Allopathic medical treatment continued for all participants. All subjects completed the Edinburgh Postnatal Depression Scale (EPDS) and Generalized Anxiety Disorder Scale (GAD-7) at the beginning of the study. The scales were then repeated at the midway point (two weeks), and at the end of all treatments (four weeks). Analysis of Variance (ANOVA) was utilized to determine differences in EPDS and/or GAD-7 scores between the aromatherapy and control groups at baseline, midpoint and end of study. No significant differences were found between aromatherapy and control groups at baseline. The midpoint and final scores indicated that the aromatherapy group showed significantly greater improvements than the control group on both EPDS and GAD-7 scores. There were no adverse effects reported. The pilot study indicates positive findings, with minimal risk, for the use of aromatherapy as a complementary therapy for both anxiety and depression in the postpartum woman. Future large scale research in aromatherapy with this population is recommended. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Helicity dynamics in stratified turbulence in the absence of forcing.

    PubMed

    Rorai, C; Rosenberg, D; Pouquet, A; Mininni, P D

    2013-06-01

    A numerical study of decaying stably stratified flows is performed. Relatively high stratification (Froude number ≈ 10⁻²-10⁻¹) and moderate Reynolds numbers (Re ≈ 3-6×10³) are considered, and a particular emphasis is placed on the role of helicity (velocity-vorticity correlations), which is not an invariant of the nondissipative equations. The problem is tackled by integrating the Boussinesq equations in a periodic cubical domain using different initial conditions: a nonhelical Taylor-Green (TG) flow, a fully helical Beltrami [Arnold-Beltrami-Childress (ABC)] flow, and random flows with a tunable helicity. We show that for stratified ABC flows helicity undergoes a substantially slower decay than for unstratified ABC flows. This fact is likely associated with the combined effect of stratification and large-scale coherent structures. Indeed, when the latter are missing, as in random flows, helicity is rapidly destroyed by the onset of gravitational waves. A type of large-scale dissipative "cyclostrophic" balance can be invoked to explain this behavior. No production of helicity is observed, contrary to the case of rotating and stratified flows. When helicity survives in the system, it strongly affects the temporal energy decay and the energy distribution among Fourier modes. We discover in fact that the decay rate of energy for stratified helical flows is much slower than for stratified nonhelical flows and can be described with a phenomenological model in a way similar to what is done for unstratified rotating flows. We also show that helicity, when strong, has a measurable effect on the Fourier spectra, in particular at scales larger than the buoyancy scale, for which it displays a rather flat scaling associated with vertical shear, as observed in the planetary boundary layer.

  18. Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes

    NASA Astrophysics Data System (ADS)

    Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.

    2016-12-01

    The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While considerable effort has been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the utilization of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model results in characterizing significantly more energy associated with the small-scale ionospheric electric field variability in comparison to Gaussian models. By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.

  19. On the Effectiveness of Pop-Up English Language Glossary Accommodations for EL Students in Large-Scale Assessments

    ERIC Educational Resources Information Center

    Cohen, Dale; Tracy, Ryan; Cohen, Jon

    2017-01-01

    This study examined the effectiveness and influence on validity of a computer-based pop-up English glossary accommodation for English learners (ELs) in grades 3 and 7. In a randomized controlled trial, we administered pop-up English glossaries with audio to students taking a statewide accountability English language arts (ELA) and mathematics…

  20. Teacher Aides, Class Size and Academic Achievement: A Preliminary Evaluation of Indiana's Prime Time.

    ERIC Educational Resources Information Center

    Lapsley, Daniel K.; Daytner, Katrina M.; Kelly, Ken; Maxwell, Scott E.

    This large-scale evaluation of Indiana's Prime Time, a funding mechanism designed to reduce class size or pupil-teacher ratio (PTR) in grades K-3, examined the academic performance of nearly 11,000 randomly selected third graders on the state-mandated standardized achievement test as a function of class size, PTR, and presence of an instructional…

  1. How Generalizable Is Your Experiment? An Index for Comparing Experimental Samples and Populations

    ERIC Educational Resources Information Center

    Tipton, Elizabeth

    2014-01-01

    Although a large-scale experiment can provide an estimate of the average causal impact for a program, the sample of sites included in the experiment is often not drawn randomly from the inference population of interest. In this article, we provide a generalizability index that can be used to assess the degree of similarity between the sample of…

  2. The Effect of Integrated Basic Education Programs on Women's Social and Economic Well-Being in Bolivia.

    ERIC Educational Resources Information Center

    Hua, Haiyan; Burchfield, Shirley

    A large-scale longitudinal study in Bolivia examined the relationship between adult women's basic education and their social and economic well-being and development. A random sample of 1,600 participants and 600 nonparticipants, aged 15-45, was tracked for 3 years (the final sample included 717 participants and 224 controls). The four adult…

  3. The Practical Impact of Recent Computer Advances on the Analysis and Design of Large Scale Networks

    DTIC Science & Technology

    1974-12-01

    Communications, ICC-74, June 17-19, Minneapolis, Minnesota, pp. 31C-1-21C-5. 28. Gitman, I., R. M. Van Slyke and H. Frank, "On Splitting Random Access Broadcast..." 1974. 29. Gitman, I., "On the Capacity of Slotted ALOHA Network and Some Design Problems," IEEE Transactions on Communications, March, 1975. 30

  4. Organizational Communication in Emergencies: Using Multiple Channels and Sources to Combat Noise and Capture Attention

    ERIC Educational Resources Information Center

    Stephens, Keri K.; Barrett, Ashley K.; Mahometa, Michael J.

    2013-01-01

    This study relies on information theory, social presence, and source credibility to uncover what best helps people grasp the urgency of an emergency. We surveyed a random sample of 1,318 organizational members who received multiple notifications about a large-scale emergency. We found that people who received 3 redundant messages coming through at…

  5. Effects of Two Scientific Inquiry Professional Development Interventions on Teaching Practice

    ERIC Educational Resources Information Center

    Grigg, Jeffrey; Kelly, Kimberle A.; Gamoran, Adam; Borman, Geoffrey D.

    2013-01-01

    In this article, we examine classroom observations from a 3-year large-scale randomized trial in the Los Angeles Unified School District (LAUSD) to investigate the extent to which a professional development initiative in inquiry science influenced teaching practices in 4th- and 5th-grade classrooms in 73 schools. During the course of the study,…

  6. Assessing change in large-scale forest area by visually interpreting Landsat images

    Treesearch

    Jerry D. Greer; Frederick P. Weber; Raymond L. Czaplewski

    2000-01-01

    As part of the Forest Resources Assessment 1990, the Food and Agriculture Organization of the United Nations visually interpreted a stratified random sample of 117 Landsat scenes to estimate global status and change in tropical forest area. Images from 1980 and 1990 were interpreted by a group of widely experienced technical people in many different tropical countries...

  7. A Prospectus on Restoring Late Successional Forest Structure to Eastside Pine Ecosystems Through Large-Scale, Interdisciplinary Research

    Treesearch

    Steve Zack; William F. Laudenslayer; Luke George; Carl Skinner; William Oliver

    1999-01-01

    At two different locations in northeast California, an interdisciplinary team of scientists is initiating long-term studies to quantify the effects of forest manipulations intended to accelerate and/or enhance late-successional structure of eastside pine forest ecosystems. One study, at Blacks Mountain Experimental Forest, uses a split-plot, factorial, randomized block...

  8. Signaling in large-scale neural networks.

    PubMed

    Berg, Rune W; Hounsgaard, Jørn

    2009-02-01

    We examine the recent finding that neurons in spinal motor circuits enter a high conductance state during functional network activity. The underlying concomitant increase in random inhibitory and excitatory synaptic activity leads to stochastic signal processing. The possible advantages of this metabolically costly organization are analyzed by comparing with synaptically less intense networks driven by the intrinsic response properties of the network neurons.

  9. Survey Response in a Statewide Social Experiment: Differences in Being Located and Collaborating, by Race and Hispanic Origin

    ERIC Educational Resources Information Center

    Nam, Yunju; Mason, Lisa Reyes; Kim, Youngmi; Clancy, Margaret; Sherraden, Michael

    2013-01-01

    This study examined whether and how survey response differs by race and Hispanic origin, using data from birth certificates and survey administrative data for a large-scale statewide experiment. The sample consisted of mothers of infants selected from Oklahoma birth certificates using a stratified random sampling method (N = 7,111). This study…

  10. DeepDeath: Learning to predict the underlying cause of death with Big Data.

    PubMed

    Hassanzadeh, Hamid Reza; Sha, Ying; Wang, May D

    2017-07-01

    Multiple cause-of-death data provide a valuable source of information that can be used to enhance health standards by predicting health related trajectories in societies with large populations. These data are often available in large quantities across U.S. states and require Big Data techniques to uncover complex hidden patterns. We design two different classes of models suitable for large-scale analysis of mortality data: a Hadoop-based ensemble of random forests trained over N-grams, and DeepDeath, a deep classifier based on the recurrent neural network (RNN). We apply both classes to the mortality data provided by the National Center for Health Statistics and show that while both perform significantly better than the random classifier, the deep model, which utilizes long short-term memory networks (LSTMs), surpasses the N-gram based models and is capable of learning the temporal aspect of the data without a need for building ad hoc, expert-driven features.
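
    A minimal stand-in for the RNN branch described above: an LSTM over sequences of ICD-code tokens, classifying the underlying-cause group. The vocabulary size, sequence length and class count are invented for the demo; this is not the DeepDeath architecture itself.

    ```python
    import torch
    import torch.nn as nn

    # Toy stand-in: sequences of up to 20 ICD-code tokens (vocab 500) per
    # death record, classified into 10 underlying-cause groups.
    vocab, classes, batch = 500, 10, 32
    x = torch.randint(0, vocab, (batch, 20))
    y = torch.randint(0, classes, (batch,))

    class CauseOfDeathLSTM(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(vocab, 64)
            self.lstm = nn.LSTM(64, 128, batch_first=True)
            self.head = nn.Linear(128, classes)
        def forward(self, tokens):
            _, (h, _) = self.lstm(self.embed(tokens))
            return self.head(h[-1])       # classify from the final hidden state

    model = CauseOfDeathLSTM()
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    print("initial loss:", loss.item())
    ```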

  11. Enhancing superconducting critical current by randomness

    DOE PAGES

    Wang, Y. L.; Thoutam, L. R.; Xiao, Z. L.; ...

    2016-01-11

    The key ingredient of high critical currents in a type-II superconductor is defect sites that pin vortices. Here, we demonstrate that a random pinscape, an overlooked pinning system in nanopatterned superconductors, can lead to a substantially larger critical current enhancement at high magnetic fields than an ordered array of vortex pin sites. We reveal that the better performance of a random pinscape is due to the variation of the local density of its pinning sites, which mitigates the motion of vortices. This is confirmed by achieving even higher enhancement of the critical current through a conformally mapped random pinscape, where the distribution of the local density of pinning sites is further enlarged. Our findings highlight the potential of random pinscapes in enhancing the superconducting critical currents of applied superconductors in which random pin sites of nanoscale defects emerging in the materials synthesis process or through ex-situ irradiation are the only practical choice for large-scale production. Our results may also stimulate research on effects of a random pinscape in other complementary systems such as colloidal crystals, Bose-Einstein condensates, and Luttinger liquids.

  12. Exfoliation of the tungsten fibreform nanostructure by unipolar arcing in the LHD divertor plasma

    NASA Astrophysics Data System (ADS)

    Tokitani, M.; Kajita, S.; Masuzaki, S.; Hirahata, Y.; Ohno, N.; Tanabe, T.; LHD Experiment Group

    2011-10-01

    The tungsten nanostructure (W-fuzz) created in the linear divertor simulator (NAGDIS) was exposed to the Large Helical Device (LHD) divertor plasma for only 2 s (1 shot) to study exfoliation/erosion and microscopic modifications due to the high heat/particle loading under high magnetic field conditions. Very fine and randomly moving unipolar arc trails were clearly observed on about half of the W-fuzz area (6 × 10 mm2). The fuzzy surface was exfoliated by continuously moving arc spots even for the very short exposure time. This is the first observation of unipolar arcing and exfoliation of some areas of the W-fuzz structure itself in a large plasma confinement device with a high magnetic field. The typical width and depth of each arc trail were about 8 µm and 1 µm, respectively, and the arc spots moved randomly on the micrometre scale. The fractality of the arc trails was analysed using a box-counting method, and the fractal dimension (D) of the arc trails was estimated to be D ≈ 1.922. This value indicated that the arc spots moved in Brownian motion and were scarcely influenced by the magnetic field. One should note that such large-scale exfoliation due to unipolar arcing may enhance the surface erosion of the tungsten armour and act as a serious impurity source for fusion plasmas.
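
    The box-counting estimate of the fractal dimension D used above can be sketched as follows: count occupied boxes N(s) at several box sizes s and fit the slope of log N(s) against log s, since N(s) ~ s^(-D). The demo image is a synthetic random-walk trail standing in for an arc-trail micrograph.

    ```python
    import numpy as np

    def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16, 32)):
        """Estimate the fractal dimension D of a 2-D binary image by box
        counting: fit N(s) ~ s^(-D) over the given box sizes."""
        counts = []
        for s in sizes:
            h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
            blocks = img[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(blocks.any(axis=(1, 3)).sum())   # boxes with trail pixels
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope

    # Demo on a synthetic Brownian trail (stands in for an arc-trail image).
    rng = np.random.default_rng(3)
    img = np.zeros((256, 256), dtype=bool)
    pos = np.array([128, 128])
    for _ in range(20_000):
        pos = (pos + rng.integers(-1, 2, size=2)) % 256
        img[pos[0], pos[1]] = True
    print("estimated D:", round(box_counting_dimension(img), 3))
    ```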

  13. Accurate prediction of personalized olfactory perception from large-scale chemoinformatic features.

    PubMed

    Li, Hongyang; Panwar, Bharat; Omenn, Gilbert S; Guan, Yuanfang

    2018-02-01

    The olfactory stimulus-percept problem has been studied for more than a century, yet it is still hard to precisely predict the odor given the large-scale chemoinformatic features of an odorant molecule. A major challenge is that the perceived qualities vary greatly among individuals due to different genetic and cultural backgrounds. Moreover, the combinatorial interactions between multiple odorant receptors and diverse molecules significantly complicate the olfaction prediction. Many attempts have been made to establish structure-odor relationships for intensity and pleasantness, but no models are available to predict the personalized multi-odor attributes of molecules. In this study, we describe our winning algorithm for predicting individual and population perceptual responses to various odorants in the DREAM Olfaction Prediction Challenge. We find that a random forest model consisting of multiple decision trees is well suited to this prediction problem, given the large feature spaces and high variability of perceptual ratings among individuals. Integrating both population and individual perceptions into our model effectively reduces the influence of noise and outliers. By analyzing the importance of each chemical feature, we find that a small set of low- and nondegenerative features is sufficient for accurate prediction. Our random forest model successfully predicts personalized odor attributes of structurally diverse molecules. This model, together with the top discriminative features, has the potential to extend our understanding of olfactory perception mechanisms and provide an alternative for rational odorant design.
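
    A minimal scikit-learn version of the modelling approach described, with synthetic stand-in data (500 molecules, 200 descriptors, one perceptual rating); the challenge-winning model's features, targets and hyperparameters are not reproduced here.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)

    # Stand-in data: 500 molecules x 200 chemoinformatic descriptors, with a
    # single perceptual rating driven by a few informative features.
    X = rng.standard_normal((500, 200))
    y = 50 + 10 * X[:, :5].sum(axis=1) + rng.standard_normal(500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    rf.fit(X_tr, y_tr)
    print("R^2 on held-out molecules:", round(rf.score(X_te, y_te), 3))

    # Feature importances identify the small discriminative subset.
    top = np.argsort(rf.feature_importances_)[::-1][:5]
    print("top descriptors:", top)
    ```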

  14. Life as an emergent phenomenon: studies from a large-scale boid simulation and web data.

    PubMed

    Ikegami, Takashi; Mototake, Yoh-Ichi; Kobori, Shintaro; Oka, Mizuki; Hashimoto, Yasuhiro

    2017-12-28

    A large group with a special structure can become the mother of emergence. We discuss this hypothesis in relation to large-scale boid simulations and web data. In the boid swarm simulations, the nucleation, organization and collapse dynamics were found to be more diverse in larger flocks than in smaller flocks. In the second analysis, large web data, consisting of shared photos with descriptive tags, tended to group together users with similar tendencies, allowing the network to develop a core-periphery structure. We show that the generation rate of novel tags and their usage frequencies are high in the higher-order cliques. In this case, novelty is not considered to arise randomly; rather, it is generated as a result of a large and structured network. We contextualize these results in terms of adjacent possible theory and as a new way to understand collective intelligence. We argue that excessive information and material flow can become a source of innovation. This article is part of the themed issue 'Reconceptualizing the origins of life'. © 2017 The Author(s).
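
    For reference, the basic boid update underlying such swarm simulations combines Reynolds' three rules (separation, cohesion, alignment) within a neighbourhood radius. The sketch below uses assumed weights and a small flock; it is a generic illustration, not the large-scale simulator used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    N, box = 200, 50.0
    pos = rng.uniform(0, box, (N, 2))
    vel = rng.standard_normal((N, 2))

    def step(pos, vel, r=5.0, w_sep=0.05, w_coh=0.01, w_ali=0.05, dt=1.0):
        """One update of Reynolds' three rules within neighbourhood radius r."""
        diff = pos[:, None, :] - pos[None, :, :]          # diff[i, j] = pos_i - pos_j
        dist = np.linalg.norm(diff, axis=-1)
        w = ((dist < r) & (dist > 0)).astype(float)       # neighbour mask
        n = np.maximum(w.sum(axis=1, keepdims=True), 1.0)
        sep = (diff * w[..., None]).sum(axis=1) / n       # steer away from neighbours
        coh = (w @ pos) / n - pos                         # steer toward local centroid
        ali = (w @ vel) / n - vel                         # match neighbour velocities
        vel = vel + w_sep * sep + w_coh * coh + w_ali * ali
        vel /= np.maximum(np.linalg.norm(vel, axis=1, keepdims=True), 1e-9)
        return (pos + dt * vel) % box, vel                # unit speed, periodic box

    for _ in range(100):
        pos, vel = step(pos, vel)
    print("flock polarization:", round(float(np.linalg.norm(vel.mean(axis=0))), 3))
    ```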

  15. Life as an emergent phenomenon: studies from a large-scale boid simulation and web data

    NASA Astrophysics Data System (ADS)

    Ikegami, Takashi; Mototake, Yoh-ichi; Kobori, Shintaro; Oka, Mizuki; Hashimoto, Yasuhiro

    2017-11-01

    A large group with a special structure can become the mother of emergence. We discuss this hypothesis in relation to large-scale boid simulations and web data. In the boid swarm simulations, the nucleation, organization and collapse dynamics were found to be more diverse in larger flocks than in smaller flocks. In the second analysis, large web data, consisting of shared photos with descriptive tags, tended to group together users with similar tendencies, allowing the network to develop a core-periphery structure. We show that the generation rate of novel tags and their usage frequencies are high in the higher-order cliques. In this case, novelty is not considered to arise randomly; rather, it is generated as a result of a large and structured network. We contextualize these results in terms of adjacent possible theory and as a new way to understand collective intelligence. We argue that excessive information and material flow can become a source of innovation. This article is part of the themed issue 'Reconceptualizing the origins of life'.

  16. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent.

    PubMed

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-07

    We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that of no topological constraint if the excluded volume is small and the number of segments is large. We call it topological swelling. We argue an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAP consisting of hard cylindrical segments with various different values of the radius of segments. Here we mean by the average size the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots. Here we define the topological balance length of a knot by such a number of segments that topological entropic repulsions are balanced with the knot complexity in the average size. The additivity suggests the local knot picture.
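
    A rough numerical sketch of measuring the size scaling (without the topological machinery of the paper): generate approximately closed random polygons, compute the mean-square radius of gyration, and fit the exponent. The mean-subtraction closure trick and the sample sizes are assumptions for illustration; phantom (no excluded volume) polygons should give an exponent near 1/2.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def random_polygon(n):
        """Closed random polygon: n random unit steps, with closure enforced
        by subtracting the mean step (a crude approximate construction)."""
        steps = rng.standard_normal((n, 3))
        steps /= np.linalg.norm(steps, axis=1, keepdims=True)
        steps -= steps.mean(axis=0)       # forces the walk back to the origin
        return np.cumsum(steps, axis=0)

    def gyration_radius(poly):
        c = poly.mean(axis=0)
        return np.sqrt(((poly - c) ** 2).sum(axis=1).mean())

    sizes = [64, 128, 256, 512]
    rg = [np.mean([gyration_radius(random_polygon(n)) for _ in range(200)])
          for n in sizes]
    nu = np.polyfit(np.log(sizes), np.log(rg), 1)[0]
    print("scaling exponent nu ~", round(nu, 3))
    ```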

  17. Conditions where random phase approximation becomes exact in the high-density limit

    NASA Astrophysics Data System (ADS)

    Morawetz, Klaus; Ashokan, Vinod; Bala, Renu; Pathak, Kare Narain

    2018-04-01

    It is shown that, in d-dimensional systems, the vertex corrections beyond the random phase approximation (RPA) or GW approximation scale with the power d − β − α of the Fermi momentum, if the relation between the Fermi energy and the Fermi momentum is ε_F ∼ p_F^β and the interaction potential possesses a momentum power law ∼ p^(−α). The condition d − β − α < 0 specifies systems where RPA is exact in the high-density limit. The one-dimensional structure factor is found to be the interaction-free one in the high-density limit for contact interaction. A cancellation of RPA and vertex corrections renders this result valid up to second order in the contact interaction. For finite-range potentials of cylindrical wires a large-scale cancellation appears and is found to be independent of the width parameter of the wire. The proposed high-density expansion agrees with quantum Monte Carlo simulations.
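
    Restated compactly in LaTeX (a transcription of the criterion above, with V(p) denoting the interaction potential; no additional results are implied):

    ```latex
    \[
      \varepsilon_F \sim p_F^{\beta}, \qquad
      V(p) \sim p^{-\alpha}, \qquad
      \text{vertex corrections} \sim p_F^{\,d-\beta-\alpha}
      \;\Longrightarrow\;
      \text{RPA exact as } p_F \to \infty \text{ when } d-\beta-\alpha < 0 .
    \]
    ```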

  18. On the theory of Lorentz gases with long range interactions

    NASA Astrophysics Data System (ADS)

    Nota, Alessia; Simonella, Sergio; Velázquez, Juan J. L.

    We construct and study the stochastic force field generated by a Poisson distribution of sources at finite density, x₁, x₂, … ∈ ℝ³, each of them yielding a long-range potential QᵢΦ(x − xᵢ) with possibly different charges Qᵢ ∈ ℝ. The potential Φ is assumed to behave typically as |x|⁻ˢ for large |x|, with s > 1/2. We will denote the resulting random field as the "generalized Holtsmark field". We then consider the dynamics of one tagged particle in such random force fields, in several scaling limits where the mean free path is much larger than the average distance between the scatterers. We estimate the diffusive time scale and identify conditions for the vanishing of correlations. These results are used to obtain appropriate kinetic descriptions in terms of a linear Boltzmann or Landau evolution equation, depending on the specific choice of the interaction potential.

  19. From random microstructures to representative volume elements

    NASA Astrophysics Data System (ADS)

    Zeman, J.; Šejnoha, M.

    2007-06-01

    A unified treatment of random microstructures proposed in this contribution opens the way to efficient solutions of large-scale real-world problems. The paper introduces the notion of a statistically equivalent periodic unit cell (SEPUC) that replaces, in a computational step, the actual complex geometries on an arbitrary scale. A SEPUC is constructed such that its morphology conforms with images of real microstructures. Here, the widely used two-point probability function and the lineal path function are employed to classify, from the statistical point of view, the geometrical arrangement of various material systems. Examples of statistically equivalent unit cells constructed for a unidirectional fibre tow, a plain weave textile composite and an irregular-coursed masonry wall are given. A specific result promoting the applicability of the SEPUC as a tool for the derivation of homogenized effective properties that are subsequently used in an independent macroscopic analysis is also presented.

  20. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent

    NASA Astrophysics Data System (ADS)

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-01

    We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that of SAPs under no topological constraint if the excluded volume is small and the number of segments is large. We call this topological swelling. We argue for an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAPs consisting of hard cylindrical segments with various values of the segment radius. Here, by the average size we mean the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all its constituent prime knots. Here we define the topological balance length of a knot as the number of segments at which the topological entropic repulsion balances the knot complexity in the average size. The additivity suggests the local knot picture.

  1. Fully synchronous solutions and the synchronization phase transition for the finite-N Kuramoto model

    NASA Astrophysics Data System (ADS)

    Bronski, Jared C.; DeVille, Lee; Jip Park, Moon

    2012-09-01

    We present a detailed analysis of the stability of phase-locked solutions to the Kuramoto system of oscillators. We derive an analytical expression counting the dimension of the unstable manifold associated to a given stationary solution. From this we are able to derive a number of consequences, including analytic expressions for the first and last frequency vectors to phase-lock, upper and lower bounds on the probability that a randomly chosen frequency vector will phase-lock, and very sharp results on the large N limit of this model. One of the surprises in this calculation is that for frequencies that are Gaussian distributed, the correct scaling for full synchrony is not the one commonly studied in the literature; rather, there is a logarithmic correction to the scaling which is related to the extremal value statistics of the random frequency vector.
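
    For readers who want to experiment with the finite-N setting discussed above, a minimal sketch follows. This is not the authors' code; the coupling strength, step size, run length, and locking tolerance are illustrative assumptions.

```python
# Minimal finite-N Kuramoto sketch: integrate the phases and test for locking.
# K, dt, steps, and tol are illustrative assumptions, not values from the paper.
import numpy as np

def kuramoto_locks(omega, K=4.0, dt=0.02, steps=5000, tol=1e-5):
    """Euler-integrate dtheta_i/dt = omega_i + (K/N) sum_j sin(theta_j - theta_i)
    and report whether the system reaches a phase-locked (co-rotating) state."""
    n = omega.size
    theta = np.zeros(n)
    for _ in range(steps):
        coupling = (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + coupling)
    # In a locked state all oscillators rotate at the common mean frequency,
    # so the spread of instantaneous frequencies must vanish.
    vel = omega + (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return vel.max() - vel.min() < tol

rng = np.random.default_rng(0)
locked = [kuramoto_locks(rng.normal(size=50)) for _ in range(20)]
print(sum(locked) / len(locked))  # crude estimate of P(lock) for Gaussian frequencies
```

    Repeating this over many frequency draws gives a Monte Carlo estimate of the locking probability whose N-dependence can be compared against the bounds discussed in the abstract.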

  2. Optimizing Implementation of Obesity Prevention Programs: A Qualitative Investigation Within a Large-Scale Randomized Controlled Trial.

    PubMed

    Kozica, Samantha L; Teede, Helena J; Harrison, Cheryce L; Klein, Ruth; Lombard, Catherine B

    2016-01-01

    The prevalence of obesity in rural and remote areas is elevated in comparison to urban populations, highlighting the need for interventions targeting obesity prevention in these settings. Implementing evidence-based obesity prevention programs is challenging. This study aimed to investigate factors influencing the implementation of obesity prevention programs, including adoption, program delivery, community uptake, and continuation, specifically within rural settings. Nested within a large-scale randomized controlled trial, a qualitative exploratory approach was adopted, with purposive sampling techniques utilized, to recruit stakeholders from 41 small rural towns in Australia. In-depth semistructured interviews were conducted with clinical health professionals, health service managers, and local government employees. Open coding was completed independently by 2 investigators and thematic analysis undertaken. In-depth interviews revealed that obesity prevention programs were valued by the rural workforce. Program implementation was influenced by interrelated factors grouped into two themes: (1) contextual factors and (2) organizational capacity. Key recommendations to manage the challenges of implementing evidence-based programs focused on reducing program delivery costs, aided by the provision of a suite of implementation and evaluation resources. Informing the scale-up of future prevention programs, stakeholders highlighted the need to build local rural capacity through developing supportive university partnerships, generating local program ownership and promoting active feedback to all program partners. We demonstrate that the rural workforce places a high value on obesity prevention programs. Our results inform the future scale-up of obesity prevention programs, providing an improved understanding of strategies to optimize implementation of evidence-based prevention programs. © 2015 National Rural Health Association.

  3. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
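
    A small sketch of the sampling scheme as we read it from the abstract follows; this is not the authors' implementation, and the Gaussian fall-off and radius are assumptions (the paper only says inclusion probability depends on distance from the initially selected pixel).

```python
# Sketch of localized random sampling: pick a random center pixel, then include
# nearby pixels with a probability that decays with distance from the center.
import numpy as np

def localized_random_mask(shape, n_centers, radius=3.0, rng=None):
    """Return a boolean sampling mask built from n_centers localized draws."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, dtype=bool)
    for _ in range(n_centers):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        dist = np.hypot(yy - cy, xx - cx)
        # Gaussian fall-off is our assumption for the distance-dependent probability.
        p = np.exp(-(dist / radius) ** 2)
        mask |= rng.random(shape) < p
    return mask

mask = localized_random_mask((64, 64), n_centers=40)
print(mask.mean())  # fraction of pixels measured
```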

  4. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors in scaled values derived from paired-comparison-based scaling methods are simulated with a randomly introduced proportion of choice errors that follows the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling-error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can have large errors in the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors in actually scaled values of color image prints as measured by the method of paired comparison.
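
    A Monte Carlo sketch in the spirit of that simulation follows, under Thurstone Case V assumptions; the stimulus spacing, sample sizes, and trial counts are invented for illustration.

```python
# Monte Carlo estimate of the scaling error of paired-comparison data:
# sample choice proportions binomially, re-derive scale values, measure the error.
import numpy as np
from scipy.stats import norm

def simulate_scaling_error(true_scale, n_obs, n_trials=200, rng=None):
    """Average std of (recovered - true) scale values over Monte Carlo trials."""
    if rng is None:
        rng = np.random.default_rng()
    errs = []
    for _ in range(n_trials):
        # True choice probabilities under Case V (unit-variance discriminal processes).
        p_true = norm.cdf(true_scale[:, None] - true_scale[None, :])
        # Binomially sampled observed proportions for each stimulus pair.
        p_obs = rng.binomial(n_obs, p_true) / n_obs
        p_obs = np.clip(p_obs, 1.0 / (2 * n_obs), 1 - 1.0 / (2 * n_obs))
        z = norm.ppf(p_obs)                 # z[i, j] estimates S_i - S_j
        est = z.mean(axis=1)                # row means recover S_i up to a constant
        est -= est.mean()
        errs.append(np.std(est - (true_scale - true_scale.mean())))
    return float(np.mean(errs))

scale = np.linspace(0.0, 2.0, 6)            # 6 stimuli, assumed spacing
print(simulate_scaling_error(scale, n_obs=20))
```

    Sweeping `n_obs` and the number of stimuli reproduces the qualitative finding above: errors grow sharply when both are small.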

  5. Culturally adaptive storytelling intervention versus didactic intervention to improve hypertension control in Vietnam: a cluster-randomized controlled feasibility trial.

    PubMed

    Nguyen, Hoa L; Allison, Jeroan J; Ha, Duc A; Chiriboga, Germán; Ly, Ha N; Tran, Hanh T; Nguyen, Cuong K; Dang, Diem M; Phan, Ngoc T; Vu, Nguyen C; Nguyen, Quang P; Goldberg, Robert J

    2017-01-01

    Vietnam is experiencing an epidemiologic transition with an increased prevalence of non-communicable diseases. Novel, large-scale, effective, and sustainable interventions to control hypertension in Vietnam are needed. We report the results of a cluster-randomized feasibility trial at 3 months follow-up conducted in Hung Yen province, Vietnam, designed to evaluate the feasibility and acceptability of two community-based interventions to improve hypertension control: a "storytelling" intervention, "We Talk about Our Hypertension," and a didactic intervention. The storytelling intervention included stories about strategies for coping with hypertension, with patients speaking in their own words, and didactic content about the importance of healthy lifestyle behaviors including salt reduction and exercise. The didactic intervention included only didactic content. The storytelling intervention was delivered by two DVDs at 3-month intervals; the didactic intervention included only one installment. The trial was conducted in four communes, equally randomized to the two interventions. The mean age of the 160 study patients was 66 years, and 54% were men. Most participants described both interventions as understandable, informative, and motivational. Between baseline and 3 months, mean systolic blood pressure declined by 8.2 mmHg (95% CI 4.1-12.2) in the storytelling group and by 5.5 mmHg (95% CI 1.4-9.5) in the didactic group. The storytelling group also reported a significant increase in hypertension medication adherence. Both interventions were well accepted in several rural communities and were shown to be potentially effective in lowering blood pressure. A large-scale randomized trial is needed to compare the effectiveness of the two interventions in controlling hypertension. ClinicalTrials.gov, NCT02483780.

  6. Temporal behavior of the effective diffusion coefficients for transport in heterogeneous saturated aquifers

    NASA Astrophysics Data System (ADS)

    Suciu, N.; Vamos, C.; Vereecken, H.; Vanderborght, J.; Hardelauf, H.

    2003-04-01

    When the small-scale transport is modeled by a Wiener process and the large-scale heterogeneity by a random velocity field, the effective coefficients, D_eff, can be decomposed as the sum of the local coefficient, D, a contribution of the random advection, D_adv, and a contribution of the randomness of the trajectory of the plume center of mass, D_cm: D_eff = D + D_adv − D_cm. The coefficient D_adv is similar to that introduced by Taylor in 1921, and more recent works associate it with thermodynamic equilibrium. The ``ergodic hypothesis'' says that over large time intervals D_cm vanishes and the effect of the heterogeneity is described by D_adv = D_eff − D. In this work we investigate numerically the long-time behavior of the effective coefficients as well as the validity of the ergodic hypothesis. The transport in every realization of the velocity field is modeled with the Global Random Walk algorithm, which is able to track as many particles as necessary to achieve a statistically reliable simulation of the process. Averages over realizations are further used to estimate mean coefficients and standard deviations. In order to remain in the frame of most of the theoretical approaches, the velocity field was generated in a linear approximation, and the logarithm of the hydraulic conductivity was taken to have an exponentially decaying correlation with variance equal to 0.1. Our results show that even under these idealized conditions the effective coefficients tend to asymptotic constant values only when the plume travels thousands of correlation lengths (while first-order theories usually predict Fickian behavior after tens of correlation lengths) and that the ergodicity conditions are still far from being met.
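
    A toy numerical illustration of that decomposition follows. This is ours, not the paper's Global Random Walk code: a random constant drift per realization stands in for the correlated velocity field, and all parameter values are arbitrary.

```python
# Toy check of D_eff = D + D_adv - D_cm from an ensemble of random-walk plumes.
import numpy as np

rng = np.random.default_rng(1)
D = 0.01                                   # local (Wiener) coefficient
n_real, n_part, n_steps, dt = 200, 500, 400, 0.05

drift = rng.normal(0.0, 0.2, size=(n_real, 1))   # one random velocity per realization
x = np.zeros((n_real, n_part))
t, var_in, var_cm = [], [], []
for step in range(1, n_steps + 1):
    x += drift * dt + np.sqrt(2 * D * dt) * rng.normal(size=x.shape)
    t.append(step * dt)
    var_in.append(x.var(axis=1).mean())    # mean within-realization plume variance
    var_cm.append(x.mean(axis=1).var())    # variance of the plume center of mass

t, var_in, var_cm = map(np.asarray, (t, var_in, var_cm))
D_eff = 0.5 * np.polyfit(t, var_in, 1)[0]  # half-slope: spreading about own center
D_cm = 0.5 * np.gradient(var_cm, t)[-1]    # keeps growing for this correlated drift
print(D_eff, D_cm, D_eff + D_cm - D)       # last value estimates D_adv at late time
```

    With a frozen drift the plume spreads about its own center at rate D while the center-of-mass variance grows without bound, so D_cm never vanishes: a compact caricature of the ergodicity failure reported above.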

  7. Sustained Activity in Hierarchical Modular Neural Networks: Self-Organized Criticality and Oscillations

    PubMed Central

    Wang, Sheng-Jun; Hilgetag, Claus C.; Zhou, Changsong

    2010-01-01

    Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. In particular, they are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are at the state of self-organized criticality (SOC). We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. Previously, it was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We found that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and SOC, which are not present in the respective random networks. The mechanism underlying the sustained activity is that each dense module cannot sustain activity on its own, but displays SOC in the presence of weak perturbations. Therefore, the hierarchical modular networks provide the coupling among subsystems with SOC. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of critical states and the predictability and timing of oscillations for efficient information processing. PMID:21852971

  8. Non-stationarities in the relationships of heavy precipitation events in the Mediterranean area and the large-scale circulation in the second half of the 20th century

    NASA Astrophysics Data System (ADS)

    Merkenschlager, Christian; Hertig, Elke; Jacobeit, Jucundus

    2017-04-01

    In the context of analyzing temporally varying relationships between heavy precipitation events in the Mediterranean area and associated anomalies of the large-scale circulation, quantile regression models were established. The models were calibrated using different circulation and thermodynamic variables at the 700 hPa and 850 hPa levels as predictors, as well as daily precipitation time series at different stations in the Mediterranean area as predictands. Analyses were done for the second half of the 20th century. To assess non-stationarities in the predictor-predictand relationships, the time series were divided into calibration and validation periods. 100 randomized subsamples were used to calibrate/validate the models under stationary conditions. The highest and lowest skill scores of the 100 random samples were used to determine the range of random variability. The model performance under non-stationary conditions was derived from the skill scores of cross-validated running subintervals. If the skill scores of several consecutive years fell outside the range of random variability, a non-stationarity was declared. Particularly the Iberian Peninsula and the Levant region were affected by non-stationarities, the former with significant positive deviations of the skill scores, the latter with significant negative deviations. By means of a case study for the Levant region we identified three possible reasons for non-stationary behavior in the predictor-predictand relationships. The Mediterranean Oscillation, as a superordinate system, affects the cyclone activity in the Mediterranean basin and the location and intensity of the Cyprus low. Overall, it is demonstrated that non-stationarities have to be taken into account in statistical downscaling model development.
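
    A schematic version of that calibration/validation scheme is sketched below (our reading of the abstract): synthetic data, statsmodels' QuantReg, and a pinball-loss skill score against climatology are all our assumptions, not the authors' exact setup.

```python
# Establish the range of quantile-regression skill under stationary conditions
# from 100 randomized calibration/validation splits.
import numpy as np
import statsmodels.api as sm

def pinball(y, yhat, q):
    e = y - yhat
    return np.mean(np.maximum(q * e, (q - 1) * e))

def skill(y_cal, X_cal, y_val, X_val, q=0.9):
    """Skill of a calibrated quantile model on validation data vs climatology."""
    fit = sm.QuantReg(y_cal, sm.add_constant(X_cal)).fit(q=q)
    pred = fit.predict(sm.add_constant(X_val))
    ref = np.quantile(y_cal, q)              # climatological reference forecast
    return 1.0 - pinball(y_val, pred, q) / pinball(y_val, np.full_like(y_val, ref), q)

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                  # stand-ins for circulation predictors
y = X @ np.array([1.0, 0.5, -0.3]) + rng.gumbel(size=n)  # heavy-precipitation proxy

scores = []
for _ in range(100):
    idx = rng.permutation(n)
    cal, val = idx[: n // 2], idx[n // 2:]
    scores.append(skill(y[cal], X[cal], y[val], X[val]))
print(min(scores), max(scores))  # running-window skill outside this range would
                                 # flag a non-stationary predictor-predictand link
```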

  9. Randomized Trials Built on Sand: Examples from COPD, Hormone Therapy, and Cancer

    PubMed Central

    Suissa, Samy

    2012-01-01

    The randomized controlled trial is the fundamental study design to evaluate the effectiveness of medications and receive regulatory approval. Observational studies, on the other hand, are essential to address post-marketing drug safety issues but have also been used to uncover new indications or new benefits for already marketed drugs. Hormone replacement therapy (HRT) for instance, effective for menopausal symptoms, was reported in several observational studies during the 1980s and 1990s to also significantly reduce the incidence of coronary heart disease. This claim was refuted in 2002 by the large-scale Women’s Health Initiative randomized trial. An example of a new indication for an old drug is that of metformin, an anti-diabetic medication, which is being hailed as a potential anti-cancer agent, primarily on the basis of several recent observational studies that reported impressive reductions in cancer incidence and mortality with its use. These observational studies have now sparked the conduct of large-scale randomized controlled trials currently ongoing in cancer. We show in this paper that the spectacular effects on new indications or new outcomes reported in many observational studies in chronic obstructive pulmonary disease (COPD), HRT, and cancer are the result of time-related biases, such as immortal time bias, that tend to seriously exaggerate the benefits of a drug and that eventually disappear with the proper statistical analysis. In all, while observational studies are central to assess the effects of drugs, their proper design and analysis are essential to avoid bias. The scientific evidence on the potential beneficial effects in new indications of existing drugs will need to be more carefully assessed before embarking on long and expensive unsubstantiated trials. PMID:23908838

  10. An analytically soluble problem in fully nonlinear statistical gravitational lensing

    NASA Technical Reports Server (NTRS)

    Schneider, P.

    1987-01-01

    The amplification probability distribution p(I)dI for a point source behind a random star field which acts as the deflector exhibits an I^{-3} behavior for large amplification, as can be shown from the universality of the lens equation near critical lines. In this paper it is shown that the amplitude of the I^{-3} tail can be derived exactly for an arbitrary mass distribution of the stars, surface mass density of stars and smoothly distributed matter, and large-scale shear. This is then compared with the corresponding linear result.
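
    The universality argument can be compressed into one step; the following is our gloss of the standard fold-caustic scaling, not the paper's exact calculation of the tail amplitude.

```latex
% Near a critical line, an image's amplification diverges as the inverse
% square root of the source offset y from the caustic:
\[
  I(y) \simeq \frac{c}{\sqrt{y}}, \qquad y \to 0^{+},
\]
% so for a source position distributed uniformly near the caustic, P(y) ~ const and
\[
  p(I) = P(y)\,\Bigl|\frac{dy}{dI}\Bigr| \propto \frac{c^{2}}{I^{3}},
\]
% which is the I^{-3} tail whose amplitude the paper derives exactly.
```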

  11. Comment on "Heterodyne Lidar Returns in the Turbulent Atmosphere: Performance Evaluation of Simulated Systems"

    NASA Technical Reports Server (NTRS)

    Frehlich, Rod; Kavaya, Michael J.

    2000-01-01

    The explanation proposed by Belmonte and Rye for the difference between simulation and the zero-order theory for heterodyne lidar returns in a turbulent atmosphere is incorrect. The theoretical expansion is not developed under a square-law structure-function approximation (random-wedge atmosphere). Agreement between the simulations and the zero-order term of the theoretical expansion is produced in the limit of statistically independent paths (bistatic operation with large transmitter-receiver separation) when the simulations correctly include the large-scale gradients of the turbulent atmosphere.

  12. Redshift Survey Strategies

    NASA Astrophysics Data System (ADS)

    Jones, A. W.; Bland-Hawthorn, J.; Kaiser, N.

    1994-12-01

    In the first half of 1995, the Anglo-Australian Observatory is due to commission a wide-field (2.1°), 400-fiber, double-spectrograph system (2dF) at the f/3.3 prime focus of the AAT 3.9 m bi-national facility. The instrument should be able to measure ~4000 galaxy redshifts (assuming a magnitude limit of b_J ~ 20) in a single dark night and is therefore ideally suited to studies of large-scale structure. We have carried out simple 3D numerical simulations to judge the relative merits of sparse surveys and contiguous surveys. We generate a survey volume and fill it randomly with particles according to a selection function which mimics a magnitude-limited survey at b_J = 19.7. Each of the particles is perturbed by a Gaussian random field according to the dimensionless power spectrum k^3 P(k)/2π^2 determined by Feldman, Kaiser & Peacock (1994) from the IRAS QDOT survey. We introduce some redshift-space distortion as described by Kaiser (1987), a `thermal' component measured from pairwise velocities (Davis & Peebles 1983), and `fingers of god' due to rich clusters at random density enhancements. Our particular concern is to understand how the window function |W(k)|^2 of the survey geometry compromises the accuracy of statistical measures [e.g., P(k), ξ(r), ξ(r_σ, r_π)] commonly used in the study of large-scale structure. We also examine the reliability of various tools (e.g. genus) for describing the topological structure within a contiguous region of the survey.
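
    The window-function effect referred to above is usually stated as a convolution; the following is our gloss of that standard result, not a formula quoted from the record.

```latex
% The power spectrum estimated from a finite survey volume is the true spectrum
% convolved with the squared window function of the survey geometry:
\[
  \hat{P}(\mathbf{k}) \;=\; \int \frac{d^{3}k'}{(2\pi)^{3}}\;
  P(\mathbf{k}')\,\bigl|W(\mathbf{k}-\mathbf{k}')\bigr|^{2},
\]
% so a narrow survey geometry (broad |W|^2) smears features in P(k) and
% correlates neighbouring band-power estimates.
```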

  13. QuickMap: a public tool for large-scale gene therapy vector insertion site mapping and analysis.

    PubMed

    Appelt, J-U; Giordano, F A; Ecker, M; Roeder, I; Grund, N; Hotz-Wagenblatt, A; Opelz, G; Zeller, W J; Allgayer, H; Fruehauf, S; Laufs, S

    2009-07-01

    Several events of insertional mutagenesis in pre-clinical and clinical gene therapy studies have created intense interest in assessing the genomic insertion profiles of gene therapy vectors. For the construction of such profiles, vector-flanking sequences detected by inverse PCR, linear amplification-mediated PCR or ligation-mediated PCR need to be mapped to the host cell's genome and compared to a reference set. Although remarkable progress has been achieved in mapping gene therapy vector insertion sites, public reference sets are lacking, as are the possibilities to quickly detect non-random patterns in experimental data. We developed a tool termed QuickMap, which uniformly maps and analyzes human and murine vector-flanking sequences within seconds (available at www.gtsg.org). Besides information about hits in chromosomes and fragile sites, QuickMap automatically determines insertion frequencies within ±250 kb of genes, cancer genes, pseudogenes, transcription factor and (post-transcriptional) miRNA binding sites, CpG islands and repetitive elements (short interspersed nuclear elements (SINE), long interspersed nuclear elements (LINE), Type II elements and LTR elements). Additionally, all experimental frequencies are compared with the data obtained from a reference set containing 1,000,000 random integrations ('random set'). Thus, for the first time a tool allowing high-throughput profiling of gene therapy vector insertion sites is available. It provides a basis for large-scale insertion site analyses, which is now urgently needed to discover novel gene therapy vectors with 'safe' insertion profiles.
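
    The statistical core of such a comparison can be sketched in a few lines; this is our illustration of the kind of test QuickMap's frequency comparison enables, with invented counts, not the tool's actual code.

```python
# Test whether the observed fraction of insertions within +/-250 kb of genes
# exceeds the fraction seen in a random reference set.
from scipy.stats import binomtest

observed_near_genes = 412      # hypothetical experimental insertions near genes
observed_total = 1000          # hypothetical total mapped insertions
random_fraction = 0.31         # hypothetical frequency in the random reference set

result = binomtest(observed_near_genes, observed_total,
                   p=random_fraction, alternative="greater")
print(result.pvalue)           # small p-value -> non-random insertion profile
```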

  14. [Spatial point pattern analysis of main trees and flowering Fargesia qinlingensis in Abies fargesii forests in Mt Taibai of the Qinling Mountains, China].

    PubMed

    Li, Guo Chun; Song, Hua Dong; Li, Qi; Bu, Shu Hai

    2017-11-01

    In Abies fargesii forests of the giant panda's habitats in Mt. Taibai, the spatial distribution patterns and interspecific associations of the main tree species, and their spatial associations with the understory flowering Fargesia qinlingensis, were analyzed at multiple scales by univariate and bivariate O-ring functions in point pattern analysis. The results showed that in the A. fargesii forest, the number of A. fargesii was largest but its population structure was in decline. The population of Betula platyphylla was relatively young, with a stable population structure, while the population of B. albo-sinensis was in decline. The three populations showed aggregated distributions at small scales and gradually approached random distributions with increasing spatial scale. Spatial associations among tree species appeared mainly at small scales and gradually disappeared with increasing scale. A. fargesii and B. platyphylla were positively associated with flowering F. qinlingensis at large and medium scales, whereas B. albo-sinensis was negatively associated with flowering F. qinlingensis at large and medium scales. The interaction between trees and F. qinlingensis in the habitats of the giant panda promoted the dynamic succession and development of the forests, which changed the environment of the giant panda's habitats in the Qinling Mountains.

  15. Scaling laws of passive-scalar diffusion in the interstellar medium

    NASA Astrophysics Data System (ADS)

    Colbrook, Matthew J.; Ma, Xiangcheng; Hopkins, Philip F.; Squire, Jonathan

    2017-05-01

    Passive-scalar mixing (metals, molecules, etc.) in the turbulent interstellar medium (ISM) is critical for abundance patterns of stars and clusters, galaxy and star formation, and cooling from the circumgalactic medium. However, the fundamental scaling laws remain poorly understood in the highly supersonic, magnetized, shearing regime relevant for the ISM. We therefore study the full scaling laws governing passive-scalar transport in idealized simulations of supersonic turbulence. Using simple phenomenological arguments for the variation of diffusivity with scale based on Richardson diffusion, we propose a simple fractional diffusion equation to describe the turbulent advection of an initial passive scalar distribution. These predictions agree well with the measurements from simulations, and vary with turbulent Mach number in the expected manner, remaining valid even in the presence of a large-scale shear flow (e.g. rotation in a galactic disc). The evolution of the scalar distribution is not the same as obtained using simple, constant 'effective diffusivity' as in Smagorinsky models, because the scale dependence of turbulent transport means an initially Gaussian distribution quickly develops highly non-Gaussian tails. We also emphasize that these are mean scalings that apply only to ensemble behaviours (assuming many different, random scalar injection sites): individual Lagrangian 'patches' remain coherent (poorly mixed) and simply advect for a large number of turbulent flow-crossing times.
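
    The Richardson phenomenology invoked above is, in its classical incompressible form, a textbook scaling; the paper's contribution is generalizing it to the supersonic, magnetized regime, so the formulas below are the standard version rather than the paper's own.

```latex
% Scale-dependent turbulent diffusivity (Richardson, incompressible form),
% with eps the turbulent energy dissipation rate and ell the separation scale:
\[
  D(\ell) \sim \epsilon^{1/3}\,\ell^{4/3}
  \quad\Longrightarrow\quad
  \langle \ell^{2}(t) \rangle \sim \epsilon\, t^{3},
\]
% i.e. diffusivity grows with scale, which is why an initially Gaussian scalar
% patch develops the non-Gaussian tails described above instead of obeying a
% constant "effective diffusivity".
```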

  16. Assessment of safety and immunogenicity of two different lots of diphtheria, tetanus, pertussis, hepatitis B and Haemophilus influenzae type b vaccine manufactured using small and large scale manufacturing process.

    PubMed

    Sharma, Hitt J; Patil, Vishwanath D; Lalwani, Sanjay K; Manglani, Mamta V; Ravichandran, Latha; Kapre, Subhash V; Jadhav, Suresh S; Parekh, Sameer S; Ashtagi, Girija; Malshe, Nandini; Palkar, Sonali; Wade, Minal; Arunprasath, T K; Kumar, Dinesh; Shewale, Sunil D

    2012-01-11

    Hib vaccine can be easily incorporated into the EPI vaccination schedule, as the immunization schedule of Hib is similar to that of DTP vaccine. To meet the global demand for Hib vaccine, SIIL scaled up its Hib conjugate manufacturing process. This study was conducted in Indian infants to assess and compare the immunogenicity and safety of the DTwP-HB+Hib (Pentavac®) vaccine of SIIL manufactured at large scale with the 'same vaccine' manufactured at a smaller scale. 720 infants aged 6-8 weeks were randomized (2:1 ratio) to receive 0.5 ml of Pentavac® vaccine from two different lots, one produced by the scaled-up process and the other by the small-scale process. Serum samples obtained before and at one month after the 3rd dose of vaccine from both groups were tested for IgG antibody response by ELISA and compared to assess non-inferiority. Neither immunological interference nor increased reactogenicity was observed in either of the vaccine groups. All infants developed protective antibody titres to diphtheria, tetanus and Hib disease. For the hepatitis B antigen, one child from each group remained sero-negative. The response to pertussis was 88% in the large-scale group vis-à-vis 87% in the small-scale group. Non-inferiority was concluded for all five components of the vaccine. No serious adverse event was reported in the study. The scaled-up vaccine achieved a comparable response in terms of safety and immunogenicity to the small-scale vaccine and can therefore be easily incorporated into the routine childhood vaccination programme. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Psychometric Properties of the Perceived Wellness Culture and Environment Support Scale.

    PubMed

    Melnyk, Bernadette Mazurek; Szalacha, Laura A; Amaya, Megan

    2018-05-01

    This study reports on the psychometric properties of the 11-item Perceived Wellness Culture and Environment Support Scale (PWCESS) and its relationship with employee healthy lifestyle beliefs and behaviors. Faculty and staff (N = 3959) at a large public university in the United States Midwest completed the PWCESS along with healthy lifestyle beliefs and behaviors scales. Data were randomly split into two halves, the first used to explore the PWCESS's validity and reliability and the second to confirm the findings. Principal components analysis indicated a unidimensional construct. The PWCESS was positively related to healthy lifestyle beliefs and behaviors, supporting the scale's validity. Confirmatory factor analysis supported the unidimensional construct (Cronbach's α = .92). Strong evidence supports the validity and reliability of the PWCESS. Future use of this scale could guide workplace intervention strategies to improve organizational wellness culture and employee health outcomes.
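
    The two reported psychometric steps, a split-half replication and an internal-consistency estimate, are easy to sketch; simulated item scores stand in for the PWCESS data, and the loading structure below is an assumption.

```python
# Split-half replication with Cronbach's alpha on simulated 11-item scores.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(3959, 1))                   # one underlying construct
scores = latent + 0.8 * rng.normal(size=(3959, 11))   # 11 noisy items loading on it
half = rng.permutation(3959)
explore, confirm = scores[half[:1980]], scores[half[1980:]]
print(cronbach_alpha(explore), cronbach_alpha(confirm))  # both halves should agree
```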

  18. Analgesic effects of treatments for non-specific low back pain: a meta-analysis of placebo-controlled randomized trials.

    PubMed

    Machado, L A C; Kamper, S J; Herbert, R D; Maher, C G; McAuley, J H

    2009-05-01

    Estimates of treatment effects reported in placebo-controlled randomized trials are less subject to bias than those estimates provided by other study designs. The objective of this meta-analysis was to estimate the analgesic effects of treatments for non-specific low back pain reported in placebo-controlled randomized trials. Medline, Embase, Cinahl, PsychInfo and Cochrane Central Register of Controlled Trials databases were searched for eligible trials from earliest records to November 2006. Continuous pain outcomes were converted to a common 0-100 scale and pooled using a random effects model. A total of 76 trials reporting on 34 treatments were included. Fifty percent of the investigated treatments had statistically significant effects, but for most the effects were small or moderate: 47% had point estimates of effects of <10 points on the 100-point scale, 38% had point estimates from 10 to 20 points and 15% had point estimates of >20 points. Treatments reported to have large effects (>20 points) had been investigated only in a single trial. This meta-analysis revealed that the analgesic effects of many treatments for non-specific low back pain are small and that they do not differ in populations with acute or chronic symptoms.
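
    For concreteness, random-effects pooling of mean differences on a 0-100 scale can be sketched as follows; this is a DerSimonian-Laird implementation, and the three example trials are invented numbers, not data from this review.

```python
# DerSimonian-Laird random-effects pooling of trial effects on a 0-100 scale.
import numpy as np

def dersimonian_laird(effects, variances):
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)           # Cochran's Q heterogeneity
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-trial variance
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

effects = np.array([8.0, 14.0, 22.0])                # mean pain reductions vs placebo
variances = np.array([9.0, 16.0, 25.0])              # squared standard errors
print(dersimonian_laird(effects, variances))         # pooled effect and 95% CI
```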

  19. Conic Sampling: An Efficient Method for Solving Linear and Quadratic Programming by Randomly Linking Constraints within the Interior

    PubMed Central

    Serang, Oliver

    2012-01-01

    Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics. PMID:22952741

  20. Brownian motion on random dynamical landscapes

    NASA Astrophysics Data System (ADS)

    Suñé Simon, Marc; Sancho, José María; Lindenberg, Katja

    2016-03-01

    We present a study of overdamped Brownian particles moving on a random landscape of dynamic and deformable obstacles (spatio-temporal disorder). The obstacles move randomly, assemble, and dissociate following their own dynamics. This landscape may account for a soft matter or liquid environment in which large obstacles, such as macromolecules and organelles in the cytoplasm of a living cell, or colloids or polymers in a liquid, move slowly leading to crowding effects. This representation also constitutes a novel approach to the macroscopic dynamics exhibited by active matter media. We present numerical results on the transport and diffusion properties of Brownian particles under this disorder biased by a constant external force. The landscape dynamics are characterized by a Gaussian spatio-temporal correlation, with fixed time and spatial scales, and controlled obstacle concentrations.

  1. Cutaneous lichen planus: A systematic review of treatments.

    PubMed

    Fazel, Nasim

    2015-06-01

    Various treatment modalities are available for cutaneous lichen planus. PubMed, EMBASE, the Cochrane Database of Systematic Reviews, the Cochrane Central Register of Controlled Trials, the Database of Abstracts of Reviews of Effects, and the Health Technology Assessment Database were searched for all systematic reviews and randomized controlled trials related to cutaneous lichen planus. Two systematic reviews and nine relevant randomized controlled trials were identified. Acitretin, griseofulvin, hydroxychloroquine and narrow-band ultraviolet B are demonstrated to be effective in the treatment of cutaneous lichen planus. Sulfasalazine is effective, but has an unfavorable safety profile. KH1060, a vitamin D analogue, is not beneficial in the management of cutaneous lichen planus. Evidence from large-scale randomized trials demonstrating the safety and efficacy of many other treatment modalities used to treat cutaneous lichen planus is simply not available.

  2. New probes of Cosmic Microwave Background large-scale anomalies

    NASA Astrophysics Data System (ADS)

    Aiola, Simone

    Fifty years of Cosmic Microwave Background (CMB) data played a crucial role in constraining the parameters of the ΛCDM model, where Dark Energy, Dark Matter, and Inflation are the three most important pillars not yet understood. Inflation prescribes an isotropic universe on large scales, and it generates spatially-correlated density fluctuations over the whole Hubble volume. CMB temperature fluctuations on scales bigger than a degree in the sky, affected by modes on super-horizon scale at the time of recombination, are a clean snapshot of the universe after inflation. In addition, the accelerated expansion of the universe, driven by Dark Energy, leaves a hardly detectable imprint in the large-scale temperature sky at late times. Such fundamental predictions have been tested with current CMB data and found to be in tension with what we expect from our simple ΛCDM model. Is this tension just a random fluke or a fundamental issue with the present model? In this thesis, we present a new framework to probe the lack of large-scale correlations in the temperature sky using CMB polarization data. Our analysis shows that if a suppression in the CMB polarization correlations is detected, it will provide compelling evidence for new physics on super-horizon scale. To further analyze the statistical properties of the CMB temperature sky, we constrain the degree of statistical anisotropy of the CMB in the context of the observed large-scale dipole power asymmetry. We find evidence for a scale-dependent dipolar modulation at 2.5σ. To isolate late-time signals from the primordial ones, we test the anomalously high Integrated Sachs-Wolfe effect signal generated by superstructures in the universe. We find that the detected signal is in tension with the expectations from ΛCDM at the 2.5σ level, which is somewhat smaller than what has been previously argued. To conclude, we describe the current status of CMB observations on small scales, highlighting the tensions between Planck, WMAP, and SPT temperature data and how the upcoming data release of the ACTpol experiment will contribute to this matter. We provide a description of the current status of the data-analysis pipeline and discuss its ability to recover large-scale modes.

  3. Multi-thread parallel algorithm for reconstructing 3D large-scale porous structures

    NASA Astrophysics Data System (ADS)

    Ju, Yang; Huang, Yaohui; Zheng, Jiangtao; Qian, Xu; Xie, Heping; Zhao, Xi

    2017-04-01

    Geomaterials inherently contain many discontinuous, multi-scale, geometrically irregular pores, forming a complex porous structure that governs their mechanical and transport properties. The development of an efficient reconstruction method for representing porous structures can significantly contribute toward providing a better understanding of the governing effects of porous structures on the properties of porous materials. In order to improve the efficiency of reconstructing large-scale porous structures, a multi-thread parallel scheme was incorporated into the simulated annealing reconstruction method. In the method, four correlation functions, which include the two-point probability function, the linear-path functions for the pore phase and the solid phase, and the fractal system function for the solid phase, were employed for better reproduction of the complex well-connected porous structures. In addition, a random sphere packing method and a self-developed pre-conditioning method were incorporated to cast the initial reconstructed model and select independent interchanging pairs for parallel multi-thread calculation, respectively. The accuracy of the proposed algorithm was evaluated by examining the similarity between the reconstructed structure and a prototype in terms of their geometrical, topological, and mechanical properties. Comparisons of the reconstruction efficiency of porous models with various scales indicated that the parallel multi-thread scheme significantly shortened the execution time for reconstruction of a large-scale well-connected porous model compared to a sequential single-thread procedure.
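
    A toy single-threaded sketch of the annealing idea underlying such reconstructions follows (Yeong-Torquato-style pixel swapping against one correlation function). The paper's parallel, four-function, sphere-packing-initialized scheme is far more elaborate; everything below, including the cooling schedule, is an illustrative assumption.

```python
# Anneal a binary image until its row-wise two-point function matches a target.
import numpy as np

def row_two_point(img, max_r=12):
    """S2(r) along rows: probability that two pixels r apart are both solid."""
    return np.array([(img[:, :-r] * img[:, r:]).mean() for r in range(1, max_r)])

rng = np.random.default_rng(0)
target = rng.random((32, 32)) < 0.3            # stand-in "reference" microstructure
s2_target = row_two_point(target)

model = rng.permutation(target.ravel()).reshape(target.shape)  # same volume fraction
energy = ((row_two_point(model) - s2_target) ** 2).sum()
T = 1e-4
for _ in range(10000):
    (y1, x1), (y2, x2) = rng.integers(0, 32, size=(2, 2))
    if model[y1, x1] == model[y2, x2]:
        continue                               # swapping equal phases changes nothing
    model[y1, x1], model[y2, x2] = model[y2, x2], model[y1, x1]
    new_energy = ((row_two_point(model) - s2_target) ** 2).sum()
    if new_energy > energy and rng.random() > np.exp((energy - new_energy) / T):
        model[y1, x1], model[y2, x2] = model[y2, x2], model[y1, x1]  # reject swap
    else:
        energy = new_energy                    # accept (Metropolis criterion)
    T *= 0.999                                 # simple geometric cooling schedule
print(energy)
```

    The paper's multi-thread scheme parallelizes exactly the expensive part of this loop, evaluating candidate swaps, by pre-selecting independent interchange pairs.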

  4. The topology of large-scale structure. III - Analysis of observations

    NASA Astrophysics Data System (ADS)

    Gott, J. Richard, III; Miller, John; Thuan, Trinh X.; Schneider, Stephen E.; Weinberg, David H.; Gammie, Charles; Polk, Kevin; Vogeley, Michael; Jeffrey, Scott; Bhavsar, Suketu P.; Melott, Adrian L.; Giovanelli, Riccardo; Haynes, Martha P.; Tully, R. Brent; Hamilton, Andrew J. S.

    1989-05-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a 'meatball' topology.

  5. The topology of large-scale structure. III - Analysis of observations. [in universe

    NASA Technical Reports Server (NTRS)

    Gott, J. Richard, III; Weinberg, David H.; Miller, John; Thuan, Trinh X.; Schneider, Stephen E.

    1989-01-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a 'meatball' topology.

  6. Evolution of the magnetorotational instability on initially tangled magnetic fields

    NASA Astrophysics Data System (ADS)

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.; Subramanian, Kandaswamy

    2017-12-01

    The initial magnetic field of previous magnetorotational instability (MRI) simulations has always included a significant system-scale component, even if stochastic. However, it is of conceptual and practical interest to assess whether the MRI can grow when the initial field is turbulent. The ubiquitous presence of turbulent or random flows in astrophysical plasmas generically leads to a small-scale dynamo (SSD), which would provide initial seed turbulent velocity and magnetic fields in the plasma that becomes an accretion disc. Can the MRI grow from these more realistic initial conditions? To address this, we supply a standard shearing box with isotropically forced SSD generated magnetic and velocity fields as initial conditions and remove the forcing. We find that if the initially supplied fields are too weak or too incoherent, they decay from the initial turbulent cascade faster than they can grow via the MRI. When the initially supplied fields are sufficient to allow MRI growth and sustenance, the saturated stresses, large-scale fields and power spectra match those of the standard zero net flux MRI simulation with an initial large-scale vertical field.

  7. GPURFSCREEN: a GPU based virtual screening tool using random forest classifier.

    PubMed

    Jayaraj, P B; Ajay, Mathias K; Nufail, M; Gopakumar, G; Jaleel, U C A

    2016-01-01

    In-silico methods are an integral part of the modern drug discovery paradigm. Virtual screening, an in-silico method, is used to refine data models and reduce the chemical space on which wet-lab experiments need to be performed. Virtual screening of a ligand data model requires large-scale computations, making it a highly time-consuming task. This process can be sped up by implementing parallelized algorithms on a Graphical Processing Unit (GPU). Random Forest is a robust classification algorithm that can be employed in virtual screening. A ligand-based virtual screening tool (GPURFSCREEN) that uses random forests on GPU systems has been proposed and evaluated in this paper. This tool produces optimized results in a lower execution time for large bioassay datasets. The quality of results produced by our tool on the GPU is the same as that produced in a regular serial environment. Considering the magnitude of data to be screened, the parallelized virtual screening has a significantly lower running time at high throughput. The proposed parallel tool outperforms its serial counterpart by successfully screening billions of molecules in training and prediction phases.
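
    A minimal CPU analogue of this screening pipeline can be sketched with scikit-learn's RandomForestClassifier; the GPU kernels are the paper's contribution, and the synthetic fingerprints and labels below are stand-ins for real bioassay data.

```python
# Train a random forest on binary molecular fingerprints, then rank unseen ligands.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, 1024))   # synthetic binary fingerprints
y = rng.integers(0, 2, size=5000)           # synthetic active/inactive labels

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X[:4000], y[:4000])                 # training phase
scores = clf.predict_proba(X[4000:])[:, 1]  # prediction phase: activity scores
print(scores[:10])                          # rank candidates by descending score
```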

  8. Landscape-scale spatial abundance distributions discriminate core from random components of boreal lake bacterioplankton.

    PubMed

    Niño-García, Juan Pablo; Ruiz-González, Clara; Del Giorgio, Paul A

    2016-12-01

    Aquatic bacterial communities harbour thousands of coexisting taxa. To meet the challenge of discriminating between a 'core' and a sporadically occurring 'random' component of these communities, we explored the spatial abundance distribution of individual bacterioplankton taxa across 198 boreal lakes and their associated fluvial networks (188 rivers). We found that all taxa could be grouped into four distinct categories based on model statistical distributions (normal like, bimodal, logistic and lognormal). The distribution patterns across lakes and their associated river networks showed that lake communities are composed of a core of taxa whose distribution appears to be linked to in-lake environmental sorting (normal-like and bimodal categories), and a large fraction of mostly rare bacteria (94% of all taxa) whose presence appears to be largely random and linked to downstream transport in aquatic networks (logistic and lognormal categories). These rare taxa are thus likely to reflect species sorting at upstream locations, providing a perspective of the conditions prevailing in entire aquatic networks rather than only in lakes. © 2016 John Wiley & Sons Ltd/CNRS.

  9. Opening Doors to Student Success: A Synthesis of Findings from an Evaluation at Six Community Colleges. Policy Brief

    ERIC Educational Resources Information Center

    Scrivener, Susan; Coghlan, Erin

    2011-01-01

    Only one-third of all students who enter community colleges with the intent to earn a degree or certificate actually meet this goal within six years. MDRC launched the Opening Doors Demonstration in 2003--the first large-scale random assignment study in a community college setting--to tackle this problem. Partnering with six community colleges,…

  10. Conducting Causal Effects Studies in Science Education: Considering Methodological Trade-Offs in the Context of Policies Affecting Research in Schools

    ERIC Educational Resources Information Center

    Taylor, Joseph; Kowalski, Susan; Wilson, Christopher; Getty, Stephen; Carlson, Janet

    2013-01-01

    This paper focuses on the trade-offs that lie at the intersection of methodological requirements for causal effect studies and policies that affect how and to what extent schools engage in such studies. More specifically, current federal funding priorities encourage large-scale randomized studies of interventions in authentic settings. At the same…

  11. Cluster Randomized Trial of a Large-Scale Education Initiative in the Democratic Republic of Congo: Pilot Year Impacts on Teacher Development

    ERIC Educational Resources Information Center

    Wolf, Sharon; Aber, John Lawrence; Torrente, Catalina; Rasheed, Damira; McCoy, Marissa

    2014-01-01

    A wealth of research, primarily in high income countries, has accumulated in recent years evaluating teacher effectiveness and the processes through which teachers' performance and job satisfaction can be improved (e.g., Pianta, Mashburn, Downer, Hamre & Justice, 2008; Ross, 1992; 1995). Much less is known about how these processes operate for…

  12. Class Size Effects on Literacy Skills and Literacy Interest in First Grade: A Large-Scale Investigation

    ERIC Educational Resources Information Center

    Ecalle, Jean; Magnan, Annie; Gibert, Fabienne

    2006-01-01

    This article examines the impact of class size on literacy skills and on literacy interest in beginning readers from zones with specific educational needs in France. The data came from an experiment involving first graders in which teachers and pupils were randomly assigned to the different class types (small classes of 10-12 pupils vs. regular…

  13. The Impact of Teacher Study Groups in Vocabulary on Teaching Practice, Teacher Knowledge, and Student Vocabulary Knowledge: A Large-Scale Replication Study

    ERIC Educational Resources Information Center

    Jayanthi, Madhavi; Dimino, Joseph; Gersten, Russell; Taylor, Mary Jo; Haymond, Kelly; Smolkowski, Keith; Newman-Gonchar, Rebecca

    2018-01-01

    The purpose of this replication study was to examine the impact of the Teacher Study Group (TSG) professional development in vocabulary on first-grade teachers' knowledge of vocabulary instruction and observed teaching practice, and on students' vocabulary knowledge. Sixty-two schools from 16 districts in four states were randomly assigned to…

  14. Bringing Effective Instructional Practice to Scale in American Schools: Lessons from the Long Beach Unified School District

    ERIC Educational Resources Information Center

    Zavadsky, Heather

    2016-01-01

    Workforce and societal needs have changed significantly over the past few decades while educational approaches have remained largely the same over the past 50 years. Walk into any random classroom in the United States and you will likely see instruction being delivered to students in straight rows by teachers through lecture style. It is possible…

  15. Becoming a Manual Occupation? The Construction of a Therapy Manual for Use with Language Impaired Children in Mainstream Primary Schools

    ERIC Educational Resources Information Center

    McCartney, Elspeth; Boyle, James; Bannatyne, Susan; Jessiman, Emma; Campbell, Cathy; Kelsey, Cherry; Smith, Jennifer; O'Hare, Anne

    2003-01-01

    The construction of therapy protocols for a large-scale randomized controlled trial comparing speech and language therapists and assistants, and group and individual therapy approaches for children aged 6-11 in mainstream schools is outlined. The aim was to outline the decision-making processes that led to the construction of the research therapy…

  16. Thinking, Fast and Slow? Some Field Experiments to Reduce Crime and Dropout in Chicago. NBER Working Paper 21178

    ERIC Educational Resources Information Center

    Heller, Sara B.; Shah, Anuj K.; Guryan, Jonathan; Ludwig, Jens; Mullainathan, Sendhil; Pollack, Harold A.

    2015-01-01

    We present the results of three large-scale randomized controlled trials (RCTs) carried out in Chicago, testing interventions to reduce crime and dropout by changing the decision-making of economically disadvantaged youth. We study a program called Becoming a Man (BAM), developed by the non-profit Youth Guidance, in two RCTs implemented in 2009-10…

  17. A "Politically Robust" Experimental Design for Public Policy Evaluation, with Application to the Mexican Universal Health Insurance Program

    ERIC Educational Resources Information Center

    King, Gary; Gakidou, Emmanuela; Ravishankar, Nirmala; Moore, Ryan T.; Lakin, Jason; Vargas, Manett; Tellez-Rojo, Martha Maria; Avila, Juan Eugenio Hernandez; Avila, Mauricio Hernandez; Llamas, Hector Hernandez

    2007-01-01

    We develop an approach to conducting large-scale randomized public policy experiments intended to be more robust to the political interventions that have ruined some or all parts of many similar previous efforts. Our proposed design is insulated from selection bias in some circumstances even if we lose observations; our inferences can still be…

  18. Valuing the Recreational Benefits from the Creation of Nature Reserves in Irish Forests

    Treesearch

    Riccardo Scarpa; Susan M. Chilton; W. George Hutchinson; Joseph Buongiorno

    2000-01-01

    Data from a large-scale contingent valuation study are used to investigate the effects of forest attributes on willingness to pay for forest recreation in Ireland. In particular, the presence of a nature reserve in the forest is found to significantly increase the visitors' willingness to pay. A random utility model is used to estimate the welfare change associated...

  19. Probing large-scale magnetism with the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Giovannini, Massimo

    2018-04-01

    Prior to photon decoupling magnetic random fields of comoving intensity in the nano-Gauss range distort the temperature and the polarization anisotropies of the microwave background, potentially induce a peculiar B-mode power spectrum and may even generate a frequency-dependent circularly polarized V-mode. We critically analyze the theoretical foundations and the recent achievements of an interesting trialogue involving plasma physics, general relativity and astrophysics.

  20. Cluster Randomized Trial of a Large-Scale Education Initiative in the Democratic Republic of Congo: Baseline Findings and Lessons

    ERIC Educational Resources Information Center

    Aber, John Lawrence; Torrente, Catalina; Annan, Jeannie; Bundervoet, Tom; Shivshanker, Anjuli

    2012-01-01

    The main purpose of the current paper is to describe and discuss the scientific and practical implications of pursuing rigorous developmental research in a low-income, war-afflicted country such as DRC. In addition, the paper aims to explore the individual, household and school correlates of children's academic performance and mental health and…

  1. Non-equilibrium Phase Transitions: Activated Random Walks at Criticality

    NASA Astrophysics Data System (ADS)

    Cabezas, M.; Rolla, L. T.; Sidoravicius, V.

    2014-06-01

    In this paper we present rigorous results on the critical behavior of the Activated Random Walk model. We conjecture that, on a general class of graphs and under general initial conditions, the system at the critical point does not reach an absorbing state. We prove this for the case where the sleep rate is infinite. Moreover, for the one-dimensional asymmetric system, we identify the scaling limit of the flow through the origin at criticality. The case of a finite sleep rate remains largely open, with the exception of the one-dimensional totally-asymmetric case, for which it is known that there is no fixation at criticality.

  2. The Multi-Orientable Random Tensor Model, a Review

    NASA Astrophysics Data System (ADS)

    Tanasa, Adrian

    2016-06-01

    After its introduction (initially within a group field theory framework) in [Tanasa A., J. Phys. A: Math. Theor. 45 (2012), 165401, 19 pages, arXiv:1109.0694], the multi-orientable (MO) tensor model grew over the last years into a solid alternative to the celebrated colored (and colored-like) random tensor model. In this paper we review the most important results of the study of this MO model: the implementation of the 1/N expansion and of the large-N limit (N being the size of the tensor), the combinatorial analysis of the various terms of this expansion and, finally, the recent implementation of a double scaling limit.

  3. Micro-Loans, Insecticide-Treated Bednets, and Malaria: Evidence from a Randomized Controlled Trial in Orissa, India.

    PubMed

    Tarozzi, Alessandro; Mahajan, Aprajit; Blackburn, Brian; Kopf, Dan; Krishnan, Lakshmi; Yoong, Joanne

    2014-07-01

    We describe findings from the first large-scale cluster randomized controlled trial in a developing country that evaluates the uptake of a health-protecting technology, insecticide-treated bednets (ITNs), through micro-consumer loans, as compared to free distribution and control conditions. Despite a relatively high price, 52 percent of sample households purchased ITNs, highlighting the role of liquidity constraints in explaining earlier low adoption rates. We find mixed evidence of improvements in malaria indices. We interpret the results and their implications within the debate about cost sharing, sustainability and liquidity constraints in public health initiatives in developing countries.

  4. Trapping and assembling of particles and live cells on large-scale random gold nano-island substrates

    PubMed Central

    Kang, Zhiwen; Chen, Jiajie; Wu, Shu-Yuen; Chen, Kun; Kong, Siu-Kai; Yong, Ken-Tye; Ho, Ho-Pui

    2015-01-01

    We experimentally demonstrated the use of random plasmonic nano-islands for optical trapping and assembling of particles and live cells into highly organized patterns with low power density. The observed trapping effect is attributed to the net contribution of the near-field optical trapping force and the long-range thermophoretic force, which overcomes the axial convective drag force, while the lateral convection pushes the target objects into the trapping zone. Our work provides a simple platform for on-chip optical manipulation of nano- and micro-sized objects, and may find applications in the physical and life sciences. PMID:25928045

  5. Large wood transport and jam formation in a series of flume experiments

    NASA Astrophysics Data System (ADS)

    Davidson, S. L.; MacKenzie, L. G.; Eaton, B. C.

    2015-12-01

    Large wood has historically been removed from streams, resulting in the depletion of in-stream wood in waterways worldwide. As wood increases morphological and hydraulic complexity, the addition of large wood is commonly employed as a means to rehabilitate in-stream habitat. At present, however, the scientific understanding of wood mobilization and transport is incomplete. This paper presents results from a series of four flume experiments in which wood was added to a reach to investigate the piece and reach characteristics that determine wood stability and transport, as well as the time scale required for newly recruited wood to self-organize into stable jams. Our results show that wood transitions from a randomly distributed, newly recruited state to a self-organized, jam-stabilized state over the course of a single bankfull flow event. Statistical analyses of piece mobility during this transitional period indicate that piece irregularities, especially rootwads, dictate the stability of individual wood pieces; rootwad presence or absence accounts for up to 80% of the variance explained by linear regression models for transport distance. Furthermore, small pieces containing rootwads are especially stable. Large ramped pieces provide nuclei for the formation of persistent wood jams, and the frequency of these pieces in the reach affects the travel distance of mobile wood. This research shows that the simulation of realistic wood dynamics is possible using a simplified physical model. It also has management implications, as it suggests that randomly added wood may organize into persistent, stable jams, and it characterizes the time scale for this transition.

  6. Probabilistic structural mechanics research for parallel processing computers

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.

    1991-01-01

    Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods has been hampered, however, by their computationally intensive nature. Solution of PSM problems requires repeated analyses of structures that are often large and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large scale PSM problems practical.
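
    The workload the paper describes is embarrassingly parallel: independent structural analyses under sampled loads and material properties. As a hedged illustration (not the authors' methodology), the sketch below estimates a failure probability by Monte Carlo with batches spread across worker processes; the limit-state function and the distributions are invented for the example.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def failure_count(args):
    """Count limit-state exceedances in one Monte Carlo batch.

    Limit state g = R - S: failure when a random load effect S exceeds
    a random resistance R.  The lognormal R and Gumbel S used here are
    purely illustrative choices.
    """
    n_samples, seed = args
    rng = np.random.default_rng(seed)
    R = rng.lognormal(mean=np.log(300.0), sigma=0.1, size=n_samples)  # resistance
    S = rng.gumbel(loc=200.0, scale=20.0, size=n_samples)             # load effect
    return int(np.sum(S >= R))

if __name__ == "__main__":
    batches = [(1_000_000, seed) for seed in range(8)]  # one batch per worker
    with ProcessPoolExecutor() as pool:
        fails = sum(pool.map(failure_count, batches))
    n_total = sum(n for n, _ in batches)
    print(f"estimated failure probability: {fails / n_total:.2e}")
```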

  7. Peer Assessment Enhances Student Learning: The Results of a Matched Randomized Crossover Experiment in a College Statistics Class.

    PubMed

    Sun, Dennis L; Harris, Naftali; Walther, Guenther; Baiocchi, Michael

    2015-01-01

    Feedback has a powerful influence on learning, but it is also expensive to provide. In large classes it may even be impossible for instructors to provide individualized feedback. Peer assessment is one way to provide personalized feedback that scales to large classes. Besides these obvious logistical benefits, it has been conjectured that students also learn from the practice of peer assessment. However, this has never been conclusively demonstrated. Using an online educational platform that we developed, we conducted an in-class matched-set, randomized crossover experiment with high power to detect small effects. We establish that peer assessment causes a small but significant gain in student achievement. Our study also demonstrates the potential of web-based platforms to facilitate the design of high-quality experiments to identify small effects that were previously not detectable.

  8. A large-scale study of the random variability of a coding sequence: a study on the CFTR gene.

    PubMed

    Modiano, Guido; Bombieri, Cristina; Ciminelli, Bianca Maria; Belpinati, Francesca; Giorgi, Silvia; Georges, Marie des; Scotet, Virginie; Pompei, Fiorenza; Ciccacci, Cinzia; Guittard, Caroline; Audrézet, Marie Pierre; Begnini, Angela; Toepfer, Michael; Macek, Milan; Ferec, Claude; Claustres, Mireille; Pignatti, Pier Franco

    2005-02-01

    Coding single nucleotide substitutions (cSNSs) have been studied on hundreds of genes using small samples (n_g ≈ 100-150 genes). In the present investigation, a large random European population sample (average n_g ≈ 1500) was studied for a single gene, the CFTR (Cystic Fibrosis Transmembrane conductance Regulator). The nonsynonymous (NS) substitutions exhibited, in accordance with previous reports, a mean probability of being polymorphic (q > 0.005) much lower than that of the synonymous (S) substitutions, but they showed a similar rate of subpolymorphic (q < 0.005) variability. This indicates that, in autosomal genes that may have harmful recessive alleles (nonduplicated genes with important functions), genetic drift overwhelms selection in the subpolymorphic range of variability, making disadvantageous alleles behave as neutral. These results imply that the majority of the subpolymorphic nonsynonymous alleles of these genes are selectively negative or even pathogenic.

  9. Building up the spin - orbit alignment of interacting galaxy pairs

    NASA Astrophysics Data System (ADS)

    Moon, Jun-Sung; Yoon, Suk-Jin

    2018-01-01

    Galaxies are not just randomly distributed throughout space. Instead, they are aligned over a wide range of scales, from the cosmic web down to pairs of galaxies. Motivated by recent findings that the spin and orbital angular momentum vectors of galaxy pairs tend to be parallel, we here investigate the spin-orbit orientation in close pairs using the Illustris cosmological simulation. We find that since z ~ 1, the parallel alignment has become progressively stronger with time through repetitive encounters. Pair interactions are preferentially prograde at z = 0 (over 5σ significance). The prograde fraction at z = 0 is larger for the pairs influenced more heavily by each other during their evolution. We find no correlation between the spin-orbit orientation and the surrounding large-scale structure. Our results favor the scenario in which the alignment in close pairs is caused by later tidal interactions, rather than by primordial torquing from the large-scale structures.

  10. The connection-set algebra--a novel formalism for the representation of connectivity structure in neuronal network models.

    PubMed

    Djurfeldt, Mikael

    2012-07-01

    The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released.
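
    To convey the flavor of such an algebra without reproducing the released csa package's actual API, here is a hedged toy in which a connection set is a predicate over (source, target) pairs and operators build more complex sets from simpler ones; all class and function names are invented for illustration.

```python
import random

class ConnectionSet:
    """Toy connection set: a predicate over (source, target) index pairs."""
    def __init__(self, predicate):
        self.predicate = predicate

    def __add__(self, other):    # union of two connection sets
        return ConnectionSet(lambda i, j: self.predicate(i, j) or other.predicate(i, j))

    def __mul__(self, other):    # intersection
        return ConnectionSet(lambda i, j: self.predicate(i, j) and other.predicate(i, j))

    def __sub__(self, other):    # set difference
        return ConnectionSet(lambda i, j: self.predicate(i, j) and not other.predicate(i, j))

    def pairs(self, n_source, n_target):
        """Enumerate connections for finite source/target populations."""
        return [(i, j) for i in range(n_source) for j in range(n_target)
                if self.predicate(i, j)]

one_to_one = ConnectionSet(lambda i, j: i == j)

def random_set(p, seed=0):
    """Bernoulli random connectivity with connection probability p."""
    rng = random.Random(seed)
    cache = {}
    def pred(i, j):
        if (i, j) not in cache:
            cache[(i, j)] = rng.random() < p
        return cache[(i, j)]
    return ConnectionSet(pred)

# e.g. random connectivity at p = 0.1, excluding self-connections:
net = random_set(0.1) - one_to_one
print(len(net.pairs(100, 100)))
```

    The real CSA additionally handles infinite index sets and value sets such as weights and delays, which a finite predicate-based toy like this does not attempt.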

  11. Anomalous diffusion and dynamics of fluorescence recovery after photobleaching in the random-comb model

    NASA Astrophysics Data System (ADS)

    Yuste, S. B.; Abad, E.; Baumgaertner, A.

    2016-07-01

    We address the problem of diffusion on a comb whose teeth display varying lengths. Specifically, the length ℓ of each tooth is drawn from a probability distribution displaying power-law behavior at large ℓ: P(ℓ) ~ ℓ^-(1+α) (α > 0). To start with, we focus on the computation of the anomalous diffusion coefficient for the subdiffusive motion along the backbone. This quantity is subsequently used as an input to compute concentration recovery curves mimicking fluorescence recovery after photobleaching experiments in comblike geometries such as spiny dendrites. Our method is based on the mean-field description provided by the well-tested continuous-time random-walk approach for the random-comb model, and the obtained analytical result for the diffusion coefficient is confirmed by numerical simulations of a random walk with finite steps in time and space along the backbone and the teeth. We subsequently incorporate retardation effects arising from binding-unbinding kinetics into our model and obtain a scaling law characterizing the corresponding change in the diffusion coefficient. Finally, we show that recovery curves obtained with the help of the analytical expression for the anomalous diffusion coefficient cannot be fitted perfectly by a model based on scaled Brownian motion, i.e., a standard diffusion equation with a time-dependent diffusion coefficient. However, differences between the exact curves and such fits are small, thereby providing justification for the practical use of models relying on scaled Brownian motion as a fitting procedure for recovery curves arising from particle diffusion in comblike systems.
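
    A direct random-walk simulation of the comb is easy to set up. The sketch below (an illustrative toy with assumed parameters, not the paper's CTRW machinery) draws tooth lengths by inverse-transform sampling from P(ℓ) ~ ℓ^-(1+α) and measures the mean-squared displacement along the backbone, which grows sublinearly in time for finite α.

```python
import random

def tooth_length(alpha, rng, l_min=1):
    """Draw a tooth length from P(l) ~ l^-(1+alpha) by inverse transform."""
    return int(l_min * rng.random() ** (-1.0 / alpha))

def msd_on_comb(alpha=0.5, n_backbone=2001, n_walkers=500, n_steps=2000, seed=0):
    """Mean-squared displacement along the backbone of a random comb."""
    rng = random.Random(seed)
    teeth = [tooth_length(alpha, rng) for _ in range(n_backbone)]
    origin = n_backbone // 2
    msd = 0.0
    for _ in range(n_walkers):
        x, y = origin, 0
        for _ in range(n_steps):
            if y == 0:                     # on the backbone
                moves = [(-1, 0), (1, 0)]
                if teeth[x] > 0:
                    moves.append((0, 1))   # may enter the local tooth
            else:                          # inside a tooth: only up/down
                moves = [(0, -1)]
                if y < teeth[x]:
                    moves.append((0, 1))
            dx, dy = rng.choice(moves)
            x = max(0, min(n_backbone - 1, x + dx))
            y += dy
        msd += (x - origin) ** 2
    return msd / n_walkers

# subdiffusion: the result grows sublinearly with n_steps for finite alpha
print(msd_on_comb())
```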

  12. CoDuSe group exercise programme improves balance and reduces falls in people with multiple sclerosis: A multi-centre, randomized, controlled pilot study.

    PubMed

    Carling, Anna; Forsberg, Anette; Gunnarsson, Martin; Nilsagård, Ylva

    2017-09-01

    Imbalance leading to falls is common in people with multiple sclerosis (PwMS). The aim was to evaluate the effects of a balance group exercise programme (CoDuSe) on balance and walking in PwMS (Expanded Disability Status Scale, 4.0-7.5). This was a multi-centre, randomized, controlled, single-blinded pilot study with random allocation to early or late start of exercise, with the latter group serving as a control group for the physical function measures. In total, 14 supervised 60-minute exercise sessions were delivered over 7 weeks. Pretest-posttest analyses were conducted for self-reported near falls and falls in the group starting late. The primary outcome was the Berg Balance Scale (BBS). A total of 51 participants were initially enrolled; three were lost to follow-up. Post-intervention, the exercise group showed statistically significant improvement (p = 0.015) in BBS and borderline-significant improvement in the MS Walking Scale (p = 0.051), both with large effect sizes (3.66; -2.89). No other significant differences were found between groups. In the group starting late, the numbers of falls and near falls were statistically significantly reduced after exercise compared to before (p < 0.001; p < 0.004). This pilot study suggests that the CoDuSe exercise programme improved balance and reduced perceived walking limitations, compared to no exercise. The intervention reduced fall and near-fall frequency.

  13. Dynamic effective connectivity in cortically embedded systems of recurrently coupled synfire chains.

    PubMed

    Trengove, Chris; Diesmann, Markus; van Leeuwen, Cees

    2016-02-01

    As a candidate mechanism of neural representation, large numbers of synfire chains can efficiently be embedded in a balanced recurrent cortical network model. Here we study a model in which multiple synfire chains of variable strength are randomly coupled together to form a recurrent system. The system can be implemented both as a large-scale network of integrate-and-fire neurons and as a reduced model. The latter has binary-state pools as basic units but is otherwise isomorphic to the large-scale model, and provides an efficient tool for studying its behavior. Both the large-scale system and its reduced counterpart are able to sustain ongoing endogenous activity in the form of synfire waves, the proliferation of which is regulated by negative feedback caused by collateral noise. Within this equilibrium, diverse repertoires of ongoing activity are observed, including meta-stability and multiple steady states. These states arise in concert with an effective connectivity structure (ECS). The ECS admits a family of effective connectivity graphs (ECGs), parametrized by the mean global activity level. Of these graphs, the strongly connected components and their associated out-components account to a large extent for the observed steady states of the system. These results imply a notion of dynamic effective connectivity as governing neural computation with synfire chains, and related forms of cortical circuitry with complex topologies.

  14. Treatment effect of methylphenidate on intrinsic functional brain network in medication-naïve ADHD children: A multivariate analysis.

    PubMed

    Yoo, Jae Hyun; Kim, Dohyun; Choi, Jeewook; Jeong, Bumseok

    2018-04-01

    Methylphenidate is a first-line therapeutic option for treating attention-deficit/hyperactivity disorder (ADHD); however, the changes it elicits in resting-state functional networks (RSFNs) are not well understood. This study investigated the treatment effect of methylphenidate using a variety of RSFN analyses and explored the collaborative influences of treatment-relevant RSFN changes in children with ADHD. Resting-state functional magnetic resonance imaging was acquired from 20 medication-naïve children with ADHD before methylphenidate treatment and twelve weeks later. Changes in large-scale functional connectivity were defined using independent component analysis with dual regression and graph theoretical analysis. The amplitude of low frequency fluctuation (ALFF) was measured to investigate local spontaneous activity alteration. Finally, significant findings were entered into a random forest regression to identify the feature subset that best explains symptom improvement. After twelve weeks of methylphenidate administration, large-scale connectivity was increased between the left fronto-parietal RSFN and the left insula cortex, and between the right fronto-parietal RSFN and the brainstem, while the clustering coefficient (CC) of the global network and of several nodes (the left fronto-parietal, cerebellum, and occipital pole-visual networks) decreased. ALFF was increased in the bilateral superior parietal cortex and decreased in the right inferior fronto-temporal area. A subset of the local and large-scale RSFN changes, including widespread ALFF changes and the CC of the global network and the cerebellum, explained 27.1% of the variance in the ADHD Rating Scale and 13.72% in the Conners' Parent Rating Scale. Our multivariate approach suggests that the neural mechanism of methylphenidate treatment could be associated with alteration of spontaneous activity in the superior parietal cortex and widespread brain regions, as well as functional segregation of the large-scale intrinsic functional network.

  15. Internet-Assisted Parent Training Intervention for Disruptive Behavior in 4-Year-Old Children: A Randomized Clinical Trial.

    PubMed

    Sourander, Andre; McGrath, Patrick J; Ristkari, Terja; Cunningham, Charles; Huttunen, Jukka; Lingley-Pottie, Patricia; Hinkka-Yli-Salomäki, Susanna; Kinnunen, Malin; Vuorio, Jenni; Sinokki, Atte; Fossum, Sturla; Unruh, Anita

    2016-04-01

    There is a large gap worldwide in the provision of evidence-based early treatment of children with disruptive behavioral problems. The aim was to determine whether an Internet-assisted intervention using whole-population screening that targets the most symptomatic 4-year-old children is effective at 6 and 12 months after the start of treatment. This 2-parallel-group randomized clinical trial was performed from October 1, 2011, through November 30, 2013, at a primary health care clinic in Southwest Finland. Data analysis was performed from August 6, 2015, to December 11, 2015. Of a screened population of 4656 children, 730 met the screening criteria indicating a high level of disruptive behavioral problems. A total of 464 parents of 4-year-old children were randomized into the Strongest Families Smart Website (SFSW) intervention group (n = 232) or an education control (EC) group (n = 232). The SFSW intervention was an 11-session Internet-assisted parent training program that included weekly telephone coaching. Outcome measures were the Child Behavior Checklist version for preschool children (CBCL/1.5-5) externalizing scale (primary outcome), other CBCL/1.5-5 scales and subscores, the Parenting Scale, the Inventory of Callous-Unemotional Traits, and the 21-item Depression, Anxiety, and Stress Scale. All data were analyzed by intention to treat and per protocol. The assessments were made before randomization and 6 and 12 months after randomization. Of the children randomized, 287 (61.9%) were male and 79 (17.1%) lived in other than a family with 2 biological parents. At 12-month follow-up, improvement in the SFSW intervention group was significantly greater compared with the control group on the following measures: CBCL/1.5-5 externalizing scale (effect size, 0.34; P < .001), internalizing scale (effect size, 0.35; P < .001), and total scores (effect size, 0.37; P < .001); 5 of 7 syndrome scales, including aggression (effect size, 0.36; P < .001), sleep (effect size, 0.24; P = .002), withdrawal (effect size, 0.25; P = .005), anxiety (effect size, 0.26; P = .003), and emotional problems (effect size, 0.31; P = .001); Inventory of Callous-Unemotional Traits callousness scores (effect size, 0.19; P = .03); and self-reported parenting skills (effect size, 0.53; P < .001). The study shows the effectiveness and feasibility of an Internet-assisted parent training intervention offered to parents of preschool children with disruptive behavioral problems screened from the whole population. The strategy of population-based screening of children at an early age combined with parent training delivered through digital technology and telephone coaching is a promising public health approach for providing early intervention for a variety of child mental health problems. clinicaltrials.gov Identifier: NCT01750996.

  16. Stochastic inflation lattice simulations - Ultra-large scale structure of the universe

    NASA Technical Reports Server (NTRS)

    Salopek, D. S.

    1991-01-01

    Non-Gaussian fluctuations for structure formation may arise in inflation from the nonlinear interaction of long wavelength gravitational and scalar fields. Long wavelength fields have spatial gradients (~a^-1) small compared to the Hubble radius, and they are described in terms of classical random fields that are fed by short wavelength quantum noise. Lattice Langevin calculations are given for a toy model with a scalar field interacting with an exponential potential, where one can obtain exact analytic solutions of the Fokker-Planck equation. For single scalar field models that are consistent with current microwave background fluctuations, the fluctuations are Gaussian. However, for scales much larger than our observable Universe, one expects large metric fluctuations that are non-Gaussian. This example illuminates non-Gaussian models involving multiple scalar fields which are consistent with current microwave background limits.

  17. Low-dose aspirin in polycythaemia vera: a pilot study. Gruppo Italiano Studio Policitemia (GISP).

    PubMed

    1997-05-01

    In this pilot study, aimed at exploring the feasibility of a large-scale trial of low-dose aspirin in polycythaemia vera (PV), 112 PV patients (42 females, 70 males, aged 17-80 years) were selected for not having a clear indication for, or contraindication to, aspirin treatment and randomized to receive oral aspirin (40 mg/d) or placebo. Follow-up duration was 16 +/- 6 months. Measurements of thromboxane A2 production during whole blood clotting demonstrated complete inhibition of platelet cyclooxygenase activity in patients receiving aspirin. Aspirin administration was not associated with any bleeding complication. Within the limitations of the small sample size, this study indicates that a biochemically effective regimen of antiplatelet therapy is well tolerated in patients with polycythaemia vera and that a large-scale placebo-controlled trial is feasible.

  18. Late-time cosmological phase transitions

    NASA Technical Reports Server (NTRS)

    Schramm, David N.

    1991-01-01

    It is shown that the potential galaxy formation and large scale structure problems of objects existing at high redshifts (Z ≳ 5), structures existing on scales of 100 Mpc, velocity flows on such scales, and minimal microwave anisotropies (ΔT/T ≲ 10^-5) can be solved if the seeds needed to generate structure form in a vacuum phase transition after decoupling. It is argued that the basic physics of such a phase transition is no more exotic than that utilized in the more traditional GUT scale phase transitions, and that, just as in the GUT case, significant random Gaussian fluctuations and/or topological defects can form. Scale lengths of ~100 Mpc for large scale structure as well as ~1 Mpc for galaxy formation occur naturally. Possible support for new physics that might be associated with such a late-time transition comes from the preliminary results of the SAGE solar neutrino experiment, implying neutrino flavor mixing with values similar to those required for a late-time transition. It is also noted that a see-saw model for the neutrino masses might also imply a tau neutrino mass that is an ideal hot dark matter candidate. However, in general either hot or cold dark matter can be consistent with a late-time transition.

  19. Many-body localization in Ising models with random long-range interactions

    NASA Astrophysics Data System (ADS)

    Li, Haoyuan; Wang, Jia; Liu, Xia-Ji; Hu, Hui

    2016-12-01

    We theoretically investigate the many-body localization phase transition in a one-dimensional Ising spin chain with random long-range spin-spin interactions, V_ij ∝ |i-j|^(-α), where the exponent of the interaction range α can be tuned from zero to infinitely large. By using exact diagonalization, we calculate the half-chain entanglement entropy and the energy spectral statistics and use them to characterize the phase transition towards the many-body localization phase at infinite temperature and at sufficiently large disorder strength. We perform finite-size scaling to extract the critical disorder strength and the critical exponent of the divergent localization length. With increasing α, the critical exponent experiences a sharp increase at about α_c ≈ 1.2 and then gradually decreases to a value found earlier in a disordered short-ranged interacting spin chain. For α < α_c, we find that the system is mostly localized and the increase in the disorder strength may drive a transition between two many-body localized phases. In contrast, for α > α_c, the transition is from a thermalized phase to the many-body localization phase. Our predictions could be experimentally tested with an ion-trap quantum emulator with programmable random long-range interactions, or with randomly distributed Rydberg atoms or polar molecules in lattices.

  20. On simulating large earthquakes by Green's-function addition of smaller earthquakes

    NASA Astrophysics Data System (ADS)

    Joyner, William B.; Boore, David M.

    Simulation of ground motion from large earthquakes has been attempted by a number of authors using small earthquakes (subevents) as Green's functions and summing them, generally in a random way. We present a simple model for the random summation of subevents to illustrate how seismic scaling relations can be used to constrain methods of summation. In the model, η identical subevents are added together with their start times randomly distributed over the source duration T and their waveforms scaled by a factor κ. The subevents can be considered to be distributed on a fault with later start times at progressively greater distances from the focus, simulating the irregular propagation of a coherent rupture front. For simplicity the distance between source and observer is assumed large compared to the source dimensions of the simulated event. By proper choice of η and κ the spectrum of the simulated event deduced from these assumptions can be made to conform at both low- and high-frequency limits to any arbitrary seismic scaling law. For the ω-squared model with similarity (that is, with constant M_o f_o^3 scaling, where f_o is the corner frequency), the required values are η = (M_o/M_oe)^(4/3) and κ = (M_o/M_oe)^(-1/3), where M_o is the moment of the simulated event and M_oe is the moment of the subevent. The spectra resulting from other choices of η and κ will not conform at both high and low frequency. If η is determined by the ratio of the rupture area of the simulated event to that of the subevent and κ = 1, the simulated spectrum will conform at high frequency to the ω-squared model with similarity, but not at low frequency. Because the high-frequency part of the spectrum is generally the important part for engineering applications, however, this choice of values for η and κ may be satisfactory in many cases. If η is determined by the ratio of the moment of the simulated event to that of the subevent and κ = 1, the simulated spectrum will conform at low frequency to the ω-squared model with similarity, but not at high frequency. Interestingly, the high-frequency scaling implied by this latter choice of η and κ corresponds to an ω-squared model with constant M_o f_o^4, a scaling law proposed by Nuttli, although questioned recently by Haar and others. Simple scaling with κ equal to unity and η equal to the moment ratio would work if the high-frequency spectral decay were ω^-1.5 instead of ω^-2. Just the required decay is exhibited by the stochastic source model recently proposed by Joyner, if the dislocation-time function is deconvolved out of the spectrum. Simulated motions derived from such source models could be used as subevents rather than recorded motions as is usually done. This strategy is a promising approach to simulation of ground motion from an extended rupture.
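
    The summation rule is simple enough to state in a few lines of code. The following hedged sketch implements the choice quoted above for the ω-squared model with similarity: η = (M_o/M_oe)^(4/3) copies scaled by κ = (M_o/M_oe)^(-1/3), with start times uniform over the source duration. The subevent waveform here is a made-up wavelet, not recorded ground motion.

```python
import numpy as np

def simulate_large_event(subevent, dt, moment_ratio, duration, seed=0):
    """Random summation of scaled subevents (omega-squared similarity).

    eta = (Mo/Moe)**(4/3) copies of the subevent waveform, each scaled
    by kappa = (Mo/Moe)**(-1/3), are added with start times uniformly
    distributed over the source duration, as in the scaling quoted in
    the abstract above.
    """
    rng = np.random.default_rng(seed)
    eta = int(round(moment_ratio ** (4.0 / 3.0)))
    kappa = moment_ratio ** (-1.0 / 3.0)
    n_delay = int(duration / dt)
    out = np.zeros(n_delay + len(subevent))
    for start in rng.integers(0, n_delay, size=eta):
        out[start:start + len(subevent)] += kappa * subevent
    return out

# illustrative subevent: a decaying 5 Hz wavelet sampled at dt = 0.01 s
dt = 0.01
t = np.arange(0.0, 2.0, dt)
sub = np.exp(-3.0 * t) * np.sin(2.0 * np.pi * 5.0 * t)
big = simulate_large_event(sub, dt, moment_ratio=100.0, duration=8.0)
print(len(big), big.max())
```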

  1. Pilot study of large-scale production of mutant pigs by ENU mutagenesis.

    PubMed

    Hai, Tang; Cao, Chunwei; Shang, Haitao; Guo, Weiwei; Mu, Yanshuang; Yang, Shulin; Zhang, Ying; Zheng, Qiantao; Zhang, Tao; Wang, Xianlong; Liu, Yu; Kong, Qingran; Li, Kui; Wang, Dayu; Qi, Meng; Hong, Qianlong; Zhang, Rui; Wang, Xiupeng; Jia, Qitao; Wang, Xiao; Qin, Guosong; Li, Yongshun; Luo, Ailing; Jin, Weiwu; Yao, Jing; Huang, Jiaojiao; Zhang, Hongyong; Li, Menghua; Xie, Xiangmo; Zheng, Xuejuan; Guo, Kenan; Wang, Qinghua; Zhang, Shibin; Li, Liang; Xie, Fei; Zhang, Yu; Weng, Xiaogang; Yin, Zhi; Hu, Kui; Cong, Yimei; Zheng, Peng; Zou, Hailong; Xin, Leilei; Xia, Jihan; Ruan, Jinxue; Li, Hegang; Zhao, Weiming; Yuan, Jing; Liu, Zizhan; Gu, Weiwang; Li, Ming; Wang, Yong; Wang, Hongmei; Yang, Shiming; Liu, Zhonghua; Wei, Hong; Zhao, Jianguo; Zhou, Qi; Meng, Anming

    2017-06-22

    N-ethyl-N-nitrosourea (ENU) mutagenesis is a powerful tool to generate mutants on a large scale efficiently, and to discover genes with novel functions at the whole-genome level in Caenorhabditis elegans, flies, zebrafish and mice, but it has never been tried in large model animals. We describe a successful systematic three-generation ENU mutagenesis screening in pigs with the establishment of the Chinese Swine Mutagenesis Consortium. A total of 6,770 G1 and 6,800 G3 pigs were screened, 36 dominant and 91 recessive novel pig families with various phenotypes were established. The causative mutations in 10 mutant families were further mapped. As examples, the mutation of SOX10 (R109W) in pig causes inner ear malfunctions and mimics human Mondini dysplasia, and upregulated expression of FBXO32 is associated with congenital splay legs. This study demonstrates the feasibility of artificial random mutagenesis in pigs and opens an avenue for generating a reservoir of mutants for agricultural production and biomedical research.

  2. Topology of Large-Scale Structure by Galaxy Type: Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Gott, J. Richard, III; Cen, Renyue; Ostriker, Jeremiah P.

    1996-07-01

    The topology of large-scale structure is studied as a function of galaxy type using the genus statistic. In hydrodynamical cosmological cold dark matter simulations, galaxies form on caustic surfaces (Zeldovich pancakes) and then slowly drain onto filaments and clusters. The earliest forming galaxies in the simulations (defined as "ellipticals") are thus seen at the present epoch preferentially in clusters (tending toward a meatball topology), while the latest forming galaxies (defined as "spirals") are seen currently in a spongelike topology. The topology is measured by the genus (number of "doughnut" holes minus number of isolated regions) of the smoothed density-contour surfaces. The measured genus curve for all galaxies as a function of density obeys approximately the theoretical curve expected for random-phase initial conditions, but the early-forming elliptical galaxies show a shift toward a meatball topology relative to the late-forming spirals. Simulations using standard biasing schemes fail to show such an effect. Large observational samples separated by galaxy type could be used to test for this effect.

  3. Population size effects in evolutionary dynamics on neutral networks and toy landscapes

    NASA Astrophysics Data System (ADS)

    Sumedha; Martin, Olivier C.; Peliti, Luca

    2007-05-01

    We study the dynamics of a population subject to selective pressures, evolving either on RNA neutral networks or on toy fitness landscapes. We discuss the spread and the neutrality of the population in the steady state. Different limits arise depending on whether selection or random drift is dominant. In the presence of strong drift we show that the observables depend mainly on Mμ, M being the population size and μ the mutation rate, while corrections to this scaling go as 1/M; such corrections can be quite large in the presence of selection if there are barriers in the fitness landscape. Also we find that the convergence to the large-Mμ limit is linear in 1/(Mμ). Finally we introduce a protocol that minimizes drift; then observables scale like 1/M rather than 1/(Mμ), allowing one to determine the large-M limit more quickly when μ is small; furthermore the genotypic diversity increases from O(ln M) to O(M).

  4. Using First Differences to Reduce Inhomogeneity in Radiosonde Temperature Datasets.

    NASA Astrophysics Data System (ADS)

    Free, Melissa; Angell, James K.; Durre, Imke; Lanzante, John; Peterson, Thomas C.; Seidel, Dian J.

    2004-11-01

    The utility of a “first difference” method for producing temporally homogeneous large-scale mean time series is assessed. Starting with monthly averages, the method involves dropping data around the time of suspected discontinuities and then calculating differences in temperature from one year to the next, resulting in a time series of year-to-year differences for each month at each station. These first difference time series are then combined to form large-scale means, and mean temperature time series are constructed from the first difference series. When applied to radiosonde temperature data, the method introduces random errors that decrease with the number of station time series used to create the large-scale time series and increase with the number of temporal gaps in the station time series. Root-mean-square errors for annual means of datasets produced with this method using over 500 stations are estimated at no more than 0.03 K, with errors in trends less than 0.02 K decade^-1 for 1960–97 at 500 mb. For a 50-station dataset, errors in trends in annual global means introduced by the first differencing procedure may be as large as 0.06 K decade^-1 (for six breaks per series), which is greater than the standard error of the trend. Although the first difference method offers significant resource and labor advantages over methods that attempt to adjust the data, it introduces an error in large-scale mean time series that may be unacceptable in some cases.
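
    As a hedged sketch of the procedure described above (annual rather than monthly resolution, and invented toy data), the following drops values around suspected breaks, forms year-to-year first differences per station, averages the differences across stations, and accumulates them back into a large-scale mean series.

```python
import numpy as np

def first_difference_mean(series, breaks):
    """Combine station temperature series via the first-difference method.

    series: 2-D array (n_stations, n_years) of annual means, NaN for
    missing years; breaks: list of (station, year) suspected
    discontinuities whose surrounding values are dropped first.
    Returns a large-scale mean anomaly series (arbitrary offset).
    """
    data = series.astype(float).copy()
    for stn, yr in breaks:
        data[stn, max(yr - 1, 0):yr + 2] = np.nan   # drop data around the break
    diffs = data[:, 1:] - data[:, :-1]              # year-to-year first differences
    mean_diff = np.nanmean(diffs, axis=0)           # large-scale mean of differences
    mean_diff = np.where(np.isnan(mean_diff), 0.0, mean_diff)
    return np.concatenate([[0.0], np.cumsum(mean_diff)])

# toy check: a station with an artificial jump no longer biases the mean
yrs = 40
rng = np.random.default_rng(0)
base = 0.01 * np.arange(yrs) + rng.normal(0, 0.1, (2, yrs))
base[0, 20:] += 1.0                                 # artificial inhomogeneity
print(first_difference_mean(base, breaks=[(0, 20)]).round(2))
```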


  5. Developing Large-Scale Bayesian Networks by Composition: Fault Diagnosis of Electrical Power Systems in Aircraft and Spacecraft

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole Jakob; Poll, Scott; Kurtoglu, Tolga

    2009-01-01

    In this paper, we investigate the use of Bayesian networks to construct large-scale diagnostic systems. In particular, we consider the development of large-scale Bayesian networks by composition. This compositional approach reflects how (often redundant) subsystems are architected to form systems such as electrical power systems. We develop high-level specifications, Bayesian networks, clique trees, and arithmetic circuits representing 24 different electrical power systems. The largest among these 24 Bayesian networks contains over 1,000 random variables. Another BN represents the real-world electrical power system ADAPT, which is representative of electrical power systems deployed in aerospace vehicles. In addition to demonstrating the scalability of the compositional approach, we briefly report on experimental results from the diagnostic competition DXC, where the ProADAPT team, using techniques discussed here, obtained the highest scores in both Tier 1 (among 9 international competitors) and Tier 2 (among 6 international competitors) of the industrial track. While we consider diagnosis of power systems specifically, we believe this work is relevant to other system health management problems, in particular in dependable systems such as aircraft and spacecraft. (See CASI ID 20100021910 for supplemental data disk.)

  6. Literature Review: Herbal Medicine Treatment after Large-Scale Disasters.

    PubMed

    Takayama, Shin; Kaneko, Soichiro; Numata, Takehiro; Kamiya, Tetsuharu; Arita, Ryutaro; Saito, Natsumi; Kikuchi, Akiko; Ohsawa, Minoru; Kohayagawa, Yoshitaka; Ishii, Tadashi

    2017-01-01

    Large-scale natural disasters, such as earthquakes, tsunamis, volcanic eruptions, and typhoons, occur worldwide. After the Great East Japan earthquake and tsunami, our medical support operation's experiences suggested that traditional medicine might be useful for treating the various symptoms of the survivors. However, little information is available regarding herbal medicine treatment in such situations. Considering that further disasters will occur, we performed a literature review and summarized the traditional medicine approaches for treatment after large-scale disasters. We searched PubMed and Cochrane Library for articles written in English, and Ichushi for those written in Japanese. Articles published before 31 March 2016 were included. Keywords "disaster" and "herbal medicine" were used in our search. Among studies involving herbal medicine after a disaster, we found two randomized controlled trials investigating post-traumatic stress disorder (PTSD), three retrospective investigations of trauma or common diseases, and seven case series or case reports of dizziness, pain, and psychosomatic symptoms. In conclusion, herbal medicine has been used to treat trauma, PTSD, and other symptoms after disasters. However, few articles have been published, likely due to the difficulty in designing high quality studies in such situations. Further study will be needed to clarify the usefulness of herbal medicine after disasters.

  7. Asymmetric noise-induced large fluctuations in coupled systems

    NASA Astrophysics Data System (ADS)

    Schwartz, Ira B.; Szwaykowska, Klimka; Carr, Thomas W.

    2017-10-01

    Networks of interacting, communicating subsystems are common in many fields, from ecology, biology, and epidemiology to engineering and robotics. In the presence of noise and uncertainty, interactions between the individual components can lead to unexpected complex system-wide behaviors. In this paper, we consider a generic model of two weakly coupled dynamical systems, and we show how noise in one part of the system is transmitted through the coupling interface. Working synergistically with the coupling, the noise on one system drives a large fluctuation in the other, even when there is no noise in the second system. Moreover, the large fluctuation happens while the first system exhibits only small random oscillations. Uncertainty effects are quantified by showing how the characteristic time scales of noise-induced switching scale as a function of the coupling between the two parts of the system. In addition, our results show that the probability of switching in the noise-free system scales inversely as the square of the reduced noise-intensity amplitude, making switching an extremely rare event. Our results on the interplay between transmitted noise and coupling are also confirmed through simulations, which agree quite well with the analytic theory.
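
    A minimal numerical analogue of this setup, assuming invented bistable dynamics and parameter values rather than the authors' model, is two quartic double-well units coupled linearly, with noise injected only into the first; Euler-Maruyama integration shows the noise-free unit occasionally switching wells.

```python
import numpy as np

def coupled_switching(T=2000.0, dt=0.01, coupling=0.3, noise=0.4, seed=0):
    """Euler-Maruyama integration of two coupled bistable units.

    dx = (x - x**3 + coupling*(y - x)) dt + noise dW    (noisy unit)
    dy = (y - y**3 + coupling*(x - y)) dt               (noise-free unit)

    Noise enters only the first equation, yet fluctuations transmitted
    through the coupling let the second, noise-free unit switch wells.
    Returns the number of zero crossings of the noise-free unit.
    """
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n)
    y = np.empty(n)
    x[0] = y[0] = 1.0                      # both start in the right-hand well
    for k in range(n - 1):
        dW = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + (x[k] - x[k] ** 3 + coupling * (y[k] - x[k])) * dt + noise * dW
        y[k + 1] = y[k] + (y[k] - y[k] ** 3 + coupling * (x[k] - y[k])) * dt
    return int(np.sum(np.diff(np.sign(y)) != 0))

print(coupled_switching())
```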

  8. Universality from disorder in the random-bond Blume-Capel model

    NASA Astrophysics Data System (ADS)

    Fytas, N. G.; Zierenberg, J.; Theodorakis, P. E.; Weigel, M.; Janke, W.; Malakis, A.

    2018-04-01

    Using high-precision Monte Carlo simulations and finite-size scaling we study the effect of quenched disorder in the exchange couplings on the Blume-Capel model on the square lattice. The first-order transition for large crystal-field coupling is softened to become continuous, with a divergent correlation length. An analysis of the scaling of the correlation length as well as the susceptibility and specific heat reveals that it belongs to the universality class of the Ising model with additional logarithmic corrections which is also observed for the Ising model itself if coupled to weak disorder. While the leading scaling behavior of the disordered system is therefore identical between the second-order and first-order segments of the phase diagram of the pure model, the finite-size scaling in the ex-first-order regime is affected by strong transient effects with a crossover length scale L*≈32 for the chosen parameters.
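
    For orientation, a minimal single-site Metropolis sketch of the random-bond Blume-Capel model follows; it is an illustrative toy with assumed parameter values, not the high-precision simulation and finite-size-scaling machinery used in the paper.

```python
import numpy as np

def metropolis_blume_capel(L=16, delta=1.0, beta=0.6, disorder=0.3,
                           n_sweeps=500, seed=0):
    """Single-site Metropolis for a random-bond Blume-Capel model.

    Spin-1 variables s in {-1, 0, +1} on an L x L periodic lattice with
    H = -sum_<ij> J_ij s_i s_j + delta * sum_i s_i**2, where quenched
    couplings J_ij are drawn uniformly from [1-disorder, 1+disorder].
    Returns the mean absolute magnetization over the second half of
    the run (all parameter values are illustrative).
    """
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 0, 1], size=(L, L))
    Jx = rng.uniform(1 - disorder, 1 + disorder, (L, L))  # bond to right neighbour
    Jy = rng.uniform(1 - disorder, 1 + disorder, (L, L))  # bond to lower neighbour
    mags = []
    for sweep in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            new = rng.choice([-1, 0, 1])
            # local field from the four bonds touching site (i, j)
            h = (Jx[i, j] * s[i, (j + 1) % L] + Jx[i, (j - 1) % L] * s[i, (j - 1) % L]
                 + Jy[i, j] * s[(i + 1) % L, j] + Jy[(i - 1) % L, j] * s[(i - 1) % L, j])
            dE = -(new - s[i, j]) * h + delta * (new ** 2 - s[i, j] ** 2)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] = new
        if sweep >= n_sweeps // 2:           # measure after equilibration
            mags.append(abs(s.mean()))
    return float(np.mean(mags))

print(metropolis_blume_capel())
```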

  9. The feasibility of using a universal Random Forest model to map tree height across different locations and vegetation types

    NASA Astrophysics Data System (ADS)

    Su, Y.; Guo, Q.; Jin, S.; Gao, S.; Hu, T.; Liu, J.; Xue, B. L.

    2017-12-01

    Tree height is an important forest structure parameter for understanding forest ecosystems and improving the accuracy of global carbon stock quantification. Light detection and ranging (LiDAR) can provide accurate tree height measurements, but its use in large-scale tree height mapping is limited by its spatial availability. Random Forest (RF) has been one of the most commonly used algorithms for mapping large-scale tree height through the fusion of LiDAR and other remotely sensed datasets. However, how variances in vegetation type, geolocation and spatial scale across study sites influence RF results is still a question that needs to be addressed. In this study, we selected 16 study sites across four vegetation types in the United States (U.S.), each fully covered by airborne LiDAR data and 100 km² in area. The LiDAR-derived canopy height models (CHMs) were used as the ground truth to train the RF algorithm to predict canopy height from other remotely sensed variables, such as Landsat TM imagery, terrain information and climate surfaces. To address the abovementioned question, 22 models were run under different combinations of vegetation types, geolocations and spatial scales. The results show that an RF model trained at one specific location or vegetation type cannot be used to predict tree height in other locations or vegetation types. However, by training the RF model using samples from all locations and vegetation types, a universal model can be achieved for predicting canopy height across different locations and vegetation types. Moreover, the number of training samples and the targeted spatial resolution of the canopy height product have a noticeable influence on the RF prediction accuracy.
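
    A hedged sketch of the core fitting step, using scikit-learn and synthetic stand-ins for the real predictors (Landsat bands, terrain, climate) and for the LiDAR-derived response, is shown below; pooling training samples across sites is what the study's "universal model" amounts to.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# synthetic stand-ins: columns mimic Landsat reflectance, elevation,
# slope and a climate surface; the response mimics LiDAR CHM height
n = 20_000
X = rng.uniform(size=(n, 4))
height = 30 * X[:, 0] - 10 * X[:, 1] + 5 * X[:, 2] * X[:, 3] + rng.normal(0, 2, n)

# pooling samples from all sites and vegetation types into one
# training set is the step that enables a "universal" model
X_train, X_test, y_train, y_test = train_test_split(X, height, random_state=0)
rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5,
                           n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)
print("R^2 on held-out samples:", round(rf.score(X_test, y_test), 3))
```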

  10. Recurrent patterning in the daily foraging routes of hamadryas baboons (Papio hamadryas): spatial memory in large-scale versus small-scale space.

    PubMed

    Schreier, Amy L; Grove, Matt

    2014-05-01

    The benefits of spatial memory for foraging animals can be assessed on two distinct spatial scales: small-scale space (travel within patches) and large-scale space (travel between patches). While the patches themselves may be distributed at low density, within patches resources are likely densely distributed. We propose, therefore, that spatial memory for recalling the particular locations of previously visited feeding sites will be more advantageous during between-patch movement, where it may reduce the distances traveled by animals that possess this ability compared to those that must rely on random search. We address this hypothesis by employing descriptive statistics and spectral analyses to characterize the daily foraging routes of a band of wild hamadryas baboons in Filoha, Ethiopia. The baboons slept on two main cliffs--the Filoha cliff and the Wasaro cliff--and daily travel began and ended on a cliff; thus four daily travel routes exist: Filoha-Filoha, Filoha-Wasaro, Wasaro-Wasaro, Wasaro-Filoha. We use newly developed partial sum methods and distribution-fitting analyses to distinguish periods of area-restricted search from more extensive movements. The results indicate a single peak in travel activity in the Filoha-Filoha and Wasaro-Filoha routes, three peaks of travel activity in the Filoha-Wasaro routes, and two peaks in the Wasaro-Wasaro routes, and are consistent with on-the-ground observations of the foraging and ranging behavior of the baboons. In each of the four daily travel routes the "tipping points" identified by the partial sum analyses indicate transitions between travel in small- versus large-scale space. The correspondence between the quantitative analyses and the field observations suggests great utility in using these types of analyses to examine primate travel patterns, especially in distinguishing between movement in small- versus large-scale space. Only the distribution-fitting analyses are inconsistent with the field observations, which may be due to the scale at which these analyses were conducted.

  11. Recent advances in scalable non-Gaussian geostatistics: The generalized sub-Gaussian model

    NASA Astrophysics Data System (ADS)

    Guadagnini, Alberto; Riva, Monica; Neuman, Shlomo P.

    2018-07-01

    Geostatistical analysis has been introduced over half a century ago to allow quantifying seemingly random spatial variations in earth quantities such as rock mineral content or permeability. The traditional approach has been to view such quantities as multivariate Gaussian random functions characterized by one or a few well-defined spatial correlation scales. There is, however, mounting evidence that many spatially varying quantities exhibit non-Gaussian behavior over a multiplicity of scales. The purpose of this minireview is not to paint a broad picture of the subject and its treatment in the literature. Instead, we focus on very recent advances in the recognition and analysis of this ubiquitous phenomenon, which transcends hydrology and the Earth sciences, brought about largely by our own work. In particular, we use porosity data from a deep borehole to illustrate typical aspects of such scalable non-Gaussian behavior, describe a very recent theoretical model that (for the first time) captures all these behavioral aspects in a comprehensive manner, show how this allows generating random realizations of the quantity conditional on sampled values, point toward ways of incorporating scalable non-Gaussian behavior in hydrologic analysis, highlight the significance of doing so, and list open questions requiring further research.

  12. Transcriptome characterization and SSR discovery in large-scale loach Paramisgurnus dabryanus (Cobitidae, Cypriniformes).

    PubMed

    Li, Caijuan; Ling, Qufei; Ge, Chen; Ye, Zhuqing; Han, Xiaofei

    2015-02-25

    The large-scale loach (Paramisgurnus dabryanus, Cypriniformes) is a bottom-dwelling freshwater fish species found mainly in eastern Asia. The natural germplasm resources of this important aquaculture species have recently been threatened by overfishing and artificial propagation. The objective of this study is to obtain the first functional genomic resource and candidate molecular markers for future conservation and breeding research. Illumina paired-end sequencing generated over one hundred million reads that resulted in 71,887 assembled transcripts, with an average length of 1465 bp. 42,093 (58.56%) protein-coding sequences were predicted, and 43,837 transcripts had significant matches to the NCBI nonredundant protein (Nr) database. 29,389 and 14,419 transcripts were assigned into gene ontology (GO) categories and Eukaryotic Orthologous Groups (KOG), respectively. 22,102 (31.14%) transcripts were mapped to 302 KEGG pathways. In addition, 15,106 candidate SSR markers were identified, with 11,037 pairs of PCR primers designed. 400 randomly selected SSR primer pairs were validated, of which 364 (91%) produced PCR products. A further test with 41 loci and 20 large-scale loach specimens collected from the four largest lakes in China showed that 36 (87.8%) loci were polymorphic. The transcriptomic profile and SSR repertoire obtained in this study will facilitate population genetic studies and selective breeding of the large-scale loach in the future.

  13. Role of special cross-links in structure formation of bacterial DNA polymer

    NASA Astrophysics Data System (ADS)

    Agarwal, Tejal; Manjunath, G. P.; Habib, Farhat; Lakshmi Vaddavalli, Pavana; Chatterji, Apratim

    2018-01-01

    Using data from contact maps of the DNA polymer of Escherichia coli (E. coli) (at kilobase pair resolution) as an input to our model, we introduce cross-links between monomers in a bead-spring model of a ring polymer at very specific points along the chain. Via suitable Monte Carlo simulations, we show that the presence of these cross-links leads to a particular organization of the chain at large (micron) length scales of the DNA. We also investigate the structure of a ring polymer with an equal number of cross-links at random positions along the chain. We find that though the polymer does get organized at large length scales, the nature of the organization is quite different from the organization observed with cross-links at specific biologically determined positions. We used the contact map of the E. coli bacterium, which has around 4.6 million base pairs in a single circular chromosome. In our coarse-grained flexible ring polymer model, we used 4642 monomer beads and observed that around 80 cross-links are enough to induce the large-scale organization of the molecule, accounting for statistical fluctuations caused by thermal energy. The DNA chain even of a simple bacterial cell such as E. coli is much longer than typical proteins, hence we avoided methods used to tackle protein folding problems. We define new suitable quantities to identify the large-scale structure of a polymer chain with a few cross-links.

  14. Stereotypical modulations in dynamic functional connectivity explained by changes in BOLD variance.

    PubMed

    Glomb, Katharina; Ponce-Alvarez, Adrián; Gilson, Matthieu; Ritter, Petra; Deco, Gustavo

    2018-05-01

    Spontaneous activity measured in human subjects in the absence of any task exhibits complex patterns of correlation that largely correspond to the large-scale functional topographies obtained with a wide variety of cognitive and perceptual tasks. These "resting state networks" (RSNs) fluctuate over time, forming and dissolving on the scale of seconds to minutes. While these fluctuations, most prominently those of the default mode network, have been linked to cognitive function, it remains unclear whether they result from random noise or whether they index a nonstationary process which could be described as state switching. In this study, we use a sliding-window approach to relate the temporal dynamics of RSNs to global modulations in correlation and BOLD variance. We compare empirical data, phase-randomized surrogate data, and data simulated with a stationary model. We find that RSN time courses exhibit a large amount of coactivation in all three cases, and that the modulations in their activity are closely linked to the global dynamics of the underlying BOLD signal. We find that many properties of the observed fluctuations in FC and BOLD, including their ranges and their correlations amongst each other, are explained by fluctuations around the average FC structure. However, we also report some interesting characteristics that clearly support nonstationary features in the data. In particular, we find that the brain spends more time in the troughs of modulations than can be expected from stationary dynamics.
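
    The two main ingredients, sliding-window functional connectivity and phase-randomized stationary surrogates, can be sketched compactly. The code below is an illustrative reimplementation under assumed array shapes, not the authors' pipeline.

```python
import numpy as np

def sliding_window_fc(ts, width=60, step=5):
    """Sliding-window functional connectivity.

    ts: (n_timepoints, n_regions) BOLD array.  Returns one vectorized
    correlation matrix per window plus the per-window BOLD variance.
    """
    n_t, n_r = ts.shape
    iu = np.triu_indices(n_r, k=1)
    fcs, variances = [], []
    for start in range(0, n_t - width + 1, step):
        win = ts[start:start + width]
        fcs.append(np.corrcoef(win.T)[iu])         # upper-triangle FC
        variances.append(win.var())
    return np.array(fcs), np.array(variances)

def phase_randomized(ts, seed=0):
    """Stationary surrogate: randomize Fourier phases, keep amplitude spectra."""
    rng = np.random.default_rng(seed)
    f = np.fft.rfft(ts, axis=0)
    phases = rng.uniform(0, 2 * np.pi, size=f.shape)
    phases[0] = 0.0                                # keep the mean untouched
    return np.fft.irfft(np.abs(f) * np.exp(1j * phases), n=ts.shape[0], axis=0)

# toy data: 400 timepoints, 10 regions
ts = np.random.default_rng(1).normal(size=(400, 10))
fc_emp, var_emp = sliding_window_fc(ts)
fc_sur, var_sur = sliding_window_fc(phase_randomized(ts))
print(fc_emp.shape, fc_sur.shape)
```

    Comparing the spread of the empirical window-to-window FC fluctuations against the surrogate distribution is the basic test of nonstationarity the study performs.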

  15. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

    PubMed

    Snell, Kym IE; Ensor, Joie; Debray, Thomas PA; Moons, Karel GM; Riley, Richard D

    2017-01-01

    If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
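
    The recommended transformations are easy to apply in practice. Below is a hedged sketch, with invented example numbers, that pools cluster-specific C-statistics on the logit scale using DerSimonian-Laird random-effects weights (the delta method supplies the logit-scale variances) and back-transforms the pooled value; the same pattern applies on the log scale for E/O.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def expit(x):
    return 1 / (1 + np.exp(-x))

def dersimonian_laird(y, v):
    """DerSimonian-Laird random-effects pooling of estimates y with variances v."""
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                      # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1 / (v + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    return pooled, 1 / np.sum(w_star), tau2

# invented validation results: C-statistic and its SE in 6 clusters
c = np.array([0.71, 0.68, 0.75, 0.80, 0.66, 0.73])
se_c = np.array([0.02, 0.03, 0.02, 0.04, 0.03, 0.02])

# pool on the logit scale (delta-method variances), then back-transform
y = logit(c)
v = (se_c / (c * (1 - c))) ** 2
pooled, var_pooled, tau2 = dersimonian_laird(y, v)
print("pooled C-statistic:", round(float(expit(pooled)), 3))
```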

  16. Qigong Exercises for the Management of Type 2 Diabetes Mellitus

    PubMed Central

    Close, Jacqueline R.; Lilly, Harold Ryan; Guillaume, Nathalie; Sun, Guan-Cheng

    2017-01-01

    Background: The purpose of this article is to clarify and define medical qigong and to identify an appropriate study design and methodology for a large-scale study looking at the effects of qigong in patients with type 2 diabetes mellitus (T2DM), specifically subject enrollment criteria, selection of the control group and study duration. Methods: A comprehensive literature review of English databases was used to locate articles from 1980–May 2017 involving qigong and T2DM. Control groups, subject criteria and the results of major diabetic markers were reviewed and compared within each study. Definitions of qigong and its differentiation from physical exercise were also considered. Results: After a thorough review, it was found that qigong shows positive effects on T2DM; however, there were inconsistencies in control groups, research subjects and diabetic markers analyzed. It was also discovered that there is a large variation in styles and definitions of qigong. Conclusions: Qigong exercise has shown promising results in clinical experience and in randomized, controlled pilot studies for affecting aspects of T2DM including blood glucose, triglycerides, total cholesterol, weight, BMI and insulin resistance. Due to the inconsistencies in study design and methods and the lack of large-scale studies, further well-designed randomized control trials (RCT) are needed to evaluate the ‘vital energy’ or qi aspect of internal medical qigong in people who have been diagnosed with T2DM. PMID:28930273

  17. Reading Skill Transfer across Languages: Outcomes from Longitudinal Bilingual Randomized Control Trials in Kenya and Haiti

    ERIC Educational Resources Information Center

    Piper, Benjamin; Bulat, Jennae; Johnston, Andrew

    2015-01-01

    If children do not learn how to read in the first few years of primary school, they will struggle to complete the cycle, and are at greater risk of dropping out. It is therefore crucial to identify and test interventions that have the potential of making a large impact, can be implemented quickly, and are affordable to be taken to scale. This is…

  18. Fault-Tolerant Sequencer Using FPGA-Based Logic Designs for Space Applications

    DTIC Science & Technology

    2013-12-01

    [Fragment of the report's acronym list: SBU, single-bit upset; SDK, software development kit; SDRAM, synchronous dynamic random-access memory; SEB, single-event burnout; VHDL, VHSIC hardware description language; VHSIC, very-high-speed integrated circuits; VLSI, very-large-scale integration; VQFP, ...] A radiation strike can induce a transient pulse, called a single-event transient (SET), or even cause permanent damage to the device in the form of a burnout or gate rupture.

  19. Size of Coarse Woody Debris 5 Years After Girdling and Removal Treatments in 50-Year-Old Loblolly Pine Plantations

    Treesearch

    M. Boyd Edwards

    2004-01-01

    In 1996, a study began at the Savannah River Site to investigate large-scale replicated forest areas in which coarse woody debris is manipulated for integrated biodiversity objectives. The research design was a randomized complete block with four treatments replicated in four blocks, resulting in 16 plots. The treatments applied to 50-year-old loblolly pine stands were (1) control, (2)...

  20. Spin Glass Patch Planting

    NASA Technical Reports Server (NTRS)

    Wang, Wenlong; Mandra, Salvatore; Katzgraber, Helmut G.

    2016-01-01

    In this paper, we propose a patch planting method for creating arbitrarily large spin glass instances with known ground states. The scaling of the computational complexity of these instances with various block numbers and sizes is investigated and compared with random instances using population annealing Monte Carlo and the quantum annealing DW2X machine. The method can be useful for benchmarking tests for future generation quantum annealing machines, classical and quantum mechanical optimization algorithms.
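
    As a simplified, hedged illustration of planting in general (not the paper's patch-planting construction, which stitches together exactly solved blocks), the snippet below generates a 2-D Ising spin glass instance whose ground state is known by construction: every coupling is chosen so the planted configuration satisfies it.

```python
import numpy as np

def planted_instance(L=8, seed=0):
    """Generate a 2-D Ising instance with a known ground state.

    Simplified planting (not full patch planting): choose a random
    configuration s, then set every coupling J_ij = s_i * s_j, so each
    bond is satisfied by s and s is a ground state of energy -n_bonds.
    """
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    Jx = s * np.roll(s, -1, axis=1)      # horizontal bonds (periodic)
    Jy = s * np.roll(s, -1, axis=0)      # vertical bonds (periodic)
    return s, Jx, Jy

def energy(s, Jx, Jy):
    return -(np.sum(Jx * s * np.roll(s, -1, axis=1))
             + np.sum(Jy * s * np.roll(s, -1, axis=0)))

s, Jx, Jy = planted_instance()
assert energy(s, Jx, Jy) == -2 * s.size   # all 2*L*L bonds satisfied
```

    Note that this naive single-patch planting yields gauge-transformed ferromagnets that are computationally easy; avoiding exactly this triviality while retaining a known ground state is the point of composing many independently solved patches.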

  1. Impacts of Social-Emotional Curricula on Three-Year-Olds: Exploratory Findings from the Head Start CARES Demonstration. Research Snapshot. OPRE Report 2014-78

    ERIC Educational Resources Information Center

    Hsueh, JoAnn; Lowenstein, Amy E.; Morris, Pamela; Mattera, Shira K.; Bangser, Michael

    2014-01-01

    This report presents exploratory impact findings for 3-year-olds from the Head Start CARES demonstration, a large-scale randomized controlled trial implemented in Head Start centers for one academic year across the country. The study was designed primarily to test the effects of the enhancements on 4-year-olds, but it also provides an opportunity…

  2. Building rooftop classification using random forests for large-scale PV deployment

    NASA Astrophysics Data System (ADS)

    Assouline, Dan; Mohajeri, Nahid; Scartezzini, Jean-Louis

    2017-10-01

    Large scale solar Photovoltaic (PV) deployment on existing building rooftops has proven to be one of the most efficient and viable sources of renewable energy in urban areas. As it usually requires a potential analysis over the area of interest, a crucial step is to estimate the geometric characteristics of the building rooftops. In this paper, we introduce a multi-layer machine learning methodology to classify 6 roof types, 9 aspect (azimuth) classes and 5 slope (tilt) classes for all building rooftops in Switzerland, using GIS processing. We train Random Forests (RF), an ensemble learning algorithm, to build the classifiers. We use 2 × 2 m^2 LiDAR data (considering buildings and vegetation) to extract several rooftop features, and generalised building-footprint polygon data to localize buildings. The roof classifier is trained and tested with 1252 labeled roofs from three different urban areas, namely Baden, Luzern, and Winterthur. The results for roof type classification show an average accuracy of 67%. The aspect and slope classifiers are trained and tested with 11449 labeled roofs in the Zurich periphery area. The results for aspect and slope classification show different accuracies depending on the classes: while some classes are well identified, other under-represented classes remain challenging to detect.
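
    The classification step lends itself to a compact illustration. Below is a minimal sketch, assuming scikit-learn, with random numbers standing in for the LiDAR-derived rooftop features; the feature count and names are hypothetical, not the paper's actual feature set.

    ```python
    # Minimal sketch of the rooftop-classification step, assuming scikit-learn.
    # The features are synthetic stand-ins for LiDAR-derived rooftop geometry.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    n_roofs = 1252                             # as in the paper's roof classifier
    X = rng.normal(size=(n_roofs, 8))          # 8 hypothetical geometric features
    y = rng.integers(0, 6, size=n_roofs)       # 6 roof-type classes

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    ```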

  3. Integration and Analysis of Neighbor Discovery and Link Quality Estimation in Wireless Sensor Networks

    PubMed Central

    Radi, Marjan; Dezfouli, Behnam; Abu Bakar, Kamalrulnizam; Abd Razak, Shukor

    2014-01-01

    Network connectivity and link quality information are the fundamental requirements of wireless sensor network protocols to perform their desired functionality. Most of the existing discovery protocols have only focused on the neighbor discovery problem, while only a few provide integrated neighbor search and link estimation. As these protocols require a careful parameter adjustment before network deployment, they cannot provide scalable and accurate network initialization in large-scale dense wireless sensor networks with random topology. Furthermore, the performance of these protocols has not yet been fully evaluated. In this paper, we perform a comprehensive simulation study on the efficiency of employing adaptive protocols compared to the existing nonadaptive protocols for initializing sensor networks with random topology. In this regard, we propose adaptive network initialization protocols which integrate the initial neighbor discovery with the link quality estimation process to initialize large-scale dense wireless sensor networks without requiring any parameter adjustment before network deployment. To the best of our knowledge, this work is the first attempt to provide a detailed simulation study on the performance of integrated neighbor discovery and link quality estimation protocols for initializing sensor networks. This study can help system designers to determine the most appropriate approach for different applications. PMID:24678277

  4. Differential effects of antipsychotic drugs on insight in first episode schizophrenia: Data from the European First-Episode Schizophrenia Trial (EUFEST).

    PubMed

    Pijnenborg, G H M; Timmerman, M E; Derks, E M; Fleischhacker, W W; Kahn, R S; Aleman, A

    2015-06-01

    Although antipsychotics are widely prescribed, their effect on improving poor illness insight in schizophrenia has seldom been investigated and therefore remains uncertain. This paper examines the effects of low dose haloperidol, amisulpride, olanzapine, quetiapine, and ziprasidone on insight in first-episode schizophrenia, schizoaffective disorder, or schizophreniform disorder. The effects of five antipsychotic drugs on insight in first episode psychosis were compared in a large-scale open randomized controlled trial conducted in 14 European countries: the European First-Episode Schizophrenia Trial (EUFEST). Patients with at least minimal impairments in insight were included in the present study (n=455). Insight was assessed with item G12 of the Positive and Negative Syndrome Scale (PANSS), administered at baseline and at 1, 3, 6, 9, and 12 months after randomization. The use of antipsychotics was associated with clear improvements in insight over and above improvements in other symptoms. This effect was most pronounced in the first three months of treatment, with quetiapine being significantly less effective than other drugs. Effects of spontaneous improvement cannot be ruled out due to the lack of a placebo control group, although such a large spontaneous improvement of insight would seem unlikely. Copyright © 2015 Elsevier B.V. and ECNP. All rights reserved.

  5. Experiments in randomly agitated granular assemblies close to the jamming transition

    NASA Astrophysics Data System (ADS)

    Caballero, Gabriel; Lindner, Anke; Ovarlez, Guillaume; Reydellet, Guillaume; Lanuza, José; Clément, Eric

    2004-11-01

    We present the results obtained for two experiments on randomly agitated granular assemblies using a novel way of shaking. First we discuss the transport properties of a 2D model system undergoing classical shaking that show the importance of large scale dynamics for this type of agitation and offer a local view of the microscopic motions of a grain. We then develop a new way of vibrating the system allowing for random accelerations smaller than gravity. Using this method we study the evolution of the free surface as well as results from a light scattering method for a 3D model system. The final aim of these experiments is to investigate the ideas of effective temperature on the one hand as a function of inherent states and on the other hand using fluctuation dissipation relations.

  6. Experiments in randomly agitated granular assemblies close to the jamming transition

    NASA Astrophysics Data System (ADS)

    Caballero, Gabriel; Lindner, Anke; Ovarlez, Guillaume; Reydellet, Guillaume; Lanuza, José; Clément, Eric

    2004-03-01

    We present the results obtained for two experiments on randomly agitated granular assemblies using a novel way of shaking. First we discuss the transport properties of a 2D model system undergoing classical shaking that show the importance of large scale dynamics for this type of agitation and offer a local view of the microscopic motions of a grain. We then develop a new way of vibrating the system allowing for random accelerations smaller than gravity. Using this method we study the evolution of the free surface as well as results from a light scattering method for a 3D model system. The final aim of these experiments is to investigate the ideas of effective temperature on the one hand as a function of inherent states and on the other hand using fluctuation dissipation relations.

  7. Anomalous scaling of passive scalar fields advected by the Navier-Stokes velocity ensemble: effects of strong compressibility and large-scale anisotropy.

    PubMed

    Antonov, N V; Kostenko, M M

    2014-12-01

    The field theoretic renormalization group and the operator product expansion are applied to two models of passive scalar quantities (the density and the tracer fields) advected by a random turbulent velocity field. The latter is governed by the Navier-Stokes equation for compressible fluid, subject to external random force with the covariance ∝ δ(t-t') k^(4-d-y), where d is the dimension of space and y is an arbitrary exponent. The original stochastic problems are reformulated as multiplicatively renormalizable field theoretic models; the corresponding renormalization group equations possess infrared attractive fixed points. It is shown that various correlation functions of the scalar field, its powers and gradients, demonstrate anomalous scaling behavior in the inertial-convective range already for small values of y. The corresponding anomalous exponents, identified with scaling (critical) dimensions of certain composite fields ("operators" in the quantum-field terminology), can be systematically calculated as series in y. The practical calculation is performed in the leading one-loop approximation, including exponents in anisotropic contributions. It should be emphasized that, in contrast to Gaussian ensembles with finite correlation time, the model and the perturbation theory presented here are manifestly Galilean covariant. The validity of the one-loop approximation and comparison with Gaussian models are briefly discussed.

  8. Validity of Random Short Forms: III. Wechsler's Intelligence Scales.

    ERIC Educational Resources Information Center

    Silverstein, A. B.

    1983-01-01

    Formulas for estimating the validity of random short forms were applied to the standardization data for the Wechsler Adult Intelligence Scale-Revised, the Minnesota Multiphasic Personality Inventory, and the Marlowe-Crowne Social Desirability Scale. These formulas demonstrated how much "better than random" the best short forms of these…

  9. Topology of Neutral Hydrogen within the Small Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Chepurnov, A.; Gordon, J.; Lazarian, A.; Stanimirovic, S.

    2008-12-01

    In this paper, genus statistics have been applied to an H I column density map of the Small Magellanic Cloud in order to study its topology. To learn how topology changes with the scale of the system, we provide topology studies for column density maps at varying resolutions. To evaluate the statistical error of the genus, we randomly reassign the phases of the Fourier modes while keeping the amplitudes. We find that at the smallest scales studied (40 pc <= λ <= 80 pc), the genus shift is negative in all regions, implying a clump topology. At the larger scales (110 pc <= λ <= 250 pc), the topology shift is detected to be negative (a "meatball" topology) in four cases and positive (a "swiss cheese" topology) in two cases. In four regions, there is no statistically significant topology shift at large scales.
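
    The error-estimation step, randomly reassigning Fourier phases while keeping amplitudes, can be sketched compactly. Here is a minimal Python/numpy version with a synthetic stand-in for the H I column density map; the genus computation itself is omitted.

    ```python
    # Minimal sketch of the phase-randomization step used to estimate the genus
    # error bar: keep the map's Fourier amplitudes, take the phases from a
    # white-noise field (which guarantees a real-valued surrogate).
    import numpy as np

    def phase_randomize(field, rng):
        """Return a surrogate map with the same amplitude spectrum as `field`."""
        amp = np.abs(np.fft.fft2(field))
        noise_phase = np.angle(np.fft.fft2(rng.normal(size=field.shape)))
        surrogate = np.fft.ifft2(amp * np.exp(1j * noise_phase))
        return surrogate.real

    rng = np.random.default_rng(1)
    column_density = rng.lognormal(size=(128, 128))   # stand-in for the HI map
    surrogates = [phase_randomize(column_density, rng) for _ in range(100)]
    # The genus statistic would be evaluated on each surrogate to get its scatter.
    ```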

  10. Rotation and scale change invariant point pattern relaxation matching by the Hopfield neural network

    NASA Astrophysics Data System (ADS)

    Sang, Nong; Zhang, Tianxu

    1997-12-01

    Relaxation matching is one of the most relevant methods for image matching. The original relaxation matching technique using point patterns is sensitive to rotations and scale changes. We improve the original point pattern relaxation matching technique to be invariant to rotations and scale changes. A method that makes the Hopfield neural network perform this matching process is discussed. An advantage of this is that the relaxation matching process can be performed in real time with the neural network's massively parallel capability to process information. Experimental results with large simulated images demonstrate the effectiveness and feasibility of performing point pattern relaxation matching invariant to rotations and scale changes, and of performing this matching with the Hopfield neural network. In addition, we show that the method presented can be tolerant of small random errors.

  11. Numerical simulation of turbulence and terahertz magnetosonic waves generation in collisionless plasmas

    NASA Astrophysics Data System (ADS)

    Kumar, Narender; Singh, Ram Kishor; Sharma, Swati; Uma, R.; Sharma, R. P.

    2018-01-01

    This paper presents numerical simulations of laser beam (x-mode) coupling with a magnetosonic wave (MSW) in a collisionless plasma. The coupling arises through ponderomotive non-linearity. The pump beam has been perturbed by a periodic perturbation that leads to the nonlinear evolution of the laser beam. It is observed that the frequency spectra of the MSW have peaks at terahertz frequencies. The simulation results show quite complex localized structures that grow with time. The ensemble averaged power spectrum has also been studied, which indicates that the spectral index follows an approximate scaling of the order of ~ k^(-2.1) at large scales and scaling of the order of ~ k^(-3.6) at smaller scales. The results indicate considerable randomness in the spatial structure of the magnetic field profile which gives sufficient indication of turbulence.
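
    A hedged sketch of how such a spectral index is typically extracted: ensemble-average the power spectra of several snapshots, bin them isotropically in |k|, and fit a power law on log-log axes. The field below is synthetic white noise (slope near 0), standing in for the simulated magnetic-field data.

    ```python
    # Hedged sketch of a spectral-index estimate: ensemble-average the power
    # spectrum over snapshots, bin isotropically in |k|, fit log P vs log k.
    import numpy as np

    rng = np.random.default_rng(2)
    n, snapshots = 256, 10
    kx = np.fft.fftfreq(n)
    kk = np.sqrt(kx[:, None]**2 + kx[None, :]**2)

    power = np.zeros((n, n))
    for _ in range(snapshots):
        field = rng.normal(size=(n, n))        # stand-in for a field snapshot
        power += np.abs(np.fft.fft2(field))**2
    power /= snapshots

    # Isotropic binning in |k|, then a least-squares fit of the log-log slope.
    bins = np.linspace(0.01, 0.5, 30)
    idx = np.digitize(kk.ravel(), bins)
    pk = np.array([power.ravel()[idx == i].mean() for i in range(1, len(bins))])
    kc = 0.5 * (bins[:-1] + bins[1:])
    slope = np.polyfit(np.log(kc), np.log(pk), 1)[0]
    print("spectral index ~", slope)   # ~0 here, since white noise is flat
    ```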

  12. Large-scale Estimates of Leaf Area Index from Active Remote Sensing Laser Altimetry

    NASA Astrophysics Data System (ADS)

    Hopkinson, C.; Mahoney, C.

    2016-12-01

    Leaf area index (LAI) is a key parameter that describes the spatial distribution of foliage within forest canopies, which in turn controls numerous relationships between the ground, canopy, and atmosphere. LAI has been retrieved successfully from in-situ digital hemispherical photography (DHP) and airborne laser scanning (ALS) data; however, field and ALS acquisitions are often spatially limited (100's of km^2) and costly. Large-scale (>1000's of km^2) retrievals have been demonstrated by optical sensors; however, accuracies remain uncertain due to the sensors' inability to penetrate the canopy. The spaceborne Geoscience Laser Altimeter System (GLAS) provides a possible solution, retrieving large-scale derivations whilst simultaneously penetrating the canopy. LAI retrieved by multiple DHP at 6 Australian sites, representing a cross-section of Australian ecosystems, was employed to model ALS LAI, which in turn was used to infer LAI from GLAS data at 5 other sites. An optimally filtered GLAS dataset was then employed in conjunction with a host of supplementary data to build a Random Forest (RF) model to infer predictions (and uncertainties) of LAI at a 250 m resolution across the forested regions of Australia. Predictions were validated against ALS-based LAI from 20 sites (R^2 = 0.64, RMSE = 1.1 m^2 m^-2); MODIS-based LAI was also assessed against these sites (R^2 = 0.30, RMSE = 1.78 m^2 m^-2) to demonstrate the strength of GLAS-based predictions. The large-scale nature of the current predictions was also leveraged to demonstrate large-scale relationships of LAI with other environmental characteristics, such as canopy height, elevation, and slope. Such wide-scale quantification of LAI is key to the assessment and modification of forest management strategies across Australia. This work also assists Australia's Terrestrial Ecosystem Research Network in fulfilling its government-issued mandates.

  13. Improved L-BFGS diagonal preconditioners for a large-scale 4D-Var inversion system: application to CO2 flux constraints and analysis error calculation

    NASA Astrophysics Data System (ADS)

    Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng

    2013-04-01

    This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices; these error statistics are propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems, though, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method, a quasi-Newton descent algorithm, is usually considered the best method for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. Here we assess the performance of different preconditioners to estimate the inverse Hessian of a large-scale 4D-Var system. The impact of using the diagonal preconditioners proposed by Gilbert and Lemaréchal (1989) instead of the usual Oren-Spedicato scalar will be presented first. We will also introduce new hybrid methods that combine randomization estimates of the analysis error variance with L-BFGS diagonal updates to improve the inverse Hessian approximation. Results from these new algorithms will be evaluated against standard large ensemble Monte-Carlo simulations. The methods explored here are applied to the problem of inferring global atmospheric CO2 fluxes using remote sensing observations, and are intended to be integrated with the future NASA Carbon Monitoring System.
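
    The role of the diagonal preconditioner can be made concrete with the standard L-BFGS two-loop recursion, in which the initial inverse-Hessian approximation H0 enters as a single multiplication: the Oren-Spedicato scalar corresponds to H0 = γI, while the diagonal alternatives replace γ with a vector. A minimal sketch follows; it is not the authors' 4D-Var code, and the toy quadratic at the end is purely illustrative.

    ```python
    # Minimal sketch of where the preconditioner enters L-BFGS: the two-loop
    # recursion applies a diagonal initial inverse-Hessian H0. Pair storage
    # and update logic are omitted; s_list/y_list are the stored vector pairs.
    import numpy as np

    def lbfgs_direction(grad, s_list, y_list, h0_diag):
        """Return -H*grad, with H built from the stored (s, y) pairs and H0."""
        q = grad.copy()
        alphas = []
        for s, y in zip(reversed(s_list), reversed(y_list)):
            a = s.dot(q) / y.dot(s)
            alphas.append(a)
            q -= a * y
        r = h0_diag * q                      # the diagonal preconditioning step
        for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
            b = y.dot(r) / y.dot(s)
            r += (a - b) * s
        return -r

    # Toy usage on f(x) = 0.5 x^T D x, where y = grad difference = D s exactly.
    D = np.array([1.0, 10.0])                # Hessian diagonal of the quadratic
    xs = [np.array([1.0, 1.0]), np.array([0.5, 0.2]), np.array([0.3, 0.1])]
    s_list = [xs[1] - xs[0], xs[2] - xs[1]]
    y_list = [D * s for s in s_list]
    d = lbfgs_direction(D * xs[2], s_list, y_list, h0_diag=1.0 / D)
    print(d)   # ~ -xs[2]: the exact Newton step to the minimum at the origin
    ```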

  14. Finite-Difference Modeling of Seismic Wave Scattering in 3D Heterogeneous Media: Generation of Tangential Motion from an Explosion Source

    NASA Astrophysics Data System (ADS)

    Hirakawa, E. T.; Pitarka, A.; Mellors, R. J.

    2015-12-01

    One challenging task in explosion seismology is development of physical models for explaining the generation of S-waves during underground explosions. Pitarka et al. (2015) used finite difference simulations of SPE-3 (part of the Source Physics Experiment, SPE, an ongoing series of underground chemical explosions at the Nevada National Security Site) and found that while a large component of shear motion was generated directly at the source, additional scattering from heterogeneous velocity structure and topography is necessary to better match the data. Large-scale features in the velocity model used in the SPE simulations are well constrained; however, small-scale heterogeneity is poorly constrained. In our study we used a stochastic representation of small-scale variability in order to produce additional high-frequency scattering. Two methods for generating the distributions of random scatterers are tested. The first works in the spatial domain, essentially smoothing a set of random numbers over an ellipsoidal volume using a Gaussian weighting function. The second method consists of filtering a set of random numbers in the wavenumber domain to obtain a set of heterogeneities with a desired statistical distribution (Frankel and Clayton, 1986). This method is capable of generating distributions with either Gaussian or von Karman autocorrelation functions. The key parameters that affect scattering are the correlation length, the standard deviation of velocity for the heterogeneities, and the Hurst exponent, which is only present in the von Karman media. Overall, we find that shorter correlation lengths as well as higher standard deviations result in increased tangential motion in the frequency band of interest (0-10 Hz). This occurs partially through S-wave refraction, but mostly by P-to-S and Rg-to-S wave conversions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
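
    The second method can be sketched briefly: filter white noise in the wavenumber domain by the square root of a target power spectrum. Below is a minimal Python version for a Gaussian autocorrelation; the constants and parameter values are illustrative, not those used in the SPE simulations.

    ```python
    # Hedged sketch of the wavenumber-domain method (after Frankel & Clayton,
    # 1986): filter white noise with the square root of a target spectrum,
    # here a Gaussian autocorrelation with correlation length `corr_len`.
    import numpy as np

    def gaussian_random_medium(n, dx, corr_len, sigma, rng):
        """2D velocity-perturbation field with Gaussian autocorrelation."""
        k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
        k2 = k[:, None]**2 + k[None, :]**2
        spectrum = np.exp(-k2 * corr_len**2 / 4.0)     # Gaussian ACF spectrum
        noise = np.fft.fft2(rng.normal(size=(n, n)))
        field = np.fft.ifft2(noise * np.sqrt(spectrum)).real
        return sigma * field / field.std()             # scale to target std dev

    rng = np.random.default_rng(3)
    dv = gaussian_random_medium(n=512, dx=10.0, corr_len=100.0, sigma=0.05, rng=rng)
    # dv would perturb a background velocity model, e.g. v = v0 * (1 + dv).
    ```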

  15. A randomization approach to handling data scaling in nuclear medicine.

    PubMed

    Bai, Chuanyong; Conwell, Richard; Kindem, Joel

    2010-06-01

    In medical imaging, data scaling is sometimes desired to handle system complexity, for example in uniformity calibration. Since the data are usually saved as short integers, conventional data scaling will first scale the data in floating point format and then truncate or round the floating point data to short integer data. For example, when using truncation, scaling 9 by 1.1 results in 9 and scaling 10 by 1.1 results in 11. When the count level is low, such scaling may change the local data distribution and affect the intended application of the data. In this work, the authors use an example gated cardiac SPECT study to illustrate the effect of conventional scaling by factors of 1.1 and 1.2. The authors then scaled the data with the same scaling factors using a randomization approach, in which a random number evenly distributed between 0 and 1 is generated to determine how the floating point data will be saved as short integer data. If the random number is between 0 and 0.9, then 9.9 will be saved as 10, otherwise 9. In other words, the floating point value 9.9 will be saved as the short integer value 10 with 90% probability or 9 with 10% probability. For statistical analysis of the performance, the authors applied the conventional approach with rounding and the randomization approach to 50 consecutive gated studies from a clinical site. For the example study, the image reconstructed from the original data showed an apparent perfusion defect at the apex of the myocardium. The defect size was noticeably changed by scaling with 1.1 and 1.2 using the conventional approaches with truncation and rounding. Using the randomization approach, in contrast, the images from the scaled data appeared identical to the original image. Line profile analysis of the scaled data showed that the randomization approach introduced the least change to the data as compared to the conventional approaches. For the 50 gated data sets, significantly more studies showed quantitative differences between the original images and the images from the data scaled by 1.2 using the rounding approach than with the randomization approach [46/50 (92%) versus 3/50 (6%), p < 0.05]. Likewise, significantly more studies showed visually noticeable differences between the original images and the images from the data scaled by 1.2 using the rounding approach than with the randomization approach [29/50 (58%) versus 1/50 (2%), p < 0.05]. In conclusion, the proposed randomization approach minimizes the scaling-introduced local data change as compared to the conventional approaches. It is preferred for nuclear medicine data scaling.
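
    The randomization approach described here is what is elsewhere called stochastic rounding: save x as ceil(x) with probability equal to its fractional part, otherwise as floor(x), so the expected value is preserved. A minimal sketch, with a synthetic stand-in for the projection data:

    ```python
    # Stochastic-rounding sketch of the randomization approach; `counts` is a
    # synthetic stand-in for short-integer projection data.
    import numpy as np

    def randomized_scale(counts, factor, rng):
        scaled = counts * factor                 # floating point intermediate
        floor = np.floor(scaled)
        frac = scaled - floor                    # e.g. 9 * 1.1 -> 9.9, frac 0.9
        up = rng.random(scaled.shape) < frac     # round up with probability frac
        return (floor + up).astype(np.int16)

    rng = np.random.default_rng(4)
    counts = np.full(10000, 9, dtype=np.int16)
    out = randomized_scale(counts, 1.1, rng)
    print(out.mean())   # ~9.9 on average, unlike truncation (9) or rounding (10)
    ```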

  16. From Cores to Envelopes to Disks: A Multi-scale View of Magnetized Star Formation

    NASA Astrophysics Data System (ADS)

    Hull, Charles L. H.

    2014-12-01

    Observations of polarization in star forming regions have been made across many wavelengths, many size scales, and many stages of stellar evolution. One of the overarching goals of these observations has been to determine the importance of magnetic fields -- which are the cause of the polarization -- in the star formation process. We begin by describing the commissioning and the calibration of the 1.3 mm dual-polarization receiver system we built for CARMA (the Combined Array for Research in Millimeter-wave Astronomy), a radio telescope in the eastern Sierra region of California. One of the primary science drivers behind the polarization system is to observe polarized thermal emission from dust grains in the dense clumps of dust and gas where the youngest, Class 0 protostars are forming. We go on to describe the CARMA TADPOL survey -- the largest high-resolution (~1000 AU scale) survey to date of dust polarization in low-mass protostellar cores -- and discuss our main findings: (1) Magnetic fields (B-fields) on scales of ~1000 AU are not tightly aligned with protostellar outflows. Rather, the data are consistent both with scenarios where outflows and magnetic fields are preferentially misaligned (perpendicular) and where they are randomly aligned. (2) Sources with high CARMA polarization fractions have consistent B-field orientations on large scales (~20'', measured using single-dish submillimeter telescopes) and small scales (~2.5'', measured by CARMA). We interpret this to mean that in at least some cases B-fields play a role in regulating the infall of material all the way down to the ~1000 AU scales of protostellar envelopes. Finally, (3) While on the whole outflows appear to be randomly aligned with B-fields, in sources with low polarization fractions there is a hint that outflows are preferentially perpendicular to small-scale B-fields, which suggests that in these sources the fields have been wrapped up by envelope rotation. This work shows that the ~1000 AU protostellar envelope may be a turning point: at larger scales B-fields may still retain the memory of the global B-field drawn in from the ambient medium; but at smaller scales the B-fields may be affected by the dynamics of both envelope and disk rotation. This sets the stage for ALMA (the Atacama Large Millimeter/submillimeter Array), which will soon reveal the morphology of B-fields in circumstellar disks themselves.

  17. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
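
    The reuse idea can be illustrated on a small dense problem: once the Jacobian is factored, the damped solution can be formed for every damping parameter at negligible extra cost. The sketch below uses a thin SVD in place of the authors' recycled Krylov subspace; for large sparse systems the Krylov subspace plays the same role.

    ```python
    # Hedged sketch of the reuse idea: x(lambda) = (J^T J + lambda I)^-1 J^T r
    # can be formed for many damping parameters from one factorization of J.
    import numpy as np

    rng = np.random.default_rng(5)
    J = rng.normal(size=(200, 50))        # Jacobian (synthetic stand-in)
    r = rng.normal(size=200)              # residual vector (synthetic stand-in)

    U, s, Vt = np.linalg.svd(J, full_matrices=False)   # factor once
    utr = U.T @ r

    for lam in [1e-2, 1e-1, 1.0, 10.0]:   # sweep damping parameters cheaply
        x = Vt.T @ (s / (s**2 + lam) * utr)
        # check against the direct dense solve
        x_ref = np.linalg.solve(J.T @ J + lam * np.eye(50), J.T @ r)
        assert np.allclose(x, x_ref)
    ```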

  18. Price of anarchy is maximized at the percolation threshold.

    PubMed

    Skinner, Brian

    2015-05-01

    When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold.

  19. Principle of Parsimony, Fake Science, and Scales

    NASA Astrophysics Data System (ADS)

    Yeh, T. C. J.; Wan, L.; Wang, X. S.

    2017-12-01

    Considering the difficulty of predicting the exact motions of water molecules, and the scale of our interests (the bulk behavior of many molecules), Fick's law (the diffusion concept) was created to predict solute diffusion in space and time. G.I. Taylor (1921) demonstrated that the random motion of molecules reaches the Fickian regime in less than a second if our sampling scale is large enough to reach the ergodic condition. Fick's law is widely accepted for describing molecular diffusion as such. This fits the definition of the parsimony principle at the scale of our concern. Similarly, the advection-dispersion or convection-dispersion equation (ADE or CDE) has been found quite satisfactory for analysis of concentration breakthroughs of solute transport in uniformly packed soil columns. This is attributed to the fact that the solute is often released over the entire cross-section of the column, which samples many pore-scale heterogeneities and meets the ergodicity assumption. Further, the uniformly packed column contains a large number of stationary pore-size heterogeneities. The solute thus reaches the Fickian regime after traveling a short distance along the column. Moreover, breakthrough curves are concentrations integrated over the column cross-section (the scale of our interest), and they meet the ergodicity assumption embedded in the ADE and CDE. To the contrary, the scales of heterogeneity in most groundwater pollution problems evolve as contaminants travel. They are much larger than the scale of our observations and our interests, so the ergodic and Fickian conditions are difficult to meet. Upscaling Fick's law for solute dispersion, and deriving universal rules of dispersion for field- or basin-scale pollution migration, are merely misuses of the parsimony principle and lead to fake science (i.e., the development of theories for predicting processes that cannot be observed). The appropriate principle of parsimony for these situations dictates mapping large-scale heterogeneities in as much detail as possible and adapting Fick's law for the effects of the small-scale heterogeneity that results from our inability to characterize it in detail.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prinja, A. K.

    The Karhunen-Loeve stochastic spectral expansion of a random binary mixture of immiscible fluids in planar geometry is used to explore asymptotic limits of radiation transport in such mixtures. Under appropriate scalings of mixing parameters - correlation length, volume fraction, and material cross sections - and employing multiple-scale expansion of the angular flux, previously established atomic mix and diffusion limits are reproduced. When applied to highly contrasting material properties in the small correlation length limit, the methodology yields a nonstandard reflective medium transport equation that merits further investigation. Finally, a hybrid closure is proposed that produces both small and large correlation length limits of the closure condition for the material-averaged equations.

  1. Natural fracture systems on planetary surfaces: Genetic classification and pattern randomness

    NASA Technical Reports Server (NTRS)

    Rossbacher, Lisa A.

    1987-01-01

    One method for classifying natural fracture systems is by fracture genesis. This approach involves the physics of the formation process, and it has been used most frequently in attempts to predict subsurface fractures and petroleum reservoir productivity. This classification system can also be applied to larger fracture systems on any planetary surface. One problem in applying this classification system to planetary surfaces is that it was developed for relatively small-scale fractures that would influence porosity, particularly as observed in a core sample. Planetary studies also require consideration of large-scale fractures. Nevertheless, this system offers some valuable perspectives on fracture systems of any size.

  2. Pulsar recoil by large-scale anisotropies in supernova explosions.

    PubMed

    Scheck, L; Plewa, T; Janka, H-Th; Kifonidis, K; Müller, E

    2004-01-09

    Assuming that the neutrino luminosity from the neutron star core is sufficiently high to drive supernova explosions by the neutrino-heating mechanism, we show that low-mode (l=1,2) convection can develop from random seed perturbations behind the shock. A slow onset of the explosion is crucial, requiring the core luminosity to vary slowly with time, in contrast to the burstlike exponential decay assumed in previous work. Gravitational and hydrodynamic forces by the globally asymmetric supernova ejecta were found to accelerate the remnant neutron star on a time scale of more than a second to velocities above 500 km s^-1, in agreement with observed pulsar proper motions.

  3. Conducting pilot and feasibility studies.

    PubMed

    Cope, Diane G

    2015-03-01

    Planning a well-designed research study can be tedious and laborious work. However, this process is critical and ultimately can produce valid, reliable study findings. Designing a large-scale randomized, controlled trial (RCT), the gold standard in quantitative research, can be even more challenging. Even the most well-planned study potentially can result in issues with research procedures and design, such as recruitment, retention, or methodology. One strategy that may facilitate sound study design is the completion of a pilot or feasibility study prior to the initiation of a larger-scale trial. This article will discuss pilot and feasibility studies, their advantages and disadvantages, and implications for oncology nursing research.

  4. Controllable lasing performance in solution-processed organic-inorganic hybrid perovskites.

    PubMed

    Kao, Tsung Sheng; Chou, Yu-Hsun; Hong, Kuo-Bin; Huang, Jiong-Fu; Chou, Chun-Hsien; Kuo, Hao-Chung; Chen, Fang-Chung; Lu, Tien-Chang

    2016-11-03

    Solution-processed organic-inorganic perovskites are fascinating due to their remarkable photo-conversion efficiency and great potential in the cost-effective, versatile and large-scale manufacturing of optoelectronic devices. In this paper, we demonstrate that the perovskite nanocrystal sizes can be simply controlled by manipulating the precursor solution concentrations in a two-step sequential deposition process, thus achieving feasible tunability of excitonic properties and lasing performance in hybrid metal-halide perovskites. The lasing threshold is around 230 μJ cm^-2 in this solution-processed organic-inorganic lead-halide material, which is comparable to that of colloidal quantum dot lasers. The efficient stimulated emission originates from the multiple random scattering provided by the micrometer-scale rugged morphology and polycrystalline grain boundaries. Thus the excitonic properties in perovskites correlate strongly with the morphology of the formed perovskite nanocrystals. Compared to conventional lasers, which normally serve as coherent light sources, perovskite random lasers are promising for making low-cost thin-film lasing devices for flexible and speckle-free imaging applications.

  5. Site- and bond-percolation thresholds in K_{n,n}-based lattices: Vulnerability of quantum annealers to random qubit and coupler failures on chimera topologies.

    PubMed

    Melchert, O; Katzgraber, Helmut G; Novotny, M A

    2016-04-01

    We estimate the critical thresholds of bond and site percolation on nonplanar, effectively two-dimensional graphs with chimeralike topology. The building blocks of these graphs are complete and symmetric bipartite subgraphs of size 2n, referred to as K_{n,n} graphs. For the numerical simulations we use an efficient union-find-based algorithm and employ a finite-size scaling analysis to obtain the critical properties for both bond and site percolation. We report the respective percolation thresholds for different sizes of the bipartite subgraph and verify that the associated universality class is that of standard two-dimensional percolation. For the canonical chimera graph used in the D-Wave Systems Inc. quantum annealer (n=4), we discuss device failure in terms of network vulnerability, i.e., we determine the critical fraction of qubits and couplers that can be absent due to random failures prior to losing large-scale connectivity throughout the device.
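
    The union-find machinery for estimating a percolation threshold is compact enough to sketch. The version below runs bond percolation on a plain square lattice (exact threshold 1/2) rather than building the K_{n,n}-based chimera graph, which is omitted for brevity, and checks left-to-right spanning.

    ```python
    # Minimal union-find sketch of a bond-percolation spanning estimate on a
    # square lattice; the chimera-graph construction is omitted for brevity.
    import numpy as np

    def find(parent, i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def spans(L, p, rng):
        parent = list(range(L * L))
        for x in range(L):
            for y in range(L):
                i = x * L + y
                for j in ((x + 1) * L + y if x + 1 < L else None,
                          x * L + y + 1 if y + 1 < L else None):
                    if j is not None and rng.random() < p:
                        parent[find(parent, i)] = find(parent, j)
        left = {find(parent, y) for y in range(L)}             # x = 0 column
        right = {find(parent, (L - 1) * L + y) for y in range(L)}
        return bool(left & right)

    rng = np.random.default_rng(6)
    for p in (0.4, 0.5, 0.6):   # exact square-lattice bond threshold is 0.5
        frac = np.mean([spans(64, p, rng) for _ in range(20)])
        print(p, frac)
    ```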

  6. Using Field Data and GIS-Derived Variables to Model Occurrence of Williamson's Sapsucker Nesting Habitat at Multiple Spatial Scales.

    PubMed

    Drever, Mark C; Gyug, Les W; Nielsen, Jennifer; Stuart-Smith, A Kari; Ohanjanian, I Penny; Martin, Kathy

    2015-01-01

    Williamson's sapsucker (Sphyrapicus thyroideus) is a migratory woodpecker that breeds in mixed coniferous forests in western North America. In Canada, the range of this woodpecker is restricted to three small populations in southern British Columbia, precipitating a national listing as 'Endangered' in 2005, and the need to characterize critical habitat for its survival and recovery. We compared habitat attributes between Williamson's sapsucker nest territories and random points without nests or detections of this sapsucker as part of a resource selection analysis to identify the habitat features that best explain the probability of nest occurrence in two separate geographic regions in British Columbia. We compared the relative explanatory power of generalized linear models based on field-derived and Geographic Information System (GIS) data within both a 225 m and 800 m radius of a nest or random point. The model based on field-derived variables explained the most variation in nest occurrence in the Okanagan-East Kootenay Region, whereas nest occurrence was best explained by GIS information at the 800 m scale in the Western Region. Probability of nest occurrence was strongly tied to densities of potential nest trees, which included open forests with very large (diameter at breast height, DBH, ≥57.5 cm) western larch (Larix occidentalis) trees in the Okanagan-East Kootenay Region, and very large ponderosa pine (Pinus ponderosa) and large (DBH 17.5-57.5 cm) trembling aspen (Populus tremuloides) trees in the Western Region. Our results have the potential to guide identification and protection of critical habitat as required by the Species at Risk Act in Canada, and to better manage Williamson's sapsucker habitat overall in North America. In particular, management should focus on the maintenance and recruitment of very large western larch and ponderosa pine trees.

  7. A New Zealand pilot randomized controlled trial of a web-based interactive self-management programme (MSInvigor8) with and without email support for the treatment of multiple sclerosis fatigue.

    PubMed

    van Kessel, Kirsten; Wouldes, Trecia; Moss-Morris, Rona

    2016-05-01

    To pilot and compare the efficacy of an internet-based cognitive behavioural therapy self-management programme with (MSInvigor8-Plus) and without (MSInvigor8-Only) the use of email support in reducing fatigue severity and impact (primary outcomes), and depressed and anxious mood (secondary outcomes). Randomized controlled trial using an independent randomization system built into the website and intention-to-treat analysis. Participants were recruited through the local Multiple Sclerosis Society and hospital neurological services in New Zealand. A total of 39 people (aged 31-63 years) experiencing multiple sclerosis fatigue and able to walk with or without walking aids were randomized to MSInvigor8-Only (n = 20) or to MSInvigor8-Plus (n = 19). MSInvigor8 is an eight-session programme based on cognitive behaviour therapy principles including psycho-education, self-monitoring, and changing unhelpful activity and thought patterns. Outcome measures included fatigue severity (Chalder Fatigue Scale) and impact (Modified Fatigue Impact Scale), and anxiety and depression (Hospital Anxiety and Depression Scale). Assessments were performed at baseline and at 10 weeks. The MSInvigor8-Plus condition resulted in significantly greater reductions in fatigue severity (F[1,36] = 9.09, p < 0.01) and impact (F[1,36] = 6.03, p < 0.02) compared with the MSInvigor8-Only condition. Large between-group effect sizes for fatigue severity (d = 0.99) and fatigue impact (d = 0.81) were obtained. No significant differences were found between the groups on changes in anxiety and depression. MSInvigor8 delivered with email-based support is a potentially promising, acceptable, and cost-effective approach to treating fatigue in people with multiple sclerosis in New Zealand. © The Author(s) 2015.
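
    For reference, between-group effect sizes like the d = 0.99 and d = 0.81 reported above are conventionally Cohen's d on the two arms' change scores, using the pooled standard deviation. A minimal sketch with made-up numbers of the same shape as this trial:

    ```python
    # Cohen's d from two groups' change scores; the data below are invented
    # for illustration, not the trial's actual measurements.
    import numpy as np

    def cohens_d(a, b):
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * a.var(ddof=1)
                      + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
        return (a.mean() - b.mean()) / np.sqrt(pooled_var)

    rng = np.random.default_rng(7)
    plus_change = rng.normal(-8.0, 4.0, size=19)   # hypothetical Plus-arm changes
    only_change = rng.normal(-4.0, 4.0, size=20)   # hypothetical Only-arm changes
    print("d =", cohens_d(only_change, plus_change))
    ```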

  8. Lack of a thermodynamic finite-temperature spin-glass phase in the two-dimensional randomly coupled ferromagnet

    NASA Astrophysics Data System (ADS)

    Zhu, Zheng; Ochoa, Andrew J.; Katzgraber, Helmut G.

    2018-05-01

    The search for problems where quantum adiabatic optimization might excel over classical optimization techniques has sparked a recent interest in inducing a finite-temperature spin-glass transition in quasiplanar topologies. We have performed large-scale finite-temperature Monte Carlo simulations of a two-dimensional square-lattice bimodal spin glass with next-nearest ferromagnetic interactions claimed to exhibit a finite-temperature spin-glass state for a particular relative strength of the next-nearest to nearest interactions [Phys. Rev. Lett. 76, 4616 (1996), 10.1103/PhysRevLett.76.4616]. Our results show that the system is in a paramagnetic state in the thermodynamic limit, despite zero-temperature simulations [Phys. Rev. B 63, 094423 (2001), 10.1103/PhysRevB.63.094423] suggesting the existence of a finite-temperature spin-glass transition. Therefore, deducing the finite-temperature behavior from zero-temperature simulations can be dangerous when corrections to scaling are large.

  9. Collisionless relaxation in spiral galaxy models

    NASA Technical Reports Server (NTRS)

    Hohl, F.

    1974-01-01

    The increase in random kinetic energy of stars by rapidly fluctuating gravitational fields (collisionless or violent relaxation) in disk galaxy models is investigated for three interaction potentials of the stars corresponding to (1) point stars, (2) rod stars of length 2 kpc, and (3) uniform density spherical stars of radius 2 kpc. To stabilize the galaxy against the large scale bar forming instability, a fixed field corresponding to a central core or halo component of stars was added with the stars containing at most 20 percent of the total mass of the galaxy. Considerable heating occurred for both the point stars and the rod stars, whereas the use of spherical stars resulted in a very low heating rate. The use of spherical stars with the resulting low heating rate will be desirable for the study of large scale galactic stability or density wave propagation, since collective heating effects will no longer mask the phenomena under study.

  10. How global extinctions impact regional biodiversity in mammals.

    PubMed

    Huang, Shan; Davies, T Jonathan; Gittleman, John L

    2012-04-23

    Phylogenetic diversity (PD) represents the evolutionary history of a species assemblage and is a valuable measure of biodiversity because it captures not only species richness but potentially also genetic and functional diversity. Preserving PD could be critical for maintaining the functional integrity of the world's ecosystems, and species extinction will have a large impact on ecosystems in areas where the ecosystem cost per species extinction is high. Here, we show that impacts from global extinctions are linked to spatial location. Using a phylogeny of all mammals, we compare regional losses of PD against a model of random extinction. At regional scales, losses differ dramatically: several biodiversity hotspots in southern Asia and Amazonia will lose an unexpectedly large proportion of PD. Global analyses may therefore underestimate the impacts of extinction on ecosystem processes and function because they occur at finer spatial scales within the context of natural biogeography.
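
    The random-extinction null model can be sketched in a few lines: PD is the total branch length spanned by the surviving species, and the null distribution comes from repeatedly removing the same number of species at random. The toy tree below is hypothetical, purely for illustration.

    ```python
    # Hedged sketch of the null model: PD(survivors) = sum of branch lengths
    # whose descendant tips include a survivor, compared against random removals.
    import numpy as np

    # Each branch: (length, set of tip species descending from it). Toy data.
    branches = [
        (2.0, {"a", "b"}), (2.0, {"c", "d"}), (0.5, {"a", "b", "c", "d"}),
        (0.5, {"a"}), (0.5, {"b"}), (0.5, {"c"}), (0.5, {"d"}),
    ]
    tips = {"a", "b", "c", "d"}

    def pd(survivors):
        return sum(length for length, clade in branches if clade & survivors)

    rng = np.random.default_rng(8)
    n_extinct = 2
    null = [pd(set(rng.choice(sorted(tips), size=len(tips) - n_extinct,
                              replace=False))) for _ in range(1000)]
    observed = pd(tips - {"a", "b"})   # clustered extinction wipes a whole clade
    print(observed, np.mean(null))     # clustered loss leaves less PD than random
    ```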

  11. Multidimensional quantum entanglement with large-scale integrated optics.

    PubMed

    Wang, Jianwei; Paesani, Stefano; Ding, Yunhong; Santagati, Raffaele; Skrzypczyk, Paul; Salavrakos, Alexia; Tura, Jordi; Augusiak, Remigiusz; Mančinska, Laura; Bacco, Davide; Bonneau, Damien; Silverstone, Joshua W; Gong, Qihuang; Acín, Antonio; Rottwitt, Karsten; Oxenløwe, Leif K; O'Brien, Jeremy L; Laing, Anthony; Thompson, Mark G

    2018-04-20

    The ability to control multidimensional quantum systems is central to the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control, and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimensions up to 15 × 15 on a large-scale silicon photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality, and controllability of our multidimensional technology, and further exploit these abilities to demonstrate previously unexplored quantum applications, such as quantum randomness expansion and self-testing on multidimensional states. Our work provides an experimental platform for the development of multidimensional quantum technologies. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  12. Inferring personal economic status from social network location

    NASA Astrophysics Data System (ADS)

    Luo, Shaojun; Morone, Flaviano; Sarraute, Carlos; Travizano, Matías; Makse, Hernán A.

    2017-05-01

    It is commonly believed that patterns of social ties affect individuals' economic status. Here we translate this concept into an operational definition at the network level, which allows us to infer the economic well-being of individuals through a measure of their location and influence in the social network. We analyse two large-scale sources: telecommunications and financial data of a whole country's population. Our results show that an individual's location, measured as the optimal collective influence to the structural integrity of the social network, is highly correlated with personal economic status. The observed social network patterns of influence mimic the patterns of economic inequality. For pragmatic use and validation, we carry out a marketing campaign that shows a threefold increase in response rate by targeting individuals identified by our social network metrics as compared to random targeting. Our strategy can also be useful in maximizing the effects of large-scale economic stimulus policies.

  13. Inferring personal economic status from social network location.

    PubMed

    Luo, Shaojun; Morone, Flaviano; Sarraute, Carlos; Travizano, Matías; Makse, Hernán A

    2017-05-16

    It is commonly believed that patterns of social ties affect individuals' economic status. Here we translate this concept into an operational definition at the network level, which allows us to infer the economic well-being of individuals through a measure of their location and influence in the social network. We analyse two large-scale sources: telecommunications and financial data of a whole country's population. Our results show that an individual's location, measured as the optimal collective influence to the structural integrity of the social network, is highly correlated with personal economic status. The observed social network patterns of influence mimic the patterns of economic inequality. For pragmatic use and validation, we carry out a marketing campaign that shows a threefold increase in response rate by targeting individuals identified by our social network metrics as compared to random targeting. Our strategy can also be useful in maximizing the effects of large-scale economic stimulus policies.

  14. Efficient numerical methods for the random-field Ising model: Finite-size scaling, reweighting extrapolation, and computation of response functions.

    PubMed

    Fytas, Nikolaos G; Martín-Mayor, Víctor

    2016-06-01

    It was recently shown [Phys. Rev. Lett. 110, 227201 (2013)PRLTAO0031-900710.1103/PhysRevLett.110.227201] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after a proper taming of the large scaling corrections of the model by applying a combined approach of various techniques, coming from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining the zero-temperature numerical scheme and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and we illustrate the infinite-size extrapolation of several observables within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy, and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.

  15. Efficacy and Safety of Sipjeondaebo-Tang for Anorexia in Patients with Cancer: A Pilot, Randomized, Double-Blind, Placebo-Controlled Trial.

    PubMed

    Cheon, Chunhoo; Yoo, Jeong-Eun; Yoo, Hwa-Seung; Cho, Chong-Kwan; Kang, Sohyeon; Kim, Mia; Jang, Bo-Hyoung; Shin, Yong-Cheol; Ko, Seong-Gyu

    2017-01-01

    Anorexia occurs in about half of cancer patients and is associated with a high mortality rate. However, a safe anorexia treatment suitable for long-term use remains an unmet need. The purpose of the present study was to examine the feasibility of Sipjeondaebo-tang (Juzen-taiho-to, Shi-Quan-Da-Bu-Tang) for cancer-related anorexia. A total of 32 participants with cancer anorexia were randomized to either the Sipjeondaebo-tang group or the placebo group. Participants were given 3 g of Sipjeondaebo-tang or placebo 3 times a day for 4 weeks. The primary outcome was the change in the Anorexia/Cachexia Subscale of the Functional Assessment of Anorexia/Cachexia Therapy (FAACT). The secondary outcomes included a Visual Analogue Scale (VAS) of anorexia, the FAACT scale, and laboratory tests. Anorexia and quality of life measured by FAACT and VAS were improved after 4 weeks of Sipjeondaebo-tang treatment. However, there was no significant difference between the changes in the Sipjeondaebo-tang group and the placebo group. Sipjeondaebo-tang appears to have potential benefit for anorexia management in patients with cancer. Further large-scale studies are needed to confirm its efficacy. This trial is registered with ClinicalTrials.gov NCT02468141.

  16. Buspirone versus methylphenidate in the treatment of children with attention- deficit/ hyperactivity disorder: randomized double-blind study.

    PubMed

    Mohammadi, Mohammad-Reza; Hafezi, Poopak; Galeiha, Ali; Hajiaghaee, Reza; Akhondzadeh, Shahin

    2012-01-01

    A recent randomized clinical trial showed buspirone efficacy in the treatment of attention-deficit/hyperactivity disorder (ADHD) in children. However, results from a recent multi-site controlled clinical trial of transdermal buspirone failed to separate it from placebo in a large sample of children with ADHD. Therefore, due to these inconsistent findings, this study was designed to assess the efficacy of buspirone in the treatment of children with ADHD compared to methylphenidate in a double-blind randomized clinical trial. Forty outpatients with a DSM-IV-TR diagnosis of ADHD constituted the study population. Subjects were recruited from an outpatient child and adolescent clinic for a 6-week double-blind, randomized clinical trial. All study subjects were randomly assigned to receive buspirone tablets at a dose of 20-30 mg/day depending on weight (20 mg/day for <30 kg and 30 mg/day for >30 kg) (group 1) or methylphenidate at a dose of 20-30 mg/day depending on weight (20 mg/day for <30 kg and 30 mg/day for >30 kg) (group 2). The principal outcome measure was the Teacher and Parent ADHD Rating Scale IV. Patients were assessed at baseline and at 21 and 42 days after the medication started. Significant differences were observed between the two groups on the Parent and Teacher Rating Scale scores. The changes at the endpoint compared to baseline were -8.95±8.73 (mean±SD) for buspirone and -15.60±7.81 for methylphenidate on the Parent ADHD Rating Scale, and -9.80±7.06 for buspirone and -22.40±9.90 for methylphenidate on the Teacher ADHD Rating Scale. The difference between the buspirone and methylphenidate groups in the frequency of side effects was not significant except for decreased appetite, headache and insomnia, which were observed more frequently in the methylphenidate group. The results of this study suggest that administration of buspirone was less effective than methylphenidate in the treatment of ADHD.

  17. Systematic Review and Meta-analysis of Indirect Protection Afforded by Vaccinating Children Against Seasonal Influenza: Implications for Policy.

    PubMed

    Yin, J Kevin; Heywood, Anita E; Georgousakis, Melina; King, Catherine; Chiu, Clayton; Isaacs, David; Macartney, Kristine K

    2017-09-01

    Universal childhood vaccination is a potential solution to reduce the seasonal influenza burden. We systematically reviewed the literature on "herd"/indirect protection from vaccinating children aged 6 months to 17 years against influenza. Of 30 studies included, 14 (including 1 cluster randomized controlled trial [cRCT]) used live attenuated influenza vaccine, 11 (7 cRCTs) used inactivated influenza vaccine, and 5 (1 cRCT) compared both vaccine types. Twenty of 30 studies reported statistically significant indirect protection effectiveness (IPE), with point estimates ranging from 4% to 66%. Meta-regression suggests that studies with high quality and/or sufficiently large sample size are more likely to report significant IPE. In meta-analyses of 6 cRCTs with full randomization (rated as moderate quality overall), significant IPE was found in 1 cRCT in closely connected communities where school-aged children were vaccinated: 60% (95% confidence interval [CI], 41%-72%; I2 = 0%; N = 2326) against laboratory-confirmed influenza, and 3 household cRCTs in which preschool-aged children were vaccinated: 22% (95% CI, 1%-38%; I2 = 0%; N = 1903) against acute respiratory infections or influenza-like illness. Significant IPE was also reported in a large-scale cRCT (N = 8510) that was not fully randomized, and 3 ecological studies (N > 10000) of moderate quality, including a 36% reduction in influenza-related mortality among the elderly in a Japanese school-based program. Data on IPE in other settings were heterogeneous and lacked power to draw a firm conclusion. The available evidence suggests that influenza vaccination of children confers indirect protection in some but not all settings. Robust, large-scale studies are required to better quantify the indirect protection from vaccinating children for different settings/endpoints. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.

  18. Estimating the Size of a Large Network and its Communities from a Random Sample

    PubMed Central

    Chen, Lin; Karbasi, Amin; Crawford, Forrest W.

    2017-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios. PMID:28867924
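
    To make the sampling design concrete, the sketch below draws an SBM population with networkx, observes the induced subgraph plus each sampled vertex's total degree, and applies a simple moment-based size estimator. This estimator only illustrates the observation model; it is not the PULSE algorithm, and all sizes and probabilities are invented.

    ```python
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(0)

    # Population graph from a stochastic block model (sizes/probabilities
    # are invented for illustration).
    sizes = [600, 400]                        # two communities, N = 1000
    probs = [[0.05, 0.01], [0.01, 0.04]]
    G = nx.stochastic_block_model(sizes, probs, seed=0)
    N = G.number_of_nodes()

    # Observed data: the induced subgraph G(W) plus each sampled vertex's
    # TOTAL degree in G (as in the paper's sampling design).
    n = 120
    W = [int(v) for v in rng.choice(N, size=n, replace=False)]
    d_total = np.array([G.degree(v) for v in W])
    GW = G.subgraph(W)
    d_within = np.array([GW.degree(v) for v in W])

    # Simple moment estimator (NOT the PULSE algorithm): conditional on v
    # being sampled, each of its d_total neighbors lands in the sample with
    # probability (n-1)/(N-1), so N_hat = 1 + (n-1)*sum(d_total)/sum(d_within).
    N_hat = 1 + (n - 1) * d_total.sum() / d_within.sum()
    print(f"true N = {N}, moment estimate = {N_hat:.0f}")
    ```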

  19. Estimating the Size of a Large Network and its Communities from a Random Sample.

    PubMed

    Chen, Lin; Karbasi, Amin; Crawford, Forrest W

    2016-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios.

  20. Impact of the nursing home scale on residents' social engagement in South Korea.

    PubMed

    Yoon, Ju Young; Kim, Hongsoo; Jung, Young-Il; Ha, Jung-Hwa

    2016-12-01

    This study aimed to describe the levels of social engagement and to examine the relationship between nursing home scale groups and social engagement in nursing homes in South Korea. A total of 314 residents were randomly selected from rosters provided by 10 nursing homes located in three metropolitan areas in South Korea. The outcome variable was social engagement measured by the Revised Index of Social Engagement (RISE), and the key independent variable was the nursing home scale (small, medium, and large). Individual factors (age, gender, activities of daily living, cognitive function, and depressive symptoms) and organizational factors (location, ownership, and staffing levels) were controlled in the model as covariates. Multilevel logistic regression was used in this study. About half of the residents (46%) were not socially engaged in the nursing home (RISE=0) where they resided. Controlling for individual- and organizational-level factors, nursing home facility size was a significant predictor of the likelihood of residents' social engagement, with residents in large-scale nursing homes less likely to be socially engaged than those in medium-scale nursing homes (odds ratio = 0.457; p-value = 0.005). This study supports evidence from previous studies that smaller-scale nursing homes are likely to provide more person-centered care than larger-scale nursing homes. Subsequent studies are needed to examine the mechanisms by which smaller-scale nursing homes can enhance residents' social engagement through their care delivery processes.

  1. High School Mentors in Brief: Findings from the Big Brothers Big Sisters School-Based Mentoring Impact Study. P/PV In Brief. Issue 8

    ERIC Educational Resources Information Center

    Jucovy, Linda; Herrera, Carla

    2009-01-01

    This issue of "Public/Private Ventures (P/PV) In Brief" is based on "High School Students as Mentors," a report that examined the efficacy of high school mentors using data from P/PV's large-scale random assignment impact study of Big Brothers Big Sisters school-based mentoring programs. The brief presents an overview of the findings, which…

  2. High-Accuracy Near-Surface Large-Eddy Simulation with Planar Topography

    DTIC Science & Technology

    2015-08-03

    Navier-Stokes equation, in effect randomizing the subfilter-scale (SFS) stress divergence. In the intervening years it has been discovered that this...surface stress models do introduce spurious effects that force deviations from LOTW at the first couple grid levels adjacent to the surface. Fig. 10 shows...SFS stress is sufficiently overwhelming to produce the overshoot. When the LES is moved into the HAZ so that the viscous effects causing the

  3. Differentiation in Access to, and the Use and Sharing of (Open) Educational Resources among Students and Lecturers at Kenyan Universities

    ERIC Educational Resources Information Center

    Pete, Judith; Mulder, Fred; Neto, Jose Dutra Oliveira

    2017-01-01

    In order to obtain a fair "OER picture" for the Global South a large-scale study has been carried out for a series of countries, including Kenya. In this paper we report on the Kenya study, run at four universities that have been selected with randomly sampled students and lecturers. Empirical data have been generated by the use of a…

  4. Mission Command in the Age of Network-Enabled Operations: Social Network Analysis of Information Sharing and Situation Awareness

    DTIC Science & Technology

    2016-06-22

    this assumption in a large-scale, 2-week military training exercise. We conducted a social network analysis of email communications among the multi...exponential random graph models challenge the aforementioned assumption, as increased email output was associated with lower individual situation... email links were more commonly formed among members of the command staff with both similar functions and levels of situation awareness, than between

  5. Hysteresis-Free Carbon Nanotube Field-Effect Transistors.

    PubMed

    Park, Rebecca S; Hills, Gage; Sohn, Joon; Mitra, Subhasish; Shulaker, Max M; Wong, H-S Philip

    2017-05-23

    While carbon nanotube (CNT) field-effect transistors (CNFETs) promise high-performance and energy-efficient digital systems, large hysteresis degrades these potential CNFET benefits. As hysteresis is caused by traps surrounding the CNTs, previous works have shown that clean interfaces that are free of traps are important to minimize hysteresis. Our previous findings on the sources and physics of hysteresis in CNFETs enabled us to understand the influence of gate dielectric scaling on hysteresis. To begin with, we validate through simulations how scaling the gate dielectric thickness results in greater-than-expected benefits in reducing hysteresis. Leveraging this insight, we experimentally demonstrate reducing hysteresis to <0.5% of the gate-source voltage sweep range using a very-large-scale-integration-compatible, solid-state technology, simply by fabricating CNFETs with a thin effective oxide thickness of 1.6 nm. However, even with negligible hysteresis, a large subthreshold swing is still observed in CNFETs with multiple CNTs per transistor. We show that this large subthreshold swing is caused by threshold voltage variation between individual CNTs. We also show that the source of this threshold voltage variation is not explained solely by variations in CNT diameters (as is often ascribed). Rather, other factors unrelated to the CNTs themselves (i.e., process variations and random fixed charges at interfaces) contribute significantly to CNT threshold voltage variations and thus need to be further controlled.

  6. Cognitive Behavior Therapy to Treat Sleep Disturbance and Fatigue After Traumatic Brain Injury: A Pilot Randomized Controlled Trial.

    PubMed

    Nguyen, Sylvia; McKay, Adam; Wong, Dana; Rajaratnam, Shantha M; Spitz, Gershon; Williams, Gavin; Mansfield, Darren; Ponsford, Jennie L

    2017-08-01

    To evaluate the efficacy of adapted cognitive behavioral therapy (CBT) for sleep disturbance and fatigue in individuals with traumatic brain injury (TBI). Parallel 2-group randomized controlled trial. Outpatient therapy. Adults (N=24) with history of TBI and clinically significant sleep and/or fatigue complaints were randomly allocated to an 8-session adapted CBT intervention or a treatment as usual (TAU) condition. Cognitive behavior therapy. The primary outcome was the Pittsburgh Sleep Quality Index (PSQI) posttreatment and at 2-month follow-up. Secondary measures included the Insomnia Severity Index, Fatigue Severity Scale, Brief Fatigue Inventory (BFI), Epworth Sleepiness Scale, and Hospital Anxiety and Depression Scale. At follow-up, CBT recipients reported better sleep quality than those receiving TAU (PSQI mean difference, 4.85; 95% confidence interval [CI], 2.56-7.14). Daily fatigue levels were significantly reduced in the CBT group (BFI difference, 1.54; 95% CI, 0.66-2.42). Secondary improvements were significant for depression. Large within-group effect sizes were evident across measures (Hedges g=1.14-1.93), with maintenance of gains 2 months after therapy cessation. Adapted CBT produced greater and sustained improvements in sleep, daily fatigue levels, and depression compared with TAU. These pilot findings suggest that CBT is a promising treatment for sleep disturbance and fatigue after TBI.

  7. Estimating prevalence of coronary heart disease for small areas using collateral indicators of morbidity.

    PubMed

    Congdon, Peter

    2010-01-01

    Different indicators of morbidity for chronic disease may not necessarily be available at a disaggregated spatial scale (e.g., for small areas with populations under 10 thousand). Instead certain indicators may only be available at a more highly aggregated spatial scale; for example, deaths may be recorded for small areas, but disease prevalence only at a considerably higher spatial scale. Nevertheless prevalence estimates at small area level are important for assessing health need. An instance is provided by England, where deaths and hospital admissions for coronary heart disease are available for small areas known as wards, but prevalence is only available for relatively large health authority areas. To estimate CHD prevalence at small area level in such a situation, a shared random effect method is proposed that pools information regarding spatial morbidity contrasts over different indicators (deaths, hospitalizations, prevalence). The shared random effect approach also incorporates differences between small areas in known risk factors (e.g., income, ethnic structure). A Poisson-multinomial equivalence may be used to ensure small area prevalence estimates sum to the known higher area total. An illustration is provided by data for London using hospital admissions and CHD deaths at ward level, together with CHD prevalence totals for considerably larger local health authority areas. The shared random effect involved a spatially correlated common factor that accounts for clustering in latent risk factors and also provides a summary measure of small area CHD morbidity.

  8. Estimating Prevalence of Coronary Heart Disease for Small Areas Using Collateral Indicators of Morbidity

    PubMed Central

    Congdon, Peter

    2010-01-01

    Different indicators of morbidity for chronic disease may not necessarily be available at a disaggregated spatial scale (e.g., for small areas with populations under 10 thousand). Instead certain indicators may only be available at a more highly aggregated spatial scale; for example, deaths may be recorded for small areas, but disease prevalence only at a considerably higher spatial scale. Nevertheless prevalence estimates at small area level are important for assessing health need. An instance is provided by England, where deaths and hospital admissions for coronary heart disease are available for small areas known as wards, but prevalence is only available for relatively large health authority areas. To estimate CHD prevalence at small area level in such a situation, a shared random effect method is proposed that pools information regarding spatial morbidity contrasts over different indicators (deaths, hospitalizations, prevalence). The shared random effect approach also incorporates differences between small areas in known risk factors (e.g., income, ethnic structure). A Poisson-multinomial equivalence may be used to ensure small area prevalence estimates sum to the known higher area total. An illustration is provided by data for London using hospital admissions and CHD deaths at ward level, together with CHD prevalence totals for considerably larger local health authority areas. The shared random effect involved a spatially correlated common factor that accounts for clustering in latent risk factors and also provides a summary measure of small area CHD morbidity. PMID:20195439

  9. Frequency-dependent scaling from mesoscale to macroscale in viscoelastic random composites

    PubMed Central

    Zhang, Jun

    2016-01-01

    This paper investigates the scaling from a statistical volume element (SVE; i.e. mesoscale level) to representative volume element (RVE; i.e. macroscale level) of spatially random linear viscoelastic materials, focusing on the quasi-static properties in the frequency domain. Requiring the material statistics to be spatially homogeneous and ergodic, the mesoscale bounds on the RVE response are developed from the Hill–Mandel homogenization condition adapted to viscoelastic materials. The bounds are obtained from two stochastic initial-boundary value problems set up, respectively, under uniform kinematic and traction boundary conditions. The frequency and scale dependencies of mesoscale bounds are obtained through computational mechanics for composites with planar random chessboard microstructures. In general, the frequency-dependent scaling to RVE can be described through a complex-valued scaling function, which generalizes the concept originally developed for linear elastic random composites. This scaling function is shown to apply for all different phase combinations on random chessboards and, essentially, is only a function of the microstructure and mesoscale. PMID:27274689

  10. The effects of intermittency on statistical characteristics of turbulence and scale similarity of breakdown coefficients

    NASA Astrophysics Data System (ADS)

    Novikov, E. A.

    1990-05-01

    The influence of intermittency on turbulent diffusion is expressed in terms of the statistics of the dissipation field. The high-order moments of relative diffusion are obtained by using the concept of scale similarity of the breakdown coefficients (bdc). The method of bdc is useful for obtaining new models and general results, which can then be expressed in terms of multifractals. In particular, the concavity and other properties of the spectral codimension are proved. Special attention is paid to the logarithmically periodic modulations. The parametrization of small-scale intermittent turbulence, which can be used for large-eddy simulation, is presented. The effect of molecular viscosity is taken into account in the spirit of the renormalization group, but without spectral series, ε expansion, and fictitious random forces.

  11. Exploring Google Earth Engine platform for big data processing: classification of multi-temporal satellite imagery for crop mapping

    NASA Astrophysics Data System (ADS)

    Shelestov, Andrii; Lavreniuk, Mykola; Kussul, Nataliia; Novikov, Alexei; Skakun, Sergii

    2017-02-01

    Many applied problems arising in agricultural monitoring and food security require reliable crop maps at national or global scale. Large scale crop mapping requires processing and management of large amounts of heterogeneous satellite imagery acquired by various sensors, which consequently leads to a “Big Data” problem. The main objective of this study is to explore the efficiency of using the Google Earth Engine (GEE) platform when classifying multi-temporal satellite imagery, with potential to apply the platform at a larger scale (e.g. country level) and to multiple sensors (e.g. Landsat-8 and Sentinel-2). In particular, multiple state-of-the-art classifiers available in the GEE platform are compared to produce a high resolution (30 m) crop classification map for a large territory (approximately 28,100 km2 with 1.0 M ha of cropland). Though this study does not involve large volumes of data, it does address the efficiency of the GEE platform in executing complex workflows of satellite data processing required by large scale applications such as crop mapping. The study discusses strengths and weaknesses of the classifiers, assesses the accuracies that can be achieved with different classifiers for the Ukrainian landscape, and compares them to a benchmark classifier using a neural network approach developed in our previous studies. The study is carried out for the Joint Experiment of Crop Assessment and Monitoring (JECAM) test site in Ukraine covering the Kyiv region (North of Ukraine) in 2013. We found that GEE provides very good performance in terms of enabling access to remote sensing products through the cloud platform and providing pre-processing; however, in terms of classification accuracy, the neural network based approach outperformed the support vector machine (SVM), decision tree, and random forest classifiers available in GEE.

  12. Decomposing Multifractal Crossovers

    PubMed Central

    Nagy, Zoltan; Mukli, Peter; Herman, Peter; Eke, Andras

    2017-01-01

    Physiological processes—such as the brain's resting-state electrical activity or hemodynamic fluctuations—exhibit scale-free temporal structuring. However, impacts common in biological systems such as noise, multiple signal generators, or filtering by transport function result in multimodal scaling that cannot be reliably assessed by standard analytical tools that assume unimodal scaling. Here, we present two methods to identify breakpoints or crossovers in multimodal multifractal scaling functions. These methods incorporate the robust iterative fitting approach of the focus-based multifractal formalism (FMF). The first approach (moment-wise scaling range adaptivity) allows for a breakpoint-based adaptive treatment that analyzes segregated scale-invariant ranges. The second method (scaling function decomposition method, SFD) is a crossover-based design aimed at decomposing signal constituents from multimodal scaling functions resulting from signal addition or co-sampling, such as contamination by uncorrelated fractals. We demonstrated that these methods could handle multimodal, mono- or multifractal, and exact or empirical signals alike. Their precision was numerically characterized on ideal signals, and a robust performance was demonstrated on exemplary empirical signals capturing resting-state brain dynamics by near infrared spectroscopy (NIRS), electroencephalography (EEG), and blood oxygen level-dependent functional magnetic resonance imaging (fMRI-BOLD). The NIRS and fMRI-BOLD low-frequency fluctuations were dominated by a multifractal component over an underlying biologically relevant random noise, thus forming a bimodal signal. The crossover between the EEG signal components was found at the boundary between the δ and θ bands, suggesting an independent generator for the multifractal δ rhythm. The robust implementation of the SFD method should be regarded as essential in the seamless processing of large volumes of bimodal fMRI-BOLD imaging data for the topology of multifractal metrics free of the masking effect of the underlying random noise. PMID:28798694

  13. Generating and controlling homogeneous air turbulence using random jet arrays

    NASA Astrophysics Data System (ADS)

    Carter, Douglas; Petersen, Alec; Amili, Omid; Coletti, Filippo

    2016-12-01

    The use of random jet arrays, already employed in water tank facilities to generate zero-mean-flow homogeneous turbulence, is extended to air as a working fluid. A novel facility is introduced that uses two facing arrays of individually controlled jets (256 in total) to force steady homogeneous turbulence with negligible mean flow, shear, and strain. Quasi-synthetic jet pumps are created by expanding pressurized air through small straight nozzles and are actuated by fast-response low-voltage solenoid valves. Velocity fields, two-point correlations, energy spectra, and second-order structure functions are obtained from 2D PIV and are used to characterize the turbulence from the integral to the Kolmogorov scales. Several metrics are defined to quantify how well zero-mean-flow homogeneous turbulence is approximated for a wide range of forcing and geometric parameters. With increasing jet firing duration, both the velocity fluctuations and the integral length scales are augmented, and therefore the Reynolds number is increased. We reach a Taylor-microscale Reynolds number of 470, a large-scale Reynolds number of 74,000, and an integral-to-Kolmogorov length scale ratio of 680. The volume of the present homogeneous turbulence, the largest reported to date in a zero-mean-flow facility, is much larger than the integral length scale, allowing for the natural development of the energy cascade. The turbulence is found to be anisotropic irrespective of the distance between the jet arrays. Fine grids placed in front of the jets are effective at modulating the turbulence, reducing both velocity fluctuations and integral scales. Varying the jet-to-jet spacing within each array has no effect on the integral length scale, suggesting that this is dictated by the length scale of the jets.

  14. Elephant random walks and their connection to Pólya-type urns

    NASA Astrophysics Data System (ADS)

    Baur, Erich; Bertoin, Jean

    2016-11-01

    In this paper, we explain the connection between the elephant random walk (ERW) and an urn model à la Pólya and derive functional limit theorems for the former. The ERW model was introduced in [Phys. Rev. E 70, 045101 (2004), 10.1103/PhysRevE.70.045101] to study memory effects in a highly non-Markovian setting. More specifically, the ERW is a one-dimensional discrete-time random walk with a complete memory of its past. The influence of the memory is measured in terms of a memory parameter p between zero and one. In the past years, a considerable effort has been undertaken to understand the large-scale behavior of the ERW, depending on the choice of p . Here, we use known results on urns to explicitly solve the ERW in all memory regimes. The method works as well for ERWs in higher dimensions and is widely applicable to related models.
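
    The ERW recursion is short enough to simulate directly. The sketch below generates trajectories for two memory parameters; the trajectory lengths and trial counts are arbitrary illustration choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def elephant_walk(n_steps, p, first_step=1):
        """One ERW trajectory: each new step recalls a uniformly chosen past
        step and repeats it with probability p, reverses it otherwise."""
        steps = np.empty(n_steps, dtype=int)
        steps[0] = first_step
        for t in range(1, n_steps):
            recalled = steps[rng.integers(0, t)]
            steps[t] = recalled if rng.random() < p else -recalled
        return np.cumsum(steps)

    # p = 1/2 erases the memory (ordinary diffusion); p > 3/4 is the
    # superdiffusive regime discussed in the ERW literature.
    for p in (0.5, 0.9):
        endpoints = np.array([elephant_walk(2000, p)[-1] for _ in range(300)])
        print(f"p = {p}: endpoint variance ~ {endpoints.var():.0f}")
    ```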

  15. Logistic random effects regression models: a comparison of statistical packages for binary and ordinal outcomes.

    PubMed

    Li, Baoyue; Lingsma, Hester F; Steyerberg, Ewout W; Lesaffre, Emmanuel

    2011-05-23

    Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized and ordinal, with center and/or trial as random effects, and with age, motor score, pupil reactivity, or trial as covariates. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS), and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm, and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analyzed using two logistic random effects models with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study, and when based on a relatively large number of level-1 (patient level) data compared to the number of level-2 (hospital level) data. However, when based on a relatively sparse data set, i.e., when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference for either a frequentist or a Bayesian approach (if based on vague priors and absent a philosophical preference). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches, the MLE of this variance was often estimated as zero with a standard error that was either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior of the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain.
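
    For readers who want a runnable analogue of the class of models compared here, the sketch below simulates center-clustered binary outcomes and fits a random-intercept logistic regression with the variational Bayes mixed GLM in Python's statsmodels, one implementation in the same family as those benchmarked; the data and variable names are hypothetical, not the TBI data.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    rng = np.random.default_rng(2)

    # Simulated multilevel data: patients nested in centers with a random
    # center intercept (all names and values here are hypothetical).
    n_centers, n_per = 30, 40
    center = np.repeat(np.arange(n_centers), n_per)
    u = rng.normal(0.0, 0.7, n_centers)            # true random center effects
    age = rng.normal(40.0, 15.0, n_centers * n_per)
    eta = -0.5 + 0.03 * (age - 40.0) + u[center]
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
    df = pd.DataFrame({"y": y, "age": age, "center": center})

    # Random-intercept logistic regression fitted by variational Bayes.
    model = BinomialBayesMixedGLM.from_formula(
        "y ~ age", {"center": "0 + C(center)"}, df)
    result = model.fit_vb()
    print(result.summary())
    ```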

  16. A Coarse-to-Fine Model for Airplane Detection from Large Remote Sensing Images Using Saliency Model and Deep Learning

    NASA Astrophysics Data System (ADS)

    Song, Z. N.; Sui, H. G.

    2018-04-01

    High resolution remote sensing images carry important strategic information, in particular for finding time-sensitive targets quickly, such as airplanes, ships, and cars. Often the first problem we face is how to rapidly judge whether a particular target is present in an arbitrary large remote sensing image, rather than detecting it on a given image. Finding time-sensitive targets in a huge image is a great challenge: 1) complex backgrounds lead to high miss and false alarm rates for tiny object detection in large-scale images; 2) unlike traditional image retrieval, the task is not just to compare the similarity of image blocks, but to quickly find specific targets in a huge image. In this paper, taking airplanes as an example, we present an effective method for searching for aircraft targets in large-scale optical remote sensing images. Firstly, we use an improved visual attention model that combines saliency detection and a line segment detector to quickly locate suspected regions in a large and complicated remote sensing image. Then, for each region, a single neural network that predicts bounding boxes and class probabilities directly from the full region in one evaluation, without a region proposal step, is adopted to search for small airplane objects. Unlike sliding window and region proposal-based techniques, the network sees the entire region during training and test time, so it implicitly encodes contextual information about classes as well as their appearance. Experimental results show that the proposed method quickly identifies airplanes in large-scale images.

  17. Short- and Long-Term Changes in Health-Related Quality of Life with Weight Loss: Results from a Randomized Controlled Trial.

    PubMed

    Pearl, Rebecca L; Wadden, Thomas A; Tronieri, Jena Shaw; Berkowitz, Robert I; Chao, Ariana M; Alamuddin, Naji; Leonard, Sharon M; Carvajal, Raymond; Bakizada, Zayna M; Pinkasavage, Emilie; Gruber, Kathryn A; Walsh, Olivia A; Alfaris, Nasreen

    2018-06-01

    The objective of this study was to determine the effects of weight loss and weight loss maintenance (WLM) on weight-specific health-related quality of life in a 66-week trial. Adults with obesity (N = 137, 86.1% female, 68.6% black, mean age = 46.1 years) who had lost ≥ 5% of initial weight in a 14-week intensive lifestyle intervention/low-calorie diet (LCD) program were randomly assigned to lorcaserin or placebo for an additional 52-week WLM program. The Impact of Weight on Quality of Life-Lite (IWQOL-Lite) scale (including five subscales), Patient Health Questionnaire-9 (depression), and Perceived Stress Scale were administered at the start of the 14-week LCD program, randomization, and week 52 of the randomized controlled trial (i.e., 66 weeks total). Significant improvements in all outcomes, except weight-related public distress, were found following the 14-week LCD program (P values < 0.05). Improvements were largely maintained during the 52-week randomized controlled trial, despite weight regain of 2.0 to 2.5 kg across treatment groups. Participants who lost ≥ 10% of initial weight achieved greater improvements in physical function, self-esteem, sexual life, and the IWQOL-Lite total score than those who lost < 5% and did not differ from those who lost 5% to 9.9%. Improvements in weight-specific health-related quality of life were achieved with moderate weight loss and were sustained during WLM.

  18. A Randomized Controlled Pilot Trial of Oral N-Acetylcysteine in Children with Autism

    PubMed Central

    Hardan, Antonio Y.; Fung, Lawrence K.; Libove, Robin A.; Obukhanych, Tetyana V.; Nair, Surekha; Herzenberg, Leonore A.; Frazier, Thomas W.; Tirouvanziam, Rabindra

    2016-01-01

    Background An imbalance in the excitatory/inhibitory systems with abnormalities in the glutamatergic pathways has been implicated in the pathophysiology of autism. Furthermore, chronic redox imbalance was also recently linked to this disorder. The goal of this pilot study was to assess the feasibility of using oral N-acetylcysteine (NAC), a glutamatergic modulator and an antioxidant in the treatment of behavioral disturbance in children with autism. Methods This is a 12-week, double-blind, randomized, placebo-controlled study of NAC in children with autistic disorder. Subjects randomized to NAC were initiated at 900 mg daily for 4 weeks, then 900 mg twice-daily for 4 weeks and 900 mg three-times-daily for 4 weeks. The primary behavioral measure (Aberrant Behavior Checklist – Irritability subscale) and safety measures were performed at baseline, 4, 8, and 12 weeks. Secondary measures included the ABC-Stereotypy subscale, Repetitive Behavior Scale – Revised (RBS-R), and Social Responsiveness Scale (SRS). Results Thirty-three subjects (31 males, 2 females; aged 3.2–10.7 years) were randomized in the study. Follow-up data was available on fourteen subjects in the NAC group and fifteen in the placebo group. Oral NAC was well-tolerated with limited side effects. Compared to placebo, NAC resulted in significant improvements on ABC-Irritability subscale (F=6.80; p<.001; d=.96). Conclusions Data from this pilot investigation support the potential usefulness of NAC for treating irritability in children with autistic disorder. Large randomized controlled investigations are warranted. ClinicalTrials.gov Identifier NCT00627705 PMID:22342106

  19. Scattering of Internal Tides by Irregular Bathymetry of Large Extent

    NASA Astrophysics Data System (ADS)

    Mei, C.

    2014-12-01

    We present an analytic theory of scattering of tide-generated internal gravity waves in a continuously stratified ocean with a randomly rough seabed. Based on the linearized approximation, the idealized case of constant mean sea depth and Brunt-Vaisala frequency is considered. The depth fluctuation is assumed to be a stationary random function of space characterized by small amplitude and a correlation length comparable to the typical wavelength. For both one- and two-dimensional topography the effects of scattering on wave phase over long distances are derived explicitly by the method of multiple scales. For one-dimensional topography, numerical results are compared with those of Buhler & Holmes-Cerfon (2011), computed by the method of characteristics. For two-dimensional topography, new results are presented for both statistically isotropic and anisotropic cases. In this talk we apply the perturbation technique of multiple scales to treat analytically the random scattering of internal tides by gently sloped bathymetric irregularities. The basic assumptions are: incompressible fluid, infinitesimal wave amplitudes, constant Brunt-Vaisala frequency, and constant mean depth. In addition, the depth disorder is assumed to be a stationary random function of space with zero mean and small root-mean-square amplitude. The correlation length can be comparable in order of magnitude to the dominant wavelength. Both one- and two-dimensional disorder will be considered. Physical effects of random scattering on the mean wave phase, i.e., spatial attenuation and wavenumber shift, will be calculated and discussed for one mode of incident wave. For two-dimensional topographies, statistically isotropic and anisotropic examples will be presented.

  20. Mentalization-based therapy for parents in entrenched conflict: A random allocation feasibility study.

    PubMed

    Hertzmann, Leezah; Target, Mary; Hewison, David; Casey, Polly; Fearon, Pasco; Lassri, Dana

    2016-12-01

    To explore the effectiveness of a mentalization-based therapeutic intervention specifically developed for parents in entrenched conflict over their children. To the best of our knowledge, this is the first randomized controlled intervention study in the United Kingdom to work with both parents postseparation, and the first to focus on mentalization in this situation. Using a mixed-methods study design, 30 parents were randomly allocated to either mentalization-based therapy for parental conflict-Parenting Together, or the Parents' Group, a psycho-educational intervention for separated parents based on elements of the Separated Parents Information Program-part of the U.K. Family Justice System and approximating to treatment as usual. Given the challenges of recruiting parents in these difficult circumstances, the sample size was small and permitted only the detection of large differences between conditions. The data, involving repeated measures of related individuals, was explored statistically, using hierarchical linear modeling, and qualitatively. Significant findings were reported on the main predicted outcomes, with clinically important trends on other measures. Qualitative findings further contributed to the understanding of parents' subjective experience, pre- and posttreatment. Findings indicate that a larger scale randomized controlled trial would be worthwhile. These encouraging findings shed light on the dynamics maintaining these high-conflict situations known to be damaging to children. We established that both forms of intervention were acceptable to most parents, and we were able to operate a random allocation design with extensive quantitative and qualitative assessments of the kind that would make a larger-scale trial feasible and productive.

  1. Crustal evolution inferred from Apollo magnetic measurements

    NASA Technical Reports Server (NTRS)

    Dyal, P.; Daily, W. D.; Vanyan, L. L.

    1978-01-01

    Magnetic field and solar wind plasma density measurements were analyzed to determine the scale size characteristics of remanent fields at the Apollo 12, 15, and 16 landing sites. Theoretical model calculations of the field-plasma interaction, involving diffusion of the remanent field into the solar plasma, were compared to the data. The information provided by all these experiments shows that remanent fields over most of the lunar surface are characterized by spatial variations as small as a few kilometers. Large regions (50 to 100 km) of the lunar crust were probably uniformly magnetized during early crustal evolution. Bombardment and subsequent gardening of the upper layers of these magnetized regions left randomly oriented, smaller scale (5 to 10 km) magnetic sources close to the surface. The larger scale size fields of magnitude approximately 0.1 gammas are measured by the orbiting subsatellite experiments and the small scale sized remanent fields of magnitude approximately 100 gammas are measured by the surface experiments.

  2. Scaling laws for nanoFET sensors

    NASA Astrophysics Data System (ADS)

    Zhou, Fu-Shan; Wei, Qi-Huo

    2008-01-01

    The sensitive conductance change of semiconductor nanowires and carbon nanotubes in response to the binding of charged molecules provides a novel sensing modality which is generally denoted as nanoFET sensors. In this paper, we study the scaling laws of nanoplate FET sensors by simplifying nanoplates as random resistor networks with molecular receptors sitting on lattice sites. Nanowire/tube FETs are included as the limiting cases where the device width goes small. Computer simulations show that the field effect strength exerted by the binding molecules has significant impact on the scaling behaviors. When the field effect strength is small, nanoFETs have little size and shape dependence. In contrast, when the field effect strength becomes stronger, there exists a lower detection threshold for charge accumulation FETs and an upper detection threshold for charge depletion FET sensors. At these thresholds, the nanoFET devices undergo a transition between low and large sensitivities. These thresholds may set the detection limits of nanoFET sensors, while they could be eliminated by designing devices with very short source-drain distance and large width.

  3. A scale-invariant cellular-automata model for distributed seismicity

    NASA Technical Reports Server (NTRS)

    Barriere, Benoit; Turcotte, Donald L.

    1991-01-01

    In the standard cellular-automata model for a fault, an element of stress is randomly added to a grid of boxes until a box has four elements; these are then redistributed to the adjacent boxes on the grid. The redistribution can result in one or more of these boxes having four or more elements, in which case further redistributions are required. On average, added elements are lost from the edges of the grid. The model is modified so that the boxes have a scale-invariant distribution of sizes. The objective is to model a scale-invariant distribution of fault sizes. When a redistribution from a box occurs it is equivalent to a characteristic earthquake on the fault. A redistribution from a small box (a foreshock) can trigger an instability in a large box (the main shock). A redistribution from a large box always triggers many instabilities in the smaller boxes (aftershocks). The frequency-size statistics for both main shocks and aftershocks satisfy the Gutenberg-Richter relation with b = 0.835 for main shocks and b = 0.635 for aftershocks. Model foreshocks occur 28 percent of the time.
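
    For orientation, a minimal version of the standard uniform-box automaton described above (before the paper's scale-invariant modification) can be written as follows; the grid size and iteration count are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Minimal sketch of the standard (uniform-box) automaton: add one
    # stress element at a random box; any box reaching four elements
    # topples, sending one element to each neighbor; elements fall off
    # the grid edges. Each avalanche is an "earthquake" whose size is
    # the number of topplings.
    L = 32
    grid = np.zeros((L, L), dtype=int)
    sizes = []
    for _ in range(50_000):
        i, j = rng.integers(0, L, size=2)
        grid[i, j] += 1
        n_topple = 0
        while True:
            unstable = np.argwhere(grid >= 4)
            if len(unstable) == 0:
                break
            for a, b in unstable:
                grid[a, b] -= 4
                n_topple += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    if 0 <= a + da < L and 0 <= b + db < L:
                        grid[a + da, b + db] += 1
        if n_topple:
            sizes.append(n_topple)

    # Frequency-size statistics approximate a power law (Gutenberg-Richter).
    sizes = np.array(sizes)
    print("avalanches:", len(sizes), "mean size:", round(sizes.mean(), 2),
          "max size:", sizes.max())
    ```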

  4. Random sampling of elementary flux modes in large-scale metabolic networks.

    PubMed

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well-distributed sample that is representative of the complete set of EMs should be suitable for most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. Contact: dmachado@deb.uminho.pt. Supplementary data are available at Bioinformatics online.
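
    The filtering step at the heart of the algorithm is easy to sketch: cap the candidate list at each iteration by drawing a uniformly random subset, so every candidate has the same selection probability. The snippet below illustrates only this step with placeholder mode labels; it is not the published emsampler code.

    ```python
    import random

    def filter_new_modes(candidates, cap, rng=random.Random(4)):
        """Sketch of the filtering step only: when one iteration of the
        canonical-basis computation yields more new candidate modes than
        a cap, keep a uniformly random subset so that every candidate has
        the same probability of being selected (keeping the sample
        unbiased while bounding memory growth)."""
        if len(candidates) <= cap:
            return candidates
        return rng.sample(candidates, cap)

    # Toy usage with string placeholders (real EMs are flux vectors).
    candidates = [f"mode_{i}" for i in range(10_000)]
    kept = filter_new_modes(candidates, cap=500)
    print(len(kept), "candidate modes retained")
    ```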

  5. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise

    NASA Astrophysics Data System (ADS)

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.

  6. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise.

    PubMed

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.

  7. Invariance in the recurrence of large returns and the validation of models of price dynamics

    NASA Astrophysics Data System (ADS)

    Chang, Lo-Bin; Geman, Stuart; Hsieh, Fushing; Hwang, Chii-Ruey

    2013-08-01

    Starting from a robust, nonparametric definition of large returns (“excursions”), we study the statistics of their occurrences, focusing on the recurrence process. The empirical waiting-time distribution between excursions is remarkably invariant to year, stock, and scale (return interval). This invariance is related to self-similarity of the marginal distributions of returns, but the excursion waiting-time distribution is a function of the entire return process and not just its univariate probabilities. Generalized autoregressive conditional heteroskedasticity (GARCH) models, market-time transformations based on volume or trades, and generalized (Lévy) random-walk models all fail to fit the statistical structure of excursions.
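
    An excursion analysis of this kind is straightforward to reproduce in outline: flag returns whose magnitude exceeds an empirical quantile, then collect the waiting times between flagged events. The sketch below uses a synthetic heavy-tailed series in place of real stock returns, with arbitrary quantile level and aggregation scales.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def excursion_waiting_times(returns, q=0.95):
        """Flag returns whose magnitude exceeds the empirical q-quantile
        and return the waiting times (in ticks) between consecutive
        flagged events."""
        threshold = np.quantile(np.abs(returns), q)
        event_times = np.flatnonzero(np.abs(returns) > threshold)
        return np.diff(event_times)

    # Toy heavy-tailed return series standing in for real stock returns;
    # compare waiting-time distributions across two return intervals.
    r = rng.standard_t(df=3, size=100_000)
    for scale in (1, 10):
        agg = r[: len(r) // scale * scale].reshape(-1, scale).sum(axis=1)
        w = excursion_waiting_times(agg)
        print(f"scale {scale}: median wait {np.median(w):.0f}, "
              f"95th pct {np.percentile(w, 95):.0f}")
    ```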

  8. Scaling exponents for ordered maxima

    DOE PAGES

    Ben-Naim, E.; Krapivsky, P. L.; Lemons, N. W.

    2015-12-22

    We study extreme value statistics of multiple sequences of random variables. For each sequence with N variables, independently drawn from the same distribution, the running maximum is defined as the largest variable to date. We compare the running maxima of m independent sequences and investigate the probability S_N that the maxima are perfectly ordered, that is, the running maximum of the first sequence is always larger than that of the second sequence, which is always larger than the running maximum of the third sequence, and so on. The probability S_N is universal: it does not depend on the distribution from which the random variables are drawn. For two sequences, S_N ~ N^(-1/2), and in general, the decay is algebraic, S_N ~ N^(-σ_m), for large N. We analytically obtain the exponent σ_3 ≅ 1.302931 as the root of a transcendental equation. Moreover, the exponents σ_m grow with m, and we show that σ_m ~ m for large m.
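
    Because S_N is distribution-free, the result can be checked by direct Monte Carlo with any convenient distribution. A minimal sketch with uniform variates and arbitrary trial counts:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def prob_ordered_maxima(m, N, trials=5_000):
        """Monte Carlo estimate of S_N: the probability that the running
        maxima of m i.i.d. sequences of length N are perfectly ordered at
        every step. Uniform variates suffice since S_N is distribution-free."""
        x = rng.random((trials, m, N))
        run_max = np.maximum.accumulate(x, axis=2)
        # Ordered means sequence 1's running max > sequence 2's > ... always.
        ordered = np.all(np.diff(run_max, axis=1) < 0, axis=(1, 2))
        return ordered.mean()

    # Expect S_N ~ N^(-1/2) for m = 2: quadrupling N should roughly halve S_N.
    for N in (100, 400):
        print(N, prob_ordered_maxima(2, N))
    ```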

  9. Statistical analysis of relationship between negative-bias temperature instability and random telegraph noise in small p-channel metal-oxide-semiconductor field-effect transistors

    NASA Astrophysics Data System (ADS)

    Tega, Naoki; Miki, Hiroshi; Mine, Toshiyuki; Ohmori, Kenji; Yamada, Keisaku

    2014-03-01

    It is demonstrated from a statistical perspective that the generation of random telegraph noise (RTN) changes before and after the application of negative-bias temperature instability (NBTI) stress. The NBTI stress generates a large number of permanent interface traps and, at the same time, a large number of RTN traps causing temporary RTN and one-time RTN. The interface trap and the RTN trap show different features in the recovery process. That is, re-passivation of interface states is a minor cause of the recovery after NBTI stress; in contrast, the rapid disappearance of the temporary RTN and the one-time RTN is the main cause of the recovery. The RTN traps are less likely to become permanent. This two-trap model, comprising the interface trap and the RTN trap, simply explains NBTI degradation and recovery in scaled p-channel metal-oxide-semiconductor field-effect transistors.

  10. Neurodevelopmental alterations of large-scale structural networks in children with new-onset epilepsy

    PubMed Central

    Bonilha, Leonardo; Tabesh, Ali; Dabbs, Kevin; Hsu, David A.; Stafstrom, Carl E.; Hermann, Bruce P.; Lin, Jack J.

    2014-01-01

    Recent neuroimaging and behavioral studies have revealed that children with new-onset epilepsy already exhibit brain structural abnormalities and cognitive impairment. How the organization of large-scale brain structural networks is altered near the time of seizure onset, and whether network changes are related to cognitive performance, remain unclear. Recent studies also suggest that regional brain volume covariance reflects synchronized brain developmental changes. Here, we test the hypothesis that epilepsy during early life is associated with abnormalities in brain network organization and cognition. We used graph theory to study structural brain networks based on regional volume covariance in 39 children with new-onset seizures and 28 healthy controls. Children with new-onset epilepsy showed a suboptimal topological structural organization with enhanced network segregation and reduced global integration compared to controls. At the regional level, structural reorganization was evident, with nodes redistributed from posterior to more anterior head regions. The epileptic brain network was more vulnerable to targeted but not random attacks. Finally, a subgroup of children with epilepsy, namely those with lower IQ and poorer executive function, had a reduced balance between network segregation and integration. Taken together, the findings suggest that the neurodevelopmental impact of new-onset childhood epilepsies alters large-scale brain networks, resulting in greater vulnerability to network failure and cognitive impairment. PMID:24453089
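
    The targeted-versus-random attack comparison can be illustrated with generic graph tools in a few lines of networkx; the toy graph below merely stands in for a structural covariance network, which the study derives from regional volume covariance rather than from a generative model.

    ```python
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(7)

    def attack_curve(G, targeted=True):
        """Fraction of nodes remaining in the giant component as nodes
        are removed by highest degree (targeted) or uniformly at random."""
        H = G.copy()
        n0 = H.number_of_nodes()
        fractions = []
        while H.number_of_nodes() > 1:
            if targeted:
                node = max(H.degree, key=lambda kv: kv[1])[0]
            else:
                node = list(H.nodes)[rng.integers(0, H.number_of_nodes())]
            H.remove_node(node)
            giant = max(nx.connected_components(H), key=len)
            fractions.append(len(giant) / n0)
        return fractions

    # Toy stand-in graph with 90 "regions".
    G = nx.watts_strogatz_graph(90, 6, 0.1, seed=7)
    t = attack_curve(G, targeted=True)
    r = attack_curve(G, targeted=False)
    k = len(t) // 5   # after removing roughly 20% of nodes
    print(f"giant component after 20% removal: targeted {t[k]:.2f}, random {r[k]:.2f}")
    ```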

  11. A resource of large-scale molecular markers for monitoring Agropyron cristatum chromatin introgression in wheat background based on transcriptome sequences.

    PubMed

    Zhang, Jinpeng; Liu, Weihua; Lu, Yuqing; Liu, Qunxing; Yang, Xinming; Li, Xiuquan; Li, Lihui

    2017-09-20

    Agropyron cristatum is a wild grass of the tribe Triticeae and serves as a gene donor for wheat improvement. However, very few markers can be used to monitor A. cristatum chromatin introgressions in wheat. Here, we report a resource of large-scale molecular markers for tracking alien introgressions in wheat based on transcriptome sequences. By aligning A. cristatum unigenes with the Chinese Spring reference genome sequences, we designed 9602 A. cristatum expressed sequence tag-sequence-tagged site (EST-STS) markers for PCR amplification and experimental screening. As a result, 6063 polymorphic EST-STS markers were specific for the A. cristatum P genome in a single recipient wheat background. A total of 4956 randomly selected polymorphic EST-STS markers were further tested in eight wheat variety backgrounds, and 3070 markers displaying stable and polymorphic amplification were validated. These markers covered more than 98% of the A. cristatum genome, and the marker distribution density was approximately 1.28 cM. An application case of all EST-STS markers was validated on the A. cristatum 6P chromosome. These markers were successfully applied in the tracking of alien A. cristatum chromatin. Altogether, this study provides a universal method of large-scale molecular marker development to monitor wild relative chromatin in wheat.

  12. Chlorophyll a and inorganic suspended solids in backwaters of the upper Mississippi River system: Backwater lake effects and their associations with selected environmental predictors

    USGS Publications Warehouse

    Rogala, James T.; Gray, Brian R.

    2006-01-01

    The Long Term Resource Monitoring Program (LTRMP) uses a stratified random sampling design to obtain water quality statistics within selected study reaches of the Upper Mississippi River System (UMRS). LTRMP sampling strata are based on aquatic area types generally found in large rivers (e.g., main channel, side channel, backwater, and impounded areas). For hydrologically well-mixed strata (i.e., main channel), variance associated with spatial scales smaller than the strata scale is a relatively minor issue for many water quality parameters. However, analysis of LTRMP water quality data has shown that within-strata variability at the strata scale is high in off-channel areas (i.e., backwaters). A portion of that variability may be associated with differences among individual backwater lakes (i.e., small and large backwater regions separated by channels) that cumulatively make up the backwater stratum. The objective of the statistical modeling presented here is to determine if differences among backwater lakes account for a large portion of the variance observed in the backwater stratum for selected parameters. If variance associated with backwater lakes is high, then inclusion of backwater lake effects within statistical models is warranted. Further, lakes themselves may represent natural experimental units where associations of interest to management may be estimated.

  13. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
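
    The core idea, replacing the direct solve of the damped normal equations with a Krylov iteration, can be sketched with SciPy's LSQR, which accepts the Levenberg-Marquardt damping directly. The subspace recycling across damping parameters and the parallel Julia implementation are not reproduced here, and the toy problem is linear.

    ```python
    import numpy as np
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(8)

    def lm_step(J, r, damping):
        """One Levenberg-Marquardt step computed by a Krylov method:
        solve min ||J dx + r||^2 + damping^2 ||dx||^2 with LSQR instead
        of forming and factoring (J^T J + damping^2 I)."""
        return lsqr(J, -r, damp=damping)[0]

    # Toy linear inverse problem: recover x_true from noisy observations.
    m, n = 500, 200
    J = rng.normal(size=(m, n))                    # Jacobian (constant here)
    x_true = rng.normal(size=n)
    data = J @ x_true + 0.01 * rng.normal(size=m)  # synthetic observations
    x0 = np.zeros(n)
    r0 = J @ x0 - data                             # residual at the start
    x1 = x0 + lm_step(J, r0, damping=1.0)
    print("relative error after one step:",
          np.linalg.norm(x1 - x_true) / np.linalg.norm(x_true))
    ```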

  14. On the Fluctuating Component of the Sun's Large-Scale Magnetic Field

    NASA Astrophysics Data System (ADS)

    Wang, Y.-M.; Sheeley, N. R., Jr.

    2003-06-01

    The Sun's large-scale magnetic field and its proxies are known to undergo substantial variations on timescales much less than a solar cycle but longer than a rotation period. Examples of such variations include the double activity maximum inferred by Gnevyshev, the large peaks in the interplanetary field strength observed in 1982 and 1991, and the 1.3-1.4 yr periodicities detected over limited time intervals in solar wind speed and geomagnetic activity. We consider the question of the extent to which these variations are stochastic in nature. For this purpose, we simulate the evolution of the Sun's equatorial dipole strength and total open flux under the assumption that the active region sources (BMRs) are distributed randomly in longitude. The results are then interpreted with the help of a simple random walk model including dissipation. We find that the equatorial dipole and open flux generally exhibit multiple peaks during each 11 yr cycle, with the highest peak as likely to occur during the declining phase as at sunspot maximum. The widths of the peaks are determined by the timescale τ~1 yr for the equatorial dipole to decay through the combined action of meridional flow, differential rotation, and supergranular diffusion. The amplitudes of the fluctuations depend on the strengths and longitudinal phase relations of the BMRs, as well as on the relative rates of flux emergence and decay. We conclude that stochastic processes provide a viable explanation for the "Gnevyshev gaps" and for the existence of quasi periodicities in the range ~1-3 yr.
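
    The stochastic picture can be caricatured as a kicked, dissipative random walk: randomly oriented vector kicks arriving at a cycle-modulated rate, with the dipole decaying on the timescale tau. The sketch below uses illustrative rates and amplitudes, not values fitted to solar data.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Kicked, dissipative 2-D random walk for the equatorial dipole vector:
    # randomly oriented kicks (source emergences) arrive at a rate that
    # follows the activity cycle, while the dipole decays on tau ~ 1 yr.
    dt, tau, cycle = 0.01, 1.0, 11.0             # all times in years
    n = int(cycle / dt)
    t = np.arange(n) * dt
    rate = 50 * np.sin(np.pi * t / cycle) ** 2   # more sources near maximum
    d = np.zeros((n, 2))
    for i in range(1, n):
        kicks = rng.poisson(rate[i] * dt)
        angles = rng.uniform(0.0, 2 * np.pi, kicks)
        kick = 0.1 * np.array([np.cos(angles).sum(), np.sin(angles).sum()])
        d[i] = d[i - 1] * np.exp(-dt / tau) + kick
    amplitude = np.hypot(d[:, 0], d[:, 1])
    print(f"largest dipole peak at t = {t[amplitude.argmax()]:.1f} yr "
          f"of an {cycle:.0f} yr cycle")
    ```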

  15. Effects of Eddy Viscosity on Time Correlations in Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    He, Guowei; Rubinstein, R.; Wang, Lian-Ping; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    Subgrid-scale (SGS) models for large eddy simulation (LES) have generally been evaluated by their ability to predict single-time statistics of turbulent flows such as kinetic energy and Reynolds stresses. Recent applications of large eddy simulation to the evaluation of sound sources in turbulent flows, a problem in which time correlations determine the frequency distribution of acoustic radiation, suggest that subgrid models should also be evaluated by their ability to predict time correlations in turbulent flows. This paper compares the two-point, two-time Eulerian velocity correlation evaluated from direct numerical simulation (DNS) with that evaluated from LES, using a spectral eddy viscosity, for isotropic homogeneous turbulence. It is found that the LES fields are too coherent, in the sense that their time correlations decay more slowly than the corresponding time correlations in the DNS fields. This observation is confirmed by theoretical estimates of time correlations using the Taylor expansion technique. The reason for the slower decay is that the eddy viscosity does not include the random backscatter, which decorrelates fluid motion at large scales. An effective eddy viscosity associated with time correlations is formulated, to which the eddy viscosity associated with energy transfer is a leading order approximation.
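
    For concreteness, the diagnostic underlying this comparison can be sketched as follows (assuming a stack of stored snapshots of one velocity component and statistical stationarity):

      import numpy as np

      def two_time_correlation(u, max_lag=None):
          # Normalized Eulerian two-time correlation C(tau), averaged over space.
          # u has shape (n_times, nx, ny[, nz]); the time mean is removed first.
          n_t = u.shape[0]
          max_lag = max_lag or n_t // 2
          u = u - u.mean(axis=0, keepdims=True)
          var = (u ** 2).mean()
          return np.array([(u[:n_t - s] * u[s:]).mean()
                           for s in range(max_lag)]) / var

      # A slower decay of C(tau) for the LES field than for the filtered DNS
      # field is the over-coherence attributed above to the missing backscatter.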

  16. Large Fluctuations for Spatial Diffusion of Cold Atoms

    NASA Astrophysics Data System (ADS)

    Aghion, Erez; Kessler, David A.; Barkai, Eli

    2017-06-01

    We use a new approach to study the large fluctuations of a heavy-tailed system, where the standard large-deviations principle does not apply. Large-deviations theory deals with tails of probability distributions and the rare events of random processes, for example, spreading packets of particles. Mathematically, it concerns the exponential falloff of the density of thin-tailed systems. Here we investigate the spatial density P_t(x) of laser-cooled atoms, where at intermediate length scales the shape is fat tailed. We focus on the rare events beyond this range, which dominate important statistical properties of the system. Through a novel friction mechanism induced by the laser fields, the density is explored with the recently proposed non-normalized infinite-covariant density approach. The small and large fluctuations give rise to a bifractal nature of the spreading packet. We derive general relations which extend our theory to a class of systems with multifractal moments.

  17. An ensemble heterogeneous classification methodology for discovering health-related knowledge in social media messages.

    PubMed

    Tuarob, Suppawong; Tucker, Conrad S; Salathe, Marcel; Ram, Nilam

    2014-06-01

    The role of social media as a source of timely and massive information has become more apparent since the era of Web 2.0. Multiple studies have illustrated the use of information in social media to discover biomedical and health-related knowledge. Most methods proposed in the literature employ traditional document classification techniques that represent a document as a bag of words. These techniques work well when documents are rich in text and conform to standard English; however, they are not optimal for social media data, where sparsity and noise are the norm. This paper aims to address the limitations posed by traditional bag-of-words methods and proposes the use of heterogeneous features in combination with ensemble machine learning techniques to discover health-related information, which could prove useful to multiple biomedical applications, especially those needing to discover health-related knowledge in large-scale social media data. Furthermore, the proposed methodology could be generalized to discover different types of information in various kinds of textual data. Social media data are characterized by an abundance of short, social-oriented messages that do not conform to standard languages, either grammatically or syntactically. The problem of discovering health-related knowledge in social media data streams is then transformed into a text classification problem, where a text is identified as positive if it is health-related and negative otherwise. We first identify the limitations of the traditional methods, which train machines with N-gram word features, then propose to overcome such limitations by utilizing a collaboration of machine learning based classifiers, each of which is trained to learn a semantically different aspect of the data. The parameter analysis for tuning each classifier is also reported. Three data sets are used in this research. The first data set comprises approximately 5,000 hand-labeled tweets and is used for cross-validation of the classification models in the small-scale experiment and for training the classifiers in the real-world large-scale experiment. The second data set is a random sample of real-world Twitter data in the US. The third data set is a random sample of real-world Facebook Timeline posts. Two sets of evaluations are conducted to investigate the proposed model's ability to discover health-related information in the social media domain: small-scale and large-scale evaluations. The small-scale evaluation employs 10-fold cross-validation on the labeled data, and aims to tune parameters of the proposed models and to compare with the state-of-the-art method. The large-scale evaluation tests the trained classification models on the native, real-world data sets, and is needed to verify the ability of the proposed model to handle the massive heterogeneity in real-world social media. The small-scale experiment reveals that the proposed method is able to mitigate the limitations of the well-established techniques in the literature, yielding a performance improvement of 18.61% (F-measure). The large-scale experiment further reveals that the baseline fails to perform well on larger data with higher degrees of heterogeneity, while the proposed method yields reasonably good performance and outperforms the baseline by 46.62% (F-measure) on average. Copyright © 2014 Elsevier Inc. All rights reserved.
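
    A minimal sketch of a heterogeneous soft-voting ensemble in this spirit, using scikit-learn (the particular feature views and base learners are illustrative stand-ins, not the authors' exact classifiers):

      from sklearn.pipeline import make_pipeline
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.svm import LinearSVC
      from sklearn.calibration import CalibratedClassifierCV
      from sklearn.ensemble import VotingClassifier

      # Each base learner sees a semantically different view of the same text.
      word_view = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
      char_view = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
                                LogisticRegression(max_iter=1000))
      svm_view = make_pipeline(TfidfVectorizer(sublinear_tf=True),
                               CalibratedClassifierCV(LinearSVC()))  # adds predict_proba

      ensemble = VotingClassifier(
          estimators=[("word", word_view), ("char", char_view), ("svm", svm_view)],
          voting="soft")  # average class probabilities across the heterogeneous views

      # ensemble.fit(train_texts, train_labels)
      # ensemble.predict(["my flu is getting worse"])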

  18. Comparison of unitary associations and probabilistic ranking and scaling as applied to mesozoic radiolarians

    NASA Astrophysics Data System (ADS)

    Baumgartner, Peter O.

    A database on Middle Jurassic-Early Cretaceous radiolarians consisting of first and final occurrences of 110 species in 226 samples from 43 localities was used to compute Unitary Associations and probabilistic ranking and scaling (RASC), in order to test deterministic versus probabilistic quantitative biostratigraphic methods. Because the Mesozoic radiolarian fossil record is mainly dissolution-controlled, the sequence of events differs greatly from section to section. The scatter of local first and final appearances along a time scale is large compared to the species range; it is asymmetrical, with a maximum near the ends of the range, and it is non-random. Thus, these data do not satisfy the statistical assumptions made in ranking and scaling. Unitary Associations produce maximum ranges of the species relative to each other by stacking co-occurrence data from all sections and therefore compensate for the local dissolution effects. Ranking and scaling, based on the assumption of a normal random distribution of the events, produces average ranges which are for most species much shorter than the maximum UA ranges. There are, however, a number of species with similar ranges in both solutions. These species are believed to be the most dissolution-resistant and, therefore, the most reliable ones for the definition of biochronozones. The comparison of maximum and average ranges may be a powerful tool to test the reliability of species for biochronology. Dissolution-controlled fossil data yield high crossover frequencies and therefore small, statistically insignificant interfossil distances. Scaling has not produced a useful sequence for this type of data.

  19. Innovation in a Learning Health Care System: Veteran-Directed Home- and Community-Based Services.

    PubMed

    Garrido, Melissa M; Allman, Richard M; Pizer, Steven D; Rudolph, James L; Thomas, Kali S; Sperber, Nina R; Van Houtven, Courtney H; Frakt, Austin B

    2017-11-01

    A path-breaking example of the interplay between geriatrics and learning healthcare systems is the Veterans Health Administration's (VHA's) planned roll-out of a program for providing participant-directed home- and community-based services to veterans with cognitive and functional limitations. We describe the design of a large-scale, stepped-wedge, cluster-randomized trial of the Veteran-Directed Home- and Community-Based Services (VD-HCBS) program. From March 2017 through December 2019, up to 77 Veterans Affairs Medical Centers will be randomized to times at which they begin offering VD-HCBS to veterans at risk of nursing home placement. Services will be provided to community-dwelling participants with support from Aging and Disability Network Agencies. The VHA Partnered Evidence-based Policy Resource Center (PEPReC) is coordinating the evaluation, which includes collaboration from operational stakeholders from the VHA and Administration for Community Living and interdisciplinary researchers from the Center of Innovation in Long-Term Services and Supports and the Center for Health Services Research in Primary Care. For older veterans with functional limitations who are eligible for VD-HCBS, we will evaluate health outcomes (hospitalizations, emergency department visits, nursing home admissions, days at home) and healthcare costs associated with VD-HCBS availability. Learning healthcare systems facilitate diffusion of innovation while enabling rigorous evaluation of effects on patient outcomes. The VHA's randomized rollout of VD-HCBS to veterans at risk of nursing home placement is an example of how to achieve these goals simultaneously. PEPReC's experience designing an evaluation with researchers and operations stakeholders may serve as a framework for others seeking to develop rapid, rigorous, large-scale evaluations of delivery system innovations targeted to older adults. © 2017, Copyright the Authors Journal compilation © 2017, The American Geriatrics Society.
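
    Illustratively (the trial's actual randomization procedure is not detailed in the abstract), a stepped-wedge design amounts to randomizing sites to crossover waves:

      import random

      def stepped_wedge_schedule(sites, n_steps, seed=0):
          # Every site starts in the control condition and switches to the
          # intervention at its randomly assigned step, never switching back.
          rng = random.Random(seed)
          shuffled = sites[:]
          rng.shuffle(shuffled)
          waves = [shuffled[i::n_steps] for i in range(n_steps)]
          return {site: step for step, wave in enumerate(waves, 1) for site in wave}

      schedule = stepped_wedge_schedule([f"VAMC-{i:02d}" for i in range(1, 78)],
                                        n_steps=8)   # 77 hypothetical centers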

  20. The nature of the dense obscuring material in the nucleus of NGC 1068

    NASA Technical Reports Server (NTRS)

    Tacconi, L. J.; Genzel, R.; Blietz, M.; Cameron, M.; Harris, A. I.; Madden, S.

    1994-01-01

    High spatial and spectral resolution observations of the distribution, physical parameters, and kinematics of the molecular interstellar medium toward the nucleus of the Seyfert 2 galaxy NGC 1068 are reported. The data consist of 2.4 by 3.4 arcseconds resolution interferometry of the 88.6 GHz HCN J = 1-0 line at 17 km/s spectral resolution, single dish observations of several mm/submm isotopic lines of CO and HCN, and 0.85 arcseconds imaging spectroscopy of the 2.12 micron H2 S(1) line at a velocity resolution of 110 km/s. The central few hundred parsecs of NGC 1068 contain a system of dense (N(H2) ~ 10^5 cm^-3), warm (T >= 70 K) molecular cloud cores. The low density molecular envelopes have probably been stripped by the nuclear wind and radiation. The molecular gas layer is located in the plane of NGC 1068's large scale disk (inclination approximately 35 deg) and orbits in elliptical streamlines in response to the central stellar bar. The spatial distribution of the 2 micron H2 emission suggests that gas is shocked at the leading edge of the bar, probably resulting in gas influx into the central 100 pc at a rate of a few solar masses per year. In addition to large scale streaming (with a solid body rotation curve), the HCN velocity field requires the presence of random motions of order 100 km/s. We interpret these large random motions as implying that the nuclear gas disk is very thick (scale height/radius approximately 1), probably as the result of the impact of nuclear radiation and wind on orbiting molecular clouds. Geometry and column density of the molecular cloud layer between approximately 30 pc and 300 pc from the nucleus can plausibly account for the nuclear obscuration and anisotropy of the radiation field in the visible and UV.

  1. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    PubMed

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
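
    A sketch of the centering and scaling step for the null case of a white-noise covariance matrix, using Johnstone's constants (the simulation sizes are arbitrary, and the quoted TW1 95th percentile of about 0.98 is approximate):

      import numpy as np

      def tw_scaled_top_eigenvalue(X):
          # Largest eigenvalue of the sample cross-product matrix, centered and
          # scaled so that it is approximately Tracy-Widom (TW1) under the null.
          n, p = X.shape
          Xc = X - X.mean(axis=0)
          l1 = np.linalg.eigvalsh(Xc.T @ Xc)[-1]
          mu = (np.sqrt(n - 1) + np.sqrt(p)) ** 2
          sigma = ((np.sqrt(n - 1) + np.sqrt(p))
                   * (1 / np.sqrt(n - 1) + 1 / np.sqrt(p)) ** (1 / 3))
          return (l1 - mu) / sigma

      rng = np.random.default_rng(0)
      stats = [tw_scaled_top_eigenvalue(rng.standard_normal((200, 20)))
               for _ in range(500)]
      # Exceedances of the TW1 95th percentile (~0.98) flag eigenvalues larger
      # than sampling error alone would produce; near 0.05 under the null.
      print(np.mean(np.array(stats) > 0.98))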

  2. A systematic review of Investigator Global Assessment (IGA) in atopic dermatitis (AD) trials: Many options, no standards.

    PubMed

    Futamura, Masaki; Leshem, Yael A; Thomas, Kim S; Nankervis, Helen; Williams, Hywel C; Simpson, Eric L

    2016-02-01

    Investigators often use global assessments to provide a snapshot of overall disease severity in dermatologic clinical trials. Although easy to perform, the frequency of use and standardization of global assessments in studies of atopic dermatitis (AD) is unclear. We sought to assess the frequency, definitions, and methods of analysis of the Investigator Global Assessment in randomized controlled trials of AD. We conducted a systematic review using all published randomized controlled trials of AD treatments in the Global Resource of Eczema Trials database (2000-2014). We determined the frequency of application of global scales and their defining features. Among 317 trials identified, 101 trials (32%) used an investigator-performed global assessment as an outcome measure. There was large variability in global assessments between studies in nomenclature, scale size, definitions, outcome description, and analysis. Both static and dynamic scales were identified, ranging from 4- to 7-point scales. North American studies used global assessments more commonly than studies from other countries. The search was restricted to the Global Resource of Eczema Trials database. Global assessments are used frequently in studies of AD, but their complete lack of standardized definitions and implementation precludes any meaningful comparisons between studies, which in turn impedes data synthesis to inform clinical decision-making. Standardization is urgently required. Copyright © 2015. Published by Elsevier Inc.

  3. Efficacy of prokinetics with a split-dose of polyethylene glycol in bowel preparation for morning colonoscopy: a randomized controlled trial.

    PubMed

    Kim, Hyoung Jun; Kim, Tae Oh; Shin, Bong Chul; Woo, Jae Gon; Seo, Eun Hee; Joo, Hee Rin; Heo, Nae-Yun; Park, Jongha; Park, Seung Ha; Yang, Sung Yeon; Moon, Young Soo; Shin, Jin-Yong; Lee, Nae Young

    2012-01-01

    Currently, a split dose of polyethylene glycol (PEG) is the mainstay of bowel preparation due to its tolerability, bowel-cleansing action, and safety. However, bowel preparation with PEG is suboptimal because residual fluid reduces the polyp detection rate and requires a more thorough colon inspection. The aim of our study was to demonstrate the efficacy of a sufficient dose of prokinetics on bowel cleansing together with split-dose PEG. A prospective endoscopist-blinded study was conducted. Patients were randomly allocated to two groups: prokinetics with split-dose PEG or split-dose PEG alone. A prokinetic [100 mg itopride (Itomed)] was administered twice, simultaneously with each split dose of PEG. Bowel-cleansing efficacy was measured by endoscopists using the Ottawa scale and the segmental fluidity scale score. Each participant completed a bowel preparation survey. Mean scores from the Ottawa scale, segmental fluid scale, and rate of poor preparation were compared between both groups. Patients in the prokinetics with split-dose PEG group showed significantly lower total Ottawa and segmental fluid scores compared with patients in the split-dose PEG alone group. A sufficient dose of prokinetics with a split dose of PEG showed efficacy in bowel cleansing for morning colonoscopy, largely due to the reduction in colonic fluid. Copyright © 2012 S. Karger AG, Basel.

  4. Current fluctuations in periodically driven systems

    NASA Astrophysics Data System (ADS)

    Barato, Andre C.; Chetrite, Raphael

    2018-05-01

    Small nonequilibrium systems driven by an external periodic protocol can be described by Markov processes with time-periodic transition rates. In general, current fluctuations in such small systems are large and may play a crucial role. We develop a theoretical formalism to evaluate the rate of such large deviations in periodically driven systems. We show that the scaled cumulant generating function that characterizes current fluctuations is given by a maximal Floquet exponent. Comparing deterministic protocols with stochastic protocols, we show that, with respect to large deviations, systems driven by a stochastic protocol with an infinitely large number of jumps are equivalent to systems driven by deterministic protocols. Our results are illustrated with three case studies: a two-state model for a heat engine, a three-state model for a molecular pump, and a biased random walk with a time-periodic affinity.
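
    A minimal sketch of the computation this result licenses: for a two-state process with time-periodic rates, the scaled cumulant generating function of a counted current is the maximal Floquet exponent of the tilted generator's one-period propagator (the rate functions below are assumed forms, chosen only for illustration):

      import numpy as np

      def scgf_periodic(s, T=1.0, n_steps=2000):
          # lambda(s) for the number of 1->2 jumps per period, from the dominant
          # Floquet multiplier of the tilted (biased) generator.
          dt = T / n_steps
          P = np.eye(2)
          for k in range(n_steps):
              t = (k + 0.5) * dt
              w12 = 1.0 + 0.8 * np.sin(2 * np.pi * t / T)   # rate 1 -> 2 (assumed)
              w21 = 1.5 + 0.5 * np.cos(2 * np.pi * t / T)   # rate 2 -> 1 (assumed)
              L = np.array([[-w12, w21],
                            [w12 * np.exp(s), -w21]])       # counting field on 1->2
              P = (np.eye(2) + dt * L) @ P                  # Euler step of propagator
          return np.log(np.max(np.abs(np.linalg.eigvals(P)))) / T

      print(scgf_periodic(0.0))   # ~0 by probability conservation
      # lambda'(0) gives the mean current; the Legendre transform of lambda(s)
      # gives the large-deviation rate function of the time-averaged current.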

  5. Rapid Increase in Ownership and Use of Long-Lasting Insecticidal Nets and Decrease in Prevalence of Malaria in Three Regional States of Ethiopia (2006-2007)

    PubMed Central

    Shargie, Estifanos Biru; Ngondi, Jeremiah; Graves, Patricia M.; Getachew, Asefaw; Hwang, Jimee; Gebre, Teshome; Mosher, Aryc W.; Ceccato, Pietro; Endeshaw, Tekola; Jima, Daddi; Tadesse, Zerihun; Tenaw, Eskindir; Reithinger, Richard; Emerson, Paul M.; Richards, Frank O.; Ghebreyesus, Tedros Adhanom

    2010-01-01

    Following recent large scale-up of malaria control interventions in Ethiopia, this study aimed to compare ownership and use of long-lasting insecticidal nets (LLIN), and the change in malaria prevalence using two population-based household surveys in three regions of the country. Each survey used multistage cluster random sampling with 25 households per cluster. Household net ownership tripled from 19.6% in 2006 to 68.4% in 2007, with mean LLIN per household increasing from 0.3 to 1.2. Net use overall more than doubled from 15.3% to 34.5%, but in households owning LLIN, use declined from 71.7% to 48.3%. Parasitemia declined from 4.1% to 0.4%. Large scale-up of net ownership over a short period of time was possible. However, a large increase in net ownership was not necessarily mirrored directly by increased net use. Better targeting of nets to malaria-risk areas and sustained behavioural change communication are needed to increase and maintain net use. PMID:20936103

  6. Classical and quantum stability in putative landscapes

    DOE PAGES

    Dine, Michael

    2017-01-18

    Landscape analyses often assume the existence of large numbers of fields, N, with all of the many couplings among these fields (subject to constraints such as local supersymmetry) selected independently and randomly from simple (say Gaussian) distributions. We point out that unitarity and perturbativity place significant constraints on behavior of couplings with N, eliminating otherwise puzzling results. In would-be flux compactifications of string theory, we point out that in order that there be large numbers of light fields, the compactification radii must scale as a positive power of N; scaling of couplings with N may also be necessary for perturbativity. We show that in some simple string theory settings with large numbers of fields, for fixed R and string coupling, one can bound certain sums of squares of couplings by order one numbers. This may argue for strong correlations, possibly calling into question the assumption of uncorrelated distributions. Finally, we consider implications of these considerations for classical and quantum stability of states without supersymmetry, with low energy supersymmetry arising from tuning of parameters, and with dynamical breaking of supersymmetry.

  7. Pilot study of large-scale production of mutant pigs by ENU mutagenesis

    PubMed Central

    Hai, Tang; Cao, Chunwei; Shang, Haitao; Guo, Weiwei; Mu, Yanshuang; Yang, Shulin; Zhang, Ying; Zheng, Qiantao; Zhang, Tao; Wang, Xianlong; Liu, Yu; Kong, Qingran; Li, Kui; Wang, Dayu; Qi, Meng; Hong, Qianlong; Zhang, Rui; Wang, Xiupeng; Jia, Qitao; Wang, Xiao; Qin, Guosong; Li, Yongshun; Luo, Ailing; Jin, Weiwu; Yao, Jing; Huang, Jiaojiao; Zhang, Hongyong; Li, Menghua; Xie, Xiangmo; Zheng, Xuejuan; Guo, Kenan; Wang, Qinghua; Zhang, Shibin; Li, Liang; Xie, Fei; Zhang, Yu; Weng, Xiaogang; Yin, Zhi; Hu, Kui; Cong, Yimei; Zheng, Peng; Zou, Hailong; Xin, Leilei; Xia, Jihan; Ruan, Jinxue; Li, Hegang; Zhao, Weiming; Yuan, Jing; Liu, Zizhan; Gu, Weiwang; Li, Ming; Wang, Yong; Wang, Hongmei; Yang, Shiming; Liu, Zhonghua; Wei, Hong; Zhao, Jianguo; Zhou, Qi; Meng, Anming

    2017-01-01

    N-ethyl-N-nitrosourea (ENU) mutagenesis is a powerful tool to generate mutants on a large scale efficiently, and to discover genes with novel functions at the whole-genome level in Caenorhabditis elegans, flies, zebrafish and mice, but it has never been tried in large model animals. We describe a successful systematic three-generation ENU mutagenesis screening in pigs with the establishment of the Chinese Swine Mutagenesis Consortium. A total of 6,770 G1 and 6,800 G3 pigs were screened, and 36 dominant and 91 recessive novel pig families with various phenotypes were established. The causative mutations in 10 mutant families were further mapped. As examples, the mutation of SOX10 (R109W) in pig causes inner ear malfunctions and mimics human Mondini dysplasia, and upregulated expression of FBXO32 is associated with congenital splay legs. This study demonstrates the feasibility of artificial random mutagenesis in pigs and opens an avenue for generating a reservoir of mutants for agricultural production and biomedical research. DOI: http://dx.doi.org/10.7554/eLife.26248.001 PMID:28639938

  8. Classical and quantum stability in putative landscapes

    NASA Astrophysics Data System (ADS)

    Dine, Michael

    2017-01-01

    Landscape analyses often assume the existence of large numbers of fields, N , with all of the many couplings among these fields (subject to constraints such as local supersymmetry) selected independently and randomly from simple (say Gaussian) distributions. We point out that unitarity and perturbativity place significant constraints on behavior of couplings with N , eliminating otherwise puzzling results. In would-be flux compactifications of string theory, we point out that in order that there be large numbers of light fields, the compactification radii must scale as a positive power of N ; scaling of couplings with N may also be necessary for perturbativity. We show that in some simple string theory settings with large numbers of fields, for fixed R and string coupling, one can bound certain sums of squares of couplings by order one numbers. This may argue for strong correlations, possibly calling into question the assumption of uncorrelated distributions. We consider implications of these considerations for classical and quantum stability of states without supersymmetry, with low energy supersymmetry arising from tuning of parameters, and with dynamical breaking of supersymmetry.

  9. Development and Evaluation of the Sugar-Sweetened Beverages Media Literacy (SSB-ML) Scale and Its Relationship With SSB Consumption.

    PubMed

    Chen, Yvonnes; Porter, Kathleen J; Estabrooks, Paul A; Zoellner, Jamie

    2017-10-01

    Understanding how adults' media literacy skill sets impact their sugar-sweetened beverage (SSB) intake provides insight into designing effective interventions to enhance their critical analysis of marketing messages and thus improve their healthy beverage choices. However, a media literacy scale focusing on SSBs is lacking. This cross-sectional study uses baseline data from a large randomized controlled trial to (a) describe the psychometric properties of an SSB Media Literacy (SSB-ML) Scale and its subdomains, (b) examine how the scale varies across demographic variables, and (c) explain the scale's concurrent validity to predict SSB consumption. Results from 293 adults in rural southwestern Virginia (81.6% female, 94.0% White, 54.1% receiving SNAP and/or WIC benefits, average 410 SSB kcal daily) show that the overall SSB-ML scale and its subdomains have strong internal consistencies (Cronbach's alphas ranging from 0.65 to 0.83). The Representation & Reality domain significantly predicted SSB kilocalories, after controlling for demographic variables. This study has implications for the assessment and inclusion of context-specific media literacy skills in behavioral interventions.

  10. Does application of moderately concentrated platelet-rich plasma improve clinical and structural outcome after arthroscopic repair of medium-sized to large rotator cuff tear? A randomized controlled trial.

    PubMed

    Pandey, Vivek; Bandi, Atul; Madi, Sandesh; Agarwal, Lipisha; Acharya, Kiran K V; Maddukuri, Satish; Sambhaji, Charudutt; Willems, W Jaap

    2016-08-01

    Platelet-rich plasma (PRP) has the potential to improve tendon-bone healing. The evidence is still controversial as to whether PRP application after repair of medium-sized to large cuff tears leads to superior structural and clinical outcome, especially after single-row repair. In a randomized study, 102 patients (PRP group, 52 patients; control group, 50 patients) with medium-sized and large degenerative posterosuperior tears were included for arthroscopic repair with a minimum follow-up of 2 years. Patients were evaluated with clinical scores (visual analog scale score, Constant-Murley score, University of California-Los Angeles score, and American Shoulder and Elbow Surgeons score) and ultrasound to assess retear and vascularity pattern of the cuff. Visual analog scale scores were significantly lower in the PRP group than in controls at 1 month, 3 months, and 6 months but not later. Constant-Murley scores were significantly better in the PRP group compared with controls at 12 and 24 months, whereas University of California-Los Angeles scores were significantly higher in the PRP group at 6 and 12 months (P < .05). The American Shoulder and Elbow Surgeons score in both groups was comparable at all times. At 24 months, the retear rate in the PRP group (n = 2; 3.8%) was significantly lower than in the control group (n = 10; 20%; P = .01). The retear difference was significant only for large tears (PRP:control group, 1:6; P = .03). Doppler ultrasound examination showed significant vascularity in the PRP group repair site at 3 months postoperatively (P < .05) and in peribursal tissue until 12 months. Application of moderately concentrated PRP improves clinical and structural outcome in large cuff tears. PRP also enhances vascularity around the repair site in the early phase. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  11. Current screening and treatments in retinopathy of prematurity in the US

    PubMed Central

    Suelves, Ana M; Shulman, Julia P

    2016-01-01

    Retinopathy of prematurity (ROP) is a complex disease characterized by aberrant developmental retinal angiogenesis in preterm infants and can carry significant visual morbidity, including retinal detachment and blindness. Though large-scale, randomized clinical trials have improved our understanding of the pathophysiology and progression of the disease, the management of ROP remains a challenge for ophthalmologists. This review addresses the up-to-date screening approach, diagnosis, and treatment guidelines for ROP in the US. PMID:28539800

  12. Current screening and treatments in retinopathy of prematurity in the US.

    PubMed

    Suelves, Ana M; Shulman, Julia P

    2016-01-01

    Retinopathy of prematurity (ROP) is a complex disease characterized by aberrant developmental retinal angiogenesis in preterm infants and can carry significant visual morbidity, including retinal detachment and blindness. Though large-scale, randomized clinical trials have improved our understanding of the pathophysiology and progression of the disease, the management of ROP remains a challenge for ophthalmologists. This review addresses the up-to-date screening approach, diagnosis, and treatment guidelines for ROP in the US.

  13. Statistical Field Estimation and Scale Estimation for Complex Coastal Regions and Archipelagos

    DTIC Science & Technology

    2009-05-01

    instruments applied to MODE-73. Deep-Sea Research, 23:559–582. Brown, R. G. and Hwang, P. Y. C. (1997). Introduction to Random Signals and Applied Kalman Filtering. ...the covariance matrix becomes negative due to numerical issues (Brown and Hwang, 1997). Some useful techniques to counter these divergence problems...equations (Brown and Hwang, 1997). If the number of observations is large, divergence problems can arise under certain conditions due to truncation errors

  14. Spectral Analysis and Computation of Effective Diffusivities for Steady Random Flows

    DTIC Science & Technology

    2016-04-28

    even in the motion of sea ice floes influenced by winds and ocean currents. The long time, large scale behavior of such systems is equivalent to an...flow plays a key role in many important processes in the global climate system [55] and Earth's ecosystems [14]. Advection of geophysical fluids... HOMOGENIZATION OF THE ADVECTION-DIFFUSION EQUATION The dispersion of a cloud of passive scalars with density φ diffusing with molecular diffusivity ε and

  15. Broken Ergodicity in MHD Turbulence in a Spherical Domain

    NASA Technical Reports Server (NTRS)

    Shebalin, John V.; Wang, Yifan

    2011-01-01

    Broken ergodicity (BE) occurs in Fourier method numerical simulations of ideal, homogeneous, incompressible magnetohydrodynamic (MHD) turbulence. Although naive statistical theory predicts that Fourier coefficients of fluid velocity and magnetic field are zero-mean random variables, numerical simulations clearly show that low-wave-number coefficients have non-zero mean values that can be very large compared to the associated standard deviation. In other words, large-scale coherent structure (i.e., broken ergodicity) in homogeneous MHD turbulence can spontaneously grow out of random initial conditions. Eigenanalysis of the modal covariance matrices in the probability density functions of ideal statistical theory leads to a theoretical explanation of observed BE in homogeneous MHD turbulence. Since dissipation is minimal at the largest scales, BE is also relevant for resistive magnetofluids, as evidenced in numerical simulations. Here, we move beyond model magnetofluids confined by periodic boxes to examine BE in rotating magnetofluids in spherical domains using spherical harmonic expansions along with suitable boundary conditions. We present theoretical results for 3-D and 2-D spherical models and also present computational results from dynamical simulations of 2-D MHD turbulence on a rotating spherical surface. MHD turbulence on a 2-D sphere is affected by Coriolis forces, while MHD turbulence on a 2-D plane is not, so that 2-D spherical models are a useful (and simpler) intermediate stage on the path to understanding the much more complex 3-D spherical case.

  16. A New Stochastic Approach to Predict Peak and Residual Shear Strength of Natural Rock Discontinuities

    NASA Astrophysics Data System (ADS)

    Casagrande, D.; Buzzi, O.; Giacomini, A.; Lambert, C.; Fenton, G.

    2018-01-01

    Natural discontinuities are known to play a key role in the stability of rock masses. However, it is a non-trivial task to estimate the shear strength of large discontinuities. Because of the inherent difficulty of accessing the full surface of large in situ discontinuities, researchers or engineers tend to work on small-scale specimens. As a consequence, the results are often plagued by the well-known scale effect. A new approach is proposed here to predict the shear strength of discontinuities. This approach has the potential to avoid the scale effect. The rationale of the approach is as follows: a major parameter that governs the shear strength of a discontinuity within a rock mass is roughness, which can be accounted for by surveying the discontinuity surface. However, this is typically not possible for discontinuities contained within the rock mass, where only traces are visible. For natural surfaces, it can be assumed that traces are, to some extent, representative of the surface. It is proposed here to use the available 2D information (from a visible trace, referred to as a seed trace) and a random field model to create a large number of synthetic surfaces (3D data sets). The shear strength of each synthetic surface can then be estimated using a semi-analytical model. By using a large number of synthetic surfaces and a Monte Carlo strategy, a meaningful shear strength distribution can be obtained. This paper presents the validation of the semi-analytical mechanistic model required to support the new approach for prediction of discontinuity shear strength. The model can predict both peak and residual shear strength. The second part of the paper lays the foundation of a random field model to support the creation of synthetic surfaces having statistical properties in line with those of the data of the seed trace. The paper concludes that it is possible to obtain a reasonable estimate of peak and residual shear strength of the discontinuities tested from the information contained in a single trace, without having access to the whole surface.
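
    A sketch of the surface-synthesis step under stated simplifications (spectral synthesis with a power-law spectrum fitted to the seed trace; the 1-D-to-2-D slope mapping is deliberately crude, and the semi-analytical strength model itself is not reproduced here):

      import numpy as np

      def synthetic_surfaces(trace, size=128, n_surfaces=100, seed=0):
          # Gaussian random-field surfaces whose spectrum mimics the seed trace,
          # assuming the trace is statistically representative of the surface.
          rng = np.random.default_rng(seed)
          x = np.arange(trace.size)
          z = trace - np.polyval(np.polyfit(x, trace, 1), x)     # detrend
          psd = np.abs(np.fft.rfft(z)) ** 2
          k1 = np.arange(1, psd.size)
          slope, _ = np.polyfit(np.log(k1), np.log(psd[1:]), 1)  # power-law fit
          kx = np.fft.fftfreq(size) * size
          k = np.hypot(kx[:, None], kx[None, :]); k[0, 0] = 1.0
          amp = k ** (slope / 2.0)      # amplitude ~ sqrt(power), same slope
          amp[0, 0] = 0.0               # zero-mean surfaces
          out = []
          for _ in range(n_surfaces):
              phase = np.exp(2j * np.pi * rng.random((size, size)))
              surf = np.real(np.fft.ifft2(amp * phase))   # Gaussian-like field
              out.append(surf * z.std() / surf.std())     # match trace roughness
          return out

      trace = np.cumsum(np.random.default_rng(1).standard_normal(512))
      surfaces = synthetic_surfaces(trace)

    Feeding each synthetic surface to a shear-strength model and collecting the results is then a standard Monte Carlo loop, yielding the strength distribution the approach is after.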

  17. Culturally adaptive storytelling method to improve hypertension control in Vietnam - "We talk about our hypertension": study protocol for a feasibility cluster-randomized controlled trial.

    PubMed

    Allison, Jeroan J; Nguyen, Hoa L; Ha, Duc A; Chiriboga, Germán; Ly, Ha N; Tran, Hanh T; Phan, Ngoc T; Vu, Nguyen C; Kim, Minjin; Goldberg, Robert J

    2016-01-14

    Vietnam is experiencing an epidemiologic transition with an increased prevalence of non-communicable diseases. At present, the major risk factors for cardiovascular disease (CVD) are either on the rise or at alarming levels in Vietnam; as such, the burden of CVD will continue to increase in this country unless effective prevention and control measures are put in place. A national survey in 2008 found that the prevalence of hypertension (HTN) was approximately 25% among Vietnamese adults and that it increased with advancing age. Therefore, novel, large-scale, and sustainable interventions for public health education to promote engagement in the process of detecting and treating HTN in Vietnam are urgently needed. A feasibility randomized trial will be conducted in Hung Yen province, Vietnam, to evaluate the feasibility and acceptability of a novel community-based intervention using the "storytelling" method to enhance the control of HTN in adults residing in four rural communities. The intervention will center on stories about living with HTN, with patients speaking in their own words. The stories will be obtained from particularly eloquent patients, or "video stars," identified during Story Development Groups. The study will involve two phases: (i) developing a HTN intervention using the storytelling method, which is designed to empower patients to facilitate changes in their lifestyle practices, and (ii) conducting a feasibility cluster-randomized trial to investigate the feasibility, acceptability, and potential efficacy of the intervention compared with usual care in HTN control among rural residents. The trial will be conducted at four communes, and within each commune, 25 individuals 50 years or older with HTN will be enrolled in the trial, resulting in a total sample size of 100 patients. This feasibility trial will provide the necessary groundwork for a subsequent large-scale, fully powered, cluster-randomized controlled trial to test the efficacy of our novel community-based intervention. Results from the full-scale trial will provide health policy makers with practical evidence on how to combat a key risk factor for CVD using a feasible, sustainable, and cost-effective intervention that could be used as a national program for controlling HTN in Vietnam and other developing countries. ClinicalTrials.gov. https://clinicaltrials.gov/ct2/show/NCT02483780 (registration date June 22, 2015).

  18. A Memory-Based Programmable Logic Device Using Look-Up Table Cascade with Synchronous Static Random Access Memories

    NASA Astrophysics Data System (ADS)

    Nakamura, Kazuyuki; Sasao, Tsutomu; Matsuura, Munehiro; Tanaka, Katsumasa; Yoshizumi, Kenichi; Nakahara, Hiroki; Iguchi, Yukihiro

    2006-04-01

    A large-scale memory-technology-based programmable logic device (PLD) using a look-up table (LUT) cascade is developed in the 0.35-μm standard complementary metal oxide semiconductor (CMOS) logic process. Eight 64 K-bit synchronous SRAMs are connected to form an LUT cascade with a few additional circuits. The features of the LUT cascade include: 1) a flexible cascade connection structure, 2) multiphase pseudo-asynchronous operations with synchronous static random access memory (SRAM) cores, and 3) LUT-bypass redundancy. This chip operates at 33 MHz in 8-LUT cascades at 122 mW. Benchmark results show that it achieves performance comparable to that of field-programmable gate arrays (FPGAs).
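
    Functionally, a LUT cascade evaluates a chain of memories in which each stage is addressed by the previous stage's rail bits concatenated with fresh primary inputs; a toy software model (table contents random, widths illustrative, not the chip's actual configuration):

      import random

      def eval_lut_cascade(luts, rail_width, chunks):
          # luts: one flat table (list of ints) per stage; chunks: per-stage
          # fresh inputs as (value, bit_width). Rails of stage i feed stage i+1.
          rails = 0
          for lut, (value, width) in zip(luts, chunks):
              address = (rails << width) | value
              rails = lut[address] % (1 << rail_width)
          return rails

      random.seed(0)
      # Two stages, 2 rail bits, 2 fresh input bits each: 2**(2+2) entries/stage.
      luts = [[random.randrange(4) for _ in range(16)] for _ in range(2)]
      print(eval_lut_cascade(luts, rail_width=2, chunks=[(0b10, 2), (0b01, 2)]))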

  19. Exactly solvable random graph ensemble with extensively many short cycles

    NASA Astrophysics Data System (ADS)

    Aguirre López, Fabián; Barucca, Paolo; Fekom, Mathilde; Coolen, Anthony C. C.

    2018-02-01

    We introduce and analyse ensembles of 2-regular random graphs with a tuneable distribution of short cycles. The phenomenology of these graphs depends critically on the scaling of the ensembles’ control parameters relative to the number of nodes. A phase diagram is presented, showing a second order phase transition from a connected to a disconnected phase. We study both the canonical formulation, where the size is large but fixed, and the grand canonical formulation, where the size is sampled from a discrete distribution, and show their equivalence in the thermodynamic limit. We also compute analytically the spectral density, which consists of a discrete set of isolated eigenvalues, representing short cycles, and a continuous part, representing cycles of diverging size.
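
    Because a 2-regular graph is a disjoint union of cycles, its adjacency spectrum can be assembled directly from cycle spectra; a crude stand-in for the tuneable ensemble (a planted fraction of triangles plus one extensive cycle) already exhibits the discrete-plus-continuous structure described above:

      import numpy as np

      def spectrum_2regular(n_nodes, frac_triangles=0.5):
          # Eigenvalues of a cycle of length L are 2*cos(2*pi*k/L), k=0..L-1.
          # A fraction of nodes sits in triangles (extensively many short
          # cycles); the remaining nodes form one extensive cycle.
          n_tri = int(frac_triangles * n_nodes) // 3
          eigs = list(np.tile(2 * np.cos(2 * np.pi * np.arange(3) / 3), n_tri))
          L = n_nodes - 3 * n_tri
          if L >= 3:
              eigs.extend(2 * np.cos(2 * np.pi * np.arange(L) / L))
          return np.asarray(eigs)

      eigs = spectrum_2regular(3000)
      # Delta peaks at 2 and -1 (triangles) ride on the continuous density of
      # the long cycle: isolated eigenvalues from short cycles plus a continuum.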

  20. Variations of characteristic time scales in rotating stratified turbulence using a large parametric numerical study.

    PubMed

    Rosenberg, D; Marino, R; Herbert, C; Pouquet, A

    2016-01-01

    We study rotating stratified turbulence (RST) making use of numerical data stemming from a large parametric study varying the Reynolds, Froude and Rossby numbers, Re, Fr and Ro, in a broad range of values. The computations are performed using periodic boundary conditions on grids of 1024^3 points, with no modeling of the small scales, no forcing and with large-scale random initial conditions for the velocity field only; altogether 65 runs are analyzed in this paper. The buoyancy Reynolds number, defined as R_B = Re Fr^2, varies from negligible values to ≈ 10^5, approaching atmospheric or oceanic regimes. This preliminary analysis deals with the variation of characteristic time scales of RST with dimensionless parameters, focusing on the role played by the partition of energy between the kinetic and potential modes, as a key ingredient for modeling the dynamics of such flows. We find that neither rotation nor the ratio of the Brunt-Väisälä frequency to the inertial frequency seems to play a major role, in the absence of forcing, in the global dynamics of the small-scale kinetic and potential modes. Specifically, in these computations, mostly in regimes of wave turbulence, characteristic times based on the ratio of energy to dissipation of the velocity and temperature fluctuations, T_V and T_P, vary substantially with parameters. Their ratio γ = T_V/T_P follows roughly a bell-shaped curve in terms of the Richardson number Ri. It reaches a plateau - on which time scales become comparable, γ ≈ 0.6 - when the turbulence has significantly strengthened, leading to numerous destabilization events together with a tendency towards an isotropization of the flow.

  1. Detecting Random, Partially Random, and Nonrandom Minnesota Multiphasic Personality Inventory-2 Protocols

    ERIC Educational Resources Information Center

    Pinsoneault, Terry B.

    2007-01-01

    The ability of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; J. N. Butcher et al., 2001) validity scales to detect random, partially random, and nonrandom MMPI-2 protocols was investigated. Investigations included the Variable Response Inconsistency scale (VRIN), F, several potentially useful new F and VRIN subscales, and F_b - F…

  2. Spontaneous symmetry breaking, conformal anomaly and incompressible fluid turbulence

    NASA Astrophysics Data System (ADS)

    Oz, Yaron

    2017-11-01

    We propose an effective conformal field theory (CFT) description of steady state incompressible fluid turbulence at the inertial range of scales in any number of spatial dimensions. We derive a KPZ-type equation for the anomalous scaling of the longitudinal velocity structure functions and relate the intermittency parameter to the boundary Euler (A-type) conformal anomaly coefficient. The proposed theory consists of a mean field CFT that exhibits Kolmogorov linear scaling (K41 theory) coupled to a dilaton. The dilaton is a Nambu-Goldstone gapless mode that arises from a spontaneous breaking due to the energy flux of the separate scale and time symmetries of the inviscid Navier-Stokes equations to a K41 scaling with a dynamical exponent z=2/3 . The dilaton acts as a random measure that dresses the K41 theory and introduces intermittency. We discuss the two, three and large number of space dimensions cases and how entanglement entropy can be used to characterize the intermittency strength.

  3. The Buildup of a Scale-free Photospheric Magnetic Network

    NASA Astrophysics Data System (ADS)

    Thibault, K.; Charbonneau, P.; Crouch, A. D.

    2012-10-01

    We use a global Monte Carlo simulation of the formation of the solar photospheric magnetic network to investigate the origin of the scale invariance characterizing magnetic flux concentrations visible on high-resolution magnetograms. The simulations include spatially and temporally homogeneous injection of small-scale magnetic elements over the whole photosphere, as well as localized episodic injection associated with the emergence and decay of active regions. Network elements form in response to cumulative pairwise aggregation or cancellation of magnetic elements, undergoing a random walk on the sphere and advected on large spatial scales by differential rotation and a poleward meridional flow. The resulting size distribution of simulated network elements is in very good agreement with observational inferences. We find that the fractal index and size distribution of network elements are determined primarily by these post-emergence surface mechanisms, and carry little or no memory of the scales at which magnetic flux is injected in the simulation. Implications for models of dynamo action in the Sun are briefly discussed.
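
    A toy version of the surface mechanisms the simulation isolates (injection, random walk, merging, and cancellation; the grid size and rates are arbitrary choices rather than the paper's calibration):

      import numpy as np
      from collections import defaultdict

      rng = np.random.default_rng(2)

      def step(elements, inject=10, size=200):
          # Inject small bipolar elements, random-walk everything one step, then
          # sum fluxes per cell: like signs merge, opposite signs cancel.
          for k in range(2 * inject):
              pos = tuple(rng.integers(size, size=2))
              elements.append([pos, 1 if k % 2 == 0 else -1])
          for e in elements:
              x, y = e[0]
              dx, dy = rng.integers(-1, 2, size=2)
              e[0] = ((x + dx) % size, (y + dy) % size)
          cells = defaultdict(int)
          for pos, flux in elements:
              cells[pos] += flux
          return [[pos, f] for pos, f in cells.items() if f != 0]

      elements = []
      for _ in range(1500):
          elements = step(elements)
      sizes = np.abs([f for _, f in elements])
      # The histogram of |flux| approximates the size distribution of network
      # elements built up purely by post-emergence surface mechanisms.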

  4. Multiscale structure of time series revealed by the monotony spectrum.

    PubMed

    Vamoş, Călin

    2017-03-01

    Observation of complex systems produces time series with specific dynamics at different time scales. The majority of the existing numerical methods for multiscale analysis first decompose the time series into several simpler components and the multiscale structure is given by the properties of their components. We present a numerical method which describes the multiscale structure of arbitrary time series without decomposing them. It is based on the monotony spectrum defined as the variation of the mean amplitude of the monotonic segments with respect to the mean local time scale during successive averagings of the time series, the local time scales being the durations of the monotonic segments. The maxima of the monotony spectrum indicate the time scales which dominate the variations of the time series. We show that the monotony spectrum can correctly analyze a diversity of artificial time series and can discriminate the existence of deterministic variations at large time scales from the random fluctuations. As an application we analyze the multifractal structure of some hydrological time series.
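
    A compact reading of the definition in code (simplified: moving averages stand in for the successive averagings, and turning points are taken from sign changes of the increments):

      import numpy as np

      def monotonic_segments(x):
          # Turning points split the series into maximal monotonic segments;
          # return their durations (local time scales) and amplitudes.
          d = np.sign(np.diff(x))
          breaks, last = [0], 0
          for i, s in enumerate(d):
              if s != 0 and last != 0 and s == -last:
                  breaks.append(i)
              if s != 0:
                  last = s
          breaks.append(len(x) - 1)
          return np.diff(breaks), np.abs(np.diff(x[breaks]))

      def monotony_spectrum(x, max_window=200, n_scales=30):
          # Mean segment amplitude versus mean local time scale under
          # successively wider moving averages of the series.
          windows = np.unique(np.logspace(0, np.log10(max_window),
                                          n_scales).astype(int))
          scales, amps = [], []
          for w in windows:
              xs = np.convolve(x, np.ones(w) / w, mode="valid")
              durs, a = monotonic_segments(xs)
              if durs.size:
                  scales.append(durs.mean()); amps.append(a.mean())
          return np.array(scales), np.array(amps)

      rng = np.random.default_rng(0)
      t = np.linspace(0, 20, 4000)
      x = np.sin(2 * np.pi * t / 5) + 0.3 * rng.standard_normal(t.size)
      ts, amp = monotony_spectrum(x)   # maxima flag the dominant time scales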

  5. Effects of mindfulness meditation on chronic pain: a randomized controlled trial.

    PubMed

    la Cour, Peter; Petersen, Marian

    2015-04-01

    This randomized controlled clinical trial investigated the effects of mindfulness meditation on chronic pain. A total of 109 patients with nonspecific chronic pain were randomized to either a standardized mindfulness meditation program (mindfulness-based stress reduction [MBSR]) or to a wait list control. Pain, physical function, mental function, pain acceptance, and health-related quality of life were measured. The SF36 vitality scale was chosen as the primary outcome measure; the primary end point was after completing the MBSR course. Within a 2.5-year period, 43 of the 109 randomized patients completed the mindfulness program, while 47 remained in the control group. Data were compared at three time points: at baseline, after completion of the course/waiting period, and at the 6-month follow-up. Significant effect (Cohen's d = 0.39) was found on the primary outcome measure, the SF36 vitality scale. On the secondary variables, significant medium to large size effects (Cohen's d = 0.37-0.71) were found for lower general anxiety and depression, better mental quality of life (psychological well-being), feeling in control of the pain, and higher pain acceptance. Small (nonsignificant) effect sizes were found for pain measures. There were no significant differences in the measures just after the intervention vs the 6-month follow-up. A standardized mindfulness program (MBSR) contributes positively to pain management and can exert clinically relevant effects on several important dimensions in patients with long-lasting chronic pain. © 2014 American Academy of Pain Medicine.

  6. Do e-mail alerts of new research increase knowledge translation? A "Nephrology Now" randomized control trial.

    PubMed

    Tanna, Gemini V; Sood, Manish M; Schiff, Jeffrey; Schwartz, Daniel; Naimark, David M

    2011-01-01

    As the volume of medical literature increases exponentially, maintaining current clinical practice is becoming more difficult. Multiple, Internet-based journal clubs and alert services have recently emerged. The purpose of this study is to determine whether the use of the e-mail alert service, Nephrology Now, increases knowledge translation regarding current nephrology literature. Nephrology Now is a nonprofit, monthly e-mail alert service that highlights clinically relevant articles in nephrology. In 2007-2008, the authors randomized 1,683 subscribers into two different groups receiving select intervention articles, and then they used an online survey to assess both groups on their familiarity with the articles and their acquisition of knowledge. Of the randomized subscribers, 803 (47.7%) completed surveys, and the two groups had a similar number of responses (401 and 402, respectively). The authors noted no differences in baseline characteristics between the two groups. Familiarity increased as a result of the Nephrology Now alerts (0.23 ± 0.087 units on a familiarity scale; 95% confidence interval [CI]: 0.06-0.41; P = .007) especially in physicians (multivariate odds ratio 1.83; P = .0002). No detectable improvement in knowledge occurred (0.03 ± 0.083 units on a knowledge scale; 95% CI: -0.13 to 0.20; P = .687). An e-mail alert service of new literature improved a component of knowledge translation--familiarity--but not knowledge acquisition in a large, randomized, international population.

  7. Evidence for a global seismic-moment release sequence

    USGS Publications Warehouse

    Bufe, C.G.; Perkins, D.M.

    2005-01-01

    Temporal clustering of the larger earthquakes (foreshock-mainshock-aftershock) followed by relative quiescence (stress shadow) is characteristic of seismic cycles along plate boundaries. A global seismic-moment release history, based on a little more than 100 years of instrumental earthquake data in an extended version of the catalog of Pacheco and Sykes (1992), illustrates similar behavior for Earth as a whole. Although the largest earthquakes have occurred in the circum-Pacific region, an analysis of moment release in the hemisphere antipodal to the Pacific plate shows a very similar pattern. Monte Carlo simulations confirm that the global temporal clustering of great shallow earthquakes during 1952-1964 at M ≥ 9.0 is highly significant (4% random probability), as is the clustering of the events of M ≥ 8.6 (0.2% random probability) during 1950-1965. We have extended the Pacheco and Sykes (1992) catalog from 1989 through 2001 using Harvard moment centroid data. Immediately after the 1950-1965 cluster, significant quiescence at and above M 8.4 begins and continues until 2001 (0.5% random probability). In alternative catalogs derived by correcting for possible random errors in magnitude estimates in the extended Pacheco-Sykes catalog, the clustering of M ≥ 9 persists at a significant level. These observations indicate that, for great earthquakes, Earth behaves as a coherent seismotectonic system. A very-large-scale mechanism for global earthquake triggering and/or stress transfer is implied. There are several candidates, but so far only viscoelastic relaxation has been modeled on a global scale.
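
    The quoted significance figures come from Monte Carlo tests of this kind; a sketch with illustrative numbers (not the catalog's actual counts):

      import numpy as np

      def cluster_probability(n_events, span_years, window_years, observed_max,
                              n_trials=20000, seed=0):
          # Probability that n_events placed uniformly at random over the span
          # put >= observed_max events inside some window of the given width.
          rng = np.random.default_rng(seed)
          hits = 0
          for _ in range(n_trials):
              times = np.sort(rng.uniform(0.0, span_years, n_events))
              j, best = 0, 0
              for i in range(n_events):       # two-pointer sliding window
                  while times[i] - times[j] > window_years:
                      j += 1
                  best = max(best, i - j + 1)
              hits += best >= observed_max
          return hits / n_trials

      # Hypothetical: 7 great events in ~100 yr, 5 of them within a 15 yr window.
      print(cluster_probability(7, 100.0, 15.0, 5))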

  8. Complexity Characteristics of Currency Networks

    NASA Astrophysics Data System (ADS)

    Gorski, A. Z.; Drozdz, S.; Kwapien, J.; Oswiecimka, P.

    2006-11-01

    A large set of daily FOREX time series is analyzed. The corresponding correlation matrices (CM) are constructed for USD, EUR and PLN used as the base currencies. The triangle rule is interpreted as constraints reducing the number of independent returns. The CM spectrum is computed and compared with the cases of shuffled currencies and a fictitious random currency taken as a base currency. The Minimal Spanning Tree (MST) graphs are calculated and the clustering effects for strong currencies are found. It is shown that for MSTs the node rank distribution exhibits power-law, scale-free behavior. Finally, the scaling exponents are evaluated and found in the range analogous to those identified recently for various complex networks.
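
    A sketch of the MST construction used in such studies, with the standard correlation-to-distance metric d = sqrt(2(1 - rho)) (random data stand in here for the FOREX return series):

      import numpy as np
      from scipy.sparse.csgraph import minimum_spanning_tree

      def currency_mst(returns, names):
          # returns: array (n_currencies, n_days) of daily log-returns.
          rho = np.corrcoef(returns)
          d = np.sqrt(2.0 * (1.0 - rho))
          np.fill_diagonal(d, 0.0)
          mst = minimum_spanning_tree(d).tocoo()
          edges = [(names[i], names[j], w)
                   for i, j, w in zip(mst.row, mst.col, mst.data)]
          degree = np.bincount(np.concatenate([mst.row, mst.col]),
                               minlength=len(names))
          return edges, degree   # the rank-ordered degrees probe scale-freeness

      rng = np.random.default_rng(0)
      edges, deg = currency_mst(rng.standard_normal((8, 500)),
                                [f"C{i}" for i in range(8)])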

  9. Effects of diversity on multiagent systems: Minority games

    NASA Astrophysics Data System (ADS)

    Wong, K. Y. Michael; Lim, S. W.; Gao, Zhuo

    2005-06-01

    We consider a version of large population games whose agents compete for resources using strategies with adaptable preferences. The games can be used to model economic markets, ecosystems, or distributed control. Diversity of initial preferences of strategies is introduced by randomly assigning biases to the strategies of different agents. We find that diversity among the agents reduces their maladaptive behavior. We find interesting scaling relations with diversity for the variance and other parameters such as the convergence time, the fraction of fickle agents, and the variance of wealth, illustrating their dynamical origin. When diversity increases, the scaling dynamics is modified by kinetic sampling and waiting effects. Analyses yield excellent agreement with simulations.
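
    A minimal minority-game simulation in which diversity enters as random initial biases on the strategy scores; the variance of the attendance per agent is the usual measure of maladaptation (all parameter values are illustrative):

      import numpy as np

      def minority_game(n_agents=101, n_strats=2, mem=3, t_max=5000,
                        diversity=0.0, seed=0):
          rng = np.random.default_rng(seed)
          n_hist = 2 ** mem
          # Each strategy maps every possible m-bit history to an action +/-1.
          strat = rng.choice([-1, 1], size=(n_agents, n_strats, n_hist))
          scores = diversity * rng.standard_normal((n_agents, n_strats))
          history = int(rng.integers(n_hist))
          A = np.zeros(t_max)
          for t in range(t_max):
              best = scores.argmax(axis=1)        # currently preferred strategy
              actions = strat[np.arange(n_agents), best, history]
              A[t] = actions.sum()
              # Virtual payoff: strategies on the minority side gain score.
              scores -= strat[:, :, history] * A[t] / n_agents
              history = ((history << 1) | int(A[t] > 0)) % n_hist
          return A.var() / n_agents               # maladaptation, sigma^2/N

      for d in (0.0, 1.0, 10.0):                  # diversity typically reduces
          print(d, minority_game(diversity=d))    # the variance, as claimed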

  10. The genus curve of the Abell clusters

    NASA Technical Reports Server (NTRS)

    Rhoads, James E.; Gott, J. Richard, III; Postman, Marc

    1994-01-01

    We study the topology of large-scale structure through a genus curve measurement of the recent Abell catalog redshift survey of Postman, Huchra, and Geller (1992). The structure is found to be spongelike near median density and to exhibit isolated superclusters and voids at high and low densities, respectively. The genus curve shows a slight shift toward 'meatball' topology, but remains consistent with the hypothesis of Gaussian random phase initial conditions. The amplitude of the genus curve corresponds to a power-law spectrum with index n = 0.21 (+0.43, -0.47) on scales of 48/h Mpc or to a cold dark matter power spectrum with omega h = 0.36 (+0.46, -0.17).

  11. The genus curve of the Abell clusters

    NASA Astrophysics Data System (ADS)

    Rhoads, James E.; Gott, J. Richard, III; Postman, Marc

    1994-01-01

    We study the topology of large-scale structure through a genus curve measurement of the recent Abell catalog redshift survey of Postman, Huchra, and Geller (1992). The structure is found to be spongelike near median density and to exhibit isolated superclusters and voids at high and low densities, respectively. The genus curve shows a slight shift toward 'meatball' topology, but remains consistent with the hypothesis of Gaussian random phase initial conditions. The amplitude of the genus curve corresponds to a power-law spectrum with index n = 0.21 (+0.43, -0.47) on scales of 48/h Mpc or to a cold dark matter power spectrum with omega h = 0.36 (+0.46, -0.17).

  12. Numerical simulation of small-scale thermal convection in the atmosphere

    NASA Technical Reports Server (NTRS)

    Somerville, R. C. J.

    1973-01-01

    A Boussinesq system is integrated numerically in three dimensions and time in a study of nonhydrostatic convection in the atmosphere. Simulation of cloud convection is achieved by the inclusion of parametrized effects of latent heat and small-scale turbulence. The results are compared with the cell structure observed in Rayleigh-Benard laboratory convection experiments in air. At a Rayleigh number of 4000, the numerical model adequately simulates the experimentally observed evolution, including some prominent transients, of a flow from a randomly perturbed initial conductive state into the final state of steady large-amplitude two-dimensional rolls. At Rayleigh number 9000, the model reproduces the experimentally observed unsteady equilibrium of vertically coherent oscillatory waves superimposed on rolls.

  13. Reducing Errors in Satellite Simulated Views of Clouds with an Improved Parameterization of Unresolved Scales

    NASA Astrophysics Data System (ADS)

    Hillman, B. R.; Marchand, R.; Ackerman, T. P.

    2016-12-01

    Satellite instrument simulators have emerged as a means to reduce errors in model evaluation by producing simulated or pseudo-retrievals from model fields, which account for limitations in the satellite retrieval process. Because of the mismatch in resolved scales between satellite retrievals and large-scale models, model cloud fields must first be downscaled to scales consistent with satellite retrievals. This downscaling is analogous to that required for model radiative transfer calculations. The assumption is often made in both model radiative transfer codes and satellite simulators that the unresolved clouds follow maximum-random overlap with horizontally homogeneous cloud condensate amounts. We examine errors in simulated MISR and CloudSat retrievals that arise due to these assumptions by applying the MISR and CloudSat simulators to cloud resolving model (CRM) output generated by the Super-parameterized Community Atmosphere Model (SP-CAM). Errors are quantified by comparing simulated retrievals performed directly on the CRM fields with those simulated by first averaging the CRM fields to approximately 2-degree resolution, applying a "subcolumn generator" to regenerate pseudo-resolved cloud and precipitation condensate fields, and then applying the MISR and CloudSat simulators on the regenerated condensate fields. We show that errors due to both assumptions of maximum-random overlap and homogeneous condensate are significant (relative to uncertainties in the observations and other simulator limitations). The treatment of precipitation is particularly problematic for CloudSat-simulated radar reflectivity. We introduce an improved subcolumn generator for use with the simulators, and show that these errors can be greatly reduced by replacing the maximum-random overlap assumption with the more realistic generalized overlap and incorporating a simple parameterization of subgrid-scale cloud and precipitation condensate heterogeneity. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND NO. SAND2016-7485 A
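
    One common formulation of a maximum-random subcolumn generator is the rank recursion below (in the style of Raisanen et al.; treated here as an assumed form, not necessarily this study's exact generator):

      import numpy as np

      def subcolumns_max_random(c, n_sub=1000, seed=0):
          # Binary cloud subcolumns from layer cloud fractions c[0..K-1], top to
          # bottom: contiguous cloudy layers align maximally; blocks separated
          # by clear layers overlap randomly.
          rng = np.random.default_rng(seed)
          K = c.size
          R = rng.random(n_sub)                   # rank variable, top layer
          cloudy = np.zeros((n_sub, K), dtype=bool)
          cloudy[:, 0] = R <= c[0]
          for k in range(1, K):
              was_cloudy = R <= c[k - 1]
              # Keep the rank where the layer above is cloudy (maximum overlap);
              # redraw above the cloudy part where it is clear (random overlap).
              R = np.where(was_cloudy, R,
                           c[k - 1] + (1.0 - c[k - 1]) * rng.random(n_sub))
              cloudy[:, k] = R <= c[k]
          return cloudy

      profile = np.array([0.0, 0.3, 0.5, 0.0, 0.2, 0.4])
      cols = subcolumns_max_random(profile)
      print(cols.mean(axis=0))   # recovers the input layer cloud fractions

    Replacing the overlap rule with generalized overlap and adding per-subcolumn condensate variability are the two changes the improved generator described above makes relative to this baseline.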

  14. Scaling and percolation in the small-world network model

    NASA Astrophysics Data System (ADS)

    Newman, M. E. J.; Watts, D. J.

    1999-12-01

    In this paper we study the small-world network model of Watts and Strogatz, which mimics some aspects of the structure of networks of social interactions. We argue that there is one nontrivial length-scale in the model, analogous to the correlation length in other systems, which is well-defined in the limit of infinite system size and which diverges continuously as the randomness in the network tends to zero, giving a normal critical point in this limit. This length-scale governs the crossover from large- to small-world behavior in the model, as well as the number of vertices in a neighborhood of given radius on the network. We derive the value of the single critical exponent controlling behavior in the critical region and the finite size scaling form for the average vertex-vertex distance on the network, and, using series expansion and Padé approximants, find an approximate analytic form for the scaling function. We calculate the effective dimension of small-world graphs and show that this dimension varies as a function of the length-scale on which it is measured, in a manner reminiscent of multifractals. We also study the problem of site percolation on small-world networks as a simple model of disease propagation, and derive an approximate expression for the percolation probability at which a giant component of connected vertices first forms (in epidemiological terms, the point at which an epidemic occurs). The typical cluster radius satisfies the expected finite size scaling form with a cluster size exponent close to that for a random graph. All our analytic results are confirmed by extensive numerical simulations of the model.
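
    As an illustration of the site-percolation setup studied here, the sketch below builds a Newman-Watts-style small-world graph (a ring lattice plus random shortcuts, added rather than rewired) and measures the largest occupied cluster at a given site-occupation probability; all parameter values are placeholders:

        import random
        from collections import defaultdict

        def newman_watts_graph(n, k, phi, seed=0):
            """Ring lattice of n vertices, each linked to its k nearest
            neighbours on either side, plus shortcuts added with probability
            phi per lattice edge (added, not rewired)."""
            rng = random.Random(seed)
            adj = defaultdict(set)
            for v in range(n):
                for j in range(1, k + 1):
                    adj[v].add((v + j) % n)
                    adj[(v + j) % n].add(v)
                    if rng.random() < phi:          # random shortcut
                        w = rng.randrange(n)
                        if w != v:
                            adj[v].add(w)
                            adj[w].add(v)
            return adj

        def giant_component_fraction(adj, n, p, seed=1):
            """Site percolation: occupy each vertex with probability p and
            return the fraction of vertices in the largest occupied cluster."""
            rng = random.Random(seed)
            occupied = [rng.random() < p for _ in range(n)]
            seen, best = [False] * n, 0
            for s in range(n):
                if occupied[s] and not seen[s]:
                    stack, size = [s], 0
                    seen[s] = True
                    while stack:                     # depth-first flood fill
                        v = stack.pop()
                        size += 1
                        for w in adj[v]:
                            if occupied[w] and not seen[w]:
                                seen[w] = True
                                stack.append(w)
                    best = max(best, size)
            return best / n

        # e.g. giant_component_fraction(newman_watts_graph(2000, 2, 0.05), 2000, 0.7)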

  15. The effect of India's total sanitation campaign on defecation behaviors and child health in rural Madhya Pradesh: a cluster randomized controlled trial.

    PubMed

    Patil, Sumeet R; Arnold, Benjamin F; Salvatore, Alicia L; Briceno, Bertha; Ganguly, Sandipan; Colford, John M; Gertler, Paul J

    2014-08-01

    Poor sanitation is thought to be a major cause of enteric infections among young children. However, there are no previously published randomized trials to measure the health impacts of large-scale sanitation programs. India's Total Sanitation Campaign (TSC) is one such program that seeks to end the practice of open defecation by changing social norms and behaviors, and by providing technical support and financial subsidies. The objective of this study was to measure the effect of the TSC, implemented with capacity-building support from the World Bank's Water and Sanitation Program in Madhya Pradesh, on the availability of individual household latrines (IHLs), defecation behaviors, and child health (diarrhea, highly credible gastrointestinal illness [HCGI], parasitic infections, anemia, growth). We conducted a cluster-randomized, controlled trial in 80 rural villages. Field staff collected baseline measures of sanitation conditions, behaviors, and child health (May-July 2009), and revisited households 21 months later (February-April 2011), after the program was delivered. The study enrolled a random sample of 5,209 children <5 years old from 3,039 households that had at least one child <24 months old at the beginning of the study. A random subsample of 1,150 children <24 months old at enrollment was tested for soil-transmitted helminth and protozoan infections in stool. The randomization successfully balanced the intervention and control groups, and we estimated differences between groups in an intention-to-treat analysis. The intervention increased the percentage of households in a village with improved sanitation facilities, as defined by the WHO/UNICEF Joint Monitoring Programme, by an average of 19% (95% CI for difference: 12%-26%; group means: 22% control versus 41% intervention) and decreased open defecation among adults by an average of 10% (95% CI for difference: 4%-15%; group means: 73% intervention versus 84% control). However, the intervention did not improve child health as measured by multiple outcomes (diarrhea, HCGI, helminth infections, anemia, growth). Limitations of the study included a relatively short follow-up period after implementation, evidence of contamination in ten of the 40 control villages, and possible bias in self-reported outcomes for diarrhea, HCGI, and open defecation behaviors. The intervention led to modest increases in the availability of IHLs and even more modest reductions in open defecation. These improvements were insufficient to improve child health outcomes (diarrhea, HCGI, parasite infection, anemia, growth). The results underscore the difficulty of achieving improvements in sanitation large enough to deliver the expected health benefits within large-scale rural sanitation programs. ClinicalTrials.gov NCT01465204.
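
    Because the village is the unit of randomization here, the intention-to-treat contrasts can be illustrated at the cluster level. A minimal sketch of a difference in arm means over village-level proportions with a normal-approximation interval (our simplification for illustration; the published analysis may differ, e.g. regression with cluster-robust standard errors):

        import numpy as np

        def cluster_diff_ci(control_means, treat_means, z=1.96):
            """Difference in arm means computed over cluster (village) level
            proportions, with a normal-approximation 95% interval.  Using the
            cluster as the unit of analysis respects the unit of randomization."""
            c = np.asarray(control_means, float)
            t = np.asarray(treat_means, float)
            diff = t.mean() - c.mean()
            se = np.sqrt(c.var(ddof=1) / len(c) + t.var(ddof=1) / len(t))
            return diff, (diff - z * se, diff + z * se)

        # e.g. village-level shares of households with improved sanitation
        # (placeholder numbers, not the trial's data):
        diff, ci = cluster_diff_ci([0.20, 0.25, 0.18], [0.40, 0.45, 0.38])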

  16. Distribution of velocities and acceleration for a particle in Brownian correlated disorder: Inertial case

    NASA Astrophysics Data System (ADS)

    Le Doussal, Pierre; Petković, Aleksandra; Wiese, Kay Jörg

    2012-06-01

    We study the motion of an elastic object driven through a disordered environment in the presence of both dissipation and inertia. We consider random forces with the statistics of random walks and reduce the problem to a single degree of freedom. It is the extension of the mean-field Alessandro-Beatrice-Bertotti-Montorsi (ABBM) model in the presence of an inertial mass m. While the ABBM model can be solved exactly, its extension with inertia exhibits complicated history dependence due to oscillations and backward motion. The characteristic scales for avalanche motion are studied using numerics and qualitative arguments. To make analytical progress, we consider two variants which coincide with the original model whenever the particle moves only forward. Using a combination of analytical and numerical methods together with simulations, we characterize the distributions of instantaneous acceleration and velocity, and compare them across these three models. We show that for large driving velocity, all three models share the same large-deviation function for positive velocities, which is obtained analytically for small and large m, as well as for m=6/25. The effect of small additional thermal and quantum fluctuations can be treated within an approximate method.
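
    A direct way to see the history dependence described here is to integrate the model numerically. The sketch below is a simple Euler scheme for a particle of mass m dragged at velocity w through a Brownian force landscape (all parameter values are illustrative, and the nearest-grid-point disorder lookup is our simplification):

        import numpy as np

        def simulate_abbm_inertia(m=1.0, eta=1.0, k=0.01, w=1.0,
                                  t_max=200.0, dt=1e-3, du=1e-3, seed=0):
            """Euler integration of the ABBM model with an inertial mass m:

                m dv/dt = -eta * v + F(u) + k * (w * t - u),   du/dt = v

            The pinning force F(u) has random-walk statistics in u; it is
            tabulated on a grid so that backward motion revisits the same
            disorder, which is the source of the history dependence."""
            rng = np.random.default_rng(seed)
            n_u = int(2 * w * t_max / du)              # generous disorder span
            F = np.cumsum(rng.normal(0.0, np.sqrt(du), n_u))
            u, v = 0.0, 0.0
            n_t = int(t_max / dt)
            vs = np.empty(n_t)
            for i in range(n_t):
                j = min(max(int(u / du), 0), n_u - 1)  # nearest disorder point
                a = (-eta * v + F[j] + k * (w * i * dt - u)) / m
                v += a * dt
                u += v * dt
                vs[i] = v
            return vs  # instantaneous velocities, e.g. for histogramming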

  17. Selection of core animals in the Algorithm for Proven and Young using a simulation model.

    PubMed

    Bradford, H L; Pocrnić, I; Fragomeni, B O; Lourenco, D A L; Misztal, I

    2017-12-01

    The Algorithm for Proven and Young (APY) enables the implementation of single-step genomic BLUP (ssGBLUP) in large genotyped populations by separating genotyped animals into core and noncore subsets and creating a computationally efficient inverse of the genomic relationship matrix (G). As APY has become the method of choice for large-scale genomic evaluations with BLUP-based methods, a common question is how to choose the animals in the core subset. We compared several core definitions to answer this question. Simulations comprised a moderately heritable trait for 95,010 animals and 50,000 genotypes for animals across five generations. Genotypes consisted of 25,500 SNPs distributed across 15 chromosomes. Genotyping errors and missing pedigree were also mimicked. Core animals were defined by individual generation, by equal representation across generations, or at random. For a sufficiently large core size, all core definitions gave the same accuracies and biases, even when the core animals had imperfect genotypes. When genotyped animals had unknown parents, accuracy and bias were significantly better (p ≤ .05) for the random and across-generation core definitions.
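
    The core/noncore construction can be sketched directly: only the core block of G is inverted, and each noncore animal is conditioned on the core animals with a diagonal residual variance, which is what keeps the inverse cheap for large numbers of genotyped animals. A numpy sketch under those assumptions (function and variable names are ours, for illustration):

        import numpy as np

        def apy_inverse(G, core_idx, noncore_idx):
            """Sketch of the APY inverse of the genomic relationship matrix G,
            returned in (core, noncore) ordering."""
            Gcc = G[np.ix_(core_idx, core_idx)]
            Gcn = G[np.ix_(core_idx, noncore_idx)]
            Gcc_inv = np.linalg.inv(Gcc)
            W = Gcc_inv @ Gcn                      # core -> noncore regression
            g_nn = G[noncore_idx, noncore_idx]     # diagonal of the noncore block
            m = g_nn - np.sum(Gcn * W, axis=0)     # conditional (Mendelian) variances
            Minv = 1.0 / m
            # assemble the blocks of the APY inverse
            n_c, n_n = len(core_idx), len(noncore_idx)
            Ginv = np.zeros((n_c + n_n, n_c + n_n))
            Ginv[:n_c, :n_c] = Gcc_inv + (W * Minv) @ W.T
            Ginv[:n_c, n_c:] = -W * Minv
            Ginv[n_c:, :n_c] = Ginv[:n_c, n_c:].T
            Ginv[n_c:, n_c:] = np.diag(Minv)
            return Ginv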

  18. Declines in moose population density at Isle Royale National Park, MI, USA and accompanied changes in landscape patterns

    USGS Publications Warehouse

    De Jager, N. R.; Pastor, J.

    2009-01-01

    Ungulate herbivores create patterns of forage availability, plant species composition, and soil fertility as they range across large landscapes and consume large quantities of plant material. Over time, herbivore populations fluctuate, producing great potential for spatio-temporal landscape dynamics. In this study, we extend the spatial and temporal scope of a long-term investigation of the relationship between landscape patterns and moose foraging behavior at Isle Royale National Park, MI. We examined how patterns of browse availability and consumption, plant basal area, and soil fertility changed during a recent decline in the moose population. We used geostatistics to examine changes in the nature of spatial patterns in two valleys over 18 years and across short-range and long-range distance scales. Landscape patterns of available and consumed browse changed from either repeated patches or randomly distributed patches in 1988-1992 to random point distributions by 2007, after a recent record-high peak followed by a rapid decline in the moose population. Patterns of available and consumed browse became decoupled during the moose population low, in contrast to the coupled patterns during the earlier high moose population. Distributions of plant basal area and soil nitrogen availability also switched from repeated patches to randomly distributed patches in one valley and to random point distributions in the other valley. Rapid declines in moose population density may release vegetation and soil fertility from browsing pressure and in turn create random landscape patterns.
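
    The geostatistical distinction between patchy and random patterns typically rests on the empirical semivariogram: a repeated-patch pattern shows spatial structure (a rising, possibly oscillating curve), while a random point distribution is flat. A minimal sketch of the classical Matheron estimator (names and binning choices are ours):

        import numpy as np

        def empirical_semivariogram(coords, values, bin_edges):
            """Classical (Matheron) semivariogram estimator:

                gamma(h) = (1 / (2 N(h))) * sum over pairs of (z_i - z_j)^2

            for pairs whose separation distance falls in each bin."""
            coords = np.asarray(coords, float)
            z = np.asarray(values, float)
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            iu = np.triu_indices(len(z), k=1)          # each pair counted once
            dist, sqdiff = d[iu], (z[iu[0]] - z[iu[1]]) ** 2
            gamma = np.empty(len(bin_edges) - 1)
            for b in range(len(bin_edges) - 1):
                in_bin = (dist >= bin_edges[b]) & (dist < bin_edges[b + 1])
                gamma[b] = 0.5 * sqdiff[in_bin].mean() if in_bin.any() else np.nan
            return gamma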

  19. Circumnuclear Structures in Megamaser Host Galaxies

    NASA Astrophysics Data System (ADS)

    Pjanka, Patryk; Greene, Jenny E.; Seth, Anil C.; Braatz, James A.; Henkel, Christian; Lo, Fred K. Y.; Läsker, Ronald

    2017-08-01

    Using the Hubble Space Telescope, we identify circumnuclear (100-500 pc scale) structures in nine new H2O megamaser host galaxies to understand the flow of matter from kpc-scale galactic structures down to the supermassive black holes (SMBHs) at galactic centers. We double the sample analyzed in a similar way by Greene et al. and consider the properties of the combined sample of 18 sources. We find that disk-like structure is virtually ubiquitous when we can resolve <200 pc scales, supporting the notion that non-axisymmetries on these scales are a necessary condition for SMBH fueling. We analyze the orientation of our identified nuclear regions and compare it with the orientation of the megamaser disks and the kpc-scale disks of the hosts. We find marginal evidence that the disk-like nuclear structures show increasing misalignment from the kpc-scale host galaxy disk as the scale of the structure decreases. In turn, we find that the orientations of both the ~100 pc scale nuclear structures and their host galaxies' large-scale disks are consistent with being random with respect to the orientation of their respective megamaser disks.
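
    Consistency with random orientation can be checked with a simple distributional test: if the measured quantity is a difference of projected position angles on the sky, mutual randomness corresponds to a uniform distribution on [0, 90] degrees. A sketch using a Kolmogorov-Smirnov test (the angle values below are placeholders, not the paper's measurements):

        import numpy as np
        from scipy import stats

        # Placeholder misalignment angles (degrees) between nuclear structures
        # and megamaser disks -- illustrative values only.
        misalignment = np.array([12.0, 35.0, 78.0, 51.0, 9.0, 63.0, 44.0, 27.0, 85.0])

        # Under mutual randomness of projected position angles, misalignment is
        # uniform on [0, 90); a large p-value means "consistent with random".
        stat, p_value = stats.kstest(misalignment, stats.uniform(loc=0, scale=90).cdf)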
