NASA Technical Reports Server (NTRS)
Over, Thomas M.; Gupta, Vijay K.
1994-01-01
Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. These differences are quantified for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
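To make the moment-scaling idea concrete, the following is a minimal sketch (ours, not the authors' code) that builds a one-dimensional multiplicative cascade with an assumed lognormal generator and estimates the scaling exponents of its marginal moments; the generator parameter and cascade depth are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.4                      # hypothetical generator parameter (assumed)
n_levels = 14

# Build the cascade: at each level every box splits in two and its density is
# multiplied by an i.i.d. lognormal weight W with E[W] = 1.
field = np.ones(1)
for _ in range(n_levels):
    w = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=field.size * 2)
    field = np.repeat(field, 2) * w

# Estimate the moment-scaling exponents tau(q) from box averages at dyadic scales.
qs = [0.5, 1.0, 2.0, 3.0]
log_scale, log_moment = [], []
for lev in range(1, n_levels + 1):
    box = field.reshape(2**lev, -1).mean(axis=1)   # coarse-grain to scale 2**-lev
    log_scale.append(np.log(2.0**-lev))
    log_moment.append([np.log(np.mean(box**q)) for q in qs])
log_moment = np.array(log_moment)
tau = [np.polyfit(log_scale, log_moment[:, i], 1)[0] for i in range(len(qs))]
print(dict(zip(qs, np.round(tau, 3))))   # nonlinearity in q signals multiscaling
```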
Comparison of evidence on harms of medical interventions in randomized and nonrandomized studies
Papanikolaou, Panagiotis N.; Christidi, Georgia D.; Ioannidis, John P.A.
2006-01-01
Background. Information on major harms of medical interventions comes primarily from epidemiologic studies performed after licensing and marketing. Comparison with data from large-scale randomized trials is occasionally feasible. We compared evidence from randomized trials with that from epidemiologic studies to determine whether they give different estimates of risk for important harms of medical interventions. Methods. We targeted well-defined, specific harms of various medical interventions for which data were already available from large-scale randomized trials (> 4000 subjects). Nonrandomized studies involving at least 4000 subjects addressing these same harms were retrieved through a search of MEDLINE. We compared the relative risks and absolute risk differences for specific harms in the randomized and nonrandomized studies. Results. Eligible nonrandomized studies were found for 15 harms for which data were available from randomized trials addressing the same harms. Comparisons of relative risks between the study types were feasible for 13 of the 15 topics, and of absolute risk differences for 8 topics. The estimated increase in relative risk differed more than 2-fold between the randomized and nonrandomized studies for 7 (54%) of the 13 topics; the estimated increase in absolute risk differed more than 2-fold for 5 (62%) of the 8 topics. There was no clear predilection for randomized or nonrandomized studies to estimate greater relative risks, but usually (75% [6/8]) the randomized trials estimated larger absolute excess risks of harm than the nonrandomized studies did. Interpretation. Nonrandomized studies are often conservative in estimating absolute risks of harms. It would be useful to compare and scrutinize the evidence on harms obtained from both randomized and nonrandomized studies. PMID:16505459
Error simulation of paired-comparison-based scaling methods
NASA Astrophysics Data System (ADS)
Cui, Chengwu
2000-12-01
Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors in scaled values derived from paired-comparison-based scaling methods are simulated by randomly introducing choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sample size. The errors are presented as the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can produce large errors in the derived scale values when the sample size and the number of stimuli are small. Examples are also given to show the potential errors in actually scaled values of color image prints as measured by the method of paired comparison.
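The simulation design lends itself to a compact replication. Below is a minimal sketch assuming Thurstone Case V scaling; the stimulus values, observer count, and replication count are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
true_scale = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # hypothetical stimuli
n_obs, n_rep = 20, 500                             # observers per pair, replications

def scale_once():
    k = len(true_scale)
    p = np.full((k, k), 0.5)
    for i in range(k):
        for j in range(k):
            if i != j:
                # binomially perturbed proportion choosing stimulus i over j
                p_true = norm.cdf(true_scale[i] - true_scale[j])
                p[i, j] = rng.binomial(n_obs, p_true) / n_obs
    p = np.clip(p, 1.0 / (2 * n_obs), 1 - 1.0 / (2 * n_obs))  # avoid infinite z
    return norm.ppf(p).mean(axis=1)       # recovered scale values per stimulus

est = np.array([scale_once() for _ in range(n_rep)])
# average standard deviation of the scaled values, the error measure in the paper
print("avg std of scaled values:", est.std(axis=0).mean())
```

Shrinking `n_obs` or `true_scale` reproduces the reported effect: the spread of recovered scale values grows as the sample size and stimulus count fall.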
Evaluating the Health Impact of Large-Scale Public Policy Changes: Classical and Novel Approaches
Basu, Sanjay; Meghani, Ankita; Siddiqi, Arjumand
2018-01-01
Large-scale public policy changes are often recommended to improve public health. Despite varying widely—from tobacco taxes to poverty-relief programs—such policies present a common dilemma to public health researchers: how to evaluate their health effects when randomized controlled trials are not possible. Here, we review the state of knowledge and experience of public health researchers who rigorously evaluate the health consequences of large-scale public policy changes. We organize our discussion by detailing approaches to address three common challenges of conducting policy evaluations: distinguishing a policy effect from time trends in health outcomes or preexisting differences between policy-affected and -unaffected communities (using difference-in-differences approaches); constructing a comparison population when a policy affects a population for whom a well-matched comparator is not immediately available (using propensity score or synthetic control approaches); and addressing unobserved confounders by utilizing quasi-random variations in policy exposure (using regression discontinuity, instrumental variables, or near-far matching approaches). PMID:28384086
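As a concrete illustration of the first approach mentioned above, here is a minimal difference-in-differences sketch on synthetic data; the variable names and effect size are assumptions for illustration, not results from the review.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({"treated": rng.integers(0, 2, n),   # policy-affected community
                   "post": rng.integers(0, 2, n)})     # observed after the policy
true_effect = -0.3                                     # hypothetical health effect
df["y"] = (1.0 + 0.2 * df.treated + 0.1 * df.post
           + true_effect * df.treated * df.post + rng.normal(0, 1, n))

# The interaction coefficient estimates the policy effect, under the
# parallel-trends assumption the article discusses.
fit = smf.ols("y ~ treated * post", data=df).fit()
print(fit.params["treated:post"])                      # recovers ~ -0.3
```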
Flexible sampling large-scale social networks by self-adjustable random walk
NASA Astrophysics Data System (ADS)
Xu, Xiao-Ke; Zhu, Jonathan J. H.
2016-12-01
Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity and frequent unavailability of OSN population data. Sampling thus becomes perhaps the only feasible solution. How to draw samples that represent the underlying OSNs has remained a formidable task for a number of conceptual and methodological reasons. In particular, most empirically driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real, complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods: uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We mix both induced-edge and external-edge information of sampled nodes in the same sampling process. Our results show that the SARW sampling method can generate unbiased samples of OSNs with maximal precision and minimal cost. The study is helpful for the practice of OSN research by providing a highly needed sampling tool, for the methodological development of large-scale network sampling through comparative evaluations of existing sampling methods, and for the theoretical understanding of human networks by highlighting discrepancies between existing knowledge or assumptions and large-scale real OSN data.
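For context, two of the baseline samplers compared in the study can be sketched compactly; SARW itself is the authors' method and is not reproduced here. The graph model and parameters below are stand-ins.

```python
import random
import networkx as nx

def random_walk(G, start, steps):
    node, sample = start, []
    for _ in range(steps):
        sample.append(node)
        node = random.choice(list(G.neighbors(node)))
    return sample                                  # biased toward high-degree nodes

def mhrw(G, start, steps):
    node, sample = start, []
    for _ in range(steps):
        sample.append(node)
        cand = random.choice(list(G.neighbors(node)))
        # accept with prob min(1, deg(node)/deg(cand)): corrects the degree bias
        if random.random() < min(1.0, G.degree(node) / G.degree(cand)):
            node = cand
    return sample

G = nx.barabasi_albert_graph(10000, 5, seed=3)     # stand-in for an OSN graph
for walk in (random_walk, mhrw):
    s = walk(G, 0, 20000)
    print(walk.__name__, sum(G.degree(v) for v in s) / len(s))  # mean sampled degree
```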
Genetic drift at expanding frontiers promotes gene segregation
Hallatschek, Oskar; Hersen, Pascal; Ramanathan, Sharad; Nelson, David R.
2007-01-01
Competition between random genetic drift and natural selection plays a central role in evolution: whereas nonbeneficial mutations often prevail in small populations by chance, mutations that sweep through large populations typically confer a selective advantage. Here, however, we observe chance effects during range expansions that dramatically alter the gene pool even in large microbial populations. Initially well-mixed populations of two fluorescently labeled strains of Escherichia coli develop well-defined, sector-like regions with fractal boundaries in expanding colonies. The formation of these regions is driven by random fluctuations that originate in a thin band of pioneers at the expanding frontier. A comparison of bacterial and yeast colonies (Saccharomyces cerevisiae) suggests that this large-scale genetic sectoring is a generic phenomenon that may provide a detectable footprint of past range expansions. PMID:18056799
Lessons Learned from Large-Scale Randomized Experiments
ERIC Educational Resources Information Center
Slavin, Robert E.; Cheung, Alan C. K.
2017-01-01
Large-scale randomized studies provide the best means of evaluating practical, replicable approaches to improving educational outcomes. This article discusses the advantages, problems, and pitfalls of these evaluations, focusing on alternative methods of randomization, recruitment, ensuring high-quality implementation, dealing with attrition, and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poidevin, Frédérick; Ade, Peter A. R.; Hargrave, Peter C.
2014-08-10
Turbulence and magnetic fields are expected to be important for regulating molecular cloud formation and evolution. However, their effects on sub-parsec to 100 parsec scales, leading to the formation of starless cores, are not well understood. We investigate the prestellar core structure morphologies obtained from analysis of the Herschel-SPIRE 350 μm maps of the Lupus I cloud. This distribution is first compared on a statistical basis to the large-scale shape of the main filament. We find the distribution of the elongation position angle of the cores to be consistent with a random distribution, which means no specific orientation of the morphology of the cores is observed with respect to the mean orientation of the large-scale filament in Lupus I, nor relative to a large-scale bent filament model. This distribution is also compared to the mean orientation of the large-scale magnetic fields probed at 350 μm with the Balloon-borne Large Aperture Telescope for Polarimetry during its 2010 campaign. Here again we do not find any correlation between the core morphology distribution and the average orientation of the magnetic fields on parsec scales. Our main conclusion is that the local filament dynamics—including secondary filaments that often run orthogonally to the primary filament—and possibly small-scale variations in the local magnetic field direction, could be the dominant factors for explaining the final orientation of each core.
Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Gaura, Elena; Brusey, James; Zhang, Xuekun; Dutkiewicz, Eryk
2016-07-18
Super-dense wireless sensor networks (WSNs) have become popular with the development of the Internet of Things (IoT), Machine-to-Machine (M2M) communications, and Vehicle-to-Vehicle (V2V) networks. While highly dense wireless networks provide efficient and sustainable solutions for collecting precise environmental information, a new channel access scheme is needed to solve the collision problem caused by the large number of competing nodes accessing the channel simultaneously. In this paper, we propose a space-time random access method based on a directional data transmission strategy, by which collisions in the wireless channel are significantly decreased and channel utilization efficiency is greatly enhanced. Simulation results show that our proposed method can decrease the packet loss rate to less than 2% in large-scale WSNs and, in comparison with other channel access schemes for WSNs, can double the average network throughput.
Modeling velocity space-time correlations in wind farms
NASA Astrophysics Data System (ADS)
Lukassen, Laura J.; Stevens, Richard J. A. M.; Meneveau, Charles; Wilczek, Michael
2016-11-01
Turbulent fluctuations of wind velocities cause power-output fluctuations in wind farms. The statistics of velocity fluctuations can be described by velocity space-time correlations in the atmospheric boundary layer, and it is important to derive simple physics-based models for them. The Tennekes-Kraichnan random sweeping hypothesis states that small-scale velocity fluctuations are passively advected by large-scale velocity perturbations in a random fashion. In the present work, this hypothesis is used, with an additional mean wind velocity, to derive a model for the spatial and temporal decorrelation of velocities in wind farms. It turns out that in the framework of this model, space-time correlations are a convolution of the spatial correlation function with a temporal decorrelation kernel. In this presentation, first results of a comparison to large-eddy simulations will be shown, and the potential of the approach to characterize power-output fluctuations of wind farms will be discussed. Acknowledgements: 'Fellowships for Young Energy Scientists' (YES!) of FOM, the US National Science Foundation Grant IIA 1243482, and support by the Max Planck Society.
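The convolution structure of the model can be illustrated with a short numerical sketch; the Gaussian spatial correlation and all parameter values below are assumptions for illustration, not the fitted wind-farm model.

```python
import numpy as np

U, v_rms, L = 8.0, 1.0, 100.0      # mean wind (m/s), sweeping rms (m/s), length scale (m)
r = np.linspace(-2000.0, 2000.0, 8001)
R_spatial = np.exp(-0.5 * (r / L)**2)              # assumed spatial correlation

def space_time_corr(dx, t):
    # Gaussian sweeping kernel whose width grows with lag, centred on the
    # mean-wind-advected separation dx - U*t
    width = v_rms * t + 1e-12
    kernel = np.exp(-0.5 * ((r - (dx - U * t)) / width)**2)
    return np.sum(R_spatial * kernel) / kernel.sum()

for lag in (0.0, 5.0, 10.0, 20.0):
    # correlation along the mean-wind direction, following the advected frame
    print(lag, round(space_time_corr(dx=U * lag, t=lag), 3))
```

The printed values decay with lag even in the advected frame, which is the random-sweeping decorrelation the model isolates.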
NASA Astrophysics Data System (ADS)
Baumgartner, Peter O.
A database on Middle Jurassic-Early Cretaceous radiolarians, consisting of first and final occurrences of 110 species in 226 samples from 43 localities, was used to compute Unitary Associations (UA) and probabilistic ranking and scaling (RASC), in order to test deterministic versus probabilistic quantitative biostratigraphic methods. Because the Mesozoic radiolarian fossil record is mainly dissolution-controlled, the sequence of events differs greatly from section to section. The scatter of local first and final appearances along a time scale is large compared to the species range; it is asymmetrical, with a maximum near the ends of the range, and it is non-random. Thus, these data do not satisfy the statistical assumptions made in ranking and scaling. Unitary Associations produce maximum ranges of the species relative to each other by stacking co-occurrence data from all sections and therefore compensate for the local dissolution effects. Ranking and scaling, based on the assumption of a normal random distribution of the events, produces average ranges which are, for most species, much shorter than the maximum UA ranges. There are, however, a number of species with similar ranges in both solutions. These species are believed to be the most dissolution-resistant and, therefore, the most reliable ones for the definition of biochronozones. The comparison of maximum and average ranges may be a powerful tool to test the reliability of species for biochronology. Dissolution-controlled fossil data yield high crossover frequencies and therefore small, statistically insignificant interfossil distances. Scaling has not produced a useful sequence for this type of data.
The Relationship of Class Size Effects and Teacher Salary
ERIC Educational Resources Information Center
Peevely, Gary; Hedges, Larry; Nye, Barbara A.
2005-01-01
The effects of class size on academic achievement have been studied for decades. Although the results of small-scale, randomized experiments and large-scale, econometric studies point to positive effects of small classes, some scholars see the evidence as ambiguous. Recent analyses from a 4-year, large-scale, randomized experiment on the effects…
Physical models of polarization mode dispersion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menyuk, C.R.; Wai, P.K.A.
The effect of randomly varying birefringence on light propagation in optical fibers is studied theoretically in the parameter regime that will be used for long-distance communications. In this regime, the birefringence is large and varies very rapidly in comparison to the nonlinear and dispersive scale lengths. We determine the polarization mode dispersion, and we show that physically realistic models yield the same result for polarization mode dispersion as earlier heuristic models that were introduced by Poole. We also prove an ergodic theorem.
Hosseini, S M Hadi; Hoeft, Fumiko; Kesler, Shelli R
2012-01-01
In recent years, graph theoretical analyses of neuroimaging data have increased our understanding of the organization of large-scale structural and functional brain networks. However, tools for pipeline application of graph theory to the analysis of brain network topology are still lacking. In this report, we describe the development of a graph-analysis toolbox (GAT) that facilitates analysis and comparison of structural and functional brain networks. GAT provides a graphical user interface (GUI) that facilitates construction and analysis of brain networks, comparison of regional and global topological properties between networks, analysis of network hubs and modules, and analysis of the resilience of the networks to random failure and targeted attack. Area under the curve (AUC) and functional data analyses (FDA), in conjunction with permutation testing, are employed for testing differences in network topologies; these analyses are less sensitive to the thresholding process. We demonstrated the capabilities of GAT by investigating differences in the organization of regional gray-matter correlation networks in survivors of acute lymphoblastic leukemia (ALL) and healthy matched controls (CON). The results revealed an alteration in the small-world characteristics of the brain networks in the ALL survivors, an observation that confirms our hypothesis of widespread neurobiological injury in ALL survivors. Along with demonstrating the capabilities of GAT, this is the first report of altered large-scale structural brain networks in ALL survivors.
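One ingredient of such a toolbox, a permutation test on a global network metric between two groups, can be sketched as follows. This is a conceptual illustration, not GAT code; the synthetic small-world networks stand in for brain networks.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)

def metric(G):
    # one global metric; a toolbox would offer many (path length, modularity, hubs)
    return nx.average_clustering(G)

group_a = [metric(nx.connected_watts_strogatz_graph(90, 8, 0.1, seed=s)) for s in range(12)]
group_b = [metric(nx.connected_watts_strogatz_graph(90, 8, 0.4, seed=s)) for s in range(12)]

obs = np.mean(group_a) - np.mean(group_b)
pooled = np.array(group_a + group_b)
null = []
for _ in range(5000):
    p = rng.permutation(pooled)            # shuffle group labels
    null.append(p[:12].mean() - p[12:].mean())
print("observed diff:", round(obs, 4),
      "p =", np.mean(np.abs(null) >= abs(obs)))
```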
Chen, Xiaoqin; Li, Ying; Zheng, Hui; Hu, Kaming; Zhang, Hongxing; Zhao, Ling; Li, Yan; Liu, Lian; Mang, Lingling; Yu, Shuyuan
2009-07-01
Acupuncture is one of the most commonly used treatments for Bell's palsy in China, and there are a variety of acupuncture treatment options in clinical practice. Since Bell's palsy has three different path-stages (acute, resting, and restoration), whether acupuncture is effective in the different path-stages, and which acupuncture treatment is the best method, are major issues for acupuncture clinical trials on Bell's palsy. In this article, we report the design and protocol of a large-sample multi-center randomized controlled trial of acupuncture for Bell's palsy. There are five acupuncture groups: four defined according to the path-stages and one that is not. In total, 900 patients with Bell's palsy are enrolled in this study. These patients are randomly assigned to one of four staging treatment groups, i.e., 1) staging acupuncture, 2) staging acupuncture and moxibustion, 3) staging electro-acupuncture, 4) staging acupuncture along the yangming musculature, or to a non-staging acupuncture control group. The outcome measures are comparisons of effect among these five groups on the House-Brackmann scale (Global Score and Regional Score), the Facial Disability Index scale, the Classification scale of Facial Paralysis, and the WHOQOL-BREF scale before randomization (baseline phase) and after randomization. The results of this trial will verify the efficacy of staging acupuncture and moxibustion for Bell's palsy and identify the best acupuncture treatment among these five methods.
NASA Astrophysics Data System (ADS)
Gorokhovski, Mikhael; Zamansky, Rémi
2018-03-01
Consistent with observations from recent experiments and DNS, we focus on the effects of strong velocity increments at small spatial scales when simulating the drag force on particles in high-Reynolds-number flows. In this paper, we decompose the instantaneous particle acceleration into its systematic and residual parts. The first part is given by the steady drag force obtained from the large-scale energy-containing motions explicitly resolved by the simulation, while the second denotes the random contribution due to small unresolved turbulent scales. This is in contrast with standard drag models, in which the turbulent microstructures advected by the large-scale eddies are deemed to be filtered by the particle inertia. Here, the residual term is introduced as the particle acceleration conditionally averaged on the instantaneous dissipation rate along the particle path. The latter is modeled as a log-normal stochastic process with locally defined parameters obtained from the resolved field. The residual term is supplemented by an orientation model given by a random walk on the unit sphere. We propose specific models for particles with diameters smaller and larger than the Kolmogorov scale. For small particles, the model is assessed by comparison with direct numerical simulation (DNS). The results show that with this modeling, the particle acceleration statistics from DNS are predicted fairly well, in contrast with the standard LES approach. For particles larger than the Kolmogorov scale, we propose a fluctuating particle response time based on an eddy viscosity estimated at the particle scale. This model gives stretched tails of the particle acceleration distribution and a dependence of its variance consistent with experiments.
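The two stochastic ingredients of the residual term can be sketched as follows, assuming an Ornstein-Uhlenbeck process for the log-dissipation (giving a log-normal marginal) and Kolmogorov scaling for the acceleration magnitude; all parameter values are illustrative, not the authors' calibration.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n, T_L, sigma2 = 1e-3, 100000, 0.1, 1.0
chi = np.empty(n); chi[0] = -sigma2 / 2          # chi = log(eps / <eps>)
e = np.array([0.0, 0.0, 1.0])                    # orientation unit vector
a_z = np.zeros(n)

for i in range(1, n):
    # OU process in the log: mean -sigma2/2 keeps E[eps] = 1
    chi[i] = (chi[i-1] * (1 - dt / T_L) - (dt / T_L) * sigma2 / 2
              + np.sqrt(2 * sigma2 * dt / T_L) * rng.normal())
    # small random rotation of the orientation vector (walk on the sphere)
    kick = rng.normal(size=3) * np.sqrt(dt / T_L)
    e = e + np.cross(kick, e)
    e /= np.linalg.norm(e)
    a_z[i] = np.exp(0.75 * chi[i]) * e[2]        # |a| ~ eps^(3/4), assumed scaling

flatness = np.mean(a_z**4) / np.mean(a_z**2)**2
print("acceleration flatness:", round(flatness, 2))  # >> 3: strongly non-Gaussian tails
```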
Fabio, Anthony; Geller, Ruth; Bear, Todd M.; Foulds, Abigail L.; Duell, Jessica; Sharma, Ravi
2015-01-01
Background. Emerging research highlights the promise of community- and policy-level strategies in preventing youth violence. Large-scale economic developments, such as sports and entertainment arenas and casinos, may improve the living conditions, economics, public health, and overall wellbeing of area residents and may influence rates of violence within communities. Objective. To assess the effect of community economic development efforts on neighborhood residents' perceptions on violence, safety, and economic benefits. Methods. Telephone survey in 2011 using a listed sample of randomly selected numbers in six Pittsburgh neighborhoods. Descriptive analyses examined measures of perceived violence and safety and economic benefit. Responses were compared across neighborhoods using chi-square tests for multiple comparisons. Survey results were compared to census and police data. Results. Residents in neighborhoods with the large-scale economic developments reported more casino-specific and arena-specific economic benefits. However, 42% of participants in the neighborhood with the entertainment arena felt there was an increase in crime, and 29% of respondents from the neighborhood with the casino felt there was an increase. In contrast, crime decreased in both neighborhoods. Conclusions. Large-scale economic developments have a direct influence on the perception of violence, despite actual violence rates. PMID:26273310
Findlay, S; Sinsabaugh, R L
2006-10-01
We examined bacterial metabolic activity and community similarity in shallow subsurface stream sediments distributed across three regions of the eastern United States to assess whether there were parallel changes in functional and structural attributes at this large scale. Bacterial growth, oxygen consumption, and a suite of extracellular enzyme activities were assayed to describe functional variability. Community similarity was assessed using randomly amplified polymorphic DNA (RAPD) patterns. There were significant differences in streamwater chemistry, metabolic activity, and bacterial growth among regions, with, for instance, twofold higher bacterial production in streams near Baltimore, MD, compared to Hubbard Brook, NH. Five of eight extracellular enzymes showed significant differences among regions. Cluster analyses of individual streams by metabolic variables showed clear groups, with significant differences in the representation of sites from different regions among groups. Clustering of sites based on RAPD banding resulted in groups with generally less internal similarity, although there were still differences in the distribution of regional sites. There was a marginally significant (p = 0.09) association between patterns based on functional and structural variables. There were statistically significant but weak (r² ≈ 30%) associations between land cover and measures of both structure and function. These patterns imply a large-scale organization of biofilm communities, and this structure may be imposed by factors such as land cover and covariates such as nutrient concentrations, which are known to also cause differences in the macrobiota of stream ecosystems.
Legume genome evolution viewed through the Medicago truncatula and Lotus japonicus genomes
Cannon, Steven B.; Sterck, Lieven; Rombauts, Stephane; Sato, Shusei; Cheung, Foo; Gouzy, Jérôme; Wang, Xiaohong; Mudge, Joann; Vasdewani, Jayprakash; Schiex, Thomas; Spannagl, Manuel; Monaghan, Erin; Nicholson, Christine; Humphray, Sean J.; Schoof, Heiko; Mayer, Klaus F. X.; Rogers, Jane; Quétier, Francis; Oldroyd, Giles E.; Debellé, Frédéric; Cook, Douglas R.; Retzel, Ernest F.; Roe, Bruce A.; Town, Christopher D.; Tabata, Satoshi; Van de Peer, Yves; Young, Nevin D.
2006-01-01
Genome sequencing of the model legumes, Medicago truncatula and Lotus japonicus, provides an opportunity for large-scale sequence-based comparison of two genomes in the same plant family. Here we report synteny comparisons between these species, including details about chromosome relationships, large-scale synteny blocks, microsynteny within blocks, and genome regions lacking clear correspondence. The Lotus and Medicago genomes share a minimum of 10 large-scale synteny blocks, each with substantial collinearity and frequently extending the length of whole chromosome arms. The proportion of genes syntenic and collinear within each synteny block is relatively homogeneous. Medicago–Lotus comparisons also indicate similar and largely homogeneous gene densities, although gene-containing regions in Medicago (Mt) occupy 20–30% more space than their Lotus (Lj) counterparts, primarily because of larger numbers of Mt retrotransposons. Because the interpretation of genome comparisons is complicated by large-scale genome duplications, we describe synteny, synonymous substitution and phylogenetic analyses to identify and date a probable whole-genome duplication event. There is no direct evidence for any recent large-scale genome duplication in either Medicago or Lotus but instead a duplication predating speciation. Phylogenetic comparisons place this duplication within the Rosid I clade, clearly after the split between legumes and Salicaceae (poplar). PMID:17003129
NASA Astrophysics Data System (ADS)
Zhang, Yu; Li, Yan; Shao, Hao; Zhong, Yaozhao; Zhang, Sai; Zhao, Zongxi
2012-06-01
Band structure and wave localization are investigated for sea surface water waves over large-scale sand wave topography. Sand wave height, sand wave width, water depth, and water width between adjacent sand waves have significant impact on band gaps. Random fluctuations of sand wave height, sand wave width, and water depth induce water wave localization. However, random water width produces a perfect transmission tunnel of water waves at a certain frequency, so that localization does not occur no matter how large a disorder level is applied. Together with the theoretical results, field observations in the Taiwan Bank suggest band gaps and wave localization as the physical mechanism of sea surface water waves propagating over natural large-scale sand waves.
Universal statistics of vortex tangles in three-dimensional random waves
NASA Astrophysics Data System (ADS)
Taylor, Alexander J.
2018-02-01
The tangled nodal lines (wave vortices) in random, three-dimensional wavefields are studied as an exemplar of a fractal loop soup. Their statistics are a three-dimensional counterpart to the characteristic random behaviour of nodal domains in quantum chaos, but in three dimensions the filaments can wind around one another to give distinctly different large scale behaviours. By tracing numerically the structure of the vortices, their conformations are shown to follow recent analytical predictions for random vortex tangles with periodic boundaries, where the local disorder of the model ‘averages out’ to produce large scale power law scaling relations whose universality classes do not depend on the local physics. These results explain previous numerical measurements in terms of an explicit effect of the periodic boundaries, where the statistics of the vortices are strongly affected by the large scale connectedness of the system even at arbitrarily high energies. The statistics are investigated primarily for static (monochromatic) wavefields, but the analytical results are further shown to directly describe the reconnection statistics of vortices evolving in certain dynamic systems, or occurring during random perturbations of the static configuration.
Large-scale structure of randomly jammed spheres
NASA Astrophysics Data System (ADS)
Ikeda, Atsushi; Berthier, Ludovic; Parisi, Giorgio
2017-05-01
We numerically analyze the density field of three-dimensional randomly jammed packings of monodisperse soft frictionless spherical particles, paying special attention to fluctuations occurring at large length scales. We study in detail the two-point static structure factor at low wave vectors in Fourier space. We also analyze the nature of the density field in real space by studying the large-distance behavior of the two-point pair correlation function, of density fluctuations in subsystems of increasing sizes, and of the direct correlation function. We show that such real space analysis can be greatly improved by introducing a coarse-grained density field to disentangle genuine large-scale correlations from purely local effects. Our results confirm that both Fourier and real space signatures of vanishing density fluctuations at large scale are absent, indicating that randomly jammed packings are not hyperuniform. In addition, we establish that the pair correlation function displays a surprisingly complex structure at large distances, which is however not compatible with the long-range negative correlation of hyperuniform systems but fully compatible with an analytic form for the structure factor. This implies that the direct correlation function is short ranged, as we also demonstrate directly. Our results reveal that density fluctuations in jammed packings do not follow the behavior expected for random hyperuniform materials, but display instead a more complex behavior.
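The central diagnostic here, the low-wave-vector structure factor, is straightforward to compute from particle positions. A minimal sketch for a periodic box follows; the random positions are a stand-in for an actual jammed packing.

```python
import numpy as np

rng = np.random.default_rng(6)
N, Lbox = 4096, 1.0
pos = rng.random((N, 3)) * Lbox                # replace with packing coordinates

# S(k) = |sum_j exp(-i k.r_j)|^2 / N on the wave vectors allowed by the box
ns = np.arange(1, 13)
for n in ns:
    k = 2 * np.pi * n / Lbox
    # average over the three axis-aligned wave vectors of magnitude k
    S = np.mean([np.abs(np.exp(-1j * k * pos[:, d]).sum())**2 / N
                 for d in range(3)])
    print(f"k = {k:7.2f}  S(k) = {S:.3f}")
# S(k) -> 1 for an ideal gas; a hyperuniform packing would show S(k) -> 0
# as k -> 0, which is the behavior the paper tests for and rules out.
```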
NASA Astrophysics Data System (ADS)
Sato, Haruo; Fehler, Michael C.
2016-10-01
The envelope broadening and the peak delay of the S-wavelet of a small earthquake with increasing travel distance are results of scattering by random velocity inhomogeneities in the earth medium. As a simple mathematical model, Sato proposed a new stochastic synthesis of the scalar wavelet envelope in 3-D von Kármán type random media when the centre wavenumber of the wavelet is in the power-law spectral range of the random velocity fluctuation. The essential idea is to split the random medium spectrum into two components using the centre wavenumber as a reference: the long-scale (low-wavenumber spectral) component produces the peak delay and the envelope broadening by multiple scattering around the forward direction; the short-scale (high-wavenumber spectral) component attenuates wave amplitude by wide angle scattering. The former is calculated by the Markov approximation based on the parabolic approximation and the latter is calculated by the Born approximation. Here, we extend the theory for the envelope synthesis of a wavelet in 2-D random media, which makes it easy to compare with finite difference (FD) simulation results. The synthetic wavelet envelope is analytically written by using the random medium parameters in the angular frequency domain. For the case that the power spectral density function of the random velocity fluctuation has a steep roll-off at large wavenumbers, the envelope broadening is small and frequency independent, and scattering attenuation is weak. For the case of a small roll-off, however, the envelope broadening is large and increases with frequency, and the scattering attenuation is strong and increases with frequency. As a preliminary study, we compare synthetic wavelet envelopes with the average of FD simulation wavelet envelopes in 50 synthesized random media, which are characterized by the RMS fractional velocity fluctuation ε = 0.05, correlation scale a = 5 km, and the background wave velocity V0 = 4 km/s. We use the radiation of a 2 Hz Ricker wavelet from a point source. For all the cases of von Kármán order κ = 0.1, 0.5 and 1, we find the synthetic wavelet envelopes are a good match to the characteristics of FD simulation wavelet envelopes in a time window starting from the onset through the maximum peak to the time when the amplitude decreases to half the peak amplitude.
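The spectrum-splitting step can be illustrated with the medium parameters stated above; the sketch below uses the 2-D von Kármán spectral shape with the normalization constant omitted (an assumption for illustration, not the paper's exact expression).

```python
import numpy as np

eps, a, kappa = 0.05, 5.0, 0.5        # rms fluctuation, correlation scale (km), order
V0, f_c = 4.0, 2.0                    # background velocity (km/s), centre frequency (Hz)
k_c = 2 * np.pi * f_c / V0            # centre wavenumber (rad/km)

k = np.logspace(-3, 2, 2000)
# 2-D von Karman spectral shape, normalization constant omitted
psd = eps**2 * a**2 / (1 + (k * a)**2) ** (kappa + 1)

# split at k_c: long-scale part -> peak delay / broadening (Markov approx.);
# short-scale part -> apparent scattering attenuation (Born approx.)
power = np.trapz(psd * k, k)                           # isotropic 2-D integral
short = np.trapz((psd * k)[k >= k_c], k[k >= k_c])
print("fraction of fluctuation power at short scales:", round(short / power, 3))
```

Repeating this with κ = 0.1 versus κ = 1 shows the roll-off effect the abstract describes: a smaller κ leaves more power above k_c, hence stronger, frequency-dependent attenuation.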
Multi-thread parallel algorithm for reconstructing 3D large-scale porous structures
NASA Astrophysics Data System (ADS)
Ju, Yang; Huang, Yaohui; Zheng, Jiangtao; Qian, Xu; Xie, Heping; Zhao, Xi
2017-04-01
Geomaterials inherently contain many discontinuous, multi-scale, geometrically irregular pores, forming a complex porous structure that governs their mechanical and transport properties. An efficient reconstruction method for representing porous structures can therefore significantly improve our understanding of the governing effects of porous structures on the properties of porous materials. To improve the efficiency of reconstructing large-scale porous structures, a multi-thread parallel scheme was incorporated into the simulated annealing reconstruction method. Four correlation functions, comprising the two-point probability function, the linear-path functions for the pore and solid phases, and the fractal system function for the solid phase, were employed for better reproduction of complex, well-connected porous structures. In addition, a random sphere packing method and a self-developed pre-conditioning method were incorporated to cast the initial reconstructed model and to select independent interchanging pairs for parallel multi-thread calculation, respectively. The accuracy of the proposed algorithm was evaluated by examining the similarity between the reconstructed structure and a prototype in terms of their geometrical, topological, and mechanical properties. Comparisons of the reconstruction efficiency for porous models at various scales indicated that the parallel multi-thread scheme significantly shortened the execution time for reconstructing a large-scale well-connected porous model compared to a sequential single-thread procedure.
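The underlying annealing loop can be sketched in a single-thread form using only a directional two-point probability function; the linear-path and fractal-system functions and the multi-thread interchange scheme of the paper are omitted, and the target medium below is a synthetic stand-in.

```python
import numpy as np

rng = np.random.default_rng(7)
n, phi = 64, 0.3                                   # grid size, pore fraction

def s2(img, max_r=16):
    # two-point probability along the x axis via FFT autocorrelation
    f = np.fft.fft(img, axis=0)
    corr = np.fft.ifft(f * np.conj(f), axis=0).real / img.shape[0]
    return corr[:max_r].mean(axis=1)

target = (rng.random((n, n)) < phi).astype(float)  # stand-in reference medium
img = (rng.random((n, n)) < phi).astype(float)     # initial random model
T = 1e-4
for step in range(20000):
    p, q = tuple(rng.integers(0, n, 2)), tuple(rng.integers(0, n, 2))
    if img[p] == img[q]:
        continue                                   # a swap must exchange phases
    old = np.sum((s2(img) - s2(target))**2)
    img[p], img[q] = img[q], img[p]
    new = np.sum((s2(img) - s2(target))**2)
    if new > old and rng.random() > np.exp((old - new) / T):
        img[p], img[q] = img[q], img[p]            # reject: undo the swap
    T *= 0.9997                                    # cooling schedule
print("final two-point misfit:", np.sum((s2(img) - s2(target))**2))
```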
Antonov, N V; Kostenko, M M
2014-12-01
The field theoretic renormalization group and the operator product expansion are applied to two models of passive scalar quantities (the density and the tracer fields) advected by a random turbulent velocity field. The latter is governed by the Navier-Stokes equation for a compressible fluid, subject to an external random force with covariance ∝ δ(t − t′) k^(4−d−y), where d is the dimension of space and y is an arbitrary exponent. The original stochastic problems are reformulated as multiplicatively renormalizable field theoretic models; the corresponding renormalization group equations possess infrared attractive fixed points. It is shown that various correlation functions of the scalar field, its powers and gradients, demonstrate anomalous scaling behavior in the inertial-convective range already for small values of y. The corresponding anomalous exponents, identified with scaling (critical) dimensions of certain composite fields ("operators" in the quantum-field terminology), can be systematically calculated as series in y. The practical calculation is performed in the leading one-loop approximation, including exponents in anisotropic contributions. It should be emphasized that, in contrast to Gaussian ensembles with finite correlation time, the model and the perturbation theory presented here are manifestly Galilean covariant. The validity of the one-loop approximation and comparison with Gaussian models are briefly discussed.
He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi
2015-11-01
A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified in three ways: foraging behavior, reproductive behavior, and random behavior. The foraging behavior uses two position-updating strategies, and the selection and crossover operators define the reproductive ability of an artificial fish. The random behavior, which is essentially a mutation strategy, uses the basic cloud generator as its mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for the large-scale RAP.
Kozica, Samantha L; Teede, Helena J; Harrison, Cheryce L; Klein, Ruth; Lombard, Catherine B
2016-01-01
The prevalence of obesity in rural and remote areas is elevated in comparison to urban populations, highlighting the need for interventions targeting obesity prevention in these settings. Implementing evidence-based obesity prevention programs is challenging, however. This study aimed to investigate factors influencing the implementation of obesity prevention programs, including adoption, program delivery, community uptake, and continuation, specifically within rural settings. Nested within a large-scale randomized controlled trial, a qualitative exploratory approach was adopted, with purposive sampling techniques utilized to recruit stakeholders from 41 small rural towns in Australia. In-depth semistructured interviews were conducted with clinical health professionals, health service managers, and local government employees. Open coding was completed independently by two investigators, and thematic analysis was undertaken. The in-depth interviews revealed that obesity prevention programs were valued by the rural workforce. Program implementation is influenced by interrelated factors in two domains: (1) contextual factors and (2) organizational capacity. Key recommendations for managing the challenges of implementing evidence-based programs focused on reducing program delivery costs, aided by the provision of a suite of implementation and evaluation resources. To inform the scale-up of future prevention programs, stakeholders highlighted the need to build local rural capacity by developing supportive university partnerships, generating local program ownership, and promoting active feedback to all program partners. We demonstrate that the rural workforce places a high value on obesity prevention programs. Our results inform the future scale-up of obesity prevention programs, providing an improved understanding of strategies to optimize the implementation of evidence-based prevention programs.
Towards Productive Critique of Large-Scale Comparisons in Education
ERIC Educational Resources Information Center
Gorur, Radhika
2017-01-01
International large-scale assessments and comparisons (ILSAs) in education have become significant policy phenomena. How a country fares in these assessments has come to signify not only how a nation's education system is performing, but also its future prospects in a global economic "race". These assessments provoke passionate arguments…
Non-Gaussian Multi-resolution Modeling of Magnetosphere-Ionosphere Coupling Processes
NASA Astrophysics Data System (ADS)
Fan, M.; Paul, D.; Lee, T. C. M.; Matsuo, T.
2016-12-01
The most dynamic coupling between the magnetosphere and ionosphere occurs in the Earth's polar atmosphere. Our objective is to model scale-dependent stochastic characteristics of high-latitude ionospheric electric fields that originate from solar wind magnetosphere-ionosphere interactions. The Earth's high-latitude ionospheric electric field exhibits considerable variability, with increasing non-Gaussian characteristics at decreasing spatio-temporal scales. Accurately representing the underlying stochastic physical process through random field modeling is crucial not only for scientific understanding of the energy, momentum and mass exchanges between the Earth's magnetosphere and ionosphere, but also for modern technological systems including telecommunication, navigation, positioning and satellite tracking. While a lot of effort has been made to characterize the large-scale variability of the electric field in the context of Gaussian processes, no attempt has been made so far to model the small-scale non-Gaussian stochastic process observed in the high-latitude ionosphere. We construct a novel random field model using spherical needlets as building blocks. The double localization of spherical needlets in both spatial and frequency domains enables the model to capture the non-Gaussian and multi-resolutional characteristics of the small-scale variability. The estimation procedure is computationally feasible due to the utilization of an adaptive Gibbs sampler. We apply the proposed methodology to the computational simulation output from the Lyon-Fedder-Mobarry (LFM) global magnetohydrodynamics (MHD) magnetosphere model. Our non-Gaussian multi-resolution model characterizes significantly more energy in the small-scale ionospheric electric field variability than Gaussian models do. By accurately representing unaccounted-for additional energy and momentum sources to the Earth's upper atmosphere, our novel random field modeling approach will provide a viable remedy to the current numerical models' systematic biases resulting from the underestimation of high-latitude energy and momentum sources.
The analysis of MAI in large scale MIMO-CDMA system
NASA Astrophysics Data System (ADS)
Berceanu, Madalina-Georgiana; Voicu, Carmen; Halunga, Simona
2016-12-01
Recently, technological development has driven rapid growth in the data traffic carried by cellular services, which implies the need for higher data rates and lower latency. To meet users' demands, a series of new data processing techniques has been brought into discussion. In this paper, we consider MIMO technology, which uses multiple antennas at the receiver and transmitter ends. To study the performance obtained with this technology, we propose a MIMO-CDMA system in which image transmission is used instead of random data transmission, to take advantage of a larger range of quality indicators. In the simulations we increased the number of antennas and observed how the performance of the system changes; on this basis, we compare a conventional MIMO and a Large Scale MIMO system in terms of BER and the MSSIM index, a metric that compares the quality of the image before transmission with that of the received one.
NASA Technical Reports Server (NTRS)
Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.
1987-01-01
Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
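The genus-versus-threshold measurement can be approximated with standard tools: the sketch below thresholds a synthetic Gaussian random field and computes the Euler characteristic of the excursion set, which is related (up to sign and convention) to the genus statistic described above. Grid size and smoothing length are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import euler_number

rng = np.random.default_rng(11)
# smoothed white noise as a random-phase (Gaussian) density field
field = gaussian_filter(rng.normal(size=(64, 64, 64)), sigma=4.0, mode="wrap")
field = (field - field.mean()) / field.std()

for nu in (-2.0, -1.0, 0.0, 1.0, 2.0):          # threshold in units of sigma
    chi = euler_number(field > nu, connectivity=1)
    print(f"nu = {nu:+.1f}  Euler characteristic = {chi}")
# For a random-phase field the topology curve is symmetric about nu = 0:
# sponge-like at the median density, isolated clusters and voids at the tails.
```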
The Impact of a Comparison Curriculum in Algebra I: A Randomized Experiment
ERIC Educational Resources Information Center
Star, Jon R.; Rittle-Johnson, Bethany; Durkin, Kelley; Newton, Kristie; Pollack, Courtney; Lynch, Kathleen; Gogolen, Claire
2013-01-01
Comparison is a powerful tool that has been shown to improve learning in a variety of domains. In both laboratory studies and small-scale classroom studies, having learners compare and contrast worked examples has been shown to reliably lead to gains in students' knowledge. Comparison is also integral to "best practices" in mathematics…
Predicting protein functions from redundancies in large-scale protein interaction networks
NASA Technical Reports Server (NTRS)
Samanta, Manoj Pratim; Liang, Shoudan
2003-01-01
Interpreting data from large-scale protein interaction experiments has been a challenging task because of the widespread presence of random false positives. Here, we present a network-based statistical algorithm that overcomes this difficulty and allows us to derive functions of unannotated proteins from large-scale interaction data. Our algorithm uses the insight that if two proteins share a significantly larger number of common interaction partners than expected at random, they have close functional associations. Analysis of publicly available data from Saccharomyces cerevisiae reveals >2,800 reliable functional associations, 29% of which involve at least one unannotated protein. By further analyzing these associations, we derive tentative functions for 81 unannotated proteins with high certainty. Our method is not overly sensitive to the false positives present in the data. Even after adding 50% randomly generated interactions to the measured data set, we are able to recover almost all (approximately 89%) of the original associations.
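The core statistic can be sketched with a hypergeometric tail probability; the paper's exact formulation may differ in detail, and the numbers below are illustrative.

```python
from scipy.stats import hypergeom

def shared_partner_pvalue(N, n1, n2, m):
    """P(X >= m) where X ~ Hypergeom: the chance that two proteins with n1 and
    n2 partners in a network of N proteins share at least m partners at random."""
    return hypergeom.sf(m - 1, N, n1, n2)

# e.g. two proteins with 20 and 30 partners among ~6000 yeast proteins,
# sharing 5 partners:
print(shared_partner_pvalue(6000, 20, 30, 5))   # tiny p -> likely functional link
```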
Randomized central limit theorems: A unified theory.
Eliazar, Iddo; Klafter, Joseph
2010-08-01
The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic: all ensemble components are scaled by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic: the ensemble components are scaled by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs), in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes, and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.
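The contrast between deterministic and stochastic scaling can be illustrated numerically; the sketch below is our illustration, not the paper's derivation, and uses exponential variables with assumed Pareto-distributed random scales.

```python
import numpy as np

rng = np.random.default_rng(8)
n, reps = 5000, 1000

# deterministic scaling: max of n exponentials, centred by log(n) -> Gumbel
det_max = rng.exponential(size=(reps, n)).max(axis=1) - np.log(n)

# stochastic scaling: each component multiplied by an i.i.d. Pareto(alpha=2)
# scale; the products are heavy-tailed, so max / n^(1/2) -> Frechet(2)
scales = rng.pareto(2.0, size=(reps, n)) + 1.0
ran_max = (rng.exponential(size=(reps, n)) * scales).max(axis=1) / np.sqrt(n)

for name, m in (("deterministic", det_max), ("randomized", ran_max)):
    # a heavy upper tail shows up as a large 99th-percentile-to-median ratio
    print(name, "99th pct / median:", round(np.quantile(m, 0.99) / np.median(m), 2))
```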
Envisaging bacteria as phage targets
Abedon, Stephen T.
2011-01-01
It can be difficult to appreciate just how small bacteria and phages are, or how large, in comparison, are the volumes that they occupy. A single milliliter, for example, can represent to a phage what would be, with proper scaling, an "ocean" to you and me. Here I illustrate, using more easily visualized macroscopic examples, the difficulties that a phage, as a randomly diffusing particle, can have in locating bacteria to infect. I conclude by restating the truism that the rate of phage adsorption to a given target bacterium is a function of phage density, that is, titer, in combination with the degree of bacterial susceptibility to adsorption by an encountering phage. PMID:23616932
Bottiglione, F; Carbone, G
2015-01-14
The apparent contact angle of large 2D drops with randomly rough self-affine profiles is numerically investigated. The numerical approach is based upon the assumption of large separation of length scales, i.e. it is assumed that the roughness length scales are much smaller than the drop size, thus making it possible to treat the problem through a mean-field-like approach relying on the large separation of scales. The apparent contact angle at equilibrium is calculated in all wetting regimes, from full wetting (Wenzel state) to partial wetting (Cassie state). It was found that for very large values of the Wenzel roughness parameter (r_W > −1/cos θ_Y, where θ_Y is Young's contact angle), the interface approaches the perfect non-wetting condition and the apparent contact angle is almost equal to 180°. The results are compared with the case of roughness on a single scale (a sinusoidal surface), and it is found that, given the same value of the Wenzel roughness parameter r_W, the apparent contact angle is much larger for the randomly rough surface, proving that the multi-scale character of randomly rough surfaces is a key factor in enhancing superhydrophobicity. Moreover, it is shown that for millimetre-sized drops, the actual drop pressure at static equilibrium weakly affects the wetting regime, which instead seems to be dominated by the roughness parameter. For this reason a methodology to estimate the apparent contact angle is proposed, which relies only upon the micro-scale properties of the rough surface.
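For reference, the two classical limits framing this analysis can be computed directly from the Wenzel and Cassie-Baxter relations; the sketch below is not the paper's mean-field calculation, and the input angles and fractions are illustrative.

```python
import numpy as np

def wenzel_angle(theta_y_deg, r_w):
    # cos(theta*) = r_W cos(theta_Y); clipped to [-1, 1] for the saturated regimes
    c = np.clip(r_w * np.cos(np.radians(theta_y_deg)), -1.0, 1.0)
    return np.degrees(np.arccos(c))

def cassie_angle(theta_y_deg, phi_solid):
    # phi_solid: wetted solid fraction at the composite (air-cushion) interface
    c = phi_solid * (np.cos(np.radians(theta_y_deg)) + 1.0) - 1.0
    return np.degrees(np.arccos(c))

theta_y = 110.0                          # hypothetical Young's angle
print(wenzel_angle(theta_y, r_w=1.8))    # roughness amplifies hydrophobicity
print(cassie_angle(theta_y, 0.1))        # small solid fraction -> approaches 180 deg
```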
Freak waves in random oceanic sea states.
Onorato, M; Osborne, A R; Serio, M; Bertone, S
2001-06-18
Freak waves are very large, rare events in a random ocean wave train. Here we study their generation in a random sea state characterized by the Joint North Sea Wave Project (JONSWAP) spectrum. We assume, to cubic order in nonlinearity, that the wave dynamics are governed by the nonlinear Schrödinger (NLS) equation. We show from extensive numerical simulations of the NLS equation how freak waves in a random sea state are more likely to occur for large values of the Phillips parameter alpha and the enhancement coefficient gamma. Comparison with linear simulations is also reported.
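The type of simulation described can be sketched with a bare-bones split-step Fourier integrator for the 1-D focusing NLS, i u_t + u_xx + 2|u|² u = 0; the initial noise spectrum and all parameters below are illustrative assumptions, not the authors' JONSWAP setup.

```python
import numpy as np

n, Lx, dt, nt = 1024, 100.0, 1e-3, 5000
rng = np.random.default_rng(9)
k = 2 * np.pi * np.fft.fftfreq(n, d=Lx / n)

# plane wave plus low-wavenumber random-phase noise (modulationally unstable)
env = np.exp(-k**2 / 0.5)                          # assumed noise spectrum
noise = np.fft.ifft(env * np.exp(2j * np.pi * rng.random(n)))
u = 1.0 + 0.05 * noise

for _ in range(nt):
    u = np.fft.ifft(np.exp(-1j * k**2 * dt) * np.fft.fft(u))   # linear step
    u = u * np.exp(2j * np.abs(u)**2 * dt)                     # nonlinear step

# a common freak-wave indicator: crest height well above the mean level
print("max|u| / mean|u| =", round(float(np.abs(u).max() / np.abs(u).mean()), 2))
```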
Two Universality Classes for the Many-Body Localization Transition
NASA Astrophysics Data System (ADS)
Khemani, Vedika; Sheng, D. N.; Huse, David A.
2017-08-01
We provide a systematic comparison of the many-body localization (MBL) transition in spin chains with nonrandom quasiperiodic versus random fields. We find evidence suggesting that these belong to two separate universality classes: the first dominated by "intrinsic" intrasample randomness, and the second dominated by external intersample quenched randomness. We show that the effects of intersample quenched randomness are strongly growing, but not yet dominant, at the system sizes probed by exact-diagonalization studies on random models. Thus, the observed finite-size critical scaling collapses in such studies appear to be in a preasymptotic regime near the nonrandom universality class, but showing signs of the initial crossover towards the external-randomness-dominated universality class. Our results provide an explanation for why exact-diagonalization studies on random models see an apparent scaling near the transition while also obtaining finite-size scaling exponents that strongly violate Harris-Chayes bounds that apply to disorder-driven transitions. We also show that the MBL phase is more stable for the quasiperiodic model as compared to the random one, and the transition in the quasiperiodic model suffers less from certain finite-size effects.
Futamura, Masaki; Leshem, Yael A; Thomas, Kim S; Nankervis, Helen; Williams, Hywel C; Simpson, Eric L
2016-02-01
Investigators often use global assessments to provide a snapshot of overall disease severity in dermatologic clinical trials. Although easy to perform, the frequency of use and standardization of global assessments in studies of atopic dermatitis (AD) are unclear. We sought to assess the frequency, definitions, and methods of analysis of the Investigator Global Assessment in randomized controlled trials of AD. We conducted a systematic review using all published randomized controlled trials of AD treatments in the Global Resource of Eczema Trials database (2000-2014). We determined the frequency with which global scales were applied and their defining features. Among 317 trials identified, 101 trials (32%) used an investigator-performed global assessment as an outcome measure. There was large variability between studies in the nomenclature, scale size, definitions, outcome description, and analysis of global assessments. Both static and dynamic scales were identified, ranging from 4- to 7-point scales. North American studies used global assessments more commonly than studies from other countries. The search was restricted to the Global Resource of Eczema Trials database. Global assessments are used frequently in studies of AD, but their complete lack of standardized definitions and implementation precludes any meaningful comparison between studies, which in turn impedes data synthesis to inform clinical decision-making. Standardization is urgently required.
NASA Astrophysics Data System (ADS)
Deng, Chengbin; Wu, Changshan
2013-12-01
Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small MAE values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
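The core of the LSS step is a linear least squares problem: given sample pixels whose endmember abundances are known, solve for the endmember spectra, then invert the same relation per pixel to unmix. The Python sketch below, with made-up shapes and synthetic data, only illustrates that idea and is not the authors' implementation.

```python
import numpy as np

# LSS idea: known abundances A (n_samples x m) and observed spectra
# X (n_samples x n_bands) give endmember signatures E (m x n_bands);
# new pixels are then unmixed with unconstrained SMA. All data synthetic.
rng = np.random.default_rng(0)
E_true = rng.uniform(0, 1, (3, 6))              # 3 endmembers, 6 spectral bands
A = rng.dirichlet(np.ones(3), size=200)         # known sample abundances
X = A @ E_true + rng.normal(0, 0.01, (200, 6))  # observed sample spectra

E_hat, *_ = np.linalg.lstsq(A, X, rcond=None)   # LSS estimate of signatures

pixel = 0.6 * E_true[0] + 0.4 * E_true[2]       # a new mixed pixel
f, *_ = np.linalg.lstsq(E_hat.T, pixel, rcond=None)  # unconstrained SMA unmixing
print(np.round(f, 2))                           # approx. [0.6, 0.0, 0.4]
```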
Imani, Saeed; Atef Vahid, Mohammad Kazem; Gharraee, Banafsheh; Habibi, Mojtaba; Bowen, Sarah; Noroozi, Alireza
2015-03-01
In response to the high burden of opioid abuse in Iran, the Ministry of Health has launched a large-scale opioid maintenance treatment program, delivered through a network of certified drug treatment centers. To promote opioid pharmacotherapies, there is an urgent need to develop and introduce evidence-based psychosocial interventions into the network. This is a randomized clinical trial (RCT) to investigate the feasibility and effectiveness of adding mindfulness-based group therapy to opioid pharmacotherapies as compared to opioid pharmacotherapies alone. The primary outcomes were treatment retention and the percentage of weekly morphine, methamphetamine, and benzodiazepine negative tests. This is the first RCT to explore the effectiveness of mindfulness-based relapse prevention group therapy among opioid-dependent clients in Iran. The feasibility of the group therapy and the comparison of outcomes between the intervention and control groups will be reported in the outcome article.
Bakken, Tor Haakon; Aase, Anne Guri; Hagen, Dagmar; Sundt, Håkon; Barton, David N; Lujala, Päivi
2014-07-01
Climate change and the needed reductions in the use of fossil fuels call for the development of renewable energy sources. However, renewable energy production, such as hydropower (both small- and large-scale) and wind power, has adverse impacts on the local environment by causing reductions in biodiversity and loss of habitats and species. This paper compares the environmental impacts of many small-scale hydropower plants with a few large-scale hydropower projects and one wind power farm, based on the same set of environmental parameters: land occupation, reduction in wilderness areas (INON), visibility, and impacts on red-listed species. Our basis for comparison was similar energy volumes produced, without considering the quality of the energy services provided. The results show that small-scale hydropower performs less favourably on all parameters except land occupation. The land occupation of large hydropower and wind power is in the range of 45-50 m²/MWh, which is more than two times larger than that of small-scale hydropower; the large land occupation for large hydropower is explained by the extent of the reservoirs. On all three other parameters, small-scale hydropower performs more than two times worse than both large hydropower and wind power. Wind power compares similarly to large-scale hydropower regarding land occupation, much better on the reduction in INON areas, and in the same range regarding red-listed species. Our results demonstrate that the selected four parameters provide a basis for further development of a fair and consistent comparison of impacts between the analysed renewable technologies. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
A body image and disordered eating intervention for women in midlife: a randomized controlled trial.
McLean, Siân A; Paxton, Susan J; Wertheim, Eleanor H
2011-12-01
This study examined the outcome of a body image and disordered eating intervention for midlife women. The intervention was specifically designed to address risk factors that are pertinent in midlife. Participants were 61 women aged 30 to 60 years (M = 43.92, SD = 8.22) randomly assigned to intervention (n = 32) or (delayed treatment) control (n = 29) groups. Following an 8-session facilitated group cognitive behavioral therapy-based intervention, outcomes from the Body Shape Questionnaire; Eating Disorder Examination Questionnaire; Body Image Avoidance Questionnaire; Physical Appearance Comparison Scale; Sociocultural Attitudes Towards Appearance Scale, Internalization subscale; measures of appearance importance, cognitive reappraisal, and self-care; Dutch Eating Behavior Questionnaire; and Kessler Psychological Distress Scale were compared for statistical and clinical significance from baseline to posttest and 6-month follow-up. Following the intent-to-treat principle, mixed-model analyses with a mixed within-between design demonstrated that the intervention group had large improvements that were statistically significantly different from the control group in body image, disordered eating, and risk factor variables and that were maintained at 6-month follow-up. Furthermore, the improvements were also of clinical importance. This study provides support for the efficacy of an intervention to reduce body image and eating concerns in midlife women. Further research into interventions tailored for this population is warranted.
ERIC Educational Resources Information Center
Flanagan, Helen E.; Perry, Adrienne; Freeman, Nancy L.
2012-01-01
File review data were used to explore the impact of a large-scale publicly funded Intensive Behavioral Intervention (IBI) program for young children with autism. Outcomes were compared for 61 children who received IBI and 61 individually matched children from a waitlist comparison group. In addition, predictors of better cognitive outcomes were…
The cosmological principle is not in the sky
NASA Astrophysics Data System (ADS)
Park, Chan-Gyung; Hyun, Hwasu; Noh, Hyerim; Hwang, Jai-chan
2017-08-01
The homogeneity of the matter distribution at large scales, known as the cosmological principle, is a central assumption in the standard cosmological model. The assumption is testable, however, and thus no longer needs to remain a principle. Here we perform a test for spatial homogeneity using the Sloan Digital Sky Survey Luminous Red Galaxies (LRG) sample by counting galaxies within a specified volume, with the radius scale varying up to 300 h-1 Mpc. We directly confront the large-scale structure data with the definition of spatial homogeneity by comparing the averages and dispersions of galaxy number counts with the ranges allowed for a homogeneous random distribution. The LRG sample shows significantly larger dispersions of number counts than the random catalogues up to the 300 h-1 Mpc scale, and even the average is located far outside the range allowed in the random distribution; the deviations are statistically impossible to realize in a random distribution. This implies that the cosmological principle does not hold even at such large scales. The same analysis of mock galaxies derived from an N-body simulation, however, suggests that the LRG sample is consistent with the current paradigm of cosmology; thus the simulation is also not homogeneous on that scale. We conclude that the cosmological principle is neither in the observed sky nor demanded to be there by the standard cosmological world model. This reveals the nature of the cosmological principle adopted in the modern cosmology paradigm, and opens a new field of research in theoretical cosmology.
NASA Technical Reports Server (NTRS)
Ricks, W. R.
1994-01-01
PWC is used for pair-wise comparisons in both psychometric scaling techniques and cognitive research. The cognitive tasks and processes of a human operator of automated systems are now prominent considerations when defining system requirements. Recent developments in cognitive research have emphasized the potential utility of psychometric scaling techniques, such as multidimensional scaling, for representing human knowledge and cognitive processing structures. Such techniques involve collecting measurements of stimulus-relatedness from human observers. When data are analyzed using this scaling approach, an n-dimensional representation of the stimuli is produced. This resulting representation is said to describe the subject's cognitive or perceptual view of the stimuli. PWC applies one of the techniques most commonly used to acquire the data necessary for these types of analyses: pair-wise comparisons. PWC administers the task, collects the data from the test subject, and formats the data for analysis. It therefore addresses many of the limitations of traditional "pen-and-paper" methods. Automating the data collection process prevents subjects from going back to check previous responses, eliminates the possibility of erroneous data transfer, and eases the burden of administering and taking the test. By using randomization, PWC ensures that the stimulus pairs are presented in random order and that each subject sees the pairs in a different random order. PWC is written in Turbo Pascal v6.0 for IBM PC compatible computers running MS-DOS. The program has also been successfully compiled with Turbo Pascal v7.0. A sample executable is provided. PWC requires 30K of RAM for execution. The standard distribution medium for this program is a 5.25 inch 360K MS-DOS format diskette. Two electronic versions of the documentation are included on the diskette: one in ASCII format and one in MS Word for Windows format. PWC was developed in 1993.
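The original PWC is Turbo Pascal for MS-DOS; the randomization logic it describes (every stimulus pair presented, pair order shuffled independently per subject, within-pair position randomized) can be sketched in a few lines of Python. The function name and seeding scheme below are illustrative assumptions, not a rendering of the NASA code.

```python
import itertools
import random

def pairwise_trials(stimuli, seed):
    """All unordered stimulus pairs, shuffled per subject, with the
    left/right position within each pair also randomized."""
    rng = random.Random(seed)                      # per-subject random order
    pairs = list(itertools.combinations(stimuli, 2))
    rng.shuffle(pairs)
    return [tuple(rng.sample(p, 2)) for p in pairs]

for subject_id in (1, 2):
    print(subject_id, pairwise_trials(["A", "B", "C", "D"], seed=subject_id))
```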
Analysis of the Efficacy of an Intervention to Improve Parent-Adolescent Problem Solving
Semeniuk, Yulia Yuriyivna; Brown, Roger L.; Riesch, Susan K.
2016-01-01
We conducted a two-group longitudinal partially nested randomized controlled trial to examine whether young adolescent youth-parent dyads participating in Mission Possible: Parents and Kids Who Listen, in contrast to a comparison group, would demonstrate improved problem-solving skill. The intervention is based on the Circumplex Model and Social Problem-Solving Theory. The Circumplex Model posits that families who are balanced, that is, characterized by high cohesion, flexibility, and open communication, function best. Social Problem-Solving Theory informs the process and skills of problem solving. The Conditional Latent Growth Modeling analysis revealed no statistically significant differences in problem solving among the final sample of 127 dyads in the intervention and comparison groups. Analyses of effect sizes indicated large-magnitude group effects for selected scales for youth and dyads, portraying a potential for efficacy and identifying for whom the intervention may be efficacious if study limitations and lessons learned were addressed. PMID:26936844
NASA Technical Reports Server (NTRS)
Duvall, T. L., Jr.; Wilcox, J. M.; Svalgaard, L.; Scherrer, P. H.; Mcintosh, P. S.
1977-01-01
Two methods of observing the neutral line of the large-scale photospheric magnetic field are compared: neutral line positions inferred from H-alpha photographs (McIntosh and Nolte, 1975) and observations of the photospheric magnetic field made with low spatial resolution (three minutes of arc) and high sensitivity using the Stanford magnetograph. The comparison is found to be very favorable.
NASA Astrophysics Data System (ADS)
Handley, John C.; Babcock, Jason S.; Pelz, Jeff B.
2003-12-01
Image evaluation tasks are often conducted using paired comparisons or ranking. To elicit interval scales, both methods rely on Thurstone's Law of Comparative Judgment, in which objects closer in psychological space are more often confused in preference comparisons by a putative discriminal random process. It is often debated whether paired comparisons and ranking yield the same interval scales. An experiment was conducted to assess scale production using paired comparisons and ranking. For this experiment a Pioneer Plasma Display and an Apple Cinema Display were used for stimulus presentation. Observers performed rank order and paired comparisons tasks on both displays. For each of five scenes, six images were created by manipulating attributes such as lightness, chroma, and hue using six different settings. The intention was to simulate the variability from a set of digital cameras or scanners. Nineteen subjects (5 females, 14 males), ranging in age from 19 to 51 years, participated in this experiment. Using a paired comparison model and a ranking model, scales were estimated for each display and image combination, yielding ten scale pairs, ostensibly measuring the same psychological scale. The Bradley-Terry model was used for the paired comparisons data and the Bradley-Terry-Mallows model was used for the ranking data. Each model was fit using maximum likelihood estimation and assessed using likelihood ratio tests. Approximate 95% confidence intervals were also constructed using likelihood ratios. Model fits for paired comparisons were satisfactory for all scales except those from two image/display pairs; the ranking model fit uniformly well on all data sets. Arguing from overlapping confidence intervals, we conclude that paired comparisons and ranking produce no conflicting decisions regarding the ultimate ordering of treatment preferences, but paired comparisons yield greater precision at the expense of lack-of-fit.
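For readers unfamiliar with the Bradley-Terry model used here, a minimal maximum-likelihood fit looks like the following Python sketch. The win-count matrix is toy data, and pinning the first scale value to zero is one common identifiability convention, not necessarily the one used in the study.

```python
import numpy as np
from scipy.optimize import minimize

# W[i, j] counts how often image i was preferred to image j (toy data).
W = np.array([[0, 8, 9],
              [2, 0, 7],
              [1, 3, 0]], dtype=float)

def neg_log_lik(s_free):
    s = np.concatenate(([0.0], s_free))   # pin s_0 = 0 for identifiability
    diff = s[:, None] - s[None, :]        # log P(i beats j) = diff - log(1 + e^diff)
    return -(W * (diff - np.logaddexp(0.0, diff))).sum()

res = minimize(neg_log_lik, np.zeros(W.shape[0] - 1))
scale = np.concatenate(([0.0], res.x))
print(np.round(scale, 2))                 # interval-scale values, larger = preferred
```

Likelihood ratio tests of the kind mentioned in the abstract would compare this fitted log-likelihood against that of a saturated or restricted model.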
Scale invariance in chaotic time series: Classical and quantum examples
NASA Astrophysics Data System (ADS)
Landa, Emmanuel; Morales, Irving O.; Stránský, Pavel; Fossion, Rubén; Velázquez, Victor; López Vieyra, J. C.; Frank, Alejandro
Important aspects of chaotic behavior appear in systems of low dimension, as illustrated by the map module 1. It is indeed a remarkable fact that all systems that make a transition from order to disorder display common properties, irrespective of their exact functional form. We discuss evidence for 1/f power spectra in the chaotic time series associated with classical and quantum examples: the one-dimensional map module 1 and the spectrum of 48Ca. A Detrended Fluctuation Analysis (DFA) method is applied to investigate the scaling properties of the energy fluctuations in the spectrum of 48Ca obtained with a large realistic shell model calculation (ANTOINE code) and with a random shell model (TBRE) calculation, as well as in the time series obtained with the map module 1. We compare the scale-invariant properties of the 48Ca nuclear spectrum with similar analyses applied to the RMT ensembles GOE and GDE. A comparison with the corresponding power spectra is made in both cases. The possible consequences of the results are discussed.
Xu, Jiuping; Feng, Cuiying
2014-01-01
This paper presents an extension of the multimode resource-constrained project scheduling problem for a large-scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large-scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.
Multiresolution comparison of precipitation datasets for large-scale models
NASA Astrophysics Data System (ADS)
Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.
2014-12-01
Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products, along with ground observations, provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the products comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
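A multiresolution comparison of gridded products reduces, at its simplest, to aggregating the fields to successively coarser grids and recomputing verification statistics at each scale. The following Python sketch with synthetic fields and block-mean aggregation illustrates the mechanics only; it is not the verification suite used in the study.

```python
import numpy as np

# Aggregate two gridded precipitation fields to coarser resolutions and
# compare them at each scale. Fields, grid size, and block-mean aggregation
# are illustrative assumptions.
def coarsen(field, factor):
    n = field.shape[0] // factor * factor
    f = field[:n, :n]
    return f.reshape(n // factor, factor, n // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(1)
truth = rng.gamma(2.0, 1.5, (128, 128))              # stand-in "observations"
product = truth + rng.normal(0, 1.0, truth.shape)    # stand-in gridded product

for factor in (1, 4, 16):
    t, p = coarsen(truth, factor), coarsen(product, factor)
    bias = (p - t).mean()
    corr = np.corrcoef(t.ravel(), p.ravel())[0, 1]
    print(f"{factor:>3}x aggregation: bias={bias:+.3f}, corr={corr:.3f}")
```

As the aggregation factor grows, random errors average out and the correlation rises, which is why verification results are scale-dependent.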
Gyulai, Franciska; Rába, Katalin; Baranyai, Ildikó; Berkes, Enikő; Bender, Tamás
2015-01-01
Background. This study evaluates the effect of adjuvant BEMER therapy in patients with knee arthrosis and chronic low back pain in a randomized double-blind design. Methods. A total of 50 patients with chronic low back pain and 50 patients with osteoarthritis of the knee took part in this study and were randomized into 4 groups. Hospitalized patients received a standardized physiotherapy package for 3 weeks, followed by BEMER therapy or placebo. Results. In patients with low back pain, the comparison of the results obtained at the first and second visit showed a significant improvement in resting VAS scores and Fatigue Scale scores. The Oswestry scores and Quality of Life Scale scores showed no change. In patients with knee arthrosis, the comparison of the first and second measurements showed no significant improvement in the abovementioned parameters, while the comparison of the first and third scores revealed a significant improvement in the Fatigue Scale scores and in the vitality test on the Quality of Life Scale. Conclusions. Our study showed that BEMER physical vascular therapy reduced pain and fatigue in the short term in patients with chronic low back pain, while long-term therapy appears to be beneficial in patients with osteoarthritis of the knee. PMID:26078768
Using Small-Scale Randomized Controlled Trials to Evaluate the Efficacy of New Curricular Materials
Bass, Kristin M.; Stark, Louisa A.
2014-01-01
How can researchers in K–12 contexts stay true to the principles of rigorous evaluation designs within the constraints of classroom settings and limited funding? This paper explores this question by presenting a small-scale randomized controlled trial (RCT) designed to test the efficacy of curricular supplemental materials on epigenetics. The researchers asked whether the curricular materials improved students’ understanding of the content more than an alternative set of activities. The field test was conducted in a diverse public high school setting with 145 students who were randomly assigned to a treatment or comparison condition. Findings indicate that students in the treatment condition scored significantly higher on the posttest than did students in the comparison group (effect size: Cohen's d = 0.40). The paper discusses the strengths and limitations of the RCT, the contextual factors that influenced its enactment, and recommendations for others wishing to conduct small-scale rigorous evaluations in educational settings. Our intention is for this paper to serve as a case study for university science faculty members who wish to employ scientifically rigorous evaluations in K–12 settings while limiting the scope and budget of their work. PMID:25452482
Random access in large-scale DNA data storage.
Organick, Lee; Ang, Siena Dumas; Chen, Yuan-Jyue; Lopez, Randolph; Yekhanin, Sergey; Makarychev, Konstantin; Racz, Miklos Z; Kamath, Govinda; Gopalan, Parikshit; Nguyen, Bichlien; Takahashi, Christopher N; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Seelig, Georg; Ceze, Luis; Strauss, Karin
2018-03-01
Synthetic DNA is durable and can encode digital data with high density, making it an attractive medium for data storage. However, recovering stored data on a large scale currently requires all the DNA in a pool to be sequenced, even if only a subset of the information needs to be extracted. Here, we encode and store 35 distinct files (over 200 MB of data) in more than 13 million DNA oligonucleotides, and show that we can recover each file individually and with no errors, using a random access approach. We design and validate a large library of primers that enable individual recovery of all files stored within the DNA. We also develop an algorithm that greatly reduces the sequencing read coverage required for error-free decoding by maximizing information from all sequence reads. These advances demonstrate a viable, large-scale system for DNA data storage and retrieval.
Topological analysis of the CfA redshift survey
NASA Technical Reports Server (NTRS)
Vogeley, Michael S.; Park, Changbom; Geller, Margaret J.; Huchra, John P.; Gott, J. Richard, III
1994-01-01
We study the topology of large-scale structure in the Center for Astrophysics Redshift Survey, which now includes approximately 12,000 galaxies with limiting magnitude m_B ≤ 15.5. The dense sampling and large volume of this survey allow us to compute the topology on smoothing scales from 6 to 20 h-1 Mpc; we thus examine the topology of structure in both 'nonlinear' and 'linear' regimes. On smoothing scales ≤ 10 h-1 Mpc this sample has 3 times the number of resolution elements of samples examined in previous studies. Isodensity surfaces of the smoothed galaxy density field demonstrate that coherent high-density structures and large voids dominate the galaxy distribution. We compute the genus-threshold density relation for isodensity surfaces of the CfA survey. To quantify phase correlation in these data, we compare the CfA genus with the genus of realizations of Gaussian random fields with the power spectrum measured for the CfA survey. On scales ≤ 10 h-1 Mpc the observed genus amplitude is smaller than random phase (96% confidence level). This decrement reflects the degree of phase coherence in the observed galaxy distribution; in other words, the genus amplitude on these scales is not a good measure of the power spectrum slope. On scales greater than 10 h-1 Mpc, where the galaxy distribution is roughly in the 'linear' regime, the genus amplitude is consistent with the random phase amplitude. The shape of the genus curve reflects the strong coherence in the observed structure; the observed genus curve appears broader than random phase (94% confidence level for smoothing scales ≤ 10 h-1 Mpc) because the topology is spongelike over a very large range of density threshold. This departure from random phase is consistent with a distribution like a filamentary net of 'walls with holes.' On smoothing scales approaching approximately 20 h-1 Mpc the shape of the CfA genus curve is consistent with random phase. There is very weak evidence for a shift of the genus toward a 'bubble-like' topology. To test cosmological models, we compute the genus for mock CfA surveys drawn from large (L ≳ 400 h-1 Mpc) N-body simulations of three variants of the cold dark matter (CDM) cosmogony. The genus amplitude of the 'standard' CDM model (Ωh = 0.5, b = 1.5) differs from the observations (96% confidence level) on smoothing scales ≲ 10 h-1 Mpc. An open CDM model (Ωh = 0.2) and a CDM model with nonzero cosmological constant (Ωh = 0.24, λ0 = 0.6) are consistent with the observed genus amplitude over the full range of smoothing scales. All of these models fail (97% confidence level) to match the broadness of the observed genus curve on smoothing scales ≤ 10 h-1 Mpc.
Cosmicflows Constrained Local UniversE Simulations
NASA Astrophysics Data System (ADS)
Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M.; Steinmetz, Matthias; Tully, R. Brent; Pomarède, Daniel; Carlesi, Edoardo
2016-01-01
This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box, where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h-1 Mpc scale with respect to that found for random simulations. The one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s-1, i.e., the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h-1 Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h-1 Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.
Scale-free characteristics of random networks: the topology of the world-wide web
NASA Astrophysics Data System (ADS)
Barabási, Albert-László; Albert, Réka; Jeong, Hawoong
2000-06-01
The world-wide web forms a large directed graph, whose vertices are documents and edges are links pointing from one document to another. Here we demonstrate that despite its apparent random character, the topology of this graph has a number of universal scale-free characteristics. We introduce a model that leads to a scale-free network, capturing in a minimal fashion the self-organization processes governing the world-wide web.
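The scale-free model referred to here grows a graph by preferential attachment: each new vertex links to m existing vertices with probability proportional to their current degree. A minimal Python rendering of that growth rule, with arbitrary parameters and seeding, might look as follows.

```python
import random
from collections import Counter

# Growth by preferential attachment: new vertices attach m edges to existing
# vertices with probability proportional to degree. Parameters (n, m) and the
# seeding of the first m vertices are illustrative choices.
def preferential_attachment(n, m, seed=0):
    rng = random.Random(seed)
    edges = []
    repeated = []                 # each vertex appears once per unit of degree
    targets = list(range(m))      # the first new vertex links to the m seeds
    for v in range(m, n):
        edges.extend((v, t) for t in targets)
        repeated.extend(targets)
        repeated.extend([v] * m)
        targets = set()
        while len(targets) < m:   # m distinct, degree-proportional picks
            targets.add(rng.choice(repeated))
    return edges

edges = preferential_attachment(10_000, m=2)
degree = Counter(v for edge in edges for v in edge)
tail = Counter(degree.values())
print(sorted(tail.items())[:8])   # heavy-tailed degree distribution
```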
NASA Astrophysics Data System (ADS)
Andresen, Juan Carlos; Katzgraber, Helmut G.; Schechter, Moshe
2017-12-01
Random fields disorder Ising ferromagnets by aligning single spins in the direction of the random field in three space dimensions, or by flipping large ferromagnetic domains in dimensions two and below. While the former requires random fields of typical magnitude similar to the interaction strength, the latter Imry-Ma mechanism requires only infinitesimal random fields. Recently, it has been shown that for dilute anisotropic dipolar systems a third mechanism exists, in which the ferromagnetic phase is disordered by finite-size glassy domains at a random field of finite magnitude that is considerably smaller than the typical interaction strength. Using large-scale Monte Carlo simulations and zero-temperature numerical approaches, we show that this mechanism applies to disordered ferromagnets with competing short-range ferromagnetic and antiferromagnetic interactions, suggesting its generality in ferromagnetic systems with competing interactions and an underlying spin-glass phase. A finite-size-scaling analysis of the magnetization distribution suggests that the transition might be first order.
Muroff, Jordana; Amodeo, Maryann; Larson, Mary Jo; Carey, Margaret; Loftin, Ralph D
2011-01-01
This article describes a data management system (DMS) developed to support a large-scale randomized study of an innovative web-course that was designed to improve substance abuse counselors' knowledge and skills in applying a substance abuse treatment method (i.e., cognitive behavioral therapy; CBT). The randomized trial compared the performance of web-course-trained participants (intervention group) and printed-manual-trained participants (comparison group) to determine the effectiveness of the web-course in teaching CBT skills. A single DMS was needed to support all aspects of the study: web-course delivery and management, as well as randomized trial management. The authors briefly reviewed several other systems that were described as built either to handle randomized trials or to deliver and evaluate web-based training. However, it was clear that these systems fell short of meeting our needs for simultaneous, coordinated management of the web-course and the randomized trial. New England Research Institute's (NERI) proprietary Advanced Data Entry and Protocol Tracking (ADEPT) system was coupled with the web-programmed course and customized for our purposes. This article highlights the requirements for a DMS that operates at the intersection of web-based course management systems and randomized clinical trial systems, and the extent to which the coupled, customized ADEPT satisfied those requirements. Recommendations are included for institutions and individuals considering conducting randomized trials and web-based training programs, and seeking a DMS that can meet similar requirements.
Chartier-Kastler, Emmanuel; Denys, Pierre
2011-01-01
Neurogenic bladder can be effectively managed with intermittent catheterization (IC) to improve or restore continence, but there is no consensus on which type of catheter is preferred. Hydrophilic catheters were developed to reduce urethral friction, thereby minimizing trauma and sticking, and making them more acceptable to the patient, and easier and safer to use. The objective of this article was to review the literature on the benefits of hydrophilic catheters in patients with neurogenic bladder. A large body of experimental and observational evidence, including randomized controlled trials, was identified using PubMed. Compared with plastic catheters that have been manually lubricated with gel, hydrophilic catheters reduce urinary tract infection and microhematuria. Hydrophilic catheters are also associated with high levels of patient satisfaction because they are comfortable to use. There is a wealth of evidence, including randomized controlled trials, to support the benefits of hydrophilic catheters in terms of safety and quality of life, especially in men with spinal cord injury. More data are required for spina bifida, multiple sclerosis, and in women. Further research is warranted, especially large-scale and long-term robust comparisons of different types of catheter, and in well-defined and stratified populations. Copyright © 2010 Wiley-Liss, Inc.
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...
2016-03-18
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
Davari-Ashtiani, Rozita; Shahrbabaki, Mahin Eslami; Razjouyan, Katayoon; Amini, Homayoun; Mazhabdar, Homa
2010-12-01
We compared the efficacy and side effects of buspirone with those of methylphenidate (MPH) in the treatment of children with attention-deficit/hyperactivity disorder (ADHD). A total of 34 children with ADHD as defined by DSM-IV-TR were randomized to buspirone (0.5 mg/kg/day) or methylphenidate (0.3-1 mg/kg/day), dosed on a weight-adjusted basis, for a 6-week double-blind clinical trial. The principal outcome measures were the teacher and parent ADHD Rating Scales. Side effects were assessed with the specific side-effect checklist of each drug. In both groups, the scores on the teacher and parent ADHD Rating Scales declined significantly by the 6th week as compared to baseline (p = 0.001); these effects were also observed in the subscales. No significant differences were observed between the two protocols on the total scores of the parent and teacher ADHD Rating Scales, but methylphenidate was superior to buspirone in decreasing the symptoms of inattention. The side effects of buspirone were mild and rare in comparison with MPH. Buspirone has a favorable side-effect profile, and it has clinically and statistically significant impacts on improving ADHD symptoms in children. These preliminary findings of the efficacy of buspirone in children with ADHD need confirmation in large, crossover studies.
ERIC Educational Resources Information Center
Fleisch, Brahm; Taylor, Stephen; Schöer, Volker; Mabogoane, Thabo
2017-01-01
This article illustrates the value of large-scale impact evaluations with counterfactual components. It begins by exploring the limitations of small-scale impact studies, which do not allow reliable inference to a wider population or which do not use valid comparison groups. The paper then describes the design features of a recent large-scale…
ERIC Educational Resources Information Center
Gersten, Russell; Rolfhus, Eric; Clarke, Ben; Decker, Lauren E.; Wilkins, Chuck; Dimino, Joseph
2015-01-01
Replication studies are extremely rare in education. This randomized controlled trial (RCT) is a scale-up replication of Fuchs et al., which in a sample of 139 found a statistically significant positive impact for Number Rockets, a small-group intervention for at-risk first graders that focused on building understanding of number operations. The…
Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.
2002-01-01
Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
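The AMG-versus-Krylov comparison can be reproduced in miniature with the open-source pyamg package on a model diffusion problem. The sketch below is illustrative only and is unrelated to the MODFLOW 2000 integration described above; it simply exercises the same class of solver.

```python
import numpy as np
import pyamg
from pyamg.gallery import poisson
from scipy.sparse.linalg import cg

# Classical (Ruge-Stuben) AMG versus plain conjugate gradients on a sparse
# 2-D Poisson model problem, a stand-in for a ground water flow matrix.
A = poisson((300, 300), format='csr')         # 90,000-unknown sparse system
b = np.random.default_rng(2).standard_normal(A.shape[0])

ml = pyamg.ruge_stuben_solver(A)              # build the AMG hierarchy
x_amg = ml.solve(b, tol=1e-8)

x_cg, info = cg(A, b)                          # unpreconditioned CG for contrast
print("AMG residual:", np.linalg.norm(b - A @ x_amg))
print("CG  residual:", np.linalg.norm(b - A @ x_cg), "info:", info)
```

The speedups reported in the abstract come from exactly this effect at much larger scale: AMG's convergence rate is nearly independent of problem size, at the cost of extra memory for the multigrid hierarchy.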
Cooperation-Induced Topological Complexity: A Promising Road to Fault Tolerance and Hebbian Learning
2012-03-16
topological complexity a way to compare the efficiency of a scale-free network to the random network of Erdős and Rényi. All this is extensively discussed in an excellent review paper by Arenas et al. (2008) showing very interesting comparisons of Erdős-Rényi networks and scale-free networks as a function
NASA Technical Reports Server (NTRS)
Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius
1998-01-01
This is a follow-up to the preceding presentation by Crosson and Schamschula. The grid size for remote microwave measurements is much coarser than the hydrological model computational grids. To validate the hydrological models against measurements, we propose mechanisms to disaggregate the microwave measurements to allow comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. We can provide continuing estimates of the small-scale features by correcting a simple 0th-order estimate, updating each small-scale model state with each large-scale measurement using a straightforward method based on Kalman filtering.
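As a toy illustration of the Kalman-filter-based correction sketched above, suppose a coarse microwave measurement y observes the areal mean of n fine-scale soil moisture cells; the standard Kalman update then pulls the fine-scale prior toward the measurement. All numbers and the uncorrelated prior covariance in this Python sketch are assumptions for illustration.

```python
import numpy as np

# One Kalman update merging a coarse areal-mean observation into a
# fine-scale prior. H is the observation operator (areal average).
n = 16                                   # fine cells per coarse footprint
H = np.full((1, n), 1.0 / n)             # coarse obs = mean of fine cells
x_prior = np.full(n, 0.25) + np.random.default_rng(3).normal(0, 0.03, n)
P = np.eye(n) * 0.03**2                  # prior covariance (uncorrelated here)
R = np.array([[0.01**2]])                # measurement error variance
y = np.array([0.30])                     # coarse-scale measurement

S = H @ P @ H.T + R                      # innovation covariance
K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
x_post = x_prior + K @ (y - H @ x_prior)
print(x_prior.mean(), "->", x_post.mean())  # posterior mean pulled toward y
```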
Robinson, Delbert G.; Gallego, Juan A.; John, Majnu; Petrides, Georgios; Hassoun, Youssef; Zhang, Jian-Ping; Lopez, Leonardo; Braga, Raphael J.; Sevy, Serge M.; Addington, Jean; Kellner, Charles H.; Tohen, Mauricio; Naraine, Melissa; Bennett, Natasha; Greenberg, Jessica; Lencz, Todd; Correll, Christoph U.; Kane, John M.; Malhotra, Anil K.
2015-01-01
Research findings are particularly important for medication choice for first-episode patients, as individual prior medication response to guide treatment decisions is unavailable. We describe the first large-scale double-masked randomized comparison with first-episode patients of aripiprazole and risperidone, 2 commonly used first-episode treatment agents. One hundred ninety-eight participants aged 15–40 years with schizophrenia, schizophreniform disorder, schizoaffective disorder or psychotic disorder Not Otherwise Specified, and who had been treated in their lifetime with antipsychotics for 2 weeks or less, were randomly assigned to double-masked aripiprazole (5–30 mg/d) or risperidone (1–6 mg/d) and followed for 12 weeks. Positive symptom response rates did not differ (62.8% vs 56.8%), nor did time to response. Aripiprazole-treated participants had better negative symptom outcomes but experienced more akathisia. Body mass index change did not differ between treatments, but advantages were found for aripiprazole treatment for total and low-density lipoprotein cholesterol, fasting glucose, and prolactin levels. Post hoc analyses suggested advantages for aripiprazole on depressed mood. Overall, if the potential for akathisia is a concern, low-dose risperidone as used in this trial may be a preferred choice over aripiprazole. Otherwise, aripiprazole would be the preferred choice over risperidone in most situations, based upon metabolic outcome advantages and some symptom advantages within the context of similar positive symptom response between medications. PMID:26338693
Li, Nicole; Yan, Lijing L; Niu, Wenyi; Labarthe, Darwin; Feng, Xiangxian; Shi, Jingpu; Zhang, Jianxin; Zhang, Ruijuan; Zhang, Yuhong; Chu, Hongling; Neiman, Andrea; Engelgau, Michael; Elliott, Paul; Wu, Yangfeng; Neal, Bruce
2013-11-01
Cardiovascular diseases are the leading cause of death and disability in China. High blood pressure caused by excess intake of dietary sodium is widespread, and an effective sodium reduction program has the potential to improve cardiovascular health. This study is a large-scale, cluster-randomized trial done in five Northern Chinese provinces. Two counties have been selected from each province and 12 townships in each county, making a total of 120 clusters. Within each township one village has been selected for participation, with 1:1 randomization stratified by county. The sodium reduction intervention comprises community health education and a food supply strategy based upon providing access to salt substitute. Subsidization of the price of salt substitute was done in 30 intervention villages selected at random. Control villages continued usual practices. The primary outcome for the study is the dietary sodium intake level estimated from assays of 24-hour urine. The trial recruited and randomized 120 townships in April 2011. The sodium reduction program was commenced in the 60 intervention villages between May and June of that year, with outcome surveys scheduled for October to December 2012. Baseline data collection shows that randomisation achieved good balance across groups. The establishment of the China Rural Health Initiative has enabled the launch of this large-scale trial designed to identify a novel, scalable strategy for reduction of dietary sodium and control of blood pressure. If proved effective, the intervention could plausibly be implemented at low cost in large parts of China and other countries worldwide. © 2013.
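The design described (one village per township, 12 townships per county, 1:1 randomization stratified by county) amounts to a shuffled half-split within each stratum. A toy Python sketch follows; the county and village labels and the seed are made up.

```python
import random

# 1:1 cluster randomization stratified by county: shuffle the villages in
# each county and assign half to intervention, half to control.
def randomize_clusters(villages_by_county, seed=2011):
    rng = random.Random(seed)
    assignment = {}
    for county, villages in villages_by_county.items():
        shuffled = villages[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2          # 1:1 split within each stratum
        for v in shuffled[:half]:
            assignment[v] = "intervention"
        for v in shuffled[half:]:
            assignment[v] = "control"
    return assignment

villages = {f"county_{c}": [f"village_{c}_{i}" for i in range(12)]
            for c in range(10)}            # 10 counties x 12 villages = 120 clusters
arms = randomize_clusters(villages)
print(sum(a == "intervention" for a in arms.values()))   # 60
```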
Dirmeyer, Paul A.; Wu, Jiexia; Norton, Holly E.; Dorigo, Wouter A.; Quiring, Steven M.; Ford, Trenton W.; Santanello, Joseph A.; Bosilovich, Michael G.; Ek, Michael B.; Koster, Randal D.; Balsamo, Gianpaolo; Lawrence, David M.
2018-01-01
Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison. PMID:29645013
Random sequential adsorption of straight rigid rods on a simple cubic lattice
NASA Astrophysics Data System (ADS)
García, G. D.; Sanchez-Varretti, F. O.; Centres, P. M.; Ramirez-Pastor, A. J.
2015-10-01
Random sequential adsorption of straight rigid rods of length k (k-mers) on a simple cubic lattice has been studied by numerical simulations and finite-size scaling analysis. The k-mers were irreversibly and isotropically deposited into the lattice. The calculations were performed by using a new theoretical scheme, whose accuracy was verified by comparison with rigorous analytical data. The results, obtained for k ranging from 2 to 64, revealed that (i) the jamming coverage for dimers (k = 2) is θj = 0.918388(16); our result corrects the previously reported value of θj = 0.799(2) (Tarasevich and Cherkasova, 2007); (ii) θj is a decreasing function of the k-mer size, with θj(∞) = 0.4045(19) being the limiting coverage for large k; and (iii) the ratio between percolation threshold and jamming coverage shows a non-universal behavior, monotonically decreasing to zero with increasing k.
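A bare-bones version of the dimer (k = 2) deposition can be simulated in a few lines of Python. The sketch below uses periodic boundaries and declares jamming after a long run of failed attempts, a crude stand-in for the paper's finite-size scaling analysis, so the estimate is only approximate.

```python
import random

# Random sequential adsorption of dimers on an L^3 simple cubic lattice
# with periodic boundaries. Lattice size and the failed-attempt cutoff
# are arbitrary choices for illustration.
L = 24
occupied = [[[False] * L for _ in range(L)] for _ in range(L)]
rng = random.Random(42)
filled, fails = 0, 0

while fails < 200_000:                        # crude jamming criterion
    x, y, z = (rng.randrange(L) for _ in range(3))
    dx, dy, dz = [(1, 0, 0), (0, 1, 0), (0, 0, 1)][rng.randrange(3)]
    x2, y2, z2 = (x + dx) % L, (y + dy) % L, (z + dz) % L
    if not occupied[x][y][z] and not occupied[x2][y2][z2]:
        occupied[x][y][z] = occupied[x2][y2][z2] = True
        filled += 2
        fails = 0
    else:
        fails += 1

print("estimated jamming coverage:", filled / L**3)  # approaches ~0.918 for dimers
```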
Jeong, Hyundoo; Yoon, Byung-Jun
2017-03-14
Network querying algorithms provide computational means to identify conserved network modules in large-scale biological networks that are similar to known functional modules, such as pathways or molecular complexes. Two main challenges for network querying algorithms are the high computational complexity of detecting potential isomorphism between the query and the target graphs and ensuring the biological significance of the query results. In this paper, we propose SEQUOIA, a novel network querying algorithm that effectively addresses these issues by utilizing a context-sensitive random walk (CSRW) model for network comparison and minimizing the network conductance of potential matches in the target network. The CSRW model, inspired by the pair hidden Markov model (pair-HMM) that has been widely used for sequence comparison and alignment, can accurately assess the node-to-node correspondence between different graphs by accounting for node insertions and deletions. The proposed algorithm identifies high-scoring network regions based on the CSRW scores, which are subsequently extended by maximally reducing the network conductance of the identified subnetworks. Performance assessment based on real PPI networks and known molecular complexes shows that SEQUOIA outperforms existing methods and clearly enhances the biological significance of the query results. The source code and datasets can be downloaded from http://www.ece.tamu.edu/~bjyoon/SEQUOIA .
Statistical Measures of Large-Scale Structure
NASA Astrophysics Data System (ADS)
Vogeley, Michael; Geller, Margaret; Huchra, John; Park, Changbom; Gott, J. Richard
1993-12-01
To quantify clustering in the large-scale distribution of galaxies and to test theories for the formation of structure in the universe, we apply statistical measures to the CfA Redshift Survey. This survey is complete to m_B(0) = 15.5 over two contiguous regions which cover one-quarter of the sky and include ~11,000 galaxies. The salient features of these data are voids with diameter 30-50 h-1 Mpc and coherent dense structures with a scale of ~100 h-1 Mpc. Comparison with N-body simulations rules out the "standard" CDM model (Ω = 1, b = 1.5, σ8 = 1) at the 99% confidence level because this model has insufficient power on scales λ > 30 h-1 Mpc. An unbiased open universe CDM model (Ωh = 0.2) and a biased CDM model with non-zero cosmological constant (Ωh = 0.24, λ0 = 0.6) match the observed power spectrum. The amplitude of the power spectrum depends on the luminosity of galaxies in the sample; bright (L > L*) galaxies are more strongly clustered than faint galaxies. The paucity of bright galaxies in low-density regions may explain this dependence. To measure the topology of large-scale structure, we compute the genus of isodensity surfaces of the smoothed density field. On scales in the "non-linear" regime, ≤ 10 h-1 Mpc, the high- and low-density regions are multiply-connected over a broad range of density threshold, as in a filamentary net. On smoothing scales > 10 h-1 Mpc, the topology is consistent with statistics of a Gaussian random field. Simulations of CDM models fail to produce the observed coherence of structure on non-linear scales (>95% confidence level). The underdensity probability (the frequency of regions with density contrast δρ/ρ̄ = -0.8) depends strongly on the luminosity of galaxies; underdense regions are significantly more common (>2σ) in bright (L > L*) galaxy samples than in samples which include fainter galaxies.
Cluster Tails for Critical Power-Law Inhomogeneous Random Graphs
NASA Astrophysics Data System (ADS)
van der Hofstad, Remco; Kliem, Sandra; van Leeuwaarden, Johan S. H.
2018-04-01
Recently, the scaling limit of cluster sizes for critical inhomogeneous random graphs of rank-1 type having finite variance but infinite third moment degrees was obtained in Bhamidi et al. (Ann Probab 40:2299-2361, 2012). It was proved that when the degrees obey a power law with exponent τ ∈ (3, 4), the sequence of clusters ordered in decreasing size and multiplied through by n^{-(τ-2)/(τ-1)} converges as n → ∞ to a sequence of decreasing non-degenerate random variables. Here, we study the tails of the limit of the rescaled largest cluster, i.e., the probability that the scaling limit of the largest cluster takes a large value u, as a function of u. This extends a related result of Pittel (J Combin Theory Ser B 82(2):237-269, 2001) for the Erdős-Rényi random graph to the setting of rank-1 inhomogeneous random graphs with infinite third moment degrees. We make use of delicate large deviations and weak convergence arguments.
A Data Management System Integrating Web-Based Training and Randomized Trials
ERIC Educational Resources Information Center
Muroff, Jordana; Amodeo, Maryann; Larson, Mary Jo; Carey, Margaret; Loftin, Ralph D.
2011-01-01
This article describes a data management system (DMS) developed to support a large-scale randomized study of an innovative web-course that was designed to improve substance abuse counselors' knowledge and skills in applying a substance abuse treatment method (i.e., cognitive behavioral therapy; CBT). The randomized trial compared the performance…
ERIC Educational Resources Information Center
Piper, Benjamin; Oyanga, Arbogast; Mejia, Jessica; Pouezevara, Sarah
2017-01-01
Previous large-scale education technology interventions have shown only modest impacts on student achievement. Building on results from an earlier randomized controlled trial of three different applications of information and communication technologies (ICTs) on primary education in Kenya, the Tusome Early Grade Reading Activity developed the…
Udsen, Flemming Witt; Lilholt, Pernille Heyckendorff; Hejlesen, Ole; Ehlers, Lars Holger
2014-05-21
Several feasibility studies show promising results of telehealthcare on health outcomes and health-related quality of life for patients suffering from chronic obstructive pulmonary disease, and some of these studies show that telehealthcare may even lower healthcare costs. However, the only large-scale trial we have so far - the Whole System Demonstrator Project in England - has raised doubts about these results, since it concluded that telehealthcare as a supplement to usual care is not likely to be cost-effective compared with usual care alone. The present study is known as 'TeleCare North' in Denmark. It seeks to address these doubts by implementing a large-scale, pragmatic, cluster-randomized trial with nested economic evaluation. The purpose of the study is to assess the effectiveness and the cost-effectiveness of a telehealth solution for patients suffering from chronic obstructive pulmonary disease compared to usual practice. General practitioners will be responsible for recruiting eligible participants (1,200 participants are expected) for the trial in the geographical area of the North Denmark Region. Twenty-six municipality districts in the region define the randomization clusters. The primary outcomes are changes in health-related quality of life and the incremental cost-effectiveness ratio measured from baseline to follow-up at 12 months. Secondary outcomes are changes in mortality and physiological indicators (diastolic and systolic blood pressure, pulse, oxygen saturation, and weight). There has been a call for large-scale clinical trials with rigorous cost-effectiveness assessments in telehealthcare research. This study is meant to improve the international evidence base for the effectiveness and cost-effectiveness of telehealthcare for patients suffering from chronic obstructive pulmonary disease by implementing a large-scale pragmatic cluster-randomized clinical trial. ClinicalTrials.gov NCT01984840, November 14, 2013.
NASA Technical Reports Server (NTRS)
Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.
1998-01-01
We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems, and its memory and timing requirements are compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640, and the linear-scaling memory and CPU requirements of the CEM are demonstrated. We show that the CPU requirements of the CEM and CG-DMS are similar for calculations with comparable accuracy.
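As a minimal illustration of the diagonalization-free idea behind a Chebyshev expansion method (a toy sketch: the 1D tight-binding Hamiltonian, chemical potential, and temperature below are arbitrary stand-ins, not the carbon parameterization of the paper):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Toy 1D tight-binding Hamiltonian (nearest-neighbor hopping -1), spectrum in (-2, 2).
n = 200
H = np.zeros((n, n))
for i in range(n - 1):
    H[i, i + 1] = H[i + 1, i] = -1.0

# Rescale so the spectrum lies in [-1, 1], as Chebyshev expansions require.
emin, emax = -2.1, 2.1
Hs = (2.0 * H - (emax + emin) * np.eye(n)) / (emax - emin)

# Chebyshev coefficients of the Fermi function (arguments in scaled energy units).
mu, kT = 0.0, 0.05
coeffs = C.chebinterpolate(lambda x: 1.0 / (1.0 + np.exp((x - mu) / kT)), 80)

# Density matrix rho = sum_k c_k T_k(Hs) via the matrix recursion
# T_{k+1} = 2 Hs T_k - T_{k-1}: only matrix products, no diagonalization.
T_prev, T_curr = np.eye(n), Hs.copy()
rho = coeffs[0] * T_prev + coeffs[1] * T_curr
for c in coeffs[2:]:
    T_prev, T_curr = T_curr, 2.0 * Hs @ T_curr - T_prev
    rho += c * T_curr

print("electron count (trace of the density matrix):", np.trace(rho))
```

With sparse matrices and a sparsity threshold on the density matrix, each recursion step involves only local matrix products, which is the source of the linear scaling claimed for such methods.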
Cameron, Chris; Zummo, Jacqueline; Desai, Dharmik N; Drake, Christine; Hutton, Brian; Kotb, Ahmed; Weiden, Peter J
Aripiprazole lauroxil (AL) is a long-acting injectable atypical antipsychotic recently approved for treatment of schizophrenia on the basis of a large-scale trial of two doses of AL versus placebo. There are no direct-comparison studies with paliperidone palmitate (PP; long-acting antipsychotic used most often in acute settings) for the acute psychotic episode. To indirectly compare efficacy and safety of the pivotal AL study with all PP studies meeting indirect comparison criteria. Systematic searches of MEDLINE, Embase, Cochrane CENTRAL, PsycINFO, ClinicalTrials.gov, International Clinical Trials Registry Platform, and gray literature were performed to identify randomized controlled trials of PP with similar designs to the AL trial. Bayesian network meta-analysis compared treatments with respect to symptom response and tolerability issues including weight gain, akathisia, parkinsonism, and likelihood of treatment-emergent adverse events. Three appropriate PP studies were identified for indirect comparison. Both doses of AL (441 mg and 882 mg monthly) were used and compared with two efficacious doses of PP (156 mg and 234 mg monthly). All four active-treatment conditions were associated with comparable reductions in acute symptoms (Positive and Negative Syndrome Scale) versus placebo and were of similar magnitude (range of mean difference -8.12 to -12.01, with overlapping 95% credible intervals). Between-group comparisons of active-treatment arms were associated with summary estimates of magnitude near 0. No clinically meaningful differences in selected safety or tolerability parameter incidence were found between active treatments. These results suggest that both AL and PP are effective for treatment of adults experiencing acute exacerbation of schizophrenia. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
A geographic comparison of selected large-scale planetary surface features
NASA Technical Reports Server (NTRS)
Meszaros, S. P.
1984-01-01
Photographic and cartographic comparisons of geographic features on Mercury, the Moon, Earth, Mars, Ganymede, Callisto, Mimas, and Tethys are presented. Planetary structures caused by impacts, volcanism, tectonics, and other natural forces are included. Each feature is discussed individually and then those of similar origin are compared at the same scale.
Hien, Denise A.; Wells, Elizabeth A.; Jiang, Huiping; Suarez-Morales, Lourdes; Campbell, Aimee N. C.; Cohen, Lisa R.; Miele, Gloria M.; Killeen, Therese; Brigham, Gregory S.; Zhang, Yulei; Hansen, Cheri; Hodgkins, Candace; Hatch-Maillette, Mary; Brown, Chanda; Kulaga, Agatha; Kristman-Valente, Allison; Chu, Melissa; Sage, Robert; Robinson, James A.; Liu, David; Nunes, Edward V.
2009-01-01
We compared the effectiveness of Seeking Safety (SS), an integrated cognitive behavioral treatment for substance use disorder (SUD) and post-traumatic stress disorder (PTSD), to an active comparison health education group (Women’s Health Education [WHE]) within NIDA’s Clinical Trials Network. We randomized 353 women to receive 12 sessions of SS (M = 6.2 sessions) or WHE (M = 6.0 sessions) with follow-up assessment at post-treatment and 3-, 6-, and 12-months post-treatment. Primary outcomes were the Clinician Administered PTSD Scale (CAPS) and PTSD Symptom Scale-Self Report (PSS-SR), and substance use (self-reported abstinence in the prior 7 days and days per week of any substance use). Intention-to-treat analysis showed large, clinically significant reductions in CAPS and PSS-SR symptoms (d = 1.94 and 1.12, respectively), but no reliable difference between conditions. Substance use outcomes were not significantly different over time between the two treatments and at follow-up showed no significant change from baseline, when 46% of participants were abstinent. Study results do not favor SS over WHE as an adjunct to SUD treatment for women with PTSD and reflect considerable opportunity to improve clinical outcomes in community-based treatments for these co-occurring conditions. PMID:19634955
Towards large scale multi-target tracking
NASA Astrophysics Data System (ADS)
Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus
2014-06-01
Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large-scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.
Li, Nicole; Yan, Lijing L.; Niu, Wenyi; Labarthe, Darwin; Feng, Xiangxian; Shi, Jingpu; Zhang, Jianxin; Zhang, Ruijuan; Zhang, Yuhong; Chu, Hongling; Neiman, Andrea; Engelgau, Michael; Elliott, Paul; Wu, Yangfeng; Neal, Bruce
2013-01-01
Background Cardiovascular diseases are the leading cause of death and disability in China. High blood pressure caused by excess intake of dietary sodium is widespread, and an effective sodium reduction program has the potential to improve cardiovascular health. Design This study is a large-scale, cluster-randomized trial conducted in five Northern Chinese provinces. Two counties have been selected from each province and 12 townships in each county, making a total of 120 clusters. Within each township one village has been selected for participation, with 1:1 randomization stratified by county. The sodium reduction intervention comprises community health education and a food supply strategy based upon providing access to salt substitute. Subsidization of the price of salt substitute was done in 30 intervention villages selected at random. Control villages continued usual practices. The primary outcome for the study is the dietary sodium intake level estimated from assays of 24-hour urine. Trial status The trial recruited and randomized 120 townships in April 2011. The sodium reduction program was commenced in the 60 intervention villages between May and June of that year, with outcome surveys scheduled for October to December 2012. Baseline data collection shows that randomization achieved good balance across groups. Discussion The establishment of the China Rural Health Initiative has enabled the launch of this large-scale trial designed to identify a novel, scalable strategy for reduction of dietary sodium and control of blood pressure. If proved effective, the intervention could plausibly be implemented at low cost in large parts of China and other countries worldwide. PMID:24176436
Modeling space-time correlations of velocity fluctuations in wind farms
NASA Astrophysics Data System (ADS)
Lukassen, Laura J.; Stevens, Richard J. A. M.; Meneveau, Charles; Wilczek, Michael
2018-07-01
An analytical model for the streamwise velocity space-time correlations in turbulent flows is derived and applied to the special case of velocity fluctuations in large wind farms. The model is based on the Kraichnan-Tennekes random sweeping hypothesis, capturing the decorrelation in time while including a mean wind velocity in the streamwise direction. In the resulting model, the streamwise velocity space-time correlation is expressed as a convolution of the pure space correlation with an analytical temporal decorrelation kernel. Hence, the spatio-temporal structure of velocity fluctuations in wind farms can be derived from the spatial correlations only. We then explore the applicability of the model to predict spatio-temporal correlations in turbulent flows in wind farms. Comparisons of the model with data from a large eddy simulation of flow in a large, spatially periodic wind farm are performed, where needed model parameters such as spatial and temporal integral scales and spatial correlations are determined from the large eddy simulation. Good agreement is obtained between the model and large eddy simulation data showing that spatial data may be used to model the full temporal structure of fluctuations in wind farms.
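A minimal numerical sketch of the model's structure (assumed Gaussian spatial correlation and arbitrary parameter values; this illustrates the convolution form only, not the paper's calibrated wind-farm model):

```python
import numpy as np

# Streamwise separation grid.
r = np.linspace(-10.0, 10.0, 2001)
dr = r[1] - r[0]

# Assumed pure spatial correlation (Gaussian with integral scale L).
L = 1.0
R_space = np.exp(-r**2 / (2.0 * L**2))

def space_time_correlation(tau, U=1.0, sigma_v=0.5):
    """Random-sweeping form: the space-time correlation is the spatial
    correlation convolved with a Gaussian decorrelation kernel of width
    sigma_v * tau and advected by the mean velocity U."""
    if tau == 0.0:
        return R_space
    kernel = np.exp(-r**2 / (2.0 * (sigma_v * tau)**2))
    kernel /= kernel.sum() * dr                    # normalize to unit area
    R = np.convolve(R_space, kernel, mode="same") * dr
    return np.roll(R, int(round(U * tau / dr)))    # mean-flow advection

# The correlation at zero separation decays with time lag, as expected.
for tau in (0.0, 1.0, 2.0):
    print(tau, round(space_time_correlation(tau)[len(r) // 2], 4))
```

The design point mirrored here is the one stated in the abstract: only spatial quantities (the spatial correlation, integral scales, and sweeping velocity) are needed to generate the full temporal structure.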
On the statistical mechanics of the 2D stochastic Euler equation
NASA Astrophysics Data System (ADS)
Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg
2011-12-01
The dynamics of vortices and large-scale structures is qualitatively very different in two-dimensional flows compared to their three-dimensional counterparts, due to the presence of multiple integrals of motion. These are believed to be responsible for a variety of phenomena observed in Euler flow, such as the formation of large-scale coherent structures, the existence of meta-stable states, and random abrupt changes in the topology of the flow. In this paper we study the stochastic dynamics of the finite-dimensional approximation of the 2D Euler flow based on the Lie algebra su(N), which preserves all integrals of motion. In particular, we exploit the rich algebraic structure responsible for the existence of Euler's conservation laws to calculate the invariant measures, explore their properties, and study the approach to equilibrium. Unexpectedly, we find deep connections between equilibrium measures of finite-dimensional su(N) truncations of the stochastic Euler equations and random matrix models. Our work can be regarded as a preparation for addressing the questions of large-scale structures, meta-stability, and the dynamics of random transitions between different flow topologies in stochastic 2D Euler flows.
Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg
2016-12-13
We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring O(N^3) operations and O(N^2) memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.
Neuropsychological Profiles on the WAIS-IV of Adults With ADHD.
Theiling, Johanna; Petermann, Franz
2016-11-01
The aim of the study was to investigate the pattern of neuropsychological profiles on the Wechsler Adult Intelligence Scale-IV (WAIS-IV) for adults with ADHD relative to randomly matched controls, and to assess overall intellectual ability discrepancies between the Full Scale Intelligence Quotient (FSIQ) and the General Ability Index (GAI). In all, 116 adults with ADHD and 116 controls between 16 and 71 years were assessed. Relative to controls, adults with ADHD show significant decrements in subtests with working memory and processing speed demands, with moderate to large effect sizes, and a higher GAI in comparison with the FSIQ. This suggests, first, that deficits identified with previous WAIS versions are robust in adults with ADHD and remain deficient when assessed with the WAIS-IV; second, that the WAIS-IV reliably differentiates between patients and controls; and third, that a reduction of the FSIQ is most likely due to a decrement in working memory and processing speed abilities. The findings have essential implications for the diagnostic process. © The Author(s) 2014.
A random distribution reacting mixing layer model
NASA Technical Reports Server (NTRS)
Jones, Richard A.; Marek, C. John; Myrabo, Leik N.; Nagamatsu, Henry T.
1994-01-01
A methodology for simulation of molecular mixing, and the resulting velocity and temperature fields, has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Research Center Planar Reacting Shear Layer (PRSL) facility, and results are compared to experimental data. A Gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer, and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and nonreacting shear layer present in the facility, given basic assumptions about turbulence properties.
Genus Topology of Structure in the Sloan Digital Sky Survey: Model Testing
NASA Astrophysics Data System (ADS)
Gott, J. Richard, III; Hambrick, D. Clay; Vogeley, Michael S.; Kim, Juhan; Park, Changbom; Choi, Yun-Young; Cen, Renyue; Ostriker, Jeremiah P.; Nagamine, Kentaro
2008-03-01
We measure the three-dimensional topology of large-scale structure in the Sloan Digital Sky Survey (SDSS). This allows the genus statistic to be measured with unprecedented statistical accuracy. The sample size is now sufficiently large to allow the topology to be an important tool for testing galaxy formation models. For comparison, we make mock SDSS samples using several state-of-the-art N-body simulations: the Millennium run of Springel et al. (10 billion particles), the Kim & Park CDM models (1.1 billion particles), and the Cen & Ostriker hydrodynamic code models (8.6 billion cell hydro mesh). Each of these simulations uses a different method for modeling galaxy formation. The SDSS data show a genus curve that is broadly characteristic of that produced by Gaussian random-phase initial conditions. Thus, the data strongly support the standard model of inflation, where Gaussian random-phase initial conditions are produced by random quantum fluctuations in the early universe. But on top of this general shape there are measurable differences produced by nonlinear gravitational effects and biasing connected with galaxy formation. The N-body simulations have been tuned to reproduce the power spectrum and multiplicity function but not topology, so topology is an acid test for these models. The data show a "meatball" shift (only partly due to the Sloan Great Wall of galaxies) that differs at the 2.5σ level from the results of the Millennium run and the Kim & Park dark halo models, even including the effects of cosmic variance.
Real-time fast physical random number generator with a photonic integrated circuit.
Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu
2017-03-20
Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.
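For illustration, the simplest test in the NIST SP 800-22 suite, the frequency (monobit) test, can be sketched as follows; the bit source here is a software placeholder, not the photonic generator:

```python
import math
import numpy as np

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: map bits to +/-1, sum them,
    and compare the normalized excess against a half-normal distribution."""
    n = len(bits)
    s = int(np.sum(2 * bits.astype(np.int64) - 1))
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2.0))

# Placeholder source: numpy's PRNG stands in for the physical bit stream.
bits = np.random.randint(0, 2, size=1_000_000)
p = monobit_p_value(bits)
print("monobit p-value:", p, "->", "PASS" if p >= 0.01 else "FAIL")
```

A sequence passes this test at the conventional significance level when p ≥ 0.01; the full suite applies many such tests to disjoint blocks of the stream.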
Spatiotemporal property and predictability of large-scale human mobility
NASA Astrophysics Data System (ADS)
Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin
2018-04-01
Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied due to their application potential in human behavior prediction and recommendation, and control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of the scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and high predictability. Furthermore, a scale-free featured mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter, is proposed; the model outperforms existing human mobility models under scenarios of large geographical scales.
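A toy sketch of the two-ingredient mechanism named above, preferential return plus exploration (parameter values are illustrative; the paper's additional Gaussian assumption on the exploration tendency parameter is omitted here for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_epr(steps=10_000, rho=0.6, gamma=0.2):
    """Exploration-and-preferential-return walker: with probability
    rho * S**(-gamma), where S is the number of distinct sites visited so
    far, explore a brand-new site; otherwise return to a previously visited
    site chosen with probability proportional to its visit frequency."""
    visits = {0: 1}            # site id -> visit count
    next_site = 1
    for _ in range(steps):
        S = len(visits)
        if rng.random() < rho * S ** (-gamma):
            visits[next_site] = 1          # exploration
            next_site += 1
        else:                               # preferential return
            sites = list(visits)
            counts = np.array([visits[s] for s in sites], dtype=float)
            visits[rng.choice(sites, p=counts / counts.sum())] += 1
    return visits

print("distinct sites visited:", len(simulate_epr()))
```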
Non-volatile, high density, high speed, Micromagnet-Hall effect Random Access Memory (MHRAM)
NASA Technical Reports Server (NTRS)
Wu, Jiin C.; Katti, Romney R.; Stadler, Henry L.
1991-01-01
The micromagnetic Hall effect random access memory (MHRAM) has the potential of replacing ROMs, EPROMs, EEPROMs, and SRAMs because of its ability to achieve non-volatility, radiation hardness, high density, and fast access times simultaneously. Information is stored magnetically in small magnetic elements (micromagnets), allowing unlimited data retention time, unlimited numbers of rewrite cycles, and inherent radiation hardness and SEU immunity, making the MHRAM suitable for ground-based as well as spaceflight applications. The MHRAM device design is not affected by areal property fluctuations in the micromagnet, so high operating margins and high yield can be achieved in large-scale integrated circuit (IC) fabrication. The MHRAM has short access times (less than 100 nsec). Write access time is short because on-chip transistors are used to gate current quickly, and magnetization reversal in the micromagnet can occur in a matter of a few nanoseconds. Read access time is short because the high electron mobility sensor (InAs or InSb) produces a large signal voltage in response to the fringing magnetic field from the micromagnet. High storage density is achieved since a unit cell consists only of two transistors and one micromagnet Hall effect element. By comparison, a DRAM unit cell has one transistor and one capacitor, and a SRAM unit cell has six transistors.
van Staa, T-P; Klungel, O; Smeeth, L
2014-06-01
A solid foundation of evidence of the effects of an intervention is a prerequisite of evidence-based medicine. The best source of such evidence is considered to be randomized trials, which are able to avoid confounding. However, they may not always estimate effectiveness in clinical practice. Databases that collate anonymized electronic health records (EHRs) from different clinical centres have been widely used for many years in observational studies. Randomized point-of-care trials have been initiated recently to recruit and follow patients using the data from EHR databases. In this review, we describe how EHR databases can be used for conducting large-scale simple trials and discuss the advantages and disadvantages of their use. © 2014 The Association for the Publication of the Journal of Internal Medicine.
NASA Technical Reports Server (NTRS)
Gott, J. Richard, III; Weinberg, David H.; Melott, Adrian L.
1987-01-01
A quantitative measure of the topology of large-scale structure, the genus of density contours in a smoothed density distribution, is described and applied. For random-phase (Gaussian) density fields, the mean genus per unit volume exhibits a universal dependence on threshold density, with a normalizing factor that can be calculated from the power spectrum. If large-scale structure formed from the gravitational instability of small-amplitude density fluctuations, the topology observed today on suitable scales should follow the topology in the initial conditions. The technique is illustrated by applying it to simulations of galaxy clustering in a flat universe dominated by cold dark matter. The technique is also applied to a volume-limited sample of the CfA redshift survey and to a model in which galaxies reside on the surfaces of polyhedral 'bubbles'. The topology of the evolved mass distribution and 'biased' galaxy distribution in the cold dark matter models closely matches the topology of the density fluctuations in the initial conditions. The topology of the observational sample is consistent with the random-phase, cold dark matter model.
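For reference, the universal curve referred to here has the standard Gaussian random-phase form (ν is the threshold in units of the standard deviation of the smoothed field):

```latex
g(\nu) \;=\; \frac{1}{4\pi^{2}}
\left( \frac{\langle k^{2} \rangle}{3} \right)^{3/2}
\left( 1 - \nu^{2} \right) e^{-\nu^{2}/2},
```

where ⟨k²⟩ is the power-spectrum-weighted mean square wavenumber of the smoothed field, which supplies the normalizing factor mentioned in the text; the (1 − ν²) factor yields a sponge-like, multiply connected topology (positive genus) near the median threshold and isolated clusters or voids (negative genus) at extreme thresholds.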
Galaxy clusters in simulations of the local Universe: a matter of constraints
NASA Astrophysics Data System (ADS)
Sorce, Jenny G.; Tempel, Elmo
2018-06-01
To study the full formation and evolution history of galaxy clusters and their population, high-resolution simulations of the latter are flourishing. However, comparing observed clusters to the simulated ones on a one-to-one basis to refine the models and theories down to the details is non-trivial. The large variety of clusters limits comparisons between observed and numerical clusters. Simulations resembling the local Universe down to the cluster scale permit pushing this limit. Simulated and observed clusters can be matched on a one-to-one basis for direct comparisons, provided that clusters are well reproduced besides being in the proper large-scale environment. Comparing random and local Universe-like simulations obtained with differently grouped observational catalogues of peculiar velocities, this paper shows that the grouping scheme used to remove non-linear motions in the catalogues that constrain the simulations affects the quality of the numerical clusters. With a less aggressive grouping scheme - galaxies still falling on to clusters are preserved - combined with a bias minimization scheme, the mass of the dark matter haloes, simulacra for five local clusters - Virgo, Centaurus, Coma, Hydra, and Perseus - is increased by 39 per cent, closing the gap with observational mass estimates. Simulacra are found on average in 89 per cent of the simulations, an increase of 5 per cent with respect to the previous grouping scheme. The only exception is Perseus. Since the Perseus-Pisces region is not well covered by the peculiar velocity catalogue used here, the latest release lets us foresee a better simulacrum for Perseus in the near future.
Masaracchio, Michael; Cleland, Joshua A; Hellman, Madeleine; Hagins, Marshall
2013-03-01
Randomized clinical trial. To investigate the short-term effects of thoracic spine thrust manipulation combined with cervical spine nonthrust manipulation (experimental group) versus cervical spine nonthrust manipulation alone (comparison group) in individuals with mechanical neck pain. Research has demonstrated improved outcomes with both nonthrust manipulation directed at the cervical spine and thrust manipulation directed at the thoracic spine in patients with neck pain. Previous studies have not determined if thoracic spine thrust manipulation may increase benefits beyond those provided by cervical nonthrust manipulation alone. Sixty-four participants with mechanical neck pain were randomized into 1 of 2 groups, an experimental or comparison group. Both groups received 2 treatment sessions of cervical spine nonthrust manipulation and a home exercise program consisting of active range-of-motion exercises, and the experimental group received additional thoracic spine thrust manipulations. Outcome measures were collected at baseline and at a 1-week follow-up, and included the numeric pain rating scale, the Neck Disability Index, and the global rating of change. Participants in the experimental group demonstrated significantly greater improvements (P<.001) on both the numeric pain rating scale and Neck Disability Index at the 1-week follow-up compared to those in the comparison group. In addition, 31 of 33 (94%) participants in the experimental group, compared to 11 of 31 participants (35%) in the comparison group, indicated a global rating of change score of +4 or higher at the 1-week follow-up, with an associated number needed to treat of 2. Individuals with neck pain who received a combination of thoracic spine thrust manipulation and cervical spine nonthrust manipulation plus exercise demonstrated better overall short-term outcomes on the numeric pain rating scale, the Neck Disability Index, and the global rating of change.
Parameters affecting the resilience of scale-free networks to random failures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Link, Hamilton E.; LaViolette, Randall A.; Lane, Terran
2005-09-01
It is commonly believed that scale-free networks are robust to massive numbers of random node deletions. For example, Cohen et al. in (1) study scale-free networks including some which approximate the measured degree distribution of the Internet. Their results suggest that if each node in this network failed independently with probability 0.99, most of the remaining nodes would still be connected in a giant component. In this paper, we show that a large and important subclass of scale-free networks are not robust to massive numbers of random node deletions. In particular, we study scale-free networks which have a minimum node degree of 1 and a power-law degree distribution beginning with nodes of degree 1 (power-law networks). We show that, in a power-law network approximating the Internet's reported distribution, when the probability of deletion of each node is 0.5 only about 25% of the surviving nodes in the network remain connected in a giant component, and the giant component does not persist beyond a critical failure rate of 0.9. The new result is partially due to improved analytical accommodation of the large number of degree-0 nodes that result after node deletions. Our results apply to power-law networks with a wide range of power-law exponents, including Internet-like networks. We give both analytical and empirical evidence that such networks are not generally robust to massive random node deletions.
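A small empirical sketch of the deletion experiment described above (networkx, with arbitrary network size and exponent; exact numbers vary by realization and by how multi-edges and self-loops are handled):

```python
import random
import networkx as nx

def giant_fraction_after_failures(n=20_000, exponent=2.3, p_delete=0.5, seed=1):
    """Power-law configuration model with minimum degree 1: delete each node
    independently with probability p_delete, then report the fraction of the
    surviving nodes that lie in the largest connected component."""
    random.seed(seed)
    degrees = [max(1, int(d)) for d in nx.utils.powerlaw_sequence(n, exponent)]
    if sum(degrees) % 2:               # the configuration model needs an even sum
        degrees[0] += 1
    G = nx.Graph(nx.configuration_model(degrees, seed=seed))  # collapse multi-edges
    G.remove_edges_from(nx.selfloop_edges(G))

    survivors = [v for v in G if random.random() > p_delete]
    if not survivors:
        return 0.0
    H = G.subgraph(survivors)
    giant = max(nx.connected_components(H), key=len)
    return len(giant) / len(survivors)

print("giant-component fraction after 50% random failures:",
      round(giant_fraction_after_failures(), 3))
```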
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
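A generic numpy illustration of the "sketching" idea on a plain linear least-squares problem (a stand-in for the concept only, not the RGA/PCGA code, which is written in Julia and operates on geostatistical parameterizations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear inverse problem y = J m + noise with many observations.
n_obs, n_par = 10_000, 100
J = rng.standard_normal((n_obs, n_par))
m_true = rng.standard_normal(n_par)
y = J @ m_true + 0.01 * rng.standard_normal(n_obs)

# Gaussian sketching matrix S (k x n_obs): compresses the observations to
# k << n_obs rows while approximately preserving the least-squares geometry.
k = 500
S = rng.standard_normal((k, n_obs)) / np.sqrt(k)

m_full, *_ = np.linalg.lstsq(J, y, rcond=None)
m_sketch, *_ = np.linalg.lstsq(S @ J, S @ y, rcond=None)
print("relative difference between solutions:",
      np.linalg.norm(m_sketch - m_full) / np.linalg.norm(m_full))
```

The point mirrored from the text is that the reduced problem's size is set by k, chosen according to the information content, rather than by the raw number of observations.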
Effects of Interim Assessments on Student Achievement: Evidence from a Large-Scale Experiment
ERIC Educational Resources Information Center
Konstantopoulos, Spyros; Miller, Shazia R.; van der Ploeg, Arie; Li, Wei
2016-01-01
We use data from a large-scale, school-level randomized experiment conducted in 2010-2011 in public schools in Indiana. Our sample includes more than 30,000 students in 70 schools. We examine the impact of two interim assessment programs (i.e., mCLASS in Grades K-2 and Acuity in Grades 3-8) on mathematics and reading achievement. Two-level models…
ERIC Educational Resources Information Center
Wendt, Heike; Bos, Wilfried; Goy, Martin
2011-01-01
Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2018-04-01
This work proposes new low-rank approximation approaches with significant memory savings for large-scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low-rank approximation of large-sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high-resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary-fitting methods are memory-efficient low-rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large-scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
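A compact sketch of the randomized SVD building block (a Halko-style range finder; the "dictionary" below is a synthetic low-rank stand-in, not simulated MRF signal evolutions):

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_svd(A, rank, n_oversample=10):
    """Randomized low-rank SVD: sample the range of A with a Gaussian test
    matrix, orthonormalize, and take the SVD of the small projected matrix."""
    k = rank + n_oversample
    Y = A @ rng.standard_normal((A.shape[1], k))   # sample the column space
    Q, _ = np.linalg.qr(Y)                         # orthonormal range basis
    B = Q.T @ A                                    # small (k x n) projection
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :rank], s[:rank], Vt[:rank]

# Synthetic stand-in with a rapidly decaying spectrum, mimicking the high
# compressibility of MRF dictionaries (true rank ~20 plus small noise).
D = rng.standard_normal((20_000, 20)) @ rng.standard_normal((20, 400))
D += 1e-3 * rng.standard_normal(D.shape)

U, s, Vt = randomized_svd(D, rank=25)
D_low = (U * s) @ Vt
print("low-rank relative error:", np.linalg.norm(D_low - D) / np.linalg.norm(D))
```

The memory advantage comes from never forming a dense SVD of the full dictionary: beyond the dictionary itself, only the k-column sample Y and the k-row projection B are needed.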
de Fabritus, Lauriane; Nougairède, Antoine; Aubry, Fabien; Gould, Ernest A; de Lamballerie, Xavier
2016-01-01
Large-scale codon re-encoding is a new method of attenuating RNA viruses. However, the use of infectious clones to generate attenuated viruses has inherent technical problems. We previously developed a bacterium-free reverse genetics protocol, designated ISA, and now combined it with large-scale random codon-re-encoding method to produce attenuated tick-borne encephalitis virus (TBEV), a pathogenic flavivirus which causes febrile illness and encephalitis in humans. We produced wild-type (WT) and two re-encoded TBEVs, containing 273 or 273+284 synonymous mutations in the NS5 and NS5+NS3 coding regions respectively. Both re-encoded viruses were attenuated when compared with WT virus using a laboratory mouse model and the relative level of attenuation increased with the degree of re-encoding. Moreover, all infected animals produced neutralizing antibodies. This novel, rapid and efficient approach to engineering attenuated viruses could potentially expedite the development of safe and effective new-generation live attenuated vaccines.
Distributed Coordinated Control of Large-Scale Nonlinear Networks
Kundu, Soumya; Anghel, Marian
2015-11-08
We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems by using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.
Amine, K; El Amrani, Y; Chemlali, S; Kissa, J
2018-02-01
The aim of this Systematic Review (SR) was to assess the clinical efficacy of alternative procedures: Acellular Dermal Matrix (ADM), Xenogeneic Collagen Matrix (XCM), Enamel Matrix Derivative (EMD), and Platelet Rich Fibrin (PRF), compared with conventional procedures in the treatment of localized gingival recessions. Electronic searches were performed to identify randomized clinical trials (RCTs) on the treatment of single gingival recessions with at least 6 months of follow-up, applying the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The risk of bias was assessed using the Cochrane Collaboration's Risk of Bias tool. Eighteen randomized controlled trials (RCTs) with a total of 390 treated patients (606 recessions) were included. This systematic review showed that: Coronally Advanced Flap (CAF) in conjunction with ADM was significantly better than CAF alone, while the comparison between CAF+ADM and CTG was affected by large uncertainty; CAF+EMD was significantly better than CAF alone, whereas the comparison between CAF+EMD and CTG was affected by large uncertainty; no significant difference was recorded when comparing CAF+XCM with CAF alone, and the comparison between CAF+XCM and CTG was affected by large uncertainty; the comparison between PRF and the other techniques was affected by large uncertainty. ADM, XCM, and EMD in conjunction with CAF might be considered alternatives to CTG in the treatment of Miller class I and II gingival recessions. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Weinfurt, Kevin P; Hernandez, Adrian F; Coronado, Gloria D; DeBar, Lynn L; Dember, Laura M; Green, Beverly B; Heagerty, Patrick J; Huang, Susan S; James, Kathryn T; Jarvik, Jeffrey G; Larson, Eric B; Mor, Vincent; Platt, Richard; Rosenthal, Gary E; Septimus, Edward J; Simon, Gregory E; Staman, Karen L; Sugarman, Jeremy; Vazquez, Miguel; Zatzick, Douglas; Curtis, Lesley H
2017-09-18
The clinical research enterprise is not producing the evidence decision makers arguably need in a timely and cost-effective manner; research currently involves the use of labor-intensive parallel systems that are separate from clinical care. The emergence of pragmatic clinical trials (PCTs) poses a possible solution: these large-scale trials are embedded within routine clinical care and often involve cluster randomization of hospitals, clinics, primary care providers, etc. Interventions can be implemented by health system personnel through usual communication channels and quality improvement infrastructure, and data collected as part of routine clinical care. However, experience with these trials is nascent and best practices regarding design, operational, analytic, and reporting methodologies are undeveloped. To strengthen the national capacity to implement cost-effective, large-scale PCTs, the Common Fund of the National Institutes of Health created the Health Care Systems Research Collaboratory (Collaboratory) to support the design, execution, and dissemination of a series of demonstration projects using a pragmatic research design. In this article, we describe the Collaboratory, highlight some of the challenges encountered and solutions developed thus far, and discuss remaining barriers and opportunities for large-scale evidence generation using PCTs. A planning phase is critical, and even with careful planning, new challenges arise during execution; comparisons between arms can be complicated by unanticipated changes. Early and ongoing engagement with both health care system leaders and front-line clinicians is critical for success. There is also marked uncertainty when applying existing ethical and regulatory frameworks to PCTs, and using existing electronic health records for data capture adds complexity.
Tuttolomondo, Antonino; Di Raimondo, Domenico; Pecoraro, Rosaria; Maida, Carlo; Arnao, Valentina; Della Corte, Vittoriano; Simonetta, Irene; Corpora, Francesca; Di Bona, Danilo; Maugeri, Rosario; Iacopino, Domenico Gerardo; Pinto, Antonio
2016-03-01
Statins have beneficial effects on cerebral circulation and brain parenchyma during ischemic stroke and reperfusion. The primary hypothesis of this randomized parallel trial was that treatment with 80 mg/day of atorvastatin administered early at admission after acute atherosclerotic ischemic stroke could reduce serum levels of markers of immune-inflammatory activation of the acute phase, and that this immune-inflammatory modulation could have a possible effect on the prognosis of ischemic stroke as evaluated by several outcome indicators. We enrolled 42 patients with acute ischemic stroke classified as large arteries atherosclerosis stroke (LAAS), randomly assigned in a randomized parallel trial to the following groups: Group A, 22 patients treated with atorvastatin 80 mg (once daily) from admission until discharge; Group B, 20 patients not treated with atorvastatin 80 mg until discharge, with atorvastatin started after discharge. At 72 hours and at 7 days after acute ischemic stroke, subjects of group A showed significantly lower plasma levels of tumor necrosis factor-α, interleukin (IL)-6, and vascular cell adhesion molecule-1, whereas no significant difference with regard to plasma levels of IL-10, E-Selectin, and P-Selectin was observed between the 2 groups. At 72 hours and 7 days after admission, stroke patients treated with atorvastatin 80 mg, in comparison with stroke subjects not treated with atorvastatin, showed significantly lower mean National Institutes of Health Stroke Scale and modified Rankin scores. Our findings provide the first evidence that atorvastatin acutely administered immediately after an atherosclerotic ischemic stroke exerts a lowering effect on immune-inflammatory activation of the acute phase of stroke and that its early use is associated with a better functional and prognostic profile.
Comparison of Theory and Experiment on Aeroacoustic Loads and Deflections
NASA Astrophysics Data System (ADS)
Campos, L. M. B. C.; Bourgine, A.; Bonomi, B.
1999-01-01
The correlation of acoustic pressure loads induced by a turbulent wake on a nearby structural panel is considered; this problem is relevant to the acoustic fatigue of aircraft, rocket and satellite structures. Both the correlation of acoustic pressure loads and the panel deflections were measured in an 8-m diameter transonic wind tunnel. Using the measured correlation of acoustic pressures as an input to a finite-element aeroelastic code, the panel response was reproduced. The latter was also satisfactorily reproduced, using again the aeroelastic code, with input given by a theoretical formula for the correlation of acoustic pressures; the derivation of this formula, and the semi-empirical parameters which appear in it, are included in this paper. The comparison of acoustic responses in aeroacoustic wind tunnels (AWT) and progressive wave tubes (PWT) shows that much work needs to be done to bridge that gap; this is important since the PWT is the standard test means, whereas the AWT is more representative of real flight conditions but also more demanding in resources. Since this may be the first instance of successful modelling of acoustic fatigue, it may be appropriate to list briefly the essential "positive" features and associated physical phenomena: (i) a standard aeroelastic structural code can predict acoustic fatigue, provided that the correlation of pressure loads is adequately specified; (ii) the correlation of pressure loads is determined by the interference of acoustic waves, which depends on the exact evaluation of multiple scattering integrals, involving the statistics of random phase shifts; (iii) for the relatively low frequencies (one to a few hundred Hz) of aeroacoustic fatigue, the main cause of random phase effects is scattering by irregular wakes, which are thin on the wavelength scale, and appear as partially reflecting rough interfaces. It may also be appropriate to mention some of the "negative" features, to which illusory importance may be attached: (iv) deterministic flow features, even conspicuous or of large scale, such as convection, are not relevant to aeroacoustic fatigue, because they do not produce random phase shifts; (v) local turbulence, of scale much smaller than the wavelength of sound, cannot produce significant random phase shifts, and is also of little consequence to aeroacoustic fatigue; (vi) the precise location of sound sources can become of little consequence, after multiple scattering gives rise to a diffuse sound field; and (vii) there is not much ground for distinction between unsteady flow and sound waves, since at transonic speeds they are both associated with pressures fluctuating in time and space.
Cheon, Eun-Jin; Lee, Kwang-Hun; Park, Young-Woo; Lee, Jong-Hun; Koo, Bon-Hoon; Lee, Seung-Jae; Sung, Hyung-Mo
2017-04-01
The purpose of this study was to compare the efficacy and safety of aripiprazole versus bupropion augmentation in patients with major depressive disorder (MDD) unresponsive to selective serotonin reuptake inhibitors (SSRIs). This is the first randomized, prospective, open-label, direct comparison study between aripiprazole and bupropion augmentation. Participants had at least moderately severe depressive symptoms after 4 weeks or more of SSRI treatment. A total of 103 patients were randomized to either aripiprazole (n = 56) or bupropion (n = 47) augmentation for 6 weeks. Concomitant use of psychotropic agents was prohibited. Montgomery Asberg Depression Rating Scale, 17-item Hamilton Depression Rating scale, Iowa Fatigue Scale, Drug-Induced Extrapyramidal Symptoms Scale, Psychotropic-Related Sexual Dysfunction Questionnaire scores were obtained at baseline and after 1, 2, 4, and 6 weeks of treatment. Overall, both treatments significantly improved depressive symptoms without causing serious adverse events. There were no significant differences in the Montgomery Asberg Depression Rating Scale, 17-item Hamilton Depression Rating scale, and Iowa Fatigue Scale scores, and response rates. However, significant differences in remission rates between the 2 groups were evident at week 6 (55.4% vs 34.0%, respectively; P = 0.031), favoring aripiprazole over bupropion. There were no significant differences in adverse sexual events, extrapyramidal symptoms, or akathisia between the 2 groups. The present study suggests that aripiprazole augmentation is at least comparable to bupropion augmentation in combination with SSRI in terms of efficacy and tolerability in patients with MDD. Both aripiprazole and bupropion could help reduce sexual dysfunction and fatigue in patients with MDD. Aripiprazole and bupropion may offer effective and safe augmentation strategies in patients with MDD who are unresponsive to SSRIs. Double-blinded trials are warranted to confirm the present findings.
Coupled continuous time-random walks in quenched random environment
NASA Astrophysics Data System (ADS)
Magdziarz, M.; Szczotka, W.
2018-02-01
We introduce a coupled continuous-time random walk with the coupling characteristic of Lévy walks. Additionally, we assume that the walker moves in a quenched random environment, i.e., the site disorder at each lattice point is fixed in time. We analyze the scaling limit of such a random walk. We show that for large times the behaviour of the analyzed process is exactly the same as in the case of the uncoupled quenched trap model for Lévy flights.
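A toy simulation consistent with this setup, combining quenched site disorder with the Lévy-walk coupling in which jump length is proportional to waiting time (all parameters and modeling choices here are illustrative, not the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Quenched environment: each lattice site receives a heavy-tailed waiting
# time once, and this disorder is then fixed for all time.
n_sites = 10_001
alpha = 0.8                                  # tail exponent of the disorder
tau_site = rng.pareto(alpha, size=n_sites) + 1.0

def coupled_walk(n_steps=50_000, v=1.0):
    """Coupled CTRW: wait tau at the current site, then jump left or right
    a distance v * tau (the Levy-walk space-time coupling)."""
    pos, t, x = n_sites // 2, 0.0, 0.0
    for _ in range(n_steps):
        tau = tau_site[pos]
        t += tau
        step = 1 if rng.random() < 0.5 else -1
        x += v * step * tau
        pos = (pos + step) % n_sites         # periodic lattice for simplicity
    return t, x

t, x = coupled_walk()
print("elapsed time:", round(t, 1), " displacement:", round(x, 1))
```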
Li, Baoyue; Lingsma, Hester F; Steyerberg, Ewout W; Lesaffre, Emmanuel
2011-05-23
Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized and ordinal, with center and/or trial as random effects, and age, motor score, pupil reactivity, or trial as covariates. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS), and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm, and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analyzed using two basic logistic random effects models, with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study, and when based on a relatively large number of level-1 (patient level) data compared to the number of level-2 (hospital level) data. However, when based on a relatively sparse data set, i.e., when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference (if there is no preference from a philosophical point of view) for either a frequentist or Bayesian approach (if based on vague priors). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches the MLE of this variance was often estimated as zero, with a standard error that is either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior of the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain.
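For the dichotomized outcome with a single random center effect, the model being fitted across all of these packages can be written as:

```latex
\operatorname{logit} P\!\left(y_{ij} = 1 \mid b_j\right)
  \;=\; \mathbf{x}_{ij}^{\top} \boldsymbol{\beta} + b_j ,
\qquad
b_j \sim \mathcal{N}\!\left(0, \sigma_b^{2}\right),
```

where y_ij is the outcome of patient i in center j, x_ij collects the covariates (age, motor score, pupil reactivity, trial), β are the fixed effects, and σ_b² is the between-center variance; the two-random-effects variant adds an analogous trial-level term.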
Parallel capillary-tube-based extension of thermoacoustic theory for random porous media.
Roh, Heui-Seol; Raspet, Richard; Bass, Henry E
2007-03-01
Thermoacoustic theory is extended to stacks made of random bulk media. Characteristics of the porous stack, such as the tortuosity and dynamic shape factors, are introduced into the thermoacoustic wave equation in the low reduced-frequency approximation. Basic thermoacoustic equations for a bulk porous medium are formulated analogously to the equations for a single pore. The use of different dynamic shape factors for the viscous and thermal effects is adopted, and scaling using the dynamic shape factors and tortuosity is demonstrated. Comparisons of the calculated and experimentally derived thermoacoustic properties of reticulated vitreous carbon and aluminum foam show good agreement. A consistent mathematical model of sound propagation in a random porous medium with an imposed temperature gradient is developed. This treatment leads to an expression for the coefficient of the temperature gradient in terms of scaled cylindrical thermoviscous functions.
Emergence of Multiscaling in a Random-Force Stirred Fluid
NASA Astrophysics Data System (ADS)
Yakhot, Victor; Donzis, Diego
2017-07-01
We consider the transition to strong turbulence in an infinite fluid stirred by a Gaussian random force. The transition is defined as the first appearance of anomalous scaling of normalized moments of velocity derivatives (dissipation rates) emerging from the low-Reynolds-number Gaussian background. It is shown that, due to multiscaling, strongly intermittent rare events can be quantitatively described in terms of an infinite number of different "Reynolds numbers" reflecting a multitude of anomalous scaling exponents. The theoretically predicted transition disappears at R_λ ≤ 3. The developed theory is in quantitative agreement with the outcome of large-scale numerical simulations.
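Schematically, the multiscaling statement is that each normalized derivative moment carries its own Reynolds-number exponent (a generic form consistent with the description above, not the paper's exact notation):

```latex
M_{2n} \;\equiv\;
\frac{\bigl\langle (\partial_x u)^{2n} \bigr\rangle}
     {\bigl\langle (\partial_x u)^{2} \bigr\rangle^{n}}
\;\propto\; \mathrm{Re}^{\rho_n},
```

with a different anomalous exponent ρ_n for each order n, so that no single Reynolds number characterizes all of the intermittent rare events.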
Operating Reserves and Wind Power Integration: An International Comparison; Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Milligan, M.; Donohoo, P.; Lew, D.
2010-10-01
This paper provides a high-level international comparison of methods and key results from both operating practice and integration analysis, drawing on the informal collaboration of International Energy Agency Task 25: Large-scale Wind Integration.
Uncovering Randomness and Success in Society
Jalan, Sarika; Sarkar, Camellia; Madhusudanan, Anagha; Dwivedi, Sanjiv Kumar
2014-01-01
An understanding of how individuals shape and impact the evolution of society is vastly limited due to the unavailability of large-scale reliable datasets that can simultaneously capture information regarding individual movements and social interactions. We believe that the popular Indian film industry, “Bollywood”, can provide a social network apt for such a study. Bollywood provides massive amounts of real, unbiased data that spans more than 100 years, and hence this network has been used as a model for the present paper. The nodes which maintain a moderate degree or widely cooperate with the other nodes of the network tend to be more fit (measured as the success of the node in the industry) in comparison to the other nodes. The analysis carried forth in the current work, using a conjoined framework of complex network theory and random matrix theory, aims to quantify the elements that determine the fitness of an individual node and the factors that contribute to the robustness of a network. The authors of this paper believe that the method of study used in the current paper can be extended to study various other industries and organizations. PMID:24533073
Resurrecting hot dark matter - Large-scale structure from cosmic strings and massive neutrinos
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.
1988-01-01
These are the results of a numerical simulation of the formation of large-scale structure from cosmic-string loops in a universe dominated by massive neutrinos (hot dark matter). This model has several desirable features. The final matter distribution contains isolated density peaks embedded in a smooth background, producing a natural bias in the distribution of luminous matter. Because baryons can accrete onto the cosmic strings before the neutrinos, the galaxies will have baryon cores and dark neutrino halos. Galaxy formation in this model begins much earlier than in random-phase models. On large scales the distribution of clustered matter visually resembles the CfA survey, with large voids and filaments.
Hind, Jacqueline A.; Gensler, Gary; Brandt, Diane K.; Miller Gardner, Patricia J.; Blumenthal, Loreen; Gramigna, Gary D.; Kosek, Steven; Lundy, Donna; McGarvey-Toler, Susan; Rockafellow, Susan; Sullivan, Paula A.; Villa, Marybell; Gill, Gary D.; Lindblad, Anne S.; Logemann, Jeri A.; Robbins, JoAnne
2009-01-01
Accurate detection and classification of aspiration is a critical component of videofluoroscopic swallowing evaluation, the most commonly utilized instrumental method for dysphagia diagnosis and treatment. Currently published literature indicates that inter-judge reliability for the identification of aspiration ranges from poor to fairly good depending on the amount of training provided to clinicians. The majority of extant studies compared judgments among clinicians. No studies included judgments made during the use of a postural compensatory strategy. The purpose of this study was to examine the accuracy of judgments made by speech-language pathologists (SLPs) practicing in hospitals compared with unblinded expert judges when identifying aspiration and using the 8-point Penetration/Aspiration Scale. Clinicians received extensive training for the detection of aspiration and minimal training on use of the Penetration/Aspiration Scale. Videofluoroscopic data were collected from 669 patients as part of a large, randomized clinical trial and include judgments of 10,200 swallows made by 76 clinicians from 44 hospitals in 11 states. Judgments were made on swallows during use of dysphagia compensatory strategies: chin-down posture with thin liquids, and thickened liquids (nectar-thick and honey-thick consistencies) in a head-neutral posture. The subject population included patients with Parkinson’s disease and/or dementia. Kappa statistics indicate high accuracy for all interventions by SLPs for identification of aspiration (all κ > .86) and variable accuracy (range 69%–76%) using the Penetration/Aspiration Scale when compared to expert judges. It is concluded that while the accuracy of identifying the presence of aspiration by SLPs is excellent, more extensive training and/or image enhancement is recommended for precise use of the Penetration/Aspiration Scale. PMID:18953607
Asymptotic stability and instability of large-scale systems. [using vector Liapunov functions
NASA Technical Reports Server (NTRS)
Grujic, L. T.; Siljak, D. D.
1973-01-01
The purpose of this paper is to develop new methods for constructing vector Lyapunov functions and broaden the application of Lyapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. By redefining interconnection functions among the subsystems according to interconnection matrices, the same mathematical machinery can be used to determine connective asymptotic stability of large-scale systems under arbitrary structural perturbations.
ERIC Educational Resources Information Center
Hedberg, E. C.; Hedges, Larry V.
2014-01-01
Randomized experiments are often considered the strongest designs to study the impact of educational interventions. Perhaps the most prevalent class of designs used in large scale education experiments is the cluster randomized design in which entire schools are assigned to treatments. In cluster randomized trials (CRTs) that assign schools to…
ERIC Educational Resources Information Center
Johnson, Matthew S.; Jenkins, Frank
2005-01-01
Large-scale educational assessments such as the National Assessment of Educational Progress (NAEP) sample examinees to whom an exam will be administered. In most situations the sampling design is not a simple random sample and must be accounted for in the estimating model. After reviewing the current operational estimation procedure for NAEP, this…
NASA Astrophysics Data System (ADS)
Lamb, Derek A.
2016-10-01
While sunspots follow a well-defined pattern of emergence in space and time, small-scale flux emergence is assumed to occur randomly at all times in the quiet Sun. HMI's full-disk coverage, high cadence, spatial resolution, and duty cycle allow us to probe that basic assumption. Some case studies of emergence suggest that temporal clustering on spatial scales of 50-150 Mm may occur. If clustering is present, it could serve as a diagnostic of large-scale subsurface magnetic field structures. We present the results of a manual survey of small-scale flux emergence events over a short time period, and a statistical analysis addressing the question of whether these events show spatio-temporal behavior that is anything other than random.
Stability of large-scale systems.
NASA Technical Reports Server (NTRS)
Siljak, D. D.
1972-01-01
The purpose of this paper is to present the results obtained in a stability study of large-scale systems based upon the comparison principle and vector Liapunov functions. The exposition is essentially self-contained, with emphasis on recent innovations which utilize explicit information about the system structure. This provides a natural foundation for the stability theory of dynamic systems under structural perturbations.
Field-scale experiments reveal persistent yield gaps in low-input and organic cropping systems
Kravchenko, Alexandra N.; Snapp, Sieglinde S.; Robertson, G. Philip
2017-01-01
Knowledge of production-system performance is largely based on observations at the experimental plot scale. Although yield gaps between plot-scale and field-scale research are widely acknowledged, their extent and persistence have not been experimentally examined in a systematic manner. At a site in southwest Michigan, we conducted a 6-y experiment to test the accuracy with which plot-scale crop-yield results can inform field-scale conclusions. We compared conventional versus alternative, that is, reduced-input and biologically based–organic, management practices for a corn–soybean–wheat rotation in a randomized complete block-design experiment, using 27 commercial-size agricultural fields. Nearby plot-scale experiments (0.02-ha to 1.0-ha plots) provided a comparison of plot versus field performance. We found that plot-scale yields well matched field-scale yields for conventional management but not for alternative systems. For all three crops, at the plot scale, reduced-input and conventional managements produced similar yields; at the field scale, reduced-input yields were lower than conventional. For soybeans at the plot scale, biological and conventional managements produced similar yields; at the field scale, biological yielded less than conventional. For corn, biological management produced lower yields than conventional in both plot- and field-scale experiments. Wheat yields appeared to be less affected by the experimental scale than corn and soybean. Conventional management was more resilient to field-scale challenges than alternative practices, which were more dependent on timely management interventions; in particular, mechanical weed control. Results underscore the need for much wider adoption of field-scale experimentation when assessing new technologies and production-system performance, especially as related to closing yield gaps in organic farming and in low-resourced systems typical of much of the developing world. PMID:28096409
Scale-free Graphs for General Aviation Flight Schedules
NASA Technical Reports Server (NTRS)
Alexandov, Natalia M. (Technical Monitor); Kincaid, Rex K.
2003-01-01
In the late 1990s a number of researchers noticed that networks in biology, sociology, and telecommunications exhibited similar characteristics unlike standard random networks. In particular, they found that the cumulative degree distributions of these graphs followed a power law rather than a binomial distribution and that their clustering coefficients tended to a nonzero constant as the number of nodes, n, became large, rather than O(1/n). Moreover, these networks shared an important property with traditional random graphs: as n becomes large, the average shortest path length scales with log n. This latter property has been coined the small-world property. Taken together, these three properties (small-world, power law, and constant clustering coefficient) describe what are now most commonly referred to as scale-free networks. Since 1997 at least six books and over 400 articles have been written about scale-free networks. This manuscript gives an overview of the salient characteristics of scale-free networks. Computational experience will be provided for two mechanisms that grow (dynamic) scale-free graphs. Additional computational experience will be given for constructing (static) scale-free graphs via a tabu search optimization approach. Finally, a discussion of potential applications to general aviation networks is given.
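The growth mechanism most commonly used to produce such graphs is preferential attachment; as a point of reference, the three properties named above can be checked on a preferential-attachment graph in a few lines (a sketch using the networkx library; this is a generic illustration, not the manuscript's tabu-search construction):

```python
import networkx as nx

# Grow a preferential-attachment (Barabasi-Albert) graph: each new node
# attaches to m existing nodes with probability proportional to their degree.
G = nx.barabasi_albert_graph(n=2000, m=3, seed=42)

# Power-law tail: a few high-degree hubs coexist with many low-degree nodes.
degrees = sorted((d for _, d in G.degree()), reverse=True)
print("max degree:", degrees[0], "median degree:", degrees[len(degrees) // 2])

# Small-world property: average shortest path length grows roughly like log n.
print("avg shortest path:", nx.average_shortest_path_length(G))

# Clustering coefficient; compare with O(1/n) for an Erdos-Renyi random graph.
print("avg clustering:", nx.average_clustering(G))
```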
Blazing Signature Filter: a library for fast pairwise similarity comparisons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Joon-Yong; Fujimoto, Grant M.; Wilson, Ryan
Identifying similarities between datasets is a fundamental task in data mining and has become an integral part of modern scientific investigation. Whether the task is to identify co-expressed genes in large-scale expression surveys or to predict combinations of gene knockouts which would elicit a similar phenotype, the underlying computational task is often a multi-dimensional similarity test. As datasets continue to grow, improvements to the efficiency, sensitivity or specificity of such computation will have broad impacts as they allow scientists to more completely explore the wealth of scientific data. A significant practical drawback of large-scale data mining is that the vast majority of pairwise comparisons are unlikely to be relevant, meaning that they do not share a signature of interest. It is therefore essential to identify these unproductive comparisons as rapidly as possible and exclude them from more time-intensive similarity calculations. The Blazing Signature Filter (BSF) is a highly efficient pairwise similarity algorithm which enables extensive data mining within a reasonable amount of time. The algorithm transforms datasets into binary metrics, allowing it to utilize the computationally efficient bit operators and provide a coarse measure of similarity. As a result, the BSF can scale to high dimensionality and rapidly filter out unproductive pairwise comparisons. Two bioinformatics applications of the tool are presented to demonstrate the ability to scale to billions of pairwise comparisons and the usefulness of this approach.
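The core idea, collapsing each profile to a bit signature and using cheap bit operations as a pre-filter before any expensive similarity computation, can be sketched as follows (an illustration only; the function names and the threshold are ours, not the BSF library's):

```python
import numpy as np

def to_signature(profile, threshold=0.0):
    """Binarize a numeric profile into an integer bit signature:
    bit i is set when profile[i] exceeds the threshold."""
    bits = 0
    for i, v in enumerate(profile):
        if v > threshold:
            bits |= 1 << i
    return bits

def coarse_similarity(a, b):
    """Number of co-occurring set bits; a cheap popcount-based pre-filter."""
    return (a & b).bit_count()  # int.bit_count() requires Python >= 3.10

# Only pairs sharing enough signature bits go on to a full similarity test.
profiles = np.random.default_rng(0).standard_normal((1000, 64))
sigs = [to_signature(p) for p in profiles]
candidates = [(i, j)
              for i in range(len(sigs)) for j in range(i + 1, len(sigs))
              if coarse_similarity(sigs[i], sigs[j]) >= 28]
print(len(candidates), "of", 1000 * 999 // 2, "pairs survive the filter")
```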
Yurk, Brian P
2018-07-01
Animal movement behaviors vary spatially in response to environmental heterogeneity. An important problem in spatial ecology is to determine how large-scale population growth and dispersal patterns emerge within highly variable landscapes. We apply the method of homogenization to study the large-scale behavior of a reaction-diffusion-advection model of population growth and dispersal. Our model includes small-scale variation in the directed and random components of movement and growth rates, as well as large-scale drift. Using the homogenized model we derive simple approximate formulas for persistence conditions and asymptotic invasion speeds, which are interpreted in terms of residence index. The homogenization results show good agreement with numerical solutions for environments with a high degree of fragmentation, both with and without periodicity at the fast scale. The simplicity of the formulas, and their connection to residence index make them appealing for studying the large-scale effects of a variety of small-scale movement behaviors.
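As a concrete anchor for this model class, a reaction-diffusion-advection equation with fast-scale coefficient variation can be written in the following generic form (our notation; the paper's exact formulation may differ):

```latex
\frac{\partial u}{\partial t}
  = \frac{\partial}{\partial x}\left( d(x)\,\frac{\partial u}{\partial x} - v(x)\,u \right)
  + r(x)\,u ,
```

where d(x) is the random-movement (diffusion) coefficient, v(x) collects directed movement and large-scale drift, and r(x) is the local growth rate; homogenization replaces these rapidly varying coefficients with effective constants from which persistence conditions and invasion speeds follow.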
Validation of the Chinese expanded Euthanasia Attitude Scale.
Chong, Alice Ming-Lin; Fok, Shiu-Yeu
2013-01-01
This article reports the validation of the Chinese version of an expanded 31-item Euthanasia Attitude Scale. A 4-stage validation process included a pilot survey of 119 college students and a randomized household survey with 618 adults in Hong Kong. Confirmatory factor analysis confirmed a 4-factor structure of the scale, which can therefore be used to examine attitudes toward general, active, passive, and non-voluntary euthanasia. The scale considers the role effect in decision-making about euthanasia requests and facilitates cross-cultural comparison of attitudes toward euthanasia. The new Chinese scale is more robust than its Western predecessors conceptually and measurement-wise.
Partial synchronization in networks of non-linearly coupled oscillators: The Deserter Hubs Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freitas, Celso, E-mail: cbnfreitas@gmail.com; Macau, Elbert, E-mail: elbert.macau@inpe.br; Pikovsky, Arkady, E-mail: pikovsky@uni-potsdam.de
2015-04-15
We study the Deserter Hubs Model: a Kuramoto-like model of coupled identical phase oscillators on a network, where attractive and repulsive couplings are balanced dynamically due to nonlinearity of interactions. Under weak force, an oscillator tends to follow the phase of its neighbors, but if an oscillator is compelled to follow its peers by a sufficiently large number of cohesive neighbors, then it actually starts to act in the opposite manner, i.e., in anti-phase with the majority. Analytic results show that if the repulsion parameter is small enough in comparison with the degree of the maximum hub, then the full synchronization state is locally stable. Numerical experiments are performed to explore the model beyond this threshold, where the overall cohesion is lost. We report in detail partially synchronous dynamical regimes, like stationary phase-locking, multistability, periodic and chaotic states. Via statistical analysis of different network organizations like tree, scale-free, and random ones, we found a measure allowing one to predict the relative abundance of partially synchronous stationary states in comparison to time-dependent ones.
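A minimal numerical sketch of this kind of dynamics (our schematic reading of the mechanism, not the authors' exact equations): oscillators follow ordinary Kuramoto coupling, but an oscillator whose aggregate neighbor pull exceeds a repulsion threshold flips the sign of its coupling:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 50, 0.1
A = (rng.random((N, N)) < p).astype(float)   # random (Erdos-Renyi) network
A = np.triu(A, 1)
A = A + A.T                                  # symmetric, no self-loops
phi = rng.uniform(0, 2 * np.pi, N)
eps, dt = 2.0, 0.01                          # repulsion threshold, time step

for _ in range(20_000):
    # Attractive Kuramoto force on each node from its neighbors.
    force = (A * np.sin(phi[None, :] - phi[:, None])).sum(axis=1)
    # Deserter rule (schematic): nodes pulled harder than eps turn repulsive.
    sign = np.where(np.abs(force) > eps, -1.0, 1.0)
    phi += dt * sign * force

order = np.abs(np.exp(1j * phi).mean())      # Kuramoto order parameter
print("order parameter:", round(order, 3))   # 1 = full synchronization
```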
David, A S; Farrin, L; Hull, L; Unwin, C; Wessely, S; Wykes, T
2002-11-01
Complaints of poor memory and concentration are common in veterans of the 1991 Persian Gulf War, as are other symptoms. Despite a large research effort, such symptoms remain largely unexplained. A comprehensive battery of neuropsychological tests and rating scales was administered to 341 UK servicemen who were returnees from the Gulf War and peacekeeping duties in Bosnia, plus non-deployed military controls. All were drawn from a large randomized survey. Most were selected on the basis of impaired physical functioning defined operationally. Group comparisons revealed an association between physical functioning and symptoms of depression, post-traumatic stress reactions, increased anger and subjective cognitive failures. Poorer performance on some general cognitive measures, sequencing and attention was also seen in association with being 'ill', but virtually all differences disappeared after adjusting for depressed mood or multiple comparisons. Deployment was also associated with symptoms of post-traumatic stress and subjective cognitive failures, independently of health status, as well as minor general cognitive and constructional impairment. The latter remained significantly poorer in the Gulf group even after adjusting for depressed mood. Disturbances of mood are more prominent than quantifiable cognitive deficits in Gulf War veterans and probably lead to subjective underestimation of ability. Task performance deficits can themselves be explained by depressed mood, although the direction of causality cannot be inferred confidently. Reduced constructional ability cannot be explained in this way and could be an effect of Gulf-specific exposures.
Convex hulls of random walks in higher dimensions: A large-deviation study
NASA Astrophysics Data System (ADS)
Schawe, Hendrik; Hartmann, Alexander K.; Majumdar, Satya N.
2017-12-01
The distributions of the hypervolume V and surface ∂V of convex hulls of (multiple) random walks in higher dimensions are determined numerically, resolving probabilities far smaller than P = 10^-1000 in order to estimate large-deviation properties. For arbitrary dimensions and large walk lengths T, we suggest a scaling behavior of the distribution with the length of the walk T similar to the two-dimensional case, and characterize the behavior of the distributions in the tails. We underpin both with numerical data in d = 3 and d = 4 dimensions. Further, we confirm the analytically known means of those distributions and calculate their variances for large T.
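For the typical-event part of these distributions (the far tails probed in the paper require dedicated large-deviation sampling), hull volume and surface of a single walk can be estimated by direct simulation; a sketch with scipy:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
T, d = 10_000, 3
steps = rng.standard_normal((T, d))   # Gaussian random walk in d = 3
walk = np.cumsum(steps, axis=0)

hull = ConvexHull(walk)
# For a 3D point cloud, scipy reports the hull volume V in hull.volume
# and the surface (here dV) in hull.area.
print("V  =", hull.volume)
print("dV =", hull.area)
# Scaling check: averaged over many walks, V is expected to grow ~ T^(d/2).
```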
Krüger, Rejko; Sharma, Manu; Riess, Olaf; Gasser, Thomas; Van Broeckhoven, Christine; Theuns, Jessie; Aasly, Jan; Annesi, Grazia; Bentivoglio, Anna Rita; Brice, Alexis; Djarmati, Ana; Elbaz, Alexis; Farrer, Matthew; Ferrarese, Carlo; Gibson, J Mark; Hadjigeorgiou, Georgios M; Hattori, Nobutaka; Ioannidis, John P A; Jasinska-Myga, Barbara; Klein, Christine; Lambert, Jean-Charles; Lesage, Suzanne; Lin, Juei-Jueng; Lynch, Timothy; Mellick, George D; de Nigris, Francesa; Opala, Grzegorz; Prigione, Alessandro; Quattrone, Aldo; Ross, Owen A; Satake, Wataru; Silburn, Peter A; Tan, Eng King; Toda, Tatsushi; Tomiyama, Hiroyuki; Wirdefeldt, Karin; Wszolek, Zbigniew; Xiromerisiou, Georgia; Maraganore, Demetrius M
2011-03-01
High-profile studies have provided conflicting results regarding the involvement of the Omi/HtrA2 gene in Parkinson's disease (PD) susceptibility. Therefore, we performed a large-scale analysis of the association of common Omi/HtrA2 variants in the Genetic Epidemiology of Parkinson's disease (GEO-PD) consortium. GEO-PD sites provided clinical and genetic data including affection status, gender, ethnicity, age at study, age at examination (all subjects); age at onset and family history of PD (patients). Genotyping was performed for the five most informative SNPs spanning the Omi/HtrA2 gene in approximately 2-3 kb intervals (rs10779958, rs2231250, rs72470544, rs1183739, rs2241028). Fixed effect as well as random effect models were used to provide summary risk estimates of Omi/HtrA2 variants. The 20 GEO-PD sites provided data for 6378 cases and 8880 controls. No overall significant associations for the five Omi/HtrA2 SNPs and PD were observed using either fixed effect or random effect models. The summary odds ratios ranged between 0.98 and 1.08 and the estimates of between-study heterogeneity were not large (non-significant Q statistics for all 5 SNPs; I² estimates 0-28%). Trends for association were seen for participants of Scandinavian descent for rs2241028 (OR 1.41, p = 0.04) and for rs1183739 for age at examination (cut-off 65 years; OR 1.17, p = 0.02), but these would not be significant after adjusting for multiple comparisons and their Bayes factors were only modest. This largest association study performed to define the role of any gene in the pathogenesis of Parkinson's disease revealed no overall strong association of Omi/HtrA2 variants with PD in populations worldwide.
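The fixed- and random-effects summaries referred to here follow standard inverse-variance meta-analysis; a sketch of both estimators, with the DerSimonian-Laird moment estimate of between-site heterogeneity, on hypothetical per-site log odds ratios:

```python
import numpy as np

def meta_summary(log_or, se):
    """Fixed-effect and DerSimonian-Laird random-effects pooled odds ratio."""
    w = 1.0 / se**2
    fixed = (w * log_or).sum() / w.sum()
    # Cochran's Q and the DL moment estimate of between-study variance tau^2.
    q = (w * (log_or - fixed) ** 2).sum()
    df = len(log_or) - 1
    c = w.sum() - (w**2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)
    w_re = 1.0 / (se**2 + tau2)
    random_eff = (w_re * log_or).sum() / w_re.sum()
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0   # I^2 heterogeneity
    return np.exp(fixed), np.exp(random_eff), tau2, i2

# Hypothetical site-level odds ratios and standard errors of the log OR.
log_or = np.log(np.array([1.05, 0.92, 1.20, 1.01]))
se = np.array([0.10, 0.12, 0.15, 0.08])
or_fixed, or_random, tau2, i2 = meta_summary(log_or, se)
print(f"fixed OR {or_fixed:.2f}, random OR {or_random:.2f}, I2 {i2:.0%}")
```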
Landscape-scale geographic variations in microbial indices and labile phosphorus in Hapludults
USDA-ARS?s Scientific Manuscript database
Long-term soil and nutrient management practices can have lasting effects on the geographic distribution of soil microorganisms, function, and non-mobile nutrients such as phosphorus (P). The non-random redistribution can influence nutrient turnover rate and use efficiency of crops, in comparison to...
Random number generators for large-scale parallel Monte Carlo simulations on FPGA
NASA Astrophysics Data System (ADS)
Lin, Y.; Wang, F.; Liu, B.
2018-05-01
Through parallelization, a field programmable gate array (FPGA) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGA presents both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
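An additive lagged Fibonacci generator is defined by the recurrence x_n = (x_{n-j} + x_{n-k}) mod 2^m with lags j < k; the parallel FPGA variant replicates this recurrence across independent streams. A software sketch of the sequential recurrence (lag pair (5, 17) and word size 32 chosen for illustration):

```python
import random

class ALFG:
    """Additive lagged Fibonacci generator: x_n = (x_{n-j} + x_{n-k}) mod 2^m."""

    def __init__(self, seed_words, j=5, k=17, m=32):
        assert len(seed_words) == k and 0 < j < k
        self.state = list(seed_words)   # circular buffer of the last k outputs
        self.j, self.k = j, k
        self.mask = (1 << m) - 1
        self.i = 0

    def next(self):
        k = self.k
        # Slot i % k holds the oldest word x_{n-k}, which is overwritten below.
        x = (self.state[(self.i - self.j) % k] + self.state[self.i % k]) & self.mask
        self.state[self.i % k] = x
        self.i += 1
        return x

# At least one seed word must be odd for the full period; '| 1' ensures that.
rng = ALFG([random.getrandbits(32) | 1 for _ in range(17)])
print([rng.next() for _ in range(4)])
```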
NASA Astrophysics Data System (ADS)
Zorita, E.
2009-12-01
One of the objectives when comparing simulations of past climates to proxy-based climate reconstructions is to assess the skill of climate models to simulate climate change. This comparison may be accomplished at large spatial scales, for instance the evolution of simulated and reconstructed Northern Hemisphere annual temperature, or at regional or point scales. In both approaches a 'fair' comparison has to take into account different aspects that affect the inevitable uncertainties and biases in the simulations and in the reconstructions. These efforts face a trade-off: climate models are believed to be more skillful at large hemispheric scales, but climate reconstructions at these scales are burdened by the spatial distribution of available proxies and by methodological issues surrounding the statistical method used to translate the proxy information into large-spatial averages. Furthermore, the internal climatic noise at large hemispheric scales is low, so that the sampling uncertainty tends to be low as well. On the other hand, the skill of climate models at regional scales is limited by their coarse spatial resolution, which hinders a faithful representation of aspects important for the regional climate. At small spatial scales, the reconstruction of past climate probably faces fewer methodological problems if information from different proxies is available. The internal climatic variability at regional scales is, however, high. In this contribution some examples of the different issues faced when comparing simulations and reconstructions at small spatial scales in the past millennium are discussed. These examples comprise reconstructions from dendrochronological data and from historical documentary data in Europe and climate simulations with global and regional models. They indicate that centennial climate variations can offer a reasonable target to assess the skill of global climate models and of proxy-based reconstructions, even at small spatial scales. However, as the focus shifts towards higher-frequency variability, decadal or multidecadal, the need for larger simulation ensembles becomes more evident. Nevertheless, the comparison at these time scales may expose some lines of research on the origin of multidecadal regional climate variability.
Rheingold, Alyssa A; Zajac, Kristyn; Patton, Meghan
2012-01-01
Recent prevention research has established the efficacy of some child sexual abuse prevention programs targeting adults; however, less is known about the feasibility of implementing such programs. The current study examines the feasibility and acceptability of a child sexual abuse prevention program for child care professionals provided in two different formats: in person and Web based. The sample consisted of 188 child care professionals from a large-scale, multisite, randomized controlled trial. Findings indicate that both in-person and online training formats are feasible to implement and acceptable to professionals. When comparing formats, the in-person format was favored in terms of comfort level and likelihood of sharing information with others. These findings have significant implications for dissemination of child sexual abuse prevention programs for child care professionals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fangyan; Zhang, Song; Chung Wong, Pak
Effectively visualizing large graphs and capturing their statistical properties are two challenging tasks. To aid in these two tasks, many sampling approaches for graph simplification have been proposed, falling into three categories: node sampling, edge sampling, and traversal-based sampling. It is still unknown which approach is the best. We evaluate commonly used graph sampling methods through a combined visual and statistical comparison of graphs sampled at various rates. We conduct our evaluation on three graph models: random graphs, small-world graphs, and scale-free graphs. Initial results indicate that the effectiveness of a sampling method is dependent on the graph model, the size of the graph, and the desired statistical property. This benchmark study can be used as a guideline in choosing the appropriate method for a particular graph sampling task, and the results presented can be incorporated into graph visualization and analysis tools.
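The three sampling families compared can each be illustrated in a few lines (a networkx-based sketch with arbitrary parameters; the benchmark's own implementations are not given in the abstract):

```python
import random
import networkx as nx

def node_sample(G, rate):
    keep = random.sample(list(G.nodes), int(rate * G.number_of_nodes()))
    return G.subgraph(keep).copy()      # node sampling: induced subgraph

def edge_sample(G, rate):
    keep = random.sample(list(G.edges), int(rate * G.number_of_edges()))
    return nx.Graph(keep)               # edge sampling: keep sampled edges

def walk_sample(G, rate, restart=0.15):
    target = int(rate * G.number_of_nodes())
    v = random.choice(list(G.nodes))
    seen = {v}
    while len(seen) < target:           # traversal-based: random walk w/ jumps
        v = (random.choice(list(G.neighbors(v)))
             if random.random() > restart else random.choice(list(G.nodes)))
        seen.add(v)
    return G.subgraph(seen).copy()

G = nx.barabasi_albert_graph(5000, 3, seed=1)   # scale-free test model
for sampler in (node_sample, edge_sample, walk_sample):
    S = sampler(G, 0.1)
    print(sampler.__name__, S.number_of_nodes(), "nodes,",
          S.number_of_edges(), "edges")
```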
Unsteady loads due to propulsive lift configurations. Part A: Investigation of scaling laws
NASA Technical Reports Server (NTRS)
Morton, J. B.; Haviland, J. K.
1978-01-01
This study covered scaling laws and pressure measurements made to determine details of the large-scale jet structure and to verify the scaling laws by direct comparison. The basis of comparison was a test facility at NASA Langley in which a JT-15D exhausted over a boilerplate airfoil surface to reproduce upper-surface-blowing conditions. A quarter-scale model of this facility was built, using cold jets. A comparison between full-scale and model pressure-coefficient spectra, presented as functions of Strouhal number, showed fair agreement; however, a shift of spectral peaks was noted. This was not believed to be due to Mach number or Reynolds number effects, but did appear to be traceable to discrepancies in jet temperatures. A correction for jet temperature was then tried, similar to one used for far-field noise prediction. This was found to correct the spectral peak discrepancy.
ERIC Educational Resources Information Center
Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole
2016-01-01
Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…
NASA Technical Reports Server (NTRS)
Berchem, J.; Raeder, J.; Ashour-Abdalla, M.; Frank, L. A.; Paterson, W. R.; Ackerson, K. L.; Kokubun, S.; Yamamoto, T.; Lepping, R. P.
1998-01-01
Understanding the large-scale dynamics of the magnetospheric boundary is an important step towards achieving the ISTP mission's broad objective of assessing the global transport of plasma and energy through the geospace environment. Our approach is based on three-dimensional global magnetohydrodynamic (MHD) simulations of the solar wind-magnetosphere-ionosphere system, and consists of using interplanetary magnetic field (IMF) and plasma parameters measured by solar wind monitors upstream of the bow shock as input to the simulations for predicting the large-scale dynamics of the magnetospheric boundary. The validity of these predictions is tested by comparing local data streams with time series measured by downstream spacecraft crossing the magnetospheric boundary. In this paper, we review results from several case studies which confirm that our MHD model reproduces very well the large-scale motion of the magnetospheric boundary. The first case illustrates the complexity of the magnetic field topology that can occur at the dayside magnetospheric boundary for periods of northward IMF with strong Bx and By components. The second comparison reviewed combines dynamic and topological aspects in an investigation of the evolution of the distant tail at 200 R_E from the Earth.
Using Propensity Scores in Quasi-Experimental Designs to Equate Groups
ERIC Educational Resources Information Center
Lane, Forrest C.; Henson, Robin K.
2010-01-01
Education research rarely lends itself to large scale experimental research and true randomization, leaving the researcher to quasi-experimental designs. The problem with quasi-experimental research is that underlying factors may impact group selection and lead to potentially biased results. One way to minimize the impact of non-randomization is…
Application of stochastic processes in random growth and evolutionary dynamics
NASA Astrophysics Data System (ADS)
Oikonomou, Panagiotis
We study the effect of power-law distributed randomness on the dynamical behavior of processes such as stochastic growth patterns and evolution. First, we examine the geometrical properties of random shapes produced by a generalized stochastic Loewner Evolution driven by a superposition of a Brownian motion and a stable Levy process. The situation is defined by the usual stochastic Loewner Evolution parameter, kappa, as well as alpha, which defines the power-law tail of the stable Levy distribution. We show that the properties of these patterns change qualitatively and singularly at critical values of kappa and alpha. It is reasonable to call such changes "phase transitions". These transitions occur as kappa passes through four and as alpha passes through one. Numerical simulations are used to explore the global scaling behavior of these patterns in each "phase". We show both analytically and numerically that the growth continues indefinitely in the vertical direction for alpha greater than 1, grows logarithmically with time for alpha equal to 1, and saturates for alpha smaller than 1. The probability density has two different scales corresponding to directions along and perpendicular to the boundary. Scaling functions for the probability density are given for various limiting cases. Second, we study the effect of the architecture of biological networks on their evolutionary dynamics. In recent years, studies of the architecture of large networks have unveiled a common topology, called scale-free, in which a majority of the elements are poorly connected except for a small fraction of highly connected components. We ask how networks with distinct topologies can evolve towards a pre-established target phenotype through a process of random mutations and selection. We use networks of Boolean components as a framework to model a large class of phenotypes. Within this approach, we find that homogeneous random networks and scale-free networks exhibit drastically different evolutionary paths. While homogeneous random networks accumulate neutral mutations and evolve by sparse punctuated steps, scale-free networks evolve rapidly and continuously towards the target phenotype. Moreover, we show that scale-free networks always evolve faster than homogeneous random networks; remarkably, this property does not depend on the precise value of the topological parameter. By contrast, homogeneous random networks require a specific tuning of their topological parameter in order to optimize their fitness. This model suggests that the evolutionary paths of biological networks, punctuated or continuous, may solely be determined by the network topology.
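A discrete Loewner evolution with such a mixed driving function can be simulated by composing inverse slit maps; the sketch below (our simplified scheme with illustrative parameters; the branch handling and step sizes are the usual toy-code choices, not the paper's production method) traces the growing curve for a Brownian-plus-stable driver:

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(7)
kappa, alpha, c = 2.0, 0.8, 1.0   # SLE parameter, Levy tail index, Levy weight
n, dt = 2000, 1e-3

# Driving function: Brownian motion plus a symmetric alpha-stable process.
dU = (np.sqrt(kappa * dt) * rng.standard_normal(n)
      + c * dt ** (1 / alpha) * levy_stable.rvs(alpha, 0, size=n, random_state=7))
U = np.cumsum(dU)

def tip(k):
    """Trace point at step k: compose the inverse incremental slit maps
    w -> u + sqrt((w - u)^2 - 4 dt) backwards from step k to step 1."""
    w = U[k - 1] + 2j * np.sqrt(dt)
    for u in U[:k - 1][::-1]:
        s = np.sqrt((w - u) ** 2 - 4 * dt)
        w = u + (s if s.imag >= 0 else -s)   # branch staying in the half-plane
    return w

trace = np.array([tip(k) for k in range(1, n + 1, 20)])
print("max height of the trace:", trace.imag.max())  # saturates for alpha < 1
```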
Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.
Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P
2010-12-22
Comparative genomics resources, such as ortholog detection tools and repositories, are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource, Roundup, using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost-savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
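The scheduling idea, predicting each comparison's runtime from the genomes involved and then packing jobs in a runtime-aware order so that no instance sits idle at the end, can be sketched as follows (hypothetical cost-model coefficients; Roundup's actual estimator is not given in the abstract):

```python
import heapq
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    n_genes_a: int
    n_genes_b: int

def est_runtime_hours(job, a=2e-8, b=0.05):
    # Hypothetical cost model: the all-vs-all sequence comparison dominates,
    # so runtime scales with the product of proteome sizes plus overhead b.
    return a * job.n_genes_a * job.n_genes_b + b

def makespan(jobs, n_workers):
    """Longest-processing-time-first onto n_workers: a classic heuristic
    that shortens the idle tail and hence the billed instance-hours."""
    heap = [(0.0, w) for w in range(n_workers)]   # (current load, worker id)
    heapq.heapify(heap)
    for job in sorted(jobs, key=est_runtime_hours, reverse=True):
        load, w = heapq.heappop(heap)
        heapq.heappush(heap, (load + est_runtime_hours(job), w))
    return max(load for load, _ in heap)          # total billed hours

jobs = [Job(f"g{i}-g{j}", 5000 + 10 * i, 4000 + 7 * j)
        for i in range(30) for j in range(i + 1, 30)]
print("estimated makespan (h):", round(makespan(jobs, n_workers=16), 2))
```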
A large-scale video codec comparison of x264, x265 and libvpx for practical VOD applications
NASA Astrophysics Data System (ADS)
De Cock, Jan; Mavlankar, Aditya; Moorthy, Anush; Aaron, Anne
2016-09-01
Over the last years, we have seen exciting improvements in video compression technology, due to the introduction of HEVC and royalty-free coding specifications such as VP9. The potential compression gains of HEVC over H.264/AVC have been demonstrated in different studies, and are usually based on the HM reference software. For VP9, substantial gains over H.264/AVC have been reported in some publications, whereas others reported less optimistic results. Differences in configurations between these publications make it more difficult to assess the true potential of VP9. Practical open-source encoder implementations such as x265 and libvpx (VP9) have matured, and are now showing high compression gains over x264. In this paper, we demonstrate the potential of these encoder implementations, with settings optimized for non-real-time random access, as used in a video-on-demand encoding pipeline. We report results from a large-scale video codec comparison test, which includes x264, x265 and libvpx. A test set consisting of a variety of titles with varying spatio-temporal characteristics from our catalog is used, resulting in tens of millions of encoded frames, hence larger than test sets previously used in the literature. Results are reported in terms of PSNR, SSIM, MS-SSIM, VIF and the recently introduced VMAF quality metric. BD-rate calculations show that using x265 and libvpx vs. x264 can lead to significant bitrate savings for the same quality. x265 outperforms libvpx in most cases, but the performance gap narrows (or even reverses) at the higher resolutions.
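BD-rate summarizes the average bitrate difference between two encoders at equal quality by integrating fitted rate-quality curves; a standard sketch of the calculation (cubic fit of log-rate against quality, following Bjontegaard's method; the paper's exact implementation is not specified, and the rate/PSNR points below are made up):

```python
import numpy as np

def bd_rate(rate_ref, psnr_ref, rate_test, psnr_test):
    """Average percent bitrate change of 'test' vs 'ref' at equal quality."""
    # Fit log-rate as a cubic polynomial of quality, per Bjontegaard.
    p_ref = np.polyfit(psnr_ref, np.log(rate_ref), 3)
    p_test = np.polyfit(psnr_test, np.log(rate_test), 3)
    lo = max(min(psnr_ref), min(psnr_test))   # shared quality interval
    hi = min(max(psnr_ref), max(psnr_test))
    q = np.linspace(lo, hi, 100)
    # Average the log-rate gap over the shared interval, then convert to %.
    gap = np.trapz(np.polyval(p_test, q) - np.polyval(p_ref, q), q) / (hi - lo)
    return (np.exp(gap) - 1) * 100

# Hypothetical rate (kbps) / PSNR (dB) points for a reference and test encoder.
r_ref, q_ref = [1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5]
r_test, q_test = [700, 1400, 2800, 5600], [34.2, 36.8, 39.3, 41.8]
print("BD-rate: %.1f%%" % bd_rate(r_ref, q_ref, r_test, q_test))
```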
Effects of topology on network evolution
NASA Astrophysics Data System (ADS)
Oikonomou, Panos; Cluzel, Philippe
2006-08-01
The ubiquity of scale-free topology in nature raises the question of whether this particular network design confers an evolutionary advantage. A series of studies has identified key principles controlling the growth and the dynamics of scale-free networks. Here, we use neuron-based networks of boolean components as a framework for modelling a large class of dynamical behaviours in both natural and artificial systems. Applying a training algorithm, we characterize how networks with distinct topologies evolve towards a pre-established target function through a process of random mutations and selection. We find that homogeneous random networks and scale-free networks exhibit drastically different evolutionary paths. Whereas homogeneous random networks accumulate neutral mutations and evolve by sparse punctuated steps, scale-free networks evolve rapidly and continuously. Remarkably, this latter property is robust to variations of the degree exponent. In contrast, homogeneous random networks require a specific tuning of their connectivity to optimize their ability to evolve. These results highlight an organizing principle that governs the evolution of complex networks and that can improve the design of engineered systems.
Naming games in two-dimensional and small-world-connected random geometric networks.
Lu, Qiming; Korniss, G; Szymanski, B K
2008-01-01
We investigate a prototypical agent-based model, the naming game, on two-dimensional random geometric networks. The naming game [Baronchelli, J. Stat. Mech.: Theory Exp. (2006) P06014] is a minimal model, employing local communications, that captures the emergence of shared communication schemes (languages) in a population of autonomous semiotic agents. Implementing the naming game with local broadcasts on random geometric graphs serves as a model for agreement dynamics in large-scale, autonomously operating wireless sensor networks. Further, it captures essential features of the scaling properties of the agreement process for spatially embedded autonomous agents. Among the relevant observables capturing the temporal properties of the agreement process, we investigate the cluster-size distribution and the distribution of the agreement times, both exhibiting dynamic scaling. We also present results for the case when a small density of long-range communication links are added on top of the random geometric graph, resulting in a "small-world"-like network and yielding a significantly reduced time to reach global agreement. We construct a finite-size scaling analysis for the agreement times in this case.
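A minimal implementation of the broadcast naming game on a random geometric graph (our sketch of the model class with arbitrary parameters; the radius is chosen large enough that the graph is almost surely connected):

```python
import random
import networkx as nx

G = nx.random_geometric_graph(300, radius=0.15, seed=3)
vocab = {v: set() for v in G}      # each agent's inventory of candidate words
next_word = 0

for step in range(10**7):
    speaker = random.choice(list(G))
    if not vocab[speaker]:         # invent a new word if the inventory is empty
        vocab[speaker] = {next_word}
        next_word += 1
    word = random.choice(tuple(vocab[speaker]))
    success = False
    for hearer in G.neighbors(speaker):   # local broadcast to all neighbors
        if word in vocab[hearer]:
            vocab[hearer] = {word}        # hearer knew the word: collapse
            success = True
        else:
            vocab[hearer].add(word)       # hearer learns the word
    if success:
        vocab[speaker] = {word}           # speaker collapses on any success
    if all(v == {word} for v in vocab.values()):
        print("global agreement after", step + 1, "broadcasts")
        break
```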
Kuhlmann, Tim; Dantlgraber, Michael; Reips, Ulf-Dietrich
2017-12-01
Visual analogue scales (VASs) have shown superior measurement qualities in comparison to traditional Likert-type response scales in previous studies. The present study expands the comparison of response scales to properties of Internet-based personality scales in a within-subjects design. A sample of 879 participants filled out an online questionnaire measuring Conscientiousness, Excitement Seeking, and Narcissism. The questionnaire contained all instruments in both answer scale versions in a counterbalanced design. Results show comparable reliabilities, means, and SDs for the VAS versions of the original scales, in comparison to Likert-type scales. To assess the validity of the measurements, age and gender were used as criteria, because all three constructs have shown non-zero correlations with age and gender in previous research. Both response scales showed a high overlap and the proposed relationships with age and gender. The associations were largely identical, with the exception of an increase in explained variance when predicting age from the VAS version of Excitement Seeking (B10 = 1318.95, ΔR² = .025). VASs showed similar properties to Likert-type response scales in most cases.
Spatial confinement of active microtubule networks induces large-scale rotational cytoplasmic flow
Suzuki, Kazuya; Miyazaki, Makito; Takagi, Jun; Itabashi, Takeshi; Ishiwata, Shin’ichi
2017-01-01
Collective behaviors of motile units through hydrodynamic interactions induce directed fluid flow on a larger length scale than individual units. In cells, active cytoskeletal systems composed of polar filaments and molecular motors drive fluid flow, a process known as cytoplasmic streaming. The motor-driven elongation of microtubule bundles generates turbulent-like flow in purified systems; however, it remains unclear whether and how microtubule bundles induce large-scale directed flow like the cytoplasmic streaming observed in cells. Here, we adopted Xenopus egg extracts as a model system of the cytoplasm and found that microtubule bundle elongation induces directed flow for which the length scale and timescale depend on the existence of geometrical constraints. At the lower activity of dynein, kinesins bundle and slide microtubules, organizing extensile microtubule bundles. In bulk extracts, the extensile bundles connected with each other and formed a random network, and vortex flows with a length scale comparable to the bundle length continually emerged and persisted for 1 min at multiple places. When the extracts were encapsulated in droplets, the extensile bundles pushed the droplet boundary. This pushing force initiated symmetry breaking of the randomly oriented bundle network, leading to bundles aligning into a rotating vortex structure. This vortex induced rotational cytoplasmic flows on the length scale and timescale that were 10- to 100-fold longer than the vortex flows emerging in bulk extracts. Our results suggest that microtubule systems use not only hydrodynamic interactions but also mechanical interactions to induce large-scale temporally stable cytoplasmic flow. PMID:28265076
Wolf, M; Tamaschke, C; Mayer, W; Heger, M
2003-10-01
In homeopathy, ARNICA is widely used as a wound-healing medication and for the treatment of hematomas. In this pilot study the efficacy and safety of ARNICA D12 in patients following varicose vein surgery were investigated. Prospective, randomized, double-blind, placebo-controlled pilot trial according to ICH GCP guidelines. The study was conducted by a surgeon at the Angiosurgical Clinic, Berlin-Buch. After randomized allocation, 60 patients received either ARNICA D12 or placebo. Medication started the evening before the operation with 5 globules. On the day of the operation, one preoperative dose and hourly postoperative doses after awakening were given. On days 2-14 of the study, 5 globules 3 times a day were given. OUTCOME CRITERIA: Surface (in cm² and using a three-point verbal rating scale) and intensity of hematomas induced by the operation, complications of wound healing, and intensity of pain (five-point verbal rating scale), as well as efficacy and safety of the study medication, were assessed. Hematoma surface was reduced (from day 7 to day 14) under ARNICA by 75.5% and under placebo by 71.5% (p = 0.4726). The comparison of hematoma surface (small, medium, large) using the verbal rating scale yielded a value of p = 0.1260. Pain score decreased by 1.0 +/- 2.2 points under ARNICA and 0.3 +/- 0.8 points under placebo (p = 0.1977). Remission or improvement of pain was observed in 43.3% of patients in the ARNICA group and in 27.6% of patients in the placebo group. Tolerability was rated as very good in all cases. The results of this pilot study showed a trend towards a beneficial effect of ARNICA D12 with regard to reduction of hematoma and pain during the postoperative course. For a statistically significant proof of efficacy of ARNICA D12 in patients following varicose vein surgery, a larger sample size is necessary.
A randomized controlled trial of single point acupuncture in primary dysmenorrhea.
Liu, Cun-Zhi; Xie, Jie-Ping; Wang, Lin-Peng; Liu, Yu-Qi; Song, Jia-Shan; Chen, Yin-Ying; Shi, Guang-Xia; Zhou, Wei; Gao, Shu-Zhong; Li, Shi-Liang; Xing, Jian-Min; Ma, Liang-Xiao; Wang, Yan-Xia; Zhu, Jiang; Liu, Jian-Ping
2014-06-01
Acupuncture is often used for primary dysmenorrhea, but there is no convincing evidence due to low methodological quality. We aimed to assess the immediate effect of acupuncture at a specific acupoint compared with an unrelated acupoint and a nonacupoint on primary dysmenorrhea. The Acupuncture Analgesia Effect in Primary Dysmenorrhoea-II is a multicenter controlled trial conducted in six large hospitals of China. Patients who met the inclusion criteria were randomly assigned to the classic acupoint (N = 167), unrelated acupoint (N = 167), or nonacupoint (N = 167) group on a 1:1:1 basis. They received three sessions of electro-acupuncture at a classic acupoint (Sanyinjiao, SP6), an unrelated acupoint (Xuanzhong, GB39), or a nonacupoint location, respectively. The primary outcome was subjective pain as measured by a 100-mm visual analog scale (VAS). Measurements were obtained at 0, 5, 10, 30, and 60 minutes following the first intervention. In addition, patients scored changes of general complaints using Cox retrospective symptom scales (RSS-Cox) and a 7-point verbal rating scale (VRS) during three menstrual cycles. Secondary outcomes included VAS score for average pain, pain total time, additional in-bed time, and proportion of participants using analgesics during three menstrual cycles. Five hundred and one people underwent random assignment. The primary comparison of VAS scores following the first intervention demonstrated that the classic acupoint group was more effective than both the unrelated acupoint (-4.0 mm, 95% CI -7.1 to -0.9, P = 0.010) and nonacupoint (-4.0 mm, 95% CI -7.0 to -0.9, P = 0.012) groups. However, no significant differences were detected among the three acupuncture groups for RSS-Cox or VRS outcomes. The per-protocol analysis showed a similar pattern. No serious adverse events were noted. Specific acupoint acupuncture produced a statistically, but not clinically, significant effect compared with unrelated acupoint and nonacupoint acupuncture in primary dysmenorrhea patients. Future studies should focus on the effects of multiple-point acupuncture on primary dysmenorrhea.
Fractional Stochastic Field Theory
NASA Astrophysics Data System (ADS)
Honkonen, Juha
2018-02-01
Models describing evolution of physical, chemical, biological, social and financial processes are often formulated as differential equations with the understanding that they are large-scale equations for averages of quantities describing intrinsically random processes. Explicit account of randomness may lead to significant changes in the asymptotic behaviour (anomalous scaling) in such models especially in low spatial dimensions, which in many cases may be captured with the use of the renormalization group. Anomalous scaling and memory effects may also be introduced with the use of fractional derivatives and fractional noise. Construction of renormalized stochastic field theory with fractional derivatives and fractional noise in the underlying stochastic differential equations and master equations and the interplay between fluctuation-induced and built-in anomalous scaling behaviour is reviewed and discussed.
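As a concrete anchor for the notation, the Riemann-Liouville fractional derivative commonly used in such constructions is

```latex
D^{\alpha}_{t} f(t)
  = \frac{1}{\Gamma(n-\alpha)}\,\frac{d^{n}}{dt^{n}}
    \int_{0}^{t} \frac{f(s)}{(t-s)^{\alpha-n+1}}\,ds ,
  \qquad n-1 < \alpha < n ,
```

which reduces to the ordinary n-th derivative for integer alpha; fractional noise enters analogously through power-law temporal correlations. (This is the standard definition, given here for orientation; the review may work with other, equivalent conventions.)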
Comparison of WinSLAMM Modeled Results with Monitored Biofiltration Data
The US EPA’s Green Infrastructure Demonstration project in Kansas City incorporates both small scale individual biofiltration device monitoring, along with large scale watershed monitoring. The test watershed (100 acres) is saturated with green infrastructure components (includin...
Oono, Ryoko
2017-01-01
High-throughput sequencing technology has helped microbial community ecologists explore ecological and evolutionary patterns at unprecedented scales. The benefits of a large sample size still typically outweigh that of greater sequencing depths per sample for accurate estimations of ecological inferences. However, excluding or not sequencing rare taxa may mislead the answers to the questions 'how and why are communities different?' This study evaluates the confidence intervals of ecological inferences from high-throughput sequencing data of foliar fungal endophytes as case studies through a range of sampling efforts, sequencing depths, and taxonomic resolutions to understand how technical and analytical practices may affect our interpretations. Increasing sampling size reliably decreased confidence intervals across multiple community comparisons. However, the effects of sequencing depths on confidence intervals depended on how rare taxa influenced the dissimilarity estimates among communities and did not significantly decrease confidence intervals for all community comparisons. A comparison of simulated communities under random drift suggests that sequencing depths are important in estimating dissimilarities between microbial communities under neutral selective processes. Confidence interval analyses reveal important biases as well as biological trends in microbial community studies that otherwise may be ignored when communities are only compared for statistically significant differences.
NASA Astrophysics Data System (ADS)
Shang, H.; Chen, L.; Bréon, F.-M.; Letu, H.; Li, S.; Wang, Z.; Su, L.
2015-07-01
The principles of the Polarization and Directionality of the Earth's Reflectance (POLDER) cloud droplet size retrieval require that clouds be horizontally homogeneous. Nevertheless, the retrieval is applied by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval, and then analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-scale variability in droplet effective radius (CDR) can mislead both the CDR and effective variance (EV) retrievals. Nevertheless, the sub-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval is accurate using limited observations and is largely independent of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, the measurements in the primary rainbow region (137-145°) are used to ensure accurate large-droplet (> 15 μm) retrievals and reduce the uncertainties caused by cloud heterogeneity. We apply the improved method to the POLDER global L1B data for June 2008, and the new CDR results are compared with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets. The reason is that the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Lastly, a sub-scale retrieval case is analyzed, illustrating that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size parameters from POLDER measurements.
Wang, Lu-Yong; Fasulo, D
2006-01-01
Genome-wide association studies for complex diseases will generate massive amounts of single nucleotide polymorphism (SNP) data. Univariate statistical tests (e.g., the Fisher exact test) are used to single out non-associated SNPs. However, disease-susceptible SNPs may have little marginal effect in the population and are unlikely to be retained after univariate tests. Also, model-based methods are impractical for large-scale datasets. Moreover, genetic heterogeneity makes it harder for traditional methods to identify the genetic causes of diseases. The more recent random forest method provides a robust approach for screening SNPs at the scale of thousands. However, for larger-scale data, i.e., Affymetrix Human Mapping 100K GeneChip data, a faster method is required to screen SNPs in whole-genome, large-scale association analyses with genetic heterogeneity. We propose a boosting-based method for rapid screening in large-scale analyses of complex traits in the presence of genetic heterogeneity. It provides a relatively fast and fairly good tool for screening and limiting the candidate SNPs for the more complex computational modeling tasks that follow.
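A sketch of the screening idea with a generic gradient-boosting learner (the paper's specific boosting variant is not described in the abstract; scikit-learn and all parameters below are our illustrative choices):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n, p = 2000, 1000                       # subjects x SNPs (genotypes 0/1/2)
X = rng.integers(0, 3, size=(n, p)).astype(float)
# Hypothetical heterogeneous disease: risk via SNP pair (0, 1) or pair (2, 3).
risk = ((X[:, 0] > 0) & (X[:, 1] > 0)) | ((X[:, 2] > 0) & (X[:, 3] > 0))
y = np.where(risk, rng.random(n) < 0.7, rng.random(n) < 0.3).astype(int)

# Shallow trees capture pairwise interactions; boosting ranks SNPs by
# cumulative importance without pre-specifying an interaction model.
gb = GradientBoostingClassifier(n_estimators=200, max_depth=2, subsample=0.5)
gb.fit(X, y)
top = np.argsort(gb.feature_importances_)[::-1][:20]
print("top-ranked SNPs:", top)          # the causal SNPs 0-3 should rank high
```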
2011-01-01
Background Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. Methods We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized and ordinal, with center and/or trial as random effects, and age, motor score, pupil reactivity or trial as covariates. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using essentially two logistic random effects models, with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. Results The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study, when based on a relatively large number of level-1 (patient level) data units compared to the number of level-2 (hospital level) data units. However, when based on a relatively sparse data set, i.e. when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. Conclusions On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference for either a frequentist or Bayesian approach (if based on vague priors, and assuming no preference on philosophical grounds). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches the MLE of this variance was often estimated as zero, with a standard error that is either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior of the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain. PMID:21605357
The structure of supersonic jet flow and its radiated sound
NASA Technical Reports Server (NTRS)
Mankbadi, Reda R.; Hayder, M. E.; Povinelli, Louis A.
1993-01-01
Large-eddy simulation of a supersonic jet is presented with emphasis on capturing the unsteady features of the flow pertinent to sound emission. A high-accuracy numerical scheme is used to solve the filtered, unsteady, compressible Navier-Stokes equations while modelling the subgrid-scale turbulence. For a random inflow disturbance, the wave-like feature of the large-scale structure is demonstrated. The large-scale structure was then enhanced by imposing harmonic disturbances at the inflow. The limitations of using the full Navier-Stokes equations to calculate the far-field sound are discussed. An application of Lighthill's acoustic analogy is given with the objective of highlighting the difficulties that arise from the non-compactness of the source term.
NASA Astrophysics Data System (ADS)
Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.
2017-12-01
Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems, but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they would simulate SY at the larger spatial scales, up to global, required by ESMs. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km² in the US, Canada and Puerto Rico. In addition, we investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce SY across large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reducing the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is being developed. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.
Screening large-scale association study data: exploiting interactions using random forests.
Lunetta, Kathryn L; Hayward, L Brooke; Segal, Jonathan; Van Eerdewegh, Paul
2004-12-10
Genome-wide association studies for complex diseases will produce genotypes on hundreds of thousands of single nucleotide polymorphisms (SNPs). A logical first approach to dealing with massive numbers of SNPs is to use some test to screen the SNPs, retaining only those that meet some criterion for further study. For example, SNPs can be ranked by p-value, and those with the lowest p-values retained. When SNPs have large interaction effects but small marginal effects in a population, they are unlikely to be retained when univariate tests are used for screening. However, model-based screens that pre-specify interactions are impractical for data sets with thousands of SNPs. Random forest analysis is an alternative method that produces a single measure of importance for each predictor variable that takes into account interactions among variables without requiring model specification. Interactions increase the importance for the individual interacting variables, making them more likely to be given high importance relative to other variables. We test the performance of random forests as a screening procedure to identify small numbers of risk-associated SNPs from among large numbers of unassociated SNPs using complex disease models with up to 32 loci, incorporating both genetic heterogeneity and multi-locus interaction. Keeping other factors constant, if risk SNPs interact, the random forest importance measure significantly outperforms the Fisher Exact test as a screening tool. As the number of interacting SNPs increases, the improvement in performance of random forest analysis relative to Fisher Exact test for screening also increases. Random forests perform similarly to the univariate Fisher Exact test as a screening tool when SNPs in the analysis do not interact. In the context of large-scale genetic association studies where unknown interactions exist among true risk-associated SNPs or SNPs and environmental covariates, screening SNPs using random forest analyses can significantly reduce the number of SNPs that need to be retained for further study compared to standard univariate screening methods.
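As an illustration of the screening comparison described above, the following hedged sketch ranks simulated SNPs by random forest importance and by Fisher exact p-values; the genotype coding, disease model, and all parameters are invented for the example, not taken from the study.

```python
# Hedged sketch: random forest importance vs. Fisher exact screening on
# simulated genotypes with two interacting, weakly marginal risk SNPs.
import numpy as np
from scipy.stats import fisher_exact
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n, n_snps = 500, 200
X = rng.integers(0, 3, size=(n, n_snps))          # genotypes coded 0/1/2
risk = (X[:, 0] > 0) & (X[:, 1] > 0)              # illustrative interaction
y = rng.binomial(1, np.where(risk, 0.7, 0.4))

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
rf_rank = np.argsort(rf.feature_importances_)[::-1]

def fisher_p(g, y):
    # collapse to carrier / non-carrier for a 2x2 contingency table
    table = [[np.sum((g > 0) & (y == 1)), np.sum((g > 0) & (y == 0))],
             [np.sum((g == 0) & (y == 1)), np.sum((g == 0) & (y == 0))]]
    return fisher_exact(table)[1]

fisher_rank = np.argsort([fisher_p(X[:, j], y) for j in range(n_snps)])
print("RF top 10:    ", rf_rank[:10])
print("Fisher top 10:", fisher_rank[:10])
```

When the interacting SNPs 0 and 1 appear near the top of the random forest ranking but not the Fisher ranking, the simulation reproduces the qualitative point of the abstract.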
The clinical evaluation of platelet-rich plasma on free gingival graft's donor site wound healing.
Samani, Mahmoud Khosravi; Saberi, Bardia Vadiati; Ali Tabatabaei, S M; Moghadam, Mahdjoube Goldani
2017-01-01
It has been proved that platelet-rich plasma (PRP) can promote wound healing. PRP can therefore be advantageous in periodontal plastic surgeries, free gingival graft (FGG) being one such surgery. In this randomized split-mouth controlled trial, 10 patients who needed bilateral FGG were selected, and the two donor sites were randomly assigned to either natural healing or healing assisted with PRP. The outcome was assessed by comparing the extent of wound closure, the Manchester scale, the Landry healing scale, a visual analog scale, and tissue thickness between the study groups at different time intervals. Repeated-measures analysis of variance and the paired t-test were used, with statistical significance set at P ≤ 0.05. Significant differences between the study groups, and also across different time intervals, were seen in all parameters except the change in tissue thickness. PRP accelerates the healing process of wounds and reduces the healing time.
NASA Astrophysics Data System (ADS)
Matsui, H.; Buffett, B. A.
2017-12-01
The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layer. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved fields on the large-scale fields. We model the effects of sub-grid scale flow and magnetic field using a dynamic scale similarity model, in which four terms are introduced for the momentum flux, heat flux, Lorentz force and magnetic induction. The model was previously used in convection-driven dynamos in a rotating plane layer and a spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale similarity model. The scale similarity model is implemented in Calypso, a numerical dynamo code based on a spherical harmonics expansion. To obtain the SGS terms, spatial filtering in the horizontal directions is done by taking the convolution with a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with spherical harmonic truncation L = 255 as a reference, along with unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification results obtained by comparing these simulations, and the role of the small-scale fields in shaping the large-scale fields as expressed through the SGS terms in the LES.
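The scale-similarity idea itself is compact. Below is a minimal one-dimensional illustration (not the Calypso implementation, which works in spherical harmonic space): the SGS stress is estimated from the resolved field as tau = bar(u·u) − bar(u)·bar(u) with an explicit Gaussian test filter; the signal and filter width are arbitrary stand-ins.

```python
# Hedged 1-D illustration of a scale-similarity SGS stress estimate.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(2)
n = 1024
u = np.cumsum(rng.normal(size=n))
u -= u.mean()                                   # synthetic rough velocity

def bar(f, sigma=4.0):
    # periodic Gaussian test filter standing in for the spherical one
    return gaussian_filter1d(f, sigma, mode="wrap")

tau_ss = bar(u * u) - bar(u) * bar(u)           # scale-similarity SGS stress
print(f"mean SGS stress: {tau_ss.mean():.3f}  rms: {tau_ss.std():.3f}")
```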
On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl
2016-09-01
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can, in fact, be hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise from the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A fully automatic workflow is developed in an open-source code [1], which includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of the Representative Elementary Volume size for arbitrary physics.
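For readers unfamiliar with multilevel Monte Carlo, the following toy sketch shows the telescoping estimator on a one-dimensional integrand, with the level parameter standing in for mesh refinement; the integrand, sample counts, and level-dependent bias are invented for illustration and have nothing to do with the pore-scale solvers above.

```python
# Hedged toy MLMC: estimate E[P] via E[P_0] + sum_l E[P_l - P_{l-1}],
# with common random inputs coupling consecutive levels.
import numpy as np

rng = np.random.default_rng(3)

def P(x, l):
    h = 2.0 ** -l                           # level-l "mesh size"
    return np.sin(x) + h * np.cos(x / h)    # h-dependent bias mimics mesh error

L, N0 = 5, 40960
est = P(rng.random(N0), 0).mean()           # coarse level: many cheap samples
for l in range(1, L + 1):
    N = max(N0 >> (2 * l), 16)              # fewer samples on finer levels
    x = rng.random(N)                       # common inputs couple the levels
    est += (P(x, l) - P(x, l - 1)).mean()   # telescoping correction
print("MLMC estimate:", est, " reference:", 1 - np.cos(1.0))
```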
ERIC Educational Resources Information Center
Wijekumar, Kausalai; Meyer, Bonnie J. F.; Lei, Pui-Wa; Lin, Yu-Chu; Johnson, Lori A.; Spielvogel, James A.; Shurmatz, Kathryn M.; Ray, Melissa; Cook, Michael
2014-01-01
This article reports on a large scale randomized controlled trial to study the efficacy of a web-based intelligent tutoring system for the structure strategy designed to improve content area reading comprehension. The research was conducted with 128 fifth-grade classrooms within 12 school districts in rural and suburban settings. Classrooms within…
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
2017-10-26
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
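To see why the classical route does not scale, the sketch below draws one sample by the dense KL decomposition on a small 2-D grid; the exponential kernel and grid size are illustrative. The dense eigensolve is the step whose cost the multilevel SPDE-based sampler above is designed to avoid.

```python
# Hedged sketch: classical KL sampling of a Gaussian field on a small grid.
# The dense eigendecomposition is what becomes infeasible at large scale.
import numpy as np

n = 40                                      # n*n grid; C is (n^2) x (n^2)
xs = np.linspace(0, 1, n)
X, Y = np.meshgrid(xs, xs)
pts = np.column_stack([X.ravel(), Y.ravel()])
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
C = np.exp(-d / 0.2)                        # illustrative exponential kernel
w, V = np.linalg.eigh(C)                    # dense eigensolve: the bottleneck
w = np.clip(w, 0, None)

rng = np.random.default_rng(4)
sample = V @ (np.sqrt(w) * rng.normal(size=w.size))   # KL expansion sample
field = sample.reshape(n, n)
print(field.shape, f"std={field.std():.2f}")
```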
Efficient design of clinical trials and epidemiological research: is it possible?
Lauer, Michael S; Gordon, David; Wei, Gina; Pearson, Gail
2017-08-01
Randomized clinical trials and large-scale, cohort studies continue to have a critical role in generating evidence in cardiovascular medicine; however, the increasing concern is that ballooning costs threaten the clinical trial enterprise. In this Perspectives article, we discuss the changing landscape of clinical research, and clinical trials in particular, focusing on reasons for the increasing costs and inefficiencies. These reasons include excessively complex design, overly restrictive inclusion and exclusion criteria, burdensome regulations, excessive source-data verification, and concerns about the effect of clinical research conduct on workflow. Thought leaders have called on the clinical research community to consider alternative, transformative business models, including those models that focus on simplicity and leveraging of digital resources. We present some examples of innovative approaches by which some investigators have successfully conducted large-scale, clinical trials at relatively low cost. These examples include randomized registry trials, cluster-randomized trials, adaptive trials, and trials that are fully embedded within digital clinical care or administrative platforms.
Chaotic gas turbine subject to augmented Lorenz equations.
Cho, Kenichiro; Miyano, Takaya; Toriyama, Toshiyuki
2012-09-01
Inspired by the chaotic waterwheel invented by Malkus and Howard about 40 years ago, we have developed a gas turbine that randomly switches the sense of rotation between clockwise and counterclockwise. The nondimensionalized expressions for the equations of motion of our turbine are represented as a starlike network of many Lorenz subsystems sharing the angular velocity of the turbine rotor as the central node, referred to as augmented Lorenz equations. We show qualitative similarities between the statistical properties of the angular velocity of the turbine rotor and the velocity field of large-scale wind in turbulent Rayleigh-Bénard convection reported by Sreenivasan et al. [Phys. Rev. E 65, 056306 (2002)]. Our equations of motion achieve the random reversal of the turbine rotor through the stochastic resonance of the angular velocity in a double-well potential and the force applied by rapidly oscillating fields. These results suggest that the augmented Lorenz model is applicable as a dynamical model for the random reversal of turbulent large-scale wind through cessation.
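A rough flavor of such a star-like network of Lorenz subsystems sharing a central variable can be conveyed in a few lines. The coupling below is an illustrative guess, not the exact augmented Lorenz equations of the paper; parameters and initial conditions are arbitrary.

```python
# Hedged sketch: M Lorenz-like subsystems sharing one central variable
# (the rotor angular velocity). Coupling is illustrative, not the paper's.
import numpy as np
from scipy.integrate import solve_ivp

M, sigma, r, b = 8, 10.0, 28.0, 8.0 / 3.0

def rhs(t, s):
    omega, yz = s[0], s[1:].reshape(M, 2)   # shared node + (y_j, z_j) pairs
    y, z = yz[:, 0], yz[:, 1]
    domega = sigma * (y.mean() - omega)     # rotor driven by the mean of y_j
    dy = r * omega - y - omega * z
    dz = omega * y - b * z
    return np.concatenate([[domega], np.column_stack([dy, dz]).ravel()])

s0 = 0.1 + 0.01 * np.arange(1 + 2 * M)
sol = solve_ivp(rhs, (0, 200), s0, max_step=0.01)
omega = sol.y[0]
print("fraction of time rotating clockwise:", (omega < 0).mean())
```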
Akhondzadeh, Shahin; Fallah-Pour, Hasan; Afkham, Khosro; Jamshidi, Amir-Hossein; Khalighi-Cigaroudi, Farahnaz
2004-01-01
Background The morbidity and mortality associated with depression are considerable and continue to increase. Depression currently ranks fourth among the major causes of disability worldwide, after lower respiratory infections, prenatal conditions, and HIV/AIDS. Crocus sativus L. is used to treat depression; many medicinal plant textbooks refer to this indication, whereas there is no evidence-based document. Our objective was to compare the efficacy of stigmas of Crocus sativus (saffron) with imipramine in the treatment of mild to moderate depression in a 6-week pilot double-blind randomized trial. Methods Thirty adult outpatients who met the Diagnostic and Statistical Manual of Mental Disorders, 4th edition criteria for major depression, based on the structured clinical interview for DSM-IV, participated in the trial. Patients had a baseline Hamilton Rating Scale for Depression score of at least 18. In this double-blind, single-center trial, patients were randomly assigned to receive capsules of saffron 30 mg/day (TDS) (Group 1) or capsules of imipramine 100 mg/day (TDS) (Group 2) for the 6-week study. Results Saffron at this dose was found to be similar in efficacy to imipramine in the treatment of mild to moderate depression (F = 2.91, d.f. = 1, P = 0.09). In the imipramine group, anticholinergic effects such as dry mouth and also sedation were observed more often, as would be expected. Conclusion The main overall finding from this study is that saffron may be of therapeutic benefit in the treatment of mild to moderate depression. To the best of our knowledge this is the first clinical trial to support this indication for saffron. A large-scale trial with placebo control is warranted. PMID:15341662
MoghaddamHosseini, Vahideh; Nazarzadeh, Milad; Jahanfar, Shayesteh
2017-11-07
Fear of childbirth is a problematic mental health issue during pregnancy, but effective interventions to reduce it are not well understood. Our objective was to examine effective interventions for reducing fear of childbirth. The Cochrane Central Register of Controlled Trials, PubMed, Embase and PsycINFO were searched from inception to September 2017 without any restriction. Randomised controlled trials and quasi-randomised controlled trials comparing interventions for the treatment of fear of childbirth were included. The standardized mean differences were pooled using random and fixed effect models. Heterogeneity was assessed using Cochran's Q test and the I² index and was further explored in a meta-regression model and subgroup analyses. Ten studies including 3984 participants were included in the meta-analysis (2 quasi-randomized and 8 randomized clinical trials). Eight studies investigated education and two studies investigated hypnosis-based interventions. The pooled standardized mean differences in fear for the education and hypnosis groups in comparison with the control group were -0.46 (95% CI -0.73 to -0.19) and -0.22 (95% CI -0.34 to -0.10), respectively. Both types of intervention were effective in reducing fear of childbirth; however, our pooled results revealed that educational interventions may reduce fear with double the effect of hypnosis. Further large-scale randomized clinical trials and individual patient data meta-analysis are warranted for assessing the association. Copyright © 2017 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.
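The pooling machinery used in such a meta-analysis is standard. The sketch below computes fixed-effect and DerSimonian-Laird random-effects pooled standardized mean differences, Cochran's Q and the I² index on made-up study values, not the data of this review.

```python
# Hedged sketch: fixed-effect and DerSimonian-Laird random-effects pooling
# of standardized mean differences; all study values are invented.
import numpy as np

smd = np.array([-0.6, -0.4, -0.5, -0.2, -0.3])   # per-study SMDs
se  = np.array([0.20, 0.15, 0.25, 0.10, 0.18])   # their standard errors

w_fixed = 1.0 / se**2
fixed = np.sum(w_fixed * smd) / np.sum(w_fixed)
Q = np.sum(w_fixed * (smd - fixed) ** 2)          # Cochran's Q
df = len(smd) - 1
I2 = max(0.0, (Q - df) / Q) * 100.0               # I^2 heterogeneity index
tau2 = max(0.0, (Q - df) /
           (np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)))
w_rand = 1.0 / (se**2 + tau2)                     # DL random-effects weights
rand = np.sum(w_rand * smd) / np.sum(w_rand)
se_rand = np.sqrt(1.0 / np.sum(w_rand))
print(f"fixed={fixed:.2f}  random={rand:.2f} "
      f"(95% CI {rand - 1.96*se_rand:.2f} to {rand + 1.96*se_rand:.2f})  "
      f"Q={Q:.2f}  I2={I2:.0f}%")
```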
Measuring the topology of large-scale structure in the universe
NASA Technical Reports Server (NTRS)
Gott, J. Richard, III
1988-01-01
An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by walls of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.
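A two-dimensional toy analogue of such a genus-type measurement is sketched below (the studies above are three-dimensional): generate a Gaussian random-phase field by Fourier synthesis, threshold it, and track the Euler characteristic of the excursion set as the threshold varies; the spectrum and thresholds are arbitrary.

```python
# Hedged 2-D toy: Euler characteristic of excursion sets of a Gaussian
# random-phase field, as an analogue of a genus curve measurement.
import numpy as np

def euler_characteristic(B):
    # chi = V - E + F over the cubical complex of "on" pixels
    P = np.pad(B.astype(bool), 1)
    F = P.sum()
    E = (np.count_nonzero(P[:, :-1] | P[:, 1:])
         + np.count_nonzero(P[:-1, :] | P[1:, :]))
    V = np.count_nonzero(P[:-1, :-1] | P[:-1, 1:] | P[1:, :-1] | P[1:, 1:])
    return V - E + F

rng = np.random.default_rng(5)
n = 256
k = np.fft.fftfreq(n)[:, None] ** 2 + np.fft.fftfreq(n)[None, :] ** 2
amp = np.zeros_like(k)
amp[k > 0] = k[k > 0] ** -1.0                    # illustrative power spectrum
phase = np.exp(2j * np.pi * rng.random((n, n)))  # random phases -> Gaussian
field = np.fft.ifft2(np.sqrt(amp) * phase).real
field = (field - field.mean()) / field.std()

for nu in (-1.0, 0.0, 1.0):
    print(f"nu={nu:+.1f}  chi={euler_characteristic(field > nu)}")
```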
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
Sound production due to large-scale coherent structures
NASA Technical Reports Server (NTRS)
Gatski, T. B.
1979-01-01
The acoustic pressure fluctuations due to large-scale finite amplitude disturbances in a free turbulent shear flow are calculated. The flow is decomposed into three component scales; the mean motion, the large-scale wave-like disturbance, and the small-scale random turbulence. The effect of the large-scale structure on the flow is isolated by applying both a spatial and phase average on the governing differential equations and by initially taking the small-scale turbulence to be in energetic equilibrium with the mean flow. The subsequent temporal evolution of the flow is computed from global energetic rate equations for the different component scales. Lighthill's theory is then applied to the region with the flowfield as the source and an observer located outside the flowfield in a region of uniform velocity. Since the time history of all flow variables is known, a minimum of simplifying assumptions for the Lighthill stress tensor is required, including no far-field approximations. A phase average is used to isolate the pressure fluctuations due to the large-scale structure, and also to isolate the dynamic process responsible. Variation of mean square pressure with distance from the source is computed to determine the acoustic far-field location and decay rate, and, in addition, spectra at various acoustic field locations are computed and analyzed. Also included are the effects of varying the growth and decay of the large-scale disturbance on the sound produced.
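The triple decomposition that underlies this analysis is easy to demonstrate on synthetic data. The sketch below splits a periodically forced signal into mean, phase-averaged wave, and random turbulence parts by averaging over cycles; the signal is invented, and only the phase-average step is shown, not the acoustic calculation.

```python
# Hedged sketch of a triple decomposition via phase averaging:
# u = mean + wave (phase average minus mean) + random turbulence.
import numpy as np

rng = np.random.default_rng(13)
n_cycles, n_per = 200, 64
t = np.arange(n_cycles * n_per)
u = 1.0 + 0.5 * np.sin(2 * np.pi * t / n_per) + 0.2 * rng.normal(size=t.size)

mean = u.mean()
phase_avg = u.reshape(n_cycles, n_per).mean(axis=0)   # <u>(phase)
wave = phase_avg - mean                               # large-scale wave part
turb = u - np.tile(phase_avg, n_cycles)               # small-scale random part
print(f"wave rms: {wave.std():.3f}  turbulence rms: {turb.std():.3f}")
```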
ERIC Educational Resources Information Center
Smolkowski, Keith; Strycker, Lisa; Ward, Bryce
2016-01-01
This study evaluated the scale-up of a Safe & Civil Schools "Foundations: Establishing Positive Discipline Policies" positive behavioral interventions and supports initiative through 4 years of "real-world" implementation in a large urban school district. The study extends results from a previous randomized controlled trial…
ERIC Educational Resources Information Center
Tipton, Elizabeth; Fellers, Lauren; Caverly, Sarah; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Ruiz de Castilla, Veronica
2016-01-01
Recently, statisticians have begun developing methods to improve the generalizability of results from large-scale experiments in education. This work has included the development of methods for improved site selection when random sampling is infeasible, including the use of stratification and targeted recruitment strategies. This article provides…
Analysis and modeling of subgrid scalar mixing using numerical data
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.; Zhou, YE
1995-01-01
Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze and, subsequently, model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large-scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to the interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in large eddy simulations of scalar mixing and reaction.
Physical principles and current status of emerging non-volatile solid state memories
NASA Astrophysics Data System (ADS)
Wang, L.; Yang, C.-H.; Wen, J.
2015-07-01
Today the influence of non-volatile solid-state memories on people's lives has become more prominent because of their non-volatility, low data latency, and high robustness. As a pioneering technology representative of non-volatile solid-state memories, flash memory has recently seen widespread application in many areas ranging from electronic appliances, such as cell phones and digital cameras, to external storage devices such as universal serial bus (USB) memory. Moreover, owing to its large storage capacity, it is expected that in the near future, flash memory will replace hard-disk drives as the dominant technology in the mass storage market, especially because of recently emerging solid-state drives. However, the rapid growth of global digital data has led to the need for flash memories with larger storage capacity, thus requiring a further downscaling of the cell size. Such miniaturization is expected to be extremely difficult because of the well-known scaling limit of flash memories. It is therefore necessary either to explore innovative technologies that can extend the areal density of flash memories beyond the scaling limits, or to vigorously develop alternative non-volatile solid-state memories, including ferroelectric random-access memory, magnetoresistive random-access memory, phase-change random-access memory, and resistive random-access memory. In this paper, we review the physical principles of flash memories and the technical challenges that limit our ability to enhance their storage capacity. We then present a detailed discussion of novel technologies that can extend the storage density of flash memories beyond the commonly accepted limits. In each case, we subsequently discuss the physical principles of these new types of non-volatile solid-state memories as well as their respective merits and weaknesses when utilized for data storage applications. Finally, we predict the future prospects of the aforementioned solid-state memories for the next generation of data-storage devices based on a comparison of their performance.
Robust-yet-fragile nature of interdependent networks
NASA Astrophysics Data System (ADS)
Tan, Fei; Xia, Yongxiang; Wei, Zhi
2015-05-01
Interdependent networks have been shown to be extremely vulnerable based on the percolation model. Parshani et al. [Europhys. Lett. 92, 68002 (2010), 10.1209/0295-5075/92/68002] further indicated that the more intersimilar networks are, the more robust they are to random failures. When traffic load is considered, how do the coupling patterns impact cascading failures in interdependent networks? This question has been largely unexplored until now. In this paper, we address this question by investigating the robustness of interdependent Erdös-Rényi random graphs and Barabási-Albert scale-free networks under either random failures or intentional attacks. It is found that interdependent Erdös-Rényi random graphs are robust yet fragile under either random failures or intentional attacks. Interdependent Barabási-Albert scale-free networks, however, are only robust yet fragile under random failures but fragile under intentional attacks. We further analyze the interdependent communication network and power grid and achieve similar results. These results advance our understanding of how interdependency shapes network robustness.
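The single-network version of this robustness comparison is straightforward to reproduce. The following networkx sketch removes nodes at random or in decreasing order of degree from Erdös-Rényi and Barabási-Albert graphs and reports the surviving giant component; it covers percolation only, with no interdependency or traffic-load cascade model.

```python
# Hedged sketch: giant component after random failures vs. degree-targeted
# attacks on ER and BA graphs (percolation only, no cascade model).
import networkx as nx
import numpy as np

rng = np.random.default_rng(6)
n, m = 2000, 3
graphs = {"ER": nx.gnp_random_graph(n, 2 * m / n, seed=6),
          "BA": nx.barabasi_albert_graph(n, m, seed=6)}

def giant_after_removal(G, frac, attack):
    H = G.copy()
    if attack:
        order = [v for v, _ in sorted(H.degree, key=lambda kv: -kv[1])]
    else:
        order = list(rng.permutation(list(H.nodes())))
    H.remove_nodes_from(order[: int(frac * n)])
    return max((len(c) for c in nx.connected_components(H)), default=0) / n

for name, G in graphs.items():
    print(name,
          "random:", round(giant_after_removal(G, 0.3, False), 2),
          "attack:", round(giant_after_removal(G, 0.3, True), 2))
```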
Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories
NASA Astrophysics Data System (ADS)
Park, Kiwan; Blackman, Eric G.; Subramanian, Kandaswamy
2013-05-01
Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear with its implications, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.
KC-135 aero-optical turbulent boundary layer/shear layer experiment revisited
NASA Technical Reports Server (NTRS)
Craig, J.; Allen, C.
1987-01-01
The aero-optical effects associated with propagating a laser beam through both an aircraft turbulent boundary layer and artificially generated shear layers are examined. The data present comparisons from observed optical performance with those inferred from aerodynamic measurements of unsteady density and correlation lengths within the same random flow fields. Using optical instrumentation with tens of microsecond temporal resolution through a finite aperture, optical performance degradation was determined and contrasted with the infinite aperture time averaged aerodynamic measurement. In addition, the optical data were artificially clipped to compare to theoretical scaling calculations. Optical instrumentation consisted of a custom Q switched Nd:Yag double pulsed laser, and a holographic camera which recorded the random flow field in a double pass, double pulse mode. Aerodynamic parameters were measured using hot film anemometer probes and a five hole pressure probe. Each technique is described with its associated theoretical basis for comparison. The effects of finite aperture and spatial and temporal frequencies of the random flow are considered.
Schoenberg, Mike R; Lange, Rael T; Brickell, Tracey A; Saklofske, Donald H
2007-04-01
Neuropsychologic evaluation requires that current test performance be contrasted against a comparison standard to determine whether change has occurred. An estimate of premorbid intelligence quotient (IQ) is often used as a comparison standard. The Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is a commonly used intelligence test. However, there is no method to estimate premorbid IQ for the WISC-IV, limiting the test's utility for neuropsychologic assessment. This study develops algorithms to estimate premorbid Full Scale IQ scores. Participants were the American WISC-IV standardization sample (N = 2172). The sample was randomly divided into two groups (development and validation). The development group was used to generate 12 algorithms, which were accurate predictors of WISC-IV Full Scale IQ scores in healthy children and adolescents. These algorithms hold promise as a method to predict premorbid IQ for patients with known or suspected neurologic dysfunction; however, clinical validation is required.
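The development/validation approach described above can be sketched generically. In the example below, the predictors, their coding, and the simulated scores are entirely hypothetical stand-ins for the demographic variables and the WISC-IV standardization data.

```python
# Hedged sketch: split a sample into development/validation halves, fit a
# regression predicting Full Scale IQ, and check validation accuracy.
# All predictors and data are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2172
parent_edu = rng.integers(1, 6, n)          # hypothetical demographic codes
region = rng.integers(0, 4, n)
ethnicity = rng.integers(0, 5, n)
fsiq = (85 + 4.0 * parent_edu + rng.normal(0, 12, n)).round()

X = np.column_stack([parent_edu, region, ethnicity])
X_dev, X_val, y_dev, y_val = train_test_split(X, fsiq, test_size=0.5,
                                              random_state=0)
model = LinearRegression().fit(X_dev, y_dev)    # "development" group fit
pred = model.predict(X_val)                     # "validation" group check
print("validation R^2:", round(model.score(X_val, y_val), 2),
      " SEE:", round(float(np.std(y_val - pred)), 1))
```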
ERIC Educational Resources Information Center
van den Heuvel-Panhuizen, Marja; Robitzsch, Alexander; Treffers, Adri; Koller, Olaf
2009-01-01
This article discusses large-scale assessment of change in student achievement and takes the study by Hickendorff, Heiser, Van Putten, and Verhelst (2009) as an example. This study compared the achievement of students in the Netherlands in 1997 and 2004 on written division problems. Based on this comparison, they claim that there is a performance…
The influence of large-scale wind power on global climate.
Keith, David W; Decarolis, Joseph F; Denkenberger, David C; Lenschow, Donald H; Malyshev, Sergey L; Pacala, Stephen; Rasch, Philip J
2004-11-16
Large-scale use of wind power can alter local and global climate by extracting kinetic energy and altering turbulent transport in the atmospheric boundary layer. We report climate-model simulations that address the possible climatic impacts of wind power at regional to global scales by using two general circulation models and several parameterizations of the interaction of wind turbines with the boundary layer. We find that very large amounts of wind power can produce nonnegligible climatic change at continental scales. Although large-scale effects are observed, wind power has a negligible effect on global-mean surface temperature, and it would deliver enormous global benefits by reducing emissions of CO(2) and air pollutants. Our results may enable a comparison between the climate impacts due to wind power and the reduction in climatic impacts achieved by the substitution of wind for fossil fuels.
Bodes Pardo, Gema; Lluch Girbés, Enrique; Roussel, Nathalie A; Gallego Izquierdo, Tomás; Jiménez Penick, Virginia; Pecos Martín, Daniel
2018-02-01
To assess the effect of a pain neurophysiology education (PNE) program plus therapeutic exercise (TE) for patients with chronic low back pain (CLBP). Single-blind randomized controlled trial. Private clinic and university. Patients with CLBP for ≥6 months (N=56). Participants were randomized to receive either a TE program consisting of motor control, stretching, and aerobic exercises (n=28) or the same TE program in addition to a PNE program (n=28), conducted in two 30- to 50-minute sessions in groups of 4 to 6 participants. The primary outcome was pain intensity rated on the numerical pain rating scale which was completed immediately after treatment and at 1- and 3-month follow-up. Secondary outcome measures were pressure pain threshold, finger-to-floor distance, Roland-Morris Disability Questionnaire, Pain Catastrophizing Scale, Tampa Scale for Kinesiophobia, and Patient Global Impression of Change. At 3-month follow-up, a large change in pain intensity (numerical pain rating scale: -2.2; -2.93 to -1.28; P<.001; d=1.37) was observed for the PNE plus TE group, and a moderate effect size was observed for the secondary outcome measures. Combining PNE with TE resulted in significantly better results for participants with CLBP, with a large effect size, compared with TE alone. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
A topological analysis of large-scale structure, studied using the CMASS sample of SDSS-III
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parihar, Prachi; Gott, J. Richard III; Vogeley, Michael S.
2014-12-01
We study the three-dimensional genus topology of large-scale structure using the northern region of the CMASS Data Release 10 (DR10) sample of the SDSS-III Baryon Oscillation Spectroscopic Survey. We select galaxies with redshift 0.452 < z < 0.625 and with a stellar mass M_stellar > 10^11.56 M_⊙. We study the topology at two smoothing lengths: R_G = 21 h^-1 Mpc and R_G = 34 h^-1 Mpc. The genus topology studied at the R_G = 21 h^-1 Mpc scale results in the highest genus amplitude observed to date. The CMASS sample yields a genus curve that is characteristic of one produced by Gaussian random phase initial conditions. The data thus support the standard model of inflation where random quantum fluctuations in the early universe produced Gaussian random phase initial conditions. Modest deviations in the observed genus from random phase are as expected from shot noise effects and the nonlinear evolution of structure. We suggest the use of a fitting formula motivated by perturbation theory to characterize the shift and asymmetries in the observed genus curve with a single parameter. We construct 54 mock SDSS CMASS surveys along the past light cone from the Horizon Run 3 (HR3) N-body simulations, where gravitationally bound dark matter subhalos are identified as the sites of galaxy formation. We study the genus topology of the HR3 mock surveys with the same geometry and sampling density as the observational sample and find the observed genus topology to be consistent with ΛCDM as simulated by the HR3 mock samples. We conclude that the topology of the large-scale structure in the SDSS CMASS sample is consistent with cosmological models having primordial Gaussian density fluctuations growing in accordance with general relativity to form galaxies in massive dark matter halos.
On the large scale structure of X-ray background sources
NASA Technical Reports Server (NTRS)
Bi, H. G.; Meszaros, A.; Meszaros, P.
1991-01-01
The large scale clustering of the sources responsible for the X-ray background is discussed, under the assumption of a discrete origin. The formalism necessary for calculating the X-ray spatial fluctuations in the most general case, where the source density contrast in structures varies with redshift, is developed. A comparison of this with observational limits is useful for obtaining information concerning various galaxy formation scenarios. The calculations presented show that a varying density contrast has a small impact on the expected X-ray fluctuations. This strengthens and extends previous conclusions concerning the size and comoving density of large scale structures at redshifts between 0.5 and 4.0.
Unsupervised text mining for assessing and augmenting GWAS results.
Ailem, Melissa; Role, François; Nadif, Mohamed; Demenais, Florence
2016-04-01
Text mining can assist in the analysis and interpretation of large-scale biomedical data, helping biologists to quickly and cheaply gain confirmation of hypothesized relationships between biological entities. We set this question in the context of genome-wide association studies (GWAS), an actively emerging field that has contributed to the identification of many genes associated with multifactorial diseases. These studies identify groups of genes associated with the same phenotype, but provide no information about the relationships between these genes. Therefore, our objective is to leverage unsupervised text mining techniques, using text-based cosine similarity comparisons and clustering applied to candidate and random gene vectors, in order to augment the GWAS results. We propose a generic framework which we used to characterize the relationships between 10 genes reported as associated with asthma by a previous GWAS. The results of this experiment showed that the similarities between these 10 genes were significantly stronger than would be expected by chance (one-sided p-value < 0.01). The clustering of observed and randomly selected genes also allowed us to generate hypotheses about potential functional relationships between these genes, and thus contributed to the discovery of new candidate genes for asthma. Copyright © 2016 Elsevier Inc. All rights reserved.
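A generic version of this text-based comparison is sketched below: TF-IDF profiles per gene, the mean pairwise cosine similarity of a candidate set, and an empirical one-sided p-value against randomly drawn gene sets. The documents, candidate set, and planted signal are placeholders, not the asthma genes of the study.

```python
# Hedged sketch: permutation test of mean pairwise cosine similarity for a
# candidate gene set against random gene sets; all documents are placeholders.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(8)
docs = {f"GENE{i}": ("placeholder abstract text for gene "
                     + ("airway inflammation " if i % 7 == 0 else "biology "))
        for i in range(200)}
genes = list(docs)
tfidf = TfidfVectorizer().fit_transform([docs[g] for g in genes])

def mean_sim(idx):
    S = cosine_similarity(tfidf[idx])
    return S[np.triu_indices_from(S, k=1)].mean()

candidate = list(range(0, 200, 28))               # stand-in candidate set
obs = mean_sim(candidate)
null = [mean_sim(rng.choice(200, size=len(candidate), replace=False))
        for _ in range(999)]
p = (1 + sum(s >= obs for s in null)) / 1000.0    # one-sided permutation p
print(f"observed={obs:.3f}  p={p:.3f}")
```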
Determining Scale-dependent Patterns in Spatial and Temporal Datasets
NASA Astrophysics Data System (ADS)
Roy, A.; Perfect, E.; Mukerji, T.; Sylvester, L.
2016-12-01
Spatial and temporal datasets of interest to Earth scientists often contain plots of one variable against another, e.g., rainfall magnitude vs. time or fracture aperture vs. spacing. Such data, comprised of distributions of events along a transect or timeline together with their magnitudes, can display persistent or antipersistent trends, as well as random behavior, that may contain signatures of underlying physical processes. Lacunarity is a technique that was originally developed for multiscale analysis of data. In a recent study we showed that lacunarity can be used to reveal changes in scale-dependent patterns in fracture spacing data. Here we present a further improvement of our technique, with lacunarity applied to various non-binary datasets comprised of event spacings and magnitudes. We test the technique on a set of four synthetic datasets, three of which are based on an autoregressive model and have magnitudes at every point along the "timeline", thus representing antipersistent, persistent, and random trends. The fourth dataset is made up of five clusters of events, each containing a set of random magnitudes. The concept of the lacunarity ratio, LR, is introduced; this is the lacunarity of a given dataset normalized to the lacunarity of its random counterpart. It is demonstrated that LR can successfully delineate scale-dependent changes in terms of antipersistence and persistence in the synthetic datasets. The technique is then applied to three different types of data: a hundred-year rainfall record from Knoxville, TN, USA, a set of varved sediments from the Marca Shale, and a set of fracture aperture and spacing data from NE Mexico. While the rainfall data and varved sediments both appear persistent at small scales, at larger scales they both become random. On the other hand, the fracture data show antipersistence at small scales (within clusters) and random behavior at large scales. Such differences in scale-dependent behavior, from antipersistent to random, persistent to random, or otherwise, may be related to differences in the physicochemical properties and processes contributing to multiscale datasets.
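A minimal gliding-box version of this analysis, including the lacunarity ratio LR introduced above, might look as follows; the clustered test series and box sizes are illustrative.

```python
# Hedged sketch: gliding-box lacunarity L = <M^2>/<M>^2 and the lacunarity
# ratio LR (clustered series vs. its random counterpart); data are synthetic.
import numpy as np

def lacunarity(signal, box):
    masses = np.convolve(signal, np.ones(box), mode="valid")  # box masses
    return np.mean(masses ** 2) / np.mean(masses) ** 2

rng = np.random.default_rng(9)
n = 4096
random_series = rng.random(n)
clustered = np.zeros(n)
for c in rng.choice(n - 100, size=5, replace=False):   # 5 event clusters
    clustered[c: c + 100] = rng.random(100)

for box in (4, 16, 64, 256):
    LR = lacunarity(clustered, box) / lacunarity(random_series, box)
    print(f"box={box:4d}  lacunarity ratio={LR:.2f}")
```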
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2013-01-01
Large-scale experiments that involve nested structures may assign treatment conditions either to subgroups such as classrooms or to individuals such as students within subgroups. Key aspects of the design of such experiments include knowledge of the variance structure in higher levels and the sample sizes necessary to reach sufficient power to…
ERIC Educational Resources Information Center
Chung, Gregory K. W. K.; Choi, Kilchan; Baker, Eva L.; Cai, Li
2014-01-01
A large-scale randomized controlled trial tested the effects of researcher-developed learning games on a transfer measure of fractions knowledge. The measure contained items similar to standardized assessments. Thirty treatment and 29 control classrooms (~1500 students, 9 districts, 26 schools) participated in the study. Students in treatment…
Márquez-Cruz, Maribel; Díaz-Martínez, Juan Pablo; Soto-Molina, Herman; De Saráchaga, Adib Jorge; Cervantes-Arriaga, Amin; Llorens-Arenas, Rodrigo; Rodríguez-Violante, Mayela
2016-01-01
Parkinson's disease (PD) is the second most common neurodegenerative disease. There are no clinical trials comparing all available pharmacological therapies for the treatment of early PD. The objective of this review is to indirectly analyze the efficacy of antiparkinson drugs currently available in Latin America. A systematic review was performed exploring only placebo-controlled randomized trials comparing antiparkinson monotherapy (levodopa, pramipexole, rasagiline, or selegiline) in patients with PD on Hoehn and Yahr stages I through III published from January 1994 to May 2014. The primary outcome was the mean change in the Unified PD Rating Scale (UPDRS) I, II and III. A mixed treatment comparison analysis (indirect comparisons) through a random-effects model was performed. Levodopa demonstrated the highest effects in terms of UPDRS score improvement both from baseline and when compared to other treatments. Levodopa showed a 60.1% probability of granting the greatest reduction in UPDRS I, II and III.
Fractal Analyses of High-Resolution Cloud Droplet Measurements.
NASA Astrophysics Data System (ADS)
Malinowski, Szymon P.; Leclerc, Monique Y.; Baumgardner, Darrel G.
1994-02-01
Fractal analyses of individual cloud droplet distributions using aircraft measurements along one-dimensional horizontal cross sections through clouds are performed. Box counting and cluster analyses are used to determine spatial scales of inhomogeneity of cloud droplet spacing. These analyses reveal that droplet spatial distributions do not exhibit a fractal behavior. A high variability in local droplet concentration in cloud volumes undergoing mixing was found. In these regions, thin filaments of cloudy air with droplet concentration close to those observed in cloud cores were found. Results suggest that these filaments may be anisotropic. Additional box counting analyses performed for various classes of cloud droplet diameters indicate that large and small droplets are similarly distributed, except for the larger characteristic spacing of large droplets.A cloud-clear air interface defined by a certain threshold of total droplet count (TDC) was investigated. There are indications that this interface is a convoluted surface of a fractal nature, at least in actively developing cumuliform clouds. In contrast, TDC in the cloud interior does not have fractal or multifractal properties. Finally a random Cantor set (RCS) was introduced as a model of a fractal process with an ill-defined internal scale. A uniform measure associated with the RCS after several generations was introduced to simulate the TDC records. Comparison of the model with real TDC records indicates similar properties of both types of data series.
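The box-counting step used in these analyses reduces to counting occupied boxes across scales. The one-dimensional sketch below estimates a box-counting dimension from synthetic droplet positions; for uniformly random spacing the estimate stays near 1, consistent with the non-fractal droplet result above.

```python
# Hedged sketch: 1-D box counting on synthetic droplet positions.
# A power law N(r) ~ r^-D gives the box-counting dimension D.
import numpy as np

rng = np.random.default_rng(14)
pos = np.sort(rng.random(5000))             # synthetic droplet positions

sizes = 2.0 ** -np.arange(2, 11)            # box sizes r
counts = [np.unique(np.floor(pos / r)).size for r in sizes]
D = -np.polyfit(np.log(sizes), np.log(counts), 1)[0]
print("estimated box-counting dimension:", round(float(D), 2))
```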
Tuttolomondo, Antonino; Di Raimondo, Domenico; Pecoraro, Rosaria; Maida, Carlo; Arnao, Valentina; Corte, Vittoriano Della; Simonetta, Irene; Corpora, Francesca; Di Bona, Danilo; Maugeri, Rosario; Iacopino, Domenico Gerardo; Pinto, Antonio
2016-01-01
Statins have beneficial effects on cerebral circulation and brain parenchyma during ischemic stroke and reperfusion. The primary hypothesis of this randomized parallel trial was that treatment with 80 mg/day of atorvastatin, administered early at admission after acute atherosclerotic ischemic stroke, could reduce serum levels of markers of acute-phase immune-inflammatory activation, and that this immune-inflammatory modulation could affect the prognosis of ischemic stroke as evaluated by several outcome indicators. We enrolled 42 patients with acute ischemic stroke classified as large artery atherosclerosis stroke (LAAS), randomly assigned in a parallel trial to the following groups: Group A, 22 patients treated with atorvastatin 80 mg (once daily) from admission until discharge; Group B, 20 patients not treated with atorvastatin 80 mg until discharge, with atorvastatin started after discharge. At 72 hours and at 7 days after acute ischemic stroke, subjects in group A showed significantly lower plasma levels of tumor necrosis factor-α, interleukin (IL)-6 and vascular cell adhesion molecule-1, whereas no significant difference in plasma levels of IL-10, E-Selectin and P-Selectin was observed between the 2 groups. At 72 hours and 7 days after admission, stroke patients treated with atorvastatin 80 mg showed significantly lower mean National Institutes of Health Stroke Scale and modified Rankin scores than stroke subjects not treated with atorvastatin. Our findings provide the first evidence that atorvastatin acutely administered immediately after an atherosclerotic ischemic stroke exerts a lowering effect on the immune-inflammatory activation of the acute phase of stroke and that its early use is associated with a better functional and prognostic profile. PMID:27043681
NASA Astrophysics Data System (ADS)
Jia, Zhongxiao; Yang, Yanfei
2018-05-01
In this paper, we propose new randomization-based algorithms for large-scale linear discrete ill-posed problems with general-form regularization: min ‖Lx‖ subject to ‖Ax − b‖ ≤ τ‖e‖, where L is a regularization matrix and e the noise term. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as those by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
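The TRSVD building block is simple to sketch in numpy. The function below computes a rank-k truncated randomized SVD with oversampling q; it is only this building block, not the full MTRSVD regularization algorithm with LSQR inner solves, and the test matrix is synthetic.

```python
# Hedged sketch: rank-k truncated randomized SVD (TRSVD) with oversampling q,
# i.e., the RSVD range sketch truncated back to rank k.
import numpy as np

def trsvd(A, k, q=10, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Y = A @ rng.normal(size=(n, k + q))     # range sketch with oversampling
    Q, _ = np.linalg.qr(Y)                  # orthonormal basis for range(Y)
    B = Q.T @ A                             # small (k+q) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k]          # truncate to rank k

# synthetic ill-posed test matrix with fast singular value decay
Q1 = np.linalg.qr(np.random.default_rng(1).normal(size=(300, 300)))[0]
Q2 = np.linalg.qr(np.random.default_rng(2).normal(size=(300, 300)))[0]
A = Q1 @ np.diag(np.logspace(0, -10, 300)) @ Q2
U, s, Vt = trsvd(A, k=20)
print("top singular values:", np.round(s[:5], 4))
```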
Robbins, Blaine
2013-01-01
Sociologists, political scientists, and economists all suggest that culture plays a pivotal role in the development of large-scale cooperation. In this study, I used generalized trust as a measure of culture to explore if and how culture impacts intentional homicide, my operationalization of cooperation. I compiled multiple cross-national data sets and used pooled time-series linear regression, single-equation instrumental-variables linear regression, and fixed- and random-effects estimation techniques on an unbalanced panel of 118 countries and 232 observations spread over a 15-year time period. Results suggest that culture and large-scale cooperation form a tenuous relationship, while economic factors such as development, inequality, and geopolitics appear to drive large-scale cooperation.
The Amordad database engine for metagenomics.
Behnam, Ehsan; Smith, Andrew D
2014-10-15
Several technical challenges in metagenomic data analysis, including assembling metagenomic sequence data or identifying operational taxonomic units, are both significant and well known. These forms of analysis are increasingly cited as conceptually flawed, given the extreme variation within traditionally defined species and rampant horizontal gene transfer. Furthermore, computational requirements of such analysis have hindered content-based organization of metagenomic data at large scale. In this article, we introduce the Amordad database engine for alignment-free, content-based indexing of metagenomic datasets. Amordad places the metagenome comparison problem in a geometric context, and uses an indexing strategy that combines random hashing with a regular nearest neighbor graph. This framework allows refinement of the database over time by continual application of random hash functions, with the effect of each hash function encoded in the nearest neighbor graph. This eliminates the need to explicitly maintain the hash functions in order for query efficiency to benefit from the accumulated randomness. Results on real and simulated data show that Amordad can support logarithmic query time for identifying similar metagenomes even as the database size reaches into the millions. Source code, licensed under the GNU general public license (version 3), is freely available for download from http://smithlabresearch.org/amordad. Contact: andrewds@usc.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
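The indexing idea can be illustrated with random hyperplane hashing over feature vectors, with candidates from the matching bucket refined by exact cosine similarity. This is a hedged sketch of the general technique, not Amordad's implementation; the nearest neighbor graph refinement is omitted and all data are random.

```python
# Hedged sketch: random hyperplane hashing for cosine similarity search.
# Feature vectors stand in for k-mer frequency profiles of metagenomes.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(10)
d, n_db, n_bits = 256, 10000, 16
db = rng.random((n_db, d))                  # stand-in metagenome profiles
planes = rng.normal(size=(n_bits, d))       # one random hyperplane per bit

def signature(v):
    return tuple((planes @ v) > 0)          # sign pattern = hash bucket key

buckets = defaultdict(list)
for i, v in enumerate(db):
    buckets[signature(v)].append(i)

query = db[42]                              # self-retrieval check
cand = buckets[signature(query)]            # candidates from the same bucket
cos = (db[cand] @ query
       / (np.linalg.norm(db[cand], axis=1) * np.linalg.norm(query)))
print("bucket size:", len(cand), " best match:", cand[int(np.argmax(cos))])
```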
Some Syndromes Among Suicidal People: The Problem of Suicide Potentiality.
ERIC Educational Resources Information Center
Wold, Carl I.
An on-going research project at the Los Angeles Suicide Prevention Center is attempting to describe the potential suicide. Comparisons on a rating scale were made among patients who commit suicide and a random sample of case histories from the coroner's office. Approximately 10 syndromes or subgroupings of people who commit suicide have been…
Impact of degree heterogeneity on the behavior of trapping in Koch networks
NASA Astrophysics Data System (ADS)
Zhang, Zhongzhi; Gao, Shuyang; Xie, Wenlei
2010-12-01
Previous work shows that the mean first-passage time (MFPT) for random walks to a given hub node (the node with maximum degree) in uncorrelated random scale-free networks is closely related to the exponent γ of the power-law degree distribution P(k) ~ k^(-γ), which describes the extent of heterogeneity of the scale-free network structure. However, extensive empirical research indicates that real networked systems also display ubiquitous degree correlations. In this paper, we address the trapping issue on Koch networks, i.e., a special random walk with one trap fixed at a hub node. Koch networks are power-law with characteristic exponent γ in the range between 2 and 3, and they can be either assortative or disassortative. We calculate exactly the MFPT, that is, the average of the first-passage times from all other nodes to the trap. The explicit solution obtained shows that in large networks the MFPT varies linearly with the node number N, which is independent of γ and in sharp contrast to the scaling behavior of the MFPT observed for uncorrelated random scale-free networks, where γ qualitatively influences the MFPT of the trapping problem.
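The quantity at stake — the MFPT to a hub trap, averaged over all starting nodes — can also be estimated by direct Monte Carlo simulation. The sketch below uses a Barabási–Albert graph as a generic scale-free stand-in (the paper's exact results are for Koch networks specifically); the function name and parameters are ours.

```python
import networkx as nx
import numpy as np

def mfpt_to_hub(G, walks_per_node=20, seed=0):
    """Monte Carlo estimate of the mean first-passage time to the
    maximum-degree node (the trap), averaged over all other start nodes."""
    rng = np.random.default_rng(seed)
    trap = max(G.degree, key=lambda kv: kv[1])[0]
    nbrs = {v: list(G.neighbors(v)) for v in G}
    times = []
    for start in G:
        if start == trap:
            continue
        for _ in range(walks_per_node):
            v, t = start, 0
            while v != trap:
                v = nbrs[v][rng.integers(len(nbrs[v]))]  # uniform random step
                t += 1
            times.append(t)
    return np.mean(times)

G = nx.barabasi_albert_graph(300, 2, seed=0)
print(mfpt_to_hub(G))   # repeat for several N to read off the scaling with network size
```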
Relationships among measures of managerial personality traits.
Miner, J B
1976-08-01
Comparisons were made to determine the degree of convergence among three measures associated with leadership success in large, hierarchic organizations in the business sector: the Miner Sentence Completion Scale, the Ghiselli Self-Description Inventory, and the F-Scale. Correlational analyses and comparisons between means were made using college student and business manager samples. The results indicated considerable convergence for the first two measures, but not for the F-Scale. The F-Scale was related to the Miner Sentence Completion Scale in the student group, but relationships were nonexistent among the managers. Analyses of the individual F-Scale items which produced the relationship among the students suggested that early family-related experiences and attitudes may contribute to the development of motivation to manage, but lose their relevance for it later, under the onslaught of actual managerial experience.
Connecting the large- and the small-scale magnetic fields of solar-like stars
NASA Astrophysics Data System (ADS)
Lehmann, L. T.; Jardine, M. M.; Mackay, D. H.; Vidotto, A. A.
2018-05-01
A key question in understanding the observed magnetic field topologies of cool stars is the link between the small- and the large-scale magnetic field and the influence of the stellar parameters on the magnetic field topology. We examine various simulated stars to connect the small-scale with the observable large-scale field. The highly resolved 3D simulations we used couple a flux transport model with a non-potential coronal model using a magnetofrictional technique. The surface magnetic field of these simulations is decomposed into spherical harmonics, which enables us to analyse the magnetic field topologies on a wide range of length scales and to filter the large-scale magnetic field for a direct comparison with the observations. We show that the large-scale field of the self-consistent simulations fits the observed solar-like stars and is mainly set up by the global dipolar field and the large-scale properties of the flux pattern, e.g. the averaged latitudinal position of the emerging small-scale field and its global polarity pattern. The stellar parameters flux emergence rate, differential rotation and meridional flow all affect the large-scale magnetic field topology: an increased flux emergence rate increases the magnetic flux in all field components, and an increased differential rotation increases the toroidal field fraction by decreasing the poloidal field. The meridional flow affects the distribution of the magnetic energy across the spherical harmonic modes.
Large-scale data analysis of power grid resilience across multiple US service regions
NASA Astrophysics Data System (ADS)
Ji, Chuanyi; Wei, Yun; Mei, Henry; Calzada, Jorge; Carey, Matthew; Church, Steve; Hayes, Timothy; Nugent, Brian; Stella, Gregory; Wallace, Matthew; White, Joe; Wilcox, Robert
2016-05-01
Severe weather events frequently result in large-scale power failures, affecting millions of people for extended durations. However, the lack of comprehensive, detailed failure and recovery data has impeded large-scale resilience studies. Here, we analyse data from four major service regions representing Upstate New York during Super Storm Sandy and daily operations. Using non-stationary spatiotemporal random processes that relate infrastructural failures to recoveries and cost, our data analysis shows that local power failures have a disproportionally large non-local impact on people (that is, the top 20% of failures interrupted 84% of services to customers). A large number (89%) of small failures, represented by the bottom 34% of customers and commonplace devices, resulted in 56% of the total cost of 28 million customer interruption hours. Our study shows that extreme weather does not cause, but rather exacerbates, existing vulnerabilities, which are obscured in daily operations.
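The headline concentration statistic (the top 20% of failures accounting for 84% of interrupted services) is a Lorenz-type computation that is easy to reproduce on any event-impact table. The sketch below uses synthetic heavy-tailed failure sizes, not the study's utility data.

```python
import numpy as np

def top_share(impacts, fraction=0.2):
    """Share of total impact (e.g., customer-interruption hours)
    accounted for by the top `fraction` of failure events."""
    x = np.sort(np.asarray(impacts, dtype=float))[::-1]
    k = max(1, int(round(fraction * x.size)))
    return x[:k].sum() / x.sum()

# Heavy-tailed synthetic failure sizes reproduce the qualitative finding
# that a small fraction of failures dominates the total customer impact.
rng = np.random.default_rng(0)
impacts = rng.pareto(1.2, size=10_000) + 1.0
print(f"top 20% of failures -> {top_share(impacts):.0%} of total impact")
```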
Rabini, Alessia; Piazzini, Diana B; Bertolini, Carlo; Deriu, Laura; Saccomanno, Maristella F; Santagada, Domenico A; Sgadari, Antonio; Bernabei, Roberto; Fabbriciani, Carlo; Marzetti, Emanuele; Milano, Giuseppe
2012-04-01
Single-blind randomized clinical trial, with a follow-up of 24 weeks. To determine the effects of hyperthermia via localized microwave diathermy on pain and disability in comparison to subacromial corticosteroid injections in patients with rotator cuff tendinopathy. Hyperthermia improves symptoms and function in several painful musculoskeletal disorders. However, the effects of microwave diathermy in rotator cuff tendinopathy have not yet been established. Ninety-two patients with rotator cuff tendinopathy and pain lasting for at least 3 months were recruited from the outpatient clinic of the Department of Orthopaedics and Traumatology, University Hospital, Rome, Italy. Participants were randomly allocated to either local microwave diathermy or subacromial corticosteroids. The primary outcome measure was the short form of the Disabilities of the Arm, Shoulder and Hand Questionnaire (QuickDASH). Secondary outcome measures were the Constant-Murley shoulder outcome score and a visual analog scale for pain assessment. At the end of treatment and at follow-up, both treatment groups experienced improvements in all outcome measures relative to baseline values. Changes over time in QuickDASH, Constant-Murley, and visual analog scale scores were not different between treatment arms. In patients with rotator cuff tendinopathy, the effects of localized microwave diathermy on disability, shoulder function, and pain are equivalent to those elicited by subacromial corticosteroid injections.
Implications of Small Samples for Generalization: Adjustments and Rules of Thumb
ERIC Educational Resources Information Center
Tipton, Elizabeth; Hallberg, Kelly; Hedges, Larry V.; Chan, Wendy
2015-01-01
Policy-makers are frequently interested in understanding how effective a particular intervention may be for a specific (and often broad) population. In many fields, particularly education and social welfare, the ideal form of these evaluations is a large-scale randomized experiment. Recent research has highlighted that sites in these large-scale…
Nagel, Corey L; Kirby, Miles A; Zambrano, Laura D; Rosa, Ghislane; Barstow, Christina K; Thomas, Evan A; Clasen, Thomas F
2016-12-15
In Rwanda, pneumonia and diarrhea are the first and second leading causes of death, respectively, among children under five. Household air pollution (HAP) resulting from cooking indoors with biomass fuels on traditional stoves is a significant risk factor for pneumonia, while consumption of contaminated drinking water is a primary cause of diarrheal disease. To date, there have been no large-scale effectiveness trials of programmatic efforts to provide either improved cookstoves or household water filters at scale in a low-income country. In this paper we describe the design of a cluster-randomized trial to evaluate the impact of a national-level program to distribute and promote the use of improved cookstoves and advanced water filters to the poorest quarter of households in Rwanda. We randomly allocated 72 sectors (administratively defined units) in Western Province to the intervention, with the remaining 24 sectors in the province serving as controls. In the intervention sectors, roughly 100,000 households received improved cookstoves and household water filters through a government-sponsored program targeting the poorest quarter of households nationally. The primary outcome measures are the incidence of acute respiratory infection (ARI) and diarrhea among children under five years of age. Over a one-year surveillance period, all cases of ARI and diarrhea identified by health workers in the study area will be extracted from records maintained at health facilities and by community health workers (CHW). In addition, we are conducting intensive, longitudinal data collection among a random sample of households in the study area for in-depth assessment of coverage, use, environmental exposures, and additional health measures. Although previous research has examined the impact of providing household water treatment and improved cookstoves on child health, there have been no studies of national-level programs to deliver these interventions at scale in a developing country. The results of this study, the first RCT of a large-scale programmatic cookstove or household water filter intervention, will inform global efforts to reduce childhood morbidity and mortality from diarrheal disease and pneumonia. This trial is registered at Clinicaltrials.gov (NCT02239250).
Backscattering from a Gaussian distributed, perfectly conducting, rough surface
NASA Technical Reports Server (NTRS)
Brown, G. S.
1977-01-01
The problem of scattering by random surfaces possessing many scales of roughness is analyzed. The approach is applicable to bistatic scattering from dielectric surfaces; however, this specific analysis is restricted to backscattering from a perfectly conducting surface in order to illustrate the method more clearly. The surface is assumed to be Gaussian distributed so that the surface height can be split into large and small scale components, relative to the electromagnetic wavelength. A first order perturbation approach is employed wherein the scattering solution for the large scale structure is perturbed by the small scale diffraction effects. The scattering from the large scale structure is treated via geometrical optics techniques. The effect of the large scale surface structure is shown to be equivalent to a convolution in k-space of the height spectrum with the following: the shadowing function, a polarization and surface slope dependent function, and a Gaussian factor resulting from the unperturbed geometrical optics solution. This solution provides a continuous transition between the near normal incidence geometrical optics and wide angle Bragg scattering results.
Large scale structure in universes dominated by cold dark matter
NASA Technical Reports Server (NTRS)
Bond, J. Richard
1986-01-01
The theory of Gaussian random density field peaks is applied to a numerical study of the large-scale structure developing from adiabatic fluctuations in models of biased galaxy formation in universes with Omega = 1, h = 0.5 dominated by cold dark matter (CDM). The angular anisotropy of the cross-correlation function demonstrates that the far-field regions of cluster-scale peaks are asymmetric, as recent observations indicate. These regions will generate pancakes or filaments upon collapse. One-dimensional singularities in the large-scale bulk flow should arise in these CDM models, appearing as pancakes in position space. They are too rare to explain the CfA bubble walls, but pancakes that are just turning around now are sufficiently abundant and would appear to be thin walls normal to the line of sight in redshift space. Large scale streaming velocities are significantly smaller than recent observations indicate. To explain the reported 700 km/s coherent motions, mass must be significantly more clustered than galaxies with a biasing factor of less than 0.4 and a nonlinear redshift at cluster scales greater than one for both massive neutrino and cold models.
Sánchez, R; Carreras, B A; van Milligen, B Ph
2005-01-01
The fluid limit of a recently introduced family of nonintegrable (nonlinear) continuous-time random walks is derived in terms of fractional differential equations. In this limit, it is shown that the formalism allows for the modeling of the interaction between multiple transport mechanisms with not only disparate spatial scales but also different temporal scales. For this reason, the resulting fluid equations may find application in the study of a large number of nonlinear multiscale transport problems, ranging from the study of self-organized criticality to the modeling of turbulent transport in fluids and plasmas.
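As a hedged illustration of the kind of equation such fluid limits produce, a generic space-time fractional diffusion equation can be written as

```latex
\frac{\partial^{\beta} n}{\partial t^{\beta}}
  \;=\; D\,\frac{\partial^{\alpha} n}{\partial |x|^{\alpha}},
\qquad 0 < \beta \le 1,\quad 0 < \alpha \le 2,
```

where the time derivative is of Caputo type, the space derivative is a Riesz fractional derivative, and (α, β) = (2, 1) recovers classical diffusion. Superposing terms with different (α, β) pairs is one way to represent multiple transport mechanisms with disparate spatial and temporal scales; the paper's specific equations may differ in detail.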
On supervised graph Laplacian embedding CA model & kernel construction and its application
NASA Astrophysics Data System (ADS)
Zeng, Junwei; Qian, Yongsheng; Wang, Min; Yang, Yongzhong
2017-01-01
There are many methods to construct a kernel from given data attribute information; the Gaussian radial basis function (RBF) kernel is one of the most popular. The key observation is that real-world data carry not only attribute information but also label information indicating the data class. In order to make use of both data attribute information and data label information, in this work we propose a supervised kernel construction method: supervised information from the training data is integrated into the standard kernel construction process to improve the discriminative property of the resulting kernel. As a second key application, a supervised Laplacian embedding cellular automaton model is developed for two-lane heterogeneous traffic flow with safe distances and large-scale trucks. Based on the properties of traffic flow in China, we re-calibrate the cell length, velocity, random slowing mechanism and lane-change conditions, and use simulation tests to study the relationships among speed, density and flux. The numerical results show that large-scale trucks have great effects on the traffic flow, which depend on the proportion of large-scale trucks, the random slowing rate and the number of lane changes.
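The abstract does not spell out the construction, but a common way to inject label information into a kernel is to blend an attribute-based RBF kernel with an ideal label kernel on the training pairs. The convex combination below is our illustrative reading, with all parameter names assumed.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix built from attribute information only."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def supervised_kernel(X, y, gamma=1.0, alpha=0.5):
    """Blend the attribute kernel with an ideal label kernel
    (1 where training labels agree, 0 otherwise)."""
    K_attr = rbf_kernel(X, X, gamma)
    K_label = (y[:, None] == y[None, :]).astype(float)
    # A convex combination of PSD kernels is still PSD.
    return (1 - alpha) * K_attr + alpha * K_label

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = (X[:, 0] > 0).astype(int)
K = supervised_kernel(X, y)   # more block-structured, hence more discriminative
```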
Polymer Dynamics from Synthetic to Biological Macromolecules
NASA Astrophysics Data System (ADS)
Richter, D.; Niedzwiedz, K.; Monkenbusch, M.; Wischnewski, A.; Biehl, R.; Hoffmann, B.; Merkel, R.
2008-02-01
High resolution neutron scattering, together with a meticulous choice of contrast conditions, allows one to access the large scale dynamics of soft materials, including biological molecules, in space and time. In this contribution we present two examples: one from the world of synthetic polymers, the other from biomolecules. First, we address the peculiar dynamics of miscible polymer blends with very different component glass transition temperatures. Polymethylmethacrylate (PMMA) and polyethyleneoxide (PEO) are perfectly miscible but differ in glass transition temperature by 200 K. We present quasielastic neutron scattering investigations of the dynamics of the fast component in the range from ångströms to nanometers over a time frame of five orders of magnitude. All data may be consistently described in terms of a Rouse model with random friction, reflecting the random environment imposed on the fast, mobile PEO by the nearly frozen PMMA matrix. In the second part we touch on some new developments relating to the large scale internal dynamics of proteins measured by neutron spin echo. We report results of pioneering studies which show the feasibility of such experiments on large scale protein motion and will most likely initiate further studies in the future.
Reflections on experimental research in medical education.
Cook, David A; Beckman, Thomas J
2010-08-01
As medical education research advances, it is important that education researchers employ rigorous methods for conducting and reporting their investigations. In this article we discuss several important yet oft neglected issues in designing experimental research in education. First, randomization controls for only a subset of possible confounders. Second, the posttest-only design is inherently stronger than the pretest-posttest design, provided the study is randomized and the sample is sufficiently large. Third, demonstrating the superiority of an educational intervention in comparison to no intervention does little to advance the art and science of education. Fourth, comparisons involving multifactorial interventions are hopelessly confounded, have limited application to new settings, and do little to advance our understanding of education. Fifth, single-group pretest-posttest studies are susceptible to numerous validity threats. Finally, educational interventions (including the comparison group) must be described in detail sufficient to allow replication.
ERIC Educational Resources Information Center
Rutherford, Teomara; Kibrick, Melissa; Burchinal, Margaret; Richland, Lindsey; Conley, AnneMarie; Osborne, Keara; Schneider, Stephanie; Duran, Lauren; Coulson, Andrew; Antenore, Fran; Daniels, Abby; Martinez, Michael E.
2010-01-01
This paper describes the background, methodology, preliminary findings, and anticipated future directions of a large-scale multi-year randomized field experiment addressing the efficacy of ST Math [Spatial-Temporal Math], a fully-developed math curriculum that uses interactive animated software. ST Math's unique approach minimizes the use of…
While large-scale, randomized surveys estimate the percentage of a region’s streams in poor ecological condition, identifying particular stream reaches or watersheds in poor condition is an equally important goal for monitoring and management. We built predictive models of strea...
The global reference atmospheric model, mod 2 (with two scale perturbation model)
NASA Technical Reports Server (NTRS)
Justus, C. G.; Hargraves, W. R.
1976-01-01
The Global Reference Atmospheric Model was improved to produce more realistic simulations of vertical profiles of atmospheric parameters. A revised two-scale random perturbation model is described, using perturbation magnitudes which are adjusted to conform to constraints imposed by the perfect gas law and the hydrostatic condition. The two-scale perturbation model produces appropriately correlated (horizontally and vertically) small scale and large scale perturbations. These stochastically simulated perturbations are representative of the magnitudes and wavelengths of perturbations produced by tides and planetary scale waves (large scale) and by turbulence and gravity waves (small scale). Other new features of the model are: (1) a second order geostrophic wind relation which does not "blow up" at low latitudes as the ordinary geostrophic relation does; and (2) revised quasi-biennial amplitudes and phases and revised stationary perturbations, based on data through 1972.
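A stripped-down version of such a two-scale correlated perturbation model can be built from two AR(1) sequences with different correlation lengths and magnitudes, summed level by level. The magnitudes and correlations below are illustrative assumptions, and the gas-law and hydrostatic constraints the model actually enforces are not shown.

```python
import numpy as np

def correlated_profile(n_levels, corr, sigma, rng):
    """AR(1) sequence: vertically correlated perturbations with lag-one
    correlation `corr` and stationary standard deviation `sigma`."""
    p = np.empty(n_levels)
    p[0] = rng.standard_normal()
    for i in range(1, n_levels):
        p[i] = corr * p[i - 1] + np.sqrt(1 - corr**2) * rng.standard_normal()
    return sigma * p

rng = np.random.default_rng(0)
n = 200                                                        # vertical levels
large = correlated_profile(n, corr=0.99, sigma=0.03, rng=rng)  # tides, planetary waves
small = correlated_profile(n, corr=0.80, sigma=0.01, rng=rng)  # turbulence, gravity waves
perturbation = large + small   # fractional perturbation about the mean profile
```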
Modeling fluid diffusion in cerebral white matter with random walks in complex environments
NASA Astrophysics Data System (ADS)
Levy, Amichai; Cwilich, Gabriel; Buldyrev, Sergey V.; Weeden, Van J.
2012-02-01
Recent studies with diffusion MRI have shown new aspects of geometric order in the brain, including complex path coherence within the cerebral cortex, and organization of cerebral white matter and connectivity across multiple scales. The main assumption of these studies is that water molecules diffuse along the myelin sheaths of neuron axons in the white matter, and thus the anisotropy of their diffusion tensor observed by MRI can provide information about the direction of the axons connecting different parts of the brain. We model the diffusion of particles confined in the space between bundles of cylindrical obstacles representing fibrous structures of various orientations. We have investigated the directional properties of the diffusion by studying the angular distribution of the end points of the random walks as a function of their length, to understand the scale over which the distribution randomizes. We will show evidence of a qualitative change in the behavior of the diffusion for different volume fractions of obstacles. Comparisons with three-dimensional MRI images will be illustrated.
Mehta, Kala M; Gallagher-Thompson, Dolores; Varghese, Mathew; Loganathan, Santosh; Baruah, Upasana; Seeher, Katrin; Zandi, Diana; Dua, Tarun; Pot, Anne Margriet
2018-05-08
Dementia has a huge physical, psychological, social and economic impact upon caregivers, families and societies at large. There has been a growing impetus to utilize Internet interventions given their potential scalability, and presumed cost-effectiveness and accessibility. In this paper, we describe the design of a randomized controlled trial (RCT) aiming to study the impact of online self-help programs on caregivers of people with dementia in India. The experimental group will receive an interactive training and support program and the comparison group will receive an education-only e-book. It will be among the first online support intervention RCTs for a mental health condition in a lower-middle income country. Two hundred and eight participants are expected to be recruited via several strategies (email, Internet and social media, telephone and face-to-face) starting in the Bangalore region of India. The inclusion criteria for participation in the trial are: (1) being 18 years or older, (2) being a self-reported caregiver of a person with dementia, (3) self-report that a family member has a diagnosis of dementia (AD8 ≥ 2), and experiencing caregiver distress (≥ 4 on a 1-item burden scale ranging from 1 to 10, or ≥ 4 and < 20 on the 10-item Center for Epidemiologic Studies-Depression (CES-D) scale, or ≥ 4 and < 15 on the 7-item Generalized Anxiety Disorder Scale). The intervention group will be offered iSupport, an online self-help training and support program, enabling a personalized education plan with a maximum of 23 lessons. These modules present a range of topics from "what is dementia?" to "dealing with challenging behaviors like aggression." The comparison group will receive an education-only e-book containing similar content. The outcomes of this trial are: caregiver burden as measured by the 22-item Zarit Burden Scale, depressive symptoms, anxiety symptoms (primary outcomes), and quality of life, person-centered attitude, self-efficacy and mastery (secondary outcomes). Based on the findings of this trial, we will examine the potential use and scale-up of iSupport for caregiver distress in India. This style of online self-help program could be expanded to other regions or countries or to other suitable caregiver groups. Clinical Trials Registry-India (CTRI), ID: CTRI/2017/02/007876.
Fault Tolerant Frequent Pattern Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shohdy, Sameh; Vishnu, Abhinav; Agrawal, Gagan
FP-Growth is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large scale datasets. While several researchers have designed distributed memory FP-Growth algorithms, it is pivotal to consider fault tolerant FP-Growth, which can address the increasing fault rates in large scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and advanced MPI features to guarantee an O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing, though in many cases the recovery can be completed without any disk access, incurring no memory overhead for checkpointing. We evaluate our FT algorithm on a large scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed 20x average speed-up in comparison to Spark, establishing that a well designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.
NASA Astrophysics Data System (ADS)
Kröger, Knut; Creutzburg, Reiner
2013-05-01
The aim of this paper is to show the usefulness of modern forensic software tools for processing large-scale digital investigations. In particular, we focus on the new version of Nuix 4.2 and compare it with AccessData FTK 4.2, X-Ways Forensics 16.9 and Guidance EnCase Forensic 7 regarding performance, functionality, usability and capability. We will show how these software tools work with large forensic images and how capable they are in examining complex and big data scenarios.
Haile, Sarah R; Guerra, Beniamino; Soriano, Joan B; Puhan, Milo A
2017-12-21
Prediction models and prognostic scores have been increasingly popular in both clinical practice and clinical research settings, for example to aid in risk-based decision making or control for confounding. In many medical fields, a large number of prognostic scores are available, but practitioners may find it difficult to choose between them due to lack of external validation as well as lack of comparisons between them. Borrowing methodology from network meta-analysis, we describe an approach to Multiple Score Comparison meta-analysis (MSC) which permits concurrent external validation and comparisons of prognostic scores using individual patient data (IPD) arising from a large-scale international collaboration. We describe the challenges in adapting network meta-analysis to the MSC setting, for instance the need to explicitly include correlations between the scores on a cohort level, and how to deal with many multi-score studies. We propose first using IPD to make cohort-level aggregate discrimination or calibration scores, comparing all to a common comparator. Then, standard network meta-analysis techniques can be applied, taking care to consider correlation structures in cohorts with multiple scores. Transitivity, consistency and heterogeneity are also examined. We provide a clinical application, comparing prognostic scores for 3-year mortality in patients with chronic obstructive pulmonary disease using data from a large-scale collaborative initiative. We focus on the discriminative properties of the prognostic scores. Our results show clear differences in performance, with ADO and eBODE showing higher discrimination with respect to mortality than other considered scores. The assumptions of transitivity and local and global consistency were not violated. Heterogeneity was small. We applied a network meta-analytic methodology to externally validate and concurrently compare the prognostic properties of clinical scores. Our large-scale external validation indicates that the scores with the best discriminative properties to predict 3 year mortality in patients with COPD are ADO and eBODE.
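At the cohort level, the discrimination input to such an MSC analysis is simply a set of C-statistics contrasted against a common comparator. The sketch below computes those contrasts for one synthetic cohort; the score names are placeholders, and the subsequent network meta-analysis step (pooling contrasts across cohorts while modelling their correlation) is not shown.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_contrasts(scores, outcome, reference):
    """Cohort-level discrimination (C-statistic) of each prognostic score,
    expressed as a difference versus a common comparator score."""
    ref = roc_auc_score(outcome, scores[reference])
    return {name: roc_auc_score(outcome, s) - ref
            for name, s in scores.items() if name != reference}

rng = np.random.default_rng(0)
risk = rng.random(1000)                       # latent 3-year mortality risk
died = rng.random(1000) < risk                # observed outcome
scores = {                                    # noisy prognostic scores (stand-ins)
    "comparator": risk + 0.3 * rng.standard_normal(1000),
    "score_A": risk + 0.2 * rng.standard_normal(1000),
    "score_B": risk + 0.5 * rng.standard_normal(1000),
}
print(auc_contrasts(scores, died, "comparator"))
```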
ERIC Educational Resources Information Center
Miller, Sarah; Connolly, Paul
2013-01-01
Tutoring is commonly employed to prevent early reading failure, and evidence suggests that it can have a positive effect. This article presents findings from a large-scale ("n" = 734) randomized controlled trial evaluation of the effect of "Time to Read"--a volunteer tutoring program aimed at children aged 8 to 9 years--on…
ERIC Educational Resources Information Center
Wheeler, Marc E.; Keller, Thomas E.; DuBois, David L.
2010-01-01
Between 2007 and 2009, reports were released on the results of three separate large-scale random assignment studies of the effectiveness of school-based mentoring programs for youth. The studies evaluated programs implemented by Big Brothers Big Sisters of America (BBBSA) affiliates (Herrera et al., 2007), Communities In Schools of San Antonio,…
The role of fanatics in consensus formation
NASA Astrophysics Data System (ADS)
Gündüç, Semra
2015-08-01
A model of opinion dynamics with two types of agents as social actors is presented, using the Ising thermodynamic model as the dynamics template. The agents are considered either as opportunists, which live at fixed sites and interact with their neighbors, or as fanatics/missionaries, which move from site to site randomly in pursuit of converting agents of the opposite opinion with the help of opportunists. Here, the moving agents act as an external influence on the opportunists to convert them to the opposite opinion. It is shown by numerical simulations that such dynamics of opinion formation may explain some details of consensus formation even when one of the opinions is held by a minority. Regardless of the distribution of the opinion, societies of different sizes exhibit different opinion formation behavior and time scales. In order to understand the general behavior, scaling relations are studied, obtained by comparing opinion formation processes observed in societies with varying population and number of randomly moving agents. For the proposed model, two types of scaling relations are observed. In fixed size societies, increasing the number of randomly moving agents gives a scaling relation for the time scale of the opinion formation process. The second type of scaling relation is due to size dependent information propagation in finite but large systems, namely finite-size scaling.
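One concrete reading of this model is a lattice of Ising-like opportunists updated with heat-bath (Glauber) dynamics, plus a population of fanatics performing random walks and adding a local field of fixed sign wherever they sit. The coupling below is our assumption for illustration; the paper's exact update rules may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n_fanatics, steps, beta = 32, 20, 50_000, 1.0
spins = rng.choice([-1, 1], size=(L, L))             # opportunists' opinions
fanatics = rng.integers(0, L, size=(n_fanatics, 2))  # moving agents, all opinion +1

for _ in range(steps):
    fanatics = (fanatics + rng.integers(-1, 2, size=fanatics.shape)) % L  # random moves
    i, j = rng.integers(0, L, size=2)
    # Local field: nearest neighbors plus any co-located fanatics pushing toward +1.
    field = sum(spins[(i + di) % L, (j + dj) % L]
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
    field += (fanatics == (i, j)).all(axis=1).sum()
    # Glauber (heat-bath) update from the Ising template.
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
    spins[i, j] = 1 if rng.random() < p_up else -1

print("final average opinion:", spins.mean())
```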
NASA Technical Reports Server (NTRS)
Bretherton, Christopher S.
2002-01-01
The goal of this project was to compare observations of marine and arctic boundary layers with: (1) parameterization systems used in climate and weather forecast models; and (2) two and three dimensional eddy resolving (LES) models for turbulent fluid flow. Based on this comparison, we hoped to better understand, predict, and parameterize the boundary layer structure and cloud amount, type, and thickness as functions of large scale conditions that are predicted by global climate models. The principal achievements of the project were as follows: (1) Development of a novel boundary layer parameterization for large-scale models that better represents the physical processes in marine boundary layer clouds; and (2) Comparison of column output from the ECMWF global forecast model with observations from the SHEBA experiment. Overall the forecast model did predict most of the major precipitation events and synoptic variability observed over the year of observation of the SHEBA ice camp.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, H.S.; Stone, C.M.; Krieg, R.D.
Several large scale in situ experiments in bedded salt formations are currently underway at the Waste Isolation Pilot Plant (WIPP) near Carlsbad, New Mexico, USA. In these experiments, the thermal and creep responses of salt around several different underground room configurations are being measured. Data from the tests are to be compared to thermal and structural responses predicted in pretest reference calculations. The purpose of these comparisons is to evaluate computational models developed from laboratory data prior to fielding of the in situ experiments. In this paper, the computational models used in the pretest reference calculation for one of the large scale tests, the Overtest for Defense High Level Waste, are described; and the pretest computed thermal and structural responses are compared to early data from the experiment. The comparisons indicate that computed and measured temperatures for the test agree to within ten percent, but that measured deformation rates are between two and three times greater than corresponding computed rates. 10 figs., 3 tabs.
Groups of galaxies in the Center for Astrophysics redshift survey
NASA Technical Reports Server (NTRS)
Ramella, Massimo; Geller, Margaret J.; Huchra, John P.
1989-01-01
By applying the Huchra and Geller (1982) objective group identification algorithm to the Center for Astrophysics' redshift survey, a catalog of 128 groups with three or more members is extracted, and 92 of these are used as a statistical sample. A comparison of the distribution of group centers with the distribution of all galaxies in the survey indicates qualitatively that groups trace the large-scale structure of the region. The physical properties of groups may be related to the details of large-scale structure, and it is concluded that differences among group catalogs may be due to the properties of large-scale structures and their location relative to the survey limits.
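The Huchra–Geller group finder is, at its core, a friends-of-friends procedure: galaxies closer than a linking length belong to the same group, transitively. The sketch below implements that core with a fixed linking length and union-find; the published algorithm additionally works in redshift space and scales the linking lengths to compensate for the magnitude limit, which is omitted here.

```python
import numpy as np
from scipy.spatial import cKDTree

def friends_of_friends(positions, linking_length, min_members=3):
    """Group points transitively: any pair closer than the linking
    length is joined into the same group (union-find)."""
    parent = list(range(len(positions)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    for a, b in cKDTree(positions).query_pairs(linking_length):
        parent[find(a)] = find(b)
    groups = {}
    for i in range(len(positions)):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) >= min_members]

pts = np.random.default_rng(0).random((2000, 3))
print(len(friends_of_friends(pts, 0.02)))   # catalog of groups with 3+ members
```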
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bromberger, Seth A.; Klymko, Christine F.; Henderson, Keith A.
Betweenness centrality is a graph statistic used to find vertices that are participants in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems, and its complete form requires the calculation of all-pairs shortest paths for each vertex. This leads to a time complexity of O(|V||E|), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations on a subset of randomly-selected vertices. This reduces the complexity of the centrality estimation to O(|S||E|), |S| < |V|, which can be scaled appropriately based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identified a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.
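The baseline the report compares against — estimation from randomly selected seed vertices — is available directly in networkx, whose betweenness_centrality accepts a sample size k. The alternate seed-selection strategies the report proposes would replace that random sample; the comparison harness below is our own sketch.

```python
import networkx as nx

# Exact betweenness runs Brandes' algorithm from every vertex, O(|V||E|);
# estimation runs it from |S| seed vertices only, O(|S||E|).
G = nx.barabasi_albert_graph(2000, 3, seed=0)

exact = nx.betweenness_centrality(G)                     # all |V| seeds
estimate = nx.betweenness_centrality(G, k=100, seed=1)   # 100 random seeds

top_exact = set(sorted(exact, key=exact.get, reverse=True)[:10])
top_est = set(sorted(estimate, key=estimate.get, reverse=True)[:10])
print(len(top_exact & top_est), "of the top-10 vertices recovered from 100 seeds")
```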
Xu, Yinlin; Ma, Qianli D Y; Schmitt, Daniel T; Bernaola-Galván, Pedro; Ivanov, Plamen Ch
2011-11-01
We investigate how various coarse-graining (signal quantization) methods affect the scaling properties of long-range power-law correlated and anti-correlated signals, quantified by the detrended fluctuation analysis. Specifically, for coarse-graining in the magnitude of a signal, we consider (i) the Floor, (ii) the Symmetry and (iii) the Centro-Symmetry coarse-graining methods. We find that for anti-correlated signals coarse-graining in the magnitude leads to a crossover to random behavior at large scales, and that with increasing the width of the coarse-graining partition interval Δ, this crossover moves to intermediate and small scales. In contrast, the scaling of positively correlated signals is less affected by the coarse-graining, with no observable changes when Δ < 1, while for Δ > 1 a crossover appears at small scales and moves to intermediate and large scales with increasing Δ. For very rough coarse-graining (Δ > 3) based on the Floor and Symmetry methods, the position of the crossover stabilizes, in contrast to the Centro-Symmetry method where the crossover continuously moves across scales and leads to a random behavior at all scales; thus indicating a much stronger effect of the Centro-Symmetry compared to the Floor and the Symmetry method. For coarse-graining in time, where data points are averaged in non-overlapping time windows, we find that the scaling for both anti-correlated and positively correlated signals is practically preserved. The results of our simulations are useful for the correct interpretation of the correlation and scaling properties of symbolic sequences.
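A minimal version of one magnitude coarse-graining followed by DFA can be sketched as below. We read the Floor method as quantization to multiples of the partition width Δ (an assumption on our part), and use white noise as the test signal; reproducing the crossovers reported above would additionally require generating long-range correlated and anti-correlated signals.

```python
import numpy as np

def floor_coarse_grain(x, delta):
    """Floor-style magnitude coarse-graining: quantize to multiples of delta."""
    return delta * np.floor(x / delta)

def dfa(x, scales):
    """Detrended fluctuation analysis: RMS fluctuation F(s) of the
    integrated signal around linear fits in non-overlapping windows."""
    y = np.cumsum(x - x.mean())
    F = []
    for s in scales:
        n = y.size // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        coef = np.polynomial.polynomial.polyfit(t, segs.T, 1)
        F.append(np.sqrt(((segs - np.polynomial.polynomial.polyval(t, coef)) ** 2).mean()))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(2**14)                     # uncorrelated control signal
scales = np.unique(np.logspace(1, 3, 12).astype(int))
for delta in (0.5, 1.0, 2.0):
    F = dfa(floor_coarse_grain(x, delta), scales)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print(f"delta={delta}: DFA scaling exponent ~ {alpha:.2f}")
```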
Topology of large-scale structure. IV - Topology in two dimensions
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Cohen, Alexander P.; Hamilton, Andrew J. S.; Gott, J. Richard, III; Weinberg, David H.
1989-01-01
In a recent series of papers, an algorithm was developed for quantitatively measuring the topology of the large-scale structure of the universe and this algorithm was applied to numerical models and to three-dimensional observational data sets. In this paper, it is shown that topological information can be derived from a two-dimensional cross section of a density field, and analytic expressions are given for a Gaussian random field. The application of a two-dimensional numerical algorithm for measuring topology to cross sections of three-dimensional models is demonstrated.
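For reference, the analytic expression in question has a well-known shape: for a two-dimensional slice of a Gaussian random field thresholded at ν standard deviations from the mean, the genus per unit area follows

```latex
g_{2\mathrm{D}}(\nu) \;=\; A\,\nu\, e^{-\nu^{2}/2},
```

where the amplitude A depends on the second moment of the field's power spectrum. The ν e^{-ν²/2} form is the standard result; we quote it with the amplitude left generic, since the paper's normalization conventions are not reproduced in the abstract.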
A comparison of OCO-2 XCO2 Observations to GOSAT and Models
NASA Astrophysics Data System (ADS)
O'Dell, C.; Eldering, A.; Crisp, D.; Gunson, M. R.; Fisher, B.; Mandrake, L.; McDuffie, J. L.; Baker, D. F.; Wennberg, P. O.
2016-12-01
With their high spatial resolution and dense sampling density, observations of atmospheric carbon dioxide (CO2) from space-based sensors such as the Orbiting Carbon Observatory-2 (OCO-2) have the potential to revolutionize our understanding of carbon sources and sinks. Achieving this goal, however, requires the observations to have sub-ppm systematic errors; the large data density of OCO-2 generally reduces the importance of random errors in the retrieval of regional scale fluxes. In this work, the Atmospheric Carbon Observations from Space (ACOS) algorithm has been applied to both OCO-2 and GOSAT observations, which overlap for the period spanning Sept 2014 to present (2+ years). Previous activities utilizing TCCON and aircraft data have shown the ACOS/GOSAT B3.5 product to be quite accurate (1-2 ppm) over both land and ocean. In this work, we apply nearly identical versions of the ACOS retrieval algorithm to both OCO-2 and GOSAT to enable comparisons during the period of overlap, and to minimize algorithm-induced differences. GOSAT/OCO-2 comparisons are used to explore potential biases in the OCO-2 data, and to better understand the nature of the bias correction required for each product. Finally, each product is compared to an ensemble of models in order to evaluate their relative consistency, a critical activity before both can be used simultaneously in carbon flux inversions with confidence.
Effects of electro-acupuncture on personality traits in depression: a randomized controlled study.
Wang, Wei-dong; Lu, Xue-yu; Ng, Siu-man; Hong, Lan; Zhao, Yang; Lin, Ying-na; Wang, Fang
2013-10-01
To explore the personality-adjusting effect of electro-acupuncture treatment for depression and to compare it with paroxetine treatment. A non-blinded, randomized controlled trial was adopted. Sixty depressed patients who met the trial criteria were randomly assigned to the treatment and control groups. The treatment group received electro-acupuncture and the control group received paroxetine. During the 24-week study period, 12 patients dropped out and 48 patients completed the study. The Minnesota Multiphasic Personality Inventory (MMPI) was adopted as the evaluation tool. At the same time, the Self-rating Depression Scale (SDS), Self-rating Anxiety Scale (SAS) and Montgomery-Asberg Depression Rating Scale (MADRS) were used to evaluate the psychological state. Evaluations were done before and after treatment. After treatment, patients' psychological state improved significantly in both groups (P<0.01). For the treatment group, within-group comparison between baseline and after 24 weeks of treatment showed that severity of depression had significantly decreased (P<0.01): MADRS and SDS scores decreased significantly (P<0.05), and MMPI subscale scores for hypochondriasis, depression, psychopathic deviate, psychasthenia, social introversion and fake decreased significantly (P<0.05). For the control group, severity of depression also decreased significantly: MADRS and SDS scores decreased significantly (P<0.05), and MMPI subscale scores for hypochondriasis, depression, hysteria, paranoia, and psychasthenia decreased significantly (P<0.05). Between-group comparison demonstrated that for the MMPI subscales paranoia and social introversion, the decrease in score was greater in the treatment group than in the control group (P<0.05). However, there were no other significant differences between the control group and the treatment group. Electro-acupuncture is effective for treating depression and affects personality traits.
ERIC Educational Resources Information Center
Carney, Amy G.; Merrell, Kenneth W.
2005-01-01
This study examined teachers' behavioral ratings of young children (ages 5 and 6) with and without attention-deficit/hyperactivity disorder (ADHD). A study group consisting of 30 children with formal diagnoses of ADHD and a comparison group of 30 children without ADHD were developed using randomized matching procedures. Teachers of these children…
ERIC Educational Resources Information Center
Bohnstedt, Bradley N.; Kronenberger, William G.; Dunn, David W.; Giauque, Ann L.; Wood, Elisabeth A.; Rembusch, Mary E.; Lafata, Deborah
2005-01-01
This study compared investigator ratings of ADHD symptoms based on interviews with parents and teachers during a double-blind, placebo-controlled study of atomoxetine. Investigators completed the ADHD Rating Scale: Investigator (ADHDRS-I) based on separate semistructured interviews with the primary caretaker and teacher of the participant.…
Stability of knotted vortices in wave chaos
NASA Astrophysics Data System (ADS)
Taylor, Alexander; Dennis, Mark
Large scale tangles of disordered filaments occur in many diverse physical systems, from turbulent superfluids to optical volume speckle to liquid crystal phases. They can exhibit particular large scale random statistics despite very different local physics. We have previously used the topological statistics of knotting and linking to characterise the large scale tangling, using the vortices of three-dimensional wave chaos as a universal model system whose physical lengthscales are set only by the wavelength. Unlike geometrical quantities, the statistics of knotting depend strongly on the physical system and boundary conditions. Although knotting patterns characterise different systems, the topology of vortices is highly unstable to perturbation, under which they may reconnect with one another. In systems of constructed knots, these reconnections generally rapidly destroy the knot, but for vortex tangles the topological statistics must be stable. Using large scale simulations of chaotic eigenfunctions, we numerically investigate the prevalence and impact of reconnection events, and their effect on the topology of the tangle.
Shanazi, Mahnaz; Farshbaf Khalili, Azizeh; Kamalifard, Mahin; Asghari Jafarabadi, Mohammad; Masoudin, Kazhal; Esmaeli, Fariba
2015-12-01
Traumatic nipple is among the most common problems of the breastfeeding period and leads to early cessation of breastfeeding. This study aimed to compare the effects of lanolin, peppermint, and dexpanthenol creams on the treatment of traumatic nipples. This double-blind randomized controlled trial was carried out on 126 breastfeeding mothers who had visited the health centers and children's hospitals in Sanandaj City. The selected participants were randomly divided into three groups receiving lanolin, peppermint, or dexpanthenol cream. Nipple pain was measured using the Store scale, while trauma was measured with the Champion scale. Analyses were carried out with the Kruskal-Wallis test, Chi-square, ANOVA, and repeated-measures ANOVA using SPSS software ver. 13. The results showed that the mean scores of nipple pain and nipple trauma before the intervention and on the third, seventh, and fourteenth days of intervention were not significantly different between the three groups. However, repeated-measures ANOVA showed a significant difference across the four time periods within each group. The results of this study revealed that the lanolin, peppermint, and dexpanthenol creams had similar therapeutic effects on traumatic nipple.
Viswas, Rajadurai; Ramachandran, Rejeeshkumar; Korde Anantkumar, Payal
2012-01-01
Objective. To compare the effectiveness of a supervised exercise program and Cyriax physiotherapy in the treatment of tennis elbow (lateral epicondylitis). Design. Randomized clinical trial. Setting. Physiotherapy and rehabilitation centre. Subjects. This study was carried out with 20 patients who had tennis elbow (lateral epicondylitis). Intervention. Group A (n = 10) received a supervised exercise program. Group B (n = 10) was treated with Cyriax physiotherapy. All patients received three treatment sessions per week for four weeks (12 treatment sessions). Outcome measures. Pain was evaluated using a visual analogue scale (VAS), and functional status was evaluated by completion of the Tennis Elbow Function Scale (TEFS); both were recorded at baseline and at the end of the fourth week. Results. Both the supervised exercise program and Cyriax physiotherapy were found to be significantly effective in the reduction of pain and in the improvement of functional status. The supervised exercise program resulted in greater improvement in comparison to Cyriax physiotherapy. Conclusion. The results of this clinical trial demonstrate that the supervised exercise program may be the first treatment choice for therapists in managing tennis elbow. PMID:22629225
Robbins, Blaine
2013-01-01
Sociologists, political scientists, and economists all suggest that culture plays a pivotal role in the development of large-scale cooperation. In this study, I used generalized trust as a measure of culture to explore if and how culture impacts intentional homicide, my operationalization of cooperation. I compiled multiple cross-national data sets and used pooled time-series linear regression, single-equation instrumental-variables linear regression, and fixed- and random-effects estimation techniques on an unbalanced panel of 118 countries and 232 observations spread over a 15-year time period. Results suggest that culture and large-scale cooperation form a tenuous relationship, while economic factors such as development, inequality, and geopolitics appear to drive large-scale cooperation. PMID:23527211
Use of Second Generation Coated Conductors for Efficient Shielding of dc Magnetic Fields (Postprint)
2010-07-15
A layer of superconducting film can attenuate an external magnetic field of up to 5 mT by more than an order of magnitude. This approach appears to be especially promising for the realization of large-scale high-Tc superconducting screens. doi:10.1063/1.3459895
Ralph Alig; Darius Adams; John Mills; Richard Haynes; Peter Ince; Robert Moulton
2001-01-01
The TAMM/NAPAP/ATLAS/AREACHANGE (TNAA) system and the Forest and Agriculture Sector Optimization Model (FASOM) are two large-scale forestry sector modeling systems that have been employed to analyze the U.S. forest resource situation. The TNAA system of static, spatial equilibrium models has been applied to make 50-year projections of the U.S. forest sector for more...
Measuring the Large-scale Solar Magnetic Field
NASA Astrophysics Data System (ADS)
Hoeksema, J. T.; Scherrer, P. H.; Peterson, E.; Svalgaard, L.
2017-12-01
The Sun's large-scale magnetic field is important for determining the global structure of the corona and for quantifying the evolution of the polar field, which is sometimes used for predicting the strength of the next solar cycle. Having confidence in the determination of the large-scale magnetic field of the Sun is difficult because the field is often near the detection limit, various observing methods all measure something a little different, and various systematic effects can be very important. We compare resolved and unresolved observations of the large-scale magnetic field from the Wilcox Solar Observatory, the Helioseismic and Magnetic Imager (HMI), the Michelson Doppler Imager (MDI), and SOLIS. Cross comparison does not enable us to establish an absolute calibration, but it does allow us to discover and compensate for instrument problems, such as the sensitivity decrease seen in the WSO measurements in late 2016 and early 2017.
Intermittency of solar wind on scale 0.01-16 Hz.
NASA Astrophysics Data System (ADS)
Riazantseva, Maria; Zastenker, Georgy; Chernyshov, Alexander; Petrosyan, Arakel
The Earth's magnetosphere is formed as the solar wind flows around the Earth's magnetic field. The solar wind is a turbulent plasma flow that displays a multifractal structure and an intermittent character. That is why studying the characteristics of solar wind turbulence is an important part of solving the problem of energy transport from the solar wind to the magnetosphere. A large degree of intermittency is observed in the time series of the solar wind ion flux and magnetic field. We investigated the intermittency of solar wind fluctuations using large statistics of high time resolution measurements onboard the Interball-1 spacecraft, on scales from 0.01 to 16 Hz. It is especially important that this investigation is carried out for the first time (with plasma data) in the previously unexplored region of comparatively fast variations (frequencies up to 16 Hz), so we significantly extend the range of intermittency observations for solar wind plasma. The intermittency is practically absent on scales longer than 1000 s and grows toward small scales down to about 30-60 s. The behavior of the intermittency on scales shorter than 30-60 s is rather changeable. The boundary between these two regimes of intermittency is quantitatively close to the well-known boundary between the dissipation and inertial scales of fluctuations, which may point to a possible relation between them. Special attention is given to a comparison of the intermittency for solar wind intervals containing SCIF (Sudden Changes of Ion Flux) with that for intervals without SCIF. Such a comparison allows one to reveal the fundamental turbulent properties of the solar wind regions in which SCIF is observed more frequently. We use a nearly incompressible model of solar wind turbulence to interpret the obtained data. According to this model, a regime is realized in turbulent solar wind flows in which density fluctuations are a passive scalar in a hydrodynamic velocity field. This hypothesis can be verified straightforwardly by investigating the density spectrum, which should be slaved to the incompressible velocity spectrum. Density discontinuities on times down to about 30-60 s are determined by the intermittency of the turbulent velocity field; solar wind intermittency and many or most of its discontinuities are produced by MHD turbulence in this time interval, and it is possible that many or even most of the current structures in the solar wind, particularly the inertial-range structures that contribute to the tails of the PDFs, are generated in the same way. The complex non-Gaussian behavior on smaller time scales is described by the nonhomogeneity of the dissipation rate in the statistical moments of the density field in a random flow.
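A standard quantitative diagnostic for this kind of intermittency is the scale-dependent flatness of increments, which stays near the Gaussian value of 3 at all scales for a non-intermittent signal and grows toward small scales for an intermittent one. The sketch below applies it to a synthetic control series, not to the Interball-1 data.

```python
import numpy as np

def flatness(x, scales):
    """Flatness F(s) = <dx^4> / <dx^2>^2 of increments dx = x(t+s) - x(t).
    F ~ 3 at all scales for a Gaussian signal; growth of F toward small
    scales is the usual signature of intermittency."""
    out = []
    for s in scales:
        dx = x[s:] - x[:-s]
        out.append(np.mean(dx**4) / np.mean(dx**2) ** 2)
    return np.array(out)

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(2**16))      # non-intermittent control
scales = np.unique(np.logspace(0, 3, 10).astype(int))
print(flatness(signal, scales))                     # stays near 3 at every scale
```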
Community turnover of wood-inhabiting fungi across hierarchical spatial scales.
Abrego, Nerea; García-Baquero, Gonzalo; Halme, Panu; Ovaskainen, Otso; Salcedo, Isabel
2014-01-01
For efficient use of conservation resources it is important to determine how species diversity changes across spatial scales. In many poorly known species groups little is known about the spatial scales at which conservation efforts should be focused. Here we examined how the community turnover of wood-inhabiting fungi is realised at three hierarchical levels, and how much of community variation is explained by variation in resource composition and spatial proximity. The hierarchical study design consisted of management type (fixed factor), forest site (random factor, nested within management type) and study plots (randomly placed plots within each study site). To examine how species richness varied across the three hierarchical scales, randomized species accumulation curves and additive partitioning of species richness were applied. To analyse variation in wood-inhabiting species and dead wood composition at each scale, linear and Permanova modelling approaches were used. Wood-inhabiting fungal communities were dominated by rare and infrequent species. The similarity of fungal communities was higher within sites and within management categories than among sites or between the two management categories, and it decreased with increasing distance among the sampling plots and with decreasing similarity of dead wood resources. However, only a small part of community variation could be explained by these factors. The species present in managed forests were to a large extent a subset of those present in natural forests. Our results suggest that in particular the protection of rare species requires a large total area. As managed forests have only little additional value complementing the diversity of natural forests, the conservation of natural forests is the key to ecologically effective conservation. As the dissimilarity of fungal communities increases with distance, the conserved natural forest sites should be broadly distributed in space, yet the individual conserved areas should be large enough to ensure local persistence.
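The additive partitioning step mentioned above decomposes total (gamma) richness into mean within-plot alpha plus beta contributions from each hierarchical level. The sketch below does this for synthetic presence data with plots nested in sites nested in management types; all array layouts and numbers are illustrative assumptions.

```python
import numpy as np

def additive_partition(counts, plot_site, site_mgmt):
    """Additive diversity partition across a three-level hierarchy:
    gamma = alpha_plot + beta_site + beta_mgmt + beta_between_mgmt."""
    present = counts > 0                               # plots x species
    def mean_richness(groups):
        return np.mean([present[g].any(axis=0).sum() for g in groups])
    plots = [[i] for i in range(len(counts))]
    sites = [np.where(plot_site == s)[0] for s in np.unique(plot_site)]
    mgmts = [np.where(site_mgmt[plot_site] == m)[0] for m in np.unique(site_mgmt)]
    a_plot, a_site, a_mgmt = map(mean_richness, (plots, sites, mgmts))
    gamma = present.any(axis=0).sum()
    return {"alpha_plot": a_plot, "beta_site": a_site - a_plot,
            "beta_mgmt": a_mgmt - a_site, "beta_between_mgmt": gamma - a_mgmt}

rng = np.random.default_rng(0)
counts = rng.poisson(0.3, size=(24, 120))   # 24 plots x 120 fungal species
plot_site = np.repeat(np.arange(6), 4)      # 4 plots in each of 6 sites
site_mgmt = np.array([0, 0, 0, 1, 1, 1])    # sites nested in 2 management types
print(additive_partition(counts, plot_site, site_mgmt))
```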
Impact of spectral nudging on the downscaling of tropical cyclones in regional climate simulations
NASA Astrophysics Data System (ADS)
Choi, Suk-Jin; Lee, Dong-Kyou
2016-06-01
This study investigated simulations of three months of seasonal tropical cyclone (TC) activity over the western North Pacific using the Advanced Research WRF Model. In the control experiment (CTL), the TC frequency was considerably overestimated. Additionally, the tracks of some TCs tended to have larger radii of curvature and were shifted eastward. The large-scale environments of westerly monsoon flows and subtropical Pacific highs were unrealistically simulated. The overestimated frequency of TC formation was attributed to a strengthened westerly wind field in the southern quadrants of the TC center. In comparison with the experiment using the spectral nudging method, the strengthened wind speed was mainly modulated by large-scale flow on scales greater than approximately 1000 km in the model domain. The spurious formation and undesirable tracks of TCs in the CTL were considerably improved by reproducing realistic large-scale atmospheric monsoon circulation, with substantial adjustment between large-scale flow in the model domain and large-scale boundary forcing modified by the spectral nudging method. The realistic monsoon circulation played a vital role in simulating realistic TCs. This reveals that, in downscaling from large-scale fields for regional climate simulations, the scale interaction between model-generated regional features and forced large-scale fields should be considered, and spectral nudging is a desirable technique for such downscaling.
2014-01-01
The aim of this study was to verify the clinical responses to Thai massage (TM) and Thai herbal compression (THC) for treating osteoarthritis (OA) of the knee in comparison to oral ibuprofen. This study was a randomized, evaluator-blind, controlled trial. Sixty patients with OA of the knee were randomly assigned to receive either a one-hour session of TM or THC (three times weekly) or oral ibuprofen (three times daily). The duration of treatment was three weeks. The clinical assessments included a visual analog scale assessing pain and stiffness, Lequesne's functional index, time for climbing up ten steps, and the physician's and patient's overall opinions on improvement. In a within-group comparison, each treatment modality caused a significant improvement in all variables determined for outcome assessment. In an among-group comparison, all modalities provided nearly comparable clinical efficacy after a three-week symptomatic treatment of OA of the knee, with a trend toward the greatest improvement in the THC group. In conclusion, TM and THC generally provided clinical efficacy comparable to oral ibuprofen after three weeks of treatment and could be considered as complementary and alternative treatments for OA of the knee. PMID:25254207
Weak gravitational lensing due to large-scale structure of the universe
NASA Technical Reports Server (NTRS)
Jaroszynski, Michal; Park, Changbom; Paczynski, Bohdan; Gott, J. Richard, III
1990-01-01
The effect of the large-scale structure of the universe on the propagation of light rays is studied. The development of the large-scale density fluctuations in the omega = 1 universe is calculated within the cold dark matter scenario using a smooth particle approximation. The propagation of about 10^6 random light rays between the redshift z = 5 and the observer was followed. It is found that the effect of shear is negligible, and the amplification of single images is dominated by the matter in the beam. The spread of amplifications is very small. Therefore, the filled-beam approximation is very good for studies of strong lensing by galaxies or clusters of galaxies. In the simulation, the column density was averaged over a comoving area of approximately (1/h Mpc)-squared. No case of strong gravitational lensing was found, i.e., no 'over-focused' image that would suggest that a few images might be present. Therefore, the large-scale structure of the universe as it is presently known does not produce multiple images with gravitational lensing on a scale larger than clusters of galaxies.
Graphic matching based on shape contexts and reweighted random walks
NASA Astrophysics Data System (ADS)
Zhang, Mingxuan; Niu, Dongmei; Zhao, Xiuyang; Liu, Mingjun
2018-04-01
Graphic matching is a critical issue in many areas of computer vision. In this paper, a new graphic matching algorithm combining shape contexts and reweighted random walks is proposed. Building on the shape context local descriptor, the reweighted random walks algorithm is modified to achieve stronger robustness and correctness in the final result. The main idea is to use the shape context descriptors to control the random walk probability matrix during the iteration: a bias matrix is calculated from the descriptors and used in each iteration to improve the accuracy of the random walks and random jumps, and the final one-to-one registration result is obtained by discretization of the matrix. The algorithm not only preserves the noise robustness of reweighted random walks but also possesses the rotation, translation, and scale invariance of shape contexts. Extensive experiments on real images and random synthetic point sets, together with comparisons against other algorithms, confirm that the new method produces excellent results in graphic matching.
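As a rough illustration of the shape-context side of such a pipeline, the following sketch (Python; all parameter choices hypothetical, and deliberately simplified: the reweighted-random-walk refinement is replaced by a plain Hungarian-algorithm assignment, and this stripped-down descriptor is scale- and translation- but not rotation-invariant) builds log-polar shape-context histograms and matches two point sets through a chi-square cost matrix:

import numpy as np
from scipy.optimize import linear_sum_assignment

def shape_context(points, n_r=5, n_theta=12):
    # One log-polar histogram of relative point positions per point.
    n = len(points)
    diff = points[:, None, :] - points[None, :, :]
    r = np.linalg.norm(diff, axis=2)
    theta = np.arctan2(diff[..., 1], diff[..., 0]) % (2 * np.pi)
    r_edges = np.logspace(np.log10(r[r > 0].min()), np.log10(r.max()), n_r + 1)
    descr = np.zeros((n, n_r * n_theta))
    for i in range(n):
        others = np.arange(n) != i
        r_bin = np.clip(np.searchsorted(r_edges, r[i, others]) - 1, 0, n_r - 1)
        t_bin = (theta[i, others] / (2 * np.pi) * n_theta).astype(int) % n_theta
        np.add.at(descr[i], r_bin * n_theta + t_bin, 1.0)
        descr[i] /= descr[i].sum()
    return descr

def match(points_a, points_b):
    # Chi-square cost between descriptors, then one-to-one assignment.
    da, db = shape_context(points_a), shape_context(points_b)
    cost = 0.5 * ((da[:, None] - db[None]) ** 2
                  / (da[:, None] + db[None] + 1e-12)).sum(axis=-1)
    return linear_sum_assignment(cost)

rng = np.random.default_rng(0)
a = rng.random((30, 2))
rows, cols = match(a, 3.0 * a + 1.0)   # scaled, translated copy of a
print((rows == cols).mean())           # fraction of correct correspondences

Because each shape's radial bins are derived from its own pairwise distances, the demo recovers the correspondence under scaling and translation; a full implementation would add the random-walk stage this sketch omits.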
Scaling of Device Variability and Subthreshold Swing in Ballistic Carbon Nanotube Transistors
NASA Astrophysics Data System (ADS)
Cao, Qing; Tersoff, Jerry; Han, Shu-Jen; Penumatcha, Ashish V.
2015-08-01
In field-effect transistors, the inherent randomness of dopants and other charges is a major cause of device-to-device variability. For a quasi-one-dimensional device such as carbon nanotube transistors, even a single charge can drastically change the performance, making this a critical issue for their adoption as a practical technology. Here we calculate the effect of the random charges at the gate-oxide surface in ballistic carbon nanotube transistors, finding good agreement with the variability statistics in recent experiments. A combination of experimental and simulation results further reveals that these random charges are also a major factor limiting the subthreshold swing for nanotube transistors fabricated on thin gate dielectrics. We then establish that the scaling of the nanotube device uniformity with the gate dielectric, fixed-charge density, and device dimension is qualitatively different from conventional silicon transistors, reflecting the very different device physics of a ballistic transistor with a quasi-one-dimensional channel. The combination of gate-oxide scaling and improved control of fixed-charge density should provide the uniformity needed for large-scale integration of such novel one-dimensional transistors even at extremely scaled device dimensions.
NASA Technical Reports Server (NTRS)
Pyle, K. R.; Simpson, J. A.
1985-01-01
Near solar maximum, a series of large radial solar wind shocks in June and July 1982 provided a unique opportunity to study the solar modulation of galactic cosmic rays with an array of spacecraft widely separated both in heliocentric radius and longitude. By eliminating hysteresis effects it is possible to begin to separate radial and azimuthal effects in the outer heliosphere. On the large scale, changes in modulation (both the increasing and recovery phases) propagate outward at close to the solar wind velocity, except for the near-term effects of solar wind shocks, which may propagate at a significantly higher velocity. In the outer heliosphere, azimuthal effects are small in comparison with radial effects for large-scale modulation at solar maximum.
Dhingra, Madhur S; Artois, Jean; Robinson, Timothy P; Linard, Catherine; Chaiban, Celia; Xenarios, Ioannis; Engler, Robin; Liechti, Robin; Kuznetsov, Dmitri; Xiao, Xiangming; Dobschuetz, Sophie Von; Claes, Filip; Newman, Scott H; Dauphin, Gwenaëlle; Gilbert, Marius
2016-01-01
Global disease suitability models are essential tools to inform surveillance systems and enable early detection. We present the first global suitability model of highly pathogenic avian influenza (HPAI) H5N1 and demonstrate that reliable predictions can be obtained at global scale. Best predictions are obtained using spatial predictor variables describing host distributions, rather than land use or eco-climatic spatial predictor variables, with a strong association with domestic duck and extensively raised chicken densities. Our results also support a more systematic use of spatial cross-validation in large-scale disease suitability modelling compared to standard random cross-validation that can lead to unreliable measure of extrapolation accuracy. A global suitability model of the H5 clade 2.3.4.4 viruses, a group of viruses that recently spread extensively in Asia and the US, shows in comparison a lower spatial extrapolation capacity than the HPAI H5N1 models, with a stronger association with intensively raised chicken densities and anthropogenic factors. DOI: http://dx.doi.org/10.7554/eLife.19571.001 PMID:27885988
Galpert, Deborah; del Río, Sara; Herrera, Francisco; Ancede-Gallardo, Evys; Antunes, Agostinho; Agüero-Chapin, Guillermin
2015-01-01
Orthology detection requires more effective scaling algorithms. In this paper, a set of gene pair features based on similarity measures (alignment scores, sequence length, gene membership to conserved regions, and physicochemical profiles) are combined in a supervised pairwise ortholog detection approach to improve effectiveness considering low ortholog ratios in relation to the possible pairwise comparison between two genomes. In this scenario, big data supervised classifiers managing imbalance between ortholog and nonortholog pair classes allow for an effective scaling solution built from two genomes and extended to other genome pairs. The supervised approach was compared with RBH, RSD, and OMA algorithms by using the following yeast genome pairs: Saccharomyces cerevisiae-Kluyveromyces lactis, Saccharomyces cerevisiae-Candida glabrata, and Saccharomyces cerevisiae-Schizosaccharomyces pombe as benchmark datasets. Because of the large amount of imbalanced data, the building and testing of the supervised model were only possible by using big data supervised classifiers managing imbalance. Evaluation metrics taking low ortholog ratios into account were applied. From the effectiveness perspective, MapReduce Random Oversampling combined with Spark SVM outperformed RBH, RSD, and OMA, probably because of the consideration of gene pair features beyond alignment similarities combined with the advances in big data supervised classification. PMID:26605337
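The paper's pipeline runs at MapReduce/Spark scale; the single-machine sketch below (Python with scikit-learn, synthetic data, hypothetical parameters) only illustrates the core recipe of randomly oversampling the rare ortholog class before fitting a linear SVM:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Synthetic stand-in for gene-pair feature vectors; class 1 ("ortholog")
# is rare, mimicking the paper's low ortholog-to-nonortholog ratios.
X, y = make_classification(n_samples=20000, n_features=6, n_informative=4,
                           weights=[0.99], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random oversampling: resample the minority class with replacement until
# the classes are balanced (the paper does this at scale with MapReduce).
rng = np.random.default_rng(0)
minority = np.flatnonzero(y_tr == 1)
extra = rng.choice(minority, size=(y_tr == 0).sum() - minority.size,
                   replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

clf = LinearSVC().fit(X_bal, y_bal)   # stand-in for the Spark SVM
print(classification_report(y_te, clf.predict(X_te), digits=3))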
NASA Astrophysics Data System (ADS)
Song, Dawei; Ponte Castañeda, P.
2018-06-01
In Part I of this work (Song and Ponte Castañeda, 2018a), a new homogenization model was developed for the macroscopic behavior of three-scale porous polycrystals consisting of random distributions of large pores in a fine-grained polycrystalline matrix. In this second part, the model is used to investigate both the instantaneous effective behavior and the finite-strain macroscopic response of porous FCC and HCP polycrystals for axisymmetric loading conditions. The stress triaxiality and Lode parameter are found to have significant effects on the evolution of the substructure, which in turn have important implications for the overall hardening/softening behavior of the porous polycrystal. The intrinsic effect of the texture evolution of the polycrystalline matrix is inferred by appropriate comparisons with corresponding results for porous isotropic materials, and found to be significant, especially at low triaxialities. In particular, the predictions of the model identify, for the first time, two disparate regimes for the macroscopic response of porous polycrystals: a porosity-controlled regime at high triaxialities, and a texture-controlled regime at low triaxialities. The transition between these two regimes is found to be quite sharp, taking place between triaxialities of 1 and 2.
Does progesterone improve outcome in diffuse axonal injury?
Soltani, Zahra; Shahrokhi, Nader; Karamouzian, Saeed; Khaksari, Mohammad; Mofid, Behshad; Nakhaee, Nouzar; Reihani, Hamed
2017-01-01
The benefits of progesterone have been demonstrated in animal models of traumatic brain injury (TBI). However, the results of clinical studies are conflicting. Considering the heterogeneous nature of TBI, the effect of progesterone in patients with diffuse axonal injury (DAI) was investigated in a clinical trial. In this study, 48 patients with DAI and a Glasgow Coma Scale score of 3-12, admitted within 4 hours after injury, were randomly assigned to the progesterone or control group. Progesterone was administered at 1 mg/kg every 12 hours for 5 days. The effect of progesterone was evaluated using the extended Glasgow Outcome Scale (GOS-E), functional independence measure (FIM) scores, and mortality within the follow-up period. The progesterone group exhibited higher GOS-E and FIM scores in comparison to the control group at 6 months post-injury (p < 0.01 and p < 0.05, respectively). Mortality was also found in the control group (p < 0.05). No adverse events attributable to progesterone administration were found throughout the study. The findings of this study suggest that progesterone may be neuroprotective in patients with DAI. However, large clinical trials are needed to assess progesterone as a promising drug in DAI.
Linear velocity fields in non-Gaussian models for large-scale structure
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.
1992-01-01
Linear velocity fields in two types of physically motivated non-Gaussian models for large-scale structure are examined: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.
Sankhe, A; Dalal, K; Save, D; Sarve, P
2017-12-01
The present study was conducted to assess the effect of spiritual care in patients with depression, anxiety or both in a randomized controlled design. The participants were randomized either to receive spiritual care or not, and the Hamilton anxiety rating scale (HAM-A), Hamilton depression rating scale (HAM-D), WHO quality of life-brief (WHOQOL-BREF) and Functional assessment of chronic illness therapy - Spiritual well-being (FACIT-Sp) were assessed before therapy and at two follow-ups at 3 and 6 weeks. In the spiritual care therapy group, statistically significant differences were observed in both the HAM-A and HAM-D scales between baseline and visit 2 (p < 0.001), indicating significantly reduced symptoms of anxiety and depression, respectively. No statistically significant differences were observed for any of the scales during the follow-up periods in the control group. When the scores were compared between the study groups, HAM-A, HAM-D and FACIT-Sp 12 scores were significantly lower in the interventional group than in the control group at both the third and sixth weeks. This suggests a greater improvement in symptoms of anxiety and depression in the spiritual care therapy group than in the control group; however, large randomized controlled trials with robust designs are needed to confirm this.
Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea
2017-11-01
Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings.
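For orientation, a standard formulation of the bivariate approach (the Reitsma-type random-effects model, as commonly written; the paper's exact specification may differ) treats each study's logit-transformed sensitivity and specificity at a given threshold as jointly normal across studies:

\[
\begin{pmatrix} \operatorname{logit} Se_i \\ \operatorname{logit} Sp_i \end{pmatrix}
\sim \mathcal{N}\!\left( \begin{pmatrix} \mu_{Se} \\ \mu_{Sp} \end{pmatrix},
\begin{pmatrix} \sigma_{Se}^2 & \rho\,\sigma_{Se}\sigma_{Sp} \\ \rho\,\sigma_{Se}\sigma_{Sp} & \sigma_{Sp}^2 \end{pmatrix} \right),
\]

with within-study binomial likelihoods \(TP_i \sim \mathrm{Bin}(n_{1i}, Se_i)\) and \(TN_i \sim \mathrm{Bin}(n_{0i}, Sp_i)\); the off-diagonal term \(\rho\) captures the between-study correlation of sensitivity and specificity that the bivariate approach accounts for.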
Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.
Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai
2008-03-15
A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and large network scale are required. However, the computational and communication complexity and time consumption are greatly increased with the increase of the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity, while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs force the particles to move from the randomly set positions toward the original positions, i.e., the node positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale size. Three patches are proposed to avoid local optimization, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite the increase of the network scale size. The time consumption has also been proven to remain almost constant, since the calculation steps are almost unrelated to the network scale size.
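A minimal sketch of the spring-relaxation idea (Python; not the authors' code, with unit spring constants and hypothetical parameters, and without the three patches, so convergence to the true layout is not guaranteed):

import numpy as np

def lasm(dist, anchors, anchor_pos, steps=3000, dt=0.05, damping=0.9, seed=1):
    # dist[i, j] > 0 is the measured distance between neighbor nodes i and j;
    # anchor nodes are pinned at their known coordinates.
    n = dist.shape[0]
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2)) * 100.0          # random initial guesses
    vel = np.zeros_like(pos)
    pos[anchors] = anchor_pos
    for _ in range(steps):
        force = np.zeros_like(pos)
        for i in range(n):
            for j in np.flatnonzero(dist[i]):
                d = pos[j] - pos[i]
                cur = np.linalg.norm(d) + 1e-12
                # Hooke's law: each virtual spring pulls or pushes the node
                # toward its measured separation from the neighbor.
                force[i] += (cur - dist[i, j]) * d / cur
        vel = damping * (vel + dt * force)
        pos += dt * vel
        pos[anchors] = anchor_pos             # anchors never move
    return pos

rng = np.random.default_rng(0)
true = rng.random((40, 2)) * 100.0
dist = np.linalg.norm(true[:, None] - true[None], axis=2)
dist[dist > 30.0] = 0.0                       # only nearby nodes communicate
est = lasm(dist, anchors=np.arange(4), anchor_pos=true[:4])
print(np.mean(np.linalg.norm(est - true, axis=1)))   # mean position error

Each node only sums forces over its own neighbors, which is the source of the per-node O(1) cost the abstract claims when neighborhood size stays bounded.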
Aromatherapy for the treatment of PONV in children: a pilot RCT.
Kiberd, Mathew B; Clarke, Suzanne K; Chorney, Jill; d'Eon, Brandon; Wright, Stuart
2016-11-09
Postoperative nausea and vomiting (PONV) is one of the most common postoperative complications of general anesthesia in pediatrics. Aromatherapy has been shown to be effective in treating PONV in adults. Given the encouraging results of the adult studies, we aimed to determine the feasibility of a large-scale study in the pediatric population. Our group conducted a pilot randomized controlled trial examining the effect of aromatherapy on postoperative nausea and vomiting in patients aged 4-16 years undergoing ambulatory surgery at a single center. Nausea was defined as a score of 4/10 on the Baxter Retching Faces (BARF) scale. A clinically significant reduction was defined as a two-point reduction in nausea. Postoperatively, children were administered the BARF scale at 15-minute intervals until discharge home or until a nausea score of 4/10 or greater. Children with nausea were randomized to a saline placebo group or an aromatherapy group receiving QueaseEase™ (Soothing Scents, Inc, Enterprise, AL: a blend of ginger, lavender, mint and spearmint). Nausea scores were recorded post intervention. A total of 162 subjects were screened for inclusion in the study. Randomization occurred in 41 subjects, of whom 39 were included in the final analysis. For the primary outcome, 14/18 (78%) of controls reached the primary outcome compared to 19/21 (90%) in the aromatherapy group (p = 0.39, Eta 0.175). Other outcomes included use of antiemetics in the PACU (control 44%, aromatherapy 52%, p = 0.75, Eta 0.08) and emesis (control 11%, aromatherapy 9%, p = 0.87, Eta 0.03). There was a statistically significant difference in whether subjects continued to use the intervention (control 28%, aromatherapy 66%, p = 0.048, Eta 0.33). Aromatherapy had a small non-significant effect size in treating postoperative nausea and vomiting compared with control. A large-scale randomized controlled trial would not be feasible at our institution and would be of doubtful utility. ClinicalTrials.gov NCT02663154.
Diffusion of strongly magnetized cosmic ray particles in a turbulent medium
NASA Technical Reports Server (NTRS)
Ptuskin, V. S.
1985-01-01
Cosmic ray (CR) propagation in a turbulent medium is usually considered in the diffusion approximation. Here, the diffusion equation is obtained for strongly magnetized particles in the general form. The influence of a large-scale random magnetic field on CR propagation in the interstellar medium is discussed. Cosmic rays are assumed to propagate in a medium with a regular field H and an ensemble of random MHD waves. The energy density of waves on scales smaller than the free path l of CR particles is small. The collision integral of the general form, which describes the interaction between relativistic particles and waves in the quasilinear approximation, is used.
NASA Astrophysics Data System (ADS)
Nampally, Subhadra; Padhy, Simanchal; Dimri, Vijay P.
2018-01-01
The nature of the spatial distribution of heterogeneities in the source area of the 2015 Nepal earthquake is characterized based on the seismic b-value and fractal analysis of its aftershocks. The earthquake size distribution of aftershocks gives a b-value of 1.11 ± 0.08, possibly representing the highly heterogeneous and low-stress state of the region. The aftershocks exhibit a fractal structure characterized by a spectrum of generalized dimensions, Dq, varying from D2 = 1.66 to D22 = 0.11. The existence of a fractal structure suggests that the spatial distribution of aftershocks is not a random phenomenon, but self-organizes into a critical state, exhibiting a scale-independent structure governed by a power-law scaling, in which a small perturbation in stress is sufficient to trigger aftershocks. In order to obtain the bias in fractal dimensions resulting from finite data size, we compared the multifractal spectra of the real data and random simulations. On comparison, we found that the lower limit of bias in D2 is 0.44. The similarity of their multifractal spectra suggests a lack of long-range correlation in the data, the data being only weakly multifractal or monofractal with a single correlation dimension D2. The minimum number of events required for a multifractal process with an acceptable error is discussed. We also tested for a possible correlation between changes in D2 and the energy released during the earthquakes. The values of D2 rise during the two largest earthquakes (M > 7.0) in the sequence. The b- and D2-values are related by D2 = 1.45b, which corresponds to intermediate to large earthquakes. Our results provide useful constraints on the spatial distribution of b- and D2-values, which are useful for seismic hazard assessment in the aftershock area of a large earthquake.
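For orientation, the two key statistics can be estimated along these lines (Python sketch on synthetic data: Aki's maximum-likelihood b-value, ignoring magnitude-binning corrections, and a basic Grassberger-Procaccia correlation sum for D2; the paper's multifractal analysis of the full Dq spectrum goes beyond this):

import numpy as np

def b_value(mags, m_c):
    # Aki's maximum-likelihood estimator (no magnitude-binning correction).
    m = mags[mags >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

def correlation_dimension(xy, r_min=2.0, r_max=30.0, n_r=15):
    # Grassberger-Procaccia: D2 is the slope of log C(r) vs log r.
    d = np.linalg.norm(xy[:, None] - xy[None], axis=2)
    d = d[np.triu_indices(len(xy), k=1)]      # unique pairs only
    radii = np.logspace(np.log10(r_min), np.log10(r_max), n_r)
    c = np.array([(d < r).mean() for r in radii])   # correlation sum C(r)
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope

rng = np.random.default_rng(0)
mags = 0.9 + rng.exponential(scale=1 / (1.1 * np.log(10)), size=5000)
print(b_value(mags, m_c=0.9))                 # ~1.1 by construction
xy = rng.random((800, 2)) * 100.0             # uniform epicentres -> D2 ~ 2
print(correlation_dimension(xy))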
ERIC Educational Resources Information Center
Sheridan, Susan M.; Witte, Amanda L.; Holmes, Shannon R.; Coutts, Michael J.; Dent, Amy L.; Kunz, Gina M.; Wu, ChaoRong
2017-01-01
The results of a large-scale randomized controlled trial of Conjoint Behavioral Consultation (CBC) on student outcomes and teacher-parent relationships in rural schools are presented. CBC is an indirect service delivery model that addresses concerns shared by teachers and parents about students. In the present study, the intervention was aimed at…
NASA Astrophysics Data System (ADS)
Nunes, A.; Ivanov, V. Y.
2014-12-01
Although current global reanalyses provide reasonably accurate large-scale features of the atmosphere, systematic errors are still found in the hydrological and energy budgets of such products. In the tropics, precipitation is particularly challenging to model, an effort further hampered by the scarcity of hydrometeorological datasets in the region. With the goal of producing downscaled analyses appropriate for climate assessment at regional scales, a regional spectral model has used a combination of precipitation assimilation and scale-selective bias correction. The latter is similar to the spectral nudging technique, which prevents the departure of the regional model's internal states from the large-scale forcing. The target area in this study is the Amazon region, where large errors are detected in reanalysis precipitation. To generate the downscaled analysis, the regional climate model used the NCEP/DOE R2 global reanalysis as the initial and lateral boundary conditions, and assimilated NOAA's Climate Prediction Center (CPC) MORPHed precipitation (CMORPH), available at 0.25-degree resolution, every 3 hours. The regional model's precipitation was successfully brought closer to the observations, in comparison to the NCEP global reanalysis products, as a result of the impact of the precipitation assimilation scheme on the cumulus-convection parameterization and of improved boundary forcing achieved through a new version of scale-selective bias correction. Water and energy budget terms were also evaluated against global reanalyses and other datasets.
Radiation breakage of DNA: a model based on random-walk chromatin structure
NASA Technical Reports Server (NTRS)
Ponomarev, A. L.; Sachs, R. K.
2001-01-01
Monte Carlo computer software, called DNAbreak, has recently been developed to analyze observed non-random clustering of DNA double strand breaks in chromatin after exposure to densely ionizing radiation. The software models coarse-grained configurations of chromatin and radiation tracks, small-scale details being suppressed in order to obtain statistical results for larger scales, up to the size of a whole chromosome. We here give an analytic counterpart of the numerical model, useful for benchmarks, for elucidating the numerical results, for analyzing the assumptions of a more general but less mechanistic "randomly-located-clusters" formalism, and, potentially, for speeding up the calculations. The equations characterize multi-track DNA fragment-size distributions in terms of one-track action; an important step in extrapolating high-dose laboratory results to the much lower doses of main interest in environmental or occupational risk estimation. The approach can utilize the experimental information on DNA fragment-size distributions to draw inferences about large-scale chromatin geometry during cell-cycle interphase.
Miller, Lucy Jane; Coll, Joseph R; Schoen, Sarah A
2007-01-01
A pilot randomized controlled trial (RCT) of the effectiveness of occupational therapy using a sensory integration approach (OT-SI) was conducted with children who had sensory modulation disorders (SMDs). This study evaluated the effectiveness of three treatment groups. In addition, sample size estimates for a large scale, multisite RCT were calculated. Twenty-four children with SMD were randomly assigned to one of three treatment conditions; OT-SI, Activity Protocol, and No Treatment. Pretest and posttest measures of behavior, sensory and adaptive functioning, and physiology were administered. The OT-SI group, compared to the other two groups, made significant gains on goal attainment scaling and on the Attention subtest and the Cognitive/Social composite of the Leiter International Performance Scale-Revised. Compared to the control groups, OT-SI improvement trends on the Short Sensory Profile, Child Behavior Checklist, and electrodermal reactivity were in the hypothesized direction. Findings suggest that OT-SI may be effective in ameliorating difficulties of children with SMD.
Computed narrow-band azimuthal time-reversing array retrofocusing in shallow water.
Dungan, M R; Dowling, D R
2001-10-01
The process of acoustic time reversal sends sound waves back to their point of origin in reciprocal acoustic environments even when the acoustic environment is unknown. The properties of the time-reversed field commonly depend on the frequency of the original signal, the characteristics of the acoustic environment, and the configuration of the time-reversing transducer array (TRA). In particular, vertical TRAs are predicted to produce horizontally confined foci in environments containing random volume refraction. This article validates and extends this prediction to shallow water environments via monochromatic Monte Carlo propagation simulations (based on parabolic equation computations using RAM). The computational results determine the azimuthal extent of a TRA's retrofocus in shallow-water sound channels either having random bottom roughness or containing random internal-wave-induced sound speed fluctuations. In both cases, randomness in the environment may reduce the predicted azimuthal angular width of the vertical TRA retrofocus to as little as several degrees (compared to 360 degrees for uniform environments) for source-array ranges from 5 to 20 km at frequencies from 500 Hz to 2 kHz. For both types of randomness, power law scalings are found to collapse the calculated azimuthal retrofocus widths for shallow sources over a variety of acoustic frequencies, source-array ranges, water column depths, and random fluctuation amplitudes and correlation scales. Comparisons are made between retrofocusing on shallow and deep sources, and in strongly and mildly absorbing environments.
Mapping the universe in three dimensions
Haynes, Martha P.
1996-01-01
The determination of the three-dimensional layout of galaxies is critical to our understanding of the evolution of galaxies and the structures in which they lie, to our determination of the fundamental parameters of cosmology, and to our understanding of both the past and future histories of the universe at large. The mapping of the large scale structure in the universe via the determination of galaxy red shifts (Doppler shifts) is a rapidly growing industry thanks to technological developments in detectors and spectrometers at radio and optical wavelengths. First-order application of the red shift-distance relation (Hubble’s law) allows the analysis of the large-scale distribution of galaxies on scales of hundreds of megaparsecs. Locally, the large-scale structure is very complex but the overall topology is not yet clear. Comparison of the observed red shifts with ones expected on the basis of other distance estimates allows mapping of the gravitational field and the underlying total density distribution. The next decade holds great promise for our understanding of the character of large-scale structure and its origin. PMID:11607714
A random-walk/giant-loop model for interphase chromosomes.
Sachs, R K; van den Engh, G; Trask, B; Yokota, H; Hearst, J E
1995-01-01
Fluorescence in situ hybridization data on distances between defined genomic sequences are used to construct a quantitative model for the overall geometric structure of a human chromosome. We suggest that the large-scale geometry during the G0/G1 part of the cell cycle may consist of flexible chromatin loops, averaging approximately 3 million bp, with a random-walk backbone. A fully explicit, three-parameter polymer model of this random-walk/giant-loop structure can account well for the data. More general models consistent with the data are briefly discussed. PMID:7708711
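A toy version of the random-walk backbone (Python; parameter values hypothetical, and the internal structure of each loop is ignored) reproduces the model's key signature, mean-square interphase distance growing linearly with genomic separation:

import numpy as np

def mean_square_distance(sep_mbp, loop_mbp=3.0, step_um2=0.3, trials=4000):
    # Loci are mapped to loop-attachment points on the backbone; the backbone
    # is a 3-D Gaussian random walk with one step per ~3-Mbp loop, so the
    # mean-square spatial distance grows linearly with genomic separation.
    rng = np.random.default_rng(0)
    n_steps = max(int(sep_mbp / loop_mbp), 1)
    steps = rng.normal(scale=np.sqrt(step_um2 / 3.0),
                       size=(trials, n_steps, 3))
    end_to_end = steps.sum(axis=1)
    return (end_to_end ** 2).sum(axis=1).mean()

for sep in (3, 30, 90):                       # genomic separations in Mbp
    print(sep, "Mbp ->", round(mean_square_distance(sep), 2), "um^2")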
Bathymetric comparisons adjacent to the Louisiana barrier islands: Processes of large-scale change
List, J.H.; Jaffe, B.E.; Sallenger, A.H.; Hansen, M.E.
1997-01-01
This paper summarizes the results of a comparative bathymetric study encompassing 150 km of the Louisiana barrier-island coast. Bathymetric data surrounding the islands and extending to 12 m water depth were processed from three survey periods: the 1880s, the 1930s, and the 1980s. Digital comparisons between surveys show large-scale, coherent patterns of sea-floor erosion and accretion related to the rapid erosion and disintegration of the islands. Analysis of the sea-floor data reveals two primary processes driving this change: massive longshore transport, in the littoral zone and at shoreface depths; and increased sediment storage in ebb-tidal deltas. Relative sea-level rise, although extraordinarily high in the study area, is shown to be an indirect factor in causing the area's rapid shoreline retreat rates.
Unmet Need: Improving mHealth Evaluation Rigor to Build the Evidence Base.
Mookherji, Sangeeta; Mehl, Garrett; Kaonga, Nadi; Mechael, Patricia
2015-01-01
mHealth - the use of mobile technologies for health - is a growing element of health system activity globally, but evaluation of those activities remains quite scant and is an important knowledge gap for advancing mHealth activities. In 2010, the World Health Organization and Columbia University implemented a small-scale survey to generate preliminary data on evaluation activities used by mHealth initiatives. The authors describe self-reported data from 69 projects in 29 countries. The majority (74%) reported some sort of evaluation activity, primarily nonexperimental in design (62%). The authors developed a 6-point scale of evaluation rigor comprising information on use of comparison groups, sample size calculation, data collection timing, and randomization. The mean score was low (2.4); half (47%) were conducting evaluations with a minimum threshold (4+) of rigor, indicating use of a comparison group, while less than 20% had randomized the mHealth intervention. The authors were unable to assess whether the rigor score was appropriate for the type of mHealth activity being evaluated. What was clear was that although most data came from mHealth pilot projects aimed at scale-up, few had designed evaluations that would support crucial decisions on whether to scale up and how. Whether the mHealth activity is a strategy to improve health or a tool for achieving intermediate outcomes that should lead to better health, mHealth evaluations must be improved to generate robust evidence for cost-effectiveness assessment and to allow for accurate identification of the contribution of mHealth initiatives to health systems strengthening and the impact on actual health outcomes.
Fernandes, M T; Vaez, S C; Lima, C M; Nahsan, F P; Loguércio, A D; Faria-E-Silva, A L
A triple-blind, randomized, crossover clinical trial evaluated the prior use of the nonsteroidal anti-inflammatory drug naproxen on sensitivity reported by patients undergoing in-office tooth bleaching. Fifty patients were subjected to two sessions of in-office tooth bleaching with 35% hydrogen peroxide, applied once for 40 minutes in each session, with an interval of seven days between sessions. One hour prior to the procedure, each patient randomly received a single dose of naproxen (500 mg) or placebo. Each patient's sensitivity level was evaluated during and immediately after the bleaching using two scales (verbal and visual analog); the verbal scale only was repeated after 24 hours. The effectiveness of the bleaching procedures was evaluated with the Bleachedguide scale. The relative risk of sensitivity was calculated and adjusted by session, while the comparison of overall risk was performed by the McNemar test. Data on the sensitivity level for both scales and shade were subjected to the Friedman, Wilcoxon, and Mann-Whitney tests (α=0.05). The use of naproxen decreased the absolute risk and intensity of tooth sensitivity only immediately after the second session. No measurable effect was observed during or 24 hours after either session. The sequence of drug administration did not affect the bleaching effectiveness. Preemptive use of naproxen reduced the tooth sensitivity reported by patients only immediately after the second session of bleaching.
Ionospheric scintillation by a random phase screen Spectral approach
NASA Technical Reports Server (NTRS)
Rufenach, C. L.
1975-01-01
The theory developed by Briggs and Parkin, given in terms of an anisotropic Gaussian correlation function, is extended to a spectral description specified as a continuous function of spatial wavenumber with an intrinsic outer scale, as would be expected from a turbulent medium. Two spectral forms were selected for comparison: (1) a power-law variation in wavenumber with a constant three-dimensional index equal to 4, and (2) a Gaussian spectral variation. The results are applied to the F-region ionosphere with an outer-scale wavenumber of 2 per km (approximately equal to the Fresnel wavenumber) for the power-law variation, and 0.2 per km for the Gaussian spectral variation. The power-law form with a small outer-scale wavenumber is consistent with recent F-region in-situ measurements, whereas the Gaussian form is mathematically convenient and hence was mostly used in earlier developments, before the recent in-situ measurements. Some comparison with microwave scintillation in equatorial areas is made.
NASA Technical Reports Server (NTRS)
Fukumori, I.; Raghunath, R.; Fu, L. L.
1996-01-01
The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to February 1996. The physical nature of the temporal variability, from periods of days to a year, is examined based on spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements.
Large-scale modeling of rain fields from a rain cell deterministic model
NASA Astrophysics Data System (ADS)
Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia
2006-04-01
A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (~20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (~150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
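The Gaussian-to-binary step described above, turning a correlated Gaussian field into a raining/not-raining mask with a prescribed rain occupation rate, can be sketched as follows (Python; a minimal sketch assuming an isotropic covariance where the paper uses an anisotropic one, with all parameter values hypothetical):

import numpy as np

def rain_mask(n=256, corr_len=40.0, rain_fraction=0.3, seed=0):
    # Smooth white noise in Fourier space with a Gaussian kernel to obtain
    # a correlated Gaussian field, then threshold at the quantile that
    # leaves exactly the requested fraction of the domain raining.
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    kernel = np.exp(-2 * (np.pi * corr_len) ** 2 * (kx ** 2 + ky ** 2))
    field = np.fft.ifft2(np.fft.fft2(noise) * kernel).real
    return field > np.quantile(field, 1.0 - rain_fraction)

mask = rain_mask()
print(mask.mean())   # ~0.30 of the grid cells are raining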
Leaf Area Index (LAI) is an important parameter in assessing vegetation structure for characterizing forest canopies over large areas at broad spatial scales using satellite remote sensing data. However, satellite-derived LAI products can be limited by obstructed atmospheric cond...
Statistical model of exotic rotational correlations in emergent space-time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Craig; Kwon, Ohkyung; Richardson, Jonathan
2017-06-06
A statistical model is formulated to compute exotic rotational correlations that arise as inertial frames and causal structure emerge on large scales from entangled Planck scale quantum systems. Noncommutative quantum dynamics are represented by random transverse displacements that respect causal symmetry. Entanglement is represented by covariance of these displacements in Planck scale intervals defined by future null cones of events on an observer's world line. Light that propagates in a nonradial direction inherits a projected component of the exotic rotational correlation that accumulates as a random walk in phase. A calculation of the projection and accumulation leads to exact predictions for statistical properties of exotic Planck scale correlations in an interferometer of any configuration. The cross-covariance for two nearly co-located interferometers is shown to depart only slightly from the autocovariance. Specific examples are computed for configurations that approximate realistic experiments, and show that the model can be rigorously tested.
NASA Technical Reports Server (NTRS)
Smith, Charlee C., Jr.; Lovell, Powell M., Jr.
1954-01-01
An investigation is being conducted to determine the dynamic stability and control characteristics of a 0.13-scale flying model of the Convair XFY-1 vertically rising airplane. This paper presents the results of flight and force tests to determine the stability and control characteristics of the model in vertical descent and landings in still air. The tests indicated that landings, including vertical descents from altitudes representing up to 400 feet for the full-scale airplane and at rates of descent up to 15 or 20 feet per second (full scale), can be performed satisfactorily. Sustained vertical descent in still air will probably be more difficult to perform because of large random trim changes that become greater as the descent velocity is increased. A slight steady head wind or cross wind might be sufficient to eliminate the random trim changes.
Artz, Neil; Dixon, Samantha; Wylde, Vikki; Marques, Elsa; Beswick, Andrew D; Lenguerrand, Erik; Blom, Ashley W; Gooberman-Hill, Rachael
2017-04-01
To evaluate the feasibility of conducting a randomized controlled trial comparing group-based outpatient physiotherapy with usual care in patients following total knee replacement. A feasibility study for a randomized controlled trial. One secondary-care hospital orthopaedic centre, Bristol, UK. A total of 46 participants undergoing primary total knee replacement. The intervention group were offered six group-based exercise sessions after surgery. The usual care group received standard postoperative care. Participants were not blinded to group allocation. Feasibility was assessed by recruitment, reasons for non-participation, attendance, and completion rates of study questionnaires that included the Lower Extremity Functional Scale and Knee Injury and Osteoarthritis Outcome Score. Recruitment rate was 37%. Five patients withdrew or were no longer eligible to participate. Intervention attendance was high (73%) and 84% of group participants reported they were 'very satisfied' with the exercises. Return of study questionnaires at six months was lower in the usual care (75%) than in the intervention group (100%). Mean (standard deviation) Lower Extremity Functional Scale scores at six months were 45.0 (20.8) in the usual care and 57.8 (15.2) in the intervention groups. Recruitment and retention of participants in this feasibility study was good. Group-based physiotherapy was acceptable to participants. Questionnaire return rates were lower in the usual care group, but might be enhanced by telephone follow-up. The Lower Extremity Functional Scale had high responsiveness and completion rates. Using this outcome measure, 256 participants would be required in a full-scale randomized controlled trial.
NASA Astrophysics Data System (ADS)
Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen
2010-12-01
Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
Culture rather than genes provides greater scope for the evolution of large-scale human prosociality
Bell, Adrian V.; Richerson, Peter J.; McElreath, Richard
2009-01-01
Whether competition among large groups played an important role in human social evolution is dependent on how variation, whether cultural or genetic, is maintained between groups. Comparisons between genetic and cultural differentiation between neighboring groups show how natural selection on large groups is more plausible on cultural rather than genetic variation. PMID:19822753
Bjertnaes, Oyvind Andresen; Iversen, Hilde Hestad
2012-08-01
To compare two ways of combining postal and electronic data collection for a maternity services user-experience survey. Cross-sectional survey. Maternity services in Norway. All women who gave birth at a university hospital in Norway between 1 June and 27 July 2010. Patients were randomized into the following groups (n = 752): Group A, who were posted questionnaires with both electronic and paper response options for both the initial and reminder postal requests; and Group B, who were posted questionnaires with an electronic response option for the initial request, and both electronic and paper response options for the reminder postal request. Response rate, the amount of difference in background variables between respondents and non-respondents, main study results and estimated cost-effectiveness. The final response rate was significantly higher in Group A (51.9%) than in Group B (41.1%). None of the background variables differed significantly between the respondents and non-respondents in Group A, while two variables differed significantly between the respondents and non-respondents in Group B. None of the 11 user-experience scales differed significantly between Groups A and B. The estimated cost per response for the forthcoming national survey was €11.7 for data collection Model A and €9.0 for Model B. The model with an electronic-only response option in the first request had the lower response rate. However, this model performed as well as the other model on non-response bias and better on estimated cost-effectiveness, and is the better of the two models for large-scale user-experience surveys of maternity services.
NASA Astrophysics Data System (ADS)
Li, Xiaowen; Janiga, Matthew A.; Wang, Shuguang; Tao, Wei-Kuo; Rowe, Angela; Xu, Weixin; Liu, Chuntao; Matsui, Toshihisa; Zhang, Chidong
2018-04-01
The evolution of precipitation structures is simulated and compared with radar observations for the November Madden-Julian Oscillation (MJO) event during the DYNAmics of the MJO (DYNAMO) field campaign. Three ground-based, ship-borne, and spaceborne precipitation radars and three cloud-resolving models (CRMs) driven by observed large-scale forcing are used to study precipitation structures at different locations over the central equatorial Indian Ocean. Convective strength is represented by 0-dBZ echo-top heights, and convective organization by contiguous 17-dBZ areas. The multi-radar and multi-model framework allows for more stringent model validation. The emphasis is on testing the models' ability to simulate subtle differences observed at different radar sites as the MJO event passed through. The results show that CRMs forced by site-specific large-scale forcing can reproduce not only common features in cloud populations but also subtle variations observed by different radars. The comparisons also reveal common deficiencies in the CRM simulations, which underestimate radar echo-top heights for the strongest convection within large, organized precipitation features. Cross validation with multiple radars and models also enables quantitative comparisons in CRM sensitivity studies using different large-scale forcings, microphysical schemes and parameters, resolutions, and domain sizes. In terms of radar echo-top height temporal variations, many model sensitivity tests have better correlations than the radar/model comparisons, indicating robustness of model performance in this respect. It is further shown that well-validated model simulations could be used to constrain uncertainties in observed echo-top heights when a low-resolution surveillance scanning strategy is used.
ERIC Educational Resources Information Center
Akabayashi, Hideo; Nakamura, Ryosuke; Naoi, Michio; Shikishima, Chizuru
2016-01-01
In the past decades, income inequality has risen in most developed countries. There is growing interest among economists in international comparisons of economic and educational mobility. This is aided by the availability of internationally comparable, large-scale data. The present paper aims to make three contributions. First, we introduce the…
Witt, Claudia M; Michalsen, Andreas; Roll, Stephanie; Morandi, Antonio; Gupta, Shivnarain; Rosenberg, Mark; Kronpass, Ludwig; Stapelfeldt, Elmar; Hissar, Syed; Müller, Matthias; Kessler, Christian
2013-05-23
Traditional Indian Ayurvedic medicine uses complex treatment approaches, including manual therapies, lifestyle and nutritional advice, dietary supplements, medication, yoga, and purification techniques. Ayurvedic strategies are often used to treat osteoarthritis (OA) of the knee; however, no systematic data are available on their effectiveness in comparison with standard care. The aim of this study is to evaluate the effectiveness of complex Ayurvedic treatment in comparison with conventional methods of treating OA symptoms in patients with knee osteoarthritis. In a prospective, multicenter, randomized controlled trial, 150 patients between 40 and 70 years, diagnosed with osteoarthritis of the knee, following American College of Rheumatology criteria and an average pain intensity of ≥40 mm on a 100 mm visual analog scale in the affected knee at baseline will be randomized into two groups. In the Ayurveda group, treatment will include tailored combinations of manual treatments, massages, dietary and lifestyle advice, consideration of selected foods, nutritional supplements, yoga posture advice, and knee massage. Patients in the conventional group will receive self-care advice, pain medication, weight-loss advice (if overweight), and physiotherapy following current international guidelines. Both groups will receive 15 treatment sessions over 12 weeks. Outcomes will be evaluated after 6 and 12 weeks and 6 and 12 months. The primary endpoint is a change in the score on the Western Ontario and McMaster University Osteoarthritis Index (WOMAC) after 12 weeks. Secondary outcome measurements will use WOMAC subscales, a pain disability index, a visual analog scale for pain and sleep quality, a pain experience scale, a quality-of-life index, a profile of mood states, and Likert scales for patient satisfaction, patient diaries, and safety. Using an adapted PRECIS scale, the trial was identified as lying mainly in the middle of the efficacy-effectiveness continuum. This trial is the first to compare the effectiveness of a complex Ayurvedic intervention with a complex conventional intervention in a Western medical setting in patients with knee osteoarthritis. During the trial design, aspects of efficacy and effectiveness were discussed. The resulting design is a compromise between rigor and pragmatism. NCT01225133.
ERIC Educational Resources Information Center
Piper, Benjamin
2016-01-01
If children do not learn how to read in the first few years of primary school, they are at greater risk of dropping out. It is therefore crucial to identify and test interventions that have the potential to make a large impact, can be implemented quickly, and are affordable enough to be taken to scale by the Kenyan government. This paper presents the…
ERIC Educational Resources Information Center
Cortés, Maria José; Orejuela, Carmen; Castellví, Gemma; Folch, Annabel; Rovira, Lluís; Salvador-Carulla, Luis; Irazábal, Marcia; Muñoz, Silvia; Haro, Josep Maria; Vilella, Elisabet; Martínez-Leal, Rafael
2018-01-01
Strategies for the early detection of autism spectrum disorders (ASD) in people with intellectual developmental disorder (IDD) are urgently needed, but few specific tools have been developed. The present study examines the psychometric properties of the EVTEA-DI, a Spanish adaptation of the PDD-MRS, in a large randomized sample of 979 adults with…
Large-Scale Fabrication of Silicon Nanowires for Solar Energy Applications.
Zhang, Bingchang; Jie, Jiansheng; Zhang, Xiujuan; Ou, Xuemei; Zhang, Xiaohong
2017-10-11
The development of silicon (Si) materials during past decades has driven the prosperity of the modern semiconductor industry. In comparison with bulk Si materials, Si nanowires (SiNWs) possess superior structural, optical, and electrical properties and have attracted increasing attention in solar energy applications. To achieve practical applications of SiNWs, both large-scale synthesis of SiNWs at low cost and rational design of high-efficiency energy conversion devices are prerequisites. This review focuses on recent progress in the large-scale production of SiNWs, as well as the construction of high-efficiency SiNW-based solar energy conversion devices, including photovoltaic devices and photo-electrochemical cells. Finally, the outlook and challenges in this emerging field are presented.
Model-independent test for scale-dependent non-Gaussianities in the cosmic microwave background.
Räth, C; Morfill, G E; Rossmanith, G; Banday, A J; Górski, K M
2009-04-03
We present a model-independent method to test for scale-dependent non-Gaussianities in combination with scaling indices as test statistics. To this end, surrogate data sets are generated in which the power spectrum of the original data is preserved, while the higher order correlations are partly randomized by applying a scale-dependent shuffling procedure to the Fourier phases. We apply this method to the Wilkinson Microwave Anisotropy Probe data of the cosmic microwave background and find signatures of non-Gaussianities on large scales. Further tests are required to elucidate the origin of the detected anomalies.
Comparative analysis of used car price evaluation models
NASA Astrophysics Data System (ADS)
Chen, Chuancan; Hao, Lulu; Xu, Cong
2017-05-01
An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to predict used car prices in several articles. However, little work has compared different algorithms for used car price estimation. This paper collects more than 100,000 used car dealing records throughout China for an empirical comparison of two algorithms: linear regression and random forest. The two algorithms are used to predict used car prices in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but it shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.
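A minimal sketch of the kind of comparison described above, using scikit-learn; the file name and feature columns (mileage, age, engine displacement) are hypothetical stand-ins, not the paper's actual variables:

```python
# Hedged sketch: compare linear regression and random forest for used-car
# price prediction. Column names and the CSV file are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("used_cars.csv")  # hypothetical dealing-record table
X = df[["mileage_km", "age_years", "displacement_l"]]
y = df["price"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("linear", LinearRegression()),
                    ("random forest",
                     RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name}: MAE = {mae:.0f}")
```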
Scale-dependent correlation of seabirds with schooling fish in a coastal ecosystem
Schneider, David C.; Piatt, John F.
1986-01-01
The distribution of piscivorous seabirds relative to schooling fish was investigated by repeated censusing of 2 intersecting transects in the Avalon Channel, which carries the Labrador Current southward along the east coast of Newfoundland. Murres (primarily common murres Uria aalge), Atlantic puffins Fratercula arctica, and schooling fish (primarily capelin Mallotus villosus) were highly aggregated at spatial scales ranging from 0.25 to 15 km. Patchiness of murres, puffins and schooling fish was scale-dependent, as indicated by significantly higher variance-to-mean ratios at large measurement distances than at the minimum distance, 0.25 km. Patch scale of puffins ranged from 2.5 to 15 km, of murres from 3 to 8.75 km, and of schooling fish from 1.25 to 15 km. Patch scale of birds and schooling fish was similar in 6 out of 9 comparisons. Correlation between seabirds and schooling fish was significant at the minimum measurement distance in 6 out of 12 comparisons. Correlation was scale-dependent, as indicated by significantly higher coefficients at large measurement distances than at the minimum distance. Tracking scale, as indicated by the maximum significant correlation between birds and schooling fish, ranged from 2 to 6 km. Our analysis showed that extended aggregations of seabirds are associated with extended aggregations of schooling fish and that correlation of these marine carnivores with their prey is scale-dependent.
Saville, Benjamin R.; Herring, Amy H.; Kaufman, Jay S.
2013-01-01
Racial/ethnic disparities in birthweight are a large source of differential morbidity and mortality worldwide and have remained largely unexplained in epidemiologic models. We assess the impact of maternal ancestry and census tract residence on infant birth weights in New York City and the modifying effects of race and nativity by incorporating random effects in a multilevel linear model. Evaluating the significance of these predictors involves the test of whether the variances of the random effects are equal to zero. This is problematic because the null hypothesis lies on the boundary of the parameter space. We generalize an approach for assessing random effects in the two-level linear model to a broader class of multilevel linear models by scaling the random effects to the residual variance and introducing parameters that control the relative contribution of the random effects. After integrating over the random effects and variance components, the resulting integrals needed to calculate the Bayes factor can be efficiently approximated with Laplace’s method. PMID:24082430
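A minimal sketch of the scaled random-effects parameterization described above, in notation assumed here rather than taken from the paper:

```latex
% Two-level linear model with the random intercept scaled to the residual
% variance; \lambda controls the relative contribution of the random effect,
% and the boundary hypothesis \lambda = 0 recovers the model without it.
\begin{align}
  y_{ij} &= \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + b_i + \varepsilon_{ij},
  \qquad \varepsilon_{ij} \sim N(0,\sigma^{2}), \\
  b_i &\sim N(0,\lambda\,\sigma^{2}),
  \qquad H_0:\ \lambda = 0 \quad \text{vs.} \quad H_1:\ \lambda > 0.
\end{align}
```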
Resonance, criticality, and emergence in city traffic investigated in cellular automaton models.
Varas, A; Cornejo, M D; Toledo, B A; Muñoz, V; Rogan, J; Zarama, R; Valdivia, J A
2009-11-01
The complex behavior that occurs when traffic lights are synchronized is studied for a row of interacting cars. The system is modeled through a cellular automaton. Two strategies are considered: all lights in phase and a "green wave" with a propagating green signal. It is found that the mean velocity near the resonant condition follows a critical scaling law. For the green wave, it is shown that the mean velocity scaling law holds even for random separation between traffic lights and does not depend on the density. This independence of car density is broken when random perturbations are considered in the car velocity. Random velocity perturbations also have the effect of leading the system to an emergent state, where cars move in clusters, but with an average velocity which is independent of traffic light switching for large injection rates.
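A minimal sketch of a cellular-automaton traffic model with synchronized lights; the update rules and the green-wave phase delay below are illustrative placeholders, not the paper's exact automaton:

```python
# Hedged sketch: cars on a ring advance one cell per step when the cell ahead
# is free and the light there (if any) is green. The "green wave" delays each
# light's phase in proportion to its position along the road.
import numpy as np

L, N_CARS, LIGHT_SPACING, PERIOD, STEPS = 300, 60, 50, 20, 500
rng = np.random.default_rng(0)
road = np.zeros(L, dtype=bool)
road[rng.choice(L, N_CARS, replace=False)] = True
lights = np.arange(LIGHT_SPACING, L, LIGHT_SPACING)

def green(t, x, wave=True):
    phase = (x // LIGHT_SPACING) * 5 if wave else 0  # placeholder offset
    return ((t - phase) // PERIOD) % 2 == 0

moved = 0
for t in range(STEPS):
    new = road.copy()
    for x in np.flatnonzero(road):
        nxt = (x + 1) % L
        blocked = road[nxt] or (nxt in lights and not green(t, nxt))
        if not blocked:
            new[x], new[nxt] = False, True
            moved += 1
    road = new
print("mean velocity (cells/step):", moved / (N_CARS * STEPS))
```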
Cost-Effective Cloud Computing: A Case Study Using the Comparative Genomics Tool, Roundup
Kudtarkar, Parul; DeLuca, Todd F.; Fusaro, Vincent A.; Tonellato, Peter J.; Wall, Dennis P.
2010-01-01
Background Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource—Roundup—using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Methods Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon’s Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. Results We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon’s computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure. PMID:21258651
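The cost-saving idea of submitting jobs in a deliberate order rather than randomly can be sketched as follows; the runtime model, genome sizes, and cluster size are hypothetical stand-ins, not Roundup's actual predictor or configuration:

```python
# Hedged sketch: predict each comparison's runtime, then submit the longest
# predicted jobs first (LPT heuristic) so a fixed-size cluster drains with
# little idle tail, compared with an arbitrary submission order.
import heapq

def predicted_runtime(size_a, size_b):
    # Hypothetical model: runtime grows with the product of genome sizes.
    return 1e-13 * size_a * size_b

def makespan(jobs, n_workers):
    # Greedy list scheduling: give the next job to the least-loaded worker.
    workers = [0.0] * n_workers
    heapq.heapify(workers)
    for t in jobs:
        heapq.heappush(workers, heapq.heappop(workers) + t)
    return max(workers)

genomes = [(i, 2e6 + 1e5 * i) for i in range(40)]  # (id, size in bp)
jobs = [predicted_runtime(a[1], b[1])
        for a in genomes for b in genomes if a[0] < b[0]]

print(f"makespan, arbitrary order: {makespan(jobs, 16):.1f} h")
print(f"makespan, longest first:   {makespan(sorted(jobs, reverse=True), 16):.1f} h")
```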
Vita, Daniela; Passalacqua, Giovanni; Di Pasquale, Giuseppe; Caminiti, Lucia; Crisafulli, Giuseppe; Rulli, Imma; Pajno, Giovanni B
2007-11-01
Cow milk allergy is a common disease of infancy, often associated with atopic dermatitis (AD). Avoidance of cow milk (CM) implies the use of alternative dietary supports such as mammalian milks. In this study, we assessed the tolerability and clinical effect of ass's milk (AM) compared with the widely used goat's milk (GM) in a single-blind, controlled, randomized crossover study. Twenty-eight children with AD and ascertained allergy to CM were enrolled. The children were randomized to AM or GM for 6 months, then switched to the other milk for a further 3 months. The SCORAD index (SI) and a visual analog scale (VAS) were evaluated blindly. After termination of the study, food challenges with GM and AM were performed. An SDS-PAGE analysis of the different milks was performed. Two children from the GM group dropped out after randomization and 26 completed the study. Ass's milk invariably led to a significant improvement of SI and VAS of symptoms (p < 0.03 vs. baseline and inter-group), whereas GM had no measurable clinical effect. At the end of the study, 23 of 26 children had a positive food challenge with GM and one of 26 with AM. Ass's milk had a protein profile closer to human milk than GM. Ass's milk is better tolerated and more effective than GM in reducing symptoms of AD. It may represent a better substitute for CM than the currently used GM.
Stamatakis, Alexandros
2006-11-01
RAxML-VI-HPC (randomized axelerated maximum likelihood for high performance computing) is a sequential and parallel program for inference of large phylogenies with maximum likelihood (ML). Low-level technical optimizations, a modification of the search algorithm, and the use of the GTR+CAT approximation as a replacement for GTR+Gamma yield a program that is between 2.7 and 52 times faster than the previous version of RAxML. A large-scale performance comparison with GARLI, PHYML, IQPNNI and MrBayes on real data containing 1000 to 6722 taxa shows that RAxML requires at least 5.6 times less main memory than, and yields better trees in similar times to, the best competing program (GARLI) on datasets of up to 2500 taxa. On datasets of ≥4000 taxa it also runs 2-3 times faster than GARLI. RAxML has been parallelized with MPI to conduct parallel multiple bootstraps and inferences on distinct starting trees. The program has been used to compute ML trees on two of the largest alignments to date, containing 25,057 (1463 bp) and 2182 (51,089 bp) taxa, respectively. icwww.epfl.ch/~stamatak
A comparative analysis of rawinsonde and NIMBUS 6 and TIROS N satellite profile data
NASA Technical Reports Server (NTRS)
Scoggins, J. R.; Carle, W. E.; Knight, K.; Moyer, V.; Cheng, N. M.
1981-01-01
Comparisons are made between rawinsonde and satellite profiles in seven areas for a wide range of surface and weather conditions. Variables considered include temperature, dewpoint temperature, thickness, precipitable water, lapse rate of temperature, stability, geopotential height, mixing ratio, wind direction, wind speed, and kinematic parameters, including vorticity and the advection of vorticity and temperature. In addition, comparisons are made in the form of cross sections and synoptic fields for selected variables. Sounding data from the NIMBUS 6 and TIROS N satellites were used. Geostrophic wind computed from smoothed geopotential heights provided large scale flow patterns that agreed well with the rawinsonde wind fields. Surface wind patterns as well as magnitudes computed by use of the log law to extrapolate wind to a height of 10 m agreed with observations. Results of this study demonstrate rather conclusively that satellite profile data can be used to determine characteristics of large scale systems but that small scale features, such as frontal zones, cannot yet be resolved.
NASA Astrophysics Data System (ADS)
Khouider, B.; Goswami, B. B.; Majda, A.; Krishna, R. P. M. M.; Mukhopadhyay, P.
2016-12-01
Improvements in the capability of climate models to realistically capture the synoptic and intraseasonal variability associated with tropical rainfall are conditioned on improvements in the representation of subgrid variability due to organized convection and the underlying two-way interactions across multiple scales, thus breaking with the quasi-equilibrium bottleneck. By design, the stochastic multicloud model (SMCM) mimics the life cycle of organized tropical convective systems and the interactions of the associated cloud types with each other and with the large scales, as observed. It is based on a lattice particle-interaction model in which predefined microscopic (subgrid) sites make random transitions from one cloud type to another conditioned on the large-scale state. In return, the SMCM provides the cloud-type area fractions in the form of a Markov chain model, which can be run in parallel with the climate model without any significant computational overhead. The SMCM was previously successfully tested in both reduced-complexity tropical models and an aquaplanet global atmospheric model. Here, we report for the first time the results of its implementation in the fully coupled NCEP climate model (CFSv2), through the use of prescribed vertical profiles of heating and drying obtained from observations. While many known biases in CFSv2 are slightly improved, there is no noticeable degradation in the simulated mean climatology. Comparison with observations shows that the improvements in terms of synoptic and intraseasonal variability are spectacular, despite the fact that CFSv2 is one of the best models in this regard. In particular, while CFSv2 exaggerates the intraseasonal variance at the expense of the synoptic contribution, the CFS-SMCM shows a good balance between the two, as in the observations.
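A minimal sketch of the kind of conditional Markov chain the SMCM provides; the cloud types follow the usual multicloud convention, but the transition rates and the scalar "instability" forcing below are invented placeholders, not the published SMCM rates:

```python
# Hedged sketch: lattice sites switch among cloud types (0=clear,
# 1=congestus, 2=deep, 3=stratiform) with transition probabilities that
# depend on a large-scale state; area fractions are read off the lattice.
import numpy as np

N_SITES, STEPS = 10_000, 100
rng = np.random.default_rng(1)
sites = np.zeros(N_SITES, dtype=int)  # all clear initially

def transition_matrix(instability):
    # Placeholder rates: deeper convection becomes likelier as the
    # large-scale instability grows. Rows sum to 1.
    p = min(0.2 * instability, 0.5)
    return np.array([[1 - p, p,        0.0, 0.0 ],   # clear
                     [0.05,  0.85 - p, p,   0.10],   # congestus
                     [0.0,   0.0,      0.7, 0.30],   # deep
                     [0.40,  0.0,      0.0, 0.60]])  # stratiform

for t in range(STEPS):
    instability = 1.0 + np.sin(2 * np.pi * t / STEPS)  # toy forcing
    P = transition_matrix(instability)
    u = rng.random(N_SITES)
    cum = P[sites].cumsum(axis=1)            # per-site transition CDF
    sites = (u[:, None] > cum).sum(axis=1)   # sample next cloud type

fractions = np.bincount(sites, minlength=4) / N_SITES
print("area fractions (clear, congestus, deep, stratiform):", fractions)
```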
NASA Astrophysics Data System (ADS)
Song, Dawei; Ponte Castañeda, P.
2018-06-01
We make use of the recently developed iterated second-order homogenization method to obtain finite-strain constitutive models for the macroscopic response of porous polycrystals consisting of large pores randomly distributed in a fine-grained polycrystalline matrix. The porous polycrystal is modeled as a three-scale composite, where the grains are described by single-crystal viscoplasticity and the pores are assumed to be large compared to the grain size. The method makes use of a linear comparison composite (LCC) with the same substructure as the actual nonlinear composite, but whose local properties are chosen optimally via a suitably designed variational statement. In turn, the effective properties of the resulting three-scale LCC are determined by means of a sequential homogenization procedure, utilizing the self-consistent estimates for the effective behavior of the polycrystalline matrix and the Willis estimates for the effective behavior of the porous composite. The iterated homogenization procedure allows for a more accurate characterization of the properties of the matrix by means of a finer "discretization" of the properties of the LCC, yielding improved estimates, especially at low porosities, high nonlinearities, and high triaxialities. In addition, consistent homogenization estimates for the average strain rate and spin fields in the pores and grains are used to develop evolution laws for the substructural variables, including the porosity, pore shape and orientation, as well as the "crystallographic" and "morphological" textures of the underlying matrix. In Part II of this work, which has appeared in Song and Ponte Castañeda (2018b), the model is used to generate estimates for both the instantaneous effective response and the evolution of the microstructure for porous FCC and HCP polycrystals under various loading conditions.
Motion estimation under location uncertainty for turbulent fluid flows
NASA Astrophysics Data System (ADS)
Cai, Shengze; Mémin, Etienne; Dérian, Pierre; Xu, Chao
2018-01-01
In this paper, we propose a novel optical flow formulation for estimating two-dimensional velocity fields from an image sequence depicting the evolution of a passive scalar transported by a fluid flow. This motion estimator relies on a stochastic representation of the flow that allows a notion of uncertainty in the flow measurement to be incorporated naturally. In this context, the Eulerian fluid flow velocity field is decomposed into two components: a large-scale motion field and a small-scale uncertainty component. We define the small-scale component as a random field. Subsequently, the data term of the optical flow formulation is based on a stochastic transport equation, derived from the formalism under location uncertainty proposed in Mémin (Geophys Astrophys Fluid Dyn 108(2):119-146, 2014) and Resseguier et al. (Geophys Astrophys Fluid Dyn 111(3):149-176, 2017a). In addition, a specific regularization term built from the assumption of constant kinetic energy involves the very same diffusion tensor as the one appearing in the data transport term. In contrast to classical motion estimators, this enables us to devise an optical flow method dedicated to fluid flows in which the regularization parameter now has a clear physical interpretation and can be easily estimated. Experimental evaluations are presented on both synthetic and real-world image sequences. Results and comparisons indicate very good performance of the proposed formulation for turbulent flow motion estimation.
Recchia, Gabriel; Sahlgren, Magnus; Kanerva, Pentti; Jones, Michael N.
2015-01-01
Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics. PMID:25954306
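A minimal sketch of the two binding operators under comparison, assuming random Gaussian vectors (a common convention in holographic/vector-space memory models, not necessarily the paper's exact setup):

```python
# Hedged sketch: bind a pair (a, b) by circular convolution (via FFT) and by
# random permutation, then probe each trace for b given the cue a.
import numpy as np

rng = np.random.default_rng(2)
D = 4096
a, b = rng.normal(0, 1 / np.sqrt(D), (2, D))

# Circular convolution binding; unbinding uses the approximate inverse of a
# (its involution), so retrieval is noisy but recognizable.
trace_conv = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
a_inv = np.concatenate(([a[0]], a[:0:-1]))
b_hat_conv = np.real(np.fft.ifft(np.fft.fft(a_inv) * np.fft.fft(trace_conv)))

# Permutation binding: store perm_a(b); unbinding inverts the permutation,
# so retrieval of a single pair is exact.
perm_a = rng.permutation(D)        # permutation associated with cue a
trace_perm = b[perm_a]
b_hat_perm = np.empty(D)
b_hat_perm[perm_a] = trace_perm

cos = lambda x, y: x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print("convolution retrieval cosine:", cos(b_hat_conv, b))
print("permutation retrieval cosine:", cos(b_hat_perm, b))
```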
NASA Astrophysics Data System (ADS)
Longoria, Raul Gilberto
An experimental apparatus has been developed which can be used to generate a general time-dependent planar flow across a cylinder. A mass of water enclosed with no free surface within a square cross-section tank and two spring pre-loaded pistons is oscillated using a hydraulic actuator. A circular cylinder is suspended horizontally in the tank by two X-Y force transducers used to simultaneously measure the total in-line and transverse forces. Fluid motion is measured using a differential pressure transducer for instantaneous acceleration and an LVDT for displacement. This investigation provides measurements of forces on cylinders subjected to planar fluid flow with a time (and frequency) dependence that more accurately represents the random conditions encountered in a natural ocean environment. The use of the same apparatus for both sinusoidal and random experiments provides a quantified assessment of the applicability of sinusoidal planar oscillatory flow data in offshore structure design methods. The drag and inertia coefficients for a Morison equation representation of the inline force are presented for both sinusoidal and random flow. Comparison of the sinusoidal results is favorable with those of previous investigations. The results from the random experiments illustrate the difference in the force mechanism by contrasting the force transfer coefficients for the inline and transverse forces. It is found that application of sinusoidal results to random hydrodynamic inline force prediction using the Morison equation incorrectly weights the drag and inertia components, and the transverse force is overpredicted. The use of random planar oscillatory flow in the laboratory, contrasted with sinusoidal planar oscillatory flow, quantifies the accepted belief that the force transfer coefficients from sinusoidal flow experiments are conservative for prediction of forces on cylindrical structures subjected to random sea waves. Further analysis of the data is conducted in the frequency domain to illustrate models used for predicting the power spectral density of the inline force, including a nonlinear describing function method. It is postulated that the large-scale vortex activity prominent in sinusoidal oscillatory flow is subdued in random flow conditions.
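For reference, the Morison representation of the in-line force referred to above has the standard form (notation assumed here):

```latex
% Morison equation for the in-line force per unit length on a cylinder of
% diameter D in unsteady flow u(t); C_D and C_M are the drag and inertia
% coefficients fitted from the experiments, \rho is the fluid density.
F(t) = \tfrac{1}{2}\,\rho\, C_D\, D\, u(t)\,\lvert u(t)\rvert
     + \rho\, C_M\, \frac{\pi D^{2}}{4}\, \dot{u}(t)
```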
Klein, Julie; Eales, James; Zürbig, Petra; Vlahou, Antonia; Mischak, Harald; Stevens, Robert
2013-04-01
In this study, we have developed Proteasix, an open-source peptide-centric tool that can be used to predict in silico the proteases involved in naturally occurring peptide generation. We developed a curated cleavage site (CS) database containing 3500 entries on human protease/CS combinations. On top of this database, we built a tool, Proteasix, which allows CS retrieval and protease association from a list of peptides. To establish the proof of concept of the approach, we used a list of 1388 peptides identified from human urine samples and compared the prediction to the analysis of 1003 randomly generated amino acid sequences. Metalloprotease activity was predominantly involved in urinary peptide generation, particularly for peptides associated with extracellular matrix remodelling, compared with proteins from other origins. In comparison, random sequences returned almost no results, highlighting the specificity of the prediction. This study provides a tool that can facilitate the linking of identified protein fragments to predicted protease activity, and therefore to presumed mechanisms of disease. Experiments are needed to confirm the in silico hypotheses; nevertheless, this approach may be of great help in better understanding molecular mechanisms of disease and in defining new biomarkers and therapeutic targets. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Shang, H.; Chen, L.; Bréon, F. M.; Letu, H.; Li, S.; Wang, Z.; Su, L.
2015-11-01
The principles of cloud droplet size retrieval via Polarization and Directionality of the Earth's Reflectance (POLDER) require that clouds be horizontally homogeneous. The retrieval is performed by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval and analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-grid-scale variability in droplet effective radius (CDR) can significantly reduce the number of valid retrievals and introduce small biases into the CDR (~1.5 μm) and effective variance (EV) estimates. Nevertheless, the sub-grid-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval using limited observations is accurate and largely free of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, measurements in the primary rainbow region (137-145°) are used to ensure retrievals of large droplets (> 15 μm) and to reduce the uncertainties caused by cloud heterogeneity. We apply the improved method to the POLDER global L1B data from June 2008, and the new CDR results are compared with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets because the cloudbow oscillations in the scattering-angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Finally, a sub-grid-scale retrieval case demonstrates that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size distribution parameters from POLDER measurements.
Smith, Neil R; Clark, Charlotte; Fahy, Amanda E; Tharmaratnam, Vanathi; Lewis, Daniel J; Thompson, Claire; Renton, Adrian; Moore, Derek G; Bhui, Kamaldeep S; Taylor, Stephanie J C; Eldridge, Sandra; Petticrew, Mark; Greenhalgh, Tricia; Stansfeld, Stephen A; Cummins, Steven
2012-01-01
Recent systematic reviews suggest that there is a dearth of evidence on the effectiveness of large-scale urban regeneration programmes in improving health and well-being and alleviating health inequalities. The development of the Olympic Park in Stratford for the London 2012 Olympic and Paralympic Games provides the opportunity to take advantage of a natural experiment to examine the impact of large-scale urban regeneration on the health and well-being of young people and their families. The study is a prospective school-based survey of adolescents (11-12 years), with parent data collected through face-to-face interviews at home. Adolescents will be recruited from six randomly selected schools in an area receiving large-scale urban regeneration (London Borough of Newham) and compared with adolescents in 18 schools in three comparison areas with no equivalent regeneration (London Boroughs of Tower Hamlets, Hackney and Barking & Dagenham). Baseline data collection will be completed prior to the start of the London Olympics (July 2012), with follow-up at 6 and 18 months post-intervention. Primary outcomes are: pre-post change in adolescent and parent mental health and well-being, physical activity and parental employment status. Secondary outcomes include: pre-post change in social cohesion, smoking, alcohol use, diet and body mass index. The study will account for individual and environmental contextual effects in evaluating changes to identified outcomes. A nested longitudinal qualitative study will explore families' experiences of regeneration in order to unpack the process by which regeneration impacts on health and well-being. The study has approval from Queen Mary University of London Ethics Committee (QMREC2011/40), the Association of Directors of Children's Services (RGE110927) and the London Boroughs Research Governance Framework (CERGF113). Fieldworkers have had advanced Criminal Records Bureau clearance. Findings will be disseminated through peer-reviewed publications, national and international conferences, through participating schools and the study website (http://www.orielproject.co.uk).
Lundh, Anna; Kowalski, Jan; Sundberg, Carl Johan; Landén, Mikael
2012-11-01
The aim of this study was to compare two methods of conducting CGAS rater training. A total of 648 raters were randomized to training (CD or seminar) and rated five cases before and 12 months after training. The ICC at baseline/end of study was 0.71/0.78 (seminar), 0.76/0.78 (CD), and 0.67/0.79 (comparison). There were no differences in training effect in terms of agreement with expert ratings, which speaks in favor of using the less resource-demanding CD. However, the effect was modest in both groups, and the untrained comparison group improved by a similar order of magnitude, which suggests that more extensive training is needed.
Random Weighting, Strong Tracking, and Unscented Kalman Filter for Soft Tissue Characterization.
Shin, Jaehyun; Zhong, Yongmin; Oetomo, Denny; Gu, Chengfan
2018-05-21
This paper presents a new nonlinear filtering method based on the Hunt-Crossley model for online nonlinear soft tissue characterization. The method overcomes the problem of performance degradation in the unscented Kalman filter due to contact model error. It adopts the concept of Mahalanobis distance to identify contact model error and further incorporates a scaling factor in the predicted state covariance to compensate for the identified model error. The scaling factor is determined according to the principle of innovation orthogonality to avoid the cumbersome computation of the Jacobian matrix, and the random weighting concept is adopted to improve the estimation accuracy of the innovation covariance. A master-slave robotic indentation system is developed to validate the performance of the proposed method. Simulation and experimental results, as well as comparison analyses, demonstrate the efficacy of the proposed method for online characterization of soft tissue parameters in the presence of contact model error.
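A generic illustration of the Mahalanobis-distance gate and covariance inflation described above, in a simplified linear-measurement setting; this is a sketch of the general idea, not the paper's exact random-weighting algorithm:

```python
# Hedged sketch: test the innovation with a Mahalanobis distance against a
# chi-square gate; on failure (model error suspected), inflate the predicted
# covariance by a scalar factor before the Kalman update.
import numpy as np
from scipy.stats import chi2

def gated_update(x_pred, P_pred, z, H, R, alpha=0.05):
    y = z - H @ x_pred                        # innovation
    S = H @ P_pred @ H.T + R                  # innovation covariance
    d2 = float(y @ np.linalg.solve(S, y))     # squared Mahalanobis distance
    gate = chi2.ppf(1 - alpha, df=len(z))
    if d2 > gate:                             # model error detected
        P_pred = (d2 / gate) * P_pred         # simple inflation factor
        S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)       # gain
    return x_pred + K @ y, P_pred - K @ H @ P_pred

x, P = gated_update(np.zeros(2), np.eye(2), np.array([1.2]),
                    np.array([[1.0, 0.0]]), np.eye(1) * 0.1)
print(x, P)
```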
A randomized controlled trial comparing EMDR and CBT for obsessive-compulsive disorder.
Marsden, Zoe; Lovell, Karina; Blore, David; Ali, Shehzad; Delgadillo, Jaime
2018-01-01
This study aimed to evaluate eye movement desensitization and reprocessing (EMDR) as a treatment for obsessive-compulsive disorder (OCD), by comparison to cognitive behavioural therapy (CBT) based on exposure and response prevention. This was a pragmatic, feasibility randomized controlled trial in which 55 participants with OCD were randomized to EMDR (n = 29) or CBT (n = 26). The Yale-Brown obsessive-compulsive scale was completed at baseline, after treatment and at 6 months follow-up. Treatment completion and response rates were compared using chi-square tests. Effect size was examined using Cohen's d and multilevel modelling. Overall, 61.8% completed treatment and 30.2% attained reliable and clinically significant improvement in OCD symptoms, with no significant differences between groups (p > .05). There were no significant differences between groups in Yale-Brown obsessive-compulsive scale severity post-treatment (d = -0.24, p = .38) or at 6 months follow-up (d = -0.03, p = .90). EMDR and CBT had comparable completion rates and clinical outcomes. Copyright © 2017 John Wiley & Sons, Ltd.
Evaluation of uncertainty in the adjustment of fundamental constants
NASA Astrophysics Data System (ADS)
Bodnar, Olha; Elster, Clemens; Fischer, Joachim; Possolo, Antonio; Toman, Blaza
2016-02-01
Combining multiple measurement results for the same quantity is an important task in metrology and in many other areas. Examples include the determination of fundamental constants, the calculation of reference values in interlaboratory comparisons, or the meta-analysis of clinical studies. However, neither the GUM nor its supplements give any guidance for this task. Various approaches are applied such as weighted least-squares in conjunction with the Birge ratio or random effects models. While the former approach, which is based on a location-scale model, is particularly popular in metrology, the latter represents a standard tool used in statistics for meta-analysis. We investigate the reliability and robustness of the location-scale model and the random effects model with particular focus on resulting coverage or credible intervals. The interval estimates are obtained by adopting a Bayesian point of view in conjunction with a non-informative prior that is determined by a currently favored principle for selecting non-informative priors. Both approaches are compared by applying them to simulated data as well as to data for the Planck constant and the Newtonian constant of gravitation. Our results suggest that the proposed Bayesian inference based on the random effects model is more reliable and less sensitive to model misspecifications than the approach based on the location-scale model.
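A minimal sketch of the two approaches being compared, for measured values x_i with stated uncertainties u_i; these are the standard textbook formulas (with a DerSimonian-Laird moment estimator standing in for the paper's Bayesian treatment), and the numbers are made-up illustrative values:

```python
# Hedged sketch: combine measurements by (a) weighted least-squares with the
# Birge ratio and (b) a simple random-effects (DerSimonian-Laird) estimate.
import numpy as np

x = np.array([6.67430, 6.67408, 6.67554, 6.67191])   # illustrative values
u = np.array([0.00015, 0.00031, 0.00016, 0.00099])   # stated uncertainties

w = 1 / u**2
xbar = np.sum(w * x) / np.sum(w)                     # weighted mean
q = np.sum(w * (x - xbar) ** 2)                      # Cochran's Q
birge = np.sqrt(q / (len(x) - 1))                    # Birge ratio
u_wls = birge / np.sqrt(np.sum(w))                   # inflated WLS uncertainty

# Between-measurement variance tau^2 (moment estimator), then re-weight.
tau2 = max(0.0, (q - (len(x) - 1)) /
           (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (u**2 + tau2)
xbar_re = np.sum(w_re * x) / np.sum(w_re)
u_re = 1 / np.sqrt(np.sum(w_re))

print(f"WLS/Birge:      {xbar:.5f} +/- {u_wls:.5f}  (R_B = {birge:.2f})")
print(f"random effects: {xbar_re:.5f} +/- {u_re:.5f}")
```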
Fang, Jianqiao; Chen, Lifang; Ma, Ruijie; Keeler, Crystal Lynn; Shen, Laihua; Bao, Yehua; Xu, Shouyu
2016-01-01
To determine whether integrative medicine rehabilitation (IMR), which combines conventional rehabilitation (CR) with acupuncture and Chinese herbal medicine, has better effects for subacute stroke than CR alone, we conducted a multicenter randomized controlled trial involving three hospitals in China. Three hundred sixty patients with subacute stroke were randomized into IMR and CR groups. The primary outcome was the Modified Barthel Index (MBI). The secondary outcomes were the National Institutes of Health Stroke Scale (NIHSS), the Fugl-Meyer Assessment (FMA), the Mini-Mental State Examination (MMSE), the Montreal Cognitive Assessment (MoCA), Hamilton's Depression Scale (HAMD), and the Self-Rating Depression Scale (SDS). All variables were evaluated at week 0 (baseline), week 4 (halfway through the intervention), week 8 (after treatment), and week 20 (follow-up). In comparison with the CR group, the IMR group had significantly better improvements (P < 0.01 or P < 0.05) in all primary and secondary outcomes. There were also significantly better changes from baseline in these outcomes in the IMR group than in the CR group (P < 0.01). A low incidence of adverse events with mild symptoms was observed in the IMR group. We conclude that conventional rehabilitation combined with integrative medicine is safe and more effective for subacute stroke rehabilitation. PMID:27174221
NASA Technical Reports Server (NTRS)
Ponomarev, A. L.; Brenner, D.; Hlatky, L. R.; Sachs, R. K.
2000-01-01
DNA double-strand breaks (DSBs) produced by densely ionizing radiation are not located randomly in the genome: recent data indicate DSB clustering along chromosomes. Stochastic DSB clustering at large scales, from > 100 Mbp down to < 0.01 Mbp, is modeled using computer simulations and analytic equations. A random-walk, coarse-grained polymer model for chromatin is combined with a simple track structure model in Monte Carlo software called DNAbreak and is applied to data on alpha-particle irradiation of V-79 cells. The chromatin model neglects molecular details but systematically incorporates an increase in average spatial separation between two DNA loci as the number of base-pairs between the loci increases. Fragment-size distributions obtained using DNAbreak match data on large fragments about as well as distributions previously obtained with a less mechanistic approach. Dose-response relations, linear at small doses of high linear energy transfer (LET) radiation, are obtained. They are found to be non-linear when the dose becomes so large that there is a significant probability of overlapping or close juxtaposition, along one chromosome, for different DSB clusters from different tracks. The non-linearity is more evident for large fragments than for small. The DNAbreak results furnish an example of the RLC (randomly located clusters) analytic formalism, which generalizes the broken-stick fragment-size distribution of the random-breakage model that is often applied to low-LET data.
Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...
2018-01-30
This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10⁹ unknowns.
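A minimal 1-D sketch of the sampling idea (solving a reaction–diffusion equation with a white-noise right-hand side to draw a Matérn-type Gaussian field), using simple finite differences rather than the paper's mixed finite elements and embedded meshes:

```python
# Hedged sketch: draw a Gaussian random field on [0, 1] by solving
# (kappa^2 - d^2/dx^2) u = W with discretized white noise W and homogeneous
# Dirichlet boundaries; a 1-D stand-in for the sampler described above.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 1000
h = 1.0 / n
kappa = 20.0                                   # inverse correlation length
rng = np.random.default_rng(3)

main = kappa**2 + 2.0 / h**2                   # (kappa^2 I - Laplacian)
A = diags([main * np.ones(n - 1),
           -np.ones(n - 2) / h**2,
           -np.ones(n - 2) / h**2], [0, -1, 1], format="csc")
w = rng.normal(0.0, 1.0 / np.sqrt(h), n - 1)   # discretized white noise
u = spsolve(A, w)
print("field standard deviation:", u.std())
```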
McEwen, Sara; Polatajko, Helene; Baum, Carolyn; Rios, Jorge; Cirone, Dianne; Doherty, Meghan; Wolf, Timothy
2014-01-01
Purpose The purpose of this study was to estimate the effect of the Cognitive Orientation to daily Occupational Performance (CO-OP) approach compared to usual outpatient rehabilitation on activity and participation in people less than 3 months post stroke. Methods An exploratory, single blind, randomized controlled trial with a usual care control arm was conducted. Participants referred to 2 stroke rehabilitation outpatient programs were randomized to receive either Usual Care or CO-OP. The primary outcome was actual performance of trained and untrained self-selected activities, measured using the Performance Quality Rating Scale (PQRS). Additional outcomes included the Canadian Occupational Performance Measure (COPM), the Stroke Impact Scale Participation Domain, the Community Participation Index, and the Self Efficacy Gauge. Results Thirty-five (35) eligible participants were randomized; 26 completed the intervention. Post-intervention, PQRS change scores demonstrated CO-OP had a medium effect over Usual Care on trained self-selected activities (d=0.5) and a large effect on untrained (d=1.2). At a 3 month follow-up, PQRS change scores indicated a large effect of CO-OP on both trained (d=1.6) and untrained activities (d=1.1). CO-OP had a small effect on COPM and a medium effect on the Community Participation Index perceived control and the Self-Efficacy Gauge. Conclusion CO-OP was associated with a large treatment effect on follow up performances of self-selected activities, and demonstrated transfer to untrained activities. A larger trial is warranted. PMID:25416738
McEwen, Sara; Polatajko, Helene; Baum, Carolyn; Rios, Jorge; Cirone, Dianne; Doherty, Meghan; Wolf, Timothy
2015-07-01
The purpose of this study was to estimate the effect of the Cognitive Orientation to daily Occupational Performance (CO-OP) approach compared with usual outpatient rehabilitation on activity and participation in people <3 months poststroke. An exploratory, single-blind, randomized controlled trial, with a usual-care control arm, was conducted. Participants referred to 2 stroke rehabilitation outpatient programs were randomized to receive either usual care or CO-OP. The primary outcome was actual performance of trained and untrained self-selected activities, measured using the Performance Quality Rating Scale (PQRS). Additional outcomes included the Canadian Occupational Performance Measure (COPM), the Stroke Impact Scale Participation Domain, the Community Participation Index, and the Self-Efficacy Gauge. A total of 35 eligible participants were randomized; 26 completed the intervention. Post intervention, PQRS change scores demonstrated that CO-OP had a medium effect over usual care on trained self-selected activities (d = 0.5) and a large effect on untrained activities (d = 1.2). At a 3-month follow-up, PQRS change scores indicated a large effect of CO-OP on both trained (d = 1.6) and untrained activities (d = 1.1). CO-OP had a small effect on COPM and a medium effect on the Community Participation Index perceived control and on the Self-Efficacy Gauge. CO-OP was associated with a large treatment effect on follow-up performances of self-selected activities and demonstrated transfer to untrained activities. A larger trial is warranted. © The Author(s) 2014.
Continuous-Time Random Walk with multi-step memory: an application to market dynamics
NASA Astrophysics Data System (ADS)
Gubiec, Tomasz; Kutner, Ryszard
2017-11-01
An extended version of the Continuous-Time Random Walk (CTRW) model with memory is herein developed. This memory involves the dependence between an arbitrary number of successive jumps of the process, while waiting times between jumps are considered i.i.d. random variables. This dependence was established by analyzing empirical histograms for the stochastic process of a single share price on a market at the high-frequency time scale. It was then justified theoretically by considering the bid-ask bounce mechanism, which contains a delay characteristic of any double-auction market. Our model turns out to be exactly analytically solvable. It therefore enables a direct comparison of its predictions with their empirical counterparts, for instance the empirical velocity autocorrelation function. The present research thus significantly extends the capabilities of the CTRW formalism. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
Quantum probability, choice in large worlds, and the statistical structure of reality.
Ross, Don; Ladyman, James
2013-06-01
Classical probability models of incentive response are inadequate in "large worlds," where the dimensions of relative risk and the dimensions of similarity in outcome comparisons typically differ. Quantum probability models for choice in large worlds may be motivated pragmatically - there is no third theory - or metaphysically: statistical processing in the brain adapts to the true scale-relative structure of the universe.
Dispersion in Fractures with Ramified Dissolution Patterns
NASA Astrophysics Data System (ADS)
Xu, Le; Marks, Benjy; Toussaint, Renaud; Flekkøy, Eirik G.; Måløy, Knut J.
2018-04-01
The injection of a reactive fluid into an open fracture may modify the fracture surface locally and create a ramified structure around the injection point. This structure will have a significant impact on the dispersion of the injected fluid due to increased permeability, which will introduce large velocity fluctuations into the fluid. Here, we have injected a fluorescent tracer fluid into a transparent artificial fracture with such a ramified structure. The transparency of the model makes it possible to follow the detailed dispersion of the tracer concentration. The experiments have been compared to two-dimensional (2D) computer simulations which include both convective motion and molecular diffusion. A comparison was also performed between the dispersion from an initially ramified dissolution structure and the dispersion from an initially circular region. A significant difference was seen at both small and large length scales. At large length scales, the persistence of the anisotropy of the concentration distribution far from the ramified structure is discussed with reference to some theoretical considerations and comparison with simulations.
Warris, Sven; Boymans, Sander; Muiser, Iwe; Noback, Michiel; Krijnen, Wim; Nap, Jan-Peter
2014-01-13
Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. This allows on-the-fly calculation of the normal distribution for any candidate sequence composition. The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification.
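A minimal sketch of the interpolation idea described above; the lookup-table values are invented placeholders, not the pre-calculated distributions of the paper:

```python
# Hedged sketch: given precomputed normal parameters (mean, std) of MFE for
# randomized sequences at several nucleotide compositions, estimate a
# P-value for a candidate's observed MFE on the fly by interpolation.
import numpy as np
from scipy.stats import norm

# Hypothetical lookup: GC fraction -> (mean MFE, std) at fixed length.
gc_grid = np.array([0.3, 0.4, 0.5, 0.6, 0.7])
mu_grid = np.array([-18.0, -23.0, -29.0, -36.0, -44.0])
sd_grid = np.array([4.0, 4.5, 5.0, 5.5, 6.0])

def mfe_pvalue(mfe, gc):
    mu = np.interp(gc, gc_grid, mu_grid)    # interpolate distribution params
    sd = np.interp(gc, gc_grid, sd_grid)
    return norm.cdf(mfe, loc=mu, scale=sd)  # P(random MFE <= observed MFE)

print(mfe_pvalue(mfe=-42.0, gc=0.52))       # small value -> unusually stable
```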
Global diffusion of cosmic rays in random magnetic fields
NASA Astrophysics Data System (ADS)
Snodin, A. P.; Shukurov, A.; Sarson, G. R.; Bushby, P. J.; Rodrigues, L. F. S.
2016-04-01
The propagation of charged particles, including cosmic rays, in a partially ordered magnetic field is characterized by a diffusion tensor whose components depend on the particle's Larmor radius R_L and the degree of order in the magnetic field. Most studies of particle diffusion presuppose a scale separation between the mean and random magnetic fields (e.g. there being a pronounced minimum in the magnetic power spectrum at intermediate scales). Scale separation is often a good approximation in laboratory plasmas, but not in most astrophysical environments such as the interstellar medium (ISM). Modern simulations of the ISM have numerical resolution of the order of 1 pc, so the Larmor radius of the cosmic rays that dominate in energy density is at least 10⁶ times smaller than the resolved scales. Large-scale simulations of cosmic ray propagation in the ISM thus rely on oversimplified forms of the diffusion tensor. We take the first steps towards a more realistic description of cosmic ray diffusion for such simulations, obtaining direct estimates of the diffusion tensor from test particle simulations in random magnetic fields (with the Larmor radius scale being fully resolved), for a range of particle energies corresponding to 10⁻² ≲ R_L/l_c ≲ 10³, where l_c is the magnetic correlation length. We obtain explicit expressions for the cosmic ray diffusion tensor for R_L/l_c ≪ 1 that might be used in a sub-grid model of cosmic ray diffusion. The diffusion coefficients obtained are closely connected with existing transport theories that include the random walk of magnetic lines.
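The diffusion tensor estimated from the test-particle runs is presumably the standard running-diffusion definition (notation assumed here):

```latex
% Diffusion tensor from particle displacements \Delta x_i(t), with the
% average taken over an ensemble of test particles and field realizations.
D_{ij} \;=\; \lim_{t \to \infty}
  \frac{\langle \Delta x_i(t)\, \Delta x_j(t) \rangle}{2t}
```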
Global Behavior in Large Scale Systems
2013-12-05
Air Force Research Laboratory, AF Office of Scientific Research (AFOSR), Arlington, VA. This research attained two main achievements: … microscopic random interactions among the agents. In this research we considered two main problems: 1) large deviation error performance in …
Spielmanns, M; Müller, K; Schott, N; Winkler, A; Polanski, H; Nell, C; Boeselt, T; Koczulla, A R; Storre, J H; Windisch, W; Magnet, F S; Baum, K
2017-06-01
Objective: Exercise training is a cornerstone of pulmonary rehabilitation (PR) in COPD patients. However, the components of the training have not yet been fully investigated. We conducted a randomized controlled trial to investigate the effectiveness of sensory-motoric training (SMT) in comparison to conventional strength training (KT) with respect to physical performance. Patients and Methods: 43 COPD patients were randomized and participated either in the intervention group (SMT = 30 minutes of SMT per day) or in the control group (KT = 30 minutes of KT per day). The SMT was performed as circuit training with five stations. The primary endpoint was the intergroup difference between T1 (start of PR) and T2 (end of PR) in the 5-Times Sit-to-Stand Test (5-STST). Secondary endpoints were the intra- and intergroup comparisons of T1 and T2 in the 6-Minute Walk Test (6-MWT), COPD Assessment Test (CAT), St. George's Respiratory Questionnaire (SGRQ), Hospital Anxiety and Depression Scale (HADS), and lung function. Results: No significant differences were seen between the groups in the 5-STST, nor in the 6-MWT, SGRQ, CAT, HADS, or lung function. The intragroup comparison between T1 and T2 showed significant differences in the 5-STST, 6-MWT, SGRQ, CAT, and HADS in both groups. The differences in lung function were not significant in either the inter- or the intragroup comparison. Conclusion: Improvements in exercise capacity similar to those achieved with conventional strength training can be achieved with SMT during PR in COPD patients. Further studies are necessary to define the role of SMT with regard to postural control. © Georg Thieme Verlag KG Stuttgart · New York.
Chemical Processing of Electrons and Holes.
ERIC Educational Resources Information Center
Anderson, Timothy J.
1990-01-01
Presents a synopsis of four lectures given in an elective senior-level electronic material processing course to introduce solid state electronics. Provides comparisons of a large scale chemical processing plant and an integrated circuit. (YP)
A comparison of three methods for measuring local urban tree canopy cover
Kristen L. King; Dexter H. Locke
2013-01-01
Measurements of urban tree canopy cover are crucial for managing urban forests and required for the quantification of the benefits provided by trees. These types of data are increasingly used to secure funding and justify large-scale planting programs in urban areas. Comparisons of tree canopy measurement methods have been conducted before, but a rapidly evolving set...
NASA Technical Reports Server (NTRS)
Kashlinsky, A.
1992-01-01
This study presents a method for obtaining the true rms peculiar flow in the universe on scales up to 100-120/h Mpc using APM data as an input assuming only that peculiar motions are caused by peculiar gravity. The comparison to the local (Great Attractor) flow is expected to give clear information on the density parameter, Omega, and the local bias parameter, b. The observed peculiar flows in the Great Attractor region are found to be in better agreement with the open (Omega = 0.1) universe in which light traces mass (b = 1) than with a flat (Omega = 1) universe unless the bias parameter is unrealistically large (b is not less than 4). Constraints on Omega from a comparison of the APM and PV samples are discussed.
Weiser, Mark; Levi, Linda; Burshtein, Shimon; Hagin, Michal; Matei, Valentin P; Podea, Delia; Micluția, Ioana; Tiugan, Alexandru; Păcală, Bogdan; Grecu, Iosif Gabos; Noy, Adam; Zamora, Daisy; Davis, John M
2017-07-01
Several single-center studies have found raloxifene, an estrogen agonist, to be effective in ameliorating symptoms of schizophrenia in stable patients as augmentation of antipsychotics. This multicenter study assessed whether raloxifene plus antipsychotic treatment, in comparison to placebo plus antipsychotics, improves symptoms or cognition in severely ill decompensated schizophrenia patients. In this 16-week, double-blind, randomized, placebo-controlled study, 200 severely ill, decompensated postmenopausal women who met DSM-IV-TR criteria for schizophrenia or schizoaffective disorder were recruited from January 2011 to December 2012 and were randomized to receive either raloxifene 120 mg/d plus antipsychotics or placebo plus antipsychotics. The primary outcome measure was Positive and Negative Syndrome Scale (PANSS) total score at the end of the trial. The placebo plus antipsychotics group experienced statistically significant improvement in PANSS total score (P < .001) compared to the raloxifene plus antipsychotics group, using mixed models for repeated measures, with results favoring placebo by 4.5 points (95% CI, 2.3-6.7). These results were clearly outside the 95% confidence interval. This negative effect was more pronounced in patients who had more frequent relapses and in those with baseline PANSS scores of 100 or higher. There were no differences between groups in Clinical Global Impression Scale-Severity scores or Composite Brief Assessment of Cognition in Schizophrenia scores at 16 weeks (P > .3). Baseline follicle-stimulating hormone and estradiol levels did not alter the drug-placebo differences. Individuals in the active treatment arm showed worse outcome than those in the placebo arm, most likely as a result of chance variation, but the results unequivocally show no benefit of antipsychotics plus raloxifene versus antipsychotics plus placebo in this large randomized, double-blind, placebo-controlled trial in postmenopausal women. These data do not support the use of raloxifene in severely decompensated schizophrenia patients until reliable research identifies what subgroup of patients or domain of outcome is benefited. ClinicalTrials.gov identifier: NCT01280305. © Copyright 2017 Physicians Postgraduate Press, Inc.
Ouyang, Min; Tian, Hui; Wang, Zhenghua; Hong, Liu; Mao, Zijun
2017-01-17
This article studies a general type of initiating event in critical infrastructures, called spatially localized failures (SLFs), defined as the failure of a set of infrastructure components distributed in a spatially localized area due to damage sustained, while other components outside the area do not directly fail. These failures can be regarded as a special type of intentional attack, such as a bomb or explosive assault, or as a generalized model of the impact of localized natural hazards on large-scale systems. This article introduces three SLFs models: node-centered SLFs, district-based SLFs, and circle-shaped SLFs, and proposes an SLFs-induced vulnerability analysis method covering three aspects: identification of critical locations; comparison of infrastructure vulnerability to random failures, topologically localized failures, and SLFs; and quantification of infrastructure information value. The proposed SLFs-induced vulnerability analysis method is finally applied to the Chinese railway system and can also be easily adapted to analyze other critical infrastructures for valuable protection suggestions. © 2017 Society for Risk Analysis.
Semiautomated landscape feature extraction and modeling
NASA Astrophysics Data System (ADS)
Wasilewski, Anthony A.; Faust, Nickolas L.; Ribarsky, William
2001-08-01
We have developed a semi-automated procedure for generating correctly located 3D tree objects from overhead imagery. Cross-platform software partitions arbitrarily large, geocorrected and geolocated imagery into manageable sub-images. The user manually selects tree areas from one or more of these sub-images. Tree group blobs are then narrowed to lines using a special thinning algorithm which retains the topology of the blobs and also stores the thickness of the parent blob. Maxima along these thinned tree groups are found and used as individual tree locations within the tree group. Magnitudes of the local maxima are used to scale the radii of the tree objects. Grossly overlapping trees are culled based on a comparison of tree-tree distance to combined radii. Tree color is randomly selected based on the distribution of sample tree pixels, and height is estimated from tree radius. The final tree objects are then inserted into a terrain database which can be navigated by VGIS, a high-resolution global terrain visualization system developed at Georgia Tech.
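The culling rule lends itself to a few lines of code; the sketch below assumes trees are (x, y, radius) tuples and uses an illustrative overlap threshold:

```python
# Cull grossly overlapping trees: keep the larger tree and drop any tree whose
# center-to-center distance to an already kept tree is small relative to the
# sum of the two radii (threshold value is an assumption).
import math

def cull_overlapping_trees(trees, overlap_ratio=0.5):
    kept = []
    for x, y, r in sorted(trees, key=lambda t: -t[2]):   # largest trees first
        if all(math.hypot(x - kx, y - ky) >= overlap_ratio * (r + kr)
               for kx, ky, kr in kept):
            kept.append((x, y, r))
    return kept
```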
Hoover, Brian G; Gamiz, Victor L
2006-02-01
The scalar bidirectional reflectance distribution function (BRDF) due to a perfectly conducting surface with roughness and autocorrelation width comparable with the illumination wavelength is derived from coherence theory on the assumption of a random reflective phase screen and an expansion valid for large effective roughness. A general quadratic expansion of the two-dimensional isotropic surface autocorrelation function near the origin yields representative Cauchy and Gaussian BRDF solutions and an intermediate general solution as the sum of an incoherent component and a nonspecular coherent component proportional to an integral of the plasma dispersion function in the complex plane. Plots illustrate agreement of the derived general solution with original bistatic BRDF data due to a machined aluminum surface, and comparisons are drawn with previously published data in the examination of variations with incident angle, roughness, illumination wavelength, and autocorrelation coefficients in the bistatic and monostatic geometries. The general quadratic autocorrelation expansion provides a BRDF solution that smoothly interpolates between the well-known results of the linear and parabolic approximations.
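Purely as notation for the expansion named above, the near-origin autocorrelation can be written in a generic quadratic form; the coefficients are schematic placeholders, not the paper's parameters:

```latex
% Generic quadratic expansion of the isotropic surface autocorrelation:
%   linear term alone   (c_2 = 0) -> Cauchy-type BRDF solution
%   parabolic term alone (c_1 = 0) -> Gaussian BRDF solution
\rho(\tau) \;\approx\; 1 \;-\; c_1\,\tau \;+\; c_2\,\tau^{2}, \qquad \tau \to 0
```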
NASA Astrophysics Data System (ADS)
Ma, Ligang; Ma, Fenglan; Li, Jiadan; Gu, Qing; Yang, Shengtian; Ding, Jianli
2017-04-01
Land degradation, specifically soil salinization, has rendered large areas of western China sterile and unproductive while diminishing the productivity of adjacent lands and other areas where salting is less severe. Up to now, despite decades of research in soil mapping, little accurate and up-to-date information on the spatial extent and variability of soil salinity is available for large geographic regions. This study explores the potential of assessing soil salinity via linear and random forest modeling of remote sensing based environmental factors and indirect indicators. A case study is presented for the arid oases of the Tarim and Junggar Basins, Xinjiang, China, using time series land surface temperature (LST), evapotranspiration (ET), TRMM precipitation (TRM), a DEM product and vegetation indexes as well as their second order products. In particular, the location of the oasis, the best feature sets, different salinity degrees and modeling approaches were fully examined. All constructed models were evaluated for their fit to the whole data set and their performance in a leave-one-field-out spatial cross-validation. In addition, the Kruskal-Wallis rank test was adopted for the statistical comparison of different models. Overall, the random forest model outperformed the linear model for the two basins, all salinity degrees and datasets. As for feature sets, LST and ET were consistently identified as the most important factors for the two basins, while the contribution of vegetation indexes varies with location. Moreover, model performance is promising for the salinity ranges that are most relevant to agricultural productivity.
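The modeling comparison can be mocked up in a few lines with scikit-learn; the synthetic features below merely stand in for the LST, ET, TRMM, DEM and vegetation-index covariates, and the in-sample scores printed are a simplification of the study's leave-one-field-out spatial cross-validation:

```python
# Toy linear vs. random forest comparison on stand-in covariates.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # stand-ins for LST, ET, TRM, DEM, NDVI
y = 2.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.5, size=200)  # toy salinity

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
lm = LinearRegression().fit(X, y)
print("RF R^2:", rf.score(X, y), "linear R^2:", lm.score(X, y))
print("RF feature importances:", rf.feature_importances_)
```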
Single-shot stand-off chemical identification of powders using random Raman lasing
Hokr, Brett H.; Bixler, Joel N.; Noojin, Gary D.; Thomas, Robert J.; Rockwell, Benjamin A.; Yakovlev, Vladislav V.; Scully, Marlan O.
2014-01-01
We consider the task of identifying explosives, hazardous chemicals, and biological materials from a safe distance. Much of the prior work on stand-off spectroscopy using light has been devoted to generating a backward-propagating beam of light that can be used to drive further spectroscopic processes. The discovery of random lasing and, more recently, random Raman lasing provides a mechanism for remotely generating copious amounts of chemically specific Raman scattered light. The bright nature of random Raman lasing renders directionality unnecessary, allowing for the detection and identification of chemicals from large distances in real time. In this article, the single-shot remote identification of chemicals at kilometer-scale distances is experimentally demonstrated using random Raman lasing. PMID:25114231
Extended self-similarity in the two-dimensional metal-insulator transition
NASA Astrophysics Data System (ADS)
Moriconi, L.
2003-09-01
We show that extended self-similarity, a scaling phenomenon first observed in classical turbulent flows, holds for a two-dimensional metal-insulator transition that belongs to the universality class of random Dirac fermions. Deviations from multifractality, which in turbulence are due to the dominance of diffusive processes at small scales, appear in the condensed-matter context as a large-scale, finite-size effect related to the imposition of an infrared cutoff in the field theory formulation. We propose a phenomenological interpretation of extended self-similarity in the metal-insulator transition within the framework of the random β-model description of multifractal sets. As a natural step, our discussion is bridged to the analysis of strange attractors, where crossovers between multifractal and nonmultifractal regimes are found and extended self-similarity turns out to be verified as well.
Stability of large-scale systems with stable and unstable subsystems.
NASA Technical Reports Server (NTRS)
Grujic, Lj. T.; Siljak, D. D.
1972-01-01
The purpose of this paper is to develop new methods for constructing vector Liapunov functions and broaden the application of Liapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. With minor technical adjustments, the same criterion can be used to determine connective asymptotic stability of large-scale systems subject to structural perturbations. By redefining the constraints imposed on the interconnections among the subsystems, the considered class of systems is broadened in an essential way to include composite systems with unstable subsystems. In this way, the theory is brought substantially closer to reality since stability of all subsystems is no longer a necessary assumption in establishing stability of the overall composite system.
The Use of Weighted Graphs for Large-Scale Genome Analysis
Zhou, Fang; Toivonen, Hannu; King, Ross D.
2014-01-01
There is an acute need for better tools to extract knowledge from the growing flood of sequence data. For example, thousands of complete genomes have been sequenced, and their metabolic networks inferred. Such data should enable a better understanding of evolution. However, most existing network analysis methods are based on pair-wise comparisons, and these do not scale to thousands of genomes. Here we propose the use of weighted graphs as a data structure to enable large-scale phylogenetic analysis of networks. We have developed three types of weighted graph for enzymes: taxonomic (these summarize phylogenetic importance), isoenzymatic (these summarize enzymatic variety/redundancy), and sequence-similarity (these summarize sequence conservation); and we applied these types of weighted graph to survey prokaryotic metabolism. To demonstrate the utility of this approach we have compared and contrasted the large-scale evolution of metabolism in Archaea and Eubacteria. Our results provide evidence for limits to the contingency of evolution. PMID:24619061
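A minimal sketch of how one such weighted graph might be built, assuming each genome's metabolic network is given as a set of enzyme-enzyme edges; the specific weighting (fraction of genomes containing an edge) is an illustrative stand-in for the paper's taxonomic weighting:

```python
# Hypothetical construction of a taxonomic-style weighted enzyme graph:
# an edge's weight is the fraction of genomes whose inferred metabolic network
# contains that enzyme-enzyme link, so thousands of genomes are summarized in
# one graph instead of requiring all pairwise network comparisons.
import itertools
from collections import Counter

def weighted_enzyme_graph(genome_networks):
    """genome_networks: list of sets of (enzyme_a, enzyme_b) edge tuples."""
    counts = Counter(itertools.chain.from_iterable(genome_networks))
    n = len(genome_networks)
    return {edge: c / n for edge, c in counts.items()}  # edge -> weight in (0, 1]
```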
Time-Aware Service Ranking Prediction in the Internet of Things Environment
Huang, Yuze; Huang, Jiwei; Cheng, Bo; He, Shuqing; Chen, Junliang
2017-01-01
With the rapid development of the Internet of things (IoT), building IoT systems with high quality of service (QoS) has become an urgent requirement in both academia and industry. During the procedures of building IoT systems, QoS-aware service selection is an important concern, which requires the ranking of a set of functionally similar services according to their QoS values. In reality, however, it is quite expensive and even impractical to evaluate all geographically-dispersed IoT services at a single client to obtain such a ranking. Nevertheless, distributed measurement and ranking aggregation have to deal with the high dynamics of QoS values and the inconsistency of partial rankings. To address these challenges, we propose a time-aware service ranking prediction approach named TSRPred for obtaining the global ranking from the collection of partial rankings. Specifically, a pairwise comparison model is constructed to describe the relationships between different services, where the partial rankings are obtained by time series forecasting on QoS values. The comparisons of IoT services are formulated by random walks, and thus, the global ranking can be obtained by sorting the steady-state probabilities of the underlying Markov chain. Finally, the efficacy of TSRPred is validated by simulation experiments based on large-scale real-world datasets. PMID:28448451
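The ranking step can be illustrated with a small sketch: given a matrix of pairwise comparison outcomes, a random walk is run over services and the steady-state probabilities of the resulting Markov chain are sorted. The transition construction below (jump from a service to whichever service beats it, with probability proportional to how often) is an assumption for illustration, not TSRPred's exact formulation:

```python
# Illustrative random-walk aggregation of pairwise service comparisons.
import numpy as np

def rank_from_pairwise(wins):
    """wins[i, j]: estimated probability that service j outranks service i
    (e.g., from time series forecasts of QoS). Assumes every row has some
    positive mass. Returns service indices sorted from best to worst."""
    T = wins / wins.sum(axis=1, keepdims=True)   # row-stochastic transitions
    pi = np.full(len(wins), 1.0 / len(wins))
    for _ in range(10_000):                      # power iteration to steady state
        nxt = pi @ T
        if np.abs(nxt - pi).sum() < 1e-12:
            break
        pi = nxt
    return np.argsort(-pi)                       # high steady-state prob = better
```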
Colen, Sascha; van den Bekerom, Michel P J; Bellemans, Johan; Mulier, Michiel
2010-11-16
Although intra-articular hyaluronic acid is well established as a treatment for osteoarthritis of the knee, its use in hip osteoarthritis is not based on large randomized controlled trials. There is a need for more rigorously designed studies on hip osteoarthritis treatment as this subject is still very much under debate. Randomized, controlled trial with a three-armed, parallel-group design. Approximately 315 patients complying with the inclusion and exclusion criteria will be randomized into one of the following treatment groups: infiltration of the hip joint with hyaluronic acid, with a corticosteroid or with 0.125% bupivacaine. The following outcome measure instruments will be assessed at baseline, i.e. before the intra-articular injection of one of the study products, and then again at six weeks, 3 and 6 months after the initial injection: Pain (100 mm VAS), Harris Hip Score and HOOS, patient assessment of their clinical status (worse, stable or better than at the time of enrollment) and intake of pain rescue medication (number per week). In addition, patients will be asked about complications/adverse events. The six-month follow-up period for all patients will begin on the date the first injection is administered. This randomized, controlled, three-arm study will hopefully provide robust information on two of the intra-articular treatments used in hip osteoarthritis, in comparison to bupivacaine. NCT01079455.
Shanazi, Mahnaz; Farshbaf Khalili, Azizeh; Kamalifard, Mahin; Asghari Jafarabadi, Mohammad; Masoudin, Kazhal; Esmaeli, Fariba
2015-01-01
Introduction: Traumatic nipple is among the most common problems of the breastfeeding period and leads to early cessation of breastfeeding. This study aimed to compare the effects of lanolin, peppermint, and dexpanthenol creams on the treatment of traumatic nipples. Methods: This double-blind randomized controlled trial was carried out on 126 breastfeeding mothers who had visited the health centers and children’s hospitals in Sanandaj City. The selected participants were randomly divided into three groups receiving lanolin, peppermint, or dexpanthenol cream. Nipple pain was measured using the Store scale, while trauma was measured with the Champion scale. Analyses were carried out using the Kruskal–Wallis test, Chi-square, ANOVA, and repeated measures ANOVA in SPSS software ver. 13. Results: The mean scores of nipple pain and nipple trauma prior to the intervention and on the third, seventh, and fourteenth days of the intervention were not significantly different among the three groups. However, repeated measures ANOVA showed a significant difference across the four time points within each group. Conclusion: The results of this study revealed that the lanolin, peppermint, and dexpanthenol creams had similar therapeutic effects on traumatic nipple. PMID:26744729
Similarity spectra analysis of high-performance jet aircraft noise.
Neilsen, Tracianne B; Gee, Kent L; Wall, Alan T; James, Michael M
2013-04-01
Noise measured in the vicinity of an F-22A Raptor has been compared to similarity spectra found previously to represent mixing noise from large-scale and fine-scale turbulent structures in laboratory-scale jet plumes. Comparisons have been made for three engine conditions using ground-based sideline microphones, which covered a large angular aperture. Even though the nozzle geometry is complex and the jet is nonideally expanded, the similarity spectra do agree with large portions of the measured spectra. Toward the sideline, the fine-scale similarity spectrum is used, while the large-scale similarity spectrum provides a good fit to the area of maximum radiation. Combinations of the two similarity spectra are shown to match the data in between those regions. Surprisingly, a combination of the two is also shown to match the data at the farthest aft angle. However, at high frequencies the degree of congruity between the similarity and the measured spectra changes with engine condition and angle. At the higher engine conditions, there is a systematically shallower measured high-frequency slope, with the largest discrepancy occurring in the regions of maximum radiation.
Wang, WeiBo; Sun, Wei; Wang, Wei; Szatkiewicz, Jin
2018-03-01
The application of high-throughput sequencing in a broad range of quantitative genomic assays (e.g., DNA-seq, ChIP-seq) has created a high demand for the analysis of large-scale read-count data. Typically, the genome is divided into tiling windows and windowed read-count data is generated for the entire genome, from which genomic signals are detected (e.g., copy number changes in DNA-seq, enrichment peaks in ChIP-seq). For accurate analysis of read-count data, many state-of-the-art statistical methods use generalized linear models (GLM) coupled with the negative-binomial (NB) distribution by leveraging its ability for simultaneous bias correction and signal detection. However, although statistically powerful, the GLM+NB method has a quadratic computational complexity and therefore suffers from slow running time when applied to large-scale windowed read-count data. In this study, we aimed to substantially speed up the GLM+NB method by using a randomized algorithm, and we demonstrate the utility of our approach in the detection of copy number variants (CNVs) using a real example. We propose an efficient estimator, the randomized GLM+NB coefficients estimator (RGE), for speeding up the GLM+NB method. RGE samples the read-count data and solves the estimation problem on a smaller scale. We first theoretically validated the consistency and the variance properties of RGE. We then applied RGE to GENSENG, a GLM+NB based method for detecting CNVs, and named the resulting method "R-GENSENG". Based on extensive evaluation using both simulated and empirical data, we concluded that R-GENSENG is ten times faster than the original GENSENG while maintaining GENSENG's accuracy in CNV detection. Our results suggest that the RGE strategy developed here could be applied to other GLM+NB based read-count analyses, such as ChIP-seq data analysis, to substantially improve their computational efficiency while preserving the analytic power.
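The core subsampling idea can be sketched in a few lines, assuming windowed counts and covariates held in NumPy arrays; this is a toy rendition of RGE, not the authors' implementation, and the sampling fraction and statsmodels NB-family defaults are assumptions:

```python
# Toy randomized GLM+NB fit: estimate coefficients on a random subsample of
# genome windows instead of the full data, trading a small variance increase
# for a large speedup.
import numpy as np
import statsmodels.api as sm

def rge_fit(X, counts, frac=0.1, seed=0):
    """X: (n_windows, n_covariates) bias covariates; counts: (n_windows,)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(counts), size=int(frac * len(counts)), replace=False)
    model = sm.GLM(counts[idx], sm.add_constant(X[idx]),
                   family=sm.families.NegativeBinomial())
    return model.fit()  # coefficients approximate the full-data GLM+NB fit
```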
NASA Technical Reports Server (NTRS)
Alexandrov, Mikhail Dmitrievic; Geogdzhayev, Igor V.; Tsigaridis, Konstantinos; Marshak, Alexander; Levy, Robert; Cairns, Brian
2016-01-01
A novel model for the variability in aerosol optical thickness (AOT) is presented. This model is based on the consideration of AOT fields as realizations of a stochastic process, that is the exponent of an underlying Gaussian process with a specific autocorrelation function. In this approach AOT fields have lognormal PDFs and structure functions having the correct asymptotic behavior at large scales. The latter is an advantage compared with fractal (scale-invariant) approaches. The simple analytical form of the structure function in the proposed model facilitates its use for the parameterization of AOT statistics derived from remote sensing data. The new approach is illustrated using a month-long global MODIS AOT dataset (over ocean) with 10 km resolution. It was used to compute AOT statistics for sample cells forming a grid with 5deg spacing. The observed shapes of the structure functions indicated that in a large number of cases the AOT variability is split into two regimes that exhibit different patterns of behavior: small-scale stationary processes and trends reflecting variations at larger scales. The small-scale patterns are suggested to be generated by local aerosols within the marine boundary layer, while the large-scale trends are indicative of elevated aerosols transported from remote continental sources. This assumption is evaluated by comparison of the geographical distributions of these patterns derived from MODIS data with those obtained from the GISS GCM. This study shows considerable potential to enhance comparisons between remote sensing datasets and climate models beyond regional mean AOTs.
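A compact illustration of the construction is to synthesize such a field directly by exponentiating a correlated Gaussian process. The sketch below does this in one dimension via FFT filtering of white noise; the Gaussian-shaped spectral filter, correlation length, and log-standard deviation are illustrative assumptions, not the paper's fitted parameters (which concern 2-D AOT fields with a specific autocorrelation function):

```python
# Toy lognormal "AOT" field: the exponent of a correlated Gaussian process
# (1-D, periodic, FFT-filtered white noise).
import numpy as np

def lognormal_aot_field(n=1024, corr_len=50.0, sigma_log=0.3, seed=0):
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n)
    filt = np.exp(-0.5 * (2 * np.pi * k * corr_len) ** 2)  # smooth spectral filter
    g = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * filt, n)
    g = (g - g.mean()) / g.std()          # unit-variance Gaussian process
    return np.exp(sigma_log * g)          # lognormal one-point PDF by construction
```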
Large Scale GW Calculations on the Cori System
NASA Astrophysics Data System (ADS)
Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven
The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masada, Youhei; Sano, Takayoshi, E-mail: ymasada@harbor.kobe-u.ac.jp, E-mail: sano@ile.osaka-u.ac.jp
2014-10-10
The mechanism of large-scale dynamos in rigidly rotating stratified convection is explored by direct numerical simulations (DNS) in Cartesian geometry. A mean-field dynamo model is also constructed using turbulent velocity profiles consistently extracted from the corresponding DNS results. By quantitative comparison between the DNS and our mean-field model, it is demonstrated that the oscillatory α² dynamo wave, excited and sustained in the convection zone, is responsible for large-scale magnetic activities such as cyclic polarity reversal and spatiotemporal migration. The results provide strong evidence that a nonuniformity of the α-effect, which is a natural outcome of rotating stratified convection, can be an important prerequisite for large-scale stellar dynamos, even without the Ω-effect.
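For orientation, the α² mean-field induction equation has, in its textbook constant-coefficient form, the structure below; the paper itself uses α and turbulent-diffusivity profiles extracted from the DNS (including a nonuniform α), so the uniform-coefficient version here is only a schematic reference:

```latex
% Textbook alpha^2 mean-field dynamo equation (constant coefficients assumed):
\frac{\partial \langle \boldsymbol{B} \rangle}{\partial t}
  \;=\; \nabla \times \bigl( \alpha \, \langle \boldsymbol{B} \rangle \bigr)
  \;+\; \bigl( \eta + \eta_t \bigr)\, \nabla^{2} \langle \boldsymbol{B} \rangle
```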
Mortimer, James A.; Ding, Ding; Borenstein, Amy R.; DeCarli, Charles; Guo, Qihao; Wu, Yougui; Zhao, Qianhua; Chu, Shugang
2013-01-01
Physical exercise has been shown to increase brain volume and improve cognition in randomized trials of non-demented elderly. Although greater social engagement was found to reduce dementia risk in observational studies, randomized trials of social interventions have not been reported. A representative sample of 120 elderly from Shanghai, China was randomized to four groups (Tai Chi, Walking, Social Interaction, No Intervention) for 40 weeks. Two MRIs were obtained, one before the intervention period, the other after. A neuropsychological battery was administered at baseline, 20 weeks, and 40 weeks. Comparison of changes in brain volumes in intervention groups with the No Intervention group were assessed by t-tests. Time-intervention group interactions for neuropsychological measures were evaluated with repeated-measures mixed models. Compared to the No Intervention group, significant increases in brain volume were seen in the Tai Chi and Social Intervention groups (p < 0.05). Improvements also were observed in several neuropsychological measures in the Tai Chi group, including the Mattis Dementia Rating Scale score (p = 0.004), the Trailmaking Test A (p = 0.002) and B (p = 0.0002), the Auditory Verbal Learning Test (p = 0.009), and verbal fluency for animals (p = 0.01). The Social Interaction group showed improvement on some, but fewer neuropsychological indices. No differences were observed between the Walking and No Intervention groups. The findings differ from previous clinical trials in showing increases in brain volume and improvements in cognition with a largely non-aerobic exercise (Tai Chi). In addition, intellectual stimulation through social interaction was associated with increases in brain volume as well as with some cognitive improvements. PMID:22451320
Buechner, Stanislaw A
2014-06-01
This study investigated the non-inferiority of efficacy and tolerance of 2% miconazole nitrate shampoo in comparison with 2% ketoconazole shampoo in the treatment of scalp seborrheic dermatitis. A randomized, double-blind, comparative, parallel group, multicenter study was done. A total of 274 patients (145 miconazole, 129 ketoconazole) were enrolled. Treatment was twice-weekly for 4 weeks. Safety and efficacy assessments were made at baseline and at weeks 2 and 4. Assessments included symptoms of erythema, itching, scaling ['Symptom Scale of Seborrhoeic Dermatitis' (SSSD)], disease severity and global change [Clinical Global Impressions (CGIs) and Patient Global Impressions (PGIs)]. Miconazole shampoo is at least as effective and safe as ketoconazole shampoo in treating scalp seborrheic dermatitis scalp.
NASA Astrophysics Data System (ADS)
Kenward, D. R.; Lessard, M.; Lynch, K. A.; Hysell, D. L.; Hampton, D. L.; Michell, R.; Samara, M.; Varney, R. H.; Oksavik, K.; Clausen, L. B. N.; Hecht, J. H.; Clemmons, J. H.; Fritz, B.
2017-12-01
The RENU2 sounding rocket (launched from Andoya rocket range on December 13th, 2015) observed Poleward Moving Auroral Forms within the dayside cusp. The ISINGLASS rockets (launched from Poker Flat rocket range on February 22, 2017 and March 2, 2017) both observed aurora during a substorm event. Despite observing very different events, both campaigns witnessed a high degree of small scale structuring within the larger auroral boundary, including Alfvenic signatures. These observations suggest a method of coupling large-scale energy input to fine scale structures within aurorae. During RENU2, small (sub-km) scale drivers persist for long (10s of minutes) time scales and result in large scale ionospheric (thermal electron) and thermospheric response (neutral upwelling). ISINGLASS observations show small scale drivers, but with short (minute) time scales, with ionospheric response characterized by the flight's thermal electron instrument (ERPA). The comparison of the two flights provides an excellent opportunity to examine ionospheric and thermospheric response to small scale drivers over different integration times.
Conradsson, David; Löfgren, Niklas; Nero, Håkan; Hagströmer, Maria; Ståhle, Agneta; Lökk, Johan; Franzén, Erika
2015-10-01
Highly challenging exercises have been suggested to induce neuroplasticity in individuals with Parkinson's disease (PD); however, their effect on clinical outcomes remains largely unknown. To evaluate the short-term effects of the HiBalance program, a highly challenging balance-training regimen that incorporates both dual-tasking and PD-specific balance components, compared with usual care in elderly with mild to moderate PD. Participants with PD (n = 100) were randomized, either to the 10-week HiBalance program (n = 51) or to the control group (n = 49). Participants were evaluated before and after the intervention. The main outcomes were balance performance (Mini-BESTest), gait velocity (during normal and dual-task gait), and concerns about falling (Falls Efficacy Scale-International). Performance of a cognitive task while walking, physical activity level (average steps per day), and activities of daily living were secondary outcomes. A total of 91 participants completed the study. After the intervention, the between-group comparison showed significantly improved balance and gait performance in the training group. Moreover, although no significant between-group difference was observed regarding gait performance during dual-tasking, the participants in the training group improved their performance of the cognitive task while walking, as compared with the control group. Regarding physical activity levels and activities of daily living, in comparison to the control group, favorable results were found for the training group. No group differences were found for concerns about falling. The HiBalance program significantly benefited balance and gait abilities when compared with usual care and showed promising transfer effects to everyday living. Long-term follow-up assessments will further explore these effects. © The Author(s) 2015.
Bourmaud, Aurelie; Soler-Michel, Patricia; Oriol, Mathieu; Regnier, Véronique; Tinquaut, Fabien; Nourissat, Alice; Bremond, Alain; Moumjid, Nora; Chauvin, Franck
2016-01-01
Controversies regarding the benefits of breast cancer screening programs have led to the promotion of new strategies that take individual preferences into account, such as decision aids. The aim of this study was to assess the impact of a decision aid leaflet on the participation of women invited to participate in a national breast cancer screening program. This was a randomized, multicentre, controlled trial. Women aged 50 to 74 years were randomly assigned to receive either a decision aid or the usual invitation letter. The primary outcome was the participation rate 12 months after the invitation. In total, 16 000 women were randomized and 15 844 included in the modified intention-to-treat analysis. The participation rate in the intervention group was 40.25% (3174/7885 women) compared with 42.13% (3353/7959) in the control group (p = 0.02). Previous attendance for screening (RR = 6.24; [95%IC: 5.75-6.77]; p < 0.0001) and medium household income (RR = 1.05; [95%IC: 1.01-1.09]; p = 0.0074) were independently associated with attendance for screening. This large-scale study demonstrates that the decision aid reduced the participation rate. The decision aid appeared to activate the decision-making process of women toward non-attendance at screening. These results show the importance of promoting informed patient choices, especially when those choices cannot be anticipated. PMID:26883201
Mirror Instability in the Turbulent Solar Wind
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hellinger, Petr; Landi, Simone; Verdini, Andrea
2017-04-01
The relationship between a decaying strong turbulence and the mirror instability in a slowly expanding plasma is investigated using two-dimensional hybrid expanding box simulations. We impose an initial ambient magnetic field perpendicular to the simulation box, and we start with a spectrum of large-scale, linearly polarized, random-phase Alfvénic fluctuations that have energy equipartition between kinetic and magnetic fluctuations and a vanishing correlation between the two fields. A turbulent cascade rapidly develops; magnetic field fluctuations exhibit a Kolmogorov-like power-law spectrum at large scales and a steeper spectrum at sub-ion scales. The imposed expansion (taking a strictly transverse ambient magnetic field) leads to the generation of an important perpendicular proton temperature anisotropy that eventually drives the mirror instability. This instability generates large-amplitude, nonpropagating, compressible, pressure-balanced magnetic structures in a form of magnetic enhancements/humps that reduce the perpendicular temperature anisotropy.
Large-scale environments of narrow-line Seyfert 1 galaxies
NASA Astrophysics Data System (ADS)
Järvelä, E.; Lähteenmäki, A.; Lietzen, H.; Poudel, A.; Heinämäki, P.; Einasto, M.
2017-09-01
Studying the large-scale environments of narrow-line Seyfert 1 (NLS1) galaxies gives a new perspective on their properties, particularly their radio loudness. The large-scale environment is believed to have an impact on the evolution and intrinsic properties of galaxies; however, NLS1 sources have not been studied in this context before. We have a large and diverse sample of 1341 NLS1 galaxies and three separate environment data sets constructed using the Sloan Digital Sky Survey. We use various statistical methods to investigate how the properties of NLS1 galaxies are connected to the large-scale environment, and compare the large-scale environments of NLS1 galaxies with other active galactic nuclei (AGN) classes, for example, other jetted AGN and broad-line Seyfert 1 (BLS1) galaxies, to study how they are related. NLS1 galaxies reside in less dense environments than any of the comparison samples, thus confirming their young age. The average large-scale environment density and environmental distribution of NLS1 sources is clearly different compared to BLS1 galaxies; thus it is improbable that they could be the parent population of NLS1 galaxies and unified by orientation. Within the NLS1 class there is a trend of increasing radio loudness with increasing large-scale environment density, indicating that the large-scale environment affects their intrinsic properties. Our results suggest that the NLS1 class of sources is not homogeneous, and furthermore, that a considerable fraction of them are misclassified. We further support a published proposal to replace the traditional classification into radio-loud, and radio-quiet or radio-silent sources with a division into jetted and non-jetted sources.
Large Scale Winter Time Disturbances in Meteor Winds over Central and Eastern Europe
NASA Technical Reports Server (NTRS)
Greisiger, K. M.; Portnyagin, Y. I.; Lysenko, I. A.
1984-01-01
Daily zonal wind data of the four pre-MAP-winters 1978/79 to 1981/82 obtained over Central Europe and Eastern Europe by the radar meteor method were studied. Available temperature and satellite radiance data of the middle and upper stratosphere were used for comparison, as well as wind data from Canada. The existence or nonexistence of coupling between the observed large scale zonal wind disturbances in the upper mesopause region (90 to 100 km) and corresponding events in the stratosphere are discussed.
Non-Hookean statistical mechanics of clamped graphene ribbons
NASA Astrophysics Data System (ADS)
Bowick, Mark J.; Košmrlj, Andrej; Nelson, David R.; Sknepnek, Rastko
2017-03-01
Thermally fluctuating sheets and ribbons provide an intriguing forum in which to investigate strong violations of Hooke's Law: Large distance elastic parameters are in fact not constant but instead depend on the macroscopic dimensions. Inspired by recent experiments on free-standing graphene cantilevers, we combine the statistical mechanics of thin elastic plates and large-scale numerical simulations to investigate the thermal renormalization of the bending rigidity of graphene ribbons clamped at one end. For ribbons of dimensions W ×L (with L ≥W ), the macroscopic bending rigidity κR determined from cantilever deformations is independent of the width when W <ℓth , where ℓth is a thermal length scale, as expected. When W >ℓth , however, this thermally renormalized bending rigidity begins to systematically increase, in agreement with the scaling theory, although in our simulations we were not quite able to reach the system sizes necessary to determine the fully developed power law dependence on W . When the ribbon length L >ℓp , where ℓp is the W -dependent thermally renormalized ribbon persistence length, we observe a scaling collapse and the beginnings of large scale random walk behavior.
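Schematically, the width dependence described above can be summarized as follows; the crossover form and the approximate exponent are taken from the general membrane-physics literature and are not quoted in the abstract itself:

```latex
% Hedged summary of the thermal renormalization of the bending rigidity:
\kappa_R(W) \;\simeq\;
\begin{cases}
  \kappa, & W \ll \ell_{\mathrm{th}},\\[4pt]
  \kappa \,\bigl( W/\ell_{\mathrm{th}} \bigr)^{\eta}, & W \gg \ell_{\mathrm{th}},
\end{cases}
\qquad \eta \approx 0.8 \ \text{(literature value, assumed)}
```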
NASA Astrophysics Data System (ADS)
Fukumori, Ichiro; Raghunath, Ramanujam; Fu, Lee-Lueng
1998-03-01
The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to January 1994. The physical nature of sea level's temporal variability from periods of days to a year is examined on the basis of spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements. The study elucidates and diagnoses the inhomogeneous physics of sea level change in the space and frequency domains. At midlatitudes, large-scale sea level variability is primarily due to steric changes associated with the seasonal heating and cooling cycle of the surface layer. In comparison, changes in the tropics and high latitudes are mainly wind driven. Wind-driven variability exhibits a strong latitudinal dependence in itself. Wind-driven changes are largely baroclinic in the tropics but barotropic at higher latitudes. Baroclinic changes are dominated by the annual harmonic of the first baroclinic mode and are largest off the equator; variabilities associated with equatorial waves are smaller in comparison. Wind-driven barotropic changes exhibit a notable enhancement over several abyssal plains in the Southern Ocean, which is likely due to resonant planetary wave modes in basins semienclosed by discontinuities in potential vorticity. Otherwise, barotropic sea level changes are typically dominated by high frequencies, with as much as half the total variance in periods shorter than 20 days, reflecting the frequency spectra of wind stress curl. Implications of the findings with regard to analyzing observations and data assimilation are discussed.
Uncertainty in eddy covariance measurements and its application to physiological models
D.Y. Hollinger; A.D. Richardson; A.D. Richardson
2005-01-01
Flux data are noisy, and this uncertainty is largely due to random measurement error. Knowledge of uncertainty is essential for the statistical evaluation of modeled and measured fluxes, for comparison of parameters derived by fitting models to measured fluxes and in formal data-assimilation efforts. We used the difference between simultaneous measurements from two...
Beginning School Math Competence: Minority and Majority Comparisons. Report No. 34.
ERIC Educational Resources Information Center
Entwisle, Doris R.; Alexander, Karl L.
This paper uses a structural model with a large random sample of urban children to explain children's competence in math concepts and computation at the time they begin first grade. These two aspects of math ability respond differently to environmental resources, with math concepts much more responsive to family factors before formal schooling…
Group Hypnotizability Of Inpatient Adolescents.
Quant, Michael; Schilder, Steffanie; Sapp, Marty; Zhang, Bo; Baskin, Thomas; Arndt, Leah Rouse
2017-01-01
This study investigated group hypnotizability in 167 adolescents (ages 13-17) in an inpatient behavioral healthcare setting through use of the Waterloo-Stanford Group Scale, Form C. It also investigated the influence of hypnotic inductions on group hypnotizability. Adolescents were randomly assigned to either a group session of hypnosis (n = 84) with a hypnotic induction or a comparison "no-induction" group (n = 83) that received identical suggestions without a hypnotic induction. Adolescents' imaginative absorption and dissociation were measured to examine their influence on hypnotizability. A between-group comparison showed the induction condition had a significantly higher score than the no-induction group on both behavioral and subjective measures of hypnotizability.
Gandolfi, Marialuisa; Geroin, Christian; Picelli, Alessandro; Munari, Daniele; Waldner, Andreas; Tamburin, Stefano; Marchioretto, Fabio; Smania, Nicola
2014-01-01
Background: Extensive research on both healthy subjects and patients with central nervous damage has elucidated a crucial role of postural adjustment reactions and central sensory integration processes in generating and “shaping” locomotor function, respectively. Whether robotic-assisted gait devices might improve these functions in Multiple Sclerosis (MS) patients has not been fully investigated in the literature. Purpose: The aim of this study was to compare the effectiveness of end-effector robot-assisted gait training (RAGT) and sensory integration balance training (SIBT) in improving walking and balance performance in patients with MS. Methods: Twenty-two patients with MS (EDSS: 1.5–6.5) were randomly assigned to two groups. The RAGT group (n = 12) underwent end-effector system training. The SIBT group (n = 10) underwent specific balance exercises. Each patient received twelve 50-min treatment sessions (2 days/week). A blinded rater evaluated patients before and after treatment as well as 1 month post treatment. Primary outcomes were walking speed and the Berg Balance Scale. Secondary outcomes were the Activities-specific Balance Confidence Scale, Sensory Organization Balance Test, Stabilometric Assessment, Fatigue Severity Scale, cadence, step length, single and double support time, and Multiple Sclerosis Quality of Life-54. Results: Between-group comparisons showed no significant differences on primary and secondary outcome measures over time. Within-group comparisons showed significant improvements in both groups on the Berg Balance Scale (P = 0.001). Changes approaching significance were found on gait speed (P = 0.07) only in the RAGT group. Significant changes in balance task-related domains during standing and walking conditions were found in the SIBT group. Conclusion: Balance disorders in patients with MS may be ameliorated by RAGT and by SIBT. PMID:24904361
An innovative large scale integration of silicon nanowire-based field effect transistors
NASA Astrophysics Data System (ADS)
Legallais, M.; Nguyen, T. T. T.; Mouis, M.; Salem, B.; Robin, E.; Chenevier, P.; Ternon, C.
2018-05-01
Since the early 2000s, silicon nanowire field effect transistors have been emerging as ultrasensitive biosensors while offering label-free, portable and rapid detection. Nevertheless, their large scale production remains an ongoing challenge due to time-consuming, complex and costly technology. In order to bypass these issues, we report here on the first integration of silicon nanowire networks, called nanonets, into long channel field effect transistors using a standard microelectronic process. Special attention is paid to the silicidation of the contacts, which involves a large number of SiNWs. The electrical characteristics of these FETs constituted by randomly oriented silicon nanowires are also studied. Compatible integration on the back-end of CMOS readout and promising electrical performances open new opportunities for sensing applications.
NASA Astrophysics Data System (ADS)
Resseguier, V.; Memin, E.; Chapron, B.; Fox-Kemper, B.
2017-12-01
In order to better observe and predict geophysical flows, ensemble-based data assimilation methods are of high importance. In such methods, an ensemble of random realizations represents the variety of the simulated flow's likely behaviors. For this purpose, randomness needs to be introduced in a suitable way and physically-based stochastic subgrid parametrizations are promising paths. This talk will propose a new kind of such a parametrization referred to as modeling under location uncertainty. The fluid velocity is decomposed into a resolved large-scale component and an aliased small-scale one. The first component is possibly random but time-correlated whereas the second is white-in-time but spatially-correlated and possibly inhomogeneous and anisotropic. With such a velocity, the material derivative of any - possibly active - tracer is modified. Three new terms appear: a correction of the large-scale advection, a multiplicative noise and a possibly heterogeneous and anisotropic diffusion. This parameterization naturally ensures attractive properties such as energy conservation for each realization. Additionally, this stochastic material derivative and the associated Reynolds' transport theorem offer a systematic method to derive stochastic models. In particular, we will discuss the consequences of the Quasi-Geostrophic assumptions in our framework. Depending on the turbulence amount, different models with different physical behaviors are obtained. Under strong turbulence assumptions, a simplified diagnosis of frontolysis and frontogenesis at the surface of the ocean is possible in this framework. A Surface Quasi-Geostrophic (SQG) model with a weaker noise influence has also been simulated. A single realization better represents small scales than a deterministic SQG model at the same resolution. Moreover, an ensemble accurately predicts extreme events, bifurcations as well as the amplitudes and the positions of the simulation errors. Figure 1 highlights this last result and compares it to the strong error underestimation of an ensemble simulated from the deterministic dynamic with random initial conditions.
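For reference, the modified material derivative sketched in the abstract can be written as follows; this is a hedged reconstruction in the notation of the location-uncertainty literature, not an expression quoted from the abstract. The three new terms appear as the drift correction inside w*, the multiplicative noise σ dB_t · ∇θ, and the inhomogeneous diffusion:

```latex
% Hedged form of the stochastic transport operator under location uncertainty:
\mathbb{D}_t\theta \;=\; \mathrm{d}_t\theta
  + \left( \boldsymbol{w}^{\star}\,\mathrm{d}t
         + \boldsymbol{\sigma}\,\mathrm{d}\boldsymbol{B}_t \right)
    \!\cdot\! \nabla\theta
  - \tfrac{1}{2}\,\nabla \!\cdot\! \left( \boldsymbol{a}\,\nabla\theta \right)
    \mathrm{d}t ,
\qquad
\boldsymbol{w}^{\star} = \boldsymbol{w} - \tfrac{1}{2}\,\nabla\!\cdot\!\boldsymbol{a},
\quad
\boldsymbol{a} = \boldsymbol{\sigma}\boldsymbol{\sigma}^{\!\top}
```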
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ben-Naim, Eli; Krapivsky, Paul
Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that, as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. Finally, we also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.
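The merge rule is simple enough to simulate directly; the sketch below is a minimal Monte Carlo rendition of aggregation with choice, with cluster counts and step numbers as arbitrary illustration values:

```python
# Monte Carlo sketch: pick a target and two candidate clusters at random;
# the target merges with the LARGER of the two candidates.
import random

def aggregate_with_choice(n_clusters=10_000, steps=9_000, seed=1):
    random.seed(seed)
    sizes = [1] * n_clusters                     # start from monomers
    for _ in range(steps):
        i, j, k = random.sample(range(len(sizes)), 3)
        c = j if sizes[j] >= sizes[k] else k     # larger candidate wins
        sizes[i] += sizes[c]
        sizes.pop(c)                             # candidate absorbed by target
    return sizes

sizes = aggregate_with_choice()
print("clusters left:", len(sizes), "largest:", max(sizes))
```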
Fu, Juanjuan; Ding, Hong; Yang, Haimiao; Huang, Yuhong
2017-01-01
Background Common cold is one of the most frequently occurring illnesses in primary healthcare services and represents considerable disease burden. Common cold of Qi-deficiency syndrome (CCQDS) is an important but less addressed traditional Chinese medicine (TCM) pattern. We designed a protocol to explore the efficacy, safety, and optimal dose of Shen Guo Lao Nian Granule (SGLNG) for treating CCQDS. Methods/Design This is a multicenter, randomized, double-blind, placebo-controlled, phase II clinical trial. A total of 240 eligible patients will be recruited from five centers. Patients are randomly assigned to high-dose group, middle-dose group, low-dose group, or control group in a 1 : 1 : 1 : 1 ratio. All drugs are required to be taken 3 times daily for 5 days with a 5-day follow-up period. Primary outcomes are duration of all symptoms, total score reduction on Jackson's scale, and TCM symptoms scale. Secondary outcomes include every single TCM symptom duration and score reduction, TCM main symptoms disappearance rate, curative effects, and comparison between Jackson's scale and TCM symptom scale. Ethics and Trial Registration This study protocol was approved by the Ethics Committee of Clinical Trials and Biomedicine of West China Hospital of Sichuan University (number IRB-2014-12) and registered with the Chinese Clinical Trial Registry (ChiCTR-IPR-15006349). PMID:29430253
John B. Bradford; Peter Weishampel; Marie-Louise Smith; Randall Kolka; Richard A. Birdsey; Scott V. Ollinger; Michael G. Ryan
2010-01-01
Assessing forest carbon storage and cycling over large areas is a growing challenge that is complicated by the inherent heterogeneity of forest systems. Field measurements must be conducted and analyzed appropriately to generate precise estimates at scales large enough for mapping or comparison with remote sensing data. In this study we examined...
NASA Astrophysics Data System (ADS)
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2018-04-01
We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
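A toy 1-D rendition of the iteratively reweighted least-squares loop described above; the difference operator, damping value and dense inner solve are illustrative, and the paper replaces that inner solve with a randomized generalized singular value decomposition for large problems:

```python
# Toy IRLS loop for TV-regularized least squares:
#   min_m ||G m - d||^2 + lam * TV(m)
# solved by reweighting the smoothness penalty with ~1/|D m| each pass.
import numpy as np

def irls_tv(G, d, lam=1e-2, n_iter=30, eps=1e-6):
    n = G.shape[1]
    D = np.diff(np.eye(n), axis=0)             # first-difference operator
    m = np.linalg.lstsq(G, d, rcond=None)[0]   # unregularized starting model
    for _ in range(n_iter):
        w = 1.0 / np.sqrt((D @ m) ** 2 + eps)  # TV reweighting; eps avoids 0-div
        A = G.T @ G + lam * D.T @ (w[:, None] * D)
        m = np.linalg.solve(A, G.T @ d)
    return m

# Toy usage: recover a blocky model from noisy random projections.
rng = np.random.default_rng(0)
G = rng.normal(size=(80, 60))
m_true = np.zeros(60); m_true[20:40] = 1.0
m_rec = irls_tv(G, G @ m_true + 0.05 * rng.standard_normal(80))
```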
Mohamed, Somaia; Rosenheck, Robert A; Lin, Haiqun; Swartz, Marvin; McEvoy, Joseph; Stroup, Scott
2015-07-01
No large-scale randomized trial has compared the effects of different second-generation antipsychotic drugs and any first-generation drug on alcohol, drug and nicotine use in patients with schizophrenia. The Clinical Antipsychotic Trials of Intervention Effectiveness study randomly assigned 1432 patients formally diagnosed with schizophrenia to four second-generation antipsychotic drugs (olanzapine, risperidone, quetiapine, and ziprasidone) and one first-generation antipsychotic (perphenazine) and followed them for up to 18 months. Secondary outcome data documented cigarettes smoked in the past week and alcohol and drug use severity ratings. At baseline, 61% of patients smoked, 35% used alcohol, and 23% used illicit drugs. Although there were significant effects of time showing reduction in substance use over the 18 months (all p < 0.0001), this study found no evidence that any antipsychotic was robustly superior to any other in a secondary analysis of data on substance use outcomes from a large 18-month randomized schizophrenia trial.
Zheng, Wei; Zheng, Ying-Jun; Li, Xian-Bin; Tang, Yi-Lang; Wang, Chuan-Yue; Xiang, Ying-Qiang; de Leon, Jose
2016-12-01
This meta-analysis of randomized controlled trials (RCTs) evaluated the efficacy and safety of adding aripiprazole to other antipsychotics in schizophrenia. A systematic computer search identified 55 RCTs including 4457 patients who were randomized to aripiprazole (14.0 ± 7.0 mg/d) versus placebo (18 RCTs) or open antipsychotic treatment (37 RCTs). Aripiprazole significantly outperformed the comparison interventions based on psychiatric scales: (1) total score in 43 RCTs (N = 3351) with a standardized mean difference (SMD) of -0.48 (95% confidence interval [CI], -0.68 to -0.28; P < 0.00001; I² = 88%), (2) negative symptom score in 30 RCTs (N = 2294) with an SMD of -0.61 (95% CI, -0.91 to -0.31; P < 0.00001; I² = 91%), and (3) general psychopathology score in 13 RCTs (N = 1138) with a weighted mean difference (WMD) of -4.02 (95% CI, -7.23 to -0.81; P = 0.01; I² = 99%), but not in positive symptoms in 29 RCTs (N = 2223) with an SMD of -0.01 (95% CI, -0.26 to 0.25; P = 0.95; I² = 88%). Differences in total score based on psychiatric scales may be explained by the use of an antipsychotic for comparison rather than placebo in 31 RCTs with a nonblind design. Aripiprazole outperformed the comparison interventions for body weight in 9 RCTs (N = 505) with a WMD of -5.08 kg (95% CI, -7.14 to -3.02; P < 0.00001; I² = 35%) and for body mass index (BMI) in 14 RCTs (N = 809) with a WMD of -1.78 (95% CI, -2.25 to -1.31; P < 0.00001; I² = 54%). The BMI meta-regression analysis indicated that aripiprazole's association with lower BMI was stronger in females. Adjunctive aripiprazole appears safe, but better RCTs are needed to demonstrate efficacy. Chinese journals and scientific societies should encourage the publication of high-quality RCTs and require registration in a centralized Chinese database.
NASA Astrophysics Data System (ADS)
Chang, Wen-Li
2010-01-01
We investigate the influence of blurring modes on pattern recognition in a Barabási-Albert scale-free Hopfield neural network (SFHN) with a small number of errors. Pattern recognition is an important function of information processing in the brain. Due to the heterogeneous degree distribution of a scale-free network, different blurring modes have different influences on pattern recognition with the same errors. Simulations show that, in the regime of partial recognition, the larger the loading ratio (the number of patterns to average degree, P/⟨k⟩), the smaller the overlap of the SFHN. The influence of the directed (large) mode is largest and that of the directed (small) mode is smallest, while the random mode is intermediate between them. When the ratio of the number of stored patterns to the size of the network, P/N, is less than 0.1, there are three families of overlap curves corresponding to the directed (small), random and directed (large) blurring modes, and these curves are not associated with the size of the network or the number of patterns. This phenomenon only occurs in the SFHN. These conclusions are useful for understanding the relation between neural network structure and brain function.
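A toy rendition of the setup, assuming Hebbian couplings restricted to the edges of a Barabási-Albert graph and a random blurring of one stored pattern (the directed (large) mode would instead flip bits preferentially at high-degree nodes); all sizes are illustrative:

```python
# Schematic Hopfield recall on a Barabasi-Albert scale-free graph.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N, P = 500, 10
G = nx.barabasi_albert_graph(N, 5)
A = nx.to_numpy_array(G)                          # adjacency mask
patterns = rng.choice([-1, 1], size=(P, N))
W = A * (patterns.T @ patterns) / N               # Hebbian weights on edges only

xi = patterns[0].copy()
xi[rng.choice(N, size=25, replace=False)] *= -1   # random blurring (5% errors)
for _ in range(20):                               # synchronous updates
    xi = np.where(W @ xi >= 0, 1, -1)
print("overlap with stored pattern:", patterns[0] @ xi / N)
```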
Roman, Horace; Bubenheim, Michael; Huet, Emmanuel; Bridoux, Valérie; Zacharopoulou, Chrysoula; Daraï, Emile; Collinet, Pierre; Tuech, Jean-Jacques
2018-01-01
Is there a difference in functional outcome between conservative versus radical rectal surgery in patients with large deep endometriosis infiltrating the rectum 2 years postoperatively? No evidence was found that functional outcomes differed when conservative surgery was compared to radical rectal surgery for deeply invasive endometriosis involving the bowel. Adopting a conservative approach to the surgical management of deep endometriosis infiltrating the rectum, by employing shaving or disc excision, appears to yield improved digestive functional outcomes. However, previous comparative studies were not randomized, introducing a possible bias regarding the presumed superiority of conservative techniques due to the inclusion of patients with more severe deep endometriosis who underwent colorectal resection. From March 2011 to August 2013, we performed a 2-arm randomized trial, enrolling 60 patients with deep endometriosis infiltrating the rectum up to 15 cm from the anus, measuring more than 20 mm in length, involving at least the muscular layer in depth and up to 50% of rectal circumference. No women were lost to follow-up. Patients were enrolled in three French university hospitals and had either conservative surgery, by shaving or disc excision, or radical rectal surgery, by segmental resection. Randomization was performed preoperatively using sequentially numbered, opaque, sealed envelopes, and patients were informed of the results of randomization. The primary endpoint was the proportion of patients experiencing one of the following symptoms: constipation (1 stool/>5 consecutive days), frequent bowel movements (≥3 stools/day), defecation pain, anal incontinence, dysuria or bladder atony requiring self-catheterization 24 months postoperatively. Secondary endpoints were the values of the Visual Analog Scale (VAS), Knowles-Eccersley-Scott-Symptom Questionnaire (KESS), the Gastrointestinal Quality of Life Index (GIQLI), the Wexner scale, the Urinary Symptom Profile (USP) and the Short Form 36 Health Survey (SF36). A total of 60 patients were enrolled. Among the 27 patients in the conservative surgery arm, two were converted to segmental resection (7.4%). In each group, 13 presented with at least one functional problem at 24 months after surgery (48.1 versus 39.4%, OR = 0.70, 95% CI 0.22-2.21). The intention-to-treat comparison of the overall scores on KESS, GIQLI, Wexner, USP and SF36 did not reveal significant differences between the two arms. Segmental resection was associated with a significant risk of bowel stenosis. The inclusion of only large infiltrations of the rectum does not allow the extrapolation of conclusions to small nodules of <20 mm in length. The presumption of a 40% difference favourable to conservative surgery in terms of postoperative functional outcomes resulted in a lack of power to demonstrate a difference for the primary endpoint. Conservative surgery is feasible in patients managed for large deep rectal endometriosis. The trial does not show a statistically significant superiority of conservative surgery for mid-term functional digestive and urinary outcomes in this specific population of women with large involvement of the rectum. There is a higher risk of rectal stenosis after segmental resection, requiring additional endoscopic or surgical procedures. This work was supported by a grant from the clinical research programme for hospitals (PHRC) in France. The authors declare no competing interests related to this study.
This study is registered with ClinicalTrials.gov, number NCT01291576 (trial registration date: 31 January 2011; date of first patient's enrolment: 7 March 2011). © The Author 2017. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology.
Comparison of WAIS-III Short Forms for Measuring Index and Full-Scale Scores
ERIC Educational Resources Information Center
Girard, Todd A.; Axelrod, Bradley N.; Wilkins, Leanne K.
2010-01-01
This investigation assessed the ability of the Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) short forms to estimate both index and IQ scores in a large, mixed clinical sample (N = 809). More specifically, a commonly used modification of Ward's seven-subtest short form (SF7-A), a recently proposed index-based SF7-C and eight-subtest…
The earth's foreshock, bow shock, and magnetosheath
NASA Technical Reports Server (NTRS)
Onsager, T. G.; Thomsen, M. F.
1991-01-01
Studies directly pertaining to the earth's foreshock, bow shock, and magnetosheath are reviewed, and some comparisons are made with data on other planets. Topics considered in detail include the electron foreshock, the ion foreshock, the quasi-parallel shock, the quasi-perpendicular shock, and the magnetosheath. Information discussed spans a broad range of disciplines, from large-scale macroscopic plasma phenomena to small-scale microphysical interactions.
Theory of rumour spreading in complex social networks
NASA Astrophysics Data System (ADS)
Nekovee, M.; Moreno, Y.; Bianconi, G.; Marsili, M.
2007-01-01
We introduce a general stochastic model for the spread of rumours, and derive mean-field equations that describe the dynamics of the model on complex social networks (in particular, those mediated by the Internet). We use analytical and numerical solutions of these equations to examine the threshold behaviour and dynamics of the model on several models of such networks: random graphs, uncorrelated scale-free networks and scale-free networks with assortative degree correlations. We show that in both homogeneous networks and random graphs the model exhibits a critical threshold in the rumour spreading rate below which a rumour cannot propagate in the system. In the case of scale-free networks, on the other hand, this threshold becomes vanishingly small in the limit of infinite system size. We find that the initial rate at which a rumour spreads is much higher in scale-free networks than in random graphs, and that the rate at which the spreading proceeds on scale-free networks is further increased when assortative degree correlations are introduced. The impact of degree correlations on the final fraction of nodes that ever hears a rumour, however, depends on the interplay between network topology and the rumour spreading rate. Our results show that scale-free social networks are prone to the spreading of rumours, just as they are to the spreading of infections. They are relevant to the spreading dynamics of chain emails, viral advertising and large-scale information dissemination algorithms on the Internet.
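Since the mean-field equations drive the threshold analysis described above, a minimal numerical sketch may help. This is the homogeneous (random-graph) version of Maki-Thompson-style rumour dynamics integrated with Euler steps; the parameter names and values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rumour_mean_field(lam=0.3, alpha=0.2, k=6.0, dt=0.01, t_max=60.0):
    """Euler integration of homogeneous mean-field rumour equations.

    lam   -- spreading rate (ignorant -> spreader on contact with a spreader)
    alpha -- stifling rate (spreader -> stifler on contact with a
             spreader or a stifler)
    k     -- average degree of the homogeneous network
    """
    i, s, r = 1.0 - 1e-3, 1e-3, 0.0   # densities: ignorant, spreader, stifler
    history = [(0.0, i, s, r)]
    for step in range(int(t_max / dt)):
        di = -lam * k * i * s             # ignorants hear the rumour
        dr = alpha * k * s * (s + r)      # spreaders stop spreading
        ds = -di - dr                     # total density is conserved
        i, s, r = i + dt * di, s + dt * ds, r + dt * dr
        history.append(((step + 1) * dt, i, s, r))
    return np.array(history)

traj = rumour_mean_field()
print("final fraction that ever heard the rumour:", 1.0 - traj[-1, 1])
```

Sweeping lam downward at fixed alpha exhibits the finite threshold on homogeneous networks; reproducing the vanishing threshold of scale-free networks requires the degree-resolved version of the equations.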
Comparison of Observations of Sporadic-E Layers in the Nighttime and Daytime Mid-Latitude Ionosphere
NASA Technical Reports Server (NTRS)
Pfaff, R.; Freudenreich, H.; Rowland, D.; Klenzing, J.; Clemmons, J.; Larsen, M.; Kudeki, E.; Franke, S.; Urbina, J.; Bullett, T.
2012-01-01
A comparison of numerous rocket experiments to investigate mid-latitude sporadic-E layers is presented. Electric field and plasma density data gathered on sounding rockets launched in the presence of sporadic-E layers and quasi-periodic (QP) radar echoes reveal a complex electrodynamics including both DC parameters and plasma waves detected over a large range of scales. We show both DC and wave electric fields and discuss their relationship to intense sporadic-E layers in both nighttime and daytime conditions. Where available, neutral wind observations provide the complete electrodynamic picture, revealing an essential source of free energy that both sets up the layers and drives them unstable. Electric field data from the nighttime experiments reveal the presence of km-scale waves as well as well-defined packets of broadband (tens of meters to meters) irregularities. What is surprising is that in both the nighttime and daytime experiments, neither the large scale nor short scale waves appear to be distinctly organized by the sporadic-E density layer itself. The observations are discussed in the context of current theories regarding sporadic-E layer generation and quasi-periodic echoes.
Lao, Annabelle Y; Sharma, Vijay K; Tsivgoulis, Georgios; Frey, James L; Malkoff, Marc D; Navarro, Jose C; Alexandrov, Andrei V
2008-10-01
International Consensus Criteria (ICC) consider right-to-left shunt (RLS) present when Transcranial Doppler (TCD) detects even one microbubble (microB). The Spencer Logarithmic Scale (SLS) offers more grades of RLS, with detection of >30 microB corresponding to a large shunt. We compared the yield of ICC and SLS in detection and quantification of a large RLS. We prospectively evaluated paradoxical embolism in consecutive patients with ischemic strokes or transient ischemic attack (TIA) using injections of 9 cc saline agitated with 1 cc of air. Results were classified according to ICC [negative (no microB), grade I (1-20 microB), grade II (>20 microB or "shower" appearance of microB), and grade III ("curtain" appearance of microB)] and SLS criteria [negative (no microB), grade I (1-10 microB), grade II (11-30 microB), grade III (31-100 microB), grade IV (101-300 microB), grade V (>300 microB)]. The RLS size was defined as large (>4 mm) using diameter measurement of the septal defects on transesophageal echocardiography (TEE). TCD comparison to TEE showed 24 true positive, 48 true negative, 4 false positive, and 2 false negative cases (sensitivity 92.3%, specificity 92.3%, positive predictive value (PPV) 85.7%, negative predictive value (NPV) 96%, and accuracy 92.3%) for any RLS presence. Both ICC and SLS were 100% sensitive for detection of large RLS. ICC and SLS criteria yielded a false positive rate of 24.4% and 7.7%, respectively, when compared to TEE. Although both grading scales provide agreement as to any shunt presence, using the Spencer Scale grade III or higher can decrease by one-half the number of false positive TCD diagnoses to predict large RLS on TEE.
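Both grading schemes are threshold rules on the microbubble count, so the comparison can be made concrete in a few lines. The following sketch encodes the two scales as described above and re-derives the reported sensitivity and specificity from the stated confusion matrix; the function names are ours.

```python
def icc_grade(n_microb, curtain=False, shower=False):
    """International Consensus Criteria grade for right-to-left shunt."""
    if curtain:
        return "III"
    if n_microb == 0:
        return "negative"
    if shower or n_microb > 20:
        return "II"
    return "I"

def spencer_grade(n_microb):
    """Spencer Logarithmic Scale grade for right-to-left shunt."""
    for upper, grade in [(0, "negative"), (10, "I"), (30, "II"),
                         (100, "III"), (300, "IV")]:
        if n_microb <= upper:
            return grade
    return "V"   # >300 microbubbles

# Accuracy figures recomputed from the reported confusion matrix
tp, tn, fp, fn = 24, 48, 4, 2
print("sensitivity", tp / (tp + fn))   # 0.923
print("specificity", tn / (tn + fp))   # 0.923
```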
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
Transposon facilitated DNA sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berg, D.E.; Berg, C.M.; Huang, H.V.
1990-01-01
The purpose of this research is to investigate and develop methods that exploit the power of bacterial transposable elements for large scale DNA sequencing. Our premise is that the use of transposons to put primer binding sites randomly in target DNAs should provide access to all portions of large DNA fragments, without the inefficiencies of methods involving random subcloning and attendant repetitive sequencing, or of sequential synthesis of many oligonucleotide primers used to march systematically along a DNA molecule. Two unrelated bacterial transposons, Tn5 and γδ, are being used because they have both proven useful for molecular analyses, and because they differ sufficiently in mechanism and specificity of transposition to merit parallel development.
Gaesser, Amy H; Karan, Orv C
2017-02-01
The objective of this pilot study was to compare the efficacy of Emotional Freedom Techniques (EFT) with that of Cognitive-Behavioral Therapy (CBT) in reducing adolescent anxiety. Randomized controlled study. This study took place in 10 schools (8 public/2 private; 4 high schools/6 middle schools) in 2 northeastern states in the United States. Sixty-three high-ability students in grades 6-12, ages 10-18 years, who scored in the moderate to high ranges for anxiety on the Revised Children's Manifest Anxiety Scale-2 (RCMAS-2) were randomly assigned to CBT (n = 21), EFT (n = 21), or waitlist control (n = 21) intervention groups. CBT is the gold standard of anxiety treatment for adolescent anxiety. EFT is an evidence-based treatment for anxiety that incorporates acupoint stimulation. Students assigned to the CBT or EFT treatment groups received three individual sessions of the identified protocols from trained graduate counseling, psychology, or social work students enrolled at a large northeastern research university. The RCMAS-2 was used to assess preintervention and postintervention anxiety levels in participants. EFT participants (n = 20; M = 52.16, SD = 9.23) showed significant reduction in anxiety levels compared with the waitlist control group (n = 21; M = 57.93, SD = 6.02) (p = 0.005, d = 0.74, 95% CI [-9.76, -1.77]) with a moderate to large effect size. CBT participants (n = 21; M = 54.82, SD = 5.81) showed reduction in anxiety but did not differ significantly from the EFT (p = 0.18, d = 0.34, 95% CI [-6.61, 1.30]) or control (p = 0.12, d = 0.53, 95% CI [-7.06, 0.84]) groups. EFT is an efficacious intervention to significantly reduce anxiety for high-ability adolescents.
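For readers who want to verify the reported effect sizes, a pooled-standard-deviation Cohen's d recovers d = 0.74 for the EFT-versus-control contrast from the summary statistics above; a sketch (the negative sign below simply reflects that lower RCMAS-2 scores mean less anxiety):

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d with a pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                       / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# EFT (n = 20) vs. waitlist control (n = 21), RCMAS-2 posttest scores
print(round(cohens_d(52.16, 9.23, 20, 57.93, 6.02, 21), 2))  # -0.74
```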
Variability and Maintenance of Turbulence in the Very Stable Boundary Layer
NASA Astrophysics Data System (ADS)
Mahrt, Larry
2010-04-01
The relationship of turbulence quantities to mean flow quantities, such as the Richardson number, degenerates substantially for strong stability, at least in those studies that do not place restrictions on minimum turbulence or non-stationarity. This study examines the large variability of the turbulence for very stable conditions by analyzing four months of turbulence data from a site with short grass. Brief comparisons are made with three additional sites, one over short grass on flat terrain and two with tall vegetation in complex terrain. For very stable conditions, any dependence of the turbulence quantities on the mean wind speed or bulk Richardson number becomes masked by large scatter, as found in some previous studies. The large variability of the turbulence quantities is due to random variations and other physical influences not represented by the bulk Richardson number. There is no critical Richardson number above which the turbulence vanishes. For very stable conditions, the record-averaged vertical velocity variance and the drag coefficient increase with the strength of the submeso motions (wave motions, solitary waves, horizontal modes and numerous more complex signatures). The submeso motions are on time scales of minutes and not normally considered part of the mean flow. The generation of turbulence by such unpredictable motions appears to preclude universal similarity theory for predicting the surface stress for very stable conditions. Large variation of the stress direction with respect to the wind direction for the very stable regime is also examined. Needed additional work is noted.
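As background for the bulk Richardson number discussed above, a minimal sketch of its standard surface-layer form; the input values are illustrative, not data from the study.

```python
def bulk_richardson(theta_sfc, theta_z, wind_speed, z, g=9.81):
    """Bulk Richardson number between the surface and height z.

    theta_sfc, theta_z -- potential temperatures (K) at the surface and z
    wind_speed         -- wind speed (m/s) at height z
    z                  -- height (m)
    """
    theta_mean = 0.5 * (theta_sfc + theta_z)
    return (g / theta_mean) * (theta_z - theta_sfc) * z / wind_speed ** 2

# A strongly stable night: 3 K inversion over 10 m in a 1 m/s wind
print(bulk_richardson(283.0, 286.0, 1.0, 10.0))  # ~1.03
```

Values far above the classically quoted critical value of about 0.25 mark the very stable regime in which, as the abstract argues, such bulk quantities lose their predictive power.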
Large-scale anisotropy in stably stratified rotating flows
Marino, R.; Mininni, P. D.; Rosenberg, D. L.; ...
2014-08-28
We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to 1024³ grid points and Reynolds numbers of ≈1000. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with k_⊥^(-5/3), including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.
A two-stage model of fracture of rocks
Kuksenko, V.; Tomilin, N.; Damaskinskaya, E.; Lockner, D.
1996-01-01
In this paper we propose a two-stage model of rock fracture. In the first stage, cracks or local regions of failure are uncorrelated and occur randomly throughout the rock in response to loading of pre-existing flaws. As damage accumulates in the rock, there is a gradual increase in the probability that large clusters of closely spaced cracks or local failure sites will develop. Based on statistical arguments, a critical density of damage will occur where clusters of flaws become large enough to lead to larger-scale failure of the rock (stage two). While crack interaction and cooperative failure is expected to occur within clusters of closely spaced cracks, the initial development of clusters is predicted based on the random variation in pre-existing flaw populations. Thus the onset of the unstable second stage in the model can be computed from the generation of random, uncorrelated damage. The proposed model incorporates notions of the kinetic (and therefore time-dependent) nature of the strength of solids as well as the discrete hierarchic structure of rocks and the flaw populations that lead to damage accumulation. The advantage offered by this model is that its salient features are valid for fracture processes occurring over a wide range of scales, including earthquake processes. A notion of the rank of fracture (fracture size) is introduced, and criteria are presented for both fracture nucleation and the transition of the failure process from one scale to another.
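The first stage of the model, uncorrelated random damage whose clusters grow with damage density until one becomes critical, can be illustrated with a toy site-percolation analogue. This sketch is our illustration of the cluster-growth idea, not the authors' formulation.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def largest_cluster_fraction(damage_density, n=256):
    """Fraction of damaged sites in the largest cluster when damage is
    placed randomly and independently on an n x n grid."""
    damaged = rng.random((n, n)) < damage_density
    labels, num = ndimage.label(damaged)   # 4-connected clusters
    if num == 0:
        return 0.0
    sizes = np.bincount(labels.ravel())[1:]
    return sizes.max() / sizes.sum()

for rho in (0.30, 0.50, 0.59, 0.65):
    print(rho, round(largest_cluster_fraction(rho), 3))
```

The largest cluster stays negligible at low damage density and abruptly dominates near the percolation threshold (about 0.593 for a square lattice), mirroring the model's transition from random damage to larger-scale failure.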
The luminosity function for the CfA redshift survey slices
NASA Technical Reports Server (NTRS)
De Lapparent, Valerie; Geller, Margaret J.; Huchra, John P.
1989-01-01
The luminosity function for two complete slices of the extension of the CfA redshift survey is calculated. The nonparametric technique of Lynden-Bell (1971) and Turner (1979) is used to determine the shape of the luminosity function for the 12 deg slice of the redshift survey. The amplitude of the luminosity function is determined, taking large-scale inhomogeneities into account. The effects of the Malmquist bias on a magnitude-limited redshift survey are examined, showing that the random errors in the magnitudes for the 12 deg slice affect both the determination of the luminosity function and the spatial density contrast of large-scale structures.
Low speed tests of a fixed geometry inlet for a tilt nacelle V/STOL airplane
NASA Technical Reports Server (NTRS)
Syberg, J.; Koncsek, J. L.
1977-01-01
Test data were obtained with a 1/4 scale cold flow model of the inlet at freestream velocities from 0 to 77 m/s (150 knots) and angles of attack from 45 deg to 120 deg. A large scale model was tested with a high bypass ratio turbofan in the NASA/ARC wind tunnel. A fixed geometry inlet is a viable concept for a tilt nacelle V/STOL application. Comparison of data obtained with the two models indicates that flow separation at high angles of attack and low airflow rates is strongly sensitive to Reynolds number and that the large scale model has a significantly improved range of separation-free operation.
Cho, Hee Ju; Chung, Jae Hoon; Jo, Jung Ki; Kang, Dong Hyuk; Cho, Jeong Man; Yoo, Tag Keun; Lee, Seung Wook
2013-12-01
Randomized controlled trials are one of the most reliable resources for assessing the effectiveness and safety of medical treatments. Low quality randomized controlled trials carry a large bias that can ultimately impair the reliability of their conclusions. The present study aimed to evaluate the quality of randomized controlled trials published in International Journal of Urology by using multiple quality assessment tools. Randomized controlled trial articles published in International Journal of Urology were found using the PubMed MEDLINE database, and qualitative analysis was carried out with three distinct assessment tools: the Jadad scale, the van Tulder scale and the Cochrane Collaboration Risk of Bias Tool. The quality of randomized controlled trials was analyzed by publication year, type of subjects, intervention, presence of funding and whether an institutional review board reviewed the study. A total of 68 randomized controlled trial articles were published among a total of 1399 original articles in International Journal of Urology. Among these randomized controlled trials, 10 (2.70%) were from 1994 to 1999, 23 (4.10%) were from 2000 to 2005 and 35 (4.00%) were from 2006 to 2011 (P = 0.494). On assessment with the Jadad and van Tulder scales, the number and percentage of high quality randomized controlled trials increased over time. The studies that had institutional review board reviews, funding resources or that were carried out in multiple institutions had an increased percentage of high quality articles. The number and percentage of high-quality randomized controlled trials published in International Journal of Urology have increased over time. Furthermore, randomized controlled trials with funding resources, institutional review board reviews or carried out in multiple institutions have been found to be of higher quality compared with others not presenting these features. © 2013 The Japanese Urological Association.
Please don't misuse the museum: 'declines' may be statistical
Grant, Evan H. Campbell
2015-01-01
Detecting declines in populations at broad spatial scales takes enormous effort, and long-term data are often more sparse than is desired for estimating trends, identifying drivers for population changes, framing conservation decisions or taking management actions. Museum records and historic data can be available at large scales across multiple decades, and are therefore an attractive source of information on the comparative status of populations. However, changes in populations may be real (e.g., in response to environmental covariates) or may result from variation in our ability to observe the true population response (also possibly related to environmental covariates). This is a (statistical) nuisance in understanding the true status of a population. Evaluating statistical hypotheses alongside more interesting ecological ones is important in the appropriate use of museum data. Two statistical considerations are generally applicable to the use of museum records: first, without initial random sampling, comparison with contemporary results cannot provide inference to the entire range of a species; and second, the availability of only some individuals in a population may itself respond to environmental changes. Changes in the availability of individuals may reduce the proportion of the population that is present and able to be counted on a given survey event, resulting in an apparent decline even when population size is stable.
Supersonic jet noise generated by large scale instabilities
NASA Technical Reports Server (NTRS)
Seiner, J. M.; Mclaughlin, D. K.; Liu, C. H.
1982-01-01
The role of large scale wavelike structures as the major mechanism for supersonic jet noise emission is examined. Using aerodynamic and acoustic data for low Reynolds number supersonic jets (Reynolds numbers at and below 70,000), comparisons are made with flow fluctuation and acoustic measurements in high Reynolds number supersonic jets. These comparisons show that a similar physical mechanism governs the generation of sound emitted in the principal noise direction. These experimental data are further compared with a linear instability theory, whose prediction for the axial location of peak wave amplitude agrees satisfactorily with measured phase-averaged flow fluctuation data in the low Reynolds number jets. In the high Reynolds number flow, theory and experiment differ as to the axial location of peak flow fluctuations, and the theory predicts an apparent origin for sound emission far upstream of that indicated by the measured acoustic data.
NASA Astrophysics Data System (ADS)
Schweser, Ferdinand; Dwyer, Michael G.; Deistung, Andreas; Reichenbach, Jürgen R.; Zivadinov, Robert
2013-10-01
The assessment of abnormal accumulation of tissue iron in the basal ganglia nuclei and in white matter plaques using the gradient echo magnetic resonance signal phase has become a research focus in many neurodegenerative diseases such as multiple sclerosis or Parkinson’s disease. A common and natural approach is to calculate the mean high-pass-filtered phase of previously delineated brain structures. Unfortunately, the interpretation of such an analysis requires caution: in this paper we demonstrate that regional gray matter atrophy, which is concomitant with many neurodegenerative diseases, may itself directly result in a phase shift seemingly indicative of increased iron concentration even without any real change in the tissue iron concentration. Although this effect is relatively small, results of large-scale group comparisons may be driven by anatomical changes rather than by changes in the iron concentration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boore, Jeffrey L.
2004-11-27
Although the phylogenetic relationships of many organisms have been convincingly resolved by the comparisons of nucleotide or amino acid sequences, others have remained equivocal despite great effort. Now that large-scale genome sequencing projects are sampling many lineages, it is becoming feasible to compare large data sets of genome-level features and to develop this as a tool for phylogenetic reconstruction that has advantages over conventional sequence comparisons. Although it is unlikely that these will address a large number of evolutionary branch points across the broad tree of life due to the infeasibility of such sampling, they have great potential for convincingly resolving many critical, contested relationships for which no other data seems promising. However, it is important that we recognize potential pitfalls, establish reasonable standards for acceptance, and employ rigorous methodology to guard against a return to earlier days of scenario-driven evolutionary reconstructions.
Multi-field inflation with a random potential
NASA Astrophysics Data System (ADS)
Tye, S.-H. Henry; Xu, Jiajun; Zhang, Yang
2009-04-01
Motivated by the possibility of inflation in the cosmic landscape, which may be approximated by a complicated potential, we study the density perturbations in multi-field inflation with a random potential. The random potential causes the inflaton to undergo a Brownian-like motion with a drift in the D-dimensional field space, allowing entropic perturbation modes to continuously and randomly feed into the adiabatic mode. To quantify such an effect, we employ a stochastic approach to evaluate the two-point and three-point functions of primordial perturbations. We find that in the weakly random scenario, where the stochastic scatterings are frequent but mild, the resulting power spectrum resembles that of the single field slow-roll case, with up to 2% more red tilt. The strongly random scenario, in which the coarse-grained motion of the inflaton is significantly slowed down by the scatterings, leads to rich phenomenologies. The power spectrum exhibits primordial fluctuations on all angular scales. Such features may already be hiding in the error bars of the observed CMB TT (as well as TE and EE) power spectra, having been smoothed out by the binning of data points. With more data coming in the future, we expect these features can be detected or falsified. On the other hand, the tensor power spectrum itself is free of fluctuations, and the tensor to scalar ratio is enhanced by the large ratio of the Brownian-like motion speed over the drift speed. In addition, a large negative running of the power spectral index is possible. Non-Gaussianity is generically suppressed by the growth of adiabatic perturbations on super-horizon scales, and is negligible in the weakly random scenario. However, non-Gaussianity can possibly be enhanced by resonant effects in the strongly random scenario or arise from the entropic perturbations during the onset of (p)reheating if the background inflaton trajectory exhibits particular properties. The formalism developed in this paper can be applied to a wide class of multi-field inflation models including, e.g., the N-flation scenario.
Comparison of concentric needle versus hooked-wire electrodes in the canine larynx.
Jaffe, D M; Solomon, N P; Robinson, R A; Hoffman, H T; Luschei, E S
1998-05-01
The use of a specific electrode type in laryngeal electromyography has not been standardized. Laryngeal electromyography is usually performed with hooked-wire electrodes or concentric needle electrodes. Hooked-wire electrodes have the advantage of allowing laryngeal movement with ease and comfort, whereas the concentric needle electrodes have benefits from a technical aspect and may be advanced, withdrawn, or redirected during attempts to appropriately place the electrode. This study examines whether hooked-wire electrodes permit more stable recordings than standard concentric needle electrodes at rest and after large-scale movements of the larynx and surrounding structures. A histologic comparison of tissue injury resulting from placement and removal of the two electrode types is also made by evaluation of the vocal folds. Electrodes were percutaneously placed into the thyroarytenoid muscles of 10 adult canines. Amplitude of electromyographic activity was measured and compared during vagal stimulation before and after large-scale laryngeal movements. Signal consistency over time was examined. Animals were killed and vocal fold injury was graded and compared histologically. Waveform morphology did not consistently differ between electrode types. The variability of electromyographic amplitude was greater for the hooked-wire electrode (p < 0.05), whereas the mean amplitude measures before and after large-scale laryngeal movements did not differ (p > 0.05). Inflammatory responses and hematoma formation were also similar. Waveform morphology of electromyographic signals registered from both electrode types show similar complex action potentials. There is no difference between the hooked-wire electrode and the concentric needle electrode in terms of electrode stability or vocal fold injury in the thyroarytenoid muscle after large-scale laryngeal movements.
The Variance of Intraclass Correlations in Three- and Four-Level Models
ERIC Educational Resources Information Center
Hedges, Larry V.; Hedberg, E. C.; Kuyper, Arend M.
2012-01-01
Intraclass correlations are used to summarize the variance decomposition in populations with multilevel hierarchical structure. There has recently been considerable interest in estimating intraclass correlations from surveys or designed experiments to provide design parameters for planning future large-scale randomized experiments. The large…
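The design-parameter role of the intraclass correlation enters planning through the design effect; a two-level sketch follows (the article's three- and four-level cases extend this with additional clustering terms):

```python
def design_effect(icc, cluster_size):
    """Variance inflation of a cluster-randomized design relative to
    simple random sampling: 1 + (m - 1) * rho."""
    return 1.0 + (cluster_size - 1) * icc

def effective_sample_size(total_n, icc, cluster_size):
    return total_n / design_effect(icc, cluster_size)

# e.g. 40 clusters of 50 students with an ICC of 0.2
print(design_effect(0.2, 50))                # 10.8
print(effective_sample_size(2000, 0.2, 50))  # ~185
```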
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
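To show what the model samples (though not how the paper samples it efficiently), here is a naive quadratic sketch of a kernel-based inhomogeneous random graph; the Chung-Lu-type multiplicative kernel and Pareto weights are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_kernel_graph(weights, kernel):
    """Each edge (u, v) appears independently with probability
    min(kernel(w_u, w_v) / n, 1). Quadratic time; the paper's point is
    an algorithm that avoids this O(n^2) cost."""
    n = len(weights)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < min(kernel(weights[u], weights[v]) / n, 1.0):
                edges.append((u, v))
    return edges

# Heavy-tailed expected degrees via a multiplicative (Chung-Lu-type) kernel
w = rng.pareto(2.5, size=1000) + 1.0
edges = sample_kernel_graph(w, lambda a, b: a * b / w.mean())
print(len(edges), "edges among", len(w), "vertices")
```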
Truss Optimization for a Manned Nuclear Electric Space Vehicle using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Benford, Andrew; Tinker, Michael L.
2004-01-01
The purpose of this paper is to utilize the genetic algorithm (GA) optimization method for structural design of a nuclear propulsion vehicle. Genetic algorithms provide a guided, random search technique that mirrors biological adaptation. To verify the GA capabilities, other traditional optimization methods were used to generate results for comparison to the GA results, first for simple two-dimensional structures, and then for full-scale three-dimensional truss designs.
Robustness of Controllability for Networks Based on Edge-Attack
Nie, Sen; Wang, Xuwen; Zhang, Haifeng; Li, Qilang; Wang, Binghong
2014-01-01
We study the controllability of networks in the process of cascading failures under two different attacking strategies, random and intentional attack, respectively. For the highest-load edge attack, we find that the controllability of Erdős-Rényi networks with moderate average degree is less robust, whereas scale-free networks with moderate power-law exponents show strong robustness of controllability under the same attack strategy. The vulnerability of controllability under random and intentional attacks behaves differently as the removal fraction increases; in particular, the robustness of control plays an important role in cascades at large removal fractions. The simulation results show that, for scale-free networks with various power-law exponents, a larger scale of cascades does not imply a larger increment of driver nodes. Meanwhile, the number of driver nodes in cascading failures is also related to the number of edges in strongly connected components. PMID:24586507
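Driver-node counts in studies like this are conventionally computed with the maximum-matching result of structural controllability (Liu, Slotine and Barabási, 2011); a sketch under that assumption, not the authors' code:

```python
import networkx as nx
from networkx.algorithms.bipartite import hopcroft_karp_matching

def n_driver_nodes(digraph):
    """Minimum number of driver nodes: N_D = max(N - |M|, 1), where M is
    a maximum matching of the bipartite representation of the digraph."""
    bip = nx.Graph()
    out_side = [("out", u) for u in digraph.nodes]
    bip.add_nodes_from(out_side, bipartite=0)
    bip.add_nodes_from((("in", v) for v in digraph.nodes), bipartite=1)
    bip.add_edges_from((("out", u), ("in", v)) for u, v in digraph.edges)
    matching = hopcroft_karp_matching(bip, top_nodes=out_side)
    matched = sum(1 for node in matching if node[0] == "out")
    return max(digraph.number_of_nodes() - matched, 1)

g = nx.gnp_random_graph(200, 0.02, directed=True, seed=42)
print(n_driver_nodes(g))
```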
Nesting ecology of Spectacled Eiders Somateria fischeri on the Indigirka River Delta, Russia
Pearce, John M.; Esler, Daniel N.; Degtyarev, Andrei G.
1998-01-01
In 1994 and 1995 we investigated breeding biology and nest site habitat of Spectacled Eiders on two study areas within the coastal fringe of the Indigirka River Delta, Russia (71°20' N, 150°20' E). Spectacled Eiders were first observed on 6 June in both years and nesting commenced by mid-June. Average clutch size declined with later nest initiation dates by 0.10 eggs per day; clutches were larger in 1994 than 1995 and were slightly larger on a coastal island study area compared to an interior area. Nesting success varied substantially between years, with estimates of 1.6% in 1994 and 27.6% in 1995. Total egg loss, through avian or mammalian predation, occurred more frequently than partial egg loss. Partial egg loss was detected in 16 nests and appeared unrelated to nest initiation date or clutch size. We found no difference among survival rates of nests visited weekly, biweekly, and those at which the hen was never flushed, suggesting that researcher presence did not adversely affect nesting success. A comparison of nine habitat variables within each study area revealed little difference between nest sites and a comparable number of randomly located sites, leading us to conclude that Spectacled Eiders nest randomly with respect to most small scale habitat features. We propose that large scale landscape features are more important indicators of nesting habitat as they may afford greater protection from land-based predators, such as the Arctic Fox. Demographic data collected during this study, along with recent conservation measures implemented by the Republic of Sakha (Yakutia), lead us to conclude that there are few threats to the Indigirka River Delta Spectacled Eider population. Presently, the Indigirka River Delta contains the largest concentration of nesting Spectacled Eiders and deserves continued monitoring and conservation.
Lazarov, Amit; Marom, Sofi; Yahalom, Naomi; Pine, Daniel S; Hermesh, Haggai; Bar-Haim, Yair
2017-12-20
Cognitive-behavioral group therapy (CBGT) is a first-line treatment for social anxiety disorder (SAD). However, since many patients remain symptomatic post-treatment, there is a need for augmenting procedures. This randomized controlled trial (RCT) examined the potential augmentation effect of attention bias modification (ABM) for CBGT. Fifty patients with SAD from three therapy groups were randomized to receive an 18-week standard CBGT with either ABM designed to shift attention away from threat (CBGT + ABM), or a placebo protocol not designed to modify threat-related attention (CBGT + placebo). Therapy groups took place in a large mental health center. Clinician and self-report measures of social anxiety and depression were acquired pre-treatment, post-treatment, and at 3-month follow-up. Attention bias was assessed at pre- and post-treatment. Patients randomized to the CBGT + ABM group, relative to those randomized to the CBGT + placebo group, showed greater reductions in clinician-rated SAD symptoms post-treatment, with effects maintained at 3-month follow-up. Group differences were not evident for self-report or attention-bias measures, with similar reductions in both groups. Finally, reduction in attention bias did not mediate the association between group and reduction in Liebowitz Social Anxiety Scale Structured Interview (LSAS) scores. This is the first RCT to examine the possible augmenting effect of ABM added to group-based cognitive-behavioral therapy for adult SAD. Training patients' attention away from threat might augment the treatment response to standard CBGT in SAD, a possibility that could be further evaluated in large-scale RCTs.
Age-related Cataract in a Randomized Trial of Vitamins E and C in Men
Christen, William G.; Glynn, Robert J.; Sesso, Howard D.; Kurth, Tobias; MacFadyen, Jean; Bubes, Vadim; Buring, Julie E.; Manson, JoAnn E.; Michael Gaziano, J.
2010-01-01
Objective To test whether supplementation with alternate day vitamin E or daily vitamin C affects the incidence of age-related cataract in a large-scale randomized trial of men. Design Randomized, double-masked, placebo-controlled trial. Participants Eleven thousand five hundred forty-five apparently healthy US male physicians aged 50 years or older who were without a diagnosis of cataract at baseline. Intervention Participants were randomly assigned to receive 400 IU of vitamin E or placebo on alternate days, and 500 mg of vitamin C or placebo daily. Main Outcome Measure Incident cataract responsible for a reduction in best-corrected visual acuity to 20/30 or worse based on self-report confirmed by medical record review. Results After 8 years of treatment and follow-up, a total of 1,174 incident cataracts were confirmed. There were 579 cataracts in the vitamin E treated group and 595 in the vitamin E placebo group (hazard ratio [HR], 0.99; 95 percent confidence interval [CI], 0.88 to 1.11). For vitamin C, there were 593 cataracts in the treated group and 581 in the placebo group (HR, 1.02; CI, 0.91 to 1.14). Conclusions In a large-scale randomized trial of US male physicians, long-term alternate day use of 400 IU of vitamin E and/or daily use of 500 mg of vitamin C had no significant beneficial or harmful effect on the risk of cataract. Application to Clinical Practice Long-term use of vitamin E and/or vitamin C supplements has no appreciable effect on cataract. PMID:21060040
Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian
2018-05-08
An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations ( Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016 , 144 , 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017 , 13 , 1647 - 1655 ) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
The Importance of Large-Diameter Trees to Forest Structural Heterogeneity
Lutz, James A.; Larson, Andrew J.; Freund, James A.; Swanson, Mark E.; Bible, Kenneth J.
2013-01-01
Large-diameter trees dominate the structure, dynamics and function of many temperate and tropical forests. However, their attendant contributions to forest heterogeneity are rarely addressed. We established the Wind River Forest Dynamics Plot, a 25.6 ha permanent plot within which we tagged and mapped all 30,973 woody stems ≥1 cm dbh, all 1,966 snags ≥10 cm dbh, and all shrub patches ≥2 m2. Basal area of the 26 woody species was 62.18 m2/ha, of which 61.60 m2/ha was trees and 0.58 m2/ha was tall shrubs. Large-diameter trees (≥100 cm dbh) comprised 1.5% of stems, 31.8% of basal area, and 17.6% of the heterogeneity of basal area, with basal area dominated by Tsuga heterophylla and Pseudotsuga menziesii. Small-diameter subpopulations of Pseudotsuga menziesii, Tsuga heterophylla and Thuja plicata, as well as all tree species combined, exhibited significant aggregation relative to the null model of complete spatial randomness (CSR) up to 9 m (P≤0.001). Patterns of large-diameter trees were either not different from CSR (Tsuga heterophylla), or exhibited slight aggregation (Pseudotsuga menziesii and Thuja plicata). Significant spatial repulsion between large-diameter and small-diameter Tsuga heterophylla suggests that large-diameter Tsuga heterophylla function as organizers of tree demography over decadal timescales through competitive interactions. Comparison between two forest dynamics plots suggests that forest structural diversity responds to intermediate-scale environmental heterogeneity and disturbances, similar to hypotheses about patterns of species richness and richness–ecosystem function relationships. Large mapped plots with detailed within-plot environmental spatial covariates will be required to test these hypotheses. PMID:24376579
NASA Astrophysics Data System (ADS)
Shao, Yaping; Liu, Shaofeng; Schween, Jan H.; Crewell, Susanne
2013-08-01
A model is developed for the large-eddy simulation (LES) of heterogeneous atmosphere and land-surface processes. It couples an LES model with a land-surface scheme. New developments are made to the land-surface scheme to ensure the adequate representation of atmosphere-land-surface transfers on the large-eddy scale. These include: (1) a multi-layer canopy scheme; (2) a method for flux estimates consistent with the large-eddy subgrid closure; and (3) an appropriate soil-layer configuration. The model is then applied to a heterogeneous region with 60-m horizontal resolution and the results are compared with ground-based and airborne measurements. The simulated sensible and latent heat fluxes are found to agree well with the eddy-correlation measurements. Good agreement is also found in the modelled and observed net radiation, ground heat flux, soil temperature and moisture. Based on the model results, we study the patterns of the sensible and latent heat fluxes, how such patterns come into existence, and how large eddies propagate and destroy land-surface signals in the atmosphere. Near the surface, the flux and land-use patterns are found to be closely correlated. In the lower boundary layer, small eddies bearing land-surface signals organize and develop into larger eddies, which carry the signals to considerably higher levels. As a result, the instantaneous flux patterns appear to be unrelated to the land-use patterns, but on average, the correlation between them is significant and persistent up to about 650 m. For a given land-surface type, the scatter of the fluxes amounts to several hundred W m^-2, due to (1) large-eddy randomness; (2) rapid large-eddy and surface feedback; and (3) local advection related to surface heterogeneity.
Stanley, Clayton; Byrne, Michael D
2016-12-01
The growth of social media and user-created content on online sites provides unique opportunities to study models of human declarative memory. By framing the task of choosing a hashtag for a tweet and tagging a post on Stack Overflow as a declarative memory retrieval problem, 2 cognitively plausible declarative memory models were applied to millions of posts and tweets and evaluated on how accurately they predict a user's chosen tags. An ACT-R based Bayesian model and a random permutation vector-based model were tested on the large data sets. The results show that past user behavior of tag use is a strong predictor of future behavior. Furthermore, past behavior was successfully incorporated into the random permutation model that previously used only context. Also, ACT-R's attentional weight term was linked to an entropy-weighting natural language processing method used to attenuate high-frequency words (e.g., articles and prepositions). Word order was not found to be a strong predictor of tag use, and the random permutation model performed comparably to the Bayesian model without including word order. This shows that the strength of the random permutation model is not in the ability to represent word order, but rather in the way in which context information is successfully compressed. The results of the large-scale exploration show how the architecture of the 2 memory models can be modified to significantly improve accuracy, and may suggest task-independent general modifications that can help improve model fit to human data in a much wider range of domains. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
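The ACT-R ingredient in such models is typically the base-level learning equation, in which recency and frequency of past use determine a tag's activation; a sketch with an invented usage history (the decay of 0.5 is the conventional ACT-R default, and the tag names are hypothetical):

```python
import math

def base_level_activation(use_times, now, decay=0.5):
    """ACT-R base-level learning: A = ln(sum_j (now - t_j)^-d).
    Frequent and recent uses yield higher activation."""
    lags = [now - t for t in use_times if now > t]
    if not lags:
        return float("-inf")
    return math.log(sum(lag ** -decay for lag in lags))

# Times (days) at which a hypothetical user applied each tag
history = {"python": [1, 3, 7, 9], "numpy": [2], "flask": [8, 9.5]}
now = 10.0
ranked = sorted(history, key=lambda t: -base_level_activation(history[t], now))
print(ranked)  # most activated, i.e. most likely, tag first
```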
Financial Management of a Large Multi-site Randomized Clinical Trial
Sheffet, Alice J.; Flaxman, Linda; Tom, MeeLee; Hughes, Susan E.; Longbottom, Mary E.; Howard, Virginia J.; Marler, John R.; Brott, Thomas G.
2014-01-01
Background The Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) received five years’ funding ($21,112,866) from the National Institutes of Health to compare carotid stenting to surgery for stroke prevention in 2,500 randomized participants at 40 sites. Aims Herein we evaluate the change in the CREST budget from a fixed to variable-cost model and recommend strategies for the financial management of large-scale clinical trials. Methods Projections of the original grant’s fixed-cost model were compared to the actual costs of the revised variable-cost model. The original grant’s fixed-cost budget included salaries, fringe benefits, and other direct and indirect costs. For the variable-cost model, the costs were actual payments to the clinical sites and core centers based upon actual trial enrollment. We compared annual direct and indirect costs and per-patient cost for both the fixed and variable models. Differences between clinical site and core center expenditures were also calculated. Results Using a variable-cost budget for clinical sites, funding was extended by no-cost extension from five to eight years. Randomizing sites tripled from 34 to 109. Of the 2,500 targeted sample size, 138 (5.5%) were randomized during the first five years and 1,387 (55.5%) during the no-cost extension. The actual per-patient costs of the variable model were 9% ($13,845) of the projected per-patient costs ($152,992) of the fixed model. Conclusions Performance-based budgets conserve funding, promote compliance, and allow for additional sites at modest additional cost. Costs of large-scale clinical trials can thus be reduced through effective management without compromising scientific integrity. PMID:24661748
Two-dimensional Ising model on random lattices with constant coordination number
NASA Astrophysics Data System (ADS)
Schrauth, Manuel; Richter, Julian A. J.; Portela, Jefferson S. E.
2018-02-01
We study the two-dimensional Ising model on networks with quenched topological (connectivity) disorder. In particular, we construct random lattices of constant coordination number and perform large-scale Monte Carlo simulations in order to obtain critical exponents using finite-size scaling relations. We find disorder-dependent effective critical exponents, similar to diluted models, showing thus no clear universal behavior. Considering the very recent results for the two-dimensional Ising model on proximity graphs and the coordination number correlation analysis suggested by Barghathi and Vojta [Phys. Rev. Lett. 113, 120602 (2014), 10.1103/PhysRevLett.113.120602], our results indicate that the planarity and connectedness of the lattice play an important role on deciding whether the phase transition is stable against quenched topological disorder.
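The Monte Carlo ingredient is standard Metropolis dynamics, which applies on any graph. The sketch below uses a random regular graph, which has a constant coordination number but, unlike the planar lattices constructed in the paper, no spatial embedding; parameters are illustrative.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(7)

def metropolis_ising(graph, beta, sweeps=200):
    """Metropolis dynamics for the zero-field Ising model on a graph;
    returns |magnetization| per spin after the final sweep."""
    nodes = list(graph.nodes)
    spin = {v: int(rng.choice((-1, 1))) for v in nodes}
    for _ in range(sweeps):
        for v in rng.permutation(nodes):
            delta_e = 2 * spin[v] * sum(spin[u] for u in graph[v])
            if delta_e <= 0 or rng.random() < np.exp(-beta * delta_e):
                spin[v] = -spin[v]
        # a production run would record magnetization and energy here
        # for the finite-size scaling analysis
    return abs(sum(spin.values())) / len(nodes)

# Quenched connectivity disorder with constant coordination number z = 4
g = nx.random_regular_graph(4, 1024, seed=7)
print(metropolis_ising(g, beta=0.6))   # ordered phase for this beta
```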
The topology of large-scale structure. V - Two-dimensional topology of sky maps
NASA Astrophysics Data System (ADS)
Gott, J. R., III; Mao, Shude; Park, Changbom; Lahav, Ofer
1992-01-01
A 2D algorithm is applied to observed sky maps and numerical simulations. It is found that when topology is studied on smoothing scales larger than the correlation length, the topology is approximately in agreement with the random phase formula for the 2D genus-threshold density relation, G2(ν) ∝ ν exp(−ν²/2). Some samples show small 'meatball shifts' similar to those seen in corresponding 3D observational samples and similar to those produced by biasing in cold dark matter simulations. The observational results are thus consistent with the standard model in which the structure in the universe today has grown from small fluctuations caused by random quantum noise in the early universe.
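For reference, the random phase prediction quoted above is easy to tabulate; in this form a 'meatball shift' appears as a displacement of the zero crossing away from ν = 0. The amplitude below is arbitrary.

```python
import numpy as np

def random_phase_genus(nu, amplitude=1.0):
    """Random phase (Gaussian) 2D genus-threshold relation:
    G2(nu) = A * nu * exp(-nu**2 / 2)."""
    return amplitude * nu * np.exp(-nu ** 2 / 2.0)

for nu in np.linspace(-3.0, 3.0, 13):
    print(f"nu = {nu:+.1f}   G2 = {random_phase_genus(nu):+.3f}")
```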
Open-label placebo treatment in chronic low back pain: a randomized controlled trial
Carvalho, Cláudia; Caetano, Joaquim Machado; Cunha, Lidia; Rebouta, Paula; Kaptchuk, Ted J.; Kirsch, Irving
2016-01-01
Abstract This randomized controlled trial was performed to investigate whether placebo effects in chronic low back pain could be harnessed ethically by adding open-label placebo (OLP) treatment to treatment as usual (TAU) for 3 weeks. Pain severity was assessed on three 0- to 10-point Numeric Rating Scales, scoring maximum pain, minimum pain, and usual pain, and a composite, primary outcome, total pain score. Our other primary outcome was back-related dysfunction, assessed on the Roland–Morris Disability Questionnaire. In an exploratory follow-up, participants on TAU received placebo pills for 3 additional weeks. We randomized 97 adults reporting persistent low back pain for more than 3 months' duration and diagnosed by a board-certified pain specialist. Eighty-three adults completed the trial. Compared to TAU, OLP elicited greater pain reduction on each of the three 0- to 10-point Numeric Rating Scales and on the 0- to 10-point composite pain scale (P < 0.001), with moderate to large effect sizes. Pain reduction on the composite Numeric Rating Scales was 1.5 (95% confidence interval: 1.0-2.0) in the OLP group and 0.2 (−0.3 to 0.8) in the TAU group. Open-label placebo treatment also reduced disability compared to TAU (P < 0.001), with a large effect size. Improvement in disability scores was 2.9 (1.7-4.0) in the OLP group and 0.0 (−1.1 to 1.2) in the TAU group. After being switched to OLP, the TAU group showed significant reductions in both pain (1.5, 0.8-2.3) and disability (3.4, 2.2-4.5). Our findings suggest that OLP pills presented in a positive context may be helpful in chronic low back pain. PMID:27755279
Johnson, Cynthia R.; Turner, Kylan S.; Foldes, Emily; Brooks, Maria M.; Kronk, Rebecca; Wiggs, Luci
2013-01-01
Objectives A large percentage of children with autism spectrum disorders (ASD) have bedtime and sleep disturbances. However, the treatment of these disturbances has been understudied. The purpose of our study was to develop a manualized behavioral parent training (BPT) program for parents of young children with ASD and sleep disturbances and to test the feasibility, fidelity, and initial efficacy of the treatment in a small randomized controlled trial (RCT). Participants and methods Parents of a sample of 40 young children diagnosed with ASD with an average age of 3.5 years were enrolled in our study. Participants were randomized to either the BPT program group or a comparison group who were given nonsleep-related parent education. Each was individually administered a 5-session program delivered over the 8-week study. Outcome measures of feasibility, fidelity, and efficacy were collected at weeks 4 and 8 after the baseline time point. Children’s sleep was assessed by parent report and objectively by actigraphy. Results Of the 20 participants in each group, data were available for 15 participants randomized to BPT and 18 participants randomized to the comparison condition. Results supported the feasibility of the manualized parent training program and the comparison program. Treatment fidelity was high for both groups. The BPT program group significantly improved more than the comparison group based on the primary sleep outcome of parent report. There were no objective changes in sleep detected by actigraphy. Conclusions Our study is one of few RCTs of a BPT program to specifically target sleep disturbances in a well-characterized sample of young children with ASD and to demonstrate the feasibility of the approach. Initial efficacy favored the BPT program over the comparison group and suggested that this manualized parent training approach is worthy of further examination of the efficacy within a larger RCT. PMID:23993773
Fractal Signals & Space-Time Cartoons
NASA Astrophysics Data System (ADS)
Oetama, H. C. Jakob; Maksoed, W. H.
2016-03-01
In 'Theory of Scale Relativity' (1991), L. Nottale states that 'scale relativity is a geometrical and fractal space-time theory'. This is set against 'a unified, wavelet-based framework for efficiently synthesizing, analyzing and processing several broad classes of fractal signals' (Gregory W. Wornell, 'Signal Processing with Fractals', 1995), whose Fig. 1.1 shows a simple waveform from a statistically scale-invariant random process [ibid., p. 3]. Together with RLE Technical Report 566, 'Synthesis, Analysis and Processing of Fractal Signals' (Wornell, October 1991), we intend to relate the growth relation a Δt + (1 − β Δt) in Petersen et al., 'Scale invariant properties of public debt growth' (2010, 38006-p2), to the expression 1/{1 − (2α(λ)/3π) ln(λ/r)} depicted in Nottale (1991, p. 24). The acknowledgment is devoted to the late H.E. Mr. Brigadier General (TNI, retd.) Prof. Ir. HANDOJO.
Capillary-tube-based extension of thermoacoustic theory for a random medium
NASA Astrophysics Data System (ADS)
Roh, Heui-Seol; Raspet, Richard; Bass, Henry E.
2005-09-01
Thermoacoustic theory for a single capillary tube is extended to a random bulk medium modeled as a collection of capillary tubes. The characteristics of the porous stack inside the resonator, such as tortuosity, dynamic shape factor, and porosity, are introduced in the extension of the wave equation, following Attenborough's approach. Separate dynamic shape factors for the viscous and thermal effects are adopted, and scaling using the dynamic shape factor and the tortuosity factor is demonstrated. Theoretical and experimental comparison of thermoviscous functions in reticulated vitreous carbon (RVC) and aluminum foam shows reasonable agreement. The extension is useful for investigations of the properties of a stack with arbitrary shapes of non-parallel pores.
NASA Astrophysics Data System (ADS)
McEwen, Malcolm; Sharpe, Peter; Vörös, Sándor
2015-04-01
When comparing absorbed dose standards from different laboratories (e.g. National Measurement Institutes, NMIs, for Key or Supplementary comparisons) it is rarely possible to carry out a direct comparison of primary standard instruments, and therefore some form of transfer detector is required. Historically, air-filled, unsealed ionization chambers have been used because of the long history of using these instruments, their very good stability over many years, and their ease of transport. However, the use of ion chambers for therapy-level comparisons is not without its problems. Findings from recent investigations suggest that ion chambers are prone to non-random variations, that they are not completely robust to standard courier practices, and that failure at any step in a comparison can render all measurements potentially useless. An alternative approach is to identify a transfer system that is insensitive to some of these concerns—effectively a dosimeter that is inexpensive, simple to use, robust, but with sufficient precision and of a size relevant to the disseminated quantity in question. The alanine dosimetry system has been successfully used in a number of situations as an audit dosimeter, and therefore the purpose of this investigation was to determine whether alanine could also be used as the transfer detector for dosimetric comparisons, which require a lower value for the measurement uncertainty. A measurement protocol was developed for comparing primary standards of absorbed dose to water in high-energy electron beams using alanine pellets irradiated in a water-equivalent plastic phantom. A trial comparison has been carried out between three NMIs and has indicated that alanine is a suitable alternative to ion chambers, with the system used achieving a precision of 0.1%. Although the focus of the evaluation was on the performance of the dosimeter, the comparison results are encouraging, showing agreement at the level of the combined uncertainties (~0.6%). Based on this investigation, a large-scale comparison of primary standards for high-energy electron beams is currently being developed under the auspices of the BIPM.
A k-space method for acoustic propagation using coupled first-order equations in three dimensions.
Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C
2009-09-01
A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
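The spectral evaluation of spatial derivatives described above is the core of any k-space scheme. A minimal sketch in Python/NumPy (illustrative only; the grid size, domain length, and function names are assumptions, not taken from the paper):

    import numpy as np

    # Spectral (FFT-based) first derivative along one axis of a 3D periodic
    # field: differentiate by multiplying with i*k in Fourier space.
    def spectral_derivative(f, axis, L):
        n = f.shape[axis]
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
        shape = [1, 1, 1]
        shape[axis] = n
        return np.real(np.fft.ifftn(1j * k.reshape(shape) * np.fft.fftn(f)))

    # d/dx of sin(x) on a periodic cube recovers cos(x) to machine precision.
    n, L = 64, 2.0 * np.pi
    x = np.linspace(0.0, L, n, endpoint=False)
    X = np.tile(x[:, None, None], (1, n, n))
    print(np.max(np.abs(spectral_derivative(np.sin(X), 0, L) - np.cos(X))))

The accuracy of such derivatives is what makes the temporal correction and staggered grids of the full method pay off; the 3D FFT is also the all-to-all communication bottleneck noted in the abstract.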
Accelerating large-scale protein structure alignments with graphics processing units
2012-01-01
Background Large-scale protein structure alignment, an indispensable tool for structural bioinformatics, poses a tremendous challenge to computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a Parallel Protein Structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign can incorporate many current methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues of protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massively parallel computing power of GPUs. PMID:22357132
Stochastic Fermi Energization of Coronal Plasma during Explosive Magnetic Energy Release
NASA Astrophysics Data System (ADS)
Pisokas, Theophilos; Vlahos, Loukas; Isliker, Heinz; Tsiolis, Vassilis; Anastasiadis, Anastasios
2017-02-01
The aim of this study is to analyze the interaction of charged particles (ions and electrons) with randomly formed particle scatterers (e.g., large-scale local “magnetic fluctuations” or “coherent magnetic irregularities”) using the setup proposed initially by Fermi. These scatterers are formed by the explosive magnetic energy release and propagate with the Alfvén speed along the irregular magnetic fields. They are large-scale local fluctuations (δB/B ≈ 1) randomly distributed inside the unstable magnetic topology and will here be called Alfvénic Scatterers (AS). We constructed a 3D grid on which a small fraction of randomly chosen grid points are acting as AS. In particular, we study how a large number of test particles evolves inside a collection of AS, analyzing the evolution of their energy distribution and their escape-time distribution. We use a well-established method to estimate the transport coefficients directly from the trajectories of the particles. Using the estimated transport coefficients and solving the Fokker-Planck equation numerically, we can recover the energy distribution of the particles. We have shown that the stochastic Fermi energization of mildly relativistic and relativistic plasma can heat and accelerate the tail of the ambient particle distribution as predicted by Parker & Tidman and Ramaty. The temperature of the hot plasma and the tail of the energetic particles depend on the mean free path (λsc) of the particles between the scatterers inside the energization volume.
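As a rough illustration of how transport coefficients can be estimated directly from test-particle trajectories, the standard finite-difference estimators are F(E) = ⟨ΔE⟩/Δt for the drift and D(E) = ⟨(ΔE)²⟩/(2Δt) for the diffusion, binned in energy. A hedged Python sketch (the array layout, log-binning, and sample threshold are assumptions, not the authors' code):

    import numpy as np

    # energies: (n_particles, n_times) positive particle energies sampled
    # at a fixed cadence dt; returns binned drift and diffusion estimates.
    def transport_coefficients(energies, dt, nbins=30):
        E = energies[:, :-1].ravel()
        dE = np.diff(energies, axis=1).ravel()
        bins = np.logspace(np.log10(E.min()), np.log10(E.max()), nbins + 1)
        idx = np.digitize(E, bins) - 1
        F = np.full(nbins, np.nan)   # drift     <dE>/dt
        D = np.full(nbins, np.nan)   # diffusion <dE^2>/(2 dt)
        for b in range(nbins):
            m = idx == b
            if m.sum() > 10:         # require enough samples per bin
                F[b] = dE[m].mean() / dt
                D[b] = (dE[m] ** 2).mean() / (2.0 * dt)
        centers = np.sqrt(bins[:-1] * bins[1:])
        return centers, F, D

Coefficients estimated this way can then be inserted into a numerical Fokker-Planck solver to check that the particle energy distribution is recovered, as the study does.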
Super-resolving random-Gaussian apodized photon sieve.
Sabatyan, Arash; Roshaninejad, Parisa
2012-09-10
A novel apodized photon sieve is presented in which a random dense Gaussian distribution modulates the pinhole density in each zone. The randomness of the distribution causes intrazone discontinuities. The dense Gaussian distribution also generates a substantial number of pinholes, producing a large degree of overlap between the holes in the few innermost zones of the photon sieve; thereby, clear zones are formed. The role of the discontinuities in the focusing properties of the photon sieve is examined as well. Analysis shows that the secondary maxima are evidently suppressed, the transmission is enormously increased, and the width of the central maximum is approximately unchanged in comparison to the dense Gaussian distribution. The theoretical results have been completely verified by experiment.
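A toy layout of a Gaussian-apodized sieve can convey the construction; all numbers below (wavelength, focal length, apodization width, the 1.53w pinhole diameter rule) are illustrative assumptions, not the paper's design:

    import numpy as np

    rng = np.random.default_rng(0)
    # Pinholes sit on transparent (odd) Fresnel zones; the count per zone
    # follows a Gaussian envelope over zone index, azimuths drawn at random.
    f, lam, sigma = 0.5, 633e-9, 12.0        # focal length (m), wavelength (m)
    holes = []                               # (x, y, diameter) of each pinhole
    for n in range(1, 80, 2):                # odd zones only
        r_n = np.sqrt(n * lam * f)           # zone radius
        w_n = lam * f / (2.0 * r_n)          # local zone width
        weight = np.exp(-0.5 * (n / sigma) ** 2)   # Gaussian apodization
        n_holes = max(1, int(weight * 2.0 * np.pi * r_n / (1.5 * w_n)))
        for t in rng.uniform(0.0, 2.0 * np.pi, n_holes):
            holes.append((r_n * np.cos(t), r_n * np.sin(t), 1.53 * w_n))

With a dense envelope, the innermost zones receive so many holes that they overlap into effectively clear zones, matching the behavior described above.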
Correlation analysis of fracture arrangement in space
NASA Astrophysics Data System (ADS)
Marrett, Randall; Gale, Julia F. W.; Gómez, Leonel A.; Laubach, Stephen E.
2018-03-01
We present new techniques that overcome limitations of standard approaches to documenting spatial arrangement. The new techniques directly quantify spatial arrangement by normalizing to expected values for randomly arranged fractures. The techniques differ in terms of computational intensity, robustness of results, ability to detect anti-correlation, and use of fracture size data. Variation of spatial arrangement across a broad range of length scales facilitates distinguishing clustered and periodic arrangements (opposite forms of organization) from random arrangements. Moreover, self-organized arrangements can be distinguished from arrangements due to extrinsic organization. Traditional techniques for analysis of fracture spacing are hamstrung because they account neither for the sequence of fracture spacings nor for possible coordination between fracture size and position, attributes accounted for by our methods. All of the new techniques reveal fractal clustering in a test case of veins, or cement-filled opening-mode fractures, in Pennsylvanian Marble Falls Limestone. The observed arrangement is readily distinguishable from random and periodic arrangements. Comparison of results that account for fracture size with results that ignore fracture size demonstrates that spatial arrangement is dominated by the sequence of fracture spacings, rather than coordination of fracture size with position. Fracture size and position are not completely independent in this example, however, because large fractures are more clustered than small fractures. Both spatial and size organization of veins here probably emerged from fracture interaction during growth. The new approaches described here, along with freely available software to implement the techniques, can be applied with effect to a wide range of structures, or indeed many other phenomena such as drilling response, where spatial heterogeneity is an issue.
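The normalization to a random arrangement can be sketched as a pair-count ratio along a scanline; this is a generic version of the idea, not the authors' published algorithm or software:

    import numpy as np

    # Observed pair count within separation h, divided by its mean for the
    # same number of fractures placed uniformly at random on [0, L].
    # Values > 1 indicate clustering; values < 1 indicate anti-correlation.
    def normalized_pair_count(positions, L, lags, n_random=200, seed=0):
        rng = np.random.default_rng(seed)
        p = np.asarray(positions, dtype=float)
        iu = np.triu_indices(len(p), k=1)
        def pairs_within(q, h):
            return (np.abs(q[:, None] - q[None, :])[iu] <= h).sum()
        out = []
        for h in lags:
            obs = pairs_within(p, h)
            ran = np.mean([pairs_within(rng.uniform(0.0, L, len(p)), h)
                           for _ in range(n_random)])
            out.append(obs / ran if ran > 0 else np.nan)
        return np.array(out)

Plotting this ratio against the lag h over several decades is what allows clustered, periodic, and random arrangements to be told apart.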
Bueno de Souza, Roberta Oliveira; Marcon, Liliane de Faria; Arruda, Alex Sandro Faria de; Pontes Junior, Francisco Luciano; Melo, Ruth Caldeira de
2018-06-01
The present meta-analysis aimed to examine evidence from randomized controlled trials to determine the effects of mat Pilates on measures of physical functional performance in the older population. A search was conducted in the MEDLINE/PubMed, Scopus, Scielo, and PEDro databases between February and March 2017. Only randomized controlled trials that were written in English, included subjects aged ≥60 yrs who used mat Pilates exercises, included a comparison (control) group, and reported performance-based measures of physical function (balance, flexibility, muscle strength, and cardiorespiratory fitness) were included. The methodological quality of the studies was analyzed according to the PEDro scale and the best-evidence synthesis. The meta-analysis was conducted with the Review Manager 5.3 software. The search retrieved 518 articles, nine of which fulfilled the inclusion criteria. High methodological quality was found in five of these studies. Meta-analysis indicated a large effect of mat Pilates on dynamic balance (standardized mean difference = 1.10, 95% confidence interval = 0.29-1.90), muscle strength (standardized mean difference = 1.13, 95% confidence interval = 0.30-1.96), flexibility (standardized mean difference = 1.22, 95% confidence interval = 0.39-2.04), and cardiorespiratory fitness (standardized mean difference = 1.48, 95% confidence interval = 0.42-2.54) of elderly subjects. There is evidence that mat Pilates improves dynamic balance, lower limb strength, hip and lower back flexibility, and cardiovascular endurance in elderly individuals. Furthermore, high-quality studies are necessary to clarify the effects of mat Pilates on other physical functional measurements among older adults.
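For reference, the standardized mean difference pooled in a meta-analysis of this kind is typically Hedges' g. A minimal sketch (the formulas are the standard ones; the example numbers are invented):

    import numpy as np

    # Hedges' g with an approximate 95% CI for a two-arm comparison.
    def hedges_g(m1, sd1, n1, m2, sd2, n2):
        sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        d = (m1 - m2) / sp                         # Cohen's d (pooled SD)
        J = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)    # small-sample correction
        g = J * d
        se = np.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2.0 * (n1 + n2)))
        return g, (g - 1.96 * se, g + 1.96 * se)

    print(hedges_g(12.0, 3.0, 20, 9.5, 3.2, 20))   # illustrative numbers only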
The Development of Global and Local Processing: A Comparison of Children to Adults
ERIC Educational Resources Information Center
Peterson, Eric; Peterson, Robin L.
2014-01-01
In light of the adult model of a hemispheric asymmetry of global and local processing, we compared children (M_age = 8.4 years) to adults in a global-local reaction time (RT) paradigm. Hierarchical designs (large shapes made of small shapes) were presented randomly to each visual field, and participants were instructed to identify…
Thomas C. Brown; David Kingsley; George L. Peterson; Nicholas E. Flores; Andrea Clarke; Andrej Birjulin
2008-01-01
We examined the reliability of a large set of paired comparison value judgments involving public goods, private goods, and sums of money. As respondents progressed through the random sequence of paired choices that each was given, their response time decreased and they became more consistent, apparently fine-tuning their responses, suggesting that respondents tend to...
Siegrist, Michael; Orlow, Pascale; Keller, Carmen
2008-01-01
To evaluate various formats for the communication of prenatal test results. In study 1 (N=400), female students completed a questionnaire assessing risk perception, affect, and perceived usefulness of prenatal test results. A randomized, 2 (risk level; low, high) x 4 (format; ratio with numerator 1, ratio with denominator 1000, Paling Perspective Scale, pictograms) design was used. Study 2 (N=200) employed a 2 (risk level; low, high) x 2 (format; Paling Perspective Scale, risk comparisons in numerical format) design. In study 1, the Paling Perspective Scale resulted in a higher level of perceived risk across different risk levels compared with the other formats. Furthermore, participants in the low-risk group perceived the test results as less risky compared with participants in the high-risk group (P < 0.001) when the Paling Perspective Scale was used. No significant differences between low and high risks were observed for the other 3 formats. In study 2, the Paling Perspective Scale evoked higher levels of perceived risks relative to the numerical presentation of risk comparisons. For both formats, we found that participants confronted with a high risk perceived test results as more risky compared with participants confronted with a low risk. The Paling Perspective Scale resulted in a higher level of perceived risk compared with the other formats. This effect must be taken into account when choosing a graphical or numerical format for risk communication.
Tortuosity of lightning return stroke channels
NASA Technical Reports Server (NTRS)
Levine, D. M.; Gilson, B.
1984-01-01
Data obtained from photographs of lightning are presented on the tortuosity of return stroke channels. The data were obtained by making piecewise linear fits to the channels, and recording the cartesian coordinates of the ends of each linear segment. The mean change between ends of the segments was nearly zero in the horizontal direction and was about eight meters in the vertical direction. Histograms of these changes are presented. These data were used to create model lightning channels and to predict the electric fields radiated during return strokes. This was done using a computer-generated random walk in which linear segments were placed end-to-end to form a piecewise linear representation of the channel. The computer selected random numbers for the ends of the segments assuming a normal distribution with the measured statistics. Once the channels were simulated, the electric fields radiated during a return stroke were predicted using a transmission line model on each segment. It was found that realistic channels are obtained with this procedure, but only if the model includes two scales of tortuosity: fine-scale irregularities corresponding to the local channel tortuosity which are superimposed on large-scale horizontal drifts. The two scales of tortuosity are also necessary to obtain agreement between the electric fields computed mathematically from the simulated channels and the electric fields radiated from real return strokes. Without large-scale drifts, the computed electric fields do not have the undulations characteristic of the data.
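A toy version of the two-scale channel generator reads as follows; the vertical mean of about 8 m is taken from the abstract, while the jitter and drift scales are invented placeholders:

    import numpy as np

    rng = np.random.default_rng(1)
    # Piecewise-linear model channel: fine-scale segment-to-segment jitter
    # superimposed on a slowly accumulating large-scale horizontal drift.
    n_seg = 400
    dz = rng.normal(8.0, 3.0, n_seg)              # vertical steps, mean ~8 m
    fine = rng.normal(0.0, 4.0, (n_seg, 2))       # fine-scale horizontal jitter
    drift = np.cumsum(rng.normal(0.0, 0.5, (n_seg, 2)), axis=0)  # slow drift
    pts = np.zeros((n_seg + 1, 3))
    pts[1:, :2] = np.cumsum(fine + drift, axis=0)
    pts[1:, 2] = np.cumsum(dz)                    # channel rises segment by segment

Each row of pts is then a segment endpoint on which a transmission-line current model can be evaluated.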
Energetic Consistency and Coupling of the Mean and Covariance Dynamics
NASA Technical Reports Server (NTRS)
Cohn, Stephen E.
2008-01-01
The dynamical state of the ocean and atmosphere is taken to be a large dimensional random vector in a range of large-scale computational applications, including data assimilation, ensemble prediction, sensitivity analysis, and predictability studies. In each of these applications, numerical evolution of the covariance matrix of the random state plays a central role, because this matrix is used to quantify uncertainty in the state of the dynamical system. Since atmospheric and ocean dynamics are nonlinear, there is no closed evolution equation for the covariance matrix, nor for the mean state. Therefore approximate evolution equations must be used. This article studies theoretical properties of the evolution equations for the mean state and covariance matrix that arise in the second-moment closure approximation (third- and higher-order moment discard). This approximation was introduced by EPSTEIN [1969] in an early effort to introduce a stochastic element into deterministic weather forecasting, and was studied further by FLEMING [1971a,b], EPSTEIN and PITCHER [1972], and PITCHER [1977], also in the context of atmospheric predictability. It has since fallen into disuse, with a simpler one being used in current large-scale applications. The theoretical results of this article make a case that this approximation should be reconsidered for use in large-scale applications, however, because the second moment closure equations possess a property of energetic consistency that the approximate equations now in common use do not possess. A number of properties of solutions of the second-moment closure equations that result from this energetic consistency will be established.
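The flavor of second-moment closure is easiest to see on a scalar toy model rather than the full atmospheric equations. For dx/dt = -a·x - b·x², taking expectations and discarding third central moments couples the mean m and variance P as dm/dt = -a·m - b·(m² + P) and dP/dt = -2·a·P - 4·b·m·P. A sketch (the toy system and all parameters are illustrative assumptions):

    import numpy as np

    # Second-moment closure for dx/dt = -a*x - b*x**2 (third central
    # moments discarded): the mean equation feels the variance through
    # E[x^2] = m^2 + P, and the variance decays at a mean-dependent rate.
    def step(m, P, a, b, dt):
        dm = -a * m - b * (m**2 + P)
        dP = -2.0 * a * P - 4.0 * b * m * P
        return m + dt * dm, P + dt * dP

    m, P = 1.0, 0.25
    for _ in range(1000):
        m, P = step(m, P, a=1.0, b=0.3, dt=0.01)
    print(m, P)

The b·P term in the mean equation is the nonlinear mean-covariance coupling that the article argues carries the energetic consistency absent from simpler approximations.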
Toward a Framework for Learner Segmentation
ERIC Educational Resources Information Center
Azarnoush, Bahareh; Bekki, Jennifer M.; Runger, George C.; Bernstein, Bianca L.; Atkinson, Robert K.
2013-01-01
Effectively grouping learners in an online environment is a highly useful task. However, datasets used in this task often have large numbers of attributes of disparate types and different scales, which traditional clustering approaches cannot handle effectively. Here, a unique dissimilarity measure based on the random forest, which handles the…
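The random-forest dissimilarity alluded to above is commonly built from tree proximities: the fraction of trees in which two samples land in the same leaf. A hedged scikit-learn sketch (a supervised forest on synthetic data stands in for the paper's construction):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Proximity handles mixed-scale attributes naturally, since trees split
    # on raw values without any distance metric or normalization.
    X, y = make_classification(n_samples=200, n_features=10, random_state=0)
    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
    leaves = rf.apply(X)                    # (n_samples, n_trees) leaf ids
    prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
    dissimilarity = 1.0 - prox              # feed to clustering, e.g. k-medoids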
Inquiry in the Physical Geology Classroom: Supporting Students' Conceptual Model Development
ERIC Educational Resources Information Center
Miller, Heather R.; McNeal, Karen S.; Herbert, Bruce E.
2010-01-01
This study characterizes the impact of an inquiry-based learning (IBL) module versus a traditionally structured laboratory exercise. Laboratory sections were randomized into experimental and control groups. The experimental group was taught using IBL pedagogical techniques and included manipulation of large-scale data-sets, use of multiple…
Partial Identification of Treatment Effects: Applications to Generalizability
ERIC Educational Resources Information Center
Chan, Wendy
2016-01-01
Results from large-scale evaluation studies form the foundation of evidence-based policy. The randomized experiment is often considered the gold standard among study designs because the causal impact of a treatment or intervention can be assessed without threats of confounding from external variables. Policy-makers have become increasingly…
Sequence analysis reveals genomic factors affecting EST-SSR primer performance and polymorphism
USDA-ARS?s Scientific Manuscript database
Search for simple sequence repeat (SSR) motifs and design of flanking primers in expressed sequence tag (EST) sequences can be easily done at a large scale using bioinformatics programs. However, failed amplification and/or detection, along with lack of polymorphism, is often seen among randomly sel...
61 FR 41385 - Notice of Government-Owned Inventions; Availability for Licensing
Federal Register 2010, 2011, 2012, 2013, 2014
1996-08-08
... PRESSURE VESSEL; filed 24 February 1995; patented 21 November 1995. // Patent 5,468,356: LARGE SCALE... // Patent 5,477,482: ULTRA HIGH DENSITY, NON-VOLATILE FERROMAGNETIC RANDOM ACCESS MEMORY; filed 1 October 1993... // Patent 5,483,017: HIGH TEMPERATURE THERMOSETS AND CERAMICS DERIVED FROM LINEAR CARBORANE-(SILOXANE OR...
Shape Memory Alloys for Vibration Isolation and Damping of Large-Scale Space Structures
2010-08-04
Figure 24: Comparison of martensitic SMA with steel in sine upsweep. Section 3.2.2.4: Dwell test comparison with sine sweep results (International Conference on Experimental Vibration Analysis for Civil Engineering Structures, EVACES, Porto, Portugal, 2007). ... a unique jump in amplitude during a sine sweep if sufficient pre-stretch is applied. These results were significant, but investigation of more...
Hoare, Jacqueline; Carey, Paul; Joska, John A; Carrara, Henri; Sorsdahl, Katherine; Stein, Dan J
2014-02-01
Depression can be a chronic and impairing illness in people with human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome. Large randomized studies of newer selective serotonin reuptake inhibitors such as escitalopram in the treatment of depression in HIV, examining comparative treatment efficacy and safety, have yet to be done in HIV-positive patients. This was a fixed-dose, placebo-controlled, randomized, double-blind study to investigate the efficacy of escitalopram in HIV-seropositive subjects with Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, major depressive disorder. One hundred two participants were randomly assigned to either 10 mg of escitalopram or placebo for 6 weeks. An analysis of covariance of the completers found that there was no advantage for escitalopram over placebo on the Montgomery-Asberg Depression Rating Scale (p = 0.93). Sixty-two percent responded to escitalopram and 59% responded to placebo on the Clinical Global Impression Scale. Given the relatively high placebo response, future trials in this area need to be selective in participant recruitment and to be adequately powered.
Data for Room Fire Model Comparisons
Peacock, Richard D.; Davis, Sanford; Babrauskas, Vytenis
1991-01-01
With the development of models to predict fire growth and spread in buildings, there has been a concomitant evolution in the measurement and analysis of experimental data in real-scale fires. This report presents the types of analyses that can be used to examine large-scale room fire test data to prepare the data for comparison with zone-based fire models. Five sets of experimental data which can be used to test the limits of a typical two-zone fire model are detailed. A standard set of nomenclature describing the geometry of the building and the quantities measured in each experiment is presented. Availability of ancillary data (such as smaller-scale test results) is included. These descriptions, along with the data (available in computer-readable form) should allow comparisons between the experiment and model predictions. The base of experimental data ranges in complexity from one room tests with individual furniture items to a series of tests conducted in a multiple story hotel equipped with a zoned smoke control system. PMID:28184121
On the Feedback Phenomenon of an Impinging Jet
1979-09-01
the double-structured nature of turbulent flows: time-dependent quasi-ordered large-scale structures, and fine-scale random structures. Numerous ... downstream and upstream waves. Nomenclature: d, nozzle diameter; f, frequency (Hz); G_i(f), normalized power spectrum of i(t); G_ij(f), normalized cross-spectrum between i(t) and j(t). ... (1975) suggested that these quasi-ordered structures are deterministic, in the sense that they have a characteristic shape, size and convection motion
Gao, Shuang; Liu, Gang; Chen, Qilai; Xue, Wuhong; Yang, Huali; Shang, Jie; Chen, Bin; Zeng, Fei; Song, Cheng; Pan, Feng; Li, Run-Wei
2018-02-21
Resistive random access memory (RRAM) with inherent logic-in-memory capability exhibits great potential for constructing beyond-von-Neumann computers. Unipolar RRAM is particularly promising because its single-polarity operation enables large-scale crossbar logic-in-memory circuits with the highest integration density and simpler peripheral control circuits. However, unipolar RRAM usually exhibits poor switching uniformity because of the random activation of conducting filaments, and consequently it cannot meet the strict uniformity requirement for logic-in-memory applications. In this contribution, a new methodology that constructs cone-shaped conducting filaments by using a chemically active metal cathode is proposed to improve unipolar switching uniformity. Such a metal cathode reacts spontaneously with the oxide switching layer to form an interfacial layer, which together with the metal cathode itself acts as a load resistor to prevent the overgrowth of conducting filaments and thus make them more cone-like. In this way, the rupture of conducting filaments is strictly limited to the tip region, making their residual parts favorable locations for subsequent filament growth and thus suppressing random regeneration. As such, a novel "one switch + one unipolar RRAM cell" hybrid structure is capable of realizing all 16 Boolean logic functions for large-scale logic-in-memory circuits.
2014-01-01
Background Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Results Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation, allowing on-the-fly calculation of the normal distribution for any candidate sequence composition. Conclusion The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification. PMID:24418292
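A direct (non-precomputed) version of the MFE-based P-value can be sketched as follows, assuming the ViennaRNA Python bindings (module RNA) are available; the paper's contribution is precisely to replace the expensive shuffling loop below with interpolated, pre-calculated normal distributions per sequence composition:

    import random
    import numpy as np
    from scipy.stats import norm
    import RNA   # ViennaRNA Python bindings (assumed installed)

    # P-value of a candidate's MFE against the (approximately normal) MFE
    # distribution of sequences with the same nucleotide composition.
    def mfe_p_value(seq, n_shuffles=200, seed=0):
        rng = random.Random(seed)
        mfe = RNA.fold(seq)[1]                 # (structure, mfe) tuple
        null = []
        for _ in range(n_shuffles):
            s = list(seq)
            rng.shuffle(s)                     # preserves mononucleotide counts
            null.append(RNA.fold("".join(s))[1])
        mu, sd = np.mean(null), np.std(null, ddof=1)
        return norm.cdf((mfe - mu) / sd)       # P(random MFE <= candidate MFE)

Low P-values flag sequences whose folds are unusually stable for their composition, the hallmark of pre-miRNAs noted above.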
Large eddy simulations of compressible magnetohydrodynamic turbulence
NASA Astrophysics Data System (ADS)
Grete, Philipp
2017-02-01
Supersonic, magnetohydrodynamic (MHD) turbulence is thought to play an important role in many processes - especially in astrophysics, where detailed three-dimensional observations are scarce. Simulations can partially fill this gap and help to understand these processes. However, direct simulations with realistic parameters are often not feasible. Consequently, large eddy simulations (LES) have emerged as a viable alternative. In LES the overall complexity is reduced by simulating only large and intermediate scales directly. The smallest scales, usually referred to as subgrid-scales (SGS), are introduced to the simulation by means of an SGS model. Thus, the overall quality of an LES with respect to properly accounting for small-scale physics crucially depends on the quality of the SGS model. While there has been a lot of successful research on SGS models in the hydrodynamic regime for decades, SGS modeling in MHD is a rather recent topic, in particular, in the compressible regime. In this thesis, we derive and validate a new nonlinear MHD SGS model that explicitly takes compressibility effects into account. A filter is used to separate the large and intermediate scales, and it is thought to mimic finite resolution effects. In the derivation, we use a deconvolution approach on the filter kernel. With this approach, we are able to derive nonlinear closures for all SGS terms in MHD: the turbulent Reynolds and Maxwell stresses, and the turbulent electromotive force (EMF). We validate the new closures both a priori and a posteriori. In the a priori tests, we use high-resolution reference data of stationary, homogeneous, isotropic MHD turbulence to compare exact SGS quantities against predictions by the closures. The comparison includes, for example, correlations of turbulent fluxes, the average dissipative behavior, and alignment of SGS vectors such as the EMF. In order to quantify the performance of the new nonlinear closure, this comparison is conducted from the subsonic (sonic Mach number M s ≈ 0.2) to the highly supersonic (M s ≈ 20) regime, and against other SGS closures. The latter include established closures of eddy-viscosity and scale-similarity type. In all tests and over the entire parameter space, we find that the proposed closures are (significantly) closer to the reference data than the other closures. In the a posteriori tests, we perform large eddy simulations of decaying, supersonic MHD turbulence with initial M s ≈ 3. We implemented closures of all types, i.e. of eddy-viscosity, scale-similarity and nonlinear type, as an SGS model and evaluated their performance in comparison to simulations without a model (and at higher resolution). We find that the models need to be calculated on a scale larger than the grid scale, e.g. by an explicit filter, to have an influence on the dynamics at all. Furthermore, we show that only the proposed nonlinear closure improves higher-order statistics.
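As an illustration of the a priori testing methodology, the exact SGS stress of a filtered field can be compared pointwise with a closure prediction. A sketch for one component with a Gaussian filter and a scale-similarity-type closure (random fields stand in for the high-resolution MHD data, and compressible, mass-weighted filtering is omitted for brevity):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def bar(f, sigma=2.0):
        # Low-pass "resolution" filter on a periodic domain.
        return gaussian_filter(f, sigma, mode="wrap")

    rng = np.random.default_rng(0)
    u = rng.standard_normal((64, 64, 64))      # stand-in velocity components
    v = rng.standard_normal((64, 64, 64))
    # Exact SGS (turbulent) stress: tau = bar(u v) - bar(u) bar(v).
    tau_exact = bar(u * v) - bar(u) * bar(v)
    # Scale-similarity closure: repeat the filtering on the resolved field.
    tau_model = bar(bar(u) * bar(v)) - bar(bar(u)) * bar(bar(v))
    print(np.corrcoef(tau_exact.ravel(), tau_model.ravel())[0, 1])

The a priori correlation coefficient computed this way, across sonic Mach numbers and against different closure families, is the kind of comparison the thesis reports.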
Negative probability of random multiplier in turbulence
NASA Astrophysics Data System (ADS)
Bai, Xuan; Su, Weidong
2017-11-01
The random multiplicative process (RMP), which has been proposed for over 50 years, is a convenient phenomenological ansatz for the turbulence cascade. In the RMP, the fluctuation at a large scale is statistically mapped to the one at a small scale by the linear action of an independent random multiplier (RM). Simple as it is, the RMP is powerful enough that all of the known scaling laws can be included in this model. So far as we know, however, a direct extraction of the probability density function (PDF) of the RM has been absent, because the deconvolution involved is ill-posed. Nevertheless, with progress in the study of inverse problems, the situation can be changed. By using some new regularization techniques, we recover for the first time the PDFs of the RMs in some turbulent flows. All of the consistent results from various methods point to a striking observation: the PDFs can attain negative values in some intervals, which can also be justified by some properties of infinitely divisible distributions. Despite the conceptual unconventionality, the present study illustrates the implications of negative probability in turbulence in several aspects, with emphasis on its role in describing the interaction between fluctuations at different scales. This work is supported by the NSFC (No. 11221062 and No. 11521091).
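Since the multiplier acts multiplicatively, the PDFs of the log-variables are related by convolution, and recovering the multiplier PDF is a deconvolution problem. A toy Tikhonov-regularized version on a common uniform grid (the regularization stands in for the paper's more careful inverse-problem machinery); note that the recovered density is not constrained to be non-negative, which is exactly where negative values can appear:

    import numpy as np

    def deconvolve(p_small, p_large, alpha=1e-3):
        # p_* are discrete distributions on a shared uniform grid (sum to 1);
        # Tikhonov-regularized Fourier deconvolution recovers the multiplier.
        Ps, Pl = np.fft.fft(p_small), np.fft.fft(p_large)
        return np.real(np.fft.ifft(np.conj(Pl) * Ps / (np.abs(Pl) ** 2 + alpha)))

    # Toy check: build p_small by circular convolution, then recover p_mult.
    x = np.linspace(-5.0, 5.0, 512)
    g = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2)
    p_large = g(0.0, 1.0); p_large /= p_large.sum()
    p_mult = g(-0.5, 0.3); p_mult /= p_mult.sum()
    p_small = np.real(np.fft.ifft(np.fft.fft(p_mult) * np.fft.fft(p_large)))
    p_rec = deconvolve(p_small, p_large)   # ~ p_mult, small ripples possible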
Rascol, Olivier; Zesiewicz, Theresa; Chaudhuri, K Ray; Asgharnejad, Mahnaz; Surmann, Erwin; Dohin, Elisabeth; Nilius, Sigrid; Bauer, Lars
2016-07-01
Pain is a troublesome nonmotor symptom of Parkinson's disease (PD). This double-blind exploratory pilot study (NCT01744496) was the first to specifically investigate the effect of a dopamine agonist on PD-associated pain as primary outcome. Patients with advanced PD (ie, receiving levodopa) and at least moderate PD-associated chronic pain (≥3 months, ≥4 points on 11-point Likert pain scale) were randomized to rotigotine (optimal/maximum dose ≤16 mg/24h) or placebo and maintained for 12 weeks. Primary efficacy variable was change in pain severity (Likert pain scale) from baseline to end of maintenance. Secondary variables included percentage of responders (≥2-point Likert pain scale reduction), King's PD Pain Scale (KPPS) domains, and PD Questionnaire (PDQ-8). Statistical analyses were exploratory. Of 68 randomized patients, 60 (rotigotine, 30; placebo, 30) were evaluable for efficacy. A numerical improvement in pain was observed in favor of rotigotine (Likert pain scale: least-squares mean [95%CI] treatment difference, -0.76 [-1.87 to 0.34]; P = .172), and proportion of responders was 18/30 (60%) rotigotine vs 14/30 (47%) placebo. An ∼2-fold numerical improvement in KPPS domain "fluctuation-related pain" was observed with rotigotine vs placebo. Rotigotine improved PDQ-8 vs placebo (-8.01 [-15.56 to -0.46]; P = .038). These results suggest rotigotine may improve PD-associated pain; a large-scale confirmatory study is needed. © 2015, The American College of Clinical Pharmacology.
Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe
2016-07-01
We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. The National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value for occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS items: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, two-thirds of the test cohort was used, showing an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on the remaining third of the test cohort showed similar performance. Patients with a large artery occlusion on angiography and PASS ≥2 had a median NIHSS score of 17 (interquartile range=6), as opposed to a median NIHSS score of 6 (interquartile range=5) with PASS <2. The PASS scale performed as well as other scales predicting ELVO while being simpler. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
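Evaluating a short score like PASS against angiographic status reduces to AUC plus sensitivity/specificity at the chosen cutoff. A sketch on synthetic placeholder data (not the study cohort; variable names are invented):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    elvo = rng.integers(0, 2, 500)                  # 1 = large vessel occlusion
    # Invented 0-3 score loosely associated with ELVO status.
    score = np.clip(elvo * 2 + rng.integers(-1, 2, 500), 0, 3)
    print("AUC:", roc_auc_score(elvo, score))
    pred = score >= 2                               # the >=2 cutoff rule
    sens = (pred & (elvo == 1)).sum() / (elvo == 1).sum()
    spec = (~pred & (elvo == 0)).sum() / (elvo == 0).sum()
    print("sensitivity:", sens, "specificity:", spec)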
Wing, Rena R.; Tate, Deborah F.; Garcia, Katelyn R.; Bahnson, Judy; Lewis, Cora E.; Espeland, Mark A.
2017-01-01
Objective Weight gain occurs commonly in young adults and increases cardiovascular (CVD) risk. We previously reported that two self-regulation interventions reduced weight gain relative to control. Here we examine whether these interventions also benefit CVD risk factors. Methods SNAP (Study of Novel Approaches to Weight Gain Prevention) was a randomized trial in 2 academic settings (N=599; 18–35 years; body mass index 21–30 kg/m2) comparing two interventions (Self-Regulation with Small Changes; Self-Regulation with Large Changes) and Control. Small Changes taught participants to make daily small changes (approximately 100 calorie) in intake and activity. Large Changes taught participants to initially lose 5–10 pounds to buffer anticipated weight gains. CVD risk factors were assessed at baseline and 2 years in 471 participants. Results Although Large Changes was associated with more beneficial changes in glucose, insulin, and HOMA-IR than Control, these differences were not significant after adjusting for multiple comparisons or 2-year weight change. Comparison of participants grouped by percent weight change baseline to 2 years showed significant differences for several CVD risk factors, with no interaction with treatment condition. Conclusions Magnitude of weight change, rather than specific weight gain prevention interventions, was related to changes in CVD risk factors in young adults. PMID:28782918
Development of analog watch with minute repeater
NASA Astrophysics Data System (ADS)
Okigami, Tomio; Aoyama, Shigeru; Osa, Takashi; Igarashi, Kiyotaka; Ikegami, Tomomi
A complementary metal oxide semiconductor large-scale integration (LSI) circuit was developed for an electronic minute repeater. It is equipped with a synthetic struck-sound circuit that generates the natural struck sound required for a minute repeater. This circuit consists of an envelope curve drawing circuit, a frequency mixer, a polyphonic mixer, and a booster circuit built with analog circuit technology. The LSI is a single-chip microcomputer with motor drivers and input ports in addition to the synthetic struck-sound circuit, making it possible to build an electronic minute-repeater system at very low cost in comparison with the conventional type.
NASA Technical Reports Server (NTRS)
Dittmer, P. H.; Scherrer, P. H.; Wilcox, J. M.
1978-01-01
The large-scale solar velocity field has been measured over an aperture of radius 0.8 solar radii on 121 days between April and September, 1976. Measurements are made in the line Fe I 5123.730 A, employing a velocity subtraction technique similar to that of Severny et al. (1976). Comparisons of the amplitude and frequency of the five-minute resonant oscillation with the geomagnetic C9 index and magnetic sector boundaries show no evidence of any relationship between the oscillations and coronal holes or sector structure.
Impact of lateral boundary conditions on regional analyses
NASA Astrophysics Data System (ADS)
Chikhar, Kamel; Gauthier, Pierre
2017-04-01
Regional and global climate models are usually validated by comparison to derived observations or reanalyses. Using a model in data assimilation results in a direct comparison to observations, as the model produces its own analyses, which may reveal systematic errors. In this study, regional analyses over North America are produced based on the fifth-generation Canadian Regional Climate Model (CRCM5) combined with the variational data assimilation system of the Meteorological Service of Canada (MSC). CRCM5 is driven at its boundaries by global analyses from ERA-Interim or produced with the global configuration of the CRCM5. Assimilation cycles for the months of January and July 2011 revealed systematic errors in winter, seen as large values in the mean analysis increments. This bias is attributed to the coupling of the lateral boundary conditions of the regional model with the driving data, particularly over the northern boundary, where a rapidly changing large-scale circulation created significant cross-boundary flows. Increasing the time frequency of the lateral driving and applying large-scale spectral nudging significantly improved the circulation through the lateral boundaries, which translated into much better agreement with observations.
NASA Astrophysics Data System (ADS)
Zhu, Hongyu; Alam, Shadab; Croft, Rupert A. C.; Ho, Shirley; Giusarma, Elena
2017-10-01
Large redshift surveys of galaxies and clusters are providing the first opportunities to search for distortions in the observed pattern of large-scale structure due to such effects as gravitational redshift. We focus on non-linear scales and apply a quasi-Newtonian approach using N-body simulations to predict the small asymmetries in the cross-correlation function of two different galaxy populations. Following recent work by Bonvin et al., Zhao and Peacock and Kaiser on galaxy clusters, we include effects which enter at the same order as gravitational redshift: the transverse Doppler effect, light-cone effects, relativistic beaming, luminosity distance perturbation and wide-angle effects. We find that all these effects cause asymmetries in the cross-correlation functions. Quantifying these asymmetries, we find that the total effect is dominated by the gravitational redshift and luminosity distance perturbation at small and large scales, respectively. By adding additional subresolution modelling of galaxy structure to the large-scale structure information, we find that the signal is significantly increased, indicating that structure on the smallest scales is important and should be included. We report on comparison of our simulation results with measurements from the SDSS/BOSS galaxy redshift survey in a companion paper.
Drosg, B; Wirthensohn, T; Konrad, G; Hornbachner, D; Resch, C; Wäger, F; Loderer, C; Waltenberger, R; Kirchmayr, R; Braun, R
2008-01-01
A comparison of stillage treatment options for large-scale bioethanol plants was based on the data of an existing plant producing approximately 200,000 t/yr of bioethanol and 1,400,000 t/yr of stillage. Animal feed production--the state-of-the-art technology at the plant--was compared to anaerobic digestion. The latter was simulated in two different scenarios: digestion in small-scale biogas plants in the surrounding area versus digestion in a large-scale biogas plant at the bioethanol production site. Emphasis was placed on a holistic simulation balancing chemical parameters and calculating logistic algorithms to compare the efficiency of the stillage treatment solutions. For central anaerobic digestion different digestate handling solutions were considered because of the large amount of digestate. For land application a minimum of 36,000 ha of available agricultural area would be needed and 600,000 m(3) of storage volume. Secondly membrane purification of the digestate was investigated consisting of decanter, microfiltration, and reverse osmosis. As a third option aerobic wastewater treatment of the digestate was discussed. The final outcome was an economic evaluation of the three mentioned stillage treatment options, as a guide to stillage management for operators of large-scale bioethanol plants. Copyright IWA Publishing 2008.
NASA Astrophysics Data System (ADS)
Most, S.; Jia, N.; Bijeljic, B.; Nowak, W.
2016-12-01
Pre-asymptotic characteristics are almost ubiquitous when analyzing solute transport processes in porous media. These pre-asymptotic aspects are caused by spatial coherence in the velocity field and by its heterogeneity. For the Lagrangian perspective of particle displacements, the causes of pre-asymptotic, non-Fickian transport are skewed velocity distribution, statistical dependencies between subsequent increments of particle positions (memory) and dependence between the x, y and z-components of particle increments. Valid simulation frameworks should account for these factors. We propose a particle tracking random walk (PTRW) simulation technique that can use empirical pore-space velocity distributions as input, enforces memory between subsequent random walk steps, and considers cross dependence. Thus, it is able to simulate pre-asymptotic non-Fickian transport phenomena. Our PTRW framework contains an advection/dispersion term plus a diffusion term. The advection/dispersion term produces time-series of particle increments from the velocity CDFs. These time series are equipped with memory by enforcing that the CDF values of subsequent velocities change only slightly. The latter is achieved through a random walk on the axis of CDF values between 0 and 1. The virtual diffusion coefficient for that random walk is our only fitting parameter. Cross-dependence can be enforced by constraining the random walk to certain combinations of CDF values between the three velocity components in x, y and z. We will show that this modelling framework is capable of simulating non-Fickian transport by comparison with a pore-scale transport simulation and we analyze the approach to asymptotic behavior.
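A minimal sketch of such a PTRW step, with memory enforced by a reflected random walk on the CDF axis (the velocity sample, step counts, and sigma_u are invented placeholders; sigma_u plays the role of the single fitting parameter described above):

    import numpy as np

    rng = np.random.default_rng(0)
    v_samples = rng.lognormal(0.0, 1.0, 10000)   # stand-in pore velocity sample
    v_sorted = np.sort(v_samples)

    def velocity_from_cdf(u):
        # Inverse of the empirical velocity CDF.
        i = np.minimum((u * len(v_sorted)).astype(int), len(v_sorted) - 1)
        return v_sorted[i]

    n_particles, n_steps, dt, sigma_u = 2000, 500, 1.0, 0.05
    u = rng.uniform(0.0, 1.0, n_particles)       # CDF values carry the memory
    x = np.zeros(n_particles)
    for _ in range(n_steps):
        x += velocity_from_cdf(u) * dt           # advective displacement
        u = u + sigma_u * rng.standard_normal(n_particles)
        u = np.abs(u) % 2.0                      # reflect at u = 0
        u = np.where(u > 1.0, 2.0 - u, u)        # reflect at u = 1

Because each step changes the CDF value only slightly, a particle in a fast (or slow) part of the velocity distribution tends to stay fast (or slow), which is what produces the skewed, correlated, non-Fickian displacements.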
NASA Astrophysics Data System (ADS)
Takamasu, Kiyoshi; Takahashi, Satoru; Kawada, Hiroki; Ikota, Masami
2018-03-01
LER (Line Edge Roughness) and LWR (Line Width Roughness) of a semiconductor device are important measures of device performance. Conventionally, LER and LWR are evaluated from CD-SEM (Critical Dimension Scanning Electron Microscope) images. However, CD-SEM measurement suffers from large high-frequency random noise, and its resolution is not sufficiently high. Several techniques have been proposed to deal with the random noise of CD-SEM measurement; these methods require parameters for the model and processing to be set, and the correctness of these parameters must be verified using a reference metrology. We have already proposed a novel reference metrology using a FIB (Focused Ion Beam) process and a planar-TEM (Transmission Electron Microscope) method. In this study, we applied the proposed method to three new samples: an SAQP (Self-Aligned Quadruple Patterning) FinFET device, a conventional EUV (Extreme Ultraviolet Lithography) resist, and a new-material EUV resist. LWR and the PSD (Power Spectral Density) of LWR are calculated from the edge positions on planar-TEM images. We confirmed that LWR and the PSD of LWR can be measured with high accuracy, and we evaluated the differences among samples by the proposed method. Furthermore, comparisons with the PSD of the same sample measured by CD-SEM verify the validity of PSD and LWR measurement by CD-SEM.
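For reference, the PSD of line-width fluctuations is commonly estimated with a periodogram of the sampled widths, and by Parseval's theorem its integral recovers the LWR variance. A sketch with an approximate one-sided normalization (synthetic widths; the sampling step is an assumption):

    import numpy as np

    def lwr_psd(w, dy):
        # Periodogram of line-width fluctuations sampled every dy (nm).
        w = w - w.mean()
        n = len(w)
        psd = (dy / n) * np.abs(np.fft.rfft(w)) ** 2   # nm^2 per (1/nm)
        freq = np.fft.rfftfreq(n, d=dy)                # spatial frequency, 1/nm
        return freq, psd

    rng = np.random.default_rng(0)
    w = 20.0 + rng.standard_normal(512)      # synthetic widths, mean 20 nm
    freq, psd = lwr_psd(w, dy=2.0)
    df = freq[1] - freq[0]
    lwr_3sigma = 3.0 * np.sqrt(2.0 * psd[1:].sum() * df)   # ~ 3*std(w)

Comparing such PSD curves from planar-TEM and CD-SEM edge data is what exposes the CD-SEM high-frequency noise floor discussed above.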
Head Start’s Impact is Contingent on Alternative Type of Care in Comparison Group
Brooks-Gunn, Jeanne; Waldfogel, Jane
2014-01-01
Using data (n = 3,790 with 2,119 in the 3-year-old cohort and 1,671 in the 4-year-old cohort) from 353 Head Start centers in the Head Start Impact Study, the only large-scale randomized experiment in Head Start history, this paper examined the impact of Head Start on children’s cognitive and parent-reported social-behavioral outcomes through first grade contingent on the child care arrangements used by children who were randomly assigned to the control group (i.e., parental care, relative/non-relative care, another Head Start program, or other center-based care). A principal score matching approach was adopted to identify children assigned to Head Start who were similar to children in the control group with a specific care arrangement. Overall, the results showed that the effects of Head Start varied substantially contingent on the alternative child care arrangements. Compared to children in parental care and relative/non-relative care, Head Start participants generally had better cognitive and parent-reported behavioral development, with some benefits of Head Start persisting through first grade; in contrast, few differences were found between Head Start and other center-based care. The results have implications regarding the children for whom Head Start is most beneficial as well as how well Head Start compares to other center-based programs. PMID:25329552
Lee, Ellen E; Della Selva, Megan P; Liu, Anson; Himelhoch, Seth
2015-01-01
Given the significant disability, morbidity and mortality associated with depression, the promising recent trials of ketamine highlight a novel intervention. A meta-analysis was conducted to assess the efficacy of ketamine in comparison with placebo for the reduction of depressive symptoms in patients who meet criteria for a major depressive episode. Two electronic databases were searched in September 2013 for English-language studies that were randomized placebo-controlled trials of ketamine treatment for patients with major depressive disorder or bipolar depression and utilized a standardized rating scale. Studies including participants receiving electroconvulsive therapy and adolescent/child participants were excluded. Five studies were included in the quantitative meta-analysis. The quantitative meta-analysis showed that ketamine significantly reduced depressive symptoms. The overall effect size at day 1 was large and statistically significant with an overall standardized mean difference of 1.01 (95% confidence interval 0.69-1.34) (P<.001), with the effects sustained at 7 days postinfusion. The heterogeneity of the studies was low and not statistically significant, and the funnel plot showed no publication bias. The large and statistically significant effect of ketamine on depressive symptoms supports a promising, new and effective pharmacotherapy with rapid onset, high efficacy and good tolerability. Copyright © 2015. Published by Elsevier Inc.
Single-trabecula building block for large-scale finite element models of cancellous bone.
Dagan, D; Be'ery, M; Gefen, A
2004-07-01
Recent development of high-resolution imaging of cancellous bone allows finite element (FE) analysis of bone tissue stresses and strains in individual trabeculae. However, specimen-specific stress/strain analyses can include effects of anatomical variations and local damage that can bias the interpretation of the results from individual specimens with respect to large populations. This study developed a standard (generic) 'building-block' of a trabecula for large-scale FE models. Being parametric and based on statistics of dimensions of ovine trabeculae, this building block can be scaled for trabecular thickness and length and be used in commercial or custom-made FE codes to construct generic, large-scale FE models of bone, using less computer power than that currently required to reproduce the accurate micro-architecture of trabecular bone. Orthogonal lattices constructed with this building block, after it was scaled to trabeculae of the human proximal femur, provided apparent elastic moduli of approximately 150 MPa, in good agreement with experimental data for the stiffness of cancellous bone from this site. Likewise, lattices with thinner, osteoporotic-like trabeculae could predict a reduction of approximately 30% in the apparent elastic modulus, as reported in experimental studies of osteoporotic femora. Based on these comparisons, it is concluded that the single-trabecula element developed in the present study is well-suited for representing cancellous bone in large-scale generic FE simulations.
Massive superclusters as a probe of the nature and amplitude of primordial density fluctuations
NASA Technical Reports Server (NTRS)
Kaiser, N.; Davis, M.
1985-01-01
It is pointed out that correlation studies of galaxy positions have been widely used in the search for information about the large-scale matter distribution. The study of rare condensations on large scales provides an approach to extend the existing knowledge of large-scale structure into the weakly clustered regime. Shane (1975) provides a description of several apparent massive condensations within the Shane-Wirtanen catalog, taking into account the Serpens-Virgo cloud and the Corona cloud. In the present study, a description is given of a model for estimating the frequency of condensations which evolve from initially Gaussian fluctuations. This model is applied to the Corona cloud to estimate its 'rareness' and thereby estimate the rms density contrast on this mass scale. An attempt is made to find a conflict between the density fluctuations derived from the Corona cloud and independent constraints. A comparison is conducted of the estimate and the density fluctuations predicted to arise in a universe dominated by cold dark matter.
Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.; Le Doussal, Pierre
2014-01-01
Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic, problems in optimization theory, known as the "trust region subproblem" or "constrained least squares problem". When both terms in the cost function are random, this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. a reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime, N_tot remains of the order N and the cost function (energy) generically has two almost degenerate minima with Tracy-Widom (TW) statistics. In the second regime, the number of critical points is of order unity, with a finite probability of a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation, by finding both the rate function and the leading pre-exponential factor.
Assessing Low-Intensity Relationships in Complex Networks
Spitz, Andreas; Gimmler, Anna; Stoeck, Thorsten; Zweig, Katharina Anna; Horvát, Emőke-Ágnes
2016-01-01
Many large network data sets are noisy and contain links representing low-intensity relationships that are difficult to differentiate from random interactions. This is especially relevant for high-throughput data from systems biology and large-scale ecological data, but also for Web 2.0 data on human interactions. In these networks with missing and spurious links, it is possible to refine the data based on the principle of structural similarity, which assesses the shared neighborhood of two nodes. By using similarity measures to globally rank all possible links and choosing the top-ranked pairs, true links can be validated, missing links inferred, and spurious observations removed. While many similarity measures have been proposed to this end, there is no general consensus on which one to use. In this article, we first contribute a set of benchmarks for complex networks from three different settings (e-commerce, systems biology, and social networks) and thus enable a quantitative performance analysis of classic node similarity measures. Based on this, we then propose a new methodology for link assessment called z* that assesses the statistical significance of the number of common neighbors of two nodes by comparison with the expected value in a suitably chosen random graph model, and which consistently ranks among the top-performing algorithms across all benchmarks. In addition to a global ranking of links, we also use this method to identify the most similar neighbors of each single node in a local ranking, thereby showing the versatility of the method in two distinct scenarios and augmenting its applicability. Finally, we perform an exploratory analysis on an oceanographic plankton data set and find that the distribution of microbes follows similar biogeographic rules as those of macroorganisms, a result that rejects the global dispersal hypothesis for microbes. PMID:27096435
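A toy sketch of the idea behind such a score, using an Erdos-Renyi null model for simplicity (the published z* relies on a more refined random graph model that preserves degrees):

    import numpy as np

    def common_neighbor_z(A):
        # z-score of each pair's common-neighbor count against a G(n, p) null,
        # under which the count is Binomial(n - 2, p^2)
        n = A.shape[0]
        p = A.sum() / (n * (n - 1))                # empirical edge density
        mu = (n - 2) * p**2
        sigma = np.sqrt((n - 2) * p**2 * (1 - p**2))
        C = (A @ A).astype(float)                  # common-neighbor counts
        np.fill_diagonal(C, 0.0)
        return (C - mu) / sigma

    rng = np.random.default_rng(0)
    A = (rng.random((60, 60)) < 0.1).astype(int)
    A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops
    Z = common_neighbor_z(A)
    i, j = np.unravel_index(np.argmax(Z), Z.shape)
    print(f"top-ranked pair: ({i}, {j}), z = {Z[i, j]:.2f}")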
Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.
2017-12-01
Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and the uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contaminant mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full-scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data. Performance of the statistical model is illustrated through comparisons of generated realizations with the 'true' numerical simulations. Finally, we demonstrate how these realizations can be used to determine statistically optimal locations for further interrogation of the subsurface.
Measures of Agreement Between Many Raters for Ordinal Classifications
Nelson, Kerrie P.; Edwards, Don
2015-01-01
Screening and diagnostic procedures often require a physician's subjective interpretation of a patient's test result using an ordered categorical scale to define the patient's disease severity. Due to wide variability observed between physicians' ratings, many large-scale studies have been conducted to quantify agreement between multiple experts' ordinal classifications in common diagnostic procedures such as mammography. However, very few statistical approaches are available to assess agreement in these large-scale settings. Existing summary measures of agreement rely on extensions of Cohen's kappa [1-5]. These are prone to prevalence and marginal distribution issues, become increasingly complex for more than three experts, or are not easily implemented. Here we propose a model-based approach to assess agreement in large-scale studies based upon a framework of ordinal generalized linear mixed models. A summary measure of agreement is proposed for multiple experts assessing the same sample of patients' test results according to an ordered categorical scale. This measure avoids some of the key flaws associated with Cohen's kappa and its extensions. Simulation studies are conducted to demonstrate the validity of the approach with comparison to commonly used agreement measures. The proposed methods are easily implemented using the software package R and are applied to two large-scale cancer agreement studies. PMID:26095449
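The prevalence problem mentioned above is easy to demonstrate; in the invented 2x2 tables below, both rater pairs agree on 90% of cases, yet Cohen's kappa differs sharply once the categories are skewed:

    import numpy as np

    def cohens_kappa(table):
        table = np.asarray(table, dtype=float)
        n = table.sum()
        po = np.trace(table) / n                          # observed agreement
        pe = (table.sum(0) * table.sum(1)).sum() / n**2   # chance agreement
        return (po - pe) / (1 - pe)

    balanced = [[45, 5], [5, 45]]   # 90% agreement, balanced prevalence
    skewed = [[85, 5], [5, 5]]      # 90% agreement, skewed prevalence
    print(cohens_kappa(balanced))   # 0.80
    print(cohens_kappa(skewed))     # about 0.44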
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix are reorganized into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. The second-best program, which also used iteration on data, took approximately three and five times longer for the animal and test-day models, respectively, than the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
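For orientation, here is a minimal preconditioned conjugate gradient solver in Python with a Jacobi (diagonal) preconditioner. The paper's contribution, a three-step iteration-on-data matrix-vector product that avoids forming the mixed model equations explicitly, is abstracted here into an explicit matrix, so this is a generic sketch rather than the authors' program.

    import numpy as np

    def pcg(A, b, tol=1e-10, max_iter=1000):
        M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p                    # the product the paper reorganizes
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = M_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    rng = np.random.default_rng(0)
    B = rng.standard_normal((50, 50))
    A = B @ B.T + 50.0 * np.eye(50)       # symmetric positive definite test matrix
    b = rng.standard_normal(50)
    x = pcg(A, b)
    print(np.allclose(A @ x, b, atol=1e-6))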
Klein, Brennan J; Li, Zhi; Durgin, Frank H
2016-04-01
What is the natural reference frame for seeing large-scale spatial scenes in locomotor action space? Prior studies indicate an asymmetric angular expansion in perceived direction in large-scale environments: Angular elevation relative to the horizon is perceptually exaggerated by a factor of 1.5, whereas azimuthal direction is exaggerated by a factor of about 1.25. Here participants made angular and spatial judgments when upright or on their sides to dissociate egocentric from allocentric reference frames. In Experiment 1, it was found that body orientation did not affect the magnitude of the up-down exaggeration of direction, suggesting that the relevant orientation reference frame for this directional bias is allocentric rather than egocentric. In Experiment 2, the comparison of large-scale horizontal and vertical extents was somewhat affected by viewer orientation, but only to the extent necessitated by the classic (5%) horizontal-vertical illusion (HVI) that is known to be retinotopic. Large-scale vertical extents continued to appear much larger than horizontal ground extents when observers lay sideways. When the visual world was reoriented in Experiment 3, the bias remained tied to the ground-based allocentric reference frame. The allocentric HVI is quantitatively consistent with differential angular exaggerations previously measured for elevation and azimuth in locomotor space. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Klein, Brennan J.; Li, Zhi; Durgin, Frank H.
2015-01-01
What is the natural reference frame for seeing large-scale spatial scenes in locomotor action space? Prior studies indicate an asymmetric angular expansion in perceived direction in large-scale environments: Angular elevation relative to the horizon is perceptually exaggerated by a factor of 1.5, whereas azimuthal direction is exaggerated by a factor of about 1.25. Here participants made angular and spatial judgments when upright or on their sides in order to dissociate egocentric from allocentric reference frames. In Experiment 1 it was found that body orientation did not affect the magnitude of the up-down exaggeration of direction, suggesting that the relevant orientation reference frame for this directional bias is allocentric rather than egocentric. In Experiment 2, the comparison of large-scale horizontal and vertical extents was somewhat affected by viewer orientation, but only to the extent necessitated by the classic (5%) horizontal-vertical illusion (HVI) that is known to be retinotopic. Large-scale vertical extents continued to appear much larger than horizontal ground extents when observers lay sideways. When the visual world was reoriented in Experiment 3, the bias remained tied to the ground-based allocentric reference frame. The allocentric HVI is quantitatively consistent with differential angular exaggerations previously measured for elevation and azimuth in locomotor space. PMID:26594884
Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model
ERIC Educational Resources Information Center
Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin
2006-01-01
This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random-effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…
Promoting Handwashing Behavior: The Effects of Large-scale Community and School-level Interventions.
Galiani, Sebastian; Gertler, Paul; Ajzenman, Nicolas; Orsola-Vidal, Alexandra
2016-12-01
This paper analyzes a randomized experiment that uses novel strategies to promote handwashing with soap at critical points in time in Peru. It evaluates a large-scale comprehensive initiative that involved both community and school activities in addition to communication campaigns. The analysis indicates that the initiative was successful in reaching the target audience and in increasing the treated population's knowledge about appropriate handwashing behavior. These improvements translated into higher self-reported and observed handwashing with soap at critical junctures. However, no significant improvements in the health of children under the age of 5 years were observed. Copyright © 2015 John Wiley & Sons, Ltd.
Universality of accelerating change
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Shlesinger, Michael F.
2018-03-01
On large time scales the progress of human technology follows an exponential growth trend that is termed accelerating change. The exponential growth trend is commonly considered to be the amalgamated effect of consecutive technology revolutions - where the progress carried in by each technology revolution follows an S-curve, and where the aging of each technology revolution drives humanity to push for the next technology revolution. Thus, as a collective, mankind is the 'intelligent designer' of accelerating change. In this paper we establish that the exponential growth trend - and only this trend - emerges universally, on large time scales, from systems that combine together two elements: randomness and amalgamation. Hence, the universal generation of accelerating change can be attained by systems with no 'intelligent designer'.
Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer.
Hines, Michael; Kumar, Sameer; Schürmann, Felix
2011-01-01
For neural network simulations on parallel machines, interprocessor spike communication can be a significant portion of the total simulation time. The performance of several spike exchange methods using a Blue Gene/P (BG/P) supercomputer has been tested with 8K-128K cores using randomly connected networks of up to 32M cells with 1k connections per cell and 4M cells with 10k connections per cell, i.e., on the order of 4·10^10 connections (K is 1024, M is 1024^2, and k is 1000). The spike exchange methods used are the standard Message Passing Interface (MPI) collective, MPI_Allgather, and several variants of the non-blocking Multisend method, either implemented via non-blocking MPI_Isend or exploiting the very low overhead direct memory access (DMA) communication available on the BG/P. In all cases, the worst performing method was that using MPI_Isend, due to the high overhead of initiating a spike communication. The two best performing methods had similarly low overhead for initiating spike communication: the persistent Multisend method using the Record-Replay feature of the Deep Computing Messaging Framework DCMF_Multicast, and a two-phase Multisend in which a DCMF_Multicast is used to first send to a subset of phase-one destination cores, which then pass it on to their subset of phase-two destination cores. Departure from ideal scaling for the Multisend methods is almost completely due to load imbalance caused by the large variation in the number of cells that fire on each processor in the interval between synchronizations. Spike exchange time itself is negligible, since transmission overlaps with computation and is handled by a DMA controller. We conclude that ideal performance scaling will ultimately be limited by the imbalance in incoming processor spikes between synchronization intervals. Thus, counterintuitively, maximizing load balance requires that the distribution of cells on processors not reflect the neural net architecture but be random, so that sets of cells that burst-fire together are on different processors, with their targets spread over as large a set of processors as possible.
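A minimal sketch of the baseline MPI_Allgather exchange pattern, written with mpi4py; the spike lists are placeholders, and the Multisend/DCMF variants depend on Blue Gene/P-specific APIs that have no portable equivalent and are not shown.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    def fired_this_interval(rank):
        # placeholder: pretend a few local cells spiked on this rank
        return [rank * 1000 + i for i in range(rank % 3)]

    local_spikes = fired_this_interval(rank)
    all_spikes = comm.allgather(local_spikes)   # list of per-rank spike lists
    incoming = [s for r, spikes in enumerate(all_spikes)
                if r != rank for s in spikes]
    # each rank now filters `incoming` against its local synapse targets

Run under an MPI launcher, e.g. mpiexec -n 4 python spikes.py (a hypothetical script name); the collective is invoked once per synchronization interval.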
A 3D model of polarized dust emission in the Milky Way
NASA Astrophysics Data System (ADS)
Martínez-Solaeche, Ginés; Karakci, Ata; Delabrouille, Jacques
2018-05-01
We present a three-dimensional model of polarized galactic dust emission that takes into account the variation of the dust density, spectral index, and temperature along the line of sight, and contains randomly generated small-scale polarization fluctuations. The model is constrained to match the observed dust emission on large scales and, on smaller scales, extrapolations of observed intensity and polarization power spectra. It can be used to investigate the impact of plausible complexity of the polarized dust foreground emission on the analysis and interpretation of future cosmic microwave background polarization observations.
Deployment dynamics and control of large-scale flexible solar array system with deployable mast
NASA Astrophysics Data System (ADS)
Li, Hai-Quan; Liu, Xiao-Feng; Guo, Shao-Jing; Cai, Guo-Ping
2016-10-01
In this paper, the deployment dynamics and control of a large-scale flexible solar array system with a deployable mast are investigated. The adopted solar array system is introduced first, including the system configuration, the deployable mast, and solar arrays with several mechanisms. The dynamic equation of the solar array system is then established by the Jourdain velocity variation principle, and a method for dynamics with topology changes is introduced. In addition, a PD controller with disturbance estimation is designed to eliminate the drift of the spacecraft main body. Finally, the validity of the dynamic model is verified through a comparison with the ADAMS software, and the deployment process and dynamic behavior of the system are studied in detail. Simulation results indicate that the proposed model effectively describes the deployment dynamics of the large-scale flexible solar arrays and that the proposed controller is practical for eliminating the drift of the spacecraft main body.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pugh, C.E.; Bass, B.R.; Keeney, J.A.
This report contains 40 papers that were presented at the Joint IAEA/CSNI Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing held at the Pollard Auditorium, Oak Ridge, Tennessee, during the week of October 26-29, 1992. The papers are printed in the order of their presentation in each session and describe recent large-scale fracture (brittle and/or ductile) experiments, analyses of these experiments, and comparisons between predictions and experimental results. The goal of the meeting was to allow international experts to examine the fracture behavior of various materials and structures under conditions relevant to nuclear reactor components and operating environments. The emphasis was on the ability of various fracture models and analysis methods to predict the wide range of experimental data now available. The individual papers have been cataloged separately.
NASA Astrophysics Data System (ADS)
Zhou, Chen; Lei, Yong; Li, Bofeng; An, Jiachun; Zhu, Peng; Jiang, Chunhua; Zhao, Zhengyu; Zhang, Yuannong; Ni, Binbin; Wang, Zemin; Zhou, Xuhua
2015-12-01
Global Positioning System (GPS) computerized ionosphere tomography (CIT) and ionospheric sky wave ground backscatter radar are both capable of measuring large-scale, two-dimensional (2-D) distributions of ionospheric electron density (IED). Here we report the spatial and temporal electron density results obtained by GPS CIT and backscatter ionogram (BSI) inversion for three individual experiments. Both the GPS CIT and BSI inversion techniques demonstrate the capability, and the mutual consistency, of reconstructing large-scale IED distributions. To validate the results, electron density profiles obtained from GPS CIT and BSI inversion are quantitatively compared to vertical ionosonde data, which clearly shows that both methods yield accurate ionospheric electron density information and thereby provide reliable approaches to ionospheric sounding. Our study can improve the current understanding of the capabilities and limitations of these two methods for large-scale IED reconstruction.
A fast ergodic algorithm for generating ensembles of equilateral random polygons
NASA Astrophysics Data System (ADS)
Varela, R.; Hinson, K.; Arsuaga, J.; Diao, Y.
2009-03-01
Knotted structures are commonly found in circular DNA and along the backbone of certain proteins. In order to properly estimate properties of these three-dimensional structures it is often necessary to generate large ensembles of simulated closed chains (i.e. polygons) of equal edge lengths (such polygons are called equilateral random polygons). However, finding efficient algorithms that properly sample the space of equilateral random polygons is a difficult problem. Currently there are no proven algorithms that generate equilateral random polygons according to their theoretical distribution. In this paper we propose a method that generates equilateral random polygons in a 'step-wise uniform' way. We prove that this method is ergodic in the sense that any given equilateral random polygon can be generated by this method, and we show that the time needed to generate an equilateral random polygon of length n is linear in n. These two properties make this algorithm a significant improvement over existing generation methods. Detailed numerical comparisons of our algorithm with other widely used algorithms are provided.
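The authors' 'step-wise uniform' generator is not reproduced here; as a hedged illustration of Markov chain sampling of equilateral polygons, the following Python sketch implements the classic crankshaft move, which likewise preserves closure and unit edge lengths at every step.

    import numpy as np

    def regular_polygon(n):
        # planar regular n-gon with unit edge lengths (a valid starting state)
        theta = 2.0 * np.pi * np.arange(n) / n
        r = 0.5 / np.sin(np.pi / n)            # circumradius giving unit edges
        return np.column_stack([r * np.cos(theta), r * np.sin(theta), np.zeros(n)])

    def crankshaft_step(P, rng):
        # rotate the sub-chain strictly between two random vertices about the
        # axis through them; closure and all edge lengths are preserved
        n = len(P)
        i, j = np.sort(rng.choice(n, size=2, replace=False))
        if j - i < 2:
            return P                           # no vertices strictly between pivots
        axis = P[j] - P[i]
        axis = axis / np.linalg.norm(axis)
        a = rng.uniform(0.0, 2.0 * np.pi)
        K = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        R = np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)  # Rodrigues formula
        Q = P.copy()
        Q[i + 1:j] = P[i] + (P[i + 1:j] - P[i]) @ R.T
        return Q

    rng = np.random.default_rng(0)
    P = regular_polygon(50)
    for _ in range(10000):
        P = crankshaft_step(P, rng)
    edges = np.linalg.norm(np.roll(P, -1, axis=0) - P, axis=1)
    print(edges.min(), edges.max())            # all edge lengths remain ~1.0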
Skarin, Anna; Alam, Moudud
2017-06-01
Worldwide there is a rush toward wind power development and its associated infrastructure. In Fennoscandia, large-scale wind farms comprising several hundred windmills are currently being built in important grazing ranges used for Sámi reindeer husbandry. In this study, reindeer habitat use was assessed using reindeer fecal pellet group counts in relation to two relatively small wind farms, with 8 and 10 turbines, respectively. In 2009, 1,315 15-m² plots were established, and pellet groups were counted and cleared from the plots. This was repeated once a year in May, during preconstruction, construction, and operation of the wind farms, covering 6 years (2009-2014) of reindeer habitat use in the area. We modeled the presence/absence of any pellets in a plot at both the local (wind farm site) and regional (reindeer calving to autumn range) scale with a hierarchical logistic regression, where spatial correlation was accounted for via random effects, using vegetation type and the interaction between distance to wind turbine and time period as predictor variables. Our results revealed an absolute reduction in pellet groups of 66% and 86% around the two wind farms, respectively, at the local scale, and of 61% at the regional scale, during the operation phase compared to the preconstruction phase. At the regional scale, habitat use declined close to the turbines in the same comparison. However, at the local scale, we observed increased habitat use close to the wind turbines at one of the wind farms during the operation phase. This may be explained by continued use of an important migration route close to the wind farm. The reduced use at the regional scale nevertheless suggests that there may be an overall avoidance of both wind farms during operation, but further studies of reindeer movement and behavior are needed to gain a better understanding of the mechanisms behind this suggested avoidance.
Nawata, Kengo
2014-06-01
Despite the widespread popular belief in Japan about a relationship between personality and ABO blood type, this association has not been empirically substantiated. This study provides more robust evidence that there is no relationship between blood type and personality, through a secondary analysis of large-scale survey data. Recent data (after 2000) were collected using large-scale random sampling from over 10,000 people in total from both Japan and the US. Effect sizes were calculated. Japanese datasets from 2004 (N = 2,878-2,938) and 2005 (N = 3,618-3,692) as well as one dataset from the US in 2004 (N = 3,037-3,092) were used. In all the datasets, 65 of 68 items yielded non-significant differences between blood groups. Effect sizes (eta^2) were less than .003. This means that blood type explained less than 0.3% of the total variance in personality. These results show the non-relevance of blood type for personality.
SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.
Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P
2013-12-01
Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.
Seeded hot dark matter models with inflation
NASA Technical Reports Server (NTRS)
Gratsias, John; Scherrer, Robert J.; Steigman, Gary; Villumsen, Jens V.
1993-01-01
We examine massive neutrino (hot dark matter) models for large-scale structure in which the density perturbations are produced by randomly distributed relic seeds and by inflation. Power spectra, streaming velocities, and the Sachs-Wolfe quadrupole fluctuation are derived for this model. We find that the pure seeded hot dark matter model without inflation produces Sachs-Wolfe fluctuations far smaller than those seen by COBE. With the addition of inflationary perturbations, fluctuations consistent with COBE can be produced. The COBE results set the normalization of the inflationary component, which determines the large-scale (about 50/h Mpc) streaming velocities. The normalization of the seed power spectrum is a free parameter, which can be adjusted to obtain the desired fluctuations on small scales. The power spectra produced are very similar to those seen in mixed hot and cold dark matter models.
Large-scale Graph Computation on Just a PC
2014-05-01
We compared the performance of GraphChi-DB to Neo4j using their Java API; a comparison to an RDBMS (MySQL) is discussed separately. The C++ implementation of GraphChi has circa 8,000 lines of code. We have also developed a Java version of GraphChi, but it does not…
NASA Technical Reports Server (NTRS)
Baker, B.; Brown, H.
1974-01-01
Advantages of the large time-bandwidth product of optical processing are presented. Experiments were performed to study the feasibility of using optical spectral analysis for the detection of flaws in structural elements excited by random noise. Photographic and electronic methods for comparison of complex spectra were developed. Limitations were explored, and suggestions for further work are offered.
Diffusion in random networks: Asymptotic properties, and numerical and engineering approximations
NASA Astrophysics Data System (ADS)
Padrino, Juan C.; Zhang, Duan Z.
2016-11-01
The ensemble phase averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of a set of pockets connected by tortuous channels. Inside a channel, we assume that fluid transport is governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pore mass density. The so-called dual porosity model is found to be equivalent to the leading order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider one-dimensional mass diffusion in a semi-infinite domain, whose solution is sought numerically. Because of the time required to establish the linear concentration profile inside a channel, for early times the similarity variable is x·t^(-1/4) rather than x·t^(-1/2) as in the traditional theory. This early-time sub-diffusive similarity can be explained by random walk theory through the network. In addition, by applying concepts of fractional calculus, we show that, for small time, the governing equation reduces to a fractional diffusion equation with known solution. We recast this solution in terms of special functions that are easier to compute. Comparison of the numerical and exact solutions shows excellent agreement.
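One way to see the exponent (a consistency check under assumed symbols, not an equation quoted from the paper): a Caputo time-fractional diffusion equation of order alpha has similarity variable x·t^(-alpha/2), so the reported x·t^(-1/4) corresponds to alpha = 1/2. In LaTeX, with rho the pore mass density and D an effective diffusivity:

    \[
      \frac{\partial^{\alpha}\rho}{\partial t^{\alpha}}
        = D\,\frac{\partial^{2}\rho}{\partial x^{2}},
      \qquad
      \rho(x,t) = f(\eta), \quad \eta = \frac{x}{t^{\alpha/2}},
      \qquad
      \alpha = \tfrac{1}{2} \;\Longrightarrow\; \eta = x\,t^{-1/4}.
    \]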
Large-Scale Hybrid Motor Testing. Chapter 10
NASA Technical Reports Server (NTRS)
Story, George
2006-01-01
Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. There have been many suitcase-sized portable test stands assembled for demonstrations of hybrids. They show the safety of hybrid rockets to audiences. These small show motors and small laboratory-scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations; however, the questions that are always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has it been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity of conducting large-scale hybrid rocket motor tests to validate the burn rate from small motors to application size has been documented in several places. Comparison of small-scale hybrid data to that of larger-scale data indicates that the fuel burn rate goes down with increasing port size, even at the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this occurs would make a great paper, study, or thesis, it is not thoroughly understood at this time. Potential causes include the fact that, since hybrid combustion is boundary layer driven, the larger port sizes reduce the interaction (radiation, mixing, and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability, and scaling concepts that went into the development of those large motors.
Reproduction and optical analysis of Morpho-inspired polymeric nanostructures
NASA Astrophysics Data System (ADS)
Tippets, Cary A.; Fu, Yulan; Jackson, Anne-Martine; Donev, Eugenii U.; Lopez, Rene
2016-06-01
The brilliant blue coloration of the Morpho rhetenor butterfly originates from complex nanostructures found on the surface of its wings. The Morpho butterfly exhibits strong short-wavelength reflection and a unique two-lobe optical signature in the incident (θ) and reflected (ϕ) angular space. Here, we report the large-area fabrication of a Morpho-like structure and its reproduction in perfluoropolyether. Reflection comparisons of periodic and quasi-random 'polymer butterfly' nanostructures show similar normal-incidence spectra but different angular θ-ϕ dependence. The periodic sample shows strong specular reflection and simple diffraction. However, the quasi-random sample produces a two-lobe angular reflection pattern with minimal specular reflection, approximating the real butterfly's optical behavior. Finite-difference time-domain simulations confirm that this pattern results from the quasi-random periodicity and highlights the significance of the inherent randomness in the Morpho's photonic structure.
NASA Technical Reports Server (NTRS)
Chen, Fei; Yates, David; LeMone, Margaret
2001-01-01
To understand the effects of land-surface heterogeneity and the interactions between the land surface and the planetary boundary layer at different scales, we develop a multiscale data set. This data set, based on the Cooperative Atmosphere-Surface Exchange Study (CASES97) observations, includes atmospheric, surface, and sub-surface observations obtained from a dense observation network covering a large region on the order of 100 km. We use this data set to drive three land-surface models (LSMs) to generate multi-scale (with three resolutions of 1, 5, and 10 kilometers) gridded surface heat flux maps for the CASES area. Upon validating these flux maps with measurements from surface stations and aircraft, we utilize them to investigate several approaches for estimating the area-integrated surface heat flux for the 71 km × 74 km CASES97 domain, which is crucial for land surface model development/validation and area water and energy budget studies. This research is aimed at understanding the relative contribution of random turbulence versus organized mesoscale circulations to the area-integrated surface flux at the scale of 100 kilometers, and identifying the most important effective parameters for characterizing the subgrid-scale variability for large-scale atmosphere-hydrology models.
Comparing SMAP to Macro-scale and Hyper-resolution Land Surface Models over Continental U. S.
NASA Astrophysics Data System (ADS)
Pan, Ming; Cai, Xitian; Chaney, Nathaniel; Wood, Eric
2016-04-01
SMAP sensors collect moisture information in top soil at the spatial resolution of ~40 km (radiometer) and ~1 to 3 km (radar, before its failure in July 2015). Such information is extremely valuable for understanding various terrestrial hydrologic processes and their implications on human life. At the same time, soil moisture is a joint consequence of numerous physical processes (precipitation, temperature, radiation, topography, crop/vegetation dynamics, soil properties, etc.) that happen at a wide range of scales from tens of kilometers down to tens of meters. Therefore, a full and thorough analysis/exploration of SMAP data products calls for investigations at multiple spatial scales - from regional, to catchment, and to field scales. Here we first compare the SMAP retrievals to the Variable Infiltration Capacity (VIC) macro-scale land surface model simulations over the continental U. S. region at 3 km resolution. The forcing inputs to the model are merged/downscaled from a suite of best available data products including the NLDAS-2 forcing, Stage IV and Stage II precipitation, GOES Surface and Insolation Products, and fine elevation data. The near real time VIC simulation is intended to provide a source of large scale comparisons at the active sensor resolution. Beyond the VIC model scale, we perform comparisons at 30 m resolution against the recently developed HydroBloks hyper-resolution land surface model over several densely gauged USDA experimental watersheds. Comparisons are also made against in-situ point-scale observations from various SMAP Cal/Val and field campaign sites.
Duncan, Larissa G; Cohn, Michael A; Chao, Maria T; Cook, Joseph G; Riccobono, Jane; Bardacke, Nancy
2017-05-12
Childbirth fear is linked with lower labor pain tolerance and worse postpartum adjustment. Empirically validated childbirth preparation options are lacking for pregnant women facing this problem. Mindfulness approaches, now widely disseminated, can alleviate symptoms of both chronic and acute pain and improve psychological adjustment, suggesting potential benefit when applied to childbirth education. This study, the Prenatal Education About Reducing Labor Stress (PEARLS) study, is a randomized controlled trial (RCT; n = 30) of a short, time-intensive, 2.5-day mindfulness-based childbirth preparation course offered as a weekend workshop, the Mind in Labor (MIL): Working with Pain in Childbirth, based on Mindfulness-Based Childbirth and Parenting (MBCP) education. First-time mothers in the late 3rd trimester of pregnancy were randomized to attend either the MIL course or a standard childbirth preparation course with no mind-body focus. Participants completed self-report assessments pre-intervention, post-intervention, and post-birth, and medical record data were collected. In a demographically diverse sample, this small RCT demonstrated that mindfulness-based childbirth education improved women's childbirth-related appraisals and psychological functioning in comparison to standard childbirth education. MIL program participants showed greater childbirth self-efficacy and mindful body awareness (but no changes in dispositional mindfulness), lower post-course depression symptoms that were maintained through postpartum follow-up, and a trend toward a lower rate of opioid analgesia use in labor. They did not, however, retrospectively report lower perceived labor pain or use epidural analgesia less frequently than controls. This study suggests that mindfulness training carefully tailored to address fear and pain of childbirth may lead to important maternal mental health benefits, including improvements in childbirth-related appraisals and the prevention of postpartum depression symptoms. There is also some indication that MIL participants may use mindfulness coping in lieu of systemic opioid pain medication. A large-scale RCT that captures real-time pain perceptions during labor and length of labor is warranted to provide a more definitive test of these effects. The ClinicalTrials.gov identifier for the PEARLS study is NCT02327559. The study was retrospectively registered on June 23, 2014.
Implementation intentions and colorectal screening: a randomized trial in safety-net clinics.
Greiner, K Allen; Daley, Christine M; Epp, Aaron; James, Aimee; Yeh, Hung-Wen; Geana, Mugur; Born, Wendi; Engelman, Kimberly K; Shellhorn, Jeremy; Hester, Christina M; LeMaster, Joseph; Buckles, Daniel C; Ellerbeck, Edward F
2014-12-01
Low-income and racial/ethnic minority populations experience disproportionate colorectal cancer (CRC) burden and poorer survival. Novel behavioral strategies are needed to improve screening rates in these groups. The study aimed to test a theoretically based "implementation intentions" intervention for improving CRC screening among unscreened adults in urban safety-net clinics. Randomized controlled trial. Adults (N=470) aged ≥50 years, due for CRC screening, from urban safety-net clinics were recruited. The intervention (conducted in 2009-2011) was delivered via touchscreen computers that tailored informational messages to decisional stage and screening barriers. The computer then randomized participants to generic health information on diet and exercise (Comparison group) or "implementation intentions" questions and planning (Experimental group) specific to the CRC screening test chosen (fecal immunochemical test or colonoscopy). The primary study outcome was completion of CRC screening at 26 weeks based on test reports (analysis conducted in 2012-2013). The study population had a mean age of 57 years and was 42% non-Hispanic African American, 28% non-Hispanic white, and 27% Hispanic. Those receiving the implementation intentions-based intervention had higher odds (AOR=1.83, 95% CI=1.23, 2.73) of completing CRC screening than the Comparison group. Those with higher self-efficacy for screening (AOR=1.57, 95% CI=1.03, 2.39), history of asthma (AOR=2.20, 95% CI=1.26, 3.84), no history of diabetes (AOR=1.86, 95% CI=1.21, 2.86), and reporting they had never heard that "cutting on cancer" makes it spread (AOR=1.78, 95% CI=1.16, 2.72) were more likely to complete CRC screening. The results of this study suggest that programs incorporating an implementation intentions approach can contribute to successful completion of CRC screening even among very low-income and diverse primary care populations. Future initiatives to reduce CRC incidence and mortality disparities may be able to employ implementation intentions in large-scale efforts to encourage screening and prevention behaviors. Copyright © 2014. Published by Elsevier Inc.
Three-Year Evaluation of a Large Scale Early Grade French Immersion Program: The Ottawa Study
ERIC Educational Resources Information Center
Barik, Henri; Swain, Marrill
1975-01-01
The school performance of pupils in grades K-2 of the French immersion program in operation in Ottawa public schools is evaluated in comparison with that of pupils in the regular English program. (Author/RM)
Similarities between principal components of protein dynamics and random diffusion
NASA Astrophysics Data System (ADS)
Hess, Berk
2000-12-01
Principal component analysis, also called essential dynamics, is a powerful tool for finding global, correlated motions in atomic simulations of macromolecules. It has become an established technique for analyzing molecular dynamics simulations of proteins. The first few principal components of simulations of large proteins often resemble cosines. We derive the principal components for high-dimensional random diffusion, which are almost perfect cosines. This resemblance between protein simulations and noise implies that for many proteins the time scales of current simulations are too short to obtain convergence of collective motions.
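This resemblance is easy to reproduce. A minimal Python sketch (an assumed setup, not the paper's simulations) applies principal component analysis to a high-dimensional random walk and compares the projection onto the first principal component with a half-period cosine:

    import numpy as np

    rng = np.random.default_rng(1)
    T, D = 5000, 100                        # time steps, dimensions
    traj = np.cumsum(rng.standard_normal((T, D)), axis=0)   # random walk

    X = traj - traj.mean(axis=0)            # center, as in essential dynamics
    cov = X.T @ X / T
    w, V = np.linalg.eigh(cov)              # eigenpairs in ascending order
    pc1 = X @ V[:, -1]                      # projection on the largest mode

    t = np.arange(T)
    cosine = np.cos(np.pi * (t + 0.5) / T)  # half-period cosine
    corr = np.corrcoef(pc1, cosine)[0, 1]
    print(f"|correlation| with cosine: {abs(corr):.3f}")    # typically > 0.95

The k-th principal component is likewise expected to resemble a cosine with k/2 periods, which is the signature of unconverged sampling discussed above.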
Power generation in random diode arrays
NASA Astrophysics Data System (ADS)
Shvydka, Diana; Karpov, V. G.
2005-03-01
We discuss nonlinear disordered systems, random diode arrays (RDAs), which can represent such objects as large-area photovoltaics and ion channels of biological membranes. Our numerical modeling has revealed several interesting properties of RDAs. In particular, the geometrical distribution of nonuniformities across an RDA has only a minor effect on its integral characteristics, which are determined by the RDA parameter statistics. In the meantime, the dispersion of integral characteristics vs. system size exhibits a nontrivial scaling dependence. Our theoretical interpretation here remains limited and is based on the picture of eddy currents flowing through weak diodes in the RDA.
Do Interim Assessments Reduce the Race and SES Achievement Gaps?
ERIC Educational Resources Information Center
Konstantopoulos, Spyros; Li, Wei; Miller, Shazia R.; van der Ploeg, Arie
2017-01-01
The authors examined differential effects of interim assessments on minority and low socioeconomic status students' achievement in Grades K-6. They conducted a large-scale cluster randomized experiment in 2009-2010 to evaluate the impact of Indiana's policy initiative introducing interim assessments statewide. The authors used 2-level models to…
What Have Researchers Learned from Project STAR?
ERIC Educational Resources Information Center
Schanzenbach, Diane Whitmore
2007-01-01
Project STAR (Student/Teacher Achievement Ratio) was a large-scale randomized trial of reduced class sizes in kindergarten through the third grade. Because of the scope of the experiment, it has been used in many policy discussions. For example, the California statewide class-size-reduction policy was justified, in part, by the successes of…
Designing Large-Scale Multisite and Cluster-Randomized Studies of Professional Development
ERIC Educational Resources Information Center
Kelcey, Ben; Spybrook, Jessaca; Phelps, Geoffrey; Jones, Nathan; Zhang, Jiaqi
2017-01-01
We develop a theoretical and empirical basis for the design of teacher professional development studies. We build on previous work by (a) developing estimates of intraclass correlation coefficients for teacher outcomes using two- and three-level data structures, (b) developing estimates of the variance explained by covariates, and (c) modifying…
Process and Learning Outcomes from Remotely-Operated, Simulated, and Hands-on Student Laboratories
ERIC Educational Resources Information Center
Corter, James E.; Esche, Sven K.; Chassapis, Constantin; Ma, Jing; Nickerson, Jeffrey V.
2011-01-01
A large-scale, multi-year, randomized study compared learning activities and outcomes for hands-on, remotely-operated, and simulation-based educational laboratories in an undergraduate engineering course. Students (N = 458) worked in small-group lab teams to perform two experiments involving stress on a cantilever beam. Each team conducted the…
ERIC Educational Resources Information Center
Vaughan, Angela L.; Lalonde, Trent L.; Jenkins-Guarnieri, Michael A.
2014-01-01
Many researchers assessing the efficacy of educational programs face challenges due to issues with non-randomization and the likelihood of dependence between nested subjects. The purpose of the study was to demonstrate a rigorous research methodology using a hierarchical propensity score matching method that can be utilized in contexts where…
Replicating Experimental Impact Estimates Using a Regression Discontinuity Approach. NCEE 2012-4025
ERIC Educational Resources Information Center
Gleason, Philip M.; Resch, Alexandra M.; Berk, Jillian A.
2012-01-01
This NCEE Technical Methods Paper compares the estimated impacts of an educational intervention using experimental and regression discontinuity (RD) study designs. The analysis used data from two large-scale randomized controlled trials--the Education Technology Evaluation and the Teach for America Study--to provide evidence on the performance of…