Sample records for random average process

  1. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
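    Illustrative Python sketch (not from the record) of the equivalent-ensemble idea: a long record is segmented into equal sample records and the scatter of their means is compared with the overall time average. The segment count and the synthetic signal are assumptions.

      import numpy as np

      def equivalent_ensemble_means(x, n_records):
          """Split a long record x into equal, non-overlapping sample records
          and return the per-record (ensemble-like) means."""
          usable = len(x) - len(x) % n_records
          records = x[:usable].reshape(n_records, -1)
          return records.mean(axis=1)

      rng = np.random.default_rng(0)
      x = rng.normal(size=200_000)          # stand-in for a turbulent velocity signal

      means = equivalent_ensemble_means(x, n_records=50)
      time_avg = x.mean()                   # time average over the full record

      # Weak stationarity is suggested when the record-to-record means scatter
      # no more than expected from sampling error alone.
      print("spread of equivalent-ensemble means:", means.std())
      print("time average vs ensemble average  :", time_avg, means.mean())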

  2. Random variable transformation for generalized stochastic radiative transfer in finite participating slab media

    NASA Astrophysics Data System (ADS)

    El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.

    2015-10-01

    The stochastic radiative transfer problem is studied in a finite, participating, planar medium with continuously fluctuating properties. The problem is considered for specularly and diffusely reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to obtain the complete average of the solution functions, which are represented by the probability density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation is applied to the input stochastic process (the extinction function of the medium). This linear transformation makes it possible to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to obtain a closed form for the solution as a function of x and L. The solution is then used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to derive complete analytical averages for physical quantities of interest, namely the reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the average partial heat fluxes for the generalized problem with an internal radiation source are obtained and represented graphically.

  3. An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System

    NASA Astrophysics Data System (ADS)

    Helmy*, Tarek; Fatai, Anifowose; Sallam, El-Sayed

    PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency-intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have their associated drawbacks. An efficient alternative algorithm based on a randomized selection policy has been proposed, demonstrated, confirmed to be efficient and fair on average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version relative to the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the lowest AWT and ATT on average among the non-preemptive scheduling algorithms implemented in this paper.
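    Illustrative Python sketch (not from the record) showing how AWT and ATT are computed for a non-preemptive first-come-first-served order versus a randomized selection order; the burst-time distribution and workload are invented.

      import random

      def schedule_metrics(bursts, order):
          """Non-preemptive schedule: run jobs in the given order,
          return (average waiting time, average turnaround time)."""
          t, wait, turn = 0, [], []
          for i in order:
              wait.append(t)        # job i waits until the CPU is free
              t += bursts[i]
              turn.append(t)        # completion time = turnaround (arrival at t=0)
          n = len(bursts)
          return sum(wait) / n, sum(turn) / n

      random.seed(1)
      bursts = [random.randint(1, 20) for _ in range(100)]   # assumed job burst times

      fcfs = list(range(len(bursts)))          # first-come first-served order
      rand_order = fcfs[:]
      random.shuffle(rand_order)               # randomized selection policy

      print("FCFS       AWT/ATT:", schedule_metrics(bursts, fcfs))
      print("Randomized AWT/ATT:", schedule_metrics(bursts, rand_order))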

  4. Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.

    NASA Technical Reports Server (NTRS)

    Larsen, Curtis E.

    1988-01-01

    A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment, E(R^m), is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values for E(R^m) for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.

  5. Studies in astronomical time series analysis: Modeling random processes in the time domain

    NASA Technical Reports Server (NTRS)

    Scargle, J. D.

    1979-01-01

    Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
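    Illustrative Python sketch (not from the record): an AR(2) model is estimated from data via the Yule-Walker equations and then expanded into its moving-average (pulse) form through the impulse response. The model order and the simulated data are assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      # Simulated AR(2) data as a stand-in: x_t = 0.75 x_{t-1} - 0.5 x_{t-2} + e_t
      n = 5000
      x = np.zeros(n)
      e = rng.normal(size=n)
      for t in range(2, n):
          x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + e[t]

      # Yule-Walker estimate of the AR(2) coefficients
      r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(3)])  # autocovariances
      R = np.array([[r[0], r[1]], [r[1], r[0]]])
      phi = np.linalg.solve(R, r[1:])

      # Expand the fitted AR into MA (pulse) weights via its impulse response
      n_ma = 20
      psi = np.zeros(n_ma)
      psi[0] = 1.0
      for t in range(1, n_ma):
          psi[t] = sum(phi[j] * psi[t - 1 - j] for j in range(2) if t - 1 - j >= 0)

      print("estimated AR coefficients:", phi)     # close to [0.75, -0.5]
      print("first MA (pulse) weights :", psi[:5])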

  6. A stochastic model for stationary dynamics of prices in real estate markets. A case of random intensity for Poisson moments of prices changes

    NASA Astrophysics Data System (ADS)

    Rusakov, Oleg; Laskin, Michael

    2017-06-01

    We consider a stochastic model of price changes in real estate markets. We suppose that in a book of prices the changes occur at the jump points of a Poisson process with a random intensity, i.e. the moments of change follow a random process of Cox process type. We calculate cumulative mathematical expectations and variances for the random intensity of this point process. In the case where the process of random intensity is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept the hypothesis of linear growth for the estimates of both the cumulative average and the cumulative variance, for both the input and output prices recorded in the book of prices.

  7. Heat currents in electronic junctions driven by telegraph noise

    NASA Astrophysics Data System (ADS)

    Entin-Wohlman, O.; Chowdhury, D.; Aharony, A.; Dattagupta, S.

    2017-11-01

    The energy and charge fluxes carried by electrons in a two-terminal junction subjected to a random telegraph noise, produced by a single electronic defect, are analyzed. The telegraph processes are imitated by the action of a stochastic electric field that acts on the electrons in the junction. Upon averaging over all random events of the telegraph process, it is found that this electric field supplies, on the average, energy to the electronic reservoirs, which is distributed unequally between them: the stronger the coupling of a reservoir to the junction, the more energy it gains. Thus the noisy environment can lead to a temperature gradient across an unbiased junction.

  8. Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.

    2012-08-01

    We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
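    Illustrative Python sketch (not from the record) of the interleaved-benchmarking arithmetic on synthetic decay data, using the standard randomized-benchmarking model A p^m + B and the single-qubit conversion r = (1 - p_int/p_ref)(d - 1)/d; all numbers are invented.

      import numpy as np
      from scipy.optimize import curve_fit

      def decay(m, A, p, B):
          return A * p**m + B

      lengths = np.array([2, 4, 8, 16, 32, 64, 128, 256])
      rng = np.random.default_rng(3)

      # Synthetic survival probabilities for reference and interleaved sequences
      p_ref_true, p_int_true = 0.995, 0.990
      y_ref = decay(lengths, 0.5, p_ref_true, 0.5) + rng.normal(0, 0.003, lengths.size)
      y_int = decay(lengths, 0.5, p_int_true, 0.5) + rng.normal(0, 0.003, lengths.size)

      (_, p_ref, _), _ = curve_fit(decay, lengths, y_ref, p0=[0.5, 0.99, 0.5])
      (_, p_int, _), _ = curve_fit(decay, lengths, y_int, p0=[0.5, 0.99, 0.5])

      d = 2                                          # single qubit
      r_gate = (1 - p_int / p_ref) * (d - 1) / d     # estimated error of the interleaved gate
      print("estimated gate error:", r_gate)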

  9. Value stream mapping of the Pap test processing procedure: a lean approach to improve quality and efficiency.

    PubMed

    Michael, Claire W; Naik, Kalyani; McVicker, Michael

    2013-05-01

    We developed a value stream map (VSM) of the Papanicolaou test procedure to identify opportunities to reduce waste and errors, created a new VSM, and implemented a new process emphasizing Lean tools. Preimplementation data revealed the following: (1) processing time (PT) for 1,140 samples averaged 54 hours; (2) 27 accessioning errors were detected on review of 357 random requisitions (7.6%); (3) 5 of the 20,060 tests had labeling errors that had gone undetected in the processing stage. Four were detected later during specimen processing but 1 reached the reporting stage. Postimplementation data were as follows: (1) PT for 1,355 samples averaged 31 hours; (2) 17 accessioning errors were detected on review of 385 random requisitions (4.4%); and (3) no labeling errors were undetected. Our results demonstrate that implementation of Lean methods, such as first-in first-out processes and minimizing batch size by staff actively participating in the improvement process, allows for higher quality, greater patient safety, and improved efficiency.

  10. Are randomly grown graphs really random?

    PubMed

    Callaway, D S; Hopcroft, J E; Kleinberg, J M; Newman, M E; Strogatz, S H

    2001-10-01

    We analyze a minimal model of a growing network. At each time step, a new vertex is added; then, with probability delta, two vertices are chosen uniformly at random and joined by an undirected edge. This process is repeated for t time steps. In the limit of large t, the resulting graph displays surprisingly rich characteristics. In particular, a giant component emerges in an infinite-order phase transition at delta=1/8. At the transition, the average component size jumps discontinuously but remains finite. In contrast, a static random graph with the same degree distribution exhibits a second-order phase transition at delta=1/4, and the average component size diverges there. These dramatic differences between grown and static random graphs stem from a positive correlation between the degrees of connected vertices in the grown graph: older vertices tend to have higher degree, and to link with other high-degree vertices, merely by virtue of their age. We conclude that grown graphs, however randomly they are constructed, are fundamentally different from their static random graph counterparts.
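    Illustrative Python sketch (not from the record) of the growth rule described above; the component-size summary printed here is a crude per-component mean, not the susceptibility-style average used in the paper.

      import random
      import networkx as nx

      def grow_graph(t, delta, seed=0):
          """At each step add a vertex; with probability delta join two
          uniformly chosen vertices by an undirected edge."""
          random.seed(seed)
          g = nx.Graph()
          for i in range(t):
              g.add_node(i)
              if i >= 1 and random.random() < delta:
                  u, v = random.sample(range(i + 1), 2)
                  g.add_edge(u, v)
          return g

      for delta in (0.1, 0.125, 0.2, 0.3):
          g = grow_graph(t=20000, delta=delta)
          sizes = [len(c) for c in nx.connected_components(g)]
          print(f"delta={delta:.3f}  largest component={max(sizes)}  "
                f"mean component size={sum(sizes) / len(sizes):.2f}")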

  11. Averaging of phase noise in PSK signals by an opto-electrical feed-forward circuit

    NASA Astrophysics Data System (ADS)

    Inoue, K.; Ohta, M.

    2013-10-01

    This paper proposes an opto-electrical feed-forward circuit that reduces phase noise in binary PSK signals by averaging the noise. Random and independent phase noise is averaged over several bit slots by externally modulating a phase-fluctuating PSK signal with a feed-forward signal obtained from signal processing of the outputs of delay interferometers. The simulation results demonstrate a reduction in the phase noise.

  12. A high speed implementation of the random decrement algorithm

    NASA Technical Reports Server (NTRS)

    Kiraly, L. J.

    1982-01-01

    The algorithm is useful for measuring net system damping levels in stochastic processes and for the development of equivalent linearized system response models. The algorithm works by summing together all subrecords which occur after a predefined threshold level is crossed. The random decrement signature is normally developed by scanning stored data and adding subrecords together. The high speed implementation of the random decrement algorithm exploits the digital character of sampled data and uses fixed record lengths of 2^n samples to greatly speed up the process. The contribution of each data point to the random decrement signature was calculated only once and in the same sequence as the data were taken. A hardware implementation of the algorithm using random logic is diagrammed, and the process is shown to be limited only by the record size and the threshold crossing frequency of the sampled data. With a hardware cycle time of 200 ns and a 1024-point signature, a threshold crossing frequency of 5000 Hertz can be processed and a stably averaged signature presented in real time.
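    Illustrative Python sketch (not from the record) of the stored-data version of the random decrement signature; the threshold, record length, and test signal are arbitrary choices.

      import numpy as np

      def random_decrement(x, threshold, n_points):
          """Average all subrecords of length n_points that start where x
          crosses the threshold from below (the random decrement signature)."""
          starts = np.where((x[:-1] < threshold) & (x[1:] >= threshold))[0] + 1
          starts = starts[starts + n_points <= len(x)]
          if len(starts) == 0:
              raise ValueError("no threshold crossings found")
          subrecords = np.stack([x[s:s + n_points] for s in starts])
          return subrecords.mean(axis=0), len(starts)

      rng = np.random.default_rng(4)
      # Lightly damped oscillator driven by noise, as a stand-in stochastic response
      n = 200_000
      x = np.zeros(n)
      for t in range(2, n):
          x[t] = 1.98 * x[t - 1] - 0.985 * x[t - 2] + rng.normal()

      sig, n_avg = random_decrement(x, threshold=x.std(), n_points=1024)
      print("subrecords averaged:", n_avg)
      print("signature starts at:", sig[:5])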

  13. Studies in astronomical time series analysis. IV - Modeling chaotic and random processes with linear filters

    NASA Technical Reports Server (NTRS)

    Scargle, Jeffrey D.

    1990-01-01

    While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.

  14. Analytical techniques for the study of some parameters of multispectral scanner systems for remote sensing

    NASA Technical Reports Server (NTRS)

    Wiswell, E. R.; Cooper, G. R. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. The concept of average mutual information in the received spectral random process about the spectral scene was developed. Techniques amenable to implementation on a digital computer were also developed to make the required average mutual information calculations. These techniques required identification of models for the spectral response process of scenes. Stochastic modeling techniques were adapted for use. These techniques were demonstrated on empirical data from wheat and vegetation scenes.

  15. Nature of alpha and beta particles in glycogen using molecular size distributions.

    PubMed

    Sullivan, Mitchell A; Vilaplana, Francisco; Cave, Richard A; Stapleton, David; Gray-Weale, Angus A; Gilbert, Robert G

    2010-04-12

    Glycogen is a randomly hyperbranched glucose polymer. Complex branched polymers have two structural levels: individual branches and the way these branches are linked. Liver glycogen has a third level: supramolecular clusters of beta particles which form larger clusters of alpha particles. Size distributions of native glycogen were characterized using size exclusion chromatography (SEC) to find the number and weight distributions and the size dependences of the number- and weight-average masses. These were fitted to two distinct randomly joined reference structures, constructed by random attachment of individual branches and as random aggregates of beta particles. The z-average size of the alpha particles in dimethylsulfoxide does not change significantly with high concentrations of LiBr, a solvent system that would disrupt hydrogen bonding. These data reveal that the beta particles are covalently bonded to form alpha particles through a hitherto unsuspected enzyme process, operative in the liver on particles above a certain size range.

  16. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    NASA Astrophysics Data System (ADS)

    Gao, Peng

    2018-06-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to a fast time. This model can be translated into a multiscale system of stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. More precisely, under suitable conditions we prove that there is a limit process in which the fast-varying process is averaged out; this limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast-varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.

  17. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    NASA Astrophysics Data System (ADS)

    Gao, Peng

    2018-04-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to a fast time. This model can be translated into a multiscale system of stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. More precisely, under suitable conditions we prove that there is a limit process in which the fast-varying process is averaged out; this limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast-varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.

  18. An invariance property of generalized Pearson random walks in bounded geometries

    NASA Astrophysics Data System (ADS)

    Mazzolo, Alain

    2009-03-01

    Invariance properties of random walks in bounded domains are a topic of growing interest since they contribute to improving our understanding of diffusion in confined geometries. Recently, for Pearson random walks with exponentially distributed straight paths, it has been shown that under isotropic uniform incidence the average length of the trajectories through the domain is independent of the characteristics of the random walk and depends only on the ratio of the domain's volume to its surface. In this paper, thanks to arguments of integral geometry, we generalize this property to any isotropic bounded stochastic process and give the conditions of its validity for isotropic unbounded stochastic processes. The analytical form of the traveled distance from the boundary to the first scattering event that ensures the validity of the Cauchy formula is also derived. The generalization of the Cauchy formula is an analytical constraint that thus concerns a very wide range of stochastic processes, from the original Pearson random walk to a Rayleigh distribution of the displacements, covering many situations of physical importance.
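    Illustrative Python sketch (not from the record): a Monte Carlo check of the Cauchy mean-chord property for the simplest case, a sphere under isotropic uniform incidence, where 4V/S = 4R/3.

      import numpy as np

      rng = np.random.default_rng(5)
      R = 1.0
      n = 1_000_000

      # Isotropic uniform incidence on a sphere: entry points are uniform on the
      # surface and entry directions follow a cosine law about the inward normal.
      # By symmetry the chord length depends only on the incidence angle theta
      # and equals 2 R cos(theta).
      u = rng.random(n)
      cos_theta = np.sqrt(u)      # cosine-law sampling: p(theta) ~ cos(theta) sin(theta)
      chords = 2.0 * R * cos_theta

      print("Monte Carlo mean chord:", chords.mean())
      print("Cauchy formula 4V/S   :", 4.0 * R / 3.0)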

  19. Inhomogeneous diffusion and ergodicity breaking induced by global memory effects

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2016-11-01

    We introduce a class of discrete random-walk models driven by global memory effects. At any time, the right-left transitions depend on the whole previous history of the walker, being defined by an urnlike memory mechanism. The characteristic function is calculated in an exact way, which allows us to demonstrate that the ensemble of realizations is ballistic. Asymptotically, each realization is equivalent to that of a biased Markovian diffusion process with transition rates that strongly differ from one trajectory to another. Using this "inhomogeneous diffusion" feature, the ergodic properties of the dynamics are analytically studied through the time-averaged moments. Even in the long-time regime, they remain random objects. While their average over realizations recovers the corresponding ensemble averages, the departure between time and ensemble averages is explicitly shown through their probability densities. For the density of the second time-averaged moment, the ergodic limit and the limit of infinite lag times do not commute. All these effects are induced by the memory effects. A generalized Einstein fluctuation-dissipation relation is also obtained for the time-averaged moments.
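    Illustrative Python sketch (not from the record): the abstract does not spell out the urn mechanism, so this uses an elephant-random-walk-type global-memory rule (repeat a uniformly chosen past step with probability q, otherwise reverse it) purely to show how time-averaged moments scatter across realizations.

      import numpy as np

      def memory_walk(n_steps, q, rng):
          """Global-memory walk: each step repeats (prob q) or reverses (prob 1-q)
          a step drawn uniformly from the whole previous history."""
          steps = np.empty(n_steps, dtype=int)
          steps[0] = rng.choice([-1, 1])
          for t in range(1, n_steps):
              past = steps[rng.integers(t)]
              steps[t] = past if rng.random() < q else -past
          return np.cumsum(steps)

      def time_averaged_msd(x, lag):
          d = x[lag:] - x[:-lag]
          return np.mean(d.astype(float) ** 2)

      rng = np.random.default_rng(6)
      tamsds = [time_averaged_msd(memory_walk(5000, q=0.85, rng=rng), lag=50)
                for _ in range(30)]
      print("time-averaged MSDs over realizations:", np.round(tamsds, 1))
      print("their spread (a signature of ergodicity breaking):", np.std(tamsds))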

  20. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.

  1. Phenomenological picture of fluctuations in branching random walks

    NASA Astrophysics Data System (ADS)

    Mueller, A. H.; Munier, S.

    2014-10-01

    We propose a picture of the fluctuations in branching random walks, which leads to predictions for the distribution of a random variable that characterizes the position of the bulk of the particles. We also interpret the 1/√t correction to the average position of the rightmost particle of a branching random walk for large times t ≫ 1, computed by Ebert and Van Saarloos, as fluctuations on top of the mean-field approximation of this process with a Brunet-Derrida cutoff at the tip that simulates discreteness. Our analytical formulas successfully compare to numerical simulations of a particular model of a branching random walk.

  2. Applying a weighted random forests method to extract karst sinkholes from LiDAR data

    NASA Astrophysics Data System (ADS)

    Zhu, Junfeng; Pierskalla, William P.

    2016-02-01

    Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve the locating and delineating of sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% for the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure like the random forest classifier can significantly reduce time and labor costs and makes it more tractable to map sinkholes from LiDAR data over large areas. However, the random forests method cannot totally replace manual procedures, such as visual inspection and field verification.
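    Illustrative Python sketch (not from the record) of the weighted-random-forest step using scikit-learn's class_weight option; the synthetic feature table stands in for the 11 LiDAR-derived predictors.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(7)

      # Placeholder feature table: 2000 depressions, 11 predictors, ~10% sinkholes
      X = rng.normal(size=(2000, 11))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2000) > 2.2).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      # class_weight="balanced" up-weights the rare sinkhole class,
      # mimicking a weighted random forest for imbalanced training data
      clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                   random_state=0)
      clf.fit(X_tr, y_tr)
      print("test accuracy:", clf.score(X_te, y_te))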

  3. Ages of Records in Random Walks

    NASA Astrophysics Data System (ADS)

    Szabó, Réka; Vető, Bálint

    2016-12-01

    We consider random walks with continuous and symmetric step distributions. We prove universal asymptotics for the average proportion of the age of the kth longest lasting record for k = 1, 2, … and for the probability that the record of the kth longest age is broken at step n. Due to the relation to the Chinese restaurant process, the ranked sequence of proportions of ages converges to the Poisson-Dirichlet distribution.
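    Illustrative Python sketch (not from the record): simulating the ranked proportions of record ages in a random walk with continuous symmetric steps; the walk length and step distribution are arbitrary.

      import numpy as np

      def record_age_proportions(n_steps, rng):
          """Return the ranked proportions of the ages between successive
          record highs of a random walk with continuous symmetric steps."""
          x = np.cumsum(rng.standard_normal(n_steps))
          running_max = np.maximum.accumulate(x)
          record_times = np.flatnonzero(x >= running_max)   # new records (start included)
          ages = np.diff(np.append(record_times, n_steps))
          return np.sort(ages)[::-1] / n_steps

      rng = np.random.default_rng(8)
      longest = [record_age_proportions(100_000, rng)[0] for _ in range(500)]
      print("average proportion of the longest record age:", np.mean(longest))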

  4. Statistical optics

    NASA Astrophysics Data System (ADS)

    Goodman, J. W.

    This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.

  5. Average fidelity between random quantum states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zyczkowski, Karol; Centrum Fizyki Teoretycznej, Polska Akademia Nauk, Aleja Lotnikow 32/44, 02-668 Warsaw; Perimeter Institute, Waterloo, Ontario, N2L 2Y5

    2005-03-01

    We analyze mean fidelity between random density matrices of size N, generated with respect to various probability measures in the space of mixed quantum states: the Hilbert-Schmidt measure, the Bures (statistical) measure, the measure induced by the partial trace, and the natural measure on the space of pure states. In certain cases explicit probability distributions for the fidelity are derived. The results obtained may be used to gauge the quality of quantum-information-processing schemes.

  6. Random Fields

    NASA Astrophysics Data System (ADS)

    Vanmarcke, Erik

    1983-03-01

    Random variation over space and time is one of the few attributes that might safely be predicted as characterizing almost any given complex system. Random fields or "distributed disorder systems" confront astronomers, physicists, geologists, meteorologists, biologists, and other natural scientists. They appear in the artifacts developed by electrical, mechanical, civil, and other engineers. They even underlie the processes of social and economic change. The purpose of this book is to bring together existing and new methodologies of random field theory and indicate how they can be applied to these diverse areas where a "deterministic treatment is inefficient and conventional statistics insufficient." Many new results and methods are included. After outlining the extent and characteristics of the random field approach, the book reviews the classical theory of multidimensional random processes and introduces basic probability concepts and methods in the random field context. It next gives a concise account of the second-order analysis of homogeneous random fields, in both the space-time domain and the wave number-frequency domain. This is followed by a chapter on spectral moments and related measures of disorder and on level excursions and extremes of Gaussian and related random fields. After developing a new framework of analysis based on local averages of one-, two-, and n-dimensional processes, the book concludes with a chapter discussing ramifications in the important areas of estimation, prediction, and control. The mathematical prerequisite has been held to basic college-level calculus.

  7. Renormalized Energy Concentration in Random Matrices

    NASA Astrophysics Data System (ADS)

    Borodin, Alexei; Serfaty, Sylvia

    2013-05-01

    We define a "renormalized energy" as an explicit functional on arbitrary point configurations of constant average density in the plane and on the real line. The definition is inspired by ideas of Sandier and Serfaty (From the Ginzburg-Landau model to vortex lattice problems, 2012; 1D log-gases and the renormalized energy, 2013). Roughly speaking, it is obtained by subtracting two leading terms from the Coulomb potential on a growing number of charges. The functional is expected to be a good measure of disorder of a configuration of points. We give certain formulas for its expectation for general stationary random point processes. For the random matrix β-sine processes on the real line ( β = 1,2,4), and Ginibre point process and zeros of Gaussian analytic functions process in the plane, we compute the expectation explicitly. Moreover, we prove that for these processes the variance of the renormalized energy vanishes, which shows concentration near the expected value. We also prove that the β = 2 sine process minimizes the renormalized energy in the class of determinantal point processes with translation invariant correlation kernels.

  8. Cost analysis of colorectal cancer screening with CT colonography in Italy.

    PubMed

    Mantellini, Paola; Lippi, Giuseppe; Sali, Lapo; Grazzini, Grazia; Delsanto, Silvia; Mallardi, Beatrice; Falchini, Massimo; Castiglione, Guido; Carozzi, Francesca Maria; Mascalchi, Mario; Milani, Stefano; Ventura, Leonardo; Zappa, Marco

    2018-06-01

    Unit costs of screening CT colonography (CTC) can be useful for cost-effectiveness analyses and for health care decision-making. We evaluated the unit costs of CTC as a primary screening test for colorectal cancer in the setting of a randomized trial in Italy. Data were collected within the randomized SAVE trial. Subjects were invited to screening CTC by mail and requested to have a pre-examination consultation. CTCs were performed with 64- and 128-slice CT scanners after reduced or full bowel preparation. Activity-based costing was used to determine unit costs per-process, per-participant to screening CTC, and per-subject with advanced neoplasia. Among 5242 subjects invited to undergo screening CTC, 1312 had pre-examination consultation and 1286 ultimately underwent CTC. Among 129 subjects with a positive CTC, 126 underwent assessment colonoscopy and 67 were ultimately diagnosed with advanced neoplasia (i.e., cancer or advanced adenoma). Cost per-participant of the entire screening CTC pathway was €196.80. Average cost per-participant for the screening invitation process was €17.04 and €9.45 for the pre-examination consultation process. Average cost per-participant of the CTC execution and reading process was €146.08 and of the diagnostic assessment colonoscopy process was €24.23. Average cost per-subject with advanced neoplasia was €3777.30. Cost of screening CTC was €196.80 per-participant. Our data suggest that the more relevant cost of screening CTC, amenable of intervention, is related to CTC execution and reading process.

  9. Multipass comminution process to produce precision wood particles of uniform size and shape with disrupted grain structure from wood chips

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dooley, James H; Lanning, David N

    A process of comminution of wood chips (C) having a grain direction to produce a mixture of wood particles (P), wherein the wood chips are characterized by an average length dimension (L.sub.C) as measured substantially parallel to the grain, an average width dimension (W.sub.C) as measured normal to L.sub.C and aligned cross grain, and an average height dimension (H.sub.C) as measured normal to W.sub.C and L.sub.C, and wherein the comminution process comprises the step of feeding the wood chips in a direction of travel substantially randomly to the grain direction one or more times through a counter rotating pair of intermeshing arrays of cutting discs (D) arrayed axially perpendicular to the direction of wood chip travel.

  10. Random and externally controlled occurrences of Dansgaard-Oeschger events

    NASA Astrophysics Data System (ADS)

    Lohmann, Johannes; Ditlevsen, Peter D.

    2018-05-01

    Dansgaard-Oeschger (DO) events constitute the most pronounced mode of centennial to millennial climate variability of the last glacial period. Since their discovery, many decades of research have been devoted to understanding the origin and nature of these rapid climate shifts. In recent years, a number of studies have appeared that report the emergence of DO-type variability in fully coupled general circulation models via different mechanisms. These mechanisms result in the occurrence of DO events at varying degrees of regularity, ranging from periodic to random. When examining the full sequence of DO events as captured in the North Greenland Ice Core Project (NGRIP) ice core record, one can observe high irregularity in the timing of individual events at any stage within the last glacial period. In addition to the prevailing irregularity, certain properties of the DO event sequence, such as the average event frequency or the relative distribution of cold versus warm periods, appear to change throughout the glacial. By using statistical hypothesis tests on simple event models, we investigate whether the observed event sequence may have been generated by stationary random processes or rather was strongly modulated by external factors. We find that the sequence of DO warming events is consistent with a stationary random process, whereas dividing the event sequence into warming and cooling events leads to inconsistency with two independent event processes. As we include external forcing, we find a particularly good fit to the observed DO sequence in a model where the average residence time in warm periods is controlled by global ice volume and that in cold periods by boreal summer insolation.

  11. Improvements in sub-grid, microphysics averages using quadrature based approaches

    NASA Astrophysics Data System (ADS)

    Chowdhary, K.; Debusschere, B.; Larson, V. E.

    2013-12-01

    Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. autoconversion rate, essentially interpreting the cloud microphysics quantities as a random variable in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin Hypercube, can be used to calculate statistics, e.g. averages, on the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternate approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin Hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size by almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
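    Illustrative Python sketch (not from the record): the grid-box average of a toy Kessler-like autoconversion rate, max(0, k(q - q_crit)), over an assumed normal sub-grid distribution of cloud water, computed by Gauss-Hermite quadrature and by plain Monte Carlo for comparison.

      import numpy as np

      def autoconversion(q):
          """Toy Kessler-like rate: linear above a threshold, zero below."""
          k, q_crit = 1e-3, 0.5
          return np.maximum(0.0, k * (q - q_crit))

      mu, sigma = 0.6, 0.2        # assumed sub-grid cloud-water PDF (normal)

      # Gauss-Hermite quadrature for E[f(q)], q ~ N(mu, sigma^2)
      nodes, weights = np.polynomial.hermite_e.hermegauss(16)
      quad_avg = np.sum(weights * autoconversion(mu + sigma * nodes)) / np.sqrt(2 * np.pi)

      # Monte Carlo reference
      rng = np.random.default_rng(9)
      mc_avg = autoconversion(rng.normal(mu, sigma, 100_000)).mean()

      print("quadrature average :", quad_avg)
      print("Monte Carlo average:", mc_avg)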

  12. Studies in astronomical time series analysis. I - Modeling random processes in the time domain

    NASA Technical Reports Server (NTRS)

    Scargle, J. D.

    1981-01-01

    Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm of time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is considered as an example.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krasnobaeva, L. A., E-mail: kla1983@mail.ru; Siberian State Medical University Moscowski Trakt 2, Tomsk, 634050; Shapovalov, A. V.

    Within the formalism of the Fokker–Planck equation, the influence of a nonstationary external force, a random force, and dissipation effects on the dynamics of local conformational perturbations (kinks) propagating along the DNA molecule is investigated. Such waves play an important role in the regulation of important biological processes in living systems at the molecular level. A modified sine-Gordon equation, simulating the rotational oscillations of bases in one of the DNA chains, was used as a dynamic model of DNA. The equation of evolution of the kink momentum is obtained in the form of a stochastic differential equation in the Stratonovich sense within the framework of the well-known McLaughlin and Scott energy approach. The corresponding Fokker–Planck equation for the momentum distribution function coincides with the equation describing the Ornstein–Uhlenbeck process with a regular nonstationary external force. The influence of nonlinear stochastic effects on the kink dynamics is considered with the help of the nonlinear Fokker–Planck equation with the shift coefficient dependent on the first moment of the kink momentum distribution function. Expressions are derived for the average value and variance of the momentum. Examples are considered which demonstrate the influence of the external regular and random forces on the evolution of the average value and variance of the kink momentum.

  14. Value of the future: Discounting in random environments

    NASA Astrophysics Data System (ADS)

    Farmer, J. Doyne; Geanakoplos, John; Masoliver, Jaume; Montero, Miquel; Perelló, Josep

    2015-05-01

    We analyze how to value future costs and benefits when they must be discounted relative to the present. We introduce the subject for the nonspecialist and take into account the randomness of the economic evolution by studying the discount function of three widely used processes for the dynamics of interest rates: Ornstein-Uhlenbeck, Feller, and log-normal. Besides obtaining exact expressions for the discount function and simple asymptotic approximations, we show that historical average interest rates overestimate long-run discount rates and that this effect can be large. In other words, long-run discount rates should be substantially less than the average rate observed in the past, otherwise any cost-benefit calculation would be biased in favor of the present and against interventions that may protect the future.
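    Illustrative Python sketch (not from the record): Monte Carlo discounting under Ornstein-Uhlenbeck (Vasicek-type) rates, showing an effective long-run discount rate below the average rate; all parameter values are invented.

      import numpy as np

      rng = np.random.default_rng(10)

      # Ornstein-Uhlenbeck rate: dr = a*(m - r) dt + s dW
      a, m, s = 0.1, 0.03, 0.02
      dt, T, n_paths = 0.1, 200.0, 20_000
      n_steps = int(T / dt)

      r = np.full(n_paths, m)
      integral = np.zeros(n_paths)
      for _ in range(n_steps):
          integral += r * dt
          r += a * (m - r) * dt + s * np.sqrt(dt) * rng.standard_normal(n_paths)

      discount = np.exp(-integral).mean()            # D(T) = E[exp(-integral of r dt)]
      effective_rate = -np.log(discount) / T         # long-run discount rate
      print("average (historical) rate:", m)
      print("effective long-run rate  :", effective_rate)   # smaller than m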

  15. Value of the future: Discounting in random environments.

    PubMed

    Farmer, J Doyne; Geanakoplos, John; Masoliver, Jaume; Montero, Miquel; Perelló, Josep

    2015-05-01

    We analyze how to value future costs and benefits when they must be discounted relative to the present. We introduce the subject for the nonspecialist and take into account the randomness of the economic evolution by studying the discount function of three widely used processes for the dynamics of interest rates: Ornstein-Uhlenbeck, Feller, and log-normal. Besides obtaining exact expressions for the discount function and simple asymptotic approximations, we show that historical average interest rates overestimate long-run discount rates and that this effect can be large. In other words, long-run discount rates should be substantially less than the average rate observed in the past, otherwise any cost-benefit calculation would be biased in favor of the present and against interventions that may protect the future.

  16. Robust Tomography using Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Silva, Marcus; Kimmel, Shelby; Johnson, Blake; Ryan, Colm; Ohki, Thomas

    2013-03-01

    Conventional randomized benchmarking (RB) can be used to estimate the fidelity of Clifford operations in a manner that is robust against preparation and measurement errors -- thus allowing for a more accurate and relevant characterization of the average error in Clifford gates compared to standard tomography protocols. Interleaved RB (IRB) extends this result to the extraction of error rates for individual Clifford gates. In this talk we will show how to combine multiple IRB experiments to extract all information about the unital part of any trace preserving quantum process. Consequently, one can compute the average fidelity to any unitary, not just the Clifford group, with tighter bounds than IRB. Moreover, the additional information can be used to design improvements in control. MS, BJ, CR and TO acknowledge support from IARPA under contract W911NF-10-1-0324.

  17. The statistics of peaks of Gaussian random fields. [cosmological density fluctuations

    NASA Technical Reports Server (NTRS)

    Bardeen, J. M.; Bond, J. R.; Kaiser, N.; Szalay, A. S.

    1986-01-01

    A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima are examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of heights of the maxima, and the average density of 'upcrossing' points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima.

  18. Efficient sampling of complex network with modified random walk strategies

    NASA Astrophysics Data System (ADS)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies: choosing seed node (CSN) random walk and no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influences of the seed node choice and path overlap, respectively. The three random walk samplings are applied to the Erdös-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks, respectively. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree, and average clustering coefficient, are studied. Similar conclusions can be reached with these three random walk strategies. First, networks with small scales and simple structures are conducive to sampling. Second, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. Third, all the degree distributions of the subnets are slightly biased to the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir networks, some obvious characteristics, such as the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
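    Illustrative Python sketch (not from the record): classical random-walk sampling versus a no-retracing variant on a Barabási-Albert graph; the seed-choice (CSN) strategy is not reproduced, and the network parameters are arbitrary.

      import random
      import networkx as nx

      def random_walk_sample(g, n_steps, no_retracing=False, seed=0):
          random.seed(seed)
          node = random.choice(list(g.nodes))
          visited, prev = {node}, None
          for _ in range(n_steps):
              nbrs = list(g.neighbors(node))
              if no_retracing and prev in nbrs and len(nbrs) > 1:
                  nbrs.remove(prev)          # avoid immediately retracing the last edge
              prev, node = node, random.choice(nbrs)
              visited.add(node)
          return g.subgraph(visited)

      g = nx.barabasi_albert_graph(5000, 3, seed=1)
      for flag in (False, True):
          sub = random_walk_sample(g, 20_000, no_retracing=flag)
          degs = [d for _, d in sub.degree()]
          print("no-retracing" if flag else "classical   ",
                "avg degree:", round(sum(degs) / len(degs), 3),
                "avg clustering:", round(nx.average_clustering(sub), 4))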

  19. Modeling methodology for MLS range navigation system errors using flight test data

    NASA Technical Reports Server (NTRS)

    Karmali, M. S.; Phatak, A. V.

    1982-01-01

    Flight test data was used to develop a methodology for modeling MLS range navigation system errors. The data used corresponded to the constant velocity and glideslope approach segment of a helicopter landing trajectory. The MLS range measurement was assumed to consist of low frequency and random high frequency components. The random high frequency component was extracted from the MLS range measurements. This was done by appropriate filtering of the range residual generated from a linearization of the range profile for the final approach segment. This range navigation system error was then modeled as an autoregressive moving average (ARMA) process. Maximum likelihood techniques were used to identify the parameters of the ARMA process.

  20. Real-Space x-ray tomographic reconstruction of randomly oriented objects with sparse data frames.

    PubMed

    Ayyer, Kartik; Philipp, Hugh T; Tate, Mark W; Elser, Veit; Gruner, Sol M

    2014-02-10

    Schemes for X-ray imaging single protein molecules using new x-ray sources, like x-ray free electron lasers (XFELs), require processing many frames of data that are obtained by taking temporally short snapshots of identical molecules, each with a random and unknown orientation. Due to the small size of the molecules and short exposure times, average signal levels of much less than 1 photon/pixel/frame are expected, much too low to be processed using standard methods. One approach to process the data is to use statistical methods developed in the EMC algorithm (Loh & Elser, Phys. Rev. E, 2009) which processes the data set as a whole. In this paper we apply this method to a real-space tomographic reconstruction using sparse frames of data (below 10^-2 photons/pixel/frame) obtained by performing x-ray transmission measurements of a low-contrast, randomly-oriented object. This extends the work by Philipp et al. (Optics Express, 2012) to three dimensions and is one step closer to the single molecule reconstruction problem.

  1. Comminution process to produce precision wood particles of uniform size and shape with disrupted grain structure from wood chips

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dooley, James H; Lanning, David N

    A process of comminution of wood chips (C) having a grain direction to produce a mixture of wood particles (P), wherein the wood chips are characterized by an average length dimension (L.sub.C) as measured substantially parallel to the grain, an average width dimension (W.sub.C) as measured normal to L.sub.C and aligned cross grain, and an average height dimension (H.sub.C) as measured normal to W.sub.C and L.sub.C, and wherein the comminution process comprises the step of feeding the wood chips in a direction of travel substantially randomly to the grain direction through a counter rotating pair of intermeshing arrays of cutting discs (D) arrayed axially perpendicular to the direction of wood chip travel, wherein the cutting discs have a uniform thickness (T.sub.D), and wherein at least one of L.sub.C, W.sub.C, and H.sub.C is greater than T.sub.D.

  2. Transmembrane protein CD93 diffuses by a continuous time random walk.

    NASA Astrophysics Data System (ADS)

    Goiko, Maria; de Bruyn, John; Heit, Bryan

    Molecular motion within the cell membrane is a poorly defined process. In this study, we characterized the diffusion of the transmembrane protein CD93. By careful analysis of the dependence of the ensemble-averaged mean squared displacement (EA-MSD, ⟨r²⟩) on time t and of the ensemble-averaged, time-averaged MSD (EA-TAMSD, ⟨δ²⟩) on lag time τ and total measurement time T, we showed that the motion of CD93 is well described by a continuous-time random walk (CTRW). CD93 tracks were acquired using single particle tracking. The tracks were classified as confined or free, and the behavior of the MSD analyzed. The EA-MSDs of both populations grew non-linearly with t, indicative of anomalous diffusion. Their EA-TAMSDs were found to depend on both τ and T, indicating non-ergodicity. Free molecules had ⟨r²⟩ ∝ t^α and ⟨δ²⟩ ∝ τ/T^(1−α), with α ≈ 0.5, consistent with a CTRW. Mean maximal excursion analysis supported this result. Confined CD93 had ⟨r²⟩ ∝ t^0 and ⟨δ²⟩ ∝ (τ/T)^α, with α ≈ 0.3, consistent with a confined CTRW. CTRWs are described by a series of random jumps interspersed with power-law distributed waiting times, and may arise due to the interactions of CD93 with the endocytic machinery. NSERC.

  3. 77 FR 12847 - Agency Information Collection Activities: Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-02

    .... The NADAC is a new pricing benchmark that will be based on the national average costs that pharmacies... costs collected directly from pharmacies through a nationwide survey process. This survey will be... NADAC Survey Request for Information has been developed to send to random pharmacies for voluntary...

  4. On the Determinants of the Conjunction Fallacy: Probability versus Inductive Confirmation

    ERIC Educational Resources Information Center

    Tentori, Katya; Crupi, Vincenzo; Russo, Selena

    2013-01-01

    Major recent interpretations of the conjunction fallacy postulate that people assess the probability of a conjunction according to (non-normative) averaging rules as applied to the constituents' probabilities or represent the conjunction fallacy as an effect of random error in the judgment process. In the present contribution, we contrast such…

  5. Creation of high-pinning microstructures in post production YBCO coated conductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welp, Ulrich; Miller, Dean J.; Kwok, Wai-Kwong

    A method comprising irradiating a polycrystalline rare earth metal-alkaline earth metal-transition metal-oxide superconductor layer with protons having an energy of 1 to 6 MeV. The irradiating process produces an irradiated layer that comprises randomly dispersed defects with an average diameter in the range of 1-10 nm.

  6. Random element method for numerical modeling of diffusional processes

    NASA Technical Reports Server (NTRS)

    Ghoniem, A. F.; Oppenheim, A. K.

    1982-01-01

    The random element method is a generalization of the random vortex method that was developed for the numerical modeling of momentum transport processes as expressed in terms of the Navier-Stokes equations. The method is based on the concept that random walk, as exemplified by Brownian motion, is the stochastic manifestation of diffusional processes. The algorithm based on this method is grid-free and does not require the diffusion equation to be discretized over a mesh; it is thus devoid of the numerical diffusion associated with finite difference methods. Moreover, the algorithm is self-adaptive in space and explicit in time, resulting in an improved numerical resolution of gradients as well as a simple and efficient computational procedure. The method is applied here to an assortment of problems of diffusion of momentum and energy in one dimension, as well as heat conduction in two dimensions, in order to assess its validity and accuracy. The numerical solutions obtained are found to be in good agreement with exact solutions except for a statistical error introduced by using a finite number of elements; this error can be reduced by increasing the number of elements or by using ensemble averaging over a number of solutions.
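    Illustrative Python sketch (not from the record) of the random-walk view of diffusion underlying the method: elements released from a point source take Brownian steps, and their histogram is compared with the exact 1-D diffusion solution; the diffusivity, time, and element count are arbitrary.

      import numpy as np

      rng = np.random.default_rng(11)

      D, t_end, dt = 1.0, 1.0, 0.01     # diffusivity, total time, time step
      n_elements = 200_000               # all elements start at x = 0 (point source)

      x = np.zeros(n_elements)
      for _ in range(int(t_end / dt)):
          x += np.sqrt(2 * D * dt) * rng.standard_normal(n_elements)   # random walk step

      # Compare the element histogram with the exact point-source solution
      bins = np.linspace(-5, 5, 51)
      hist, edges = np.histogram(x, bins=bins, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      exact = np.exp(-centers**2 / (4 * D * t_end)) / np.sqrt(4 * np.pi * D * t_end)

      print("max deviation from exact solution:", np.abs(hist - exact).max())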

  7. Stochastic stability of parametrically excited random systems

    NASA Astrophysics Data System (ADS)

    Labou, M.

    2004-01-01

    Multidegree-of-freedom dynamic systems subjected to parametric excitation are analyzed for stochastic stability. The variation of excitation intensity with time is described by the sum of a harmonic function and a stationary random process. The stability boundaries are determined by the stochastic averaging method. The effect of random parametric excitation on the stability of trivial solutions of systems of differential equations for the moments of phase variables is studied. It is assumed that the frequency of harmonic component falls within the region of combination resonances. Stability conditions for the first and second moments are obtained. It turns out that additional parametric excitation may have a stabilizing or destabilizing effect, depending on the values of certain parameters of random excitation. As an example, the stability of a beam in plane bending is analyzed.

  8. Laser absorption of carbon fiber reinforced polymer with randomly distributed carbon fibers

    NASA Astrophysics Data System (ADS)

    Hu, Jun; Xu, Hebing; Li, Chao

    2018-03-01

    Laser processing of carbon fiber reinforced polymer (CFRP) is a non-traditional machining method which has many prospective applications. The laser absorption characteristics of CFRP are analyzed in this paper. A ray tracing model describing the interaction of the laser spot with CFRP is established. The material model contains randomly distributed carbon fibers which are generated using an improved carbon fiber placement method. It was found that CFRP has good laser absorption due to multiple reflections of the light rays in the material’s microstructure. The randomly distributed carbon fibers cause the absorptivity of individual light rays to vary randomly across the laser spot, and the average absorptivity fluctuates noticeably as the laser moves. The experimental measurements agree well with the values predicted by the ray tracing model.

  9. Energy dissipation in a friction-controlled slide of a body excited by random motions of the foundation

    NASA Astrophysics Data System (ADS)

    Berezin, Sergey; Zayats, Oleg

    2018-01-01

    We study a friction-controlled slide of a body excited by random motions of the foundation it is placed on. Specifically, we are interested in such quantities as displacement, traveled distance, and energy loss due to friction. We assume that the random excitation is switched off at some time (possibly infinite) and show that the problem can be treated in an analytic, explicit, manner. Particularly, we derive formulas for the moments of the displacement and distance, and also for the average energy loss. To accomplish that we use the Pugachev-Sveshnikov equation for the characteristic function of a continuous random process given by a system of SDEs. This equation is solved by reduction to a parametric Riemann boundary value problem of complex analysis.

  10. Distributional behavior of diffusion coefficients obtained by single trajectories in annealed transit time model

    NASA Astrophysics Data System (ADS)

    Akimoto, Takuma; Yamamoto, Eiji

    2016-12-01

    Local diffusion coefficients in disordered systems such as spin glass systems and living cells are highly heterogeneous and may change over time. Such a time-dependent and spatially heterogeneous environment results in irreproducibility of single-particle-tracking measurements. Irreproducibility of time-averaged observables has been theoretically studied in the context of weak ergodicity breaking in stochastic processes. Here, we provide rigorous descriptions of equilibrium and non-equilibrium diffusion processes for the annealed transit time model, which is a heterogeneous diffusion model in living cells. We give analytical solutions for the mean square displacement (MSD) and the relative standard deviation of the time-averaged MSD for equilibrium and non-equilibrium situations. We find that the time-averaged MSD grows linearly with time and that the time-averaged diffusion coefficients are intrinsically random (irreproducible) even in the long-time measurements in non-equilibrium situations. Furthermore, the distribution of the time-averaged diffusion coefficients converges to a universal distribution in the sense that it does not depend on initial conditions. Our findings pave the way for a theoretical understanding of distributional behavior of the time-averaged diffusion coefficients in disordered systems.
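    The time-averaged MSD and its scatter across trajectories, which the abstract uses as an indicator of irreproducibility, are straightforward to estimate from data. The sketch below uses ordinary Brownian trajectories purely as a stand-in for measured tracks; it does not implement the annealed transit time model itself, and the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def time_averaged_msd(x, lag):
    """Time-averaged MSD of a single trajectory x at a given lag (in samples)."""
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2)

# Ensemble of ordinary Brownian trajectories used only to exercise the estimator;
# a heterogeneous-diffusivity model would replace this generator.
n_traj, n_steps, dt, D = 200, 10000, 1e-2, 1.0
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_traj, n_steps))
trajs = np.cumsum(steps, axis=1)

lag = 100
tamsd = np.array([time_averaged_msd(x, lag) for x in trajs])
# Relative standard deviation of the time-averaged MSD across trajectories:
# the ergodicity-breaking indicator discussed in the abstract (small for Brownian motion).
print("mean TAMSD :", tamsd.mean())
print("relative SD:", tamsd.std() / tamsd.mean())
```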

  11. Robust Characterization of Loss Rates

    NASA Astrophysics Data System (ADS)

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph

    2015-08-01

    Many physical implementations of qubits—including ion traps, optical lattices and linear optics—suffer from loss. A nonzero probability of irretrievably losing a qubit can be a substantial obstacle to fault-tolerant methods of processing quantum information, requiring new techniques to safeguard against loss that introduce an additional overhead that depends upon the loss rate. Here we present a scalable and platform-independent protocol for estimating the average loss rate (averaged over all input states) resulting from an arbitrary Markovian noise process, as well as an independent estimate of detector efficiency. Moreover, we show that our protocol gives an additional constraint on estimated parameters from randomized benchmarking that improves the reliability of the estimated error rate and provides a new indicator for non-Markovian signatures in the experimental data. We also derive a bound for the state-dependent loss rate in terms of the average loss rate.

  12. Time series analysis of collective motions in proteins

    NASA Astrophysics Data System (ADS)

    Alakent, Burak; Doruker, Pemra; Çamurdan, Mehmet C.

    2004-01-01

    The dynamics of α-amylase inhibitor tendamistat around its native state is investigated using time series analysis of the principal components of the Cα atomic displacements obtained from molecular dynamics trajectories. Collective motion along a principal component is modeled as a homogeneous nonstationary process, which is the result of damped oscillations in local minima superimposed on a random walk. The motion in local minima is described by a stationary autoregressive moving average model, consisting of the frequency, damping factor, moving average parameters and random shock terms. Frequencies for the first 50 principal components are found to be in the 3-25 cm⁻¹ range, which are well correlated with the principal component indices and also with atomistic normal mode analysis results. Damping factors, though their correlation is less pronounced, decrease as principal component indices increase, indicating that low frequency motions are less affected by friction. The existence of a positive moving average parameter indicates that the stochastic force term is likely to disturb the mode in opposite directions for two successive sampling times, showing the mode's tendency to stay close to the minimum. All these four parameters affect the mean square fluctuations of a principal mode within a single minimum. The inter-minima transitions are described by a random walk model, which is driven by a random shock term considerably smaller than that for the intra-minimum motion. The principal modes are classified into three subspaces based on their dynamics: essential, semiconstrained, and constrained, at least in partial consistency with previous studies. The Gaussian-type distributions of the intermediate modes, called "semiconstrained" modes, are explained by asserting that this random walk behavior is not completely free but confined between energy barriers.
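    As a simplified, self-contained illustration of the intra-minimum description above, the sketch below generates a damped-oscillation time series driven by random shocks (an AR(2) process) and recovers the frequency and damping factor from a least-squares autoregressive fit. The paper fits full ARMA models to real MD principal components; the moving-average part is omitted here for brevity, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

dt = 0.1                                   # hypothetical sampling interval (ps)
freq_true, damping_true = 0.5, 0.2         # illustrative frequency (1/ps) and damping (1/ps)
r = np.exp(-damping_true * dt)
a1, a2 = 2 * r * np.cos(2 * np.pi * freq_true * dt), -r ** 2

# Damped oscillation in a local minimum driven by random shocks.
n = 5000
y = np.zeros(n)
shocks = rng.normal(0.0, 1.0, n)
for t in range(2, n):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + shocks[t]

# Least-squares fit of y[t] = phi1*y[t-1] + phi2*y[t-2] + noise.
X = np.column_stack([y[1:-1], y[:-2]])
phi1, phi2 = np.linalg.lstsq(X, y[2:], rcond=None)[0]

# Frequency and damping factor recovered from the complex roots of z^2 - phi1*z - phi2.
pole = np.roots([1.0, -phi1, -phi2])[0]
freq_est = abs(np.angle(pole)) / (2 * np.pi * dt)
damping_est = -np.log(abs(pole)) / dt
print(f"frequency ~ {freq_est:.3f} (true {freq_true})")
print(f"damping   ~ {damping_est:.3f} (true {damping_true})")
```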

  13. Averaging in SU(2) open quantum random walk

    NASA Astrophysics Data System (ADS)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  14. Fractional Stochastic Field Theory

    NASA Astrophysics Data System (ADS)

    Honkonen, Juha

    2018-02-01

    Models describing evolution of physical, chemical, biological, social and financial processes are often formulated as differential equations with the understanding that they are large-scale equations for averages of quantities describing intrinsically random processes. Explicit account of randomness may lead to significant changes in the asymptotic behaviour (anomalous scaling) in such models especially in low spatial dimensions, which in many cases may be captured with the use of the renormalization group. Anomalous scaling and memory effects may also be introduced with the use of fractional derivatives and fractional noise. Construction of renormalized stochastic field theory with fractional derivatives and fractional noise in the underlying stochastic differential equations and master equations and the interplay between fluctuation-induced and built-in anomalous scaling behaviour is reviewed and discussed.

  15. Progress in Operational Analysis of Launch Vehicles in Nonstationary Flight

    NASA Technical Reports Server (NTRS)

    James, George; Kaouk, Mo; Cao, Timothy

    2013-01-01

    This paper presents recent results in an ongoing effort to understand and develop techniques to process launch vehicle data, which is extremely challenging for modal parameter identification. The primary source of difficulty is due to the nonstationary nature of the situation. The system is changing, the environment is not steady, and there is an active control system operating. Hence, the primary tool for producing clean operational results (significant data lengths and data averaging) is not available to the user. This work reported herein uses a correlation-based two step operational modal analysis approach to process the relevant data sets for understanding and development of processes. A significant drawback for such processing of short time histories is a series of beating phenomena due to the inability to average out random modal excitations. A recursive correlation process coupled to a new convergence metric (designed to mitigate the beating phenomena) is the object of this study. It has been found in limited studies that this process creates clean modal frequency estimates but numerically alters the damping.

  16. Dynamical influence processes on networks: general theory and applications to social contagion.

    PubMed

    Harris, Kameron Decker; Danforth, Christopher M; Dodds, Peter Sheridan

    2013-08-01

    We study binary state dynamics on a network where each node acts in response to the average state of its neighborhood. By allowing varying amounts of stochasticity in both the network and node responses, we find different outcomes in random and deterministic versions of the model. In the limit of a large, dense network, however, we show that these dynamics coincide. We construct a general mean-field theory for random networks and show this predicts that the dynamics on the network is a smoothed version of the average response function dynamics. Thus, the behavior of the system can range from steady state to chaotic depending on the response functions, network connectivity, and update synchronicity. As a specific example, we model the competing tendencies of imitation and nonconformity by incorporating an off-threshold into standard threshold models of social contagion. In this way, we attempt to capture important aspects of fashions and societal trends. We compare our theory to extensive simulations of this "limited imitation contagion" model on Poisson random graphs, finding agreement between the mean-field theory and stochastic simulations.
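    A minimal numerical sketch of such binary-state, neighborhood-averaged dynamics on a Poisson random graph is given below. The response function used here, in which a node is active when its active-neighbor fraction lies between an on-threshold and an off-threshold, is one plausible reading of the "limited imitation" idea and is not necessarily the paper's exact model; thresholds and sizes are illustrative.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)

# Synchronous update: each node responds to the fraction of active neighbors.
n, mean_degree = 2000, 10
g = nx.erdos_renyi_graph(n, mean_degree / (n - 1), seed=3)
adj = nx.to_numpy_array(g)
degree = adj.sum(axis=1)

phi_on, phi_off = 0.2, 0.8                       # hypothetical on/off thresholds
state = (rng.random(n) < 0.3).astype(float)      # random initial active nodes

for step in range(51):
    frac_active = adj @ state / np.maximum(degree, 1.0)
    # Imitation below the off-threshold, nonconformity above it.
    new_state = ((frac_active >= phi_on) & (frac_active <= phi_off)).astype(float)
    if step % 10 == 0:
        print(f"step {step:3d}: active fraction = {state.mean():.3f}")
    state = new_state
```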

  17. Dynamics and morphometric characterization of hippocampus neurons using digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Elkatlawy, Saeid; Gomariz, María.; Soto-Sánchez, Cristina; Martínez Navarrete, Gema; Fernández, Eduardo; Fimia, Antonio

    2014-05-01

    In this paper we report on the use of digital holographic microscopy for 3D real-time imaging of cultured neurons and neural networks, in vitro. Digital holographic microscopy is employed as an assessment tool to study the biophysical origin of neurodegenerative diseases. Our study consists of the morphological characterization of the axon, dendrites and cell bodies. The average size and thickness of the soma were 21 and 13 μm, respectively. Furthermore, the average size and diameter of some randomly selected neurites were 4.8 and 0.89 μm, respectively. In addition, the spatiotemporal growth process of the cell bodies and extensions was found to follow a non-linear behavior of the nervous system. Remarkably, this non-linear process represents the relationship between the growth of the cell body and that of the axon and dendrites of the neurons.

  18. Nursing Home Quality, Cost, Staffing, and Staff Mix

    ERIC Educational Resources Information Center

    Rantz, Marilyn J.; Hicks, Lanis; Grando, Victoria; Petroski, Gregory F.; Madsen, Richard W.; Mehr, David R.; Conn, Vicki; Zwygart-Staffacher, Mary; Scott, Jill; Flesner, Marcia; Bostick, Jane; Porter, Rose; Maas, Meridean

    2004-01-01

    Purpose: The purpose of this study was to describe the processes of care, organizational attributes, cost of care, staffing level, and staff mix in a sample of Missouri homes with good, average, and poor resident outcomes. Design and Methods: A three-group exploratory study design was used, with 92 nursing homes randomly selected from all nursing…

  19. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upper-bounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upper-bounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
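    For concreteness, the random coding exponent E_r(R) = max over 0 ≤ ρ ≤ 1 of [E_0(ρ) − ρR] that underlies the bound can be evaluated numerically. The sketch below does this for a binary symmetric channel with uniform inputs, a standard textbook instance chosen for illustration and not taken from the paper.

```python
import numpy as np

def gallager_E0(rho, p):
    """Gallager E0 function for a BSC with crossover p and uniform inputs (in bits)."""
    s = 1.0 / (1.0 + rho)
    return rho - (1.0 + rho) * np.log2(p ** s + (1.0 - p) ** s)

def random_coding_exponent(R, p, n_grid=2001):
    """E_r(R) = max_{0<=rho<=1} [E0(rho) - rho*R], evaluated on a grid."""
    rho = np.linspace(0.0, 1.0, n_grid)
    return np.max(gallager_E0(rho, p) - rho * R)

p = 0.1                                   # crossover probability of the BSC
capacity = 1.0 + p * np.log2(p) + (1 - p) * np.log2(1 - p)
for R in (0.1, 0.3, 0.5):
    print(f"rate {R:.2f}: E_r = {random_coding_exponent(R, p):.4f}  (capacity {capacity:.3f})")
```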

  20. Fluorescence correlation spectroscopy: the case of subdiffusion.

    PubMed

    Lubelski, Ariel; Klafter, Joseph

    2009-03-18

    The theory of fluorescence correlation spectroscopy is revisited here for the case of subdiffusing molecules. Subdiffusion is assumed to stem from a continuous-time random walk process with a fat-tailed distribution of waiting times and can therefore be formulated in terms of a fractional diffusion equation (FDE). The FDE plays the central role in developing the fluorescence correlation spectroscopy expressions, analogous to the role played by the simple diffusion equation for regular systems. Due to the nonstationary nature of the continuous-time random walk/FDE, some interesting properties emerge that are amenable to experimental verification and may help in discriminating among subdiffusion mechanisms. In particular, the current approach predicts (1) a strong dependence of correlation functions on the initial time (aging); (2) sensitivity of correlation functions to the averaging procedure, ensemble versus time averaging (ergodicity breaking); and (3) that the basic mean-squared displacement observable depends on how the mean is taken.

  1. Almost sure convergence in quantum spin glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buzinski, David, E-mail: dab197@case.edu; Meckes, Elizabeth, E-mail: elizabeth.meckes@case.edu

    2015-12-15

    Recently, Keating, Linden, and Wells [Markov Processes Relat. Fields 21(3), 537-555 (2015)] showed that the density of states measure of a nearest-neighbor quantum spin glass model is approximately Gaussian when the number of particles is large. The density of states measure is the ensemble average of the empirical spectral measure of a random matrix; in this paper, we use concentration of measure and entropy techniques together with the result of Keating, Linden, and Wells to show that in fact the empirical spectral measure of such a random matrix is almost surely approximately Gaussian itself with no ensemble averaging. We also extend this result to a spherical quantum spin glass model and to the more general coupling geometries investigated by Erdős and Schröder [Math. Phys., Anal. Geom. 17(3-4), 441–464 (2014)].

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin, E-mail: godyalin@163.com; Singh, Uttam, E-mail: uttamsingh@hri.res.in; Pati, Arun K., E-mail: akpati@hri.res.in

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy which is attained for the maximally mixed state as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because of the fact that average coherence of random mixed states is bounded uniformly, however, the average coherence of random pure states increases with the increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrary small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.

  3. Enhancing the capacity of substance abuse prevention coalitions through training and technical assistance.

    PubMed

    Watson-Thompson, Jomella; Woods, Nikki Keene; Schober, Daniel J; Schultz, Jerry A

    2013-01-01

    Community capacity may be enhanced through intermediary supports that provide training and technical assistance (TA). This study used a randomized pre/posttest design to assess the impact of training and TA on coalition capacity. Seven community coalitions from the Midwest participated in the 2-year study, which included 36 hours of training, followed by monthly TA calls to support action planning implementation for prioritized processes. Collaborative processes most commonly identified as high-need areas for TA were Developing Organizational Structure, Documenting Progress, Making Outcomes Matter, and Sustaining the Work. Based on a coalition survey, the average change for processes prioritized through TA across all seven coalitions was .27 (SD = .29), while the average change for non-prioritized processes was .09 (SD = .20) (t(6) = 4.86, p = .003, d = 1.84). The findings from this study suggest that TA can increase coalition capacity for implementing collaborative processes using a participatory approach.

  4. Comminution process to produce precision wood particles of uniform size and shape with disrupted grain structure from wood chips

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dooley, James H.; Lanning, David N.

    A process of comminution of wood chips (C) having a grain direction to produce a mixture of wood particles (P), wherein the wood chips are characterized by an average length dimension (L_C) as measured substantially parallel to the grain, an average width dimension (W_C) as measured normal to L_C and aligned cross grain, and an average height dimension (H_C) as measured normal to W_C and L_C, wherein W_C > L_C, and wherein the comminution process comprises the step of feeding the wood chips in a direction of travel substantially randomly to the grain direction through a counter-rotating pair of intermeshing arrays of cutting discs (D) arrayed axially perpendicular to the direction of wood chip travel, wherein the cutting discs have a uniform thickness (T_D), and wherein at least one of L_C, W_C, and H_C is less than T_D.

  5. Double jeopardy in inferring cognitive processes

    PubMed Central

    Fific, Mario

    2014-01-01

    Inferences we make about underlying cognitive processes can be jeopardized in two ways due to problematic forms of aggregation. First, averaging across individuals is typically considered a very useful tool for removing random variability. The threat is that averaging across subjects leads to averaging across different cognitive strategies, thus harming our inferences. The second threat comes from the construction of inadequate research designs possessing a low diagnostic accuracy of cognitive processes. For that reason we introduced the systems factorial technology (SFT), which has primarily been designed to make inferences about underlying processing order (serial, parallel, coactive), stopping rule (terminating, exhaustive), and process dependency. SFT proposes that the minimal research design complexity to learn about n cognitive processes should be equal to 2^n. In addition, SFT proposes that (a) each cognitive process should be controlled by a separate experimental factor, and (b) the saliency levels of all factors should be combined in a full factorial design. In the current study, the author cross-combined the levels of the jeopardies in a 2 × 2 analysis, leading to four different analysis conditions. The results indicate a decline in the diagnostic accuracy of inferences made about cognitive processes due to the presence of each jeopardy in isolation and when combined. The results warrant the development of more individual-subject analyses and the utilization of full-factorial (SFT) experimental designs. PMID:25374545

  6. Modeling efficiency at the process level: an examination of the care planning process in nursing homes.

    PubMed

    Lee, Robert H; Bott, Marjorie J; Gajewski, Byron; Taunton, Roma Lee

    2009-02-01

    To examine the efficiency of the care planning process in nursing homes. We collected detailed primary data about the care planning process for a stratified random sample of 107 nursing homes from Kansas and Missouri. We used these data to calculate the average direct cost per care plan and used data on selected deficiencies from the Online Survey Certification and Reporting System to measure the quality of care planning. We then analyzed the efficiency of the assessment process using corrected ordinary least squares (COLS) and data envelopment analysis (DEA). Both approaches suggested that there was considerable inefficiency in the care planning process. The average COLS score was 0.43; the average DEA score was 0.48. The correlation between the two sets of scores was quite high, and there was no indication that lower costs resulted in lower quality. For-profit facilities were significantly more efficient than not-for-profit facilities. Multiple studies of nursing homes have found evidence of inefficiency, but virtually all have had measurement problems that raise questions about the results. This analysis, which focuses on a process with much simpler measurement issues, finds evidence of inefficiency that is largely consistent with earlier studies. Making nursing homes more efficient merits closer attention as a strategy for improving care. Increasing efficiency by adopting well-designed, reliable processes can simultaneously reduce costs and improve quality.

  7. Spectral density of mixtures of random density matrices for qubits

    NASA Astrophysics Data System (ADS)

    Zhang, Lin; Wang, Jiamei; Chen, Zhihua

    2018-06-01

    We derive the spectral density of the equiprobable mixture of two random density matrices of a two-level quantum system. We also work out the spectral density of mixture under the so-called quantum addition rule. We use the spectral densities to calculate the average entropy of mixtures of random density matrices, and show that the average entropy of the arithmetic-mean-state of n qubit density matrices randomly chosen from the Hilbert-Schmidt ensemble is never decreasing with the number n. We also get the exact value of the average squared fidelity. Some conjectures and open problems related to von Neumann entropy are also proposed.

  8. Optimization and universality of Brownian search in a basic model of quenched heterogeneous media

    NASA Astrophysics Data System (ADS)

    Godec, Aljaž; Metzler, Ralf

    2015-05-01

    The kinetics of a variety of transport-controlled processes can be reduced to the problem of determining the mean time needed to arrive at a given location for the first time, the so-called mean first-passage time (MFPT) problem. The occurrence of occasional large jumps or intermittent patterns combining various types of motion are known to outperform the standard random walk with respect to the MFPT, by reducing oversampling of space. Here we show that a regular but spatially heterogeneous random walk can significantly and universally enhance the search in any spatial dimension. In a generic minimal model we consider a spherically symmetric system comprising two concentric regions with piecewise constant diffusivity. The MFPT is analyzed under the constraint of conserved average dynamics, that is, the spatially averaged diffusivity is kept constant. Our analytical calculations and extensive numerical simulations demonstrate the existence of an optimal heterogeneity minimizing the MFPT to the target. We prove that the MFPT for a random walk is completely dominated by what we term direct trajectories towards the target and reveal a remarkable universality of the spatially heterogeneous search with respect to target size and system dimensionality. In contrast to intermittent strategies, which are most profitable in low spatial dimensions, the spatially inhomogeneous search performs best in higher dimensions. Discussing our results alongside recent experiments on single-particle tracking in living cells, we argue that the observed spatial heterogeneity may be beneficial for cellular signaling processes.
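    A crude, self-contained Monte Carlo sketch of the idea is given below: the mean first-passage time to an absorbing target is estimated for a one-dimensional walker whose diffusivity is piecewise constant, with the spatially averaged diffusivity held fixed. This is a simplification of the paper's spherically symmetric setup, the discontinuous diffusivity is handled with a naive Euler step, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def mfpt_estimate(D1, D2, L=1.0, x0=0.9, dt=2e-4, n_walkers=2000, n_max=200000):
    """Monte Carlo MFPT to an absorbing target at x=0 with a reflecting wall at x=L;
    diffusivity is D1 on [0, L/2) and D2 on [L/2, L]."""
    x = np.full(n_walkers, x0)
    t_hit = np.full(n_walkers, np.nan)
    alive = np.ones(n_walkers, dtype=bool)
    for step in range(1, n_max + 1):
        D = np.where(x[alive] < L / 2, D1, D2)                 # piecewise-constant diffusivity
        x_new = x[alive] + rng.normal(0.0, 1.0, alive.sum()) * np.sqrt(2.0 * D * dt)
        x_new = np.where(x_new > L, 2.0 * L - x_new, x_new)    # reflect at the outer wall
        x[alive] = x_new
        absorbed = alive & (x <= 0.0)
        t_hit[absorbed] = step * dt
        alive &= ~absorbed
        if not alive.any():
            break
    return np.nanmean(t_hit)    # walkers not absorbed within n_max steps are ignored

D_avg = 0.5
for delta in (0.0, 0.3, 0.45):  # heterogeneity: D1 = D_avg - delta (inner), D2 = D_avg + delta (outer)
    print(f"D1={D_avg - delta:.2f}, D2={D_avg + delta:.2f}: "
          f"MFPT ~ {mfpt_estimate(D_avg - delta, D_avg + delta):.3f}")
```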

  9. Dynamic speckle - Interferometry of micro-displacements

    NASA Astrophysics Data System (ADS)

    Vladimirov, A. P.

    2012-06-01

    The problem of the dynamics of speckles in the image plane of an object, caused by random movements of scattering centers, is solved. We consider three cases: 1) during the observation the points move with random but constant speeds; 2) the relative displacement of any pair of points is a continuous random process; and 3) the motion of the centers is the sum of a deterministic movement and a random displacement. For cases 1) and 2), the characteristics of the temporal and spectral autocorrelation functions of the radiation intensity can be used to determine the individual and average relative displacements of the centers, their dispersion and the relaxation time. For case 3), it is shown that under certain conditions the optical signal contains a periodic component, with the number of periods proportional to the derivations of the deterministic displacements. The results of experiments conducted to test and apply the theory are given.

  10. Dynamic stability of spinning pretwisted beams subjected to axial random forces

    NASA Astrophysics Data System (ADS)

    Young, T. H.; Gau, C. Y.

    2003-11-01

    This paper studies the dynamic stability of a pretwisted cantilever beam spinning along its longitudinal axis and subjected to an axial random force at the free end. The axial force is assumed as the sum of a constant force and a random process with a zero mean. Due to this axial force, the beam may experience parametric random instability. In this work, the finite element method is first applied to yield discretized system equations. The stochastic averaging method is then adopted to obtain Ito's equations for the response amplitudes of the system. Finally the mean-square stability criterion is utilized to determine the stability condition of the system. Numerical results show that the stability boundary of the system converges as the first three modes are taken into calculation. Before the convergence is reached, the stability condition predicted is not conservative enough.

  11. Can a combination of average of normals and "real time" External Quality Assurance replace Internal Quality Control?

    PubMed

    Badrick, Tony; Graham, Peter

    2018-03-28

    Internal Quality Control and External Quality Assurance are separate but related processes that have developed independently in laboratory medicine over many years. They have different sample frequencies, statistical interpretations and immediacy. Both processes have evolved, absorbing new understandings of laboratory error, sample material matrix and assay capability. However, we do not believe at the coalface that either process has led to much improvement in patient outcomes recently. It is the increasing reliability and automation of analytical platforms, along with improved stability of reagents, that has reduced systematic and random error, which in turn has reduced the risk of running IQC less frequently. We suggest that it is time to rethink the role of both these processes and unite them into a single approach using an Average of Normals model supported by more frequent External Quality Assurance samples. This new paradigm may lead to less confusion for laboratory staff and quicker identification of and responses to out-of-control situations.
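    A generic Average-of-Normals style check is sketched below as an illustration of the idea, not the authors' specific model: results falling inside a reference interval are accumulated into blocks, and each block mean is compared against control limits derived from a stable baseline period. The analyte, reference interval, block size and shift size are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

ref_low, ref_high = 130.0, 150.0      # hypothetical reference interval (e.g. sodium, mmol/L)
block_size = 20

def block_means(results):
    """Mean of consecutive blocks of 'normal' results (values outside the interval are excluded)."""
    normals = results[(results >= ref_low) & (results <= ref_high)]
    n_blocks = len(normals) // block_size
    return normals[: n_blocks * block_size].reshape(n_blocks, block_size).mean(axis=1)

# Stable period: establish the expected block-mean distribution and 3-sigma limits.
stable = rng.normal(140.0, 2.5, 5000)
baseline = block_means(stable)
center, limit = baseline.mean(), 3 * baseline.std()

# Later period containing a simulated systematic shift of +2.0 mmol/L.
shifted = rng.normal(142.0, 2.5, 2000)
for i, m in enumerate(block_means(shifted)):
    flag = "OUT OF CONTROL" if abs(m - center) > limit else "ok"
    print(f"block {i:2d}: mean = {m:.2f}  {flag}")
```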

  12. Improving waveform inversion using modified interferometric imaging condition

    NASA Astrophysics Data System (ADS)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen

    2017-12-01

    Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and the time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary-region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct wavefields at other moments through the random boundary avoids the need to store a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. The results obtained using the random boundary condition in less-illuminated areas are more seriously affected by random scattering than other areas, due to the lack of coverage. In this paper, we have replaced direct correlation for computing the waveform inversion gradient by modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition is a weighted average of extended imaging gathers and can be directly used in the gradient computation. In this process, we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition-based waveform inversion plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.

  13. Improving waveform inversion using modified interferometric imaging condition

    NASA Astrophysics Data System (ADS)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen

    2018-02-01

    Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and the time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary-region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct wavefields at other moments through the random boundary avoids the need to store a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. The results obtained using the random boundary condition in less-illuminated areas are more seriously affected by random scattering than other areas, due to the lack of coverage. In this paper, we have replaced direct correlation for computing the waveform inversion gradient by modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition is a weighted average of extended imaging gathers and can be directly used in the gradient computation. In this process, we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition-based waveform inversion plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.

  14. Stochastic model for gene transcription on Drosophila melanogaster embryos

    NASA Astrophysics Data System (ADS)

    Prata, Guilherme N.; Hornos, José Eduardo M.; Ramos, Alexandre F.

    2016-02-01

    We examine immunostaining experimental data for the formation of stripe 2 of even-skipped (eve) transcripts on D. melanogaster embryos. An estimate of the factor converting immunofluorescence intensity units into molecular numbers is given. The analysis of the eve dynamics at the region of stripe 2 suggests that the promoter site of the gene has two distinct regimes: an earlier phase when it is predominantly activated, until a critical time when it becomes mainly repressed. This motivates a stochastic binary model for gene transcription on D. melanogaster embryos. Our model has two random variables: the transcript number and the state of the mRNA source, given as active or repressed. We are able to reproduce available experimental data for the average number of transcripts. An analysis of the random fluctuations in the number of eve transcripts and their consequences for the spatial precision of stripe 2 is presented. We show that the positions of the anterior and posterior borders fluctuate around their average positions by ˜1% of the embryo length, which is similar to what is found experimentally. The fitting of data by such a simple model suggests that it can be useful to understand the functions of randomness during developmental processes.
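    In the spirit of the binary (telegraph) model described above, the sketch below runs a Gillespie simulation of a promoter switching between active and repressed states, with transcription only in the active state and constant mRNA degradation, and compares the simulated average transcript number with the analytical steady-state mean. All rate values are hypothetical and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

k_on, k_off = 0.05, 0.02      # promoter activation / repression rates
k_tx, k_deg = 2.0, 0.05       # transcription and mRNA degradation rates

def simulate(t_end):
    t, active, m = 0.0, 0, 0
    times, counts = [0.0], [0]
    while t < t_end:
        rates = np.array([
            k_on if not active else 0.0,   # promoter turns on
            k_off if active else 0.0,      # promoter turns off
            k_tx if active else 0.0,       # produce one transcript
            k_deg * m,                     # degrade one transcript
        ])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        r = rng.choice(4, p=rates / total)
        if r == 0:
            active = 1
        elif r == 1:
            active = 0
        elif r == 2:
            m += 1
        else:
            m -= 1
        times.append(t)
        counts.append(m)
    return np.array(times), np.array(counts)

t, m = simulate(2000.0)
# Event-sampled average over the second half of the run (a rough estimate of the time average).
print("simulated average transcript number:", m[t > 1000].mean())
# Steady-state mean of the telegraph model: (k_tx/k_deg) * k_on/(k_on + k_off).
print("analytical mean                    :", k_tx / k_deg * k_on / (k_on + k_off))
```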

  15. Three-dimensional direct laser written graphitic electrical contacts to randomly distributed components

    NASA Astrophysics Data System (ADS)

    Dorin, Bryce; Parkinson, Patrick; Scully, Patricia

    2018-04-01

    The development of cost-effective electrical packaging for randomly distributed micro/nano-scale devices is a widely recognized challenge for fabrication technologies. Three-dimensional direct laser writing (DLW) has been proposed as a solution to this challenge, and has enabled the creation of rapid and low resistance graphitic wires within commercial polyimide substrates. In this work, we utilize the DLW technique to electrically contact three fully encapsulated and randomly positioned light-emitting diodes (LEDs) in a one-step process. The resolution of the contacts is in the order of 20 μ m, with an average circuit resistance of 29 ± 18 kΩ per LED contacted. The speed and simplicity of this technique is promising to meet the needs of future microelectronics and device packaging.

  16. Analysis of click-evoked otoacoustic emissions by concentration of frequency and time: Preliminary results from normal hearing and Ménière's disease ears

    NASA Astrophysics Data System (ADS)

    Liu, Tzu-Chi; Wu, Hau-Tieng; Chen, Ya-Hui; Chen, Ya-Han; Fang, Te-Yung; Wang, Pa-Chun; Liu, Yi-Wen

    2018-05-01

    The presence of click-evoked (CE) otoacoustic emissions (OAEs) has been clinically accepted as an indicator of normal cochlear processing of sounds. For treatment and diagnostic purposes, however, clinicians do not typically pay attention to the detailed spectrum and waveform of CEOAEs. A possible reason is due to the lack of noise-robust signal processing tools to estimate physiologically meaningful time-frequency properties of CEOAEs, such as the latency of spectral components. In this on-going study, we applied a modern tool called concentration of frequency and time (ConceFT, [1]) to analyze CEOAE waveforms. Randomly combined orthogonal functions are used as windowing functions for time-frequency analysis. The resulting spectrograms are subject to nonlinear time-frequency reassignment so as to enhance the concentration of time-varying sinusoidal components. The results after reassignment could be further averaged across the random choice of windows. CEOAE waveforms are acquired by a linear averaging paradigm, and longitudinal data are currently being collected from patients with Ménière's disease (MD) and a control group of normal hearing subjects. When CEOAE is present, the ConceFT plots show traces of decreasing but fluctuating instantaneous frequency against time. For comparison purposes, same processing methods are also applied to analyze CEOAE data from cochlear mechanics simulation.

  17. Investigating the Group-Level Impact of Advanced Dual-Echo fMRI Combinations

    PubMed Central

    Kettinger, Ádám; Hill, Christopher; Vidnyánszky, Zoltán; Windischberger, Christian; Nagy, Zoltán

    2016-01-01

    Multi-echo fMRI data acquisition has been widely investigated and suggested to optimize sensitivity for detecting the BOLD signal. Several methods have also been proposed for the combination of data with different echo times. The aim of the present study was to investigate whether these advanced echo combination methods provide advantages over the simple averaging of echoes when state-of-the-art group-level random-effect analyses are performed. Both resting-state and task-based dual-echo fMRI data were collected from 27 healthy adult individuals (14 male, mean age = 25.75 years) using standard echo-planar acquisition methods at 3T. Both resting-state and task-based data were subjected to a standard image pre-processing pipeline. Subsequently the two echoes were combined as a weighted average, using four different strategies for calculating the weights: (1) simple arithmetic averaging, (2) BOLD sensitivity weighting, (3) temporal-signal-to-noise ratio weighting and (4) temporal BOLD sensitivity weighting. Our results clearly show that the simple averaging of data with the different echoes is sufficient. Advanced echo combination methods may provide advantages on a single-subject level but when considering random-effects group level statistics they provide no benefit regarding sensitivity (i.e., group-level t-values) compared to the simple echo-averaging approach. One possible reason for the lack of clear advantages may be that apart from increasing the average BOLD sensitivity at the single-subject level, the advanced weighted averaging methods also inflate the inter-subject variance. As the echo combination methods provide very similar results, the recommendation is to choose between them depending on the availability of time for collecting additional resting-state data or whether subject-level or group-level analyses are planned. PMID:28018165

  18. Dietary Seaweed and Early Breast Cancer: A Randomized Trial

    DTIC Science & Technology

    2005-05-01

    submitted to the American Journal of Clinical Nutrition. Our conclusions were that although 5 grams/day of seaweed, the average daily consumption in Japan...

  19. An improved sampling method of complex network

    NASA Astrophysics Data System (ADS)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Subnet sampling is an important topic in complex network research, since the sampling method influences the structure and characteristics of the resulting subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It is able to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method preserves the similarity between the sampled subnet and the original network with respect to degree distribution, connectivity rate and average shortest path. The method is applicable to situations where prior knowledge about the degree distribution of the original network is insufficient.

  20. Collective relaxation dynamics of small-world networks

    NASA Astrophysics Data System (ADS)

    Grabow, Carsten; Grosskinsky, Stefan; Kurths, Jürgen; Timme, Marc

    2015-05-01

    Complex networks exhibit a wide range of collective dynamic phenomena, including synchronization, diffusion, relaxation, and coordination processes. Their asymptotic dynamics is generically characterized by the local Jacobian, graph Laplacian, or a similar linear operator. The structure of networks with regular, small-world, and random connectivities are reasonably well understood, but their collective dynamical properties remain largely unknown. Here we present a two-stage mean-field theory to derive analytic expressions for network spectra. A single formula covers the spectrum from regular via small-world to strongly randomized topologies in Watts-Strogatz networks, explaining the simultaneous dependencies on network size N , average degree k , and topological randomness q . We present simplified analytic predictions for the second-largest and smallest eigenvalue, and numerical checks confirm our theoretical predictions for zero, small, and moderate topological randomness q , including the entire small-world regime. For large q of the order of one, we apply standard random matrix theory, thereby overarching the full range from regular to randomized network topologies. These results may contribute to our analytic and mechanistic understanding of collective relaxation phenomena of network dynamical systems.
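    The dependence on topological randomness q can be checked numerically with the standard graph Laplacian of Watts-Strogatz networks, as in the short sketch below; the paper's linear operator and normalization may differ, so this only illustrates the qualitative trend of the extreme eigenvalues, and the network sizes are illustrative.

```python
import numpy as np
import networkx as nx

# Second-smallest (relaxation-limiting) and largest Laplacian eigenvalues of
# Watts-Strogatz networks as the rewiring probability q increases.
N, k = 500, 10
for q in (0.0, 0.01, 0.1, 1.0):
    g = nx.watts_strogatz_graph(N, k, q, seed=42)
    lap = nx.laplacian_matrix(g).toarray().astype(float)
    lam = np.sort(np.linalg.eigvalsh(lap))
    print(f"q = {q:4.2f}: lambda_2 = {lam[1]:.4f}, lambda_max = {lam[-1]:.4f}")
```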

  1. Collective relaxation dynamics of small-world networks.

    PubMed

    Grabow, Carsten; Grosskinsky, Stefan; Kurths, Jürgen; Timme, Marc

    2015-05-01

    Complex networks exhibit a wide range of collective dynamic phenomena, including synchronization, diffusion, relaxation, and coordination processes. Their asymptotic dynamics is generically characterized by the local Jacobian, graph Laplacian, or a similar linear operator. The structure of networks with regular, small-world, and random connectivities are reasonably well understood, but their collective dynamical properties remain largely unknown. Here we present a two-stage mean-field theory to derive analytic expressions for network spectra. A single formula covers the spectrum from regular via small-world to strongly randomized topologies in Watts-Strogatz networks, explaining the simultaneous dependencies on network size N, average degree k, and topological randomness q. We present simplified analytic predictions for the second-largest and smallest eigenvalue, and numerical checks confirm our theoretical predictions for zero, small, and moderate topological randomness q, including the entire small-world regime. For large q of the order of one, we apply standard random matrix theory, thereby overarching the full range from regular to randomized network topologies. These results may contribute to our analytic and mechanistic understanding of collective relaxation phenomena of network dynamical systems.

  2. Long-range epidemic spreading in a random environment.

    PubMed

    Juhász, Róbert; Kovács, István A; Iglói, Ferenc

    2015-03-01

    Modeling long-range epidemic spreading in a random environment, we consider a quenched, disordered, d-dimensional contact process with infection rates decaying with distance as 1/r^(d+σ). We study the dynamical behavior of the model at and below the epidemic threshold by a variant of the strong-disorder renormalization-group method and by Monte Carlo simulations in one and two spatial dimensions. Starting from a single infected site, the average survival probability is found to decay as P(t) ∼ t^(-d/z) up to multiplicative logarithmic corrections. Below the epidemic threshold, a Griffiths phase emerges, where the dynamical exponent z varies continuously with the control parameter and tends to z_c = d+σ as the threshold is approached. At the threshold, the spatial extension of the infected cluster (in surviving trials) is found to grow as R(t) ∼ t^(1/z_c) with a multiplicative logarithmic correction, and the average number of infected sites in surviving trials is found to increase as N_s(t) ∼ (ln t)^χ with χ = 2 in one dimension.

  3. A new type of exact arbitrarily inhomogeneous cosmology: evolution of deceleration in the flat homogeneous-on-average case

    NASA Astrophysics Data System (ADS)

    Hellaby, Charles

    2012-01-01

    A new method for constructing exact inhomogeneous universes is presented, that allows variation in 3 dimensions. The resulting spacetime may be statistically uniform on average, or have random, non-repeating variation. The construction utilises the Darmois junction conditions to join many different component spacetime regions. In the initial simple example given, the component parts are spatially flat and uniform, but much more general combinations should be possible. Further inhomogeneity may be added via swiss cheese vacuoles and inhomogeneous metrics. This model is used to explore the proposal that observers are located in bound, non-expanding regions, while the universe is actually in the process of becoming void dominated, and thus its average expansion rate is increasing. The model confirms qualitatively that the faster expanding components come to dominate the average, and that inhomogeneity results in average parameters which evolve differently from those of any one component, but more realistic modelling of the effect will need this construction to be generalised.

  4. Scaling of average weighted shortest path and average receiving time on weighted expanded Koch networks

    NASA Astrophysics Data System (ADS)

    Wu, Zikai; Hou, Baoyu; Zhang, Hongjuan; Jin, Feng

    2014-04-01

    Deterministic network models have been attractive media for discussing how dynamical processes depend on network structural features. On the other hand, the heterogeneity of weights affects dynamical processes taking place on networks. In this paper, we present a family of weighted expanded Koch networks based on Koch networks. They originate from an r-polygon, and in each subsequent evolutionary step every node of the current generation produces m r-polygons that include the node, with the weighted edges scaled by a factor w. We derive closed-form expressions for the average weighted shortest path length (AWSP). In large networks, the AWSP stays bounded as the network order grows (0 < w < 1). Then, we focus on a special random walk and trapping issue on the networks. In more detail, we calculate exactly the average receiving time (ART). The ART exhibits a sub-linear dependence on the network order (0 < w < 1), which implies that nontrivial weighted expanded Koch networks are more efficient than un-weighted expanded Koch networks in receiving information. Besides, the efficiency of receiving information at hub nodes also depends on the parameters m and r. These findings may pave the way for controlling information transportation on general weighted networks.

  5. Aging scaled Brownian motion

    NASA Astrophysics Data System (ADS)

    Safdari, Hadiseh; Chechkin, Aleksei V.; Jafari, Gholamreza R.; Metzler, Ralf

    2015-04-01

    Scaled Brownian motion (SBM) is widely used to model anomalous diffusion of passive tracers in complex and biological systems. It is a highly nonstationary process governed by the Langevin equation for Brownian motion, however, with a power-law time dependence of the noise strength. Here we study the aging properties of SBM for both unconfined and confined motion. Specifically, we derive the ensemble and time averaged mean squared displacements and analyze their behavior in the regimes of weak, intermediate, and strong aging. A very rich behavior is revealed for confined aging SBM depending on different aging times and whether the process is sub- or superdiffusive. We demonstrate that the information on the aging factorizes with respect to the lag time and exhibits a functional form that is identical to the aging behavior of scale-free continuous time random walk processes. While SBM exhibits a disparity between ensemble and time averaged observables and is thus weakly nonergodic, strong aging is shown to effect a convergence of the ensemble and time averaged mean squared displacement. Finally, we derive the density of first passage times in the semi-infinite domain that features a crossover defined by the aging time.
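    A short numerical sketch of SBM and of the disparity between ensemble- and time-averaged mean squared displacements is given below. It uses one common convention, a Langevin step with noise strength D(t) = α D0 t^(α-1) so that the ensemble MSD grows as 2 D0 t^α, and does not implement the aging protocol of the paper; exponents and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)

alpha, D0 = 0.5, 1.0                      # subdiffusive exponent and amplitude
dt, n_steps, n_traj = 0.01, 10000, 200
t = dt * np.arange(1, n_steps + 1)
D_t = alpha * D0 * t ** (alpha - 1.0)     # time-dependent noise strength
steps = rng.normal(0.0, 1.0, (n_traj, n_steps)) * np.sqrt(2.0 * D_t * dt)
x = np.cumsum(steps, axis=1)

# Ensemble-averaged MSD at the final time vs the theoretical 2*D0*t**alpha.
print("ensemble MSD:", (x[:, -1] ** 2).mean())
print("theory      :", 2 * D0 * t[-1] ** alpha)

# Time-averaged MSD at a fixed lag, averaged over trajectories: for SBM this grows
# roughly linearly in the lag (unlike the ensemble MSD), the hallmark of weak
# ergodicity breaking discussed in the abstract.
lag = 200
tamsd = ((x[:, lag:] - x[:, :-lag]) ** 2).mean(axis=1)
print("mean time-averaged MSD at lag", lag, ":", tamsd.mean())
```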

  6. Aging scaled Brownian motion.

    PubMed

    Safdari, Hadiseh; Chechkin, Aleksei V; Jafari, Gholamreza R; Metzler, Ralf

    2015-04-01

    Scaled Brownian motion (SBM) is widely used to model anomalous diffusion of passive tracers in complex and biological systems. It is a highly nonstationary process governed by the Langevin equation for Brownian motion, however, with a power-law time dependence of the noise strength. Here we study the aging properties of SBM for both unconfined and confined motion. Specifically, we derive the ensemble and time averaged mean squared displacements and analyze their behavior in the regimes of weak, intermediate, and strong aging. A very rich behavior is revealed for confined aging SBM depending on different aging times and whether the process is sub- or superdiffusive. We demonstrate that the information on the aging factorizes with respect to the lag time and exhibits a functional form that is identical to the aging behavior of scale-free continuous time random walk processes. While SBM exhibits a disparity between ensemble and time averaged observables and is thus weakly nonergodic, strong aging is shown to effect a convergence of the ensemble and time averaged mean squared displacement. Finally, we derive the density of first passage times in the semi-infinite domain that features a crossover defined by the aging time.

  7. Selective processing of auditory evoked responses with iterative-randomized stimulation and averaging: A strategy for evaluating the time-invariant assumption.

    PubMed

    Valderrama, Joaquin T; de la Torre, Angel; Medina, Carlos; Segura, Jose C; Thornton, A Roger D

    2016-03-01

    The recording of auditory evoked potentials (AEPs) at fast rates allows the study of neural adaptation, improves accuracy in estimating hearing thresholds and may help in diagnosing certain pathologies. Stimulation sequences used to record AEPs at fast rates must be designed with a certain jitter, i.e., they are not periodic. Some authors believe that stimuli from widely jittered sequences may evoke auditory responses of different morphology, and therefore the time-invariant assumption would not hold. This paper describes a methodology that can be used to analyze the time-invariant assumption in jittered stimulation sequences. The proposed method [Split-IRSA] is based on an extended version of the iterative randomized stimulation and averaging (IRSA) technique, including selective processing of sweeps according to a predefined criterion. The fundamentals, the mathematical basis and relevant implementation guidelines of this technique are presented in this paper. The results of this study show that Split-IRSA performs adequately and that both fast and slow mechanisms of adaptation influence the evoked-response morphology; thus both mechanisms should be considered when time-invariance is assumed. The significance of these findings is discussed.

  8. A stochastic approach to noise modeling for barometric altimeters.

    PubMed

    Sabatini, Angelo Maria; Genovese, Vincenzo

    2013-11-18

    The question of whether barometric altimeters can be applied to accurately track human motion is still debated, since their measurement performance is rather poor due to either coarse resolution or drifting behavior. As a step toward accurate short-time tracking of changes in height (up to a few minutes), we develop a stochastic model that attempts to capture some statistical properties of the barometric altimeter noise. The barometric altimeter noise is decomposed into three components with different physical origins and properties: a deterministic time-varying mean, mainly correlated with global environment changes, and a first-order Gauss-Markov (GM) random process, mainly accounting for short-term, local environment changes, whose effects are prominent for long-time and short-time motion tracking, respectively; and an uncorrelated random process, mainly due to wideband electronic noise, including quantization noise. Autoregressive-moving average (ARMA) system identification techniques are used to capture the correlation structure of the piecewise-stationary GM component and to estimate its standard deviation, together with the standard deviation of the uncorrelated component. M-point moving average filters, used alone or in combination with whitening filters learnt from the ARMA model parameters, are further tested in a few dynamic motion experiments and discussed for their capability of short-time tracking of small-amplitude, low-frequency motions.
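    The three-component noise model described above is easy to prototype. The sketch below synthesizes altimeter noise as a slow drift plus a first-order Gauss-Markov process plus white noise, and then applies an M-point moving-average filter; the sampling rate, correlation time and noise levels are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

fs = 25.0                      # sampling rate, Hz
n = int(120 * fs)              # two minutes of data
t = np.arange(n) / fs

drift = 0.02 * t               # slow environmental drift (deterministic mean), m
tau, sigma_gm = 10.0, 0.15     # GM correlation time (s) and stationary std (m)
phi = np.exp(-1.0 / (fs * tau))
gm = np.zeros(n)
for k in range(1, n):          # first-order Gauss-Markov recursion
    gm[k] = phi * gm[k - 1] + rng.normal(0.0, sigma_gm * np.sqrt(1.0 - phi ** 2))
white = rng.normal(0.0, 0.10, n)   # wideband electronic + quantization noise, m

height_noise = drift + gm + white

# M-point moving-average filter.
M = 25
smoothed = np.convolve(height_noise, np.ones(M) / M, mode="same")
print("raw std (detrended)     :", (height_noise - drift).std())
print("smoothed std (detrended):", (smoothed - drift).std())
```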

  9. Correlative weighted stacking for seismic data in the wavelet domain

    USGS Publications Warehouse

    Zhang, S.; Xu, Y.; Xia, J.; ,

    2004-01-01

    Horizontal stacking plays a crucial role in modern seismic data processing, for it not only suppresses random noise and multiple reflections, but also provides foundational data for subsequent migration and inversion. However, a number of examples have shown that random noise in adjacent traces exhibits correlation and coherence. Average stacking and weighted stacking based on the conventional correlation function both result in false events, which are caused by this noise. The wavelet transform and higher-order statistics are very useful tools in modern signal processing: multiresolution analysis in wavelet theory can decompose a signal on different scales, and higher-order correlation functions can suppress correlated noise, against which the conventional correlation function is of no use. Based on the theory of the wavelet transform and higher-order statistics, a high-order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common-midpoint gathers after normal moveout correction using weights calculated through higher-order correlative statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and suppressing correlated random noise.

  10. The human as a detector of changes in variance and bandwidth

    NASA Technical Reports Server (NTRS)

    Curry, R. E.; Govindaraj, T.

    1977-01-01

    The detection of changes in random process variance and bandwidth was studied. Psychophysical thresholds for these two parameters were determined using an adaptive staircase technique for second order random processes at two nominal periods (1 and 3 seconds) and damping ratios (0.2 and 0.707). Thresholds for bandwidth changes were approximately 9% of nominal except for the (3sec,0.2) process which yielded thresholds of 12%. Variance thresholds averaged 17% of nominal except for the (3sec,0.2) process in which they were 32%. Detection times for suprathreshold changes in the parameters may be roughly described by the changes in RMS velocity of the process. A more complex model is presented which consists of a Kalman filter designed for the nominal process using velocity as the input, and a modified Wald sequential test for changes in the variance of the residual. The model predictions agree moderately well with the experimental data. Models using heuristics, e.g. level crossing counters, were also examined and are found to be descriptive but do not afford the unification of the Kalman filter/sequential test model used for changes in mean.

  11. Effective dynamics of a random walker on a heterogeneous ring: Exact results

    NASA Astrophysics Data System (ADS)

    Masharian, S. R.

    2018-07-01

    In this paper, by considering a biased random walker hopping on a one-dimensional lattice with a ring geometry, we investigate the fluctuations of the speed of the random walker. We assume that the lattice is heterogeneous, i.e., the hopping rate of the random walker between the first and the last lattice sites is different from the hopping rate of the random walker between the other links of the lattice. Assuming that the average speed of the random walker in the steady-state is v∗, we have been able to find the unconditional effective dynamics of the random walker in which the average speed of the random walker is -v∗. Using a perturbative method in the large system-size limit, we have also been able to show that the effective hopping rates of the random walker near the defective link are highly site-dependent.
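
    For intuition, the following Monte Carlo sketch estimates the steady-state average speed v∗ of a biased walker on a ring with a single defective link. The ring size and all hopping probabilities are illustrative choices, not the paper's values, and the snippet does not reproduce the exact or perturbative results of the paper.

```python
import numpy as np

def ring_walker_speed(L=20, p=0.7, q=0.3, p_def=0.2, q_def=0.05, steps=200_000, seed=3):
    """Biased random walker on a ring of L sites. Hopping probabilities are p (forward)
    and q (backward) on ordinary links, and p_def/q_def on the single defective link
    between the last and first sites. Returns the net displacement per step."""
    rng = np.random.default_rng(seed)
    pos, winding = 0, 0
    for _ in range(steps):
        fwd = p_def if pos == L - 1 else p   # crossing the defective link forward
        bwd = q_def if pos == 0 else q       # crossing the defective link backward
        u = rng.random()
        if u < fwd:
            pos += 1
            if pos == L:
                pos, winding = 0, winding + 1
        elif u < fwd + bwd:
            pos -= 1
            if pos < 0:
                pos, winding = L - 1, winding - 1
    return (winding * L + pos) / steps

print("estimated average speed v* ≈", round(ring_walker_speed(), 4))
```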

  12. Extreme values and fat tails of multifractal fluctuations

    NASA Astrophysics Data System (ADS)

    Muzy, J. F.; Bacry, E.; Kozhemyak, A.

    2006-06-01

    In this paper we discuss the problem of the estimation of extreme event occurrence probability for data drawn from some multifractal process. We also study the heavy (power-law) tail behavior of the probability density function associated with such data. We show that because of strong correlations, the standard extreme value approach is not valid and classical tail exponent estimators should be interpreted cautiously. Extreme statistics associated with multifractal random processes turn out to be characterized by non-self-averaging properties. Our considerations rely upon some analogy between random multiplicative cascades and the physics of disordered systems and also on recent mathematical results about the so-called multifractal formalism. Applied to financial time series, our findings allow us to propose a unified framework that accounts for the observed multiscaling properties of return fluctuations, the volatility clustering phenomenon and the observed “inverse cubic law” of the return pdf tails.
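
    One of the "classical tail exponent estimators" the abstract says should be interpreted cautiously is the Hill estimator. The sketch below applies it to heavy-tailed synthetic returns (a Student-t stand-in, not multifractal data); the sample size and the number of order statistics k are illustrative.

```python
import numpy as np

def hill_estimator(x, k):
    """Classical Hill estimator of the tail exponent from the k largest order
    statistics of |x|; for strongly correlated multifractal data it can mislead."""
    s = np.sort(np.abs(x))[::-1]            # descending order statistics
    return 1.0 / np.mean(np.log(s[:k] / s[k]))

rng = np.random.default_rng(4)
returns = rng.standard_t(df=3, size=20_000)  # heavy-tailed stand-in for return data
print("Hill tail exponent (k=500):", round(hill_estimator(returns, 500), 2))
```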

  13. Infectious disease control using contact tracing in random and scale-free networks

    PubMed Central

    Kiss, Istvan Z; Green, Darren M; Kao, Rowland R

    2005-01-01

    Contact tracing aims to identify and isolate individuals that have been in contact with infectious individuals. The efficacy of contact tracing and the hierarchy of traced nodes—nodes with higher degree traced first—is investigated and compared on random and scale-free (SF) networks with the same number of nodes N and average connection K. For values of the transmission rate larger than a threshold, the final epidemic size on SF networks is smaller than that on corresponding random networks. While in random networks new infectious and traced nodes from all classes have similar average degrees, in SF networks the average degree of nodes that are in more advanced stages of the disease is higher at any given time. On SF networks tracing removes possible sources of infection with high average degree. However a higher tracing effort is required to control the epidemic than on corresponding random networks due to the high initial velocity of spread towards the highly connected nodes. An increased latency period fails to significantly improve contact tracing efficacy. Contact tracing has a limited effect if the removal rate of susceptible nodes is relatively high, due to the fast local depletion of susceptible nodes. PMID:16849217

  14. Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.

    PubMed

    Jovanović, Stojan; Rotter, Stefan

    2016-06-01

    The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking activity in a neuronal network. Using recent results from theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e. which topological motifs) are responsible for their emergence. Comparing two different models of network topology, random networks of Erdős-Rényi type and networks with highly interconnected hubs, we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.

  15. Recursive processes in self-affirmation: intervening to close the minority achievement gap.

    PubMed

    Cohen, Geoffrey L; Garcia, Julio; Purdie-Vaughns, Valerie; Apfel, Nancy; Brzustoski, Patricia

    2009-04-17

    A 2-year follow-up of a randomized field experiment previously reported in Science is presented. A subtle intervention to lessen minority students' psychological threat related to being negatively stereotyped in school was tested in an experiment conducted three times with three independent cohorts (N = 133, 149, and 134). The intervention, a series of brief but structured writing assignments focusing students on a self-affirming value, reduced the racial achievement gap. Over 2 years, the grade point average (GPA) of African Americans was, on average, raised by 0.24 grade points. Low-achieving African Americans were particularly benefited. Their GPA improved, on average, 0.41 points, and their rate of remediation or grade repetition was less (5% versus 18%). Additionally, treated students' self-perceptions showed long-term benefits. Findings suggest that because initial psychological states and performance determine later outcomes by providing a baseline and initial trajectory for a recursive process, apparently small but early alterations in trajectory can have long-term effects. Implications for psychological theory and educational practice are discussed.

  16. The usefulness of lean six sigma to the development of a clinical pathway for hip fractures.

    PubMed

    Niemeijer, Gerard C; Flikweert, Elvira; Trip, Albert; Does, Ronald J M M; Ahaus, Kees T B; Boot, Anja F; Wendt, Klaus W

    2013-10-01

    The objective of this study was to show the usefulness of lean six sigma (LSS) for the development of a multidisciplinary clinical pathway. A single centre, both retrospective and prospective, non-randomized controlled study design was used to identify the variables of a prolonged length of stay (LOS) for hip fractures in the elderly and to measure the effect of the process improvements--with the aim of improving efficiency of care and reducing the LOS. The project identified several variables influencing LOS, and interventions were designed to improve the process of care. Significant results were achieved by reducing both the average LOS by 4.2 days (-31%) and the average duration of surgery by 57 minutes (-36%). The average LOS of patients discharged to a nursing home reduced by 4.4 days. The findings of this study show a successful application of LSS methodology within the development of a clinical pathway. Further research is needed to explore the effect of the use of LSS methodology at clinical outcome and quality of life. © 2012 John Wiley & Sons Ltd.

  17. Reproduction accuracy of articulator mounting with an arbitrary face-bow vs. average values-a controlled, randomized, blinded patient simulator study.

    PubMed

    Ahlers, M Oliver; Edelhoff, Daniel; Jakstat, Holger A

    2018-06-21

    The benefit from positioning the maxillary casts with the aid of face-bows has been questioned in the past. Therefore, the aim of this study was to investigate the reliability and validity of arbitrary face-bow transfers compared to a process solely based on the orientation by means of average values. For optimized validity, the study was conducted using a controlled, randomized, anonymized, and blinded patient simulator study design. Thirty-eight undergraduate dental students were randomly divided into two groups; both methods were applied to both groups, in opposite sequences. Investigated methods were the transfer of casts using an arbitrary face-bow in comparison to the transfer using average values based on Bonwill's triangle and the Balkwill angle. The "patient" used in this study was a patient simulator. All casts were transferred to the same individual articulator, and all the transferred casts were made using type IV special hard stone plaster; for the attachment into the articulator, type II plaster was used. A blinded evaluation was performed based on three-dimensional measurements of three reference points. The results are presented three-dimensionally in scatterplots. Statistical analysis indicated a significantly smaller variance (Student's t test, p < 0.05) for the transfer using a face-bow, applicable for all three reference points. The use of an arbitrary face-bow significantly improves the transfer reliability and hence the validity. To simulate the patient situation in an individual articulator correctly, casts should be transferred at least by means of an arbitrary face-bow.

  18. The SALOME study: recruitment experiences in a clinical trial offering injectable diacetylmorphine and hydromorphone for opioid dependency.

    PubMed

    Oviedo-Joekes, Eugenia; Marchand, Kirsten; Lock, Kurt; MacDonald, Scott; Guh, Daphne; Schechter, Martin T

    2015-01-26

    The Study to Assess Long-term Opioid Medication Effectiveness (SALOME) is a two-stage phase III, single site (Vancouver, Canada), randomized, double blind controlled trial designed to test if hydromorphone is as effective as diacetylmorphine for the treatment of long-term illicit opioid injection. Recruiting participants for clinical trials continues to be a challenge in medical and addiction research, with many studies not being able to reach the planned sample size in a timely manner. The aim of this study is to describe the recruitment strategies in SALOME, which offered appealing treatments but had limited clinic capacity and no guaranteed post-trial continuation of the treatments. SALOME included chronic opioid-dependent, current illicit injection opioid users who had at least one previous episode of opioid maintenance treatment. Regulatory approvals were received in June 2011 and recruitment strategies were implemented over the next 5 months. Recruitment strategies included ongoing open communication with the community, a consistent and accessible team and participant-centered screening. All applicants completed a pre-screening checklist to assess prerequisites. Applicants meeting these prerequisites were later contacted to commence the screening process. A total of 598 applications were received over the two-year recruitment period; 130 were received on the first day of recruitment. Of these applicants, 485 met prerequisites; however, many could not be found or were not reached before recruitment ended. For the 253 candidates who initiated the screening process, the average time lapse between application and screening date was 8.3 months (standard deviation [SD] = 4.44) and for the 202 randomized to the study, the average processing time from initial screen to randomization was 25.9 days (SD = 37.48; Median = 15.0). As in prior trials offering injectable diacetylmorphine within a supervised model, recruiting participants for this study took longer than planned. The recruitment challenges overcome in SALOME were due to the high number of applicants compared with the limited number that could be randomized and treated. Our study emphasizes the value of integrating these strategies into clinical addiction research to overcome study-specific barriers. ClinicalTrials.gov: NCT01447212.

  19. Spreading in online social networks: the role of social reinforcement.

    PubMed

    Zheng, Muhua; Lü, Linyuan; Zhao, Ming

    2013-07-01

    Some epidemic spreading models are usually applied to analyze the propagation of opinions or news. However, the dynamics of epidemic spreading and information or behavior spreading are essentially different in many aspects. Centola's experiments [Science 329, 1194 (2010)] on behavior spreading in online social networks showed that the spreading is faster and broader in regular networks than in random networks. This result contradicts the former understanding that random networks are more favorable for spreading than regular networks. To describe the spreading in online social networks, an unknown-known-approved-exhausted four-status model was proposed, which emphasizes the effect of social reinforcement and assumes that redundant signals can improve the probability of approval (i.e., the spreading rate). Performing the model on regular and random networks, it is found that our model can well explain the results of Centola's experiments on behavior spreading and some former studies on information spreading in different parameter space. The effects of average degree and network size on the behavior spreading process are further analyzed. The results again show the importance of social reinforcement and are accordant with Centola's anticipation that increasing the network size or decreasing the average degree will enlarge the difference of the density of final approved nodes between regular and random networks. Our work complements the former studies on spreading dynamics, especially the spreading in online social networks where the information usually requires individuals' confirmations before being transmitted to others.
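
    The sketch below is a deliberately simplified re-implementation of the reinforcement idea, not the paper's model: an approved node signals its neighbours once and becomes exhausted, and a node that has accumulated m signals approves with a probability that grows with m. The reinforcement function, parameter values, and the two example graphs (a regular ring lattice and an Erdős-Rényi graph with the same average degree) are all illustrative assumptions.

```python
import networkx as nx
import numpy as np

def reinforced_spreading(G, p1=0.1, delta=0.2, seed=5):
    """Simplified unknown-known-approved-exhausted dynamics with social reinforcement:
    a node that has received m signals approves with probability min(1, p1 + (m-1)*delta).
    Returns the final density of exhausted (spread-to) nodes; parameters are illustrative."""
    rng = np.random.default_rng(seed)
    signals = {v: 0 for v in G}
    status = {v: "unknown" for v in G}
    frontier = [int(rng.integers(G.number_of_nodes()))]
    status[frontier[0]] = "approved"
    while frontier:
        next_frontier = []
        for v in frontier:
            for u in G.neighbors(v):
                if status[u] in ("unknown", "known"):
                    signals[u] += 1
                    status[u] = "known"
                    if rng.random() < min(1.0, p1 + (signals[u] - 1) * delta):
                        status[u] = "approved"
                        next_frontier.append(u)
            status[v] = "exhausted"
        frontier = next_frontier
    return np.mean([status[v] == "exhausted" for v in G])

N, k = 1000, 6
regular = nx.watts_strogatz_graph(N, k, p=0.0, seed=1)   # regular ring lattice
erdos = nx.erdos_renyi_graph(N, k / (N - 1), seed=1)     # random graph, same average degree
print("regular:", reinforced_spreading(regular), "random:", reinforced_spreading(erdos))
```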

  20. Effects of practice schedule and task specificity on the adaptive process of motor learning.

    PubMed

    Barros, João Augusto de Camargo; Tani, Go; Corrêa, Umberto Cesar

    2017-10-01

    This study investigated the effects of practice schedule and task specificity based on the perspective of the adaptive process of motor learning. For this purpose, tasks with temporal and force control learning requirements were manipulated in experiments 1 and 2, respectively. Specifically, the task consisted of touching with the dominant hand the three sequential targets with specific movement time or force for each touch. Participants were children (N=120), both boys and girls, with an average age of 11.2 years (SD=1.0). The design in both experiments involved four practice groups (constant, random, constant-random, and random-constant) and two phases (stabilisation and adaptation). The dependent variables included measures related to the task goal (accuracy and variability of error of the overall movement and force patterns) and movement pattern (macro- and microstructures). Results revealed a similar error of the overall patterns for all groups in both experiments and that they adapted themselves differently in terms of the macro- and microstructures of movement patterns. The study concludes that the effects of practice schedules on the adaptive process of motor learning were both general and specific to the task. That is, they were general to the task goal performance and specific regarding the movement pattern. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Process Produces Low-Secondary-Electron-Emission Surfaces

    NASA Technical Reports Server (NTRS)

    Curren, A. N.; Jensen, K. A.; Roman, R. F.

    1986-01-01

    Textured carbon layer applied to copper by sputtering. Carbon surface characterized by dense, random array of needle-like spires or peaks that extend perpendicularly from local copper surface. Spires approximately 7 micrometers in height and spaced approximately 3 micrometers apart, on average. Carbon layer, which essentially completely covers copper substrate, is tenacious and not damaged by vibration loadings representative of multistage depressed collector (MDC) applications. Process developed primarily to provide extremely low-secondary-electron-emission surface for copper for use as high-efficiency electrodes in MDC's for microwave amplifier traveling-wave tubes (TWT's). Tubes widely used in space communications, aircraft, and terrestrial applications.

  2. Randomness and diversity matter in the maintenance of the public resources

    NASA Astrophysics Data System (ADS)

    Liu, Aizhi; Zhang, Yanling; Chen, Xiaojie; Sun, Changyin

    2017-03-01

    Most previous models of the public goods game usually assume two possible strategies, i.e., investing all or nothing. The real-life situation is rarely all or nothing. In this paper, we consider that multiple strategies are adopted in a well-mixed population, and each strategy represents an investment to produce the public goods. Past efforts have found that randomness matters in the evolution of fairness in the ultimatum game. In the framework involving no other mechanisms, we study how diversity and randomness influence the average investment of the population, defined as the mean value of all individuals' strategies. The level of diversity is increased by increasing the strategy number, and the level of randomness is increased by increasing the mutation probability, or decreasing the population size or the selection intensity. We find that a higher level of diversity and a higher level of randomness lead to larger average investment and favor more the evolution of cooperation. Under weak selection, the average investment changes very little with the strategy number, the population size, and the mutation probability. Under strong selection, the average investment changes very little with the strategy number and the population size, but changes a lot with the mutation probability. Under intermediate selection, the average investment increases significantly with the strategy number and the mutation probability, and decreases significantly with the population size. These findings are meaningful for studying how to maintain public resources.
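
    A compact way to explore this kind of question numerically is a mutation-selection (Moran-type) simulation that tracks the time-averaged mean investment. The sketch below uses an exponential fitness mapping and evenly spaced investment strategies; the update rule, payoff normalization, and all parameter values are illustrative assumptions rather than the paper's specification.

```python
import numpy as np

def average_investment(N=50, d=11, r=3.0, s=1.0, mu=0.01, generations=50_000, seed=6):
    """Moran-process sketch of a public goods game with d evenly spaced investment
    strategies in [0, 1]. payoff_i = r * mean(investment) - x_i, fitness = exp(s * payoff),
    selection intensity s, mutation probability mu. Returns the time-averaged mean
    investment; all parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    strategies = np.linspace(0.0, 1.0, d)
    pop = rng.integers(d, size=N)              # strategy index of each individual
    total = 0.0
    for _ in range(generations):
        x = strategies[pop]
        payoff = r * x.mean() - x
        fitness = np.exp(s * payoff)
        parent = rng.choice(N, p=fitness / fitness.sum())
        dead = rng.integers(N)
        pop[dead] = rng.integers(d) if rng.random() < mu else pop[parent]
        total += x.mean()
    return total / generations

print("time-averaged mean investment:", round(average_investment(), 3))
```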

  3. Cost minimization analysis of a store-and-forward teledermatology consult system.

    PubMed

    Pak, Hon S; Datta, Santanu K; Triplett, Crystal A; Lindquist, Jennifer H; Grambow, Steven C; Whited, John D

    2009-03-01

    The aim of this study was to perform a cost minimization analysis of store-and-forward teledermatology compared to a conventional dermatology referral process (usual care). In a Department of Defense (DoD) setting, subjects were randomized to either a teledermatology consult or usual care. Accrued healthcare utilization recorded over a 4-month period included clinic visits, teledermatology visits, laboratories, preparations, procedures, radiological tests, and medications. Direct medical care costs were estimated by combining utilization data with Medicare reimbursement rates and wholesale drug prices. The indirect cost of productivity loss for seeking treatment was also included in the analysis using an average labor rate. Total and average costs were compared between groups. Teledermatology patients incurred $103,043 in total direct costs ($294 average), while usual-care patients incurred $98,365 ($283 average). However, teledermatology patients only incurred $16,359 ($47 average) in lost productivity cost while usual-care patients incurred $30,768 ($89 average). In total, teledermatology patients incurred $119,402 ($340 average) and usual-care patients incurred $129,133 ($372 average) in costs. From the economic perspective of the DoD, store-and-forward teledermatology was a cost-saving strategy for delivering dermatology care compared to conventional consultation methods when productivity loss cost is taken into consideration.

  4. Detection of Periodic Leg Movements by Machine Learning Methods Using Polysomnographic Parameters Other Than Leg Electromyography

    PubMed Central

    Umut, İlhan; Çentik, Güven

    2016-01-01

    The number of channels used for polysomnographic recording frequently causes difficulties for patients because of the many cables connected. It also increases the risk of problems during the recording process and increases the storage volume. In this study, it is intended to detect periodic leg movement (PLM) in sleep using channels other than leg electromyography (EMG), by analysing polysomnography (PSG) data with digital signal processing (DSP) and machine learning methods. PSG records of 153 patients of different ages and genders with a PLM disorder diagnosis were examined retrospectively. Novel software was developed for the analysis of PSG records. The software utilizes machine learning algorithms, statistical methods, and DSP methods. In order to classify PLM, popular machine learning methods (multilayer perceptron, K-nearest neighbour, and random forests) and logistic regression were used. Comparison of the classification results showed that the K-nearest neighbour classification algorithm had the highest average classification rate (91.87%) and the lowest average classification error value (RMSE = 0.2850), while the multilayer perceptron algorithm had the lowest average classification rate (83.29%) and the highest average classification error value (RMSE = 0.3705). The results showed that PLM can be classified with high accuracy (91.87%) without the leg EMG record being present. PMID:27213008

  5. Detection of Periodic Leg Movements by Machine Learning Methods Using Polysomnographic Parameters Other Than Leg Electromyography.

    PubMed

    Umut, İlhan; Çentik, Güven

    2016-01-01

    The number of channels used for polysomnographic recording frequently causes difficulties for patients because of the many cables connected. It also increases the risk of problems during the recording process and increases the storage volume. In this study, it is intended to detect periodic leg movement (PLM) in sleep using channels other than leg electromyography (EMG), by analysing polysomnography (PSG) data with digital signal processing (DSP) and machine learning methods. PSG records of 153 patients of different ages and genders with a PLM disorder diagnosis were examined retrospectively. Novel software was developed for the analysis of PSG records. The software utilizes machine learning algorithms, statistical methods, and DSP methods. In order to classify PLM, popular machine learning methods (multilayer perceptron, K-nearest neighbour, and random forests) and logistic regression were used. Comparison of the classification results showed that the K-nearest neighbour classification algorithm had the highest average classification rate (91.87%) and the lowest average classification error value (RMSE = 0.2850), while the multilayer perceptron algorithm had the lowest average classification rate (83.29%) and the highest average classification error value (RMSE = 0.3705). The results showed that PLM can be classified with high accuracy (91.87%) without the leg EMG record being present.
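
    The comparison of classifiers described above can be reproduced in miniature with scikit-learn. The sketch below uses synthetic feature vectors as a stand-in for the PSG-derived features (the study's actual data and feature extraction are not reproduced here) and cross-validates the three machine learning methods named in the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for epochs described by non-EMG PSG features; labels mark PLM vs non-PLM.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)

models = {
    "K-nearest neighbour": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "multilayer perceptron": make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0)),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean cross-validated accuracy = {acc:.3f}")
```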

  6. Criticality in finite dynamical networks

    NASA Astrophysics Data System (ADS)

    Rohlf, Thimo; Gulbahce, Natali; Teuscher, Christof

    2007-03-01

    It has been shown analytically and experimentally that both random boolean and random threshold networks show a transition from ordered to chaotic dynamics at a critical average connectivity Kc in the thermodynamical limit [1]. By looking at the statistical distributions of damage spreading (damage sizes), we go beyond this extensively studied mean-field approximation. We study the scaling properties of damage size distributions as a function of system size N and initial perturbation size d(t=0). We present numerical evidence that another characteristic point, Kd exists for finite system sizes, where the expectation value of damage spreading in the network is independent of the system size N. Further, the probability to obtain critical networks is investigated for a given system size and average connectivity k. Our results suggest that, for finite size dynamical networks, phase space structure is very complex and may not exhibit a sharp order-disorder transition. Finally, we discuss the implications of our findings for evolutionary processes and learning applied to networks which solve specific computational tasks. [1] Derrida, B. and Pomeau, Y. (1986), Europhys. Lett., 1, 45-49
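
    A minimal damage-spreading experiment of the kind described above can be run on a random threshold network as follows. The coupling distribution, the tie-breaking convention for zero input, and the sizes are illustrative assumptions; the snippet only demonstrates how damage after a one-bit perturbation grows with the average connectivity K.

```python
import numpy as np

def damage_fraction(N=200, K=2.0, steps=100, seed=7):
    """Random threshold network with states +/-1 and couplings +/-1, average in-degree K.
    Flip one node of a copy of the initial state and return the Hamming distance
    (damage) between the two trajectories after `steps` parallel updates."""
    rng = np.random.default_rng(seed)
    mask = rng.random((N, N)) < K / N                 # each link present with probability K/N
    W = mask * rng.choice([-1, 1], size=(N, N))

    def update(s):
        h = W @ s
        return np.where(h > 0, 1, np.where(h < 0, -1, s))  # keep state on zero input

    s = rng.choice([-1, 1], size=N)
    s_pert = s.copy()
    s_pert[0] *= -1                                   # one-bit initial perturbation
    for _ in range(steps):
        s, s_pert = update(s), update(s_pert)
    return np.mean(s != s_pert)

for K in (1.0, 2.0, 3.0):
    print(f"K={K}: damage fraction ≈ {damage_fraction(K=K):.3f}")
```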

  7. The impact of system level factors on treatment timeliness: utilizing the Toyota Production System to implement direct intake scheduling in a semi-rural community mental health clinic.

    PubMed

    Weaver, Addie; Greeno, Catherine G; Goughler, Donald H; Yarzebinski, Kathleen; Zimmerman, Tina; Anderson, Carol

    2013-07-01

    This study examined the effect of using the Toyota Production System (TPS) to change intake procedures on treatment timeliness within a semi-rural community mental health clinic. One hundred randomly selected cases opened the year before the change and 100 randomly selected cases opened the year after the change were reviewed. An analysis of covariance demonstrated that changing intake procedures significantly decreased the number of days consumers waited for appointments (F(1,160) = 4.9; p = .03) from an average of 11 to 8 days. The pattern of difference on treatment timeliness was significantly different between adult and child programs (F(1,160) = 4.2; p = .04), with children waiting an average of 4 days longer than adults for appointments. Findings suggest that small system level changes may elicit important changes and that TPS offers a valuable model to improve processes within community mental health settings. Results also indicate that different factors drive adult and children's treatment timeliness.

  8. On the determinants of the conjunction fallacy: probability versus inductive confirmation.

    PubMed

    Tentori, Katya; Crupi, Vincenzo; Russo, Selena

    2013-02-01

    Major recent interpretations of the conjunction fallacy postulate that people assess the probability of a conjunction according to (non-normative) averaging rules as applied to the constituents' probabilities or represent the conjunction fallacy as an effect of random error in the judgment process. In the present contribution, we contrast such accounts with a different reading of the phenomenon based on the notion of inductive confirmation as defined by contemporary Bayesian theorists. Averaging rule hypotheses along with the random error model and many other existing proposals are shown to all imply that conjunction fallacy rates would rise as the perceived probability of the added conjunct does. By contrast, our account predicts that the conjunction fallacy depends on the added conjunct being perceived as inductively confirmed. Four studies are reported in which the judged probability versus confirmation of the added conjunct have been systematically manipulated and dissociated. The results consistently favor a confirmation-theoretic account of the conjunction fallacy against competing views. Our proposal is also discussed in connection with related issues in the study of human inductive reasoning. 2013 APA, all rights reserved

  9. A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.

    PubMed

    Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei

    2014-09-19

    The visually-induced near-infrared spectroscopy (NIRS) response was utilized to design a brain computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on an LCD screen as visual stimuli to induce subjects' NIRS responses. Each flickering sequence was a concatenation of alternating flickering segments and resting segments. The flickering segment was designed with a fixed duration of 3 s, whereas the resting segment was chosen randomly within 15-20 s to create mutual independence among the different flickering sequences. Six subjects were recruited in this study and were requested to gaze at the four visual stimuli one after another in random order. Since visual responses in the human brain are time-locked to the onsets of visual stimuli and the flicker sequences of distinct visual stimuli were designed to be mutually independent, the NIRS responses induced by the user's gazed target can be discerned from those of non-gazed targets by applying a simple averaging process. The accuracies for the six subjects were higher than 90% after 10 or more epochs were averaged. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
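
    The "simple averaging process" is just epoch averaging time-locked to the stimulus onsets. The sketch below shows that step on a synthetic NIRS-like trace; the sampling rate, epoch length, and evoked-response shape are illustrative assumptions, not the study's recording parameters.

```python
import numpy as np

def average_epochs(signal, onsets, fs, window_s=3.0):
    """Average signal segments time-locked to stimulus onsets (sample indices), fs in Hz."""
    n = int(window_s * fs)
    epochs = np.array([signal[o:o + n] for o in onsets if o + n <= len(signal)])
    return epochs.mean(axis=0)

# illustrative synthetic trace: noise plus a small evoked bump at each gazed-target onset
fs = 10.0                                    # assumed sampling rate, Hz
rng = np.random.default_rng(8)
signal = 0.5 * rng.standard_normal(6000)
onsets = rng.choice(np.arange(100, 5900), size=20, replace=False)
bump = np.exp(-((np.arange(30) / fs - 1.5) ** 2))
for o in onsets:
    signal[o:o + 30] += 0.3 * bump
avg = average_epochs(signal, onsets, fs)
print("peak of averaged evoked response:", round(float(avg.max()), 3))
```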

  10. The Impact of System Level Factors on Treatment Timeliness: Utilizing the Toyota Production System to Implement Direct Intake Scheduling in a Semi-Rural Community Mental Health Clinic

    PubMed Central

    Weaver, A.; Greeno, C.G.; Goughler, D.H.; Yarzebinski, K.; Zimmerman, T.; Anderson, C.

    2013-01-01

    This study examined the effect of using the Toyota Production System (TPS) to change intake procedures on treatment timeliness within a semi-rural community mental health clinic. One hundred randomly selected cases opened the year before the change and one hundred randomly selected cases opened the year after the change were reviewed. An analysis of covariance (ANCOVA) demonstrated that changing intake procedures significantly decreased the number of days consumers waited for appointments (F(1,160)=4.9; p=.03) from an average of 11 days to 8 days. The pattern of difference on treatment timeliness was significantly different between adult and child programs (F(1,160)=4.2; p=.04), with children waiting an average of 4 days longer than adults for appointments. Findings suggest that small system level changes may elicit important changes and that TPS offers a valuable model to improve processes within community mental health settings. Results also indicate that different factors drive adult and children’s treatment timeliness. PMID:23576137

  11. Risk Stratification and Shared Decision Making for Colorectal Cancer Screening: A Randomized Controlled Trial.

    PubMed

    Schroy, Paul C; Duhovic, Emir; Chen, Clara A; Heeren, Timothy C; Lopez, William; Apodaca, Danielle L; Wong, John B

    2016-05-01

    Eliciting patient preferences within the context of shared decision making has been advocated for colorectal cancer (CRC) screening, yet providers often fail to comply with patient preferences that differ from their own. To determine whether risk stratification for advanced colorectal neoplasia (ACN) influences provider willingness to comply with patient preferences when selecting a desired CRC screening option. Randomized controlled trial. Asymptomatic, average-risk patients due for CRC screening in an urban safety net health care setting. Patients were randomized 1:1 to a decision aid alone (n = 168) or decision aid plus risk assessment (n = 173) arm between September 2012 and September 2014. The primary outcome was concordance between patient preference and test ordered; secondary outcomes included patient satisfaction with the decision-making process, screening intentions, test completion rates, and provider satisfaction. Although providers perceived risk stratification to be useful in selecting an appropriate screening test for their average-risk patients, no significant differences in concordance were observed between the decision aid alone and decision aid plus risk assessment groups (88.1% v. 85.0%, P = 0.40) or high- and low-risk groups (84.5% v. 87.1%, P = 0.51). Concordance was highest for colonoscopy and relatively low for tests other than colonoscopy, regardless of study arm or risk group. Failure to comply with patient preferences was negatively associated with satisfaction with the decision-making process, screening intentions, and test completion rates. Single-institution setting; lack of provider education about the utility of risk stratification into their decision making. Providers perceived risk stratification to be useful in their decision making but often failed to comply with patient preferences for tests other than colonoscopy, even among those deemed to be at low risk of ACN. © The Author(s) 2016.

  12. Quantitative analysis of random migration of cells using time-lapse video microscopy.

    PubMed

    Jain, Prachi; Worthylake, Rebecca A; Alahari, Suresh K

    2012-05-13

    Cell migration is a dynamic process, which is important for embryonic development, tissue repair, immune system function, and tumor invasion (1, 2). During directional migration, cells move rapidly in response to an extracellular chemotactic signal, or in response to intrinsic cues (3) provided by the basic motility machinery. Random migration occurs when a cell possesses low intrinsic directionality, allowing it to explore its local environment. Cell migration is a complex process; in the initial response, the cell undergoes polarization and extends protrusions in the direction of migration (2). Traditional methods to measure migration, such as the Boyden chamber migration assay, provide an easy way to measure chemotaxis in vitro, but only as an end-point result. This approach neither allows measurement of individual migration parameters nor visualization of the morphological changes that a cell undergoes during migration. Here, we present a method that allows us to monitor migrating cells in real time using time-lapse video microscopy. Since cell migration and invasion are hallmarks of cancer, this method will be applicable in studying cancer cell migration and invasion in vitro. Random migration of platelets has been considered one of the parameters of platelet function (4), hence this method could also be helpful in studying platelet functions. This assay has the advantage of being rapid, reliable, and reproducible, and does not require optimization of cell numbers. In order to maintain physiologically suitable conditions for cells, the microscope is equipped with a CO(2) supply and a temperature thermostat. Cell movement is monitored by taking pictures at regular intervals using a camera fitted to the microscope. Cell migration can be quantified by measuring average speed and average displacement, which are calculated by Slidebook software.
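
    The two migration metrics mentioned above (average speed and net displacement) are straightforward to compute from a tracked position sequence. The sketch below shows the arithmetic on a synthetic random-walk track; the track, time-lapse interval, and units are illustrative, and the snippet does not reproduce the Slidebook software used in the protocol.

```python
import numpy as np

def migration_metrics(track, dt_minutes):
    """Average speed (path length / elapsed time) and net displacement from a sequence
    of (x, y) positions recorded at fixed time-lapse intervals."""
    track = np.asarray(track, dtype=float)
    step_lengths = np.linalg.norm(np.diff(track, axis=0), axis=1)
    total_time = dt_minutes * (len(track) - 1)
    avg_speed = step_lengths.sum() / total_time          # e.g. micrometres per minute
    net_displacement = np.linalg.norm(track[-1] - track[0])
    return avg_speed, net_displacement

# illustrative track of one randomly migrating cell (positions in micrometres)
rng = np.random.default_rng(9)
track = np.cumsum(rng.normal(0, 2.0, size=(60, 2)), axis=0)
speed, disp = migration_metrics(track, dt_minutes=5)
print(f"average speed: {speed:.2f} um/min, net displacement: {disp:.1f} um")
```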

  13. Analysis of Realized Volatility for Nikkei Stock Average on the Tokyo Stock Exchange

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya; Watanabe, Toshiaki

    2016-04-01

    We calculate realized volatility of the Nikkei Stock Average (Nikkei225) Index on the Tokyo Stock Exchange and investigate the return dynamics. To avoid the bias on the realized volatility from the non-trading hours issue we calculate realized volatility separately in the two trading sessions, i.e. morning and afternoon, of the Tokyo Stock Exchange and find that the microstructure noise decreases the realized volatility at small sampling frequency. Using realized volatility as a proxy of the integrated volatility we standardize returns in the morning and afternoon sessions and investigate the normality of the standardized returns by calculating variance, kurtosis and 6th moment. We find that variance, kurtosis and 6th moment are consistent with those of the standard normal distribution, which indicates that the return dynamics of the Nikkei Stock Average are well described by a Gaussian random process with time-varying volatility.
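
    The realized-volatility-as-proxy idea is easy to illustrate numerically. The sketch below computes realized volatility per session from simulated intraday returns with time-varying volatility (not Nikkei data) and checks that returns standardized by realized volatility have variance and kurtosis close to the standard normal values; the session count, intraday sampling, and volatility model are illustrative assumptions.

```python
import numpy as np

def realized_volatility(intraday_returns):
    """Realized volatility of one session: sqrt of the sum of squared intraday returns."""
    r = np.asarray(intraday_returns)
    return np.sqrt(np.sum(r ** 2))

rng = np.random.default_rng(10)
n_sessions, n_intraday = 2000, 78
session_vol = 0.01 * np.exp(0.3 * rng.standard_normal(n_sessions))   # time-varying volatility
returns = session_vol[:, None] / np.sqrt(n_intraday) * rng.standard_normal((n_sessions, n_intraday))

rv = np.array([realized_volatility(r) for r in returns])
standardized = returns.sum(axis=1) / rv                                # session return / realized vol
kurtosis = ((standardized - standardized.mean()) ** 4).mean() / standardized.var() ** 2
print("variance:", round(float(standardized.var()), 3), "kurtosis:", round(float(kurtosis), 3))
```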

  14. Complementary nonparametric analysis of covariance for logistic regression in a randomized clinical trial setting.

    PubMed

    Tangen, C M; Koch, G G

    1999-03-01

    In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is an (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.

  15. Short communication: Effects of processing methods of barley grain in starter diets on feed intake and performance of dairy calves.

    PubMed

    Jarrah, A; Ghorbani, G R; Rezamand, P; Khorvash, M

    2013-01-01

    The present study was conducted to evaluate the effects of different processing methods of barley grain in starter rations on feed intake, average daily gain, feed efficiency, skeletal growth, fecal score, and rumen pH of dairy calves. Thirty-two Holstein dairy calves (16 female and 16 male) were randomly allocated to 1 of 4 treatments consisting of coarse ground, whole, steam-rolled, or roasted barley from d 4 to 56 of birth in a completely randomized design. Starter diets were formulated to have similar ingredients and composition. All calves had free access to water and feed throughout the study period and received 4 L of milk/d from a bottle from d 4 to 41, 2L/d from d 41 to 45, and weaning occurred on d 45. Feed intake and fecal score were recorded daily. Body weight and skeletal growth measures were recorded on d 4 (beginning of the study), 45, and 56. Rumen fluid and blood samples were collected on d 35, 45, and 56. Data were analyzed using PROC MIXED of SAS (SAS Institute Inc., Cary, NC). The results indicate that different methods of processing barley had no detectable effect on dry matter intake, average daily gain, and feed efficiency and that skeletal growth, health, and rumen pH were not affected by dietary treatments. In conclusion, the results show that different processing methods of barley included in starter diets had no detectable effect on the performance of dairy calves under our experimental conditions. Therefore, feeding whole or coarsely ground barley would be a more economical method compared with steam rolled or roasted barley. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  16. Characterization of addressability by simultaneous randomized benchmarking.

    PubMed

    Gambetta, Jay M; Córcoles, A D; Merkel, S T; Johnson, B R; Smolin, John A; Chow, Jerry M; Ryan, Colm A; Rigetti, Chad; Poletto, S; Ohki, Thomas A; Ketchen, Mark B; Steffen, M

    2012-12-14

    The control and handling of errors arising from cross talk and unwanted interactions in multiqubit systems is an important issue in quantum information processing architectures. We introduce a benchmarking protocol that provides information about the amount of addressability present in the system and implement it on coupled superconducting qubits. The protocol consists of randomized benchmarking experiments run both individually and simultaneously on pairs of qubits. A relevant figure of merit for the addressability is then related to the differences in the measured average gate fidelities in the two experiments. We present results from two similar samples with differing cross talk and unwanted qubit-qubit interactions. The results agree with predictions based on simple models of the classical cross talk and Stark shifts.

  17. Blocking carbohydrate absorption and weight loss: a clinical trial using Phase 2 brand proprietary fractionated white bean extract.

    PubMed

    Udani, Jay; Hardy, Mary; Madsen, Damian C

    2004-03-01

    Phase 2' starch neutralizer brand bean extract product ("Phase 2") is a water-extract of a common white bean (Phaseolus vulgaris) that has been shown in vitro to inhibit the digestive enzyme alpha-amylase. Inhibiting this enzyme may prevent the digestion of complex carbohydrates, thus decreasing the number of carbohydrate calories absorbed and potentially promoting weight loss. Fifty obese adults were screened to participate in a randomized, double-blind, placebo-controlled study evaluating the effects of treatment with Phase 2 versus placebo on weight loss. Participants were randomized to receive either 1500 mg Phase 2 or an identical placebo twice daily with meals. The active study period was eight weeks. Thirty-nine subjects completed the initial screening process and 27 subjects completed the study. The results after eight weeks demonstrated the Phase 2 group lost an average of 3.79 lbs (average of 0.47 lb per week) compared with the placebo group, which lost an average of 1.65 lbs (average of 0.21 lb per week), representing a difference of 129 percent (p=0.35). Triglyceride levels in the Phase 2 group were reduced an average of 26.3 mg/dL, more than three times greater a reduction than observed in the placebo group (8.2 mg/dL) (p=0.07). No adverse events during the study were attributed to the study medication. Clinical trends were identified for weight loss and a decrease in triglycerides, although statistical significance was not reached. Phase 2 shows potential promise as an adjunct therapy in the treatment of obesity and hypertriglyceridemia and further studies with larger numbers of subjects are warranted to conclusively demonstrate effectiveness.

  18. A randomized comparison of print and web communication on colorectal cancer screening.

    PubMed

    Weinberg, David S; Keenan, Eileen; Ruth, Karen; Devarajan, Karthik; Rodoletz, Michelle; Bieber, Eric J

    2013-01-28

    New methods to enhance colorectal cancer (CRC) screening rates are needed. The web offers novel possibilities to educate patients and to improve health behaviors, such as cancer screening. Evidence supports the efficacy of health communications that are targeted and tailored to improve the uptake of recommendations. We identified unscreened women at average risk for CRC from the scheduling databases of obstetrics and gynecology practices in 2 large health care systems. Participants consented to a randomized controlled trial that compared CRC screening uptake after receipt of CRC screening information delivered via the web or in print form. Participants could also be assigned to a control (usual care) group. Women in the interventional arms received tailored information in a high- or low-monitoring Cognitive Social Information Processing model-defined attentional style. The primary outcome was CRC screening participation at 4 months. A total of 904 women were randomized to the interventional or control group. At 4 months, CRC screening uptake was not significantly different in the web (12.2%), print (12.0%), or control (12.9%) group. Attentional style had no effect on screening uptake for any group. Some baseline participant factors were associated with greater screening, including higher income (P = .03), stage of change (P < .001), and physician recommendation to screen (P < .001). A web-based educational intervention was no more effective than a print-based one or control (no educational intervention) in increasing CRC screening rates in women at average risk of CRC. Risk messages tailored to attentional style had no effect on screening uptake. In average-risk populations, use of the Internet for health communication without additional enhancement is unlikely to improve screening participation. clinicaltrials.gov Identifier: NCT00459030.

  19. Image Processing, Coding, and Compression with Multiple-Point Impulse Response Functions.

    NASA Astrophysics Data System (ADS)

    Stossel, Bryan Joseph

    1995-01-01

    Aspects of image processing, coding, and compression with multiple-point impulse response functions are investigated. Topics considered include characterization of the corresponding random-walk transfer function, image recovery for images degraded by the multiple-point impulse response, and the application of the blur function to image coding and compression. It is found that although the zeros of the real and imaginary parts of the random-walk transfer function occur in continuous, closed contours, the zeros of the transfer function occur at isolated spatial frequencies. Theoretical calculations of the average number of zeros per area are in excellent agreement with experimental results obtained from computer counts of the zeros. The average number of zeros per area is proportional to the standard deviations of the real part of the transfer function as well as the first partial derivatives. Statistical parameters of the transfer function are calculated including the mean, variance, and correlation functions for the real and imaginary parts of the transfer function and their corresponding first partial derivatives. These calculations verify the assumptions required in the derivation of the expression for the average number of zeros. Interesting results are found for the correlations of the real and imaginary parts of the transfer function and their first partial derivatives. The isolated nature of the zeros in the transfer function and its characteristics at high spatial frequencies result in largely reduced reconstruction artifacts and excellent reconstructions are obtained for distributions of impulses consisting of 25 to 150 impulses. The multiple-point impulse response obscures original scenes beyond recognition. This property is important for secure transmission of data on many communication systems. The multiple-point impulse response enables the decoding and restoration of the original scene with very little distortion. Images prefiltered by the random-walk transfer function yield greater compression ratios than are obtained for the original scene. The multiple-point impulse response decreases the bit rate approximately 40-70% and affords near distortion-free reconstructions. Due to the lossy nature of transform-based compression algorithms, noise reduction measures must be incorporated to yield acceptable reconstructions after decompression.

  20. Testing feedback message framing and comparators to address prescribing of high-risk medications in nursing homes: protocol for a pragmatic, factorial, cluster-randomized trial.

    PubMed

    Ivers, Noah M; Desveaux, Laura; Presseau, Justin; Reis, Catherine; Witteman, Holly O; Taljaard, Monica K; McCleary, Nicola; Thavorn, Kednapa; Grimshaw, Jeremy M

    2017-07-14

    Audit and feedback (AF) interventions that leverage routine administrative data offer a scalable and relatively low-cost method to improve processes of care. AF interventions are usually designed to highlight discrepancies between desired and actual performance and to encourage recipients to act to address such discrepancies. Comparing to a regional average is a common approach, but more recipients would have a discrepancy if compared to a higher-than-average level of performance. In addition, how recipients perceive and respond to discrepancies may depend on how the feedback itself is framed. We aim to evaluate the effectiveness of different comparators and framing in feedback on high-risk prescribing in nursing homes. This is a pragmatic, 2 × 2 factorial, cluster-randomized controlled trial testing variations in the comparator and framing on the effectiveness of quarterly AF in changing high-risk prescribing in nursing homes in Ontario, Canada. We grouped homes that share physicians into clusters and randomized these clusters into the four experimental conditions. Outcomes will be assessed after 6 months; all primary analyses will be by intention-to-treat. The primary outcome (monthly number of high-risk medications received by each patient) will be analysed using a general linear mixed effects regression model. We will present both four-arm and factorial analyses. With 160 clusters and an average of 350 beds per cluster, assuming no interaction and similar effects for each intervention, we anticipate 90% power to detect an absolute mean difference of 0.3 high-risk medications prescribed. A mixed-methods process evaluation will explore potential mechanisms underlying the observed effects, exploring targeted constructs including intention, self-efficacy, outcome expectations, descriptive norms, and goal prioritization. An economic analysis will examine cost-effectiveness analysis from the perspective of the publicly funded health care system. This protocol describes the rationale and methodology of a trial testing manipulations of theory-informed components of an audit and feedback intervention to determine how to improve an existing intervention and provide generalizable insights for implementation science. NCT02979964.

  1. Cost-effectiveness of a long-term Internet-delivered worksite health promotion programme on physical activity and nutrition: a cluster randomized controlled trial

    PubMed Central

    Robroek, Suzan J. W.; Polinder, Suzanne; Bredt, Folef J.; Burdorf, Alex

    2012-01-01

    This study aims to evaluate the cost-effectiveness of a long-term workplace health promotion programme on physical activity (PA) and nutrition. In total, 924 participants enrolled in a 2-year cluster randomized controlled trial, with departments (n = 74) within companies (n = 6) as the unit of randomization. The intervention was compared with a standard programme consisting of a physical health check with face-to-face advice and personal feedback on a website. The intervention consisted of several additional website functionalities: action-oriented feedback, self-monitoring, possibility to ask questions and monthly e-mail messages. Primary outcomes were meeting the guidelines for PA and fruit and vegetable intake. Secondary outcomes were self-perceived health, obesity, elevated blood pressure, elevated cholesterol level and maximum oxygen uptake. Direct and indirect costs were calculated from a societal perspective, and a process evaluation was performed. Of the 924 participants, 72% participated in the first and 60% in the second follow-up. No statistically significant differences were found on primary and secondary outcomes, nor on costs. Average direct costs per participant over the 2-year period were €376, and average indirect costs were €9476. In conclusion, no additional benefits were found in effects or cost savings. Therefore, the programme in its current form cannot be recommended for implementation. PMID:22350194

  2. Align and random electrospun mat of PEDOT:PSS and PEDOT:PSS/RGO

    NASA Astrophysics Data System (ADS)

    Sarabi, Ghazale Asghari; Latifi, Masoud; Bagherzadeh, Roohollah

    2018-01-01

    In this research work we fabricated two ultrafine conductive nanofibrous layers to investigate their material composition and properties for supercapacitor applications. In the first layer, a polymer and a conductive polymer were used, and the second layer was a composition of a polymer, a conductive polymer and a carbon-based material. In both cases, aligned and randomized mats of conductive nanofibers were fabricated using an electrospinning setup. Conductive poly(3,4-ethylenedioxythiophene)/polystyrene sulfonate (PEDOT:PSS) nanofibers were electrospun by dissolving fiber-forming polymer and polyvinyl alcohol (PVA) in an aqueous dispersion of PEDOT:PSS. The effect of the addition of reduced graphene oxide (RGO) was considered for the nanocomposite layer. The ultrafine conductive polymer fibers and conductive nanocomposite fibrous materials were also fabricated using an electrospinning process. A fixed collector and a rotating drum were used for random and aligned nanofiber production, respectively. The resulting fibers were characterized and analyzed by SEM, FTIR and a two-point probe conductivity test. The average diameter of the nanofibers measured with ImageJ software indicated that the average fiber diameter for the first layer was 100 nm and for the nanocomposite layer was about 85 nm. The presence of PEDOT:PSS and RGO in the nanofibers was confirmed by FT-IR spectroscopy. The conductivity of the aligned and random layers was characterized. The conductivity of the PEDOT:PSS nanofibers showed a higher enhancement upon addition of RGO to the aqueous dispersion. The obtained results showed that the alignment of fibrous materials can be considered an engineering tool for tuning the conductivity of fibrous materials for many different applications such as supercapacitors and conductive, transparent materials.

  3. Accuracy and reliability testing of two methods to measure internal rotation of the glenohumeral joint.

    PubMed

    Hall, Justin M; Azar, Frederick M; Miller, Robert H; Smith, Richard; Throckmorton, Thomas W

    2014-09-01

    We compared accuracy and reliability of a traditional method of measurement (most cephalad vertebral spinous process that can be reached by a patient with the extended thumb) to estimates made with the shoulder in abduction to determine if there were differences between the two methods. Six physicians with fellowship training in sports medicine or shoulder surgery estimated measurements in 48 healthy volunteers. Three were randomly chosen to make estimates of both internal rotation measurements for each volunteer. An independent observer made objective measurements on lateral scoliosis films (spinous process method) or with a goniometer (abduction method). Examiners were blinded to objective measurements as well as to previous estimates. Intraclass coefficients for interobserver reliability for the traditional method averaged 0.75, indicating good agreement among observers. The difference in vertebral level estimated by the examiner and the actual radiographic level averaged 1.8 levels. The intraclass coefficient for interobserver reliability for the abduction method averaged 0.81 for all examiners, indicating near-perfect agreement. Confidence intervals indicated that estimates were an average of 8° different from the objective goniometer measurements. Pearson correlation coefficients of intraobserver reliability for the abduction method averaged 0.94, indicating near-perfect agreement within observers. Confidence intervals demonstrated repeated estimates between 5° and 10° of the original. Internal rotation estimates made with the shoulder abducted demonstrated interobserver reliability superior to that of spinous process estimates, and reproducibility was high. On the basis of this finding, we now take glenohumeral internal rotation measurements with the shoulder in abduction and use a goniometer to maximize accuracy and objectivity. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.

  4. Effects of correlations and fees in random multiplicative environments: Implications for portfolio management.

    PubMed

    Alper, Ofer; Somekh-Baruch, Anelia; Pirvandy, Oz; Schaps, Malka; Yaari, Gur

    2017-08-01

    Geometric Brownian motion (GBM) is frequently used to model price dynamics of financial assets, and a weighted average of multiple GBMs is commonly used to model a financial portfolio. Diversified portfolios can lead to an increased exponential growth compared to a single asset by effectively reducing the effective noise. The sum of GBM processes is no longer a log-normal process and has complex statistical properties. The nonergodicity of the weighted average process results in constant degradation of the exponential growth from the ensemble average toward the time average. One way to stay closer to the ensemble average is to maintain a balanced portfolio: keep the relative weights of the different assets constant over time. To keep these proportions constant, whenever assets values change, it is necessary to rebalance their relative weights, exposing this strategy to fees (transaction costs). Two strategies that were suggested in the past for cases that involve fees are to rebalance the portfolio periodically and to rebalance it in a partial way. In this paper, we study these two strategies in the presence of correlations and fees. We show that using periodic and partial rebalance strategies, it is possible to maintain a steady exponential growth while minimizing the losses due to fees. We also demonstrate how these redistribution strategies perform in a phenomenal way on real-world market data, despite the fact that not all assumptions of the model hold in these real-world systems. Our results have important implications for stochastic dynamics in general and to portfolio management in particular, as we show that there is a superior alternative to the common buy-and-hold strategy, even in the presence of correlations and fees.

  5. Effects of correlations and fees in random multiplicative environments: Implications for portfolio management

    NASA Astrophysics Data System (ADS)

    Alper, Ofer; Somekh-Baruch, Anelia; Pirvandy, Oz; Schaps, Malka; Yaari, Gur

    2017-08-01

    Geometric Brownian motion (GBM) is frequently used to model the price dynamics of financial assets, and a weighted average of multiple GBMs is commonly used to model a financial portfolio. Diversified portfolios can achieve a higher exponential growth rate than a single asset by reducing the effective noise. The sum of GBM processes is no longer a log-normal process and has complex statistical properties. The nonergodicity of the weighted average process results in a constant degradation of the exponential growth rate from the ensemble average toward the time average. One way to stay closer to the ensemble average is to maintain a balanced portfolio: keep the relative weights of the different assets constant over time. To keep these proportions constant, the relative weights must be rebalanced whenever asset values change, exposing this strategy to fees (transaction costs). Two strategies that have been suggested for cases involving fees are to rebalance the portfolio periodically and to rebalance it only partially. In this paper, we study these two strategies in the presence of correlations and fees. We show that using periodic and partial rebalancing strategies, it is possible to maintain a steady exponential growth rate while minimizing the losses due to fees. We also demonstrate that these redistribution strategies perform remarkably well on real-world market data, even though not all assumptions of the model hold in these real-world systems. Our results have important implications for stochastic dynamics in general and for portfolio management in particular, as we show that there is a superior alternative to the common buy-and-hold strategy, even in the presence of correlations and fees.

  6. Bridges in complex networks

    NASA Astrophysics Data System (ADS)

    Wu, Ang-Kun; Tian, Liang; Liu, Yang-Yu

    2018-01-01

    A bridge in a graph is an edge whose removal disconnects the graph and increases the number of connected components. We calculate the fraction of bridges in a wide range of real-world networks and their randomized counterparts. We find that real networks typically have more bridges than their completely randomized counterparts, but they have a fraction of bridges that is very similar to their degree-preserving randomizations. We define an edge centrality measure, called bridgeness, to quantify the importance of a bridge in damaging a network. We find that certain real networks have a very large average and variance of bridgeness compared to their degree-preserving randomizations and other real networks. Finally, we offer an analytical framework to calculate the bridge fraction and the average and variance of bridgeness for uncorrelated random networks with arbitrary degree distributions.
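
    A minimal sketch of the bridge-counting step described above, assuming the networkx library and using a small built-in graph as a stand-in for a real network; the bridgeness centrality defined in the paper is not computed here.

```python
import networkx as nx

G = nx.karate_club_graph()                      # stand-in for a real network
frac = sum(1 for _ in nx.bridges(G)) / G.number_of_edges()

R = G.copy()                                    # degree-preserving randomization via edge swaps
nx.connected_double_edge_swap(R, nswap=10 * R.number_of_edges())
frac_rand = sum(1 for _ in nx.bridges(R)) / R.number_of_edges()

print(f"bridge fraction: real = {frac:.3f}, degree-preserving randomized = {frac_rand:.3f}")
```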

  7. Model averaging in linkage analysis.

    PubMed

    Matthysse, Steven

    2006-06-05

    Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc.

  8. A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves

    NASA Astrophysics Data System (ADS)

    Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang

    2018-03-01

    The PDFs (probability density functions) and probability of a ship rolling under random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gauss stationary random processes, and the CARMA (2, 1) model was used to fit the spectral density function of the parametric and forced excitations. The stochastic energy envelope averaging method was used to solve for the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and heading angle. In order to provide proper advice for the ship's manoeuvring, the parametric excitations should be considered appropriately when the ship navigates in oblique seas.

  9. Intervention-Based Stochastic Disease Eradication

    NASA Astrophysics Data System (ADS)

    Billings, Lora; Mier-Y-Teran-Romero, Luis; Lindley, Brandon; Schwartz, Ira

    2013-03-01

    Disease control is of paramount importance in public health with infectious disease extinction as the ultimate goal. Intervention controls, such as vaccination of susceptible individuals and/or treatment of infectives, are typically based on a deterministic schedule, such as periodically vaccinating susceptible children based on school calendars. In reality, however, such policies are administered as a random process, while still possessing a mean period. Here, we consider the effect of randomly distributed intervention as disease control on large finite populations. We show explicitly how intervention control, based on mean period and treatment fraction, modulates the average extinction times as a function of population size and the speed of infection. In particular, our results show an exponential improvement in extinction times even though the controls are implemented using a random Poisson distribution. Finally, we discover those parameter regimes where random treatment yields an exponential improvement in extinction times over the application of strictly periodic intervention. The implication of our results is discussed in light of the availability of limited resources for control. Supported by the National Institute of General Medical Sciences Award No. R01GM090204
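
    The sketch below (illustrative parameters and a deliberately simplified stochastic SIS model, not the authors' formulation) contrasts strictly periodic treatment pulses with Poisson-distributed pulses of the same mean period by Monte Carlo estimation of the mean extinction time.

```python
import numpy as np

rng = np.random.default_rng(0)
N, beta, gamma = 200, 1.2, 1.0        # population size, infection and recovery rates (assumed)
frac, period = 0.2, 1.0               # treated fraction per pulse, mean pulse period (assumed)

def extinction_time(random_pulses):
    I, t = 20, 0.0
    next_pulse = rng.exponential(period) if random_pulses else period
    while I > 0 and t < 500:
        total = beta * I * (N - I) / N + gamma * I
        dt = rng.exponential(1.0 / total)
        if t + dt >= next_pulse:                      # a treatment pulse fires first
            t = next_pulse
            I -= rng.binomial(I, frac)                # treat a random fraction of infectives
            next_pulse += rng.exponential(period) if random_pulses else period
            continue
        t += dt
        if rng.random() < (beta * I * (N - I) / N) / total:
            I += 1                                    # new infection
        else:
            I -= 1                                    # recovery
    return t

for label, flag in [("periodic", False), ("Poisson ", True)]:
    times = [extinction_time(flag) for _ in range(200)]
    print(label, "pulses: mean extinction time ~", round(float(np.mean(times)), 1))
```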

  10. Randomization to Standard and Concise Informed Consent Forms: Development of Evidence-Based Consent Practices

    PubMed Central

    Enama, Mary E.; Hu, Zonghui; Gordon, Ingelise; Costner, Pamela; Ledgerwood, Julie E.; Grady, Christine

    2012-01-01

    Background Consent to participate in research is an important component of the conduct of ethical clinical trials. Current consent practices are largely policy-driven. This study was conducted to assess comprehension of study information and satisfaction with the consent form between subjects randomized to concise or to standard informed consent forms, as one approach to developing evidence-based consent practices. Methods Participants (N=111) who enrolled into two Phase I investigational influenza vaccine protocols (VRC 306 and VRC 307) at the NIH Clinical Center were randomized to one of two IRB-approved consent forms: either a standard or a concise form. Concise consents had an average of 63% fewer words. All other aspects of the consent process were the same. Questionnaires about the study and the consent process were completed at enrollment and at the last visit in both studies. Results Subjects using concise consent forms scored as well as those using standard-length consent forms in measures of comprehension (7 versus 7, p=0.79, and 20 versus 21, p=0.13); however, the trend was for the concise consent group to report feeling better informed. Both groups thought the length and detail of the consent form were appropriate. Conclusions Randomization of study subjects to different-length IRB-approved consent forms, as one method for developing evidence-based consent practices, resulted in no differences in study comprehension or satisfaction with the consent form. A concise consent form may be used ethically in the context of a consent process conducted by well-trained staff with opportunities for discussion and education throughout the study. PMID:22542645

  11. Langmuir turbulence driven by beams in solar wind plasmas with long wavelength density fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krafft, C., E-mail: catherine.krafft@u-psud.fr; Universite´ Paris Sud, 91405 Orsay Cedex; Volokitin, A., E-mail: a.volokitin@mail.ru

    2016-03-25

    The self-consistent evolution of Langmuir turbulence generated by electron beams in solar wind plasmas with density inhomogeneities is calculated by numerical simulations based on a 1D Hamiltonian model. It is shown, by means of numerical simulations performed with parameters relevant to type III solar burst conditions at 1 AU, that the presence of long-wavelength random density fluctuations of a sufficiently large average level crucially modifies the well-known process of beam interaction with Langmuir waves in homogeneous plasmas.

  12. Convergence and approximate calculation of average degree under different network sizes for decreasing random birth-and-death networks

    NASA Astrophysics Data System (ADS)

    Long, Yin; Zhang, Xiao-Jun; Wang, Kui

    2018-05-01

    In this paper, the convergence and approximate calculation of the average degree under different network sizes for decreasing random birth-and-death networks (RBDNs) are studied. First, we find and demonstrate that the average degree converges in the form of a power law. Meanwhile, we discover that the ratios of later terms to earlier terms of the convergent remainder are independent of the number of network links for large network sizes, and we theoretically prove that the limit of this ratio is a constant. Moreover, since it is difficult to calculate the analytical solution of the average degree for large network sizes, we adopt a numerical method to obtain an approximate expression of the average degree that approximates its analytical solution. Finally, simulations are presented to verify our theoretical results.

  13. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in a finite set {n_1, n_2, …, n_L}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
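
    The following toy simulation (an assumed two-look stopping rule, not taken from the paper) illustrates the point about conditional versus marginal behaviour of the sample average: conditional on the realized sample size the averages look biased, while the average marginalized over the sample size stays close to the true mean.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean, n1, n2, z_stop = 0.3, 50, 100, 1.96
means, sizes = [], []
for _ in range(20000):
    x = rng.normal(true_mean, 1.0, n2)
    interim = x[:n1].mean()
    if interim * np.sqrt(n1) > z_stop:        # stop early: N = n1
        means.append(interim)
        sizes.append(n1)
    else:                                     # continue to the full sample: N = n2
        means.append(x.mean())
        sizes.append(n2)

means, sizes = np.array(means), np.array(sizes)
print("marginal mean of the sample average:", means.mean())                 # close to true_mean
print("conditional mean given N = n1      :", means[sizes == n1].mean())    # looks inflated
print("conditional mean given N = n2      :", means[sizes == n2].mean())    # looks deflated
```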

  14. Statistical theory of nucleation in the presence of uncharacterized impurities

    NASA Astrophysics Data System (ADS)

    Sear, Richard P.

    2004-08-01

    First order phase transitions proceed via nucleation. The rate of nucleation varies exponentially with the free-energy barrier to nucleation, and so is highly sensitive to variations in this barrier. In practice, very few systems are absolutely pure; there are typically some impurities present, which are rather poorly characterized. These interact with the nucleus, causing the barrier to vary, and so must be taken into account. Here the impurity-nucleus interactions are modelled by random variables. The rate then has the same form as the partition function of Derrida’s random energy model, and as in this model there is a regime in which the behavior is non-self-averaging. Non-self-averaging nucleation is nucleation with a rate that varies significantly from one realization of the random variables to another. In experiments this corresponds to variation in the nucleation rate from one sample to another. General analytic expressions are obtained for the crossover from a self-averaging to a non-self-averaging rate of nucleation.

  15. Adverse effects of pesticides on central auditory functions in tobacco growers.

    PubMed

    França, Denise Maria Vaz Romano; Bender Moreira Lacerda, Adriana; Lobato, Diolen; Ribas, Angela; Ziliotto Dias, Karin; Leroux, Tony; Fuente, Adrian

    2017-04-01

    To investigate the effects of exposure to pesticides on the central auditory functions (CAF) of Brazilian tobacco growers. This was a cross-sectional study carried out between 2010 and 2012. Participants were evaluated with two behavioural procedures to investigate CAF, the random gap detection test (RGDT) and the dichotic digit test in Portuguese (DDT). A total of 22 growers exposed to pesticides (study group) and 21 subjects who were not exposed to pesticides (control group) were selected. No significant differences between groups were observed for pure-tone thresholds. A significant association between pesticide exposure and the results for RGDT and DDT was found. Significant differences between pesticide-exposed and nonexposed subjects were found for RGDT frequency average and DDT binaural average, when including age and hearing level as covariates. Age was significantly associated with RGDT frequency average, DDT left ear score, DDT binaural average and DDT right ear advantage. Hearing levels were not significantly associated with any of the test scores. The relative risk of failing the DDT and RGDT for the study group was 1.88 (95% CI: 1.10-3.20) and 1.74 (95% CI: 1.06-2.86), respectively, as compared with the control group. The results showed that tobacco growers exposed to pesticides exhibited signs of central auditory dysfunction characterised by decrements in temporal processing and binaural integration processes/abilities.

  16. Correlated continuous time random walk and option pricing

    NASA Astrophysics Data System (ADS)

    Lv, Longjin; Xiao, Jianbin; Fan, Liangzhong; Ren, Fuyao

    2016-04-01

    In this paper, we study a correlated continuous time random walk (CCTRW) with averaged waiting time, whose probability density function (PDF) is proved to follow a stretched Gaussian distribution. We then apply this process to the option pricing problem. Supposing the price of the underlying asset is driven by this CCTRW, we find that the model captures the subdiffusive characteristics of financial markets. Using the mean self-financing hedging strategy, we obtain closed-form pricing formulas for a European option with and without transaction costs, respectively. Finally, comparing the obtained model with the classical Black-Scholes model, we find that the price obtained in this paper is higher than that obtained from the Black-Scholes model. An empirical analysis is also presented to confirm that the obtained results fit real data well.

  17. Feynman-Kac formula for stochastic hybrid systems.

    PubMed

    Bressloff, Paul C

    2017-01-01

    We derive a Feynman-Kac formula for functionals of a stochastic hybrid system evolving according to a piecewise deterministic Markov process. We first derive a stochastic Liouville equation for the moment generator of the stochastic functional, given a particular realization of the underlying discrete Markov process; the latter generates transitions between different dynamical equations for the continuous process. We then analyze the stochastic Liouville equation using methods recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment generating function, averaged with respect to realizations of the discrete Markov process. The resulting Feynman-Kac formula takes the form of a differential Chapman-Kolmogorov equation. We illustrate the theory by calculating the occupation time for a one-dimensional velocity jump process on the infinite or semi-infinite real line. Finally, we present an alternative derivation of the Feynman-Kac formula based on a recent path-integral formulation of stochastic hybrid systems.

  18. Revealing nonergodic dynamics in living cells from a single particle trajectory

    NASA Astrophysics Data System (ADS)

    Lanoiselée, Yann; Grebenkov, Denis S.

    2016-05-01

    We propose the improved ergodicity and mixing estimators to identify nonergodic dynamics from a single particle trajectory. The estimators are based on the time-averaged characteristic function of the increments and can thus capture additional information on the process as compared to the conventional time-averaged mean-square displacement. The estimators are first investigated and validated for several models of anomalous diffusion, such as ergodic fractional Brownian motion and diffusion on percolating clusters, and nonergodic continuous-time random walks and scaled Brownian motion. The estimators are then applied to two sets of earlier published trajectories of mRNA molecules inside live Escherichia coli cells and of Kv2.1 potassium channels in the plasma membrane. These statistical tests did not reveal nonergodic features in the former set, while some trajectories of the latter set could be classified as nonergodic. Time averages along such trajectories are thus not representative and may be strongly misleading. Since the estimators do not rely on ensemble averages, the nonergodic features can be revealed separately for each trajectory, providing a more flexible and reliable analysis of single-particle tracking experiments in microbiology.

  19. Convergence to equilibrium under a random Hamiltonian.

    PubMed

    Brandão, Fernando G S L; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K; Mozrzymas, Marek

    2012-09-01

    We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.

  20. Convergence to equilibrium under a random Hamiltonian

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K.; Mozrzymas, Marek

    2012-09-01

    We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.

  1. New scaling relation for information transfer in biological networks

    PubMed Central

    Kim, Hyunju; Davies, Paul; Walker, Sara Imari

    2015-01-01

    We quantify characteristics of the informational architecture of two representative biological networks: the Boolean network model for the cell-cycle regulatory network of the fission yeast Schizosaccharomyces pombe (Davidich et al. 2008 PLoS ONE 3, e1672 (doi:10.1371/journal.pone.0001672)) and that of the budding yeast Saccharomyces cerevisiae (Li et al. 2004 Proc. Natl Acad. Sci. USA 101, 4781–4786 (doi:10.1073/pnas.0305937101)). We compare our results for these biological networks with the same analysis performed on ensembles of two different types of random networks: Erdös–Rényi and scale-free. We show that both biological networks share features in common that are not shared by either random network ensemble. In particular, the biological networks in our study process more information than the random networks on average. Both biological networks also exhibit a scaling relation in information transferred between nodes that distinguishes them from random, where the biological networks stand out as distinct even when compared with random networks that share important topological properties, such as degree distribution, with the biological network. We show that the most biologically distinct regime of this scaling relation is associated with a subset of control nodes that regulate the dynamics and function of each respective biological network. Information processing in biological networks is therefore interpreted as an emergent property of topology (causal structure) and dynamics (function). Our results demonstrate quantitatively how the informational architecture of biologically evolved networks can distinguish them from other classes of network architecture that do not share the same informational properties. PMID:26701883

  2. Use of Play Therapy in Nursing Process: A Prospective Randomized Controlled Study.

    PubMed

    Sezici, Emel; Ocakci, Ayse Ferda; Kadioglu, Hasibe

    2017-03-01

    Play therapy is a nursing intervention employed in multidisciplinary approaches to develop the social, emotional, and behavioral skills of children. In this study, we aim to determine the effects of play therapy on the social, emotional, and behavioral skills of pre-school children through the nursing process. A single-blind, prospective, randomized controlled study was undertaken. The design, conduct, and reporting of this study adhere to the Consolidated Standards of Reporting Trials (CONSORT) guidelines. The participants included 4- to 5-year-old kindergarten children with no oral or aural disabilities and parents who agreed to participate in the study. The Pre-school Child and Family Identification Form and the Social Competence and Behavior Evaluation Scale were used to gather data. Games from the play therapy literature addressing the nursing diagnoses determined after the preliminary test (fear, social disturbance, impaired social interactions, ineffective coping, anxiety) constituted the intervention of the study. Beforehand, there was no difference between the experimental and control groups in average Anger-Aggression (AA), Social Competence (SC), and Anxiety-Withdrawal (AW) scores (t = 0.015, p = .988; t = 0.084, p = .933; t = 0.214, p = .831, respectively). The differences between the groups in average AA and SC scores were statistically significant in the post-test (t = 2.041, p = .045; t = 2.692, p = .009, respectively) and in the retest (t = 4.538, p = .000; t = 4.693, p = .000, respectively). For average AW scores, no statistical difference was found in the post-test (t = 0.700, p = .486), whereas a significant difference was identified in the retest (t = 5.839, p = .000). The study concluded that play therapy helps pre-school children develop their social, emotional, and behavioral skills, and also helps them lower their fear and anxiety levels, improve their communication and coping skills, and increase their self-esteem. Pediatric nurses are recommended to include play therapy in their practice and in the nursing process. © 2017 Sigma Theta Tau International.

  3. Arbitrarily small amounts of correlation for arbitrarily varying quantum channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boche, H., E-mail: boche@tum.de, E-mail: janis.noetzel@tum.de; Nötzel, J., E-mail: boche@tum.de, E-mail: janis.noetzel@tum.de

    2013-11-15

    As our main result, we show that in order to achieve the randomness-assisted message and entanglement transmission capacities of a finite arbitrarily varying quantum channel, it is not necessary that sender and receiver share (asymptotically perfect) common randomness. Rather, it is sufficient that they each have access to an unlimited number of uses of one part of a correlated bipartite source. This access might be restricted to an arbitrarily small (nonzero) fraction per channel use, without changing the main result. We investigate the notion of common randomness. It turns out that this is a very costly resource: generically, it cannot be obtained just by local processing of a bipartite source. This result underlines the importance of our main result. Also, the asymptotic equivalence of the maximal- and average-error criteria for classical message transmission over finite arbitrarily varying quantum channels is proven. Finally, we prove a simplified symmetrizability condition for finite arbitrarily varying quantum channels.

  4. EEG-based research on brain functional networks in cognition.

    PubMed

    Wang, Niannian; Zhang, Li; Liu, Guozhong

    2015-01-01

    Recently, exploring the cognitive functions of the brain by establishing a network model to understand its working mechanism has become a popular research topic in the field of neuroscience. In this study, electroencephalography (EEG) was used to collect data from subjects given four different mathematical cognitive tasks (reciting numbers clockwise and counter-clockwise, and letters clockwise and counter-clockwise) to build a complex brain function network (BFN). By studying the connectivity features and parameters of these brain functional networks, it was found that the average clustering coefficient is much larger than that of the corresponding random network, while the average shortest path length is similar to that of the corresponding random network, which clearly shows the characteristics of a small-world network. The brain regions stimulated during the experiment are consistent with traditional cognitive science regarding learning, memory, comprehension, and other rational judgment results. The new complex-network method of studying the mathematical cognitive process of reciting provides an effective research foundation for exploring the relationship between brain cognition and human learning skills and memory. This could help detect memory deficits early in young and mentally handicapped children, and help scientists understand the causes of cognitive brain disorders.
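
    A minimal sketch of the small-world comparison described above, assuming the networkx library; a Watts-Strogatz graph stands in for a thresholded EEG connectivity network, and its average clustering coefficient and average shortest path length are compared with those of a size- and density-matched random graph.

```python
import networkx as nx

G = nx.watts_strogatz_graph(64, 6, 0.1, seed=0)   # stand-in for a real EEG connectivity network
n, m = G.number_of_nodes(), G.number_of_edges()
R = nx.gnm_random_graph(n, m, seed=0)             # random graph with the same size and density

print("clustering: real = %.3f, random = %.3f"
      % (nx.average_clustering(G), nx.average_clustering(R)))
if nx.is_connected(G) and nx.is_connected(R):
    print("path length: real = %.2f, random = %.2f"
          % (nx.average_shortest_path_length(G),
             nx.average_shortest_path_length(R)))
```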

  5. Identifying Active Travel Behaviors in Challenging Environments Using GPS, Accelerometers, and Machine Learning Algorithms.

    PubMed

    Ellis, Katherine; Godbole, Suneeta; Marshall, Simon; Lanckriet, Gert; Staudenmayer, John; Kerr, Jacqueline

    2014-01-01

    Active travel is an important area in physical activity research, but objective measurement of active travel is still difficult. Automated methods to measure travel behaviors will improve research in this area. In this paper, we present a supervised machine learning method for transportation mode prediction from global positioning system (GPS) and accelerometer data. We collected a dataset of about 150 h of GPS and accelerometer data from two research assistants following a protocol of prescribed trips consisting of five activities: bicycling, riding in a vehicle, walking, sitting, and standing. We extracted 49 features from 1-min windows of this data. We compared the performance of several machine learning algorithms and chose a random forest algorithm to classify the transportation mode. We used a moving average output filter to smooth the output predictions over time. The random forest algorithm achieved 89.8% cross-validated accuracy on this dataset. Adding the moving average filter to smooth output predictions increased the cross-validated accuracy to 91.9%. Machine learning methods are a viable approach for automating measurement of active travel, particularly for measuring travel activities that traditional accelerometer data processing methods misclassify, such as bicycling and vehicle travel.
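
    A hedged sketch of the pipeline described above, assuming scikit-learn and synthetic features in place of the 49 GPS/accelerometer features: a random forest classifies per-window activity labels, and a sliding majority-vote (moving-average-style) filter smooths the predicted sequence over time.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))              # 600 one-minute windows, 10 synthetic features
y = np.repeat([0, 1, 2, 0, 1, 2], 100)      # activity label of each window
X += y[:, None]                             # make the classes separable

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:400], y[:400])
raw = clf.predict(X[400:])

# smooth with a centred window of 5 predictions (majority vote)
k = 2
smoothed = np.array([np.bincount(raw[max(0, i - k):i + k + 1]).argmax()
                     for i in range(len(raw))])

print("raw accuracy:     ", (raw == y[400:]).mean())
print("smoothed accuracy:", (smoothed == y[400:]).mean())
```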

  6. Sampling large random knots in a confined space

    NASA Astrophysics Data System (ADS)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (such as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
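
    An illustrative sketch (not the paper's analysis) of the uniform random polygon model: vertices are drawn uniformly at random in the unit cube, joined in order and closed, and the number of crossings between non-adjacent edges in a planar projection is counted as a crude proxy for diagram complexity.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossings(n):
    pts = rng.random((n, 3))[:, :2]                 # vertices in the unit cube, projected to the xy-plane
    edges = [(pts[i], pts[(i + 1) % n]) for i in range(n)]

    def ccw(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def cross(s, t):
        a, b = s
        c, d = t
        return ccw(a, b, c) * ccw(a, b, d) < 0 and ccw(c, d, a) * ccw(c, d, b) < 0

    return sum(cross(edges[i], edges[j])
               for i in range(n) for j in range(i + 2, n)
               if not (i == 0 and j == n - 1))      # skip adjacent edges

for n in (10, 20, 40):
    avg = np.mean([crossings(n) for _ in range(50)])
    print(f"n = {n:3d}: average crossings in projection ~ {avg:.1f}")
```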

  7. Hybrid Stochastic Forecasting Model for Management of Large Open Water Reservoir with Storage Function

    NASA Astrophysics Data System (ADS)

    Kozel, Tomas; Stary, Milos

    2017-12-01

    The main advantage of stochastic forecasting is that it yields a fan of possible values that a deterministic forecast cannot provide; the future development of a random process is described better stochastically than deterministically. Discharge at a measurement profile can be treated as such a random process. This article presents the construction and application of a forecasting model for a managed large open water reservoir with a supply function. The model is based on neural networks (NS) and zone models, forecasting values of average monthly flow from input values of average monthly flow, the trained neural network, and random numbers. Part of the data is assigned to a single moving zone created around the last measured average monthly flow, and the correlation matrix is assembled only from data belonging to that zone. The model was compiled for forecasts of 1 to 12 months ahead, using 2 to 11 backward monthly flows as NS inputs. The data were freed of asymmetry using the Box-Cox transformation (Box, Cox, 1964), with the parameter r found by optimization, and were then transformed to a standard normal distribution. The data have a monthly step and the forecast is not recurring. A 90-year-long real flow series was used to compile the model: the first 75 years were used for calibration (the input-output relationship matrix) and the last 15 years only for validation. Model outputs were compared with the real flow series. For the comparison between the real flow series (100% successful forecast) and the forecasts, both were applied to the management of an artificial reservoir. The course of reservoir management using a genetic algorithm (GE) with the real flow series was compared with a fuzzy model (Fuzzy) driven by forecasts from the moving-zone model. The best zone size was sought during the evaluation. The results show that the largest number of inputs did not give the best results, and the ideal zone size lies in the interval from 25 to 35, within which the course of management was almost the same for all values. The resulting course of management was compared with the course obtained using GE with the real flow series. The comparison showed that the fuzzy model with forecasted values was able to manage the main malfunction, and the artificial disturbances introduced by the model were found to be essential after the stored water volumes during management were evaluated. The forecasting model combined with the fuzzy model provides very good results in the management of a water reservoir with a storage function and can be recommended for this purpose.

  8. What Randomized Benchmarking Actually Measures

    DOE PAGES

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; ...

    2017-09-28

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. These theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
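
    A minimal sketch (synthetic survival probabilities, assumed parameters) of the standard RB analysis mentioned above: fit the decay model A·p^m + B to survival probability versus circuit length m, and convert p to an error rate r using the usual single-qubit convention.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, B, p):
    return A * p**m + B

lengths = np.array([2, 4, 8, 16, 32, 64, 128, 256], dtype=float)
rng = np.random.default_rng(0)
true_A, true_B, true_p = 0.5, 0.5, 0.995
survival = rb_decay(lengths, true_A, true_B, true_p) + rng.normal(0, 0.005, lengths.size)

(A, B, p), _ = curve_fit(rb_decay, lengths, survival, p0=(0.5, 0.5, 0.99))
d = 2                                   # single-qubit Hilbert-space dimension
r = (d - 1) * (1 - p) / d               # conventional conversion of the decay parameter to an error rate
print(f"fitted p = {p:.5f}, RB number r = {r:.2e}")
```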

  9. Height and calories in early childhood.

    PubMed

    Griffen, Andrew S

    2016-03-01

    This paper estimates a height production function using data from a randomized nutrition intervention conducted in rural Guatemala from 1969 to 1977. Using the experimental intervention as an instrument, the IV estimates of the effect of calories on height are an order of magnitude larger than the OLS estimates. Information from a unique measurement error process in the calorie data, counterfactual results from the estimated model, and external evidence from migration studies suggest that IV is not identifying a policy-relevant average marginal impact of calories on height. The preferred, attenuation-bias-corrected OLS estimates from the height production function suggest that, averaging over ages, a 100 calorie increase in average daily calorie intake over the course of a year would increase height by 0.06 cm. Counterfactuals from the model imply that calorie gaps in early childhood can explain at most 16% of the height gap between Guatemalan children and the US-born children of Guatemalan immigrants. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Statistical analysis of loopy belief propagation in random fields

    NASA Astrophysics Data System (ADS)

    Yasuda, Muneki; Kataoka, Shun; Tanaka, Kazuyuki

    2015-10-01

    Loopy belief propagation (LBP), which is equivalent to the Bethe approximation in statistical mechanics, is a message-passing-type inference method that is widely used to analyze systems based on Markov random fields (MRFs). In this paper, we propose a message-passing-type method to analytically evaluate the quenched average of LBP in random fields by using the replica cluster variation method. The proposed analytical method is applicable to general pairwise MRFs with random fields whose distributions differ from each other and can give the quenched averages of the Bethe free energies over random fields, which are consistent with numerical results. The order of its computational cost is equivalent to that of standard LBP. In the latter part of this paper, we describe the application of the proposed method to Bayesian image restoration, in which we observed that our theoretical results are in good agreement with the numerical results for natural images.

  11. Topology-dependent density optima for efficient simultaneous network exploration

    NASA Astrophysics Data System (ADS)

    Wilson, Daniel B.; Baker, Ruth E.; Woodhouse, Francis G.

    2018-06-01

    A random search process in a networked environment is governed by the time it takes to visit every node, termed the cover time. Often, a networked process does not proceed in isolation but competes with many instances of itself within the same environment. A key unanswered question is how to optimize this process: How many concurrent searchers can a topology support before the benefits of parallelism are outweighed by competition for space? Here, we introduce the searcher-averaged parallel cover time (APCT) to quantify these economies of scale. We show that the APCT of the networked symmetric exclusion process is optimized at a searcher density that is well predicted by the spectral gap. Furthermore, we find that nonequilibrium processes, realized through the addition of bias, can support significantly increased density optima. Our results suggest alternative hybrid strategies of serial and parallel search for efficient information gathering in social interaction and biological transport networks.

  12. Coherence-generating power of quantum dephasing processes

    NASA Astrophysics Data System (ADS)

    Styliaris, Georgios; Campos Venuti, Lorenzo; Zanardi, Paolo

    2018-03-01

    We provide a quantification of the capability of various quantum dephasing processes to generate coherence out of incoherent states. The measures defined, admitting computable expressions for any finite Hilbert-space dimension, are based on probabilistic averages and arise naturally from the viewpoint of coherence as a resource. We investigate how the capability of a dephasing process (e.g., a nonselective orthogonal measurement) to generate coherence depends on the relevant bases of the Hilbert space over which coherence is quantified and the dephasing process occurs, respectively. We extend our analysis to include those Lindblad time evolutions which, in the infinite-time limit, dephase the system under consideration and calculate their coherence-generating power as a function of time. We further identify specific families of such time evolutions that, although dephasing, have optimal (over all quantum processes) coherence-generating power for some intermediate time. Finally, we investigate the coherence-generating capability of random dephasing channels.

  13. Neither fixed nor random: weighted least squares meta-analysis.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2015-06-15

    This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects. Copyright © 2015 John Wiley & Sons, Ltd.
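
    A hedged sketch of an unrestricted weighted least squares meta-analytic average along the lines described above (made-up study data; this follows the general recipe of an inverse-variance weighted mean with a freely estimated multiplicative dispersion, and is not reproduced from the paper).

```python
import numpy as np

effects = np.array([0.42, 0.10, 0.35, 0.28, 0.55, 0.18])   # study effect sizes (made up)
se      = np.array([0.10, 0.15, 0.08, 0.12, 0.20, 0.09])   # their standard errors (made up)

w = 1.0 / se**2
wls_mean = np.sum(w * effects) / np.sum(w)                  # same point estimate as fixed effect
k = len(effects)
phi = np.sum(w * (effects - wls_mean) ** 2) / (k - 1)       # multiplicative dispersion, not forced to 1
wls_se = np.sqrt(phi / np.sum(w))

print(f"WLS weighted average = {wls_mean:.3f} +/- {1.96 * wls_se:.3f} (95% CI half-width)")
```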

  14. Maternal choline supplementation during the third trimester of pregnancy improves infant information processing speed: a randomized, double-blind, controlled feeding study.

    PubMed

    Caudill, Marie A; Strupp, Barbara J; Muscalu, Laura; Nevins, Julie E H; Canfield, Richard L

    2018-04-01

    Rodent studies demonstrate that supplementing the maternal diet with choline during pregnancy produces life-long cognitive benefits for the offspring. In contrast, the two experimental studies examining cognitive effects of maternal choline supplementation in humans produced inconsistent results, perhaps because of poor participant adherence and/or uncontrolled variation in intake of choline or other nutrients. We examined the effects of maternal choline supplementation during pregnancy on infant cognition, with intake of choline and other nutrients tightly controlled. Women entering their third trimester were randomized to consume, until delivery, either 480 mg choline/d (n = 13) or 930 mg choline/d (n = 13). Infant information processing speed and visuospatial memory were tested at 4, 7, 10, and 13 mo of age (n = 24). Mean reaction time averaged across the four ages was significantly faster for infants born to mothers in the 930 (vs. 480) mg choline/d group. This result indicates that maternal consumption of approximately twice the recommended amount of choline during the last trimester improves infant information processing speed. Furthermore, for the 480-mg choline/d group, there was a significant linear effect of exposure duration (infants exposed longer showed faster reaction times), suggesting that even modest increases in maternal choline intake during pregnancy may produce cognitive benefits for offspring.

  15. The Mechanism for Processing Random-Dot Motion at Various Speeds in Early Visual Cortices

    PubMed Central

    An, Xu; Gong, Hongliang; McLoughlin, Niall; Yang, Yupeng; Wang, Wei

    2014-01-01

    All moving objects generate sequential retinotopic activations representing a series of discrete locations in space and time (motion trajectory). How direction-selective neurons in mammalian early visual cortices process motion trajectory remains to be clarified. Using single-cell recording and optical imaging of intrinsic signals along with mathematical simulation, we studied response properties of cat visual areas 17 and 18 to random dots moving at various speeds. We found that, the motion trajectory at low speed was encoded primarily as a direction signal by groups of neurons preferring that motion direction. Above certain transition speeds, the motion trajectory is perceived as a spatial orientation representing the motion axis of the moving dots. In both areas studied, above these speeds, other groups of direction-selective neurons with perpendicular direction preferences were activated to encode the motion trajectory as motion-axis information. This applied to both simple and complex neurons. The average transition speed for switching between encoding motion direction and axis was about 31°/s in area 18 and 15°/s in area 17. A spatio-temporal energy model predicted the transition speeds accurately in both areas, but not the direction-selective indexes to random-dot stimuli in area 18. In addition, above transition speeds, the change of direction preferences of population responses recorded by optical imaging can be revealed using vector maximum but not vector summation method. Together, this combined processing of motion direction and axis by neurons with orthogonal direction preferences associated with speed may serve as a common principle of early visual motion processing. PMID:24682033

  16. Why are mixed-race people perceived as more attractive?

    PubMed

    Lewis, Michael B

    2010-01-01

    Previous small-scale studies have suggested that people of mixed race are perceived as being more attractive than non-mixed-race people. Here, it is suggested that the reason for this is the genetic process of heterosis or hybrid vigour (i.e., cross-bred offspring have greater genetic fitness than pure-bred offspring). A random sample of 1205 black, white, and mixed-race faces was collected. These faces were then rated for their perceived attractiveness. There was a small but highly significant effect, with mixed-race faces, on average, being perceived as more attractive. This result is seen as a perceptual demonstration of heterosis in humans, a biological process that may have implications far beyond just attractiveness.

  17. Fast Physically Accurate Rendering of Multimodal Signatures of Distributed Fracture in Heterogeneous Materials.

    PubMed

    Visell, Yon

    2015-04-01

    This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
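
    An illustrative sketch (not the paper's algorithm) of the inverse transform construction mentioned above: uniform random numbers are mapped through inverse CDFs to exponential waiting times and power-law jump magnitudes, which are assembled into a time-domain stochastic jump process; the exponent and rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pareto_inverse_cdf(u, alpha=2.5, xmin=1.0):
    """Map uniform u in (0, 1) to a Pareto(alpha, xmin) sample via the inverse CDF."""
    return xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))

n_events = 1000
rate = 50.0                                        # mean 50 events per unit time (assumed)
waits = -np.log(rng.random(n_events)) / rate       # inverse transform for exponential waiting times
times = np.cumsum(waits)
jumps = pareto_inverse_cdf(rng.random(n_events))   # power-law jump magnitudes
stress = np.cumsum(jumps - jumps.mean())           # zero-mean stress-fluctuation signal

print("simulated duration:", round(times[-1], 2), " largest jump:", round(jumps.max(), 2))
```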

  18. Generalized self-adjustment method for statistical mechanics of composite materials

    NASA Astrophysics Data System (ADS)

    Pan'kov, A. A.

    1997-03-01

    A new method is developed for the statistical mechanics of composite materials, the generalized self-adjustment method, which makes it possible to reduce the problem of predicting the effective elastic properties of composites with random structures to the solution of two simpler "averaged" problems of an inclusion with transitional layers in a medium with the desired effective elastic properties. The inhomogeneous elastic properties and dimensions of the transitional layers take into account both the "approximate" order of mutual positioning and the variation in the dimensions and elastic properties of the inclusions through appropriate special averaged indicator functions of the random structure of the composite. A numerical calculation of averaged indicator functions and effective elastic characteristics is performed by the generalized self-adjustment method for a unidirectional fiberglass on the basis of various models of actual random structures in the plane of isotropy.

  19. Generalized self-consistent method for predicting the effective elastic properties of composites with random hybrid structures

    NASA Astrophysics Data System (ADS)

    Pan'kov, A. A.

    1997-05-01

    The feasibility of using a generalized self-consistent method for predicting the effective elastic properties of composites with random hybrid structures has been examined. Using this method, the problem is reduced to the solution of simpler special averaged problems for composites with single inclusions and corresponding transition layers in the medium examined. The dimensions of the transition layers are defined by the correlation radii of the random structure of the composite, while the heterogeneous elastic properties of the transition layers take account of the probabilities of variation of the size and configuration of the inclusions using averaged special indicator functions. Results are given for a numerical calculation of the averaged indicator functions and for an analysis of the effect of micropores in the matrix-fiber interface region on the effective elastic properties of unidirectional fiberglass-epoxy using the generalized self-consistent method, and are compared with experimental data and reported solutions.

  20. Statistical study of defects caused by primary knock-on atoms in fcc Cu and bcc W using molecular dynamics

    NASA Astrophysics Data System (ADS)

    Warrier, M.; Bhardwaj, U.; Hemani, H.; Schneider, R.; Mutzke, A.; Valsakumar, M. C.

    2015-12-01

    We report on molecular dynamics (MD) simulations carried out in fcc Cu and bcc W using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code to study (i) the statistical variations in the number of interstitials and vacancies produced by energetic primary knock-on atoms (PKA) (0.1-5 keV) directed in random directions and (ii) the in-cascade cluster size distributions. It is seen that around 60-80 random directions have to be explored for the average number of displaced atoms to become steady in the case of fcc Cu, whereas for bcc W around 50-60 random directions need to be explored. The number of Frenkel pairs produced in the MD simulations is compared with that from the Binary Collision Approximation Monte Carlo (BCA-MC) code SDTRIM-SP and with the results from the NRT model. It is seen that a proper choice of the damage energy, i.e. the energy required to create a stable interstitial, is essential for the BCA-MC results to match the MD results. On the computational front, it is seen that in-situ processing avoids the need to input/output (I/O) several terabytes of atomic position data when exploring a large number of random directions, and there is no difference in run time because the extra run time spent processing data is offset by the time saved in I/O.

  1. The stretch to stray on time: Resonant length of random walks in a transient

    NASA Astrophysics Data System (ADS)

    Falcke, Martin; Friedhoff, Victor Nicolai

    2018-05-01

    First-passage times in random walks have a vast number of diverse applications in physics, chemistry, biology, and finance. In general, environmental conditions for a stochastic process are not constant on the time scale of the average first-passage time or control might be applied to reduce noise. We investigate moments of the first-passage time distribution under an exponential transient describing relaxation of environmental conditions. We solve the Laplace-transformed (generalized) master equation analytically using a novel method that is applicable to general state schemes. The first-passage time from one end to the other of a linear chain of states is our application for the solutions. The dependence of its average on the relaxation rate obeys a power law for slow transients. The exponent ν depends on the chain length N like ν = -N/(N+1) to leading order. Slow transients substantially reduce the noise of first-passage times expressed as the coefficient of variation (CV), even if the average first-passage time is much longer than the transient. The CV has a pronounced minimum for some lengths, which we call resonant lengths. These results also suggest a simple and efficient noise control strategy and are closely related to the timing of repetitive excitations, coherence resonance, and information transmission by noisy excitable systems. A resonant number of steps from the inhibited state to the excitation threshold and slow recovery from negative feedback provide optimal timing noise reduction and information transmission.
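
    The following Monte Carlo sketch (assumed rates, not the authors' model) estimates the mean and coefficient of variation of the first-passage time through a linear chain whose single forward rate relaxes exponentially, using thinning to handle the time-dependent rate.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k0, k_inf, lam = 10, 0.2, 2.0, 0.5     # chain length; initial, final and relaxation rates (assumed)
k_max = max(k0, k_inf)

def rate(t):
    return k_inf + (k0 - k_inf) * np.exp(-lam * t)

def first_passage():
    t, state = 0.0, 0
    while state < N:
        t += rng.exponential(1.0 / k_max)          # propose a step with the maximal rate
        if rng.random() < rate(t) / k_max:         # thinning: accept with probability rate(t)/k_max
            state += 1
    return t

fpt = np.array([first_passage() for _ in range(5000)])
print("mean FPT = %.2f, CV = %.3f" % (fpt.mean(), fpt.std() / fpt.mean()))
```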

  2. Estimating rate uncertainty with maximum likelihood: differences between power-law and flicker–random-walk models

    USGS Publications Warehouse

    Langbein, John O.

    2012-01-01

    Recent studies have documented that global positioning system (GPS) time series of position estimates have temporal correlations which have been modeled as a combination of power-law and white noise processes. When estimating quantities such as a constant rate from GPS time series data, the estimated uncertainties on these quantities are more realistic when using a noise model that includes temporal correlations than when simply assuming temporally uncorrelated noise. However, the choice of the specific representation of correlated noise can affect the estimate of uncertainty. For many GPS time series, the background noise can be represented either (1) as a sum of flicker and random-walk noise or (2) as a power-law noise model that represents an average of the flicker and random-walk noise. For instance, if the underlying noise model is a combination of flicker and random-walk noise, then incorrectly choosing the power-law model could underestimate the rate uncertainty by a factor of two. Distinguishing between the two alternate noise models is difficult since the flicker component can dominate the assessment of the noise properties because it is spread over a significant portion of the measurable frequency band. But, although not necessarily detectable, the random-walk component can be a major constituent of the estimated rate uncertainty. Nonetheless, it is possible to determine the upper bound on the random-walk noise.

  3. Degradation modeling of mid-power white-light LEDs by using Wiener process.

    PubMed

    Huang, Jianlin; Golubović, Dušan S; Koh, Sau; Yang, Daoguo; Li, Xiupeng; Fan, Xuejun; Zhang, G Q

    2015-07-27

    The IES standard TM-21-11 provides a guideline for lifetime prediction of LED devices. As it uses average normalized lumen maintenance data and performs non-linear regression for lifetime modeling, it cannot capture the dynamic and random variation of the degradation process of LED devices. In addition, this method cannot capture the failure distribution, although the latter is much more relevant in reliability analysis. Furthermore, TM-21-11 considers only lumen maintenance for lifetime prediction. Color shift, another important performance characteristic of LED devices, may also show significant degradation during service life, even though the lumen maintenance has not reached the critical threshold. In this study, a modified Wiener process has been employed to model the degradation of LED devices. By using this method, dynamic and random variations, as well as the non-linear degradation behavior of LED devices, can be easily accounted for. With a mild assumption, the parameter estimation accuracy has been improved by including more information in the likelihood function while neglecting the dependency between the random variables. As a consequence, the mean time to failure (MTTF) has been obtained and shows results comparable with the IES TM-21-11 predictions, indicating the feasibility of the proposed method. Finally, the cumulative failure distribution is presented for different combinations of lumen maintenance and color shift. The results demonstrate that a joint failure distribution of LED devices can be modeled by simply considering their lumen maintenance and color shift as two independent variables.
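
    A minimal sketch of the Wiener-process view of degradation (not the paper's estimation procedure): lumen-maintenance loss accumulates as drift plus Brownian fluctuations, failure is the first passage of a 30% loss threshold, and the simulated MTTF is compared with the analytic inverse-Gaussian mean, threshold/drift. The drift, diffusion, and threshold values are hypothetical.

        import numpy as np

        rng = np.random.default_rng(3)
        mu, sigma = 2.0e-5, 8.0e-4        # hypothetical drift and diffusion per hour (fractional loss)
        threshold = 0.30                  # failure when lumen maintenance has dropped by 30%
        dt, horizon = 10.0, 200_000.0     # time step and simulation horizon (hours)

        def time_to_failure():
            """First time the accumulated Wiener degradation exceeds the threshold."""
            level, t = 0.0, 0.0
            while level < threshold and t < horizon:
                level += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                t += dt
            return t

        ttf = np.array([time_to_failure() for _ in range(500)])
        print(f"simulated MTTF ~ {ttf.mean():,.0f} h")
        print(f"analytic mean first-passage time (threshold/drift) = {threshold / mu:,.0f} h")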

  4. Numerical simulation of asphalt mixtures fracture using continuum models

    NASA Astrophysics Data System (ADS)

    Szydłowski, Cezary; Górski, Jarosław; Stienss, Marcin; Smakosz, Łukasz

    2018-01-01

    The paper considers numerical models of fracture processes in semi-circular asphalt mixture specimens subjected to three-point bending. Parameter calibration of the asphalt mixture constitutive models requires advanced, complex experimental test procedures. The highly non-homogeneous material is numerically modelled as a quasi-continuum. The computational parameters are averaged data of the components, i.e. asphalt, aggregate and the air voids composing the material. The model directly captures the random nature of the material parameters and of the aggregate distribution in specimens. Initial results of the analysis are presented here.

  5. The distribution of catchment coverage by stationary rainstorms

    NASA Technical Reports Server (NTRS)

    Eagleson, P. S.

    1984-01-01

    The occurrence of wetted rainstorm area within a catchment is modeled as a Poisson arrival process in which each storm is composed of stationary, nonoverlapping, independent random cell clusters whose centers are Poisson-distributed in space and whose areas are fractals. The two Poisson parameters and hence the first two moments of the wetted fraction are derived in terms of catchment average characteristics of the (observable) station precipitation. The model is used to estimate spatial properties of tropical air mass thunderstorms on six tropical catchments in the Sudan.

  6. Spectra of random networks in the weak clustering regime

    NASA Astrophysics Data System (ADS)

    Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen; Rodrigues, Francisco A.

    2018-03-01

    The asymptotic behavior of dynamical processes in networks can be expressed as a function of the spectral properties of the corresponding adjacency and Laplacian matrices. Although many theoretical results are known for the spectra of traditional configuration models, networks generated through these models fail to reproduce many topological features of real-world networks, in particular non-null values of the clustering coefficient. Here we study the effects of cycles of order three (triangles) on network spectra. Using recent advances in random matrix theory, we determine the spectral distribution of the network adjacency matrix as a function of the average number of triangles attached to each node, for networks without modular structure and degree-degree correlations. Implications for network dynamics are discussed. Our findings can shed light on how particular kinds of subgraphs influence network dynamics.

  7. Mining Distance Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule

    NASA Technical Reports Server (NTRS)

    Bay, Stephen D.; Schwabacher, Mark

    2003-01-01

    Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
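
    A simplified sketch of the idea described above (random ordering of the data plus a pruning cutoff for distance-based outliers, in the spirit of Bay and Schwabacher's nested-loop algorithm, not their exact implementation):

        import numpy as np

        def top_outliers(X, k=5, n_out=10, rng=None):
            """Distance-based outliers: score = distance to the k-th nearest neighbour.
            Nested loop over randomly ordered data with a pruning cutoff."""
            rng = rng or np.random.default_rng()
            X = X[rng.permutation(len(X))]           # random order is what makes pruning effective
            scores = {}                              # index -> k-NN distance (outlier score)
            cutoff = 0.0                             # weakest score among the current top n_out
            for i, x in enumerate(X):
                knn = np.full(k, np.inf)             # k smallest distances found so far for x
                for j, y in enumerate(X):
                    if i == j:
                        continue
                    d = np.linalg.norm(x - y)
                    if d < knn.max():
                        knn[knn.argmax()] = d
                    if knn.max() < cutoff:           # pruning rule: x can no longer be a top outlier
                        break
                else:
                    scores[i] = knn.max()
                    if len(scores) > n_out:          # keep only the n_out strongest outliers
                        scores.pop(min(scores, key=scores.get))
                    if len(scores) == n_out:
                        cutoff = min(scores.values())
            return sorted(scores.values(), reverse=True)

        X = np.random.default_rng(4).normal(size=(2000, 5))
        X[:3] += 8.0                                  # three planted outliers
        print(top_outliers(X))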

  8. Economic lot sizing in a production system with random demand

    NASA Astrophysics Data System (ADS)

    Lee, Shine-Der; Yang, Chin-Ming; Lan, Shu-Chuan

    2016-04-01

    An extended economic production quantity model that copes with random demand is developed in this paper. A unique feature of the proposed study is the consideration of transient shortage during the production stage, which has not been explicitly analysed in the existing literature. The considered costs include the set-up cost for the batch production, the inventory carrying cost during the production and depletion stages in one replenishment cycle, and the shortage cost when demand cannot be satisfied from the shop floor immediately. Based on a renewal reward process, a per-unit-time expected cost model is developed and analysed. Under some mild conditions, it can be shown that the approximate cost function is convex. Computational experiments demonstrate that the average reduction in total cost is significant when the proposed lot sizing policy is compared with policies based on deterministic demand.

  9. Evaluation of Gas Phase Dispersion in Flotation under Predetermined Hydrodynamic Conditions

    NASA Astrophysics Data System (ADS)

    Młynarczykowska, Anna; Oleksik, Konrad; Tupek-Murowany, Klaudia

    2018-03-01

    Results of various investigations show the relationship between flotation parameters and gas distribution in a flotation cell. The size of gas bubbles is a random variable with a specific distribution, and the analysis of this distribution is useful for a mathematical description of the flotation process. The flotation process depends on many variable factors, mainly events such as the collision of a single particle with a gas bubble, the adhesion of the particle to the bubble surface, and the detachment process. These factors are characterized by randomness, so one can only speak of the probability of occurrence of each of these events, which directly affects the rate of the process, i.e. the flotation rate constant. The probability of bubble-particle collision in a flotation chamber with mechanical pulp agitation depends on the surface tension of the solution, the air consumption, the degree of pulp aeration, the energy dissipation and the average feed particle size. Appropriate identification and description of the parameters of gas bubble dispersion help to complete the analysis of the flotation process under specific physicochemical and hydrodynamic conditions for any raw material. The article presents the results of measurements and analysis of gas phase dispersion, through the size distribution of air bubbles in a flotation chamber under fixed hydrodynamic conditions. The tests were carried out in the Laboratory of Instrumental Methods, Department of Environmental Engineering and Mineral Processing, Faculty of Mining and Geoengineering, AGH University of Science and Technology in Krakow.

  10. Continuous-Time Random Walk Models of DNA Electrophoresis in a Post Array: II. Mobility and Sources of Band Broadening

    PubMed Central

    Olson, Daniel W.; Dutta, Sarit; Laachi, Nabil; Tian, Mingwei; Dorfman, Kevin D.

    2011-01-01

    Using the two-state, continuous-time random walk model, we develop expressions for the mobility and the plate height during DNA electrophoresis in an ordered post array that delineate the contributions due to (i) the random distance between collisions and (ii) the random duration of a collision. These contributions are expressed in terms of the means and variances of the underlying stochastic processes, which we evaluate from a large ensemble of Brownian dynamics simulations performed using different electric fields and molecular weights in a hexagonal array of 1 μm posts with a 3 μm center-to-center distance. If we fix the molecular weight, we find that the collision frequency governs the mobility. In contrast, the average collision duration is the most important factor for predicting the mobility as a function of DNA size at constant Péclet number. The plate height is reasonably well-described by a single post rope-over-pulley model, provided that the extension of the molecule is small. Our results only account for dispersion inside the post array and thus represent a theoretical lower bound on the plate height in an actual device. PMID:21290387

  11. Programmable random interval generator

    NASA Technical Reports Server (NTRS)

    Lindsey, R. S., Jr.

    1973-01-01

    Random pulse generator can supply constant-amplitude randomly distributed pulses with average rate ranging from a few counts per second to more than one million counts per second. Generator requires no high-voltage power supply or any special thermal cooling apparatus. Device is uniquely versatile and provides wide dynamic range of operation.
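
    A software analogue of such a generator, assuming the "randomly distributed pulses" form a Poisson pulse train: inter-pulse intervals are exponential with mean 1/rate, so the average rate is programmable over a wide dynamic range while the individual intervals remain random.

        import numpy as np

        def random_pulse_times(average_rate_hz, duration_s, rng=None):
            """Pulse arrival times of a Poisson process: exponential inter-pulse
            intervals whose mean is 1/average_rate_hz."""
            rng = rng or np.random.default_rng()
            times, t = [], 0.0
            while True:
                t += rng.exponential(1.0 / average_rate_hz)
                if t > duration_s:
                    return np.array(times)
                times.append(t)

        pulses = random_pulse_times(average_rate_hz=1.0e6, duration_s=0.01)   # ~1 Mcps for 10 ms
        print(f"{pulses.size} pulses, observed rate ~ {pulses.size / 0.01:,.0f} counts/s")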

  12. Predicting Energy Consumption for Potential Effective Use in Hybrid Vehicle Powertrain Management Using Driver Prediction

    NASA Astrophysics Data System (ADS)

    Magnuson, Brian

    A proof-of-concept software-in-the-loop study is performed to assess the accuracy of predicted net and charge-gaining energy consumption for potential effective use in optimizing powertrain management of hybrid vehicles. With promising results of improving the fuel efficiency of a thermostatic control strategy for a series plug-in hybrid-electric vehicle by 8.24%, the route and speed prediction machine learning algorithms are redesigned and implemented for real-world testing in a stand-alone C++ code-base to ingest map data, learn and predict driver habits, and store driver data for fast startup and shutdown of the controller or computer used to execute the compiled algorithm. Speed prediction is performed using a multi-layer, multi-input, multi-output neural network with feed-forward prediction and gradient descent through back-propagation training. Route prediction utilizes a Hidden Markov Model with a recurrent forward algorithm for prediction, and multi-dimensional hash maps to store state and state distribution, constraining associations between atomic road segments and end destinations. Predicted energy is calculated using the predicted time-series speed and elevation profile over the predicted route and the road-load equation. Testing of the code-base is performed over a known road network spanning 24x35 blocks on the south hill of Spokane, Washington. A large set of training routes is traversed once to add randomness to the route prediction algorithm, and a subset of the training routes (the testing routes) is traversed to assess the accuracy of the net and charge-gaining predicted energy consumption. Each test route is traveled a random number of times under varying speed conditions from traffic and pedestrians to add randomness to speed prediction. Prediction data is stored and analyzed in a post-process Matlab script. The aggregated results and analysis of all traversals of all test routes reflect the performance of the Driver Prediction algorithm. The error of the average energy gained through charge-gaining events is 31.3% and the error of the average net energy consumed is 27.3%. The average delta and average standard deviation of the delta of predicted energy gained through charge-gaining events are 0.639 and 0.601 Wh, respectively, for individual time-series calculations. Similarly, the average delta and average standard deviation of the delta of the predicted net energy consumed are 0.567 and 0.580 Wh, respectively, for individual time-series calculations. The average delta and standard deviation of the delta of the predicted speed are 1.60 and 1.15, respectively, also for the individual time-series measurements. The route prediction accuracy is 91%. Overall, the test routes are traversed 151 times for a total test distance of 276.4 km.
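
    The predicted-energy step rests on the road-load equation; a hedged sketch is given below that integrates tractive power over a synthetic speed and elevation profile. The vehicle parameters and the profile are illustrative assumptions, not values from the study.

        import numpy as np

        # Illustrative vehicle parameters (not those of the study vehicle).
        mass, g, rho = 1600.0, 9.81, 1.2          # kg, m/s^2, kg/m^3
        c_rr, c_d, area = 0.010, 0.30, 2.3        # rolling resistance, drag coeff., frontal area (m^2)

        def road_load_energy_wh(speed_mps, elevation_m, dt=1.0):
            """Net traction energy (Wh) over a time-series speed/elevation profile
            using the road-load equation; regenerated (negative) power is retained."""
            v = np.asarray(speed_mps, dtype=float)
            accel = np.gradient(v, dt)
            grade = np.gradient(np.asarray(elevation_m, dtype=float), dt) / np.maximum(v, 0.1)
            force = (mass * accel                      # inertia
                     + c_rr * mass * g                 # rolling resistance
                     + 0.5 * rho * c_d * area * v**2   # aerodynamic drag
                     + mass * g * grade)               # road grade (small-angle approximation)
            return np.sum(force * v * dt) / 3600.0     # J -> Wh

        t = np.arange(0, 300.0, 1.0)                   # 5-minute synthetic trip
        speed = 12.0 + 3.0 * np.sin(2 * np.pi * t / 60.0)
        elev = 0.02 * t                                # gentle climb
        print(f"predicted net energy: {road_load_energy_wh(speed, elev):.0f} Wh")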

  13. Properties of plane discrete Poisson-Voronoi tessellations on triangular tiling formed by the Kolmogorov-Johnson-Mehl-Avrami growth of triangular islands

    NASA Astrophysics Data System (ADS)

    Korobov, A.

    2011-08-01

    Discrete uniform Poisson-Voronoi tessellations of two-dimensional triangular tilings resulting from the Kolmogorov-Johnson-Mehl-Avrami (KJMA) growth of triangular islands have been studied. This shape of tiles and islands, rarely considered in the field of random tessellations, is prompted by the birth-growth process of Ir(210) faceting. The growth mode determines a triangular metric different from the Euclidean metric. Kinetic characteristics of tessellations appear to be metric sensitive, in contrast to area distributions. The latter have been studied for the variant of nuclei growth to the first impingement in addition to the conventional case of complete growth. Kiang conjecture works in both cases. The averaged number of neighbors is six for all studied densities of random tessellations, but neighbors appear to be mainly different in triangular and Euclidean metrics. Also, the applicability of the obtained results for simulating birth-growth processes when the 2D nucleation and impingements are combined with the 3D growth in the particular case of similar shape and the same orientation of growing nuclei is briefly discussed.

  14. Properties of plane discrete Poisson-Voronoi tessellations on triangular tiling formed by the Kolmogorov-Johnson-Mehl-Avrami growth of triangular islands.

    PubMed

    Korobov, A

    2011-08-01

    Discrete uniform Poisson-Voronoi tessellations of two-dimensional triangular tilings resulting from the Kolmogorov-Johnson-Mehl-Avrami (KJMA) growth of triangular islands have been studied. This shape of tiles and islands, rarely considered in the field of random tessellations, is prompted by the birth-growth process of Ir(210) faceting. The growth mode determines a triangular metric different from the Euclidean metric. Kinetic characteristics of tessellations appear to be metric sensitive, in contrast to area distributions. The latter have been studied for the variant of nuclei growth to the first impingement in addition to the conventional case of complete growth. Kiang conjecture works in both cases. The averaged number of neighbors is six for all studied densities of random tessellations, but neighbors appear to be mainly different in triangular and Euclidean metrics. Also, the applicability of the obtained results for simulating birth-growth processes when the 2D nucleation and impingements are combined with the 3D growth in the particular case of similar shape and the same orientation of growing nuclei is briefly discussed.

  15. Entanglement spectrum of random-singlet quantum critical points

    NASA Astrophysics Data System (ADS)

    Fagotti, Maurizio; Calabrese, Pasquale; Moore, Joel E.

    2011-01-01

    The entanglement spectrum (i.e., the full distribution of Schmidt eigenvalues of the reduced density matrix) contains more information than the conventional entanglement entropy and has been studied recently in several many-particle systems. We compute the disorder-averaged entanglement spectrum, in the form of the disorder-averaged moments Tr ρ_A^α of the reduced density matrix ρ_A, for a contiguous block of many spins at the random-singlet quantum critical point in one dimension. The result compares well in the scaling limit with numerical studies on the random XX model and is also expected to describe the (interacting) random Heisenberg model. Our numerical studies on the XX case reveal that the dependence of the entanglement entropy and spectrum on the geometry of the Hilbert space partition is quite different from that at conformally invariant critical points.

  16. Nonergodic property of the space-time coupled CTRW: Dependence on the long-tailed property and correlation

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Li, Baohe; Chen, Xiaosong

    2018-02-01

    The space-time coupled continuous time random walk model is a stochastic framework for anomalous diffusion with many applications in physics, geology and biology. In this manuscript, the time-averaged mean squared displacement and the nonergodic property of a space-time coupled continuous time random walk model are studied; the model is a prototype of the coupled continuous time random walk that has been presented and researched intensively with various methods. The results show that the time-averaged mean squared displacement increases linearly with lag time, which means that ergodicity breaking occurs. Besides, we find that the diffusion coefficient is intrinsically random and shows both aging and enhancement; the analysis indicates that whether the aging or the enhancement phenomenon occurs is determined by the competition between the correlation exponent γ and the waiting time's long-tailed index α.
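
    The estimator at the centre of such studies, the time-averaged mean squared displacement δ²(Δ) = (1/(T−Δ)) ∫ [x(t+Δ) − x(t)]² dt, can be sketched as follows for a simpler uncoupled CTRW with power-law waiting times (an assumption made here for brevity; it is not the coupled model of the paper):

        import numpy as np

        rng = np.random.default_rng(7)
        alpha, T, n_grid = 0.7, 1.0e5, 100_000      # waiting-time tail exponent, total time, grid points

        def ctrw_on_grid():
            """Uncoupled CTRW: +/-1 jumps separated by Pareto(alpha) waiting times,
            sampled on a regular time grid."""
            t, x = 0.0, 0.0
            jump_t, jump_x = [0.0], [0.0]
            while t < T:
                t += rng.pareto(alpha) + 1.0         # waiting time with tail ~ t^-(1+alpha)
                x += rng.choice((-1.0, 1.0))
                jump_t.append(t)
                jump_x.append(x)
            grid_t = np.linspace(0.0, T, n_grid)
            idx = np.searchsorted(np.asarray(jump_t), grid_t, side="right") - 1
            return grid_t, np.asarray(jump_x)[idx]

        grid_t, traj = ctrw_on_grid()
        dt = grid_t[1] - grid_t[0]
        for lag in (10, 100, 1000):                  # lag in grid steps
            disp = traj[lag:] - traj[:-lag]
            tamsd = np.mean(disp**2)                 # time-averaged MSD at lag*dt
            print(f"lag = {lag * dt:8.1f}   time-averaged MSD = {tamsd:.3f}")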

  17. The male-taller norm: Lack of evidence from a developing country.

    PubMed

    Sohn, K

    2015-08-01

    In general, women prefer men taller than themselves; this is referred to as the male-taller norm. However, since women are shorter than men on average, it is difficult to determine whether the fact that married women are on average shorter than their husbands results from the norm or is a simple artifact generated by the shorter stature of women. This study addresses the question by comparing the rate of adherence to the male-taller norm between actual mating and hypothetical random mating. A total of 7954 actually married couples are drawn from the last follow-up of the Indonesian Family Life Survey, a nationally representative survey. Their heights were measured by trained nurses. About 10,000 individuals are randomly sampled from the actual couples and randomly matched. An alternative random mating of about 100,000 couples is also performed, taking into account an age difference of 5 years within a couple. The rate of adherence to the male-taller norm is 93.4% for actual couples and 88.8% for random couples. The difference between the two figures is statistically significant, but it is emphasized that it is very small. The alternative random mating produces a rate of 91.4%. The male-taller norm exists in Indonesia, but only in a statistical sense. The small difference suggests that the norm is mostly explained by the fact that women are shorter than men on average. Copyright © 2015 Elsevier GmbH. All rights reserved.
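
    The random-mating benchmark can be sketched in a few lines: male and female heights are drawn from illustrative normal distributions (the means and standard deviations below are assumptions, not the Indonesian survey values), pairs are formed at random, and the male-taller adherence rate is computed.

        import numpy as np

        rng = np.random.default_rng(8)
        n = 100_000
        # Illustrative height distributions (cm); not the measured survey values.
        men = rng.normal(165.0, 6.0, n)
        women = rng.normal(153.0, 5.5, n)

        # Random mating: shuffle one sex and pair off.
        adherence = np.mean(men[rng.permutation(n)] > women)
        print(f"male-taller adherence under random mating: {adherence:.1%}")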

  18. 7 CFR 51.2548 - Average moisture content determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content determination. 51.2548... moisture content determination. (a) Determining average moisture content of the lot is not a requirement of... connection with grade analysis or as a separate determination. (b) Nuts shall be obtained from a randomly...

  19. Pervasive randomness in physics: an introduction to its modelling and spectral characterisation

    NASA Astrophysics Data System (ADS)

    Howard, Roy

    2017-10-01

    An introduction to the modelling and spectral characterisation of random phenomena is detailed at a level consistent with a first exposure to the subject at an undergraduate level. A signal framework for defining a random process is provided and this underpins an introduction to common random processes including the Poisson point process, the random walk, the random telegraph signal, shot noise, information signalling random processes, jittered pulse trains, birth-death random processes and Markov chains. An introduction to the spectral characterisation of signals and random processes, via either an energy spectral density or a power spectral density, is detailed. The important case of defining a white noise random process concludes the paper.

  20. Digital servo control of random sound test excitation. [in reverberant acoustic chamber

    NASA Technical Reports Server (NTRS)

    Nakich, R. B. (Inventor)

    1974-01-01

    A digital servocontrol system for random noise excitation of a test object in a reverberant acoustic chamber employs a plurality of sensors spaced in the sound field to produce signals in separate channels which are decorrelated and averaged. The average signal is divided into a plurality of adjacent frequency bands cyclically sampled by a time division multiplex system, converted into digital form, and compared to a predetermined spectrum value stored in digital form. The results of the comparisons are used to control a time-shared up-down counter to develop gain control signals for the respective frequency bands in the spectrum of random sound energy picked up by the microphones.

  1. The effect of induced mood on children's social information processing: goal clarification and response decision.

    PubMed

    Harper, Bridgette D; Lemerise, Elizabeth A; Caverly, Sarah L

    2010-07-01

    We investigated whether induced mood influenced the social information processing steps of goal clarification and response decision in 480 1st-3rd graders, and in more selected groups of low accepted-aggressive (n = 39), average accepted-nonaggressive (n = 103), and high accepted-nonaggressive children (n = 68). Children participated in two sessions; in the first session peer assessments were administered. In the second session children were randomly assigned to receive either a happy, angry, or neutral mood induction prior to participating in a social cognitive interview assessing goals, outcome expectancies, and self efficacy for competent, hostile, and passive responses in the context of ambiguous provocations. Results revealed that an angry mood increased focus on instrumental goals. Low accepted-aggressive children were more susceptible to the effects of mood than were high accepted- and average-nonaggressive children. In addition, children's predominant goal orientation was related to children's response decisions; children with predominantly instrumental goals evaluated nonhostile responses to provocation more negatively and had higher self efficacy for hostile responses. Implications and future research directions are discussed.

  2. The Statistical Fermi Paradox

    NASA Astrophysics Data System (ADS)

    Maccone, C.

    In this paper is provided the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cube root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964) and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times larger than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi Paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations.
A new random variable Tcol, representing the time needed to colonize a new planet, is introduced; it follows the lognormal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this STATISTICAL Fermi Paradox is highly innovative and fruitful for the future.
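
    The core statistical step, a product of many independent positive factors whose logarithm is a sum to which the CLT applies, can be checked numerically; the sketch below uses ten uniform factors spread ±10% around arbitrary mean values, in the spirit of the worked example in the abstract (the means themselves are placeholders).

        import numpy as np

        rng = np.random.default_rng(9)
        n_factors, n_samples = 10, 200_000
        # Ten positive factors, each uniform within +/-10% of an arbitrary mean value.
        means = rng.uniform(0.5, 2.0, n_factors)
        factors = rng.uniform(0.9 * means, 1.1 * means, size=(n_samples, n_factors))
        product = factors.prod(axis=1)

        logs = np.log(product)
        print(f"mean of product      : {product.mean():.3f}")
        print(f"skewness of log(prod): {np.mean(((logs - logs.mean()) / logs.std())**3):+.3f}  (≈0 for a lognormal)")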

  3. Random SU(2) invariant tensors

    NASA Astrophysics Data System (ADS)

    Li, Youning; Han, Muxin; Ruan, Dong; Zeng, Bei

    2018-04-01

    SU(2) invariant tensors are states in the (local) SU(2) tensor product representation but invariant under the global group action. They are of importance in the study of loop quantum gravity. A random tensor is an ensemble of tensor states. An average over the ensemble is carried out when computing any physical quantities. The random tensor exhibits a phenomenon known as ‘concentration of measure’, which states that for any bipartition the average value of the entanglement entropy of its reduced density matrix is asymptotically the maximal possible as the local dimensions go to infinity. We show that this phenomenon is also true when the average is over the SU(2) invariant subspace instead of the entire space for rank-n tensors in general. It is shown in our earlier work Li et al (2017 New J. Phys. 19 063029) that the subleading correction of the entanglement entropy has a mild logarithmic divergence when n = 4. In this paper, we show that for n > 4 the subleading correction is not divergent but a finite number. In some special situations, the number could be even smaller than 1/2, which is the subleading correction of a random state over the entire Hilbert space of tensors.

  4. cisTEM, user-friendly software for single-particle image processing.

    PubMed

    Grant, Timothy; Rohou, Alexis; Grigorieff, Nikolaus

    2018-03-07

    We have developed new open-source software called cisTEM (computational imaging system for transmission electron microscopy) for the processing of data for high-resolution electron cryo-microscopy and single-particle averaging. cisTEM features a graphical user interface that is used to submit jobs, monitor their progress, and display results. It implements a full processing pipeline including movie processing, image defocus determination, automatic particle picking, 2D classification, ab-initio 3D map generation from random parameters, 3D classification, and high-resolution refinement and reconstruction. Some of these steps implement newly-developed algorithms; others were adapted from previously published algorithms. The software is optimized to enable processing of typical datasets (2000 micrographs, 200 k - 300 k particles) on a high-end, CPU-based workstation in half a day or less, comparable to GPU-accelerated processing. Jobs can also be scheduled on large computer clusters using flexible run profiles that can be adapted for most computing environments. cisTEM is available for download from cistem.org. © 2018, Grant et al.

  5. cisTEM, user-friendly software for single-particle image processing

    PubMed Central

    2018-01-01

    We have developed new open-source software called cisTEM (computational imaging system for transmission electron microscopy) for the processing of data for high-resolution electron cryo-microscopy and single-particle averaging. cisTEM features a graphical user interface that is used to submit jobs, monitor their progress, and display results. It implements a full processing pipeline including movie processing, image defocus determination, automatic particle picking, 2D classification, ab-initio 3D map generation from random parameters, 3D classification, and high-resolution refinement and reconstruction. Some of these steps implement newly-developed algorithms; others were adapted from previously published algorithms. The software is optimized to enable processing of typical datasets (2000 micrographs, 200 k – 300 k particles) on a high-end, CPU-based workstation in half a day or less, comparable to GPU-accelerated processing. Jobs can also be scheduled on large computer clusters using flexible run profiles that can be adapted for most computing environments. cisTEM is available for download from cistem.org. PMID:29513216

  6. Evaluation of some random effects methodology applicable to bird ringing data

    USGS Publications Warehouse

    Burnham, K.P.; White, Gary C.

    2002-01-01

    Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S1, ..., Sk; random effects can then be a useful model: Si = E(S) + εi. Here, the temporal variation in survival probability is treated as random, with E(ε²) = σ². This random effects model can now be fit in program MARK. Resultant inferences include point and interval estimation for the process variation, σ², and estimation of E(S) and var(Ê(S)), where the latter includes a component for σ² as well as the traditional component for v̂ar(Ŝ|S). Furthermore, the random effects model leads to shrinkage estimates, S̃i, as improved (in mean square error) estimators of Si compared to the MLE, Ŝi, from the unrestricted time-effects model. Appropriate confidence intervals based on the S̃i are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of σ̂², confidence interval coverage on σ², coverage and mean square error comparisons for inference about Si based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: Si ≡ S (no effects), Si = E(S) + εi (random effects), and S1, ..., Sk (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than fixed-effects MLE for the Si.

  7. Models of stochastic gene expression

    NASA Astrophysics Data System (ADS)

    Paulsson, Johan

    2005-06-01

    Gene expression is an inherently stochastic process: Genes are activated and inactivated by random association and dissociation events, transcription is typically rare, and many proteins are present in low numbers per cell. The last few years have seen an explosion in the stochastic modeling of these processes, predicting protein fluctuations in terms of the frequencies of the probabilistic events. Here I discuss commonalities between theoretical descriptions, focusing on a gene-mRNA-protein model that includes most published studies as special cases. I also show how expression bursts can be explained as simplistic time-averaging, and how generic approximations can allow for concrete interpretations without requiring concrete assumptions. Measures and nomenclature are discussed to some extent and the modeling literature is briefly reviewed.

  8. Telegraph noise in Markovian master equation for electron transport through molecular junctions

    NASA Astrophysics Data System (ADS)

    Kosov, Daniel S.

    2018-05-01

    We present a theoretical approach to solve the Markovian master equation for quantum transport with stochastic telegraph noise. Considering probabilities as functionals of a random telegraph process, we use Novikov's functional method to convert the stochastic master equation to a set of deterministic differential equations. The equations are then solved in the Laplace space, and the expression for the probability vector averaged over the ensemble of realisations of the stochastic process is obtained. We apply the theory to study the manifestations of telegraph noise in the transport properties of molecular junctions. We consider the quantum electron transport in a resonant-level molecule as well as polaronic regime transport in a molecular junction with electron-vibration interaction.

  9. Operating Room Time Savings with the Use of Splint Packs: A Randomized Controlled Trial

    PubMed Central

    Gonzalez, Tyler A.; Bluman, Eric M.; Palms, David; Smith, Jeremy T.; Chiodo, Christopher P.

    2016-01-01

    Background: The most expensive variable in the operating room (OR) is time. Lean Process Management is being used in the medical field to improve efficiency in the OR. Streamlining individual processes within the OR is crucial to a comprehensive time saving and cost-cutting health care strategy. At our institution, one hour of OR time costs approximately $500, exclusive of supply and personnel costs. Commercially prepared splint packs (SP) contain all components necessary for plaster-of-Paris short-leg splint application and have the potential to decrease splint application time and overall costs by making it a more lean process. We conducted a randomized controlled trial comparing OR time savings between SP use and bulk supply (BS) splint application. Methods: Fifty consecutive adult operative patients on whom post-operative short-leg splint immobilization was indicated were randomized to either a control group using BS or an experimental group using SP. One orthopaedic surgeon (EMB) prepared and applied all of the splints in a standardized fashion. Retrieval time, preparation time, splint application time, and total splinting time for both groups were measured and statistically analyzed. Results: The retrieval time, preparation time and total splinting time were significantly less (p<0.001) in the SP group compared with the BS group. There was no significant difference in application time between the SP group and BS group. Conclusion: The use of SP made the process of splinting more lean. This has resulted in an average of 2 minutes 52 seconds saved in total splinting time compared to BS, making it an effective cost-cutting and time saving technique. For high volume ORs, use of splint packs may contribute to substantial time and cost savings without impacting patient safety. PMID:26894212

  10. Identifying Active Travel Behaviors in Challenging Environments Using GPS, Accelerometers, and Machine Learning Algorithms

    PubMed Central

    Ellis, Katherine; Godbole, Suneeta; Marshall, Simon; Lanckriet, Gert; Staudenmayer, John; Kerr, Jacqueline

    2014-01-01

    Background: Active travel is an important area in physical activity research, but objective measurement of active travel is still difficult. Automated methods to measure travel behaviors will improve research in this area. In this paper, we present a supervised machine learning method for transportation mode prediction from global positioning system (GPS) and accelerometer data. Methods: We collected a dataset of about 150 h of GPS and accelerometer data from two research assistants following a protocol of prescribed trips consisting of five activities: bicycling, riding in a vehicle, walking, sitting, and standing. We extracted 49 features from 1-min windows of this data. We compared the performance of several machine learning algorithms and chose a random forest algorithm to classify the transportation mode. We used a moving average output filter to smooth the output predictions over time. Results: The random forest algorithm achieved 89.8% cross-validated accuracy on this dataset. Adding the moving average filter to smooth output predictions increased the cross-validated accuracy to 91.9%. Conclusion: Machine learning methods are a viable approach for automating measurement of active travel, particularly for measuring travel activities that traditional accelerometer data processing methods misclassify, such as bicycling and vehicle travel. PMID:24795875
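
    A minimal sketch of the classification-plus-smoothing step described above, using scikit-learn's RandomForestClassifier and a moving-average filter over the predicted class probabilities; the synthetic features below merely stand in for the 49 GPS/accelerometer window features, and all sizes are arbitrary.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(10)
        n_windows, n_features, n_classes = 3000, 49, 5       # 1-min windows, 49 features, 5 travel modes

        # Synthetic stand-in for labelled window features: the true mode changes every 30 windows.
        labels = np.repeat(rng.integers(0, n_classes, n_windows // 30), 30)
        X = rng.normal(size=(labels.size, n_features)) + 0.15 * labels[:, None]

        split = labels.size // 2
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X[:split], labels[:split])

        proba = clf.predict_proba(X[split:])                 # per-window class probabilities
        kernel = np.ones(5) / 5.0                            # 5-window moving average
        smoothed = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, proba)

        truth = labels[split:]
        raw_acc = np.mean(clf.classes_[proba.argmax(axis=1)] == truth)
        smooth_acc = np.mean(clf.classes_[smoothed.argmax(axis=1)] == truth)
        print(f"accuracy: raw {raw_acc:.3f} -> smoothed {smooth_acc:.3f}")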

  11. Long-run growth rate in a random multiplicative model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pirjol, Dan

    2014-08-01

    We consider the long-run growth rate of the average value of a random multiplicative process x_{i+1} = a_i x_i, where the multipliers a_i = 1 + ρ exp(σW_i − σ²t_i/2) have Markovian dependence given by the exponential of a standard Brownian motion W_i. The average value ⟨x_n⟩ is given by the grand partition function of a one-dimensional lattice gas with two-body linear attractive interactions placed in a uniform field. We study the Lyapunov exponent λ = lim_{n→∞} (1/n) log⟨x_n⟩, at fixed β = σ²t_n n/2, and show that it is given by the equation of state of the lattice gas in thermodynamical equilibrium. The Lyapunov exponent has discontinuous partial derivatives along a curve in the (ρ, β) plane ending at a critical point (ρ_C, β_C), which is related to a phase transition in the equivalent lattice gas. Using the equivalence of the lattice gas with a bosonic system, we obtain the exact solution for the equation of state in the thermodynamic limit n → ∞.
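
    A direct Monte Carlo sketch of the quantity studied above: the growth rate of the ensemble average ⟨x_n⟩ for multipliers driven by a Brownian path. The parameter values are arbitrary, and the result is a finite-n, finite-sample approximation of the Lyapunov exponent rather than the exact lattice-gas solution.

        import numpy as np

        rng = np.random.default_rng(11)
        rho, sigma, dt, n = 0.05, 0.3, 0.01, 500
        n_paths = 5000

        t = dt * np.arange(1, n + 1)
        # Brownian motion W_i sampled on the grid, one path per realization.
        W = np.cumsum(np.sqrt(dt) * rng.standard_normal((n_paths, n)), axis=1)
        a = 1.0 + rho * np.exp(sigma * W - 0.5 * sigma**2 * t)      # Markovian multipliers
        x_n = np.prod(a, axis=1)                                    # x_n = a_1 ... a_n  (x_0 = 1)

        lam = np.log(x_n.mean()) / n
        print(f"estimated growth rate of <x_n>: lambda = {lam:.5f}")
        print(f"beta = sigma^2 * t_n * n / 2 = {0.5 * sigma**2 * t[-1] * n:.3f}")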

  12. Computationally Efficient Resampling of Nonuniform Oversampled SAR Data

    DTIC Science & Technology

    2010-05-01

    The resampled data are calculated using both a simple average and a weighted average of the demodulated data. Results from trials with randomly varying accelerations are shown in Fig. 5 for the noncoherent power difference and in Fig. 6 for the coherent power difference between SAR imagery generated with uniform sampling and with resampled nonuniform sampling.

  13. Effects of coarse-graining on fluctuations in gene expression

    NASA Astrophysics Data System (ADS)

    Pedraza, Juan; Paulsson, Johan

    2008-03-01

    Many cellular components are present in such low numbers per cell that random births and deaths of individual molecules can cause significant `noise' in concentrations. But biochemical events do not necessarily occur in steps of individual molecules. Some processes are greatly randomized when synthesis or degradation occurs in large bursts of many molecules in a short time interval. Conversely, each birth or death of a macromolecule could involve several small steps, creating a memory between individual events. Here we present generalized theory for stochastic gene expression, formulating the variance in protein abundance in terms of the randomness of the individual events, and discuss the effective coarse-graining of the molecular hardware. We show that common molecular mechanisms produce gestation and senescence periods that can reduce noise without changing average abundances, lifetimes, or any concentration-dependent control loops. We also show that single-cell experimental methods that are now commonplace in cell biology do not discriminate between qualitatively different stochastic principles, but that this in turn makes them better suited for identifying which components introduce fluctuations.

  14. Critical spreading dynamics of parity conserving annihilating random walks with power-law branching

    NASA Astrophysics Data System (ADS)

    Laise, T.; dos Anjos, F. C.; Argolo, C.; Lyra, M. L.

    2018-09-01

    We investigate the critical spreading of the parity-conserving annihilating random walks model with Lévy-like branching. The random walks are considered to perform normal diffusion with probability p on the sites of a one-dimensional lattice, annihilating in pairs on contact. With probability 1 - p, each particle can also produce two offspring which are placed at a distance r from the original site following a power-law Lévy-like distribution P(r) ∝ 1/r^α. We perform numerical simulations starting from a single particle. A finite-time scaling analysis is employed to locate the critical diffusion probability pc below which a finite density of particles develops in the long-time limit. Further, we estimate the spreading dynamical exponents related to the increase of the average number of particles at the critical point and its respective fluctuations. The critical exponents deviate from those of the counterpart model with short-range branching for small values of α. The numerical data suggest that continuously varying spreading exponents set in while the branching process still results in a diffusive-like spreading.

  15. Groupies in multitype random graphs.

    PubMed

    Shang, Yilun

    2016-01-01

    A groupie in a graph is a vertex whose degree is not less than the average degree of its neighbors. Under some mild conditions, we show that the proportion of groupies is very close to 1/2 in multitype random graphs (such as stochastic block models), which include Erdős-Rényi random graphs, random bipartite, and multipartite graphs as special examples. Numerical examples are provided to illustrate the theoretical results.
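
    A quick numerical check of the claim, assuming an Erdős-Rényi graph generated with networkx (parameters arbitrary); the printed proportion should come out close to 1/2.

        import networkx as nx

        G = nx.fast_gnp_random_graph(n=20_000, p=0.0005, seed=12)   # average degree ~10
        deg = dict(G.degree())

        def is_groupie(v):
            """Vertex whose degree is at least the average degree of its neighbours."""
            nbrs = list(G.neighbors(v))
            return bool(nbrs) and deg[v] >= sum(deg[u] for u in nbrs) / len(nbrs)

        groupies = sum(is_groupie(v) for v in G)
        print(f"proportion of groupies: {groupies / G.number_of_nodes():.3f}")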

  16. Stochastical analysis of surfactant-enhanced remediation of denser-than-water nonaqueous phase liquid (DNAPL)-contaminated soils.

    PubMed

    Zhang, Renduo; Wood, A Lynn; Enfield, Carl G; Jeong, Seung-Woo

    2003-01-01

    Stochastical analysis was performed to assess the effect of soil spatial variability and heterogeneity on the recovery of denser-than-water nonaqueous phase liquids (DNAPL) during the process of surfactant-enhanced remediation. UTCHEM, a three-dimensional, multicomponent, multiphase, compositional model, was used to simulate water flow and chemical transport processes in heterogeneous soils. Soil spatial variability and heterogeneity were accounted for by considering the soil permeability as a spatial random variable and a geostatistical method was used to generate random distributions of the permeability. The randomly generated permeability fields were incorporated into UTCHEM to simulate DNAPL transport in heterogeneous media and stochastical analysis was conducted based on the simulated results. From the analysis, an exponential relationship between average DNAPL recovery and soil heterogeneity (defined as the standard deviation of log of permeability) was established with a coefficient of determination (r2) of 0.991, which indicated that DNAPL recovery decreased exponentially with increasing soil heterogeneity. Temporal and spatial distributions of relative saturations in the water phase, DNAPL, and microemulsion in heterogeneous soils were compared with those in homogeneous soils and related to soil heterogeneity. Cleanup time and uncertainty to determine DNAPL distributions in heterogeneous soils were also quantified. The study would provide useful information to design strategies for the characterization and remediation of nonaqueous phase liquid-contaminated soils with spatial variability and heterogeneity.

  17. Robustness of Controllability for Networks Based on Edge-Attack

    PubMed Central

    Nie, Sen; Wang, Xuwen; Zhang, Haifeng; Li, Qilang; Wang, Binghong

    2014-01-01

    We study the controllability of networks in the process of cascading failures under two different attacking strategies, random and intentional attack. For the highest-load edge attack, it is found that the controllability of the Erdős-Rényi network with moderate average degree is less robust, whereas the scale-free network with moderate power-law exponent shows strong robustness of controllability under the same attack strategy. The vulnerability of controllability under random and intentional attacks behaves differently as the removal fraction increases; in particular, we find that the robustness of control plays an important role in cascades for large removal fractions. The simulation results show that, for scale-free networks with various power-law exponents, a larger scale of cascades does not imply a larger increase in the number of driver nodes. Meanwhile, the number of driver nodes in cascading failures is also related to the number of edges in the strongly connected components. PMID:24586507

  18. Robustness of controllability for networks based on edge-attack.

    PubMed

    Nie, Sen; Wang, Xuwen; Zhang, Haifeng; Li, Qilang; Wang, Binghong

    2014-01-01

    We study the controllability of networks in the process of cascading failures under two different attacking strategies, random and intentional attack. For the highest-load edge attack, it is found that the controllability of the Erdős-Rényi network with moderate average degree is less robust, whereas the scale-free network with moderate power-law exponent shows strong robustness of controllability under the same attack strategy. The vulnerability of controllability under random and intentional attacks behaves differently as the removal fraction increases; in particular, we find that the robustness of control plays an important role in cascades for large removal fractions. The simulation results show that, for scale-free networks with various power-law exponents, a larger scale of cascades does not imply a larger increase in the number of driver nodes. Meanwhile, the number of driver nodes in cascading failures is also related to the number of edges in the strongly connected components.

  19. Role of protein fluctuation correlations in electron transfer in photosynthetic complexes.

    PubMed

    Nesterov, Alexander I; Berman, Gennady P

    2015-04-01

    We consider the dependence of the electron transfer in photosynthetic complexes on correlation properties of random fluctuations of the protein environment. The electron subsystem is modeled by a finite network of connected electron (exciton) sites. The fluctuations of the protein environment are modeled by random telegraph processes, which act either collectively (correlated) or independently (uncorrelated) on the electron sites. We derived an exact closed system of first-order linear differential equations with constant coefficients, for the average density matrix elements and for their first moments. Under some conditions, we obtained analytic expressions for the electron transfer rates and found the range of parameters for their applicability by comparing with the exact numerical simulations. We also compared the correlated and uncorrelated regimes and demonstrated numerically that the uncorrelated fluctuations of the protein environment can, under some conditions, either increase or decrease the electron transfer rates.

  20. Multiscale volatility duration characteristics on financial multi-continuum percolation dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Min; Wang, Jun

    A random stock price model based on the multi-continuum percolation system is developed to investigate the nonlinear dynamics of stock price volatility duration, in an attempt to explain various statistical facts found in financial data and to reach a deeper understanding of mechanisms in the financial market. The continuum percolation system, usually referred to as a random coverage process or a Boolean model, is a member of a class of statistical physics systems. In this paper, multi-continuum percolation (with different values of the radius) is employed to model and reproduce the dispersal of information among investors. To test the validity of the proposed model, nonlinear analyses of the return volatility duration series are performed by multifractal detrending moving average analysis and Zipf analysis. The empirical comparison indicates similar nonlinear behaviors for the proposed model and the actual Chinese stock market.

  1. Enhancement of Spike Synchrony in Hindmarsh-Rose Neural Networks by Randomly Rewiring Connections

    NASA Astrophysics Data System (ADS)

    Yang, Renhuan; Song, Aiguo; Yuan, Wujie

    Spike synchrony of the neural system is thought to have very dichotomous roles. On the one hand, it is ubiquitously present in the healthy brain and is thought to underlie feature binding during information processing. On the other hand, large-scale synchronization is an underlying mechanism of epileptic seizures. In this paper, we investigate the spike synchrony of Hindmarsh-Rose (HR) neural networks. Our focus is the influence of the network connections on the spike synchrony of the neural networks. The simulations show that desynchronization in the nearest-neighbor coupled network evolves into accurate synchronization as the connection-rewiring probability p increases. We uncover a phenomenon of enhancement of spike synchrony by randomly rewiring connections. As the connection strength c and the average connection number m increase, spike synchrony is enhanced, but this is not the whole story. Furthermore, the possible mechanism behind such synchronization is also addressed.

  2. Stochastic transport in the presence of spatial disorder: Fluctuation-induced corrections to homogenization

    NASA Astrophysics Data System (ADS)

    Russell, Matthew J.; Jensen, Oliver E.; Galla, Tobias

    2016-10-01

    Motivated by uncertainty quantification in natural transport systems, we investigate an individual-based transport process involving particles undergoing a random walk along a line of point sinks whose strengths are themselves independent random variables. We assume particles are removed from the system via first-order kinetics. We analyze the system using a hierarchy of approaches when the sinks are sparsely distributed, including a stochastic homogenization approximation that yields explicit predictions for the extrinsic disorder in the stationary state due to sink strength fluctuations. The extrinsic noise induces long-range spatial correlations in the particle concentration, unlike fluctuations due to the intrinsic noise alone. Additionally, the mean concentration profile, averaged over both intrinsic and extrinsic noise, is elevated compared with the corresponding profile from a uniform sink distribution, showing that the classical homogenization approximation can be a biased estimator of the true mean.

  3. Method of model reduction and multifidelity models for solute transport in random layered porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Tartakovsky, Alexandre M.

    This work presents a hierarchical model for solute transport in bounded layered porous media with random permeability. The model generalizes the Taylor-Aris dispersion theory to stochastic transport in random layered porous media with a known velocity covariance function. In the hierarchical model, we represent (random) concentration in terms of its cross-sectional average and a variation function. We derive a one-dimensional stochastic advection-dispersion-type equation for the average concentration and a stochastic Poisson equation for the variation function, as well as expressions for the effective velocity and dispersion coefficient. We observe that velocity fluctuations enhance dispersion in a non-monotonic fashion: the dispersion initially increases with correlation length λ, reaches a maximum, and decreases to zero at infinity. Maximum enhancement can be obtained at a correlation length of about 0.25 times the size of the porous medium perpendicular to flow.

  4. Global mean first-passage times of random walks on complex networks.

    PubMed

    Tejedor, V; Bénichou, O; Voituriez, R

    2009-12-01

    We present a general framework, applicable to a broad class of random walks on complex networks, which provides a rigorous lower bound for the mean first-passage time of a random walker to a target site averaged over its starting position, the so-called global mean first-passage time (GMFPT). This bound is simply expressed in terms of the equilibrium distribution at the target and implies a minimal scaling of the GMFPT with the network size. We show that this minimal scaling, which can be arbitrarily slow, is realized under the simple condition that the random walk is transient at the target site and independently of the small-world, scale-free, or fractal properties of the network. Last, we put forward that the GMFPT to a specific target is not a representative property of the network since the target averaged GMFPT satisfies much more restrictive bounds.

  5. Patterns of taxonomic, phylogenetic diversity during a long-term succession of forest on the Loess Plateau, China: insights into assembly process

    PubMed Central

    Chai, Yongfu; Yue, Ming; Liu, Xiao; Guo, Yaoxin; Wang, Mao; Xu, Jinshi; Zhang, Chenguang; Chen, Yu; Zhang, Lixia; Zhang, Ruichang

    2016-01-01

    Quantifying the drivers underlying the distribution of biodiversity during succession is a critical issue in ecology and conservation, and can also provide insights into the mechanisms of community assembly. Ninety plots were established in the Loess Plateau region of northern Shaanxi in China. Taxonomic and phylogenetic (alpha and beta) diversity were quantified within six succession stages. Null models were used to test whether the observed phylogenetic distances differed from random expectations. Taxonomic beta diversity did not show a regular pattern, while phylogenetic beta diversity decreased throughout succession. The shrub stage appeared as a transition from phylogenetic overdispersion to clustering for both NRI (Net Relatedness Index) and betaNRI. The betaNTI (beta Nearest Taxon Index) values for the early stages were on average phylogenetically random, but in the betaNRI analyses these stages were phylogenetically overdispersed. Assembly of woody plants differed from that of herbaceous plants during late community succession. We suggest that deterministic and stochastic processes each play a role in different aspects of community phylogenetic structure for the early succession stages, and that the community composition of the late succession stage is governed by a deterministic process. In conclusion, long-lasting evolutionary imprints shape the present-day composition of communities arrayed along the succession gradient. PMID:27272407

  6. Patterns of taxonomic, phylogenetic diversity during a long-term succession of forest on the Loess Plateau, China: insights into assembly process.

    PubMed

    Chai, Yongfu; Yue, Ming; Liu, Xiao; Guo, Yaoxin; Wang, Mao; Xu, Jinshi; Zhang, Chenguang; Chen, Yu; Zhang, Lixia; Zhang, Ruichang

    2016-06-08

    Quantifying the drivers underlying the distribution of biodiversity during succession is a critical issue in ecology and conservation, and also can provide insights into the mechanisms of community assembly. Ninety plots were established in the Loess Plateau region of northern Shaanxi in China. The taxonomic and phylogenetic (alpha and beta) diversity were quantified within six succession stages. Null models were used to test whether the observed phylogenetic distances differed from random expectations. Taxonomic beta diversity did not show a regular pattern, while phylogenetic beta diversity decreased throughout succession. The shrub stage occurred as a transition from phylogenetic overdispersion to clustering either for NRI (Net Relatedness Index) or betaNRI. The betaNTI (Nearest Taxon Index) values for early stages were on average phylogenetically random, but for the betaNRI analyses, these stages were phylogenetically overdispersed. Assembly of woody plants differed from that of herbaceous plants during late community succession. We suggest that deterministic and stochastic processes respectively play a role in different aspects of community phylogenetic structure for the early succession stages, and that community composition of the late succession stage is governed by a deterministic process. In conclusion, long-lasting evolutionary imprints remain evident in the present-day composition of communities arrayed along the succession gradient.

  7. Measurement-induced randomness and state-merging

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Indranil; Deshpande, Abhishek; Chatterjee, Sourav

    In this work we introduce the randomness which is truly quantum mechanical in nature, arising from the act of measurement. For a composite classical system, we have the joint entropy to quantify the randomness present in the total system and that happens to be equal to the sum of the entropy of one subsystem and the conditional entropy of the other subsystem, given we know the first system. The same analogy carries over to the quantum setting by replacing the Shannon entropy by the von Neumann entropy. However, if we replace the conditional von Neumann entropy by the average conditional entropy due to measurement, we find that it is different from the joint entropy of the system. We call this difference Measurement Induced Randomness (MIR) and argue that it is unique to quantum mechanical systems and has no classical counterpart. In other words, the joint von Neumann entropy gives only the total randomness that arises because of the heterogeneity of the mixture, and we show that it is not the total randomness that can be generated in the composite system. We generalize this quantity for N-qubit systems and show that it reduces to quantum discord for two-qubit systems. Further, we show that it is exactly equal to the change in the cost of quantum state merging that arises because of the measurement. We argue that for quantum information processing tasks like state merging, the change in the cost as a result of discarding prior information can also be viewed as a rise of randomness due to measurement.

  8. Average size of random polygons with fixed knot topology.

    PubMed

    Matsuda, Hiroshi; Yao, Akihisa; Tsukahara, Hiroshi; Deguchi, Tetsuo; Furuta, Ko; Inami, Takeo

    2003-07-01

    We have evaluated by numerical simulation the average size R(K) of random polygons of fixed knot topology K = ∅, 3(1), 3(1)♯4(1), and we have confirmed the scaling law R²(K) ≈ N^(2ν(K)) for the number N of polygonal nodes in a wide range, N = 100-2200. The best fit gives 2ν(K) ≈ 1.11-1.16 with good fitting curves in the whole range of N. The estimate of 2ν(K) is consistent with the exponent of self-avoiding polygons. In a limited range of N (N ≳ 600), however, we have another fit with 2ν(K) ≈ 1.01-1.07, which is close to the exponent of random polygons.

  9. Introducing two Random Forest based methods for cloud detection in remote sensing images

    NASA Astrophysics Data System (ADS)

    Ghasemian, Nafiseh; Akhoondzadeh, Mehdi

    2018-07-01

    Cloud detection is a necessary phase in satellite images processing to retrieve the atmospheric and lithospheric parameters. Currently, some cloud detection methods based on Random Forest (RF) model have been proposed but they do not consider both spectral and textural characteristics of the image. Furthermore, they have not been tested in the presence of snow/ice. In this paper, we introduce two RF based algorithms, Feature Level Fusion Random Forest (FLFRF) and Decision Level Fusion Random Forest (DLFRF) to incorporate visible, infrared (IR) and thermal spectral and textural features (FLFRF) including Gray Level Co-occurrence Matrix (GLCM) and Robust Extended Local Binary Pattern (RELBP_CI) or visible, IR and thermal classifiers (DLFRF) for highly accurate cloud detection on remote sensing images. FLFRF first fuses visible, IR and thermal features. Thereafter, it uses the RF model to classify pixels to cloud, snow/ice and background or thick cloud, thin cloud and background. DLFRF considers visible, IR and thermal features (both spectral and textural) separately and inserts each set of features to RF model. Then, it holds vote matrix of each run of the model. Finally, it fuses the classifiers using the majority vote method. To demonstrate the effectiveness of the proposed algorithms, 10 Terra MODIS and 15 Landsat 8 OLI/TIRS images with different spatial resolutions are used in this paper. Quantitative analyses are based on manually selected ground truth data. Results show that after adding RELBP_CI to input feature set cloud detection accuracy improves. Also, the average cloud kappa values of FLFRF and DLFRF on MODIS images (1 and 0.99) are higher than other machine learning methods, Linear Discriminate Analysis (LDA), Classification And Regression Tree (CART), K Nearest Neighbor (KNN) and Support Vector Machine (SVM) (0.96). The average snow/ice kappa values of FLFRF and DLFRF on MODIS images (1 and 0.85) are higher than other traditional methods. The quantitative values on Landsat 8 images show similar trend. Consequently, while SVM and K-nearest neighbor show overestimation in predicting cloud and snow/ice pixels, our Random Forest (RF) based models can achieve higher cloud, snow/ice kappa values on MODIS and thin cloud, thick cloud and snow/ice kappa values on Landsat 8 images. Our algorithms predict both thin and thick cloud on Landsat 8 images while the existing cloud detection algorithm, Fmask cannot discriminate them. Compared to the state-of-the-art methods, our algorithms have acquired higher average cloud and snow/ice kappa values for different spatial resolutions.
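
    Purely as an illustration of the decision-level fusion idea (not the authors' FLFRF/DLFRF implementation), the sketch below trains separate Random Forest classifiers on assumed visible, IR, and thermal feature arrays and fuses their per-pixel predictions by majority vote; all data, feature sizes, and class labels are synthetic stand-ins.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    n = 500
    # Illustrative stand-ins for per-pixel feature sets (visible, IR, thermal).
    X_vis, X_ir, X_thermal = (rng.normal(size=(n, 6)) for _ in range(3))
    y = rng.integers(0, 3, size=n)  # 0 = background, 1 = cloud, 2 = snow/ice

    def fused_prediction(feature_sets, y, train, test):
        """Train one RF per feature set and fuse predictions by majority vote."""
        votes = []
        for X in feature_sets:
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(X[train], y[train])
            votes.append(clf.predict(X[test]))
        stacked = np.vstack(votes)                       # shape (3, n_test)
        return np.array([np.bincount(col).argmax() for col in stacked.T])

    train, test = np.arange(0, 400), np.arange(400, 500)
    pred = fused_prediction([X_vis, X_ir, X_thermal], y, train, test)
    print("fused accuracy on held-out pixels:", np.mean(pred == y[test]))
    ```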

  10. Work extraction from quantum systems with bounded fluctuations in work.

    PubMed

    Richens, Jonathan G; Masanes, Lluis

    2016-11-25

    In the standard framework of thermodynamics, work is a random variable whose average is bounded by the change in free energy of the system. This average work is calculated without regard for the size of its fluctuations. Here we show that for some processes, such as reversible cooling, the fluctuations in work diverge. Realistic thermal machines may be unable to cope with arbitrarily large fluctuations. Hence, it is important to understand how thermodynamic efficiency rates are modified by bounding fluctuations. We quantify the work content and work of formation of arbitrary finite dimensional quantum states when the fluctuations in work are bounded by a given amount c. By varying c we interpolate between the standard and minimum free energies. We derive fundamental trade-offs between the magnitude of work and its fluctuations. As one application of these results, we derive the corrected Carnot efficiency of a qubit heat engine with bounded fluctuations.

  11. Work extraction from quantum systems with bounded fluctuations in work

    PubMed Central

    Richens, Jonathan G.; Masanes, Lluis

    2016-01-01

    In the standard framework of thermodynamics, work is a random variable whose average is bounded by the change in free energy of the system. This average work is calculated without regard for the size of its fluctuations. Here we show that for some processes, such as reversible cooling, the fluctuations in work diverge. Realistic thermal machines may be unable to cope with arbitrarily large fluctuations. Hence, it is important to understand how thermodynamic efficiency rates are modified by bounding fluctuations. We quantify the work content and work of formation of arbitrary finite dimensional quantum states when the fluctuations in work are bounded by a given amount c. By varying c we interpolate between the standard and minimum free energies. We derive fundamental trade-offs between the magnitude of work and its fluctuations. As one application of these results, we derive the corrected Carnot efficiency of a qubit heat engine with bounded fluctuations. PMID:27886177

  12. Work extraction from quantum systems with bounded fluctuations in work

    NASA Astrophysics Data System (ADS)

    Richens, Jonathan G.; Masanes, Lluis

    2016-11-01

    In the standard framework of thermodynamics, work is a random variable whose average is bounded by the change in free energy of the system. This average work is calculated without regard for the size of its fluctuations. Here we show that for some processes, such as reversible cooling, the fluctuations in work diverge. Realistic thermal machines may be unable to cope with arbitrarily large fluctuations. Hence, it is important to understand how thermodynamic efficiency rates are modified by bounding fluctuations. We quantify the work content and work of formation of arbitrary finite dimensional quantum states when the fluctuations in work are bounded by a given amount c. By varying c we interpolate between the standard and minimum free energies. We derive fundamental trade-offs between the magnitude of work and its fluctuations. As one application of these results, we derive the corrected Carnot efficiency of a qubit heat engine with bounded fluctuations.

  13. Pattern Selection and Super-Patterns in Opinion Dynamics

    NASA Astrophysics Data System (ADS)

    Ben-Naim, Eli; Scheel, Arnd

    We study pattern formation in the bounded confidence model of opinion dynamics. In this random process, opinion is quantified by a single variable. Two agents may interact and reach a fair compromise, but only if their difference of opinion falls below a fixed threshold. Starting from a uniform distribution of opinions with compact support, a traveling wave forms and it propagates from the domain boundary into the unstable uniform state. Consequently, the system reaches a steady state with isolated clusters that are separated by distance larger than the interaction range. These clusters form a quasi-periodic pattern where the sizes of the clusters and the separations between them are nearly constant. We obtain analytically the average separation between clusters L. Interestingly, there are also very small quasi-periodic modulations in the size of the clusters. The spatial periods of these modulations are a series of integers that follow from the continued-fraction representation of the irrational average separation L.
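
    A minimal sketch of the bounded confidence compromise rule described above, assuming illustrative values for the opinion support, threshold, and number of interactions: two randomly chosen agents average their opinions only when they differ by less than the threshold, and isolated clusters emerge.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, threshold, steps = 2000, 1.0, 200_000   # illustrative parameters
    opinions = rng.uniform(0.0, 10.0, size=n)  # compact support of width 10

    for _ in range(steps):
        i, j = rng.integers(0, n, size=2)
        if i != j and abs(opinions[i] - opinions[j]) < threshold:
            # Fair compromise: both agents move to their mean opinion.
            mean = 0.5 * (opinions[i] + opinions[j])
            opinions[i] = opinions[j] = mean

    # Clusters emerge separated by more than the interaction range.
    hist, edges = np.histogram(opinions, bins=100, range=(0.0, 10.0))
    print("occupied opinion bins:", np.nonzero(hist)[0])
    ```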

  14. Voter dynamics on an adaptive network with finite average connectivity

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Abhishek; Schmittmann, Beate

    2009-03-01

    We study a simple model for voter dynamics in a two-party system. The opinion formation process is implemented in a random network of agents in which interactions are not restricted by geographical distance. In addition, we incorporate the rapidly changing nature of the interpersonal relations in the model. At each time step, agents can update their relationships, so that there is no history dependence in the model. This update is determined by their own opinion, and by their preference to make connections with individuals sharing the same opinion and with opponents. Using simulations and analytic arguments, we determine the final steady states and the relaxation into these states for different system sizes. In contrast to earlier studies, the average connectivity (``degree'') of each agent is constant here, independent of the system size. This has significant consequences for the long-time behavior of the model.

  15. Calibration of a universal indicated turbulence system

    NASA Technical Reports Server (NTRS)

    Chapin, W. G.

    1977-01-01

    Theoretical and experimental work on a Universal Indicated Turbulence Meter is described. A mathematical transfer function from turbulence input to output indication was developed. A random ergodic process and a Gaussian turbulence distribution were assumed. A calibration technique based on this transfer function was developed. The computer contains a variable gain amplifier to make the system output independent of average velocity. The range over which this independence holds was determined. An optimum dynamic response was obtained for the tubulation between the system pitot tube and pressure transducer by making dynamic response measurements for orifices of various lengths and diameters at the source end.

  16. Automated characterisation of ultrasound images of ovarian tumours: the diagnostic accuracy of a support vector machine and image processing with a local binary pattern operator.

    PubMed

    Khazendar, S; Sayasneh, A; Al-Assam, H; Du, H; Kaijser, J; Ferrara, L; Timmerman, D; Jassim, S; Bourne, T

    2015-01-01

    Preoperative characterisation of ovarian masses into benign or malignant is of paramount importance to optimise patient management. In this study, we developed and validated a computerised model to characterise ovarian masses as benign or malignant. Transvaginal 2D B mode static ultrasound images of 187 ovarian masses with known histological diagnosis were included. Images were first pre-processed and enhanced, and Local Binary Pattern Histograms were then extracted from 2 × 2 blocks of each image. A Support Vector Machine (SVM) was trained using stratified cross validation with randomised sampling. The process was repeated 15 times and in each round 100 images were randomly selected. The SVM classified the original non-treated static images as benign or malignant masses with an average accuracy of 0.62 (95% CI: 0.59-0.65). This performance significantly improved to an average accuracy of 0.77 (95% CI: 0.75-0.79) when images were pre-processed, enhanced and treated with a Local Binary Pattern operator (mean difference 0.15; 95% CI: 0.11-0.19, p < 0.0001, two-tailed t test). We have shown that an SVM can classify static 2D B mode ultrasound images of ovarian masses into benign and malignant categories. The accuracy improves if texture related LBP features extracted from the images are considered.
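
    As a rough illustration of the described pipeline (not the authors' code), the sketch below extracts Local Binary Pattern histograms from a 2 × 2 grid of blocks and cross-validates an SVM using scikit-image and scikit-learn; the images, labels, and LBP/SVM parameters are illustrative assumptions.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(3)

    def lbp_block_histograms(image, P=8, R=1.0, blocks=2, bins=10):
        """Concatenate LBP histograms computed on a blocks x blocks grid."""
        lbp = local_binary_pattern(image, P, R, method="uniform")
        h, w = image.shape
        feats = []
        for i in range(blocks):
            for j in range(blocks):
                patch = lbp[i * h // blocks:(i + 1) * h // blocks,
                            j * w // blocks:(j + 1) * w // blocks]
                hist, _ = np.histogram(patch, bins=bins, range=(0, P + 2), density=True)
                feats.append(hist)
        return np.concatenate(feats)

    # Illustrative stand-ins for 187 grey-scale ultrasound images and labels.
    images = rng.random(size=(187, 64, 64))
    labels = rng.integers(0, 2, size=187)   # 0 = benign, 1 = malignant

    X = np.array([lbp_block_histograms(img) for img in images])
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=cv)
    print("mean cross-validated accuracy:", scores.mean())
    ```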

  17. Shifting the focus to practice quality improvement in radiation oncology.

    PubMed

    Crozier, Cheryl; Erickson-Wittmann, Beth; Movsas, Benjamin; Owen, Jean; Khalid, Najma; Wilson, J Frank

    2011-09-01

    To demonstrate how the American College of Radiology, Quality Research in Radiation Oncology (QRRO) process survey database can serve as an evidence base for assessing quality of care in radiation oncology. QRRO has drawn a stratified random sample of radiation oncology facilities in the USA and invited those facilities to participate in a Process Survey. Information from a prior QRRO Facilities Survey has been used along with data collected under the current National Process Survey to calculate national averages and make statistically valid inferences for national process measures for selected cancers in which radiation therapy plays a major role. These measures affect outcomes important to patients and providers and measure quality of care. QRRO's survey data provides national benchmark data for numerous quality indicators. The Process Survey is "fully qualified" as a Practice Quality Improvement project by the American Board of Radiology under its Maintenance of Certification requirements for radiation oncology and radiation physics. © 2011 National Association for Healthcare Quality.

  18. Strange kinetics of bulk-mediated diffusion on lipid bilayers

    PubMed Central

    Campagnola, Grace; Nepal, Kanti; Peersen, Olve B.

    2016-01-01

    Diffusion at solid-liquid interfaces is crucial in many technological and biophysical processes. Although its behavior seems deceptively simple, recent studies showing passive superdiffusive transport suggest diffusion on surfaces may hide rich complexities. In particular, bulk-mediated diffusion occurs when molecules are transiently released from the surface to perform three-dimensional excursions into the liquid bulk. This phenomenon bears the dichotomy where a molecule always returns to the surface but the mean jump length is infinite. Such behavior is associated with a breakdown of the central limit theorem and weak ergodicity breaking. Here, we use single-particle tracking to study the statistics of bulk-mediated diffusion on a supported lipid bilayer. We find that the time-averaged mean square displacement (MSD) of individual trajectories, the archetypal measure in diffusion processes, does not converge to the ensemble MSD but remains a random variable, even in the long observation-time limit. The distribution of time averages is shown to agree with a Lévy flight model. Our results also unravel intriguing anomalies in the statistics of displacements. The time-averaged MSD is shown to depend on experimental time, and investigations of fractional moments show a scaling ⟨|r(t)|^q⟩ ∼ t^(qν(q)) with nonlinear exponents, i.e. ν(q) ≠ const. This type of behavior is termed strong anomalous diffusion and is rare among experimental observations. PMID:27095275
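
    A minimal sketch, assuming synthetic trajectories rather than the paper's tracking data, of how the time-averaged MSD of individual trajectories can be computed and its trajectory-to-trajectory scatter inspected; the step statistics and parameters are illustrative only.

    ```python
    import numpy as np

    def time_averaged_msd(traj, lags):
        """Time-averaged MSD of one 2D trajectory: squared displacements
        averaged over all start times for each lag."""
        return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                         for lag in lags])

    rng = np.random.default_rng(4)
    # Illustrative trajectories: Brownian steps plus occasional long
    # (Levy-flight-like) excursions, a crude stand-in for bulk-mediated jumps.
    n_traj, n_steps = 20, 5000
    lags = np.arange(1, 200)
    msds = []
    for _ in range(n_traj):
        steps = rng.normal(scale=0.1, size=(n_steps, 2))
        jumps = rng.random(n_steps) < 0.01
        steps[jumps] *= rng.pareto(1.5, size=(jumps.sum(), 1)) + 1.0
        traj = np.cumsum(steps, axis=0)
        msds.append(time_averaged_msd(traj, lags))

    msds = np.array(msds)
    # Scatter of individual time averages around their mean shows that the
    # time-averaged MSD remains a random variable across trajectories.
    print("relative spread at lag 100:", msds[:, 99].std() / msds[:, 99].mean())
    ```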

  19. Adapted intervention mapping: a strategic planning process for increasing physical activity and healthy eating opportunities in schools via environment and policy change.

    PubMed

    Belansky, Elaine S; Cutforth, Nick; Chavez, Robert; Crane, Lori A; Waters, Emily; Marshall, Julie A

    2013-03-01

    School environment and policy changes have increased healthy eating and physical activity; however, there has been modest success in translating research findings to practice. The School Environment Project tested whether an adapted version of Intervention Mapping (AIM) resulted in school change. Using a pair randomized design, 10 rural elementary schools were assigned to AIM or the School Health Index (SHI). Baseline measures were collected fall 2005, AIM was conducted 2005-2006, and follow-up measures were collected fall 2006 and 2007. Outcome measures included number and type of effective environment and policy changes implemented; process measures included the extent to which 11 implementation steps were used. AIM schools made an average of 4.4 effective changes per school with 90% still in place a year later. SHI schools made an average of 0.6 effective changes with 66% in place a year later. Implementation steps distinguishing AIM from SHI included use of external, trained facilitators; principal involvement; explicitly stating the student behavior goals; identifying effective environment and policy changes; prioritizing potential changes based on importance and feasibility; and developing an action plan. The AIM process led to environment and policy changes known to increase healthy eating and physical activity. © 2013, American School Health Association.

  20. Effectiveness of bone cleaning process using chemical and entomology approaches: time and cost.

    PubMed

    Lai, Poh Soon; Khoo, Lay See; Mohd Hilmi, Saidin; Ahmad Hafizam, Hasmi; Mohd Shah, Mahmood; Nurliza, Abdullah; Nazni, Wasi Ahmad

    2015-08-01

    Skeletal examination is an important aspect of forensic pathology practice, requiring effective bone cleaning with minimal artefact. This study was conducted to compare chemical and entomology methods of bone cleaning. Ten subjects between 20 and 40 years old who underwent uncomplicated medico-legal autopsies at the Institute of Forensic Medicine Malaysia were randomly chosen for this descriptive cross sectional study. The sternum bone was divided into 4 parts, each part subjected to a different cleaning method: two chemical approaches, i.e. laundry detergent and a combination of 6% hydrogen peroxide and powdered sodium bicarbonate, and two entomology approaches using 2nd instar maggots of Chrysomyia rufifacies and Ophyra spinigera. A scoring system for grading the outcome of cleaning was used. The effectiveness of the methods was evaluated based on average weight reduction per day and the median number of days to achieve an average score of less than 1.5 within 12 days of the bone cleaning process. Using maggots was the most time-effective and cost-effective method, achieving an average weight reduction of 1.4 gm per day, a median of 11.3 days to achieve the desired score and an average cost of MYR 4.10 per case to reach the desired score within 12 days. This conclusion was supported by blind validation by forensic specialists achieving a 77.8% preference for maggots. Emission scanning electron microscopy evaluation also revealed that maggots, especially Chrysomyia rufifacies, preserved the original condition of the bones better, allowing improved elucidation of bone injuries in future real cases.

  1. Automated Classification of Selected Data Elements from Free-text Diagnostic Reports for Clinical Research.

    PubMed

    Löpprich, Martin; Krauss, Felix; Ganzinger, Matthias; Senghas, Karsten; Riezler, Stefan; Knaup, Petra

    2016-08-05

    In the Multiple Myeloma clinical registry at Heidelberg University Hospital, most data are extracted from discharge letters. Our aim was to analyze whether it is possible to make the manual documentation process more efficient by using methods of natural language processing for multiclass classification of free-text diagnostic reports to automatically document the diagnosis and state of disease of myeloma patients. The first objective was to create a corpus consisting of free-text diagnosis paragraphs of patients with multiple myeloma from German diagnostic reports, together with a manual annotation of relevant data elements by documentation specialists. The second objective was to construct and evaluate a framework using different NLP methods to enable automatic multiclass classification of relevant data elements from free-text diagnostic reports. The main diagnosis paragraph was extracted from the clinical reports of one third of the patients, randomly selected from the multiple myeloma research database of Heidelberg University Hospital (737 patients in total). An EDC system was set up and two data entry specialists independently performed manual documentation of at least nine specific data elements for multiple myeloma characterization. Both data entries were compared and assessed by a third specialist and an annotated text corpus was created. A framework was constructed, consisting of a self-developed package to split multiple diagnosis sequences into several subsequences, four different preprocessing steps to normalize the input data and two classifiers: a maximum entropy classifier (MEC) and a support vector machine (SVM). In total 15 different pipelines were examined and assessed by a ten-fold cross-validation, repeated 100 times. As quality indicators, the average error rate and the average F1-score were computed. For significance testing the approximate randomization test was used. The created annotated corpus consists of 737 different diagnosis paragraphs with a total number of 865 coded diagnoses. The dataset is publicly available in the supplementary online files for training and testing of further NLP methods. Both classifiers showed low average error rates (MEC: 1.05; SVM: 0.84) and high F1-scores (MEC: 0.89; SVM: 0.92). However, the results varied widely depending on the classified data element. Preprocessing methods increased this effect and had significant impact on the classification, both positive and negative. The automatic diagnosis splitter increased the average error rate significantly, even if the F1-score decreased only slightly. The low average error rates and high average F1-scores of each pipeline demonstrate the suitability of the investigated NLP methods. However, it was also shown that there is no best practice for an automatic classification of data elements from free-text diagnostic reports.
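
    As an illustration of the evaluation scheme only (repeated stratified ten-fold cross-validation scored by error rate and F1), not of the authors' framework, the sketch below cross-validates a TF-IDF + linear SVM text classifier; the stand-in diagnosis strings, labels, and number of repetitions are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    # Illustrative stand-ins for German diagnosis paragraphs and coded labels.
    texts = ["multiples myelom iss stadium ii", "mgus", "plasmozytom stadium i",
             "multiples myelom iss stadium iii", "smoldering myelom", "mgus"] * 30
    labels = np.array([0, 1, 2, 0, 3, 1] * 30)

    pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())

    # Ten-fold cross-validation repeated several times (the paper repeats 100x).
    f1_scores, error_rates = [], []
    for repeat in range(10):
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=repeat)
        f1 = cross_val_score(pipeline, texts, labels, cv=cv, scoring="f1_macro")
        acc = cross_val_score(pipeline, texts, labels, cv=cv, scoring="accuracy")
        f1_scores.append(f1.mean())
        error_rates.append(1.0 - acc.mean())

    print("average F1:", np.mean(f1_scores), "average error rate:", np.mean(error_rates))
    ```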

  2. When push comes to shove: Exclusion processes with nonlocal consequences

    NASA Astrophysics Data System (ADS)

    Almet, Axel A.; Pan, Michael; Hughes, Barry D.; Landman, Kerry A.

    2015-11-01

    Stochastic agent-based models are useful for modelling collective movement of biological cells. Lattice-based random walk models of interacting agents where each site can be occupied by at most one agent are called simple exclusion processes. An alternative motility mechanism to simple exclusion is formulated, in which agents are granted more freedom to move under the compromise that interactions are no longer necessarily local. This mechanism is termed shoving. A nonlinear diffusion equation is derived for a single population of shoving agents using mean-field continuum approximations. A continuum model is also derived for a multispecies problem with interacting subpopulations, which either obey the shoving rules or the simple exclusion rules. Numerical solutions of the derived partial differential equations compare well with averaged simulation results for both the single species and multispecies processes in two dimensions, while some issues arise in one dimension for the multispecies case.
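
    For reference, a minimal sketch of the baseline lattice simple exclusion process that the shoving mechanism is contrasted with: an agent may move to a randomly chosen neighboring site only if that site is empty. Lattice size, density, and step count are illustrative assumptions; the nonlocal shoving rule itself is not implemented here.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    L, n_agents, steps = 50, 500, 200_000                 # illustrative parameters
    occupied = np.zeros((L, L), dtype=bool)
    flat = rng.choice(L * L, size=n_agents, replace=False)  # distinct random sites
    agents = np.column_stack(np.unravel_index(flat, (L, L)))
    occupied[agents[:, 0], agents[:, 1]] = True
    disp = np.zeros_like(agents)                          # net displacement per agent

    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    for _ in range(steps):
        k = rng.integers(n_agents)                # pick an agent at random
        step = moves[rng.integers(4)]
        new = (agents[k] + step) % L              # periodic boundaries
        if not occupied[new[0], new[1]]:          # simple exclusion: target must be empty
            occupied[agents[k, 0], agents[k, 1]] = False
            occupied[new[0], new[1]] = True
            agents[k] = new
            disp[k] += step

    print("mean squared displacement:", np.mean(np.sum(disp ** 2, axis=1)))
    ```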

  3. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    NASA Astrophysics Data System (ADS)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z)  =  0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f  =  0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  4. Average luminosity distance in inhomogeneous universes

    NASA Astrophysics Data System (ADS)

    Kostov, Valentin Angelov

    Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic); thus it is more directly applicable to our observations. Unlike previous studies, the averaging is exact, non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovas inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. For voids aligned in a certain direction, there is a cumulative gravitational lensing correction to the distance modulus that increases with redshift. That correction is present even for small voids and depends on the density contrast of the voids, not on their radius. Averaging over all directions destroys the cumulative correction even in a non-randomized simple cubic lattice of voids. Despite the well known argument for photon flux conservation, the average distance modulus correction at low redshifts is not zero due to the peculiar velocities. A formula for the maximum possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (1) have approximately constant densities in their interior and walls, (2) are not in a deep nonlinear regime. The actual average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximum. That is traced to cancellations between the corrections coming from the fronts and backs of different voids at the same redshift from the observer. The calculated correction at low redshifts allows one to readily predict the redshift at which the averaged fluctuation in the Hubble diagram is below a required precision and suggests a method to extract the background Hubble constant from low redshift data without the need to correct for peculiar velocities.

  5. Inference from clustering with application to gene-expression microarrays.

    PubMed

    Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M

    2002-01-01

    There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to which process they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. A large amount of generated output is available over the web.
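
    A minimal sketch of the model-based evaluation idea, assuming hypothetical two-dimensional process means and Gaussian noise rather than microarray data: points are generated from each process, clustered with k-means, and the clustering error is the minimum number of misassigned points over all relabelings of the clusters.

    ```python
    import numpy as np
    from itertools import permutations
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(6)
    # Each "random process" is a mean profile plus independent Gaussian noise.
    means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])  # illustrative means
    n_per, sigma = 50, 1.0
    X = np.vstack([m + rng.normal(scale=sigma, size=(n_per, 2)) for m in means])
    truth = np.repeat(np.arange(len(means)), n_per)

    pred = KMeans(n_clusters=len(means), n_init=10, random_state=0).fit_predict(X)

    def clustering_error(truth, pred, k):
        """Minimum misassignment rate over all relabelings, since cluster
        indices returned by the algorithm are arbitrary."""
        best = len(truth)
        for perm in permutations(range(k)):
            remapped = np.array([perm[p] for p in pred])
            best = min(best, np.sum(remapped != truth))
        return best / len(truth)

    print("clustering error rate:", clustering_error(truth, pred, len(means)))
    ```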

  6. Determinants of translation speed are randomly distributed across transcripts resulting in a universal scaling of protein synthesis times

    NASA Astrophysics Data System (ADS)

    Sharma, Ajeet K.; Ahmed, Nabeel; O'Brien, Edward P.

    2018-02-01

    Ribosome profiling experiments have found greater than 100-fold variation in ribosome density along mRNA transcripts, indicating that individual codon elongation rates can vary to a similar degree. This wide range of elongation times, coupled with differences in codon usage between transcripts, suggests that the average codon translation-rate per gene can vary widely. Yet, ribosome run-off experiments have found that the average codon translation rate for different groups of transcripts in mouse stem cells is constant at 5.6 AA/s. How these seemingly contradictory results can be reconciled is the focus of this study. Here, we combine knowledge of the molecular factors shown to influence translation speed with genomic information from Escherichia coli, Saccharomyces cerevisiae and Homo sapiens to simulate the synthesis of cytosolic proteins in these organisms. The model recapitulates a near constant average translation rate, which we demonstrate arises because the molecular determinants of translation speed are distributed nearly randomly amongst most of the transcripts. Consequently, codon translation rates are also randomly distributed and fast-translating segments of a transcript are likely to be offset by equally probable slow-translating segments, resulting in similar average elongation rates for most transcripts. We also show that the codon usage bias does not significantly affect the near random distribution of codon translation rates because only about 10 % of the total transcripts in an organism have high codon usage bias while the rest have little to no bias. Analysis of Ribo-Seq data and an in vivo fluorescent assay supports these conclusions.

  7. A Fock space representation for the quantum Lorentz gas

    NASA Astrophysics Data System (ADS)

    Maassen, H.; Tip, A.

    1995-02-01

    A Fock space representation is given for the quantum Lorentz gas, i.e., for random Schrödinger operators of the form H(ω) = p² + V_ω = p² + ∑ φ(x − x_j(ω)), acting in H = L²(R^d), with Poisson distributed x_j's. An operator H is defined in K = H ⊗ P = H ⊗ L²(Ω, P(dω)) = L²(Ω, P(dω); H) by the action of H(ω) on its fibers in a direct integral decomposition. The stationarity of the Poisson process allows a unitarily equivalent description in terms of a new family {H(k) | k ∈ R^d}, where each H(k) acts in P [A. Tip, J. Math. Phys. 35, 113 (1994)]. The space P is then unitarily mapped upon the symmetric Fock space over L²(R^d, ρ dx), with ρ the intensity of the Poisson process (the average number of points x_j per unit volume; the scatterer density), and the equivalent of H(k) is determined. Averages now become vacuum expectation values and a further unitary transformation (removing ρ in ρ dx) is made which leaves the former invariant. The resulting operator H_F(k) has an interesting structure: on the nth Fock layer we encounter a single particle moving in the field of n scatterers, and the randomness now appears in the coefficient √ρ in a coupling term connecting neighboring Fock layers. We also give a simple direct self-adjointness proof for H_F(k), based upon Nelson's commutator theorem. Restriction to a finite number of layers (a kind of low scatterer density approximation) still gives nontrivial results, as is demonstrated by considering an example.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.

  9. Strong Shock Propagating Over A Random Bed of Spherical Particles

    NASA Astrophysics Data System (ADS)

    Mehta, Yash; Salari, Kambiz; Jackson, Thomas L.; Balachandar, S.; Thakur, Siddharth

    2017-11-01

    The study of shock interaction with particles has been largely motivated by its wide-ranging applications. The complex interaction between the compressible flow features, such as the shock wave and expansion fan, and the dispersed phase makes this multi-phase flow very difficult to predict and control. In this talk we will be presenting results on fully resolved inviscid simulations of a shock interacting with a random bed of particles. One of the fascinating observations from these simulations is the flow-field fluctuations due to the presence of randomly distributed particles. Rigorous averaging (Favre averaging) of the governing equations results in a Reynolds-stress-like term, which can be classified as pseudo turbulence in this case. We have computed this ``Reynolds stress'' term along with individual fluctuations and the turbulent kinetic energy. Average pressure was also computed to characterize the strength of the transmitted and the reflected waves. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program.

  10. Effect of texture randomization on the slip and interfacial robustness in turbulent flows over superhydrophobic surfaces

    NASA Astrophysics Data System (ADS)

    Seo, Jongmin; Mani, Ali

    2018-04-01

    Superhydrophobic surfaces demonstrate promising potential for skin friction reduction in naval and hydrodynamic applications. Recent developments of superhydrophobic surfaces aiming for scalable applications use random distributions of roughness, produced for example by spray coating and etching. However, most previous analyses of the interaction between flows and superhydrophobic surfaces studied periodic geometries that are economically feasible only in laboratory-scale experiments. In order to assess the drag reduction effectiveness as well as interfacial robustness of superhydrophobic surfaces with randomly distributed textures, we conduct direct numerical simulations of turbulent flows over randomly patterned interfaces considering a range of texture widths w⁺ ≈ 4-26 and solid fractions ϕ_s = 11%-25%. Slip and no-slip boundary conditions are implemented in a pattern, modeling the presence of gas-liquid interfaces and solid elements. Our results indicate that slip of randomly distributed textures under turbulent flows is about 30% less than that of surfaces with aligned features of the same size. In the small texture size limit w⁺ ≈ 4, the slip length of the randomly distributed textures in turbulent flows is well described by a previously introduced Stokes flow solution for randomly distributed shear-free holes. By comparing DNS results for patterned slip and no-slip boundaries against the corresponding homogenized slip length boundary conditions, we show that turbulent flows over randomly distributed posts can be represented by an isotropic slip length in the streamwise and spanwise directions. The average pressure fluctuation on a gas pocket is similar to that of the aligned features with the same texture size and gas fraction, but the maximum interface deformation at the leading edge of the roughness element is about twice as large when the textures are randomly distributed. The presented analyses provide insights into the implications of texture randomness for the drag reduction performance and robustness of superhydrophobic surfaces.

  11. The Dynamical Classification of Centaurs which Evolve into Comets

    NASA Astrophysics Data System (ADS)

    Wood, Jeremy R.; Horner, Jonathan; Hinse, Tobias; Marsden, Stephen; Swinburne University of Technology

    2016-10-01

    Centaurs are small Solar system bodies with semi-major axes between Jupiter and Neptune and perihelia beyond Jupiter. Centaurs can be further subclassified into two dynamical categories - random walk and resonance hopping. Random walk Centaurs have mean square semi-major axes (⟨a²⟩) which vary in time according to a generalized diffusion equation where ⟨a²⟩ ∼ t^(2H). H is the Hurst exponent with 0 < H < 1, and t is time. The behavior of ⟨a²⟩ for resonance hopping Centaurs is not well described by generalized diffusion. The aim of this study is to determine which dynamical type of Centaur is most likely to evolve into each class of comet. 31,722 fictional massless test particles were integrated for 3 Myr in the 6-body problem (Sun, Jovian planets, test particle). Initially each test particle was a member of one of four groups. The semi-major axes of all test particles in a group were clustered within 0.27 au from a first order, interior Mean Motion resonance of Neptune. The resonances were centered at 18.94 au, 22.95 au, 24.82 au and 28.37 au. If the perihelion of a test particle reached < 4 au then the test particle was considered to be a comet and classified as either a random walk or resonance hopping Centaur. The results showed that over 4,000 test particles evolved into comets within 3 Myr. 59% of these test particles were random walk and 41% were resonance hopping. The behavior of the semi-major axis in time was usually well described by generalized diffusion for random walk Centaurs (r_avg = 0.98) and poorly described for resonance hopping Centaurs (r_avg = 0.52). The average Hurst exponent was 0.48 for random walk Centaurs and 0.20 for resonance hopping Centaurs. Random walk Centaurs were more likely to evolve into short period comets while resonance hopping Centaurs were more likely to evolve into long period comets. For each initial cluster, resonance hopping Centaurs took longer to evolve into comets than random walk Centaurs. Overall the population of random walk Centaurs averaged 143 kyr to evolve into comets, and the population of resonance hopping Centaurs averaged 164 kyr.
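
    A minimal sketch, using a synthetic Gaussian random walk in semi-major axis rather than the integrated test particles, of how the Hurst exponent can be estimated from the generalized diffusion scaling ⟨a²⟩ ∼ t^(2H) by a log-log fit; all parameters are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    # Illustrative stand-in for semi-major-axis histories of random walk Centaurs:
    # an unbiased Gaussian random walk in a, for which the true Hurst exponent is 0.5.
    n_particles, n_times = 200, 2000
    a0 = 23.0  # au, illustrative starting semi-major axis
    a = a0 + np.cumsum(rng.normal(scale=0.01, size=(n_particles, n_times)), axis=1)

    t = np.arange(1, n_times + 1)
    mean_sq = np.mean((a - a0) ** 2, axis=0)   # <a^2> measured from the start value

    # Fit <a^2> ~ t^(2H) by linear regression in log-log space.
    slope, _ = np.polyfit(np.log(t[10:]), np.log(mean_sq[10:]), 1)
    print("estimated Hurst exponent H =", slope / 2)
    ```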

  12. A Randomized Controlled Study of an Insulin Dosing Application That Uses Recognition and Meal Bolus Estimations

    PubMed Central

    Pańkowska, Ewa; Ładyżyński, Piotr; Foltyński, Piotr; Mazurczak, Karolina

    2016-01-01

    Background: Throughout insulin pump therapy, decisions about prandial bolus programming are taken by patients individually a few times every day; moreover, this complex process requires numerical skills and knowledge of nutritional component estimation. The aim of the study was to determine the impact of the expert system, supporting the patient’s decision on meal bolus programming, on the time in range of diurnal glucose excursion in patients treated with continuous subcutaneous insulin infusion (CSII). Methods: The crossover, randomized study included 12 adults, aged 19 to 53, with type 1 diabetes mellitus, duration ranging from 7 to 30 years. Patients were educated in complex food counting, including carbohydrate units (CU) and fat-protein units (FPU). Subsequently, they were randomly allocated to the experimental group (A), which used the expert software named VoiceDiab, and the control group (B), using a manual method of meal-bolus estimation. Results: It was found that 66.7% of patients in group A showed a statistically relevant increase in the percentage of sensor glucose (SG) readings in range (TIR 70-180 mg/dl), compared to group B. TIR (median) reached 53.9% in the experimental group (A) versus 44% within the control group (B), P < .05. The average difference in the number of hypoglycemia episodes was not statistically significant (–0.2%, SD 11.6%, P = .93). The daily insulin requirement in both groups was comparable; the average difference in total daily insulin dose between the two groups was 0.26 (SD 7.06 IU, P = .9). Conclusion: The expert system in meal insulin dosing allows improvement in glucose control without increasing the rates of hypoglycemia or the insulin requirement. PMID:28264177

  13. Temporal behavior of the effective diffusion coefficients for transport in heterogeneous saturated aquifers

    NASA Astrophysics Data System (ADS)

    Suciu, N.; Vamos, C.; Vereecken, H.; Vanderborght, J.; Hardelauf, H.

    2003-04-01

    When the small scale transport is modeled by a Wiener process and the large scale heterogeneity by a random velocity field, the effective coefficients, D_eff, can be decomposed as a sum of the local coefficient, D, a contribution of the random advection, D_adv, and a contribution of the randomness of the trajectory of the plume center of mass, D_cm: D_eff = D + D_adv - D_cm. The coefficient D_adv is similar to that introduced by Taylor in 1921, and more recent works associate it with the thermodynamic equilibrium. The ``ergodic hypothesis'' says that over large time intervals D_cm vanishes and the effect of the heterogeneity is described by D_adv = D_eff - D. In this work we investigate numerically the long time behavior of the effective coefficients as well as the validity of the ergodic hypothesis. The transport in every realization of the velocity field is modeled with the Global Random Walk Algorithm, which is able to track as many particles as necessary to achieve a statistically reliable simulation of the process. Averages over realizations are further used to estimate mean coefficients and standard deviations. In order to remain in the frame of most of the theoretical approaches, the velocity field was generated in a linear approximation and the logarithm of the hydraulic conductivity was taken to have an exponentially decaying correlation with variance equal to 0.1. Our results show that even in these idealized conditions, the effective coefficients tend to asymptotic constant values only when the plume travels thousands of correlation lengths (while the first order theories usually predict Fickian behavior after tens of correlation lengths) and that the ergodicity conditions are still far from being met.
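
    A deliberately crude sketch of the decomposition D_eff = D + D_adv - D_cm, not the Global Random Walk Algorithm itself: here each realization is given a single uniform random velocity, so the advective spreading appears entirely in the centre-of-mass term; the velocity model and all parameters are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n_real, n_part, n_steps, dt, D = 200, 500, 400, 0.1, 0.01
    u = 1.0 + 0.3 * rng.normal(size=(n_real, 1))   # random velocity per realization
    x = np.zeros((n_real, n_part))
    for _ in range(n_steps):
        # Advection by the realization's velocity plus local (Wiener) diffusion.
        x += u * dt + np.sqrt(2 * D * dt) * rng.normal(size=x.shape)

    t = n_steps * dt
    D_ens = np.var(x) / (2 * t)                # D + D_adv (all particles, all realizations)
    D_cm = np.var(x.mean(axis=1)) / (2 * t)    # contribution of the random centre of mass
    D_eff = D_ens - D_cm                       # D_eff = D + D_adv - D_cm
    print(f"D_ens={D_ens:.4f}  D_cm={D_cm:.4f}  D_eff={D_eff:.4f}  (local D={D})")
    ```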

  14. Automatic microseismic event picking via unsupervised machine learning

    NASA Astrophysics Data System (ADS)

    Chen, Yangkang

    2018-01-01

    Effective and efficient arrival picking plays an important role in microseismic and earthquake data processing and imaging. Widely used short-term-average long-term-average ratio (STA/LTA) based arrival picking algorithms suffer from the sensitivity to moderate-to-strong random ambient noise. To make the state-of-the-art arrival picking approaches effective, microseismic data need to be first pre-processed, for example, removing sufficient amount of noise, and second analysed by arrival pickers. To conquer the noise issue in arrival picking for weak microseismic or earthquake event, I leverage the machine learning techniques to help recognizing seismic waveforms in microseismic or earthquake data. Because of the dependency of supervised machine learning algorithm on large volume of well-designed training data, I utilize an unsupervised machine learning algorithm to help cluster the time samples into two groups, that is, waveform points and non-waveform points. The fuzzy clustering algorithm has been demonstrated to be effective for such purpose. A group of synthetic, real microseismic and earthquake data sets with different levels of complexity show that the proposed method is much more robust than the state-of-the-art STA/LTA method in picking microseismic events, even in the case of moderately strong background noise.
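
    For context, a minimal sketch of the classic STA/LTA picker that the paper compares against (not the proposed fuzzy-clustering method); the window lengths, detection threshold, and synthetic trace are illustrative assumptions.

    ```python
    import numpy as np

    def sta_lta(signal, fs, sta_win=0.05, lta_win=0.5):
        """Classic STA/LTA ratio: short-term over long-term moving average of the
        squared signal; a pick is declared where the ratio exceeds a threshold."""
        energy = signal ** 2
        n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
        sta = np.convolve(energy, np.ones(n_sta) / n_sta, mode="same")
        lta = np.convolve(energy, np.ones(n_lta) / n_lta, mode="same") + 1e-12
        return sta / lta

    rng = np.random.default_rng(9)
    fs, n = 1000, 4000                       # illustrative sampling rate and length
    trace = 0.2 * rng.normal(size=n)         # moderately strong background noise
    onset = 2500                             # synthetic event onset
    trace[onset:onset + 200] += np.sin(2 * np.pi * 30 * np.arange(200) / fs) \
                                * np.exp(-np.arange(200) / 80)

    ratio = sta_lta(trace, fs)
    picks = np.nonzero(ratio > 3.0)[0]       # threshold is an illustrative choice
    print("first pick at sample:", picks[0] if picks.size else "none")
    ```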

  15. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given “elite” status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. Furthermore, the machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.

  16. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    NASA Astrophysics Data System (ADS)

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua; Rainer, Robert

    2018-05-01

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given "elite" status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. The machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.
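
    A hedged sketch of the intervention step described above, not the authors' implementation: the population is clustered, clusters with the top average fitness are treated as elite, and randomly selected candidates are replaced by perturbed copies drawn from the elite clusters. The objective function, cluster count, and replacement fraction are illustrative, and the usual GA operators (selection, crossover, mutation) are elided.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(11)

    def fitness(pop):
        # Illustrative objective: maximise the negative sphere function.
        return -np.sum(pop ** 2, axis=1)

    pop = rng.uniform(-5, 5, size=(200, 6))            # illustrative population
    for generation in range(50):
        # (A real GA would apply selection/crossover/mutation here; only the
        # machine-learning intervention step is sketched.)
        labels = KMeans(n_clusters=8, n_init=5, random_state=generation).fit_predict(pop)
        cluster_fit = np.array([fitness(pop[labels == c]).mean() for c in range(8)])
        elite = np.argsort(cluster_fit)[-2:]           # clusters with top average fitness
        donors = pop[np.isin(labels, elite)]
        # Repopulate: replace randomly selected candidates with perturbed copies
        # of members drawn from the elite clusters.
        n_new = 40
        victims = rng.choice(len(pop), size=n_new, replace=False)
        picks = donors[rng.integers(len(donors), size=n_new)]
        pop[victims] = picks + 0.1 * rng.normal(size=picks.shape)

    print("best fitness after intervention-only evolution:", fitness(pop).max())
    ```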

  17. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    DOE PAGES

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua; ...

    2018-05-29

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given “elite” status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. Furthermore, the machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.

  18. A statistical model of false negative and false positive detection of phase singularities.

    PubMed

    Jacquemet, Vincent

    2017-10-01

    The complexity of cardiac fibrillation dynamics can be assessed by analyzing the distribution of phase singularities (PSs) observed using mapping systems. Interelectrode distance, however, limits the accuracy of PS detection. To investigate in a theoretical framework the PS false negative and false positive rates in relation to the characteristics of the mapping system and fibrillation dynamics, we propose a statistical model of phase maps with controllable number and locations of PSs. In this model, phase maps are generated from randomly distributed PSs with physiologically-plausible directions of rotation. Noise and distortion of the phase are added. PSs are detected using topological charge contour integrals on regular grids of varying resolutions. Over 100 × 10⁶ realizations of the random field process are used to estimate average false negative and false positive rates using a Monte-Carlo approach. The false detection rates are shown to depend on the average distance between neighboring PSs expressed in units of interelectrode distance, following approximately a power law with exponents in the range of 1.14 to 2 for false negatives and around 2.8 for false positives. In the presence of noise or distortion of phase, false detection rates at high resolution tend to a non-zero noise-dependent lower bound. This model provides an easy-to-implement tool for benchmarking PS detection algorithms over a broad range of configurations with multiple PSs.

  19. Nearest-Neighbor Distances and Aggregative Effects in Turbulence

    NASA Astrophysics Data System (ADS)

    Lanerolle, Lyon W. J.; Rothschild, B. J.; Yeung, P. K.

    2000-11-01

    The dispersive nature of turbulence which causes fluid elements to move apart (on average) is well known. Here we study another facet of turbulent mixing relevant to marine population dynamics - on how small organisms (approximated by fluid particles) are brought close to each other and allowed to interact. The crucial role played by the small scales in this process allows us to use direct numerical simulations of stationary isotropic turbulence, here with Taylor-scale Reynolds numbers (R_λ) from 38 to 91. We study the evolution of the Nearest-Neighbor Distances (NND) for collections of fluid particles initially located randomly in space satisfying Poisson-type distributions with mean values from 0.5 to 2.0 Kolmogorov length scales. Our results show that as particles begin to disperse on average, some also begin to aggregate in space. In particular, we find that (i) a significant proportion of particles are closer to each other than if their NNDs were randomly distributed, (ii) aggregative effects become stronger with R_λ, and (iii) although the mean value of NND grows monotonically with time in Kolmogorov variables, the growth rates are slower at higher R_λ. These results may assist in explaining the ``patchiness'' in plankton distributions observed in biological oceanography. Further details are given in B. J. Rothschild et al., The Biophysical Interpretation of Spatial Effects of Small-scale Turbulent Flow in the Ocean (paper in prep.).

  20. Assessing Multivariate Constraints to Evolution across Ten Long-Term Avian Studies

    PubMed Central

    Teplitsky, Celine; Tarka, Maja; Møller, Anders P.; Nakagawa, Shinichi; Balbontín, Javier; Burke, Terry A.; Doutrelant, Claire; Gregoire, Arnaud; Hansson, Bengt; Hasselquist, Dennis; Gustafsson, Lars; de Lope, Florentino; Marzal, Alfonso; Mills, James A.; Wheelwright, Nathaniel T.; Yarrall, John W.; Charmantier, Anne

    2014-01-01

    Background In a rapidly changing world, it is of fundamental importance to understand processes constraining or facilitating adaptation through microevolution. As different traits of an organism covary, genetic correlations are expected to affect evolutionary trajectories. However, only limited empirical data are available. Methodology/Principal Findings We investigate the extent to which multivariate constraints affect the rate of adaptation, focusing on four morphological traits often shown to harbour large amounts of genetic variance and considered to be subject to limited evolutionary constraints. Our data set includes unique long-term data for seven bird species and a total of 10 populations. We estimate population-specific matrices of genetic correlations and multivariate selection coefficients to predict evolutionary responses to selection. Using Bayesian methods that facilitate the propagation of errors in estimates, we compare (1) the rate of adaptation based on predicted response to selection when including genetic correlations with predictions from models where these genetic correlations were set to zero and (2) the multivariate evolvability in the direction of current selection to the average evolvability in random directions of the phenotypic space. We show that genetic correlations on average decrease the predicted rate of adaptation by 28%. Multivariate evolvability in the direction of current selection was systematically lower than average evolvability in random directions of space. These significant reductions in the rate of adaptation and reduced evolvability were due to a general nonalignment of selection and genetic variance, notably orthogonality of directional selection with the size axis along which most (60%) of the genetic variance is found. Conclusions These results suggest that genetic correlations can impose significant constraints on the evolution of avian morphology in wild populations. This could have important impacts on evolutionary dynamics and hence population persistence in the face of rapid environmental change. PMID:24608111
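
    The comparison of evolvability along the current selection gradient with evolvability in random directions can be written compactly using the usual quantitative-genetic definition e(β) = βᵀGβ for a unit-length direction β; a sketch with a hypothetical genetic (co)variance matrix and selection gradient, not the estimates from these populations:

    import numpy as np

    def evolvability(G, beta):
        """Evolvability in direction beta: b'Gb with b normalised to unit length."""
        b = beta / np.linalg.norm(beta)
        return float(b @ G @ b)

    rng = np.random.default_rng(1)
    G = np.array([[1.0, 0.6, 0.4, 0.3],          # hypothetical 4-trait G matrix
                  [0.6, 1.2, 0.5, 0.2],
                  [0.4, 0.5, 0.9, 0.1],
                  [0.3, 0.2, 0.1, 0.8]])
    beta = np.array([0.2, -0.1, 0.05, 0.3])      # hypothetical selection gradient

    e_selection = evolvability(G, beta)
    # Average evolvability over many random directions of phenotypic space.
    random_dirs = rng.normal(size=(10000, 4))
    e_random = np.mean([evolvability(G, d) for d in random_dirs])
    print(e_selection, e_random)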

  1. Evaluating the potential for site-specific modification of LiDAR DEM derivatives to improve environmental planning-scale wetland identification using Random Forest classification

    NASA Astrophysics Data System (ADS)

    O'Neil, Gina L.; Goodall, Jonathan L.; Watson, Layne T.

    2018-04-01

    Wetlands are important ecosystems that provide many ecological benefits, and their quality and presence are protected by federal regulations. These regulations require wetland delineations, which can be costly and time-consuming to perform. Computer models can assist in this process, but lack the accuracy necessary for environmental planning-scale wetland identification. In this study, the potential for improvement of wetland identification models through modification of digital elevation model (DEM) derivatives, derived from high-resolution and increasingly available light detection and ranging (LiDAR) data, at a scale necessary for small-scale wetland delineations is evaluated. A novel flow-convergence modelling approach is presented in which the Topographic Wetness Index (TWI), curvature, and Cartographic Depth-to-Water index (DTW) are modified to better distinguish wetland from upland areas, combined with ancillary soil data, and used in a Random Forest classification. This approach is applied to four study sites in Virginia, implemented as an ArcGIS model. The model resulted in significant improvement in average wetland accuracy compared to the commonly used National Wetland Inventory (84.9% vs. 32.1%), at the expense of a moderately lower average non-wetland accuracy (85.6% vs. 98.0%) and average overall accuracy (85.6% vs. 92.0%). From this, we concluded that modifying TWI, curvature, and DTW provides more robust wetland and non-wetland signatures to the models by improving accuracy rates compared to classifications using the original indices. The resulting ArcGIS model is a general tool able to modify these local LiDAR DEM derivatives based on site characteristics to identify wetlands at a high resolution.
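
    A schematic scikit-learn stand-in for the Random Forest step with per-class accuracy reporting; the feature table (modified TWI, curvature, DTW, soil class) and labels are synthetic, and this is not the authors' ArcGIS implementation:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Hypothetical per-cell features with a wetland (1) / upland (0) label.
    rng = np.random.default_rng(42)
    X = rng.random((5000, 4))
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.2, 5000) > 1.0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=500, class_weight="balanced", random_state=0)
    clf.fit(X_tr, y_tr)
    # Class-wise accuracies, analogous to the wetland / non-wetland rates quoted above.
    print("wetland accuracy:", accuracy_score(y_te[y_te == 1], clf.predict(X_te[y_te == 1])))
    print("non-wetland accuracy:", accuracy_score(y_te[y_te == 0], clf.predict(X_te[y_te == 0])))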

  2. Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection

    NASA Astrophysics Data System (ADS)

    Kang, Z.; Lindenbergh, R.; Pu, S.

    2016-06-01

    This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of average point-to-surface residual to reduce the random measurement error and then approach the real registration error. BaySAC and other basic sampling algorithms usually need to artificially determine a threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we apply the LMedS method to construct the cost function used to determine the optimum model, which reduces the influence of human factors and improves the robustness of the model estimate. Point-to-point and point-to-surface error metrics are most commonly used. However, point-to-point error in general consists of at least two components, random measurement error and systematic error as a result of a remaining error in the found rigid body transformation. Thus we employ the measure of the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and quality of the final registration. The registration results show that the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers.
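
    The LMedS idea of scoring a hypothesis without an inlier threshold can be sketched in a few lines; for brevity the sketch uses point-to-point residuals on given correspondences, whereas the paper's final accuracy metric is the average point-to-surface residual, and the hypothesis list is assumed to come from some sampling stage:

    import numpy as np

    def lmeds_cost(R, t, src, dst):
        """Least-median-of-squares cost of a candidate rigid transform (R, t):
        the median squared residual, which requires no inlier threshold."""
        residuals = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        return np.median(residuals ** 2)

    def pick_best_hypothesis(hypotheses, src, dst):
        """Return the (R, t) pair with the minimum LMedS cost."""
        return min(hypotheses, key=lambda h: lmeds_cost(h[0], h[1], src, dst))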

  3. Statistical mechanics of scale-free gene expression networks

    NASA Astrophysics Data System (ADS)

    Gross, Eitan

    2012-12-01

    The gene co-expression networks of many organisms including bacteria, mice and man exhibit scale-free distribution. This heterogeneous distribution of connections decreases the vulnerability of the network to random attacks and thus may confer on the genetic replication machinery an intrinsic resilience to such attacks, triggered by changing environmental conditions that the organism may be subject to during evolution. This resilience to random attacks comes at an energetic cost, however, reflected by the lower entropy of the scale-free distribution compared to a more homogeneous random network. In this study we found that the cell cycle-regulated gene expression pattern of the yeast Saccharomyces cerevisiae obeys a power-law distribution with an exponent α = 2.1 and an entropy of 1.58. The latter is very close to the maximal value of 1.65 obtained from linear optimization of the entropy function under the constraint of a constant cost function, determined by the average degree connectivity. We further show that the yeast's gene expression network can achieve scale-free distribution in a process that does not involve growth but rather via re-wiring of the connections between nodes of an ordered network. Our results support the idea of an evolutionary selection, which acts at the level of the protein sequence, and is compatible with the notion of greater biological importance of highly connected nodes in the protein interaction network. Our constrained re-wiring model provides a theoretical framework for a putative thermodynamically driven evolutionary selection process.

  4. The Influence of decision aids on prostate cancer screening preferences: A randomized survey study.

    PubMed

    Weiner, Adam B; Tsai, Kyle P; Keeter, Mary-Kate; Victorson, David E; Schaeffer, Edward M; Catalona, William J; Kundu, Shilajit D

    2018-05-28

    Shared decision making is recommended regarding prostate cancer screening. Decision aids may facilitate this process; however, the impact of decision aids on screening preferences is poorly understood. In an online survey, a national sample of adults were randomized to one of six different professional societies' online decision aids. We compared pre- and post-decision aid responses. The primary outcome was change in participant likelihood to undergo or recommend prostate cancer screening on a scale of 1 (unlikely) to 100 (extremely likely). Secondary outcomes included change in participant comfort with prostate cancer screening based on the average of six, five-point Likert-scale questions. Median age was 53 years for the 1,336 participants, and 50% were men. Randomized groups did not differ significantly by race, age, gender, income, marital status, or education level. Likelihood to undergo or recommend prostate cancer screening decreased from 83 to 78 following decision aid exposure (p<0.001; Figure). Reviewing the decision aid from the Centers for Disease Control or American Academy of Family Physicians did not alter likelihood (both p>0.2), while the decision aid from the United States Preventive Services Task Force was associated with the largest decrease in screening preference (-16.0, p<0.001). Participants reported increased comfort with the decision-making process for prostate cancer screening from 3.5 to 4.1 (out of 5, p<0.001) following exposure to a decision aid. Exposure to a decision aid decreased participant likelihood to undergo or recommend prostate cancer screening and increased comfort with the screening process. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  5. [The reentrant binomial model of nuclear anomaly growth in rhabdomyosarcoma RA-23 cell populations under an increasing dose of sparsely ionizing radiation].

    PubMed

    Alekseeva, N P; Alekseev, A O; Vakhtin, Iu B; Kravtsov, V Iu; Kuzovatov, S N; Skorikova, T I

    2008-01-01

    Distributions of nuclear morphology anomalies in transplantable rhabdomyosarcoma RA-23 cell populations were investigated under the effect of ionizing radiation from 0 to 45 Gy. Internuclear bridges, nuclear protrusions and dumbbell-shaped nuclei were counted as morphological anomalies. Empirical distributions of the number of anomalies per 100 nuclei were used. An adequate model, the reentrant binomial distribution, was found: it is the distribution of a sum of binomial random variables in which the number of summands is itself binomial. The means of these random variables were named, respectively, the internal and external average reentrant components, and their maximum likelihood estimates were derived. Statistical properties of these estimates were investigated by means of statistical modeling. It was found that, although the radiation dose correlates equally significantly with the average number of nuclear anomalies, in cell populations two to three cell cycles after irradiation in vivo the dose correlates significantly with the internal average reentrant component, whereas in remote descendants of cell transplants irradiated in vitro it correlates with the external one.
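
    The reentrant binomial construction (a binomial number of binomial summands) is straightforward to simulate; a sketch with illustrative parameters, where m·q and n·p play the roles of the external and internal average reentrant components, respectively:

    import numpy as np

    def reentrant_binomial(m, q, n, p, size, rng=None):
        """Draw from the sum of K iid Binomial(n, p) variables with K ~ Binomial(m, q).
        The mean is (m*q) * (n*p): external times internal average reentrant component."""
        rng = np.random.default_rng(rng)
        k = rng.binomial(m, q, size=size)        # number of summands (external part)
        return rng.binomial(n * k, p)            # sum of k iid Binomial(n, p) variables

    sample = reentrant_binomial(m=10, q=0.3, n=20, p=0.1, size=100000, rng=0)
    print(sample.mean(), 10 * 0.3 * 20 * 0.1)    # empirical vs theoretical mean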

  6. Intrinsic random functions for mitigation of atmospheric effects in terrestrial radar interferometry

    NASA Astrophysics Data System (ADS)

    Butt, Jemil; Wieser, Andreas; Conzett, Stefan

    2017-06-01

    The benefits of terrestrial radar interferometry (TRI) for deformation monitoring are restricted by the influence of changing meteorological conditions contaminating the potentially highly precise measurements with spurious deformations. This is especially the case when the measurement setup includes long distances between instrument and objects of interest and the topography affecting atmospheric refraction is complex. These situations are typically encountered with geo-monitoring in mountainous regions, e.g. with glaciers, landslides or volcanoes. We propose and explain an approach for the mitigation of atmospheric influences based on the theory of intrinsic random functions of order k (IRF-k), generalizing existing approaches based on ordinary least squares estimation of trend functions. This class of random functions retains convenient computational properties allowing for rigorous statistical inference while still permitting the modelling of stochastic spatial phenomena that are non-stationary in mean and variance. We explore the correspondence between the properties of the IRF-k and the properties of the measurement process. In an exemplary case study, we find that our method reduces the time needed to obtain reliable estimates of glacial movements from 12 h down to 0.5 h compared to simple temporal averaging procedures.

  7. Divergence instability of pipes conveying fluid with uncertain flow velocity

    NASA Astrophysics Data System (ADS)

    Rahmati, Mehdi; Mirdamadi, Hamid Reza; Goli, Sareh

    2018-02-01

    This article investigates the probabilistic stability of pipes conveying fluid with stochastic flow velocity in the time domain. The study focuses on the effect of randomness in the flow velocity on the stability of pipes conveying fluid, whereas most previous research has considered only the influence of deterministic parameters on system stability. The Euler-Bernoulli beam and plug flow theory are employed to model pipe structure and internal flow, respectively. In addition, flow velocity is considered as a stationary random process with Gaussian distribution. The stochastic averaging method and Routh's stability criterion are then used to investigate the stability conditions of the system. Consequently, the effects of boundary conditions, viscoelastic damping, mass ratio, and elastic foundation on the stability regions are discussed. Results show that the critical mean flow velocity decreases as the power spectral density (PSD) of the random velocity increases. Moreover, as the PSD increases from zero, the effects of boundary-condition type and of the elastic foundation diminish, while the influences of viscoelastic damping and mass ratio may increase. Finally, regression analysis is used to develop design equations and facilitate further analyses for design purposes.

  8. Enhancing physical activity and reducing obesity through smartcare and financial incentives: A pilot randomized trial.

    PubMed

    Shin, Dong Wook; Yun, Jae Moon; Shin, Jung-Hyun; Kwon, Hyuktae; Min, Hye Yeon; Joh, Hee-Kyung; Chung, Won Joo; Park, Jin Ho; Jung, Kee-Taig; Cho, BeLong

    2017-02-01

    This pilot randomized trial assessed the feasibility and effectiveness of an intervention combining Smartcare (an activity tracker with a smartphone application) and financial incentives. A three-arm, open-label randomized controlled design was used, comprising traditional education, Smartcare, and Smartcare with financial incentives. The latter group received financial incentives depending on the achievement of daily physical activity goals (process incentive) and weight loss targets (outcome incentive). Male university students (N = 105) with a body mass index of ≥27 were enrolled. The average weight loss in the traditional education, Smartcare, and Smartcare with financial incentives groups was -0.4, -1.1, and -3.1 kg, respectively, with significantly greater weight loss in the third group (both Ps < 0.01). The final weight loss goal was achieved by 0, 2, and 10 participants in the traditional education, Smartcare, and Smartcare with financial incentives groups (odds ratio for Smartcare with financial incentives vs. Smartcare = 7.27, 95% confidence interval: 1.45-36.47). Levels of physical activity were significantly higher in this group. The addition of financial incentives to Smartcare was effective in increasing physical activity and reducing obesity. © 2017 The Obesity Society.

  9. Unraveling spurious properties of interaction networks with tailored random networks.

    PubMed

    Bialonski, Stephan; Wendler, Martin; Lehnertz, Klaus

    2011-01-01

    We investigate interaction networks that we derive from multivariate time series with methods frequently employed in diverse scientific fields such as biology, quantitative finance, physics, earth and climate sciences, and the neurosciences. Mimicking experimental situations, we generate time series with finite length and varying frequency content but from independent stochastic processes. Using the correlation coefficient and the maximum cross-correlation, we estimate interdependencies between these time series. With clustering coefficient and average shortest path length, we observe unweighted interaction networks, derived via thresholding the values of interdependence, to possess non-trivial topologies as compared to Erdös-Rényi networks, which would indicate small-world characteristics. These topologies reflect the mostly unavoidable finiteness of the data, which limits the reliability of typically used estimators of signal interdependence. We propose random networks that are tailored to the way interaction networks are derived from empirical data. Through an exemplary investigation of multichannel electroencephalographic recordings of epileptic seizures--known for their complex spatial and temporal dynamics--we show that such random networks help to distinguish network properties of interdependence structures related to seizure dynamics from those spuriously induced by the applied methods of analysis.
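
    An illustrative surrogate of the analysis pipeline described above: build an interaction network by thresholding correlations between finite, independent time series and compute clustering coefficient and average shortest path; series length, channel count, smoothing, and threshold quantile are all illustrative assumptions:

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    n_channels, n_samples = 32, 200            # short, finite-length series
    # Independent filtered-noise time series: no genuine interactions between channels.
    x = rng.normal(size=(n_channels, n_samples))
    x = np.apply_along_axis(lambda s: np.convolve(s, np.ones(5) / 5, mode="same"), 1, x)

    corr = np.abs(np.corrcoef(x))              # interdependence estimate
    np.fill_diagonal(corr, 0)
    # Keep the strongest 10% of possible edges.
    thresh = np.quantile(corr[np.triu_indices(n_channels, 1)], 0.9)
    G = nx.from_numpy_array((corr >= thresh).astype(int))

    print("clustering coefficient:", nx.average_clustering(G))
    if nx.is_connected(G):
        print("average shortest path:", nx.average_shortest_path_length(G))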

  10. Unraveling Spurious Properties of Interaction Networks with Tailored Random Networks

    PubMed Central

    Bialonski, Stephan; Wendler, Martin; Lehnertz, Klaus

    2011-01-01

    We investigate interaction networks that we derive from multivariate time series with methods frequently employed in diverse scientific fields such as biology, quantitative finance, physics, earth and climate sciences, and the neurosciences. Mimicking experimental situations, we generate time series with finite length and varying frequency content but from independent stochastic processes. Using the correlation coefficient and the maximum cross-correlation, we estimate interdependencies between these time series. With clustering coefficient and average shortest path length, we observe unweighted interaction networks, derived via thresholding the values of interdependence, to possess non-trivial topologies as compared to Erdös-Rényi networks, which would indicate small-world characteristics. These topologies reflect the mostly unavoidable finiteness of the data, which limits the reliability of typically used estimators of signal interdependence. We propose random networks that are tailored to the way interaction networks are derived from empirical data. Through an exemplary investigation of multichannel electroencephalographic recordings of epileptic seizures – known for their complex spatial and temporal dynamics – we show that such random networks help to distinguish network properties of interdependence structures related to seizure dynamics from those spuriously induced by the applied methods of analysis. PMID:21850239

  11. A model of gene expression based on random dynamical systems reveals modularity properties of gene regulatory networks.

    PubMed

    Antoneli, Fernando; Ferreira, Renata C; Briones, Marcelo R S

    2016-06-01

    Here we propose a new approach to modeling gene expression based on the theory of random dynamical systems (RDS) that provides a general coupling prescription between the nodes of any given regulatory network, provided the dynamics of each node is modeled by an RDS. The main virtues of this approach are the following: (i) it provides a natural way to obtain arbitrarily large networks by coupling together simple basic pieces, thus revealing the modularity of regulatory networks; (ii) the assumptions about the stochastic processes used in the modeling are fairly general, in the sense that the only requirement is stationarity; (iii) there is a well developed mathematical theory, which is a blend of smooth dynamical systems theory, ergodic theory and stochastic analysis that allows one to extract relevant dynamical and statistical information without solving the system; (iv) one may obtain the classical rate equations from the corresponding stochastic version by averaging the dynamic random variables (small noise limit). It is important to emphasize that unlike the deterministic case, where coupling two equations is a trivial matter, coupling two RDS is non-trivial, especially in our case, where the coupling is performed between a state variable of one gene and the switching stochastic process of another gene and, hence, it is not a priori true that the resulting coupled system will satisfy the definition of a random dynamical system. We shall provide the necessary arguments that ensure that our coupling prescription does indeed furnish a coupled regulatory network of random dynamical systems. Finally, the fact that classical rate equations are the small noise limit of our stochastic model ensures that any validation or prediction made on the basis of the classical theory is also a validation or prediction of our model. We illustrate our framework with some simple examples of single-gene systems and network motifs. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Superparamagnetic perpendicular magnetic tunnel junctions for true random number generators

    NASA Astrophysics Data System (ADS)

    Parks, Bradley; Bapna, Mukund; Igbokwe, Julianne; Almasi, Hamid; Wang, Weigang; Majetich, Sara A.

    2018-05-01

    Superparamagnetic perpendicular magnetic tunnel junctions are fabricated and analyzed for use in random number generators. Time-resolved resistance measurements are used as streams of bits in statistical tests for randomness. Voltage control of the thermal stability enables tuning the average speed of random bit generation up to 70 kHz in a 60 nm diameter device. In its most efficient operating mode, the device generates random bits at an energy cost of 600 fJ/bit. A narrow range of magnetic field tunes the probability of a given state from 0 to 1, offering a means of probabilistic computing.

  13. Scalable randomized benchmarking of non-Clifford gates

    NASA Astrophysics Data System (ADS)

    Cross, Andrew; Magesan, Easwar; Bishop, Lev; Smolin, John; Gambetta, Jay

    Randomized benchmarking is a widely used experimental technique to characterize the average error of quantum operations. Benchmarking procedures that scale to enable characterization of n-qubit circuits rely on efficient procedures for manipulating those circuits and, as such, have been limited to subgroups of the Clifford group. However, universal quantum computers require additional, non-Clifford gates to approximate arbitrary unitary transformations. We define a scalable randomized benchmarking procedure over n-qubit unitary matrices that correspond to protected non-Clifford gates for a class of stabilizer codes. We present efficient methods for representing and composing group elements, sampling them uniformly, and synthesizing corresponding poly(n)-sized circuits. The procedure provides experimental access to two independent parameters that together characterize the average gate fidelity of a group element. We acknowledge support from ARO under Contract W911NF-14-1-0124.

  14. Marginal and Random Intercepts Models for Longitudinal Binary Data With Examples From Criminology.

    PubMed

    Long, Jeffrey D; Loeber, Rolf; Farrington, David P

    2009-01-01

    Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides individual-level information including information about heterogeneity of growth. It is shown how a type of numerical averaging can be used with the random intercepts model to obtain group-level information, thus approximating individual and marginal aspects of the LMM. The types of inferences associated with each model are illustrated with longitudinal criminal offending data based on N = 506 males followed over a 22-year period. Violent offending indexed by official records and self-report were analyzed, with the marginal model estimated using generalized estimating equations and the random intercepts model estimated using maximum likelihood. The results show that the numerical averaging based on the random intercepts can produce prediction curves almost identical to those obtained directly from the marginal model parameter estimates. The results provide a basis for contrasting the models and the estimation procedures and key features are discussed to aid in selecting a method for empirical analysis.
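
    The numerical averaging described above amounts to integrating the subject-specific (random-intercept) probability curve over the intercept distribution; a minimal sketch with hypothetical fixed-effect and variance estimates, not the values fitted in this study:

    import numpy as np
    from scipy.special import expit

    def marginal_curve(beta0, beta1, sigma_u, ages, n_draws=100000, rng=None):
        """Average the subject-specific logistic curves over simulated random
        intercepts u ~ N(0, sigma_u^2) to approximate the marginal (group-level) curve."""
        rng = np.random.default_rng(rng)
        u = rng.normal(0.0, sigma_u, size=(n_draws, 1))
        return expit(beta0 + u + beta1 * ages).mean(axis=0)

    ages = np.arange(10, 33)                     # follow-up ages (illustrative)
    print(marginal_curve(beta0=-1.5, beta1=-0.05, sigma_u=1.2, ages=ages, rng=0))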

  15. Correlation Dimension Estimates of Global and Local Temperature Data.

    NASA Astrophysics Data System (ADS)

    Wang, Qiang

    1995-11-01

    The author has attempted to detect the presence of low-dimensional deterministic chaos in temperature data by estimating the correlation dimension with the Hill estimate that has been recently developed by Mikosch and Wang. There is no convincing evidence of low dimensionality with either global dataset (Southern Hemisphere monthly average temperatures from 1858 to 1984) or local temperature dataset (daily minimums at Auckland, New Zealand). Any apparent reduction in the dimension estimates appears to be due largely, if not entirely, to effects of statistical bias, but neither is it a purely random stochastic process. The dimension of the climatic attractor may be significantly larger than 10.

  16. The timescales of global surface-ocean connectivity.

    PubMed

    Jönsson, Bror F; Watson, James R

    2016-04-19

    Planktonic communities are shaped through a balance of local evolutionary adaptation and ecological succession driven in large part by migration. The timescales over which these processes operate are still largely unresolved. Here we use Lagrangian particle tracking and network theory to quantify the timescale over which surface currents connect different regions of the global ocean. We find that the fastest path between two patches--each randomly located anywhere in the surface ocean--is, on average, less than a decade. These results suggest that marine planktonic communities may keep pace with climate change--increasing temperatures, ocean acidification and changes in stratification over decadal timescales--through the advection of resilient types.

  17. The timescales of global surface-ocean connectivity

    PubMed Central

    Jönsson, Bror F.; Watson, James R.

    2016-01-01

    Planktonic communities are shaped through a balance of local evolutionary adaptation and ecological succession driven in large part by migration. The timescales over which these processes operate are still largely unresolved. Here we use Lagrangian particle tracking and network theory to quantify the timescale over which surface currents connect different regions of the global ocean. We find that the fastest path between two patches—each randomly located anywhere in the surface ocean—is, on average, less than a decade. These results suggest that marine planktonic communities may keep pace with climate change—increasing temperatures, ocean acidification and changes in stratification over decadal timescales—through the advection of resilient types. PMID:27093522

  18. Analysis of the Habitat of Henslow's Sparrows and Grasshopper Sparrows Compared to Random Grassland Areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maier, K.; Walton, R.; Kasper, P.

    2006-01-01

    Henslow’s Sparrows are endangered prairie birds, and Grasshopper Sparrows are considered rare prairie birds. Both of these birds were abundant in Illinois, but their populations have been declining due to loss of the grasslands. This work begins an ongoing study of the birds’ habitat so Fermilab can develop a land management plan for the Henslow’s and Grasshoppers. The Henslow’s were found at ten sites and Grasshoppers at eight sites. Once the birds were located, the vegetation at their sites was studied. Measurements of the maximum plant height, average plant height, and duff height were taken and estimates of the percent of grass, forbs, duff, and bare ground were recorded for each square meter studied. The same measurements were taken at ten random grassland sites on Fermilab property. Several t-tests were performed on the data, and it was found that both Henslow’s Sparrows and Grasshopper Sparrows preferred areas with a larger percentage of grass than random areas. Henslow’s also preferred areas with less bare ground than random areas, while Grasshoppers preferred areas with more bare ground than random areas. In addition, Grasshopper Sparrows preferred a lower percentage of forbs than was found in random areas and a shorter average plant height than the random locations. Two-sample variance tests suggested significantly less variance for both Henslow’s Sparrows and Grasshopper Sparrows for maximum plant height in comparison to the random sites.

  19. Design of a randomized trial of diabetes genetic risk testing to motivate behavior change: the Genetic Counseling/lifestyle Change (GC/LC) Study for Diabetes Prevention.

    PubMed

    Grant, Richard W; Meigs, James B; Florez, Jose C; Park, Elyse R; Green, Robert C; Waxler, Jessica L; Delahanty, Linda M; O'Brien, Kelsey E

    2011-10-01

    The efficacy of diabetes genetic risk testing to motivate behavior change for diabetes prevention is currently unknown. This paper presents key issues in the design and implementation of one of the first randomized trials (The Genetic Counseling/Lifestyle Change (GC/LC) Study for Diabetes Prevention) to test whether knowledge of diabetes genetic risk can motivate patients to adopt healthier behaviors. Because individuals may react differently to receiving 'higher' vs 'lower' genetic risk results, we designed a 3-arm parallel group study to separately test the hypotheses that: (1) patients receiving 'higher' diabetes genetic risk results will increase healthy behaviors compared to untested controls, and (2) patients receiving 'lower' diabetes genetic risk results will decrease healthy behaviors compared to untested controls. In this paper we describe several challenges to implementing this study, including: (1) the application of a novel diabetes risk score derived from genetic epidemiology studies to a clinical population, (2) the use of the principle of Mendelian randomization to efficiently exclude 'average' diabetes genetic risk patients from the intervention, and (3) the development of a diabetes genetic risk counseling intervention that maintained the ethical need to motivate behavior change in both 'higher' and 'lower' diabetes genetic risk result recipients. Diabetes genetic risk scores were developed by aggregating the results of 36 diabetes-associated single nucleotide polymorphisms. Relative risk for type 2 diabetes was calculated using Framingham Offspring Study outcomes, grouped by quartiles into 'higher', 'average' (middle two quartiles) and 'lower' genetic risk. From these relative risks, revised absolute risks were estimated using the overall absolute risk for the study group. For study efficiency, we excluded all patients receiving 'average' diabetes risk results from the subsequent intervention. This post-randomization allocation strategy was justified because genotype represents a random allocation of parental alleles ('Mendelian randomization'). Finally, because it would be unethical to discourage participants from participating in diabetes prevention behaviors, we designed our two diabetes genetic risk counseling interventions (for 'higher' and 'lower' result recipients) so that both groups would be motivated despite receiving opposing results. For this initial assessment of the clinical implementation of genetic risk testing we assessed intermediate outcomes of attendance at a 12-week diabetes prevention course and changes in self-reported motivation. If effective, longer term studies with larger sample sizes will be needed to assess whether knowledge of diabetes genetic risk can help patients prevent diabetes. We designed a randomized clinical trial to explore the motivational impact of disclosing both higher than average and lower than average genetic risk for type 2 diabetes. This design allowed exploration of both increased risk and false reassurance, and has implications for future studies in translational genomics.

  20. Critical thresholds for eventual extinction in randomly disturbed population growth models.

    PubMed

    Peckham, Scott D; Waymire, Edward C; De Leenheer, Patrick

    2018-02-16

    This paper considers several single species growth models featuring a carrying capacity, which are subject to random disturbances that lead to instantaneous population reduction at the disturbance times. This is motivated in part by growing concerns about the impacts of climate change. Our main goal is to understand whether or not the species can persist in the long run. We consider the discrete-time stochastic process obtained by sampling the system immediately after the disturbances, and find various thresholds for several modes of convergence of this discrete process, including thresholds for the absence or existence of a positively supported invariant distribution. These thresholds are given explicitly in terms of the intensity and frequency of the disturbances on the one hand, and the population's growth characteristics on the other. We also perform a similar threshold analysis for the original continuous-time stochastic process, and obtain a formula that allows us to express the invariant distribution for this continuous-time process in terms of the invariant distribution of the discrete-time process, and vice versa. Examples illustrate that these distributions can differ, and this sends a cautionary message to practitioners who wish to parameterize these and related models using field data. Our analysis relies heavily on a particular feature shared by all the deterministic growth models considered here, namely that their solutions exhibit an exponentially weighted averaging property between a function of the initial condition, and the same function applied to the carrying capacity. This property is due to the fact that these systems can be transformed into affine systems.
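
    A minimal simulation of the setting described above, using logistic growth (one of the carrying-capacity models considered) between disturbance times, instantaneous proportional reductions at random disturbance times, and sampling immediately after each disturbance; all parameter values are illustrative:

    import numpy as np

    def sample_after_disturbances(x0=0.5, K=1.0, r=1.0, rate=0.8, frac=0.5,
                                  n_disturbances=200, rng=None):
        """Discrete-time process obtained by sampling logistic growth immediately
        after each random disturbance (population multiplied by `frac`)."""
        rng = np.random.default_rng(rng)
        x, samples = x0, []
        for _ in range(n_disturbances):
            dt = rng.exponential(1.0 / rate)                 # time to next disturbance
            x = K * x / (x + (K - x) * np.exp(-r * dt))      # exact logistic solution
            x *= frac                                        # instantaneous reduction
            samples.append(x)
        return np.array(samples)

    print(sample_after_disturbances(rng=0)[-5:])             # tail of the sampled process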

  1. Evaluating clinical trial design: systematic review of randomized vehicle-controlled trials for determining efficacy of benzoyl peroxide topical therapy for acne.

    PubMed

    Lamel, Sonia A; Sivamani, Raja K; Rahvar, Maral; Maibach, Howard I

    2015-11-01

    Determined efficacies of benzoyl peroxide may be affected by study design, implementation, and vehicle effects. We sought to elucidate areas that may allow improvement in determining accurate treatment efficacies by determining rates of active treatment and vehicle responders in randomized controlled trials assessing the efficacy of topical benzoyl peroxide to treat acne. We conducted a systematic review of randomized vehicle-controlled trials evaluating the efficacy of topical benzoyl peroxide for the treatment of acne. We compared response rates of vehicle treatment arms versus those in benzoyl peroxide arms. Twelve trials met inclusion criteria with 2818 patients receiving benzoyl peroxide monotherapy treatment and 2004 receiving vehicle treatment. The average percent reduction in total number of acne lesions was 44.3 (SD = 9.2) and 27.8 (SD = 21.0) for the active and vehicle treatment groups, respectively. The average reduction in non-inflammatory lesions was 41.5 % (SD = 9.4) in the active treatment group and 27.0 % (SD = 20.9) in the vehicle group. The average percent decrease in inflammatory lesions was 52.1 (SD = 10.4) in the benzoyl peroxide group and 34.7 (SD = 22.7) in the vehicle group. The average percentage of participants achieving success per designated study outcomes was 28.6 (SD = 17.3) and 15.2 (SD = 9.5) in the active treatment and vehicle groups, respectively. Patient responses in randomized controlled trials evaluating topical acne therapies may be affected by clinical trial design, implementation, the biologic effects of vehicles, and natural disease progression. "No treatment" groups may facilitate determination of accurate treatment efficacies.

  2. Can Time of Implant Placement influence Bone Remodeling?

    PubMed

    Rafael, Caroline F; Passoni, Bernardo; Araúio, Carlos; de Araúio, Maria A; Benfatti, César; Volpato, Claudia

    2016-04-01

    Since the alveolar process is tissue "dental dependent," after the extraction of the dental element this process suffers some degree of atrophy during the healing process, which can be reduced with the installation of immediate implants, aiming to maintain the original bone architecture. The aim of this study was to investigate the influence of the time of implant placement on bone formation around the implants. Seven dogs were selected and randomly divided into two groups: Group 1, where implants were placed immediately after extraction of two lower premolars without flap elevation, and group 2, where implants were delayed by 4 months after extractions. Each group received 14 implants, and 4 months after the second surgery, the samples were processed and analyzed histomorphometrically. Mean values were analyzed and the Kruskal-Wallis test (p < 0.05) was performed. The mean buccal bone-implant contact (BIC) was larger for immediate implants (42.61%) than for delayed implants (37.69%). Group 1 had statistically higher outcomes in bone formation and BIC on the buccal bone wall. It was concluded that performing immediate implants with the palatal approach technique and leaving a buccal GAP yields a BIC rate and bone area around the implants that are higher than or at least equal to those of delayed implants. In practice, patients and dentists want shorter treatments with satisfactory results, but it is necessary to understand whether different implant placement times can influence the results and longevity of the treatment.

  3. Automated characterisation of ultrasound images of ovarian tumours: the diagnostic accuracy of a support vector machine and image processing with a local binary pattern operator

    PubMed Central

    Khazendar, S.; Sayasneh, A.; Al-Assam, H.; Du, H.; Kaijser, J.; Ferrara, L.; Timmerman, D.; Jassim, S.; Bourne, T.

    2015-01-01

    Introduction: Preoperative characterisation of ovarian masses into benign or malignant is of paramount importance to optimise patient management. Objectives: In this study, we developed and validated a computerised model to characterise ovarian masses as benign or malignant. Materials and methods: Transvaginal 2D B mode static ultrasound images of 187 ovarian masses with known histological diagnosis were included. Images were first pre-processed and enhanced, and Local Binary Pattern Histograms were then extracted from 2 × 2 blocks of each image. A Support Vector Machine (SVM) was trained using stratified cross validation with randomised sampling. The process was repeated 15 times and in each round 100 images were randomly selected. Results: The SVM classified the original non-treated static images as benign or malignant masses with an average accuracy of 0.62 (95% CI: 0.59-0.65). This performance significantly improved to an average accuracy of 0.77 (95% CI: 0.75-0.79) when images were pre-processed, enhanced and treated with a Local Binary Pattern operator (mean difference 0.15: 95% 0.11-0.19, p < 0.0001, two-tailed t test). Conclusion: We have shown that an SVM can classify static 2D B mode ultrasound images of ovarian masses into benign and malignant categories. The accuracy improves if texture related LBP features extracted from the images are considered. PMID:25897367
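
    A minimal sketch of the pipeline described above (uniform LBP histograms from image blocks feeding an SVM); the block layout, LBP parameters, and data loading are illustrative assumptions, not the authors' exact settings:

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def lbp_block_features(image, n_blocks=2, P=8, R=1.0):
        """Concatenate uniform-LBP histograms computed on an n_blocks x n_blocks grid."""
        lbp = local_binary_pattern(image, P, R, method="uniform")
        h, w = lbp.shape
        feats = []
        for i in range(n_blocks):
            for j in range(n_blocks):
                block = lbp[i * h // n_blocks:(i + 1) * h // n_blocks,
                            j * w // n_blocks:(j + 1) * w // n_blocks]
                hist, _ = np.histogram(block, bins=P + 2, range=(0, P + 2), density=True)
                feats.append(hist)
        return np.concatenate(feats)

    # With hypothetical loaders giving `images` (2D grayscale arrays) and `labels`
    # (0 = benign, 1 = malignant), the classifier could be evaluated as:
    # X = np.array([lbp_block_features(im) for im in images])
    # print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())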

  4. Is Scientifically Based Reading Instruction Effective for Students with Below-Average IQs?

    ERIC Educational Resources Information Center

    Allor, Jill H.; Mathes, Patricia G.; Roberts, J. Kyle; Cheatham, Jennifer P.; Al Otaiba, Stephanie

    2014-01-01

    This longitudinal randomized-control trial investigated the effectiveness of scientifically based reading instruction for students with IQs ranging from 40 to 80, including students with intellectual disability (ID). Students were randomly assigned into treatment (n = 76) and contrast (n = 65) groups. Students in the treatment group received…

  5. The Implications of "Contamination" for Experimental Design in Education

    ERIC Educational Resources Information Center

    Rhoads, Christopher H.

    2011-01-01

    Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…

  6. Topological Structure of the Space of Phenotypes: The Case of RNA Neutral Networks

    PubMed Central

    Aguirre, Jacobo; Buldú, Javier M.; Stich, Michael; Manrubia, Susanna C.

    2011-01-01

    The evolution and adaptation of molecular populations is constrained by the diversity accessible through mutational processes. RNA is a paradigmatic example of biopolymer where genotype (sequence) and phenotype (approximated by the secondary structure fold) are identified in a single molecule. The extreme redundancy of the genotype-phenotype map leads to large ensembles of RNA sequences that fold into the same secondary structure and can be connected through single-point mutations. These ensembles define neutral networks of phenotypes in sequence space. Here we analyze the topological properties of neutral networks formed by 12-nucleotide RNA sequences, obtained through the exhaustive folding of sequence space. The total of 4¹² sequences fragments into 645 subnetworks that correspond to 57 different secondary structures. The topological analysis reveals that each subnetwork is far from being random: it has a degree distribution with a well-defined average and a small dispersion, a high clustering coefficient, and an average shortest path between nodes close to its minimum possible value, i.e. the Hamming distance between sequences. RNA neutral networks are assortative due to the correlation in the composition of neighboring sequences, a feature that together with the symmetries inherent to the folding process explains the existence of communities. Several topological relationships can be analytically derived attending to structural restrictions and generic properties of the folding process. The average degree of these phenotypic networks grows logarithmically with their size, such that abundant phenotypes have the additional advantage of being more robust to mutations. This property prevents fragmentation of neutral networks and thus enhances the navigability of sequence space. In summary, RNA neutral networks show unique topological properties, unknown to other networks previously described. PMID:22028856

  7. Investigating the mechanisms responsible for the lack of surface energy balance closure in a central Amazonian tropical rainforest

    DOE PAGES

    Gerken, Tobias; Ruddell, Benjamin L.; Fuentes, Jose D.; ...

    2017-04-29

    This work investigates the diurnal and seasonal behavior of the energy balance residual (E) that results from the observed difference between available energy and the turbulent fluxes of sensible heat (H) and latent heat (LE) at the FLUXNET BR-Ma2 site located in the Brazilian central Amazon rainforest. The behavior of E is analyzed by extending the eddy covariance averaging length from 30 min to 4 h and by applying an Information Flow Dynamical Process Network to diagnose processes and conditions affecting E across different seasons. Results show that the seasonal turbulent flux dynamics and the Bowen ratio are primarily driven by net radiation (Rn), with substantial sub-seasonal variability. The Bowen ratio increased from 0.25 in April to 0.4 at the end of September. Extension of the averaging length from 0.5 h (94.6% closure) to 4 h, and thus inclusion of longer-timescale eddies and mesoscale processes, closes the energy balance and leads to an increase in the Bowen ratio, highlighting the importance of additional H to E. Information flow analysis reveals that the components of the energy balance explain between 25 and 40% of the total Shannon entropy, with higher values during the wet season than the dry season. Dry-season information flow from the buoyancy flux to E is 30–50% larger than that from H, indicating the potential importance of buoyancy fluxes to closing E. While the low closure highlights additional sources not captured in the flux data and random measurement errors contributing to E, the findings of the information flow and averaging length analysis are consistent with the impact of mesoscale circulations, which tend to transport more H than LE, on the lack of closure.

  8. Investigating the mechanisms responsible for the lack of surface energy balance closure in a central Amazonian tropical rainforest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerken, Tobias; Ruddell, Benjamin L.; Fuentes, Jose D.

    This work investigates the diurnal and seasonal behavior of the energy balance residual (E) that results from the observed difference between available energy and the turbulent fluxes of sensible heat (H) and latent heat (LE) at the FLUXNET BR-Ma2 site located in the Brazilian central Amazon rainforest. The behavior of E is analyzed by extending the eddy covariance averaging length from 30 min to 4 h and by applying an Information Flow Dynamical Process Network to diagnose processes and conditions affecting E across different seasons. Results show that the seasonal turbulent flux dynamics and the Bowen ratio are primarily driven by net radiation (Rn), with substantial sub-seasonal variability. The Bowen ratio increased from 0.25 in April to 0.4 at the end of September. Extension of the averaging length from 0.5 h (94.6% closure) to 4 h, and thus inclusion of longer-timescale eddies and mesoscale processes, closes the energy balance and leads to an increase in the Bowen ratio, highlighting the importance of additional H to E. Information flow analysis reveals that the components of the energy balance explain between 25 and 40% of the total Shannon entropy, with higher values during the wet season than the dry season. Dry-season information flow from the buoyancy flux to E is 30–50% larger than that from H, indicating the potential importance of buoyancy fluxes to closing E. While the low closure highlights additional sources not captured in the flux data and random measurement errors contributing to E, the findings of the information flow and averaging length analysis are consistent with the impact of mesoscale circulations, which tend to transport more H than LE, on the lack of closure.

  9. Search efficiency of biased migration towards stationary or moving targets in heterogeneously structured environments

    NASA Astrophysics Data System (ADS)

    Azimzade, Youness; Mashaghi, Alireza

    2017-12-01

    Efficient search acts as a strong selective force in biological systems ranging from cellular populations to predator-prey systems. The search processes commonly involve finding a stationary or mobile target within a heterogeneously structured environment where obstacles limit migration. An open generic question is whether random or directionally biased motions or a combination of both provide an optimal search efficiency and how that depends on the motility and density of targets and obstacles. To address this question, we develop a simple model that involves a random walker searching for its targets in a heterogeneous medium (a bond-percolation square lattice) and use the mean first passage time (⟨T⟩) as an indication of average search time. Our analysis reveals a dual effect of directional bias on the minimum value of ⟨T⟩. For a homogeneous medium, directionality always decreases ⟨T⟩ and a pure directional migration (a ballistic motion) serves as the optimized strategy, while for a heterogeneous environment, we find that the optimized strategy involves a combination of directed and random migrations. The relative contribution of these modes is determined by the density of obstacles and motility of targets. Existence of randomness and motility of targets add to the efficiency of search. Our study reveals generic and simple rules that govern search efficiency. Our findings might find application in a number of areas including immunology, cell biology, ecology, and robotics.
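
    A Monte-Carlo sketch of this kind of model: a walker mixes a step biased toward a fixed target with a purely random step, and ⟨T⟩ is estimated as the average number of steps to reach the target; for brevity the sketch uses blocked sites rather than the paper's bond percolation, and all parameters are illustrative:

    import numpy as np

    def mfpt(bias=0.5, p_open=0.7, L=30, n_trials=200, max_steps=20000, rng=None):
        """Estimate the mean first-passage time <T> to the far corner of an LxL lattice
        whose sites are open with probability p_open; with probability `bias` the
        walker steps toward the target, otherwise it steps in a random direction."""
        rng = np.random.default_rng(rng)
        moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
        target = np.array([L - 1, L - 1])
        times = []
        for _ in range(n_trials):
            open_site = rng.random((L, L)) < p_open
            open_site[0, 0] = open_site[-1, -1] = True      # start and target stay open
            pos = np.array([0, 0])
            for step in range(1, max_steps + 1):
                if rng.random() < bias:                      # biased step toward the target
                    axis = np.argmax(np.abs(target - pos))
                    if axis == 0:
                        k = 0 if target[0] > pos[0] else 1
                    else:
                        k = 2 if target[1] > pos[1] else 3
                else:                                        # purely random step
                    k = rng.integers(4)
                nxt = pos + moves[k]
                if np.all((0 <= nxt) & (nxt < L)) and open_site[nxt[0], nxt[1]]:
                    pos = nxt
                if np.array_equal(pos, target):
                    times.append(step)
                    break
        return np.mean(times) if times else float("inf")

    print(mfpt(bias=0.5, rng=0))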

  10. Statistical characterization of planar two-dimensional Rayleigh-Taylor mixing layers

    NASA Astrophysics Data System (ADS)

    Sendersky, Dmitry

    2000-10-01

    The statistical evolution of a planar, randomly perturbed fluid interface subject to Rayleigh-Taylor instability is explored through numerical simulation in two space dimensions. The data set, generated by the front-tracking code FronTier, is highly resolved and covers a large ensemble of initial perturbations, allowing a more refined analysis of closure issues pertinent to the stochastic modeling of chaotic fluid mixing. We closely approach a two-fold convergence of the mean two-phase flow: convergence of the numerical solution under computational mesh refinement, and statistical convergence under increasing ensemble size. Quantities that appear in the two-phase averaged Euler equations are computed directly and analyzed for numerical and statistical convergence. Bulk averages show a high degree of convergence, while interfacial averages are convergent only in the outer portions of the mixing zone, where there is a coherent array of bubble and spike tips. Comparison with the familiar bubble/spike penetration law h = αAgt² is complicated by the lack of scale invariance, inability to carry the simulations to late time, the increasing Mach numbers of the bubble/spike tips, and sensitivity to the method of data analysis. Finally, we use the simulation data to analyze some constitutive properties of the mixing process.

  11. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    PubMed Central

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267

  12. Method for improving instrument response

    DOEpatents

    Hahn, David W.; Hencken, Kenneth R.; Johnsen, Howard A.; Flower, William L.

    2000-01-01

    This invention pertains generally to a method for improving the accuracy of particle analysis under conditions of discrete particle loading and particularly to a method for improving signal-to-noise ratio and instrument response in laser spark spectroscopic analysis of particulate emissions. Under conditions of low particle density loading (particles/m³) resulting from low overall metal concentrations and/or large particle size, uniform sampling cannot be guaranteed. The present invention discloses a technique for separating laser sparks that arise from sample particles from those that do not; that is, a process for systematically "gating" the instrument responses arising from "sampled" particles apart from those that do not is disclosed as a solution to this problem. The disclosed approach is based on random sampling combined with a conditional analysis of each pulse. A threshold value is determined for the ratio of the intensity of a spectral line for a given element to a baseline region. If the threshold value is exceeded, the pulse is classified as a "hit" and that data is collected and an average spectrum is generated from an arithmetic average of "hits". The true metal concentration is determined from the averaged spectrum.
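
    The conditional ("hit") analysis can be sketched in a few lines: compare a spectral-line window to a baseline window for each single-shot spectrum, keep the shots exceeding the threshold, and average only those; the synthetic spectra, wavelength indices, and threshold value below are illustrative assumptions:

    import numpy as np

    def conditional_average(spectra, line_idx, baseline_idx, threshold=3.0):
        """Average only those single-shot spectra whose line-to-baseline intensity
        ratio exceeds `threshold`, i.e. shots classified as particle hits."""
        line = spectra[:, line_idx].mean(axis=1)
        base = spectra[:, baseline_idx].mean(axis=1)
        hits = line / base > threshold
        return spectra[hits].mean(axis=0), hits.sum()

    # Hypothetical stack of single-shot spectra (n_shots x n_wavelengths).
    rng = np.random.default_rng(0)
    spectra = rng.normal(100.0, 5.0, size=(1000, 512))
    spectra[::50, 200:205] += 300.0              # a few shots actually sampled a particle
    avg, n_hits = conditional_average(spectra, line_idx=range(200, 205),
                                      baseline_idx=range(300, 320))
    print(n_hits, avg[200:205].mean())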

  13. Two-part random effects growth modeling to identify risks associated with alcohol and cannabis initiation, initial average use and changes in drug consumption in a sample of adult, male twins

    PubMed Central

    Gillespie, Nathan A.; Lubke, Gitta H.; Gardner, Charles O.; Neale, Michael C.; Kendler, Kenneth S.

    2012-01-01

    Aims Our aim was to profile alcohol and cannabis initiation and to characterize the effects of developmental and environmental risk factors on changes in average drug use over time. Design We fitted a two-part random effects growth model to identify developmental and environmental risks associated with alcohol and cannabis initiation, initial average use and changes in average use. Participants 1796 males aged 24–63 from the Virginia Adult Twin Study of Psychiatric and Substance Use Disorders. Measurements Data from three interview waves included self-report measures of average alcohol and cannabis use between ages 15 and 24, genetic risk of problem drug use, childhood environmental risks, personality, psychiatric symptoms, as well as personal, family and social risk factors. Findings Average alcohol and cannabis use were correlated at all ages. Genetic risk of drug use based on family history, higher sensation seeking, and peer group deviance predicted both alcohol and cannabis initiation. Higher drug availability predicted cannabis initiation while less parental monitoring and drug availability were the best predictors of how much cannabis individuals consumed over time. Conclusion The liability to initiate alcohol and cannabis, average drug use as well as changes in drug use during teenage years and young adulthood is associated with known risk factors. PMID:22177896

  14. Fabrication of polymer micro-lens array with pneumatically diaphragm-driven drop-on-demand inkjet technology.

    PubMed

    Xie, Dan; Zhang, Honghai; Shu, Xiayun; Xiao, Junfeng

    2012-07-02

    The paper reports an effective method to fabricate micro-lens arrays with an ultraviolet-curable polymer, using an original pneumatically diaphragm-driven drop-on-demand inkjet system. An array of plano-convex micro-lenses can be formed on the glass substrate due to surface tension and the hydrophobic effect. The micro-lens arrays have a uniform focusing function and smooth, truly planar surfaces. The fabrication process showed good repeatability as well: fifty micro-lenses randomly selected from a 9 × 9 micro-lens array with an average diameter of 333.28 μm showed 1.1% variation. Also, the focal length, the surface roughness and the optical properties of the fabricated micro-lenses are measured, analyzed and proved satisfactory. The technique shows great potential for fabricating polymer micro-lens arrays with high flexibility, a simple technological process and low production cost.

  15. Computer Modeling of High-Intensity Cs-Sputter Ion Sources

    NASA Astrophysics Data System (ADS)

    Brown, T. A.; Roberts, M. L.; Southon, J. R.

    The grid-point mesh program NEDLab has been used to computer model the interior of the high-intensity Cs-sputter source used in routine operations at the Center for Accelerator Mass Spectrometry (CAMS), with the goal of improving negative ion output. NEDLab has several features that are important to realistic modeling of such sources. First, space-charge effects are incorporated in the calculations through an automated ion-trajectories/Poisson-electric-fields successive-iteration process. Second, space-charge distributions can be averaged over successive iterations to suppress model instabilities. Third, space-charge constraints on ion emission from surfaces can be incorporated using Child's-law-based algorithms. Fourth, the energy of ions emitted from a surface can be randomly chosen from within a thermal energy distribution. And finally, ions can be emitted from a surface at randomized angles. The results of our modeling effort indicate that significant modification of the interior geometry of the source will double Cs+ ion production from our spherical ionizer and produce a significant increase in negative ion output from the source.
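
    The last two features, thermal emission energies and randomized emission angles, lend themselves to a short Monte Carlo sketch. The Maxwell-Boltzmann-type energy distribution, the cosine angular law, and the temperature below are illustrative assumptions, not NEDLab's actual internals.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def sample_emitted_ions(n, temperature_k, rng):
    """Draw initial energies and directions for ions leaving a hot surface.

    Energies follow a thermal (Maxwell-Boltzmann-like) distribution at the
    surface temperature, i.e. a Gamma(3/2, kT) law for the kinetic energy;
    polar angles follow a cosine emission law.  Both are common modelling
    assumptions used here purely for illustration.
    """
    kt = K_B * temperature_k
    energies_ev = rng.gamma(shape=1.5, scale=kt, size=n)
    polar = np.arcsin(np.sqrt(rng.uniform(size=n)))   # cosine-law polar angle
    azimuth = rng.uniform(0.0, 2.0 * np.pi, size=n)   # uniform azimuth
    return energies_ev, polar, azimuth

rng = np.random.default_rng(1)
e, theta, phi = sample_emitted_ions(10_000, temperature_k=1300.0, rng=rng)
print(f"mean emission energy {e.mean():.3f} eV vs 3/2 kT = {1.5 * K_B * 1300:.3f} eV")
```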

  16. The Enhancement of 3D Scans Depth Resolution Obtained by Confocal Scanning of Porous Materials

    NASA Astrophysics Data System (ADS)

    Martisek, Dalibor; Prochazkova, Jana

    2017-12-01

    The 3D reconstruction of simple structured materials using a confocal microscope is widely used in many different areas including civil engineering. Nonetheless, scans of porous materials such as concrete or cement paste are highly problematic. The well-known problem of these scans is low depth resolution in comparison to the horizontal and vertical resolution. The degradation of the image depth resolution is caused by systematic errors and especially by different random events. Our method is focused on the elimination of such random events, mainly the additive noise. We use an averaging method based on the Lindeberg-Lévy theorem that improves the final depth resolution to a level comparable with horizontal and vertical resolution. Moreover, using the least square method, we also precisely determine the limit value of a depth resolution. Therefore, we can continuously evaluate the difference between current resolution and the optimal one. This substantially simplifies the scanning process because the operator can easily determine the required number of scans.
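
    The averaging idea invoked here (the Lindeberg-Lévy central limit theorem) can be sketched on synthetic data: the residual noise of an n-scan average falls roughly as 1/√n toward the instrument's systematic limit. The surface, noise level, and array sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

true_depth = rng.uniform(0.0, 50.0, size=(128, 128))  # "true" surface heights
sigma = 4.0                                            # additive noise per scan

def averaged_scan(n_scans):
    """Average n_scans noisy depth maps of the same (synthetic) surface."""
    scans = true_depth + rng.normal(0.0, sigma, size=(n_scans,) + true_depth.shape)
    return scans.mean(axis=0)

for n in (1, 4, 16, 64):
    residual = averaged_scan(n) - true_depth
    print(f"n = {n:2d} scans: residual std {residual.std():.2f} "
          f"(CLT prediction {sigma / np.sqrt(n):.2f})")
```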

  17. Quantifying memory in complex physiological time-series.

    PubMed

    Shirazi, Amir H; Raoufy, Mohammad R; Ebadi, Haleh; De Rui, Michele; Schiff, Sami; Mazloom, Roham; Hajizadeh, Sohrab; Gharibzadeh, Shahriar; Dehpour, Ahmad R; Amodio, Piero; Jafari, G Reza; Montagnese, Sara; Mani, Ali R

    2013-01-01

    In a time-series, memory is a statistical feature that lasts for a period of time and distinguishes the time-series from a random, or memory-less, process. In the present study, the concept of "memory length" was used to define the time period, or scale over which rare events within a physiological time-series do not appear randomly. The method is based on inverse statistical analysis and provides empiric evidence that rare fluctuations in cardio-respiratory time-series are 'forgotten' quickly in healthy subjects while the memory for such events is significantly prolonged in pathological conditions such as asthma (respiratory time-series) and liver cirrhosis (heart-beat time-series). The memory length was significantly higher in patients with uncontrolled asthma compared to healthy volunteers. Likewise, it was significantly higher in patients with decompensated cirrhosis compared to those with compensated cirrhosis and healthy volunteers. We also observed that the cardio-respiratory system has simple low order dynamics and short memory around its average, and high order dynamics around rare fluctuations.
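
    The paper's inverse-statistical analysis is not reproduced here, but the underlying intuition, that rare events in a memoryless series arrive with roughly exponential (unclustered) gaps while memory makes them cluster, can be illustrated with a simple gap statistic on synthetic data. The threshold choice and the AR(1) surrogate are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)

def rare_event_gap_cv(series, quantile=0.99):
    """Coefficient of variation of gaps between 'rare events' (samples above
    a high quantile).  A memoryless series gives a CV near 1; clustering of
    rare events -- a crude signature of memory -- inflates the CV."""
    threshold = np.quantile(series, quantile)
    gaps = np.diff(np.flatnonzero(series > threshold))
    return gaps.std() / gaps.mean()

n = 200_000
white = rng.normal(size=n)        # memoryless reference
ar1 = np.zeros(n)                 # strongly autocorrelated surrogate
eps = rng.normal(size=n)
for t in range(1, n):
    ar1[t] = 0.95 * ar1[t - 1] + eps[t]

print("white-noise gap CV:", round(float(rare_event_gap_cv(white)), 2))
print("AR(1) 0.95  gap CV:", round(float(rare_event_gap_cv(ar1)), 2))
```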

  18. Quantifying Memory in Complex Physiological Time-Series

    PubMed Central

    Shirazi, Amir H.; Raoufy, Mohammad R.; Ebadi, Haleh; De Rui, Michele; Schiff, Sami; Mazloom, Roham; Hajizadeh, Sohrab; Gharibzadeh, Shahriar; Dehpour, Ahmad R.; Amodio, Piero; Jafari, G. Reza; Montagnese, Sara; Mani, Ali R.

    2013-01-01

    In a time-series, memory is a statistical feature that lasts for a period of time and distinguishes the time-series from a random, or memory-less, process. In the present study, the concept of “memory length” was used to define the time period, or scale over which rare events within a physiological time-series do not appear randomly. The method is based on inverse statistical analysis and provides empiric evidence that rare fluctuations in cardio-respiratory time-series are ‘forgotten’ quickly in healthy subjects while the memory for such events is significantly prolonged in pathological conditions such as asthma (respiratory time-series) and liver cirrhosis (heart-beat time-series). The memory length was significantly higher in patients with uncontrolled asthma compared to healthy volunteers. Likewise, it was significantly higher in patients with decompensated cirrhosis compared to those with compensated cirrhosis and healthy volunteers. We also observed that the cardio-respiratory system has simple low order dynamics and short memory around its average, and high order dynamics around rare fluctuations. PMID:24039811

  19. Stochastic modelling of animal movement.

    PubMed

    Smouse, Peter E; Focardi, Stefano; Moorcroft, Paul R; Kie, John G; Forester, James D; Morales, Juan M

    2010-07-27

    Modern animal movement modelling derives from two traditions. Lagrangian models, based on random walk behaviour, are useful for multi-step trajectories of single animals. Continuous Eulerian models describe expected behaviour, averaged over stochastic realizations, and are usefully applied to ensembles of individuals. We illustrate three modern research arenas. (i) Models of home-range formation describe the process of an animal 'settling down', accomplished by including one or more focal points that attract the animal's movements. (ii) Memory-based models are used to predict how accumulated experience translates into biased movement choices, employing reinforced random walk behaviour, with previous visitation increasing or decreasing the probability of repetition. (iii) Lévy movement involves a step-length distribution that is over-dispersed, relative to standard probability distributions, and adaptive in exploring new environments or searching for rare targets. Each of these modelling arenas implies more detail in the movement pattern than general models of movement can accommodate, but realistic empiric evaluation of their predictions requires dense locational data, both in time and space, only available with modern GPS telemetry.
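
    The over-dispersed step-length distribution that characterizes Lévy movement can be sampled directly by inverse transform. The exponent, cut-off, and the Gaussian comparison below are illustrative choices, not fitted to any animal data.

```python
import numpy as np

rng = np.random.default_rng(3)

def levy_steps(n, mu=2.5, l_min=1.0):
    """Pareto step lengths p(l) ~ l**(-mu) for l >= l_min, drawn by inverse
    transform; exponents in (1, 3] give the heavy-tailed behaviour
    associated with Levy movement."""
    u = rng.uniform(size=n)
    return l_min * u ** (-1.0 / (mu - 1.0))

def walk(step_lengths):
    """Isotropic 2-D walk: each step takes a fresh uniform heading."""
    headings = rng.uniform(0.0, 2.0 * np.pi, size=step_lengths.size)
    return (np.cumsum(step_lengths * np.cos(headings)),
            np.cumsum(step_lengths * np.sin(headings)))

levy = levy_steps(5_000)
gauss = np.abs(rng.normal(0.0, levy.mean(), size=5_000))
print(f"Levy steps    : mean {levy.mean():.1f}, max {levy.max():.1f}")
print(f"Gaussian steps: mean {gauss.mean():.1f}, max {gauss.max():.1f}")
x, y = walk(levy)   # occasional very long relocations dominate the trajectory
```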

  20. Electromagnetic Scattering by Fully Ordered and Quasi-Random Rigid Particulate Samples

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.

    2016-01-01

    In this paper we have analyzed circumstances under which a rigid particulate sample can behave optically as a true discrete random medium consisting of particles randomly moving relative to each other during measurement. To this end, we applied the numerically exact superposition T-matrix method to model far-field scattering characteristics of fully ordered and quasi-randomly arranged rigid multiparticle groups in fixed and random orientations. We have shown that, in and of itself, averaging optical observables over movements of a rigid sample as a whole is insufficient unless it is combined with a quasi-random arrangement of the constituent particles in the sample. Otherwise, certain scattering effects typical of discrete random media (including some manifestations of coherent backscattering) may not be accurately replicated.

  1. SETI and SEH (Statistical Equation for Habitables)

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2011-01-01

    The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book "Habitable planets for man" (1964). In this paper, we first provide the statistical generalization of the original and by now too simplistic Dole equation. In other words, a product of ten positive numbers is now turned into the product of ten positive random variables. This we call the SEH, an acronym standing for "Statistical Equation for Habitables". The mathematical structure of the SEH is then derived. The proof is based on the central limit theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov form of the CLT, or the Lindeberg form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that the new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the lognormal distribution. By construction, the mean value of this lognormal distribution is the total number of habitable planets as given by the statistical Dole equation. But now we also derive the standard deviation, the mode, the median and all the moments of this new lognormal NHab random variable. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. An application of our SEH then follows. The (average) distance between any two nearby habitable planets in the Galaxy may be shown to be inversely proportional to the cubic root of NHab. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies in 2008. Data Enrichment Principle. It should be noticed that ANY positive number of random variables in the SEH is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor becomes known to the scientists. This capability to make room for more future factors in the SEH we call the "Data Enrichment Principle", and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. A practical example is then given of how our SEH works numerically. We work out in detail the case where each of the ten random variables is uniformly distributed around its own mean value as given by Dole back in 1964 and has an assumed standard deviation of 10%. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ± 200 million, and the average distance between any two nearby habitable planets should be about 88 light years ± 40 light years. Finally, we match our SEH results against the results of the Statistical Drake Equation that we introduced in our 2008 IAC presentation.
As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). And the average distance between any two nearby habitable planets turns out to be much smaller than the average distance between any two neighboring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an ET average distance about 20 times higher than the average distance between any couple of adjacent habitable planets.
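
    The central-limit argument behind the SEH can be checked with a small Monte Carlo experiment: the product of several independent positive factors is approximately lognormal because its logarithm is a sum of independent terms. The number of factors, their means, and their ±10% spreads below are illustrative stand-ins, not Dole's values.

```python
import numpy as np

rng = np.random.default_rng(2011)

# Ten independent positive factors, each uniform within +/-10% of its mean.
means = rng.uniform(0.1, 10.0, size=10)
product = np.prod(
    [rng.uniform(m * 0.9, m * 1.1, size=100_000) for m in means], axis=0
)

logs = np.log(product)
skew = ((logs - logs.mean()) ** 3).mean() / logs.std() ** 3
print(f"skewness of log(product) = {skew:.4f}")
# Skewness near zero means log(product) is close to Gaussian,
# i.e. the product itself is approximately lognormal.
```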

  2. Analog model for quantum gravity effects: phonons in random fluids.

    PubMed

    Krein, G; Menezes, G; Svaiter, N F

    2010-09-24

    We describe an analog model for quantum gravity effects in condensed matter physics. The situation discussed is that of phonons propagating in a fluid with a random velocity wave equation. We consider that there are random fluctuations in the reciprocal of the bulk modulus of the system and study free phonons in the presence of Gaussian colored noise with zero mean. We show that, in this model, after performing the random averages over the noise function a free conventional scalar quantum field theory describing free phonons becomes a self-interacting model.

  3. Simulation study of entropy production in the one-dimensional Vlasov system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Zongliang, E-mail: liangliang1223@gmail.com; Wang, Shaojie

    2016-07-15

    The coarse-grain averaged distribution function of the one-dimensional Vlasov system is obtained by numerical simulation. The entropy production in the cases of a random field, linear Landau damping, and the bump-on-tail instability is computed with the coarse-grain averaged distribution function. The computed entropy production converges with increasing coarse-grain averaging length. When the distribution function differs slightly from a Maxwellian distribution, the converged value agrees with the result computed by using the definition of thermodynamic entropy. The choice of coarse-grain averaging length used to compute the coarse-grain averaged distribution function is discussed.

  4. Random pulse generator

    NASA Technical Reports Server (NTRS)

    Lindsey, R. S., Jr. (Inventor)

    1975-01-01

    An exemplary embodiment of the present invention provides a source of random-width and randomly spaced rectangular voltage pulses whose mean or average frequency of operation is controllable within prescribed limits of about 10 hertz to 1 megahertz. A pair of thin-film metal resistors is used to provide a differential white-noise voltage pulse source. Pulse shaping and amplification circuitry provides relatively short-duration pulses of constant amplitude, which are applied to anti-bounce logic circuitry to prevent ringing effects. The pulse outputs from the anti-bounce circuits are then used to control two one-shot multivibrators whose outputs comprise the random-length and randomly spaced rectangular pulses. Means are provided for monitoring, calibrating, and evaluating the relative randomness of the generator.

  5. X-ray microtomography study of the compaction process of rods under tapping.

    PubMed

    Fu, Yang; Xi, Yan; Cao, Yixin; Wang, Yujie

    2012-05-01

    We present an x-ray microtomography study of the compaction process of cylindrical rods under tapping. The process is monitored by measuring the evolution of the orientational order parameter and the local and overall packing densities as a function of the tapping number for different tapping intensities. The slow relaxation dynamics of the orientational order parameter can be well fitted with a stretched-exponential law with stretching exponents ranging from 0.9 to 1.6. The corresponding relaxation time versus tapping intensity follows an Arrhenius behavior which is reminiscent of the slow dynamics in thermal glassy systems. We also investigated the boundary effect on the ordering process and found that boundary rods order faster than interior ones. In searching for the underlying mechanism of the slow dynamics, we estimated the initial random velocities of the rods under tapping and found that the ordering process is compatible with a diffusion mechanism. The average coordination number as a function of the tapping number at different tapping intensities has also been measured; it spans a range from 6 to 8.
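
    Fitting the stretched-exponential relaxation law mentioned above is a routine curve-fitting exercise; the sketch below fits S(t) = S_inf − ΔS·exp(−(t/τ)^β) to synthetic tap-number data. The functional form, parameter names, and data are generic illustrations, not the authors' fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, s_inf, delta_s, tau, beta):
    """Order parameter relaxing as S(t) = S_inf - dS * exp(-(t/tau)**beta)."""
    return s_inf - delta_s * np.exp(-(t / tau) ** beta)

# Synthetic "order parameter vs. tap number" data with mild noise.
rng = np.random.default_rng(5)
taps = np.arange(1.0, 2000.0)
data = stretched_exp(taps, 0.8, 0.6, 300.0, 1.2) + rng.normal(0.0, 0.01, taps.size)

popt, _ = curve_fit(stretched_exp, taps, data, p0=(0.8, 0.5, 200.0, 1.0))
print("fitted (S_inf, dS, tau, beta):", np.round(popt, 3))
```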

  6. Focusing light through random scattering media by four-element division algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin

    2018-01-01

    The focusing of light through random scattering materials using wavefront shaping is studied in detail. We propose a new approach, the four-element division algorithm, to improve the average convergence rate and signal-to-noise ratio of focusing. Using 4096 independently controlled segments of light, the intensity at the target is enhanced 72-fold over the original intensity at the same position. The four-element division algorithm and existing phase-control algorithms for focusing through scattering media are compared in both numerical simulation and experiment. It is found that the four-element division algorithm is particularly advantageous for improving the average convergence rate of focusing.

  7. Distribution of G concurrence of random pure states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappellini, Valerio; Sommers, Hans-Juergen; Zyczkowski, Karol

    2006-12-15

    The average entanglement of random pure states of an N × N composite system is analyzed. We compute the average value of the determinant D of the reduced state, which forms an entanglement monotone. Calculating higher moments of the determinant, we characterize the probability distribution P(D). Similar results are obtained for the rescaled Nth root of the determinant, called the G concurrence. We show that in the limit N → ∞ this quantity becomes concentrated at a single point G* = 1/e. The position of the concentration point changes if one considers an arbitrary N × K bipartite system, in the joint limit N, K → ∞ with K/N fixed.

  8. Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation

    NASA Astrophysics Data System (ADS)

    Räisänen, Petri; Barker, W. Howard

    2004-07-01

    The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeated sampling, and averaging those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of about 3.

  9. An efficient computational method for characterizing the effects of random surface errors on the average power pattern of reflectors

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1983-01-01

    Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
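
    For orientation, the classical Ruze tolerance formula that this model generalizes relates gain loss to the surface rms error through G/G0 = exp(−(4πε/λ)²). A quick numerical sketch (the rms/wavelength values are arbitrary):

```python
import numpy as np

def ruze_gain_loss_db(rms_error, wavelength):
    """Classical Ruze gain degradation, G/G0 = exp(-(4*pi*eps/lambda)**2),
    returned in dB.  The paper generalizes this to annulus-dependent rms
    errors and nonuniform illumination tapers."""
    factor = (4.0 * np.pi * rms_error / wavelength) ** 2
    return 10.0 * np.log10(np.exp(-factor))

for rms_over_lambda in (0.01, 0.02, 0.05):
    print(f"rms/lambda = {rms_over_lambda:.2f}: "
          f"gain loss = {ruze_gain_loss_db(rms_over_lambda, 1.0):.2f} dB")
```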

  10. Stochastic modelling for lake thermokarst and peatland patterns in permafrost and near permafrost zones

    NASA Astrophysics Data System (ADS)

    Orlov, Timofey; Sadkov, Sergey; Panchenko, Evgeniy; Zverev, Andrey

    2017-04-01

    Peatlands occupy a significant share of the cryolithozone area. They are currently experiencing intense pressure from oil and gas field development, as well as from the construction of infrastructure. This underscores the importance of peatland studies, including those dealing with the forecast of peatland evolution. Earlier we conducted a similar probabilistic modelling for areas of thermokarst development. Its principal points were: 1. The appearance of a thermokarst depression within a given area is a random event whose probability is directly proportional to the size of the area (Δs). For small sites the probability of one thermokarst depression appearing is much greater than that of several appearing, i.e. p_1 = γΔs + o(Δs) and p_k = o(Δs) for k = 2, 3, … 2. The growth of a new thermokarst depression is a random variable independent of the growth of other depressions. It occurs due to thermoabrasion and, hence, is directly proportional to the amount of heat in the lake and inversely proportional to the lateral surface area of the lake depression. Using this model, we can derive analytically the two main laws of the morphological pattern of lake thermokarst plains. First, the number of thermokarst depressions (centers) on a random plot obeys the Poisson law: P(k, s) = ((γs)^k / k!) e^(−γs), where γ is the average number of depressions per unit area and s is the area of a trial site. Second, the lognormal distribution of thermokarst lake diameters holds at any time, i.e. the density is given by the equation: f_d(x, t) = (1/(√(2π) σ x √t)) e^(−…

  11. An IPv6 routing lookup algorithm using weight-balanced tree based on prefix value for virtual router

    NASA Astrophysics Data System (ADS)

    Chen, Lingjiang; Zhou, Shuguang; Zhang, Qiaoduo; Li, Fenghua

    2016-10-01

    Virtual router enables the coexistence of different networks on the same physical facility and has lately attracted a great deal of attention from researchers. As the number of IPv6 addresses is rapidly increasing in virtual routers, designing an efficient IPv6 routing lookup algorithm is of great importance. In this paper, we present an IPv6 lookup algorithm called weight-balanced tree (WBT). WBT merges Forwarding Information Bases (FIBs) of virtual routers into one spanning tree, and compresses the space cost. WBT's average time complexity and the worst case time complexity of lookup and update process are both O(logN) and space complexity is O(cN) where N is the size of routing table and c is a constant. Experiments show that WBT helps reduce more than 80% Static Random Access Memory (SRAM) cost in comparison to those separation schemes. WBT also achieves the least average search depth comparing with other homogeneous algorithms.

  12. Pattern selection and super-patterns in the bounded confidence model

    DOE PAGES

    Ben-Naim, E.; Scheel, A.

    2015-10-26

    We study pattern formation in the bounded confidence model of opinion dynamics. In this random process, opinion is quantified by a single variable. Two agents may interact and reach a fair compromise, but only if their difference of opinion falls below a fixed threshold. Starting from a uniform distribution of opinions with compact support, a traveling wave forms and it propagates from the domain boundary into the unstable uniform state. Consequently, the system reaches a steady state with isolated clusters that are separated by distance larger than the interaction range. These clusters form a quasi-periodic pattern where the sizes of the clusters and the separations between them are nearly constant. We obtain analytically the average separation between clusters L. Interestingly, there are also very small quasi-periodic modulations in the size of the clusters. Furthermore, the spatial periods of these modulations are a series of integers that follow from the continued-fraction representation of the irrational average separation L.

  13. Pattern selection and super-patterns in the bounded confidence model

    NASA Astrophysics Data System (ADS)

    Ben-Naim, E.; Scheel, A.

    2015-10-01

    We study pattern formation in the bounded confidence model of opinion dynamics. In this random process, opinion is quantified by a single variable. Two agents may interact and reach a fair compromise, but only if their difference of opinion falls below a fixed threshold. Starting from a uniform distribution of opinions with compact support, a traveling wave forms and it propagates from the domain boundary into the unstable uniform state. Consequently, the system reaches a steady state with isolated clusters that are separated by distance larger than the interaction range. These clusters form a quasi-periodic pattern where the sizes of the clusters and the separations between them are nearly constant. We obtain analytically the average separation between clusters L. Interestingly, there are also very small quasi-periodic modulations in the size of the clusters. The spatial periods of these modulations are a series of integers that follow from the continued-fraction representation of the irrational average separation L.
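
    The interaction rule described in both records above is easy to simulate: agents meet in random pairs and average their opinions only when they lie within the confidence threshold. The population size, support, threshold, and step count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

def bounded_confidence(n_agents=2000, threshold=1.0, support=10.0, steps=200_000):
    """Random pairwise compromises: two agents move to their mutual average
    only if their opinions differ by less than the threshold."""
    opinions = rng.uniform(0.0, support, size=n_agents)
    for _ in range(steps):
        i, j = rng.integers(0, n_agents, size=2)
        if abs(opinions[i] - opinions[j]) < threshold:
            opinions[i] = opinions[j] = 0.5 * (opinions[i] + opinions[j])
    return opinions

final = bounded_confidence()
hist, edges = np.histogram(final, bins=np.arange(0.0, 10.25, 0.25))
occupied = edges[:-1][hist > 0]
print("occupied opinion bins (cluster locations):", np.round(occupied, 2))
```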

  14. Random errors of oceanic monthly rainfall derived from SSM/I using probability distribution functions

    NASA Technical Reports Server (NTRS)

    Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.

    1993-01-01

    Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategy (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
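
    The error-estimation idea, two independent estimates of the same monthly mean, can be sketched directly: if morning and afternoon retrievals share the true signal but carry independent noise, half the variance of their difference estimates the random-error variance of a single estimate. The numbers below are synthetic, not SSM/I data.

```python
import numpy as np

rng = np.random.default_rng(19)

true_rain = rng.gamma(shape=2.0, scale=2.0, size=5_000)   # "true" monthly means
noise_sd = 1.5                                            # per-estimate random error

morning = true_rain + rng.normal(0.0, noise_sd, size=true_rain.size)
afternoon = true_rain + rng.normal(0.0, noise_sd, size=true_rain.size)

# Independent errors: Var(morning - afternoon) = 2 * Var(single-estimate error)
estimated_sd = np.sqrt(0.5 * np.var(morning - afternoon))
print(f"estimated random-error sd = {estimated_sd:.2f} (true value {noise_sd})")
```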

  15. Descriptive parameter for photon trajectories in a turbid medium

    NASA Astrophysics Data System (ADS)

    Gandjbakhche, Amir H.; Weiss, George H.

    2000-06-01

    In many applications of laser techniques for diagnostic or therapeutic purposes it is necessary to be able to characterize photon trajectories to know which parts of the tissue are being interrogated. In this paper, we consider the cw reflectance experiment on a semi-infinite medium with uniform optical parameters and having a planar interface. The analysis is carried out in terms of a continuous-time random walk, and the relation between the occupancy of a plane parallel to the surface and the maximum depth reached by the random walker is studied. The first moment of the ratio of average depth to the average maximum depth yields information about the volume of tissue interrogated as well as giving some indication of the region of tissue that gets the most light. We have also calculated the standard deviation of this random variable. It is not large enough to qualitatively affect information contained in the first moment.

  16. Typical performance of approximation algorithms for NP-hard problems

    NASA Astrophysics Data System (ADS)

    Takabe, Satoshi; Hukushima, Koji

    2016-11-01

    Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicitly some examples for the difference in thresholds.
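
    Of the three algorithms, leaf removal is the simplest to state: while some vertex has degree one, put its neighbour into the cover and delete both; whatever survives is the "core". The sketch below implements that rule on a sparse random graph; the greedy pass over any leftover core is only there to return a valid cover and is not part of the analyzed algorithm.

```python
import random
from collections import defaultdict

def leaf_removal_cover(edges):
    """Leaf-removal heuristic for minimum vertex cover."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cover = set()

    def delete(v):
        for w in adj.pop(v, set()):
            adj[w].discard(v)

    leaves = [v for v in adj if len(adj[v]) == 1]
    while leaves:
        leaf = leaves.pop()
        if leaf not in adj or len(adj[leaf]) != 1:
            continue                       # stale entry
        (neighbour,) = adj[leaf]
        cover.add(neighbour)               # a leaf's neighbour is always a safe choice
        delete(leaf)
        delete(neighbour)
        leaves.extend(v for v in adj if len(adj[v]) == 1)

    for u in list(adj):                    # greedy fallback for the remaining core
        for v in list(adj.get(u, ())):
            if u not in cover and v not in cover:
                cover.add(u)
    return cover

random.seed(0)
n, m = 200, 220                            # sparse graph, average degree ~2.2
edges = {tuple(sorted(random.sample(range(n), 2))) for _ in range(m)}
cover = leaf_removal_cover(edges)
assert all(u in cover or v in cover for u, v in edges)
print("cover size:", len(cover), "of", n, "vertices")
```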

  17. Speckle phase near random surfaces

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoyi; Cheng, Chuanfu; An, Guoqiang; Han, Yujing; Rong, Zhenyu; Zhang, Li; Zhang, Meina

    2018-03-01

    Based on Kirchhoff approximation theory, the speckle phase near random surfaces with different roughness is numerically simulated. As expected, the properties of the speckle phase near the random surfaces are different from that in far field. In addition, as scattering distances and roughness increase, the average fluctuations of the speckle phase become larger. Unusually, the speckle phase is somewhat similar to the corresponding surface topography. We have performed experiments to verify the theoretical simulation results. Studies in this paper contribute to understanding the evolution of speckle phase near a random surface and provide a possible way to identify a random surface structure based on its speckle phase.

  18. Modelling wildland fire propagation by tracking random fronts

    NASA Astrophysics Data System (ADS)

    Pagnini, G.; Mentrelli, A.

    2013-11-01

    Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are considered alternatives to each other because the solution of the reaction-diffusion equation is generally a continuous smooth function with exponential decay and infinite support, while the level-set method, a front-tracking technique, generates a sharp function with finite support. However, these two approaches can indeed be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random character that are extremely important in wildland fire propagation. As a consequence the fire front acquires a random character, too, and a tracking method for random fronts is needed. In particular, the level-set contour is here randomized according to the probability density function of the interface particle displacement. When the level-set method is developed to track a front interface with random motion, the resulting averaged process turns out to be governed by an evolution equation of the reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key, characterizing role it has in the level-set approach. The resulting model is suitable for simulating effects due to turbulent convection, such as flank and backing fire, the faster fire spread caused by hot-air pre-heating and by ember landing, and also the fire overcoming a firebreak zone, a case not resolved by models based on the level-set method. Moreover, the proposed formulation yields a correction to the rate-of-spread formula due to the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour.

  19. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent.

    PubMed

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-07

    We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that of no topological constraint if the excluded volume is small and the number of segments is large. We call it topological swelling. We argue an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAP consisting of hard cylindrical segments with various different values of the radius of segments. Here we mean by the average size the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots. Here we define the topological balance length of a knot by such a number of segments that topological entropic repulsions are balanced with the knot complexity in the average size. The additivity suggests the local knot picture.

  20. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent

    NASA Astrophysics Data System (ADS)

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-01

    We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that of no topological constraint if the excluded volume is small and the number of segments is large. We call it topological swelling. We argue an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAP consisting of hard cylindrical segments with various different values of the radius of segments. Here we mean by the average size the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots. Here we define the topological balance length of a knot by such a number of segments that topological entropic repulsions are balanced with the knot complexity in the average size. The additivity suggests the local knot picture.

  1. Digital servo control of random sound fields

    NASA Technical Reports Server (NTRS)

    Nakich, R. B.

    1973-01-01

    It is necessary to place a number of sensors at different positions in the sound field to determine the actual sound intensities to which the test object is subjected. It is then possible to determine whether the specification is being met adequately or exceeded. Since the excitation is of a random nature, the signals are essentially coherent and it is impossible to obtain a true average.

  2. A Method of Reducing Random Drift in the Combined Signal of an Array of Inertial Sensors

    DTIC Science & Technology

    2015-09-30

    stability of the collective output, Bayard et al, US Patent 6,882,964. The prior art methods rely upon the use of Kalman filtering and averaging...including scale-factor errors, quantization effects, temperature effects, random drift, and additive noise. A comprehensive account of all of these

  3. Statistical Analysis for Multisite Trials Using Instrumental Variables with Random Coefficients

    ERIC Educational Resources Information Center

    Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako

    2012-01-01

    Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…

  4. Improving Learning in Primary Schools of Developing Countries: A Meta-Analysis of Randomized Experiments

    ERIC Educational Resources Information Center

    McEwan, Patrick J.

    2015-01-01

    I gathered 77 randomized experiments (with 111 treatment arms) that evaluated the effects of school-based interventions on learning in developing-country primary schools. On average, monetary grants and deworming treatments had mean effect sizes that were close to zero and not statistically significant. Nutritional treatments, treatments that…

  5. The level crossing rates and associated statistical properties of a random frequency response function

    NASA Astrophysics Data System (ADS)

    Langley, Robin S.

    2018-03-01

    This work is concerned with the statistical properties of the frequency response function of the energy of a random system. Earlier studies have considered the statistical distribution of the function at a single frequency, or alternatively the statistics of a band-average of the function. In contrast the present analysis considers the statistical fluctuations over a frequency band, and results are obtained for the mean rate at which the function crosses a specified level (or equivalently, the average number of times the level is crossed within the band). Results are also obtained for the probability of crossing a specified level at least once, the mean rate of occurrence of peaks, and the mean trough-to-peak height. The analysis is based on the assumption that the natural frequencies and mode shapes of the system have statistical properties that are governed by the Gaussian Orthogonal Ensemble (GOE), and the validity of this assumption is demonstrated by comparison with numerical simulations for a random plate. The work has application to the assessment of the performance of dynamic systems that are sensitive to random imperfections.

  6. Geographic Gossip: Efficient Averaging for Sensor Networks

    NASA Astrophysics Data System (ADS)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\\epsilon$ using $O(\\frac{n^{1.5}}{\\sqrt{\\log n}} \\log \\epsilon^{-1})$ radio transmissions, which yields a $\\sqrt{\\frac{n}{\\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
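
    For contrast with the geographic scheme, plain randomized gossip is easy to sketch: at each step a random node averages its value with a random neighbour, and every node converges to the global mean. The ring topology, node count, and round count below are illustrative; the paper's grids and random geometric graphs would differ only in the neighbour lists.

```python
import numpy as np

rng = np.random.default_rng(13)

def gossip_average(values, neighbours, rounds):
    """Standard pairwise gossip: pick a node and one of its neighbours at
    random and replace both of their values by the pair's average."""
    x = values.astype(float).copy()
    for _ in range(rounds):
        i = rng.integers(x.size)
        j = rng.choice(neighbours[i])
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

n = 30
neighbours = [np.array([(i - 1) % n, (i + 1) % n]) for i in range(n)]  # ring
values = rng.uniform(0.0, 10.0, size=n)

x = gossip_average(values, neighbours, rounds=200_000)
print(f"true mean {values.mean():.4f}; "
      f"max node deviation {np.abs(x - values.mean()).max():.2e}")
```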

  7. A randomized clinical trial on the effects of remote intercessory prayer in the adverse outcomes of pregnancies.

    PubMed

    da Rosa, Maria Inês; Silva, Fabio Rosa; Silva, Bruno Rosa; Costa, Luciana Carvalho; Bergamo, Angela Mendes; Silva, Napoleão Chiaramonte; Medeiros, Lidia Rosi de Freitas; Battisti, Iara Denise Endruweit; Azevedo, Rafael

    2013-08-01

    The scope of this article was to investigate whether intercessory prayer (IP) influences the adverse outcomes of pregnancies. A double-blind, randomized clinical trial was conducted with 564 pregnant women attending a prenatal public health care service. The women were randomly assigned to an IP group or to a control group (n = 289 per group). They were simultaneously and randomly assigned to practice prayer off-site or not. The following parameters were evaluated: Apgar scores, type of delivery and birth weight. The mean age of the women was 25.1 years of age (± 7.4), and the average gestational age was 23.4 weeks (± 8.1). The average number of years of schooling for the women was 8.1 years (± 3.1). The women in the IP and control groups presented a similar number of adverse medical events with non-significant p. No significant differences were detected in the frequency of adverse outcomes in pregnant women who practiced IP and those in the control group.

  8. Statistical process control of mortality series in the Australian and New Zealand Intensive Care Society (ANZICS) adult patient database: implications of the data generating process.

    PubMed

    Moran, John L; Solomon, Patricia J

    2013-05-24

    Statistical process control (SPC), an industrial sphere initiative, has recently been applied in health care and public health surveillance. SPC methods assume independent observations and process autocorrelation has been associated with increase in false alarm frequency. Monthly mean raw mortality (at hospital discharge) time series, 1995-2009, at the individual Intensive Care unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence for series (i) autocorrelation and seasonality was demonstrated using (partial)-autocorrelation ((P)ACF) function displays and classical series decomposition and (ii) "in-control" status was sought using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Application of time-series to an exemplar complete ICU series (1995-(end)2009) was via Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance. The overall data set, 1995-2009, consisted of 491324 records from 137 ICU sites; average raw mortality was 14.07%; average(SD) raw and expected mortalities ranged from 0.012(0.113) and 0.013(0.045) to 0.296(0.457) and 0.278(0.247) respectively. For the raw mortality series: 71 sites had continuous data for assessment up to or beyond lag40 and 35% had autocorrelation through to lag40; and of 36 sites with continuous data for ≥ 72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series. Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model, with GARCH effects, displayed white-noise residuals which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model. The data generating process of monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues.
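
    The control-charting step can be illustrated with a bare-bones EWMA chart with k-sigma limits on a synthetic monthly mortality series. The smoothing constant, in-control mean and SD, and the injected shift are illustrative, and neither risk adjustment nor the autocorrelation/GARCH modelling discussed above is included.

```python
import numpy as np

def ewma_chart(x, target, sigma, lam=0.2, k=3.0):
    """EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1} with time-varying
    k-sigma control limits; returns (z, lower, upper)."""
    z = np.empty(x.size)
    z_prev = target
    for i, xt in enumerate(x):
        z_prev = lam * xt + (1.0 - lam) * z_prev
        z[i] = z_prev
    t = np.arange(1, x.size + 1)
    width = k * sigma * np.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t)))
    return z, target - width, target + width

# Synthetic monthly mortality proportions: 60 in-control months, then a shift.
rng = np.random.default_rng(17)
p0, sd = 0.14, 0.015
series = np.concatenate([rng.normal(p0, sd, 60), rng.normal(p0 + 0.03, sd, 24)])

z, lo, hi = ewma_chart(series, target=p0, sigma=sd)
signals = np.flatnonzero((z < lo) | (z > hi))
print("first out-of-control signal at month:",
      int(signals[0]) + 1 if signals.size else None)
```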

  9. Esophagus segmentation in CT via 3D fully convolutional neural network and random walk.

    PubMed

    Fechter, Tobias; Adebahr, Sonja; Baltas, Dimos; Ben Ayed, Ismail; Desrosiers, Christian; Dolz, Jose

    2017-12-01

    Precise delineation of organs at risk is a crucial task in radiotherapy treatment planning for delivering high doses to the tumor while sparing healthy tissues. In recent years, automated segmentation methods have shown an increasingly high performance for the delineation of various anatomical structures. However, this task remains challenging for organs like the esophagus, which have a versatile shape and poor contrast to neighboring tissues. For human experts, segmenting the esophagus from CT images is a time-consuming and error-prone process. To tackle these issues, we propose a random walker approach driven by a 3D fully convolutional neural network (CNN) to automatically segment the esophagus from CT images. First, a soft probability map is generated by the CNN. Then, an active contour model (ACM) is fitted to the CNN soft probability map to get a first estimation of the esophagus location. The outputs of the CNN and ACM are then used in conjunction with a probability model based on CT Hounsfield (HU) values to drive the random walker. Training and evaluation were done on 50 CTs from two different datasets, with clinically used peer-reviewed esophagus contours. Results were assessed regarding spatial overlap and shape similarity. The esophagus contours generated by the proposed algorithm showed a mean Dice coefficient of 0.76 ± 0.11, an average symmetric square distance of 1.36 ± 0.90 mm, and an average Hausdorff distance of 11.68 ± 6.80, compared to the reference contours. These results translate to a very good agreement with reference contours and an increase in accuracy compared to existing methods. Furthermore, when considering the results reported in the literature for the publicly available Synapse dataset, our method outperformed all existing approaches, which suggests that the proposed method represents the current state-of-the-art for automatic esophagus segmentation. We show that a CNN can yield accurate estimations of esophagus location, and that the results of this model can be refined by a random walk step taking pixel intensities and neighborhood relationships into account. One of the main advantages of our network over previous methods is that it performs 3D convolutions, thus fully exploiting the 3D spatial context and performing an efficient volume-wise prediction. The whole segmentation process is fully automatic and yields esophagus delineations in very good agreement with the gold standard, showing that it can compete with previously published methods. © 2017 American Association of Physicists in Medicine.

  10. Quantifying rapid changes in cardiovascular state with a moving ensemble average.

    PubMed

    Cieslak, Matthew; Ryan, William S; Babenko, Viktoriya; Erro, Hannah; Rathbun, Zoe M; Meiring, Wendy; Kelsey, Robert M; Blascovich, Jim; Grafton, Scott T

    2018-04-01

    MEAP, the moving ensemble analysis pipeline, is a new open-source tool designed to perform multisubject preprocessing and analysis of cardiovascular data, including electrocardiogram (ECG), impedance cardiogram (ICG), and continuous blood pressure (BP). In addition to traditional ensemble averaging, MEAP implements a moving ensemble averaging method that allows for the continuous estimation of indices related to cardiovascular state, including cardiac output, preejection period, heart rate variability, and total peripheral resistance, among others. Here, we define the moving ensemble technique mathematically, highlighting its differences from fixed-window ensemble averaging. We describe MEAP's interface and features for signal processing, artifact correction, and cardiovascular-based fMRI analysis. We demonstrate the accuracy of MEAP's novel B point detection algorithm on a large collection of hand-labeled ICG waveforms. As a proof of concept, two subjects completed a series of four physical and cognitive tasks (cold pressor, Valsalva maneuver, video game, random dot kinetogram) on 3 separate days while ECG, ICG, and BP were recorded. Critically, the moving ensemble method reliably captures the rapid cyclical cardiovascular changes related to the baroreflex during the Valsalva maneuver and the classic cold pressor response. Cardiovascular measures were seen to vary considerably within repetitions of the same cognitive task for each individual, suggesting that a carefully designed paradigm could be used to capture fast-acting event-related changes in cardiovascular state. © 2017 Society for Psychophysiological Research.
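
    The windowing idea behind the moving ensemble, an ensemble average recomputed around every beat rather than once for a whole block, can be sketched with synthetic per-beat waveforms. This is not MEAP's implementation; the window length and data are arbitrary.

```python
import numpy as np

def fixed_ensemble(beats):
    """Single ensemble-averaged waveform for the whole recording."""
    return beats.mean(axis=0)

def moving_ensemble(beats, window=15):
    """One ensemble-averaged waveform per beat, averaging each beat with its
    neighbours in a sliding window, so slow changes in cardiovascular state
    are tracked instead of being averaged away."""
    n, half = beats.shape[0], window // 2
    out = np.empty_like(beats)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out[i] = beats[lo:hi].mean(axis=0)
    return out

# Synthetic beats: a waveform with a slow drift plus beat-to-beat noise.
rng = np.random.default_rng(23)
t = np.linspace(0.0, 1.0, 200)
drift = np.linspace(0.0, 2.0, 300)[:, None]
beats = np.sin(2 * np.pi * 3 * t) + drift + rng.normal(0.0, 0.3, (300, t.size))

print("fixed ensemble :", fixed_ensemble(beats).shape)    # one (200,) waveform
print("moving ensemble:", moving_ensemble(beats).shape)   # (300, 200), one per beat
```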

  11. The Hypertension Optimal Treatment (HOT) Study--patient characteristics: randomization, risk profiles, and early blood pressure results.

    PubMed

    Hansson, L; Zanchetti, A

    1994-09-01

    The Hypertension Optimal Treatment (HOT) Study is a prospective, randomized, multicenter trial being conducted in 26 countries. Its main aim is to evaluate the relationship between three levels of target diastolic blood pressure (< or = 90, < or = 85 or < or = 80 mmHg) and cardiovascular morbidity and mortality in hypertensive patients. In addition, the study will examine the effects on morbidity and mortality of a low dose, 75 mg daily, of acetylsalicylic acid (ASA, aspirin) or placebo. In the HOT Study, basic antihypertensive treatment is initiated with the calcium antagonist felodipine at a dose of 5 mg daily. If target blood pressure is not reached, additional antihypertensive therapy with either an angiotensin converting enzyme (ACE) inhibitor or a beta-adrenoceptor blocking agent is given. Further dosage adjustments are made in accordance with a set protocol. As a fifth and final step, a diuretic may be added. Inclusion of patients was stopped on April 30, 1994. At that time 19,196 patients had been randomized. There were 9,055 (47%) women and 10,141 (53%) men with an average age of 61.5 +/- 7.5 (SD) years. At enrollment, 52% of patients were receiving antihypertensive treatment. These patients entered a wash-out period of at least 2 weeks before randomization. The average randomization blood pressure in untreated patients was 169 +/- 14/106 +/- 3 mmHg and in the treated patients 170 +/- 14/105 +/- 3 mmHg. On August 15, 1994, blood pressure data were available for 14,710 and 10,275 patients, who had completed 3 and 6 months treatment, respectively. The average reduction in diastolic blood pressure was 22 mmHg after 6 months.(ABSTRACT TRUNCATED AT 250 WORDS)

  12. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    PubMed

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.

  13. Community-based intervention packages for reducing maternal and neonatal morbidity and mortality and improving neonatal outcomes.

    PubMed

    Lassi, Zohra S; Bhutta, Zulfiqar A

    2015-03-23

    While maternal, infant and under-five child mortality rates in developing countries have declined significantly in the past two to three decades, newborn mortality rates have reduced much more slowly. While it is recognised that almost half of the newborn deaths can be prevented by scaling up evidence-based available interventions (such as tetanus toxoid immunisation to mothers, clean and skilled care at delivery, newborn resuscitation, exclusive breastfeeding, clean umbilical cord care, and/or management of infections in newborns), many require facility-based and outreach services. It has also been stated that a significant proportion of these mortalities and morbidities could also be potentially addressed by developing community-based packaged interventions which should also be supplemented by developing and strengthening linkages with the local health systems. Some of the recent community-based studies of interventions targeting women of reproductive age have shown variable impacts on maternal outcomes and hence it is uncertain if these strategies have consistent benefit across the continuum of maternal and newborn care. To assess the effectiveness of community-based intervention packages in reducing maternal and neonatal morbidity and mortality; and improving neonatal outcomes. We searched the Cochrane Pregnancy and Childbirth Group's Trials Register (31 May 2014), World Bank's JOLIS (25 May 2014), BLDS at IDS and IDEAS database of unpublished working papers (25 May 2014), Google and Google Scholar (25 May 2014). All prospective randomised, cluster-randomised and quasi-randomised trials evaluating the effectiveness of community-based intervention packages in reducing maternal and neonatal mortality and morbidities, and improving neonatal outcomes. Two review authors independently assessed trials for inclusion, assessed trial quality and extracted the data. Data were checked for accuracy. The review included 26 cluster-randomised/quasi-randomised trials, covering a wide range of interventional packages, including two subsets from three trials. Assessment of risk of bias in these studies suggests concerns regarding insufficient information on sequence generation and regarding failure to adequately address incomplete outcome data, particularly from randomised controlled trials. We incorporated data from these trials using the generic inverse variance method in which logarithms of risk ratio (RR) estimates were used along with the standard error of the logarithms of RR estimates. Our review showed a possible effect in terms of a reduction in maternal mortality (RR 0.80; 95% confidence interval (CI) 0.64 to 1.00; 11 studies, n = 167,311; random-effects, Tau² = 0.03, I² = 20%).
However, significant reduction was observed in maternal morbidity (average RR 0.75; 95% CI 0.61 to 0.92; four studies, n = 138,290; random-effects, Tau² = 0.02, I² = 28%); neonatal mortality (average RR 0.75; 95% CI 0.67 to 0.83; 21 studies, n = 302,646; random-effects, Tau² = 0.06, I² = 85%) including both early and late mortality; stillbirths (average RR 0.81; 95% CI 0.73 to 0.91; 15 studies, n = 201,181; random-effects, Tau² = 0.03, I² = 66%); and perinatal mortality (average RR 0.78; 95% CI 0.70 to 0.86; 17 studies, n = 282,327; random-effects Tau² = 0.04, I² = 88%) as a consequence of implementation of community-based interventional care packages.Community-based intervention packages also increased the uptake of tetanus immunisation by 5% (average RR 1.05; 95% CI 1.02 to 1.09; seven studies, n = 71,622; random-effects Tau² = 0.00, I² = 52%); use of clean delivery kits by 82% (average RR 1.82; 95% CI 1.10 to 3.02; four studies, n = 54,254; random-effects, Tau² = 0.23, I² = 90%); rates of institutional deliveries by 20% (average RR 1.20; 95% CI 1.04 to 1.39; 14 studies, n = 147,890; random-effects, Tau² = 0.05, I² = 80%); rates of early breastfeeding by 93% (average RR 1.93; 95% CI 1.55 to 2.39; 11 studies, n = 72,464; random-effects, Tau² = 0.14, I² = 98%), and healthcare seeking for neonatal morbidities by 42% (average RR 1.42; 95% CI 1.14 to 1.77, nine studies, n = 66,935, random-effects, Tau² = 0.09, I² = 92%). The review also showed a possible effect on increasing the uptake of iron/folic acid supplementation during pregnancy (average RR 1.47; 95% CI 0.99 to 2.17; six studies, n = 71,622; random-effects, Tau² = 0.26; I² = 99%).It has no impact on improving referrals for maternal morbidities, healthcare seeking for maternal morbidities, iron/folate supplementation, attendance of skilled birth attendance on delivery, and other neonatal care-related outcomes. We did not find studies that reported the impact of community-based intervention package on improving exclusive breastfeeding rates at six months of age. We assessed our primary outcomes for publication bias and observed slight asymmetry on the funnel plot for maternal mortality. Our review offers encouraging evidence that community-based intervention packages reduce morbidity for women, mortality and morbidity for babies, and improves care-related outcomes particularly in low- and middle-income countries. It has highlighted the value of integrating maternal and newborn care in community settings through a range of interventions, which can be packaged effectively for delivery through a range of community health workers and health promotion groups. While the importance of skilled delivery and facility-based services for maternal and newborn care cannot be denied, there is sufficient evidence to scale up community-based care through packages which can be delivered by a range of community-based workers.

  14. Analysis of the habitat of Henslow's sparrows and Grasshopper sparrows compared to random grassland areas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maier, Kristen; Walton, Rod; Kasper, Peter

    2005-01-01

    Henslow's Sparrows are endangered prairie birds, and Grasshopper Sparrows are considered rare prairie birds. Both of these birds were abundant in Illinois, but their populations have been declining due to loss of the grasslands. This report begins an ongoing study of the birds' habitats so Fermilab can develop a land management plan for the Henslow's and Grasshoppers. The Henslow's were found at ten sites and Grasshoppers at eight sites. Once the birds were located, the vegetation at their sites was studied. Measurements of the maximum plant height, average plant height, and duff height were taken and estimates of the percent of grass, forbs, duff, and bare ground were recorded for each square meter studied. The same measurements were taken at ten random grassland sites on Fermilab property. Several t-tests were performed on the data, and it was found that both Henslow's Sparrows and Grasshopper Sparrows preferred areas with a larger percentage of grass than random areas. Henslow's also preferred areas with less bare ground than random areas, while Grasshoppers preferred areas with more bare ground than random areas. In addition, Grasshopper Sparrows preferred a lower percentage of forbs than was found in random areas and a shorter average plant height than the random locations. Two-sample variance tests suggested significantly less variance for both Henslow's Sparrows and Grasshopper Sparrows for maximum plant height in comparison to the random sites. For both birds, the test suggested a significant difference in the variance of the percentage of bare ground compared to random sites, but only the Grasshopper Sparrow showed significance in the variation in the percentage of forbs.

  15. Empirical likelihood inference in randomized clinical trials.

    PubMed

    Zhang, Biao

    2017-01-01

    In individually randomized controlled trials, in addition to the primary outcome, information is often available on a number of covariates prior to randomization. This information is frequently utilized to undertake adjustment for baseline characteristics in order to increase precision of the estimation of average treatment effects; such adjustment is usually performed via covariate adjustment in outcome regression models. Although the use of covariate adjustment is widely seen as desirable for making treatment effect estimates more precise and the corresponding hypothesis tests more powerful, there are considerable concerns that objective inference in randomized clinical trials can potentially be compromised. In this paper, we study an empirical likelihood approach to covariate adjustment and propose two unbiased estimating functions that automatically decouple evaluation of average treatment effects from regression modeling of covariate-outcome relationships. The resulting empirical likelihood estimator of the average treatment effect is as efficient as the existing efficient adjusted estimators when separate treatment-specific working regression models are correctly specified, and is at least as efficient as the existing efficient adjusted estimators for any given treatment-specific working regression models, whether or not they coincide with the true treatment-specific covariate-outcome relationships. We present a simulation study to compare the finite sample performance of various methods along with some results on analysis of a data set from an HIV clinical trial. The simulation results indicate that the proposed empirical likelihood approach is more efficient and powerful than its competitors when the working covariate-outcome relationships by treatment status are misspecified.
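    The precision gain from baseline covariate adjustment in a randomized trial is easy to demonstrate by simulation. The sketch below (Python) uses ordinary ANCOVA as a simple stand-in for the empirical likelihood estimator described above; it is not the paper's method, but it shows the phenomenon the paper targets: both estimators are essentially unbiased under randomization, while the adjusted one has a smaller sampling variance.

```python
import numpy as np

rng = np.random.default_rng(4)

def one_trial(n=400, effect=1.0):
    """Simulate one randomized trial with a prognostic baseline covariate x."""
    x = rng.normal(size=n)                            # baseline covariate
    z = rng.integers(0, 2, n)                         # randomized treatment indicator
    y = effect * z + 2.0 * x + rng.normal(size=n)     # outcome also driven by x

    unadjusted = y[z == 1].mean() - y[z == 0].mean()  # simple difference in means

    # Covariate-adjusted estimator (ordinary ANCOVA): regress y on (1, z, x).
    X = np.column_stack([np.ones(n), z, x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return unadjusted, beta[1]

est = np.array([one_trial() for _ in range(2000)])
print(f"unadjusted: mean {est[:, 0].mean():.3f}, SD {est[:, 0].std():.3f}")
print(f"adjusted  : mean {est[:, 1].mean():.3f}, SD {est[:, 1].std():.3f}")
# Both estimators centre on the true effect (1.0) under randomization,
# but the covariate-adjusted one has a noticeably smaller sampling SD.
```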

  16. Optimizing a Sensor Network with Data from Hazard Mapping Demonstrated in a Heavy-Vehicle Manufacturing Facility.

    PubMed

    Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A

    2018-05-28

    To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than RMC with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using an optimal-removal were not statistically different than random-removal when averaged over the entire facility. No statistical difference was observed for optimal- and random-removal methods for RMCs that were less variable in time and space than PNCs. Optimized removal performed better than random-removal in preserving high temporal variability and accuracy of hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.

  17. Modified Directly Observed Therapy to Facilitate Highly Active Antiretroviral Therapy Adherence in Beira, Mozambique

    PubMed Central

    Pearson, Cynthia R.; Micek, Mark; Simoni, Jane M.; Matediana, Eduardo; Martin, Diane P.; Gloyd, Stephen

    2016-01-01

    Summary As resource-limited countries expand access to highly active antiretroviral therapy (HAART) treatment, innovative programs are needed to support adherence in the context of significant health system barriers. Modified directly observed therapy (mDOT) is one such strategy, but little is known about the process of designing and implementing mDOT programs for HAART in resource-limited settings. In this descriptive study, we used a mixed-methods approach to describe the process of implementing mDOT for an ongoing randomized control trial (RCT) in Beira, Mozambique. Interviews with clinic staff, mDOT peers, and participants provided information on design elements, problems with implementation, satisfaction, and benefits. Acceptability and feasibility measures were obtained from the RCT. Most (81%, N = 350) eligible persons agreed to participate, and of those randomized to mDOT (n = 174), 95% reported that their time with peers was beneficial. On average, participants kept 93% of the 30 required daily mDOT visits. Key components of the intervention’s success included using peers who were well accepted by clinic staff, adequate training and retention of peers, adapting daily visit requirements to participants’ work schedules and physical conditions, and reimbursing costs of transportation. This study identified aspects of mDOT that are effective and can be adopted by other clinics treating HIV patients. PMID:17133197

  18. The Effect of Teaching Model ‘Learning Cycles 5E’ toward Students’ Achievement in Learning Mathematic at X Years Class SMA Negeri 1 Banuhampu 2013/2014 Academic Year

    NASA Astrophysics Data System (ADS)

    Yeni, N.; Suryabayu, E. P.; Handayani, T.

    2017-02-01

    A preliminary survey showed that the mathematics teacher still dominated the teaching and learning process. Learning was centred on the teacher, while students only worked from instructions provided by the teacher, without creativity or activities that stimulate students to explore their potential. Recognising this problem, the authors sought a solution by applying the ‘Learning Cycles 5E’ teaching model. The purpose of this research was to determine whether the ‘Learning Cycles 5E’ teaching model is better than conventional teaching for mathematics. The study was a quasi-experiment with a randomized control-group-only design. The population comprised all grade X classes. The sample was chosen randomly after testing for normality, homogeneity, and equality of average student achievement. Class X.7 served as the experimental class, taught with the Learning Cycles 5E model, and class X.8 served as the control class, taught conventionally. The results showed that student achievement in the class that used the ‘Learning Cycles 5E’ teaching model was better than in the class that did not.

  19. Stochastically gated local and occupation times of a Brownian particle

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.

    2017-01-01

    We generalize the Feynman-Kac formula to analyze the local and occupation times of a Brownian particle moving in a stochastically gated one-dimensional domain. (i) The gated local time is defined as the amount of time spent by the particle in the neighborhood of a point in space where there is some target that only receives resources from (or detects) the particle when the gate is open; the target does not interfere with the motion of the Brownian particle. (ii) The gated occupation time is defined as the amount of time spent by the particle in the positive half of the real line, given that it can only cross the origin when a gate placed at the origin is open; in the closed state the particle is reflected. In both scenarios, the gate randomly switches between the open and closed states according to a two-state Markov process. We derive a stochastic, backward Fokker-Planck equation (FPE) for the moment-generating function of the two types of gated Brownian functional, given a particular realization of the stochastic gate, and analyze the resulting stochastic FPE using a moments method recently developed for diffusion processes in randomly switching environments. In particular, we obtain dynamical equations for the moment-generating function, averaged with respect to realizations of the stochastic gate.

  20. Estimation and Identification of the Complier Average Causal Effect Parameter in Education RCTs

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2011-01-01

    In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This article uses a causal inference and instrumental variables framework to examine the…

  1. Quasi-analytical treatment of spatially averaged radiation transfer in complex terrain

    NASA Astrophysics Data System (ADS)

    Löwe, H.; Helbig, N.

    2012-04-01

    We provide a new quasi-analytical method to compute the topographic influence on the effective albedo of complex topography as required for meteorological, land-surface or climate models. We investigate radiative transfer in complex terrain via the radiosity equation on isotropic Gaussian random fields. Under controlled approximations we derive expressions for domain averages of direct, diffuse and terrain radiation and the sky view factor. Domain averaged quantities are related to a type of level-crossing probability of the random field which is approximated by longstanding results developed for acoustic scattering at ocean boundaries. This allows us to express all non-local horizon effects in terms of a local terrain parameter, namely the mean squared slope. Emerging integrals are computed numerically and fit formulas are given for practical purposes. As an implication of our approach we provide an expression for the effective albedo of complex terrain in terms of the sun elevation angle, mean squared slope, the area averaged surface albedo, and the direct-to-diffuse ratio of solar radiation. As an application, we compute the effective albedo for the Swiss Alps and discuss possible generalizations of the method.

  2. Comparative Effectiveness of Two Walking Interventions on Participation, Step Counts, and Health.

    PubMed

    Smith-McLallen, Aaron; Heller, Debbie; Vernisi, Kristin; Gulick, Diana; Cruz, Samantha; Snyder, Richard L

    2017-03-01

    To (1) compare the effects of two worksite-based walking interventions on employee participation rates; (2) compare average daily step counts between conditions; and (3) examine the effects of increases in average daily step counts on biometric and psychologic outcomes. We conducted a cluster-randomized trial in which six employer groups were randomly selected and randomly assigned to condition. Four manufacturing worksites and two office-based worksites served as the settings. A total of 474 employees from six employer groups were included. A standard walking program was compared to an enhanced program that included incentives, feedback, competitive challenges, and monthly wellness workshops. Walking was measured by self-reported daily step counts. Survey measures and biometric screenings were administered at baseline and 3, 6, and 9 months after baseline. Analysis used linear mixed models with repeated measures. Over 9 months, participants in the enhanced condition averaged 726 more steps per day compared with those in the standard condition (p < .001). A 1000-step increase in average daily steps was associated with significant weight loss for both men (-3.8 lbs.) and women (-2.1 lbs.), and reductions in body mass index (-0.41 men, -0.31 women). Higher step counts were also associated with improvements in mood, having more energy, and higher ratings of overall health. An enhanced walking program significantly increases participation rates and daily step counts, which were associated with weight loss and reductions in body mass index.

  3. A Randomized Controlled Trial of Employer Matching of Employees' Monetary Contributions to Deposit Contracts to Promote Weight Loss.

    PubMed

    Kullgren, Jeffrey T; Troxel, Andrea B; Loewenstein, George; Norton, Laurie A; Gatto, Dana; Tao, Yuanyuan; Zhu, Jingsan; Schofield, Heather; Shea, Judy A; Asch, David A; Pellathy, Thomas; Driggers, Jay; Volpp, Kevin G

    2016-07-01

    To test whether employer matching of employees' monetary contributions increases employees' (1) participation in deposit contracts to promote weight loss and (2) weight loss. A 36-week randomized trial. Large employer in the northeast United States. One hundred thirty-two obese employees. Over 24 weeks, participants were asked to lose 24 pounds and randomized to monthly weigh-ins or daily weigh-ins with monthly opportunities to deposit $1 to $3 per day that was not matched, matched 1:1, or matched 2:1. Deposits and matched funds were returned to participants for each day they were below their goal weight. Rates of making ≥1 deposit, weight loss at 24 weeks (primary outcome), and 36 weeks. Deposit rates were compared using χ(2) tests. Weight loss was compared using t tests. Among participants eligible to make deposits, 29% made ≥1 deposit and matching did not increase participation. At 24 weeks, control participants gained an average of 1.0 pound, whereas 1:1 match participants lost an average of 5.3 pounds (P = .005). After 36 weeks, control participants gained an average of 2.1 pounds, whereas no match participants lost an average of 5.1 pounds (P = .008). Participation in deposit contracts to promote weight loss was low, and matching deposits did not increase participation. For deposit contracts to impact population health, ongoing participation will need to be higher. © The Author(s) 2016.

  4. Football fever: goal distributions and non-Gaussian statistics

    NASA Astrophysics Data System (ADS)

    Bittner, E.; Nußbaumer, A.; Janke, W.; Weigel, M.

    2009-02-01

    Analyzing football score data with statistical techniques, we investigate how the not purely random, but highly co-operative nature of the game is reflected in averaged properties such as the probability distributions of scored goals for the home and away teams. As it turns out, especially the tails of the distributions are not well described by the Poissonian or binomial model resulting from the assumption of uncorrelated random events. Instead, a good effective description of the data is provided by less basic distributions such as the negative binomial one or the probability densities of extreme value statistics. To understand this behavior from a microscopical point of view, however, no waiting time problem or extremal process need be invoked. Instead, modifying the Bernoulli random process underlying the Poissonian model to include a simple component of self-affirmation seems to describe the data surprisingly well and allows one to understand the observed deviation from Gaussian statistics. The phenomenological distributions used before can be understood as special cases within this framework. We analyzed historical football score data from many leagues in Europe as well as from international tournaments, including data from all past tournaments of the “FIFA World Cup” series, and found the proposed models to be applicable rather universally. In particular, here we analyze the results of the German women’s premier football league and consider the two separate German men’s premier leagues in the East and West during the Cold War era as well as the unified league after 1990 to see how scoring in football and the component of self-affirmation depend on cultural and political circumstances.
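    The "self-affirmation" idea can be illustrated with a toy simulation. The sketch below (Python; the per-minute scoring probability p0 and the boost factor kappa are illustrative assumptions, not the paper's fitted parameters) modifies a plain Bernoulli scoring process so that each goal raises the subsequent scoring probability, which produces the overdispersion that negative-binomial-like score distributions reflect.

```python
import numpy as np

rng = np.random.default_rng(0)

def goals_per_match(p0=0.015, kappa=0.35, minutes=90, n_matches=100_000):
    """Goals scored by one team in `minutes` one-minute Bernoulli trials.
    After each goal the scoring probability is multiplied by (1 + kappa),
    a simple 'self-affirmation' modification of the plain Bernoulli process."""
    goals = np.zeros(n_matches, dtype=int)
    p = np.full(n_matches, p0)
    for _ in range(minutes):
        scored = rng.random(n_matches) < p
        goals += scored
        p = np.where(scored, p * (1 + kappa), p)
    return goals

g = goals_per_match()
print(f"mean goals = {g.mean():.2f}, variance = {g.var():.2f}")
# A plain Bernoulli/Poisson-like process would give variance ~ mean; the
# self-affirmation component inflates the variance (overdispersion), i.e.
# heavier tails of the kind negative binomial fits to real score data capture.
```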

  5. The random fractional matching problem

    NASA Astrophysics Data System (ADS)

    Lucibello, Carlo; Malatesta, Enrico M.; Parisi, Giorgio; Sicuro, Gabriele

    2018-05-01

    We consider two formulations of the random-link fractional matching problem, a relaxed version of the more standard random-link (integer) matching problem. In one formulation, we allow each node to be linked to itself in the optimal matching configuration. In the other, by contrast, such a link is forbidden. Both problems have the same asymptotic average optimal cost as the random-link matching problem on the complete graph. Using a replica approach and previous results of Wästlund (2010 Acta Mathematica 204 91–150), we analytically derive the finite-size corrections to the asymptotic optimal cost. We compare our results with numerical simulations and we discuss the main differences between random-link fractional matching problems and the random-link matching problem.

  6. An easy and inexpensive method for quantitative analysis of endothelial damage by using vital dye staining and Adobe Photoshop software.

    PubMed

    Saad, Hisham A; Terry, Mark A; Shamie, Neda; Chen, Edwin S; Friend, Daniel F; Holiman, Jeffrey D; Stoeger, Christopher

    2008-08-01

    We developed a simple, practical, and inexpensive technique to analyze areas of endothelial cell loss and/or damage over the entire corneal area after vital dye staining by using a readily available, off-the-shelf, consumer software program, Adobe Photoshop. The purpose of this article is to convey a method of quantifying areas of cell loss and/or damage. Descemet-stripping automated endothelial keratoplasty corneal transplant surgery was performed by using 5 precut corneas on a human cadaver eye. Corneas were removed and stained with trypan blue and alizarin red S and subsequently photographed. Quantitative assessment of endothelial damage was performed by using Adobe Photoshop 7.0 software. The average difference for cell area damage for analyses performed by 1 observer twice was 1.41%. For analyses performed by 2 observers, the average difference was 1.71%. Three masked observers were 100% successful in matching the randomized stained corneas to their randomized processed Adobe images. Vital dye staining of corneal endothelial cells can be combined with Adobe Photoshop software to yield a quantitative assessment of areas of acute endothelial cell loss and/or damage. This described technique holds promise for a more consistent and accurate method to evaluate the surgical trauma to the endothelial cell layer in laboratory models. This method of quantitative analysis can probably be generalized to any area of research that involves areas that are differentiated by color or contrast.
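    A rough programmatic stand-in for the manual Photoshop selection step is shown below. It is only a sketch: it assumes an RGB photograph in which trypan-blue-stained (damaged) areas appear distinctly blue, and the colour rule, threshold and file name are illustrative, not calibrated to the authors' workflow; it requires Pillow and NumPy.

```python
import numpy as np
from PIL import Image  # Pillow

def percent_damaged(path, threshold=60):
    """Rough stand-in for the manual selection step: count pixels whose blue
    channel clearly dominates (trypan-blue-like staining) and report them as a
    percentage of the whole image. Threshold and colour rule are illustrative."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    stained = (b - np.maximum(r, g)) > threshold      # crude "blue-ish" mask
    return 100.0 * stained.sum() / stained.size

# Hypothetical usage (file name is a placeholder):
# print(f"damaged area: {percent_damaged('cornea_stained.png'):.2f}%")
```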

  7. Euclidean commute time distance embedding and its application to spectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Albano, James A.; Messinger, David W.

    2012-06-01

    Spectral image analysis problems often begin by performing a preprocessing step composed of applying a transformation that generates an alternative representation of the spectral data. In this paper, a transformation based on a Markov-chain model of a random walk on a graph is introduced. More precisely, we quantify the random walk using a quantity known as the average commute time distance and find a nonlinear transformation that embeds the nodes of a graph in a Euclidean space where the separation between them is equal to the square root of this quantity. This has been referred to as the Commute Time Distance (CTD) transformation and it has the important characteristic of increasing when the number of paths between two nodes decreases and/or the lengths of those paths increase. Remarkably, a closed form solution exists for computing the average commute time distance that avoids running an iterative process and is found by simply performing an eigendecomposition on the graph Laplacian matrix. This paper contains a discussion of the particular graph constructed on the spectral data from which the commute time distance is calculated, an introduction of some important properties of the graph Laplacian matrix, and a subspace projection that approximately preserves the maximal variance of the square root commute time distance. Finally, RX anomaly detection and Topological Anomaly Detection (TAD) algorithms will be applied to the CTD subspace followed by a discussion of their results.
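    The closed-form computation mentioned above can be written compactly: the commute time between nodes i and j equals Vol(G)·(L⁺ᵢᵢ + L⁺ⱼⱼ − 2L⁺ᵢⱼ), where L⁺ is the pseudoinverse of the graph Laplacian, and an embedding whose squared Euclidean distances reproduce the commute time follows from its eigendecomposition. The sketch below (Python, with a toy four-node path graph rather than spectral image data) illustrates this.

```python
import numpy as np

def commute_time_embedding(W):
    """Embed graph nodes so that squared Euclidean distances equal the average
    commute time distance CTD(i, j) = Vol(G) * (Lp_ii + Lp_jj - 2 * Lp_ij),
    with Lp the pseudoinverse of the graph Laplacian.
    W: symmetric nonnegative adjacency/affinity matrix (n x n)."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                        # graph Laplacian
    vol = d.sum()                             # graph volume
    Lp = np.linalg.pinv(L)                    # closed form, no iterative process
    evals, evecs = np.linalg.eigh(Lp)
    return evecs * np.sqrt(np.clip(evals, 0.0, None)) * np.sqrt(vol)  # rows = nodes

# Toy example: a 4-node path graph instead of a spectral-image graph.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = commute_time_embedding(W)
ctd_01 = np.sum((X[0] - X[1]) ** 2)
# For a tree, commute time = 2 * (#edges) * effective resistance, so this prints 6.
print(f"commute time distance between adjacent nodes 0 and 1: {ctd_01:.2f}")
```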

  8. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models of different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  9. Abnormal neural hierarchy in processing of verbal information in patients with schizophrenia.

    PubMed

    Lerner, Yulia; Bleich-Cohen, Maya; Solnik-Knirsh, Shimrit; Yogev-Seligmann, Galit; Eisenstein, Tamir; Madah, Waheed; Shamir, Alon; Hendler, Talma; Kremer, Ilana

    2018-01-01

    Previous research indicates abnormal comprehension of verbal information in patients with schizophrenia. Yet the neural mechanism underlying the breakdown of verbal information processing in schizophrenia is poorly understood. Imaging studies in healthy populations have shown a network of brain areas involved in hierarchical processing of verbal information over time. Here, we identified critical aspects of this hierarchy, examining patients with schizophrenia. Using functional magnetic resonance imaging, we examined various levels of information comprehension elicited by naturally presented verbal stimuli, from a set of randomly shuffled words to an intact story. Specifically, patients with first episode schizophrenia (N = 15), their non-manifesting siblings (N = 14) and healthy controls (N = 15) listened to a narrated story and randomly scrambled versions of it. To quantify the degree of dissimilarity between the groups, we adopted an inter-subject correlation (inter-SC) approach, which estimates differences in synchronization of neural responses within and between groups. The temporal topography found in the healthy and sibling groups was consistent with our previous findings - high synchronization in responses from early sensory toward high order perceptual and cognitive areas. In patients with schizophrenia, stimuli with short and intermediate temporal scales evoked a typical pattern of reliable responses, whereas the story condition (long temporal scale) revealed robust and widespread disruption of the inter-SCs. In addition, the more similar the neural activity of patients with schizophrenia was to the average response in the healthy group, the less severe the positive symptoms of the patients. Our findings suggest that system-level neural indication of abnormal verbal information processing in schizophrenia reflects disease manifestations.

  10. Evaluation of feedback interventions for improving the quality assurance of cancer screening in Japan: study design and report of the baseline survey.

    PubMed

    Machii, Ryoko; Saika, Kumiko; Higashi, Takahiro; Aoki, Ayako; Hamashima, Chisato; Saito, Hiroshi

    2012-02-01

    The importance of quality assurance in cancer screening has recently gained increasing attention in Japan. To evaluate and improve quality, checklists and process indicators have been developed. To explore effective methods of enhancing quality in cancer screening, we started a randomized control study of the methods of evaluation and feedback for cancer control from 2009 to 2014. We randomly assigned 1270 municipal governments, equivalent to 71% of all Japanese municipal governments that performed screening programs, into three groups. The high-intensity intervention groups (n = 425) were individually evaluated using both checklist performance and process indicator values, while the low-intensity intervention groups (n= 421) were individually evaluated on the basis of only checklist performance. The control group (n = 424) received only a basic report that included the national average of checklist performance scores. We repeated the survey for each municipality's quality assurance activity performance using checklists and process indicators. In this paper, we report our study design and the result of the baseline survey. The checklist adherence rates were especially low in the checklist elements related to invitation of individuals, detailed monitoring of process indicators such as cancer detection rates according to screening histories and appropriate selection of screening facilities. Screening rate and percentage of examinees who underwent detailed examination tended to be lower for large cities when compared with smaller cities for all cancer sites. The performance of the Japanese cancer screening program in 2009 was identified for the first time.

  11. Processed dairy beverages pH evaluation: consequences of temperature variation.

    PubMed

    Ferreira, Fabiana Vargas; Pozzobon, Roselaine Terezinha

    2009-01-01

    This study assessed the pH of processed dairy beverages as well as possible consequences of different ingestion temperatures. Fifty adults who accompanied children attended to at the Dentistry School were randomly selected and answered a questionnaire on beverages. The beverages were divided into 4 groups: yogurt (GI), fermented milk (GII), chocolate-based products (GIII) and fermented dairy beverages (GIV). Participants were asked which type, flavor and temperature they preferred. The most popular beverages were selected, and these made up the sample. A Quimis 400A pH meter was used to measure pH. The average pH of each beverage was calculated and submitted to statistical analysis (analysis of variance and Tukey test at a 5% significance level). For the beverages in groups I, II and III, the type × temperature interaction was significant, showing that average pH was influenced by temperature variation. At iced temperatures, they presented lower pH values, which were considered statistically significant when compared to the values found for the same beverages at room temperature. All dairy beverages, with the exception of the chocolate-based type, presented pH below the critical level for enamel and thus have corrosive potential; as to ingestion temperature, iced temperatures reduced pH values in vitro.

  12. Rigorous theory of molecular orientational nonlinear optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwak, Chong Hoon, E-mail: chkwak@ynu.ac.kr; Kim, Gun Yeup

    2015-01-15

    Classical statistical mechanics of the molecular optics theory proposed by Buckingham [A. D. Buckingham and J. A. Pople, Proc. Phys. Soc. A 68, 905 (1955)] has been extended to describe the field induced molecular orientational polarization effects on nonlinear optics. In this paper, we present the generalized molecular orientational nonlinear optical processes (MONLO) through the calculation of the classical orientational averaging using the Boltzmann type time-averaged orientational interaction energy in the randomly oriented molecular system under the influence of applied electric fields. The focal points of the calculation are (1) the derivation of rigorous tensorial components of the effective molecular hyperpolarizabilities, (2) the molecular orientational polarizations and the electronic polarizations including the well-known third-order dc polarization, dc electric field induced Kerr effect (dc Kerr effect), optical Kerr effect (OKE), dc electric field induced second harmonic generation (EFISH), degenerate four wave mixing (DFWM) and third harmonic generation (THG). We also present some of the new predictive MONLO processes. For second-order MONLO, second-order optical rectification (SOR), Pockels effect and difference frequency generation (DFG) are described in terms of the anisotropic coefficients of first hyperpolarizability. And, for third-order MONLO, third-order optical rectification (TOR), dc electric field induced difference frequency generation (EFIDFG) and pump-probe transmission are presented.

  13. Stable estimate of primary OC/EC ratios in the EC tracer method

    NASA Astrophysics Data System (ADS)

    Chu, Shao-Hang

    In fine particulate matter studies, the primary OC/EC ratio plays an important role in estimating the secondary organic aerosol contribution to PM2.5 concentrations using the EC tracer method. In this study, numerical experiments are carried out to test and compare various statistical techniques in the estimation of primary OC/EC ratios. The influence of random measurement errors in both primary OC and EC measurements on the estimation of the expected primary OC/EC ratios is examined. It is found that random measurement errors in EC generally create an underestimation of the slope and an overestimation of the intercept of the ordinary least-squares regression line. The Deming regression analysis performs much better than the ordinary regression, but it tends to overcorrect the problem by slightly overestimating the slope and underestimating the intercept. Averaging the ratios directly is usually undesirable because the average is strongly influenced by unrealistically high values of OC/EC ratios resulting from random measurement errors at low EC concentrations. The errors generally result in a skewed distribution of the OC/EC ratios even if the parent distributions of OC and EC are close to normal. When measured OC contains a significant amount of non-combustion OC, Deming regression is a much better tool and should be used to estimate both the primary OC/EC ratio and the non-combustion OC. However, if the non-combustion OC is negligibly small, the best and most robust estimator of the OC/EC ratio turns out to be the simple ratio of the OC and EC averages. It not only reduces random errors by averaging individual variables separately but also acts as a weighted average of ratios to minimize the influence of unrealistically high OC/EC ratios created by measurement errors at low EC concentrations. The median of OC/EC ratios ranks a close second, and the geometric mean of ratios ranks third. This is because their estimations are insensitive to questionable extreme values. A real world example is given using the ambient data collected from an Atlanta STN site during the winter of 2001-2002.
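    The difference between the estimators discussed above is easy to reproduce on synthetic data. The sketch below (Python; the true OC/EC ratio of 2.0, the noise levels and the sample size are illustrative assumptions) compares the ratio of the OC and EC averages with the mean, median and geometric mean of the individual ratios when both species carry random measurement error; Deming regression is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "primary only" data: true OC/EC ratio of 2.0 and no non-combustion OC.
n = 500
ec_true = rng.lognormal(mean=0.0, sigma=0.7, size=n)
oc_true = 2.0 * ec_true
ec = ec_true + rng.normal(0.0, 0.15, n)               # noisy EC measurements
oc = oc_true + rng.normal(0.0, 0.15, n)               # noisy OC measurements
keep = ec > 0.05                                      # drop unusable (near-zero/negative) EC
ec, oc = ec[keep], oc[keep]

ratios = oc / ec
print(f"ratio of averages : {oc.mean() / ec.mean():.3f}")   # robust to noisy low-EC points
print(f"median of ratios  : {np.median(ratios):.3f}")
print(f"geometric mean    : {np.exp(np.mean(np.log(ratios[ratios > 0]))):.3f}")
print(f"mean of ratios    : {ratios.mean():.3f}")           # inflated by low-EC outliers
```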

  14. Active management of labor

    PubMed Central

    Rogers, Rebecca G; Gardner, Michael O; Tool, Kevin J; Ainsley, Jeanne; Gilson, George

    2000-01-01

    Objective To compare the costs of a protocol of active management of labor with those of traditional labor management. Design Cost analysis of a randomized controlled trial. Methods From August 1992 to April 1996, we randomly allocated 405 women whose infants were delivered at the University of New Mexico Health Sciences Center, Albuquerque, to an active management of labor protocol that had substantially reduced the duration of labor or a control protocol. We calculated the average cost for each delivery, using both actual costs and charges. Results The average cost for women assigned to the active management protocol was $2,480.79 compared with an average cost of $2,528.61 for women in the control group (P = 0.55). For women whose infant was delivered by cesarean section, the average cost was $4,771.54 for active management of labor and $4,468.89 for the control protocol (P = 0.16). Spontaneous vaginal deliveries cost an average of $27.00 more for actively managed patients compared with the cost for the control protocol. Conclusions The reduced duration of labor by active management did not translate into significant cost savings. Overall, an average cost saving of only $47.91, or 2%, was achieved for labors that were actively managed. This reduction in cost was due to a decrease in the rate of cesarean sections in women whose labor was actively managed and not to a decreased duration of labor. PMID:10778374

  15. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  16. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
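    The logic of the process sensitivity index can be illustrated with a toy calculation. The sketch below (Python) treats each process as a grouped factor that bundles a random model choice with that model's parameters, and estimates the variance-based first-order index of the "recharge" group by brute-force Monte Carlo; the two-model recharge and geology descriptions and the simple output function are toy stand-ins, not the authors' groundwater reactive transport model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy system: output depends on a "recharge" process and a "geology" process.
# Each process bundles a random model choice with that model's own parameters,
# mimicking combined model + parametric uncertainty.
def sample_recharge(n):
    model = rng.integers(0, 2, n)                     # model uncertainty (50/50 prior)
    a = rng.uniform(0.2, 0.4, n)                      # parameter of model 0
    b = rng.uniform(0.5, 0.9, n)                      # parameter of model 1
    return np.where(model == 0, a * 100.0, b * 60.0)  # two precipitation-to-recharge rules

def sample_geology(n):
    model = rng.integers(0, 2, n)
    k0 = rng.lognormal(2.0, 0.3, n)                   # one conductivity parameterization
    k1 = rng.lognormal(2.3, 0.6, n)                   # an alternative parameterization
    return np.where(model == 0, k0, k1)

def output(recharge, K):
    return recharge / K                               # toy model output ("head drop")

# Grouped first-order index for the recharge process:
# Var over recharge realizations of E[output | recharge], divided by total variance.
n_outer, n_inner = 2000, 2000
r_outer = sample_recharge(n_outer)
cond_mean = np.array([output(r, sample_geology(n_inner)).mean() for r in r_outer])
y_total = output(sample_recharge(200_000), sample_geology(200_000))

ps_recharge = cond_mean.var() / y_total.var()         # estimate carries Monte Carlo noise
print(f"process sensitivity index (recharge) ~ {ps_recharge:.2f}")
```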

  17. Surface Modification and Surface - Subsurface Exchange Processes on Europa

    NASA Astrophysics Data System (ADS)

    Phillips, Cynthia B.; Molaro, Jamie; Hand, Kevin P.

    2017-10-01

    The surface of Jupiter’s moon Europa is modified by exogenic processes such as sputtering, gardening, radiolysis, sulfur ion implantation, and thermal processing, as well as endogenic processes including tidal shaking, mass wasting, and the effects of subsurface tectonic and perhaps cryovolcanic activity. New materials are created or deposited on the surface (radiolysis, micrometeorite impacts, sulfur ion implantation, cryovolcanic plume deposits), modified in place (thermal segregation, sintering), transported either vertically or horizontally (sputtering, gardening, mass wasting, tectonic and cryovolcanic activity), or lost from Europa completely (sputtering, plumes, larger impacts). Some of these processes vary spatially, as visible in Europa’s leading-trailing hemisphere brightness asymmetry.Endogenic geologic processes also vary spatially, depending on terrain type. The surface can be classified into general landform categories that include tectonic features (ridges, bands, cracks); disrupted “chaos-type” terrain (chaos blocks, matrix, domes, pits, spots); and impact craters (simple, complex, multi-ring). The spatial distribution of these terrain types is relatively random, with some differences in apex-antiapex cratering rates and latitudinal variation in chaos vs. tectonic features.In this work, we extrapolate surface processes and rates from the top meter of the surface in conjunction with global estimates of transport and resurfacing rates. We combine near-surface modification with an estimate of surface-subsurface (and vice versa) transport rates for various geologic terrains based on an average of proposed formation mechanisms, and a spatial distribution of each landform type over Europa’s surface area.Understanding the rates and mass balance for each of these processes, as well as their spatial and temporal variability, allows us to estimate surface - subsurface exchange rates over the average surface age (~50myr) of Europa. Quantifying the timescale and volume of transported material will yield insight on whether such a process may provide fuel to sustain a biosphere in Europa’s subsurface ocean, which is relevant to searches for life by a future mission such as a potential Europa Lander.

  18. Surface Modification and Surface - Subsurface Exchange Processes on Europa

    NASA Astrophysics Data System (ADS)

    Phillips, C. B.; Molaro, J.; Hand, K. P.

    2017-12-01

    The surface of Jupiter's moon Europa is modified by exogenic processes such as sputtering, gardening, radiolysis, sulfur ion implantation, and thermal processing, as well as endogenic processes including tidal shaking, mass wasting, and the effects of subsurface tectonic and perhaps cryovolcanic activity. New materials are created or deposited on the surface (radiolysis, micrometeorite impacts, sulfur ion implantation, cryovolcanic plume deposits), modified in place (thermal segregation, sintering), transported either vertically or horizontally (sputtering, gardening, mass wasting, tectonic and cryovolcanic activity), or lost from Europa completely (sputtering, plumes, larger impacts). Some of these processes vary spatially, as visible in Europa's leading-trailing hemisphere brightness asymmetry. Endogenic geologic processes also vary spatially, depending on terrain type. The surface can be classified into general landform categories that include tectonic features (ridges, bands, cracks); disrupted "chaos-type" terrain (chaos blocks, matrix, domes, pits, spots); and impact craters (simple, complex, multi-ring). The spatial distribution of these terrain types is relatively random, with some differences in apex-antiapex cratering rates and latitudinal variation in chaos vs. tectonic features. In this work, we extrapolate surface processes and rates from the top meter of the surface in conjunction with global estimates of transport and resurfacing rates. We combine near-surface modification with an estimate of surface-subsurface (and vice versa) transport rates for various geologic terrains based on an average of proposed formation mechanisms, and a spatial distribution of each landform type over Europa's surface area. Understanding the rates and mass balance for each of these processes, as well as their spatial and temporal variability, allows us to estimate surface - subsurface exchange rates over the average surface age ( 50myr) of Europa. Quantifying the timescale and volume of transported material will yield insight on whether such a process may provide fuel to sustain a biosphere in Europa's subsurface ocean, which is relevant to searches for life by a future mission such as a potential Europa Lander.

  19. Decoherence-induced conductivity in the one-dimensional Anderson model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stegmann, Thomas; Wolf, Dietrich E.; Ujsághy, Orsolya

    We study the effect of decoherence on the electron transport in the one-dimensional Anderson model by means of a statistical model [1, 2, 3, 4, 5]. In this model decoherence bonds are randomly distributed within the system, at which the electron phase is randomized completely. Afterwards, the transport quantity of interest (e.g. resistance or conductance) is ensemble averaged over the decoherence configurations. Averaging the resistance of the sample, the calculation can be performed analytically. In the thermodynamic limit, we find a decoherence-driven transition from the quantum-coherent localized regime to the Ohmic regime at a critical decoherence density, which is determined by the second-order generalized Lyapunov exponent (GLE) [4].

  20. An Empirical Comparison of Randomized Control Trials and Regression Discontinuity Estimations

    ERIC Educational Resources Information Center

    Barrera-Osorio, Felipe; Filmer, Deon; McIntyre, Joe

    2014-01-01

    Randomized controlled trials (RCTs) and regression discontinuity (RD) studies both provide estimates of causal effects. A major difference between the two is that RD only estimates local average treatment effects (LATE) near the cutoff point of the forcing variable. This has been cited as a drawback to RD designs (Cook & Wong, 2008).…

  1. Reconsidering Findings of "No Effects" in Randomized Control Trials: Modeling Differences in Treatment Impacts

    ERIC Educational Resources Information Center

    Chaney, Bradford

    2016-01-01

    The primary technique that many researchers use to analyze data from randomized control trials (RCTs)--detecting the average treatment effect (ATE)--imposes assumptions upon the data that often are not correct. Both theory and past research suggest that treatments may have significant impacts on subgroups even when showing no overall effect.…

  2. Many Children Left Behind? Textbooks and Test Scores in Kenya. NBER Working Paper No. 13300

    ERIC Educational Resources Information Center

    Glewwe, Paul; Kremer, Michael; Moulin, Sylvie

    2007-01-01

    A randomized evaluation suggests that a program which provided official textbooks to randomly selected rural Kenyan primary schools did not increase test scores for the average student. In contrast, the previous literature suggests that textbook provision has a large impact on test scores. Disaggregating the results by students' initial academic…

  3. A coherent light scanner for optical processing of large format transparencies

    NASA Technical Reports Server (NTRS)

    Callen, W. R.; Weaver, J. E.; Shackelford, R. G.; Walsh, J. R.

    1975-01-01

    A laser scanner is discussed in which the scanning beam is random-access addressable and perpendicular to the image input plane and the irradiance of the scanned beam is controlled so that a constant average irradiance is maintained after passage through the image plane. The scanner's optical system and design are described, and its performance is evaluated. It is noted that with this scanner, data in the form of large-format transparencies can be processed without the expense, space, maintenance, and precautions attendant to the operation of a high-power laser with large-aperture collimating optics. It is shown that the scanned format as well as the diameter of the scanning beam may be increased by simple design modifications and that higher scan rates can be achieved at the expense of resolution by employing acousto-optic deflectors with different relay optics.

  4. Robust Learning Control Design for Quantum Unitary Transformations.

    PubMed

    Wu, Chengzhi; Qi, Bo; Chen, Chunlin; Dong, Daoyi

    2017-12-01

    Robust control design for quantum unitary transformations has been recognized as a fundamental and challenging task in the development of quantum information processing due to unavoidable decoherence or operational errors in the experimental implementation of quantum operations. In this paper, we extend the systematic methodology of sampling-based learning control (SLC) approach with a gradient flow algorithm for the design of robust quantum unitary transformations. The SLC approach first uses a "training" process to find an optimal control strategy robust against certain ranges of uncertainties. Then a number of randomly selected samples are tested and the performance is evaluated according to their average fidelity. The approach is applied to three typical examples of robust quantum transformation problems including robust quantum transformations in a three-level quantum system, in a superconducting quantum circuit, and in a spin chain system. Numerical results demonstrate the effectiveness of the SLC approach and show its potential applications in various implementation of quantum unitary transformations.

  5. Mean first passage time of active Brownian particle in one dimension

    NASA Astrophysics Data System (ADS)

    Scacchi, A.; Sharma, A.

    2018-02-01

    We investigate the mean first passage time of an active Brownian particle in one dimension using numerical simulations. The activity in one dimension is modelled as a two-state process: the particle moves with a constant propulsion strength, but its orientation switches from one state to the other as in a random telegraphic process. We study the influence of a finite resetting rate r on the mean first passage time to a fixed target of a single free active Brownian particle and map this result using an effective diffusion process. As in the case of a passive Brownian particle, we can find an optimal resetting rate r* for an active Brownian particle for which the target is found with the minimum average time. In the presence of an external potential, we find good agreement between the theory and numerical simulations using an effective potential approach.
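    A minimal simulation of the setup described above is sketched below (Python). It models the one-dimensional active particle as a telegraphic (two-state) propulsion plus translational diffusion, adds stochastic resetting to the origin at rate r, and scans a few values of r to look for an optimum; all parameter values are illustrative, and trajectories that never reach the target within the time window are simply ignored, so the estimates are only indicative.

```python
import numpy as np

rng = np.random.default_rng(3)

def mfpt(r, v=1.0, D=0.5, alpha=1.0, target=3.0, dt=1e-2, n_traj=2000, t_max=300.0):
    """Monte Carlo mean first passage time to `target` for a 1D active particle:
    propulsion +/- v flipping at rate alpha (telegraphic process), translational
    diffusion D, and resetting of position and orientation to the origin at rate r."""
    x = np.zeros(n_traj)
    s = rng.choice([-1.0, 1.0], n_traj)
    hit_time = np.full(n_traj, np.nan)
    alive = np.ones(n_traj, dtype=bool)
    t = 0.0
    while t < t_max and alive.any():
        reset = (rng.random(n_traj) < r * dt) & alive
        x[reset] = 0.0
        s[reset] = rng.choice([-1.0, 1.0], int(reset.sum()))
        flip = (rng.random(n_traj) < alpha * dt) & alive
        s[flip] *= -1.0
        x[alive] += s[alive] * v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(int(alive.sum()))
        t += dt
        hit = alive & (x >= target)
        hit_time[hit] = t
        alive &= ~hit
    return np.nanmean(hit_time)   # trajectories that never arrive are ignored (indicative only)

for r in (0.0, 0.05, 0.2, 0.5, 1.0):
    print(f"resetting rate r = {r:.2f}: MFPT ~ {mfpt(r):.1f}")
```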

  6. Strategic wellness management in Finland: The first national survey of the management of employee well-being.

    PubMed

    Aura, Ossi; Ahonen, Guy; Ilmarinen, Juhani

    2010-12-01

    To examine the scope of strategic wellness management (SWM) in Finland. To measure the management of wellness, a strategic wellness management index (SWMI) was developed. On the basis of the developed SWM model, an Internet questionnaire was administered to randomly selected employers representing seven business areas and three size categories. Corporate activities and SWMI for each employer and for business area and size groups were calculated. Results highlighted relatively good activity in strategic wellness (SW) processes and a fairly low level of SWM procedures. The average values (± SD) of SWMI were 53.6 ± 12.3 for large, 42.8 ± 11.7 for medium-size, and 32.8 ± 12.1 for small companies. The SWMI can be a strong new concept for measuring SW processes and thus for improving both the well-being of employees and the productivity of the enterprise.

  7. Thermally Cross-Linked Anion Exchange Membranes from Solvent Processable Isoprene Containing Ionomers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsai, Tsung-Han; Ertem, S. Piril; Maes, Ashley M.

    2015-01-28

    Random copolymers of isoprene and 4-vinylbenzyl chloride (VBCl) with varying compositions were synthesized via nitroxide-mediated polymerization. Subsequent quaternization afforded solvent processable and cross-linkable ionomers with a wide range of ion exchange capacities (IECs). Solution cast membranes were thermally cross-linked to form anion exchange membranes. Cross-linking was achieved by taking advantage of the unsaturations on the polyisoprene backbone, without added cross-linkers. A strong correlation was found between water uptake and ion conductivity of the membranes: conductivities of the membranes with IECs beyond a critical value were found to be constant, related to their high water absorption. Environmentally controlled small-angle X-ray scattering experiments revealed a correlation between the average distance between ionic clusters and the ion conductivity, indicating that a well-connected network of ion clusters is necessary for efficient ion conduction and high ion conductivity.

  8. Rockfall travel distances theoretical distributions

    NASA Astrophysics Data System (ADS)

    Jaboyedoff, Michel; Derron, Marc-Henri; Pedrazzini, Andrea

    2017-04-01

    The probability of propagation of rockfalls is a key part of hazard assessment, because it permits the extrapolation of rockfall propagation probabilities either from partial data or purely theoretically. The propagation can be assumed to be frictional, which permits the average behaviour to be described by a line of kinetic energy corresponding to the loss of energy along the path. But the loss of energy can also be assumed to follow a multiplicative process or a purely random process. The distributions of the rockfall block stop points can be deduced from such simple models; they lead to Gaussian, inverse-Gaussian, log-normal or negative exponential distributions. The theoretical background is presented, and comparisons of some of these models with existing data indicate that these assumptions are relevant. The results are based either on theoretical considerations or on fitting to data. They are potentially very useful for rockfall hazard zoning and risk assessment. This approach will need further investigation.
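    One of the mechanisms mentioned above, multiplicative energy loss, can be illustrated directly: if the distance reached scales with a product of many independent positive loss factors, its logarithm is a sum of independent terms, so stop distances come out approximately log-normal. The sketch below (Python with SciPy; the retention factors, number of impacts and length scale are illustrative assumptions, not a calibrated rockfall model) simulates this and fits a log-normal to the resulting distances.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# If the distance reached scales with a product of many independent positive
# loss factors (a multiplicative process), log-distance is a sum of independent
# terms, so stop distances should be approximately log-normal.
n_blocks, n_impacts = 50_000, 25
factors = rng.uniform(0.6, 1.0, size=(n_blocks, n_impacts))  # per-impact retention (illustrative)
distance = 500.0 * factors.prod(axis=1)                       # toy stop distances, in metres

log_d = np.log(distance)
print(f"skewness of log-distances: {stats.skew(log_d):.3f} (near 0 => roughly log-normal)")
shape, loc, scale = stats.lognorm.fit(distance, floc=0)
print(f"fitted log-normal: sigma = {shape:.3f}, median = {scale:.1f} m")
```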

  9. Bayesian approach to non-Gaussian field statistics for diffusive broadband terahertz pulses.

    PubMed

    Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M

    2005-11-01

    We develop a closed-form expression for the probability distribution function for the field components of a diffusive broadband wave propagating through a random medium. We consider each spectral component to provide an individual observation of a random variable, the configurationally averaged spectral intensity. Since the intensity determines the variance of the field distribution at each frequency, this random variable serves as the Bayesian prior that determines the form of the non-Gaussian field statistics. This model agrees well with experimental results.

  10. High-speed, random-access fluorescence microscopy: I. High-resolution optical recording with voltage-sensitive dyes and ion indicators.

    PubMed

    Bullen, A; Patel, S S; Saggau, P

    1997-07-01

    The design and implementation of a high-speed, random-access, laser-scanning fluorescence microscope configured to record fast physiological signals from small neuronal structures with high spatiotemporal resolution is presented. The laser-scanning capability of this nonimaging microscope is provided by two orthogonal acousto-optic deflectors under computer control. Each scanning point can be randomly accessed and has a positioning time of 3-5 microseconds. Sampling time is also computer-controlled and can be varied to maximize the signal-to-noise ratio. Acquisition rates up to 200k samples/s at 16-bit digitizing resolution are possible. The spatial resolution of this instrument is determined by the minimal spot size at the level of the preparation (i.e., 2-7 microns). Scanning points are selected interactively from a reference image collected with differential interference contrast optics and a video camera. Frame rates up to 5 kHz are easily attainable. Intrinsic variations in laser light intensity and scanning spot brightness are overcome by an on-line signal-processing scheme. Representative records obtained with this instrument by using voltage-sensitive dyes and calcium indicators demonstrate the ability to make fast, high-fidelity measurements of membrane potential and intracellular calcium at high spatial resolution (2 microns) without any temporal averaging.

  11. A Robust Random Forest-Based Approach for Heart Rate Monitoring Using Photoplethysmography Signal Contaminated by Intense Motion Artifacts.

    PubMed

    Ye, Yalan; He, Wenwen; Cheng, Yunfei; Huang, Wenxia; Zhang, Zhilin

    2017-02-16

    The estimation of heart rate (HR) based on wearable devices is of interest in fitness. Photoplethysmography (PPG) is a promising approach to estimate HR due to low cost; however, it is easily corrupted by motion artifacts (MA). In this work, a robust approach based on random forest is proposed for accurately estimating HR from the photoplethysmography signal contaminated by intense motion artifacts, consisting of two stages. Stage 1 proposes a hybrid method to effectively remove MA with a low computation complexity, where two MA removal algorithms are combined by an accurate binary decision algorithm whose aim is to decide whether or not to adopt the second MA removal algorithm. Stage 2 proposes a random forest-based spectral peak-tracking algorithm, whose aim is to locate the spectral peak corresponding to HR, formulating the problem of spectral peak tracking into a pattern classification problem. Experiments on the PPG datasets including 22 subjects used in the 2015 IEEE Signal Processing Cup showed that the proposed approach achieved the average absolute error of 1.65 beats per minute (BPM) on the 22 PPG datasets. Compared to state-of-the-art approaches, the proposed approach has better accuracy and robustness to intense motion artifacts, indicating its potential use in wearable sensors for health monitoring and fitness tracking.
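    Stage 2 of the approach can be caricatured with a much simpler tracker. The sketch below (Python, using SciPy's welch and find_peaks) computes a periodogram per PPG window, collects candidate spectral peaks in the plausible heart-rate band, and keeps the strongest peak near the previous estimate; the published method replaces this nearest-peak rule with a trained random forest classifier, so this is only a structural illustration on synthetic data.

```python
import numpy as np
from scipy.signal import welch, find_peaks

def track_hr(ppg_windows, fs=125.0, prev_bpm=75.0, max_jump=12.0):
    """Simplified spectral peak tracking for HR from successive PPG windows.
    The published method classifies candidate peaks with a random forest; here
    the strongest peak near the previous estimate is kept instead, which keeps
    the spectrum -> candidate peaks -> tracked HR structure visible."""
    hr = []
    for win in ppg_windows:
        f, pxx = welch(win, fs=fs, nperseg=len(win))
        band = (f >= 0.7) & (f <= 3.5)                      # plausible HR band, 42-210 BPM
        f_band, p_band = f[band], pxx[band]
        peaks, _ = find_peaks(p_band)
        if len(peaks):
            cand_bpm, cand_pow = f_band[peaks] * 60.0, p_band[peaks]
            near = np.abs(cand_bpm - prev_bpm) <= max_jump  # limit the per-window HR jump
            if near.any():
                prev_bpm = cand_bpm[near][np.argmax(cand_pow[near])]
        hr.append(prev_bpm)
    return np.array(hr)

# Synthetic demo: five 8-second windows of a noisy 1.3 Hz (78 BPM) pulse-like signal.
rng = np.random.default_rng(0)
fs = 125.0
t = np.arange(0, 8, 1 / fs)
windows = [np.sin(2 * np.pi * 1.3 * t) + 0.5 * rng.standard_normal(t.size) for _ in range(5)]
print(track_hr(windows, fs=fs))
```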

  12. Prediction of truly random future events using analysis of prestimulus electroencephalographic data

    NASA Astrophysics Data System (ADS)

    Baumgart, Stephen L.; Franklin, Michael S.; Jimbo, Hiroumi K.; Su, Sharon J.; Schooler, Jonathan

    2017-05-01

    Our hypothesis is that pre-stimulus physiological data can be used to predict truly random events tied to perceptual stimuli (e.g., lights and sounds). Our experiment presents light and sound stimuli to a passive human subject while recording electrocortical potentials using a 32-channel electroencephalography (EEG) system. For every trial a quantum random number generator (qRNG) chooses among three possible selections with equal probability: a light stimulus, a sound stimulus, and no stimulus. Time epochs preceding and following each stimulus are defined, and mean potentials are computed across all trials for the three possible stimulus types. Data from three regions of the brain are examined. In all three regions the mean potential for light stimuli was generally enhanced relative to baseline during the period starting approximately 2 seconds before the stimulus. For sound stimuli, the mean potential decreased relative to baseline during the period starting approximately 2 seconds before the stimulus. These changes from baseline may indicate the presence of evoked potentials arising from the stimulus. A P200 peak was observed in data recorded from frontal electrodes. The P200 is a well-known potential arising from the brain's processing of visual stimuli, and its presence represents a replication of a known neurological phenomenon.

  13. High-speed, random-access fluorescence microscopy: I. High-resolution optical recording with voltage-sensitive dyes and ion indicators.

    PubMed Central

    Bullen, A; Patel, S S; Saggau, P

    1997-01-01

    The design and implementation of a high-speed, random-access, laser-scanning fluorescence microscope configured to record fast physiological signals from small neuronal structures with high spatiotemporal resolution is presented. The laser-scanning capability of this nonimaging microscope is provided by two orthogonal acousto-optic deflectors under computer control. Each scanning point can be randomly accessed and has a positioning time of 3-5 microseconds. Sampling time is also computer-controlled and can be varied to maximize the signal-to-noise ratio. Acquisition rates up to 200k samples/s at 16-bit digitizing resolution are possible. The spatial resolution of this instrument is determined by the minimal spot size at the level of the preparation (i.e., 2-7 microns). Scanning points are selected interactively from a reference image collected with differential interference contrast optics and a video camera. Frame rates up to 5 kHz are easily attainable. Intrinsic variations in laser light intensity and scanning spot brightness are overcome by an on-line signal-processing scheme. Representative records obtained with this instrument by using voltage-sensitive dyes and calcium indicators demonstrate the ability to make fast, high-fidelity measurements of membrane potential and intracellular calcium at high spatial resolution (2 microns) without any temporal averaging. PMID:9199810

  14. An efficient ASIC implementation of 16-channel on-line recursive ICA processor for real-time EEG system.

    PubMed

    Fang, Wai-Chi; Huang, Kuan-Ju; Chou, Chia-Ching; Chang, Jui-Chung; Cauwenberghs, Gert; Jung, Tzyy-Ping

    2014-01-01

    An efficient very-large-scale integration (VLSI) design is proposed: a 16-channel on-line recursive independent component analysis (ORICA) processor ASIC for real-time EEG systems, implemented in TSMC 40 nm CMOS technology. ORICA is well suited to artifact separation in real-time EEG systems because of its efficiency and on-line processing capability. The proposed ORICA processor is composed of an ORICA processing unit and a singular value decomposition (SVD) processing unit. Compared with previous work [1], the proposed ORICA processor achieves greater effectiveness and reduced hardware complexity by utilizing a deeper pipeline architecture, a shared arithmetic processing unit, and shared registers. Sixteen-channel random signals containing 8 super-Gaussian and 8 sub-Gaussian components are used to analyze the dependence of the source components; the average correlation coefficient between the original source signals and the extracted ORICA signals is 0.95452. Finally, the ASIC implementation in TSMC 40 nm CMOS technology consumes 15.72 mW at a 100 MHz operating frequency.
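
    The evaluation metric above is the average correlation between known sources and the components recovered by ICA. The sketch below reproduces that kind of check in software, using scikit-learn's offline FastICA as a stand-in for the ORICA hardware (an assumption, not the paper's processor), with synthetic super-Gaussian (Laplace) and sub-Gaussian (uniform) sources.

      # Source-recovery check: mix 8 super-Gaussian and 8 sub-Gaussian sources into
      # 16 channels, unmix with FastICA (software stand-in for ORICA), and report
      # the average best-matching |correlation| between true and recovered sources.
      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(1)
      n_samples, n_super, n_sub = 5000, 8, 8

      sources = np.column_stack(
          [rng.laplace(size=n_samples) for _ in range(n_super)]
          + [rng.uniform(-1, 1, size=n_samples) for _ in range(n_sub)]
      )
      mixing = rng.normal(size=(16, 16))
      observations = sources @ mixing.T            # 16-channel mixtures

      recovered = FastICA(n_components=16, random_state=0).fit_transform(observations)

      # For each true source, take the best-matching recovered component (|corr|).
      corr = np.corrcoef(sources.T, recovered.T)[:16, 16:]
      avg_best = np.mean(np.max(np.abs(corr), axis=1))
      print("average best-match |correlation|:", round(float(avg_best), 4))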

  15. Exact solutions for kinetic models of macromolecular dynamics.

    PubMed

    Chemla, Yann R; Moffitt, Jeffrey R; Bustamante, Carlos

    2008-05-15

    Dynamic biological processes such as enzyme catalysis, molecular motor translocation, and protein and nucleic acid conformational dynamics are inherently stochastic processes. However, when such processes are studied on a nonsynchronized ensemble, the inherent fluctuations are lost, and only the average rate of the process can be measured. With the recent development of methods of single-molecule manipulation and detection, it is now possible to follow the progress of an individual molecule, measuring not just the average rate but the fluctuations in this rate as well. These fluctuations can provide a great deal of detail about the underlying kinetic cycle that governs the dynamical behavior of the system. However, extracting this information from experiments requires the ability to calculate the general properties of arbitrarily complex theoretical kinetic schemes. We present here a general technique that determines the exact analytical solution for the mean velocity and for measures of the fluctuations. We adopt a formalism based on the master equation and show how the probability density for the position of a molecular motor at a given time can be solved exactly in Fourier-Laplace space. With this analytic solution, we can then calculate the mean velocity and fluctuation-related parameters, such as the randomness parameter (a dimensionless ratio of the diffusion constant and the velocity) and the dwell time distributions, which fully characterize the fluctuations of the system, both commonly used kinetic parameters in single-molecule measurements. Furthermore, we show that this formalism allows calculation of these parameters for a much wider class of general kinetic models than demonstrated with previous methods.
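
    The randomness parameter mentioned above is commonly estimated from single-molecule dwell times as the squared coefficient of variation, r = (⟨t²⟩ - ⟨t⟩²)/⟨t⟩²; for a cycle of n identical irreversible (exponential) steps it equals 1/n. A small numerical sketch under that standard definition (not code from the paper):

      # Randomness parameter from simulated dwell times: r = Var(t) / <t>^2.
      # For an n-step cycle of identical exponential (Poisson) steps, r ~= 1/n.
      import numpy as np

      rng = np.random.default_rng(2)

      def randomness_parameter(dwell_times):
          t = np.asarray(dwell_times, dtype=float)
          return t.var() / t.mean() ** 2

      n_steps, rate, n_cycles = 4, 10.0, 100_000
      # Each dwell time is the sum of n_steps exponential waiting times.
      dwells = rng.exponential(1.0 / rate, size=(n_cycles, n_steps)).sum(axis=1)
      print(randomness_parameter(dwells))   # close to 1/4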

  16. Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes

    NASA Astrophysics Data System (ADS)

    Orsingher, Enzo; Polito, Federico

    2012-08-01

    In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes $N_{\alpha}(t)$, $N_{\beta}(t)$, $t>0$, we have that $N_{\alpha}(N_{\beta}(t)) \stackrel{d}{=} \sum_{j=1}^{N_{\beta}(t)} X_j$, where the $X_j$'s are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form $N_{\alpha}(\tau_k^{\nu})$, $\nu\in(0,1]$, where $\tau_k^{\nu}$ is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form $\Theta(N(t))$, $t>0$, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
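
    The identity quoted above says that subordinating one Poisson process by another is, in distribution, a random sum of i.i.d. Poisson variables. A quick Monte Carlo check of that equality, with arbitrary illustrative rates:

      # Monte Carlo check that N_alpha(N_beta(t)) equals, in distribution, a sum of
      # N_beta(t) i.i.d. Poisson(alpha) variables. Rates chosen arbitrarily.
      import numpy as np

      rng = np.random.default_rng(3)
      alpha, beta, t, n_sim = 1.5, 0.8, 2.0, 50_000

      n_beta = rng.poisson(beta * t, size=n_sim)          # inner process at time t
      composition = rng.poisson(alpha * n_beta)           # N_alpha evaluated at N_beta(t)
      random_sum = np.array([rng.poisson(alpha, size=k).sum() for k in n_beta])

      print("mean/var of composition:", composition.mean(), composition.var())
      print("mean/var of random sum: ", random_sum.mean(), random_sum.var())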

  17. Peer Teaching to Foster Learning in Physiology.

    PubMed

    Srivastava, Tripti K; Waghmare, Lalitbhushan S; Mishra, Ved Prakash; Rawekar, Alka T; Quazi, Nazli; Jagzape, Arunita T

    2015-08-01

    Peer teaching is an effective tool to promote learning and retention of knowledge. By preparing to teach, students are encouraged to construct their own learning program, so that they can explain effectively to fellow learners. Peer teaching was introduced in the present study to foster learning and pedagogical skills amongst first year medical under-graduates in physiology, with the hypothesis that teaching is linked to learning on the part of the teacher. This was a non-randomized interventional study with a mixed-methods design. Cases experienced peer teaching whereas controls underwent tutorials for four consecutive classes. Quantitative evaluation was done through pre-/post-test score analysis (class average normalized gain and tests of significance), the difference in average score in a surprise class test after one month, and the percentage of responses to closed-ended items of a feedback questionnaire. Qualitative evaluation was done through categorization of open-ended items and coding of reflective statements. The difference between average pre- and post-test scores was statistically significant within cases (p = 0.01) and within controls (p = 0.023). The average post-test score was higher for cases, though not statistically significantly so. The class average normalized gain (g) was 49% for tutorials and 53% for peer teaching. The surprise test had an average score of 36 marks (out of 50) for controls and 41 marks for cases. Analysed section-wise, the average score for the long answer question (LAQ) was better in cases, suggesting that retention of descriptive answers was better through peer teaching. Feedback responses were predominantly positive for the efficacy of peer teaching as a learning method. The reflective statements were sorted into reflection in action, reflection on action, claiming evidence, describing experience, and recognizing discrepancies. Teaching can stimulate further learning as it involves the interplay of three processes: metacognitive awareness, deliberate practice, and self-explanation. Coupled with immediate feedback and reflective exercises, learning can be measurably enhanced along with improved teaching skills.
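
    The "class average normalized gain" reported above (49% for tutorials, 53% for peer teaching) is not defined in the abstract; the standard Hake definition, g = (post - pre) / (max - pre) computed on class-average scores, is assumed in the short sketch below, and the numbers are purely illustrative.

      # Class-average normalized gain, assuming the standard Hake definition
      # g = (post_avg - pre_avg) / (max_score - pre_avg).
      def normalized_gain(pre_avg, post_avg, max_score=100.0):
          return (post_avg - pre_avg) / (max_score - pre_avg)

      # Illustrative numbers only (the abstract reports g, not the raw averages):
      print(normalized_gain(pre_avg=40.0, post_avg=72.0))   # ~0.53, i.e. 53%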

  18. Peer Teaching to Foster Learning in Physiology

    PubMed Central

    Srivastava, Tripti K; Waghmare, Lalitbhushan S.; Mishra, Ved Prakash; Rawekar, Alka T; Quazi, Nazli; Jagzape, Arunita T

    2015-01-01

    Introduction Peer teaching is an effective tool to promote learning and retention of knowledge. By preparing to teach, students are encouraged to construct their own learning program, so that they can explain effectively to fellow learners. Peer teaching was introduced in the present study to foster learning and pedagogical skills amongst first year medical under-graduates in physiology, with the hypothesis that teaching is linked to learning on the part of the teacher. Materials and Methods This was a non-randomized interventional study with a mixed-methods design. Cases experienced peer teaching whereas controls underwent tutorials for four consecutive classes. Quantitative evaluation was done through pre-/post-test score analysis (class average normalized gain and tests of significance), the difference in average score in a surprise class test after one month, and the percentage of responses to closed-ended items of a feedback questionnaire. Qualitative evaluation was done through categorization of open-ended items and coding of reflective statements. Results The difference between average pre- and post-test scores was statistically significant within cases (p = 0.01) and within controls (p = 0.023). The average post-test score was higher for cases, though not statistically significantly so. The class average normalized gain (g) was 49% for tutorials and 53% for peer teaching. The surprise test had an average score of 36 marks (out of 50) for controls and 41 marks for cases. Analysed section-wise, the average score for the long answer question (LAQ) was better in cases, suggesting that retention of descriptive answers was better through peer teaching. Feedback responses were predominantly positive for the efficacy of peer teaching as a learning method. The reflective statements were sorted into reflection in action, reflection on action, claiming evidence, describing experience, and recognizing discrepancies. Conclusion Teaching can stimulate further learning as it involves the interplay of three processes: metacognitive awareness, deliberate practice, and self-explanation. Coupled with immediate feedback and reflective exercises, learning can be measurably enhanced along with improved teaching skills. PMID:26435969

  19. The Effect of Non-Normal Distributions on the Integrated Moving Average Model of Time-Series Analysis.

    ERIC Educational Resources Information Center

    Doerann-George, Judith

    The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations: normal,…

  20. A Response to Holster and Lake Regarding Guessing and the Rasch Model

    ERIC Educational Resources Information Center

    Stewart, Jeffrey; McLean, Stuart; Kramer, Brandon

    2017-01-01

    Stewart questioned vocabulary size estimation methods proposed by Beglar and Nation for the Vocabulary Size Test, further arguing Rasch mean square (MSQ) fit statistics cannot determine the proportion of random guesses contained in the average learner's raw score, because the average value will be near 1 by design. He illustrated this by…

  1. Learning More from Educational Intervention Studies: Estimating Complier Average Causal Effects in a Relevance Intervention

    ERIC Educational Resources Information Center

    Nagengast, Benjamin; Brisson, Brigitte M.; Hulleman, Chris S.; Gaspard, Hanna; Häfner, Isabelle; Trautwein, Ulrich

    2018-01-01

    An emerging literature demonstrates that relevance interventions, which ask students to produce written reflections on how what they are learning relates to their lives, improve student learning outcomes. As part of a randomized evaluation of a relevance intervention (N = 1,978 students from 82 ninth-grade classes), we used Complier Average Causal…

  2. The Computer as a Teaching Aid for Eleventh Grade Mathematics: A Comparison Study.

    ERIC Educational Resources Information Center

    Kieren, Thomas Ervin

    To determine the effect of learning computer programming and the use of a computer on mathematical achievement of eleventh grade students, for each of two years, average and above average students were randomly assigned to an experimental and control group. The experimental group wrote computer programs and used the output from the computer in…

  3. Preventing central venous catheter-associated primary bloodstream infections: characteristics of practices among hospitals participating in the Evaluation of Processes and Indicators in Infection Control (EPIC) study.

    PubMed

    Braun, Barbara I; Kritchevsky, Stephen B; Wong, Edward S; Solomon, Steve L; Steele, Lynn; Richards, Cheryl L; Simmons, Bryan P

    2003-12-01

    To describe the conceptual framework and methodology of the Evaluation of Processes and Indicators in Infection Control (EPIC) study and present results of CVC insertion characteristics and organizational practices for preventing BSIs. The goal of the EPIC study was to evaluate relationships among processes of care, organizational characteristics, and the outcome of BSI. This was a multicenter prospective observational study of variation in hospital practices related to preventing CVC-associated BSIs. Process of care information (eg, barrier use during insertions and experience of the inserting practitioner) was collected for a random sample of approximately 5 CVC insertions per month per hospital during November 1998 to December 1999. Organization demographic and practice information (eg, surveillance activities and staff and ICU nurse staffing levels) was also collected. Medical, surgical, or medical-surgical ICUs from 55 hospitals (41 U.S. and 14 international sites). Process information was obtained for 3,320 CVC insertions with an average of 58.2 (+/- 16.1) insertions per hospital. Fifty-four hospitals provided policy and practice information. Staff spent an average of 13 hours per week in study ICU surveillance. Most patients received nontunneled, multiple lumen CVCs, of which fewer than 25% were coated with antimicrobial material. Regarding barriers, most clinicians wore masks (81.5%) and gowns (76.8%); 58.1% used large drapes. Few hospitals (18.1%) used an intravenous team to manage ICU CVCs. Substantial variation exists in CVC insertion practice and BSI prevention activities. Understanding which practices have the greatest impact on BSI rates can help hospitals better target improvement interventions.

  4. The randomized benchmarking number is not what you think it is

    NASA Astrophysics Data System (ADS)

    Proctor, Timothy; Rudinger, Kenneth; Blume-Kohout, Robin; Sarovar, Mohan; Young, Kevin

    Randomized benchmarking (RB) is a widely used technique for characterizing a gate set, whereby random sequences of gates are used to probe the average behavior of the gate set. The gates are chosen to ideally compose to the identity, and the rate of decay in the survival probability of an initial state with increasing length sequences is extracted from a set of experiments - this is the `RB number'. For reasonably well-behaved noise and particular gate sets, it has been claimed that the RB number is a reliable estimate of the average gate fidelity (AGF) of each noisy gate to the ideal target unitary, averaged over all gates in the set. Contrary to this widely held view, we show that this is not the case. We show that there are physically relevant situations, in which RB was thought to be provably reliable, where the RB number is many orders of magnitude away from the AGF. These results have important implications for interpreting the RB protocol, and immediate consequences for many advanced RB techniques. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
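
    The RB number described above is conventionally extracted by fitting the measured survival probability to an exponential decay in sequence length, P(m) = A·p^m + B, and converting the decay constant via r = (1 - p)(d - 1)/d. The sketch below fits that standard zeroth-order model to synthetic data; it illustrates the extraction step only, not the authors' analysis or their counterexamples.

      # Fit the standard zeroth-order RB decay P(m) = A * p**m + B to synthetic data
      # and convert p to the RB number r = (1 - p) * (d - 1) / d (single qubit, d = 2).
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(4)

      def rb_decay(m, A, p, B):
          return A * p**m + B

      lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
      true_A, true_p, true_B = 0.48, 0.985, 0.5
      survival = rb_decay(lengths, true_A, true_p, true_B) + rng.normal(0, 0.005, lengths.size)

      (A, p, B), _ = curve_fit(rb_decay, lengths, survival, p0=[0.5, 0.98, 0.5])
      d = 2
      rb_number = (1 - p) * (d - 1) / d
      print("estimated p:", p, "RB number r:", rb_number)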

  5. Rumor Processes in Random Environment on ℕ and on Galton-Watson Trees

    NASA Astrophysics Data System (ADS)

    Bertacchi, Daniela; Zucca, Fabio

    2013-11-01

    The aim of this paper is to study rumor processes in random environment. In a rumor process a signal starts from the stations of a fixed vertex (the root) and travels on a graph from vertex to vertex. We consider two rumor processes. In the firework process each station, when reached by the signal, transmits it up to a random distance. In the reverse firework process, on the other hand, stations do not send any signal but they “listen” for it up to a random distance. The first random environment that we consider is the deterministic 1-dimensional tree ℕ with a random number of stations on each vertex; in this case the root is the origin of ℕ. We give conditions for the survival/extinction on almost every realization of the sequence of stations. Later on, we study the processes on Galton-Watson trees with a random number of stations on each vertex. We show that if the probability of survival is positive, then there is survival on almost every realization of the infinite tree such that there is at least one station at the root. We characterize the survival of the process in some cases and we give sufficient conditions for survival/extinction.

  6. Extreme events and event size fluctuations in biased random walks on networks.

    PubMed

    Kishore, Vimal; Santhanam, M S; Amritkar, R E

    2012-05-01

    Random walk on discrete lattice models is important to understand various types of transport processes. The extreme events, defined as exceedences of the flux of walkers above a prescribed threshold, have been studied recently in the context of complex networks. This was motivated by the occurrence of rare events such as traffic jams, floods, and power blackouts which take place on networks. In this work, we study extreme events in a generalized random walk model in which the walk is preferentially biased by the network topology. The walkers preferentially choose to hop toward the hubs or small degree nodes. In this setting, we show that extremely large fluctuations in event sizes are possible on small degree nodes when the walkers are biased toward the hubs. In particular, we obtain the distribution of event sizes on the network. Further, the probability for the occurrence of extreme events on any node in the network depends on its "generalized strength," a measure of the ability of a node to attract walkers. The generalized strength is a function of the degree of the node and that of its nearest neighbors. We obtain analytical and simulation results for the probability of occurrence of extreme events on the nodes of a network using a generalized random walk model. The result reveals that the nodes with a larger value of generalized strength, on average, display lower probability for the occurrence of extreme events compared to the nodes with lower values of generalized strength.
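
    In this model a walker at a node chooses a neighbor with probability proportional to k^α, so α > 0 biases hops toward hubs and α < 0 toward small-degree nodes, and an extreme event on a node is an excursion of its occupation flux above a threshold. A small simulation sketch under those assumptions follows; the network model, walker counts, and threshold rule are illustrative choices, not the paper's exact setup.

      # Degree-biased random walkers on a network: hop to neighbor j with probability
      # proportional to degree(j)**alpha. Count "extreme events" on each node as time
      # steps in which its occupancy exceeds a fixed threshold (illustrative rule).
      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(5)
      G = nx.barabasi_albert_graph(200, 3, seed=5)
      alpha, n_walkers, n_steps = -1.0, 500, 200        # bias toward small-degree nodes

      positions = rng.integers(0, G.number_of_nodes(), size=n_walkers)
      extreme_counts = np.zeros(G.number_of_nodes())
      threshold = 3 * n_walkers / G.number_of_nodes()   # flux threshold (illustrative)

      for _ in range(n_steps):
          new_positions = np.empty_like(positions)
          for i, node in enumerate(positions):
              nbrs = list(G.neighbors(node))
              weights = np.array([G.degree(n) ** alpha for n in nbrs], dtype=float)
              new_positions[i] = rng.choice(nbrs, p=weights / weights.sum())
          positions = new_positions
          occupancy = np.bincount(positions, minlength=G.number_of_nodes())
          extreme_counts += occupancy > threshold

      print("nodes with most extreme events:", np.argsort(extreme_counts)[-5:])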

  7. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    PubMed

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

    Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
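
    A marginal mean model of this kind can be fit with generalized estimating equations in standard software. The sketch below uses statsmodels with an exchangeable working correlation and its bias-reduced covariance option as an example of a small-sample corrected sandwich estimator; the simulated data, variable names, and choice of correction are illustrative and may not match the specific corrections evaluated in the paper.

      # GEE fit of a population-average treatment effect with clustered data,
      # using an exchangeable working correlation and a bias-reduced covariance.
      # Simulated data and variable names are illustrative only.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      n_clusters, n_per_cluster = 10, 30

      rows = []
      for c in range(n_clusters):
          cluster_effect = rng.normal(0, 0.5)
          for _ in range(n_per_cluster):
              treated = rng.integers(0, 2)              # stand-in for stepped-wedge exposure
              y = 1.0 + 0.8 * treated + cluster_effect + rng.normal(0, 1.0)
              rows.append({"cluster": c, "treated": treated, "y": y})
      data = pd.DataFrame(rows)

      exog = sm.add_constant(data[["treated"]])
      model = sm.GEE(data["y"], exog, groups=data["cluster"],
                     family=sm.families.Gaussian(),
                     cov_struct=sm.cov_struct.Exchangeable())
      result = model.fit(cov_type="bias_reduced")       # small-sample corrected SEs
      print(result.summary())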

  8. Wound Infiltration With Extended-Release Versus Short-Acting Bupivacaine Before Laparoscopic Hysterectomy: A Randomized Controlled Trial.

    PubMed

    Barron, Kenneth I; Lamvu, Georgine M; Schmidt, R Cole; Fisk, Matthew; Blanton, Emily; Patanwala, Insiyyah

    2017-02-01

    To evaluate if preincision infiltration with extended-release liposomal bupivacaine provides improved overall pain relief compared with 0.25% bupivacaine after laparoscopic or robotic-assisted hysterectomy. A single-center double-masked randomized controlled trial (Canadian Task Force Classification I). A tertiary-care community hospital. Patients recruited from July 2015 through January 2016. Sixty-four patients were randomized, and 59 were analyzed for the primary outcome. Women scheduled to undergo multiport laparoscopic or robotic-assisted total hysterectomy for benign indications were randomized to receive preincision infiltration with undiluted liposomal bupivacaine or 0.25% bupivacaine. The primary outcome was overall average pain intensity by numeric rating scale (0-10) using the Brief Pain Inventory (BPI) via telephone survey on postoperative day (POD) 3. A sample size of 28 per group (N = 56) was planned to detect a 30% change in pain scores. Secondary outcomes were overall average and worst numeric pain scores on PODs 1, 2, and 14; pain scores in hospital; BPI pain interference scores; and total opioid use. There were no demographic differences between the 2 groups. For the primary outcome, we found a decrease in the average (p = .02) pain scores on POD 3 in the liposomal bupivacaine group. We also found a decrease in worst pain scores on POD 2 (p = .03) and POD 3 (p = .01). There were no differences in pain scores while in the hospital or on POD 1 or POD 14. There were no differences in BPI pain interference scores, opioid use, or reported adverse effects. For laparoscopic and robotic-assisted multiport hysterectomies, there is evidence of decreased average postoperative pain with liposomal bupivacaine compared with 0.25% bupivacaine for port-site analgesia on POD 3, but no difference in opioid use or measures of functioning. Published by Elsevier Inc.

  9. Using environmental heterogeneity to plan for sea-level rise.

    PubMed

    Hunter, Elizabeth A; Nibbelink, Nathan P

    2017-12-01

    Environmental heterogeneity is increasingly being used to select conservation areas that will provide for future biodiversity under a variety of climate scenarios. This approach, termed conserving nature's stage (CNS), assumes environmental features respond to climate change more slowly than biological communities, but will CNS be effective if the stage were to change as rapidly as the climate? We tested the effectiveness of using CNS to select sites in salt marshes for conservation in coastal Georgia (U.S.A.), where environmental features will change rapidly as sea level rises. We calculated species diversity based on distributions of 7 bird species with a variety of niches in Georgia salt marshes. Environmental heterogeneity was assessed across six landscape gradients (e.g., elevation, salinity, and patch area). We used 2 approaches to select sites with high environmental heterogeneity: site complementarity (environmental diversity [ED]) and local environmental heterogeneity (environmental richness [ER]). Sites selected based on ER predicted present-day species diversity better than randomly selected sites (up to an 8.1% improvement), were resilient to areal loss from SLR (1.0% average areal loss by 2050 compared with 0.9% loss of randomly selected sites), and provided habitat to a threatened species (0.63 average occupancy compared with 0.6 average occupancy of randomly selected sites). Sites selected based on ED predicted species diversity no better or worse than random and were not resilient to SLR (2.9% average areal loss by 2050). Despite the discrepancy between the 2 approaches, CNS is a viable strategy for conservation site selection in salt marshes because the ER approach was successful. It has potential for application in other coastal areas where SLR will affect environmental features, but its performance may depend on the magnitude of geological changes caused by SLR. Our results indicate that conservation planners that had heretofore excluded low-lying coasts from CNS planning could include coastal ecosystems in regional conservation strategies. © 2017 Society for Conservation Biology.

  10. [Triage duration times: a prospective descriptive study in a level 1° emergency department].

    PubMed

    Bambi, Stefano; Ruggeri, Marco

    2017-01-01

    Triage is the most important tool for clinical risk management in emergency departments (ED). Measuring the timing of its phases is fundamental to establishing indicators and standards for the optimization of the system. The aims were to evaluate the duration of the phases of triage and to evaluate some variables influencing nurses' performance. This was a prospective descriptive study performed in the ED of Careggi Teaching Hospital in Florence. Fourteen nurses (one third of the whole staff) were enrolled by stratified randomization according to length-of-service classes. Triage processes for 150 adult patients were recorded. The mean age of the nurses was 39.7 years (SD ± 5.2, range 29-50); the average length of service was 10.3 years (SD ± 4.4, range 3-18); the average triage experience was 8.6 years (SD ± 4.3, range 2-13). The median time from the patient's arrival to the end of the triage process was 04':04" (range 00':47"-18':08"); the median duration of triage was 01':11" (range 00':07"-11':27"). Length of service and triage experience did not influence the medians of the recorded time intervals, although the low sample size is a limitation. Interruptions were observed in 111 (74%) of the triage cases. The recorded triage time intervals were similar to those reported in the international literature. Actions are needed to reduce the impact of interruptions on triage process times.

  11. Distributed and Dynamic Neural Encoding of Multiple Motion Directions of Transparently Moving Stimuli in Cortical Area MT

    PubMed Central

    Xiao, Jianbo

    2015-01-01

    Segmenting visual scenes into distinct objects and surfaces is a fundamental visual function. To better understand the underlying neural mechanism, we investigated how neurons in the middle temporal cortex (MT) of macaque monkeys represent overlapping random-dot stimuli moving transparently in slightly different directions. It has been shown that the neuronal response elicited by two stimuli approximately follows the average of the responses elicited by the constituent stimulus components presented alone. In this scheme of response pooling, the ability to segment two simultaneously presented motion directions is limited by the width of the tuning curve to motion in a single direction. We found that, although the population-averaged neuronal tuning showed response averaging, subgroups of neurons showed distinct patterns of response tuning and were capable of representing component directions that were separated by a small angle—less than the tuning width to unidirectional stimuli. One group of neurons preferentially represented the component direction at a specific side of the bidirectional stimuli, weighting one stimulus component more strongly than the other. Another group of neurons pooled the component responses nonlinearly and showed two separate peaks in their tuning curves even when the average of the component responses was unimodal. We also show for the first time that the direction tuning of MT neurons evolved from initially representing the vector-averaged direction of slightly different stimuli to gradually representing the component directions. Our results reveal important neural processes underlying image segmentation and suggest that information about slightly different stimulus components is computed dynamically and distributed across neurons. SIGNIFICANCE STATEMENT Natural scenes often contain multiple entities. The ability to segment visual scenes into distinct objects and surfaces is fundamental to sensory processing and is crucial for generating the perception of our environment. Because cortical neurons are broadly tuned to a given visual feature, segmenting two stimuli that differ only slightly is a challenge for the visual system. In this study, we discovered that many neurons in the visual cortex are capable of representing individual components of slightly different stimuli by selectively and nonlinearly pooling the responses elicited by the stimulus components. We also show for the first time that the neural representation of individual stimulus components developed over a period of ∼70–100 ms, revealing a dynamic process of image segmentation. PMID:26658869

  12. Estimation of treatment efficacy with complier average causal effects (CACE) in a randomized stepped wedge trial.

    PubMed

    Gruber, Joshua S; Arnold, Benjamin F; Reygadas, Fermin; Hubbard, Alan E; Colford, John M

    2014-05-01

    Complier average causal effects (CACE) estimate the impact of an intervention among treatment compliers in randomized trials. Methods used to estimate CACE have been outlined for parallel-arm trials (e.g., using an instrumental variables (IV) estimator) but not for other randomized study designs. Here, we propose a method for estimating CACE in randomized stepped wedge trials, where experimental units cross over from control conditions to intervention conditions in a randomized sequence. We illustrate the approach with a cluster-randomized drinking water trial conducted in rural Mexico from 2009 to 2011. Additionally, we evaluated the plausibility of assumptions required to estimate CACE using the IV approach, which are testable in stepped wedge trials but not in parallel-arm trials. We observed small increases in the magnitude of CACE risk differences compared with intention-to-treat estimates for drinking water contamination (risk difference (RD) = -22% (95% confidence interval (CI): -33, -11) vs. RD = -19% (95% CI: -26, -12)) and diarrhea (RD = -0.8% (95% CI: -2.1, 0.4) vs. RD = -0.1% (95% CI: -1.1, 0.9)). Assumptions required for IV analysis were probably violated. Stepped wedge trials allow investigators to estimate CACE with an approach that avoids the stronger assumptions required for CACE estimation in parallel-arm trials. Inclusion of CACE estimates in stepped wedge trials with imperfect compliance could enhance reporting and interpretation of the results of such trials.
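
    For reference, the parallel-arm IV (Wald) estimator that the abstract contrasts with reduces, under one-sided noncompliance, to the intention-to-treat effect divided by the difference in compliance between arms. A minimal sketch of that calculation with illustrative numbers (not the trial's data, and not the stepped-wedge estimator proposed in the paper):

      # Complier average causal effect via the Wald / IV estimator:
      # CACE = ITT effect / (compliance in intervention arm - compliance in control arm).
      # Numbers below are illustrative, not taken from the trial.

      def cace_wald(itt_effect, compliance_treat, compliance_control=0.0):
          return itt_effect / (compliance_treat - compliance_control)

      itt_risk_difference = -0.001       # e.g. -0.1 percentage points for diarrhea
      compliance = 0.60                  # fraction of assigned units actually treated
      print(cace_wald(itt_risk_difference, compliance))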

  13. Efficient encapsulation of proteins with random copolymers.

    PubMed

    Nguyen, Trung Dac; Qiao, Baofu; Olvera de la Cruz, Monica

    2018-06-12

    Membraneless organelles are aggregates of disordered proteins that form spontaneously to promote specific cellular functions in vivo. The possibility of synthesizing membraneless organelles out of cells will therefore enable fabrication of protein-based materials with functions inherent to biological matter. Since random copolymers contain various compositions and sequences of solvophobic and solvophilic groups, they are expected to function in nonbiological media similarly to a set of disordered proteins in membraneless organelles. Interestingly, the internal environment of these organelles has been noted to behave more like an organic solvent than like water. Therefore, an adsorbed layer of random copolymers that mimics the function of disordered proteins could, in principle, protect and enhance the proteins' enzymatic activity even in organic solvents, which are ideal when the products and/or the reactants have limited solubility in aqueous media. Here, we demonstrate via multiscale simulations that random copolymers efficiently incorporate proteins into different solvents with the potential to optimize their enzymatic activity. We investigate the key factors that govern the ability of random copolymers to encapsulate proteins, including the adsorption energy, copolymer average composition, and solvent selectivity. The adsorbed polymer chains have remarkably similar sequences, indicating that the proteins are able to select certain sequences that best reduce their exposure to the solvent. We also find that the protein surface coverage decreases when the fluctuation in the average distance between the protein adsorption sites increases. The results herein set the stage for computational design of random copolymers for stabilizing and delivering proteins across multiple media.

  14. Effect of Arrangement of Stick Figures on Estimates of Proportion in Risk Graphics

    PubMed Central

    Ancker, Jessica S.; Weber, Elke U.; Kukafka, Rita

    2017-01-01

    Background Health risks are sometimes illustrated with stick figures, with a certain proportion colored to indicate they are affected by the disease. Perception of these graphics may be affected by whether the affected stick figures are scattered randomly throughout the group or arranged in a block. Objective To assess the effects of stick-figure arrangement on first impressions of estimates of proportion, under a 10-s deadline. Design Questionnaire. Participants and Setting Respondents recruited online (n = 100) or in waiting rooms at an urban hospital (n = 65). Intervention Participants were asked to estimate the proportion represented in 6 unlabeled graphics, half randomly arranged and half sequentially arranged. Measurements Estimated proportions. Results Although average estimates were fairly good, the variability of estimates was high. Overestimates of random graphics were larger than overestimates of sequential ones, except when the proportion was near 50%; variability was also higher with random graphics. Although the average inaccuracy was modest, it was large enough that more than one quarter of respondents confused 2 graphics depicting proportions that differed by 11 percentage points. Low numeracy and educational level were associated with inaccuracy. Limitations Participants estimated proportions but did not report perceived risk. Conclusions Randomly arranged arrays of stick figures should be used with care because viewers’ ability to estimate the proportion in these graphics is so poor that moderate differences between risks may not be visible. In addition, random arrangements may create an initial impression that proportions, especially large ones, are larger than they are. PMID:20671209

  15. Comparison of Muscle Onset Activation Sequences between a Golf or Tennis Swing and Common Training Exercises Using Surface Electromyography: A Pilot Study.

    PubMed

    Vasudevan, John M; Logan, Andrew; Shultz, Rebecca; Koval, Jeffrey J; Roh, Eugene Y; Fredericson, Michael

    2016-01-01

    Aim. The purpose of this pilot study is to use surface electromyography to determine an individual athlete's typical muscle onset activation sequence when performing a golf or tennis forward swing and to use the method to assess to what degree the sequence is reproduced with common conditioning exercises and a machine designed for this purpose. Methods. Data for 18 healthy male subjects were collected for 15 muscles of the trunk and lower extremities. Data were filtered and processed to determine the average onset of muscle activation for each motion. A Spearman correlation estimated congruence of activation order between the swing and each exercise. Correlations of each group were pooled with 95% confidence intervals using a random effects meta-analytic strategy. Results. The averaged sequences differed among each athlete tested, but pooled correlations demonstrated a positive association between each exercise and the participants' natural muscle onset activation sequence. Conclusion. The selected training exercises and Turning Point™ device all partially reproduced our athletes' averaged muscle onset activation sequences for both sports. The results support consideration of a larger, adequately powered study using this method to quantify to what degree each of the selected exercises is appropriate for use in both golf and tennis.

  16. Comparison of Muscle Onset Activation Sequences between a Golf or Tennis Swing and Common Training Exercises Using Surface Electromyography: A Pilot Study

    PubMed Central

    Shultz, Rebecca; Fredericson, Michael

    2016-01-01

    Aim. The purpose of this pilot study is to use surface electromyography to determine an individual athlete's typical muscle onset activation sequence when performing a golf or tennis forward swing and to use the method to assess to what degree the sequence is reproduced with common conditioning exercises and a machine designed for this purpose. Methods. Data for 18 healthy male subjects were collected for 15 muscles of the trunk and lower extremities. Data were filtered and processed to determine the average onset of muscle activation for each motion. A Spearman correlation estimated congruence of activation order between the swing and each exercise. Correlations of each group were pooled with 95% confidence intervals using a random effects meta-analytic strategy. Results. The averaged sequences differed among each athlete tested, but pooled correlations demonstrated a positive association between each exercise and the participants' natural muscle onset activation sequence. Conclusion. The selected training exercises and Turning Point™ device all partially reproduced our athletes' averaged muscle onset activation sequences for both sports. The results support consideration of a larger, adequately powered study using this method to quantify to what degree each of the selected exercises is appropriate for use in both golf and tennis. PMID:27403454

  17. A digital boxcar integrator for IMS spectra

    NASA Technical Reports Server (NTRS)

    Cohen, Martin J.; Stimac, Robert M.; Wernlund, Roger F.; Parker, Donald C.

    1995-01-01

    When trying to detect or quantify a signal at or near the limit of detectability, it is invariably embedded in the noise. This statement is true for nearly all detectors of any physical phenomenon, and the limit of detectability ideally occurs at very low signal-to-noise levels. This is particularly true of IMS (Ion Mobility Spectrometer) spectra, due to the low vapor pressure of several chemical compounds of great interest and the small currents associated with the ionic detection process. Gated Integrators and Boxcar Integrators or Averagers are designed to recover fast, repetitive analog signals. In a typical application, a time 'Gate' or 'Window' is generated, characterized by a set delay from a trigger or gate pulse and a certain width. A Gated Integrator amplifies and integrates the signal that is present during the time the gate is open, ignoring noise and interference that may be present at other times. Boxcar Integration refers to the practice of averaging the output of the Gated Integrator over many sweeps of the detector. Since any signal present during the gate will add linearly, while noise will add in a 'random walk' fashion as the square root of the number of sweeps, averaging N sweeps will improve the 'Signal-to-Noise Ratio' by a factor of the square root of N.
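
    The square-root-of-N claim is easy to verify numerically: coherent averaging of N sweeps leaves the gated signal unchanged while uncorrelated noise shrinks by about the square root of N. A small sketch with an illustrative waveform and noise level (not instrument code):

      # Numerical check of the sqrt(N) claim: averaging N repetitive sweeps leaves the
      # gated signal unchanged while baseline noise shrinks by about sqrt(N).
      # Waveform shape and noise level are illustrative.
      import numpy as np

      rng = np.random.default_rng(7)
      n_points, n_sweeps, noise_sigma = 500, 400, 1.0

      t = np.linspace(0.0, 1.0, n_points)
      signal = 0.2 * np.exp(-((t - 0.7) ** 2) / 0.002)      # small peak in the "gate"
      sweeps = signal + rng.normal(0.0, noise_sigma, size=(n_sweeps, n_points))
      averaged = sweeps.mean(axis=0)

      baseline = slice(0, 200)                              # region with no signal
      noise_single = sweeps[0, baseline].std()
      noise_avg = averaged[baseline].std()
      print("noise reduction factor:", noise_single / noise_avg)   # ~ sqrt(400) = 20
      print("recovered peak height: ", averaged.max())             # ~ 0.2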

  18. Comparison of Natural Language Processing Rules-based and Machine-learning Systems to Identify Lumbar Spine Imaging Findings Related to Low Back Pain.

    PubMed

    Tan, W Katherine; Hassanpour, Saeed; Heagerty, Patrick J; Rundell, Sean D; Suri, Pradeep; Huhdanpaa, Hannu T; James, Kathryn; Carrell, David S; Langlotz, Curtis P; Organ, Nancy L; Meier, Eric N; Sherman, Karen J; Kallmes, David F; Luetmer, Patrick H; Griffith, Brent; Nerenz, David R; Jarvik, Jeffrey G

    2018-03-28

    To evaluate a natural language processing (NLP) system built with open-source tools for identification of lumbar spine imaging findings related to low back pain on magnetic resonance and x-ray radiology reports from four health systems. We used a limited data set (de-identified except for dates) sampled from lumbar spine imaging reports of a prospectively assembled cohort of adults. From N = 178,333 reports, we randomly selected N = 871 to form a reference-standard dataset, consisting of N = 413 x-ray reports and N = 458 MR reports. Using standardized criteria, four spine experts annotated the presence of 26 findings, where 71 reports were annotated by all four experts and 800 were each annotated by two experts. We calculated inter-rater agreement and finding prevalence from annotated data. We randomly split the annotated data into development (80%) and testing (20%) sets. We developed an NLP system from both rule-based and machine-learned models. We validated the system using accuracy metrics such as sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). The multirater annotated dataset achieved inter-rater agreement of Cohen's kappa > 0.60 (substantial agreement) for 25 of 26 findings, with finding prevalence ranging from 3% to 89%. In the testing sample, rule-based and machine-learned predictions both had comparable average specificity (0.97 and 0.95, respectively). The machine-learned approach had a higher average sensitivity (0.94, compared to 0.83 for rules-based), and a higher overall AUC (0.98, compared to 0.90 for rules-based). Our NLP system performed well in identifying the 26 lumbar spine findings, as benchmarked by reference-standard annotation by medical experts. Machine-learned models provided substantial gains in model sensitivity with slight loss of specificity, and overall higher AUC. Copyright © 2018 The Association of University Radiologists. All rights reserved.

  19. Improved outcome with pulses of vincristine and corticosteroids in continuation therapy of children with average risk acute lymphoblastic leukemia (ALL) and lymphoblastic non-Hodgkin lymphoma (NHL): report of the EORTC randomized phase 3 trial 58951.

    PubMed

    De Moerloose, Barbara; Suciu, Stefan; Bertrand, Yves; Mazingue, Françoise; Robert, Alain; Uyttebroeck, Anne; Yakouben, Karima; Ferster, Alice; Margueritte, Geneviève; Lutz, Patrick; Munzer, Martine; Sirvent, Nicolas; Norton, Lucilia; Boutard, Patrick; Plantaz, Dominique; Millot, Frederic; Philippet, Pierre; Baila, Liliana; Benoit, Yves; Otten, Jacques

    2010-07-08

    The European Organisation for Research and Treatment of Cancer 58951 trial for children with acute lymphoblastic leukemia (ALL) or non-Hodgkin lymphoma (NHL) addressed 3 randomized questions, including the evaluation of dexamethasone (DEX) versus prednisolone (PRED) in induction and, for average-risk patients, the evaluation of vincristine and corticosteroid pulses during continuation therapy. The corticosteroid used in the pulses was that assigned at induction. Overall, 411 patients were randomly assigned: 202 initially randomly assigned to PRED (60 mg/m(2)/d), 201 to DEX (6 mg/m(2)/d), and 8 nonrandomly assigned to PRED. At a median follow-up of 6.3 years, there were 19 versus 34 events for pulses versus no pulses; 6-year disease-free survival (DFS) rate was 90.6% (standard error [SE], 2.1%) and 82.8% (SE, 2.8%), respectively (hazard ratio [HR] = 0.54; 95% confidence interval, 0.31-0.94; P = .027). The effect of pulses was similar in the PRED (HR = 0.56) and DEX groups (HR = 0.59) but more pronounced in girls (HR = 0.24) than in boys (HR = 0.71). Grade 3 to 4 hepatic toxicity was 30% versus 40% in pulses versus no pulses group and grade 2 to 3 osteonecrosis was 4.4% versus 2%. For average-risk patients treated according to Berlin-Frankfurt-Muenster-based protocols, pulses should become a standard component of therapy.

  20. Mindfulness meditation for the treatment of chronic low back pain in older adults: A randomized controlled pilot study

    PubMed Central

    Morone, Natalia E.; Greco, Carol M.; Weiner, Debra K.

    2008-01-01

    The objectives of this pilot study were to assess the feasibility of recruitment and adherence to an eight-session mindfulness meditation program for community-dwelling older adults with chronic low back pain (CLBP) and to develop initial estimates of treatment effects. It was designed as a randomized, controlled clinical trial. Participants were 37 community-dwelling older adults aged 65 years and older with CLBP of moderate intensity occurring daily or almost every day. Participants were randomized to an 8-week mindfulness-based meditation program or to a wait-list control group. Baseline, 8-week and 3-month follow-up measures of pain, physical function, attention, and quality of life were assessed. Eighty-nine older adults were screened and 37 found to be eligible and randomized within a 6-month period. The mean age of the sample was 74.9 years, 21/37 (57%) of participants were female and 33/37 (89%) were white. At the end of the intervention 30/37 (81%) participants completed 8-week assessments. Average class attendance of the intervention arm was 6.7 out of 8. They meditated an average of 4.3 days a week and the average minutes per day was 31.6. Compared to the control group, the intervention group displayed significant improvement in the Chronic Pain Acceptance Questionnaire Total Score and Activities Engagement subscale (P = .008, P = .004) and SF-36 Physical Function (P = .03). An 8-week mindfulness-based meditation program is feasible for older adults with CLBP. The program may lead to improvement in pain acceptance and physical function. PMID:17544212

  1. Operational Modal Analysis of Bridge Structures with Data from GNSS/Accelerometer Measurements.

    PubMed

    Xiong, Chunbao; Lu, Huali; Zhu, Jinsong

    2017-02-23

    Real-time dynamic displacement and acceleration responses of the main span section of the Tianjin Fumin Bridge in China under ambient excitation were tested using a Global Navigation Satellite System (GNSS) dynamic deformation monitoring system and an acceleration sensor vibration test system. Considering the close relationship between GNSS multipath errors and the measurement environment, in combination with the noise reduction characteristics of different filtering algorithms, the researchers proposed an AFEC mixed filtering algorithm, which is a combination of autocorrelation function-based empirical mode decomposition (EMD) and Chebyshev mixed filtering, to extract the real vibration displacement of the bridge structure after system error correction and filtering de-noising of the signals collected by the GNSS. The proposed AFEC mixed filtering algorithm achieved high accuracy (1 mm) for the real displacement in the elevation direction. Next, the traditional random decrement technique (used mainly for stationary random processes) was expanded to non-stationary random processes. Combining the expanded random decrement technique (RDT) and an autoregressive moving average model (ARMA), the modal frequencies of the bridge structural system were extracted using an expanded ARMA_RDT modal identification method, which was compared with the power spectrum analysis results of the acceleration signal and finite element analysis results. Identification results demonstrated that the proposed algorithm is applicable to the analysis of dynamic displacement monitoring data of real bridge structures under ambient excitation and could identify the first five orders of the inherent frequencies of the structural system accurately. The identification error of the inherent frequencies was smaller than 6%, indicating the high identification accuracy of the proposed algorithm. Furthermore, the GNSS dynamic deformation monitoring method can be used to monitor dynamic displacement and identify the modal parameters of bridge structures. The GNSS can monitor the working state of bridges effectively and accurately. The research results can provide references for evaluating the bearing capacity, safety performance, and durability of bridge structures during operation.
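
    The random decrement technique referenced above builds a free-decay-like signature by averaging many segments of the response that start whenever the signal crosses a chosen trigger level; modal frequencies are then fitted to that signature (via an ARMA model in this paper). A minimal sketch of the triggering-and-averaging step under those standard assumptions, on synthetic data; this is the classical RDT, not the authors' AFEC or expanded ARMA_RDT code.

      # Random decrement signature: average all segments that start where the signal
      # up-crosses a trigger level. Synthetic oscillatory data; classical RDT only.
      import numpy as np

      rng = np.random.default_rng(8)
      fs, duration, f0 = 100.0, 600.0, 1.2              # sample rate (Hz), length (s), mode (Hz)
      t = np.arange(0, duration, 1 / fs)

      # Ambient response stand-in: randomly modulated oscillation plus broadband noise.
      response = (np.sin(2 * np.pi * f0 * t) * (1 + 0.3 * rng.normal(size=t.size))
                  + 0.5 * rng.normal(size=t.size))

      def random_decrement(x, trigger, seg_len):
          """Average all segments starting at up-crossings of the trigger level."""
          starts = np.where((x[:-1] < trigger) & (x[1:] >= trigger))[0]
          starts = starts[starts + seg_len < x.size]
          return np.mean([x[s:s + seg_len] for s in starts], axis=0), len(starts)

      signature, n_segments = random_decrement(response, trigger=response.std(),
                                               seg_len=int(5 * fs))
      print(n_segments, "segments averaged; signature length:", signature.size)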

  2. Operational Modal Analysis of Bridge Structures with Data from GNSS/Accelerometer Measurements

    PubMed Central

    Xiong, Chunbao; Lu, Huali; Zhu, Jinsong

    2017-01-01

    Real-time dynamic displacement and acceleration responses of the main span section of the Tianjin Fumin Bridge in China under ambient excitation were tested using a Global Navigation Satellite System (GNSS) dynamic deformation monitoring system and an acceleration sensor vibration test system. Considering the close relationship between GNSS multipath errors and the measurement environment, in combination with the noise reduction characteristics of different filtering algorithms, the researchers proposed an AFEC mixed filtering algorithm, which is a combination of autocorrelation function-based empirical mode decomposition (EMD) and Chebyshev mixed filtering, to extract the real vibration displacement of the bridge structure after system error correction and filtering de-noising of the signals collected by the GNSS. The proposed AFEC mixed filtering algorithm achieved high accuracy (1 mm) for the real displacement in the elevation direction. Next, the traditional random decrement technique (used mainly for stationary random processes) was expanded to non-stationary random processes. Combining the expanded random decrement technique (RDT) and an autoregressive moving average model (ARMA), the modal frequencies of the bridge structural system were extracted using an expanded ARMA_RDT modal identification method, which was compared with the power spectrum analysis results of the acceleration signal and finite element analysis results. Identification results demonstrated that the proposed algorithm is applicable to the analysis of dynamic displacement monitoring data of real bridge structures under ambient excitation and could identify the first five orders of the inherent frequencies of the structural system accurately. The identification error of the inherent frequencies was smaller than 6%, indicating the high identification accuracy of the proposed algorithm. Furthermore, the GNSS dynamic deformation monitoring method can be used to monitor dynamic displacement and identify the modal parameters of bridge structures. The GNSS can monitor the working state of bridges effectively and accurately. The research results can provide references for evaluating the bearing capacity, safety performance, and durability of bridge structures during operation. PMID:28241472

  3. Randomized Trial of Continuing Care Enhancements for Cocaine-Dependent Patients following Initial Engagement

    ERIC Educational Resources Information Center

    McKay, James R.; Lynch, Kevin G.; Coviello, Donna; Morrison, Rebecca; Cary, Mark S.; Skalina, Lauren; Plebani, Jennifer

    2010-01-01

    Objective: The effects of cognitive-behavioral relapse prevention (RP), contingency management (CM), and their combination (CM + RP) were evaluated in a randomized trial with 100 cocaine-dependent patients (58% female, 89% African American) who were engaged in treatment for at least 2 weeks and had an average of 44 days of abstinence at baseline.…

  4. An Analytical Framework for Fast Estimation of Capacity and Performance in Communication Networks

    DTIC Science & Technology

    2012-01-25

    standard random graph (due to Erdős-Rényi) in the regime where the average degrees remain fixed (and above 1) and the number of nodes get large, is not… abs/1010.3305 (Oct 2010). [6] O. Narayan, I. Saniee, G. H. Tucci, “Lack of Spectral Gap and Hyperbolicity in Asymptotic Erdős-Rényi Random Graphs

  5. How Much Do the Effects of Education and Training Programs Vary across Sites? Evidence from Past Multisite Randomized Trials

    ERIC Educational Resources Information Center

    Weiss, Michael J.; Bloom, Howard S.; Verbitsky-Savitz, Natalya; Gupta, Himani; Vigil, Alma E.; Cullinan, Daniel N.

    2017-01-01

    Multisite trials, in which individuals are randomly assigned to alternative treatment arms within sites, offer an excellent opportunity to estimate the cross-site average effect of treatment assignment (intent to treat or ITT) "and" the amount by which this impact varies across sites. Although both of these statistics are substantively…

  6. Modeling of Academic Achievement of Primary School Students in Ethiopia Using Bayesian Multilevel Approach

    ERIC Educational Resources Information Center

    Sebro, Negusse Yohannes; Goshu, Ayele Taye

    2017-01-01

    This study aims to explore Bayesian multilevel modeling to investigate variations of average academic achievement of grade eight school students. A sample of 636 students is randomly selected from 26 private and government schools by a two-stage stratified sampling design. Bayesian method is used to estimate the fixed and random effects. Input and…

  7. Self-avoiding walks on scale-free networks

    NASA Astrophysics Data System (ADS)

    Herrero, Carlos P.

    2005-01-01

    Several kinds of walks on complex networks are currently used to analyze search and navigation in different systems. Many analytical and computational results are known for random walks on such networks. Self-avoiding walks (SAW’s) are expected to be more suitable than unrestricted random walks to explore various kinds of real-life networks. Here we study long-range properties of random SAW’s on scale-free networks, characterized by a degree distribution P(k) ∼ k^(-γ). In the limit of large networks (system size N → ∞), the average number s_n of SAW’s starting from a generic site increases as μ^n, with μ = ⟨k²⟩/⟨k⟩ - 1. For finite N, s_n is reduced due to the presence of loops in the network, which causes the emergence of attrition of the paths. For kinetic growth walks, the average maximum length increases as a power of the system size, ∼ N^α, with an exponent α increasing as the parameter γ is raised. We discuss the dependence of α on the minimum allowed degree in the network. A similar power-law dependence is found for the mean self-intersection length of nonreversal random walks. Simulation results support our approximate analytical calculations.
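
    The growth parameter quoted above, μ = ⟨k²⟩/⟨k⟩ - 1, depends only on the degree distribution, so it can be estimated directly from a degree sequence. A short sketch on a synthetic scale-free network (network model and sizes are illustrative, not those of the paper):

      # Estimate the SAW growth parameter mu = <k^2>/<k> - 1 from a degree sequence.
      # Network model and sizes are illustrative.
      import numpy as np
      import networkx as nx

      G = nx.barabasi_albert_graph(n=20000, m=3, seed=0)
      degrees = np.array([d for _, d in G.degree()])

      mu = (degrees ** 2).mean() / degrees.mean() - 1
      print("mu = <k^2>/<k> - 1 =", round(float(mu), 3))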

  8. Subharmonic response of a single-degree-of-freedom nonlinear vibro-impact system to a narrow-band random excitation.

    PubMed

    Haiwu, Rong; Wang, Xiangdong; Xu, Wei; Fang, Tong

    2009-08-01

    The subharmonic response of a single-degree-of-freedom nonlinear vibro-impact oscillator with a one-sided barrier to narrow-band random excitation is investigated. The narrow-band random excitation used here is a filtered Gaussian white noise. The analysis is based on a special Zhuravlev transformation, which reduces the system to one without impacts, or velocity jumps, thereby permitting the application of asymptotic averaging over the "fast" variables. The averaged stochastic equations are solved exactly by the method of moments for the mean-square response amplitude in the case of a linear system with zero offset. A perturbation-based moment closure scheme is proposed, and an approximate formula for the mean-square amplitude is obtained for the case of a linear system with nonzero offset. The same scheme is then used to obtain the algebraic equation for the mean-square response amplitude in the case of a nonlinear system. The effects of damping, detuning, nonlinear intensity, bandwidth, and magnitude of the random excitation are analyzed. The theoretical analyses are verified by numerical results. Theoretical analyses and numerical simulations show that the peak amplitudes may be strongly reduced at large detunings or large nonlinear intensity.
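
    For readers who want to reproduce the type of forcing used above, here is a minimal sketch of narrow-band random excitation obtained by band-pass filtering Gaussian white noise; the centre frequency, bandwidth, and sampling rate are illustrative assumptions, not values from the paper:

```python
# Sketch: narrow-band random excitation as filtered Gaussian white noise.
# Centre frequency, bandwidth and sampling rate are illustrative assumptions.
import numpy as np
from scipy.signal import butter, lfilter

fs = 200.0                      # sampling frequency [Hz]
f0, bw = 5.0, 0.5               # centre frequency and bandwidth [Hz]
t = np.arange(0, 600, 1 / fs)

white = np.random.default_rng(1).standard_normal(t.size)
b, a = butter(4, [(f0 - bw / 2) / (fs / 2), (f0 + bw / 2) / (fs / 2)],
              btype="bandpass")
excitation = lfilter(b, a, white)

# The narrower the band, the more slowly the random envelope fluctuates
# relative to the carrier at f0, which is what "narrow-band" means here.
print(excitation.std())
```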

  9. Single-ping ADCP measurements in the Strait of Gibraltar

    NASA Astrophysics Data System (ADS)

    Sammartino, Simone; García Lafuente, Jesús; Naranjo, Cristina; Sánchez Garrido, José Carlos; Sánchez Leal, Ricardo

    2016-04-01

    In most Acoustic Doppler Current Profiler (ADCP) user manuals, it is widely recommended to apply ensemble averaging of the single-ping measurements in order to obtain reliable observations of the current speed. The random error of a single-ping measurement is typically too high for it to be used directly, while averaging reduces the ensemble error by a factor of approximately √N, with N the number of averaged pings. A 75 kHz ADCP moored at the western exit of the Strait of Gibraltar, part of the long-term monitoring of the Mediterranean outflow, has recently served as a test setup for a different approach to current measurements. Ensemble averaging was disabled, while the internal coordinate conversion made by the instrument was maintained, and a series of single-ping measurements was collected every 36 seconds over a period of approximately 5 months. The large volume of data was handled smoothly by the instrument, and no abnormal battery consumption was recorded. As a result, a long and unique series of very high frequency current measurements has been collected. Results of this novel approach have been exploited in a dual way. From a statistical point of view, the availability of single-ping measurements allows a real (a posteriori) estimate of the ensemble average error of both current and ancillary variables. While the theoretical random error for horizontal velocity is estimated a priori as ~2 cm s-1 for a 50-ping ensemble, the value obtained by a posteriori averaging is ~15 cm s-1, with asymptotic behavior starting from an averaging size of 10 pings per ensemble. This result suggests the presence of external sources of random error (e.g., turbulence), of higher magnitude than the internal sources (ADCP intrinsic precision), which cannot be reduced by ensemble averaging. On the other hand, although the instrumental configuration is clearly not suitable for a precise estimation of turbulent parameters, some hints of the turbulent structure of the flow can be obtained from the empirical computation of the zonal Reynolds stress (along the predominant direction of the current) and the rates of production and dissipation of turbulent kinetic energy. All these parameters show a clear correlation with tidal fluctuations of the current, with maximum values coinciding with flood tides, during the maxima of the Mediterranean outflow current.
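
    A minimal sketch of the a posteriori error estimate described above: the standard error of each ensemble mean is computed from the spread of the pings within the ensemble, as a function of ensemble size. The synthetic single-ping series below (instrument noise plus a slowly varying component) is only a stand-in for the real record:

```python
# Sketch: a posteriori error of the ensemble-averaged velocity versus
# ensemble size, estimated from a single-ping series. The synthetic series
# (instrument noise plus a slowly varying "external" component) only
# stands in for the real 36-s single-ping record.
import numpy as np

rng = np.random.default_rng(2)
n_pings = 50_000
instrument_noise = 0.14 * rng.standard_normal(n_pings)          # m/s
slow_signal = np.cumsum(0.005 * rng.standard_normal(n_pings))   # external variability
u = slow_signal + instrument_noise

for n in (5, 10, 25, 50):
    blocks = u[: (n_pings // n) * n].reshape(-1, n)
    # Standard error of each ensemble mean, estimated from the spread of
    # the pings inside the ensemble, then averaged over all ensembles.
    err = (blocks.std(axis=1, ddof=1) / np.sqrt(n)).mean()
    print(f"{n:3d} pings per ensemble: mean a posteriori error ~ {err:.3f} m/s")
```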

  10. Fidelity decay in interacting two-level boson systems: Freezing and revivals

    NASA Astrophysics Data System (ADS)

    Benet, Luis; Hernández-Quiroz, Saúl; Seligman, Thomas H.

    2011-05-01

    We study the fidelity decay in the k-body embedded ensembles of random matrices for bosons distributed in two single-particle states, considering the reference or unperturbed Hamiltonian as the one-body terms and the diagonal part of the k-body embedded ensemble of random matrices and the perturbation as the residual off-diagonal part of the interaction. We calculate the ensemble-averaged fidelity with respect to an initial random state within linear response theory to second order on the perturbation strength and demonstrate that it displays the freeze of the fidelity. During the freeze, the average fidelity exhibits periodic revivals at integer values of the Heisenberg time tH. By selecting specific k-body terms of the residual interaction, we find that the periodicity of the revivals during the freeze of fidelity is an integer fraction of tH, thus relating the period of the revivals with the range of the interaction k of the perturbing terms. Numerical calculations confirm the analytical results.

  11. Mean first passage time for random walk on dual structure of dendrimer

    NASA Astrophysics Data System (ADS)

    Li, Ling; Guan, Jihong; Zhou, Shuigeng

    2014-12-01

    The random walk approach has recently been widely employed to study the relations between the underlying structure and dynamics of complex systems. The mean first-passage time (MFPT) for random walks is a key index for evaluating the transport efficiency in a given system. In this paper we study analytically the MFPT on the dual structure of the dendrimer network, the Husimi cactus, which has a different application background and a different structure (it contains loops) from the dendrimer. By making use of the iterative construction, we explicitly determine both the partial mean first-passage time (PMFPT, the average of MFPTs to a given target) and the global mean first-passage time (GMFPT, the average of MFPTs over all pairs of nodes) on the Husimi cactus. The obtained closed-form results show that PMFPT and GMFPT follow different scalings with the network order, suggesting that the target location has an essential influence on the transport efficiency. Finally, the impact of the loop structure is analyzed and discussed.
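
    The MFPT quantities mentioned above can be illustrated with a small sketch that solves the first-passage linear system for an unbiased random walk on a graph; the random regular graph below is only a stand-in, not the Husimi cactus construction of the paper:

```python
# Sketch: mean first-passage times of an unbiased random walk on a graph,
# obtained by solving the linear system (I - Q) m = 1, where Q is the
# transition matrix with the target row and column removed. A small random
# regular graph stands in for the Husimi cactus.
import numpy as np
import networkx as nx

G = nx.random_regular_graph(d=3, n=30, seed=3)
A = nx.to_numpy_array(G)
P = A / A.sum(axis=1, keepdims=True)        # transition matrix of the walk
n = P.shape[0]

def mfpt_to_target(P, target):
    idx = [i for i in range(P.shape[0]) if i != target]
    Q = P[np.ix_(idx, idx)]
    m = np.linalg.solve(np.eye(len(idx)) - Q, np.ones(len(idx)))
    full = np.zeros(P.shape[0])
    full[idx] = m
    return full                              # full[i] = MFPT from i to target

# PMFPT: average MFPT to one given target; GMFPT: average over all pairs.
pmfpt = mfpt_to_target(P, target=0)[1:].mean()
gmfpt = np.mean([mfpt_to_target(P, t)[np.arange(n) != t].mean()
                 for t in range(n)])
print(f"PMFPT(target 0) = {pmfpt:.2f}, GMFPT = {gmfpt:.2f}")
```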

  12. Peer Influence, Genetic Propensity, and Binge Drinking: A Natural Experiment and a Replication.

    PubMed

    Guo, Guang; Li, Yi; Wang, Hongyu; Cai, Tianji; Duncan, Greg J

    2015-11-01

    The authors draw data from the College Roommate Study (ROOM) and the National Longitudinal Study of Adolescent Health to investigate gene-environment interaction effects on youth binge drinking. In ROOM, the environmental influence was measured by the precollege drinking behavior of randomly assigned roommates. Random assignment safeguards against friend selection and removes the threat of gene-environment correlation that makes gene-environment interaction effects difficult to interpret. On average, being randomly assigned a drinking peer as opposed to a nondrinking peer increased college binge drinking by 0.5-1.0 episodes per month, or 20%-40% the average amount of binge drinking. However, this peer influence was found only among youths with a medium level of genetic propensity for alcohol use; those with either a low or high genetic propensity were not influenced by peer drinking. A replication of the findings is provided in data drawn from Add Health. The study shows that gene-environment interaction analysis can uncover social-contextual effects likely to be missed by traditional sociological approaches.

  13. Unimodular lattice triangulations as small-world and scale-free random graphs

    NASA Astrophysics Data System (ADS)

    Krüger, B.; Schmidt, E. M.; Mecke, K.

    2015-02-01

    Real-world networks, e.g., the social relations or world-wide-web graphs, exhibit both small-world and scale-free behaviour. We interpret lattice triangulations as planar graphs by identifying triangulation vertices with graph nodes and one-dimensional simplices with edges. Since these triangulations are ergodic with respect to a certain Pachner flip, applying different Monte Carlo simulations enables us to calculate average properties of random triangulations, as well as canonical ensemble averages, using an energy functional that is approximately the variance of the degree distribution. All considered triangulations have clustering coefficients comparable with real-world graphs; for the canonical ensemble there are inverse temperatures with small shortest path length independent of system size. Tuning the inverse temperature to a quasi-critical value leads to an indication of scale-free behaviour for degrees k ≥ 5. Using triangulations as a random graph model can improve the understanding of real-world networks, especially if the actual distance of the embedded nodes becomes important.

  14. Effects of Long-Term Acupuncture Treatment on Resting-State Brain Activity in Migraine Patients: A Randomized Controlled Trial on Active Acupoints and Inactive Acupoints

    PubMed Central

    Zhao, Ling; Liu, Jixin; Zhang, Fuwen; Dong, Xilin; Peng, Yulin; Qin, Wei; Wu, Fumei; Li, Ying; Yuan, Kai; von Deneen, Karen M.; Gong, Qiyong; Tang, Zili; Liang, Fanrong

    2014-01-01

    Background Acupuncture has been commonly used for preventing migraine attacks and relieving pain during a migraine, although there is limited knowledge on the physiological mechanism behind this method. The objectives of this study were to compare the differences in brain activities evoked by active acupoints and inactive acupoints and to investigate the possible correlation between clinical variables and brain responses. Methods and Results A randomized controlled trial and resting-state functional magnetic resonance imaging (fMRI) were conducted. A total of eighty migraineurs without aura were enrolled to receive either active acupoint acupuncture or inactive acupoint acupuncture treatment for 8 weeks, and twenty patients in each group were randomly selected for the fMRI scan at the end of baseline and at the end of treatment. The neuroimaging data indicated that long-term active acupoint therapy elicited a more extensive and remarkable cerebral response compared with acupuncture at inactive acupoints. Most of the regions were involved in the pain matrix, lateral pain system, medial pain system, default mode network, and cognitive components of pain processing. Correlation analysis showed that the decrease in the visual analogue scale (VAS) was significantly related to the increased average Regional homogeneity (ReHo) values in the anterior cingulate cortex in the two groups. Moreover, the decrease in the VAS was associated with increased average ReHo values in the insula which could be detected in the active acupoint group. Conclusions Long-term active acupoint therapy and inactive acupoint therapy have different brain activities. We postulate that acupuncture at the active acupoint might have the potential effect of regulating some disease-affected key regions and the pain circuitry for migraine, and promote establishing psychophysical pain homeostasis. Trial Registration Chinese Clinical Trial Registry ChiCTR-TRC-13003635 PMID:24915066

  15. On the Concept of Random Orientation in Far-Field Electromagnetic Scattering by Nonspherical Particles

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Yurkin, Maxim A.

    2017-01-01

    Although the model of randomly oriented nonspherical particles has been used in a great variety of applications of far-field electromagnetic scattering, it has never been defined in strict mathematical terms. In this Letter we use the formalism of Euler rigid-body rotations to clarify the concept of statistically random particle orientations and derive its immediate corollaries in the form of most general mathematical properties of the orientation-averaged extinction and scattering matrices. Our results serve to provide a rigorous mathematical foundation for numerous publications in which the notion of randomly oriented particles and its light-scattering implications have been considered intuitively obvious.

  16. A mathematical study of a random process proposed as an atmospheric turbulence model

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1977-01-01

    A random process is formed by the product of a local Gaussian process and a random amplitude process, and the sum of that product with an independent mean value process. The mathematical properties of the resulting process are developed, including the first and second order properties and the characteristic function of general order. An approximate method for the analysis of the response of linear dynamic systems to the process is developed. The transition properties of the process are also examined.
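
    A minimal sketch of the model's product-plus-mean structure follows; the particular component processes chosen below (AR(1) Gaussian, slowly varying amplitude, slow mean drift) are illustrative assumptions rather than the paper's definitions:

```python
# Sketch of the structure described above: a local Gaussian process
# multiplied by a random amplitude process, plus an independent mean-value
# process. The specific components below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n = 20_000

def ar1(phi, size):
    """Unit-variance AR(1) process with autoregression coefficient phi."""
    x = np.zeros(size)
    eps = rng.standard_normal(size)
    for i in range(1, size):
        x[i] = phi * x[i - 1] + np.sqrt(1 - phi ** 2) * eps[i]
    return x

gaussian_local = ar1(0.9, n)                 # local Gaussian process
amplitude = np.abs(ar1(0.999, n)) + 0.5      # slowly varying random amplitude
mean_value = 0.2 * ar1(0.9999, n)            # independent mean-value process

turbulence = gaussian_local * amplitude + mean_value
print(turbulence.mean(), turbulence.std())
```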

  17. Evolutionary mixed games in structured populations: Cooperation and the benefits of heterogeneity

    NASA Astrophysics Data System (ADS)

    Amaral, Marco A.; Wardil, Lucas; Perc, Matjaž; da Silva, Jafferson K. L.

    2016-04-01

    Evolutionary games on networks traditionally involve the same game at each interaction. Here we depart from this assumption by considering mixed games, where the game played at each interaction is drawn uniformly at random from a set of two different games. While in well-mixed populations the random mixture of the two games is always equivalent to the average single game, in structured populations this is not always the case. We show that the outcome is, in fact, strongly dependent on the distance of separation of the two games in the parameter space. Effectively, this distance introduces payoff heterogeneity, and the average game is returned only if the heterogeneity is small. For higher levels of heterogeneity the distance to the average game grows, which often involves the promotion of cooperation. The presented results support preceding research that highlights the favorable role of heterogeneity regardless of its origin, and they also emphasize the importance of the population structure in amplifying facilitators of cooperation.

  18. Results of a large-scale randomized behavior change intervention on road safety in Kenya.

    PubMed

    Habyarimana, James; Jack, William

    2015-08-25

    Road accidents kill 1.3 million people each year, most in the developing world. We test the efficacy of evocative messages, delivered on stickers placed inside Kenyan matatus, or minibuses, in reducing road accidents. We randomize the intervention, which nudges passengers to complain to their drivers directly, across 12,000 vehicles and find that on average it reduces insurance claims rates of matatus by between one-quarter and one-third and is associated with 140 fewer road accidents per year than predicted. Messages promoting collective action are especially effective, and evocative images are an important motivator. Average maximum speeds and average moving speeds are 1-2 km/h lower in vehicles assigned to treatment. We cannot reject the null hypothesis of no placebo effect. We were unable to discern any impact of a complementary radio campaign on insurance claims. Finally, the sticker intervention is inexpensive: we estimate the cost-effectiveness of the most impactful stickers to be between $10 and $45 per disability-adjusted life-year saved.

  19. Evolutionary mixed games in structured populations: Cooperation and the benefits of heterogeneity.

    PubMed

    Amaral, Marco A; Wardil, Lucas; Perc, Matjaž; da Silva, Jafferson K L

    2016-04-01

    Evolutionary games on networks traditionally involve the same game at each interaction. Here we depart from this assumption by considering mixed games, where the game played at each interaction is drawn uniformly at random from a set of two different games. While in well-mixed populations the random mixture of the two games is always equivalent to the average single game, in structured populations this is not always the case. We show that the outcome is, in fact, strongly dependent on the distance of separation of the two games in the parameter space. Effectively, this distance introduces payoff heterogeneity, and the average game is returned only if the heterogeneity is small. For higher levels of heterogeneity the distance to the average game grows, which often involves the promotion of cooperation. The presented results support preceding research that highlights the favorable role of heterogeneity regardless of its origin, and they also emphasize the importance of the population structure in amplifying facilitators of cooperation.

  20. OSI Network-layer Abstraction: Analysis of Simulation Dynamics and Performance Indicators

    NASA Astrophysics Data System (ADS)

    Lawniczak, Anna T.; Gerisch, Alf; Di Stefano, Bruno

    2005-06-01

    The Open Systems Interconnection (OSI) reference model provides a conceptual framework for communication among computers in a data communication network. The Network Layer of this model is responsible for the routing and forwarding of packets of data. We investigate the OSI Network Layer and develop an abstraction suitable for the study of various network performance indicators, e.g. throughput, average packet delay, average packet speed, average packet path-length, etc. We investigate how the network dynamics and the network performance indicators are affected by various routing algorithms and by the addition of randomly generated links into a regular network connection topology of fixed size. We observe that the network dynamics is not simply the sum of effects resulting from adding individual links to the connection topology but rather is governed nonlinearly by the complex interactions caused by the existence of all randomly added and already existing links in the network. Data for our study was gathered using Netzwerk-1, a C++ simulation tool that we developed for our abstraction.

  1. The fractional urinary fluoride excretion of adults consuming naturally and artificially fluoridated water and the influence of water hardness: a randomized trial.

    PubMed

    Villa, A; Cabezas, L; Anabalón, M; Rugg-Gunn, A

    2009-09-01

    To assess whether there was any significant difference in the average fractional urinary fluoride excretion (FUFE) values among adults consuming (NaF) fluoridated Ca-free water (reference water), naturally fluoridated hard water and an artificially (H2SiF6) fluoridated soft water. Sixty adult females (N=20 for each treatment) participated in this randomized, double-blind trial. The experimental design of this study provided an indirect estimation of the fluoride absorption in different types of water through the assessment of the fractional urinary fluoride excretion of volunteers. Average daily FUFE values (daily amount of fluoride excreted in urine/daily total fluoride intake) were not significantly different between the three treatments (Kruskal-Wallis; p = 0.62). The average 24-hour FUFE value (n=60) was 0.69; 95% C.I. 0.65-0.73. The results of this study suggest that the absorption of fluoride is not affected by water hardness.

  2. Literacy Learning of At-Risk First-Grade Students in the Reading Recovery Early Intervention

    ERIC Educational Resources Information Center

    Schwartz, Robert M.

    2005-01-01

    This study investigated the effectiveness and efficiency of the Reading Recovery early intervention. At-risk 1st-grade students were randomly assigned to receive the intervention during the 1st or 2nd half of the school year. High-average and low-average students from the same classrooms provided additional comparisons. Thirty-seven teachers from…

  3. Using Propensity Score Matching Methods to Improve Generalization from Randomized Experiments

    ERIC Educational Resources Information Center

    Tipton, Elizabeth

    2011-01-01

    The main result of an experiment is typically an estimate of the average treatment effect (ATE) and its standard error. In most experiments, the number of covariates that may be moderators is large. One way this issue is typically skirted is by interpreting the ATE as the average effect for "some" population. Cornfield and Tukey (1956)…

  4. Classifying plant series-level forest potential types: methods for subbasins sampled in the midscale assessment of the interior Columbia basin.

    Treesearch

    Paul F. Hessburg; Bradley G. Smith; Scott D. Kreiter; Craig A. Miller; Cecilia H. McNicoll; Michele. Wasienko-Holland

    2000-01-01

    In the interior Columbia River basin midscale ecological assessment, we mapped and characterized historical and current vegetation composition and structure of 337 randomly sampled subwatersheds (9500 ha average size) in 43 subbasins (404 000 ha average size). We compared landscape patterns, vegetation structure and composition, and landscape vulnerability to wildfires...

  5. Efficacy of mummy on healing of pressure ulcers: A randomized controlled clinical trial on hospitalized patients in intensive care unit

    PubMed Central

    Moghadari, Masoud; Rezvanipour, Mozafar; Mehrabani, Mitra; Ahmadinejad, Mehdi; Hashempur, Mohammad Hashem

    2018-01-01

    Background Mummy is a mineral substance which, according to Persian medicine texts, may be useful in the treatment of chronic ulcers. Objective The present study was performed with the aim of determining the effect of mummy on the healing of pressure ulcers in male patients who had been hospitalized in the Intensive Care Unit due to cerebrospinal injury. Methods This randomized, placebo-controlled clinical trial was performed on 75 patients with pressure ulcers at Shahid Bahonar Hospital in Kerman, Iran, from September 2016 to March 2017. The control group received normal saline and routine wound dressing, while the intervention group received mummy water solution 20% in addition to normal saline and routine wound dressing on a daily basis. Data were recorded based on the PUSH method. In both groups, ulcers were evaluated on days 0, 7, 14, 21 and 28 for the variables of ulcer surface area, the amount of exudate and type of tissue. Data analysis was done with SPSS 21 using t-test, Repeated Measure Analysis, Cox Regression and Chi-square. Results Both groups showed reduction in the average ulcer surface area (3.26 to 0.53 in the intervention group and 5.1 to 3.46 in the control group), the average exudate amount (1.26 to 0.26 in the intervention group and 1.83 to 1.06 in the control group) and the average tissue score (1.36 to 0.23 in the intervention group and 2.13 to 1.26 in the control group). Over the entire study period, the intervention group showed more acceptable signs of healing compared to the control group (p<0.05). Conclusion The healing process was more prominent in the intervention group than the control group. Clinical trial registration The trial was registered at the Iranian Registry of Clinical Trials with registered NO. (IRCT2014042917494N1) (29/04/2014). Funding No financial support for the research. PMID:29588812

  6. Non-universal tracer diffusion in crowded media of non-inert obstacles.

    PubMed

    Ghosh, Surya K; Cherstvy, Andrey G; Metzler, Ralf

    2015-01-21

    We study the diffusion of a tracer particle, which moves in continuum space between a lattice of excluded volume, immobile non-inert obstacles. In particular, we analyse how the strength of the tracer-obstacle interactions and the volume occupancy of the crowders alter the diffusive motion of the tracer. From the details of partitioning of the tracer diffusion modes between trapping states when bound to obstacles and bulk diffusion, we examine the degree of localisation of the tracer in the lattice of crowders. We study the properties of the tracer diffusion in terms of the ensemble and time averaged mean squared displacements, the trapping time distributions, the amplitude variation of the time averaged mean squared displacements, and the non-Gaussianity parameter of the diffusing tracer. We conclude that tracer-obstacle adsorption and binding trigger a transient anomalous diffusion. From a very narrow spread of recorded individual time averaged trajectories we exclude continuous time random walk processes as the underlying physical model of the tracer diffusion in our system. For moderate tracer-crowder attraction the motion is found to be fully ergodic, while at stronger attraction strength a transient disparity between ensemble and time averaged mean squared displacements occurs. We also put our results into perspective with findings from experimental single-particle tracking and simulations of the diffusion of tagged tracers in dense crowded suspensions. Our results have implications for the diffusion, transport, and spreading of chemical components in highly crowded environments inside living cells and other structured liquids.
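
    A minimal sketch of the ensemble-averaged versus time-averaged mean squared displacements discussed above; the synthetic 1D trajectories with random immobile periods merely stand in for the tracer-in-crowders simulations:

```python
# Sketch: ensemble-averaged versus time-averaged mean squared displacement
# (MSD) from a set of 1D trajectories. Trajectories with random immobile
# (trapping) steps stand in for the tracer-in-crowders simulations.
import numpy as np

rng = np.random.default_rng(5)
n_traj, n_steps = 200, 2_000

steps = rng.standard_normal((n_traj, n_steps))
trapped = rng.random((n_traj, n_steps)) < 0.3       # 30% of steps immobile
x = np.cumsum(np.where(trapped, 0.0, steps), axis=1)

lags = np.array([1, 2, 5, 10, 20, 50, 100])

# Ensemble-averaged MSD: average over trajectories at a fixed lag from the origin.
emsd = [((x[:, lag] - x[:, 0]) ** 2).mean() for lag in lags]

# Time-averaged MSD of a single trajectory, then averaged over the ensemble.
def tamsd(traj, lag):
    return ((traj[lag:] - traj[:-lag]) ** 2).mean()

tmsd = [np.mean([tamsd(x[i], lag) for i in range(n_traj)]) for lag in lags]

for lag, e, t in zip(lags, emsd, tmsd):
    print(f"lag {lag:4d}: ensemble MSD {e:8.2f}, time-averaged MSD {t:8.2f}")
```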

  7. A scattering database of marine particles and its application in optical analysis

    NASA Astrophysics Data System (ADS)

    Xu, G.; Yang, P.; Kattawar, G.; Zhang, X.

    2016-12-01

    In modeling the scattering properties of marine particles (e.g. phytoplankton), the laboratory studies imply a need to properly account for the influence of particle morphology, in addition to size and composition. In this study, a marine particle scattering database is constructed using a collection of distorted hexahedral shapes. Specifically, the scattering properties of each size bin and refractive index are obtained by the ensemble average associated with distorted hexahedra with randomly tilted facets and selected aspect ratios (from elongated to flattened). The randomness degree in shape-generation process defines the geometric irregularity of the particles in the group. The geometric irregularity and particle aspect ratios constitute a set of "shape factors" to be accounted for (e.g. in best-fit analysis). To cover most of the marine particle size range, we combine the Invariant Imbedding T-matrix (II-TM) method and the Physical-Geometric Optics Hybrid (PGOH) method in the calculations. The simulated optical properties are shown and compared with those obtained from Lorenz-Mie Theory. Using the scattering database, we present a preliminary optical analysis of laboratory-measured optical properties of marine particles.

  8. Signatures of bifurcation on quantum correlations: Case of the quantum kicked top

    NASA Astrophysics Data System (ADS)

    Bhosale, Udaysinh T.; Santhanam, M. S.

    2017-01-01

    Quantum correlations reflect the quantumness of a system and are useful resources for quantum information and computational processes. Measures of quantum correlations do not have a classical analog and yet are influenced by classical dynamics. In this work, by modeling the quantum kicked top as a multiqubit system, the effect of classical bifurcations on measures of quantum correlations such as the quantum discord, geometric discord, and Meyer and Wallach Q measure is studied. The quantum correlation measures change rapidly in the vicinity of a classical bifurcation point. If the classical system is largely chaotic, time averages of the correlation measures are in good agreement with the values obtained by considering the appropriate random matrix ensembles. The quantum correlations scale with the total spin of the system, representing its semiclassical limit. In the vicinity of trivial fixed points of the kicked top, the scaling function decays as a power law. In the chaotic limit, for large total spin, quantum correlations saturate to a constant, which we obtain analytically, based on random matrix theory, for the Q measure. We also suggest that it can have experimental consequences.

  9. Temporal coherence of the acoustic field forward propagated through a continental shelf with random internal waves.

    PubMed

    Gong, Zheng; Chen, Tianrun; Ratilal, Purnima; Makris, Nicholas C

    2013-11-01

    An analytical model derived from normal mode theory for the accumulated effects of range-dependent multiple forward scattering is applied to estimate the temporal coherence of the acoustic field forward propagated through a continental-shelf waveguide containing random three-dimensional internal waves. The modeled coherence time scale of narrow band low-frequency acoustic field fluctuations after propagating through a continental-shelf waveguide is shown to decay with range as a power law with exponent -1/2 beyond roughly 1 km, to decrease with increasing internal wave energy, and to be consistent with measured acoustic coherence time scales. The model should provide a useful prediction of the acoustic coherence time scale as a function of internal wave energy in continental-shelf environments. The acoustic coherence time scale is an important parameter in remote sensing applications because it determines (i) the time window within which standard coherent processing such as matched filtering may be conducted, and (ii) the number of statistically independent fluctuations in a given measurement period that determines the variance reduction possible by stationary averaging.

  10. Random bits, true and unbiased, from atmospheric turbulence

    PubMed Central

    Marangon, Davide G.; Vallone, Giuseppe; Villoresi, Paolo

    2014-01-01

    Random numbers are a fundamental ingredient for secure communications and numerical simulation, as well as for games and, in general, for information science. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. Optical propagation through strong atmospheric turbulence is exploited here for this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed very selective randomness tests for qualification as genuine random numbers. The extraction algorithm can easily be generalized to random images generated by different physical processes. PMID:24976499

  11. Ensemble Feature Learning of Genomic Data Using Support Vector Machine

    PubMed Central

    Anaissi, Ali; Goyal, Madhu; Catchpoole, Daniel R.; Braytee, Ali; Kennedy, Paul J.

    2016-01-01

    The identification of a subset of genes able to capture the information necessary to distinguish classes of patients is crucial in bioinformatics applications. Ensemble and bagging methods have been shown to work effectively in the process of gene selection and classification. Testament to that is random forest, which combines random decision trees with bagging to improve overall feature selection and classification accuracy. Surprisingly, the adoption of these ideas in support vector machines has only recently received attention, and mostly for classification rather than gene selection. This paper introduces an ensemble SVM-Recursive Feature Elimination (ESVM-RFE) method for gene selection that follows the ensemble and bagging concepts used in random forest but adopts the backward elimination strategy that underlies the RFE algorithm. The rationale is that building ensemble SVM models on randomly drawn bootstrap samples from the training set produces different feature rankings, which are subsequently aggregated into one feature ranking. As a result, the decision to eliminate features is based upon the rankings of multiple SVM models instead of one particular model. Moreover, this approach addresses the problem of imbalanced datasets by constructing nearly balanced bootstrap samples. Our experiments show that ESVM-RFE for gene selection substantially increased the classification performance on five microarray datasets compared to state-of-the-art methods. Experiments on the childhood leukaemia dataset show that ESVM-RFE achieves, on average, 9% better accuracy than SVM-RFE and 5% better than a random forest based approach. The genes selected by the ESVM-RFE algorithm were further explored with Singular Value Decomposition (SVD), which reveals significant clusters in the selected data. PMID:27304923
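
    A minimal sketch of the ensemble SVM-RFE idea, assuming scikit-learn: RFE with a linear-kernel SVM is run on bootstrap resamples and the per-model rankings are aggregated. Synthetic data stand in for the microarray datasets, and the balancing of bootstrap samples is omitted for brevity:

```python
# Sketch of the ensemble SVM-RFE idea: run RFE with a linear-kernel SVM on
# bootstrap resamples of the training set and aggregate the per-model
# feature rankings. Synthetic data replace the microarray datasets and the
# bootstrap balancing step is omitted for brevity.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=200, n_informative=10,
                           random_state=0)
rng = np.random.default_rng(6)

n_models = 25
rank_sum = np.zeros(X.shape[1])
for _ in range(n_models):
    idx = rng.integers(0, X.shape[0], size=X.shape[0])    # bootstrap sample
    rfe = RFE(SVC(kernel="linear"), n_features_to_select=10, step=0.1)
    rfe.fit(X[idx], y[idx])
    rank_sum += rfe.ranking_                               # 1 = best rank

ensemble_ranking = np.argsort(rank_sum)      # feature indices, best first
print("top 10 features by aggregated ranking:", ensemble_ranking[:10])
```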

  12. Epidemiology, epigenetics and the 'Gloomy Prospect': embracing randomness in population health research and practice.

    PubMed

    Smith, George Davey

    2011-06-01

    Epidemiologists aim to identify modifiable causes of disease, this often being a prerequisite for the application of epidemiological findings in public health programmes, health service planning and clinical medicine. Despite successes in identifying causes, it is often claimed that there are missing additional causes for even reasonably well-understood conditions such as lung cancer and coronary heart disease. Several lines of evidence suggest that largely chance events, from the biographical down to the sub-cellular, contribute an important stochastic element to disease risk that is not epidemiologically tractable at the individual level. Epigenetic influences provide a fashionable contemporary explanation for such seemingly random processes. Chance events-such as a particular lifelong smoker living unharmed to 100 years-are averaged out at the group level. As a consequence population-level differences (for example, secular trends or differences between administrative areas) can be entirely explicable by causal factors that appear to account for only a small proportion of individual-level risk. In public health terms, a modifiable cause of the large majority of cases of a disease may have been identified, with a wild goose chase continuing in an attempt to discipline the random nature of the world with respect to which particular individuals will succumb. The quest for personalized medicine is a contemporary manifestation of this dream. An evolutionary explanation of why randomness exists in the development of organisms has long been articulated, in terms of offering a survival advantage in changing environments. Further, the basic notion that what is near-random at one level may be almost entirely predictable at a higher level is an emergent property of many systems, from particle physics to the social sciences. These considerations suggest that epidemiological approaches will remain fruitful as we enter the decade of the epigenome.

  13. High power tunable mid-infrared optical parametric oscillator enabled by random fiber laser.

    PubMed

    Wu, Hanshuo; Wang, Peng; Song, Jiaxin; Ye, Jun; Xu, Jiangming; Li, Xiao; Zhou, Pu

    2018-03-05

    Random fiber laser, as a kind of novel fiber laser that utilizes random distributed feedback as well as Raman gain, has become a research focus owing to its advantages of wavelength flexibility, modeless property and output stability. Herein, a tunable optical parametric oscillator (OPO) enabled by a random fiber laser is reported for the first time. By exploiting a tunable random fiber laser to pump the OPO, the central wavelength of idler light can be continuously tuned from 3977.34 to 4059.65 nm with stable temporal average output power. The maximal output power achieved is 2.07 W. So far as we know, this is the first demonstration of a continuous-wave tunable OPO pumped by a tunable random fiber laser, which could not only provide a new approach for achieving tunable mid-infrared (MIR) emission, but also extend the application scenarios of random fiber lasers.

  14. Quasi-analytical treatment of spatially averaged radiation transfer in complex terrain

    NASA Astrophysics Data System (ADS)

    Löwe, H.; Helbig, N.

    2012-10-01

    We provide a new quasi-analytical method to compute the subgrid topographic influences on the shortwave radiation fluxes and the effective albedo in complex terrain as required for large-scale meteorological, land surface, or climate models. We investigate radiative transfer in complex terrain via the radiosity equation on isotropic Gaussian random fields. Under controlled approximations we derive expressions for domain-averaged fluxes of direct, diffuse, and terrain radiation and the sky view factor. Domain-averaged quantities can be related to a type of level-crossing probability of the random field, which is approximated by long-standing results developed for acoustic scattering at ocean boundaries. This allows us to express all nonlocal horizon effects in terms of a local terrain parameter, namely, the mean-square slope. Emerging integrals are computed numerically, and fit formulas are given for practical purposes. As an implication of our approach, we provide an expression for the effective albedo of complex terrain in terms of the Sun elevation angle, mean-square slope, the area-averaged surface albedo, and the ratio of atmospheric direct beam to diffuse radiation. For demonstration we compute the decrease of the effective albedo relative to the area-averaged albedo in Switzerland for idealized snow-covered and clear-sky conditions at noon in winter. We find an average decrease of 5.8% and spatial patterns which originate from characteristics of the underlying relief. Limitations and possible generalizations of the method are discussed.

  15. A random-walk/giant-loop model for interphase chromosomes.

    PubMed Central

    Sachs, R K; van den Engh, G; Trask, B; Yokota, H; Hearst, J E

    1995-01-01

    Fluorescence in situ hybridization data on distances between defined genomic sequences are used to construct a quantitative model for the overall geometric structure of a human chromosome. We suggest that the large-scale geometry during the G0/G1 part of the cell cycle may consist of flexible chromatin loops, averaging approximately 3 million bp, with a random-walk backbone. A fully explicit, three-parametric polymer model of this random-walk/giant-loop structure can account well for the data. More general models consistent with the data are briefly discussed. PMID:7708711

  16. A random matrix approach to credit risk.

    PubMed

    Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas

    2014-01-01

    We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.

  17. A Random Matrix Approach to Credit Risk

    PubMed Central

    Guhr, Thomas

    2014-01-01

    We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided. PMID:24853864
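
    A minimal sketch of an ensemble of randomly fluctuating correlation matrices built from short random time series (a Wishart-type construction): the off-diagonal average is near zero while individual members still carry sizeable correlations, the kind of fluctuation the paper shows alters the tails of the loss distribution. All parameters are illustrative assumptions:

```python
# Sketch: an ensemble of randomly fluctuating correlation matrices with
# zero average correlation, built from short random time series.
# K assets, T time points, and the ensemble size are illustrative.
import numpy as np

rng = np.random.default_rng(7)
K, T, n_matrices = 50, 60, 500

off_diag_means, off_diag_stds = [], []
for _ in range(n_matrices):
    returns = rng.standard_normal((K, T))
    C = np.corrcoef(returns)                 # K x K sample correlation matrix
    off = C[np.triu_indices(K, k=1)]         # off-diagonal correlations
    off_diag_means.append(off.mean())
    off_diag_stds.append(off.std())

print(f"average correlation over the ensemble: {np.mean(off_diag_means):+.4f}")
print(f"typical correlation spread in a member: {np.mean(off_diag_stds):.3f}")
```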

  18. Multiple Scattering in Planetary Regoliths Using Incoherent Interactions

    NASA Astrophysics Data System (ADS)

    Muinonen, K.; Markkanen, J.; Vaisanen, T.; Penttilä, A.

    2017-12-01

    We consider scattering of light by a planetary regolith using novel numerical methods for discrete random media of particles. Understanding the scattering process is of key importance for spectroscopic, photometric, and polarimetric modeling of airless planetary objects, including radar studies. In our modeling, the size of the spherical random medium can range from microscopic to macroscopic sizes, whereas the particles are assumed to be of the order of the wavelength in size. We extend the radiative transfer and coherent backscattering method (RT-CB) to the case of dense packing of particles by adopting the ensemble-averaged first-order incoherent extinction, scattering, and absorption characteristics of a volume element of particles as input. In the radiative transfer part, at each absorption and scattering process, we account for absorption with the help of the single-scattering albedo and peel off the Stokes parameters of radiation emerging from the medium in predefined scattering angles. We then generate a new scattering direction using the joint probability density for the local polar and azimuthal scattering angles. In the coherent backscattering part, we utilize amplitude scattering matrices along the radiative-transfer path and the reciprocal path. Furthermore, we replace the far-field interactions of the RT-CB method with rigorous interactions facilitated by the Superposition T-matrix method (STMM). This gives rise to a new RT-RT method, radiative transfer with reciprocal interactions. For microscopic random media, we then compare the new results to asymptotically exact results computed using the STMM, succeeding in the numerical validation of the new methods.Acknowledgments. Research supported by European Research Council with Advanced Grant No. 320773 SAEMPL, Scattering and Absorption of ElectroMagnetic waves in ParticuLate media. Computational resources provided by CSC - IT Centre for Science Ltd, Finland.

  19. Hydroclimatic projections for the Murray-Darling Basin based on an ensemble derived from Intergovernmental Panel on Climate Change AR4 climate models

    NASA Astrophysics Data System (ADS)

    Sun, Fubao; Roderick, Michael L.; Lim, Wee Ho; Farquhar, Graham D.

    2011-12-01

    We assess hydroclimatic projections for the Murray-Darling Basin (MDB) using an ensemble of 39 Intergovernmental Panel on Climate Change AR4 climate model runs based on the A1B emissions scenario. The raw model output for precipitation, P, was adjusted using a quantile-based bias correction approach. We found that the projected change, ΔP, between two 30 year periods (2070-2099 less 1970-1999) was little affected by bias correction. The range for ΔP among models was large (~±150 mm yr⁻¹) with all-model run and all-model ensemble averages (4.9 and -8.1 mm yr⁻¹) near zero, against a background climatological P of ~500 mm yr⁻¹. We found that the time series of actually observed annual P over the MDB was indistinguishable from that generated by a purely random process. Importantly, nearly all the model runs showed similar behavior. We used these facts to develop a new approach to understanding variability in projections of ΔP. By plotting ΔP versus the variance of the time series, we could easily identify model runs with projections for ΔP that were beyond the bounds expected from purely random variations. For the MDB, we anticipate that a purely random process could lead to differences of ±57 mm yr⁻¹ (95% confidence) between successive 30 year periods. This is equivalent to ±11% of the climatological P and translates into variations in runoff of around ±29%. This sets a baseline for gauging modeled and/or observed changes.
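
    The ±57 mm yr⁻¹ figure follows from simple sampling statistics: if annual P behaves like an independent random draw with standard deviation σ, the difference between two non-overlapping 30-year means has standard deviation σ√(2/30), and the 95% bound is 1.96 times that. The sketch below uses an illustrative σ, not the value fitted in the paper:

```python
# Sketch of the logic behind the +/-57 mm/yr bound. The interannual
# standard deviation sigma used here is an illustrative assumption.
import numpy as np

sigma = 113.0          # assumed interannual std of basin precipitation [mm/yr]
n_years = 30

bound_95 = 1.96 * np.sqrt(2.0 / n_years) * sigma
print(f"95% bound on a purely random 30-yr-mean difference: +/-{bound_95:.0f} mm/yr")

# Monte Carlo check with white-noise annual series around 500 mm/yr.
rng = np.random.default_rng(8)
diffs = (rng.normal(500, sigma, (100_000, n_years)).mean(axis=1)
         - rng.normal(500, sigma, (100_000, n_years)).mean(axis=1))
print(f"Monte Carlo 95% bound on |dP|: {np.percentile(np.abs(diffs), 95):.0f} mm/yr")
```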

  20. Dynamics of intracranial electroencephalographic recordings from epilepsy patients using univariate and bivariate recurrence networks.

    PubMed

    Subramaniyam, Narayan Puthanmadam; Hyttinen, Jari

    2015-02-01

    Recently Andrzejak et al. combined the randomness and nonlinear independence test with iterative amplitude adjusted Fourier transform (iAAFT) surrogates to distinguish between the dynamics of seizure-free intracranial electroencephalographic (EEG) signals recorded from epileptogenic (focal) and nonepileptogenic (nonfocal) brain areas of epileptic patients. However, stationarity is a part of the null hypothesis for iAAFT surrogates and thus nonstationarity can violate the null hypothesis. In this work we first propose the application of the randomness and nonlinear independence test based on recurrence network measures to distinguish between the dynamics of focal and nonfocal EEG signals. Furthermore, we combine these tests with both iAAFT and truncated Fourier transform (TFT) surrogate methods, which also preserves the nonstationarity of the original data in the surrogates along with its linear structure. Our results indicate that focal EEG signals exhibit an increased degree of structural complexity and interdependency compared to nonfocal EEG signals. In general, we find higher rejections for randomness and nonlinear independence tests for focal EEG signals compared to nonfocal EEG signals. In particular, the univariate recurrence network measures, the average clustering coefficient C and assortativity R, and the bivariate recurrence network measure, the average cross-clustering coefficient C(cross), can successfully distinguish between the focal and nonfocal EEG signals, even when the analysis is restricted to nonstationary signals, irrespective of the type of surrogates used. On the other hand, we find that the univariate recurrence network measures, the average path length L, and the average betweenness centrality BC fail to distinguish between the focal and nonfocal EEG signals when iAAFT surrogates are used. However, these two measures can distinguish between focal and nonfocal EEG signals when TFT surrogates are used for nonstationary signals. We also report an improvement in the performance of nonlinear prediction error N and nonlinear interdependence measure L used by Andrzejak et al., when TFT surrogates are used for nonstationary EEG signals. We also find that the outcome of the nonlinear independence test based on the average cross-clustering coefficient C(cross) is independent of the outcome of the randomness test based on the average clustering coefficient C. Thus, the univariate and bivariate recurrence network measures provide independent information regarding the dynamics of the focal and nonfocal EEG signals. In conclusion, recurrence network analysis combined with nonstationary surrogates can be applied to derive reliable biomarkers to distinguish between epileptogenic and nonepileptogenic brain areas using EEG signals.

  1. Dynamics of intracranial electroencephalographic recordings from epilepsy patients using univariate and bivariate recurrence networks

    NASA Astrophysics Data System (ADS)

    Subramaniyam, Narayan Puthanmadam; Hyttinen, Jari

    2015-02-01

    Recently Andrzejak et al. combined the randomness and nonlinear independence test with iterative amplitude adjusted Fourier transform (iAAFT) surrogates to distinguish between the dynamics of seizure-free intracranial electroencephalographic (EEG) signals recorded from epileptogenic (focal) and nonepileptogenic (nonfocal) brain areas of epileptic patients. However, stationarity is a part of the null hypothesis for iAAFT surrogates and thus nonstationarity can violate the null hypothesis. In this work we first propose the application of the randomness and nonlinear independence test based on recurrence network measures to distinguish between the dynamics of focal and nonfocal EEG signals. Furthermore, we combine these tests with both iAAFT and truncated Fourier transform (TFT) surrogate methods, which also preserves the nonstationarity of the original data in the surrogates along with its linear structure. Our results indicate that focal EEG signals exhibit an increased degree of structural complexity and interdependency compared to nonfocal EEG signals. In general, we find higher rejections for randomness and nonlinear independence tests for focal EEG signals compared to nonfocal EEG signals. In particular, the univariate recurrence network measures, the average clustering coefficient C and assortativity R, and the bivariate recurrence network measure, the average cross-clustering coefficient Ccross, can successfully distinguish between the focal and nonfocal EEG signals, even when the analysis is restricted to nonstationary signals, irrespective of the type of surrogates used. On the other hand, we find that the univariate recurrence network measures, the average path length L, and the average betweenness centrality BC fail to distinguish between the focal and nonfocal EEG signals when iAAFT surrogates are used. However, these two measures can distinguish between focal and nonfocal EEG signals when TFT surrogates are used for nonstationary signals. We also report an improvement in the performance of nonlinear prediction error N and nonlinear interdependence measure L used by Andrzejak et al., when TFT surrogates are used for nonstationary EEG signals. We also find that the outcome of the nonlinear independence test based on the average cross-clustering coefficient Ccross is independent of the outcome of the randomness test based on the average clustering coefficient C. Thus, the univariate and bivariate recurrence network measures provide independent information regarding the dynamics of the focal and nonfocal EEG signals. In conclusion, recurrence network analysis combined with nonstationary surrogates can be applied to derive reliable biomarkers to distinguish between epileptogenic and nonepileptogenic brain areas using EEG signals.
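
    A minimal sketch of a univariate recurrence network: delay embedding of a scalar series, a recurrence matrix thresholded at a fixed recurrence rate, and the average clustering coefficient of the resulting graph. The embedding parameters and the synthetic signal are illustrative assumptions, not the EEG settings of the study:

```python
# Sketch: a univariate recurrence network from a scalar series -- delay
# embedding, a recurrence matrix thresholded at ~5% recurrence rate, the
# result read as an adjacency matrix, and its average clustering coefficient.
import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(9)
t = np.arange(2_000)
x = np.sin(0.05 * t) + 0.3 * rng.standard_normal(t.size)   # stand-in signal

dim, tau = 5, 4                                 # embedding dimension and delay
n_vec = x.size - (dim - 1) * tau
emb = np.column_stack([x[i * tau: i * tau + n_vec] for i in range(dim)])

D = squareform(pdist(emb))                      # pairwise distances
eps = np.percentile(D[np.triu_indices_from(D, k=1)], 5)  # ~5% recurrence rate
A = (D <= eps).astype(int)
np.fill_diagonal(A, 0)

G = nx.from_numpy_array(A)
print(f"average clustering coefficient C = {nx.average_clustering(G):.3f}")
```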

  2. Noise reduction in single time frame optical DNA maps

    PubMed Central

    Müller, Vilhelm; Westerlund, Fredrik

    2017-01-01

    In optical DNA mapping technologies, sequence-specific intensity variations (DNA barcodes) along stretched and stained DNA molecules are produced. These “fingerprints” of the underlying DNA sequence have a resolution of the order of one kilobasepair, and the stretching of the DNA molecules is performed by surface adsorption or nano-channel setups. A post-processing challenge for nano-channel based methods, due to local and global random movement of the DNA molecule during imaging, is how to align different time frames in order to produce reproducible time-averaged DNA barcodes. The current solutions to this challenge are computationally rather slow. With high-throughput applications in mind, we here introduce a parameter-free method for filtering a single time frame noisy barcode (snap-shot optical map), measured in a fraction of a second. By using only a single time frame barcode we circumvent the need for post-processing alignment. We demonstrate that our method is successful at providing filtered barcodes which are less noisy and more similar to time averaged barcodes. The method is based on the application of a low-pass filter on a single noisy barcode using the width of the Point Spread Function of the system as a unique, and known, filtering parameter. We find that after applying our method, the Pearson correlation coefficient (a real number in the range from -1 to 1) between the single time-frame barcode and the time average of the aligned kymograph increases significantly, roughly by 0.2 on average. By comparing to a database of more than 3000 theoretical plasmid barcodes we show that the capability to identify plasmids is improved by filtering single time-frame barcodes compared to the unfiltered analogues. Since snap-shot experiments and the computational time of our method both take less than a second, this study opens up for high-throughput optical DNA mapping with improved reproducibility. PMID:28640821
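
    A minimal sketch of the filtering idea: a noisy single-frame barcode is low-pass filtered with the point-spread-function width as the only, known, parameter and compared to a reference via the Pearson correlation coefficient. The synthetic barcode is a stand-in, and a Gaussian filter is one simple low-pass choice, not necessarily the authors' exact filter:

```python
# Sketch: low-pass filtering of a noisy single-frame barcode using the PSF
# width as the only, known, parameter, then comparison to a reference with
# the Pearson correlation coefficient. All signals here are synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.stats import pearsonr

rng = np.random.default_rng(10)
n_px = 500
psf_sigma_px = 3.0                        # PSF width in pixels (assumed known)

truth = gaussian_filter1d(rng.standard_normal(n_px), 10)    # smooth "barcode"
single_frame = truth + 0.8 * rng.standard_normal(n_px)      # noisy snapshot

filtered = gaussian_filter1d(single_frame, psf_sigma_px)

print(f"r(noisy, reference)    = {pearsonr(single_frame, truth)[0]:.2f}")
print(f"r(filtered, reference) = {pearsonr(filtered, truth)[0]:.2f}")
```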

  3. Optimal two-stage dynamic treatment regimes from a classification perspective with censored survival data.

    PubMed

    Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie

    2018-05-18

    Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.

  4. Diagnostic brain residues of dieldrin: Some new insights

    USGS Publications Warehouse

    Heinz, G.H.; Johnson, R.W.; Lamb, D.W.; Kenaga, E.E.

    1981-01-01

    Forty adult male cowbirds were fed a diet containing 20 ppm dieldrin; 20 of the birds were randomly selected to die from dieldrin poisoning and 20 were sacrificed when dieldrin had made them too sick to eat. An average of 6.8 ppm dieldrin (range of 1.51 to 11.7) in the brain on a wet-weight basis was associated with a treatment-related cessation of feeding, whereas an average of 16.3 ppm (range of 9.84 to 23.5) was found in the brains of birds that died from dieldrin poisoning; the latter concentrations agreed with those determined in other studies. Dieldrin-induced starvation was generally irreversible; therefore, brain levels of dieldrin that are clearly sublethal may nevertheless present a grave hazard to birds by initiating a process that leads to death. Fatter cowbirds were able to survive longer on dieldrin treatment but contained brain residues similar to those in cowbirds that died sooner. Some cowbirds survived for 2 months or longer with unexpectedly large amounts of body fat remaining when they died or were sacrificed. Fatter cowbirds also survived longer after they had stopped eating.

  5. Variance Analysis of Unevenly Spaced Time Series Data

    NASA Technical Reports Server (NTRS)

    Hackman, Christine; Parker, Thomas E.

    1996-01-01

    We have investigated the effect of uneven data spacing on the computation of Δ_χ(γ). Evenly spaced simulated data sets were generated for noise processes ranging from white phase modulation (PM) to random walk frequency modulation (FM). Δ_χ(γ) was then calculated for each noise type. Data were subsequently removed from each simulated data set using typical two-way satellite time and frequency transfer (TWSTFT) data patterns to create two unevenly spaced sets with average intervals of 2.8 and 3.6 days. Δ_χ(γ) was then calculated for each sparse data set using two different approaches. First, the missing data points were replaced by linear interpolation and Δ_χ(γ) was calculated from this now-full data set. The second approach ignored the fact that the data were unevenly spaced and calculated Δ_χ(γ) as if the data were equally spaced with average spacing of 2.8 or 3.6 days. Both approaches have advantages and disadvantages, and techniques are presented for correcting errors caused by uneven data spacing in typical TWSTFT data sets.
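
    A minimal sketch of the two data-handling approaches compared above for an unevenly spaced series: filling the gaps by linear interpolation onto an even grid versus ignoring the gaps and treating the surviving points as evenly spaced. The simple two-sample variance used here is only an illustrative stand-in for Δ_χ(γ):

```python
# Sketch of the two approaches to uneven spacing: (i) interpolate back onto
# the full, even grid, (ii) treat the observed points as evenly spaced at
# the average interval. The two-sample variance is an illustrative stand-in
# for the statistic used in the paper, not that statistic itself.
import numpy as np

rng = np.random.default_rng(11)
t_full = np.arange(0, 300)                              # nominal daily samples
x_full = np.cumsum(rng.standard_normal(t_full.size))    # random-walk-like data

keep = rng.random(t_full.size) < 0.35                   # sparse, uneven sampling
t_obs, x_obs = t_full[keep], x_full[keep]

def two_sample_var(x):
    d = np.diff(x)
    return 0.5 * np.mean(d ** 2)

# Approach 1: linear interpolation back onto the full, even grid.
x_interp = np.interp(t_full, t_obs, x_obs)
# Approach 2: pretend the observed points are evenly spaced.
print(f"interpolated grid:      {two_sample_var(x_interp):.2f}")
print(f"ignore uneven spacing:  {two_sample_var(x_obs):.2f}")
```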

  6. Average inactivity time model, associated orderings and reliability properties

    NASA Astrophysics Data System (ADS)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called 'average inactivity time model'. This new model is specifically applicable to handle the heterogeneity of the time of the failure of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when original random variable possesses some aging behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are reserved.

  7. Conceptualizing and Testing Random Indirect Effects and Moderated Mediation in Multilevel Models: New Procedures and Recommendations

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Preacher, Kristopher J.; Gil, Karen M.

    2006-01-01

    The authors propose new procedures for evaluating direct, indirect, and total effects in multilevel models when all relevant variables are measured at Level 1 and all effects are random. Formulas are provided for the mean and variance of the indirect and total effects and for the sampling variances of the average indirect and total effects.…

  8. LR: Compact connectivity representation for triangle meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurung, T; Luffel, M; Lindstrom, P

    2011-01-28

    We propose LR (Laced Ring) - a simple data structure for representing the connectivity of manifold triangle meshes. LR provides the option to store on average either 1.08 references per triangle or 26.2 bits per triangle. Its construction, from an input mesh that supports constant-time adjacency queries, has linear space and time complexity, and involves ordering most vertices along a nearly-Hamiltonian cycle. LR is best suited for applications that process meshes with fixed connectivity, as any changes to the connectivity require the data structure to be rebuilt. We provide an implementation of the set of standard random-access, constant-time operators for traversing a mesh, and show that LR often saves both space and traversal time over competing representations.

  9. Experimental Permeability Measurements on a Strut-Supported Transpiration-Cooled Turbine Blade with Stainless-Steel Shell made by the Federal-Mogul Corporation under Bureau of Aeronautics Contract N0as 51613-C

    NASA Technical Reports Server (NTRS)

    Richards, Hadley T.

    1954-01-01

    A turbine blade with a porous stainless-steel shell sintered to a supporting steel strut has been fabricated for tests at the NACA by Federal-Mogul Corporation under contract from the Bureau of Aeronautics, Department of the Navy. The apparent permeability of this blade, on the average, more nearly approaches the values specified by the NACA than did two strut-supported bronze blades in a previous investigation. Random variations of permeability in the present blade are substantially greater than those of the bronze blades, but projected improvements in certain phases of the fabrication process are expected to reduce these variations.

  10. Randomized Trial of Reduced-Nicotine Standards for Cigarettes.

    PubMed

    Donny, Eric C; Denlinger, Rachel L; Tidey, Jennifer W; Koopmeiners, Joseph S; Benowitz, Neal L; Vandrey, Ryan G; al'Absi, Mustafa; Carmella, Steven G; Cinciripini, Paul M; Dermody, Sarah S; Drobes, David J; Hecht, Stephen S; Jensen, Joni; Lane, Tonya; Le, Chap T; McClernon, F Joseph; Montoya, Ivan D; Murphy, Sharon E; Robinson, Jason D; Stitzer, Maxine L; Strasser, Andrew A; Tindle, Hilary; Hatsukami, Dorothy K

    2015-10-01

    The Food and Drug Administration can set standards that reduce the nicotine content of cigarettes. We conducted a double-blind, parallel, randomized clinical trial between June 2013 and July 2014 at 10 sites. Eligibility criteria included an age of 18 years or older, smoking of five or more cigarettes per day, and no current interest in quitting smoking. Participants were randomly assigned to smoke for 6 weeks either their usual brand of cigarettes or one of six types of investigational cigarettes, provided free. The investigational cigarettes had nicotine content ranging from 15.8 mg per gram of tobacco (typical of commercial brands) to 0.4 mg per gram. The primary outcome was the number of cigarettes smoked per day during week 6. A total of 840 participants underwent randomization, and 780 completed the 6-week study. During week 6, the average number of cigarettes smoked per day was lower for participants randomly assigned to cigarettes containing 2.4, 1.3, or 0.4 mg of nicotine per gram of tobacco (16.5, 16.3, and 14.9 cigarettes, respectively) than for participants randomly assigned to their usual brand or to cigarettes containing 15.8 mg per gram (22.2 and 21.3 cigarettes, respectively; P<0.001). Participants assigned to cigarettes with 5.2 mg per gram smoked an average of 20.8 cigarettes per day, which did not differ significantly from the average number among those who smoked control cigarettes. Cigarettes with lower nicotine content, as compared with control cigarettes, reduced exposure to and dependence on nicotine, as well as craving during abstinence from smoking, without significantly increasing the expired carbon monoxide level or total puff volume, suggesting minimal compensation. Adverse events were generally mild and similar among groups. In this 6-week study, reduced-nicotine cigarettes versus standard-nicotine cigarettes reduced nicotine exposure and dependence and the number of cigarettes smoked. (Funded by the National Institute on Drug Abuse and the Food and Drug Administration Center for Tobacco Products; ClinicalTrials.gov number, NCT01681875.).

  11. Did a quality improvement collaborative make stroke care better? A cluster randomized trial

    PubMed Central

    2014-01-01

    Background Stroke can result in death and long-term disability. Fast and high-quality care can reduce the impact of stroke, but UK national audit data has demonstrated variability in compliance with recommended processes of care. Though quality improvement collaboratives (QICs) are widely used, whether a QIC could improve reliability of stroke care was unknown. Methods Twenty-four NHS hospitals in the Northwest of England were randomly allocated to participate either in Stroke 90:10, a QIC based on the Breakthrough Series (BTS) model, or to a control group giving normal care. The QIC focused on nine processes of quality care for stroke already used in the national stroke audit. The nine processes were grouped into two distinct care bundles: one relating to early hours care and one relating to rehabilitation following stroke. Using an interrupted time series design and difference-in-difference analysis, we aimed to determine whether hospitals participating in the QIC improved more than the control group on bundle compliance. Results Data were available from nine interventions (3,533 patients) and nine control hospitals (3,059 patients). Hospitals in the QIC showed a modest improvement from baseline in the odds of average compliance equivalent to a relative improvement of 10.9% (95% CI 1.3%, 20.6%) in the Early Hours Bundle and 11.2% (95% CI 1.4%, 21.5%) in the Rehabilitation Bundle. Secondary analysis suggested that some specific processes were more sensitive to an intervention effect. Conclusions Some aspects of stroke care improved during the QIC, but the effects of the QIC were modest and further improvement is needed. The extent to which a BTS QIC can improve quality of stroke care remains uncertain. Some aspects of care may respond better to collaboratives than others. Trial registration ISRCTN13893902. PMID:24690267

  12. Does hippotherapy effect use of sensory information for balance in people with multiple sclerosis?

    PubMed

    Lindroth, Jodi L; Sullivan, Jessica L; Silkwood-Sherer, Debbie

    2015-01-01

    This case-series study aimed to determine if there were observable changes in sensory processing for postural control in individuals with multiple sclerosis (MS) following physical therapy using hippotherapy (HPOT), or changes in balance and functional gait. This pre-test non-randomized design study, with follow-up assessment at 6 weeks, included two females and one male (age range 37-60 years) with diagnoses of relapse-remitting or progressive MS. The intervention consisted of twelve 40-min physical therapy sessions which included HPOT twice a week for 6 weeks. Sensory organization and balance were assessed by the Sensory Organization Test (SOT) and Berg Balance Scale (BBS). Gait was assessed using the Functional Gait Assessment (FGA). Following the intervention period, all three participants showed improvements in SOT (range 1-8 points), BBS (range 2-6 points), and FGA (average 4 points) scores. These improvements were maintained or continued to improve at follow-up assessment. Two of the three participants no longer over-relied on vision and/or somatosensory information as the primary sensory input for postural control, suggesting improved use of sensory information for balance. The results indicate that HPOT may be a beneficial physical therapy treatment strategy to improve balance, functional gait, and enhance how some individuals with MS process sensory cues for postural control. Randomized clinical trials will be necessary to validate results of this study.

  13. Memory traces for spoken words in the brain as revealed by the hemodynamic correlate of the mismatch negativity.

    PubMed

    Shtyrov, Yury; Osswald, Katja; Pulvermüller, Friedemann

    2008-01-01

    The mismatch negativity response, considered a brain correlate of automatic preattentive auditory processing, is enhanced for word stimuli as compared with acoustically matched pseudowords. This lexical enhancement, taken as a signature of activation of language-specific long-term memory traces, was investigated here using functional magnetic resonance imaging to complement the previous electrophysiological studies. In passive oddball paradigm, word stimuli were randomly presented as rare deviants among frequent pseudowords; the reverse conditions employed infrequent pseudowords among word stimuli. Random-effect analysis indicated clearly distinct patterns for the different lexical types. Whereas the hemodynamic mismatch response was significant for the word deviants, it did not reach significance for the pseudoword conditions. This difference, more pronounced in the left than right hemisphere, was also assessed by analyzing average parameter estimates in regions of interests within both temporal lobes. A significant hemisphere-by-lexicality interaction confirmed stronger blood oxygenation level-dependent mismatch responses to words than pseudowords in the left but not in the right superior temporal cortex. The increased left superior temporal activation and the laterality of cortical sources elicited by spoken words compared with pseudowords may indicate the activation of cortical circuits for lexical material even in passive oddball conditions and suggest involvement of the left superior temporal areas in housing such word-processing neuronal circuits.

  14. Do communication training programs improve students' communication skills?--a follow-up study.

    PubMed

    Simmenroth-Nayda, Anne; Weiss, Cora; Fischer, Thomas; Himmel, Wolfgang

    2012-09-05

    Although it is taken for granted that history-taking and communication skills are learnable, this learning process should be confirmed by rigorous studies, such as randomized pre- and post-comparisons. The purpose of this paper is to analyse whether a communication course measurably improves the communicative competence of third-year medical students at a German medical school and whether technical or emotional aspects of communication changed differently. A sample of 32 randomly selected students performed an interview with a simulated patient before the communication course (pre-intervention) and a second interview after the course (post-intervention), using the Calgary-Cambridge Observation Guide (CCOG) to assess history taking ability. On average, the students improved in all of the 28 items of the CCOG. The 6 more technically-orientated communication items improved on average from 3.4 for the first interview to 2.6 in the second interview (p < 0.0001), the 6 emotional items from 2.7 to 2.3 (p = 0.023). The overall score for women improved from 3.2 to 2.5 (p = 0.0019); male students improved from 3.0 to 2.7 (n.s.). The mean interview time significantly increased from the first to the second interview, but the increase in the interview duration and the change of the overall score for the students' communication skills were not correlated (Pearson's r = 0.03; n.s.). Our communication course measurably improved communication skills, especially for female students. These improvements did not depend predominantly on an extension of the interview time. Obviously, "technical" aspects of communication can be taught better than "emotional" communication skills.

  15. The corneal transplant score: a simple corneal graft candidate calculator.

    PubMed

    Rosenfeld, Eldar; Varssano, David

    2013-07-01

    Shortage of corneas for transplantation has created long waiting lists in most countries. Transplant calculators are available for many organs. The purpose of this study is to describe a simple automatic scoring system for keratoplasty recipient candidates, based on several parameters that we consider most relevant for tissue allocation, and to compare the system's accuracy in predicting decisions made by a cornea specialist. Twenty pairs of candidate data were randomly created on an electronic spreadsheet. A single priority score was computed from the data of each candidate. A cornea surgeon and the automated system then decided independently which candidate in each pair should have surgery if only a single cornea was available. The scoring system can calculate values between 0 (lowest priority) and 18 (highest priority) for each candidate. Average score value in our randomly created cohort was 6.35 ± 2.38 (mean ± SD), range 1.28 to 10.76. Average score difference between the candidates in each pair was 3.12 ± 2.10, range 0.08 to 8.45. The manual scoring process, although theoretical, was mentally and emotionally demanding for the surgeon. Agreement was achieved between the human decision and the calculated value in 19 of 20 pairs. Disagreement was reached in the pair with the lowest score difference (0.08). With worldwide donor cornea shortage, waiting for transplantation can be long. Manual sorting of priority for transplantation in a long waiting list is difficult, time-consuming and prone to error. The suggested system may help achieve a justified distribution of available tissue.

  16. Learning semantic histopathological representation for basal cell carcinoma classification

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Ricardo; Rueda, Andrea; Romero, Eduardo

    2013-03-01

    Diagnosis of a histopathology glass slide is a complex process that involves accurate recognition of several structures, their function in the tissue, and their relation with other structures. The way in which the pathologist represents the image content and the relations between those objects yields better and more accurate diagnoses. Therefore, an appropriate semantic representation of the image content will be useful in several analysis tasks such as cancer classification, tissue retrieval and histopathological image analysis, among others. Nevertheless, automatically recognizing those structures and extracting their inner semantic meaning are still very challenging tasks. In this paper we introduce a new semantic representation that allows histopathological concepts suitable for classification to be described. The approach herein identifies local concepts using a dictionary learning approach, i.e., the algorithm learns the most representative atoms from a set of randomly sampled patches, and then models the spatial relations among them by counting the co-occurrence between atoms, while penalizing the spatial distance. The proposed approach was compared with a bag-of-features representation in a tissue classification task. For this purpose, 240 histological microscopical fields of view, 24 per tissue class, were collected. These images fed a Support Vector Machine classifier per class, using 120 images as the training set and the remaining ones for testing, maintaining the same proportion of each concept in the training and test sets. The classification results, averaged over 100 random partitions of training and test sets, show that our approach is on average almost 6% more sensitive than the bag-of-features representation.
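
    The spatial co-occurrence step can be pictured with a small sketch; the exponential distance penalty, the patch centres, and the atom labels below are assumptions made for illustration, not the descriptor actually learned in the paper.

```python
# Illustrative sketch only: given patch centres and their dictionary-atom
# labels, build a co-occurrence matrix in which nearby pairs count more
# than distant ones (an exponential distance penalty is assumed here).
import numpy as np

def cooccurrence(centres, labels, n_atoms, sigma=50.0):
    C = np.zeros((n_atoms, n_atoms))
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            d = np.linalg.norm(centres[i] - centres[j])
            w = np.exp(-d / sigma)          # penalize spatial distance
            C[labels[i], labels[j]] += w
            C[labels[j], labels[i]] += w
    return C

rng = np.random.default_rng(0)
centres = rng.uniform(0, 500, size=(40, 2))   # random patch centres (pixels)
labels = rng.integers(0, 8, size=40)          # atom index per patch
print(cooccurrence(centres, labels, n_atoms=8).shape)
```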

  17. Time Correlations of Lightning Flash Sequences in Thunderstorms Revealed by Fractal Analysis

    NASA Astrophysics Data System (ADS)

    Gou, Xueqiang; Chen, Mingli; Zhang, Guangshu

    2018-01-01

    By using the data of the lightning detection and ranging system at the Kennedy Space Center, the temporal fractal and correlation of interevent time series of lightning flash sequences in thunderstorms have been investigated with Allan factor (AF), Fano factor (FF), and detrended fluctuation analysis (DFA) methods. AF, FF, and DFA methods are powerful tools to detect the time-scaling structures and correlations in point processes. In total, 40 thunderstorms with distinguishing features of a single-cell storm and apparent increase and decrease in the total flash rate were selected for the analysis. It is found that the time-scaling exponents for AF (αAF) and FF (αFF) analyses are on average 1.62 and 0.95, respectively, indicating a strong time correlation of the lightning flash sequences. DFA analysis shows that there is a crossover phenomenon: a crossover timescale (τc) ranging from 54 to 195 s with an average of 114 s. The occurrence of a lightning flash in a thunderstorm behaves randomly at timescales <τc but shows strong time correlation at scales >τc. Physically, these may imply that the establishment of an extensive strong electric field necessary for the occurrence of a lightning flash needs a timescale >τc, which behaves strongly time correlated. But the initiation of a lightning flash within a well-established extensive strong electric field may involve the heterogeneities of the electric field at a timescale <τc, which behave randomly.
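
    As a rough illustration of the DFA step, the sketch below computes first-order detrended fluctuations F(n) over a range of window sizes; the toy exponential inter-event times and the window grid are assumptions, and crossover detection is left out.

```python
# Rough DFA sketch (first-order detrending), not the analysis code used in
# the study: returns the fluctuation F(n) over a range of window sizes so
# the scaling exponent (and any crossover) can be read from a log-log fit.
import numpy as np

def dfa(x, window_sizes):
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for n in window_sizes:
        n_win = len(y) // n
        rms = []
        for k in range(n_win):
            seg = y[k * n:(k + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            rms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

rng = np.random.default_rng(1)
interevent = rng.exponential(10.0, size=2000)        # toy inter-flash times, s
sizes = np.unique(np.logspace(1, 2.5, 12).astype(int))
F = dfa(interevent, sizes)
alpha = np.polyfit(np.log(sizes), np.log(F), 1)[0]   # scaling exponent
print(round(alpha, 2))
```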

  18. Statistical process control of mortality series in the Australian and New Zealand Intensive Care Society (ANZICS) adult patient database: implications of the data generating process

    PubMed Central

    2013-01-01

    Background Statistical process control (SPC), an industrial sphere initiative, has recently been applied in health care and public health surveillance. SPC methods assume independent observations and process autocorrelation has been associated with increase in false alarm frequency. Methods Monthly mean raw mortality (at hospital discharge) time series, 1995–2009, at the individual Intensive Care unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence for series (i) autocorrelation and seasonality was demonstrated using (partial)-autocorrelation ((P)ACF) function displays and classical series decomposition and (ii) “in-control” status was sought using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Application of time-series to an exemplar complete ICU series (1995-(end)2009) was via Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance. Results The overall data set, 1995-2009, consisted of 491324 records from 137 ICU sites; average raw mortality was 14.07%; average(SD) raw and expected mortalities ranged from 0.012(0.113) and 0.013(0.045) to 0.296(0.457) and 0.278(0.247) respectively. For the raw mortality series: 71 sites had continuous data for assessment up to or beyond lag40 and 35% had autocorrelation through to lag40; and of 36 sites with continuous data for ≥ 72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series. Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model, with GARCH effects, displayed white-noise residuals which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model. Conclusions The data generating process of monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues. PMID:23705957
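
    The EWMA control-limit idea can be sketched as follows; this is a plain (not risk-adjusted) chart on synthetic monthly mortality, with 3-sigma limits from the usual EWMA variance formula, so it only illustrates the signalling mechanism, not the paper's random-coefficient model.

```python
# Hedged sketch of a basic EWMA control chart (not the risk-adjusted model
# in the paper): smooths a monthly mortality series and flags points that
# fall outside 3-sigma limits computed from the EWMA variance.
import numpy as np

def ewma_chart(x, lam=0.2, target=None, sigma=None):
    target = np.mean(x) if target is None else target
    sigma = np.std(x, ddof=1) if sigma is None else sigma
    z = np.empty(len(x))
    flags = []
    for t in range(len(x)):
        prev = z[t - 1] if t > 0 else target
        z[t] = lam * x[t] + (1 - lam) * prev
        # variance of the EWMA statistic after t+1 observations
        var = sigma ** 2 * (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * (t + 1)))
        flags.append(abs(z[t] - target) > 3 * np.sqrt(var))
    return z, np.array(flags)

rng = np.random.default_rng(2)
mortality = rng.normal(0.14, 0.02, size=120)     # toy monthly raw mortality
z, out_of_control = ewma_chart(mortality)
print(out_of_control.sum(), "signals")
```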

  19. An On-Demand Optical Quantum Random Number Generator with In-Future Action and Ultra-Fast Response

    PubMed Central

    Stipčević, Mario; Ursin, Rupert

    2015-01-01

    Random numbers are essential for our modern information-based society, e.g., in cryptography. Unlike frequently used pseudo-random generators, physical random number generators do not depend on complex algorithms but rather on a physical process to provide true randomness. Quantum random number generators (QRNG) rely on a process which, even in principle, can be described only by a probabilistic theory. Here we present a conceptually simple implementation, which offers 100% efficiency in producing a random bit upon request and simultaneously exhibits ultra-low latency. A careful technical and statistical analysis demonstrates its robustness against imperfections of the actual implemented technology and makes it possible to quickly estimate the randomness of very long sequences. Generated random numbers pass standard statistical tests without any post-processing. The setup described, as well as the theory presented here, demonstrate the maturity and overall understanding of the technology. PMID:26057576

  20. Random Walks in a One-Dimensional Lévy Random Environment

    NASA Astrophysics Data System (ADS)

    Bianchi, Alessandra; Cristadoro, Giampaolo; Lenci, Marco; Ligabò, Marilena

    2016-04-01

    We consider a generalization of a one-dimensional stochastic process known in the physical literature as Lévy-Lorentz gas. The process describes the motion of a particle on the real line in the presence of a random array of marked points, whose nearest-neighbor distances are i.i.d. and long-tailed (with finite mean but possibly infinite variance). The motion is a continuous-time, constant-speed interpolation of a symmetric random walk on the marked points. We first study the quenched random walk on the point process, proving the CLT and the convergence of all the accordingly rescaled moments. Then we derive the quenched and annealed CLTs for the continuous-time process.
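
    A toy simulation of the process (assuming Pareto-distributed gaps with finite mean, a symmetric nearest-neighbour walk, and unit speed) might look like the following; it is meant only to make the construction concrete, not to reproduce the paper's limit theorems.

```python
# Illustrative simulation (assumptions mine, not the paper's proofs): a
# constant-speed walker on marked points whose nearest-neighbour gaps are
# i.i.d. and heavy-tailed (Pareto); returns position and elapsed time.
import numpy as np

def levy_lorentz_walk(n_steps, alpha=1.5, speed=1.0, n_points=10_000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    gaps = rng.pareto(alpha, size=n_points) + 1.0         # finite-mean gaps
    sites = np.concatenate(([0.0], np.cumsum(gaps)))      # marked points >= 0
    sites = np.concatenate((-sites[:0:-1], sites))        # mirror to the left
    idx = len(sites) // 2                                 # start at the origin
    t = 0.0
    for _ in range(n_steps):
        step = rng.choice((-1, 1))                        # symmetric walk
        t += abs(sites[idx + step] - sites[idx]) / speed  # travel time
        idx += step
    return sites[idx], t

pos, elapsed = levy_lorentz_walk(1000, rng=np.random.default_rng(3))
print(round(pos, 2), round(elapsed, 2))
```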

  1. Cosmological measure with volume averaging and the vacuum energy problem

    NASA Astrophysics Data System (ADS)

    Astashenok, Artyom V.; del Popolo, Antonino

    2012-04-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative, volume averaging measure, instead of volume weighting can explain why the cosmological constant is non-zero.

  2. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
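
    The two weighting schemes at issue can be written down in a few lines; the effect sizes, variances, and sample sizes below are invented toy numbers, not data from any primary study.

```python
# Small sketch of the two weighting schemes discussed: the average effect
# size weighted by inverse variance versus by sample size (toy numbers).
import numpy as np

d = np.array([0.30, 0.55, 0.10, 0.42])      # per-study effect sizes
v = np.array([0.02, 0.05, 0.01, 0.08])      # estimated sampling variances
n = np.array([120, 45, 300, 60])            # per-study sample sizes

w_iv = 1.0 / v
w_n = n.astype(float)

mean_iv = np.sum(w_iv * d) / np.sum(w_iv)   # inverse-variance weighted mean
mean_n = np.sum(w_n * d) / np.sum(w_n)      # sample-size weighted mean
se_iv = np.sqrt(1.0 / np.sum(w_iv))         # SE of the inverse-variance mean
print(round(mean_iv, 3), round(mean_n, 3), round(se_iv, 3))
```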

  3. Surface plasmon enhanced cell microscopy with blocked random spatial activation

    NASA Astrophysics Data System (ADS)

    Son, Taehwang; Oh, Youngjin; Lee, Wonju; Yang, Heejin; Kim, Donghyun

    2016-03-01

    We present surface plasmon enhanced fluorescence microscopy with random spatial sampling using patterned block of silver nanoislands. Rigorous coupled wave analysis was performed to confirm near-field localization on nanoislands. Random nanoislands were fabricated in silver by temperature annealing. By analyzing random near-field distribution, average size of localized fields was found to be on the order of 135 nm. Randomly localized near-fields were used to spatially sample F-actin of J774 cells (mouse macrophage cell-line). Image deconvolution algorithm based on linear imaging theory was established for stochastic estimation of fluorescent molecular distribution. The alignment between near-field distribution and raw image was performed by the patterned block. The achieved resolution is dependent upon factors including the size of localized fields and estimated to be 100-150 nm.

  4. Heterogeneity in Early Responses in ALLHAT (Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial).

    PubMed

    Dhruva, Sanket S; Huang, Chenxi; Spatz, Erica S; Coppi, Andreas C; Warner, Frederick; Li, Shu-Xia; Lin, Haiqun; Xu, Xiao; Furberg, Curt D; Davis, Barry R; Pressel, Sara L; Coifman, Ronald R; Krumholz, Harlan M

    2017-07-01

    Randomized trials of hypertension have seldom examined heterogeneity in response to treatments over time and the implications for cardiovascular outcomes. Understanding this heterogeneity, however, is a necessary step toward personalizing antihypertensive therapy. We applied trajectory-based modeling to data on 39 763 study participants of the ALLHAT (Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial) to identify distinct patterns of systolic blood pressure (SBP) response to randomized medications during the first 6 months of the trial. Two trajectory patterns were identified: immediate responders (85.5%), on average, had a decreasing SBP, whereas nonimmediate responders (14.5%), on average, had an initially increasing SBP followed by a decrease. Compared with those randomized to chlorthalidone, participants randomized to amlodipine (odds ratio, 1.20; 95% confidence interval [CI], 1.10-1.31), lisinopril (odds ratio, 1.88; 95% CI, 1.73-2.03), and doxazosin (odds ratio, 1.65; 95% CI, 1.52-1.78) had higher adjusted odds ratios associated with being a nonimmediate responder (versus immediate responder). After multivariable adjustment, nonimmediate responders had a higher hazard ratio of stroke (hazard ratio, 1.49; 95% CI, 1.21-1.84), combined cardiovascular disease (hazard ratio, 1.21; 95% CI, 1.11-1.31), and heart failure (hazard ratio, 1.48; 95% CI, 1.24-1.78) during follow-up between 6 months and 2 years. The SBP response trajectories provided superior discrimination for predicting downstream adverse cardiovascular events than classification based on difference in SBP between the first 2 measurements, SBP at 6 months, and average SBP during the first 6 months. Our findings demonstrate heterogeneity in response to antihypertensive therapies and show that chlorthalidone is associated with more favorable initial response than the other medications. © 2017 American Heart Association, Inc.

  5. Distribution of randomly diffusing particles in inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Li, Yiwei; Kahraman, Osman; Haselwandter, Christoph A.

    2017-09-01

    Diffusion can be conceptualized, at microscopic scales, as the random hopping of particles between neighboring lattice sites. In the case of diffusion in inhomogeneous media, distinct spatial domains in the system may yield distinct particle hopping rates. Starting from the master equations (MEs) governing diffusion in inhomogeneous media we derive here, for arbitrary spatial dimensions, the deterministic lattice equations (DLEs) specifying the average particle number at each lattice site for randomly diffusing particles in inhomogeneous media. We consider the case of free (Fickian) diffusion with no steric constraints on the maximum particle number per lattice site as well as the case of diffusion under steric constraints imposing a maximum particle concentration. We find, for both transient and asymptotic regimes, excellent agreement between the DLEs and kinetic Monte Carlo simulations of the MEs. The DLEs provide a computationally efficient method for predicting the (average) distribution of randomly diffusing particles in inhomogeneous media, with the number of DLEs associated with a given system being independent of the number of particles in the system. From the DLEs we obtain general analytic expressions for the steady-state particle distributions for free diffusion and, in special cases, diffusion under steric constraints in inhomogeneous media. We find that, in the steady state of the system, the average fraction of particles in a given domain is independent of most system properties, such as the arrangement and shape of domains, and only depends on the number of lattice sites in each domain, the particle hopping rates, the number of distinct particle species in the system, and the total number of particles of each particle species in the system. Our results provide general insights into the role of spatially inhomogeneous particle hopping rates in setting the particle distributions in inhomogeneous media.
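
    A small Monte Carlo sketch of the underlying hopping picture (a 1-D ring with two hopping-rate domains, non-interacting particles, and a discrete-time approximation, all assumptions made here for illustration) shows the steady-state fraction in each domain depending only on the number of sites and the hopping rates, as stated above.

```python
# Toy Monte Carlo sketch (not the paper's DLE derivation): non-interacting
# particles hop on a 1-D ring whose left half has a different hopping rate
# from its right half; the long-time fraction of particles in each domain
# is compared against the 1/rate weighting expected at steady state.
import numpy as np

rng = np.random.default_rng(4)
L, n_particles, dt, n_steps = 20, 2000, 0.05, 20000
rates = np.where(np.arange(L) < L // 2, 1.0, 4.0)    # per-site hop rates

pos = rng.integers(0, L, size=n_particles)
for _ in range(n_steps):
    hop = rng.random(n_particles) < rates[pos] * dt   # attempt a hop
    step = rng.choice((-1, 1), size=n_particles)      # unbiased direction
    pos = np.where(hop, (pos + step) % L, pos)

n_left, n_right = L // 2, L - L // 2
frac_left = np.mean(pos < L // 2)
expected = (n_left / 1.0) / (n_left / 1.0 + n_right / 4.0)   # sites / rate
print(round(frac_left, 3), "vs predicted", round(expected, 3))
```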

  6. Stochastic modeling of central apnea events in preterm infants.

    PubMed

    Clark, Matthew T; Delos, John B; Lake, Douglas E; Lee, Hoshik; Fairchild, Karen D; Kattwinkel, John; Moorman, J Randall

    2016-04-01

    A near-ubiquitous pathology in very low birth weight infants is neonatal apnea, breathing pauses with slowing of the heart and falling blood oxygen. Events of substantial duration occasionally occur after an infant is discharged from the neonatal intensive care unit (NICU). It is not known whether apneas result from a predictable process or from a stochastic process, but the observation that they occur in seemingly random clusters justifies the use of stochastic models. We use a hidden-Markov model to analyze the distribution of durations of apneas and the distribution of times between apneas. The model suggests the presence of four breathing states, ranging from very stable (with an average lifetime of 12 h) to very unstable (with an average lifetime of 10 s). Although the states themselves are not visible, the mathematical analysis gives estimates of the transition rates among these states. We have obtained these transition rates, and shown how they change with post-menstrual age; as expected, the residence time in the more stable breathing states increases with age. We also extrapolated the model to predict the frequency of very prolonged apnea during the first year of life. This paradigm, stochastic modeling of cardiorespiratory control in neonatal infants to estimate risk for severe clinical events, may be a first step toward personalized risk assessment for life-threatening apnea events after NICU discharge.
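
    To make the four-state picture concrete, the sketch below simulates a continuous-time Markov chain whose mean state lifetimes span the range quoted above; the uniform jump structure and the two intermediate lifetimes are assumptions, not the fitted hidden-Markov parameters.

```python
# Purely illustrative sketch: a continuous-time Markov chain with four
# hidden "breathing stability" states whose mean lifetimes span the range
# quoted in the abstract; the transition structure is an assumption.
import numpy as np

lifetimes = np.array([12 * 3600.0, 3600.0, 60.0, 10.0])   # seconds per state
rng = np.random.default_rng(5)

def simulate(total_time):
    t, state, visits = 0.0, 0, []
    while t < total_time:
        dwell = rng.exponential(lifetimes[state])
        visits.append((state, dwell))
        t += dwell
        # jump to one of the other three states, chosen uniformly (assumed)
        state = rng.choice([s for s in range(4) if s != state])
    return visits

visits = simulate(total_time=48 * 3600.0)                 # two days
unstable_dwells = [d for s, d in visits if s == 3]
print(len(unstable_dwells), "visits to the most unstable state")
```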

  7. Tooth segmentation system with intelligent editing for cephalometric analysis

    NASA Astrophysics Data System (ADS)

    Chen, Shoupu

    2015-03-01

    Cephalometric analysis is the study of the dental and skeletal relationship in the head, and it is used as an assessment and planning tool for improved orthodontic treatment of a patient. Conventional cephalometric analysis identifies bony and soft-tissue landmarks in 2D cephalometric radiographs, in order to diagnose facial features and abnormalities prior to treatment, or to evaluate the progress of treatment. Recent studies in orthodontics indicate that there are persistent inaccuracies and inconsistencies in the results provided using conventional 2D cephalometric analysis. Obviously, plane geometry is inappropriate for analyzing anatomical volumes and their growth; only a 3D analysis is able to analyze the three-dimensional, anatomical maxillofacial complex, which requires computing inertia systems for individual or groups of digitally segmented teeth from an image volume of a patient's head. For the study of 3D cephalometric analysis, the current paper proposes a system for semi-automatically segmenting teeth from a cone beam computed tomography (CBCT) volume with two distinct features, including an intelligent user-input interface for automatic background seed generation, and a graphics processing unit (GPU) acceleration mechanism for three-dimensional GrowCut volume segmentation. Results show a satisfying average DICE score of 0.92, with the use of the proposed tooth segmentation system, by 15 novice users who segmented a randomly sampled tooth set. The average GrowCut processing time is around one second per tooth, excluding user interaction time.

  8. An Experimental Examination of Peers’ Influence on Adolescent Girls’ Intent to Engage in Maladaptive Weight-Related Behaviors

    PubMed Central

    Rancourt, Diana; Choukas-Bradley, Sophia; Cohen, Geoffrey L.; Prinstein, Mitchell J.

    2015-01-01

    Objective Social psychological theories provide bases for understanding how social comparison processes may impact peer influence. This study examined two peer characteristics that may impact peer influence on adolescent girls’ weight-related behavior intentions: body size and popularity. Method A school-based sample of 66 9th grade girls (12–15 years old) completed an experimental paradigm in which they believed they were interacting with other students (i.e., “e-confederates”). The body size and popularity of the e-confederates were experimentally manipulated. Participants were randomly assigned to one of the three experimental conditions in which they were exposed to identical maladaptive weight-related behavior norms communicated by ostensible female peers who were either: (1) Thin and Popular; (2) Thin and Average Popularity; or (3) Heavy and Average Popularity. Participants’ intent to engage in weight-related behaviors was measured pre-experiment and during public and private segments of the experiment. Results A significant effect of condition on public conformity was observed. Participants exposed to peers’ maladaptive weight-related behavior norms in the Heavy and Average condition reported significantly less intent to engage in weight-related behaviors than participants in either of the thin-peer conditions (F(2) = 3.93, p = .025). Peer influence on private acceptance of weight-related behavior intentions was similar across conditions (F(2) = .47, p = .63). Discussion Body size comparison may be the most salient component of peer influence processes on weight-related behaviors. Peer influence on weight-related behavior intention also appears to impact private beliefs. Considering peer norms in preventive interventions combined with dissonance-based approaches may be useful. PMID:24482093

  9. A qualitative assessment of a random process proposed as an atmospheric turbulence model

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1977-01-01

    A random process is formed by the product of two Gaussian processes and the sum of that product with a third Gaussian process. The resulting total random process is interpreted as the sum of an amplitude modulated process and a slowly varying, random mean value. The properties of the process are examined, including an interpretation of the process in terms of the physical structure of atmospheric motions. The inclusion of the mean value variation gives an improved representation of the properties of atmospheric motions, since the resulting process can account for the differences in the statistical properties of atmospheric velocity components and their gradients. The application of the process to atmospheric turbulence problems, including the response of aircraft dynamic systems, is examined. The effects of the mean value variation upon aircraft loads are small in most cases, but can be important in the measurement and interpretation of atmospheric turbulence data.
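
    The construction can be sketched directly; the AR(1) building blocks, correlation values, and the 0.5 weight on the mean-value process below are arbitrary choices for illustration, not the parameters used in the report.

```python
# A minimal sketch of the construction described above (parameters are
# arbitrary): the product of two Gaussian processes plus a third, slowly
# varying Gaussian process acting as a random mean value.
import numpy as np

def ar1(n, rho, rng):
    """Gaussian AR(1) process with lag-one correlation rho, unit variance."""
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = rho * x[t - 1] + np.sqrt(1 - rho ** 2) * rng.normal()
    return x

rng = np.random.default_rng(6)
n = 5000
a = ar1(n, 0.95, rng)          # amplitude-modulating process
g = ar1(n, 0.95, rng)          # "carrier" Gaussian process
m = ar1(n, 0.999, rng)         # slowly varying random mean value
w = a * g + 0.5 * m            # the total (non-Gaussian) process
print(round(np.mean(w), 3), round(np.std(w), 3))
```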

  10. A Comparative Study on Ni-Based Coatings Prepared by HVAF, HVOF, and APS Methods for Corrosion Protection Applications

    NASA Astrophysics Data System (ADS)

    Sadeghimeresht, E.; Markocsan, N.; Nylén, P.

    2016-12-01

    Selection of the thermal spray process is the most important step toward a proper coating solution for a given application, as important coating characteristics such as adhesion and microstructure are highly dependent on it. In the present work, a process-microstructure-properties-performance correlation study was performed to determine the main characteristics and corrosion performance of coatings produced by different thermal spray techniques such as high-velocity air fuel (HVAF), high-velocity oxy fuel (HVOF), and atmospheric plasma spraying (APS). Previously optimized HVOF and APS process parameters were used to deposit Ni, NiCr, and NiAl coatings, which were compared with HVAF-sprayed coatings with randomly selected process parameters. As the HVAF process presented the best coating characteristics and corrosion behavior, a few process parameters, such as feed rate and standoff distance (SoD), were investigated to systematically optimize the HVAF coatings in terms of low porosity and high corrosion resistance. The Ni and NiAl coatings with lower porosity and better corrosion behavior were obtained at an average SoD of 300 mm and feed rate of 150 g/min. The NiCr coating sprayed at a SoD of 250 mm and feed rate of 75 g/min showed the highest corrosion resistance among all investigated samples.

  11. Intensive glycemic control is not associated with fractures or falls in the ACCORD randomized trial.

    PubMed

    Schwartz, Ann V; Margolis, Karen L; Sellmeyer, Deborah E; Vittinghoff, Eric; Ambrosius, Walter T; Bonds, Denise E; Josse, Robert G; Schnall, Adrian M; Simmons, Debra L; Hue, Trisha F; Palermo, Lisa; Hamilton, Bruce P; Green, Jennifer B; Atkinson, Hal H; O'Connor, Patrick J; Force, Rex W; Bauer, Douglas C

    2012-07-01

    Older adults with type 2 diabetes are at high risk of fractures and falls, but the effect of glycemic control on these outcomes is unknown. To determine the effect of intensive versus standard glycemic control, we assessed fractures and falls as outcomes in the Action to Control Cardiovascular Risk in Diabetes (ACCORD) randomized trial. ACCORD participants were randomized to intensive or standard glycemia strategies, with an achieved median A1C of 6.4 and 7.5%, respectively. In the ACCORD BONE ancillary study, fractures were assessed at 54 of the 77 ACCORD clinical sites that included 7,287 of the 10,251 ACCORD participants. At annual visits, 6,782 participants were asked about falls in the previous year. During an average follow-up of 3.8 (SD 1.3) years, 198 of 3,655 participants in the intensive glycemia and 189 of 3,632 participants in the standard glycemia group experienced at least one nonspine fracture. The average rate of first nonspine fracture was 13.9 and 13.3 per 1,000 person-years in the intensive and standard groups, respectively (hazard ratio 1.04 [95% CI 0.86-1.27]). During an average follow-up of 2.0 years, 1,122 of 3,364 intensive- and 1,133 of 3,418 standard-therapy participants reported at least one fall. The average rate of falls was 60.8 and 55.3 per 100 person-years in the intensive and standard glycemia groups, respectively (1.10 [0.84-1.43]). Compared with standard glycemia, intensive glycemia did not increase or decrease fracture or fall risk in ACCORD.

  12. Cost-Effectiveness analysis of Recovery Management Checkups (RMC) for adults with chronic substance use disorders: evidence from a four-year randomized trial

    PubMed Central

    McCollister, Kathryn E.; French, Michael T.; Freitas, Derek M.; Dennis, Michael L.; Scott, Christy K.; Funk, Rodney R.

    2013-01-01

    Aims This study performs the first cost-effectiveness analysis (CEA) of Recovery Management Checkups (RMC) for adults with chronic substance use disorders. Design Cost-effectiveness analysis of a randomized clinical trial of RMC. Participants were randomly assigned to a control condition of outcome monitoring (OM-only) or the experimental condition OM-plus-RMC, with quarterly follow-up for four years. Setting Participants were recruited from the largest central intake unit for substance abuse treatment in Chicago, Illinois, USA. Participants 446 participants who were 38 years old on average, 54 percent male, and predominantly African American (85%). Measurements Data on the quarterly cost per participant come from a previous study of OM and RMC intervention costs. Effectiveness is measured as the number of days of abstinence and number of substance-use-related problems. Findings Over the four-year trial, OM-plus-RMC cost on average $2,184 more than OM-only (p<0.01). Participants in OM-plus-RMC averaged 1,026 days abstinent and had 89 substance-use-related problems. OM-only averaged 932 days abstinent and reported 126 substance-use-related problems. Mean differences for both effectiveness measures were statistically significant (p<0.01). The incremental cost-effectiveness ratio for OM-plus-RMC was $23.38 per day abstinent and $59.51 per reduced substance-related problem. When additional costs to society were factored into the analysis, OM-plus-RMC was less costly and more effective than OM-only. Conclusions Recovery Management Checkups are a cost-effective and potentially cost-saving strategy for promoting abstinence and reducing substance-use-related problems among chronic substance users. PMID:23961833

  13. Cost-effectiveness analysis of Recovery Management Checkups (RMC) for adults with chronic substance use disorders: evidence from a 4-year randomized trial.

    PubMed

    McCollister, Kathryn E; French, Michael T; Freitas, Derek M; Dennis, Michael L; Scott, Christy K; Funk, Rodney R

    2013-12-01

    This study performs the first cost-effectiveness analysis (CEA) of Recovery Management Checkups (RMC) for adults with chronic substance use disorders. Cost-effectiveness analysis of a randomized clinical trial of RMC. Participants were assigned randomly to a control condition of outcome monitoring (OM-only) or the experimental condition OM-plus-RMC, with quarterly follow-up for 4 years. Participants were recruited from the largest central intake unit for substance abuse treatment in Chicago, Illinois, USA. A total of 446 participants who were 38 years old on average, 54% male, and predominantly African American (85%). Data on the quarterly cost per participant come from a previous study of OM and RMC intervention costs. Effectiveness is measured as the number of days of abstinence and number of substance use-related problems. Over the 4-year trial, OM-plus-RMC cost on average $2184 more than OM-only (P < 0.01). Participants in OM-plus-RMC averaged 1026 days abstinent and had 89 substance use-related problems. OM-only averaged 932 days abstinent and reported 126 substance use-related problems. Mean differences for both effectiveness measures were statistically significant (P < 0.01). The incremental cost-effectiveness ratio for OM-plus-RMC was $23.38 per day abstinent and $59.51 per reduced substance-related problem. When additional costs to society were factored into the analysis, OM-plus-RMC was less costly and more effective than OM-only. Recovery Management Checkups are a cost-effective and potentially cost-saving strategy for promoting abstinence and reducing substance use-related problems among chronic substance users. © 2013 Society for the Study of Addiction.

  14. [Active carbon from Thalia dealbata residues: its preparation and adsorption performance to crystal violet].

    PubMed

    Chu, Shu-Yi; Yang, Min; Xiao, Ji-Bo; Zhang, Jun; Zhu, Yan-Ping; Yan, Xiang-Jun; Tian, Guang-Ming

    2013-06-01

    By using phosphoric acid as the activation agent, active carbon was prepared from Thalia dealbata residues. The BET specific surface area of the active carbon was 1174.13 m2 x g(-1), the micropore area was 426.99 m2 x g(-1), and the average pore diameter was 3.23 nm. The adsorption performance of the active carbon for crystal violet from aqueous solution was investigated under various conditions of pH, initial crystal violet concentration, contact time, and temperature. The adsorbed amount of crystal violet was only slightly affected by solution pH, and the adsorption process could be divided into two stages, i.e., fast adsorption and slow adsorption, which followed the pseudo-second-order kinetics model. At 293, 303, and 313 K, the adsorption process was better described by the Langmuir isotherm model, and the maximum adsorption capacity was 409.83, 425.53, and 438.59 mg x g(-1), respectively. In addition, the adsorption process was spontaneous and endothermic, and the randomness of crystal violet molecules increased.
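
    A hedged sketch of the two fits mentioned (Langmuir isotherm and pseudo-second-order kinetics) is given below; the concentration and uptake values are invented toy data, not the measured results.

```python
# Hedged sketch of the two fits mentioned (toy data, not the measured
# values): a Langmuir isotherm q = q_max*K*C/(1 + K*C) and the
# pseudo-second-order kinetic model t/q_t = 1/(k*q_e**2) + t/q_e.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    return q_max * K * C / (1.0 + K * C)

C_eq = np.array([10.0, 25.0, 50.0, 100.0, 200.0])      # mg/L at equilibrium
q_eq = np.array([150.0, 260.0, 330.0, 380.0, 405.0])   # mg/g adsorbed (toy)
(q_max, K), _ = curve_fit(langmuir, C_eq, q_eq, p0=(400.0, 0.05))
print("q_max ~", round(q_max, 1), "mg/g")

# pseudo-second-order: a linear fit of t/q_t against t gives q_e and k
t = np.array([5.0, 10.0, 20.0, 40.0, 80.0])            # minutes
q_t = np.array([120.0, 200.0, 280.0, 340.0, 380.0])    # mg/g at time t (toy)
slope, intercept = np.polyfit(t, t / q_t, 1)
q_e = 1.0 / slope
k = 1.0 / (intercept * q_e ** 2)
print("q_e ~", round(q_e, 1), "mg/g, k ~", round(k, 5))
```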

  15. Student conceptions of natural selection and its role in evolution

    NASA Astrophysics Data System (ADS)

    Bishop, Beth A.; Anderson, Charles W.

    Pretests and posttests on the topic of evolution by natural selection were administered to students in a college nonmajors' biology course. Analysis of test responses revealed that most students understood evolution as a process in which species respond to environmental conditions by changing gradually over time. Student thinking differed from accepted biological theory in that (a) changes in traits were attributed to a need-driven adaptive process rather than random genetic mutation and sexual recombination, (b) no role was assigned to variation on traits within a population or differences in reproductive success, and (c) traits were seen as gradually changing in all members of a population. Although students had taken an average of 1.9 years of previous biology courses, performance on the pretest was uniformly low. There was no relationship between the amount of previous biology taken and either pretest or posttest performance. Belief in the truthfulness of evolutionary theory was also unrelated to either pretest or posttest performance. Course instruction using specially designed materials was moderately successful in improving students' understanding of the evolutionary process.

  16. Autonomous unobtrusive detection of mild cognitive impairment in older adults.

    PubMed

    Akl, Ahmad; Taati, Babak; Mihailidis, Alex

    2015-05-01

    The current diagnosis process of dementia is resulting in a high percentage of cases with delayed detection. To address this problem, in this paper, we explore the feasibility of autonomously detecting mild cognitive impairment (MCI) in the older adult population. We implement a signal processing approach equipped with a machine learning paradigm to process and analyze real-world data acquired using home-based unobtrusive sensing technologies. Using the sensor and clinical data pertaining to 97 subjects, acquired over an average period of three years, a number of measures associated with the subjects' walking speed and general activity in the home were calculated. Different time spans of these measures were used to generate feature vectors to train and test two machine learning algorithms namely support vector machines and random forests. We were able to autonomously detect MCI in older adults with an area under the ROC curve of 0.97 and an area under the precision-recall curve of 0.93 using a time window of 24 weeks. This study is of great significance since it can potentially assist in the early detection of cognitive impairment in older adults.
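
    The classification step can be sketched with scikit-learn; the synthetic feature matrix, the train/test split, and the model settings below are assumptions, since the walking-speed and in-home activity feature extraction is outside the scope of this sketch.

```python
# Minimal sklearn sketch of the classification step (synthetic features;
# sensor windowing and walking-speed estimation are out of scope): train an
# SVM and a random forest and report the area under the ROC curve.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n = 97                                         # subjects, as in the study
X = rng.normal(size=(n, 12))                   # toy weekly activity features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
svm = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
for name, model in (("SVM", svm), ("RF", rf)):
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(name, "AUC:", round(auc, 2))
```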

  17. Grain boundary character distribution in nanocrystalline metals produced by different processing routes

    DOE PAGES

    Bober, David B.; Kumar, Mukal; Rupert, Timothy J.; ...

    2015-12-28

    Nanocrystalline materials are defined by their fine grain size, but details of the grain boundary character distribution should also be important. Grain boundary character distributions are reported for ball-milled, sputter-deposited, and electrodeposited Ni and Ni-based alloys, all with average grain sizes of ~20 nm, to study the influence of processing route. The two deposited materials had nearly identical grain boundary character distributions, both marked by a Σ3 length percentage of 23 to 25 pct. In contrast, the ball-milled material had only 3 pct Σ3-type grain boundaries and a large fraction of low-angle boundaries (16 pct), with the remainder being predominantly random high angle (73 pct). Furthermore, these grain boundary character measurements are connected to the physical events that control their respective processing routes. Consequences for material properties are also discussed with a focus on nanocrystalline corrosion. As a whole, the results presented here show that grain boundary character distribution, which has often been overlooked in nanocrystalline metals, can vary significantly and influence material properties in profound ways.

  18. Effect of different corn processing methods on enzyme producing bacteria, protozoa, fermentation and histomorphometry of rumen in fattening lambs

    PubMed Central

    Gholami, Mohammad Amin; Forouzmand, Masihollah; Khajavi, Mokhtar; Hossienifar, Shima; Naghiha, Reza

    2018-01-01

    The purpose of this study was to investigate the effect of different corn processing methods on rumen microbial flora, histomorphometry and fermentation in fattening male lambs. Twenty male lambs (average age and weight of 90 days and 25.00 ± 1.10 kg, respectively) were used in a completely randomized design including four treatments and five replicates each, over an 80-day period: 1) Lambs fed ground corn seeds; 2) Lambs fed steam-rolled corn; 3) Lambs fed soaked corn seeds (24 hr) and 4) Lambs fed soaked corn seeds (48 hr). At the end of the experiment, three lambs from each treatment were slaughtered and samples were collected for assessment of pH, volatile fatty acids, amylolytic, proteolytic, cellulolytic and heterophilic bacteria, and protozoa. The number of proteolytic bacteria in the soaked corn seed treatments was significantly increased in comparison with the other treatments. The thickness of the wall, papillae and muscular layers of the rumen in the soaked corn seed treatments was significantly increased. Overall, from a practical point of view, soaked corn processing could be generally used in lamb fattening systems. PMID:29719663

  19. Breathing as a Fundamental Rhythm of Brain Function.

    PubMed

    Heck, Detlef H; McAfee, Samuel S; Liu, Yu; Babajani-Feremi, Abbas; Rezaie, Roozbeh; Freeman, Walter J; Wheless, James W; Papanicolaou, Andrew C; Ruszinkó, Miklós; Sokolov, Yury; Kozma, Robert

    2016-01-01

    Ongoing fluctuations of neuronal activity have long been considered intrinsic noise that introduces unavoidable and unwanted variability into neuronal processing, which the brain eliminates by averaging across population activity (Georgopoulos et al., 1986; Lee et al., 1988; Shadlen and Newsome, 1994; Maynard et al., 1999). It is now understood, that the seemingly random fluctuations of cortical activity form highly structured patterns, including oscillations at various frequencies, that modulate evoked neuronal responses (Arieli et al., 1996; Poulet and Petersen, 2008; He, 2013) and affect sensory perception (Linkenkaer-Hansen et al., 2004; Boly et al., 2007; Sadaghiani et al., 2009; Vinnik et al., 2012; Palva et al., 2013). Ongoing cortical activity is driven by proprioceptive and interoceptive inputs. In addition, it is partially intrinsically generated in which case it may be related to mental processes (Fox and Raichle, 2007; Deco et al., 2011). Here we argue that respiration, via multiple sensory pathways, contributes a rhythmic component to the ongoing cortical activity. We suggest that this rhythmic activity modulates the temporal organization of cortical neurodynamics, thereby linking higher cortical functions to the process of breathing.

  20. Recovery Characteristics of Anomalous Stress-Induced Leakage Current of 5.6 nm Oxide Films

    NASA Astrophysics Data System (ADS)

    Inatsuka, Takuya; Kumagai, Yuki; Kuroda, Rihito; Teramoto, Akinobu; Sugawa, Shigetoshi; Ohmi, Tadahiro

    2012-04-01

    Anomalous stress-induced leakage current (SILC), which has a much larger current density than average SILC, causes severe bit error in flash memories. To suppress anomalous SILC, detailed evaluations are strongly required. We evaluate the characteristics of anomalous SILC of 5.6 nm oxide films using a fabricated array test pattern, and recovery characteristics are observed. Some characteristics of typical anomalous cells in the time domain are measured, and the recovery characteristics of average and anomalous SILCs are examined. Some of the anomalous cells have random telegraph signals (RTSs) of gate leakage current, which are characterized as discrete and random switching phenomena. The dependence of RTSs on the applied electric field is investigated, and the recovery tendency of anomalous SILC with and without RTSs are also discussed.

  1. Vulnerability of complex networks

    NASA Astrophysics Data System (ADS)

    Mishkovski, Igor; Biey, Mario; Kocarev, Ljupco

    2011-01-01

    We consider normalized average edge betweenness of a network as a metric of network vulnerability. We suggest that normalized average edge betweenness, together with its relative difference when a certain number of nodes and/or edges are removed from the network, is a measure of network vulnerability, called the vulnerability index. The vulnerability index is calculated for four synthetic networks: Erdős-Rényi (ER) random networks, the Barabási-Albert (BA) model of scale-free networks, the Watts-Strogatz (WS) model of small-world networks, and geometric random networks. Real-world networks for which the vulnerability index is calculated include: two human brain networks, three urban networks, one collaboration network, and two power grid networks. We find that the WS model of small-world networks and the biological networks (human brain networks) are the most robust networks among all networks studied in the paper.
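
    A rough sketch of the metric follows; the normalization by the number of edges and the random-removal strategy are assumptions made for illustration, not necessarily the exact definitions used in the paper.

```python
# Rough sketch (normalization and removal strategy are assumptions): the
# average edge betweenness of a graph, and its relative change after a
# fraction of nodes is removed at random.
import random

import networkx as nx

def avg_edge_betweenness(G):
    eb = nx.edge_betweenness_centrality(G, normalized=True)
    return sum(eb.values()) / G.number_of_edges()

def vulnerability_index(G, frac_removed=0.05, seed=0):
    b0 = avg_edge_betweenness(G)
    H = G.copy()
    random.seed(seed)
    H.remove_nodes_from(random.sample(list(H.nodes), int(frac_removed * len(H))))
    return (avg_edge_betweenness(H) - b0) / b0      # relative difference

G = nx.erdos_renyi_graph(200, 0.05, seed=1)         # ER example network
print(round(vulnerability_index(G), 3))
```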

  2. Sudden emergence of q-regular subgraphs in random graphs

    NASA Astrophysics Data System (ADS)

    Pretti, M.; Weigt, M.

    2006-07-01

    We investigate the computationally hard problem of whether a random graph of finite average vertex degree has an extensively large q-regular subgraph, i.e., a subgraph with all vertices having degree equal to q. We reformulate this problem as a constraint-satisfaction problem, and solve it using the cavity method of statistical physics at zero temperature. For q = 3, we find that the first large q-regular subgraphs appear discontinuously at an average vertex degree c_(3-reg) ≈ 3.3546 and contain immediately about 24% of all vertices in the graph. This transition is extremely close to (but different from) the well-known 3-core percolation point c_(3-core) ≈ 3.3509. For q > 3, the q-regular subgraph percolation threshold is found to coincide with that of the q-core.

  3. Scaling of Directed Dynamical Small-World Networks with Random Responses

    NASA Astrophysics Data System (ADS)

    Zhu, Chen-Ping; Xiong, Shi-Jie; Tian, Ying-Jie; Li, Nan; Jiang, Ke-Sheng

    2004-05-01

    A dynamical model of small-world networks, with directed links which describe various correlations in social and natural phenomena, is presented. Random responses of sites to the input message are introduced to simulate real systems. The interplay of these ingredients results in the collective dynamical evolution of a spinlike variable S(t) of the whole network. The global average spreading length and average spreading time are found to scale as p^(-α) ln(N) with different exponents. Meanwhile, S(t) behaves in a duple scaling form for N ≫ N*: S ~ f(p^(-β) q^(γ) t̃), where p and q are rewiring and external parameters, α, β, and γ are scaling exponents, and f(t̃) is a universal function. Possible applications of the model are discussed.

  4. Can evidence change the rate of back surgery? A randomized trial of community-based education.

    PubMed

    Goldberg, H I; Deyo, R A; Taylor, V M; Cheadle, A D; Conrad, D A; Loeser, J D; Heagerty, P J; Diehr, P

    2001-01-01

    Timely adoption of clinical practice guidelines is more likely to happen when the guidelines are used in combination with adjuvant educational strategies that address social as well as rational influences. To implement the conservative, evidence-based approach to low-back pain recommended in national guidelines, with the anticipated effect of reducing population-based rates of surgery. A randomized, controlled trial. Ten communities in western Washington State with annual rates of back surgery above the 1990 national average (158 operations per 100,000 adults). Spine surgeons, primary care physicians, patients who were surgical candidates, and hospital administrators. The five communities randomized to the intervention group received a package of six educational activities tailored to local needs by community planning groups. Surgeon study groups, primary care continuing medical education conferences, administrative consensus processes, videodisc-aided patient decision making, surgical outcomes management, and generalist academic detailing were serially implemented over a 30-month intervention period. Quarterly observations of surgical rates. After implementation of the intervention, surgery rates declined in the intervention communities but increased slightly in the control communities. The net effect of the intervention is estimated to be a decline of 20.9 operations per 100,000, a relative reduction of 8.9% (P = 0.01). We were able to use scientific evidence to engender voluntary change in back pain practice patterns across entire communities.

  5. Reach Out Churches: A Community-Based Participatory Research Pilot Trial to Assess the Feasibility of a Mobile Health Technology Intervention to Reduce Blood Pressure Among African Americans.

    PubMed

    Skolarus, Lesli E; Cowdery, Joan; Dome, Mackenzie; Bailey, Sarah; Baek, Jonggyu; Byrd, James Brian; Hartley, Sarah E; Valley, Staci C; Saberi, Sima; Wheeler, Natalie C; McDermott, Mollie; Hughes, Rebecca; Shanmugasundaram, Krithika; Morgenstern, Lewis B; Brown, Devin L

    2017-06-01

    Innovative strategies are needed to reduce the hypertension epidemic among African Americans. Reach Out was a faith-collaborative, mobile health, randomized, pilot intervention trial of four mobile health components to reduce high blood pressure (BP) compared to usual care. It was designed and tested within a community-based participatory research framework among African Americans recruited and randomized from churches in Flint, Michigan. The purpose of this pilot study was to assess the feasibility of the Reach Out processes. Feasibility was assessed by willingness to consent (acceptance of randomization), proportion of weeks participants texted their BP readings (intervention use), number lost to follow-up (retention), and responses to postintervention surveys and focus groups (acceptance of intervention). Of the 425 church members who underwent BP screening, 94 enrolled in the study and 73 (78%) completed the 6-month outcome assessment. Median age was 58 years, and 79% were women. Participants responded with their BPs on an average of 13.7 (SD = 10.7) weeks out of 26 weeks that the BP prompts were sent. All participants reported satisfaction with the intervention. Reach Out, a faith-collaborative, mobile health intervention was feasible. Further study of the efficacy of the intervention and additional mobile health strategies should be considered.

  6. Adaptive Localization of Focus Point Regions via Random Patch Probabilistic Density from Whole-Slide, Ki-67-Stained Brain Tumor Tissue

    PubMed Central

    Alomari, Yazan M.; MdZin, Reena Rahayu

    2015-01-01

    Analysis of whole-slide tissue for digital pathology images has been clinically approved to provide a second opinion to pathologists. Localization of focus points from Ki-67-stained histopathology whole-slide tissue microscopic images is considered the first step in the process of proliferation rate estimation. Pathologists use eye pooling or eagle-view techniques to localize the highly stained, cell-concentrated regions from the whole slide under the microscope; these are called focus-point regions. This procedure suffers from high interobserver variability, is time-consuming and tedious, and can lead to inaccurate findings. The localization of focus-point regions can be addressed as a clustering problem. This paper aims to automate the localization of focus-point regions from whole-slide images using the random patch probabilistic density method. Unlike other clustering methods, the random patch probabilistic density method can adaptively localize focus-point regions without predetermining the number of clusters. The proposed method was compared with the k-means and fuzzy c-means clustering methods and achieved good performance when the results were evaluated by three expert pathologists. The proposed method achieves an average false-positive rate of 0.84% for the focus-point region localization error. Moreover, when RPPD was used to localize tissue from whole-slide images, 228 whole-slide images were tested and 97.3% localization accuracy was achieved. PMID:25793010
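The abstract does not give implementation details, so the following is only a schematic illustration of the general idea of scoring randomly sampled patches by stain density and keeping the densest ones; the function name, the stain mask input, and the adaptive threshold are all hypothetical, not the authors' algorithm.

```python
import numpy as np

def localize_focus_points(stain_mask, n_patches=5000, patch=256, rng=None):
    """Score randomly placed patches by stained-pixel density; keep the densest ones.

    stain_mask: 2D boolean array, True where pixels are stain-positive (assumed input).
    Returns (rows, cols, densities) of patches above an adaptive threshold.
    """
    rng = np.random.default_rng(rng)
    h, w = stain_mask.shape
    rows = rng.integers(0, h - patch, n_patches)
    cols = rng.integers(0, w - patch, n_patches)
    dens = np.array([stain_mask[r:r + patch, c:c + patch].mean()
                     for r, c in zip(rows, cols)])
    thr = dens.mean() + 2 * dens.std()   # hypothetical adaptive cut-off
    keep = dens > thr
    return rows[keep], cols[keep], dens[keep]
```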

  7. Simulating propagation of coherent light in random media using the Fredholm type integral equation

    NASA Astrophysics Data System (ADS)

    Kraszewski, Maciej; Pluciński, Jerzy

    2017-06-01

    Studying the propagation of light in random scattering materials is important for both basic and applied research. Such studies often require numerical methods for simulating the behavior of light beams in random media. However, if these simulations must account for the coherence properties of light, they can become complex numerical problems. Well-established methods exist for simulating multiple scattering of light (e.g., radiative transfer theory and Monte Carlo methods), but they do not treat the coherence properties of light directly. Some variants of these methods allow the behavior of coherent light to be predicted, but only for an averaged realization of the scattering medium. This limits their application in studying many physical phenomena connected to a specific distribution of scattering particles (e.g., laser speckle). In general, numerical simulation of coherent light propagation in a specific realization of a random medium is a time- and memory-consuming problem. The goal of the presented research was to develop a new, efficient method for solving this problem. The method, presented in our earlier works, is based on solving a Fredholm-type integral equation that describes the multiple light scattering process. This equation can be discretized and solved numerically using various algorithms, e.g., by directly solving the corresponding linear system or by using iterative or Monte Carlo solvers. Here we present recent developments of this method, including comparisons with well-known analytical results and with finite-difference-type simulations. We also present an extension of the method to problems of multiple scattering of polarized light by large spherical particles, which joins the presented mathematical formalism with Mie theory.
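The abstract only names the approach; as a generic illustration of the discretize-and-solve step, the sketch below applies a Nyström discretization to a Fredholm equation of the second kind, u(x) = f(x) + ∫ K(x, y) u(y) dy, and solves the resulting dense linear system. The kernel and source used here are smooth placeholders, not the scattering kernel of the paper.

```python
import numpy as np

def solve_fredholm_2nd_kind(kernel, source, a=0.0, b=1.0, n=200):
    """Nystrom discretization: solve (I - K W) u = f on n Gauss-Legendre nodes."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * x + 0.5 * (b + a)       # map nodes to [a, b]
    w = 0.5 * (b - a) * w                       # rescale weights
    K = kernel(x[:, None], x[None, :])          # K[i, j] = K(x_i, x_j)
    A = np.eye(n) - K * w[None, :]
    u = np.linalg.solve(A, source(x))
    return x, u

# Toy example with a placeholder exponential kernel and a constant source term.
x, u = solve_fredholm_2nd_kind(lambda x, y: 0.3 * np.exp(-np.abs(x - y)),
                               lambda x: np.ones_like(x))
```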

  8. Examination of Cognitive Function During Six Months of Calorie Restriction: Results of a Randomized Controlled Trial

    PubMed Central

    Martin, Corby K.; Anton, Stephen D.; Han, Hongmei; York-Crowe, Emily; Redman, Leanne M.; Ravussin, Eric; Williamson, Donald A.

    2009-01-01

    Background Calorie restriction increases longevity in many organisms, and calorie restriction or its mimetic might increase longevity in humans. It is unclear if calorie restriction/dieting contributes to cognitive impairment. During this randomized controlled trial, the effect of 6 months of calorie restriction on cognitive functioning was tested. Methods Participants (n = 48) were randomized to one of four groups: (1) control (weight maintenance), (2) calorie restriction (CR; 25% restriction), (3) CR plus structured exercise (CR + EX, 12.5% restriction plus 12.5% increased energy expenditure via exercise), or (4) low-calorie diet (LCD; 890 kcal/d diet until 15% weight loss, followed by weight maintenance). Cognitive tests (verbal memory, visual memory, attention/concentration) were conducted at baseline and months 3 and 6. Mixed linear models tested if cognitive function changed significantly from baseline to months 3 and 6, and if this change differed by group. Correlation analysis was used to determine if average daily energy deficit (quantified from change in body energy stores) was associated with change in cognitive test performance for the three dieting groups combined. Results No consistent pattern of verbal memory, visual retention/memory, or attention/concentration deficits emerged during the trial. Daily energy deficit was not significantly associated with change in cognitive test performance. Conclusions This randomized controlled trial suggests that calorie restriction/dieting was not associated with a consistent pattern of cognitive impairment. These conclusions must be interpreted in the context of study limitations, namely small sample size and limited statistical power. Previous reports of cognitive impairment might reflect sampling biases or information processing biases. PMID:17518698

  9. Do children with grapheme-colour synaesthesia show cognitive benefits?

    PubMed

    Simner, Julia; Bain, Angela E

    2018-02-01

    Grapheme-colour synaesthesia is characterized by conscious and consistent associations between letters and colours, or between numbers and colours (e.g., synaesthetes might see A as red, 7 as green). Our study explored the development of this condition in a group of randomly sampled child synaesthetes. Two previous studies (Simner & Bain, 2013, Frontiers in Human Neuroscience, 7, 603; Simner, Harrold, Creed, Monro, & Foulkes, 2009, Brain, 132, 57) had screened over 600 primary school children to find the first randomly sampled cohort of child synaesthetes. In this study, we evaluate this cohort to ask whether their synaesthesia is associated with a particular cognitive profile of strengths and/or weaknesses. We tested our child synaesthetes at age 10-11 years in a series of cognitive tests, in comparison with matched controls and baseline norms. One previous study (Green & Goswami, 2008, Cognition, 106, 463) had suggested that child synaesthetes might perform differently to non-synaesthetes in such tasks, although those participants may have been a special type of population independent of their synaesthesia. In our own study of randomly sampled child synaesthetes, we found no significant advantages or disadvantages in a receptive vocabulary test and a memory matrix task. However, we found that synaesthetes demonstrated above-average performance in a processing-speed task and a near-significant advantage in a letter-span task (i.e., memory/recall task of letters). Our findings point to advantages for synaesthetes that go beyond those expected from enhanced coding accounts and we present the first picture of the broader cognitive profile of a randomly sampled population of child synaesthetes. © 2017 The British Psychological Society.

  10. Turbulent fluid motion IV-averages, Reynolds decomposition, and the closure problem

    NASA Technical Reports Server (NTRS)

    Deissler, Robert G.

    1992-01-01

    Ensemble, time, and space averages as applied to turbulent quantities are discussed, and pertinent properties of the averages are obtained. Those properties, together with Reynolds decomposition, are used to derive the averaged equations of motion and the one- and two-point moment or correlation equations. The terms in the various equations are interpreted. The closure problem of the averaged equations is discussed, and possible closure schemes are considered. Those schemes usually require an input of supplemental information unless the averaged equations are closed by calculating their terms by a numerical solution of the original unaveraged equations. The law of the wall for velocities and temperatures, the velocity- and temperature-defect laws, and the logarithmic laws for velocities and temperatures are derived. Various notions of randomness and their relation to turbulence are considered in light of ergodic theory.
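For reference, the Reynolds decomposition and the averaged momentum equation the report derives take the familiar textbook form below (incompressible flow, standard index notation); this is the standard statement of the closure problem the abstract discusses, not a quotation from the report.

```latex
u_i = \bar{u}_i + u_i', \qquad \overline{u_i'} = 0,
\qquad
\frac{\partial \bar{u}_i}{\partial t} + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
  = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
  + \frac{\partial}{\partial x_j}\!\left( \nu \frac{\partial \bar{u}_i}{\partial x_j}
  - \overline{u_i' u_j'} \right).
```

The Reynolds-stress term involving the correlation of fluctuations introduces more unknowns than there are equations, which is precisely the closure problem that requires supplemental modeling input.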

  11. Multidimensional Treatment Foster Care for Girls in the Juvenile Justice System: 2-Year Follow-Up of a Randomized Clinical Trial

    ERIC Educational Resources Information Center

    Chamberlain, Patricia; Leve, Leslie D.; DeGarmo, David S.

    2007-01-01

    This study is a 2-year follow-up of girls with serious and chronic delinquency who were enrolled in a randomized clinical trial conducted from 1997 to 2002 comparing multidimensional treatment foster care (MTFC) and group care (N = 81). Girls were referred by juvenile court judges and had an average of over 11 criminal referrals when they entered…

  12. Engaging women with an embodied conversational agent to deliver mindfulness and lifestyle recommendations: A feasibility randomized control trial.

    PubMed

    Gardiner, Paula M; McCue, Kelly D; Negash, Lily M; Cheng, Teresa; White, Laura F; Yinusa-Nyahkoon, Leanne; Jack, Brian W; Bickmore, Timothy W

    2017-09-01

    This randomized controlled trial evaluates the feasibility of using an Embodied Conversational Agent (ECA) to teach lifestyle modifications to urban women. Women were randomized to either 1) an ECA (content included: mindfulness, stress management, physical activity, and healthy eating) or 2) patient education sheets mirroring same content plus a meditation CD/MP3 once a day for one month. General outcome measures included: number of stress management techniques used, physical activity levels, and eating patterns. Sixty-one women ages 18 to 50 were enrolled. On average, 51% identified as white, 26% as black, 23% as other races; and 20% as Hispanic. The major stress management techniques reported at baseline were: exercise (69%), listening to music (70%), and social support (66%). After one month, women randomized to the ECA significantly decreased alcohol consumption to reduce stress (p=0.03) and increased daily fruit consumption by an average of 2 servings compared to the control (p=0.04). It is feasible to use an ECA to promote health behaviors on stress management and healthy eating among diverse urban women. Compared to patient information sheets, ECAs provide promise as a way to teach healthy lifestyle behaviors to diverse urban women. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. A pilot randomized controlled trial evaluating motivationally matched pedometer feedback to increase physical activity behavior in older adults.

    PubMed

    Strath, Scott J; Swartz, Ann M; Parker, Sarah J; Miller, Nora E; Grimm, Elizabeth K; Cashin, Susan E

    2011-09-01

    Increasing physical activity (PA) levels in older adults represents an important public health challenge. The purpose of this study was to evaluate the feasibility of combining individualized motivational messaging with pedometer walking step targets to increase PA in previously inactive and insufficiently active older adults. In this 12-week intervention study, older adults were randomized to 1 of 4 study arms: Group 1--control; Group 2--pedometer 10,000 step goal; Group 3--pedometer step goal plus individualized motivational feedback; or Group 4--everything in Group 3 augmented with biweekly telephone feedback. A total of 81 participants were randomized into the study; 61 completed it, with an average age of 63.8 ± 6.0 years. Group 1 did not differ in accumulated steps/day following the 12-week intervention compared with participants in Group 2. Participants in Groups 3 and 4 took on average 2159 (P < .001) and 2488 (P < .001) more steps/day, respectively, than those in Group 1 after the 12-week intervention. In this 12-week pilot randomized control trial, a pedometer feedback intervention partnered with individually matched motivational messaging was an effective intervention strategy to significantly increase PA behavior in previously inactive and insufficiently active older adults.

  14. Contact Time in Random Walk and Random Waypoint: Dichotomy in Tail Distribution

    NASA Astrophysics Data System (ADS)

    Zhao, Chen; Sichitiu, Mihail L.

    Contact time (or link duration) is a fundamental factor that affects performance in Mobile Ad Hoc Networks. Previous theoretical analyses of the contact time distribution for random walk (RW) models assume that contact events can be modeled either as consecutive random walks or as direct traversals, two extreme cases of the random walk that lead to two different conclusions. In this paper we study this topic comprehensively in the hope of bridging the gap between the two extremes. The two extreme cases yield a power-law tail and an exponential tail in the contact time distribution, respectively. However, we show that the actual distribution varies between the two extremes, exhibiting a power-law-sub-exponential dichotomy whose transition point depends on the average flight duration. Through simulation results we show that this conclusion also applies to the random waypoint model.
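The quantity being analyzed can be estimated empirically with a very small simulation: record how long two independent random walkers stay within a fixed communication range. The sketch below is a minimal illustration of that measurement, not the paper's mobility model; box size, step length, radius, and step count are arbitrary.

```python
import numpy as np

def contact_times(steps=200_000, box=100.0, step=1.0, radius=10.0, seed=0):
    """Durations of contiguous periods during which two 2D random walkers are within `radius`."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, box, size=(2, 2))          # two walkers, (x, y) each
    durations, current = [], 0
    for _ in range(steps):
        ang = rng.uniform(0.0, 2.0 * np.pi, 2)
        pos = (pos + step * np.stack([np.cos(ang), np.sin(ang)], axis=1)) % box
        d = pos[0] - pos[1]
        d -= box * np.round(d / box)                  # minimum-image (periodic) distance
        if np.hypot(d[0], d[1]) < radius:
            current += 1
        elif current:
            durations.append(current)
            current = 0
    if current:
        durations.append(current)
    return np.array(durations)

# Inspecting the tail of contact_times() is what distinguishes power-law from exponential decay.
```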

  15. Method and apparatus for in-situ characterization of energy storage and energy conversion devices

    DOEpatents

    Christophersen, Jon P [Idaho Falls, ID; Motloch, Chester G [Idaho Falls, ID; Morrison, John L [Butte, MT; Albrecht, Weston [Layton, UT

    2010-03-09

    Disclosed are methods and apparatuses for determining an impedance of an energy-output device using a random noise stimulus applied to the energy-output device. A random noise signal is generated and converted to a random noise stimulus as a current source correlated to the random noise signal. A bias-reduced response of the energy-output device to the random noise stimulus is generated by comparing a voltage at the energy-output device terminal to an average voltage signal. The random noise stimulus and bias-reduced response may be periodically sampled to generate a time-varying current stimulus and a time-varying voltage response, which may be correlated to generate an autocorrelated stimulus, an autocorrelated response, and a cross-correlated response. Finally, the autocorrelated stimulus, the autocorrelated response, and the cross-correlated response may be combined to determine at least one of impedance amplitude, impedance phase, and complex impedance.
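The patent describes a correlation-based estimate built from the stimulus and response records. One common software realization of that general idea (a generic sketch, not the patented apparatus) is the cross-spectral estimator Z(f) = S_iv(f) / S_ii(f), computed here with SciPy's Welch-based spectral routines; sampling rate and segment length are placeholders.

```python
import numpy as np
from scipy.signal import csd, welch

def impedance_spectrum(i_stim, v_resp, fs, nperseg=4096):
    """Estimate complex impedance from a random-noise current stimulus and the voltage response."""
    f, s_iv = csd(i_stim, v_resp, fs=fs, nperseg=nperseg)   # cross-spectral density (stimulus first)
    _, s_ii = welch(i_stim, fs=fs, nperseg=nperseg)         # stimulus auto-spectral density
    z = s_iv / s_ii
    return f, np.abs(z), np.angle(z)                        # amplitude and phase vs. frequency
```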

  16. An Empirical Study of Design Parameters for Assessing Differential Impacts for Students in Group Randomized Trials.

    PubMed

    Jaciw, Andrew P; Lin, Li; Ma, Boya

    2016-10-18

    Prior research has investigated design parameters for assessing average program impacts on achievement outcomes with cluster randomized trials (CRTs). Less is known about parameters important for assessing differential impacts. This article develops a statistical framework for designing CRTs to assess differences in impact among student subgroups and presents initial estimates of critical parameters. Effect sizes and minimum detectable effect sizes for average and differential impacts are calculated before and after conditioning on effects of covariates using results from several CRTs. Relative sensitivities to detect average and differential impacts are also examined. Student outcomes from six CRTs are analyzed. Achievement in math, science, reading, and writing. The ratio of between-cluster variation in the slope of the moderator divided by total variance (the "moderator gap variance ratio") is important for designing studies to detect differences in impact between student subgroups. This quantity is the analogue of the intraclass correlation coefficient. Typical values were .02 for gender and .04 for socioeconomic status. For the studies considered, estimates of differential impact were in many cases larger than those of average impact, and after conditioning on effects of covariates, similar power was achieved for detecting average and differential impacts of the same size. Measuring differential impacts is important for addressing questions of equity, generalizability, and guiding interpretation of subgroup impact findings. Adequate power for doing this is in some cases reachable with CRTs designed to measure average impacts. Continuing collection of parameters for assessing differential impacts is the next step. © The Author(s) 2016.

  17. In-Line Monitoring of a Pharmaceutical Pan Coating Process by Optical Coherence Tomography.

    PubMed

    Markl, Daniel; Hannesschläger, Günther; Sacher, Stephan; Leitner, Michael; Buchsbaum, Andreas; Pescod, Russel; Baele, Thomas; Khinast, Johannes G

    2015-08-01

    This work demonstrates a new in-line measurement technique for monitoring the coating growth of randomly moving tablets in a pan coating process. In-line quality control is performed by an optical coherence tomography (OCT) sensor allowing nondestructive and contact-free acquisition of cross-section images of film coatings in real time. The coating thickness can be determined directly from these OCT images and no chemometric calibration models are required for quantification. Coating thickness measurements are extracted from the images by a fully automated algorithm. Results of the in-line measurements are validated using off-line OCT images, thickness calculations from tablet dimension measurements, and weight gain measurements. Validation measurements are performed on sample tablets periodically removed from the process during production. Reproducibility of the results is demonstrated by three batches produced under the same process conditions. OCT enables a multiple direct measurement of the coating thickness on individual tablets rather than providing the average coating thickness of a large number of tablets. This gives substantially more information about the coating quality, that is, intra- and intertablet coating variability, than standard quality control methods. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.

  18. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pei, Yuru, E-mail: peiyuru@cis.pku.edu.cn; Ai, Xin

    Purpose: Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. Methods: The authors propose a 3D exemplar-based random walk method of tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is used to obtain an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors conduct a regularization by using 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours, which are obtained from the random walks based segmentation. The soft constraints on voxel labeling are defined by shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of the one-shot label propagation in the VOI, an iterative refinement process can achieve a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. Results: The proposed method was applied for tooth segmentation of twenty clinically captured CBCT images. Three metrics, including the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth including incisors and canines, premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. Conclusions: The proposed technique enables an efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and interoperative uses of dental morphologies in maxillofacial and orthodontic treatments.
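Only the seeded random-walk label-propagation step is easy to illustrate in isolation; the sketch below uses the random_walker implementation from scikit-image for that step and leaves out the exemplar registration and SVM stages entirely. The seed layout, beta value, and solver mode are placeholders, not the authors' settings.

```python
from skimage.segmentation import random_walker

def propagate_labels(volume, seeds, beta=130):
    """Seeded random-walk label propagation on a CBCT sub-volume (the VOI).

    volume: 3D array of image intensities.
    seeds:  3D int array of the same shape; 0 = unlabeled, 1 = tooth prior, 2 = background prior.
    Returns a 3D label array with every voxel assigned to label 1 or 2.
    """
    return random_walker(volume, seeds, beta=beta, mode='cg')
```

In the pipeline described above, the seed labels would come from the user-defined priors and, in later iterations, from the shape- and appearance-based soft constraints.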

  19. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images.

    PubMed

    Pei, Yuru; Ai, Xingsheng; Zha, Hongbin; Xu, Tianmin; Ma, Gengyu

    2016-09-01

    Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. The authors propose a 3D exemplar-based random walk method of tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is used to obtain an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors conduct a regularization by using 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours, which are obtained from the random walks based segmentation. The soft constraints on voxel labeling are defined by shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of the one-shot label propagation in the VOI, an iterative refinement process can achieve a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. The proposed method was applied for tooth segmentation of twenty clinically captured CBCT images. Three metrics, including the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth including incisors and canines, premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. The proposed technique enables an efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and interoperative uses of dental morphologies in maxillofacial and orthodontic treatments.

  20. New constraints on modelling the random magnetic field of the MW

    NASA Astrophysics Data System (ADS)

    Beck, Marcus C.; Beck, Alexander M.; Beck, Rainer; Dolag, Klaus; Strong, Andrew W.; Nielaba, Peter

    2016-05-01

    We extend the description of the isotropic and anisotropic random component of the small-scale magnetic field within the existing magnetic field model of the Milky Way from Jansson & Farrar, by including random realizations of the small-scale component. Using a magnetic-field power spectrum with Gaussian random fields, the NE2001 model for the thermal electrons, and the Galactic cosmic-ray electron distribution from the current GALPROP model, we derive full-sky maps for the total and polarized synchrotron intensity as well as the Faraday rotation-measure distribution. While previous work assumed that small-scale fluctuations average out along the line of sight, or computed only ensemble averages of random fields, we show that these fluctuations need to be taken into account carefully. Comparing with observational data we obtain not only good agreement with 408 MHz total and WMAP7 22 GHz polarized intensity emission maps, but also an improved agreement with Galactic foreground rotation-measure maps and power spectra, whose amplitude and shape strongly depend on the parameters of the random field. We demonstrate that a correlation length of ≈22 pc (with 5 pc being a 5σ lower limit) is needed to match the slope of the observed power spectrum of Galactic foreground rotation-measure maps. Using multiple realizations also allows us to infer errors on individual observables. We find that previously used amplitudes for the random and anisotropic random magnetic field components need to be rescaled by factors of ≈0.3 and 0.6 to account for the new small-scale contributions. Our model predicts a rotation measure of -2.8±7.1 rad/m2 and +4.4±11 rad/m2 for the north and south Galactic poles respectively, in good agreement with observations. Applying our model to deflections of ultra-high-energy cosmic rays we infer a mean deflection of ≈3.5±1.1 degrees for 60 EeV protons arriving from CenA.

  1. Weak convergence to isotropic complex symmetric α-stable (SαS) random measure.

    PubMed

    Wang, Jun; Li, Yunmeng; Sang, Liheng

    2017-01-01

    In this paper, we prove that an isotropic complex symmetric α-stable (SαS) random measure can be approximated by a complex process constructed from integrals based on a Poisson process with random intensity.

  2. Scattering of electromagnetic wave by the layer with one-dimensional random inhomogeneities

    NASA Astrophysics Data System (ADS)

    Kogan, Lev; Zaboronkova, Tatiana; Grigoriev, Gennadii., IV.

    A great deal of attention has been paid to the study of the probability characteristics of electromagnetic waves scattered by one-dimensional fluctuations of the medium dielectric permittivity. However, the problem of determining the probability density and the average intensity of the field inside a stochastically inhomogeneous medium with arbitrary extension of the fluctuations has not been considered yet. The purpose of the present report is to find and analyze these functions for a plane electromagnetic wave scattered by a layer with one-dimensional fluctuations of permittivity. We assume that the length and the amplitude of the individual fluctuations, as well as the interval between them, are random quantities. All of the indicated fluctuation parameters are supposed to be independent random values possessing Gaussian distributions. We consider the stationary-in-time case for both small-scale and large-scale rarefied inhomogeneities. Mathematically, the problem can be reduced to the solution of a Fredholm integral equation of the second kind for the Hertz potential (U). Using the decomposition of the field into a series of multiply scattered waves, we obtain the expression for the probability density of the field of the plane wave and determine the moments of the scattered field. We show that all odd moments of the centered field (U-⟨U⟩) are equal to zero and that the even moments depend on the intensity. The probability density of the field possesses a Gaussian distribution. The average field is small compared with the standard fluctuation of the scattered field for all considered cases of inhomogeneities. The average intensity of the field is of the order of the standard fluctuation of the field intensity and decreases with increasing inhomogeneity length in the case of small-scale inhomogeneities. The behavior of the average intensity is more complicated in the case of large-scale medium inhomogeneities: the average intensity oscillates as a function of the average fluctuation length if the standard deviation of the inhomogeneity length is greater than the wavelength, whereas it depends only weakly on the average fluctuation extension when that standard deviation is smaller than the wavelength. The obtained results may be used for the analysis of electromagnetic wave propagation in media with fluctuating parameters caused by such factors as leaves of trees, cumulus clouds, internal gravity waves with a chaotic phase, etc. Acknowledgment: This work was supported by the Russian Foundation for Basic Research (projects 08-02-97026 and 09-05-00450).

  3. Soil Erosion as a stochastic process

    NASA Astrophysics Data System (ADS)

    Casper, Markus C.

    2015-04-01

    The main tools for estimating the risk and amount of erosion are different types of soil erosion models: on the one hand, there are empirically based model concepts; on the other hand, there are more physically based or process-based models. However, both types of models have substantial weak points. All empirical model concepts are only capable of providing rough estimates over larger temporal and spatial scales; they do not account for many driving factors that are within the scope of scenario-related analyses. In addition, the physically based models contain important empirical parts and hence do not meet the demand for universality and transferability. As a common feature, we find that all models rely on parameters and input variables that are, to a certain extent, spatially and temporally averaged. A central question is whether the apparent heterogeneity of soil properties or the random nature of driving forces needs to be better considered in our modelling concepts. Traditionally, researchers have attempted to remove spatial and temporal variability through homogenization. However, homogenization has been achieved through physical manipulation of the system, or by statistical averaging procedures. The price for obtaining these homogenized (average) model concepts of soils and soil-related processes has often been a failure to recognize the profound importance of heterogeneity in many of the properties and processes that we study. Soil infiltrability and the erosion resistance (also called "critical shear stress" or "critical stream power") are in particular the most important empirical factors of physically based erosion models. The erosion resistance is theoretically a substrate-specific parameter, but in reality the threshold where soil erosion begins is determined experimentally. The soil infiltrability is often calculated with empirical relationships (e.g., based on grain size distribution); consequently, to better fit reality, this value needs to be corrected experimentally. To overcome this disadvantage of current models, soil erosion models are needed that are able to use stochastic variables and parameter distributions directly. There are only some minor approaches in this direction. The most advanced is the model "STOSEM" proposed by Sidorchuk in 2005. In this model, only a small part of the soil erosion processes is described, namely aggregate detachment and aggregate transport by flowing water. The concept is highly simplified; for example, many parameters are temporally invariant. Nevertheless, the main problem is that our existing measurements and experiments are not geared to provide stochastic parameters (e.g., as probability density functions); in the best case they deliver a statistical validation of the mean values. Again, we get effective parameters, spatially and temporally averaged. There is an urgent need for laboratory and field experiments on overland flow structure, raindrop effects, and erosion rate, which deliver information on the spatial and temporal structure of soil and surface properties and processes.
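As a toy illustration of what "using parameter distributions directly" could mean in practice, the sketch below treats the critical shear stress as a random variable rather than a single calibrated constant and estimates, by Monte Carlo sampling, the probability that a given flow event detaches soil. The lognormal choice and all numerical values are purely illustrative.

```python
import numpy as np

def erosion_probability(applied_shear, tau_crit_mean=2.5, tau_crit_cv=0.4,
                        n_samples=100_000, seed=0):
    """Probability that the applied shear stress (Pa) exceeds a lognormal critical shear stress."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1.0 + tau_crit_cv ** 2))
    mu = np.log(tau_crit_mean) - 0.5 * sigma ** 2   # chosen so the mean equals tau_crit_mean
    tau_crit = rng.lognormal(mean=mu, sigma=sigma, size=n_samples)
    return float(np.mean(applied_shear > tau_crit))

print(erosion_probability(applied_shear=3.0))   # exceedance probability instead of a yes/no threshold
```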

  4. Compressive cryotherapy versus ice-a prospective, randomized study on postoperative pain in patients undergoing arthroscopic rotator cuff repair or subacromial decompression.

    PubMed

    Kraeutler, Matthew J; Reynolds, Kirk A; Long, Cyndi; McCarty, Eric C

    2015-06-01

    The purpose of this study was to compare the effect of compressive cryotherapy (CC) vs. ice on postoperative pain in patients undergoing shoulder arthroscopy for rotator cuff repair or subacromial decompression. A commercial device was used for postoperative CC. A standard ice wrap (IW) was used for postoperative cryotherapy alone. Patients scheduled for rotator cuff repair or subacromial decompression were consented and randomized to 1 of 2 groups; patients were randomized to use either CC or a standard IW for the first postoperative week. All patients were asked to complete a "diary" each day, which included visual analog scale scores based on average daily pain and worst daily pain as well as total pain medication usage. Pain medications were then converted to a morphine equivalent dosage. Forty-six patients completed the study and were available for analysis; 25 patients were randomized to CC and 21 patients were randomized to standard IW. No significant differences were found in average pain, worst pain, or morphine equivalent dosage on any day. There does not appear to be a significant benefit to use of CC over standard IW in patients undergoing shoulder arthroscopy for rotator cuff repair or subacromial decompression. Further study is needed to determine if CC devices are a cost-effective option for postoperative pain management in this population of patients. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  5. Averaging of random walks and shift-invariant measures on a Hilbert space

    NASA Astrophysics Data System (ADS)

    Sakbaev, V. Zh.

    2017-06-01

    We study random walks in a Hilbert space H and use them to represent solutions of the Cauchy problem for differential equations whose initial conditions are numerical functions on H. We construct a finitely additive analogue of the Lebesgue measure: a nonnegative finitely additive measure λ that is defined on a minimal subset ring of an infinite-dimensional Hilbert space H containing all infinite-dimensional rectangles with absolutely converging products of the side lengths and is invariant under shifts and rotations in H. We define the Hilbert space of equivalence classes of complex-valued functions on H that are square integrable with respect to the shift-invariant measure λ. Using averaging of the shift operator in this function space over random vectors in H with a distribution given by a one-parameter semigroup (with respect to convolution) of Gaussian measures on H, we define a one-parameter semigroup of contracting self-adjoint transformations on this space, whose generator is called the diffusion operator. We obtain a representation of solutions of the Cauchy problem for the Schrödinger equation whose Hamiltonian is the diffusion operator.

  6. On the conservative nature of intragenic recombination

    PubMed Central

    Drummond, D. Allan; Silberg, Jonathan J.; Meyer, Michelle M.; Wilke, Claus O.; Arnold, Frances H.

    2005-01-01

    Intragenic recombination rapidly creates protein sequence diversity compared with random mutation, but little is known about the relative effects of recombination and mutation on protein function. Here, we compare recombination of the distantly related β-lactamases PSE-4 and TEM-1 to mutation of PSE-4. We show that, among β-lactamase variants containing the same number of amino acid substitutions, variants created by recombination retain function with a significantly higher probability than those generated by random mutagenesis. We present a simple model that accurately captures the differing effects of mutation and recombination in real and simulated proteins with only four parameters: (i) the amino acid sequence distance between parents, (ii) the number of substitutions, (iii) the average probability that random substitutions will preserve function, and (iv) the average probability that substitutions generated by recombination will preserve function. Our results expose a fundamental functional enrichment in regions of protein sequence space accessible by recombination and provide a framework for evaluating whether the relative rates of mutation and recombination observed in nature reflect the underlying imbalance in their effects on protein function. PMID:15809422
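A minimal numerical reading of the four-parameter comparison described above, assuming each substitution independently preserves function with probability p_mut (random mutagenesis) or p_rec (recombination); the probabilities used below are placeholders, not the fitted values from the paper.

```python
def prob_functional(n_subs, p_keep):
    """Probability a variant retains function if each of n_subs substitutions
    independently preserves function with probability p_keep."""
    return p_keep ** n_subs

p_mut, p_rec = 0.7, 0.9   # illustrative per-substitution retention probabilities
for m in (5, 10, 20):
    print(m, prob_functional(m, p_mut), prob_functional(m, p_rec))
```

Because the retention probability decays geometrically in the number of substitutions, even a modest per-substitution advantage for recombination-generated changes compounds into a large functional enrichment at high mutational loads, which is the qualitative effect the abstract reports.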

  7. On the conservative nature of intragenic recombination.

    PubMed

    Drummond, D Allan; Silberg, Jonathan J; Meyer, Michelle M; Wilke, Claus O; Arnold, Frances H

    2005-04-12

    Intragenic recombination rapidly creates protein sequence diversity compared with random mutation, but little is known about the relative effects of recombination and mutation on protein function. Here, we compare recombination of the distantly related beta-lactamases PSE-4 and TEM-1 to mutation of PSE-4. We show that, among beta-lactamase variants containing the same number of amino acid substitutions, variants created by recombination retain function with a significantly higher probability than those generated by random mutagenesis. We present a simple model that accurately captures the differing effects of mutation and recombination in real and simulated proteins with only four parameters: (i) the amino acid sequence distance between parents, (ii) the number of substitutions, (iii) the average probability that random substitutions will preserve function, and (iv) the average probability that substitutions generated by recombination will preserve function. Our results expose a fundamental functional enrichment in regions of protein sequence space accessible by recombination and provide a framework for evaluating whether the relative rates of mutation and recombination observed in nature reflect the underlying imbalance in their effects on protein function.

  8. Percolation Thresholds in Angular Grain media: Drude Directed Infiltration

    NASA Astrophysics Data System (ADS)

    Priour, Donald

    Pores in many realistic systems are not well delineated channels, but are void spaces among grains impermeable to charge or fluid flow which comprise the medium. Sparse grain concentrations lead to permeable systems, while concentrations in excess of a critical density block bulk fluid flow. We calculate percolation thresholds in porous materials made up of randomly placed (and oriented) disks, tetrahedrons, and cubes. To determine if randomly generated finite system samples are permeable, we deploy virtual tracer particles which are scattered (e.g. specularly) by collisions with impenetrable angular grains. We hasten the rate of exploration (which would otherwise scale as n_coll^(1/2), where n_coll is the number of collisions with grains, if the tracers followed linear trajectories) by considering the tracer particles to be charged in conjunction with a randomly directed uniform electric field. As in the Drude treatment, where a succession of many scattering events leads to a constant drift velocity, tracer displacements on average grow linearly in n_coll. By averaging over many disorder realizations for a variety of systems sizes, we calculate the percolation threshold and critical exponent which characterize the phase transition.
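The Drude-style argument can be checked with a few lines: if the direction is randomized at every collision while the field adds a fixed drift per free flight, the mean distance traveled grows linearly in the number of collisions. The sketch below is only a schematic of that estimator in free space, not the full grain-geometry simulation; walker count, flight length, and drift are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_walkers, n_coll, drift = 2000, 500, 0.05
ang = rng.uniform(0.0, 2.0 * np.pi, size=(n_walkers, n_coll))
flights = np.stack([np.cos(ang), np.sin(ang)], axis=-1)   # unit-length free flights, random direction
flights[..., 0] += drift                                   # field-induced bias along x per flight
positions = np.cumsum(flights, axis=1)                      # position after each collision
mean_dist = np.linalg.norm(positions, axis=-1).mean(axis=0)
# mean_dist[k] grows roughly linearly in k for drift > 0, but only like sqrt(k) for drift = 0.
print(mean_dist[[99, 199, 499]])
```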

  9. Fluctuation Dynamics of Exchange Rates on Indian Financial Market

    NASA Astrophysics Data System (ADS)

    Sarkar, A.; Barat, P.

    Here we investigate the scaling behavior and the complexity of the average daily exchange rate returns of the Indian Rupee against four foreign currencies, namely the US Dollar, Euro, Great Britain Pound, and Japanese Yen. Our analysis revealed that the average daily exchange rate return of the Indian Rupee against the US Dollar exhibits persistent scaling behavior and follows a Lévy stable distribution. In contrast, the average daily exchange rate returns of the other three foreign currencies show randomness and follow a Gaussian distribution. Moreover, it is seen that the complexity of the average daily exchange rate return of the Indian Rupee against the US Dollar is less than that of the other three exchange rate returns.

  10. Modeling change in potential landscape vulnerability to forest insect and pathogen disturbances: methods for forested subwatersheds sampled in the midscale interior Columbia River basin assessment.

    Treesearch

    Paul F. Hessburg; Bradley G. Smith; Craig A. Miller; Scott D. Kreiter; R. Brion Salter

    1999-01-01

    In the interior Columbia River basin midscale ecological assessment, including portions of the Klamath and Great Basins, we mapped and characterized historical and current vegetation composition and structure of 337 randomly sampled subwatersheds (9500 ha average size) in 43 subbasins (404 000 ha average size). We compared landscape patterns, vegetation structure and...

  11. Agricultural Workers in Central California. Volume 1: In 1989; Volume 2: Phase II, 1990-91. California Agricultural Studies, 90-8 and 91-5.

    ERIC Educational Resources Information Center

    Alvarado, Andrew J.; And Others

    Two surveys developed profiles of seasonal agricultural workers and their working conditions in central California. In 1989, a random sample of 347 seasonal workers was interviewed. The sample was 30 percent female and 87 percent Mexican-born. Average age was 35 years and average educational attainment was 5.9 years. Most had parents, spouses, or…

  12. Stochastic transfer of polarized radiation in finite cloudy atmospheric media with reflective boundaries

    NASA Astrophysics Data System (ADS)

    Sallah, M.

    2014-03-01

    The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution in order to exclude possible negative values of the optical variable. The Pomraning-Eddington approximation is first used to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specularly reflecting boundaries and an angular-dependent external flux incident upon the medium from one side, with no flux from the other side. For the sake of comparison, two different forms of the weight function, which is introduced to force the boundary conditions to be fulfilled, are used. Numerical results for the average reflectivity and average transmissivity are obtained for both Gaussian and modified Gaussian probability density functions at different degrees of polarization.
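The effect of Gaussian ensemble averaging on attenuation can be written in closed form: if the optical depth τ is Gaussian with mean ⟨τ⟩ and variance σ², the moment-generating function of the Gaussian gives

```latex
\langle e^{-\tau} \rangle
  = \exp\!\left( -\langle \tau \rangle + \tfrac{1}{2}\sigma^{2} \right)
  \;>\; e^{-\langle \tau \rangle},
```

so the ensemble-averaged transmissivity always exceeds its deterministic counterpart. This is the standard textbook result, not a derivation from the paper; the modified (truncated) Gaussian mentioned in the abstract is introduced precisely because the untruncated distribution assigns probability to unphysical negative optical depths.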

  13. The Feasibility of Hypnotic Analgesia in Ameliorating Pain and Anxiety Among Adults Undergoing Needle Electromyography

    PubMed Central

    Slack, David; Nelson, Lonnie; Patterson, David; Burns, Stephen; Hakimi, Kevin; Robinson, Lawrence

    2017-01-01

    Objective Our hypothesis was that hypnotic analgesia reduces pain and anxiety during electromyography [EMG]. Design Prospective randomized controlled clinical trial at outpatient electrodiagnostic clinics in teaching hospitals. Just prior to EMG, 26 subjects were randomized to one of three 20 minute audio programs: (EDU) education about EMG (n=8); (HYP-C) hypnotic induction without analgesic suggestion (n=10); or (HYP-ANLG) hypnotic induction with analgesic suggestion (n=8). The blinded electromyographer provided a post-hypnotic suggestion at the start of EMG. After EMG, subjects rated worst and average pain, and anxiety using visual analog scales. Results Mean values for the EDU, HYP-C and HYP-ANLG groups were not significantly different (mean ± sd): worst pain 67 ± 25, 42 ± 18, 49 ± 30; average pain 35 ± 26, 27 ± 14, 25 ± 22; anxiety 44 ± 41, 42 ± 23, 22 ± 24. When hypnosis groups were merged [n=18] and compared with the EDU condition [n=8], average and worst pain and anxiety were less for the hypnosis group than EDU, but this was statistically significant only for worst pain [hypnosis - 46 ± 24 vs. EDU - 67 ± 35, p=0.049] with a 31% average reduction. Conclusions A short hypnotic induction appears to reduce worst pain during EMG. PMID:18971768

  14. Red-shouldered hawk nesting habitat preference in south Texas

    USGS Publications Warehouse

    Strobel, Bradley N.; Boal, Clint W.

    2010-01-01

    We examined nesting habitat preference by red-shouldered hawks Buteo lineatus using conditional logistic regression on characteristics measured at 27 occupied nest sites and 68 unused sites in 2005–2009 in south Texas. We measured vegetation characteristics of individual trees (nest trees and unused trees) and corresponding 0.04-ha plots. We evaluated the importance of tree and plot characteristics to nesting habitat selection by comparing a priori tree-specific and plot-specific models using Akaike's information criterion. Models with only plot variables carried 14% more weight than models with only center tree variables. The model-averaged odds ratios indicated red-shouldered hawks selected to nest in taller trees and in areas with higher average diameter at breast height than randomly available within the forest stand. Relative to randomly selected areas, each 1-m increase in nest tree height and 1-cm increase in the plot average diameter at breast height increased the probability of selection by 85% and 10%, respectively. Our results indicate that red-shouldered hawks select nesting habitat based on vegetation characteristics of individual trees as well as the 0.04-ha area surrounding the tree. Our results indicate forest management practices resulting in tall forest stands with large average diameter at breast height would benefit red-shouldered hawks in south Texas.

  15. Scaling Limit of Symmetric Random Walk in High-Contrast Periodic Environment

    NASA Astrophysics Data System (ADS)

    Piatnitski, A.; Zhizhina, E.

    2017-11-01

    The paper deals with the asymptotic properties of a symmetric random walk in a high contrast periodic medium in Z^d, d≥1. From the existing homogenization results it follows that under diffusive scaling the limit behaviour of this random walk need not be Markovian. The goal of this work is to show that if in addition to the coordinate of the random walk in Z^d we introduce an extra variable that characterizes the position of the random walk inside the period then the limit dynamics of this two-component process is Markov. We describe the limit process and observe that the components of the limit process are coupled. We also prove the convergence in the path space for the said random walk.

  16. Random walk of passive tracers among randomly moving obstacles.

    PubMed

    Gori, Matteo; Donato, Irene; Floriani, Elena; Nardecchia, Ilaria; Pettini, Marco

    2016-04-14

    This study is mainly motivated by the need of understanding how the diffusion behavior of a biomolecule (or even of a larger object) is affected by other moving macromolecules, organelles, and so on, inside a living cell, whence the possibility of understanding whether or not a randomly walking biomolecule is also subject to a long-range force field driving it to its target. By means of the Continuous Time Random Walk (CTRW) technique the topic of random walk in random environment is here considered in the case of a passively diffusing particle among randomly moving and interacting obstacles. The relevant physical quantity which is worked out is the diffusion coefficient of the passive tracer which is computed as a function of the average inter-obstacles distance. The results reported here suggest that if a biomolecule, let us call it a test molecule, moves towards its target in the presence of other independently interacting molecules, its motion can be considerably slowed down.
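Whatever the obstacle model, the reported quantity is the tracer diffusion coefficient, which is typically extracted from the mean-square displacement as D ≈ MSD(t) / (2 d t) in d dimensions. The sketch below is a generic estimator for recorded 2D trajectories (array layout assumed), not the CTRW calculation of the paper.

```python
import numpy as np

def diffusion_coefficient(traj, dt, dim=2):
    """Estimate D from trajectories; traj has shape (n_tracers, n_steps, dim), sampled every dt."""
    disp = traj - traj[:, :1, :]                      # displacement from each starting point
    msd = (disp ** 2).sum(axis=-1).mean(axis=0)       # mean-square displacement over tracers
    t = dt * np.arange(traj.shape[1])
    slope = np.polyfit(t[1:], msd[1:], 1)[0]          # linear fit to MSD(t), skipping t = 0
    return slope / (2 * dim)
```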

  17. Surface morphology and grain analysis of successively industrially grown amorphous hydrogenated carbon films (a-C:H) on silicon

    NASA Astrophysics Data System (ADS)

    Catena, Alberto; McJunkin, Thomas; Agnello, Simonpietro; Gelardi, Franco M.; Wehner, Stefan; Fischer, Christian B.

    2015-08-01

    Silicon (1 0 0) has been gradually covered by amorphous hydrogenated carbon (a-C:H) films via an industrial process. Two types of these diamond-like carbon (DLC) coatings, one more flexible (f-DLC) and one more robust (r-DLC), have been investigated. Both types have been grown by a radio frequency plasma-enhanced chemical vapor deposition (RF-PECVD) technique with acetylene plasma. Surface morphologies have been studied in detail by atomic force microscopy (AFM) and Raman spectroscopy has been used to investigate the DLC structure. Both types appeared to have very similar morphology and sp2 carbon arrangement. The average height and area for single grains have been analyzed for all depositions. A random distribution of grain heights was found for both types. The individual grain structures between the f- and r-type revealed differences: the shape for the f-DLC grains is steeper than for the r-DLC grains. By correlating the average grain heights to the average grain areas for all depositions a limited region is identified, suggesting a certain regularity during the DLC deposition mechanisms that confines both values. A growth of the sp2 carbon entities for high r-DLC depositions is revealed and connected to a structural rearrangement of carbon atom hybridizations and hydrogen content in the DLC structure.

  18. Approximating natural connectivity of scale-free networks based on largest eigenvalue

    NASA Astrophysics Data System (ADS)

    Tan, S.-Y.; Wu, J.; Li, M.-J.; Lu, X.

    2016-06-01

    It has been recently proposed that natural connectivity can be used to efficiently characterize the robustness of complex networks. The natural connectivity has an intuitive physical meaning and a simple mathematical formulation, which corresponds to an average eigenvalue calculated from the graph spectrum. However, for the scale-free network, a model close to many widely occurring real-world systems, the spectrum is difficult to obtain analytically. In this article, we investigate the approximation of natural connectivity based on the largest eigenvalue in both random and correlated scale-free networks. It is demonstrated that the natural connectivity of scale-free networks can be dominated by the largest eigenvalue, which can be expressed asymptotically and analytically to approximate natural connectivity with small errors. Then we show that the natural connectivity of random scale-free networks increases linearly with the average degree given the scaling exponent and decreases monotonically with the scaling exponent given the average degree. Moreover, it is found that, given the degree distribution, the more assortative a scale-free network is, the more robust it is. Experiments in real networks validate our methods and results.
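Using the standard definition of natural connectivity, λ̄ = ln((1/N) Σ_i e^{λ_i}) over the adjacency eigenvalues, the largest-eigenvalue approximation discussed above reads λ̄ ≈ λ₁ − ln N. A quick numerical check (the Barabási–Albert graph and its parameters are illustrative, not the correlated ensembles of the paper):

```python
import numpy as np
import networkx as nx

def natural_connectivity(g):
    """Natural connectivity: log of the average of exp(eigenvalue) over the adjacency spectrum."""
    lam = np.linalg.eigvalsh(nx.to_numpy_array(g))
    # log-sum-exp trick for numerical stability
    return np.log(np.mean(np.exp(lam - lam.max()))) + lam.max()

g = nx.barabasi_albert_graph(n=1000, m=3, seed=1)
lam1 = np.linalg.eigvalsh(nx.to_numpy_array(g)).max()
print(natural_connectivity(g), lam1 - np.log(g.number_of_nodes()))   # exact vs. largest-eigenvalue approximation
```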

  19. Apparatus and Method for Compensating for Process, Voltage, and Temperature Variation of the Time Delay of a Digital Delay Line

    NASA Technical Reports Server (NTRS)

    Seefeldt, James (Inventor); Feng, Xiaoxin (Inventor); Roper, Weston (Inventor)

    2013-01-01

    A process, voltage, and temperature (PVT) compensation circuit and a method of continuously generating a delay measure are provided. The compensation circuit includes two delay lines, each delay line providing a delay output. The two delay lines may each include a number of delay elements, which in turn may include one or more current-starved inverters. The number of delay lines may differ between the two delay lines. The delay outputs are provided to a combining circuit that determines an offset pulse based on the two delay outputs and then averages the voltage of the offset pulse to determine a delay measure. The delay measure may be one or more currents or voltages indicating an amount of PVT compensation to apply to input or output signals of an application circuit, such as a memory-bus driver, dynamic random access memory (DRAM), a synchronous DRAM, a processor or other clocked circuit.

  20. A Metric on Phylogenetic Tree Shapes

    PubMed Central

    Plazzotta, G.

    2018-01-01

    The shapes of evolutionary trees are influenced by the nature of the evolutionary process but comparisons of trees from different processes are hindered by the challenge of completely describing tree shape. We present a full characterization of the shapes of rooted branching trees in a form that lends itself to natural tree comparisons. We use this characterization to define a metric, in the sense of a true distance function, on tree shapes. The metric distinguishes trees from random models known to produce different tree shapes. It separates trees derived from tropical versus USA influenza A sequences, which reflect the differing epidemiology of tropical and seasonal flu. We describe several metrics based on the same core characterization, and illustrate how to extend the metric to incorporate trees’ branch lengths or other features such as overall imbalance. Our approach allows us to construct addition and multiplication on trees, and to create a convex metric on tree shapes which formally allows computation of average tree shapes. PMID:28472435
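One way to realize such a complete characterization is a recursive bijective labelling of rooted binary tree shapes: leaves receive label 1, and an internal node whose two subtrees carry labels k ≥ j receives label k(k−1)/2 + j + 1. The sketch below shows that recursion; the paper's exact construction, and the metrics built on top of it, may differ in details.

```python
def shape_label(node):
    """Recursive label of a rooted binary tree shape.

    node is either None (a leaf) or a pair (left, right) of sub-trees.
    """
    if node is None:
        return 1
    k, j = sorted((shape_label(node[0]), shape_label(node[1])), reverse=True)
    return k * (k - 1) // 2 + j + 1

cherry = (None, None)   # two-leaf tree
print(shape_label(cherry), shape_label((cherry, None)), shape_label((cherry, cherry)))  # 2 3 4
```

Because every distinct shape receives a distinct label, distances between shapes can then be built from these labels (or from vectors of them over subtrees), which is the kind of comparison the abstract describes.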
