Sample records for random pattern model

  1. Modeling pattern in collections of parameters

    USGS Publications Warehouse

    Link, W.A.

    1999-01-01

    Wildlife management is increasingly guided by analyses of large and complex datasets. The description of such datasets often requires a large number of parameters, among which certain patterns might be discernible. For example, one may consider a long-term study producing estimates of annual survival rates; of interest is the question whether these rates have declined through time. Several statistical methods exist for examining pattern in collections of parameters. Here, I argue for the superiority of 'random effects models' in which parameters are regarded as random variables, with distributions governed by 'hyperparameters' describing the patterns of interest. Unfortunately, implementation of random effects models is sometimes difficult. Ultrastructural models, in which the postulated pattern is built into the parameter structure of the original data analysis, are approximations to random effects models. However, this approximation is not completely satisfactory: failure to account for natural variation among parameters can lead to overstatement of the evidence for pattern among parameters. I describe quasi-likelihood methods that can be used to improve the approximation of random effects models by ultrastructural models.
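    A minimal sketch of the random-effects formulation described above, in hypothetical notation (phi_t for annual survival, theta_t for its logit, and beta_0, beta_1, sigma for the hyperparameters): the parameters are treated as random variables whose distribution encodes the pattern of interest, here a possible decline through time.

      % Hedged sketch: annual survival rates as random effects governed by hyperparameters.
      % A decline through time corresponds to \beta_1 < 0, while \sigma^2 captures the natural
      % variation among years that an ultrastructural model would ignore.
      \operatorname{logit}(\phi_t) = \theta_t, \qquad
      \theta_t \sim \mathcal{N}\!\left(\beta_0 + \beta_1 t,\; \sigma^2\right), \qquad t = 1, \dots, T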

  2. Movement patterns of Tenebrio beetles demonstrate empirically that correlated-random-walks have similitude with a Lévy walk.

    PubMed

    Reynolds, Andy M; Leprêtre, Lisa; Bohan, David A

    2013-11-07

    Correlated random walks are the dominant conceptual framework for modelling and interpreting organism movement patterns. Recent years have witnessed a stream of high profile publications reporting that many organisms perform Lévy walks; movement patterns that seemingly stand apart from the correlated random walk paradigm because they are discrete and scale-free rather than continuous and scale-finite. Our new study of the movement patterns of Tenebrio molitor beetles in unchanging, featureless arenas provides the first empirical support for a remarkable and deep theoretical synthesis that unites correlated random walks and Lévy walks. It demonstrates that the two models are complementary rather than competing descriptions of movement pattern data and shows that correlated random walks are a part of the Lévy walk family. It follows from this that vast numbers of Lévy walkers could be hiding in plain sight.

  3. Effects of ignition location models on the burn patterns of simulated wildfires

    USGS Publications Warehouse

    Bar-Massada, A.; Syphard, A.D.; Hawbaker, T.J.; Stewart, S.I.; Radeloff, V.C.

    2011-01-01

    Fire simulation studies that use models such as FARSITE often assume that ignition locations are distributed randomly, because spatially explicit information about actual ignition locations is difficult to obtain. However, many studies show that the spatial distribution of ignition locations, whether human-caused or natural, is non-random. Thus, predictions from fire simulations based on random ignitions may be unrealistic. However, the extent to which the assumption of ignition location affects the predictions of fire simulation models has never been systematically explored. Our goal was to assess the difference in fire simulations that are based on random versus non-random ignition location patterns. We conducted four sets of 6000 FARSITE simulations for the Santa Monica Mountains in California to quantify the influence of random and non-random ignition locations and normal and extreme weather conditions on fire size distributions and spatial patterns of burn probability. Under extreme weather conditions, fires were significantly larger for non-random ignitions compared to random ignitions (mean area of 344.5 ha and 230.1 ha, respectively), but burn probability maps were highly correlated (r = 0.83). Under normal weather, random ignitions produced significantly larger fires than non-random ignitions (17.5 ha and 13.3 ha, respectively), and the spatial correlations between burn probability maps were not high (r = 0.54), though the difference in the average burn probability was small. The results of the study suggest that the location of ignitions used in fire simulation models may substantially influence the spatial predictions of fire spread patterns. However, the spatial bias introduced by using a random ignition location model may be minimized if the fire simulations are conducted under extreme weather conditions when fire spread is greatest. © 2010 Elsevier Ltd.

  4. A pattern-mixture model approach for handling missing continuous outcome data in longitudinal cluster randomized trials.

    PubMed

    Fiero, Mallorie H; Hsu, Chiu-Hsieh; Bell, Melanie L

    2017-11-20

    We extend the pattern-mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms, rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern-mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which increases or decreases imputed values. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial. Copyright © 2017 John Wiley & Sons, Ltd.
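    A hedged sketch of the sensitivity-analysis step described above (the data set, column names, and the simple random-intercept model below are hypothetical, not the authors' code): imputed outcome values are multiplied by a sensitivity parameter k and the treatment effect is re-estimated for a range of k values until the inference changes.

      # Hedged sketch of a pattern-mixture sensitivity analysis: imputed outcomes are
      # multiplied by a sensitivity parameter k and the treatment effect is refit for
      # each k. Data, column names, and the random-intercept model are hypothetical.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      def treatment_effect_under_k(df, imputed_mask, k):
          """Scale imputed outcomes by k, fit a mixed model with a random intercept
          per cluster, and return the estimated treatment effect and its p-value."""
          adjusted = df.copy()
          adjusted.loc[imputed_mask, "outcome"] *= k
          fit = smf.mixedlm("outcome ~ treatment", adjusted, groups=adjusted["cluster"]).fit()
          return fit.params["treatment"], fit.pvalues["treatment"]

      rng = np.random.default_rng(0)
      n, n_clusters = 400, 20
      cluster = rng.integers(0, n_clusters, n)
      treatment = rng.integers(0, 2, n_clusters)[cluster]      # treatment assigned by cluster
      outcome = 1.0 * treatment + rng.normal(0, 1, n)
      df = pd.DataFrame({"cluster": cluster, "treatment": treatment, "outcome": outcome})
      imputed_mask = rng.random(n) < 0.2                       # pretend 20% of outcomes were imputed

      for k in [1.0, 1.5, 2.0, 3.0]:                           # increase k until inference changes
          est, p = treatment_effect_under_k(df, imputed_mask, k)
          print(f"k = {k:.1f}: treatment effect = {est:.3f}, p = {p:.3f}")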

  5. The Common Patterns of Nature

    PubMed Central

    Frank, Steven A.

    2010-01-01

    We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions, and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased or random stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background by which we can understand why these neutral generative models are so successful. I show where the classic patterns come from, such as the Poisson pattern, the normal or Gaussian pattern, and many others. Each classic pattern was often discovered by a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present a simple and consistent informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern. PMID:19538344
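    The informational constraints named in the abstract correspond to standard maximum-entropy results; the summary below restates them in generic notation (not the paper's own).

      % Maximum-entropy distributions under the constraints named above:
      %   fixing the mean yields the exponential pattern, fixing mean and variance the
      %   Gaussian pattern, and fixing the geometric mean (mean of log x) the power law.
      \text{fix } \langle x \rangle:\; p(x) \propto e^{-\lambda x}, \qquad
      \text{fix } \langle x \rangle, \langle x^{2} \rangle:\; p(x) \propto e^{-\lambda_1 x - \lambda_2 x^{2}}, \qquad
      \text{fix } \langle \ln x \rangle:\; p(x) \propto x^{-\lambda}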

  6. Adapted random sampling patterns for accelerated MRI.

    PubMed

    Knoll, Florian; Clason, Christian; Diwoky, Clemens; Stollberger, Rudolf

    2011-02-01

    Variable density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions like polynomials of different order to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters which is very time consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns by making use of power spectra of existing reference data sets and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and the generalization potential is tested by using a range of reference images. Quantitative evaluation is performed for the downsampling experiments using RMS differences to the original, fully sampled data set. Our results demonstrate that the image quality of the method presented in this paper is comparable to that of an established model-based strategy when optimization of the model parameter is carried out and yields superior results to non-optimized model parameters. However, no random sampling pattern showed superior performance when compared to conventional Cartesian subsampling for the considered reconstruction strategy.
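    A hedged sketch of the general idea (an illustration, not the authors' implementation): the power spectrum of a reference data set is normalized into a probability density over phase-encoding lines, and the sampling pattern is drawn from it with no model parameters to tune.

      # Hedged sketch: derive a variable-density random sampling pattern from the
      # power spectrum of a reference image, rather than from a polynomial density model.
      # Function and variable names are illustrative, not from the paper.
      import numpy as np

      def sampling_pattern_from_reference(ref_image, n_lines):
          """Pick phase-encoding lines with probability proportional to the
          reference image's 1D power spectrum along the phase-encode axis."""
          kspace = np.fft.fftshift(np.fft.fft2(ref_image))
          power = np.abs(kspace) ** 2
          line_power = power.sum(axis=1)                 # energy per phase-encode line
          prob = line_power / line_power.sum()
          rng = np.random.default_rng(42)
          chosen = rng.choice(len(prob), size=n_lines, replace=False, p=prob)
          mask = np.zeros(ref_image.shape[0], dtype=bool)
          mask[chosen] = True
          return mask

      # Example with a synthetic reference image (256x256), keeping 25% of the lines.
      ref = np.outer(np.hanning(256), np.hanning(256))
      mask = sampling_pattern_from_reference(ref, n_lines=64)
      print(f"acceleration factor ~ {256 / mask.sum():.1f}x")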

  7. A New In Vitro Co-Culture Model Using Magnetic Force-Based Nanotechnology.

    PubMed

    Takanari, Hiroki; Miwa, Keiko; Fu, XianMing; Nakai, Junichi; Ito, Akira; Ino, Kousuke; Honda, Hiroyuki; Tonomura, Wataru; Konishi, Satoshi; Opthof, Tobias; van der Heyden, Marcel Ag; Kodama, Itsuo; Lee, Jong-Kook

    2016-10-01

    Skeletal myoblast (SkMB) transplantation has been conducted as a therapeutic strategy for severe heart failure. However, arrhythmogenicity following transplantation remains unsolved. We developed an in vitro model of myoblast transplantation with "patterned" or "randomly-mixed" co-culture of SkMBs and cardiomyocytes enabling subsequent electrophysiological and arrhythmogenic evaluation. SkMBs were magnetically labeled with magnetite nanoparticles and co-cultured with neonatal rat ventricular myocytes (NRVMs) on multi-electrode arrays. SkMBs were patterned by a magnet beneath the arrays. Excitation synchronicity was evaluated by Ca(2+) imaging using a gene-encoded Ca(2+) indicator, G-CaMP2. In the monoculture of NRVMs (control), conduction was well-organized. In the randomly-mixed co-culture of NRVMs and SkMBs (random group), there was inhomogeneous conduction from multiple origins. In the "patterned" co-culture, where an en bloc SkMB-layer was inserted into the NRVM-layer, excitation propagated homogeneously although conduction was distorted by the SkMB-area. The 4-mm distance conduction time (CT) in the random group was significantly longer (197 ± 126 ms) than in control (17 ± 3 ms). In the patterned group, CT through the NRVM-area did not change (25 ± 3 ms), although CT through the SkMB-area was significantly longer (132 ± 77 ms). The intervals between spontaneous excitations varied beat-to-beat in the random group, while regular beating was recorded in the control and patterned groups. Synchronized Ca(2+) transients of NRVMs were observed in the patterned group, whereas those in the random group were asynchronous. Patterned alignment of SkMBs is feasible with magnetic nanoparticles. Using the novel in vitro model mimicking cell transplantation, it may become possible to predict arrhythmogenicity due to heterogeneous cell transplantation. J. Cell. Physiol. 231: 2249-2256, 2016. © 2016 Wiley Periodicals, Inc.

  8. Statistical model for speckle pattern optimization.

    PubMed

    Su, Yong; Zhang, Qingchuan; Gao, Zeren

    2017-11-27

    Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas of the optimal speckle radius are presented. Although the primary motivation is from the field of DIC, we believed that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.
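    A hedged sketch of the filtered-Poisson view of speckle pattern generation (image size, speckle count, and radius below are illustrative, not the paper's optimized values): a Poisson-distributed number of speckles is placed at uniformly random positions, each contributing a Gaussian-shaped spot.

      # Hedged sketch: speckle pattern generation as a filtered Poisson process.
      # Speckle count is Poisson, positions are uniform, and each speckle adds a
      # Gaussian intensity profile of radius r. Parameter values are illustrative.
      import numpy as np

      def generate_speckle_pattern(size=256, mean_count=800, radius=3.0, seed=0):
          rng = np.random.default_rng(seed)
          n_speckles = rng.poisson(mean_count)
          xs = rng.uniform(0, size, n_speckles)
          ys = rng.uniform(0, size, n_speckles)
          yy, xx = np.mgrid[0:size, 0:size]
          image = np.zeros((size, size))
          for x0, y0 in zip(xs, ys):
              image += np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * radius ** 2))
          return np.clip(image, 0, 1)      # saturate overlapping speckles

      pattern = generate_speckle_pattern()
      print(pattern.shape, pattern.max())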

  9. Optimal random Lévy-loop searching: New insights into the searching behaviours of central-place foragers

    NASA Astrophysics Data System (ADS)

    Reynolds, A. M.

    2008-04-01

    A random Lévy-looping model of searching is devised and optimal random Lévy-looping searching strategies are identified for the location of a single target whose position is uncertain. An inverse-square power law distribution of loop lengths is shown to be optimal when the distance between the centre of the search and the target is much shorter than the size of the longest possible loop in the searching pattern. Optimal random Lévy-looping searching patterns have recently been observed in the flight patterns of honeybees (Apis mellifera) when attempting to locate their hive and when searching after a known food source becomes depleted. It is suggested that the searching patterns of desert ants (Cataglyphis) are consistent with the adoption of an optimal Lévy-looping searching strategy.

  10. Patterns of particle distribution in multiparticle systems by random walks with memory enhancement and decay

    NASA Astrophysics Data System (ADS)

    Tan, Zhi-Jie; Zou, Xian-Wu; Huang, Sheng-You; Zhang, Wei; Jin, Zhun-Zhi

    2002-07-01

    We investigate the pattern of particle distribution and its evolution with time in multiparticle systems using the model of random walks with memory enhancement and decay. This model describes some biological intelligent walks. With decrease in the memory decay exponent α, the distribution of particles changes from a random dispersive pattern to a locally dense one, and then returns to the random one. Correspondingly, the fractal dimension Df,p characterizing the distribution of particle positions increases from a low value to a maximum and then decreases to the low one again. This is determined by the degree of overlap of regions consisting of sites with remanent information. The second moment of the density ρ(2) was introduced to investigate the inhomogeneity of the particle distribution. The dependence of ρ(2) on α is similar to that of Df,p on α. ρ(2) increases with time as a power law in the process of adjusting the particle distribution, and then ρ(2) tends to a stable equilibrium value.

  11. Machine learning methods reveal the temporal pattern of dengue incidence using meteorological factors in metropolitan Manila, Philippines.

    PubMed

    Carvajal, Thaddeus M; Viacrusis, Katherine M; Hernandez, Lara Fides T; Ho, Howell T; Amalin, Divina M; Watanabe, Kozo

    2018-04-17

    Several studies have applied ecological factors such as meteorological variables to develop models and accurately predict the temporal pattern of dengue incidence or occurrence. However, the modeling approaches differ from study to study and each uses only a single statistical technique, which raises the question of which technique is robust and reliable. Hence, our study aims to compare the predictive accuracy of the temporal pattern of dengue incidence in Metropolitan Manila, as influenced by meteorological factors, across four modeling techniques: (a) General Additive Modeling, (b) Seasonal Autoregressive Integrated Moving Average with exogenous variables, (c) Random Forest and (d) Gradient Boosting. Dengue incidence and meteorological data (flood, precipitation, temperature, southern oscillation index, relative humidity, wind speed and direction) of Metropolitan Manila from January 1, 2009 to December 31, 2013 were obtained from the respective government agencies. Two types of datasets were used in the analysis: observed meteorological factors (MF) and their corresponding delayed or lagged effects (LG). These datasets were then subjected to the four modeling techniques, and the predictive accuracy and variable importance of each modeling technique were calculated and evaluated. Among the statistical modeling techniques, Random Forest showed the best predictive accuracy. Moreover, the delayed or lagged effects of the meteorological variables were shown to be the best dataset to use for this purpose. Thus, the Random Forest model with delayed meteorological effects (RF-LG) was deemed the best among all assessed models. Relative humidity was shown to be the most important meteorological factor in the best model. The study showed that different predictive outcomes are indeed generated by each statistical modeling technique, and further revealed the Random Forest model with delayed meteorological effects to be the best in predicting the temporal pattern of dengue incidence in Metropolitan Manila. It is also noteworthy that the study identified relative humidity, along with rainfall and temperature, as an important meteorological factor that can influence this temporal pattern.
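    A hedged sketch of the RF-LG idea described above (column names, lag length, weekly aggregation, and the synthetic data are assumptions): lagged copies of the meteorological variables are built and a random forest is fit to predict incidence.

      # Hedged sketch: random forest with delayed (lagged) meteorological effects.
      # Column names, the 4-week lag, and the train/test split are illustrative only.
      import numpy as np
      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(1)
      weeks = 260                                   # ~5 years of weekly records
      df = pd.DataFrame({
          "rainfall": rng.gamma(2.0, 20.0, weeks),
          "temperature": 28 + 2 * np.sin(np.arange(weeks) * 2 * np.pi / 52),
          "humidity": rng.uniform(60, 95, weeks),
      })
      df["dengue_cases"] = (0.5 * df["rainfall"].shift(4).fillna(0)
                            + 3 * df["humidity"] + rng.normal(0, 20, weeks))

      # Build lagged (delayed-effect) predictors, e.g. values 1-4 weeks earlier.
      for var in ["rainfall", "temperature", "humidity"]:
          for lag in range(1, 5):
              df[f"{var}_lag{lag}"] = df[var].shift(lag)
      df = df.dropna()

      X = df.drop(columns="dengue_cases")
      y = df["dengue_cases"]
      train, test = X.index < X.index[-52], X.index >= X.index[-52]
      rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X[train], y[train])
      print("R^2 on held-out year:", round(rf.score(X[test], y[test]), 2))
      print(pd.Series(rf.feature_importances_, index=X.columns).nlargest(3))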

  12. Random scalar fields and hyperuniformity

    NASA Astrophysics Data System (ADS)

    Ma, Zheng; Torquato, Salvatore

    2017-06-01

    Disordered many-particle hyperuniform systems are exotic amorphous states of matter that lie between crystals and liquids. Hyperuniform systems have attracted recent attention because they are endowed with novel transport and optical properties. Recently, the hyperuniformity concept has been generalized to characterize two-phase media, scalar fields, and random vector fields. In this paper, we devise methods to explicitly construct hyperuniform scalar fields. Specifically, we analyze spatial patterns generated from Gaussian random fields, which have been used to model the microwave background radiation and heterogeneous materials, the Cahn-Hilliard equation for spinodal decomposition, and Swift-Hohenberg equations that have been used to model emergent pattern formation, including Rayleigh-Bénard convection. We show that the Gaussian random scalar fields can be constructed to be hyperuniform. We also numerically study the time evolution of spinodal decomposition patterns and demonstrate that they are hyperuniform in the scaling regime. Moreover, we find that labyrinth-like patterns generated by the Swift-Hohenberg equation are effectively hyperuniform. We show that thresholding (level-cutting) a hyperuniform Gaussian random field to produce a two-phase random medium tends to destroy the hyperuniformity of the progenitor scalar field. We then propose guidelines to achieve effectively hyperuniform two-phase media derived from thresholded non-Gaussian fields. Our investigation paves the way for new research directions to characterize the large-structure spatial patterns that arise in physics, chemistry, biology, and ecology. Moreover, our theoretical results are expected to guide experimentalists to synthesize new classes of hyperuniform materials with novel physical properties via coarsening processes and using state-of-the-art techniques, such as stereolithography and 3D printing.
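    A hedged sketch of one way to construct a hyperuniform Gaussian random scalar field (the particular spectral envelope is an assumption made for illustration): white noise is filtered in Fourier space so that the spectral density vanishes as the wavenumber goes to zero.

      # Hedged sketch: construct a Gaussian random scalar field whose spectral
      # density vanishes as k -> 0, the defining property of hyperuniformity.
      # The spectral envelope S(k) ~ k^2 * exp(-(k/k0)^2) is illustrative.
      import numpy as np

      def hyperuniform_gaussian_field(n=256, k0=0.3, seed=0):
          rng = np.random.default_rng(seed)
          white = rng.normal(size=(n, n))
          kx = np.fft.fftfreq(n) * 2 * np.pi
          ky = np.fft.fftfreq(n) * 2 * np.pi
          k = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
          amplitude = k * np.exp(-0.5 * (k / k0) ** 2)    # sqrt of S(k) ~ k^2 e^{-(k/k0)^2}
          field_k = np.fft.fft2(white) * amplitude        # filter white noise in k-space
          return np.real(np.fft.ifft2(field_k))

      field = hyperuniform_gaussian_field()
      spectrum = np.abs(np.fft.fft2(field)) ** 2
      print("power at k = 0 (should be ~0):", spectrum[0, 0])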

  13. Bridging the gulf between correlated random walks and Lévy walks: autocorrelation as a source of Lévy walk movement patterns.

    PubMed

    Reynolds, Andy M

    2010-12-06

    For many years, the dominant conceptual framework for describing non-oriented animal movement patterns has been the correlated random walk (CRW) model in which an individual's trajectory through space is represented by a sequence of distinct, independent randomly oriented 'moves'. It has long been recognized that the transformation of an animal's continuous movement path into a broken line is necessarily arbitrary and that probability distributions of move lengths and turning angles are model artefacts. Continuous-time analogues of CRWs that overcome this inherent shortcoming have appeared in the literature and are gaining prominence. In these models, velocities evolve as a Markovian process and have exponential autocorrelation. Integration of the velocity process gives the position process. Here, through a simple scaling argument and through an exact analytical analysis, it is shown that autocorrelation inevitably leads to Lévy walk (LW) movement patterns on timescales less than the autocorrelation timescale. This is significant because over recent years there has been an accumulation of evidence from a variety of experimental and theoretical studies that many organisms have movement patterns that can be approximated by LWs, and there is now intense debate about the relative merits of CRWs and LWs as representations of non-orientated animal movement patterns.
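    A hedged sketch of the continuous-time picture described above (one-dimensional, with illustrative parameters): velocity evolves as an Ornstein-Uhlenbeck process, so it is Markovian with exponential autocorrelation, and position is its integral; displacement grows ballistically (Lévy-walk-like) on timescales below the autocorrelation time tau and diffusively above it.

      # Hedged sketch: continuous-time correlated random walk with an
      # Ornstein-Uhlenbeck velocity (exponential autocorrelation, timescale tau).
      # The mean squared displacement scales like t^2 for t << tau (ballistic,
      # Levy-like) and like t for t >> tau (ordinary diffusion). Parameters are illustrative.
      import numpy as np

      def ou_walk(n_steps=100_000, dt=0.01, tau=1.0, sigma=1.0, seed=0):
          rng = np.random.default_rng(seed)
          v = np.zeros(n_steps)
          for i in range(1, n_steps):
              v[i] = v[i - 1] - (v[i - 1] / tau) * dt + sigma * np.sqrt(dt) * rng.normal()
          return np.cumsum(v) * dt                       # position = integral of velocity

      dt, tau = 0.01, 1.0
      x = ou_walk(dt=dt, tau=tau)
      for lag in [10, 100, 1_000, 10_000]:               # tau/dt = 100 separates the regimes
          msd = np.mean((x[lag:] - x[:-lag]) ** 2)
          print(f"t = {lag * dt:7.2f}  MSD = {msd:10.3f}")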

  14. Introducing Perception and Modelling of Spatial Randomness in Classroom

    ERIC Educational Resources Information Center

    De Nóbrega, José Renato

    2017-01-01

    A strategy to facilitate understanding of spatial randomness is described, using student activities developed in sequence: looking at spatial patterns, simulating approximate spatial randomness using a grid of equally-likely squares, using binomial probabilities for approximations and predictions and then comparing with given Poisson…

  15. Large-area imaging reveals biologically driven non-random spatial patterns of corals at a remote reef

    NASA Astrophysics Data System (ADS)

    Edwards, Clinton B.; Eynaud, Yoan; Williams, Gareth J.; Pedersen, Nicole E.; Zgliczynski, Brian J.; Gleason, Arthur C. R.; Smith, Jennifer E.; Sandin, Stuart A.

    2017-12-01

    For sessile organisms such as reef-building corals, differences in the degree of dispersion of individuals across a landscape may result from important differences in life-history strategies or may reflect patterns of habitat availability. Descriptions of spatial patterns can thus be useful not only for the identification of key biological and physical mechanisms structuring an ecosystem, but also by providing the data necessary to generate and test ecological theory. Here, we used an in situ imaging technique to create large-area photomosaics of 16 plots at Palmyra Atoll, central Pacific, each covering 100 m2 of benthic habitat. We mapped the location of 44,008 coral colonies and identified each to the lowest taxonomic level possible. Using metrics of spatial dispersion, we tested for departures from spatial randomness. We also used targeted model fitting to explore candidate processes leading to differences in spatial patterns among taxa. Most taxa were clustered and the degree of clustering varied by taxon. A small number of taxa did not significantly depart from randomness and none revealed evidence of spatial uniformity. Importantly, taxa that readily fragment or tolerate stress through partial mortality were more clustered. With little exception, clustering patterns were consistent with models of fragmentation and dispersal limitation. In some taxa, dispersion was linearly related to abundance, suggesting density dependence of spatial patterning. The spatial patterns of stony corals are non-random and reflect fundamental life-history characteristics of the taxa, suggesting that the reef landscape may, in many cases, have important elements of spatial predictability.

  16. Randomly and Non-Randomly Missing Renal Function Data in the Strong Heart Study: A Comparison of Imputation Methods

    PubMed Central

    Shara, Nawar; Yassin, Sayf A.; Valaitis, Eduardas; Wang, Hong; Howard, Barbara V.; Wang, Wenyu; Lee, Elisa T.; Umans, Jason G.

    2015-01-01

    Kidney and cardiovascular disease are widespread among populations with high prevalence of diabetes, such as American Indians participating in the Strong Heart Study (SHS). Studying these conditions simultaneously in longitudinal studies is challenging, because the morbidity and mortality associated with these diseases result in missing data, and these data are likely not missing at random. When such data are merely excluded, study findings may be compromised. In this article, a subset of 2264 participants with complete renal function data from Strong Heart Exams 1 (1989–1991), 2 (1993–1995), and 3 (1998–1999) was used to examine the performance of five methods used to impute missing data: listwise deletion, mean of serial measures, adjacent value, multiple imputation, and pattern-mixture. Three missing at random models and one non-missing at random model were used to compare the performance of the imputation techniques on randomly and non-randomly missing data. The pattern-mixture method was found to perform best for imputing renal function data that were not missing at random. Determining whether data are missing at random or not can help in choosing the imputation method that will provide the most accurate results. PMID:26414328

  17. Randomly and Non-Randomly Missing Renal Function Data in the Strong Heart Study: A Comparison of Imputation Methods.

    PubMed

    Shara, Nawar; Yassin, Sayf A; Valaitis, Eduardas; Wang, Hong; Howard, Barbara V; Wang, Wenyu; Lee, Elisa T; Umans, Jason G

    2015-01-01

    Kidney and cardiovascular disease are widespread among populations with high prevalence of diabetes, such as American Indians participating in the Strong Heart Study (SHS). Studying these conditions simultaneously in longitudinal studies is challenging, because the morbidity and mortality associated with these diseases result in missing data, and these data are likely not missing at random. When such data are merely excluded, study findings may be compromised. In this article, a subset of 2264 participants with complete renal function data from Strong Heart Exams 1 (1989-1991), 2 (1993-1995), and 3 (1998-1999) was used to examine the performance of five methods used to impute missing data: listwise deletion, mean of serial measures, adjacent value, multiple imputation, and pattern-mixture. Three missing at random models and one non-missing at random model were used to compare the performance of the imputation techniques on randomly and non-randomly missing data. The pattern-mixture method was found to perform best for imputing renal function data that were not missing at random. Determining whether data are missing at random or not can help in choosing the imputation method that will provide the most accurate results.

  18. Demonstration of Numerical Equivalence of Ensemble and Spectral Averaging in Electromagnetic Scattering by Random Particulate Media

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Zakharova, Nadezhda T.

    2016-01-01

    The numerically exact superposition T-matrix method is used to model far-field electromagnetic scattering by two types of particulate object. Object 1 is a fixed configuration which consists of N identical spherical particles (with N = 200 or 400) quasi-randomly populating a spherical volume V having a median size parameter of 50. Object 2 is a true discrete random medium (DRM) comprising the same number N of particles randomly moving throughout V. The median particle size parameter is fixed at 4. We show that if Object 1 is illuminated by a quasi-monochromatic parallel beam then it generates a typical speckle pattern having no resemblance to the scattering pattern generated by Object 2. However, if Object 1 is illuminated by a parallel polychromatic beam with a 10% bandwidth then it generates a scattering pattern that is largely devoid of speckles and closely reproduces the quasi-monochromatic pattern generated by Object 2. This result serves to illustrate the capacity of the concept of electromagnetic scattering by a DRM to encompass fixed quasi-random particulate samples provided that they are illuminated by polychromatic light.

  19. Bi-dimensional null model analysis of presence-absence binary matrices.

    PubMed

    Strona, Giovanni; Ulrich, Werner; Gotelli, Nicholas J

    2018-01-01

    Comparing the structure of presence/absence (i.e., binary) matrices with those of randomized counterparts is a common practice in ecology. However, differences in the randomization procedures (null models) can affect the results of the comparisons, leading matrix structural patterns to appear either "random" or not. Subjectivity in the choice of one particular null model over another makes it often advisable to compare the results obtained using several different approaches. Yet, available algorithms to randomize binary matrices differ substantially with respect to the constraints they impose on the discrepancy between observed and randomized row and column marginal totals, which complicates the interpretation of contrasting patterns. This calls for new strategies both to explore intermediate scenarios of restrictiveness in-between extreme constraint assumptions, and to properly synthesize the resulting information. Here we introduce a new modeling framework based on a flexible matrix randomization algorithm (named the "Tuning Peg" algorithm) that addresses both issues. The algorithm consists of a modified swap procedure in which the discrepancy between the row and column marginal totals of the target matrix and those of its randomized counterpart can be "tuned" in a continuous way by two parameters (controlling, respectively, row and column discrepancy). We show how combining the Tuning Peg with a wise random walk procedure makes it possible to explore the complete null space embraced by existing algorithms. This exploration allows researchers to visualize matrix structural patterns in an innovative bi-dimensional landscape of significance/effect size. We demonstrate the rationale and potential of our approach with a set of simulated and real matrices, showing how the simultaneous investigation of a comprehensive and continuous portion of the null space can be extremely informative, and possibly key to resolving longstanding debates in the analysis of ecological matrices. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.
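    For orientation, a hedged sketch of the classic fixed-fixed swap step that the "Tuning Peg" algorithm generalizes (the continuous tuning of row and column discrepancy itself is not reproduced here): 2x2 checkerboard submatrices are swapped, which preserves both marginal totals exactly.

      # Hedged sketch of a standard checkerboard-swap null model for a binary
      # presence/absence matrix; it preserves row and column totals exactly.
      # The Tuning Peg algorithm described above relaxes this by allowing a
      # tunable discrepancy in the marginals (not implemented in this sketch).
      import numpy as np

      def swap_randomize(matrix, n_swaps=10_000, seed=0):
          m = matrix.copy()
          rng = np.random.default_rng(seed)
          rows, cols = m.shape
          for _ in range(n_swaps):
              r1, r2 = rng.choice(rows, 2, replace=False)
              c1, c2 = rng.choice(cols, 2, replace=False)
              # Swap only if the 2x2 submatrix is a "checkerboard".
              if m[r1, c1] == m[r2, c2] == 1 and m[r1, c2] == m[r2, c1] == 0:
                  m[r1, c1] = m[r2, c2] = 0
                  m[r1, c2] = m[r2, c1] = 1
          return m

      observed = (np.random.default_rng(1).random((20, 30)) < 0.3).astype(int)
      null = swap_randomize(observed)
      assert (null.sum(0) == observed.sum(0)).all() and (null.sum(1) == observed.sum(1)).all()
      print("marginals preserved; matrix changed:", not np.array_equal(null, observed))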

  20. An efficient computational method for characterizing the effects of random surface errors on the average power pattern of reflectors

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1983-01-01

    Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
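    For reference, the classical Ruze relation underlying such average power pattern models (quoted in its standard uniform-error form; the paper extends it to nonuniform rms errors, nonuniform illumination, and F/D dependence):

      % Classical Ruze gain-loss relation: \epsilon_{rms} is the surface rms error and
      % \lambda the wavelength; the paper's closed-form expressions generalize this.
      \frac{G}{G_{0}} = \exp\!\left[-\left(\frac{4\pi\,\epsilon_{\mathrm{rms}}}{\lambda}\right)^{2}\right]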

  1. Patterns of medicinal plant use: an examination of the Ecuadorian Shuar medicinal flora using contingency table and binomial analyses.

    PubMed

    Bennett, Bradley C; Husby, Chad E

    2008-03-28

    Botanical pharmacopoeias are non-random subsets of floras, with some taxonomic groups over- or under-represented. Moerman [Moerman, D.E., 1979. Symbols and selectivity: a statistical analysis of Native American medical ethnobotany, Journal of Ethnopharmacology 1, 111-119] introduced linear regression/residual analysis to examine these patterns. However, regression, the commonly-employed analysis, suffers from several statistical flaws. We use contingency table and binomial analyses to examine patterns of Shuar medicinal plant use (from Amazonian Ecuador). We first analyzed the Shuar data using Moerman's approach, modified to better meet requirements of linear regression analysis. Second, we assessed the exact randomization contingency table test for goodness of fit. Third, we developed a binomial model to test for non-random selection of plants in individual families. Modified regression models (which accommodated assumptions of linear regression) reduced R(2) from 0.59 to 0.38, but did not eliminate all problems associated with regression analyses. Contingency table analyses revealed that the entire flora departs from the null model of equal proportions of medicinal plants in all families. In the binomial analysis, only 10 angiosperm families (of 115) differed significantly from the null model. These 10 families are largely responsible for patterns seen at higher taxonomic levels. Contingency table and binomial analyses offer an easy and statistically valid alternative to the regression approach.
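    A hedged sketch of the per-family binomial test described above (family names and counts are invented for illustration): each family's number of medicinal species is compared with the number expected if medicinal plants were drawn from the flora at random.

      # Hedged sketch: binomial test for over-/under-representation of medicinal
      # plants within each family. Family names and counts are invented examples.
      from scipy.stats import binomtest

      flora = {                      # family: (species in flora, medicinal species)
          "Asteraceae": (120, 35),
          "Poaceae": (90, 5),
          "Solanaceae": (40, 22),
      }
      total_species = sum(n for n, _ in flora.values())
      total_medicinal = sum(m for _, m in flora.values())
      p_overall = total_medicinal / total_species   # null: same proportion in every family

      for family, (n_species, n_medicinal) in flora.items():
          result = binomtest(n_medicinal, n_species, p_overall)
          print(f"{family:12s} observed {n_medicinal:3d}  expected {p_overall * n_species:5.1f}"
                f"  p = {result.pvalue:.3f}")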

  2. Quantifying opening-mode fracture spatial organization in horizontal wellbore image logs, core and outcrop: Application to Upper Cretaceous Frontier Formation tight gas sandstones, USA

    NASA Astrophysics Data System (ADS)

    Li, J. Z.; Laubach, S. E.; Gale, J. F. W.; Marrett, R. A.

    2018-03-01

    The Upper Cretaceous Frontier Formation is a naturally fractured gas-producing sandstone in Wyoming. Regionally, both random patterns and patterns statistically more clustered than random occur in the same upper to lower shoreface depositional facies. East-west- and north-south-striking regional fractures sampled using image logs and cores from three horizontal wells exhibit clustered patterns, whereas data collected from east-west-striking fractures in outcrop have patterns that are indistinguishable from random. Image log data analyzed with the correlation count method shows clusters ∼35 m wide and spaced ∼50 to 90 m apart as well as clusters up to 12 m wide with periodic inter-cluster spacings. A hierarchy of cluster sizes exists; organization within clusters is likely fractal. These rocks have markedly different structural and burial histories, so regional differences in degree of clustering are unsurprising. Clustered patterns correspond to fractures having core quartz deposition contemporaneous with fracture opening, circumstances that some models suggest might affect spacing patterns by interfering with fracture growth. Our results show that quantifying and identifying patterns as statistically more or less clustered than random delineates differences in fracture patterns that are not otherwise apparent but that may influence gas and water production, and therefore may be economically important.

  3. Identifying sensitive areas of adaptive observations for prediction of the Kuroshio large meander using a shallow-water model

    NASA Astrophysics Data System (ADS)

    Zou, Guang'an; Wang, Qiang; Mu, Mu

    2016-09-01

    Sensitive areas for prediction of the Kuroshio large meander using a 1.5-layer, shallow-water ocean model were investigated using the conditional nonlinear optimal perturbation (CNOP) and first singular vector (FSV) methods. A series of sensitivity experiments were designed to test the sensitivity of sensitive areas within the numerical model. The following results were obtained: (1) the effect of initial CNOP and FSV patterns in their sensitive areas is greater than that of the same patterns in randomly selected areas, with the effect of the initial CNOP patterns in CNOP sensitive areas being the greatest; (2) both CNOP- and FSV-type initial errors grow more quickly than random errors; (3) the effect of random errors superimposed on the sensitive areas is greater than that of random errors introduced into randomly selected areas, and initial errors in the CNOP sensitive areas have greater effects on final forecasts. These results reveal that the sensitive areas determined using the CNOP are more sensitive than those of FSV and other randomly selected areas. In addition, ideal hindcasting experiments were conducted to examine the validity of the sensitive areas. The results indicate that reduction (or elimination) of CNOP-type errors in CNOP sensitive areas at the initial time has a greater forecast benefit than the reduction (or elimination) of FSV-type errors in FSV sensitive areas. These results suggest that the CNOP method is suitable for determining sensitive areas in the prediction of the Kuroshio large-meander path.

  4. The Impact of Five Missing Data Treatments on a Cross-Classified Random Effects Model

    ERIC Educational Resources Information Center

    Hoelzle, Braden R.

    2012-01-01

    The present study compared the performance of five missing data treatment methods within a Cross-Classified Random Effects Model environment under various levels and patterns of missing data given a specified sample size. Prior research has shown the varying effect of missing data treatment options within the context of numerous statistical…

  5. Pseudo-orthogonalization of memory patterns for associative memory.

    PubMed

    Oku, Makito; Makino, Takaki; Aihara, Kazuyuki

    2013-11-01

    A new method for improving the storage capacity of associative memory models on a neural network is proposed. The storage capacity of the network increases in proportion to the network size in the case of random patterns, but, in general, the capacity suffers from correlation among memory patterns. Numerous solutions to this problem have been proposed so far, but their high computational cost limits their scalability. In this paper, we propose a novel and simple solution that is locally computable without any iteration. Our method involves XNOR masking of the original memory patterns with random patterns, and the masked patterns and masks are concatenated. The resulting decorrelated patterns allow higher storage capacity at the cost of the pattern length. Furthermore, the increase in the pattern length can be reduced through blockwise masking, which results in a small amount of capacity loss. Movie replay and image recognition are presented as examples to demonstrate the scalability of the proposed method.
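    A hedged sketch of the masking step described above (pattern sizes, the correlation structure, and the +/-1 coding are assumptions): each memory pattern is XNOR-masked with a random pattern, and the masked pattern and its mask are concatenated, which decorrelates the stored patterns at the cost of doubling their length.

      # Hedged sketch: pseudo-orthogonalization by XNOR masking. With +/-1 coding,
      # the XNOR of two bits is simply their product. Sizes and patterns are illustrative.
      import numpy as np

      rng = np.random.default_rng(0)
      n_units, n_patterns = 200, 30
      # Correlated memory patterns (+/-1), all biased toward a common prototype.
      prototype = rng.choice([-1, 1], n_units)
      patterns = np.where(rng.random((n_patterns, n_units)) < 0.8, prototype, -prototype)

      masks = rng.choice([-1, 1], (n_patterns, n_units))   # independent random masks
      masked = patterns * masks                             # XNOR in +/-1 coding
      stored = np.hstack([masked, masks])                   # concatenate pattern and mask

      def mean_abs_overlap(p):
          overlaps = p @ p.T / p.shape[1]
          return np.abs(overlaps[~np.eye(len(p), dtype=bool)]).mean()

      print("mean |overlap| before masking:", round(mean_abs_overlap(patterns), 3))
      print("mean |overlap| after masking: ", round(mean_abs_overlap(stored), 3))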

  6. Pattern formations and optimal packing.

    PubMed

    Mityushev, Vladimir

    2016-04-01

    Patterns of different symmetries may arise after solution to reaction-diffusion equations. Hexagonal arrays, layers and their perturbations are observed in different models after numerical solution to the corresponding initial-boundary value problems. We demonstrate an intimate connection between pattern formations and optimal random packing on the plane. The main study is based on the following two points. First, the diffusive flux in reaction-diffusion systems is approximated by piecewise linear functions in the framework of structural approximations. This leads to a discrete network approximation of the considered continuous problem. Second, the discrete energy minimization yields optimal random packing of the domains (disks) in the representative cell. Therefore, the general problem of pattern formations based on the reaction-diffusion equations is reduced to the geometric problem of random packing. It is demonstrated that all random packings can be divided onto classes associated with classes of isomorphic graphs obtained from the Delaunay triangulation. The unique optimal solution is constructed in each class of the random packings. If the number of disks per representative cell is finite, the number of classes of isomorphic graphs, hence, the number of optimal packings is also finite. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Chaos Modeling: Increasing Educational Researchers' Awareness of a New Tool.

    ERIC Educational Resources Information Center

    Bobner, Ronald F.; And Others

    Chaos theory is being used as a tool to study a wide variety of phenomena. It is a philosophical and empirical approach that attempts to explain relationships previously thought to be totally random. Although some relationships are truly random, many data appear to be random but reveal repeatable patterns of behavior under further investigation.…

  8. Spatial Analysis of “Crazy Quilts”, a Class of Potentially Random Aesthetic Artefacts

    PubMed Central

    Westphal-Fitch, Gesche; Fitch, W. Tecumseh

    2013-01-01

    Human artefacts in general are highly structured and often display ordering principles such as translational, reflectional or rotational symmetry. In contrast, human artefacts that are intended to appear random and non symmetrical are very rare. Furthermore, many studies show that humans find it extremely difficult to recognize or reproduce truly random patterns or sequences. Here, we attempt to model two-dimensional decorative spatial patterns produced by humans that show no obvious order. “Crazy quilts” represent a historically important style of quilt making that became popular in the 1870s, and lasted about 50 years. Crazy quilts are unusual because unlike most human artefacts, they are specifically intended to appear haphazard and unstructured. We evaluate the degree to which this intention was achieved by using statistical techniques of spatial point pattern analysis to compare crazy quilts with regular quilts from the same region and era and to evaluate the fit of various random distributions to these two quilt classes. We found that the two quilt categories exhibit fundamentally different spatial characteristics: The patch areas of crazy quilts derive from a continuous random distribution, while area distributions of regular quilts consist of Gaussian mixtures. These Gaussian mixtures derive from regular pattern motifs that are repeated and we suggest that such a mixture is a distinctive signature of human-made visual patterns. In contrast, the distribution found in crazy quilts is shared with many other naturally occurring spatial patterns. Centroids of patches in the two quilt classes are spaced differently and in general, crazy quilts but not regular quilts are well-fitted by a random Strauss process. These results indicate that, within the constraints of the quilt format, Victorian quilters indeed achieved their goal of generating random structures. PMID:24066095

  9. Spatial analysis of "crazy quilts", a class of potentially random aesthetic artefacts.

    PubMed

    Westphal-Fitch, Gesche; Fitch, W Tecumseh

    2013-01-01

    Human artefacts in general are highly structured and often display ordering principles such as translational, reflectional or rotational symmetry. In contrast, human artefacts that are intended to appear random and non symmetrical are very rare. Furthermore, many studies show that humans find it extremely difficult to recognize or reproduce truly random patterns or sequences. Here, we attempt to model two-dimensional decorative spatial patterns produced by humans that show no obvious order. "Crazy quilts" represent a historically important style of quilt making that became popular in the 1870s, and lasted about 50 years. Crazy quilts are unusual because unlike most human artefacts, they are specifically intended to appear haphazard and unstructured. We evaluate the degree to which this intention was achieved by using statistical techniques of spatial point pattern analysis to compare crazy quilts with regular quilts from the same region and era and to evaluate the fit of various random distributions to these two quilt classes. We found that the two quilt categories exhibit fundamentally different spatial characteristics: The patch areas of crazy quilts derive from a continuous random distribution, while area distributions of regular quilts consist of Gaussian mixtures. These Gaussian mixtures derive from regular pattern motifs that are repeated and we suggest that such a mixture is a distinctive signature of human-made visual patterns. In contrast, the distribution found in crazy quilts is shared with many other naturally occurring spatial patterns. Centroids of patches in the two quilt classes are spaced differently and in general, crazy quilts but not regular quilts are well-fitted by a random Strauss process. These results indicate that, within the constraints of the quilt format, Victorian quilters indeed achieved their goal of generating random structures.

  10. Automated time activity classification based on global positioning system (GPS) tracking data

    PubMed Central

    2011-01-01

    Background: Air pollution epidemiological studies are increasingly using global positioning system (GPS) to collect time-location data because they offer continuous tracking, high temporal resolution, and minimum reporting burden for participants. However, substantial uncertainties in the processing and classifying of raw GPS data create challenges for reliably characterizing time activity patterns. We developed and evaluated models to classify people's major time activity patterns from continuous GPS tracking data. Methods: We developed and evaluated two automated models to classify major time activity patterns (i.e., indoor, outdoor static, outdoor walking, and in-vehicle travel) based on GPS time activity data collected under free living conditions for 47 participants (N = 131 person-days) from the Harbor Communities Time Location Study (HCTLS) in 2008 and supplemental GPS data collected from three UC-Irvine research staff (N = 21 person-days) in 2010. Time activity patterns used for model development were manually classified by research staff using information from participant GPS recordings, activity logs, and follow-up interviews. We evaluated two models: (a) a rule-based model that developed user-defined rules based on time, speed, and spatial location, and (b) a random forest decision tree model. Results: Indoor, outdoor static, outdoor walking and in-vehicle travel activities accounted for 82.7%, 6.1%, 3.2% and 7.2% of manually-classified time activities in the HCTLS dataset, respectively. The rule-based model classified indoor and in-vehicle travel periods reasonably well (Indoor: sensitivity > 91%, specificity > 80%, and precision > 96%; in-vehicle travel: sensitivity > 71%, specificity > 99%, and precision > 88%), but the performance was moderate for outdoor static and outdoor walking predictions. No striking differences in performance were observed between the rule-based and the random forest models. The random forest model was fast and easy to execute, but was likely less robust than the rule-based model under the condition of biased or poor quality training data. Conclusions: Our models can successfully identify indoor and in-vehicle travel points from the raw GPS data, but challenges remain in developing models to distinguish outdoor static points and walking. Accurate training data are essential in developing reliable models in classifying time-activity patterns. PMID:22082316

  11. Automated time activity classification based on global positioning system (GPS) tracking data.

    PubMed

    Wu, Jun; Jiang, Chengsheng; Houston, Douglas; Baker, Dean; Delfino, Ralph

    2011-11-14

    Air pollution epidemiological studies are increasingly using global positioning system (GPS) to collect time-location data because they offer continuous tracking, high temporal resolution, and minimum reporting burden for participants. However, substantial uncertainties in the processing and classifying of raw GPS data create challenges for reliably characterizing time activity patterns. We developed and evaluated models to classify people's major time activity patterns from continuous GPS tracking data. We developed and evaluated two automated models to classify major time activity patterns (i.e., indoor, outdoor static, outdoor walking, and in-vehicle travel) based on GPS time activity data collected under free living conditions for 47 participants (N = 131 person-days) from the Harbor Communities Time Location Study (HCTLS) in 2008 and supplemental GPS data collected from three UC-Irvine research staff (N = 21 person-days) in 2010. Time activity patterns used for model development were manually classified by research staff using information from participant GPS recordings, activity logs, and follow-up interviews. We evaluated two models: (a) a rule-based model that developed user-defined rules based on time, speed, and spatial location, and (b) a random forest decision tree model. Indoor, outdoor static, outdoor walking and in-vehicle travel activities accounted for 82.7%, 6.1%, 3.2% and 7.2% of manually-classified time activities in the HCTLS dataset, respectively. The rule-based model classified indoor and in-vehicle travel periods reasonably well (Indoor: sensitivity > 91%, specificity > 80%, and precision > 96%; in-vehicle travel: sensitivity > 71%, specificity > 99%, and precision > 88%), but the performance was moderate for outdoor static and outdoor walking predictions. No striking differences in performance were observed between the rule-based and the random forest models. The random forest model was fast and easy to execute, but was likely less robust than the rule-based model under the condition of biased or poor quality training data. Our models can successfully identify indoor and in-vehicle travel points from the raw GPS data, but challenges remain in developing models to distinguish outdoor static points and walking. Accurate training data are essential in developing reliable models in classifying time-activity patterns.
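    A hedged sketch of a rule-based classifier of the kind evaluated above (the thresholds, column names, and the crude indoor rule are invented for illustration, not the study's tuned rules): each GPS fix is labeled from its speed and a simple sky-view proxy.

      # Hedged sketch of a rule-based time-activity classifier using speed and a
      # crude "indoor" proxy (few visible satellites). Thresholds, column names, and
      # the indoor rule are illustrative, not the HCTLS study's actual rules.
      import pandas as pd

      def classify_fix(row):
          if row["n_satellites"] < 4:          # weak sky view -> treat as indoor
              return "indoor"
          if row["speed_kmh"] > 10:
              return "in-vehicle travel"
          if row["speed_kmh"] > 1.5:
              return "outdoor walking"
          return "outdoor static"

      fixes = pd.DataFrame({
          "speed_kmh":    [0.2, 0.3, 3.5, 45.0, 0.8, 25.0],
          "n_satellites": [3,   8,   9,   10,   7,   11],
      })
      fixes["activity"] = fixes.apply(classify_fix, axis=1)
      print(fixes)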

  12. A computational method for optimizing fuel treatment locations

    Treesearch

    Mark A. Finney

    2006-01-01

    Modeling and experiments have suggested that spatial fuel treatment patterns can influence the movement of large fires. On simple theoretical landscapes consisting of two fuel types (treated and untreated) optimal patterns can be analytically derived that disrupt fire growth efficiently (i.e. with less area treated than random patterns). Although conceptually simple,...

  13. A spatial error model with continuous random effects and an application to growth convergence

    NASA Astrophysics Data System (ADS)

    Laurini, Márcio Poletti

    2017-10-01

    We propose a spatial error model with continuous random effects based on Matérn covariance functions and apply this model for the analysis of income convergence processes (β-convergence). The use of a model with continuous random effects permits a clearer visualization and interpretation of the spatial dependency patterns, avoids the problems of defining neighborhoods in spatial econometrics models, and allows projecting the spatial effects for every possible location in the continuous space, circumventing the existing aggregations in discrete lattice representations. We apply this model approach to analyze the economic growth of Brazilian municipalities between 1991 and 2010 using unconditional and conditional formulations and a spatiotemporal model of convergence. The results indicate that the estimated spatial random effects are consistent with the existence of income convergence clubs for Brazilian municipalities in this period.
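    For reference, the Matérn covariance family on which such continuous spatial random effects are commonly based (standard parameterization; the symbols are the usual ones, not necessarily the paper's):

      % Matérn covariance between locations a distance d apart: \sigma^2 is the marginal
      % variance, \kappa an inverse range parameter, \nu the smoothness, and K_\nu the
      % modified Bessel function of the second kind.
      C(d) = \sigma^{2}\,\frac{2^{1-\nu}}{\Gamma(\nu)}\,(\kappa d)^{\nu} K_{\nu}(\kappa d)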

  14. Statistically significant relational data mining :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann

    This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second comprises statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.

  15. Evaluating random search strategies in three mammals from distinct feeding guilds.

    PubMed

    Auger-Méthé, Marie; Derocher, Andrew E; DeMars, Craig A; Plank, Michael J; Codling, Edward A; Lewis, Mark A

    2016-09-01

    Searching allows animals to find food, mates, shelter and other resources essential for survival and reproduction and is thus among the most important activities performed by animals. Theory predicts that animals will use random search strategies in highly variable and unpredictable environments. Two prominent models have been suggested for animals searching in sparse and heterogeneous environments: (i) the Lévy walk and (ii) the composite correlated random walk (CCRW) and its associated area-restricted search behaviour. Until recently, it was difficult to differentiate between the movement patterns of these two strategies. Using a new method that assesses whether movement patterns are consistent with these two strategies and two other common random search strategies, we investigated the movement behaviour of three species inhabiting sparse northern environments: woodland caribou (Rangifer tarandus caribou), barren-ground grizzly bear (Ursus arctos) and polar bear (Ursus maritimus). These three species vary widely in their diets and thus allow us to contrast the movement patterns of animals from different feeding guilds. Our results showed that although more traditional methods would have found evidence for the Lévy walk for some individuals, a comparison of the Lévy walk to CCRWs showed stronger support for the latter. While a CCRW was the best model for most individuals, there was a range of support for its absolute fit. A CCRW was sufficient to explain the movement of nearly half of herbivorous caribou and a quarter of omnivorous grizzly bears, but was insufficient to explain the movement of all carnivorous polar bears. Strong evidence for CCRW movement patterns suggests that many individuals may use a multiphasic movement strategy rather than one-behaviour strategies such as the Lévy walk. The fact that the best model was insufficient to describe the movement paths of many individuals suggests that some animals living in sparse environments may use strategies that are more complicated than those described by the standard random search models. Thus, our results indicate a need to develop movement models that incorporate factors such as the perceptual and cognitive capacities of animals. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.
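    A hedged sketch of the simplest ingredient of such model comparisons (the full method also involves composite CRWs with distinct behavioural phases, which are not reproduced here): exponential (CRW-like) and power-law (Lévy-walk-like) step-length distributions are fit by maximum likelihood and compared by AIC.

      # Hedged sketch: compare an exponential step-length model (CRW-like) against a
      # power-law model (Levy-walk-like) by maximum likelihood and AIC. The steps
      # below are simulated; a real analysis would use observed movement step lengths.
      import numpy as np

      rng = np.random.default_rng(0)
      xmin = 1.0
      steps = xmin * np.exp(rng.exponential(0.8, 1000))        # Pareto sample, exponent mu = 2.25

      # Exponential model on x >= xmin: MLE rate lambda = 1 / mean(x - xmin).
      lam = 1.0 / np.mean(steps - xmin)
      ll_exp = np.sum(np.log(lam) - lam * (steps - xmin))

      # Power-law model p(x) ~ x^(-mu): MLE mu = 1 + n / sum(log(x / xmin)).
      mu = 1.0 + len(steps) / np.sum(np.log(steps / xmin))
      ll_pow = np.sum(np.log((mu - 1) / xmin) - mu * np.log(steps / xmin))

      aic_exp, aic_pow = 2 - 2 * ll_exp, 2 - 2 * ll_pow        # one free parameter each
      print(f"fitted power-law exponent mu = {mu:.2f}")
      print(f"AIC exponential = {aic_exp:.1f}, AIC power law = {aic_pow:.1f} (lower is better)")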

  16. Colonic stem cell data are consistent with the immortal model of stem cell division under non-random strand segregation.

    PubMed

    Walters, K

    2009-06-01

    Colonic stem cells are thought to reside towards the base of crypts of the colon, but their numbers and proliferation mechanisms are not well characterized. A defining property of stem cells is that they are able to divide asymmetrically, but it is not known whether they always divide asymmetrically (immortal model) or whether there are occasional symmetrical divisions (stochastic model). By measuring diversity of methylation patterns in colon crypt samples, a recent study found evidence in favour of the stochastic model, assuming random segregation of stem cell DNA strands during cell division. Here, preferential segregation of the template strand, consistent with the 'immortal strand hypothesis', is considered, and its effect on the conclusions of previously published results is explored. For a sample of crypts, it is shown how, under the immortal model, to calculate the mean and variance of the number of unique methylation patterns allowing for non-random strand segregation, and how to compare them with those observed. The calculated mean and variance are consistent with an immortal model that incorporates non-random strand segregation for a range of stem cell numbers and levels of preferential strand segregation. Allowing for preferential strand segregation considerably alters previously published conclusions relating to stem cell numbers and turnover mechanisms. Evidence in favour of the stochastic model may not be as strong as previously thought.

  17. Conditional random fields for pattern recognition applied to structured data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Skurikhin, Alexei

    Pattern recognition uses measurements from an input domain, X, to predict labels from an output domain, Y. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; if the label for a nearby pixel patch is then more likely to be “manmade”, there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.
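
    To make the conditional factorization concrete, here is a toy linear-chain CRF over three pixel patches with labels "natural"/"manmade": unary scores play the role of per-patch evidence and a pairwise term rewards neighbouring patches that share a label. The potentials are invented for illustration; the paper's image-analysis CRFs use richer graph structures.

      import numpy as np

      # P(y | x) is proportional to exp( sum_t unary[t, y_t] + sum_t pairwise[y_t, y_{t+1}] ), labels {0: natural, 1: manmade}
      unary = np.array([[2.0, 0.1],     # patch 0: strong "natural" evidence
                        [0.8, 1.0],     # patch 1: ambiguous
                        [0.2, 2.5]])    # patch 2: strong "manmade" evidence
      pairwise = np.array([[0.9, 0.0],  # neighbouring patches prefer identical labels
                           [0.0, 0.9]])

      def log_partition(unary, pairwise):
          # forward recursion in log space; returns log Z for the chain
          alpha = unary[0].copy()
          for t in range(1, len(unary)):
              m = alpha[:, None] + pairwise + unary[t][None, :]
              alpha = np.log(np.exp(m - m.max()).sum(axis=0)) + m.max()
          return np.log(np.exp(alpha - alpha.max()).sum()) + alpha.max()

      score_all_manmade = unary[:, 1].sum() + 2 * pairwise[1, 1]
      print("P(all patches manmade | x) =", np.exp(score_all_manmade - log_partition(unary, pairwise)))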

  18. Conditional random fields for pattern recognition applied to structured data

    DOE PAGES

    Burr, Tom; Skurikhin, Alexei

    2015-07-14

    Pattern recognition uses measurements from an input domain, X, to predict labels from an output domain, Y. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; if the label for a nearby pixel patch is then more likely to be “manmade”, there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.

  19. Dissipative neutrino oscillations in randomly fluctuating matter

    NASA Astrophysics Data System (ADS)

    Benatti, F.; Floreanini, R.

    2005-01-01

    The generalized dynamics describing the propagation of neutrinos in randomly fluctuating media is analyzed: It takes into account matter-induced, decoherence phenomena that go beyond the standard Mikheyev-Smirnov-Wolfenstein (MSW) effect. A widely adopted density fluctuation pattern is found to be physically untenable: A more general model needs to be instead considered, leading to flavor changing effective neutrino-matter interactions. They induce new, dissipative effects that modify the neutrino oscillation pattern in a way amenable to a direct experimental analysis.

  20. Hierarchical model analysis of the Atlantic Flyway Breeding Waterfowl Survey

    USGS Publications Warehouse

    Sauer, John R.; Zimmerman, Guthrie S.; Klimstra, Jon D.; Link, William A.

    2014-01-01

    We used log-linear hierarchical models to analyze data from the Atlantic Flyway Breeding Waterfowl Survey. The survey has been conducted by state biologists each year since 1989 in the northeastern United States from Virginia north to New Hampshire and Vermont. Although yearly population estimates from the survey are used by the United States Fish and Wildlife Service for estimating regional waterfowl population status for mallards (Anas platyrhynchos), black ducks (Anas rubripes), wood ducks (Aix sponsa), and Canada geese (Branta canadensis), they are not routinely adjusted to control for time of day effects and other survey design issues. The hierarchical model analysis permits estimation of year effects and population change while accommodating the repeated sampling of plots and controlling for time of day effects in counting. We compared population estimates from the current stratified random sample analysis to population estimates from hierarchical models with alternative model structures that describe year to year changes as random year effects, a trend with random year effects, or year effects modeled as 1-year differences. Patterns of population change from the hierarchical model results generally were similar to the patterns described by stratified random sample estimates, but significant visibility differences occurred between twilight and midday counts in all species. Controlling for the effects of time of day resulted in larger population estimates for all species in the hierarchical model analysis relative to the stratified random sample analysis. The hierarchical models also provided a convenient means of estimating population trend as derived statistics from the analysis. We detected significant declines in mallards and American black ducks and significant increases in wood ducks and Canada geese, a trend that had not been significant for 3 of these 4 species in the prior analysis. We recommend using hierarchical models for analysis of the Atlantic Flyway Breeding Waterfowl Survey.

  1. Modeling Passive Propagation of Malwares on the WWW

    NASA Astrophysics Data System (ADS)

    Chunbo, Liu; Chunfu, Jia

    Web-based malware is hosted on websites and downloads onto users' computers automatically while they browse. This passive propagation pattern is different from that of traditional viruses and worms. A propagation model based on the reverse web graph is proposed. In this model, the propagation of malware is analyzed by means of a random jump matrix that combines the order and randomness of user browsing behavior. Explanatory experiments with single or multiple propagation sources demonstrate the validity of the model. Using this model, one can evaluate the hazard posed by specific websites and take corresponding countermeasures.
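
    The random jump matrix described above is close in spirit to a damped random-surfer iteration; the sketch below computes a stationary visit distribution on a small, made-up reverse web graph and treats it as a proxy for each page's exposure. It is a simplified stand-in, not the authors' model.

      import numpy as np

      # Hypothetical web graph: A[i, j] = 1 if page i links to page j.
      A = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 0],
                    [1, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=float)

      R = A.T                                       # reverse web graph
      P = R / R.sum(axis=1, keepdims=True)          # row-stochastic (every page has an in-link here)
      d, n = 0.85, len(P)                           # d weights ordered (link-following) browsing
      G = d * P + (1 - d) / n                       # random jump matrix: order plus randomness

      pi = np.full(n, 1.0 / n)
      for _ in range(100):                          # power iteration to the stationary distribution
          pi = pi @ G
      print("relative exposure to malware hosted on each page:", np.round(pi, 3))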

  2. Movement Patterns, Social Dynamics, and the Evolution of Cooperation

    PubMed Central

    Smaldino, Paul E.; Schank, Jeffrey C.

    2012-01-01

    The structure of social interactions influences many aspects of social life, including the spread of information and behavior, and the evolution of social phenotypes. After dispersal, organisms move around throughout their lives, and the patterns of their movement influence their social encounters over the course of their lifespan. Though both space and mobility are known to influence social evolution, there is little analysis of the influence of specific movement patterns on evolutionary dynamics. We explored the effects of random movement strategies on the evolution of cooperation using an agent-based prisoner’s dilemma model with mobile agents. This is the first systematic analysis of a model in which cooperators and defectors can use different random movement strategies, which we chose to fall on a spectrum between highly exploratory and highly restricted in their search tendencies. Because limited dispersal and restrictions to local neighborhood size are known to influence the ability of cooperators to effectively assort, we also assessed the robustness of our findings with respect to dispersal and local capacity constraints. We show that differences in patterns of movement can dramatically influence the likelihood of cooperator success, and that the effects of different movement patterns are sensitive to environmental assumptions about offspring dispersal and local space constraints. Since local interactions implicitly generate dynamic social interaction networks, we also measured the average number of unique and total interactions over a lifetime and considered how these emergent network dynamics helped explain the results. This work extends what is known about mobility and the evolution of cooperation, and also has general implications for social models with randomly moving agents. PMID:22838026

  3. Effects of behavioral patterns and network topology structures on Parrondo’s paradox

    PubMed Central

    Ye, Ye; Cheong, Kang Hao; Cen, Yu-wan; Xie, Neng-gang

    2016-01-01

    A multi-agent Parrondo’s model based on complex networks is used in the current study. For Parrondo’s game A, the individual interaction can be categorized into five types of behavioral patterns: the Matthew effect, harmony, cooperation, poor-competition-rich-cooperation and a random mode. The parameter space of Parrondo’s paradox pertaining to each behavioral pattern, and the gradual change of the parameter space from a two-dimensional lattice to a random network and from a random network to a scale-free network was analyzed. The simulation results suggest that the size of the region of the parameter space that elicits Parrondo’s paradox is positively correlated with the heterogeneity of the degree distribution of the network. For two distinct sets of probability parameters, the microcosmic reasons underlying the occurrence of the paradox under the scale-free network are elaborated. Common interaction mechanisms of the asymmetric structure of game B, behavioral patterns and network topology are also revealed. PMID:27845430
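
    For readers unfamiliar with the paradox itself, the sketch below reproduces the classical single-player, capital-dependent version: games A and B are both losing on their own, yet randomly alternating between them is winning. The multi-agent, network-based variant studied in the paper is not implemented here; the parameter values follow the textbook example.

      import numpy as np

      rng = np.random.default_rng(1)
      eps = 0.005

      def play(strategy, rounds=200_000):
          # strategy: 'A', 'B', or 'AB' (pick A or B uniformly at random each round)
          capital = 0
          for _ in range(rounds):
              game = strategy if strategy != 'AB' else rng.choice(['A', 'B'])
              if game == 'A':
                  p = 0.5 - eps                                            # slightly losing coin
              else:
                  p = (0.10 - eps) if capital % 3 == 0 else (0.75 - eps)   # capital-dependent game B
              capital += 1 if rng.random() < p else -1
          return capital

      for s in ('A', 'B', 'AB'):
          print(s, "final capital:", play(s))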

  4. Effects of behavioral patterns and network topology structures on Parrondo’s paradox

    NASA Astrophysics Data System (ADS)

    Ye, Ye; Cheong, Kang Hao; Cen, Yu-Wan; Xie, Neng-Gang

    2016-11-01

    A multi-agent Parrondo’s model based on complex networks is used in the current study. For Parrondo’s game A, the individual interaction can be categorized into five types of behavioral patterns: the Matthew effect, harmony, cooperation, poor-competition-rich-cooperation and a random mode. The parameter space of Parrondo’s paradox pertaining to each behavioral pattern, and the gradual change of the parameter space from a two-dimensional lattice to a random network and from a random network to a scale-free network was analyzed. The simulation results suggest that the size of the region of the parameter space that elicits Parrondo’s paradox is positively correlated with the heterogeneity of the degree distribution of the network. For two distinct sets of probability parameters, the microcosmic reasons underlying the occurrence of the paradox under the scale-free network are elaborated. Common interaction mechanisms of the asymmetric structure of game B, behavioral patterns and network topology are also revealed.

  5. Random Resistor Network Model of Minimal Conductivity in Graphene

    NASA Astrophysics Data System (ADS)

    Cheianov, Vadim V.; Fal'Ko, Vladimir I.; Altshuler, Boris L.; Aleiner, Igor L.

    2007-10-01

    Transport in undoped graphene is related to percolating current patterns in the networks of n- and p-type regions reflecting the strong bipolar charge density fluctuations. Finite transparency of the p-n junctions is vital in establishing the macroscopic conductivity. We propose a random resistor network model to analyze scaling dependencies of the conductance on the doping and disorder, the quantum magnetoresistance and the corresponding dephasing rate.

  6. Movement analyses of wood cricket ( Nemobius sylvestris) (Orthoptera: Gryllidae).

    PubMed

    Brouwers, N C; Newton, A C

    2010-12-01

    Information on the dispersal ability of invertebrate species associated with woodland habitats is severely lacking. Therefore, a study was conducted examining the movement patterns of wood cricket (Nemobius sylvestris) (Orthoptera: Gryllidae) on the Isle of Wight, UK. Juvenile (i.e. nymphs) and adult wood crickets were released and observed over time within different ground surface substrates. Their movement paths were recorded and subsequently analysed using random walk models. Nymphs were found to move more slowly than adults, and, when given a choice, both nymphs and adults showed a preference for moving through or over leaf litter compared to bare soil or grass. A correlated random walk (CRW) model accurately described the movement pattern of adult wood crickets through leaf litter, indicating a level of directional persistence in their movements. The estimated population spread through leaf litter for adults was 17.9 cm/min. Movements of nymphs through leaf litter could not accurately be described by a random walk model, showing a change in their movement pattern over time from directed to more random movements. The estimated population spread through leaf litter for nymphs was 10.1 cm/min. The results indicate that wood cricket adults can be considered more powerful dispersers than nymphs; however, further analysis of how the insects move through natural heterogeneous environments at a range of spatio-temporal scales needs to be performed to provide a complete understanding of the dispersal ability of the species.

  7. A Bayesian model for time-to-event data with informative censoring

    PubMed Central

    Kaciroti, Niko A.; Raghunathan, Trivellore E.; Taylor, Jeremy M. G.; Julius, Stevo

    2012-01-01

    Randomized trials with dropouts or censored data and discrete time-to-event type outcomes are frequently analyzed using the Kaplan–Meier or product limit (PL) estimation method. However, the PL method assumes that the censoring mechanism is noninformative and when this assumption is violated, the inferences may not be valid. We propose an expanded PL method using a Bayesian framework to incorporate informative censoring mechanism and perform sensitivity analysis on estimates of the cumulative incidence curves. The expanded method uses a model, which can be viewed as a pattern mixture model, where odds for having an event during the follow-up interval (tk−1,tk], conditional on being at risk at tk−1, differ across the patterns of missing data. The sensitivity parameters relate the odds of an event, between subjects from a missing-data pattern with the observed subjects for each interval. The large number of the sensitivity parameters is reduced by considering them as random and assumed to follow a log-normal distribution with prespecified mean and variance. Then we vary the mean and variance to explore sensitivity of inferences. The missing at random (MAR) mechanism is a special case of the expanded model, thus allowing exploration of the sensitivity to inferences as departures from the inferences under the MAR assumption. The proposed approach is applied to data from the TRial Of Preventing HYpertension. PMID:22223746

  8. A new computational approach to simulate pattern formation in Paenibacillus dendritiformis bacterial colonies

    NASA Astrophysics Data System (ADS)

    Tucker, Laura Jane

    Under the harsh conditions of limited nutrients and a hard growth surface, Paenibacillus dendritiformis colonies on agar plates form two classes of patterns (morphotypes). The first class, called the dendritic morphotype, has radially directed branches. The second class, called the chiral morphotype, exhibits uniform handedness. The dendritic morphotype has been modeled successfully using a continuum model on a regular lattice; however, a suitable computational approach was not known to solve a continuum chiral model. This work details a new computational approach to solving the chiral continuum model of pattern formation in P. dendritiformis. The approach utilizes a random computational lattice and new methods for calculating certain derivative terms found in the model.

  9. Human X-chromosome inactivation pattern distributions fit a model of genetically influenced choice better than models of completely random choice

    PubMed Central

    Renault, Nisa K E; Pritchett, Sonja M; Howell, Robin E; Greer, Wenda L; Sapienza, Carmen; Ørstavik, Karen Helene; Hamilton, David C

    2013-01-01

    In eutherian mammals, one X-chromosome in every XX somatic cell is transcriptionally silenced through the process of X-chromosome inactivation (XCI). Females are thus functional mosaics, where some cells express genes from the paternal X, and the others from the maternal X. The relative abundance of the two cell populations (X-inactivation pattern, XIP) can have significant medical implications for some females. In mice, the ‘choice' of which X to inactivate, maternal or paternal, in each cell of the early embryo is genetically influenced. In humans, the timing of XCI choice and whether choice occurs completely randomly or under a genetic influence is debated. Here, we explore these questions by analysing the distribution of XIPs in large populations of normal females. Models were generated to predict XIP distributions resulting from completely random or genetically influenced choice. Each model describes the discrete primary distribution at the onset of XCI, and the continuous secondary distribution accounting for changes to the XIP as a result of development and ageing. Statistical methods are used to compare models with empirical data from Danish and Utah populations. A rigorous data treatment strategy maximises information content and allows for unbiased use of unphased XIP data. The Anderson–Darling goodness-of-fit statistics and likelihood ratio tests indicate that a model of genetically influenced XCI choice better fits the empirical data than models of completely random choice. PMID:23652377
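
    A stripped-down version of the two hypotheses can be simulated directly: under completely random choice every cell inactivates the paternal X with probability 0.5, while a genetically influenced choice can be mimicked by letting that probability vary between females. The precursor-pool size and the Beta prior below are assumptions for illustration, and the paper's primary/secondary distribution machinery is not reproduced.

      import numpy as np

      rng = np.random.default_rng(2)
      n_females, n_cells = 10_000, 16      # assumed number of embryonic precursor cells at XCI onset

      # Completely random choice: every cell picks the paternal X with probability 0.5.
      xip_random = rng.binomial(n_cells, 0.5, n_females) / n_cells

      # Genetically influenced choice: the per-female probability itself varies (Beta prior assumed).
      p_genetic = rng.beta(8, 8, n_females)
      xip_genetic = rng.binomial(n_cells, p_genetic) / n_cells

      print("sd of XIP, completely random choice:", xip_random.std().round(3))
      print("sd of XIP, genetically influenced  :", xip_genetic.std().round(3))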

  10. The Multigroup Multilevel Categorical Latent Growth Curve Models

    ERIC Educational Resources Information Center

    Hung, Lai-Fa

    2010-01-01

    Longitudinal data describe developmental patterns and enable predictions of individual changes beyond sampled time points. Major methodological issues in longitudinal data include modeling random effects, subject effects, growth curve parameters, and autoregressive residuals. This study embedded the longitudinal model within a multigroup…

  11. Randomizing growing networks with a time-respecting null model

    NASA Astrophysics Data System (ADS)

    Ren, Zhuo-Ming; Mariani, Manuel Sebastian; Zhang, Yi-Cheng; Medo, Matúš

    2018-05-01

    Complex networks are often used to represent systems that are not static but grow with time: People make new friendships, new papers are published and refer to the existing ones, and so forth. To assess the statistical significance of measurements made on such networks, we propose a randomization methodology—a time-respecting null model—that preserves both the network's degree sequence and the time evolution of individual nodes' degree values. By preserving the temporal linking patterns of the analyzed system, the proposed model is able to factor out the effect of the system's temporal patterns on its structure. We apply the model to the citation network of Physical Review scholarly papers and the citation network of US movies. The model reveals that the two data sets are strikingly different with respect to their degree-degree correlations, and we discuss the important implications of this finding on the information provided by paradigmatic node centrality metrics such as indegree and Google's PageRank. The randomization methodology proposed here can be used to assess the significance of any structural property in growing networks, which could bring new insights into the problems where null models play a critical role, such as the detection of communities and network motifs.
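
    One simple way to respect both constraints is to permute, within every time step, which source attaches to which of that step's targets: every node then keeps its exact in-degree trajectory and every source keeps its out-degree, while the pairing is randomized. The sketch below implements this idea on toy time-stamped edges; it is an assumption-level illustration, not necessarily the exact procedure of the paper.

      import numpy as np
      from collections import defaultdict

      rng = np.random.default_rng(3)

      # Hypothetical growing citation network: (time step, citing node, cited node).
      edges = [(1, 2, 0), (1, 2, 1), (2, 3, 0), (2, 3, 2), (3, 4, 0), (3, 4, 3), (3, 5, 2)]

      def time_respecting_shuffle(edges):
          # permute cited targets only among edges created in the same time step
          by_step = defaultdict(list)
          for i, (t, _, _) in enumerate(edges):
              by_step[t].append(i)
          shuffled = list(edges)
          for idxs in by_step.values():
              targets = [edges[i][2] for i in idxs]
              rng.shuffle(targets)
              for i, tgt in zip(idxs, targets):
                  shuffled[i] = (edges[i][0], edges[i][1], tgt)
          return shuffled

      print(time_respecting_shuffle(edges))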

  12. Image discrimination models predict detection in fixed but not random noise

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J. Jr; Beard, B. L.; Watson, A. B. (Principal Investigator)

    1997-01-01

    By means of a two-interval forced-choice procedure, contrast detection thresholds for an aircraft positioned on a simulated airport runway scene were measured with fixed and random white-noise masks. The term fixed noise refers to a constant, or unchanging, noise pattern for each stimulus presentation. The random noise was either the same or different in the two intervals. Contrary to simple image discrimination model predictions, the same random noise condition produced greater masking than the fixed noise. This suggests that observers seem unable to hold a new noisy image for comparison. Also, performance appeared limited by internal process variability rather than by external noise variability, since similar masking was obtained for both random noise types.

  13. Symmetries and synchronization in multilayer random networks

    NASA Astrophysics Data System (ADS)

    Saa, Alberto

    2018-04-01

    In the light of the recently proposed scenario of asymmetry-induced synchronization (AISync), in which dynamical uniformity and consensus in a distributed system would demand certain asymmetries in the underlying network, we investigate here the influence of some regularities in the interlayer connection patterns on the synchronization properties of multilayer random networks. More specifically, by considering a Stuart-Landau model of complex oscillators with random frequencies, we report for multilayer networks a dynamical behavior that could be also classified as a manifestation of AISync. We show, namely, that the presence of certain symmetries in the interlayer connection pattern tends to diminish the synchronization capability of the whole network or, in other words, asymmetries in the interlayer connections would enhance synchronization in such structured networks. Our results might help the understanding not only of the AISync mechanism itself but also its possible role in the determination of the interlayer connection pattern of multilayer and other structured networks with optimal synchronization properties.

  14. Multiple filters affect tree species assembly in mid-latitude forest communities.

    PubMed

    Kubota, Y; Kusumoto, B; Shiono, T; Ulrich, W

    2018-05-01

    Species assembly patterns of local communities are shaped by the balance between multiple abiotic/biotic filters and dispersal that both select individuals from species pools at the regional scale. Knowledge regarding functional assembly can provide insight into the relative importance of the deterministic and stochastic processes that shape species assembly. We evaluated the hierarchical roles of the α niche and β niches by analyzing the influence of environmental filtering relative to functional traits on geographical patterns of tree species assembly in mid-latitude forests. Using forest plot datasets, we examined the α niche traits (leaf and wood traits) and β niche properties (cold/drought tolerance) of tree species, and tested non-randomness (clustering/over-dispersion) of trait assembly based on null models that assumed two types of species pools related to biogeographical regions. For most plots, species assembly patterns fell within the range of random expectation. However, particularly for cold/drought tolerance-related β niche properties, deviation from randomness was frequently found; non-random clustering was predominant in higher latitudes with harsh climates. Our findings demonstrate that both randomness and non-randomness in trait assembly emerged as a result of the α and β niches, although we suggest the potential role of dispersal processes and/or species equalization through trait similarities in generating the prevalence of randomness. Clustering of β niche traits along latitudinal climatic gradients provides clear evidence of species sorting by filtering particular traits. Our results reveal that multiple filters through functional niches and stochastic processes jointly shape geographical patterns of species assembly across mid-latitude forests.
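
    The clustering/over-dispersion test behind such results is typically a null-model comparison: an observed plot-level trait statistic is compared with the same statistic for random draws from the species pool, and the standardized effect size (SES) indicates clustering when strongly negative. The sketch below uses invented trait values and pool sizes purely to show the mechanics.

      import numpy as np

      rng = np.random.default_rng(4)

      pool_traits = rng.normal(0.0, 1.0, 200)              # e.g. cold-tolerance values of a regional pool (assumed)
      plot_species = rng.choice(200, 15, replace=False)    # species observed in one plot

      def mean_pairwise_distance(vals):
          d = np.abs(vals[:, None] - vals[None, :])
          return d[np.triu_indices(len(vals), k=1)].mean()

      obs = mean_pairwise_distance(pool_traits[plot_species])
      null = np.array([mean_pairwise_distance(pool_traits[rng.choice(200, 15, replace=False)])
                       for _ in range(999)])
      ses = (obs - null.mean()) / null.std()
      print("SES of mean pairwise trait distance:", round(ses, 2))  # << 0 clustering, >> 0 over-dispersion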

  15. Observations of diffusion-limited aggregation-like patterns by atmospheric plasma jet

    NASA Astrophysics Data System (ADS)

    Chiu, Ching-Yang; Chu, Hong-Yu

    2017-11-01

    We report on the observations of diffusion-limited aggregation-like patterns during the thin film removal process by an atmospheric plasma jet. The fractal patterns are found to have various structures like dense branching and tree-like patterns. The determination of surface morphology reveals that the footprints of discharge bursts are not as random as expected. We propose a diffusion-limited aggregation model with a few extra requirements by analogy with the experimental results, and thereby present the beauty of nature. We show that the model simulates not only the shapes of the patterns similar to the experimental observations, but also the growing sequences of fluctuating, oscillatory, and zigzag traces.
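
    For reference, the baseline process the authors modify is easy to simulate: random walkers released on a lattice stick to the growing cluster on first contact. The sketch below is the textbook on-lattice DLA algorithm with periodic boundaries; the paper's extra attachment rules are not included.

      import numpy as np

      rng = np.random.default_rng(5)
      N = 101
      grid = np.zeros((N, N), dtype=bool)
      grid[N // 2, N // 2] = True                              # seed particle at the centre

      def touches_cluster(x, y):
          return grid[(x + 1) % N, y] or grid[(x - 1) % N, y] or grid[x, (y + 1) % N] or grid[x, (y - 1) % N]

      for _ in range(400):                                     # release 400 walkers
          x, y = rng.integers(0, N, 2)
          if grid[x, y]:
              continue                                         # skip walkers dropped onto the cluster
          for _ in range(20_000):                              # walk until the cluster is touched
              if touches_cluster(x, y):
                  grid[x, y] = True
                  break
              step = rng.integers(0, 4)
              if step == 0:   x = (x + 1) % N
              elif step == 1: x = (x - 1) % N
              elif step == 2: y = (y + 1) % N
              else:           y = (y - 1) % N

      print("cluster size:", int(grid.sum()))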

  16. Lensless Photoluminescence Hyperspectral Camera Employing Random Speckle Patterns.

    PubMed

    Žídek, Karel; Denk, Ondřej; Hlubuček, Jiří

    2017-11-10

    We propose and demonstrate a spectrally-resolved photoluminescence imaging setup based on the so-called single pixel camera - a technique of compressive sensing, which enables imaging by using a single-pixel photodetector. The method relies on encoding an image by a series of random patterns. In our approach, the image encoding was maintained via laser speckle patterns generated by an excitation laser beam scattered on a diffusor. By using a spectrometer as the single-pixel detector we attained a realization of a spectrally-resolved photoluminescence camera with unmatched simplicity. We present reconstructed hyperspectral images of several model scenes. We also discuss parameters affecting the imaging quality, such as the correlation degree of speckle patterns, pattern fineness, and number of datapoints. Finally, we compare the presented technique to hyperspectral imaging using sample scanning. The presented method enables photoluminescence imaging for a broad range of coherent excitation sources and detection spectral areas.
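
    The single-pixel principle can be reduced to a few lines: a sparse scene is measured as one detector value per random pattern, and the scene is recovered by a sparsity-promoting solver. The sketch below uses Gaussian random patterns as a stand-in for laser speckles and a plain ISTA (iterative soft-thresholding) loop; pattern statistics, sizes and the regularization weight are all assumptions, and the spectral dimension of the real instrument is omitted.

      import numpy as np

      rng = np.random.default_rng(6)
      n, m = 64, 32                                   # 8x8 scene, 32 pattern measurements (assumed)

      scene = np.zeros(n)
      scene[rng.choice(n, 4, replace=False)] = rng.uniform(0.5, 1.0, 4)   # a few luminescent spots

      A = rng.normal(size=(m, n))                     # random patterns (Gaussian stand-in for speckles)
      y = A @ scene                                   # one detector reading per projected pattern

      # ISTA for 0.5*||A x - y||^2 + lam*||x||_1
      x = np.zeros(n)
      L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
      lam = 0.01
      for _ in range(500):
          x = x - A.T @ (A @ x - y) / L
          x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)           # soft threshold

      print("true support     :", np.nonzero(scene)[0])
      print("recovered support:", np.nonzero(x > 0.1)[0])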

  17. Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.

    PubMed

    Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T

    2010-03-10

    Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the presence of a non-uniform turn angle distribution of move step-lengths within a flight and two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model rather than a simple persistent random walk correctly matches the super-diffusivity in the cell migration paths as indicated by simulations based on the BCRW model.

  18. Spatial filtering precedes motion detection.

    PubMed

    Morgan, M J

    1992-01-23

    When we perceive motion on a television or cinema screen, there must be some process that allows us to track moving objects over time: if not, the result would be a conflicting mass of motion signals in all directions. A possible mechanism, suggested by studies of motion displacement in spatially random patterns, is that low-level motion detectors have a limited spatial range, which ensures that they tend to be stimulated over time by the same object. This model predicts that the direction of displacement of random patterns cannot be detected reliably above a critical absolute displacement value (Dmax) that is independent of the size or density of elements in the display. It has been inferred that Dmax is a measure of the size of motion detectors in the visual pathway. Other studies, however, have shown that Dmax increases with element size, in which case the most likely interpretation is that Dmax depends on the probability of false matches between pattern elements following a displacement. These conflicting accounts are reconciled here by showing that Dmax is indeed determined by the spacing between the elements in the pattern, but only after fine detail has been removed by a physiological prefiltering stage: the filter required to explain the data has a similar size to the receptive field of neurons in the primate magnocellular pathway. The model explains why Dmax can be increased by removing high spatial frequencies from random patterns, and simplifies our view of early motion detection.

  19. Three-part joint modeling methods for complex functional data mixed with zero-and-one-inflated proportions and zero-inflated continuous outcomes with skewness.

    PubMed

    Li, Haocheng; Staudenmayer, John; Wang, Tianying; Keadle, Sarah Kozey; Carroll, Raymond J

    2018-02-20

    We take a functional data approach to longitudinal studies with complex bivariate outcomes. This work is motivated by data from a physical activity study that measured 2 responses over time in 5-minute intervals. One response is the proportion of time active in each interval, a continuous proportion with excess zeros and ones. The other response, energy expenditure rate in the interval, is a continuous variable with excess zeros and skewness. This outcome is complex because there are 3 possible activity patterns in each interval (inactive, partially active, and completely active), and those patterns, which are observed, induce both nonrandom and random associations between the responses. More specifically, the inactive pattern requires a zero value in both the proportion for active behavior and the energy expenditure rate; a partially active pattern means that the proportion of activity is strictly between zero and one and that the energy expenditure rate is greater than zero and likely to be moderate, and the completely active pattern means that the proportion of activity is exactly one, and the energy expenditure rate is greater than zero and likely to be higher. To address these challenges, we propose a 3-part functional data joint modeling approach. The first part is a continuation-ratio model for the 3 ordinal-valued activity patterns. The second part models the proportions when they are in the interval (0,1). The last component specifies the skewed continuous energy expenditure rate with Box-Cox transformations when it is greater than zero. In this 3-part model, the regression structures are specified as smooth curves measured at various time points with random effects that have a correlation structure. The smoothed random curves for each variable are summarized using a few important principal components, and the association of the 3 longitudinal components is modeled through the association of the principal component scores. The difficulties in handling the ordinal and proportional variables are addressed using a quasi-likelihood type approximation. We develop an efficient algorithm to fit the model that also involves the selection of the number of principal components. The method is applied to physical activity data and is evaluated empirically by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.

  20. RSAT: regulatory sequence analysis tools.

    PubMed

    Thomas-Chollier, Morgane; Sand, Olivier; Turatsinze, Jean-Valéry; Janky, Rekin's; Defrance, Matthieu; Vervisch, Eric; Brohée, Sylvain; van Helden, Jacques

    2008-07-01

    The regulatory sequence analysis tools (RSAT, http://rsat.ulb.ac.be/rsat/) is a software suite that integrates a wide collection of modular tools for the detection of cis-regulatory elements in genome sequences. The suite includes programs for sequence retrieval, pattern discovery, phylogenetic footprint detection, pattern matching, genome scanning and feature map drawing. Random controls can be performed with random gene selections or by generating random sequences according to a variety of background models (Bernoulli, Markov). Beyond the original word-based pattern-discovery tools (oligo-analysis and dyad-analysis), we recently added a battery of tools for matrix-based detection of cis-acting elements, with some original features (adaptive background models, Markov-chain estimation of P-values) that do not exist in other matrix-based scanning tools. The web server offers an intuitive interface, where each program can be accessed either separately or connected to the other tools. In addition, the tools are now available as web services, enabling their integration in programmatic workflows. Genomes are regularly updated from various genome repositories (NCBI and EnsEMBL) and 682 organisms are currently supported. Since 1998, the tools have been used by several hundreds of researchers from all over the world. Several predictions made with RSAT were validated experimentally and published.
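
    The background-model idea mentioned above (Bernoulli or Markov) is simply a generator of control sequences with realistic local composition. The snippet below is not RSAT code, only a minimal first-order Markov generator with an invented transition matrix, to show what such a random control looks like.

      import numpy as np

      rng = np.random.default_rng(7)
      bases = np.array(list("ACGT"))

      # First-order Markov background: transition[i, j] = P(next = bases[j] | current = bases[i]); values assumed.
      transition = np.array([[0.35, 0.15, 0.25, 0.25],
                             [0.30, 0.25, 0.05, 0.40],   # low C->G rate mimics CpG depletion
                             [0.25, 0.25, 0.25, 0.25],
                             [0.20, 0.25, 0.25, 0.30]])

      def random_sequence(length, start="A"):
          idx = {"A": 0, "C": 1, "G": 2, "T": 3}[start]
          seq = [start]
          for _ in range(length - 1):
              idx = rng.choice(4, p=transition[idx])
              seq.append(bases[idx])
          return "".join(seq)

      print(random_sequence(60))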

  1. Automatic pattern identification of rock moisture based on the Staff-RF model

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Tao, Kai; Jiang, Wei

    2018-04-01

    Studies on the moisture and damage state of rocks generally focus on the qualitative description and mechanical information of rocks. This method is not applicable to the real-time safety monitoring of rock masses. In this study, a musical staff computing model is used to quantify the acoustic emission (AE) signals of rocks with different moisture patterns. Then, the random forest (RF) method is adopted to form the staff-RF model for the real-time pattern identification of rock moisture. The entire process requires only the computing information of the AE signal and does not require the mechanical conditions of rocks.
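
    The random-forest step of such a pipeline is straightforward with off-the-shelf tools; the sketch below trains a classifier on made-up acoustic-emission feature vectors standing in for the staff-quantified signals, and is not the Staff-RF implementation itself.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(8)

      # Hypothetical AE feature vectors (6 features per sample) for three moisture patterns.
      X = np.vstack([rng.normal(loc=m, scale=1.0, size=(100, 6)) for m in (0.0, 1.5, 3.0)])
      y = np.repeat(["dry", "moist", "saturated"], 100)

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
      print("held-out accuracy:", clf.score(X_test, y_test))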

  2. SHER: a colored petri net based random mobility model for wireless communications.

    PubMed

    Khan, Naeem Akhtar; Ahmad, Farooq; Khan, Sher Afzal

    2015-01-01

    In wireless network research, simulation is the most imperative technique for investigating and validating the network's behavior. Wireless networks typically consist of mobile hosts; therefore, the degree of validation is influenced by the underlying mobility model, and synthetic models are implemented in simulators because real life traces are not widely available. In wireless communications, mobility is an integral part, and the key role of a mobility model is to mimic real-life traveling patterns for study. The performance of routing protocols and mobility management strategies, e.g. paging, registration and handoff, is highly dependent on the selected mobility model. In this paper, we devise and evaluate the Show Home and Exclusive Regions (SHER), a novel two-dimensional (2-D) Colored Petri net (CPN) based formal random mobility model, which exhibits sociological behavior of a user. The model captures hotspots where a user frequently visits and spends time. Our solution eliminates six key issues of the random mobility models, i.e., sudden stops, memoryless movements, border effect, temporal dependency of velocity, pause time dependency, and speed decay in a single model. The proposed model is able to predict the future location of a mobile user and ultimately improves the performance of wireless communication networks. The model follows a uniform nodal distribution and is a mini simulator, which exhibits interesting mobility patterns. The model is also helpful to those who are not familiar with the formal modeling, and users can extract meaningful information with a single mouse-click. It is noteworthy that capturing dynamic mobility patterns through CPN is the most challenging and virulent activity of the presented research. Statistical and reachability analysis techniques are presented to elucidate and validate the performance of our proposed mobility model. The state space methods allow us to algorithmically derive the system behavior and rectify the errors of our proposed model.

  3. SHER: A Colored Petri Net Based Random Mobility Model for Wireless Communications

    PubMed Central

    Khan, Naeem Akhtar; Ahmad, Farooq; Khan, Sher Afzal

    2015-01-01

    In wireless network research, simulation is the most imperative technique for investigating and validating the network's behavior. Wireless networks typically consist of mobile hosts; therefore, the degree of validation is influenced by the underlying mobility model, and synthetic models are implemented in simulators because real life traces are not widely available. In wireless communications, mobility is an integral part, and the key role of a mobility model is to mimic real-life traveling patterns for study. The performance of routing protocols and mobility management strategies, e.g. paging, registration and handoff, is highly dependent on the selected mobility model. In this paper, we devise and evaluate the Show Home and Exclusive Regions (SHER), a novel two-dimensional (2-D) Colored Petri net (CPN) based formal random mobility model, which exhibits sociological behavior of a user. The model captures hotspots where a user frequently visits and spends time. Our solution eliminates six key issues of the random mobility models, i.e., sudden stops, memoryless movements, border effect, temporal dependency of velocity, pause time dependency, and speed decay in a single model. The proposed model is able to predict the future location of a mobile user and ultimately improves the performance of wireless communication networks. The model follows a uniform nodal distribution and is a mini simulator, which exhibits interesting mobility patterns. The model is also helpful to those who are not familiar with the formal modeling, and users can extract meaningful information with a single mouse-click. It is noteworthy that capturing dynamic mobility patterns through CPN is the most challenging and virulent activity of the presented research. Statistical and reachability analysis techniques are presented to elucidate and validate the performance of our proposed mobility model. The state space methods allow us to algorithmically derive the system behavior and rectify the errors of our proposed model. PMID:26267860

  4. A Comparative Study of Random Patterns for Digital Image Correlation

    NASA Astrophysics Data System (ADS)

    Stoilov, G.; Kavardzhikov, V.; Pashkouleva, D.

    2012-06-01

    Digital Image Correlation (DIC) is a computer-based image analysis technique utilizing random patterns, which finds applications in experimental mechanics of solids and structures. In this paper a comparative study of three simulated random patterns is performed. One of them is generated according to a new algorithm, introduced by the authors. A criterion for the quantitative evaluation of random patterns, based on the calculation of their autocorrelation functions, is introduced. The patterns' deformations are simulated numerically and realized experimentally. The displacements are measured by using the DIC method. Tensile tests are performed after printing the generated random patterns on surfaces of standard iron sheet specimens. It is found that the newly designed random pattern retains relatively good quality up to 20% deformation.
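
    A common way to compute the autocorrelation function used in such a criterion is the Wiener-Khinchin route through the FFT; the sketch below applies it to a synthetic binary pattern and reads off a crude width of the central correlation peak. The synthetic pattern and the 0.5 threshold are assumptions, not the authors' criterion.

      import numpy as np

      rng = np.random.default_rng(9)
      pattern = (rng.random((256, 256)) < 0.5).astype(float)     # synthetic binary speckle pattern

      def autocorrelation(img):
          # normalized autocorrelation via the Wiener-Khinchin theorem
          f = np.fft.fft2(img - img.mean())
          acf = np.fft.fftshift(np.fft.ifft2(np.abs(f) ** 2).real)
          return acf / acf.max()

      acf = autocorrelation(pattern)
      cy, cx = np.array(acf.shape) // 2
      profile = acf[cy, cx:]                                     # radial cut from the central peak
      print("correlation radius (first pixel below 0.5):", int(np.argmax(profile < 0.5)))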

  5. Multilattice sampling strategies for region of interest dynamic MRI.

    PubMed

    Rilling, Gabriel; Tao, Yuehui; Marshall, Ian; Davies, Mike E

    2013-08-01

    A multilattice sampling approach is proposed for dynamic MRI with Cartesian trajectories. It relies on the use of sampling patterns composed of several different lattices and exploits an image model where only some parts of the image are dynamic, whereas the rest is assumed static. Given the parameters of such an image model, the methodology followed for the design of a multilattice sampling pattern adapted to the model is described. The multi-lattice approach is compared to single-lattice sampling, as used by traditional acceleration methods such as UNFOLD (UNaliasing by Fourier-Encoding the Overlaps using the temporal Dimension) or k-t BLAST, and random sampling used by modern compressed sensing-based methods. On the considered image model, it allows more flexibility and higher accelerations than lattice sampling and better performance than random sampling. The method is illustrated on a phase-contrast carotid blood velocity mapping MR experiment. Combining the multilattice approach with the KEYHOLE technique allows up to 12× acceleration factors. Simulation and in vivo undersampling results validate the method. Compared to lattice and random sampling, multilattice sampling provides significant gains at high acceleration factors. © 2012 Wiley Periodicals, Inc.

  6. Failure to suppress low-frequency neuronal oscillatory activity underlies the reduced effectiveness of random patterns of deep brain stimulation.

    PubMed

    McConnell, George C; So, Rosa Q; Grill, Warren M

    2016-06-01

    Subthalamic nucleus (STN) deep brain stimulation (DBS) is an established treatment for the motor symptoms of Parkinson's disease (PD). However, the mechanisms of action of DBS are unknown. Random temporal patterns of DBS are less effective than regular DBS, but the neuronal basis for this dependence on temporal pattern of stimulation is unclear. Using a rat model of PD, we quantified the changes in behavior and single-unit activity in globus pallidus externa and substantia nigra pars reticulata during high-frequency STN DBS with different degrees of irregularity. Although all stimulus trains had the same average rate, 130-Hz regular DBS more effectively reversed motor symptoms, including circling and akinesia, than 130-Hz irregular DBS. A mixture of excitatory and inhibitory neuronal responses was present during all stimulation patterns, and mean firing rate did not change during DBS. Low-frequency (7-10 Hz) oscillations of single-unit firing times present in hemiparkinsonian rats were suppressed by regular DBS, and neuronal firing patterns were entrained to 130 Hz. Irregular patterns of DBS less effectively suppressed 7- to 10-Hz oscillations and did not regularize firing patterns. Random DBS resulted in a larger proportion of neuron pairs with increased coherence at 7-10 Hz compared with regular 130-Hz DBS, which suggested that long pauses (interpulse interval >50 ms) during random DBS facilitated abnormal low-frequency oscillations in the basal ganglia. These results suggest that the efficacy of high-frequency DBS stems from its ability to regularize patterns of neuronal firing and thereby suppress abnormal oscillatory neural activity within the basal ganglia. Copyright © 2016 the American Physiological Society.

  7. Asymptotic laws for random knot diagrams

    NASA Astrophysics Data System (ADS)

    Chapman, Harrison

    2017-06-01

    We study random knotting by considering knot and link diagrams as decorated, (rooted) topological maps on spheres and pulling them uniformly from among sets of a given number of vertices n, as first established in recent work with Cantarella and Mastin. The knot diagram model is an exciting new model which captures both the random geometry of space curve models of knotting as well as the ease of computing invariants from diagrams. We prove that unknot diagrams are asymptotically exponentially rare, an analogue of Sumners and Whittington’s landmark result for self-avoiding polygons. Our proof uses the same key idea: we first show that knot diagrams obey a pattern theorem, which describes their fractal structure. We examine how quickly this behavior occurs in practice. As a consequence, almost all diagrams are asymmetric, simplifying sampling from this model. We conclude with experimental data on knotting in this model. This model of random knotting is similar to those studied by Diao et al, and Dunfield et al.

  8. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which makes it convenient for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
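
    The defining ingredient, a bi-random variable, is just a normal variable whose mean is itself normally distributed; sampling it and checking a chance constraint by Monte Carlo takes a few lines. All numbers below are invented, and the equilibrium programming algorithm itself is not reproduced.

      import numpy as np

      rng = np.random.default_rng(10)

      # Bi-random CO2 emission cap: cap ~ Normal(mu, 2^2) with mu ~ Normal(100, 3^2); values assumed.
      mu = rng.normal(100.0, 3.0, 100_000)
      cap = rng.normal(mu, 2.0)

      allocation = 95.0                              # candidate CO2-allocation decision
      reliability = np.mean(allocation <= cap)       # P(allocation stays within the random cap)
      print("chance constraint satisfied with probability", round(reliability, 3))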

  9. Simulated near-field mapping of ripple pattern supported metal nanoparticles arrays for SERS optimization

    NASA Astrophysics Data System (ADS)

    Arya, Mahima; Bhatnagar, Mukul; Ranjan, Mukesh; Mukherjee, Subroto; Nath, Rabinder; Mitra, Anirban

    2017-11-01

    An analytical model has been developed using a modified Yamaguchi model along with a wavelength-dependent plasmon line-width correction. The model has been used to calculate the near-field response of randomly distributed nanoparticles on a plane surface and of elongated and spherical silver nanoparticle arrays supported on ion-beam-produced ripple-patterned templates. The calculated near-field mapping for elongated nanoparticle arrays on the ripple-patterned surface shows the maximum number of hot-spots, with higher near-field enhancement (NFE), compared to the spherical nanoparticle arrays and the randomly distributed nanoparticles on the plane surface. The results from the simulations show a similar trend for the NFE when compared to the far-field reflection spectra. The nature of the wavelength-dependent NFE is also found to be in agreement with the observed experimental results from surface-enhanced Raman spectroscopy (SERS). The calculated and the measured optical responses unambiguously reveal the importance of interparticle gap and ordering, where a high-intensity Raman signal is obtained for the ordered elongated nanoparticle arrays as against the non-ordered and aligned configurations of spherical nanoparticles on the rippled surface.

  10. A random forest approach for predicting the presence of Echinococcus multilocularis intermediate host Ochotona spp. presence in relation to landscape characteristics in western China

    PubMed Central

    Marston, Christopher G.; Danson, F. Mark; Armitage, Richard P.; Giraudoux, Patrick; Pleydell, David R.J.; Wang, Qian; Qui, Jiamin; Craig, Philip S.

    2014-01-01

    Understanding distribution patterns of hosts implicated in the transmission of zoonotic disease remains a key goal of parasitology. Here, random forests are employed to model spatial patterns of the presence of the plateau pika (Ochotona spp.) small mammal intermediate host for the parasitic tapeworm Echinococcus multilocularis which is responsible for a significant burden of human zoonoses in western China. Landsat ETM+ satellite imagery and digital elevation model data were utilized to generate quantified measures of environmental characteristics across a study area in Sichuan Province, China. Land cover maps were generated identifying the distribution of specific land cover types, with landscape metrics employed to describe the spatial organisation of land cover patches. Random forests were used to model spatial patterns of Ochotona spp. presence, enabling the relative importance of the environmental characteristics in relation to Ochotona spp. presence to be ranked. An index of habitat aggregation was identified as the most important variable in influencing Ochotona spp. presence, with area of degraded grassland the most important land cover class variable. 71% of the variance in Ochotona spp. presence was explained, with a 90.98% accuracy rate as determined by ‘out-of-bag’ error assessment. Identification of the environmental characteristics influencing Ochotona spp. presence enables us to better understand distribution patterns of hosts implicated in the transmission of Em. The predictive mapping of this Em host enables the identification of human populations at increased risk of infection, enabling preventative strategies to be adopted. PMID:25386042

  11. Surface modeling method for aircraft engine blades by using speckle patterns based on the virtual stereo vision system

    NASA Astrophysics Data System (ADS)

    Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang

    2018-03-01

    A blade is one of the most important components of an aircraft engine. Due to the high manufacturing costs, methods for repairing damaged blades are indispensable. In order to obtain a surface model of the blades, this paper proposes a modeling method by using speckle patterns based on the virtual stereo vision system. Firstly, blades are sprayed evenly to create random speckle patterns, and point clouds of the blade surfaces are calculated from these patterns using the virtual stereo vision system. Secondly, boundary points are obtained with step lengths varied according to curvature and are fitted with a cubic B-spline curve to obtain a blade surface envelope. Finally, the surface model of the blades is established from the envelope curves and the point clouds. Experimental results show that the surface model of aircraft engine blades is fair and accurate.

  12. Stochastic modeling of a serial killer

    PubMed Central

    Simkin, M.V.; Roychowdhury, V.P.

    2014-01-01

    We analyze the time pattern of the activity of a serial killer, who during twelve years had murdered 53 people. The plot of the cumulative number of murders as a function of time is of “Devil’s staircase” type. The distribution of the intervals between murders (step length) follows a power law with the exponent of 1.4. We propose a model according to which the serial killer commits murders when neuronal excitation in his brain exceeds certain threshold. We model this neural activity as a branching process, which in turn is approximated by a random walk. As the distribution of the random walk return times is a power law with the exponent 1.5, the distribution of the inter-murder intervals is thus explained. We illustrate analytical results by numerical simulation. Time pattern activity data from two other serial killers further substantiate our analysis. PMID:24721476
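
    The 3/2 law invoked here is easy to check numerically: simulate first-return times of a simple +/-1 random walk and fit the tail of their survival function, which should fall off roughly as t^(-1/2) (i.e. a t^(-3/2) density). The thresholded neuronal-excitation part of the authors' model is omitted; this only verifies the random-walk ingredient.

      import numpy as np

      rng = np.random.default_rng(11)

      def first_return_time(max_steps=100_000):
          # steps until a +/-1 random walk first returns to the origin (None if truncated)
          pos = 0
          for t in range(1, max_steps + 1):
              pos += 1 if rng.random() < 0.5 else -1
              if pos == 0:
                  return t
          return None

      times = np.array([t for t in (first_return_time() for _ in range(5000)) if t is not None])

      ts = np.array([10, 30, 100, 300, 1000])
      surv = np.array([(times > t).mean() for t in ts])
      slope = np.polyfit(np.log(ts), np.log(surv), 1)[0]
      print("fitted tail exponent of P(T > t):", round(slope, 2), "(theory: -0.5)")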

  13. Stochastic modeling of a serial killer.

    PubMed

    Simkin, M V; Roychowdhury, V P

    2014-08-21

    We analyze the time pattern of the activity of a serial killer, who during 12 years had murdered 53 people. The plot of the cumulative number of murders as a function of time is of "Devil's staircase" type. The distribution of the intervals between murders (step length) follows a power law with the exponent of 1.4. We propose a model according to which the serial killer commits murders when neuronal excitation in his brain exceeds certain threshold. We model this neural activity as a branching process, which in turn is approximated by a random walk. As the distribution of the random walk return times is a power law with the exponent 1.5, the distribution of the inter-murder intervals is thus explained. We illustrate analytical results by numerical simulation. Time pattern activity data from two other serial killers further substantiate our analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. On a phase diagram for random neural networks with embedded spike timing dependent plasticity.

    PubMed

    Turova, Tatyana S; Villa, Alessandro E P

    2007-01-01

    This paper presents an original mathematical framework based on graph theory, which is a first attempt to investigate the dynamics of a model of neural networks with embedded spike timing dependent plasticity. The neurons correspond to integrate-and-fire units located at the vertices of a finite subset of a 2D lattice. There are two types of vertices, corresponding to the inhibitory and the excitatory neurons. The edges are directed and labelled by the discrete values of the synaptic strength. We assume that there is an initial firing pattern corresponding to a subset of units that generate a spike. The number of externally activated vertices is a small fraction of the entire network. The model presented here describes how such a pattern propagates throughout the network as a random walk on a graph. Several results are compared with computational simulations and new data are presented for identifying critical parameters of the model.

  15. Effects of 3 dimensional crystal geometry and orientation on 1D and 2D time-scale determinations of magmatic processes using olivine and orthopyroxene

    NASA Astrophysics Data System (ADS)

    Shea, Thomas; Krimer, Daniel; Costa, Fidel; Hammer, Julia

    2014-05-01

    One of the achievements in recent years in volcanology is the determination of time-scales of magmatic processes via diffusion in minerals and its addition to the petrologists' and volcanologists' toolbox. The method typically requires one-dimensional modeling of randomly cut crystals from two-dimensional thin sections. Here we address the question of whether using 1D (traverse) or 2D (surface) datasets extracted from randomly cut 3D crystals introduces a bias or dispersion in the estimated time-scales, and how this error can be reduced or eliminated. Computational simulations were performed using a concentration-dependent, finite-difference solution to the diffusion equation in 3D. The starting numerical models involved simple geometries (spheres, parallelepipeds), Mg/Fe zoning patterns (either normal or reverse), and isotropic diffusion coefficients. Subsequent models progressively incorporated more complexity: 3D olivines possessing representative polyhedral morphologies, diffusion anisotropy along the different crystallographic axes, and more intricate core-rim zoning patterns. Sections and profiles used to compare 1, 2 and 3D diffusion models were selected to be (1) parallel to the crystal axes, (2) randomly oriented but passing through the olivine center, or (3) randomly oriented and sectioned. Results show that time-scales estimated on randomly cut traverses (1D) or surfaces (2D) can be widely distributed around the actual durations of 3D diffusion (~0.2 to 10 times the true diffusion time). The magnitude of the over- or underestimation of duration is a complex combination of the geometry of the crystal, the zoning pattern, the orientation of the cuts with respect to the crystallographic axes, and the degree of diffusion anisotropy. Errors on estimated time-scales retrieved from such models may thus be significant. Drastic reductions in the uncertainty of calculated diffusion times can be obtained by following some simple guidelines during the course of data collection (i.e. selection of crystals and concentration profiles, acquisition of crystallographic orientation data), thus allowing derivation of robust time-scales.

  16. Looking for age-related growth decline in natural forests: unexpected biomass patterns from tree rings and simulated mortality

    USGS Publications Warehouse

    Foster, Jane R.; D'Amato, Anthony W.; Bradford, John B.

    2014-01-01

    Forest biomass growth is almost universally assumed to peak early in stand development, near canopy closure, after which it will plateau or decline. The chronosequence and plot remeasurement approaches used to establish the decline pattern suffer from limitations and coarse temporal detail. We combined annual tree ring measurements and mortality models to address two questions: first, how do assumptions about tree growth and mortality influence reconstructions of biomass growth? Second, under what circumstances does biomass production follow the model that peaks early, then declines? We integrated three stochastic mortality models with a census tree-ring data set from eight temperate forest types to reconstruct stand-level biomass increments (in Minnesota, USA). We compared growth patterns among mortality models, forest types and stands. Timing of peak biomass growth varied significantly among mortality models, peaking 20–30 years earlier when mortality was random with respect to tree growth and size, than when mortality favored slow-growing individuals. Random or u-shaped mortality (highest in small or large trees) produced peak growth 25–30 % higher than the surviving tree sample alone. Growth trends for even-aged, monospecific Pinus banksiana or Acer saccharum forests were similar to the early peak and decline expectation. However, we observed continually increasing biomass growth in older, low-productivity forests of Quercus rubra, Fraxinus nigra, and Thuja occidentalis. Tree-ring reconstructions estimated annual changes in live biomass growth and identified more diverse development patterns than previous methods. These detailed, long-term patterns of biomass development are crucial for detecting recent growth responses to global change and modeling future forest dynamics.

  17. Universality in chaos: Lyapunov spectrum and random matrix theory.

    PubMed

    Hanada, Masanori; Shimada, Hidehiko; Tezuka, Masaki

    2018-02-01

    We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by random matrix theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass-deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t=0, while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.
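
    As a toy illustration of the closing remark, the Python sketch below computes the Lyapunov spectrum of a product of i.i.d. Gaussian random matrices by repeated QR re-orthonormalization and summarizes its level-spacing statistics with the mean consecutive-spacing ratio. The matrix size, number of factors and reference values are assumptions for illustration; this is not the stringy matrix model studied in the record.

      import numpy as np

      rng = np.random.default_rng(0)
      N, T = 50, 2000                       # matrix size and number of random factors (assumed)

      # Lyapunov spectrum of a product of i.i.d. Gaussian random matrices,
      # accumulated with repeated QR re-orthonormalization.
      Q = np.eye(N)
      log_r = np.zeros(N)
      for _ in range(T):
          M = rng.normal(size=(N, N)) / np.sqrt(N)
          Q, R = np.linalg.qr(M @ Q)
          log_r += np.log(np.abs(np.diag(R)))
      lyap = np.sort(log_r / T)             # finite-"time" Lyapunov exponents, ascending

      # Nearest-neighbour spacing statistics of the spectrum, summarized by the mean
      # ratio of consecutive spacings; GOE-like correlations give roughly 0.53, Poisson roughly 0.39.
      s = np.diff(lyap)
      r = np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])
      print("largest exponent: %.3f" % lyap[-1])
      print("mean consecutive-spacing ratio <r>: %.3f" % r.mean())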

  18. Universality in chaos: Lyapunov spectrum and random matrix theory

    NASA Astrophysics Data System (ADS)

    Hanada, Masanori; Shimada, Hidehiko; Tezuka, Masaki

    2018-02-01

    We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by random matrix theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass-deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t = 0, while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.

  19. Universal self-similarity of propagating populations

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2010-07-01

    This paper explores the universal self-similarity of propagating populations. The following general propagation model is considered: particles are randomly emitted from the origin of a d -dimensional Euclidean space and propagate randomly and independently of each other in space; all particles share a statistically common—yet arbitrary—motion pattern; each particle has its own random propagation parameters—emission epoch, motion frequency, and motion amplitude. The universally self-similar statistics of the particles’ displacements and first passage times (FPTs) are analyzed: statistics which are invariant with respect to the details of the displacement and FPT measurements and with respect to the particles’ underlying motion pattern. Analysis concludes that the universally self-similar statistics are governed by Poisson processes with power-law intensities and by the Fréchet and Weibull extreme-value laws.

  20. Universal self-similarity of propagating populations.

    PubMed

    Eliazar, Iddo; Klafter, Joseph

    2010-07-01

    This paper explores the universal self-similarity of propagating populations. The following general propagation model is considered: particles are randomly emitted from the origin of a d-dimensional Euclidean space and propagate randomly and independently of each other in space; all particles share a statistically common--yet arbitrary--motion pattern; each particle has its own random propagation parameters--emission epoch, motion frequency, and motion amplitude. The universally self-similar statistics of the particles' displacements and first passage times (FPTs) are analyzed: statistics which are invariant with respect to the details of the displacement and FPT measurements and with respect to the particles' underlying motion pattern. Analysis concludes that the universally self-similar statistics are governed by Poisson processes with power-law intensities and by the Fréchet and Weibull extreme-value laws.

  1. Genetic analysis of partial egg production records in Japanese quail using random regression models.

    PubMed

    Abou Khadiga, G; Mahmoud, B Y F; Farahat, G S; Emam, A M; El-Full, E A

    2017-08-01

    The main objectives of this study were to detect the most appropriate random regression model (RRM) to fit the data of monthly egg production in 2 lines (selected and control) of Japanese quail and to test the consistency of different criteria of model choice. Data from 1,200 female Japanese quails for the first 5 months of egg production from 4 consecutive generations of an egg line selected for egg production in the first month (EP1) were analyzed. Eight RRMs with different orders of Legendre polynomials were compared to determine the proper model for analysis. All criteria of model choice suggested that the adequate model included the second-order Legendre polynomials for fixed effects, and the third-order for additive genetic effects and permanent environmental effects. Predictive ability of the best model was the highest among all models (ρ = 0.987). According to the best model fitted to the data, estimates of heritability were relatively low to moderate (0.10 to 0.17) and showed a descending pattern from the first to the fifth month of production. A similar pattern was observed for permanent environmental effects, with greater estimates in the first (0.36) and second (0.23) months of production than heritability estimates. Genetic correlations between separate production periods were higher (0.18 to 0.93) than their phenotypic counterparts (0.15 to 0.87). The superiority of the selected line over the control was observed through significant (P < 0.05) linear contrast estimates. Significant (P < 0.05) estimates of covariate effect (age at sexual maturity) showed a decreasing pattern, with greater impact on egg production at earlier ages (first and second months) than at later ones. A methodology based on random regression animal models can be recommended for genetic evaluation of egg production in Japanese quail. © 2017 Poultry Science Association Inc.
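
    As a small illustration of the covariates such models are built from, the Python sketch below constructs the Legendre polynomial basis over the five monthly records and fits an order-2 fixed curve by ordinary least squares. The standardization of months to [-1, 1], the toy egg counts and the plain least-squares fit are assumptions for illustration and stand in for the full random regression animal model.

      import numpy as np
      from numpy.polynomial import legendre

      months = np.arange(1, 6)                                                 # first five months of lay
      x = 2.0 * (months - months.min()) / (months.max() - months.min()) - 1.0  # scale to [-1, 1]

      def legendre_covariates(x, order):
          """Columns are Legendre polynomials P_0 .. P_order evaluated at the scaled ages."""
          return np.column_stack([legendre.legval(x, np.eye(order + 1)[k])
                                  for k in range(order + 1)])

      Phi = legendre_covariates(x, 2)          # order-2 basis, as chosen for the fixed curve

      # Toy monthly egg counts for a single bird (assumed numbers, for illustration only).
      y = np.array([18.0, 24.0, 23.0, 21.0, 19.0])
      beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
      print("fixed-curve coefficients:", np.round(beta, 3))
      print("fitted monthly means   :", np.round(Phi @ beta, 2))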

  2. Randomizing bipartite networks: the case of the World Trade Web.

    PubMed

    Saracco, Fabio; Di Clemente, Riccardo; Gabrielli, Andrea; Squartini, Tiziano

    2015-06-01

    Within the last fifteen years, network theory has been successfully applied both to natural sciences and to socioeconomic disciplines. In particular, bipartite networks have been recognized to provide a particularly insightful representation of many systems, ranging from mutualistic networks in ecology to trade networks in economy, whence the need of a pattern detection-oriented analysis in order to identify statistically-significant structural properties. Such an analysis rests upon the definition of suitable null models, i.e. upon the choice of the portion of network structure to be preserved while randomizing everything else. However, quite surprisingly, little work has been done so far to define null models for real bipartite networks. The aim of the present work is to fill this gap, extending a recently-proposed method to randomize monopartite networks to bipartite networks. While the proposed formalism is perfectly general, we apply our method to the binary, undirected, bipartite representation of the World Trade Web, comparing the observed values of a number of structural quantities of interest with the expected ones, calculated via our randomization procedure. Interestingly, the behavior of the World Trade Web in this new representation is strongly different from the monopartite analogue, showing highly non-trivial patterns of self-organization.
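
    The null model used in the record is an entropy-based one; as a simpler, commonly used alternative for illustration, the Python sketch below randomizes a small binary biadjacency matrix with degree-preserving checkerboard swaps and verifies that all row and column degrees are conserved. The matrix size and density are assumptions, and this swap procedure is not the maximum-entropy method of the study.

      import numpy as np

      rng = np.random.default_rng(1)

      def swap_randomize(B, n_swaps=20000):
          """Degree-preserving randomization of a binary biadjacency matrix
          (rows = one node class, columns = the other) via 2x2 checkerboard swaps."""
          B = B.copy()
          n_rows, n_cols = B.shape
          for _ in range(n_swaps):
              r1, r2 = rng.choice(n_rows, size=2, replace=False)
              c1, c2 = rng.choice(n_cols, size=2, replace=False)
              # a swap is allowed only on a checkerboard submatrix, so all degrees stay fixed
              if B[r1, c1] == 1 and B[r2, c2] == 1 and B[r1, c2] == 0 and B[r2, c1] == 0:
                  B[r1, c1] = B[r2, c2] = 0
                  B[r1, c2] = B[r2, c1] = 1
          return B

      B = (rng.random((20, 30)) < 0.2).astype(int)     # toy binary bipartite network
      B_rand = swap_randomize(B)
      print("row degrees preserved:", bool((B.sum(axis=1) == B_rand.sum(axis=1)).all()))
      print("column degrees preserved:", bool((B.sum(axis=0) == B_rand.sum(axis=0)).all()))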

  3. Using Maximum Entropy to Find Patterns in Genomes

    NASA Astrophysics Data System (ADS)

    Liu, Sophia; Hockenberry, Adam; Lancichinetti, Andrea; Jewett, Michael; Amaral, Luis

    The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. To accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. This approach can also easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs, which will lead to a better understanding of biological processes. National Institute of General Medical Science, Northwestern University Presidential Fellowship, National Science Foundation, David and Lucile Packard Foundation, Camille Dreyfus Teacher Scholar Award.
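
    As a highly simplified illustration of the constrained-sampling idea (not the published maximum-entropy tool), the Python sketch below samples synonymous codons for a fixed amino acid sequence with a Boltzmann weight on codon GC content; the codon table subset, the toy protein and the bias parameter are assumptions for illustration.

      import math
      import random

      random.seed(0)

      # Minimal synonymous-codon table covering only the amino acids used in the toy sequence.
      CODONS = {
          "M": ["ATG"], "K": ["AAA", "AAG"], "F": ["TTT", "TTC"],
          "G": ["GGT", "GGC", "GGA", "GGG"],
          "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
      }

      def gc_count(seq):
          return sum(base in "GC" for base in seq)

      def random_cds(protein, beta):
          """Sample one coding sequence for `protein`; larger beta biases toward GC-rich codons.
          A simplified Boltzmann-weighted sampler, not the published maximum-entropy method."""
          out = []
          for aa in protein:
              codons = CODONS[aa]
              weights = [math.exp(beta * gc_count(c)) for c in codons]
              out.append(random.choices(codons, weights=weights, k=1)[0])
          return "".join(out)

      protein = "MKGFLLK"                               # toy amino acid sequence (assumed)
      seq = random_cds(protein, beta=1.0)
      print(seq, "GC fraction = %.2f" % (gc_count(seq) / len(seq)))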

  4. Extrapolation of the DNA fragment-size distribution after high-dose irradiation to predict effects at low doses

    NASA Technical Reports Server (NTRS)

    Ponomarev, A. L.; Cucinotta, F. A.; Sachs, R. K.; Brenner, D. J.; Peterson, L. E.

    2001-01-01

    The patterns of DSBs induced in the genome are different for sparsely and densely ionizing radiations: In the former case, the patterns are well described by a random-breakage model; in the latter, a more sophisticated tool is needed. We used a Monte Carlo algorithm with a random-walk geometry of chromatin, and a track structure defined by the radial distribution of energy deposition from an incident ion, to fit the PFGE data for fragment-size distribution after high-dose irradiation. These fits determined the unknown parameters of the model, enabling the extrapolation of data for high-dose irradiation to the low doses that are relevant for NASA space radiation research. The randomly-located-clusters formalism was used to speed the simulations. It was shown that only one adjustable parameter, Q, the track efficiency parameter, was necessary to predict DNA fragment sizes for wide ranges of doses. This parameter was determined for a variety of radiations and LETs and was used to predict the DSB patterns at the HPRT locus of the human X chromosome after low-dose irradiation. It was found that high-LET radiation would be more likely than low-LET radiation to induce additional DSBs within the HPRT gene if this gene already contained one DSB.
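
    For the sparsely ionizing case mentioned above, the random-breakage model has a particularly simple simulation: breaks are placed uniformly at random along the genome and fragment lengths are read off between them. The Python sketch below does exactly this for a toy genomic region; the region length, mean break number and cell count are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(2)
      genome_mbp = 100.0        # length of the (toy) genomic region, in Mbp
      mean_breaks = 40          # mean number of DSBs per cell at the simulated dose
      n_cells = 5000

      fragments = []
      for _ in range(n_cells):
          k = rng.poisson(mean_breaks)                            # Poisson number of breaks in this cell
          breaks = np.sort(rng.uniform(0.0, genome_mbp, size=k))  # random-breakage: uniform positions
          edges = np.concatenate(([0.0], breaks, [genome_mbp]))
          fragments.extend(np.diff(edges))                        # fragment lengths between breaks

      fragments = np.array(fragments)
      # Under random breakage the fragment-size distribution is close to exponential.
      hist, _ = np.histogram(fragments, bins=20, range=(0.0, 20.0), density=True)
      print("mean fragment size: %.2f Mbp" % fragments.mean())
      print("density in the first five 1-Mbp bins:", np.round(hist[:5], 3))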

  5. Spatial point pattern analysis of human settlements and geographical associations in eastern coastal China - a case study.

    PubMed

    Zhang, Zhonghao; Xiao, Rui; Shortridge, Ashton; Wu, Jiaping

    2014-03-10

    Understanding the spatial point pattern of human settlements and their geographical associations is important for understanding the drivers of land use and land cover change and the relationship between environmental and ecological processes on one hand and cultures and lifestyles on the other. In this study, a Geographic Information System (GIS) approach, Ripley's K function and Monte Carlo simulation were used to investigate human settlement point patterns. Remotely sensed tools and regression models were employed to identify the effects of geographical determinants on settlement locations in the Wen-Tai region of eastern coastal China. Results indicated that human settlements displayed regular-random-cluster patterns from small to large scales. Most settlements located on the coastal plain presented either regular or random patterns, while those in hilly areas exhibited a clustered pattern. Moreover, clustered settlements were preferentially located at higher elevations with steeper slopes and south facing aspects than random or regular settlements. Regression showed that influences of topographic factors (elevation, slope and aspect) on settlement locations were stronger across hilly regions. This study demonstrated a new approach to analyzing the spatial patterns of human settlements from a wide geographical perspective. We argue that the spatial point patterns of settlements, in addition to the characteristics of human settlements, such as area, density and shape, should be taken into consideration in the future, and land planners and decision makers should pay more attention to city planning and management. Conceptual and methodological bridges linking settlement patterns to regional and site-specific geographical characteristics will be a key to human settlement studies and planning.
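
    As a minimal illustration of the point-pattern test described above, the Python sketch below computes a naive Ripley's K for a toy clustered point set in a unit square and compares it with a Monte Carlo envelope generated under complete spatial randomness; the point pattern, window, number of simulations and lack of edge correction are all simplifying assumptions.

      import numpy as np

      rng = np.random.default_rng(3)

      def ripley_k(points, r, area):
          """Naive Ripley's K for points in a rectangular window (no edge correction)."""
          n = len(points)
          d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
          np.fill_diagonal(d, np.inf)
          return np.array([area * (d < ri).sum() / (n * (n - 1)) for ri in r])

      # Toy "settlement" pattern: clustered points around five centers in a unit square.
      centers = rng.uniform(0, 1, size=(5, 2))
      pts = np.vstack([c + 0.03 * rng.normal(size=(20, 2)) for c in centers])
      r = np.linspace(0.01, 0.25, 25)
      k_obs = ripley_k(pts, r, area=1.0)

      # Monte Carlo envelope under complete spatial randomness (CSR).
      sims = np.array([ripley_k(rng.uniform(0, 1, size=pts.shape), r, 1.0) for _ in range(99)])
      lo, hi = np.percentile(sims, [2.5, 97.5], axis=0)
      clustered = k_obs > hi      # K above the CSR envelope indicates clustering at that scale
      print("scales flagged as clustered:", np.round(r[clustered], 3))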

  6. Early stage hot spot analysis through standard cell based random pattern generation

    NASA Astrophysics Data System (ADS)

    Jeon, Joong-Won; Song, Jaewan; Kim, Jeong-Lim; Park, Seongyul; Yang, Seung-Hune; Lee, Sooryong; Kang, Hokyu; Madkour, Kareem; ElManhawy, Wael; Lee, SeungJo; Kwan, Joe

    2017-04-01

    Due to the limited availability of DRC-clean patterns during process and RET recipe development, OPC recipes are not tested with high pattern coverage. Various kinds of patterns can help OPC engineers detect patterns that are sensitive to lithographic effects. Random pattern generation is needed to secure a robust OPC recipe. However, simple random patterns that do not reflect real product layout styles cannot cover patterning hotspots at production level, so they are not effective for OPC optimization; it is therefore important to generate random patterns similar to real product patterns. This paper presents a strategy for generating random patterns based on design architecture information and for preventing hotspots in the early process development stage through a tool called Layout Schema Generator (LSG). Using LSG, we generate standard-cell-based random patterns reflecting real design cell structure - fin pitch, gate pitch and cell height. The output standard cells from LSG are fed into an analysis methodology that assesses their hotspot severity by assigning a score according to their optical image parameters - NILS, MEEF, %PV band - so that potential hotspots can be defined by ranking. This flow is demonstrated on Samsung 7nm technology, optimizing the OPC recipe early enough in the process to avoid using problematic patterns.

  7. Spatiotemporal pattern formation in a prey-predator model under environmental driving forces

    NASA Astrophysics Data System (ADS)

    Sirohi, Anuj Kumar; Banerjee, Malay; Chakraborti, Anirban

    2015-09-01

    Many existing studies on pattern formation in reaction-diffusion systems rely on deterministic models. However, environmental noise is often a major factor that leads to significant changes in the spatiotemporal dynamics. In this paper, we focus on the spatiotemporal patterns produced by a predator-prey model with ratio-dependent functional response and density-dependent predator death rate. We obtain the reaction-diffusion equations by incorporating self-diffusion terms, corresponding to random movement of individuals within two-dimensional habitats, into the growth equations for the prey and predator populations. To obtain the noise-added model, small-amplitude heterogeneous perturbations to the linear intrinsic growth rates are introduced using uncorrelated Gaussian white noise terms. For the noise-added system, we then observe spatial patterns for parameter values lying outside the Turing instability region. With thorough numerical simulations we characterize the patterns corresponding to the Turing and Turing-Hopf domains and study their dependence on system parameters such as the noise intensity.

  8. Hormone-Mediated Pattern Formation in Seedling of Plants: a Competitive Growth Dynamics Model

    NASA Astrophysics Data System (ADS)

    Kawaguchi, Satoshi; Mimura, Masayasu; Ohya, Tomoyuki; Oikawa, Noriko; Okabe, Hirotaka; Kai, Shoichi

    2001-10-01

    An ecologically relevant pattern formation process mediated by hormonal interactions among growing seedlings is modeled based on the experimental observations on the effects of indole acetic acid, which can act as an inhibitor and activator of root growth depending on its concentration. In the absence of any lateral root with constant hormone-sensitivity, the edge effect phenomenon is obtained depending on the secretion rate of hormone from the main root. Introduction of growth-stage-dependent hormone-sensitivity drastically amplifies the initial randomness, resulting in spatially irregular macroscopic patterns. When the lateral root growth is introduced, periodic patterns are obtained whose periodicity depends on the length of lateral roots. The growth-stage-dependent hormone-sensitivity and the lateral root growth are crucial for macroscopic periodic-pattern formation.

  9. Complex scaling behavior in animal foraging patterns

    NASA Astrophysics Data System (ADS)

    Premachandra, Prabhavi Kaushalya

    This dissertation attempts to answer questions from two different areas of biology, ecology and neuroscience, using physics-based techniques. In Section 2, the suitability of three competing random walk models is tested to describe the emergent movement patterns of two species of primates. The truncated power law (power law with exponential cut off) is the most suitable random walk model that characterizes the emergent movement patterns of these primates. In Section 3, an agent-based model is used to simulate search behavior in different environments (landscapes) to investigate the impact of the resource landscape on the optimal foraging movement patterns of deterministic foragers. It should be noted that this model goes beyond previous work in that it includes parameters such as spatial memory and satiation, which have received little consideration to date in the field of movement ecology. When food availability is scarce in a tropical forest-like environment with feeding trees distributed in a clumped fashion and the sizes of those trees are distributed according to a lognormal distribution, the optimal foraging pattern of a generalist who can consume various and abundant food types indeed reaches the Levy range and hence shows evidence for Levy-flight-like (power law distribution with exponent between 1 and 3) behavior. Section 4 of the dissertation presents an investigation of phase transition behavior in a network of locally coupled self-sustained oscillators as the system passes through various bursting states. The results suggest that a phase transition does not occur for this locally coupled neuronal network. The data analysis in the dissertation adopts a model selection approach and relies on methods based on information theory and maximum likelihood.
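
    As an illustration of the model-selection approach mentioned in the closing sentence, the Python sketch below fits an exponential, a pure power law and an exponentially truncated power law to toy step-length data by maximum likelihood and compares them with AIC; the data-generating parameters, sample size and starting values are assumptions for illustration, not empirical movement data.

      import numpy as np
      from scipy.integrate import quad
      from scipy.optimize import minimize

      rng = np.random.default_rng(4)
      xmin = 1.0

      # Toy step lengths drawn from a truncated power law (exponent ~1.5, cutoff ~20)
      # by thinning a Pareto proposal with an exponential acceptance probability.
      raw = (rng.pareto(0.5, size=20000) + 1.0) * xmin
      data = raw[rng.random(raw.size) < np.exp(-raw / 20.0)][:2000]
      n = len(data)
      logs = np.log(data / xmin)

      # 1) Exponential tail: f(x) = lam * exp(-lam * (x - xmin))
      lam = 1.0 / np.mean(data - xmin)
      ll_exp = n * np.log(lam) - lam * np.sum(data - xmin)

      # 2) Pure power law: f(x) = (mu - 1)/xmin * (x/xmin)^(-mu), Hill-type MLE for mu
      mu_hat = 1.0 + n / logs.sum()
      ll_pow = n * np.log((mu_hat - 1.0) / xmin) - mu_hat * logs.sum()

      # 3) Truncated power law: f(x) proportional to x^(-mu) * exp(-x/L), normalized numerically
      def nll_tpl(params):
          mu, L = params
          if mu <= 0.0 or L <= 0.0:
              return np.inf
          Z, _ = quad(lambda x: x ** (-mu) * np.exp(-x / L), xmin, np.inf)
          return -(-mu * np.log(data).sum() - (data / L).sum() - n * np.log(Z))

      res = minimize(nll_tpl, x0=[1.5, 30.0], method="Nelder-Mead")
      ll_tpl = -res.fun

      for name, ll, k in [("exponential", ll_exp, 1), ("power law", ll_pow, 1),
                          ("truncated power law", ll_tpl, 2)]:
          print("%-20s  logL = %9.1f  AIC = %9.1f" % (name, ll, 2 * k - 2 * ll))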

  10. Inversion of left-right asymmetry alters performance of Xenopus tadpoles in nonlateralized cognitive tasks.

    PubMed

    Blackiston, Douglas J; Levin, Michael

    2013-08-01

    Left-right behavioural biases are well documented across the animal kingdom, and handedness has long been associated with cognitive performance. However, the relationship between body laterality and cognitive ability is poorly understood. The embryonic pathways dictating normal left-right patterning have been molecularly dissected in model vertebrates, and numerous genetic and pharmacological treatments now facilitate experimental randomization or reversal of the left-right axis in these animals. Several recent studies showed a link between brain asymmetry and strongly lateralized behaviours such as eye use preference. However, links between laterality of the body and performance on cognitive tasks utilizing nonlateralized cues remain unknown. Xenopus tadpoles are an established model for the study of early left-right patterning, and protocols were recently developed to quantitatively evaluate learning and memory in these animals. Using an automated testing and training platform, we tested wild-type, left-right-randomized and left-right-reversed tadpoles for their ability to learn colour cues in an automated assay. Our results indicate that animals with either randomization or reversal of somatic left-right patterning learned more slowly than wild-type siblings, although all groups were able to reach the same performance optimum given enough training sessions. These results are the first analysis of the link between body laterality and learning of nonlateralized cues, and they position the Xenopus tadpole as an attractive and tractable model for future studies of the links between asymmetry of the body, lateralization of the brain and behaviour.

  11. Oscillations and chaos in neural networks: an exactly solvable model.

    PubMed Central

    Wang, L P; Pichler, E E; Ross, J

    1990-01-01

    We consider a randomly diluted higher-order network with noise, consisting of McCulloch-Pitts neurons that interact by Hebbian-type connections. For this model, exact dynamical equations are derived and solved for both parallel and random sequential updating algorithms. For parallel dynamics, we find a rich spectrum of different behaviors including static retrieving and oscillatory and chaotic phenomena in different parts of the parameter space. The bifurcation parameters include first- and second-order neuronal interaction coefficients and a rescaled noise level, which represents the combined effects of the random synaptic dilution, interference between stored patterns, and additional background noise. We show that a marked difference in terms of the occurrence of oscillations or chaos exists between neural networks with parallel and random sequential dynamics. PMID:2251287
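
    The sketch below is a minimal numerical cousin of the system described: a randomly diluted Hebbian network of binary neurons updated in parallel with a stochastic (noisy) rule, tracking the overlap with one stored pattern. It keeps only first-order couplings and illustrative parameter values, so it is a simplified stand-in rather than the exactly solvable higher-order model of the record.

      import numpy as np

      rng = np.random.default_rng(5)
      N, P = 400, 5                    # neurons and stored patterns (assumed)
      keep = 0.3                       # fraction of synapses kept after random dilution
      beta = 4.0                       # inverse noise level

      xi = rng.choice([-1, 1], size=(P, N))            # random +/-1 patterns
      J = (xi.T @ xi).astype(float) / N                # Hebbian couplings
      np.fill_diagonal(J, 0.0)
      J *= rng.random((N, N)) < keep                   # random synaptic dilution

      s = xi[0].copy()
      s[: N // 5] *= -1                                # start near pattern 0 (20% of bits flipped)
      for t in range(15):
          h = J @ s                                    # local fields (parallel update)
          p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h)) # Glauber-type stochastic rule
          s = np.where(rng.random(N) < p_up, 1, -1)
          print("t=%2d  overlap with stored pattern: %+.2f" % (t + 1, (s @ xi[0]) / N))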

  12. Gradient modeling of conifer species using random forests

    Treesearch

    Jeffrey S. Evans; Samuel A. Cushman

    2009-01-01

    Landscape ecology often adopts a patch mosaic model of ecological patterns. However, many ecological attributes are inherently continuous and classification of species composition into vegetation communities and discrete patches provides an overly simplistic view of the landscape. If one adopts a nichebased, individualistic concept of biotic communities then it may...

  13. The random energy model in a magnetic field and joint source channel coding

    NASA Astrophysics Data System (ADS)

    Merhav, Neri

    2008-09-01

    We demonstrate that there is an intimate relationship between the magnetic properties of Derrida’s random energy model (REM) of spin glasses and the problem of joint source-channel coding in Information Theory. In particular, typical patterns of erroneously decoded messages in the coding problem have “magnetization” properties that are analogous to those of the REM in certain phases, where the non-uniformity of the distribution of the source in the coding problem plays the role of an external magnetic field applied to the REM. We also relate the ensemble performance (random coding exponents) of joint source-channel codes to the free energy of the REM in its different phases.

  14. A random approach of test macro generation for early detection of hotspots

    NASA Astrophysics Data System (ADS)

    Lee, Jong-hyun; Kim, Chin; Kang, Minsoo; Hwang, Sungwook; Yang, Jae-seok; Harb, Mohammed; Al-Imam, Mohamed; Madkour, Kareem; ElManhawy, Wael; Kwan, Joe

    2016-03-01

    Multiple-Patterning Technology (MPT) is still the preferred choice over EUV for the advanced technology nodes, starting from the 20nm node. On the way down to the 7nm and 5nm nodes, Self-Aligned Multiple Patterning (SAMP) appears to be one of the effective multiple patterning techniques in terms of achieving a small pitch of printed lines on wafer, yet its yield is in question. Predicting and enhancing the yield in the early stages of technology development are some of the main objectives for creating test macros on test masks. While conventional yield ramp techniques for a new technology node have relied on using designs from previous technology nodes as a starting point to identify patterns for Design of Experiment (DoE) creation, these techniques are challenging to apply in the case of introducing an MPT technique like SAMP that did not exist in previous nodes. This paper presents a new strategy for generating test structures based on random placement of unit patterns that can be combined into larger, more meaningful patterns. Specifications governing the relationships between those unit patterns can be adjusted to generate layout clips that look like realistic SAMP designs. A via chain can be constructed to connect the random DoE of SAMP structures through a routing layer to external pads for electrical measurement. These clips are decomposed according to the decomposition rules of the technology into the appropriate mandrel and cut masks. The decomposed clips can be tested through simulations, or electrically on silicon, to discover hotspots. The hotspots can be used to optimize the fabrication process and models in order to fix them. They can also be used as learning patterns for DFM deck development. By expanding the size of the randomly generated test structures, more hotspots can be detected. This should provide a faster way to enhance the yield of a new technology node.

  15. Sensing Urban Land-Use Patterns by Integrating Google Tensorflow and Scene-Classification Models

    NASA Astrophysics Data System (ADS)

    Yao, Y.; Liang, H.; Li, X.; Zhang, J.; He, J.

    2017-09-01

    With the rapid progress of China's urbanization, research on the automatic detection of land-use patterns in Chinese cities is of substantial importance. Deep learning is an effective method to extract image features. To take advantage of the deep-learning method in detecting urban land-use patterns, we applied a transfer-learning-based remote-sensing image approach to extract and classify features. Using the Google Tensorflow framework, a powerful convolutional neural network (CNN) library was created. First, the transferred model was previously trained on ImageNet, one of the largest object-image data sets, to fully develop the model's ability to generate feature vectors of standard remote-sensing land-cover data sets (UC Merced and WHU-SIRI). Then, a random-forest-based classifier was constructed and trained on these generated vectors to classify the actual urban land-use pattern on the scale of traffic analysis zones (TAZs). To avoid the multi-scale effect of remote-sensing imagery, a large random patch (LRP) method was used. The proposed method could efficiently obtain acceptable accuracy (OA = 0.794, Kappa = 0.737) for the study area. In addition, the results show that the proposed method can effectively overcome the multi-scale effect that occurs in urban land-use classification at the irregular land-parcel level. The proposed method can help planners monitor dynamic urban land use and evaluate the impact of urban-planning schemes.
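
    A minimal sketch of the transfer-learning idea follows: an ImageNet-pretrained CNN is used as a fixed feature extractor and a random forest is trained on the resulting vectors. It assumes TensorFlow 2.x and scikit-learn are installed; the choice of ResNet50, the hypothetical folder layout patches/<class>/*.jpg and all hyperparameters are assumptions for illustration, not the pipeline of the study.

      # Assumes TensorFlow 2.x and scikit-learn; the patch directory layout is hypothetical.
      import glob
      import numpy as np
      import tensorflow as tf
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False, pooling="avg")

      def feature_vector(path):
          """ImageNet-pretrained CNN used as a fixed feature extractor for one image patch."""
          img = tf.keras.utils.load_img(path, target_size=(224, 224))
          x = tf.keras.utils.img_to_array(img)[None, ...]
          x = tf.keras.applications.resnet50.preprocess_input(x)
          return base.predict(x, verbose=0)[0]

      X, y = [], []
      for path in glob.glob("patches/*/*.jpg"):        # hypothetical land-use patch directory
          X.append(feature_vector(path))
          y.append(path.split("/")[-2])                # class label = parent folder name

      clf = RandomForestClassifier(n_estimators=300, random_state=0)
      print("cross-validated accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())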

  16. Temporal stability of visual search-driven biometrics

    NASA Astrophysics Data System (ADS)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2015-03-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, temporally stable personalized fingerprint of perceptual organization.
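
    As a toy illustration of the identification step (one model per subject, probes scored by log-likelihood), the Python sketch below fits a Gaussian HMM to synthetic 2D gaze traces for two hypothetical subjects. It assumes the hmmlearn package is available; the synthetic traces, number of hidden states and other settings are assumptions for illustration, not the experiment's data or protocol.

      import numpy as np
      from hmmlearn.hmm import GaussianHMM   # assumes the hmmlearn package is installed

      rng = np.random.default_rng(6)

      def fake_gaze(center, n=300):
          """Synthetic 2D gaze trace wandering around a subject-specific screen region."""
          steps = rng.normal(scale=5.0, size=(n, 2))
          return np.asarray(center) + 0.1 * np.cumsum(steps, axis=0)

      # "Enroll" two hypothetical subjects by fitting one HMM per subject on their gaze samples.
      train = {"A": fake_gaze([200.0, 150.0]), "B": fake_gaze([600.0, 400.0])}
      models = {s: GaussianHMM(n_components=3, covariance_type="diag",
                               n_iter=50, random_state=0).fit(x)
                for s, x in train.items()}

      # Identify a new trace as the enrolled subject whose model gives the highest log-likelihood.
      probe = fake_gaze([200.0, 150.0])                 # actually produced by "subject A"
      scores = {s: m.score(probe) for s, m in models.items()}
      print({s: round(v, 1) for s, v in scores.items()})
      print("identified as:", max(scores, key=scores.get))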

  17. Temporal stability of visual search-driven biometrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Hong-Jun; Carmichael, Tandy; Tourassi, Georgia

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, fairly stable personalized fingerprint of perceptual organization.

  18. Directed self assembly of block copolymers using chemical patterns with sidewall guiding lines, backfilled with random copolymer brushes.

    PubMed

    Pandav, Gunja; Durand, William J; Ellison, Christopher J; Willson, C Grant; Ganesan, Venkat

    2015-12-21

    Recently, alignment of block copolymer domains has been achieved using a topographically patterned substrate with a sidewall preferential to one of the blocks. This strategy has been suggested as an option to overcome the patterning resolution challenges facing chemoepitaxy strategies, which utilize chemical stripes with a width of about half the period of block copolymer to orient the equilibrium morphologies. In this work, single chain in mean field simulation methodology was used to study the self assembly of symmetric block copolymers on topographically patterned substrates with sidewall interactions. Random copolymer brushes grafted to the background region (space between patterns) were modeled explicitly. The effects of changes in pattern width, film thicknesses and strength of sidewall interaction on the resulting morphologies were examined and the conditions which led to perpendicular morphologies required for lithographic applications were identified. A number of density multiplication schemes were studied in order to gauge the efficiency with which the sidewall pattern can guide the self assembly of block copolymers. The results indicate that such a patterning technique can potentially utilize pattern widths of the order of one-two times the period of block copolymer and still be able to guide ordering of the block copolymer domains up to 8X density multiplication.

  19. Quantifying the effect of crop spatial arrangement on weed suppression using functional-structural plant modelling.

    PubMed

    Evers, Jochem B; Bastiaans, Lammert

    2016-05-01

    Suppression of weed growth in a crop canopy can be enhanced by improving crop competitiveness. One way to achieve this is by modifying the crop planting pattern. In this study, we addressed the question to what extent a uniform planting pattern increases the ability of a crop to compete with weed plants for light compared to a random and a row planting pattern, and how this ability relates to crop and weed plant density as well as the relative time of emergence of the weed. To this end, we adopted the functional-structural plant modelling approach which allowed us to explicitly include the 3D spatial configuration of the crop-weed canopy and to simulate intra- and interspecific competition between individual plants for light. Based on results of simulated leaf area development, canopy photosynthesis and biomass growth of the crop, we conclude that differences between planting pattern were small, particularly if compared to the effects of relative time of emergence of the weed, weed density and crop density. Nevertheless, analysis of simulated weed biomass demonstrated that a uniform planting of the crop improved the weed-suppression ability of the crop canopy. Differences in weed suppressiveness between planting patterns were largest with weed emergence before crop emergence, when the suppressive effect of the crop was only marginal. With simultaneous emergence a uniform planting pattern was 8 and 15 % more competitive than a row and a random planting pattern, respectively. When weed emergence occurred after crop emergence, differences between crop planting patterns further decreased as crop canopy closure was reached early on regardless of planting pattern. We furthermore conclude that our modelling approach provides promising avenues to further explore crop-weed interactions and aid in the design of crop management strategies that aim at improving crop competitiveness with weeds.

  20. Spatial Point Pattern Analysis of Human Settlements and Geographical Associations in Eastern Coastal China — A Case Study

    PubMed Central

    Zhang, Zhonghao; Xiao, Rui; Shortridge, Ashton; Wu, Jiaping

    2014-01-01

    Understanding the spatial point pattern of human settlements and their geographical associations is important for understanding the drivers of land use and land cover change and the relationship between environmental and ecological processes on one hand and cultures and lifestyles on the other. In this study, a Geographic Information System (GIS) approach, Ripley’s K function and Monte Carlo simulation were used to investigate human settlement point patterns. Remotely sensed tools and regression models were employed to identify the effects of geographical determinants on settlement locations in the Wen-Tai region of eastern coastal China. Results indicated that human settlements displayed regular-random-cluster patterns from small to large scales. Most settlements located on the coastal plain presented either regular or random patterns, while those in hilly areas exhibited a clustered pattern. Moreover, clustered settlements were preferentially located at higher elevations with steeper slopes and south facing aspects than random or regular settlements. Regression showed that influences of topographic factors (elevation, slope and aspect) on settlement locations were stronger across hilly regions. This study demonstrated a new approach to analyzing the spatial patterns of human settlements from a wide geographical perspective. We argue that the spatial point patterns of settlements, in addition to the characteristics of human settlements, such as area, density and shape, should be taken into consideration in the future, and land planners and decision makers should pay more attention to city planning and management. Conceptual and methodological bridges linking settlement patterns to regional and site-specific geographical characteristics will be a key to human settlement studies and planning. PMID:24619117

  1. Layers: A molecular surface peeling algorithm and its applications to analyze protein structures

    PubMed Central

    Karampudi, Naga Bhushana Rao; Bahadur, Ranjit Prasad

    2015-01-01

    We present an algorithm ‘Layers’ to peel the atoms of proteins as layers. Using Layers we show an efficient way to transform protein structures into a 2D pattern, named the residue transition pattern (RTP), which is independent of molecular orientations. RTP explains the folding patterns of proteins, and hence identification of similarity between proteins is simpler and more reliable using RTP than with standard sequence- or structure-based methods. Moreover, Layers generates a fine-tunable coarse model for the molecular surface by using non-random sampling. The coarse model can be used for shape comparison, protein recognition and ligand design. Additionally, Layers can be used to develop biased initial configuration of molecules for protein folding simulations. We have developed a random forest classifier to predict the RTP of a given polypeptide sequence. Layers is a standalone application; however, it can be merged with other applications to reduce the computational load when working with large datasets of protein structures. Layers is available freely at http://www.csb.iitkgp.ernet.in/applications/mol_layers/main. PMID:26553411

  2. When human walking becomes random walking: fractal analysis and modeling of gait rhythm fluctuations

    NASA Astrophysics Data System (ADS)

    Hausdorff, Jeffrey M.; Ashkenazy, Yosef; Peng, Chang-K.; Ivanov, Plamen Ch.; Stanley, H. Eugene; Goldberger, Ary L.

    2001-12-01

    We present a random walk, fractal analysis of the stride-to-stride fluctuations in the human gait rhythm. The gait of healthy young adults is scale-free with long-range correlations extending over hundreds of strides. This fractal scaling changes characteristically with maturation in children and older adults and becomes almost completely uncorrelated with certain neurologic diseases. Stochastic modeling of the gait rhythm dynamics, based on transitions between different “neural centers”, reproduces distinctive statistical properties of the gait pattern. By tuning one model parameter, the hopping (transition) range, the model can describe alterations in gait dynamics from childhood to adulthood - including a decrease in the correlation and volatility exponents with maturation.
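
    As a compact illustration of the fractal analysis referred to above, the Python sketch below implements detrended fluctuation analysis (DFA) and applies it to a toy stride-interval series of uncorrelated noise, which should give an exponent near 0.5; long-range-correlated gait series like those described in the record would give a larger exponent. The series length, box sizes and noise level are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(7)

      def dfa(x, scales):
          """Detrended fluctuation analysis: fluctuation F(n) for each box size n."""
          y = np.cumsum(x - np.mean(x))                 # integrated series (the 'random walk')
          F = []
          for n in scales:
              n_boxes = len(y) // n
              segs = y[: n_boxes * n].reshape(n_boxes, n)
              t = np.arange(n)
              # detrend each box with a least-squares line and collect residual variance
              res = []
              for seg in segs:
                  a, b = np.polyfit(t, seg, 1)
                  res.append(np.mean((seg - (a * t + b)) ** 2))
              F.append(np.sqrt(np.mean(res)))
          return np.array(F)

      # Toy stride-interval series: white noise around a 1.1 s mean stride time.
      strides = 1.1 + 0.05 * rng.normal(size=2**12)
      scales = np.unique(np.logspace(2, 8, 12, base=2).astype(int))
      F = dfa(strides, scales)
      alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # scaling exponent
      print("DFA exponent alpha = %.2f (white noise should give ~0.5)" % alpha)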

  3. Numerical comparison of grid pattern diffraction effects through measurement and modeling with OptiScan software

    NASA Astrophysics Data System (ADS)

    Murray, Ian B.; Densmore, Victor; Bora, Vaibhav; Pieratt, Matthew W.; Hibbard, Douglas L.; Milster, Tom D.

    2011-06-01

    Coatings of various metalized patterns are used for heating and electromagnetic interference (EMI) shielding applications. Previous work has focused on macro differences between different types of grids, and has shown good correlation between measurements and analyses of grid diffraction. To advance this work, we have utilized the University of Arizona's OptiScan software, which has been optimized for this application by using the Babinet Principle. When operating on an appropriate computer system, this algorithm produces results hundreds of times faster than standard Fourier-based methods, and allows realistic cases to be modeled for the first time. By using previously published derivations by Exotic Electro-Optics, we compare diffraction performance of repeating and randomized grid patterns with equivalent sheet resistance using numerical performance metrics. Grid patterns of each type are printed on optical substrates and measured energy is compared against modeled energy.

  4. Spatiotemporal progression of metastatic breast cancer: a Markov chain model highlighting the role of early metastatic sites

    PubMed Central

    Newton, Paul K; Mason, Jeremy; Venkatappa, Neethi; Jochelson, Maxine S; Hurt, Brian; Nieva, Jorge; Comen, Elizabeth; Norton, Larry; Kuhn, Peter

    2015-01-01

    Background: Cancer cell migration patterns are critical for understanding metastases and clinical evolution. Breast cancer spreads from one organ system to another via hematogenous and lymphatic routes. Although patterns of spread may superficially seem random and unpredictable, we explored the possibility that this is not the case. Aims: Develop a Markov based model of breast cancer progression that has predictive capability. Methods: On the basis of a longitudinal data set of 446 breast cancer patients, we created a Markov chain model of metastasis that describes the probabilities of metastasis occurring at a given anatomic site together with the probability of spread to additional sites. Progression is modeled as a random walk on a directed graph, where nodes represent anatomical sites where tumors can develop. Results: We quantify how survival depends on the location of the first metastatic site for different patient subcategories. In addition, we classify metastatic sites as “sponges” or “spreaders” with implications regarding anatomical pathway prediction and long-term survival. As metastatic tumors to the bone (main spreader) are most prominent, we focus in more detail on differences between groups of patients who form subsequent metastases to the lung as compared with the liver. Conclusions: We have found that spatiotemporal patterns of metastatic spread in breast cancer are neither random nor unpredictable. Furthermore, the novel concept of classifying organ sites as sponges or spreaders may motivate experiments seeking a biological basis for these phenomena and allow us to quantify the potential consequences of therapeutic targeting of sites in the oligometastatic setting and shed light on organotropic aspects of the disease. PMID:28721371
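
    To make the random-walk-on-a-graph formulation concrete, the Python sketch below simulates progression paths on a tiny site graph and applies a crude outflow/inflow heuristic in the spirit of the sponge/spreader distinction; the site list, transition probabilities and classification rule are assumptions for illustration, not the quantities estimated from the 446-patient data set.

      import numpy as np

      rng = np.random.default_rng(8)
      sites = ["breast", "bone", "lung", "liver", "brain"]
      # Toy transition matrix between anatomical sites (each row sums to 1); values are assumed.
      P = np.array([
          [0.00, 0.50, 0.25, 0.20, 0.05],   # from primary (breast)
          [0.00, 0.40, 0.30, 0.20, 0.10],   # from bone
          [0.00, 0.30, 0.40, 0.20, 0.10],   # from lung
          [0.00, 0.30, 0.30, 0.30, 0.10],   # from liver
          [0.00, 0.25, 0.25, 0.25, 0.25],   # from brain
      ])

      def random_walk(n_steps=3):
          """One simulated progression path starting at the primary tumour."""
          path, state = ["breast"], 0
          for _ in range(n_steps):
              state = rng.choice(len(sites), p=P[state])
              path.append(sites[state])
          return path

      print("one simulated progression path:", " -> ".join(random_walk()))

      # Crude heuristic: 'spreaders' pass disease on (high outflow relative to inflow),
      # 'sponges' mostly receive it (low outflow relative to inflow).
      inflow = P.sum(axis=0)
      outflow = P.sum(axis=1) - np.diag(P)
      for i, s in enumerate(sites[1:], start=1):
          print("%-6s  inflow %.2f  outflow %.2f  ratio %.2f" % (s, inflow[i], outflow[i], outflow[i] / inflow[i]))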

  5. Frustration in Condensed Matter and Protein Folding

    NASA Astrophysics Data System (ADS)

    Lorelli, S.; Cabot, A.; Sundarprasad, N.; Boekema, C.

    Using computer modeling we study frustration in condensed matter and protein folding. Frustration is due to random and/or competing interactions. One definition of frustration is the sum of squares of the differences between actual and expected distances between characters. If this sum is non-zero, then the system is said to have frustration. A simulation tracks the movement of characters to lower their frustration. Our research is conducted on frustration as a function of temperature using a logarithmic scale. At absolute zero, the relaxation for frustration is a power function for randomly assigned patterns or an exponential function for regular patterns like Thomson figures. These findings have implications for protein folding; we attempt to apply our frustration modeling to protein folding and dynamics. We use coding in Python to simulate different ways a protein can fold. An algorithm is being developed to find the lowest frustration (and thus energy) states possible. Research supported by SJSU & AFC.

  6. A Cerebellar-model Associative Memory as a Generalized Random-access Memory

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1989-01-01

    A versatile neural-net model is explained in terms familiar to computer scientists and engineers. It is called the sparse distributed memory, and it is a random-access memory for very long words (for patterns with thousands of bits). Its potential utility is the result of several factors: (1) a large pattern representing an object or a scene or a moment can encode a large amount of information about what it represents; (2) this information can serve as an address to the memory, and it can also serve as data; (3) the memory is noise tolerant--the information need not be exact; (4) the memory can be made arbitrarily large and hence an arbitrary amount of information can be stored in it; and (5) the architecture is inherently parallel, allowing large memories to be fast. Such memories can become important components of future computers.
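
    A minimal sparse distributed memory in Python is sketched below: fixed random hard-location addresses, activation of all locations within a Hamming radius of the query, up/down counters for writing, and majority read-out, demonstrated with an autoassociative store and a noisy probe. The word length, number of locations and radius are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(9)
      WORD, M, RADIUS = 256, 2000, 111      # word length (bits), hard locations, activation radius

      addresses = rng.integers(0, 2, size=(M, WORD))   # fixed random hard-location addresses
      counters = np.zeros((M, WORD), dtype=int)        # up/down counters stored at each location

      def active(addr):
          """Hard locations within Hamming distance RADIUS of the query address."""
          return (addresses != addr).sum(axis=1) <= RADIUS

      def write(addr, data):
          counters[active(addr)] += np.where(data == 1, 1, -1)

      def read(addr):
          return (counters[active(addr)].sum(axis=0) >= 0).astype(int)

      pattern = rng.integers(0, 2, size=WORD)
      write(pattern, pattern)                          # autoassociative store: address = data

      probe = pattern.copy()
      probe[rng.choice(WORD, size=25, replace=False)] ^= 1   # corrupt 25 bits of the cue
      recalled = read(probe)
      print("bits wrong in probe :", int((probe != pattern).sum()))
      print("bits wrong in recall:", int((recalled != pattern).sum()))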

  7. Comparing effects of fire modeling methods on simulated fire patterns and succession: a case study in the Missouri Ozarks

    Treesearch

    Jian Yang; Hong S. He; Brian R. Sturtevant; Brian R. Miranda; Eric J. Gustafson

    2008-01-01

    We compared four fire spread simulation methods (completely random, dynamic percolation, size-based minimum travel time algorithm, and duration-based minimum travel time algorithm) and two fire occurrence simulation methods (Poisson fire frequency model and hierarchical fire frequency model) using a two-way factorial design. We examined these treatment effects on...

  8. The Signal Importance of Noise

    ERIC Educational Resources Information Center

    Macy, Michael; Tsvetkova, Milena

    2015-01-01

    Noise is widely regarded as a residual category--the unexplained variance in a linear model or the random disturbance of a predictable pattern. Accordingly, formal models often impose the simplifying assumption that the world is noise-free and social dynamics are deterministic. Where noise is assigned causal importance, it is often assumed to be a…

  9. A high-capacity model for one shot association learning in the brain

    PubMed Central

    Einarsson, Hafsteinn; Lengler, Johannes; Steger, Angelika

    2014-01-01

    We present a high-capacity model for one-shot association learning (hetero-associative memory) in sparse networks. We assume that basic patterns are pre-learned in networks and associations between two patterns are presented only once and have to be learned immediately. The model is a combination of an Amit-Fusi like network sparsely connected to a Willshaw type network. The learning procedure is palimpsest and comes from earlier work on one-shot pattern learning. However, in our setup we can enhance the capacity of the network by iterative retrieval. This yields a model for sparse brain-like networks in which populations of a few thousand neurons are capable of learning hundreds of associations even if they are presented only once. The analysis of the model is based on a novel result by Janson et al. on bootstrap percolation in random graphs. PMID:25426060

  10. A high-capacity model for one shot association learning in the brain.

    PubMed

    Einarsson, Hafsteinn; Lengler, Johannes; Steger, Angelika

    2014-01-01

    We present a high-capacity model for one-shot association learning (hetero-associative memory) in sparse networks. We assume that basic patterns are pre-learned in networks and associations between two patterns are presented only once and have to be learned immediately. The model is a combination of an Amit-Fusi like network sparsely connected to a Willshaw type network. The learning procedure is palimpsest and comes from earlier work on one-shot pattern learning. However, in our setup we can enhance the capacity of the network by iterative retrieval. This yields a model for sparse brain-like networks in which populations of a few thousand neurons are capable of learning hundreds of associations even if they are presented only once. The analysis of the model is based on a novel result by Janson et al. on bootstrap percolation in random graphs.

  11. Use of forecasting signatures to help distinguish periodicity, randomness, and chaos in ripples and other spatial patterns

    USGS Publications Warehouse

    Rubin, D.M.

    1992-01-01

    Forecasting of one-dimensional time series previously has been used to help distinguish periodicity, chaos, and noise. This paper presents two-dimensional generalizations for making such distinctions for spatial patterns. The techniques are evaluated using synthetic spatial patterns and then are applied to a natural example: ripples formed in sand by blowing wind. Tests with the synthetic patterns demonstrate that the forecasting techniques can be applied to two-dimensional spatial patterns, with the same utility and limitations as when applied to one-dimensional time series. One limitation is that some combinations of periodicity and randomness exhibit forecasting signatures that mimic those of chaos. For example, sine waves distorted with correlated phase noise have forecasting errors that increase with forecasting distance, errors that are minimized using nonlinear models at moderate embedding dimensions, and forecasting properties that differ significantly between the original and surrogates. Ripples formed in sand by flowing air or water typically vary in geometry from one to another, even when formed in a flow that is uniform on a large scale; each ripple modifies the local flow or sand-transport field, thereby influencing the geometry of the next ripple downcurrent. Spatial forecasting was used to evaluate the hypothesis that such a deterministic process - rather than randomness or quasiperiodicity - is responsible for the variation between successive ripples. This hypothesis is supported by a forecasting error that increases with forecasting distance, a greater accuracy of nonlinear relative to linear models, and significant differences between forecasts made with the original ripples and those made with surrogate patterns. Forecasting signatures cannot be used to distinguish ripple geometry from sine waves with correlated phase noise, but this kind of structure can be ruled out by two geometric properties of the ripples: Successive ripples are highly correlated in wavelength, and ripple crests display dislocations such as branchings and mergers. © 1992 American Institute of Physics.

  12. Brownian Motion in a Speckle Light Field: Tunable Anomalous Diffusion and Selective Optical Manipulation

    PubMed Central

    Volpe, Giorgio; Volpe, Giovanni; Gigan, Sylvain

    2014-01-01

    The motion of particles in random potentials occurs in several natural phenomena ranging from the mobility of organelles within a biological cell to the diffusion of stars within a galaxy. A Brownian particle moving in the random optical potential associated to a speckle pattern, i.e., a complex interference pattern generated by the scattering of coherent light by a random medium, provides an ideal model system to study such phenomena. Here, we derive a theory for the motion of a Brownian particle in a speckle field and, in particular, we identify its universal characteristic timescale. Based on this theoretical insight, we show how speckle light fields can be used to control the anomalous diffusion of a Brownian particle and to perform some basic optical manipulation tasks such as guiding and sorting. Our results might broaden the perspectives of optical manipulation for real-life applications. PMID:24496461

  13. Effect of texture randomization on the slip and interfacial robustness in turbulent flows over superhydrophobic surfaces

    NASA Astrophysics Data System (ADS)

    Seo, Jongmin; Mani, Ali

    2018-04-01

    Superhydrophobic surfaces demonstrate promising potential for skin friction reduction in naval and hydrodynamic applications. Recent developments of superhydrophobic surfaces aiming for scalable applications use random distributions of roughness, such as spray coating and etching processes. However, most previous analyses of the interaction between flows and superhydrophobic surfaces studied periodic geometries that are economically feasible only in laboratory-scale experiments. In order to assess the drag reduction effectiveness as well as interfacial robustness of superhydrophobic surfaces with randomly distributed textures, we conduct direct numerical simulations of turbulent flows over randomly patterned interfaces considering a range of texture widths w+ ≈ 4–26 and solid fractions ϕs = 11%–25%. Slip and no-slip boundary conditions are implemented in a pattern, modeling the presence of gas-liquid interfaces and solid elements. Our results indicate that the slip of randomly distributed textures under turbulent flows is about 30% less than that of surfaces with aligned features of the same size. In the small texture size limit w+ ≈ 4, the slip length of the randomly distributed textures in turbulent flows is well described by a previously introduced Stokes flow solution of randomly distributed shear-free holes. By comparing DNS results for patterned slip and no-slip boundaries against the corresponding homogenized slip length boundary conditions, we show that turbulent flows over randomly distributed posts can be represented by an isotropic slip length in the streamwise and spanwise directions. The average pressure fluctuation on a gas pocket is similar to that of the aligned features with the same texture size and gas fraction, but the maximum interface deformation at the leading edge of the roughness element is about twice as large when the textures are randomly distributed. The presented analyses provide insights on the implications of texture randomness for the drag reduction performance and robustness of superhydrophobic surfaces.

  14. Non-random dispersal in the butterfly Maniola jurtina: implications for metapopulation models.

    PubMed Central

    Conradt, L; Bodsworth, E J; Roper, T J; Thomas, C D

    2000-01-01

    The dispersal patterns of animals are important in metapopulation ecology because they affect the dynamics and survival of populations. Theoretical models assume random dispersal but little is known in practice about the dispersal behaviour of individual animals or the strategy by which dispersers locate distant habitat patches. In the present study, we released individual meadow brown butterflies (Maniola jurtina) in a non-habitat and investigated their ability to return to a suitable habitat. The results provided three reasons for supposing that meadow brown butterflies do not seek habitat by means of random flight. First, when released within the range of their normal dispersal distances, the butterflies orientated towards suitable habitat at a higher rate than expected at random. Second, when released at larger distances from their habitat, they used a non-random, systematic, search strategy in which they flew in loops around the release point and returned periodically to it. Third, butterflies returned to a familiar habitat patch rather than a non-familiar one when given a choice. If dispersers actively orientate towards or search systematically for distant habitat, this may be problematic for existing metapopulation models, including models of the evolution of dispersal rates in metapopulations. PMID:11007325

  15. SKA aperture array verification system: electromagnetic modeling and beam pattern measurements using a micro UAV

    NASA Astrophysics Data System (ADS)

    de Lera Acedo, E.; Bolli, P.; Paonessa, F.; Virone, G.; Colin-Beltran, E.; Razavi-Ghods, N.; Aicardi, I.; Lingua, A.; Maschio, P.; Monari, J.; Naldi, G.; Piras, M.; Pupillo, G.

    2018-03-01

    In this paper we present the electromagnetic modeling and beam pattern measurements of a 16-element ultra wideband sparse random test array for the low frequency instrument of the Square Kilometer Array telescope. We discuss the importance of a small array test platform for the development of technologies and techniques towards the final telescope, highlighting the most relevant aspects of its design. We also describe the electromagnetic simulations and modeling work as well as the embedded-element and array pattern measurements using an Unmanned Aerial Vehicle system. The latter are helpful both for the validation of the models and the design as well as for the future instrumental calibration of the telescope thanks to the stable, accurate and strong radio frequency signal transmitted by the UAV. At this stage of the design, these measurements have shown a general agreement between experimental results and numerical data and have revealed the localized effect of un-calibrated cable lengths in the inner side-lobes of the array pattern.

  16. Modeling of blob-hole correlations in GPI edge turbulence data

    NASA Astrophysics Data System (ADS)

    Myra, J. R.; Russell, D. A.; Zweben, S. J.

    2017-10-01

    Gas-puff imaging (GPI) observations made on NSTX have revealed two-point spatial correlation patterns in the plane perpendicular to the magnetic field. A common feature is the occurrence of dipole-like patterns with significant regions of negative correlation. In this work, we explore the possibility that these dipole patterns may be due to blob-hole pairs. Statistical methods are applied to determine the two-point spatial correlation that results from a model of blob-hole pair formation. It is shown that the model produces dipole correlation patterns that are qualitatively similar to the GPI data in many respects. Effects of the reference location (confined surfaces or scrape-off layer), a superimposed random background, hole velocity and lifetime, and background sheared flows are explored. The possibility of using the model to ascertain new information about edge turbulence is discussed. Work supported by the U.S. Department of Energy Office of Science, Office of Fusion Energy Sciences under Award Number DE-FG02-02ER54678.

  17. Image-based modeling of radiation-induced foci

    NASA Astrophysics Data System (ADS)

    Costes, Sylvain; Cucinotta, Francis A.; Ponomarev, Artem; Barcellos-Hoff, Mary Helen; Chen, James; Chou, William; Gascard, Philippe

    Several proteins involved in the response to DNA double strand breaks (DSB) form microscopically visible nuclear domains, or foci, after exposure to ionizing radiation. Radiation-induced foci (RIF) are believed to be located where DNA damage occurs. To test this assumption, we used Monte Carlo simulations to predict the spatial distribution of DSB in human nuclei exposed to high- or low-LET radiation. We then compared these predictions to the distribution patterns of three DNA damage sensing proteins, i.e. 53BP1, phosphorylated ATM and γH2AX, in human mammary epithelial cells. The probability of inducing DSB can be derived from DNA fragment data measured experimentally by pulsed-field gel electrophoresis. We first used this probability in Monte Carlo simulations to predict DSB locations in synthetic nuclei geometrically described by a complete set of human chromosomes, taking into account microscope optics from real experiments. Simulations showed a very good agreement for high-LET, predicting 0.7 foci/µm along the path of a 1 GeV/amu Fe particle against measurements of 0.69 to 0.82 foci/µm for various RIF 5 min following exposure (LET 150 keV/µm). On the other hand, discrepancies were shown in foci frequency for low-LET, with measurements 20… One drawback of using a theoretical model for the nucleus is that it assumes a simplistic and static pattern for DNA densities. However, the DNA damage pattern is highly correlated with the DNA density pattern (i.e. the more DNA, the more likely to have a break). Therefore, we generalized our Monte Carlo approach to real microscope images, assuming that the pixel intensity of DAPI in the nucleus was directly proportional to the amount of DNA in that pixel. With such an approach we could predict the DNA damage pattern in real images on a per-nucleus basis. Since energy is randomly deposited along high-LET particle paths, RIF along these paths should also be randomly distributed. As expected, simulations produced DNA-weighted random (Poisson) distributions. In contrast, the distributions of RIF obtained as early as 5 min after exposure to high LET (1 GeV/amu Fe) were non-random. This deviation from the expected DNA-weighted random pattern was further characterized by "relative DNA image measurements". This novel imaging approach showed that RIF were located preferentially at the interface between high and low DNA density regions, and were more frequent than predicted in regions with lower DNA density. The same preferential nuclear location was also measured for RIF induced by 1 Gy of low-LET radiation. This deviation from random behavior was evident only 5 min after irradiation for phosphorylated ATM RIF, while γH2AX and 53BP1 RIF showed pronounced deviations up to 30 min after exposure. These data suggest that RIF within a few minutes following exposure to radiation cluster into open regions of the nucleus (i.e. euchromatin). It is possible that DNA lesions are collected in these nuclear sub-domains for more efficient repair. If so, this would imply that DSB are actively transported within the nucleus, a phenomenon that has not yet been considered in modeling DNA misrepair following exposure to radiation. These results are thus critical for more accurate risk models of radiation, and we are actively working on further characterizing RIF movement in human nuclei using live cell imaging.
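
    The DNA-weighted random (Poisson) null described above can be sketched as intensity-weighted sampling: candidate damage sites are drawn with probability proportional to pixel intensity. The synthetic "nucleus" image and the number of breaks per track below are assumptions for illustration, not the authors' data or code.

```python
# Sketch: drawing simulated damage sites with probability proportional to a
# DAPI-like pixel intensity, i.e. a DNA-weighted random pattern (all inputs
# here are synthetic stand-ins).
import numpy as np

rng = np.random.default_rng(0)

ny, nx = 128, 128
yy, xx = np.mgrid[0:ny, 0:nx]
nucleus = ((yy - 64) ** 2 + (xx - 64) ** 2 < 50 ** 2).astype(float)  # a disc
patches = rng.random((ny, nx)) < 0.02                                # denser DNA spots
intensity = nucleus * (1.0 + 3.0 * patches)

# normalize to a probability map and draw damage sites pixel by pixel
p = intensity.ravel() / intensity.sum()
n_dsb = 25                                     # assumed breaks for one track
idx = rng.choice(p.size, size=n_dsb, replace=True, p=p)
sites_y, sites_x = np.unravel_index(idx, intensity.shape)

# under this null model the expected count in any region is proportional to its
# summed intensity; deviations in measured RIF data indicate non-random foci
print(list(zip(sites_y.tolist(), sites_x.tolist()))[:5])
```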

  18. Motifs in triadic random graphs based on Steiner triple systems

    NASA Astrophysics Data System (ADS)

    Winkler, Marco; Reichardt, Jörg

    2013-08-01

    Conventionally, pairwise relationships between nodes are considered to be the fundamental building blocks of complex networks. However, over the last decade, the overabundance of certain subnetwork patterns, i.e., the so-called motifs, has attracted much attention. It has been hypothesized that these motifs, instead of links, serve as the building blocks of network structures. Although the relation between a network's topology and the general properties of the system, such as its function, its robustness against perturbations, or its efficiency in spreading information, is the central theme of network science, there is still a lack of sound generative models needed for testing the functional role of subgraph motifs. Our work aims to overcome this limitation. We employ the framework of exponential random graph models (ERGMs) to define models based on triadic substructures. The fact that only a small portion of triads can actually be set independently poses a challenge for the formulation of such models. To overcome this obstacle, we use Steiner triple systems (STSs). These are partitions of sets of nodes into pair-disjoint triads, which thus can be specified independently. Combining the concepts of ERGMs and STSs, we suggest generative models capable of generating ensembles of networks with nontrivial triadic Z-score profiles. Further, we discover inevitable correlations between the abundance of triad patterns, which occur solely for statistical reasons and need to be taken into account when discussing the functional implications of motif statistics. Moreover, we calculate the degree distributions of our triadic random graphs analytically.

  19. Dynamics of vascular branching morphogenesis: The effect of blood and tissue flow

    NASA Astrophysics Data System (ADS)

    Nguyen, Thi-Hanh; Eichmann, Anne; Le Noble, Ferdinand; Fleury, Vincent

    2006-06-01

    Vascularization of embryonic organs or tumors starts from a primitive lattice of capillaries. Upon perfusion, this lattice is remodeled into branched arteries and veins. Adaptation to mechanical forces is implied to play a major role in arterial patterning. However, numerical simulations of vessel adaptation to haemodynamics have so far failed to predict any realistic vascular pattern. We present in this article a theoretical model of vascular development in the yolk sac based on three features of vascular morphogenesis: the disconnection of side branches from main branches, the reconnection of dangling sprouts (“dead ends”), and the plastic extension of interstitial tissue, which we have observed in vascular morphogenesis. We show that the effect of Poiseuille flow in the vessels can be modeled by aggregation of random walkers. Solid tissue expansion can be modeled by a Poiseuille (parabolic) deformation, hence by deformation under hits of random walkers. Incorporation of these features, which are of a mechanical nature, leads to realistic modeling of vessels, with important biological consequences. The model also predicts the outcome of simple mechanical actions, such as clamping of vessels or deformation of tissue by the presence of obstacles. This study offers an explanation for flow-driven control of vascular branching morphogenesis.
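
    A minimal sketch of the random-walker aggregation ingredient mentioned above, in the spirit of diffusion-limited aggregation on a lattice. The lattice size, walker count, and the shortcut of launching walkers anywhere in the domain are assumptions for illustration, not the authors' implementation.

```python
# Sketch: on-lattice aggregation of random walkers (DLA-style growth).
import numpy as np

rng = np.random.default_rng(2)
n = 61
grid = np.zeros((n, n), dtype=bool)
grid[n // 2, n // 2] = True                    # seed of the growing branch
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

for _ in range(150):                           # walkers released one at a time
    # launch at a random empty site (a shortcut; classic DLA launches far away)
    while True:
        x, y = int(rng.integers(n)), int(rng.integers(n))
        if not grid[x, y]:
            break
    # walk until a neighbouring site belongs to the aggregate, then stick
    while not grid[max(x - 1, 0):x + 2, max(y - 1, 0):y + 2].any():
        dx, dy = moves[rng.integers(4)]
        x = min(max(x + dx, 0), n - 1)
        y = min(max(y + dy, 0), n - 1)
    grid[x, y] = True

print("aggregate size:", int(grid.sum()), "cells")
```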

  20. Nonlinear complexity of random visibility graph and Lempel-Ziv on multitype range-intensity interacting financial dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Yali; Wang, Jun

    2017-09-01

    In an attempt to investigate the nonlinear complex evolution of financial dynamics, a new financial price model, the multitype range-intensity contact (MRIC) financial model, is developed based on the multitype range-intensity interacting contact system, in which the interaction and transmission of different types of investment attitudes in a stock market are simulated by the spreading of viruses. Two new random visibility graph (VG) based analyses and Lempel-Ziv complexity (LZC) are applied to study the complex behaviors of return time series and the corresponding randomly sorted series. The VG method is based on complex network theory, and the LZC is a non-parametric measure of complexity reflecting the rate of new pattern generation of a series. In this work, real stock market indices are compared with the simulation data of the proposed model. Further, the numerical empirical study shows similar complexity behaviors between the model and the real markets, confirming that the financial model is reasonable to some extent.
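
    A short sketch of the Lempel-Ziv complexity measure referred to above, applied to a return series binarized at its median. The Gaussian "returns" are a stand-in for model or market data; they are not generated by the MRIC model.

```python
# Sketch: LZ76 complexity of a median-binarized return series.
import numpy as np

def lz76_complexity(s):
    """Number of phrases in the Lempel-Ziv (1976) parsing of the string s."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the current phrase while it still occurs earlier in the sequence
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

rng = np.random.default_rng(0)
returns = rng.normal(size=2000)                       # stand-in return series
symbols = "".join("1" if r > np.median(returns) else "0" for r in returns)

c = lz76_complexity(symbols)
n = len(symbols)
print("raw LZ complexity:", c)
print("normalized c*log2(n)/n:", round(c * np.log2(n) / n, 3))  # ~1 for i.i.d. noise
```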

  1. Analysis of a Spatial Point Pattern: Examining the Damage to Pavement and Pipes in Santa Clara Valley Resulting from the Loma Prieta Earthquake

    USGS Publications Warehouse

    Phelps, G.A.

    2008-01-01

    This report describes some simple spatial statistical methods to explore the relationships of scattered points to geologic or other features, represented by points, lines, or areas. It also describes statistical methods to search for linear trends and clustered patterns within the scattered point data. Scattered points are often contained within irregularly shaped study areas, necessitating the use of methods largely unexplored in the point pattern literature. The methods take advantage of the power of modern GIS toolkits to numerically approximate the null hypothesis of randomly located data within an irregular study area. Observed distributions can then be compared with the null distribution of a set of randomly located points. The methods are non-parametric and are applicable to irregularly shaped study areas. Patterns within the point data are examined by comparing the distribution of the orientation of the set of vectors defined by each pair of points within the data with the equivalent distribution for a random set of points within the study area. A simple model is proposed to describe linear or clustered structure within scattered data. A scattered data set of damage to pavement and pipes, recorded after the 1989 Loma Prieta earthquake, is used as an example to demonstrate the analytical techniques. The damage is found to be preferentially located nearer a set of mapped lineaments than randomly scattered damage, suggesting range-front faulting along the base of the Santa Cruz Mountains is related to both the earthquake damage and the mapped lineaments. The damage also exhibits two non-random patterns: a single cluster of damage centered in the town of Los Gatos, California, and a linear alignment of damage along the range front of the Santa Cruz Mountains, California. The linear alignment of damage is strongest between 45° and 50° northwest. This agrees well with the mean trend of the mapped lineaments, measured as 49° northwest.
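
    The Monte Carlo comparison described above can be sketched as follows: points are generated at random inside an irregular polygonal study area by rejection sampling, and the observed mean distance to a lineament is compared with the distribution obtained from randomly located points. The polygon, the lineament and the "damage" points below are all invented for illustration; the real analysis used GIS layers of the study area and mapped features.

```python
# Sketch: Monte Carlo null for "are points closer to a lineament than random?"
import numpy as np

rng = np.random.default_rng(3)
poly = np.array([(0, 0), (10, 0.5), (12, 6), (5, 9), (-1, 4)], float)  # study area
seg_a, seg_b = np.array([1.0, 1.0]), np.array([11.0, 5.0])             # lineament

def in_polygon(pts, poly):
    """Even-odd ray-casting test, vectorized over points."""
    x, y = pts[:, 0], pts[:, 1]
    inside = np.zeros(len(pts), dtype=bool)
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        crosses = (yi > y) != (yj > y)
        xint = (xj - xi) * (y - yi) / (yj - yi) + xi
        inside ^= crosses & (x < xint)
        j = i
    return inside

def dist_to_segment(pts, a, b):
    ab = b - a
    t = np.clip(((pts - a) @ ab) / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(pts - (a + t[:, None] * ab), axis=1)

def random_points(k):
    pts = np.empty((0, 2))
    while len(pts) < k:                        # rejection sampling in the polygon
        cand = rng.uniform([-1, 0], [12, 9], size=(4 * k, 2))
        pts = np.vstack([pts, cand[in_polygon(cand, poly)]])
    return pts[:k]

damage = random_points(60)                     # stand-in for the mapped damage
observed = dist_to_segment(damage, seg_a, seg_b).mean()
null = [dist_to_segment(random_points(60), seg_a, seg_b).mean() for _ in range(999)]
p = (1 + sum(m <= observed for m in null)) / 1000.0
print(f"observed mean distance {observed:.2f}, one-sided p = {p:.3f}")
```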

  2. Pattern formation and collective effects in populations of magnetic microswimmers

    NASA Astrophysics Data System (ADS)

    Vach, Peter J.; Walker, Debora; Fischer, Peer; Fratzl, Peter; Faivre, Damien

    2017-03-01

    Self-propelled particles are one prototype of synthetic active matter used to understand complex biological processes, such as the coordination of movement in bacterial colonies or schools of fish. Collective patterns such as clusters were observed for such systems, reproducing features of biological organization. However, one limitation of this model is that the synthetic assemblies are made of identical individuals. Here we introduce an active system based on magnetic particles at colloidal scales. We use identical as well as randomly shaped magnetic micropropellers and show that they exhibit dynamic and reversible pattern formation.

  3. Planning in Higher Education and Chaos Theory: A Model, a Method.

    ERIC Educational Resources Information Center

    Cutright, Marc

    This paper proposes a model, based on chaos theory, that explores strategic planning in higher education. It notes that chaos theory was first developed in the physical sciences to explain how apparently random activity was, in fact, complexly patterned. The paper goes on to describe how chaos theory has subsequently been applied to the social…

  4. A Hierarchical and Contextual Model for Learning and Recognizing Highly Variant Visual Categories

    DTIC Science & Technology

    2010-01-01

    neighboring pattern primitives, to create our model. We also present a minimax entropy framework for automatically learning which contextual constraints are... Grammars... Markov Random Fields... Creating a Contextual... Compositional Boosting... Top-down hallucinations of missing objects... The bottom-up to top-down...

  5. A random walk model to evaluate autism

    NASA Astrophysics Data System (ADS)

    Moura, T. R. S.; Fulco, U. L.; Albuquerque, E. L.

    2018-02-01

    A common test administered during neurological examination in children is the analysis of their social communication and interaction across multiple contexts, including repetitive patterns of behavior. Poor performance may be associated with neurological conditions characterized by impairments in executive function, such as the so-called pervasive developmental disorders (PDDs), a particular condition of the autism spectrum disorders (ASDs). Inspired by these diagnostic tools, mainly those related to repetitive movements and behaviors, we studied here how the diffusion regimes of two discrete-time random walkers, mimicking the lack of social interaction and restricted interests developed by children with PDDs, are affected. Our model, which is based on the so-called elephant random walk (ERW) approach, considers that one of the random walkers can learn and imitate the microscopic behavior of the other with probability f (1 - f otherwise). The diffusion regimes, measured by the Hurst exponent (H), are then obtained; their changes may indicate different degrees of autism.
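
    A loose sketch of this idea is given below: two discrete-time walkers with elephant-random-walk memory, one of which copies the other's current step with probability f and otherwise follows its own memory. The coupling details and the memory parameter p are assumptions rather than the authors' exact specification; the Hurst exponent is estimated from the growth of the mean squared displacement.

```python
# Loose sketch (assumed coupling, not the authors' exact model): walker B
# imitates walker A's current step with probability f; both have ERW memory.
import numpy as np

rng = np.random.default_rng(4)

def coupled_erw(T, p=0.85, f=0.5):
    a = [1 if rng.random() < 0.5 else -1]
    b = [1 if rng.random() < 0.5 else -1]
    for t in range(1, T):
        # ERW memory: recall one of one's own past steps, repeat it with prob. p
        step_a = a[rng.integers(t)] * (1 if rng.random() < p else -1)
        if rng.random() < f:
            step_b = step_a                      # imitation of the other walker
        else:
            step_b = b[rng.integers(t)] * (1 if rng.random() < p else -1)
        a.append(step_a)
        b.append(step_b)
    return np.cumsum(a), np.cumsum(b)

# estimate the Hurst exponent of walker B from <x(t)^2> ~ t^(2H)
T, n_runs = 2000, 200
msd = np.zeros(T)
for _ in range(n_runs):
    _, xb = coupled_erw(T)
    msd += xb.astype(float) ** 2
msd /= n_runs
t = np.arange(1, T + 1)
H = np.polyfit(np.log(t[10:]), np.log(msd[10:]), 1)[0] / 2
print(f"estimated Hurst exponent H = {H:.2f}")
```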

  6. A multivariate spatial mixture model for areal data: examining regional differences in standardized test scores

    PubMed Central

    Neelon, Brian; Gelfand, Alan E.; Miranda, Marie Lynn

    2013-01-01

    Summary Researchers in the health and social sciences often wish to examine joint spatial patterns for two or more related outcomes. Examples include infant birth weight and gestational length, psychosocial and behavioral indices, and educational test scores from different cognitive domains. We propose a multivariate spatial mixture model for the joint analysis of continuous individual-level outcomes that are referenced to areal units. The responses are modeled as a finite mixture of multivariate normals, which accommodates a wide range of marginal response distributions and allows investigators to examine covariate effects within subpopulations of interest. The model has a hierarchical structure built at the individual level (i.e., individuals are nested within areal units), and thus incorporates both individual- and areal-level predictors as well as spatial random effects for each mixture component. Conditional autoregressive (CAR) priors on the random effects provide spatial smoothing and allow the shape of the multivariate distribution to vary flexibly across geographic regions. We adopt a Bayesian modeling approach and develop an efficient Markov chain Monte Carlo model fitting algorithm that relies primarily on closed-form full conditionals. We use the model to explore geographic patterns in end-of-grade math and reading test scores among school-age children in North Carolina. PMID:26401059

  7. Identifying treatment effect heterogeneity in clinical trials using subpopulations of events: STEPP.

    PubMed

    Lazar, Ann A; Bonetti, Marco; Cole, Bernard F; Yip, Wai-Ki; Gelber, Richard D

    2016-04-01

    Investigators conducting randomized clinical trials often explore treatment effect heterogeneity to assess whether treatment efficacy varies according to patient characteristics. Identifying heterogeneity is central to making informed personalized healthcare decisions. Treatment effect heterogeneity can be investigated using subpopulation treatment effect pattern plot (STEPP), a non-parametric graphical approach that constructs overlapping patient subpopulations with varying values of a characteristic. Procedures for statistical testing using subpopulation treatment effect pattern plot when the endpoint of interest is survival remain an area of active investigation. A STEPP analysis was used to explore patterns of absolute and relative treatment effects for varying levels of a breast cancer biomarker, Ki-67, in the phase III Breast International Group 1-98 randomized clinical trial, comparing letrozole to tamoxifen as adjuvant therapy for postmenopausal women with hormone receptor-positive breast cancer. Absolute treatment effects were measured by differences in 4-year cumulative incidence of breast cancer recurrence, while relative effects were measured by the subdistribution hazard ratio in the presence of competing risks using O-E (observed-minus-expected) methodology, an intuitive non-parametric method. While estimation of hazard ratio values based on O-E methodology has been shown, a similar development for the subdistribution hazard ratio has not. Furthermore, we observed that the subpopulation treatment effect pattern plot analysis may not produce results, even with 100 patients within each subpopulation. After further investigation through simulation studies, we observed inflation of the type I error rate of the traditional test statistic and sometimes singular variance-covariance matrix estimates that may lead to results not being produced. This is due to the lack of sufficient number of events within the subpopulations, which we refer to as instability of the subpopulation treatment effect pattern plot analysis. We introduce methodology designed to improve stability of the subpopulation treatment effect pattern plot analysis and generalize O-E methodology to the competing risks setting. Simulation studies were designed to assess the type I error rate of the tests for a variety of treatment effect measures, including subdistribution hazard ratio based on O-E estimation. This subpopulation treatment effect pattern plot methodology and standard regression modeling were used to evaluate heterogeneity of Ki-67 in the Breast International Group 1-98 randomized clinical trial. We introduce methodology that generalizes O-E methodology to the competing risks setting and that improves stability of the STEPP analysis by pre-specifying the number of events across subpopulations while controlling the type I error rate. The subpopulation treatment effect pattern plot analysis of the Breast International Group 1-98 randomized clinical trial showed that patients with high Ki-67 percentages may benefit most from letrozole, while heterogeneity was not detected using standard regression modeling. The STEPP methodology can be used to study complex patterns of treatment effect heterogeneity, as illustrated in the Breast International Group 1-98 randomized clinical trial. For the subpopulation treatment effect pattern plot analysis, we recommend a minimum of 20 events within each subpopulation. © The Author(s) 2015.

  8. Is walking a random walk? Evidence for long-range correlations in stride interval of human gait

    NASA Technical Reports Server (NTRS)

    Hausdorff, Jeffrey M.; Peng, C.-K.; Ladin, Zvi; Wei, Jeanne Y.; Goldberger, Ary L.

    1995-01-01

    Complex fluctuations of unknown origin appear in the normal gait pattern. These fluctuations might be described as being (1) uncorrelated white noise, (2) short-range correlations, or (3) long-range correlations with power-law scaling. To test these possibilities, the stride interval of 10 healthy young men was measured as they walked for 9 min at their usual rate. From these time series we calculated scaling indexes by using a modified random walk analysis and power spectral analysis. Both indexes indicated the presence of long-range self-similar correlations extending over hundreds of steps; the stride interval at any time depended on the stride interval at remote previous times, and this dependence decayed in a scale-free (fractal-like) power-law fashion. These scaling indexes were significantly different from those obtained after random shuffling of the original time series, indicating the importance of the sequential ordering of the stride interval. We demonstrate that conventional models of gait generation fail to reproduce the observed scaling behavior and introduce a new type of central pattern generator model that successfully accounts for the experimentally observed long-range correlations.
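
    A compact sketch of detrended fluctuation analysis, one common form of the "modified random walk analysis" used to obtain such scaling indexes; the synthetic white-noise series below simply replaces the stride-interval data.

```python
# Sketch: DFA scaling exponent (alpha ~ 0.5 for uncorrelated noise,
# alpha ~ 1 for 1/f-like long-range correlations).
import numpy as np

def dfa_exponent(x, scales):
    y = np.cumsum(x - np.mean(x))              # integrated (random-walk) profile
    flucts = []
    for s in scales:
        n_win = len(y) // s
        f2 = 0.0
        for w in range(n_win):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            f2 += np.mean((seg - trend) ** 2)
        flucts.append(np.sqrt(f2 / n_win))
    # slope of log F(s) versus log s is the scaling exponent
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(5)
white = rng.normal(size=4096)                  # stand-in for a stride-interval series
scales = np.array([8, 16, 32, 64, 128, 256])
print("alpha (white noise):", round(dfa_exponent(white, scales), 2))
# shuffling a long-range correlated series pulls alpha back toward 0.5
```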

  9. Turing patterns and a stochastic individual-based model for predator-prey systems

    NASA Astrophysics Data System (ADS)

    Nagano, Seido

    2012-02-01

    Reaction-diffusion theory has played a very important role in the study of pattern formations in biology. However, a group of individuals is described by a single state variable representing population density in reaction-diffusion models and interaction between individuals can be included only phenomenologically. Recently, we have seamlessly combined individual-based models with elements of reaction-diffusion theory. To include animal migration in the scheme, we have adopted a relationship between the diffusion and the random numbers generated according to a two-dimensional bivariate normal distribution. Thus, we have observed the transition of population patterns from an extinction mode, a stable mode, or an oscillatory mode to the chaotic mode as the population growth rate increases. We show our phase diagram of predator-prey systems and discuss the microscopic mechanism for the stable lattice formation in detail.

  10. Random Evolution of Idiotypic Networks: Dynamics and Architecture

    NASA Astrophysics Data System (ADS)

    Brede, Markus; Behn, Ulrich

    The paper deals with modelling a subsystem of the immune system, the so-called idiotypic network (INW). INWs, conceived by N.K. Jerne in 1974, are functional networks of interacting antibodies and B cells. In principle, Jerne's framework provides solutions to many issues in immunology, such as immunological memory, mechanisms for antigen recognition and self/non-self discrimination. Explaining the interconnection between the elementary components, local dynamics, network formation and architecture, and possible modes of global system function appears to be an ideal playground of statistical mechanics. We present a simple cellular automaton model, based on a graph representation of the system. From a simplified description of idiotypic interactions, rules for the random evolution of networks of occupied and empty sites on these graphs are derived. In certain biologically relevant parameter ranges the resultant dynamics leads to stationary states. A stationary state is found to correspond to a specific pattern of network organization. It turns out that even these very simple rules give rise to a multitude of different kinds of patterns. We characterize these networks by classifying 'static' and 'dynamic' network patterns. A type of 'dynamic' network is found to display many features of real INWs.

  11. The effects of biome and spatial scale on the Co-occurrence patterns of a group of Namibian beetles

    NASA Astrophysics Data System (ADS)

    Pitzalis, Monica; Montalto, Francesca; Amore, Valentina; Luiselli, Luca; Bologna, Marco A.

    2017-08-01

    Co-occurrence patterns (studied by C-score, number of checkerboard units, number of species combinations, and V-ratio, and by an empirical Bayes approach developed by Gotelli and Ulrich, 2010) are crucial elements for understanding assembly rules in ecological communities at both local and spatial scales. In order to explore general assembly rules and the effects of biome and spatial scale on such rules, here we studied a group of beetles (Coleoptera, Meloidae), using Namibia as a case study. Data were gathered from 186 sampling sites, which allowed collection of 74 different species. We analyzed data at the level of (i) all sampled sites, (ii) all sites stratified by biome (Savannah, Succulent Karoo, Nama Karoo, Desert), and (iii) three randomly selected nested areas with three spatial scales each. Three competing algorithms were used for all analyses: (i) Fixed-Equiprobable, (ii) Fixed-Fixed, and (iii) Fixed-Proportional. In most of the null models we created, co-occurrence indicators revealed a non-random structure in meloid beetle assemblages at the global scale and at the scale of biomes, with species aggregation being much more important than species segregation in determining this non-randomness. At the level of biome, the same non-random organization was uncovered in assemblages from Savannah (where the aggregation pattern was particularly strong) and Succulent Karoo, but not in Desert and Nama Karoo. We conclude that species facilitation and niche similarity in endemic species pairs may be particularly important as community drivers in our case study. This pattern is also consistent with the evidence of a higher species diversity (normalized according to biome surface area) in the two former biomes. Historical patterns were perhaps also important for Succulent Karoo assemblages. Spatial scale had a reduced effect on the patterning of our data. This is consistent with the general homogeneity of environmental conditions over wide areas in Namibia.
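
    A minimal sketch of the C-score together with a Fixed-Equiprobable null model (row totals fixed, all sites equally likely); the presence/absence matrix is randomly generated here and merely stands in for the species-by-site beetle data.

```python
# Sketch: C-score and a Fixed-Equiprobable randomization of a species x site matrix.
import numpy as np

rng = np.random.default_rng(6)

def c_score(mat):
    # mean number of checkerboard units (Ri - Sij)(Rj - Sij) over species pairs
    R = mat.sum(axis=1)
    S = mat @ mat.T
    cu = (R[:, None] - S) * (R[None, :] - S)
    iu = np.triu_indices(len(R), k=1)
    return cu[iu].mean()

def fixed_equiprobable(mat):
    out = np.zeros_like(mat)
    for i, row in enumerate(mat):              # keep each species' frequency,
        cols = rng.choice(mat.shape[1], size=int(row.sum()), replace=False)
        out[i, cols] = 1                       # but place it on random sites
    return out

obs = (rng.random((20, 40)) < 0.3).astype(int)   # stand-in: 20 species x 40 sites
observed = c_score(obs)
null = np.array([c_score(fixed_equiprobable(obs)) for _ in range(999)])
ses = (observed - null.mean()) / null.std()
print(f"C-score = {observed:.2f}, SES = {ses:.2f}")  # |SES| >> 0 suggests non-randomness
```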

  12. Diversity of multilayer networks and its impact on collaborating epidemics

    NASA Astrophysics Data System (ADS)

    Min, Yong; Hu, Jiaren; Wang, Weihong; Ge, Ying; Chang, Jie; Jin, Xiaogang

    2014-12-01

    Interacting epidemics on diverse multilayer networks are increasingly important in modeling and analyzing the diffusion processes of real complex systems. A viral agent spreading on one layer of a multilayer network can interact with its counterparts by promoting (cooperative interaction), suppressing (competitive interaction), or inducing (collaborating interaction) its diffusion on other layers. Collaborating interaction displays different patterns: (i) random collaboration, where intralayer or interlayer induction has the same probability; (ii) concentrating collaboration, where consecutive intralayer induction is guaranteed with a probability of 1; and (iii) cascading collaboration, where consecutive intralayer induction is banned with a probability of 0. In this paper, we develop a top-bottom framework that uses only two distributions, the overlaid degree distribution and edge-type distribution, to model collaborating epidemics on multilayer networks. We then state the response of three collaborating patterns to structural diversity (evenness and difference of network layers). For viral agents with small transmissibility, we find that random collaboration is more effective in networks with higher diversity (high evenness and difference), while the concentrating pattern is more suitable in uneven networks. Interestingly, the cascading pattern requires a network with moderate difference and high evenness, and the moderately uneven coupling of multiple network layers can effectively increase robustness to resist cascading failure. With large transmissibility, however, we find that all collaborating patterns are more effective in high-diversity networks. Our work provides a systemic analysis of collaborating epidemics on multilayer networks. The results enhance our understanding of biotic and informative diffusion through multiple vectors.

  13. Emergent central pattern generator behavior in gap-junction-coupled Hodgkin-Huxley style neuron model.

    PubMed

    Horn, Kyle G; Memelli, Heraldo; Solomon, Irene C

    2012-01-01

    Most models of central pattern generators (CPGs) involve two distinct nuclei mutually inhibiting one another via synapses. Here, we present a single-nucleus model of biologically realistic Hodgkin-Huxley neurons with random gap junction coupling. Despite no explicit division of neurons into two groups, we observe a spontaneous division of neurons into two distinct firing groups. In addition, we also demonstrate this phenomenon in a simplified version of the model, highlighting the importance of afterhyperpolarization currents (I(AHP)) to CPGs utilizing gap junction coupling. The properties of these CPGs also appear sensitive to gap junction conductance, probability of gap junction coupling between cells, topology of gap junction coupling, and, to a lesser extent, input current into our simulated nucleus.

  14. A new phenotyping pipeline reveals three types of lateral roots and a random branching pattern in two cereals.

    PubMed

    Passot, Sixtine; Moreno-Ortega, Beatriz; Moukouanga, Daniel; Balsera, Crispulo; Guyomarc'h, Soazig; Lucas, Mikael; Lobet, Guillaume; Laplaze, Laurent; Muller, Bertrand; Guédon, Yann

    2018-05-11

    Recent progress in root phenotyping has focused mainly on increasing throughput for genetic studies while identifying root developmental patterns has been comparatively underexplored. We introduce a new phenotyping pipeline for producing high-quality spatio-temporal root system development data and identifying developmental patterns within these data. The SmartRoot image analysis system and temporal and spatial statistical models were applied to two cereals, pearl millet (Pennisetum glaucum) and maize (Zea mays). Semi-Markov switching linear models were used to cluster lateral roots based on their growth rate profiles. These models revealed three types of lateral roots with similar characteristics in both species. The first type corresponds to fast and accelerating roots, the second to rapidly arrested roots, and the third to an intermediate type where roots cease elongation after a few days. These types of lateral roots were retrieved in different proportions in a maize mutant affected in auxin signaling, while the first most vigorous type was absent in maize plants exposed to severe shading. Moreover, the classification of growth rate profiles was mirrored by a ranking of anatomical traits in pearl millet. Potential dependencies in the succession of lateral root types along the primary root were then analyzed using variable-order Markov chains. The lateral root type was not influenced by the shootward neighbor root type or by the distance from this root. This random branching pattern of primary roots was remarkably conserved, despite the high variability of root systems in both species. Our phenotyping pipeline opens the door to exploring the genetic variability of lateral root developmental patterns. © 2018 American Society of Plant Biologists. All rights reserved.

  15. Effects of ion channel noise on neural circuits: an application to the respiratory pattern generator to investigate breathing variability.

    PubMed

    Yu, Haitao; Dhingra, Rishi R; Dick, Thomas E; Galán, Roberto F

    2017-01-01

    Neural activity generally displays irregular firing patterns even in circuits with apparently regular outputs, such as motor pattern generators, in which the output frequency fluctuates randomly around a mean value. This "circuit noise" is inherited from the random firing of single neurons, which emerges from stochastic ion channel gating (channel noise), spontaneous neurotransmitter release, and its diffusion and binding to synaptic receptors. Here we demonstrate how to expand conductance-based network models that are originally deterministic to include realistic, physiological noise, focusing on stochastic ion channel gating. We illustrate this procedure with a well-established conductance-based model of the respiratory pattern generator, which allows us to investigate how channel noise affects neural dynamics at the circuit level and, in particular, to understand the relationship between the respiratory pattern and its breath-to-breath variability. We show that as the channel number increases, the duration of inspiration and expiration varies, and so does the coefficient of variation of the breath-to-breath interval, which attains a minimum when the mean duration of expiration slightly exceeds that of inspiration. For small channel numbers, the variability of the expiratory phase dominates over that of the inspiratory phase, and vice versa for large channel numbers. Among the four different cell types in the respiratory pattern generator, pacemaker cells exhibit the highest sensitivity to channel noise. The model shows that suppressing input from the pons leads to longer inspiratory phases, a reduction in breathing frequency, and larger breath-to-breath variability, whereas enhanced input from the raphe nucleus increases breathing frequency without changing its pattern. A major source of noise in neuronal circuits is the "flickering" of ion currents passing through the neurons' membranes (channel noise), which cannot be suppressed experimentally. Computational simulations are therefore the best way to investigate the effects of this physiological noise by manipulating its level at will. We investigate the role of noise in the respiratory pattern generator and show that endogenous, breath-to-breath variability is tightly linked to the respiratory pattern. Copyright © 2017 the American Physiological Society.

  16. Slowdowns in diversification rates from real phylogenies may not be real.

    PubMed

    Cusimano, Natalie; Renner, Susanne S

    2010-07-01

    Studies of diversification patterns often find a slowing in lineage accumulation toward the present. This seemingly pervasive pattern of rate downturns has been taken as evidence for adaptive radiations, density-dependent regulation, and metacommunity species interactions. The significance of rate downturns is evaluated with statistical tests (the gamma statistic and Monte Carlo constant rates (MCCR) test; birth-death likelihood models and Akaike Information Criterion [AIC] scores) that rely on null distributions, which assume that the included species are a random sample of the entire clade. Sampling in real phylogenies, however, often is nonrandom because systematists try to include early-diverging species or representatives of previous intrataxon classifications. We studied the effects of biased sampling, structured sampling, and random sampling by experimentally pruning simulated trees (60 and 150 species) as well as a completely sampled empirical tree (58 species) and then applying the gamma statistic/MCCR test and birth-death likelihood models/AIC scores to assess rate changes. For trees with random species sampling, the true model (i.e., the one fitting the complete phylogenies) could be inferred in most cases. Oversampling deep nodes, however, strongly biases inferences toward downturns, with simulations of structured and biased sampling suggesting that this occurs when sampling percentages drop below 80%. The magnitude of the effect and the sensitivity of diversification rate models is such that a useful rule of thumb may be not to infer rate downturns from real trees unless they have >80% species sampling.
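
    The gamma statistic mentioned above is computed directly from the internode intervals of an ultrametric tree; the sketch below uses the commonly cited Pybus-Harvey form with invented intervals. Under a constant-rate pure-birth model gamma is approximately standard normal, and strongly negative values are read as slowdowns, which is exactly the quantity whose null distribution biased sampling distorts.

```python
# Sketch: Pybus-Harvey gamma statistic from internode intervals g_2..g_n,
# where g_k is the interval during which k lineages existed (invented values).
import math

def gamma_statistic(g):
    n = len(g) + 1                                           # number of tips
    terms = [k * gk for k, gk in zip(range(2, n + 1), g)]    # k * g_k
    total = sum(terms)                                       # T
    cum, inner = 0.0, 0.0
    for i in range(2, n):                                    # i = 2 .. n-1
        cum += terms[i - 2]                                  # sum_{k=2}^{i} k g_k
        inner += cum
    numerator = inner / (n - 2) - total / 2.0
    return numerator / (total * math.sqrt(1.0 / (12 * (n - 2))))

intervals = [0.8, 0.7, 0.6, 0.5, 0.45, 0.4, 0.35, 0.3, 0.25, 0.2]   # g_2 .. g_11
print(f"gamma = {gamma_statistic(intervals):.2f}")
```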

  17. Spatial effects in discrete generation population models.

    PubMed

    Carrillo, C; Fife, P

    2005-02-01

    A framework is developed for constructing a large class of discrete generation, continuous space models of evolving single species populations and finding their bifurcating patterned spatial distributions. Our models involve, in separate stages, the spatial redistribution (through movement laws) and local regulation of the population; and the fundamental properties of these events in a homogeneous environment are found. Emphasis is placed on the interaction of migrating individuals with the existing population through conspecific attraction (or repulsion), as well as on random dispersion. The nature of the competition of these two effects in a linearized scenario is clarified. The bifurcation of stationary spatially patterned population distributions is studied, with special attention given to the role played by that competition.

  18. The Role of Visual Eccentricity on Preference for Abstract Symmetry

    PubMed Central

    Rampone, Giulia; O'Sullivan, Noreen; Bertamini, Marco

    2016-01-01

    This study tested preference for abstract patterns, comparing random patterns to a two-fold bilateral symmetry. Stimuli were presented at random locations in the periphery. Preference for bilateral symmetry has been extensively studied in central vision, but evaluation at different locations had not been systematically investigated. Patterns were presented for 200 ms within a large circular region. On each trial participants changed fixation and were instructed to select any location. Eccentricity values were calculated a posteriori as the distance between ocular coordinates at pattern onset and coordinates for the centre of the pattern. Experiment 1 consisted of two tasks. In Task 1, participants detected pattern regularity as fast as possible. In Task 2 they evaluated their liking for the pattern on a Likert scale. Results from Task 1 revealed that with our parameters eccentricity did not affect symmetry detection. However, in Task 2, eccentricity predicted more negative evaluation of symmetry, but not of random patterns. In Experiment 2 participants were presented with either symmetric or random patterns. Regularity was task-irrelevant in this task. Participants discriminated the proportion of black/white dots within the pattern and then evaluated their liking for the pattern. Even when only one type of regularity was presented and regularity was task-irrelevant, preference evaluation for symmetry decreased with increasing eccentricity, whereas eccentricity did not affect the evaluation of random patterns. We conclude that symmetry appreciation is higher for foveal presentation in a way not fully accounted for by sensitivity. PMID:27124081

  20. Nonstandard convergence to jamming in random sequential adsorption: The case of patterned one-dimensional substrates

    NASA Astrophysics Data System (ADS)

    Verma, Arjun; Privman, Vladimir

    2018-02-01

    We study the approach to the large-time jammed state of the deposited particles in the model of random sequential adsorption. The convergence laws are usually derived from the argument of Pomeau, which includes the assumption of the dominance, at large enough times, of small landing regions into each of which only a single particle can be deposited without overlapping earlier deposited particles and which, after a certain time, are no longer created by depositions in larger gaps. The second assumption has been that the size distribution of gaps open for particle-center landing in this large-time small-gaps regime is finite in the limit of zero gap size. We report numerical Monte Carlo studies of a recently introduced model of random sequential adsorption on patterned one-dimensional substrates that suggest that the second assumption must be generalized. We argue that a region exists in the parameter space of the studied model in which the gap-size distribution in the Pomeau large-time regime actually linearly vanishes at zero gap sizes. In another region, the distribution develops a threshold property, i.e., there are no small gaps below a certain gap size. We discuss the implications of these findings for new asymptotic power-law and exponential-modified-by-a-power-law convergences to jamming in irreversible one-dimensional deposition.
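
    A small simulation of the classical, unpatterned one-dimensional RSA process helps fix the quantities discussed above: the coverage approaching jamming and the gaps still open for particle-centre landing. The patterned-substrate generalization studied in the paper is not reproduced here; the line length and attempt count are arbitrary.

```python
# Sketch: 1-D random sequential adsorption of unit rods on a continuous line.
import numpy as np

rng = np.random.default_rng(7)
length, sigma = 100.0, 1.0
centers = []

attempts = 50000
for _ in range(attempts):
    x = rng.uniform(sigma / 2, length - sigma / 2)
    if all(abs(x - c) >= sigma for c in centers):    # no overlap with earlier rods
        centers.append(x)

coverage = len(centers) * sigma / length
print(f"coverage after {attempts} attempts: {coverage:.3f}")  # tends to ~0.7476 (Renyi)

# gaps still open for a particle-centre landing between neighbouring rods
centers.sort()
landing = np.diff(centers) - 2 * sigma
landing = landing[landing > 0]
print(len(landing), "open landing gaps; smallest:", np.round(np.sort(landing)[:5], 3))
```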

  1. An agent-based model of dialect evolution in killer whales.

    PubMed

    Filatova, Olga A; Miller, Patrick J O

    2015-05-21

    The killer whale is one of the few animal species with vocal dialects that arise from socially learned group-specific call repertoires. We describe a new agent-based model of killer whale populations and test a set of vocal-learning rules to assess which mechanisms may lead to the formation of dialect groupings observed in the wild. We tested a null model with genetic transmission and no learning, and ten models with learning rules that differ by template source (mother or matriline), variation type (random errors or innovations) and type of call change (no divergence from kin vs. divergence from kin). The null model without vocal learning did not produce the pattern of group-specific call repertoires we observe in nature. Learning from either mother alone or the entire matriline with calls changing by random errors produced a graded distribution of the call phenotype, without the discrete call types observed in nature. Introducing occasional innovation or random error proportional to matriline variance yielded more or less discrete and stable call types. A tendency to diverge from the calls of related matrilines provided fast divergence of loose call clusters. A pattern resembling the dialect diversity observed in the wild arose only when rules were applied in combinations and similar outputs could arise from different learning rules and their combinations. Our results emphasize the lack of information on quantitative features of wild killer whale dialects and reveal a set of testable questions that can draw insights into the cultural evolution of killer whale dialects. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Bioconvection in Cultures of the Calcifying Unicellular Alga Pleurochrysis Carterae

    NASA Technical Reports Server (NTRS)

    Montufar-Solis, Dina; Duke, P. Jackie; Marsh, Mary E.

    2003-01-01

    The unicellular, marine, calcifying alga Pleurochrysis carterae--a model to study cell morphogenesis, cell polarity, calcification, gravitaxis, reproduction and development--has extremely flexible culture requirements. Support studies for a flight experiment addressing cell motility suggested that cell density (cells/ml) affects cell movement in P. carterae cultures through the gradual establishment of bioconvection as the culture grows. To assess the effect of cell density on the direction of movement, without the effects of aging of the culture, swimming behavior was analyzed in aliquots from a series of dilutions obtained from a stock culture. Results showed that at low concentrations cells swim randomly. As the concentration increases, upswimming patterns overtake random swimming. Gradually, up and down movement patterns prevail, representative of bioconvection. This oriented swimming of P. carterae occurs in a wide range of concentrations, adding to the list of flexible requirements, in this case, cell concentration, to be used for spaceflight studies addressing cell motility and bioconvection in a unicellular model of biologically directed mineralization.

  3. Laser-induced speckle scatter patterns in Bacillus colonies

    PubMed Central

    Kim, Huisung; Singh, Atul K.; Bhunia, Arun K.; Bae, Euiwon

    2014-01-01

    The label-free bacterial colony phenotyping technology called BARDOT (Bacterial Rapid Detection using Optical scattering Technology) provided successful classification of several different bacteria at the genus, species, and serovar level. Recent experiments with colonies of Bacillus species provided strikingly different characteristics of elastic light scatter (ELS) patterns, which comprised random speckles, in contrast to those of other bacteria, which are dominated by concentric rings and spokes. Since this laser-based optical sensor interrogates the whole volume of the colony, 3-D information of micro- and macro-structures is all encoded in the far-field scatter patterns. Here, we present a theoretical model explaining the underlying mechanism of speckle formation by the colonies of Bacillus species. Except for Bacillus polymyxa, all Bacillus spp. produced random bright spots on the imaging plane, which presumably depend on the cellular and molecular organization and content within the colony. Our scatter model-based analysis revealed that colony spread resulting in variable surface roughness can modify the wavefront of the scatter field. As the center diameter of the Bacillus spp. colony grew from 500 to 900 μm, the average speckle area decreased two-fold and the number of small speckles increased seven-fold. In conclusion, as a Bacillus colony grows, the average speckle size in the scatter pattern decreases and the number of smaller speckles increases due to the swarming growth characteristics of bacteria within the colony. PMID:25352840

  4. Universal principles governing multiple random searchers on complex networks: The logarithmic growth pattern and the harmonic law

    NASA Astrophysics Data System (ADS)

    Weng, Tongfeng; Zhang, Jie; Small, Michael; Harandizadeh, Bahareh; Hui, Pan

    2018-03-01

    We propose a unified framework to evaluate and quantify the search time of multiple random searchers traversing independently and concurrently on complex networks. We find that the intriguing behaviors of multiple random searchers are governed by two basic principles—the logarithmic growth pattern and the harmonic law. Specifically, the logarithmic growth pattern characterizes how the search time increases with the number of targets, while the harmonic law explores how the search time of multiple random searchers varies relative to that needed by individual searchers. Numerical and theoretical results demonstrate these two universal principles established across a broad range of random search processes, including generic random walks, maximal entropy random walks, intermittent strategies, and persistent random walks. Our results reveal two fundamental principles governing the search time of multiple random searchers, which are expected to facilitate investigation of diverse dynamical processes like synchronization and spreading.
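
    A toy simulation of this setting: k independent walkers search for a single target node on a connected random graph, and the mean parallel search time is recorded as k grows. The graph construction and the sizes are assumptions for illustration; the shortening with k is what the paper's harmonic law makes precise.

```python
# Sketch: mean search time of k independent random walkers for one target node.
import numpy as np

rng = np.random.default_rng(8)
n = 200
A = np.zeros((n, n), dtype=bool)
idx = np.arange(n)
A[idx, (idx + 1) % n] = True                    # a ring keeps the graph connected
chords = rng.integers(0, n, size=(3 * n, 2))    # plus random chords
A[chords[:, 0], chords[:, 1]] = True
A |= A.T
np.fill_diagonal(A, False)
adj = [np.flatnonzero(A[i]) for i in range(n)]

def hitting_time(start, target):
    node, steps = start, 0
    while node != target:
        node = int(rng.choice(adj[node]))       # unbiased nearest-neighbour step
        steps += 1
    return steps

def parallel_search_time(k, target=0):
    starts = rng.integers(0, n, size=k)
    return min(hitting_time(int(s), target) for s in starts)

for k in (1, 2, 4, 8):
    times = [parallel_search_time(k) for _ in range(100)]
    print(f"k = {k}: mean search time {np.mean(times):8.1f}")
```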

  5. First passage time: Connecting random walks to functional responses in heterogeneous environments (Invited)

    NASA Astrophysics Data System (ADS)

    Lewis, M. A.; McKenzie, H.; Merrill, E.

    2010-12-01

    In this talk I will outline first passage time analysis for animals undertaking complex movement patterns, and will demonstrate how first passage time can be used to derive functional responses in predator prey systems. The result is a new approach to understanding type III functional responses based on a random walk model. I will extend the analysis to heterogeneous environments to assess the effects of linear features on functional responses in wolves and elk using GPS tracking data.

  6. Covering Ground: Movement Patterns and Random Walk Behavior in Aquilonastra anomala Sea Stars.

    PubMed

    Lohmann, Amanda C; Evangelista, Dennis; Waldrop, Lindsay D; Mah, Christopher L; Hedrick, Tyson L

    2016-10-01

    The paths animals take while moving through their environments affect their likelihood of encountering food and other resources; thus, models of foraging behavior abound. To collect movement data appropriate for comparison with these models, we used time-lapse photography to track movements of a small, hardy, and easy-to-obtain organism, Aquilonastra anomala sea stars. We recorded the sea stars in a tank over many hours, with and without a food cue. With food present, they covered less distance, as predicted by theory; this strategy would allow them to remain near food. We then compared the paths of the sea stars to three common models of animal movement: Brownian motion, Lévy walks, and correlated random walks; we found that the sea stars' movements most closely resembled a correlated random walk. Additionally, we compared the search performance of models of Brownian motion, a Lévy walk, and a correlated random walk to that of a model based on the sea stars' movements. We found that the behavior of the modeled sea star walk was similar to that of the modeled correlated random walk and the Brownian motion model, but that the sea star walk was slightly more likely than the other walks to find targets at intermediate distances. While organisms are unlikely to follow an idealized random walk in all details, our data suggest that comparing the effectiveness of an organism's paths to those from theory can give insight into the organism's actual movement strategy. Finally, automated optical tracking of invertebrates proved feasible, and A. anomala was revealed to be a tractable, 2D-movement study system.
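
    A minimal sketch contrasting an uncorrelated random walk with a correlated random walk generated from concentrated turning angles; the step length and turning-angle concentration are illustrative values, not parameters fitted to the A. anomala tracks.

```python
# Sketch: uncorrelated walk vs. correlated random walk (CRW) via turning angles.
import numpy as np

rng = np.random.default_rng(9)

def walk(n_steps, step=1.0, kappa=0.0):
    """kappa = 0 gives uniform turning angles (uncorrelated); larger kappa
    concentrates turns near zero, yielding a correlated random walk."""
    heading = rng.uniform(0, 2 * np.pi)
    pos = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        heading += rng.vonmises(0.0, kappa)     # turning angle between steps
        pos[i + 1] = pos[i] + step * np.array([np.cos(heading), np.sin(heading)])
    return pos

for name, kappa in [("uncorrelated", 0.0), ("correlated (CRW)", 4.0)]:
    disp = [np.linalg.norm(walk(200, kappa=kappa)[-1]) for _ in range(300)]
    print(f"{name:17s} mean net displacement after 200 steps: {np.mean(disp):6.1f}")
# comparing such statistics (and the turning-angle distribution) against the
# tracked paths is one way to decide which movement model fits best
```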

  7. A fast process development flow by applying design technology co-optimization

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Chieh; Yeh, Shin-Shing; Ou, Tsong-Hua; Lin, Hung-Yu; Mai, Yung-Ching; Lin, Lawrence; Lai, Jun-Cheng; Lai, Ya Chieh; Xu, Wei; Hurat, Philippe

    2017-03-01

    Beyond the 40 nm technology node, pattern weak points and hotspot types increase dramatically. Typical pattern sets for lithography verification suffer from huge turn-around time (TAT) when handling the design complexity. Therefore, in order to speed up process development and increase pattern variety, accurate design guidelines and realistic design combinations are required. This paper presents a flow for creating a cell-based layout, a lite realistic design, to identify, at an early stage, problematic patterns that will negatively affect yield. A new random layout generating method, Design Technology Co-Optimization Pattern Generator (DTCO-PG), is reported in this paper to create cell-based designs. DTCO-PG also includes how to characterize the randomness and fuzziness, so that it is able to build up a machine learning scheme whose model can be trained on previous results and then generate patterns never seen in a lite design. This methodology not only increases pattern diversity but also finds potential hotspots at a preliminary stage. This paper also demonstrates an integrated flow from DTCO pattern generation to layout modification. Optical proximity correction (OPC) and lithographic simulation are then applied to the DTCO-PG design database to detect hotspots, and then hotspots or weak points can be automatically fixed through the procedure or handled manually. This flow gives process development a faster cycle time, more complex pattern designs, a higher probability of finding potential hotspots at an early stage, and a more holistic yield-ramping operation.

  8. Emergence of Alpha and Gamma Like Rhythms in a Large Scale Simulation of Interacting Neurons

    NASA Astrophysics Data System (ADS)

    Gaebler, Philipp; Miller, Bruce

    2007-10-01

    In the normal brain, at first glance the electrical activity appears very random. However, certain frequencies emerge during specific stages of sleep or between quiet wake states. This raises the question of whether current mathematical and computational models of interacting neurons can display similar behavior. A recent model developed by Eugene Izhikevich appears to succeed. However, early dynamical simulations used to detect these patterns were possibly compromised by an over-simplified initial condition and evolution algorithm. Utilizing the same model, but a more robust algorithm, here we present our initial results, showing that these patterns persist under a wide range of initial conditions. We employ spectral analysis of the firing patterns of a system of interacting excitatory and inhibitory neurons to demonstrate a bimodal spectrum centered on two frequencies in the range characteristic of alpha and gamma rhythms in the human brain.

  9. Learning pattern recognition and decision making in the insect brain

    NASA Astrophysics Data System (ADS)

    Huerta, R.

    2013-01-01

    We revise the current model of learning pattern recognition in the Mushroom Bodies of insects using current experimental knowledge about the location of learning, olfactory coding and connectivity. We show that it is possible to have an efficient pattern recognition device based on the architecture of the Mushroom Bodies, sparse code, mutual inhibition and Hebbian learning only in the connections from the Kenyon cells to the output neurons. We also show that, despite the conventional wisdom that artificial neural networks are the bioinspired model of the brain, the Mushroom Bodies actually closely resemble Support Vector Machines (SVMs). The derived SVM learning rules are situated in the Mushroom Bodies, are nearly identical to standard Hebbian rules, and require inhibition in the output. A very particular prediction of the model is that random elimination of the Kenyon cells in the Mushroom Bodies does not impair the ability to recognize odorants previously learned.

  10. Not accounting for interindividual variability can mask habitat selection patterns: a case study on black bears.

    PubMed

    Lesmerises, Rémi; St-Laurent, Martin-Hugues

    2017-11-01

    Habitat selection studies conducted at the population scale commonly aim to describe general patterns that could improve our understanding of the limiting factors in species-habitat relationships. Researchers often consider interindividual variation in selection patterns to control for its effects and avoid pseudoreplication by using mixed-effect models that include individuals as random factors. Here, we highlight common pitfalls and possible misinterpretations of this strategy by describing habitat selection of 21 black bears Ursus americanus. We used Bayesian mixed-effect models and compared results obtained when using random intercept (i.e., population level) versus calculating individual coefficients for each independent variable (i.e., individual level). We then related interindividual variability to individual characteristics (i.e., age, sex, reproductive status, body condition) in a multivariate analysis. The assumption of comparable behavior among individuals was verified only in 40% of the cases in our seasonal best models. Indeed, we found strong and opposite responses among sampled bears and individual coefficients were linked to individual characteristics. For some covariates, contrasted responses canceled each other out at the population level. In other cases, interindividual variability was concealed by the composition of our sample, with the majority of the bears (e.g., old individuals and bears in good physical condition) driving the population response (e.g., selection of young forest cuts). Our results stress the need to consider interindividual variability to avoid misinterpretation and uninformative results, especially for a flexible and opportunistic species. This study helps to identify some ecological drivers of interindividual variability in bear habitat selection patterns.

  11. Sex difference in human fingertip recognition of micron-level randomness as unpleasant.

    PubMed

    Nakatani, M; Kawasoe, T; Denda, M

    2011-08-01

    We investigated sex difference in evaluation, using the human fingertip, of the tactile impressions of three different micron-scale patterns laser-engraved on plastic plates. There were two ordered (periodical) patterns consisting of ripples on a scale of a few micrometres and one pseudo-random (non-periodical) pattern; these patterns were considered to mimic the surface geometry of healthy and damaged human hair, respectively. In the first experiment, 10 women and 10 men ran a fingertip over each surface and determined which of the three plates felt most unpleasant. All 10 female participants reported the random pattern, but not the ordered patterns, as unpleasant, whereas the majority of the male participants did not. In the second experiment, 9 of 10 female participants continued to report the pseudo-random pattern as unpleasant even after their fingertip had been coated with a collodion membrane. In the third experiment, participants were asked to evaluate the magnitude of the tactile impression for each pattern. The results again indicated that female participants tend to report a greater magnitude of unpleasantness than male participants. Our findings indicate that the female participants could readily detect microgeometric surface characteristics and that they evaluated the random pattern as more unpleasant. Possible physical and perceptual mechanisms involved are discussed. © 2011 The Authors. ICS © 2011 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  12. The emergence of collective phenomena in systems with random interactions

    NASA Astrophysics Data System (ADS)

    Abramkina, Volha

    Emergent phenomena are one of the most profound topics in modern science, addressing the ways that collectivities and complex patterns appear due to multiplicity of components and simple interactions. Ensembles of random Hamiltonians allow one to explore emergent phenomena in a statistical way. In this work we adopt a shell model approach with a two-body interaction Hamiltonian. The sets of the two-body interaction strengths are selected at random, resulting in the two-body random ensemble (TBRE). Symmetries such as angular momentum, isospin, and parity entangled with complex many-body dynamics result in surprising order discovered in the spectrum of low-lying excitations. The statistical patterns exhibited in the TBRE are remarkably similar to those observed in real nuclei. Signs of almost every collective feature seen in nuclei, namely, pairing superconductivity, deformation, and vibration, have been observed in random ensembles [3, 4, 5, 6]. In what follows a systematic investigation of nuclear shape collectivities in random ensembles is conducted. The development of the mean field, its geometry, multipole collectivities and their dependence on the underlying two-body interaction are explored. Apart from the role of static symmetries such as SU(2) angular momentum and isospin groups, the emergence of dynamical symmetries including the seniority SU(2), rotational symmetry, as well as the Elliot SU(3) is shown to be an important precursor for the existence of geometric collectivities.

  13. A Novel Deployment Scheme Based on Three-Dimensional Coverage Model for Wireless Sensor Networks

    PubMed Central

    Xiao, Fu; Yang, Yang; Wang, Ruchuan; Sun, Lijuan

    2014-01-01

    Coverage pattern and deployment strategy determine how the limited resources of a wireless sensor network, such as node energy, communication bandwidth, and computing power, are allocated, and quality improvement in such networks is largely determined by them. A three-dimensional coverage pattern and deployment scheme are proposed in this paper. First, by analyzing regular polyhedron models in a three-dimensional scene, a coverage pattern based on cuboids is proposed; the relationship between coverage and the sensing radius of the nodes is then derived, and the minimum number of sensor nodes needed to maintain full coverage of the network area is calculated. Finally, sensor nodes are deployed according to the coverage pattern after the monitored area is subdivided into a finite 3D grid. Experimental results show that, compared with the traditional random method, the number of sensor nodes is reduced effectively while the coverage rate of the monitored area is maintained using our coverage pattern and deterministic deployment scheme. PMID:25045747
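
    The geometric core of such a cuboid-based scheme — each grid cell must fit inside one sensing sphere, and the largest cube inscribed in a sphere of radius r has side 2r/sqrt(3) — gives the minimum node count directly. A minimal sketch with made-up region dimensions:

        # Sketch: minimum node count for full coverage of a 3D region using a cubic-grid
        # deployment, where each cell is small enough to sit inside one sensing sphere.
        import math

        def nodes_for_full_coverage(lx, ly, lz, r):
            side = 2.0 * r / math.sqrt(3.0)   # largest cube fully contained in a sphere of radius r
            nx = math.ceil(lx / side)
            ny = math.ceil(ly / side)
            nz = math.ceil(lz / side)
            return nx * ny * nz               # one node at the centre of each cell

        print(nodes_for_full_coverage(100.0, 100.0, 30.0, 10.0))  # example dimensions in metres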

  14. Local and regional species diversity of benthic Isopoda (Crustacea) in the deep Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Wilson, George D. F.

    2008-12-01

    Recent studies of deep-sea faunas considered the influence of mid-domain models in the distribution of species diversity and richness with depth. In this paper, I show that separating local diversity from regional species richness in benthic isopods clarifies mid-domain effects in the distribution of isopods in the Gulf of Mexico. Deviations from the randomised implied species ranges can be informative to understanding general patterns within the Gulf of Mexico. The isopods from the GoMB study contained 135 species, with a total of 156 species including those from an earlier study. More than 60 species may be new to science. Most families of deep-sea isopods (suborder Asellota) were present, although some were extremely rare. The isopod family Desmosomatidae dominated the samples, and one species of Macrostylis (Macrostylidae) was found in many samples. Species richness for samples pooled within sites ranged from 1 to 52 species. Because species in pooled samples were highly correlated with individuals, species diversity was compared across sites using the expected species estimator (n = 15 individuals, ES(15)). Six depth transects had idiosyncratic patterns of ES(15), and transects with the greatest short-range variation in topography, such as basins and canyons, had the greatest short-range disparity. Basins on the deep slope did not have a consistent influence (i.e., relatively higher or lower than surrounding areas) on the comparative species diversity. ES(15) of all transects together showed a weak mid-domain effect, peaking around 1200-1500 m, with low values at the shallowest and deepest samples (Sigsbee Abyssal Plain); no longitudinal (east-west) pattern was found. The regional species pool was analyzed by summing the implied ranges of all species. The species ranges in aggregate did not have significant patterns across longitudes, and many species had broad depth ranges, suggesting that the isopod fauna of the Gulf of Mexico is well dispersed. The summed ranges, as expected, had strong mid-domain patterns, contrasting with the local species richness estimates. The longitudinal ranges closely matched a randomized pattern (species ranges placed randomly, 1000 iterations), with significant deviations in the east attributable to lower sampling effort. The depth pattern, however, deviated from the mid-domain model, with a bimodal peak displaced nearly 500 m shallower than the mode of the randomized distribution. The deviations from random expectation were significantly positive above 1600 m and negative below 2000 m, with the result that regional species richness peaked between 800 and 1200 m, and decreased rapidly at deeper depths. The highest species richness intervals corresponded to the number of individuals collected. Residuals from a regression of the deviations on individual numbers, however, still deviated from the randomized pattern. In this declining depth-diversity pattern, the Gulf of Mexico resembles other partially enclosed basins, such as the Norwegian Sea, known to have suffered geologically recent extinction events. This displaced diversity pattern and broad depth ranges implicate ongoing re-colonization of the deeper parts of the Gulf of Mexico. The Sigsbee Abyssal Plain sites could be depauperate for historical reasons (e.g., one or more extinction events) rather than ongoing ecological reasons (e.g., low food supply).
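
    ES(15) here is Hurlbert's rarefaction estimator: the expected number of species in a random subsample of n = 15 individuals. A direct implementation, with a made-up abundance vector standing in for a pooled isopod sample, is:

        # Sketch: Hurlbert's expected species richness ES(n) for a sample rarefied to n individuals.
        from scipy.special import comb

        def expected_species(abundances, n=15):
            N = sum(abundances)
            if N < n:
                raise ValueError("sample has fewer than n individuals")
            # For each species i, the probability it appears at least once in a subsample of n.
            return sum(1.0 - comb(N - Ni, n) / comb(N, n) for Ni in abundances)

        print(expected_species([40, 12, 5, 3, 2, 1, 1, 1], n=15))  # hypothetical isopod counts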

  15. Random network model of electrical conduction in two-phase rock

    NASA Astrophysics Data System (ADS)

    Fuji-ta, Kiyoshi; Seki, Masayuki; Ichiki, Masahiro

    2018-05-01

    We developed a cell-type lattice model to clarify the interconnected conductivity mechanism of two-phase rock. We quantified electrical conduction networks in rock and evaluated electrical conductivity models of the two-phase interaction. Considering the existence ratio of conductive and resistive cells in the model, we generated natural matrix cells simulating a natural mineral distribution pattern, using Mersenne Twister random numbers. The most important and prominent feature of the model simulation is a drastic increase in the pseudo-conductivity index for conductor ratio R > 0.22. This index in the model increased from 10^-4 to 10^0 between R = 0.22 and 0.9, a change of four orders of magnitude. We compared our model responses with results from previous model studies. Although the pseudo-conductivity computed by the model differs slightly from that of the previous model, model responses can account for the conductivity change. Our modeling is thus effective for quantitatively estimating the degree of interconnection of rock and minerals.
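
    A two-dimensional toy version of the cell-lattice idea — fill a grid with conductive cells at ratio R and test whether a conductive cluster spans the lattice — illustrates the sharp change in interconnection near a threshold ratio; this sketch does not reproduce the paper's pseudo-conductivity index.

        # Sketch: fraction of random lattices with a spanning (interconnected) conductive cluster
        # as a function of the conductor ratio R.
        import numpy as np
        from scipy import ndimage

        def spans(grid):
            labels, _ = ndimage.label(grid)                     # 4-connected clusters of conductive cells
            top, bottom = set(labels[0, :]) - {0}, set(labels[-1, :]) - {0}
            return len(top & bottom) > 0                        # some cluster touches both edges

        rng = np.random.default_rng(0)
        for R in [0.1, 0.2, 0.22, 0.3, 0.5, 0.7, 0.9]:
            hits = sum(spans(rng.random((100, 100)) < R) for _ in range(50))
            print(f"R={R:.2f}  spanning fraction={hits/50:.2f}")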

  16. A model for simulating random atmospheres as a function of latitude, season, and time

    NASA Technical Reports Server (NTRS)

    Campbell, J. W.

    1977-01-01

    An empirical stochastic computer model was developed with the capability of generating random thermodynamic profiles of the atmosphere below an altitude of 99 km which are characteristic of any given season, latitude, and time of day. Samples of temperature, density, and pressure profiles generated by the model are statistically similar to measured profiles in a data base of over 6000 rocket and high-altitude atmospheric soundings; that is, means and standard deviations of modeled profiles and their vertical gradients are in close agreement with data. Model-generated samples can be used for Monte Carlo simulations of aircraft or spacecraft trajectories to predict or account for the effects on a vehicle's performance of atmospheric variability. Other potential uses for the model are in simulating pollutant dispersion patterns, variations in sound propagation, and other phenomena which are dependent on atmospheric properties, and in developing data-reduction software for satellite monitoring systems.

  17. Evolving optimised decision rules for intrusion detection using particle swarm paradigm

    NASA Astrophysics Data System (ADS)

    Sivatha Sindhu, Siva S.; Geetha, S.; Kannan, A.

    2012-12-01

    The aim of this article is to construct a practical intrusion detection system (IDS) that properly analyses the statistics of network traffic patterns and classifies them as normal or anomalous. The objective is to show that the choice of effective network traffic features and a proficient machine-learning paradigm enhances the detection accuracy of an IDS. In this article, a rule-based approach with a family of six decision tree classifiers, namely Decision Stump, C4.5, Naive Bayes Tree, Random Forest, Random Tree and Representative Tree models, is introduced to detect anomalous network patterns. In particular, the proposed swarm-optimisation-based approach selects the instances that compose the training set, and optimised decision trees operate over this training set, producing classification rules with improved coverage, classification capability and generalisation ability. Experiments with the Knowledge Discovery and Data Mining (KDD) data set, which contains information on traffic patterns during normal and intrusive behaviour, show that the proposed algorithm produces optimised decision rules and outperforms other machine-learning algorithms.
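
    A bare-bones version of the classification stage (Random Forest, one of the six tree classifiers, on KDD-style features) might look like the following; the file and column names are placeholders, and the swarm-based instance selection of the paper is omitted.

        # Sketch: Random Forest detection of normal vs anomalous traffic on a KDD-style table.
        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import classification_report

        df = pd.read_csv("kdd_subset.csv")                 # placeholder file with numeric features
        X, y = df.drop(columns=["label"]), df["label"]     # label: 'normal' vs 'anomalous'
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        print(classification_report(y_te, clf.predict(X_te)))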

  18. Chromosome Gene Orientation Inversion Networks (GOINs) of Plasmodium Proteome.

    PubMed

    Quevedo-Tumailli, Viviana F; Ortega-Tenezaca, Bernabé; González-Díaz, Humbert

    2018-03-02

    The spatial distribution of genes in chromosomes seems not to be random. For instance, only 10% of genes are transcribed from bidirectional promoters in humans, and many more are organized into larger clusters. This raises intriguing questions previously asked by different authors. We would like to add a few more questions in this context, related to gene orientation inversions. Does gene orientation (inversion) follow a random pattern? Is it relevant to biological activity somehow? We define a new kind of network coined as the gene orientation inversion network (GOIN). A GOIN is a complex network that encodes short- and long-range patterns of inversion of the orientation of pairs of genes in the chromosome. We selected Plasmodium falciparum as a case study due to the high relevance of this parasite to public health (causal agent of malaria). We constructed here for the first time all of the GOINs for the genome of this parasite. These networks have an average of 383 nodes (genes in one chromosome) and 1314 links (pairs of genes with inverse orientation). We calculated node centralities and other parameters of these networks. These numerical parameters were used to study different properties of gene inversion patterns, for example, distribution, local communities, similarity to Erdös-Rényi random networks, randomness, and so on. We find clues that seem to indicate that gene orientation inversion does not follow a random pattern. We noted that some gene communities in the GOINs tend to group genes encoding for RIFIN-related proteins in the proteome of the parasite. RIFIN-like proteins are a second family of clonally variant proteins expressed on the surface of red cells infected with Plasmodium falciparum. Consequently, we used these centralities as input to machine learning (ML) models to predict the RIFIN-like activity of 5365 proteins in the proteome of Plasmodium sp. The best linear ML model found discriminates RIFIN-like from other proteins with sensitivity and specificity 70-80% in training and external validation series. All of these results may point to a possible biological relevance of gene orientation inversion not directly dependent on genetic sequence information. This work opens the gate to the use of GOINs as a tool for the study of the structure of chromosomes and the study of protein function in proteome research.

  19. Sensitivity of a Riparian Large Woody Debris Recruitment Model to the Number of Contributing Banks and Tree Fall Pattern

    Treesearch

    Don C. Bragg; Jeffrey L. Kershner

    2004-01-01

    Riparian large woody debris (LWD) recruitment simulations have traditionally applied a random angle of tree fall from two well-forested stream banks. We used a riparian LWD recruitment model (CWD, version 1.4) to test the validity of these assumptions. Both the number of contributing forest banks and predominant tree fall direction significantly influenced simulated...

  20. Perception of randomness: On the time of streaks.

    PubMed

    Sun, Yanlong; Wang, Hongbin

    2010-12-01

    People tend to think that streaks in random sequential events are rare and remarkable. When they actually encounter streaks, they tend to consider the underlying process as non-random. The present paper examines the time of pattern occurrences in sequences of Bernoulli trials, and shows that among all patterns of the same length, a streak is the most delayed pattern for its first occurrence. It is argued that when time is of the essence, how often a pattern is to occur (mean time, or frequency) and when a pattern is to first occur (waiting time) are different questions and bear different psychological relevance. The waiting time statistics may provide a quantitative measure of the psychological distance when people are expecting a probabilistic event, and such a measure is consistent with both the representativeness and availability heuristics in people's perception of randomness. We discuss some of the recent empirical findings and suggest that people's judgment and generation of random sequences may be guided by their actual experiences of the waiting time statistics. Published by Elsevier Inc.
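
    The paper's central point — that a streak such as HHH is the most delayed length-3 pattern even though all length-3 patterns are equally frequent in the long run — is easy to check by simulation; for a fair coin the mean waiting times are 14 flips for HHH, 8 for HHT and 10 for HTH.

        # Sketch: Monte Carlo mean waiting time until the first occurrence of a pattern
        # in fair Bernoulli (coin-flip) trials.
        import random

        def waiting_time(pattern, rng):
            window = ""
            flips = 0
            while True:
                window = (window + rng.choice("HT"))[-len(pattern):]
                flips += 1
                if window == pattern:
                    return flips

        rng = random.Random(1)
        for p in ["HHH", "HHT", "HTH"]:
            mean = sum(waiting_time(p, rng) for _ in range(20000)) / 20000
            print(p, round(mean, 2))   # expected roughly 14.0, 8.0, 10.0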

  1. Dietary patterns in obese pregnant women; influence of a behavioral intervention of diet and physical activity in the UPBEAT randomized controlled trial.

    PubMed

    Flynn, Angela C; Seed, Paul T; Patel, Nashita; Barr, Suzanne; Bell, Ruth; Briley, Annette L; Godfrey, Keith M; Nelson, Scott M; Oteng-Ntim, Eugene; Robinson, Sian M; Sanders, Thomas A; Sattar, Naveed; Wardle, Jane; Poston, Lucilla; Goff, Louise M

    2016-11-29

    Understanding dietary patterns in obese pregnant women will inform future intervention strategies to improve pregnancy outcomes and the health of the child. The aim of this study was to investigate the effect of a behavioral intervention of diet and physical activity advice on dietary patterns in obese pregnant women participating in the UPBEAT study, and to explore associations of dietary patterns with pregnancy outcomes. In the UPBEAT randomized controlled trial, pregnant obese women from eight UK multi-ethnic, inner-city populations were randomly assigned to receive a diet/physical activity intervention or standard antenatal care. The dietary intervention aimed to reduce glycemic load and saturated fat intake. Diet was assessed using a food frequency questionnaire (FFQ) at baseline (15+0 to 18+6 weeks' gestation), post intervention (27+0 to 28+6 weeks) and in late pregnancy (34+0 to 36+0 weeks). Dietary patterns were characterized using factor analysis of the baseline FFQ data, and changes compared in the control and intervention arms. Patterns were related to pregnancy outcomes in the combined control/intervention cohort (n = 1023). Four distinct baseline dietary patterns were defined: Fruit and vegetables, African/Caribbean, Processed, and Snacks, which were differently associated with social and demographic factors. The UPBEAT intervention significantly reduced the Processed (-0.14; 95% CI -0.19, -0.08, P <0.0001) and Snacks (-0.24; 95% CI -0.31, -0.17, P <0.0001) pattern scores. In the adjusted model, baseline scores for the African/Caribbean (quartile 4 compared with quartile 1: OR = 2.46; 95% CI 1.41, 4.30) and Processed (quartile 4 compared with quartile 1: OR = 2.05; 95% CI 1.23, 3.41) patterns in the entire cohort were associated with increased risk of gestational diabetes. In a diverse cohort of obese pregnant women an intensive dietary intervention improved Processed and Snack dietary pattern scores. African/Caribbean and Processed patterns were associated with an increased risk of gestational diabetes, and provide potential targets for future interventions. Current controlled trials; ISRCTN89971375.

  2. A random utility based estimation framework for the household activity pattern problem.

    DOT National Transportation Integrated Search

    2016-06-01

    This paper develops a random utility based estimation framework for the Household Activity : Pattern Problem (HAPP). Based on the realization that output of complex activity-travel decisions : form a continuous pattern in space-time dimension, the es...

  3. Exact distribution of a pattern in a set of random sequences generated by a Markov source: applications to biological data

    PubMed Central

    2010-01-01

    Background: In bioinformatics it is common to search for a pattern of interest in a potentially large set of rather short sequences (upstream gene regions, proteins, exons, etc.). Although many methodological approaches allow practitioners to compute the distribution of a pattern count in a random sequence generated by a Markov source, no specific developments have taken into account the counting of occurrences in a set of independent sequences. We aim to address this problem by deriving efficient approaches and algorithms to perform these computations both for low and high complexity patterns in the framework of homogeneous or heterogeneous Markov models. Results: The latest advances in the field allowed us to use a technique of optimal Markov chain embedding based on deterministic finite automata to introduce three innovative algorithms. Algorithm 1 is the only one able to deal with heterogeneous models. It also permits to avoid any product of convolution of the pattern distribution in individual sequences. When working with homogeneous models, Algorithm 2 yields a dramatic reduction in the complexity by taking advantage of previous computations to obtain moment generating functions efficiently. In the particular case of low or moderate complexity patterns, Algorithm 3 exploits power computation and binary decomposition to further reduce the time complexity to a logarithmic scale. All these algorithms and their relative interest in comparison with existing ones were then tested and discussed on a toy-example and three biological data sets: structural patterns in protein loop structures, PROSITE signatures in a bacterial proteome, and transcription factors in upstream gene regions. On these data sets, we also compared our exact approaches to the tempting approximation that consists in concatenating the sequences in the data set into a single sequence. Conclusions: Our algorithms prove to be effective and able to handle real data sets with multiple sequences, as well as biological patterns of interest, even when the latter display a high complexity (PROSITE signatures for example). In addition, these exact algorithms allow us to avoid the edge effect observed under the single sequence approximation, which leads to erroneous results, especially when the marginal distribution of the model displays a slow convergence toward the stationary distribution. We end up with a discussion on our method and on its potential improvements. PMID:20205909

  4. Exact distribution of a pattern in a set of random sequences generated by a Markov source: applications to biological data.

    PubMed

    Nuel, Gregory; Regad, Leslie; Martin, Juliette; Camproux, Anne-Claude

    2010-01-26

    In bioinformatics it is common to search for a pattern of interest in a potentially large set of rather short sequences (upstream gene regions, proteins, exons, etc.). Although many methodological approaches allow practitioners to compute the distribution of a pattern count in a random sequence generated by a Markov source, no specific developments have taken into account the counting of occurrences in a set of independent sequences. We aim to address this problem by deriving efficient approaches and algorithms to perform these computations both for low and high complexity patterns in the framework of homogeneous or heterogeneous Markov models. The latest advances in the field allowed us to use a technique of optimal Markov chain embedding based on deterministic finite automata to introduce three innovative algorithms. Algorithm 1 is the only one able to deal with heterogeneous models. It also permits to avoid any product of convolution of the pattern distribution in individual sequences. When working with homogeneous models, Algorithm 2 yields a dramatic reduction in the complexity by taking advantage of previous computations to obtain moment generating functions efficiently. In the particular case of low or moderate complexity patterns, Algorithm 3 exploits power computation and binary decomposition to further reduce the time complexity to a logarithmic scale. All these algorithms and their relative interest in comparison with existing ones were then tested and discussed on a toy-example and three biological data sets: structural patterns in protein loop structures, PROSITE signatures in a bacterial proteome, and transcription factors in upstream gene regions. On these data sets, we also compared our exact approaches to the tempting approximation that consists in concatenating the sequences in the data set into a single sequence. Our algorithms prove to be effective and able to handle real data sets with multiple sequences, as well as biological patterns of interest, even when the latter display a high complexity (PROSITE signatures for example). In addition, these exact algorithms allow us to avoid the edge effect observed under the single sequence approximation, which leads to erroneous results, especially when the marginal distribution of the model displays a slow convergence toward the stationary distribution. We end up with a discussion on our method and on its potential improvements.
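
    The quantity both records target — the distribution of the total number of (overlapping) occurrences of a pattern across a set of independent sequences from a Markov source — can at least be sanity-checked by brute-force simulation; the sketch below uses an arbitrary two-letter first-order chain and is not the exact automaton-embedding algorithms of the paper.

        # Sketch: empirical distribution of total pattern occurrences in a set of short
        # sequences generated by a first-order Markov chain (simulation check only).
        import numpy as np

        P = np.array([[0.7, 0.3],      # transition matrix over the alphabet {'A', 'B'}
                      [0.4, 0.6]])
        alphabet = "AB"
        pattern = "ABA"

        def simulate_count(rng, n_seqs=50, seq_len=30):
            total = 0
            for _ in range(n_seqs):
                s = rng.integers(2)
                seq = [s]
                for _ in range(seq_len - 1):
                    s = rng.choice(2, p=P[s])
                    seq.append(s)
                text = "".join(alphabet[i] for i in seq)
                total += sum(text.startswith(pattern, i) for i in range(len(text)))  # overlapping count
            return total

        rng = np.random.default_rng(2)
        counts = [simulate_count(rng) for _ in range(2000)]
        print("mean", np.mean(counts), "var", np.var(counts))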

  5. Mathematical modeling of malaria infection with innate and adaptive immunity in individuals and agent-based communities.

    PubMed

    Gurarie, David; Karl, Stephan; Zimmerman, Peter A; King, Charles H; St Pierre, Timothy G; Davis, Timothy M E

    2012-01-01

    Agent-based modeling of Plasmodium falciparum infection offers an attractive alternative to the conventional Ross-Macdonald methodology, as it allows simulation of heterogeneous communities subjected to realistic transmission (inoculation patterns). We developed a new, agent based model that accounts for the essential in-host processes: parasite replication and its regulation by innate and adaptive immunity. The model also incorporates a simplified version of antigenic variation by Plasmodium falciparum. We calibrated the model using data from malaria-therapy (MT) studies, and developed a novel calibration procedure that accounts for a deterministic and a pseudo-random component in the observed parasite density patterns. Using the parasite density patterns of 122 MT patients, we generated a large number of calibrated parameters. The resulting data set served as a basis for constructing and simulating heterogeneous agent-based (AB) communities of MT-like hosts. We conducted several numerical experiments subjecting AB communities to realistic inoculation patterns reported from previous field studies, and compared the model output to the observed malaria prevalence in the field. There was overall consistency, supporting the potential of this agent-based methodology to represent transmission in realistic communities. Our approach represents a novel, convenient and versatile method to model Plasmodium falciparum infection.

  6. Application of pattern mixture models to address missing data in longitudinal data analysis using SPSS.

    PubMed

    Son, Heesook; Friedmann, Erika; Thomas, Sue A

    2012-01-01

    Longitudinal studies are used in nursing research to examine changes over time in health indicators. Traditional approaches to longitudinal analysis of means, such as analysis of variance with repeated measures, are limited to analyzing complete cases. This limitation can lead to biased results due to withdrawal or data omission bias or to imputation of missing data, which can lead to bias toward the null if data are not missing completely at random. Pattern mixture models are useful to evaluate the informativeness of missing data and to adjust linear mixed model (LMM) analyses if missing data are informative. The aim of this study was to provide an example of statistical procedures for applying a pattern mixture model to evaluate the informativeness of missing data and conduct analyses of data with informative missingness in longitudinal studies using SPSS. The data set from the Patients' and Families' Psychological Response to Home Automated External Defibrillator Trial was used as an example to examine informativeness of missing data with pattern mixture models and to use a missing data pattern in analysis of longitudinal data. Prevention of withdrawal bias, omitted data bias, and bias toward the null in longitudinal LMMs requires the assessment of the informativeness of the occurrence of missing data. Missing data patterns can be incorporated as fixed effects into LMMs to evaluate the contribution of the presence of informative missingness to and control for the effects of missingness on outcomes. Pattern mixture models are a useful method to address the presence and effect of informative missingness in longitudinal studies.
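
    The same pattern mixture idea carries over to any mixed-model software: code each subject's missingness (dropout) pattern as a factor and enter it, plus its interaction with time, as fixed effects in the LMM. A hedged sketch with hypothetical column names (id, time, outcome):

        # Sketch: pattern mixture approach - missingness pattern entered as a fixed effect
        # in a linear mixed model of a repeated outcome.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("longitudinal.csv")      # hypothetical long-format data: id, time, outcome

        # Flag each subject's missingness pattern, here simply completer vs dropout.
        last_seen = df.groupby("id")["time"].transform("max")
        df["dropout"] = (last_seen < df["time"].max()).astype(int)

        # LMM with a random intercept per subject; the dropout term and its interaction with
        # time assess whether missingness is informative for the outcome trajectory.
        model = smf.mixedlm("outcome ~ time * dropout", df, groups=df["id"]).fit()
        print(model.summary())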

  7. Fluvial Apophenia

    NASA Astrophysics Data System (ADS)

    Coulthard, Tom; Armitage, John

    2017-04-01

    Apophenia describes the experience of seeing meaningful patterns or connections in random or meaningless data. Francis Bacon was one of the first to identify its role, noting that "human understanding is of its own nature prone to suppose the existence of more order and regularity in the world than it finds". Examples include pareidolia (seeing shapes in random patterns), the gambler's fallacy (feeling that past events alter probability), confirmation bias (a bias toward supporting a hypothesis rather than disproving it), and the clustering illusion (an inability to recognise actual random data, instead believing there are patterns). Increasingly, researchers use records of past floods stored in sedimentary archives to make inferences about past environments, and to describe how climate and flooding may have changed. However, it is a seductive conclusion to infer that drivers of landscape change lead to changes in fluvial behaviour. Using past studies and computer simulations of river morphodynamics, we explore how meaningful the link between drivers and fluvial changes is. Simple linear numerical models would suggest a direct relation between cause and effect, despite the potential for thresholds, phase changes, time-lags and damping. However, a comparatively small increase in model complexity (e.g. the stream power law) introduces non-linear behaviour, and increasing the complexity further can lead to the generation of time-dependent outputs despite constant forcing. We will use this range of findings to explore how apophenia may manifest itself in studies of fluvial systems, what this can mean and how we can try to account for it. Whilst discussed in the context of fluvial systems, the concepts and inferences from this presentation are highly relevant to many other studies and disciplines.

  8. Normal and compound poisson approximations for pattern occurrences in NGS reads.

    PubMed

    Zhai, Zhiyuan; Reinert, Gesine; Song, Kai; Waterman, Michael S; Luan, Yihui; Sun, Fengzhu

    2012-06-01

    Next generation sequencing (NGS) technologies are now widely used in many biological studies. In NGS, sequence reads are randomly sampled from the genome sequence of interest. Most computational approaches for NGS data first map the reads to the genome and then analyze the data based on the mapped reads. Since many organisms have unknown genome sequences and many reads cannot be uniquely mapped to the genomes even if the genome sequences are known, alternative analytical methods are needed for the study of NGS data. Here we suggest using word patterns to analyze NGS data. Word pattern counting (the study of the probabilistic distribution of the number of occurrences of word patterns in one or multiple long sequences) has played an important role in molecular sequence analysis. However, no studies are available on the distribution of the number of occurrences of word patterns in NGS reads. In this article, we build probabilistic models for the background sequence and the sampling process of the sequence reads from the genome. Based on the models, we provide normal and compound Poisson approximations for the number of occurrences of word patterns from the sequence reads, with bounds on the approximation error. The main challenge is to consider the randomness in generating the long background sequence, as well as in the sampling of the reads using NGS. We show the accuracy of these approximations under a variety of conditions for different patterns with various characteristics. Under realistic assumptions, the compound Poisson approximation seems to outperform the normal approximation in most situations. These approximate distributions can be used to evaluate the statistical significance of the occurrence of patterns from NGS data. The theory and the computational algorithm for calculating the approximate distributions are then used to analyze ChIP-Seq data using transcription factor GABP. Software is available online (www-rcf.usc.edu/∼fsun/Programs/NGS_motif_power/NGS_motif_power.html). In addition, Supplementary Material can be found online (www.liebertonline.com/cmb).

  9. Species richness effects on ecosystem multifunctionality depend on evenness, composition and spatial pattern

    USGS Publications Warehouse

    Maestre, F.T.; Castillo-Monroy, A. P.; Bowker, M.A.; Ochoa-Hueso, R.

    2012-01-01

    1. Recent studies have suggested that the simultaneous maintenance of multiple ecosystem functions (multifunctionality) is positively supported by species richness. However, little is known regarding the relative importance of other community attributes (e.g. spatial pattern, species evenness) as drivers of multifunctionality. 2. We conducted two microcosm experiments using model biological soil crust communities dominated by lichens to: (i) evaluate the joint effects and relative importance of changes in species composition, spatial pattern (clumped and random distribution of lichens), evenness (maximal and low evenness) and richness (from two to eight species) on soil functions related to nutrient cycling (β-glucosidase, urease and acid phosphatase enzymes, in situ N availability, total N, organic C, and N fixation), and (ii) assess how these community attributes affect multifunctionality. 3. Species richness, composition and spatial pattern affected multiple ecosystem functions (e.g. organic C, total N, N availability, β-glucosidase activity), albeit the magnitude and direction of their effects varied with the particular function, experiment and soil depth considered. Changes in species composition had effects on organic C, total N and the activity of β-glucosidase. Significant species richness × evenness and spatial pattern × evenness interactions were found when analysing functions such as organic C, total N and the activity of phosphatase. 4. The probability of sustaining multiple ecosystem functions increased with species richness, but this effect was largely modulated by attributes such as species evenness, composition and spatial pattern. Overall, we found that model communities with high species richness, random spatial pattern and low evenness increased multifunctionality. 5. Synthesis. Our results illustrate how different community attributes have a diverse impact on ecosystem functions related to nutrient cycling, and provide new experimental evidence illustrating the importance of the spatial pattern of organisms on ecosystem functioning. They also indicate that species richness is not the only biotic driver of multifunctionality, and that particular combinations of community attributes may be required to maximize it.

  10. A Simulation Study Comparing Epidemic Dynamics on Exponential Random Graph and Edge-Triangle Configuration Type Contact Network Models

    PubMed Central

    Rolls, David A.; Wang, Peng; McBryde, Emma; Pattison, Philippa; Robins, Garry

    2015-01-01

    We compare two broad types of empirically grounded random network models in terms of their abilities to capture both network features and simulated Susceptible-Infected-Recovered (SIR) epidemic dynamics. The types of network models are exponential random graph models (ERGMs) and extensions of the configuration model. We use three kinds of empirical contact networks, chosen to provide both variety and realistic patterns of human contact: a highly clustered network, a bipartite network and a snowball sampled network of a “hidden population”. In the case of the snowball sampled network we present a novel method for fitting an edge-triangle model. In our results, ERGMs consistently capture clustering as well or better than configuration-type models, but the latter models better capture the node degree distribution. Despite the additional computational requirements to fit ERGMs to empirical networks, the use of ERGMs provides only a slight improvement in the ability of the models to recreate epidemic features of the empirical network in simulated SIR epidemics. Generally, SIR epidemic results from using configuration-type models fall between those from a random network model (i.e., an Erdős-Rényi model) and an ERGM. The addition of subgraphs of size four to edge-triangle type models does improve agreement with the empirical network for smaller densities in clustered networks. Additional subgraphs do not make a noticeable difference in our example, although we would expect the ability to model cliques to be helpful for contact networks exhibiting household structure. PMID:26555701
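
    A minimal discrete-time SIR simulation on a random contact network — an Erdős-Rényi graph standing in for the fitted ERGM or configuration models — shows the kind of epidemic run the comparison is based on; the transmission and recovery probabilities are arbitrary illustrative values.

        # Sketch: discrete-time SIR epidemic on a random contact network.
        import networkx as nx
        import random

        def sir(G, beta=0.1, gamma=0.05, seed=0):
            rng = random.Random(seed)
            status = {n: "S" for n in G}
            status[rng.choice(list(G))] = "I"      # one initial infection
            sizes = []
            while any(v == "I" for v in status.values()):
                new = dict(status)
                for n, s in status.items():
                    if s == "I":
                        for nb in G.neighbors(n):
                            if status[nb] == "S" and rng.random() < beta:
                                new[nb] = "I"
                        if rng.random() < gamma:
                            new[n] = "R"
                status = new
                sizes.append(sum(v == "R" for v in status.values()))
            return sizes[-1]                       # final epidemic size

        G = nx.erdos_renyi_graph(500, 0.01, seed=1)
        print("final size:", sir(G))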

  11. Application of stochastic processes in random growth and evolutionary dynamics

    NASA Astrophysics Data System (ADS)

    Oikonomou, Panagiotis

    We study the effect of power-law distributed randomness on the dynamical behavior of processes such as stochastic growth patterns and evolution. First, we examine the geometrical properties of random shapes produced by a generalized stochastic Loewner Evolution driven by a superposition of a Brownian motion and a stable Levy process. The situation is defined by the usual stochastic Loewner Evolution parameter, kappa, as well as alpha, which defines the power-law tail of the stable Levy distribution. We show that the properties of these patterns change qualitatively and singularly at critical values of kappa and alpha. It is reasonable to call such changes "phase transitions". These transitions occur as kappa passes through four and as alpha passes through one. Numerical simulations are used to explore the global scaling behavior of these patterns in each "phase". We show both analytically and numerically that the growth continues indefinitely in the vertical direction for alpha greater than 1, grows logarithmically with time for alpha equal to 1, and saturates for alpha smaller than 1. The probability density has two different scales corresponding to directions along and perpendicular to the boundary. Scaling functions for the probability density are given for various limiting cases. Second, we study the effect of the architecture of biological networks on their evolutionary dynamics. In recent years, studies of the architecture of large networks have unveiled a common topology, called scale-free, in which a majority of the elements are poorly connected except for a small fraction of highly connected components. We ask how networks with distinct topologies can evolve towards a pre-established target phenotype through a process of random mutations and selection. We use networks of Boolean components as a framework to model a large class of phenotypes. Within this approach, we find that homogeneous random networks and scale-free networks exhibit drastically different evolutionary paths. While homogeneous random networks accumulate neutral mutations and evolve by sparse punctuated steps, scale-free networks evolve rapidly and continuously towards the target phenotype. Moreover, we show that scale-free networks always evolve faster than homogeneous random networks; remarkably, this property does not depend on the precise value of the topological parameter. By contrast, homogeneous random networks require a specific tuning of their topological parameter in order to optimize their fitness. This model suggests that the evolutionary paths of biological networks, punctuated or continuous, may solely be determined by the network topology.

  12. Functional response and capture timing in an individual-based model: predation by northern squawfish (Ptychocheilus oregonensis) on juvenile salmonids in the Columbia River

    USGS Publications Warehouse

    Petersen, James H.; DeAngelis, Donald L.

    1992-01-01

    The behavior of individual northern squawfish (Ptychocheilus oregonensis) preying on juvenile salmonids was modeled to address questions about capture rate and the timing of prey captures (random versus contagious). Prey density, predator weight, prey weight, temperature, and diel feeding pattern were first incorporated into predation equations analogous to Holling Type 2 and Type 3 functional response models. Type 2 and Type 3 equations fit field data from the Columbia River equally well, and both models predicted predation rates on five of seven independent dates. Selecting a functional response type may be complicated by variable predation rates, analytical methods, and assumptions of the model equations. Using the Type 2 functional response, random versus contagious timing of prey capture was tested using two related models. In the simpler model, salmon captures were assumed to be controlled by a Poisson renewal process; in the second model, several salmon captures were assumed to occur during brief "feeding bouts", modeled with a compound Poisson process. Salmon captures by individual northern squawfish were clustered through time, rather than random, based on comparison of model simulations and field data. The contagious-feeding result suggests that salmonids may be encountered as patches or schools in the river.
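
    The two candidate functional responses have standard Holling forms, Type 2: f(N) = aN/(1 + ahN) and Type 3: f(N) = aN^2/(1 + ahN^2). A sketch evaluating both over prey density, with made-up attack rate a and handling time h, is:

        # Sketch: Holling Type 2 and Type 3 functional responses (captures per predator)
        # as a function of prey density N, with illustrative parameter values.
        import numpy as np

        def holling_type2(N, a, h):
            return a * N / (1.0 + a * h * N)

        def holling_type3(N, a, h):
            return a * N**2 / (1.0 + a * h * N**2)

        N = np.linspace(0, 200, 5)
        a, h = 0.05, 0.4          # attack rate and handling time (made-up values)
        print(holling_type2(N, a, h))
        print(holling_type3(N, a, h))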

  13. Worldwide patterns of ischemic heart disease mortality from 1980 to 2010.

    PubMed

    Gouvinhas, Cláudia; Severo, Milton; Azevedo, Ana; Lunet, Nuno

    2014-01-01

    The trends in the IHD mortality rates vary widely across countries, reflecting the heterogeneity in the variation of the exposure to the main risk factors and in the access to different management strategies among settings. We aimed to identify model-based patterns in the time trends in IHD mortality in 50 countries from the five continents, between 1980 and 2010. Mixed models were used to identify time trends in age-standardized mortality rates (ASMR) (age group 35+years; world standard population), all including random terms for intercept, slope, quadratic and cubic. Model-based clustering was used to identify the patterns. We identified five main patterns of IHD mortality trends in the last three decades, similar for men and women. Pattern 1 had the highest ASMR and pattern 2 exhibited the most pronounced decrease in ASMR during the entire study period. Pattern 3 was characterized by an initial increase in ASMR, followed by a sharp decline. Countries in pattern 4 had the lowest ASMR throughout the study period. It was further divided into patterns 4a (consistent decrease in ASMR throughout the period of analysis) and 4b (less pronounced declines and highest rates observed mostly between 1996 and 2004). There was no correspondence between the geographic or economical grouping of the analyzed countries and the patterns found in this study. Our study yielded a new framework for the description, interpretation and prediction of IHD mortality trends worldwide. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  14. Imaging in laser spectroscopy by a single-pixel camera based on speckle patterns

    NASA Astrophysics Data System (ADS)

    Žídek, K.; Václavík, J.

    2016-11-01

    Compressed sensing (CS) is a branch of computational optics able to reconstruct an image (or any other information) from a reduced number of measurements - thus significantly saving measurement time. It relies on encoding the detected information by a random pattern and consequent mathematical reconstruction. CS can be the enabling step to carry out imaging in many time-consuming measurements. The critical step in CS experiments is the method to invoke encoding by a random mask. Complex devices and relay optics are commonly used for the purpose. We present a new approach to creating the random mask by using laser speckles from coherent laser light passing through a diffusor. This concept is especially powerful in laser spectroscopy, where it does not require any complicated modification of the current techniques. The main advantage consists in the unmatched simplicity of the random pattern generation and the versatility of the pattern resolution. Unlike in the case of commonly used random masks, here the pattern fineness can be adjusted by changing the laser spot size being diffused. We demonstrate the pattern tuning together with the associated changes in the pattern statistics. In particular, the issue of pattern orthogonality, which is important for CS applications, is discussed. Finally, we demonstrate on a set of 200 acquired speckle patterns that the concept can be successfully employed for single-pixel camera imaging. We discuss requirements on detector noise for the image reconstruction.
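
    Whatever the mask source (speckle or engineered), single-pixel reconstruction reduces to recovering an image x from bucket-detector readings y = Phi x, where each row of Phi is one mask; with fewer measurements than pixels, a sparsity-promoting solver is typical. A toy sketch with synthetic random binary masks rather than real speckle statistics:

        # Sketch: single-pixel-camera style reconstruction from random-mask measurements
        # using an L1-regularised (sparsity-promoting) solver.
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(3)
        n_pix, n_meas = 16 * 16, 120            # fewer measurements than pixels

        x_true = np.zeros(n_pix)
        x_true[rng.choice(n_pix, 10, replace=False)] = 1.0       # sparse toy "image"

        Phi = (rng.random((n_meas, n_pix)) < 0.5).astype(float)  # random binary masks
        y = Phi @ x_true                                          # bucket detector readings

        x_hat = Lasso(alpha=0.01, max_iter=10000).fit(Phi, y).coef_
        print("recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))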

  15. Accurate Modeling Method for Cu Interconnect

    NASA Astrophysics Data System (ADS)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) are fully incorporated and universally expressed. In addition, we have developed specific test patterns for extraction of the model parameters, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  16. Vehicle track segmentation using higher order random fields

    DOE PAGES

    Quach, Tu -Thach

    2017-01-09

    Here, we present an approach to segment vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times. The approach uses multiscale higher order random field models to capture track statistics, such as curvatures and their parallel nature, that are not currently utilized in existing methods. These statistics are encoded as 3-by-3 patterns at different scales. The model can complete disconnected tracks often caused by sensor noise and various environmental effects. Coupling the model with a simple classifier, our approach is effective at segmenting salient tracks. We improve the F-measure on a standard vehicle track data set to 0.963, up from 0.897 obtained by the current state-of-the-art method.

  17. Vehicle track segmentation using higher order random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quach, Tu -Thach

    Here, we present an approach to segment vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times. The approach uses multiscale higher order random field models to capture track statistics, such as curvatures and their parallel nature, that are not currently utilized in existing methods. These statistics are encoded as 3-by-3 patterns at different scales. The model can complete disconnected tracks often caused by sensor noise and various environmental effects. Coupling the model with a simple classifier, our approach is effective at segmenting salient tracks. We improve the F-measure on a standard vehicle track data set to 0.963, up from 0.897 obtained by the current state-of-the-art method.

  18. Inferring tectonic activity using drainage network and RT model: an example from the western Himalayas, India

    NASA Astrophysics Data System (ADS)

    Sahoo, Ramendra; Jain, Vikrant

    2017-04-01

    Morphology of the landscape and its derived features are regarded as an important tool for inferring tectonic activity in an area, since surface exposures of the underlying subsurface processes may not be available or may be eroded away over time. This has led to extensive research into the application of non-planar morphological attributes, such as river long profiles and hypsometry, to tectonic studies, whereas the drainage network as a proxy for tectonic activity has not been explored as thoroughly. Significant work has nevertheless been done on drainage network patterns, which started in a qualitative manner and, over the years, has evolved to incorporate more quantitative aspects, such as studying the evolution of a network under the influence of external and internal controls. The Random Topology (RT) model is one of these concepts; it elucidates the connection between the evolution of a drainage network pattern and the entropy of the drainage system, and states that, in the absence of any geological controls, a natural population of channel networks will be topologically random. We have used the entropy maximization principle to provide a theoretical structure for the RT model. Furthermore, analysis was carried out on the drainage network structures around the Jwalamukhi thrust in the Kangra reentrant in the western Himalayas, India, to investigate the tectonic activity in the region. Around one thousand networks were extracted from the foot-wall (fw) and hanging-wall (hw) regions of the thrust sheet and later categorized based on their magnitudes. We adopted a goodness-of-fit test for comparing the network patterns in the fw and hw drainages with those derived using the RT model. The null hypothesis for the test was that, for any given magnitude, the drainage networks in the fw are statistically more similar than those in the hw to the network patterns derived using the RT model. The test results are favorable to our null hypothesis for networks with smaller magnitudes (< 9), whereas for larger magnitudes, both hw and fw networks were found to be statistically dissimilar to the model network patterns. Calculation of pattern frequency for each magnitude and subsequent hypothesis testing were carried out using Matlab (v R2015a). Our results will help to establish the drainage network pattern as a geomorphic proxy for identifying tectonically active areas. This study also serves as supplementary evidence of the neo-tectonic control on the morphology of the landscape and its derivatives around the Jwalamukhi thrust. Additionally, it will help to verify the theory of probabilistic evolution of drainage networks.
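
    The goodness-of-fit comparison described — observed topology frequencies against the RT model's uniform expectation over all topologically distinct channel networks (TDCNs) of a given magnitude — can be sketched with a chi-square test; the observed counts below are invented placeholders, and the TDCN count follows Shreve's formula (a Catalan number).

        # Sketch: chi-square goodness-of-fit test of observed channel-network topology counts
        # against the uniform expectation of the random topology (RT) model.
        from math import comb
        from scipy.stats import chisquare

        def n_topologies(magnitude):
            # Shreve's number of topologically distinct channel networks of a given magnitude.
            return comb(2 * magnitude - 1, magnitude) // (2 * magnitude - 1)

        magnitude = 4
        observed = [12, 9, 15, 7, 11]                 # placeholder counts, one per distinct topology
        assert len(observed) == n_topologies(magnitude)

        expected = [sum(observed) / len(observed)] * len(observed)   # RT model: all equally likely
        stat, p = chisquare(observed, f_exp=expected)
        print(f"chi2={stat:.2f}  p={p:.3f}")          # a small p rejects topological randomness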

  19. Using circuit theory to model connectivity in ecology, evolution, and conservation.

    PubMed

    McRae, Brad H; Dickson, Brett G; Keitt, Timothy H; Shah, Viral B

    2008-10-01

    Connectivity among populations and habitats is important for a wide range of ecological processes. Understanding, preserving, and restoring connectivity in complex landscapes requires connectivity models and metrics that are reliable, efficient, and process based. We introduce a new class of ecological connectivity models based in electrical circuit theory. Although they have been applied in other disciplines, circuit-theoretic connectivity models are new to ecology. They offer distinct advantages over common analytic connectivity models, including a theoretical basis in random walk theory and an ability to evaluate contributions of multiple dispersal pathways. Resistance, current, and voltage calculated across graphs or raster grids can be related to ecological processes (such as individual movement and gene flow) that occur across large population networks or landscapes. Efficient algorithms can quickly solve networks with millions of nodes, or landscapes with millions of raster cells. Here we review basic circuit theory, discuss relationships between circuit and random walk theories, and describe applications in ecology, evolution, and conservation. We provide examples of how circuit models can be used to predict movement patterns and fates of random walkers in complex landscapes and to identify important habitat patches and movement corridors for conservation planning.
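
    The core circuit-theoretic computation — treating the landscape graph as a resistor network and obtaining the effective resistance between two habitat nodes from the pseudoinverse of the graph Laplacian — fits in a few lines; the small grid graph below is an arbitrary stand-in for a landscape raster.

        # Sketch: effective resistance between two nodes of a resistor-network landscape graph,
        # computed from the Moore-Penrose pseudoinverse of the graph Laplacian.
        import numpy as np
        import networkx as nx

        G = nx.grid_2d_graph(5, 5)                       # toy raster-like landscape, unit resistances
        L = nx.laplacian_matrix(G).toarray().astype(float)
        Lp = np.linalg.pinv(L)

        nodes = list(G)
        i, j = nodes.index((0, 0)), nodes.index((4, 4))  # two "habitat patches"
        r_eff = Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]       # effective resistance formula
        print(round(r_eff, 3))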

  20. A complexity theory model in science education problem solving: random walks for working memory and mental capacity.

    PubMed

    Stamovlasis, Dimitrios; Tsaparlis, Georgios

    2003-07-01

    The present study examines the role of limited human channel capacity from a science education perspective. A model of science problem solving has been previously validated by applying concepts and tools of complexity theory (the working-memory random-walk method). The method correlated the subjects' rank-order achievement scores in organic-synthesis chemistry problems with the subjects' working memory capacity. In this work, we apply the same nonlinear approach to a different data set, taken from chemical-equilibrium problem solving. In contrast to the organic-synthesis problems, these problems are algorithmic, require numerical calculations, and have a complex logical structure. As a result, these problems cause deviations from the model, and affect the pattern observed with the nonlinear method. In addition to Baddeley's working memory capacity, Pascual-Leone's mental (M-) capacity is examined by the same random-walk method. As the complexity of the problem increases, the fractal dimension of the working memory random walk demonstrates a sudden drop, while the fractal dimension of the M-capacity random walk decreases in a linear fashion. A review of the basic features of the two capacities and their relation is included. The method and findings have consequences for problem solving not only in chemistry and science education, but also in other disciplines.

  1. Soil Parameter Mapping and Ad Hoc Power Analysis to Increase Blocking Efficiency Prior to Establishing a Long-Term Field Experiment.

    PubMed

    Collins, Doug; Benedict, Chris; Bary, Andy; Cogger, Craig

    2015-01-01

    The spatial heterogeneity of soil and weed populations poses a challenge to researchers. Unlike aboveground variability, below-ground variability is more difficult to discern without a strategic soil sampling pattern. While blocking is commonly used to control environmental variation, this strategy is rarely informed by data about current soil conditions. Fifty georeferenced sites were located in a 0.65 ha area prior to establishing a long-term field experiment. Soil organic matter (OM) and weed seed bank populations were analyzed at each site and the spatial structure was modeled with semivariograms and interpolated with kriging to map the surface. These maps were used to formulate three strategic blocking patterns and the efficiency of each pattern was compared to a completely randomized design and a west to east model not informed by soil variability. Compared to OM, weeds were more variable across the landscape and had a shorter range of autocorrelation, and models to increase blocking efficiency resulted in less increase in power. Weeds and OM were not correlated, so no model examined improved power equally for both parameters. Compared to the west to east blocking pattern, the final blocking pattern chosen resulted in a 7-fold increase in power for OM and a 36% increase in power for weeds.
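
    The first step of the mapping workflow — an empirical semivariogram of the georeferenced organic matter (or weed count) values, to which a variogram model is fitted before kriging — can be computed directly; the coordinates and values below are simulated placeholders.

        # Sketch: empirical (isotropic) semivariogram from georeferenced soil samples.
        import numpy as np

        rng = np.random.default_rng(4)
        xy = rng.random((50, 2)) * 80.0          # placeholder site coordinates (m)
        om = rng.normal(3.0, 0.5, 50)            # placeholder organic-matter values (%)

        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # pairwise distances
        g = 0.5 * (om[:, None] - om[None, :]) ** 2                     # pairwise semivariances

        bins = np.arange(0, 80, 10)
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (d > lo) & (d <= hi) & (d > 0)
            if mask.any():
                print(f"lag {lo:>2.0f}-{hi:<2.0f} m   gamma = {g[mask].mean():.3f}")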

  2. Collective iteration behavior for online social networks

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Li, Ren-De; Guo, Qiang; Zhang, Yi-Cheng

    2018-06-01

    Understanding the patterns of collective behavior in online social networks (OSNs) is critical to expanding the knowledge of human behavior and tie relationships. In this paper, we investigate a specific pattern called the social signature in Facebook and Wiki users' online communication behaviors, capturing the distribution of the frequency of interactions between different alters over time in the ego network. The empirical results show that there are robust social signatures of interactions no matter how friends change over time, which indicates that a stable communication pattern exists in online communication. By comparison with a random null model, we find that the communication pattern is heterogeneous between ego and alters. Furthermore, in order to regenerate the pattern of the social signature, we present a preferential interaction model, which assumes that new users tend to look for the old users with strong ties while old users have a tendency to interact with new friends. The experimental results show that the presented model can reproduce the heterogeneity of the social signature by adjusting two parameters, the number of communicating targets m and the maximum number of interactions n: for Facebook users, m = n = 5; for Wiki users, m = 2 and n = 8. This work helps in deeply understanding the regularity of the social signature.
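
    The social signature itself is simply the rank-ordered fraction of an ego's interactions directed to each alter; computing it from a hypothetical list of contact events takes a few lines.

        # Sketch: social signature of one ego - fraction of interactions per alter, ranked.
        from collections import Counter

        events = ["b", "a", "a", "c", "a", "b", "d", "a", "b", "c"]   # alters contacted by one ego
        counts = Counter(events)
        total = sum(counts.values())
        signature = [n / total for _, n in counts.most_common()]      # rank 1 = closest alter
        print(signature)    # e.g. [0.4, 0.3, 0.2, 0.1]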

  3. Stochastic modelling of animal movement.

    PubMed

    Smouse, Peter E; Focardi, Stefano; Moorcroft, Paul R; Kie, John G; Forester, James D; Morales, Juan M

    2010-07-27

    Modern animal movement modelling derives from two traditions. Lagrangian models, based on random walk behaviour, are useful for multi-step trajectories of single animals. Continuous Eulerian models describe expected behaviour, averaged over stochastic realizations, and are usefully applied to ensembles of individuals. We illustrate three modern research arenas. (i) Models of home-range formation describe the process of an animal 'settling down', accomplished by including one or more focal points that attract the animal's movements. (ii) Memory-based models are used to predict how accumulated experience translates into biased movement choices, employing reinforced random walk behaviour, with previous visitation increasing or decreasing the probability of repetition. (iii) Lévy movement involves a step-length distribution that is over-dispersed, relative to standard probability distributions, and adaptive in exploring new environments or searching for rare targets. Each of these modelling arenas implies more detail in the movement pattern than general models of movement can accommodate, but realistic empiric evaluation of their predictions requires dense locational data, both in time and space, only available with modern GPS telemetry.
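
    The over-dispersed step-length behaviour of Lévy movement can be illustrated by drawing steps from a heavy-tailed (Pareto) distribution with uniformly random headings; this is a generic sketch, not any of the specific models reviewed.

        # Sketch: 2D movement path with heavy-tailed (Pareto) step lengths and uniform turning,
        # illustrating over-dispersed, Levy-like displacement.
        import numpy as np

        rng = np.random.default_rng(5)
        n_steps = 1000
        mu = 1.5                                        # Pareto shape; smaller -> heavier tail
        steps = rng.pareto(mu, n_steps) + 1.0           # step lengths >= 1
        angles = rng.uniform(0, 2 * np.pi, n_steps)     # isotropic headings

        xy = np.cumsum(np.column_stack((steps * np.cos(angles),
                                        steps * np.sin(angles))), axis=0)
        print("net displacement:", np.linalg.norm(xy[-1]))
        print("total path length:", steps.sum())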

  4. The Stability of Perceived Pubertal Timing across Adolescence

    PubMed Central

    Cance, Jessica Duncan; Ennett, Susan T.; Morgan-Lopez, Antonio A.; Foshee, Vangie A.

    2011-01-01

    It is unknown whether perceived pubertal timing changes as puberty progresses or whether it is an important component of adolescent identity formation that is fixed early in pubertal development. The purpose of this study is to examine the stability of perceived pubertal timing among a school-based sample of rural adolescents aged 11 to 17 (N=6,425; 50% female; 53% White). Two measures of pubertal timing were used: stage-normative, based on the Pubertal Development Scale (a self-report scale of secondary sexual characteristics), and peer-normative, a one-item measure of perceived pubertal timing. Two longitudinal methods were used: one-way random effects ANOVA models and latent class analysis. When calculating intraclass correlation coefficients using the one-way random effects ANOVA models, which are based on the average reliability from one time point to the next, both measures had similar, but poor, stability. In contrast, latent class analysis, which looks at the longitudinal response pattern of each individual and treats deviation from that pattern as measurement error, showed three stable and distinct response patterns for both measures: always early, always on-time, and always late. Study results suggest instability in perceived pubertal timing from one age to the next, but this instability is likely due to measurement error. Thus, it may be necessary to take into account the longitudinal pattern of perceived pubertal timing across adolescence rather than measuring perceived pubertal timing at one point in time. PMID:21983873

  5. Effects of traffic generation patterns on the robustness of complex networks

    NASA Astrophysics Data System (ADS)

    Wu, Jiajing; Zeng, Junwen; Chen, Zhenhao; Tse, Chi K.; Chen, Bokui

    2018-02-01

    Cascading failures in communication networks with heterogeneous node functions are studied in this paper. In such networks, the traffic dynamics are highly dependent on the traffic generation patterns which are in turn determined by the locations of the hosts. The data-packet traffic model is applied to Barabási-Albert scale-free networks to study the cascading failures in such networks and to explore the effects of traffic generation patterns on network robustness. It is found that placing the hosts at high-degree nodes in a network can make the network more robust against both intentional attacks and random failures. It is also shown that the traffic generation pattern plays an important role in network design.
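
    The record's qualitative conclusion about node placement can be probed with a much simpler structural proxy: compare the giant component of a Barabási-Albert network after removing high-degree versus randomly chosen nodes. The sketch below (using networkx; all parameter values are our own) does not implement the paper's data-packet traffic or cascading-failure model.

      import random
      import networkx as nx

      def giant_fraction(G):
          """Fraction of nodes in the largest connected component."""
          if G.number_of_nodes() == 0:
              return 0.0
          return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

      def remove_nodes(G, fraction=0.05, targeted=True, seed=0):
          """Remove a fraction of nodes, either highest-degree first or at random."""
          H = G.copy()
          k = int(fraction * H.number_of_nodes())
          if targeted:
              victims = [n for n, _ in sorted(H.degree, key=lambda kv: kv[1], reverse=True)[:k]]
          else:
              victims = random.Random(seed).sample(list(H.nodes), k)
          H.remove_nodes_from(victims)
          return giant_fraction(H)

      G = nx.barabasi_albert_graph(2000, 3, seed=1)
      print("targeted attack:", remove_nodes(G, targeted=True))
      print("random failures:", remove_nodes(G, targeted=False))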

  6. Development of Maps of Simple and Complex Cells in the Primary Visual Cortex

    PubMed Central

    Antolík, Ján; Bednar, James A.

    2011-01-01

    Hubel and Wiesel (1962) classified primary visual cortex (V1) neurons as either simple, with responses modulated by the spatial phase of a sine grating, or complex, i.e., largely phase invariant. Much progress has been made in understanding how simple cells develop, and there are now detailed computational models establishing how they can form topographic maps ordered by orientation preference. There are also models of how complex cells can develop using outputs from simple cells with different phase preferences, but no model of how a topographic orientation map of complex cells could be formed based on the actual connectivity patterns found in V1. Addressing this question is important, because the majority of existing developmental models of simple-cell maps group neurons selective to similar spatial phases together, which is contrary to experimental evidence, and makes it difficult to construct complex cells. Overcoming this limitation is not trivial, because mechanisms responsible for map development drive receptive fields (RF) of nearby neurons to be highly correlated, while co-oriented RFs of opposite phases are anti-correlated. In this work, we model V1 as two topographically organized sheets representing cortical layers 4 and 2/3. Only layer 4 receives direct thalamic input. Both sheets are connected with narrow feed-forward and feedback connectivity. Only layer 2/3 contains strong long-range lateral connectivity, in line with current anatomical findings. Initially all weights in the model are random, and each is modified via a Hebbian learning rule. The model develops smooth, matching, orientation preference maps in both sheets. Layer 4 units become simple cells, with phase preference arranged randomly, while those in layer 2/3 are primarily complex cells. To our knowledge this model is the first to explain how simple cells can develop with random phase preference, and how maps of complex cells can develop, using only realistic patterns of connectivity. PMID:21559067

  7. The competing risks Cox model with auxiliary case covariates under weaker missing-at-random cause of failure.

    PubMed

    Nevo, Daniel; Nishihara, Reiko; Ogino, Shuji; Wang, Molin

    2017-08-04

    In the analysis of time-to-event data with multiple causes using a competing risks Cox model, often the cause of failure is unknown for some of the cases. The probability of a missing cause is typically assumed to be independent of the cause given the time of the event and covariates measured before the event occurred. In practice, however, the underlying missing-at-random assumption does not necessarily hold. Motivated by colorectal cancer molecular pathological epidemiology analysis, we develop a method to conduct valid analysis when additional auxiliary variables are available for cases only. We consider a weaker missing-at-random assumption, with missing pattern depending on the observed quantities, which include the auxiliary covariates. We use an informative likelihood approach that will yield consistent estimates even when the underlying model for missing cause of failure is misspecified. The superiority of our method over naive methods in finite samples is demonstrated by simulation study results. We illustrate the use of our method in an analysis of colorectal cancer data from the Nurses' Health Study cohort, where, apparently, the traditional missing-at-random assumption fails to hold.

  8. Stochastic modelling for lake thermokarst and peatland patterns in permafrost and near permafrost zones

    NASA Astrophysics Data System (ADS)

    Orlov, Timofey; Sadkov, Sergey; Panchenko, Evgeniy; Zverev, Andrey

    2017-04-01

    Peatlands occupy a significant share of the cryolithozone area. They are currently experiencing intense impacts from oil and gas field development, as well as from the construction of infrastructure. This underscores the importance of peatland studies, including those dealing with forecasts of peatland evolution. Earlier we conducted a similar probabilistic modelling for areas of thermokarst development. Its principal points were: 1. The appearance of a thermokarst depression within a given area is a random event whose probability is directly proportional to the size of the area (Δs). For small sites the probability that exactly one thermokarst depression appears is much greater than the probability that several appear, i.e. p1 = γΔs + o(Δs) and pk = o(Δs) for k = 2, 3, .... 2. The growth of a new thermokarst depression is a random variable independent of the growth of other depressions. Growth occurs through thermoabrasion and is therefore directly proportional to the amount of heat in the lake and inversely proportional to the lateral surface area of the lake depression. Using this model, we can derive analytically the two main laws of the morphological pattern of lake thermokarst plains. First, the number of thermokarst depressions (centers) in a randomly chosen plot obeys the Poisson law P(k, s) = (γs)^k/k! · e^(−γs), where γ is the average number of depressions per unit area and s is the area of a trial site. Second, the diameters of thermokarst lakes follow a lognormal distribution at any time, with density of the form fd(x, t) = 1/(√(2π) σ x √t) · e^(−(ln x − at)²/(2σ²t)).
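
    A minimal sketch of the two laws stated above: depression counts per plot drawn from the Poisson law and lake diameters drawn from a lognormal whose log-mean and log-variance grow linearly with time. The numerical values of γ, a and σ are illustrative placeholders, not estimates from the study.

      import numpy as np

      rng = np.random.default_rng(42)

      gamma = 0.8            # mean number of depressions per unit area (illustrative)
      a, sigma = 0.05, 0.3   # lognormal drift and scale per unit time (illustrative)

      def simulate_plot(area, t):
          """Depression count in a plot of given area and lake diameters at time t."""
          k = rng.poisson(gamma * area)   # P(k, s) = (gamma*s)^k / k! * exp(-gamma*s)
          diameters = rng.lognormal(mean=a * t, sigma=sigma * np.sqrt(t), size=k)
          return k, diameters

      k, d = simulate_plot(area=10.0, t=50.0)
      print(k, np.round(d, 2))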

  9. Spatial patterns and biodiversity in off-lattice simulations of a cyclic three-species Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Avelino, P. P.; Bazeia, D.; Losano, L.; Menezes, J.; de Oliveira, B. F.

    2018-02-01

    Stochastic simulations of cyclic three-species spatial predator-prey models are usually performed in square lattices with nearest-neighbour interactions starting from random initial conditions. In this letter we describe the results of off-lattice Lotka-Volterra stochastic simulations, showing that the emergence of spiral patterns does occur for sufficiently high values of the (conserved) total density of individuals. We also investigate the dynamics in our simulations, finding an empirical relation characterizing the dependence of the characteristic peak frequency and amplitude on the total density. Finally, we study the impact of the total density on the extinction probability, showing how a low population density may jeopardize biodiversity.
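
    For intuition about cyclic dominance, the sketch below runs a standard on-lattice rock-paper-scissors scheme (cyclic predation into empty sites plus reproduction); it is not the off-lattice algorithm of the letter, and the rates and lattice size are arbitrary choices of ours.

      import numpy as np

      rng = np.random.default_rng(5)

      EMPTY, L = -1, 100
      grid = rng.integers(0, 3, size=(L, L))   # species 0, 1, 2 on every site initially
      MOVES = ((0, 1), (0, -1), (1, 0), (-1, 0))

      def sweep(grid, n_events=20_000, pred=1.0, repro=1.0):
          """Random sequential updates: cyclic predation (0>1>2>0) and reproduction."""
          for _ in range(n_events):
              x, y = rng.integers(0, L, size=2)
              dx, dy = MOVES[rng.integers(0, 4)]
              xn, yn = (x + dx) % L, (y + dy) % L
              a, b = grid[x, y], grid[xn, yn]
              if a == EMPTY:
                  continue
              if b != EMPTY and (a - b) % 3 == 2 and rng.random() < pred:
                  grid[xn, yn] = EMPTY            # a preys on b (cyclic dominance)
              elif b == EMPTY and rng.random() < repro:
                  grid[xn, yn] = a                # reproduction into an empty site
          return grid

      for _ in range(50):
          grid = sweep(grid)
      print("species densities:", [round((grid == s).mean(), 3) for s in (0, 1, 2)])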

  10. The Intransitivity of Educational Preferences

    ERIC Educational Resources Information Center

    Smith, Debra Candace

    2013-01-01

    This study sought to answer the question of whether the existence of cycles in education are random events, or if cycles in education are likely to be expected on a regular basis due to intransitive decision-making patterns of stakeholders. This was a quantitative study, modeled after two previously conducted studies (Davis, 1958/59; May, 1954),…

  11. Pedological memory in forest soil development

    Treesearch

    Jonathan D. Phillips; Daniel A. Marion

    2004-01-01

    Individual trees may have significant impacts on soil morphology. If these impacts are non-random such that some microsites are repeatedly preferentially affected by trees, complex local spatial variability of soils would result. A model of self-reinforcing pedologic influences of trees (SRPIT) is proposed to explain patterns of soil variability in the Ouachita...

  12. A Utility Theory of Old Age.

    ERIC Educational Resources Information Center

    Hamlin, Roy M.

    Herzberg's job satisfaction model serves as the basis for an analysis of old age. The pattern varies among individuals, but the capacity for organized behavior rather than random stress reduction supplies each individual with a task. The hypothesis is that if the older individual realizes utility in his years beyond 70, he will retain competence…

  13. Generalized run-and-turn motions: From bacteria to Lévy walks

    NASA Astrophysics Data System (ADS)

    Detcheverry, François

    2017-07-01

    Swimming bacteria exhibit a repertoire of motility patterns, in which persistent motion is interrupted by turning events. What are the statistical properties of such random walks? While some particular instances have long been studied, the general case in which turning times do not follow a Poisson process has remained unsolved. We present a generic extension of the continuous-time random walk formalism relying on operators and noncommutative calculus. The approach is first applied to a unimodal model of bacterial motion. We examine the existence of a minimum in the velocity correlation function and discuss the maximum of diffusivity at an optimal value of rotational diffusion. The model is then extended to bimodal patterns and includes as particular cases all swimming strategies: run-and-tumble, run-stop, run-reverse and run-reverse-flick. We characterize their velocity correlation functions and investigate how bimodality affects diffusivity. Finally, the wider applicability of the method is illustrated by considering curved trajectories and Lévy walks. Our results are relevant for intermittent motion of living beings, be they swimming micro-organisms or crawling cells.
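
    The simplest member of this family is the Poissonian run-and-tumble walk (exponential run durations, complete reorientation at each tumble). The sketch below simulates it and estimates the mean-squared displacement; it does not implement the operator formalism of the paper, and all parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)

      def run_and_tumble(t_max, speed=1.0, tumble_rate=1.0):
          """One 2-D run-and-tumble trajectory, sampled at unit time intervals."""
          t, pos, heading = 0.0, np.zeros(2), rng.uniform(0.0, 2.0 * np.pi)
          samples, next_sample = [pos.copy()], 1.0
          while t < t_max:
              run_end = min(t + rng.exponential(1.0 / tumble_rate), t_max)  # Poissonian runs
              v = speed * np.array([np.cos(heading), np.sin(heading)])
              while next_sample <= run_end:
                  samples.append(pos + v * (next_sample - t))
                  next_sample += 1.0
              pos, t = pos + v * (run_end - t), run_end
              heading = rng.uniform(0.0, 2.0 * np.pi)                       # tumble: new direction
          return np.array(samples)

      trajs = np.array([run_and_tumble(200.0) for _ in range(200)])
      msd = ((trajs - trajs[:, :1, :]) ** 2).sum(axis=2).mean(axis=0)       # MSD vs lag time
      print(msd[:5])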

  14. Spatial-temporal clustering of tornadoes

    NASA Astrophysics Data System (ADS)

    Malamud, Bruce D.; Turcotte, Donald L.; Brooks, Harold E.

    2016-12-01

    The standard measure of the intensity of a tornado is the Enhanced Fujita scale, which is based qualitatively on the damage caused by a tornado. An alternative measure of tornado intensity is the tornado path length, L. Here we examine the spatial-temporal clustering of severe tornadoes, which we define as having path lengths L ≥ 10 km. Of particular concern are tornado outbreaks, when a large number of severe tornadoes occur in a day in a restricted region. We apply a spatial-temporal clustering analysis developed for earthquakes. We take all pairs of severe tornadoes in observed and modelled outbreaks, and for each pair plot the spatial lag (distance between touchdown points) against the temporal lag (time between touchdown points). We apply our spatial-temporal lag methodology to the intense tornado outbreaks in the central United States on 26 and 27 April 2011, which resulted in over 300 fatalities and produced 109 severe (L ≥ 10 km) tornadoes. The patterns of spatial-temporal lag correlations that we obtain for the 2 days are strikingly different. On 26 April 2011, there were 45 severe tornadoes and our clustering analysis is dominated by a complex sequence of linear features. We associate the linear patterns with the tornadoes generated in either a single cell thunderstorm or a closely spaced cluster of single cell thunderstorms moving at a near-constant velocity. Our study of a derecho tornado outbreak of six severe tornadoes on 4 April 2011 along with modelled outbreak scenarios confirms this association. On 27 April 2011, there were 64 severe tornadoes and our clustering analysis is predominantly random with virtually no embedded linear patterns. We associate this pattern with a large number of interacting supercell thunderstorms generating tornadoes randomly in space and time. In order to better understand these associations, we also applied our approach to the Great Plains tornado outbreak of 3 May 1999. Careful studies by others have associated individual tornadoes with specified supercell thunderstorms. Our analysis of the 3 May 1999 tornado outbreak directly associated linear features in the largely random spatial-temporal analysis with several supercell thunderstorms, which we then confirmed using model scenarios of synthetic tornado outbreaks. We suggest that it may be possible to develop a semi-automated modelling of tornado touchdowns to match the type of observations made on the 3 May 1999 outbreak.

  15. Spatial-Temporal Clustering of Tornadoes

    NASA Astrophysics Data System (ADS)

    Malamud, Bruce D.; Turcotte, Donald L.; Brooks, Harold E.

    2017-04-01

    The standard measure of the intensity of a tornado is the Enhanced Fujita scale, which is based qualitatively on the damage caused by a tornado. An alternative measure of tornado intensity is the tornado path length, L. Here we examine the spatial-temporal clustering of severe tornadoes, which we define as having path lengths L ≥ 10 km. Of particular concern are tornado outbreaks, when a large number of severe tornadoes occur in a day in a restricted region. We apply a spatial-temporal clustering analysis developed for earthquakes. We take all pairs of severe tornadoes in observed and modelled outbreaks, and for each pair plot the spatial lag (distance between touchdown points) against the temporal lag (time between touchdown points). We apply our spatial-temporal lag methodology to the intense tornado outbreaks in the central United States on 26 and 27 April 2011, which resulted in over 300 fatalities and produced 109 severe (L ≥ 10 km) tornadoes. The patterns of spatial-temporal lag correlations that we obtain for the 2 days are strikingly different. On 26 April 2011, there were 45 severe tornadoes and our clustering analysis is dominated by a complex sequence of linear features. We associate the linear patterns with the tornadoes generated in either a single cell thunderstorm or a closely spaced cluster of single cell thunderstorms moving at a near-constant velocity. Our study of a derecho tornado outbreak of six severe tornadoes on 4 April 2011 along with modelled outbreak scenarios confirms this association. On 27 April 2011, there were 64 severe tornadoes and our clustering analysis is predominantly random with virtually no embedded linear patterns. We associate this pattern with a large number of interacting supercell thunderstorms generating tornadoes randomly in space and time. In order to better understand these associations, we also applied our approach to the Great Plains tornado outbreak of 3 May 1999. Careful studies by others have associated individual tornadoes with specified supercell thunderstorms. Our analysis of the 3 May 1999 tornado outbreak directly associated linear features in the largely random spatial-temporal analysis with several supercell thunderstorms, which we then confirmed using model scenarios of synthetic tornado outbreaks. We suggest that it may be possible to develop a semi-automated modelling of tornado touchdowns to match the type of observations made on the 3 May 1999 outbreak.

  16. Elevation Control on Vegetation Organization in a Semiarid Ecosystem in Central New Mexico

    NASA Astrophysics Data System (ADS)

    Nudurupati, S. S.; Istanbulluoglu, E.; Adams, J. M.; Hobley, D. E. J.; Gasparini, N. M.; Tucker, G. E.; Hutton, E. W. H.

    2015-12-01

    Many semiarid and desert ecosystems are characterized by patchy and dynamic vegetation. Topography plays a commanding role in vegetation patterns: plant biomes and biodiversity vary systematically with slope and aspect, from shrublands at low desert elevations, to mixed grass/shrublands at mid elevations, to forests at high elevations. In this study, we investigate the role of elevation-dependent climatology on vegetation organization in a semiarid New Mexico catchment where elevation and hillslope aspect play a defining role in determining plant types. An ecohydrologic cellular automaton model developed within Landlab (a component-based modeling framework) is used. The model couples local vegetation dynamics (which simulate biomass production based on local soil moisture and potential evapotranspiration) with plant establishment and mortality based on competition for resources and space. The model is driven by elevation-dependent rainfall pulses and solar radiation. The domain is initialized with randomly assigned plant types, and the model parameters that couple plant response with soil moisture are systematically changed. Climate perturbation experiments are conducted to examine spatial vegetation organization and associated timescales. Model results reproduce elevation and aspect controls on observed vegetation patterns, indicating that the model captures necessary and sufficient conditions to explain these observed ecohydrological patterns.

  17. The reduction of adult neurogenesis in depression impairs the retrieval of new as well as remote episodic memory

    PubMed Central

    Fang, Jing; Demic, Selver; Cheng, Sen

    2018-01-01

    Major depressive disorder (MDD) is associated with an impairment of episodic memory, but the mechanisms underlying this deficit remain unclear. Animal models of MDD find impaired adult neurogenesis (AN) in the dentate gyrus (DG), and AN in DG has been suggested to play a critical role in reducing the interference between overlapping memories through pattern separation. Here, we study the effect of reduced AN in MDD on the accuracy of episodic memory using computational modeling. We focus on how memory is affected when periods with a normal rate of AN (asymptomatic states) alternate with periods with a low rate (depressive episodes), which has never been studied before. Also, unlike previous models of adult neurogenesis, which consider memories as static patterns, we model episodic memory as sequences of neural activity patterns. In our model, AN adds additional random components to the memory patterns, which results in the decorrelation of similar patterns. Consistent with previous studies, higher rates of AN lead to higher memory accuracy in our model, which implies that memories stored in the depressive state are impaired. Intriguingly, our model makes the novel prediction that memories stored in an earlier asymptomatic state are also impaired by a later depressive episode. This retrograde effect worsens with increasing duration of the depressive episode. Finally, pattern separation at the sensory processing stage does not improve, but rather worsens, the accuracy of episodic memory retrieval, suggesting an explanation for why AN is found in brain areas serving memory rather than sensory function. In conclusion, while cognitive retrieval biases might contribute to episodic memory deficits in MDD, our model suggests a mechanistic explanation that affects all episodic memories, regardless of emotional relevance. PMID:29879169

  18. The reduction of adult neurogenesis in depression impairs the retrieval of new as well as remote episodic memory.

    PubMed

    Fang, Jing; Demic, Selver; Cheng, Sen

    2018-01-01

    Major depressive disorder (MDD) is associated with an impairment of episodic memory, but the mechanisms underlying this deficit remain unclear. Animal models of MDD find impaired adult neurogenesis (AN) in the dentate gyrus (DG), and AN in DG has been suggested to play a critical role in reducing the interference between overlapping memories through pattern separation. Here, we study the effect of reduced AN in MDD on the accuracy of episodic memory using computational modeling. We focus on how memory is affected when periods with a normal rate of AN (asymptomatic states) alternate with periods with a low rate (depressive episodes), which has never been studied before. Also, unlike previous models of adult neurogenesis, which consider memories as static patterns, we model episodic memory as sequences of neural activity patterns. In our model, AN adds additional random components to the memory patterns, which results in the decorrelation of similar patterns. Consistent with previous studies, higher rates of AN lead to higher memory accuracy in our model, which implies that memories stored in the depressive state are impaired. Intriguingly, our model makes the novel prediction that memories stored in an earlier asymptomatic state are also impaired by a later depressive episode. This retrograde effect worsens with increasing duration of the depressive episode. Finally, pattern separation at the sensory processing stage does not improve, but rather worsens, the accuracy of episodic memory retrieval, suggesting an explanation for why AN is found in brain areas serving memory rather than sensory function. In conclusion, while cognitive retrieval biases might contribute to episodic memory deficits in MDD, our model suggests a mechanistic explanation that affects all episodic memories, regardless of emotional relevance.

  19. Declines in moose population density at Isle Royale National Park, MI, USA and accompanied changes in landscape patterns

    USGS Publications Warehouse

    De Jager, N. R.; Pastor, J.

    2009-01-01

    Ungulate herbivores create patterns of forage availability, plant species composition, and soil fertility as they range across large landscapes and consume large quantities of plant material. Over time, herbivore populations fluctuate, producing great potential for spatio-temporal landscape dynamics. In this study, we extend the spatial and temporal extent of a long-term investigation of the relationship of landscape patterns to moose foraging behavior at Isle Royale National Park, MI. We examined how patterns of browse availability and consumption, plant basal area, and soil fertility changed during a recent decline in the moose population. We used geostatistics to examine changes in the nature of spatial patterns in two valleys over 18 years and across short-range and long-range distance scales. Landscape patterns of available and consumed browse changed from either repeated patches or randomly distributed patches in 1988-1992 to random point distributions by 2007 after a recent record high peak followed by a rapid decline in the moose population. Patterns of available and consumed browse became decoupled during the moose population low, which is in contrast to coupled patterns during the earlier high moose population. Distributions of plant basal area and soil nitrogen availability also switched from repeated patches to randomly distributed patches in one valley and to random point distributions in the other valley. Rapid declines in moose population density may release vegetation and soil fertility from browsing pressure and in turn create random landscape patterns. © Springer Science+Business Media B.V. 2009.

  20. Non-Gaussian distributions of melodic intervals in music: The Lévy-stable approximation

    NASA Astrophysics Data System (ADS)

    Niklasson, Gunnar A.; Niklasson, Maria H.

    2015-11-01

    The analysis of structural patterns in music is of interest in order to increase our fundamental understanding of music, as well as for devising algorithms for computer-generated music, so-called algorithmic composition. Musical melodies can be analyzed in terms of a “music walk” between the pitches of successive tones in a notescript, in analogy with the “random walk” model commonly used in physics. We find that the distribution of melodic intervals between tones can be approximated with a Lévy-stable distribution. Since music also exhibits self-affine scaling, we propose that the “music walk” should be modelled as a Lévy motion. We find that the Lévy motion model captures basic structural patterns in classical as well as in folk music.
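
    A minimal sketch of the "music walk" idea: compute melodic intervals from a pitch sequence and compare their tail behaviour with a Lévy-stable sample from scipy. The toy melody and the stability parameters alpha and beta are placeholders, not values fitted in the paper.

      import numpy as np
      from scipy import stats

      # Hypothetical melody as MIDI pitch numbers; a real analysis would parse a notescript.
      pitches = np.array([60, 62, 64, 60, 67, 65, 64, 62, 60, 72, 71, 69, 67, 65, 64, 62, 60])
      intervals = np.diff(pitches)              # melodic intervals = steps of the "music walk"

      # Levy-stable comparison: alpha < 2 gives heavy tails, alpha = 2 recovers the Gaussian.
      alpha, beta = 1.7, 0.0                    # illustrative values only
      sample = stats.levy_stable.rvs(alpha, beta, loc=0.0, scale=intervals.std(), size=10_000)

      big = 3  # semitone threshold for a "large leap"
      print("P(|interval| > 3): data", np.mean(np.abs(intervals) > big),
            "model", np.mean(np.abs(sample) > big))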

  1. Using multilevel spatial models to understand salamander site occupancy patterns after wildfire

    USGS Publications Warehouse

    Chelgren, Nathan; Adams, Michael J.; Bailey, Larissa L.; Bury, R. Bruce

    2011-01-01

    Studies of the distribution of elusive forest wildlife have suffered from the confounding of true presence with the uncertainty of detection. Occupancy modeling, which incorporates probabilities of species detection conditional on presence, is an emerging approach for reducing observation bias. However, the current likelihood modeling framework is restrictive for handling unexplained sources of variation in the response that may occur when there are dependence structures such as smaller sampling units that are nested within larger sampling units. We used multilevel Bayesian occupancy modeling to handle dependence structures and to partition sources of variation in occupancy of sites by terrestrial salamanders (family Plethodontidae) within and surrounding an earlier wildfire in western Oregon, USA. Comparison of model fit favored a spatial N-mixture model that accounted for variation in salamander abundance over models that were based on binary detection/non-detection data. Though catch per unit effort was higher in burned areas than unburned, there was strong support that this pattern was due to a higher probability of capture for individuals in burned plots. Within the burn, the odds of capturing an individual given it was present were 2.06 times the odds outside the burn, reflecting reduced complexity of ground cover in the burn. There was weak support that true occupancy was lower within the burned area. While the odds of occupancy in the burn were 0.49 times the odds outside the burn among the five species, the magnitude of variation attributed to the burn was small in comparison to variation attributed to other landscape variables and to unexplained, spatially autocorrelated random variation. While ordinary occupancy models may separate the biological pattern of interest from variation in detection probability when all sources of variation are known, the addition of random effects structures for unexplained sources of variation in occupancy and detection probability may often more appropriately represent levels of uncertainty. ?? 2011 by the Ecological Society of America.

  2. Probability Theory Plus Noise: Descriptive Estimation and Inferential Judgment.

    PubMed

    Costello, Fintan; Watts, Paul

    2018-01-01

    We describe a computational model of two central aspects of people's probabilistic reasoning: descriptive probability estimation and inferential probability judgment. This model assumes that people's reasoning follows standard frequentist probability theory, but it is subject to random noise. This random noise has a regressive effect in descriptive probability estimation, moving probability estimates away from normative probabilities and toward the center of the probability scale. This random noise has an anti-regressive effect in inferential judgment, however. These regressive and anti-regressive effects explain various reliable and systematic biases seen in people's descriptive probability estimation and inferential probability judgment. This model predicts that these contrary effects will tend to cancel out in tasks that involve both descriptive estimation and inferential judgment, leading to unbiased responses in those tasks. We test this model by applying it to one such task, described by Gallistel et al. Participants' median responses in this task were unbiased, agreeing with normative probability theory over the full range of responses. Our model captures the pattern of unbiased responses in this task, while simultaneously explaining systematic biases away from normatively correct probabilities seen in other tasks. Copyright © 2018 Cognitive Science Society, Inc.
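
    One common way to formalize "frequentist reading subject to random noise" is to assume each remembered instance is misread with some probability d, which pulls mean estimates toward 0.5. The sketch below shows that regressive effect numerically; the noise level d and the sample sizes are illustrative assumptions, not parameters reported in the record.

      import numpy as np

      rng = np.random.default_rng(7)

      def mean_noisy_estimate(p_true, n_instances=100, d=0.1, n_people=10_000):
          """Mean probability estimate when each remembered instance is misread
          with probability d; the expectation is p + d*(1 - 2p), regressing toward 0.5."""
          occurred = rng.random((n_people, n_instances)) < p_true   # true outcomes
          misread = rng.random((n_people, n_instances)) < d         # random read errors
          return (occurred ^ misread).mean()                        # a misread flips the outcome

      for p in (0.1, 0.5, 0.9):
          print(p, round(mean_noisy_estimate(p), 3))   # estimates pulled toward 0.5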

  3. Master stability functions reveal diffusion-driven pattern formation in networks

    NASA Astrophysics Data System (ADS)

    Brechtel, Andreas; Gramlich, Philipp; Ritterskamp, Daniel; Drossel, Barbara; Gross, Thilo

    2018-03-01

    We study diffusion-driven pattern formation in networks of networks, a class of multilayer systems, where different layers have the same topology, but different internal dynamics. Agents are assumed to disperse within a layer by undergoing random walks, while they can be created or destroyed by reactions between or within a layer. We show that the stability of homogeneous steady states can be analyzed with a master stability function approach that reveals a deep analogy between pattern formation in networks and pattern formation in continuous space. For illustration, we consider a generalized model of ecological meta-food webs. This fairly complex model describes the dispersal of many different species across a region consisting of a network of individual habitats while subject to realistic, nonlinear predator-prey interactions. In this example, the method reveals the intricate dependence of the dynamics on the spatial structure. The ability of the proposed approach to deal with this fairly complex system highlights it as a promising tool for ecology and other applications.
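
    The network analogue of a Turing dispersion relation can be sketched directly: for every Laplacian eigenvalue Λ of the dispersal network, the homogeneous state is unstable to that mode if J − ΛD has an eigenvalue with positive real part, where J is the reaction Jacobian and D the diffusion matrix. The Jacobian, diffusion constants and graph below are illustrative placeholders, not the meta-food-web model of the paper.

      import numpy as np
      import networkx as nx

      # Illustrative two-species reaction Jacobian at the homogeneous steady state
      # (activator-inhibitor signs) and diagonal diffusion-coefficient matrix.
      J = np.array([[1.0, -1.0],
                    [2.0, -1.5]])
      D = np.diag([1.0, 20.0])

      G = nx.watts_strogatz_graph(100, 4, 0.1, seed=0)
      L = nx.laplacian_matrix(G).toarray().astype(float)
      eigvals = np.linalg.eigvalsh(L)                    # Laplacian eigenvalues >= 0

      # Master-stability-style check: largest growth rate of J - Lambda*D per mode.
      growth = [np.linalg.eigvals(J - lam * D).real.max() for lam in eigvals]
      unstable = [lam for lam, g in zip(eigvals, growth) if lam > 1e-9 and g > 0]
      print("pattern-forming modes:", len(unstable), "of", len(eigvals))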

  4. Prospective Measurement of Daily Health Behaviors: Modeling Temporal Patterns in Missing Data, Sexual Behavior, and Substance Use in an Online Daily Diary Study of Gay and Bisexual Men.

    PubMed

    Rendina, H Jonathon; Ventuneac, Ana; Mustanski, Brian; Grov, Christian; Parsons, Jeffrey T

    2016-08-01

    Daily diary and other intensive longitudinal methods are increasingly being used to investigate fluctuations in psychological and behavioral processes. To inform the development of this methodology, we sought to explore predictors of and patterns in diary compliance and behavioral reports. We used multilevel modeling to analyze data from an online daily diary study of 371 gay and bisexual men focused on sexual behavior and substance use. We found that greater education and older age as well as lower frequency of substance use were associated with higher compliance. Using polynomial and trigonometric functions, we found evidence for circaseptan patterns in compliance, sexual behavior, and substance use, as well as linear declines in compliance and behavior over time. The results suggest potential sources of non-random patterns of missing data and suggest that trigonometric terms provide a similar but more parsimonious investigation of circaseptan rhythms than do third-order polynomial terms.

  5. Pattern selection and super-patterns in the bounded confidence model

    DOE PAGES

    Ben-Naim, E.; Scheel, A.

    2015-10-26

    We study pattern formation in the bounded confidence model of opinion dynamics. In this random process, opinion is quantified by a single variable. Two agents may interact and reach a fair compromise, but only if their difference of opinion falls below a fixed threshold. Starting from a uniform distribution of opinions with compact support, a traveling wave forms and it propagates from the domain boundary into the unstable uniform state. Consequently, the system reaches a steady state with isolated clusters that are separated by distance larger than the interaction range. These clusters form a quasi-periodic pattern where the sizes of the clusters and the separations between them are nearly constant. We obtain analytically the average separation between clusters L. Interestingly, there are also very small quasi-periodic modulations in the size of the clusters. Furthermore, the spatial periods of these modulations are a series of integers that follow from the continued-fraction representation of the irrational average separation L.
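
    A minimal sketch of the bounded confidence dynamics described above: random pairs of agents compromise only when their opinions differ by less than a fixed threshold, and the population settles into isolated clusters. The population size, threshold and number of updates are illustrative choices, not the values used in the paper.

      import numpy as np

      rng = np.random.default_rng(3)

      def bounded_confidence(n=2000, threshold=0.1, n_updates=200_000):
          """Pairwise bounded-confidence dynamics for opinions on [0, 1]."""
          x = rng.random(n)                          # initial opinions, uniform on [0, 1]
          for _ in range(n_updates):
              i, j = rng.integers(0, n, size=2)
              if abs(x[i] - x[j]) < threshold:       # interact only within the confidence bound
                  x[i] = x[j] = 0.5 * (x[i] + x[j])  # fair compromise at the midpoint
          return x

      opinions = bounded_confidence()
      counts, _ = np.histogram(opinions, bins=50, range=(0.0, 1.0))
      print("occupied bins (rough cluster count):", int((counts > 0).sum()))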

  6. Prospective Measurement of Daily Health Behaviors: Modeling Temporal Patterns in Missing Data, Sexual Behavior, and Substance Use in an Online Daily Diary Study of Gay and Bisexual Men

    PubMed Central

    Rendina, H. Jonathon; Ventuneac, Ana; Mustanski, Brian; Grov, Christian; Parsons, Jeffrey T.

    2016-01-01

    Daily diary and other intensive longitudinal methods are increasingly being used to investigate fluctuations in psychological and behavioral processes. To inform the development of this methodology, we sought to explore predictors of and patterns in diary compliance and behavioral reports. We used multilevel modeling to analyze data from an online daily diary study of 371 gay and bisexual men focused on sexual behavior and substance use. We found that greater education and older age as well as lower frequency of substance use were associated with higher compliance. Using polynomial and trigonometric functions, we found evidence for circaseptan patterns in compliance, sexual behavior, and substance use, as well as linear declines in compliance and behavior over time. The results suggest potential sources of non-random patterns of missing data and suggest that trigonometric terms provide a similar but more parsimonious investigation of circaseptan rhythms than do third-order polynomial terms. PMID:26992392

  7. Helical Turing patterns in the Lengyel-Epstein model in thin cylindrical layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bánsági, T.; Taylor, A. F., E-mail: A.F.Taylor@sheffield.ac.uk

    2015-06-15

    The formation of Turing patterns was investigated in thin cylindrical layers using the Lengyel-Epstein model of the chlorine dioxide-iodine-malonic acid reaction. The influence of the width of the layer W and the diameter D of the inner cylinder on the pattern with intrinsic wavelength l was determined in simulations with initial random noise perturbations to the uniform state for W < l/2 and D ∼ l or lower. We show that the geometric constraints of the reaction domain may result in the formation of helical Turing patterns with parameters that give stripes (b = 0.2) or spots (b = 0.37) in two dimensions. For b = 0.2, the helices were composed of lamellae and defects were likely as the diameter of the cylinder increased. With b = 0.37, the helices consisted of semi-cylinders and the orientation of stripes on the outer surface (and hence winding number) increased with increasing diameter until a new stripe appeared.

  8. Neutral Community Dynamics and the Evolution of Species Interactions.

    PubMed

    Coelho, Marco Túlio P; Rangel, Thiago F

    2018-04-01

    A contemporary goal in ecology is to determine the ecological and evolutionary processes that generate recurring structural patterns in mutualistic networks. One of the great challenges is testing the capacity of neutral processes to replicate observed patterns in ecological networks, since the original formulation of the neutral theory lacks trophic interactions. Here, we develop a stochastic-simulation neutral model adding trophic interactions to the neutral theory of biodiversity. Without invoking ecological differences among individuals of different species, and assuming that ecological interactions emerge randomly, we demonstrate that a spatially explicit multitrophic neutral model is able to capture the recurrent structural patterns of mutualistic networks (i.e., degree distribution, connectance, nestedness, and phylogenetic signal of species interactions). Nonrandom species distribution, caused by probabilistic events of migration and speciation, create nonrandom network patterns. These findings have broad implications for the interpretation of niche-based processes as drivers of ecological networks, as well as for the integration of network structures with demographic stochasticity.

  9. Pattern selection and super-patterns in the bounded confidence model

    NASA Astrophysics Data System (ADS)

    Ben-Naim, E.; Scheel, A.

    2015-10-01

    We study pattern formation in the bounded confidence model of opinion dynamics. In this random process, opinion is quantified by a single variable. Two agents may interact and reach a fair compromise, but only if their difference of opinion falls below a fixed threshold. Starting from a uniform distribution of opinions with compact support, a traveling wave forms and it propagates from the domain boundary into the unstable uniform state. Consequently, the system reaches a steady state with isolated clusters that are separated by distance larger than the interaction range. These clusters form a quasi-periodic pattern where the sizes of the clusters and the separations between them are nearly constant. We obtain analytically the average separation between clusters L. Interestingly, there are also very small quasi-periodic modulations in the size of the clusters. The spatial periods of these modulations are a series of integers that follow from the continued-fraction representation of the irrational average separation L.

  10. Pattern of students' conceptual change on magnetic field based on students' mental models

    NASA Astrophysics Data System (ADS)

    Hamid, Rimba; Widodo, Ari; Sopandi, Wahyu

    2017-05-01

    Students' understanding of natural phenomena can be identified by analyzing their mental models. Changes in students' mental models are a good indicator of conceptual change. This research aims at identifying students' conceptual change by analyzing changes in their mental models. The participants were twenty-five elementary school students. Data were collected throughout the lessons (prior to, during, and after the lessons) from students' written responses and individual interviews. Lessons were designed to facilitate conceptual change by allowing students to work in groups with others who held similar ideas; lessons were therefore student-directed. Changes in students' ideas at every stage of the lessons were identified and analyzed. The results showed three patterns of students' mental models, namely scientific (44%), analogous to everyday life (52%), and intuitive (4%). Further analysis of their conceptual change identified six distinct patterns: consistently correct (20%), consistently incomplete (16%), changing from incorrect to incomplete (8%), changing from incomplete to complete (32%), changing from complete to incorrect (4%), and changing from incorrect to complete (4%). This study suggests that the process of learning science does not move in a linear and progressive way; rather, it moves randomly and may move backward and forward.

  11. Robust-yet-fragile nature of interdependent networks

    NASA Astrophysics Data System (ADS)

    Tan, Fei; Xia, Yongxiang; Wei, Zhi

    2015-05-01

    Interdependent networks have been shown to be extremely vulnerable based on the percolation model. Parshani et al. [Europhys. Lett. 92, 68002 (2010), 10.1209/0295-5075/92/68002] further indicated that the more intersimilar networks are, the more robust they are to random failures. When traffic load is considered, how do the coupling patterns impact cascading failures in interdependent networks? This question has been largely unexplored until now. In this paper, we address this question by investigating the robustness of interdependent Erdös-Rényi random graphs and Barabási-Albert scale-free networks under either random failures or intentional attacks. It is found that interdependent Erdös-Rényi random graphs are robust yet fragile under either random failures or intentional attacks. Interdependent Barabási-Albert scale-free networks, however, are only robust yet fragile under random failures but fragile under intentional attacks. We further analyze the interdependent communication network and power grid and achieve similar results. These results advance our understanding of how interdependency shapes network robustness.

  12. Contrasting species and functional beta diversity in montane ant assemblages.

    PubMed

    Bishop, Tom R; Robertson, Mark P; van Rensburg, Berndt J; Parr, Catherine L

    2015-09-01

    Beta diversity describes the variation in species composition between sites and can be used to infer why different species occupy different parts of the globe. It can be viewed in a number of ways. First, it can be partitioned into two distinct patterns: turnover and nestedness. Second, it can be investigated from either a species identity or a functional-trait point of view. We aim to document for the first time how these two aspects of beta diversity vary in response to a large environmental gradient in the Maloti-Drakensberg Mountains, southern Africa. We sampled ant assemblages along an extensive elevational gradient (900-3000 m a.s.l.) twice yearly for 7 years, and collected functional-trait information related to the species' dietary and habitat-structure preferences. We used recently developed methods to partition species and functional beta diversity into their turnover and nestedness components. A series of null models were used to test whether the observed beta diversity patterns differed from random expectations. Species beta diversity was driven by turnover, but functional beta diversity was composed of both turnover and nestedness patterns at different parts of the gradient. Null models revealed that deterministic processes were likely to be responsible for the species patterns but that the functional changes were indistinguishable from stochasticity. Different ant species are found with increasing elevation, but they tend to represent an increasingly nested subset of the available functional strategies. This finding is unique and narrows down the list of possible factors that control ant existence across elevation. We conclude that diet and habitat preferences have little role in structuring ant assemblages in montane environments and that some other factor must be driving the non-random patterns of species turnover. This finding also highlights the importance of distinguishing between different kinds of beta diversity.

  13. Wavelength selection beyond Turing

    NASA Astrophysics Data System (ADS)

    Zelnik, Yuval R.; Tzuk, Omer

    2017-06-01

    Spatial patterns arising spontaneously due to internal processes are ubiquitous in nature, varying from periodic patterns of dryland vegetation to complex structures of bacterial colonies. Many of these patterns can be explained in the context of a Turing instability, where patterns emerge due to two locally interacting components that diffuse with different speeds in the medium. Turing patterns are multistable, meaning that many different patterns with different wavelengths are possible for the same set of parameters. Nevertheless, in a given region typically only one such wavelength is dominant. In the Turing instability region, random initial conditions will mostly lead to a wavelength that is similar to that of the leading eigenvector that arises from the linear stability analysis, but when venturing beyond, little is known about the pattern that will emerge. Using dryland vegetation as a case study, we use different models of drylands ecosystems to study the wavelength pattern that is selected in various scenarios beyond the Turing instability region, focusing on the phenomena of localized states and repeated local disturbances.

  14. Active dynamics of colloidal particles in time-varying laser speckle patterns

    PubMed Central

    Bianchi, Silvio; Pruner, Riccardo; Vizsnyiczai, Gaszton; Maggi, Claudio; Di Leonardo, Roberto

    2016-01-01

    Colloidal particles immersed in a dynamic speckle pattern experience an optical force that fluctuates both in space and time. The resulting dynamics presents many interesting analogies with a broad class of non-equilibrium systems such as active colloids, self-propelled microorganisms, and transport in dynamic intracellular environments. Here we show that a spatial light modulator can be used to generate light fields that fluctuate with controllable space and time correlations and a prescribed average intensity profile. In particular, we generate ring-shaped random patterns that can confine a colloidal particle on a quasi-one-dimensional random energy landscape. We find a mean square displacement that is diffusive at both short and long times, while superdiffusive or subdiffusive behavior is observed at intermediate times depending on the value of the speckle correlation time. We propose two alternative models for the mean square displacement in the two limiting cases of a short or a long speckle correlation time. A simple interpolation formula is shown to account for the full phenomenology observed in the mean square displacement across the entire range from fast to slowly fluctuating speckles. PMID:27279540

  15. Effect of Electroacupuncture at The Zusanli Point (Stomach-36) on Dorsal Random Pattern Skin Flap Survival in a Rat Model.

    PubMed

    Wang, Li-Ren; Cai, Le-Yi; Lin, Ding-Sheng; Cao, Bin; Li, Zhi-Jie

    2017-10-01

    Random skin flaps are commonly used for wound repair and reconstruction. Electroacupuncture at The Zusanli point could enhance microcirculation and blood perfusion in random skin flaps. To determine whether electroacupuncture at The Zusanli point can improve the survival of random skin flaps in a rat model. Thirty-six male Sprague Dawley rats were randomly divided into 3 groups: control group (no electroacupuncture), Group A (electroacupuncture at a nonacupoint near The Zusanli point), and Group B (electroacupuncture at The Zusanli point). McFarlane flaps were established. On postoperative Day 2, malondialdehyde (MDA) and superoxide dismutase were detected. The flap survival rate was evaluated, inflammation was examined in hematoxylin and eosin-stained slices, and the expression of vascular endothelial growth factor (VEGF) was measured immunohistochemically on Day 7. The mean survival area of the flaps in Group B was significantly larger than that in the control group and Group A. Superoxide dismutase activity and VEGF expression level were significantly higher in Group B than those in the control group and Group A, whereas MDA and inflammation levels in Group B were significantly lower than those in the other 2 groups. Electroacupuncture at The Zusanli point can effectively improve the random flap survival.

  16. Phase unwrapping using region-based Markov random field model.

    PubMed

    Dong, Ying; Ji, Jim

    2010-01-01

    Phase unwrapping is a classical problem in Magnetic Resonance Imaging (MRI), Interferometric Synthetic Aperture Radar and Sonar (InSAR/InSAS), fringe pattern analysis, and spectroscopy. Although many methods have been proposed to address this problem, robust and effective phase unwrapping remains a challenge. This paper presents a novel phase unwrapping method using a region-based Markov Random Field (MRF) model. Specifically, the phase image is segmented into regions within which the phase is not wrapped. Then, the phase image is unwrapped between different regions using an improved Highest Confidence First (HCF) algorithm to optimize the MRF model. The proposed method has desirable theoretical properties as well as an efficient implementation. Simulations and experimental results on MRI images show that the proposed method provides phase unwrapping similar to or better than that of the Phase Unwrapping MAx-flow/min-cut (PUMA) method and the ZπM method.

  17. Perception of Randomness: On the Time of Streaks

    ERIC Educational Resources Information Center

    Sun, Yanlong; Wang, Hongbin

    2010-01-01

    People tend to think that streaks in random sequential events are rare and remarkable. When they actually encounter streaks, they tend to consider the underlying process as non-random. The present paper examines the time of pattern occurrences in sequences of Bernoulli trials, and shows that among all patterns of the same length, a streak is the…

  18. A stochastic-geometric model of soil variation in Pleistocene patterned ground

    NASA Astrophysics Data System (ADS)

    Lark, Murray; Meerschman, Eef; Van Meirvenne, Marc

    2013-04-01

    In this paper we examine the spatial variability of soil in parent material with complex spatial structure which arises from complex non-linear geomorphic processes. We show that this variability can be better-modelled by a stochastic-geometric model than by a standard Gaussian random field. The benefits of the new model are seen in the reproduction of features of the target variable which influence processes like water movement and pollutant dispersal. Complex non-linear processes in the soil give rise to properties with non-Gaussian distributions. Even under a transformation to approximate marginal normality, such variables may have a more complex spatial structure than the Gaussian random field model of geostatistics can accommodate. In particular the extent to which extreme values of the variable are connected in spatially coherent regions may be misrepresented. As a result, for example, geostatistical simulation generally fails to reproduce the pathways for preferential flow in an environment where coarse infill of former fluvial channels or coarse alluvium of braided streams creates pathways for rapid movement of water. Multiple point geostatistics has been developed to deal with this problem. Multiple point methods proceed by sampling from a set of training images which can be assumed to reproduce the non-Gaussian behaviour of the target variable. The challenge is to identify appropriate sources of such images. In this paper we consider a mode of soil variation in which the soil varies continuously, exhibiting short-range lateral trends induced by local effects of the factors of soil formation which vary across the region of interest in an unpredictable way. The trends in soil variation are therefore only apparent locally, and the soil variation at regional scale appears random. We propose a stochastic-geometric model for this mode of soil variation called the Continuous Local Trend (CLT) model. We consider a case study of soil formed in relict patterned ground with pronounced lateral textural variations arising from the presence of infilled ice-wedges of Pleistocene origin. We show how knowledge of the pedogenetic processes in this environment, along with some simple descriptive statistics, can be used to select and fit a CLT model for the apparent electrical conductivity (ECa) of the soil. We use the model to simulate realizations of the CLT process, and compare these with realizations of a fitted Gaussian random field. We show how statistics that summarize the spatial coherence of regions with small values of ECa, which are expected to have coarse texture and so larger saturated hydraulic conductivity, are better reproduced by the CLT model than by the Gaussian random field. This suggests that the CLT model could be used to generate an unlimited supply of training images to allow multiple point geostatistical simulation or prediction of this or similar variables.

  19. Multiple-trait structured antedependence model to study the relationship between litter size and birth weight in pigs and rabbits.

    PubMed

    David, Ingrid; Garreau, Hervé; Balmisse, Elodie; Billon, Yvon; Canario, Laurianne

    2017-01-20

    Some genetic studies need to take into account correlations between traits that are repeatedly measured over time. Multiple-trait random regression models are commonly used to analyze repeated traits but suffer from several major drawbacks. In the present study, we developed a multiple-trait extension of the structured antedependence model (SAD) to overcome this issue and validated its usefulness by modeling the association between litter size (LS) and average birth weight (ABW) over parities in pigs and rabbits. The single-trait SAD model assumes that a random effect at time t can be explained by the previous values of the random effect (i.e. at previous times). The proposed multiple-trait extension of the SAD model consists of adding a cross-antedependence parameter to the single-trait SAD model. This model can be easily fitted using ASReml and the OWN Fortran program that we have developed. In comparison with the random regression model, we used our multiple-trait SAD model to analyze the LS and ABW of 4345 litters from 1817 Large White sows and 8706 litters from 2286 L-1777 does over a maximum of five successive parities. For both species, the multiple-trait SAD fitted the data better than the random regression model. The differences in AIC between the two models (AIC_random regression − AIC_SAD) were 7 and 227 for pigs and rabbits, respectively. A similar pattern of heritability and correlation estimates was obtained for both species. Heritabilities were lower for LS (ranging from 0.09 to 0.29) than for ABW (ranging from 0.23 to 0.39). The general trend was a decrease of the genetic correlation for a given trait between more distant parities. Estimates of genetic correlations between LS and ABW were negative and ranged from -0.03 to -0.52 across parities. No correlation was observed between the permanent environmental effects, except between the permanent environmental effects of LS and ABW of the same parity, for which the estimate of the correlation was strongly negative (ranging from -0.57 to -0.67). We demonstrated that application of our multiple-trait SAD model is feasible for studying several traits with repeated measurements and showed that it provided a better fit to the data than the random regression model.
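
    For intuition, a first-order antedependence structure can be written as u_t = φ u_{t-1} + e_t, which implies a covariance matrix across parities in which correlations decay with parity distance. The sketch below builds that implied covariance; the value of φ and the innovation variances are illustrative, and the cross-antedependence (multiple-trait) extension is not shown.

      import numpy as np

      def sad1_covariance(phi, innovation_var):
          """Covariance implied by first-order antedependence u_t = phi*u_{t-1} + e_t,
          with Var(e_t) given by innovation_var (one entry per parity)."""
          T = len(innovation_var)
          Phi = np.zeros((T, T))
          for t in range(1, T):
              Phi[t, t - 1] = phi
          A = np.linalg.inv(np.eye(T) - Phi)       # u = A @ e
          return A @ np.diag(innovation_var) @ A.T

      Sigma = sad1_covariance(phi=0.7, innovation_var=[1.0, 0.8, 0.8, 0.9, 1.1])
      sd = np.sqrt(np.diag(Sigma))
      corr = Sigma / np.outer(sd, sd)
      print(np.round(corr[0], 2))   # correlation with parity 1 decays with parity distance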

  20. Plasticity-Driven Self-Organization under Topological Constraints Accounts for Non-random Features of Cortical Synaptic Wiring

    PubMed Central

    Miner, Daniel; Triesch, Jochen

    2016-01-01

    Understanding the structure and dynamics of cortical connectivity is vital to understanding cortical function. Experimental data strongly suggest that local recurrent connectivity in the cortex is significantly non-random, exhibiting, for example, above-chance bidirectionality and an overrepresentation of certain triangular motifs. Additional evidence suggests a significant distance dependency to connectivity over a local scale of a few hundred microns, and particular patterns of synaptic turnover dynamics, including a heavy-tailed distribution of synaptic efficacies, a power law distribution of synaptic lifetimes, and a tendency for stronger synapses to be more stable over time. Understanding how many of these non-random features simultaneously arise would provide valuable insights into the development and function of the cortex. While previous work has modeled some of the individual features of local cortical wiring, there is no model that begins to comprehensively account for all of them. We present a spiking network model of a rodent Layer 5 cortical slice which, via the interactions of a few simple biologically motivated intrinsic, synaptic, and structural plasticity mechanisms, qualitatively reproduces these non-random effects when combined with simple topological constraints. Our model suggests that mechanisms of self-organization arising from a small number of plasticity rules provide a parsimonious explanation for numerous experimentally observed non-random features of recurrent cortical wiring. Interestingly, similar mechanisms have been shown to endow recurrent networks with powerful learning abilities, suggesting that these mechanisms are central to understanding both structure and function of cortical synaptic wiring. PMID:26866369

  1. NullSeq: A Tool for Generating Random Coding Sequences with Desired Amino Acid and GC Contents.

    PubMed

    Liu, Sophia S; Hockenberry, Adam J; Lancichinetti, Andrea; Jewett, Michael C; Amaral, Luís A N

    2016-11-01

    The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. In order to accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. While many tools have been developed to create random nucleotide sequences, protein coding sequences are subject to a unique set of constraints that complicates the process of generating appropriate null models. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content for the purpose of hypothesis testing. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content, which we have developed into a python package. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. Furthermore, this approach can easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs which will lead to a better understanding of biological processes as well as more effective engineering of biological systems.
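
    The following toy sketch illustrates the kind of Boltzmann-style weighting a maximum-entropy approach to this problem can take: synonymous codons for a fixed amino acid sequence are sampled with weights exp(lambda * GC), where lambda tunes the realized GC content. The truncated codon table, the protein sequence, and the lambda values are all illustrative; this is not the NullSeq package itself.

      import math, random

      random.seed(1)

      # Truncated synonymous-codon table, for illustration only.
      CODONS = {
          "M": ["ATG"],
          "K": ["AAA", "AAG"],
          "G": ["GGT", "GGC", "GGA", "GGG"],
          "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
      }

      def gc(codon):
          """Number of G or C bases in a codon."""
          return sum(base in "GC" for base in codon)

      def sample_sequence(protein, lam):
          """Sample one coding sequence; lam > 0 biases toward GC-rich codons."""
          seq = []
          for aa in protein:
              codons = CODONS[aa]
              weights = [math.exp(lam * gc(c)) for c in codons]
              seq.append(random.choices(codons, weights=weights)[0])
          return "".join(seq)

      protein = "MKGLLK"
      for lam in (-1.0, 0.0, 1.0):
          s = sample_sequence(protein, lam)
          print(lam, s, round(sum(b in "GC" for b in s) / len(s), 2))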

  2. Integument pattern formation involves genetic and epigenetic controls: feather arrays simulated by digital hormone models

    PubMed Central

    Jiang, Ting-Xin; Widelitz, Randall B.; Shen, Wei-Min; Will, Peter; Wu, Da-Yu; Lin, Chih-Min; Jung, Han-Sung; Chuong, Cheng-Ming

    2015-01-01

    Pattern formation is a fundamental morphogenetic process. Models based on genetic and epigenetic control have been proposed but remain controversial. Here we use feather morphogenesis for further evaluation. Adhesion molecules and/or signaling molecules were first expressed homogenously in feather tracts (restrictive mode, appear earlier) or directly in bud or inter-bud regions (de novo mode, appear later). They either activate or inhibit bud formation, but paradoxically co-localize in the bud. Using feather bud reconstitution, we showed that completely dissociated cells can reform periodic patterns without reference to previous positional codes. The patterning process has the characteristics of being self-organizing, dynamic and plastic. The final pattern is an equilibrium state reached by competition, and the number and size of buds can be altered based on cell number and activator/inhibitor ratio, respectively. We developed a Digital Hormone Model which consists of (1) competent cells without identity that move randomly in a space, (2) extracellular signaling hormones which diffuse by a reaction-diffusion mechanism and activate or inhibit cell adhesion, and (3) cells which respond with topological stochastic actions manifested as changes in cell adhesion. Based on probability, the results are cell clusters arranged in dots or stripes. Thus genetic control provides combinational molecular information which defines the properties of the cells but not the final pattern. Epigenetic control governs interactions among cells and their environment based on physical-chemical rules (such as those described in the Digital Hormone Model). Complex integument patterning is the sum of these two components of control and that is why integument patterns are usually similar but non-identical. These principles may be shared by other pattern formation processes such as barb ridge formation, fingerprints, pigmentation patterning, etc. The Digital Hormone Model can also be applied to swarming robot navigation, reaching intelligent automata and representing a self-re-configurable type of control rather than a follow-the-instruction type of control. PMID:15272377

  3. Quantifying patterns of research interest evolution

    NASA Astrophysics Data System (ADS)

    Jia, Tao; Wang, Dashun; Szymanski, Boleslaw

    Changing and shifting research interest is an integral part of a scientific career. Despite extensive investigations of various factors that influence a scientist's choice of research topics, quantitative assessments of mechanisms that give rise to macroscopic patterns characterizing research interest evolution of individual scientists remain limited. Here we perform a large-scale analysis of extensive publication records, finding that research interest change follows a reproducible pattern characterized by an exponential distribution. We identify three fundamental features responsible for the observed exponential distribution, which arise from a subtle interplay between exploitation and exploration in research interest evolution. We develop a random walk based model, which adequately reproduces our empirical observations. Our study presents one of the first quantitative analyses of macroscopic patterns governing research interest change, documenting a high degree of regularity underlying scientific research and individual careers.

  4. Soil Parameter Mapping and Ad Hoc Power Analysis to Increase Blocking Efficiency Prior to Establishing a Long-Term Field Experiment

    PubMed Central

    Collins, Doug; Benedict, Chris; Bary, Andy; Cogger, Craig

    2015-01-01

    The spatial heterogeneity of soil and weed populations poses a challenge to researchers. Unlike aboveground variability, below-ground variability is more difficult to discern without a strategic soil sampling pattern. While blocking is commonly used to control environmental variation, this strategy is rarely informed by data about current soil conditions. Fifty georeferenced sites were located in a 0.65 ha area prior to establishing a long-term field experiment. Soil organic matter (OM) and weed seed bank populations were analyzed at each site and the spatial structure was modeled with semivariograms and interpolated with kriging to map the surface. These maps were used to formulate three strategic blocking patterns and the efficiency of each pattern was compared to a completely randomized design and a west to east model not informed by soil variability. Compared to OM, weeds were more variable across the landscape and had a shorter range of autocorrelation, and models to increase blocking efficiency resulted in less increase in power. Weeds and OM were not correlated, so no model examined improved power equally for both parameters. Compared to the west to east blocking pattern, the final blocking pattern chosen resulted in a 7-fold increase in power for OM and a 36% increase in power for weeds. PMID:26247056
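
    A minimal sketch of the first analysis step described above (estimating an empirical semivariogram from georeferenced soil samples) might look as follows; the coordinates and organic-matter values are synthetic, and a real workflow would go on to fit a variogram model and krige the surface before designing blocks.

      import numpy as np

      rng = np.random.default_rng(42)

      # Synthetic georeferenced organic-matter (OM) samples.
      xy = rng.uniform(0, 80, size=(50, 2))                 # 50 sites in an 80 m x 80 m area
      om = 2.0 + 0.02 * xy[:, 0] + rng.normal(0, 0.2, 50)   # weak spatial trend + noise

      def empirical_semivariogram(xy, z, bins):
          """Average 0.5*(z_i - z_j)^2 over point pairs falling in each distance bin."""
          d = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1))
          sq = 0.5 * (z[:, None] - z[None, :]) ** 2
          iu = np.triu_indices(len(z), k=1)
          d, sq = d[iu], sq[iu]
          return np.array([sq[(d >= lo) & (d < hi)].mean()
                           for lo, hi in zip(bins[:-1], bins[1:])])

      bins = np.arange(0, 60, 10)
      print(empirical_semivariogram(xy, om, bins).round(3))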

  5. Investigating the Relationship between Effective Communication of Spouse and Father-Child Relationship (Test Pattern Causes to Education Parents)

    ERIC Educational Resources Information Center

    Ataeifar, Robabeh; Amiri, Sholeh; Ali Nadi, Mohammad

    2016-01-01

    This research tested a model of the father-child relationship with the effective communication of spouses as a mediator, investigating attachment style, personality traits, communication skills, and spouses' sexual satisfaction. Based on this, 260 people (fathers and children) were selected through a share-based (quota) random sampling method. Participants were…

  6. Educational Differences in U.S. Adult Mortality: A Cohort Perspective

    ERIC Educational Resources Information Center

    Masters, Ryan K.; Hummer, Robert A.; Powers, Daniel A.

    2012-01-01

    We use hierarchical cross-classified random-effects models to simultaneously measure age, period, and cohort patterns of mortality risk between 1986 and 2006 for non-Hispanic white and non-Hispanic black men and women with less than a high school education, a high school education, and more than a high school education. We examine all-cause…

  7. Detailed modeling of local anisotropy and transverse Ku interplay regarding hysteresis loop in FeCuNbSiB nanocrystalline ribbons

    NASA Astrophysics Data System (ADS)

    Geoffroy, Olivier; Boust, Nicolas; Chazal, Hervé; Flury, Sébastien; Roudet, James

    2018-04-01

    This article focuses on modeling the hysteresis loop featured by Fe-Cu-Nb-Si-B nanocrystalline alloys with transverse induced anisotropy. The magnetization reversal process of a magnetic correlated volume (CV), characterized by the induced anisotropy Ku and by a deviation of the local easy magnetization direction representing the effect of a local incoherent anisotropy Ki, is analyzed, taking magnetostatic interactions into account. Solving the equations shows that considering a single typical kind of CV cannot account for both the domain pattern and the coercivity. The classical majority CVs obeying the random anisotropy model explain the domain pattern well, but a second, minority kind of CV, mingled with the classical ones and featuring a magnitude of Ki comparable to Ku, must be considered to account for coercivity. The model has been successfully compared with experimental data.

  8. Characteristics of the transmission of autoregressive sub-patterns in financial time series

    NASA Astrophysics Data System (ADS)

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong

    2014-09-01

    There are many types of autoregressive patterns in financial time series, and they form a transmission process. Here, we define autoregressive patterns quantitatively through an econometrical regression model. We present a computational algorithm that sets the autoregressive patterns as nodes and transmissions between patterns as edges, and then converts the transmission process of autoregressive patterns in a time series into a network. We utilised daily Shanghai (securities) composite index time series to study the transmission characteristics of autoregressive patterns. We found statistically significant evidence that the financial market is not random and that there are similar characteristics between parts and whole time series. A few types of autoregressive sub-patterns and transmission patterns drive the oscillations of the financial market. A clustering effect on fluctuations appears in the transmission process, and certain non-major autoregressive sub-patterns have high media capabilities in the financial time series. Different stock indexes exhibit similar characteristics in the transmission of fluctuation information. This work not only proposes a distinctive perspective for analysing financial time series but also provides important information for investors.
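
    The core construction in this abstract (autoregressive patterns as nodes, transmissions between successive patterns as directed edges) can be sketched in a few lines; the sequence of pattern labels below is a made-up stand-in for patterns estimated from rolling regressions on an index series.

      import networkx as nx

      # Hypothetical sequence of autoregressive pattern labels for successive windows.
      patterns = ["A1", "A2", "A2", "A3", "A1", "A2", "A3", "A3", "A1"]

      G = nx.DiGraph()
      for src, dst in zip(patterns[:-1], patterns[1:]):
          if G.has_edge(src, dst):
              G[src][dst]["weight"] += 1        # repeated transmissions accumulate weight
          else:
              G.add_edge(src, dst, weight=1)

      # Weighted degree highlights sub-patterns that drive the transmission process;
      # betweenness centrality indicates intermediary ("media") capability.
      print(dict(G.degree(weight="weight")))
      print(nx.betweenness_centrality(G))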

  9. Characteristics of the transmission of autoregressive sub-patterns in financial time series

    PubMed Central

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong

    2014-01-01

    There are many types of autoregressive patterns in financial time series, and they form a transmission process. Here, we define autoregressive patterns quantitatively through an econometrical regression model. We present a computational algorithm that sets the autoregressive patterns as nodes and transmissions between patterns as edges, and then converts the transmission process of autoregressive patterns in a time series into a network. We utilised daily Shanghai (securities) composite index time series to study the transmission characteristics of autoregressive patterns. We found statistically significant evidence that the financial market is not random and that there are similar characteristics between parts and whole time series. A few types of autoregressive sub-patterns and transmission patterns drive the oscillations of the financial market. A clustering effect on fluctuations appears in the transmission process, and certain non-major autoregressive sub-patterns have high media capabilities in the financial time series. Different stock indexes exhibit similar characteristics in the transmission of fluctuation information. This work not only proposes a distinctive perspective for analysing financial time series but also provides important information for investors. PMID:25189200

  10. Landau instability and mobility edges of the interacting one-dimensional Bose gas in weak random potentials

    NASA Astrophysics Data System (ADS)

    Cherny, Alexander Yu; Caux, Jean-Sébastien; Brand, Joachim

    2018-01-01

    We study the frictional force exerted on the trapped, interacting 1D Bose gas under the influence of a moving random potential. Specifically we consider weak potentials generated by optical speckle patterns with finite correlation length. We show that repulsive interactions between bosons lead to a superfluid response and suppression of frictional force, which can inhibit the onset of Anderson localisation. We perform a quantitative analysis of the Landau instability based on the dynamic structure factor of the integrable Lieb-Liniger model and demonstrate the existence of effective mobility edges.

  11. Synchronised firing patterns in a random network of adaptive exponential integrate-and-fire neuron model.

    PubMed

    Borges, F S; Protachevicz, P R; Lameu, E L; Bonetti, R C; Iarosz, K C; Caldas, I L; Baptista, M S; Batista, A M

    2017-06-01

    We have studied neuronal synchronisation in a random network of adaptive exponential integrate-and-fire neurons. We study how spiking or bursting synchronous behaviour appears as a function of the coupling strength and the probability of connections, by constructing parameter spaces that identify these synchronous behaviours from measurements of the inter-spike interval and the calculation of the order parameter. Moreover, we verify the robustness of synchronisation by applying an external perturbation to each neuron. The simulations show that bursting synchronisation is more robust than spike synchronisation. Copyright © 2017 Elsevier Ltd. All rights reserved.
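
    As a rough illustration of the model class studied here, the sketch below integrates a small random network of adaptive exponential integrate-and-fire neurons with Euler steps. The neuron parameters follow the standard Brette-Gerstner formulation, but the constant drive, coupling scheme, and network size are simplified placeholders rather than the authors' setup.

      import numpy as np

      rng = np.random.default_rng(7)

      N, p_conn = 100, 0.1                              # neurons and connection probability
      A = (rng.random((N, N)) < p_conn).astype(float)   # A[i, j] = 1 means synapse j -> i
      np.fill_diagonal(A, 0.0)

      # Adaptive exponential integrate-and-fire parameters (Brette & Gerstner 2005);
      # external drive I_ext and coupling J are placeholders.
      C, gL, EL = 281.0, 30.0, -70.6                    # pF, nS, mV
      VT, DT, Vr, Vpeak = -50.4, 2.0, -70.6, 0.0        # mV
      tau_w, a, b = 144.0, 4.0, 80.5                    # ms, nS, pA
      I_ext, J = 800.0, 0.5                             # pA, mV jump per presynaptic spike
      dt, T = 0.1, 500.0                                # ms

      V = EL + rng.normal(0.0, 2.0, N)
      w = np.zeros(N)
      spike_count = np.zeros(N, dtype=int)

      for step in range(int(T / dt)):
          exp_term = gL * DT * np.exp(np.minimum((V - VT) / DT, 30.0))  # clipped for safety
          V += dt * (-gL * (V - EL) + exp_term - w + I_ext) / C
          w += dt * (a * (V - EL) - w) / tau_w
          fired = V >= Vpeak
          if fired.any():
              V += J * (A @ fired)                      # instantaneous excitatory coupling
              V[fired] = Vr
              w[fired] += b
              spike_count += fired

      print("mean firing rate (Hz):", round(spike_count.mean() / (T / 1000.0), 1))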

  12. Probabilistic arithmetic automata and their applications.

    PubMed

    Marschall, Tobias; Herms, Inke; Kaltenbach, Hans-Michael; Rahmann, Sven

    2012-01-01

    We present a comprehensive review on probabilistic arithmetic automata (PAAs), a general model to describe chains of operations whose operands depend on chance, along with two algorithms to numerically compute the distribution of the results of such probabilistic calculations. PAAs provide a unifying framework to approach many problems arising in computational biology and elsewhere. We present five different applications, namely 1) pattern matching statistics on random texts, including the computation of the distribution of occurrence counts, waiting times, and clump sizes under hidden Markov background models; 2) exact analysis of window-based pattern matching algorithms; 3) sensitivity of filtration seeds used to detect candidate sequence alignments; 4) length and mass statistics of peptide fragments resulting from enzymatic cleavage reactions; and 5) read length statistics of 454 and IonTorrent sequencing reads. The diversity of these applications indicates the flexibility and unifying character of the presented framework. While the construction of a PAA depends on the particular application, we single out a frequently applicable construction method: We introduce deterministic arithmetic automata (DAAs) to model deterministic calculations on sequences, and demonstrate how to construct a PAA from a given DAA and a finite-memory random text model. This procedure is used for all five discussed applications and greatly simplifies the construction of PAAs. Implementations are available as part of the MoSDi package. Its application programming interface facilitates the rapid development of new applications based on the PAA framework.

  13. Sign language spotting with a threshold model based on conditional random fields.

    PubMed

    Yang, Hee-Deok; Sclaroff, Stan; Lee, Seong-Whan

    2009-07-01

    Sign language spotting is the task of detecting and recognizing signs in a signed utterance, in a set vocabulary. The difficulty of sign language spotting is that instances of signs vary in both motion and appearance. Moreover, signs appear within a continuous gesture stream, interspersed with transitional movements between signs in a vocabulary and nonsign patterns (which include out-of-vocabulary signs, epentheses, and other movements that do not correspond to signs). In this paper, a novel method for designing threshold models in a conditional random field (CRF) model is proposed which performs an adaptive threshold for distinguishing between signs in a vocabulary and nonsign patterns. A short-sign detector, a hand appearance-based sign verification method, and a subsign reasoning method are included to further improve sign language spotting accuracy. Experiments demonstrate that our system can spot signs from continuous data with an 87.0 percent spotting rate and can recognize signs from isolated data with a 93.5 percent recognition rate versus 73.5 percent and 85.4 percent, respectively, for CRFs without a threshold model, short-sign detection, subsign reasoning, and hand appearance-based sign verification. Our system can also achieve a 15.0 percent sign error rate (SER) from continuous data and a 6.4 percent SER from isolated data versus 76.2 percent and 14.5 percent, respectively, for conventional CRFs.

  14. Determination of the influence of dispersion pattern of pesticide-resistant individuals on the reliability of resistance estimates using different sampling plans.

    PubMed

    Shah, R; Worner, S P; Chapman, R B

    2012-10-01

    Pesticide resistance monitoring includes resistance detection and subsequent documentation/ measurement. Resistance detection would require at least one (≥1) resistant individual(s) to be present in a sample to initiate management strategies. Resistance documentation, on the other hand, would attempt to get an estimate of the entire population (≥90%) of the resistant individuals. A computer simulation model was used to compare the efficiency of simple random and systematic sampling plans to detect resistant individuals and to document their frequencies when the resistant individuals were randomly or patchily distributed. A patchy dispersion pattern of resistant individuals influenced the sampling efficiency of systematic sampling plans while the efficiency of random sampling was independent of such patchiness. When resistant individuals were randomly distributed, sample sizes required to detect at least one resistant individual (resistance detection) with a probability of 0.95 were 300 (1%) and 50 (10% and 20%); whereas, when resistant individuals were patchily distributed, using systematic sampling, sample sizes required for such detection were 6000 (1%), 600 (10%) and 300 (20%). Sample sizes of 900 and 400 would be required to detect ≥90% of resistant individuals (resistance documentation) with a probability of 0.95 when resistant individuals were randomly dispersed and present at a frequency of 10% and 20%, respectively; whereas, when resistant individuals were patchily distributed, using systematic sampling, a sample size of 3000 and 1500, respectively, was necessary. Small sample sizes either underestimated or overestimated the resistance frequency. A simple random sampling plan is, therefore, recommended for insecticide resistance detection and subsequent documentation.
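
    The comparison described above can be reproduced in outline with a small Monte Carlo experiment: resistant individuals are placed either at random or in a contiguous patch, and random versus systematic samples are drawn repeatedly to estimate the probability of detecting at least one resistant individual. The population size, patch geometry, sample size, and replication below are illustrative and are not intended to reproduce the paper's specific results.

      import numpy as np

      rng = np.random.default_rng(3)
      N, freq = 10_000, 0.01                      # population size and resistance frequency

      def population(patchy):
          """Return a 0/1 resistance vector over a linear arrangement of N individuals."""
          status = np.zeros(N, dtype=int)
          n_res = int(N * freq)
          if patchy:
              start = rng.integers(0, N - n_res)  # one contiguous patch (extreme patchiness)
              status[start:start + n_res] = 1
          else:
              status[rng.choice(N, n_res, replace=False)] = 1
          return status

      def detect_prob(sample_size, patchy, systematic, reps=1000):
          """Probability that a sample contains at least one resistant individual."""
          hits = 0
          for _ in range(reps):
              pop = population(patchy)
              if systematic:
                  step = N // sample_size
                  idx = np.arange(rng.integers(0, step), N, step)[:sample_size]
              else:
                  idx = rng.choice(N, sample_size, replace=False)
              hits += pop[idx].any()
          return hits / reps

      for patchy in (False, True):
          for systematic in (False, True):
              print(patchy, systematic, detect_prob(300, patchy, systematic))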

  15. Bet-hedging as a complex interaction among developmental instability, environmental heterogeneity, dispersal, and life-history strategy.

    PubMed

    Scheiner, Samuel M

    2014-02-01

    One potential evolutionary response to environmental heterogeneity is the production of randomly variable offspring through developmental instability, a type of bet-hedging. I used an individual-based, genetically explicit model to examine the evolution of developmental instability. The model considered both temporal and spatial heterogeneity alone and in combination, the effect of migration pattern (stepping stone vs. island), and life-history strategy. I confirmed that temporal heterogeneity alone requires a threshold amount of variation to select for a substantial amount of developmental instability. For spatial heterogeneity only, the response to selection on developmental instability depended on the life-history strategy and the form and pattern of dispersal with the greatest response for island migration when selection occurred before dispersal. Both spatial and temporal variation alone select for similar amounts of instability, but in combination resulted in substantially more instability than either alone. Local adaptation traded off against bet-hedging, but not in a simple linear fashion. I found higher-order interactions between life-history patterns, dispersal rates, dispersal patterns, and environmental heterogeneity that are not explainable by simple intuition. We need additional modeling efforts to understand these interactions and empirical tests that explicitly account for all of these factors.

  16. MOAB: a spatially explicit, individual-based expert system for creating animal foraging models

    USGS Publications Warehouse

    Carter, J.; Finn, John T.

    1999-01-01

    We describe the development, structure, and corroboration process of a simulation model of animal behavior (MOAB). MOAB can create spatially explicit, individual-based animal foraging models. Users can create or replicate heterogeneous landscape patterns, and place resources and individual animals of a given species on that landscape to simultaneously simulate the foraging behavior of multiple species. The heuristic rules for animal behavior are maintained in a user-modifiable expert system. MOAB can be used to explore hypotheses concerning the influence of landscape pattern on animal movement and foraging behavior. A red fox (Vulpes vulpes L.) foraging and nest predation model was created to test MOAB's capabilities. Foxes were simulated for 30-day periods using both expert system and random movement rules. Home range size, territory formation, and general movement patterns were compared with other available simulation studies. A striped skunk (Mephitis mephitis L.) model also was developed. The expert system model proved superior to the stochastic model with respect to territory formation, general movement patterns, and home range size.

  17. Reactive and anticipatory looking in 6-month-old infants during a visual expectation paradigm.

    PubMed

    Quan, Jeffry; Bureau, Jean-François; Abdul Malik, Adam B; Wong, Johnny; Rifkin-Graboi, Anne

    2017-10-01

    This article presents data from 278 six-month-old infants who completed a visual expectation paradigm in which audiovisual stimuli were first presented randomly (random phase), and then in a spatial pattern (pattern phase). Infants' eye gaze behaviour was tracked with a 60 Hz Tobii eye-tracker in order to measure two types of looking behaviour: reactive looking (i.e., latency to shift eye gaze in reaction to the appearance of stimuli) and anticipatory looking (i.e., percentage of time spent looking at the location where the next stimulus is about to appear during the inter-stimulus interval). Data pertaining to missing data and task order effects are presented. Further analyses show that infants' reactive looking was faster in the pattern phase, compared to the random phase, and their anticipatory looking increased from random to pattern phases. Within the pattern phase, infants' reactive looking showed a quadratic trend, with reactive looking time latencies peaking in the middle portion of the phase. Similarly, within the pattern phase, infants' anticipatory looking also showed a quadratic trend, with anticipatory looking peaking during the middle portion of the phase.

  18. Modeling and predicting urban growth pattern of the Tokyo metropolitan area based on cellular automata

    NASA Astrophysics Data System (ADS)

    Zhao, Yaolong; Zhao, Junsan; Murayama, Yuji

    2008-10-01

    The period of high economic growth in Japan which began in the latter half of the 1950s led to a massive migration of population from rural regions to the Tokyo metropolitan area. This phenomenon brought about rapid urban growth and urban structure changes in this area. The purpose of this study is to establish a constrained CA (Cellular Automata) model with GIS (Geographical Information Systems) to simulate the urban growth pattern in the Tokyo metropolitan area, towards predicting urban form and landscape for the near future. Urban land-use is classified into multiple categories for interpreting the effect of interaction among land-use categories in the spatial process of urban growth. Driving factors of the urban growth pattern, such as land condition, railway network, land-use zoning, random perturbation, and neighborhood interaction, are explored and integrated into this model. These driving factors are calibrated based on exploratory spatial data analysis (ESDA), spatial statistics, logistic regression, and a "trial and error" approach. The simulation is assessed at both macro and micro classification levels in three ways: a visual approach; fractal dimension; and spatial metrics. Results indicate that this model provides an effective prototype to simulate and predict the urban growth pattern of the Tokyo metropolitan area.
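
    A stripped-down constrained cellular automaton of the kind described here can be sketched as follows: each non-urban cell converts with a probability that combines a suitability score, the number of urban neighbours, and a random perturbation. The grid, suitability surface, and coefficient are invented for illustration and are far simpler than the calibrated multi-category model of the Tokyo study.

      import numpy as np

      rng = np.random.default_rng(0)
      n, steps = 100, 20

      urban = np.zeros((n, n), dtype=bool)
      urban[n // 2 - 2:n // 2 + 2, n // 2 - 2:n // 2 + 2] = True   # seed city centre
      suitability = rng.random((n, n))                              # stand-in for land condition etc.

      def urban_neighbours(u):
          """Count urban cells in the 3x3 Moore neighbourhood (toroidal boundary)."""
          c = np.zeros_like(u, dtype=int)
          for dx in (-1, 0, 1):
              for dy in (-1, 0, 1):
                  if dx or dy:
                      c += np.roll(np.roll(u, dx, axis=0), dy, axis=1)
          return c

      for _ in range(steps):
          nb = urban_neighbours(urban)
          p = 0.05 * nb * suitability          # neighbourhood effect weighted by suitability
          urban |= rng.random((n, n)) < p      # stochastic conversion (random perturbation)

      print("urban cells:", int(urban.sum()))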

  19. NDRAM: nonlinear dynamic recurrent associative memory for learning bipolar and nonbipolar correlated patterns.

    PubMed

    Chartier, Sylvain; Proulx, Robert

    2005-11-01

    This paper presents a new unsupervised attractor neural network, which, contrary to optimal linear associative memory models, is able to develop nonbipolar attractors as well as bipolar attractors. Moreover, the model is able to develop less spurious attractors and has a better recall performance under random noise than any other Hopfield type neural network. Those performances are obtained by a simple Hebbian/anti-Hebbian online learning rule that directly incorporates feedback from a specific nonlinear transmission rule. Several computer simulations show the model's distinguishing properties.

  20. A new modelling approach for zooplankton behaviour

    NASA Astrophysics Data System (ADS)

    Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.

    We have developed a new simulation technique to model zooplankton behaviour. The approach utilizes neither conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, similar to that of BEER [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non-"intelligent" models (random walk and correlated walk models) as well as with observed behaviour in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to those of live copepods.
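
    For context, the two non-"intelligent" baselines mentioned above are easy to state in code: a simple (uncorrelated) random walk draws a fresh heading each step, while a correlated random walk perturbs the previous heading by a small turning angle. The step length and turning-angle spread below are arbitrary.

      import numpy as np

      rng = np.random.default_rng(5)

      def walk(n_steps=1000, step=1.0, kappa=None):
          """kappa=None: uncorrelated random walk; otherwise a correlated random walk
          whose turning angles are drawn from a normal with s.d. kappa (radians)."""
          heading = rng.uniform(0, 2 * np.pi)
          pos = np.zeros((n_steps + 1, 2))
          for i in range(n_steps):
              heading = rng.uniform(0, 2 * np.pi) if kappa is None else heading + rng.normal(0, kappa)
              pos[i + 1] = pos[i] + step * np.array([np.cos(heading), np.sin(heading)])
          return pos

      for label, kappa in (("random walk", None), ("correlated walk", 0.3)):
          net = np.linalg.norm(walk(kappa=kappa)[-1])
          print(label, "net displacement:", round(net, 1))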

  1. Longitudinal data analysis with non-ignorable missing data.

    PubMed

    Tseng, Chi-hong; Elashoff, Robert; Li, Ning; Li, Gang

    2016-02-01

    A common problem in longitudinal data analysis is missing data. Two types of missing patterns are generally considered in the statistical literature: monotone and non-monotone missing data. Non-monotone missing data occur when study participants intermittently miss scheduled visits, while monotone missing data can result from discontinued participation, loss to follow-up, and mortality. Although many novel statistical approaches have been developed to handle missing data in recent years, few methods are available to provide inferences that handle both types of missing data simultaneously. In this article, a latent random effects model is proposed to analyze longitudinal outcomes with both monotone and non-monotone missingness in the context of missing not at random. Another significant contribution of this article is to propose a new computational algorithm for latent random effects models. To reduce the computational burden of the high-dimensional integration problem in latent random effects models, we develop a new computational algorithm that uses an adaptive quadrature approach in conjunction with a Taylor series approximation of the likelihood function to simplify the E-step computation in the expectation-maximization algorithm. A simulation study is performed, and data from the scleroderma lung study are used to demonstrate the effectiveness of this method. © The Author(s) 2012.

  2. Spatial pattern of Baccharis platypoda shrub as determined by sex and life stages

    NASA Astrophysics Data System (ADS)

    Fonseca, Darliana da Costa; de Oliveira, Marcio Leles Romarco; Pereira, Israel Marinho; Gonzaga, Anne Priscila Dias; de Moura, Cristiane Coelho; Machado, Evandro Luiz Mendonça

    2017-11-01

    Spatial patterns of dioecious species can be determined by their nutritional requirements and intraspecific competition, apart from being a response to environmental heterogeneity. The aim of the study was to evaluate the spatial pattern of populations of a dioecious shrub with respect to the sex and reproductive stage of individuals. Sampling was carried out in three areas located in the meridional portion of Serra do Espinhaço, wherein individuals of the studied species were mapped. The spatial pattern was determined through O-ring analysis and Ripley's K-function, and the distribution of individuals' frequencies was verified through a chi-square (χ²) test. Populations in two areas showed an aggregate spatial pattern tending towards random or uniform according to the observed scale. Male and female adults presented an aggregate pattern at smaller scales, while random and uniform patterns were verified above 20 m for individuals of both sexes in areas A2 and A3. Young individuals presented an aggregate pattern in all areas and spatial independence in relation to adult individuals, especially female plants. Interactions between individuals of the two sexes showed spatial independence. Baccharis platypoda showed characteristics in accordance with the spatial distribution of savannic and dioecious species, whereas the population was aggregated tending towards random at greater spatial scales. Young individuals showed an aggregated pattern at different scales compared to adults, without positive association between them. Female and male adult individuals presented similar characteristics, confirming that adult individuals at greater scales are randomly distributed despite their distinct preferences for environments with moisture variation.
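
    The Ripley's K analysis referred to above compares an observed point pattern with complete spatial randomness, for which K(r) = pi * r^2. A bare-bones estimator for points in a rectangular plot, ignoring edge correction and using synthetic coordinates, is sketched below.

      import numpy as np

      rng = np.random.default_rng(11)
      side = 100.0                                     # square plot, 100 m x 100 m
      pts = rng.uniform(0, side, size=(200, 2))        # synthetic shrub coordinates

      def ripley_k(pts, r, area):
          """Naive Ripley's K (no edge correction): area/n^2 * #(ordered pairs within r)."""
          n = len(pts)
          d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
          np.fill_diagonal(d, np.inf)
          return area / n**2 * (d <= r).sum()

      for r in (5, 10, 20):
          k_obs = ripley_k(pts, r, side**2)
          print(f"r={r:>2}  K_obs={k_obs:7.1f}  K_CSR={np.pi * r**2:7.1f}")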

  3. The informational architecture of the cell.

    PubMed

    Walker, Sara Imari; Kim, Hyunju; Davies, Paul C W

    2016-03-13

    We compare the informational architecture of biological and random networks to identify informational features that may distinguish biological networks from random. The study presented here focuses on the Boolean network model for regulation of the cell cycle of the fission yeast Schizosaccharomyces pombe. We compare calculated values of local and global information measures for the fission yeast cell cycle to the same measures as applied to two different classes of random networks: Erdös-Rényi and scale-free. We report patterns in local information processing and storage that do indeed distinguish biological from random, associated with control nodes that regulate the function of the fission yeast cell-cycle network. Conversely, we find that integrated information, which serves as a global measure of 'emergent' information processing, does not differ from random for the case presented. We discuss implications for our understanding of the informational architecture of the fission yeast cell-cycle network in particular, and more generally for illuminating any distinctive physics that may be operative in life. © 2016 The Author(s).

  4. Neural-Network Simulator

    NASA Technical Reports Server (NTRS)

    Mitchell, Paul H.

    1991-01-01

    F77NNS (FORTRAN 77 Neural Network Simulator) computer program simulates popular back-error-propagation neural network. Designed to take advantage of vectorization when used on computers having this capability, also used on any computer equipped with ANSI-77 FORTRAN Compiler. Problems involving matching of patterns or mathematical modeling of systems fit class of problems F77NNS designed to solve. Program has restart capability so neural network solved in stages suitable to user's resources and desires. Enables user to customize patterns of connections between layers of network. Size of neural network F77NNS applied to limited only by amount of random-access memory available to user.

  5. Simulation of Long-Term Landscape-Level Fuel Treatment Effects on Large Wildfires

    Treesearch

    Mark A. Finney; Rob C. Seli; Charles W. McHugh; Alan A. Ager; Berni Bahro; James K. Agee

    2006-01-01

    A simulation system was developed to explore how fuel treatments placed in random and optimal spatial patterns affect the growth and behavior of large fires when implemented at different rates over the course of five decades. The system consists of a forest/fuel dynamics simulation module (FVS), logic for deriving fuel model dynamics from FVS output, a spatial fuel...

  6. Incorporating landscape fuel treatment modeling into the Forest Vegetation Simulator

    Treesearch

    Robert C. Seli; Alan A. Ager; Nicholas L. Crookston; Mark A. Finney; Berni Bahro; James K. Agee; Charles W. McHugh

    2008-01-01

    A simulation system was developed to explore how fuel treatments placed in random and optimal spatial patterns affect the growth and behavior of large fires when implemented at different rates over the course of five decades. The system consists of several command line programs linked together: (1) FVS with the Parallel Processor (PPE) and Fire and Fuels (FFE)...

  7. Quark model and strange baryon production in heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Bialas, A.

    1998-12-01

    It is pointed out that the recent data on strange baryon and antibaryon production in Pb-Pb collisions at 159 GeV/c agree well with the hypothesis of an intermediate state of quasi-free and randomly distributed constituent quarks and antiquarks. Also the S-S data are consistent with this hypothesis. The p-Pb data follow a different pattern.

  8. In vivo growth of 60 non-screening detected lung cancers: a computed tomography study.

    PubMed

    Mets, Onno M; Chung, Kaman; Zanen, Pieter; Scholten, Ernst T; Veldhuis, Wouter B; van Ginneken, Bram; Prokop, Mathias; Schaefer-Prokop, Cornelia M; de Jong, Pim A

    2018-04-01

    Current pulmonary nodule management guidelines are based on nodule volume doubling time, which assumes exponential growth behaviour. However, this is a theory that has never been validated in vivo in the routine-care target population. This study evaluates growth patterns of untreated solid and subsolid lung cancers of various histologies in a non-screening setting. Growth behaviour of pathology-proven lung cancers from two academic centres that were imaged at least three times before diagnosis (n=60) was analysed using dedicated software. Random-intercept random-slope mixed-models analysis was applied to test which growth pattern most accurately described lung cancer growth. Individual growth curves were plotted per pathology subgroup and nodule type. We confirmed that growth in both subsolid and solid lung cancers is best explained by an exponential model. However, subsolid lesions generally progress slower than solid ones. Baseline lesion volume was not related to growth, indicating that smaller lesions do not grow slower compared to larger ones. By showing that lung cancer conforms to exponential growth we provide the first experimental basis in the routine-care setting for the assumption made in volume doubling time analysis. Copyright ©ERS 2018.
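
    For reference, the volume doubling time underlying the nodule-management guidelines mentioned here follows directly from the exponential-growth assumption; a small helper illustrating the standard calculation (not the authors' software) is shown below, with made-up example volumes.

      import math

      def volume_doubling_time(v1, v2, dt_days):
          """Volume doubling time under exponential growth: VDT = dt * ln(2) / ln(v2 / v1)."""
          return dt_days * math.log(2) / math.log(v2 / v1)

      # Example: a nodule growing from 200 mm^3 to 320 mm^3 over 90 days (illustrative numbers).
      print(round(volume_doubling_time(200.0, 320.0, 90), 1), "days")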

  9. A hierarchical model for regional analysis of population change using Christmas Bird Count data, with application to the American Black Duck

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.; Niven, D.K.

    2006-01-01

    Analysis of Christmas Bird Count (CBC) data is complicated by the need to account for variation in effort on counts and to provide summaries over large geographic regions. We describe a hierarchical model for analysis of population change using CBC data that addresses these needs. The effect of effort is modeled parametrically, with parameter values varying among strata as identically distributed random effects. Year and site effects are modeled hierarchically, accommodating large regional variation in number of samples and precision of estimates. The resulting model is complex, but a Bayesian analysis can be conducted using Markov chain Monte Carlo techniques. We analyze CBC data for American Black Ducks (Anas rubripes), a species of considerable management interest that has historically been monitored using winter surveys. Over the interval 1966-2003, Black Duck populations showed distinct regional patterns of population change. The patterns shown by CBC data are similar to those shown by the Midwinter Waterfowl Inventory for the United States.

  10. Encryption method based on pseudo random spatial light modulation for single-fibre data transmission

    NASA Astrophysics Data System (ADS)

    Kowalski, Marcin; Zyczkowski, Marek

    2017-11-01

    Optical cryptosystems can provide encryption and sometimes compression simultaneously. They are increasingly attractive for securing information, especially for image encryption. Our studies have shown that optical cryptosystems can be used to encrypt optical data transmission. We propose and study a new method for securing fibre data communication. The paper presents a method for optical encryption of data transmitted with a single optical fibre. The encryption process relies on pseudo-random spatial light modulation, a combination of two encryption keys, and the compressed sensing framework. A linear combination of light pulses with pseudo-random patterns provides the required encryption performance. We propose an architecture to transmit the encrypted data through the optical fibre. The paper describes the method, presents the theoretical analysis, the design of a physical model, and the results of an experiment.

  11. Predicting network modules of cell cycle regulators using relative protein abundance statistics.

    PubMed

    Oguz, Cihan; Watson, Layne T; Baumann, William T; Tyson, John J

    2017-02-28

    Parameter estimation in systems biology is typically done by enforcing experimental observations through an objective function as the parameter space of a model is explored by numerical simulations. Past studies have shown that one usually finds a set of "feasible" parameter vectors that fit the available experimental data equally well, and that these alternative vectors can make different predictions under novel experimental conditions. In this study, we characterize the feasible region of a complex model of the budding yeast cell cycle under a large set of discrete experimental constraints in order to test whether the statistical features of relative protein abundance predictions are influenced by the topology of the cell cycle regulatory network. Using differential evolution, we generate an ensemble of feasible parameter vectors that reproduce the phenotypes (viable or inviable) of wild-type yeast cells and 110 mutant strains. We use this ensemble to predict the phenotypes of 129 mutant strains for which experimental data is not available. We identify 86 novel mutants that are predicted to be viable and then rank the cell cycle proteins in terms of their contributions to cumulative variability of relative protein abundance predictions. Proteins involved in "regulation of cell size" and "regulation of G1/S transition" contribute most to predictive variability, whereas proteins involved in "positive regulation of transcription involved in exit from mitosis," "mitotic spindle assembly checkpoint" and "negative regulation of cyclin-dependent protein kinase by cyclin degradation" contribute the least. These results suggest that the statistics of these predictions may be generating patterns specific to individual network modules (START, S/G2/M, and EXIT). To test this hypothesis, we develop random forest models for predicting the network modules of cell cycle regulators using relative abundance statistics as model inputs. Predictive performance is assessed by the areas under receiver operating characteristics curves (AUC). Our models generate an AUC range of 0.83-0.87 as opposed to randomized models with AUC values around 0.50. By using differential evolution and random forest modeling, we show that the model prediction statistics generate distinct network module-specific patterns within the cell cycle network.
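
    The final modelling step described above (random forests predicting a regulator's network module from relative-abundance statistics, scored by AUC against a randomized control) can be outlined with scikit-learn as follows; the feature matrix and labels are random placeholders rather than the ensemble statistics generated in the study.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      # Placeholder data: abundance statistics (features) and a binary module label.
      X = rng.normal(size=(200, 6))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

      # A label-shuffled control should give an AUC near 0.5.
      clf_null = RandomForestClassifier(n_estimators=200, random_state=0).fit(
          X_tr, rng.permutation(y_tr))
      auc_null = roc_auc_score(y_te, clf_null.predict_proba(X_te)[:, 1])
      print(round(auc, 2), round(auc_null, 2))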

  12. The frequency hopping pattern design for random hopping frequency signal based on stationary phase principle

    NASA Astrophysics Data System (ADS)

    Liao, Zhikun; Lu, Dawei; Hu, Jiemin; Zhang, Jun

    2018-04-01

    For the random hopping frequency signal, the modulated frequencies are randomly distributed over given bandwidth. The randomness of modulated frequency not only improves the electronic counter countermeasure capability for radar systems, but also determines its performance of range compression. In this paper, the range ambiguity function of RHF signal is firstly derived. Then, a design method of frequency hopping pattern based on stationary phase principle to improve the peak to side-lobe ratio is proposed. Finally, the simulated experiments show a good effectiveness of the presented design method.

  13. Comparability of item quality indices from sparse data matrices with random and non-random missing data patterns.

    PubMed

    Wolfe, Edward W; McGill, Michael T

    2011-01-01

    This article summarizes a simulation study of the performance of five item quality indicators (the weighted and unweighted versions of the mean square and standardized mean square fit indices and the point-measure correlation) under conditions of relatively high and low amounts of missing data under both random and conditional patterns of missing data for testing contexts such as those encountered in operational administrations of a computerized adaptive certification or licensure examination. The results suggest that weighted fit indices, particularly the standardized mean square index, and the point-measure correlation provide the most consistent information between random and conditional missing data patterns and that these indices perform more comparably for items near the passing score than for items with extreme difficulty values.

  14. Dynamical Signatures of Living Systems

    NASA Technical Reports Server (NTRS)

    Zak, M.

    1999-01-01

    One of the main challenges in modeling living systems is to distinguish a random walk of physical origin (for instance, Brownian motion) from one of biological origin, and that distinction constitutes the starting point of the proposed approach. As conjectured, the biological random walk must be nonlinear. Indeed, any stochastic Markov process can be described by a linear Fokker-Planck equation (or its discretized version), and only that type of process has been observed in the inanimate world. However, all such processes always converge to a stable (ergodic or periodic) state, i.e., to states of lower complexity and high entropy. At the same time, the evolution of living systems is directed toward a higher level of complexity, if complexity is associated with the number of structural variations. The simplest way to mimic such a tendency is to incorporate a nonlinearity into the random walk; then the probability evolution attains the features of a nonlinear diffusion equation: the formation and dissipation of shock waves initiated by small shallow wave disturbances. As a result, the evolution never "dies": it produces new, different configurations which are accompanied by an increase or decrease of entropy (the decrease takes place during the formation of shock waves, the increase during their dissipation). In other words, the evolution can be directed "against the second law of thermodynamics" by forming patterns outside of equilibrium in the probability space. Due to that, a species is not locked into a certain pattern of behavior: it can still perform a variety of motions, and only the statistics of these motions is constrained by this pattern. It should be emphasized that such a "twist" is based upon the concept of reflection, i.e., the existence of the self-image (adopted from psychology). The model consists of a generator of stochastic processes which represents the motor dynamics in the form of nonlinear random walks, and a simulator of the nonlinear version of the diffusion equation which represents the mental dynamics. It has been demonstrated that coupled mental-motor dynamics can simulate emerging self-organization, prey-predator games, collaboration and competition, "collective brain," etc.

  15. Large-Eddy Atmosphere-Land-Surface Modelling over Heterogeneous Surfaces: Model Development and Comparison with Measurements

    NASA Astrophysics Data System (ADS)

    Shao, Yaping; Liu, Shaofeng; Schween, Jan H.; Crewell, Susanne

    2013-08-01

    A model is developed for the large-eddy simulation (LES) of heterogeneous atmosphere and land-surface processes. This couples a LES model with a land-surface scheme. New developments are made to the land-surface scheme to ensure the adequate representation of atmosphere-land-surface transfers on the large-eddy scale. These include: (1) a multi-layer canopy scheme; (2) a method for flux estimates consistent with the large-eddy subgrid closure; and (3) an appropriate soil-layer configuration. The model is then applied to a heterogeneous region with 60-m horizontal resolution and the results are compared with ground-based and airborne measurements. The simulated sensible and latent heat fluxes are found to agree well with the eddy-correlation measurements. Good agreement is also found in the modelled and observed net radiation, ground heat flux, soil temperature and moisture. Based on the model results, we study the patterns of the sensible and latent heat fluxes, how such patterns come into existence, and how large eddies propagate and destroy land-surface signals in the atmosphere. Near the surface, the flux and land-use patterns are found to be closely correlated. In the lower boundary layer, small eddies bearing land-surface signals organize and develop into larger eddies, which carry the signals to considerably higher levels. As a result, the instantaneous flux patterns appear to be unrelated to the land-use patterns, but on average, the correlation between them is significant and persistent up to about 650 m. For a given land-surface type, the scatter of the fluxes amounts to several hundred W m^-2, due to (1) large-eddy randomness; (2) rapid large-eddy and surface feedback; and (3) local advection related to surface heterogeneity.

  16. Analyzing crash frequency in freeway tunnels: A correlated random parameters approach.

    PubMed

    Hou, Qinzhong; Tarko, Andrew P; Meng, Xianghai

    2018-02-01

    The majority of past road safety studies focused on open road segments while only a few focused on tunnels. Moreover, the past tunnel studies produced some inconsistent results about the safety effects of the traffic patterns, the tunnel design, and the pavement conditions. The effects of these conditions therefore remain unknown, especially for freeway tunnels in China. The study presented in this paper investigated the safety effects of these various factors utilizing a four-year period (2009-2012) of data as well as three models: 1) a random effects negative binomial model (RENB), 2) an uncorrelated random parameters negative binomial model (URPNB), and 3) a correlated random parameters negative binomial model (CRPNB). Of these three, the results showed that the CRPNB model provided better goodness-of-fit and offered more insights into the factors that contribute to tunnel safety. The CRPNB was not only able to allocate the part of the otherwise unobserved heterogeneity to the individual model parameters but also was able to estimate the cross-correlations between these parameters. Furthermore, the study results showed that traffic volume, tunnel length, proportion of heavy trucks, curvature, and pavement rutting were associated with higher frequencies of traffic crashes, while the distance to the tunnel wall, distance to the adjacent tunnel, distress ratio, International Roughness Index (IRI), and friction coefficient were associated with lower crash frequencies. In addition, the effects of the heterogeneity of the proportion of heavy trucks, the curvature, the rutting depth, and the friction coefficient were identified and their inter-correlations were analyzed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Social aggregation in pea aphids: experiment and random walk modeling.

    PubMed

    Nilsen, Christa; Paige, John; Warner, Olivia; Mayhew, Benjamin; Sutley, Ryan; Lam, Matthew; Bernoff, Andrew J; Topaz, Chad M

    2013-01-01

    From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control.

  18. Simple rules govern the patterns of Arctic sea ice melt ponds

    NASA Astrophysics Data System (ADS)

    Popovic, P.; Cael, B. B.; Abbot, D. S.; Silber, M.

    2017-12-01

    Climate change, amplified in the far north, has led to a rapid sea ice decline in recent years. Melt ponds that form on the surface of Arctic sea ice in the summer significantly lower the ice albedo, thereby accelerating ice melt. Pond geometry controls the details of this crucial feedback. However, currently it is unclear how to model this intricate geometry. Here we show that an extremely simple model of voids surrounding randomly sized and placed overlapping circles reproduces the essential features of pond patterns. The model has only two parameters, circle scale and the fraction of the surface covered by voids, and we choose them by comparing the model to pond images. Using these parameters the void model robustly reproduces all of the examined pond features such as the ponds' area-perimeter relationship and the area-abundance relationship over nearly 7 orders of magnitude. By analyzing airborne photographs of sea ice, we also find that the typical pond scale is surprisingly constant across different years, regions, and ice types. These results demonstrate that the geometric and abundance patterns of Arctic melt ponds can be simply described, and can guide future models of Arctic melt ponds to improve predictions of how sea ice will respond to Arctic warming.
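
    The two-parameter void model described above is simple enough to sketch directly: circles with random sizes and centres are stamped onto a grid, and the ponds are the voids left uncovered. The circle-size distribution and target void fraction below are arbitrary choices, not the values fitted to the pond imagery.

      import numpy as np

      rng = np.random.default_rng(2)

      n, scale, target_void = 512, 12.0, 0.3        # grid size, circle scale, void (pond) fraction
      yy, xx = np.mgrid[0:n, 0:n]
      covered = np.zeros((n, n), dtype=bool)

      # Stamp randomly sized, randomly placed circles until the uncovered fraction
      # drops to the target void fraction.
      while (~covered).mean() > target_void:
          cx, cy = rng.uniform(0, n, 2)
          r = rng.exponential(scale)
          covered |= (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2

      ponds = ~covered                              # voids between circles play the role of ponds
      print("pond (void) fraction:", round(ponds.mean(), 3))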

  19. Network analysis of named entity co-occurrences in written texts

    NASA Astrophysics Data System (ADS)

    Amancio, Diego Raphael

    2016-06-01

    The use of methods borrowed from statistics and physics to analyze written texts has allowed the discovery of unprecedented patterns of human behavior and cognition by establishing links between model features and language structure. While current models have been useful to unveil patterns via analysis of syntactical and semantical networks, only a few works have probed the relevance of investigating the structure arising from the relationship between relevant entities such as characters, locations and organizations. In this study, we represent entities appearing in the same context as a co-occurrence network, where links are established according to a null model based on random, shuffled texts. Computational simulations performed in novels revealed that the proposed model displays interesting topological features, such as the small world feature, characterized by high values of clustering coefficient. The effectiveness of our model was verified in a practical pattern recognition task in real networks. When compared with traditional word adjacency networks, our model displayed optimized results in identifying unknown references in texts. Because the proposed representation plays a complementary role in characterizing unstructured documents via topological analysis of named entities, we believe that it could be useful to improve the characterization of written texts (and related systems), especially if combined with traditional approaches based on statistical and deeper paradigms.

  20. Repurposing Blu-ray movie discs as quasi-random nanoimprinting templates for photon management

    NASA Astrophysics Data System (ADS)

    Smith, Alexander J.; Wang, Chen; Guo, Dongning; Sun, Cheng; Huang, Jiaxing

    2014-11-01

    Quasi-random nanostructures have attracted significant interest for photon management purposes. To optimize such patterns, typically very expensive fabrication processes are needed to create the pre-designed, subwavelength nanostructures. While quasi-random photonic nanostructures are abundant in nature (for example, in structural coloration), interestingly, they also exist in Blu-ray movie discs, an already mass-produced consumer product. Here we uncover that Blu-ray disc patterns are surprisingly well suited for light-trapping applications. While the algorithms in the Blu-ray industrial standard were developed with the intention of optimizing data compression and error tolerance, they have also created a quasi-random arrangement of islands and pits on the final media discs that is nearly optimized for photon management over the solar spectrum, regardless of the information stored on the discs. As a proof of concept, imprinting polymer solar cells with the Blu-ray patterns indeed increases their efficiencies. Simulation suggests that Blu-ray patterns could be broadly applied to solar cells made of other materials.

  1. Statistical characterization of spatial patterns of rainfall cells in extratropical cyclones

    NASA Astrophysics Data System (ADS)

    Bacchi, Baldassare; Ranzi, Roberto; Borga, Marco

    1996-11-01

    The assumption of a particular type of distribution of rainfall cells in space is needed for the formulation of several space-time rainfall models. In this study, weather radar-derived rain rate maps are employed to evaluate different types of spatial organization of rainfall cells in storms through the use of distance functions and second-moment measures. In particular the spatial point patterns of the local maxima of rainfall intensity are compared to a completely spatially random (CSR) point process by applying an objective distance measure. For all the analyzed radar maps the CSR assumption is rejected, indicating that at the resolution of the observation considered, rainfall cells are clustered. Therefore a theoretical framework for evaluating and fitting alternative models to the CSR is needed. This paper shows how the "reduced second-moment measure" of the point pattern can be employed to estimate the parameters of a Neyman-Scott model and to evaluate the degree of adequacy to the experimental data. Some limitations of this theoretical framework, and also its effectiveness, in comparison to the use of scaling functions, are discussed.
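
    As a concrete example of the clustered alternative to CSR discussed here, a Neyman-Scott (Poisson cluster) process can be simulated by scattering Poisson-distributed daughter points around Poisson-distributed parent centres; the intensities and cluster spread below are illustrative values, not parameters fitted to radar data.

      import numpy as np

      rng = np.random.default_rng(8)

      def neyman_scott(side, lam_parent, mean_offspring, sigma):
          """Parents ~ Poisson(lam_parent * area); each parent gets Poisson(mean_offspring)
          daughters displaced by isotropic Gaussian offsets of s.d. sigma."""
          n_parents = rng.poisson(lam_parent * side ** 2)
          parents = rng.uniform(0, side, size=(n_parents, 2))
          pts = []
          for p in parents:
              k = rng.poisson(mean_offspring)
              pts.append(p + rng.normal(0, sigma, size=(k, 2)))
          pts = np.vstack(pts) if pts else np.empty((0, 2))
          return pts[(pts >= 0).all(1) & (pts <= side).all(1)]   # keep points inside the window

      cells = neyman_scott(side=100.0, lam_parent=0.002, mean_offspring=8, sigma=2.0)
      print("simulated rain-cell maxima:", len(cells))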

  2. A simple model for research interest evolution patterns

    NASA Astrophysics Data System (ADS)

    Jia, Tao; Wang, Dashun; Szymanski, Boleslaw

    Sir Isaac Newton supposedly remarked that in his scientific career he was like ``...a boy playing on the sea-shore ...finding a smoother pebble or a prettier shell than ordinary''. His remarkable modesty and famous understatement motivate us to seek regularities in how scientists shift their research focus as their careers develop. Indeed, despite intensive investigations of how microscopic factors, such as incentives and risks, influence a scientist's choice of research agenda, little is known about the macroscopic patterns in the research interest changes undertaken by individual scientists throughout their careers. Here we make use of over 14,000 authors' publication records in physics. By quantifying statistical characteristics of the interest evolution, we model scientific research as a random walk, which reproduces patterns in individuals' careers observed empirically. Despite the myriad of factors that shape and influence individual choices of research subjects, we identify regularities in this dynamical process that are well captured by a simple statistical model. The results advance our understanding of scientists' behaviors during their careers and open up avenues for future studies in the science of science.

  3. Lévy-like behaviour in deterministic models of intelligent agents exploring heterogeneous environments

    NASA Astrophysics Data System (ADS)

    Boyer, D.; Miramontes, O.; Larralde, H.

    2009-10-01

    Many studies on animal and human movement patterns report the existence of scaling laws and power-law distributions. Whereas a number of random walk models have been proposed to explain observations, in many situations individuals actually rely on mental maps to explore strongly heterogeneous environments. In this work, we study a model of a deterministic walker, visiting sites randomly distributed on the plane and with varying weight or attractiveness. At each step, the walker minimizes a function that depends on the distance to the next unvisited target (cost) and on the weight of that target (gain). If the target weight distribution is a power law, p(k) ~ k-β, in some range of the exponent β, the foraging medium induces movements that are similar to Lévy flights and are characterized by non-trivial exponents. We explore variations of the choice rule in order to test the robustness of the model and argue that the addition of noise has a limited impact on the dynamics in strongly disordered media.
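
    The rule sketched below is a hedged reading of the model described above, with an assumed cost function (distance divided by weight) and an assumed power-law sampler; the paper's exact choice rule and parameters may differ. The walker repeatedly visits the unvisited site that minimizes the cost, and the recorded step lengths can then be examined for heavy tails.

```python
import numpy as np

rng = np.random.default_rng(42)

n_sites, beta = 2000, 2.0
sites = rng.uniform(0, 1, (n_sites, 2))
# Power-law weights p(k) ~ k^(-beta) on [1, inf), by inverse-transform sampling.
weights = (1.0 - rng.uniform(0, 1, n_sites)) ** (-1.0 / (beta - 1.0))

pos = np.array([0.5, 0.5])
unvisited = np.ones(n_sites, dtype=bool)
step_lengths = []

for _ in range(500):
    d = np.linalg.norm(sites - pos, axis=1)
    # Assumed cost rule: trade distance (cost) against target weight (gain).
    cost = np.where(unvisited, d / weights, np.inf)
    nxt = int(np.argmin(cost))
    step_lengths.append(d[nxt])
    pos = sites[nxt]
    unvisited[nxt] = False

step_lengths = np.array(step_lengths)
print("mean step:", round(step_lengths.mean(), 4),
      " max step:", round(step_lengths.max(), 4))
print("fraction of steps longer than 5x the median:",
      round(float((step_lengths > 5 * np.median(step_lengths)).mean()), 3))
```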

  4. COMPUTERIZED EXPERT SYSTEM FOR EVALUATION OF AUTOMATED VISUAL FIELDS FROM THE ISCHEMIC OPTIC NEUROPATHY DECOMPRESSION TRIAL: METHODS, BASELINE FIELDS, AND SIX-MONTH LONGITUDINAL FOLLOW-UP

    PubMed Central

    Feldon, Steven E

    2004-01-01

    ABSTRACT Purpose To validate a computerized expert system evaluating visual fields in a prospective clinical trial, the Ischemic Optic Neuropathy Decompression Trial (IONDT). To identify the pattern and within-pattern severity of field defects for study eyes at baseline and 6-month follow-up. Design Humphrey visual field (HVF) change was used as the outcome measure for a prospective, randomized, multi-center trial to test the null hypothesis that optic nerve sheath decompression was ineffective in treating nonarteritic anterior ischemic optic neuropathy and to ascertain the natural history of the disease. Methods An expert panel established criteria for the type and severity of visual field defects. Using these criteria, a rule-based computerized expert system interpreted HVF from baseline and 6-month visits for patients randomized to surgery or careful follow-up and for patients who were not randomized. Results A computerized expert system was devised and validated. The system was then used to analyze HVFs. The pattern of defects found at baseline for patients randomized to surgery did not differ from that of patients randomized to careful follow-up. The most common pattern of defect was a superior and inferior arcuate with central scotoma for randomized eyes (19.2%) and a superior and inferior arcuate for nonrandomized eyes (30.6%). Field patterns at 6 months and baseline were not different. For randomized study eyes, the superior altitudinal defects improved (P = .03), as did the inferior altitudinal defects (P = .01). For nonrandomized study eyes, only the inferior altitudinal defects improved (P = .02). No treatment effect was noted. Conclusions A novel rule-based expert system successfully interpreted visual field defects at baseline of eyes enrolled in the IONDT. PMID:15747764

  5. Forging patterns and making waves from biology to geology: a commentary on Turing (1952) 'The chemical basis of morphogenesis'.

    PubMed

    Ball, Philip

    2015-04-19

    Alan Turing was neither a biologist nor a chemist, and yet the paper he published in 1952, 'The chemical basis of morphogenesis', on the spontaneous formation of patterns in systems undergoing reaction and diffusion of their ingredients has had a substantial impact on both fields, as well as in other areas as disparate as geomorphology and criminology. Motivated by the question of how a spherical embryo becomes a decidedly non-spherical organism such as a human being, Turing devised a mathematical model that explained how random fluctuations can drive the emergence of pattern and structure from initial uniformity. The spontaneous appearance of pattern and form in a system far away from its equilibrium state occurs in many types of natural process, and in some artificial ones too. It is often driven by very general mechanisms, of which Turing's model supplies one of the most versatile. For that reason, these patterns show striking similarities in systems that seem superficially to share nothing in common, such as the stripes of sand ripples and of pigmentation on a zebra skin. New examples of 'Turing patterns' in biology and beyond are still being discovered today. This commentary was written to celebrate the 350th anniversary of the journal Philosophical Transactions of the Royal Society.
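
    The mechanism Turing described, pattern emerging from small random fluctuations around a uniform state, can be illustrated with any two-species reaction-diffusion system; the sketch below uses the Gray-Scott model rather than Turing's original equations, with parameter values chosen only because they are known to produce spots.

```python
import numpy as np

def laplacian(Z):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

n, Du, Dv, F, k, dt = 128, 0.16, 0.08, 0.035, 0.060, 1.0
rng = np.random.default_rng(0)

# Near-uniform initial state: a perturbed square seed plus small random noise.
U = np.ones((n, n)) + 0.02 * rng.standard_normal((n, n))
V = 0.02 * rng.standard_normal((n, n))
U[56:72, 56:72], V[56:72, 56:72] = 0.50, 0.25

for _ in range(5000):
    uvv = U * V * V
    U += dt * (Du * laplacian(U) - uvv + F * (1 - U))
    V += dt * (Dv * laplacian(V) + uvv - (F + k) * V)

# A crude indication that spatial structure has grown out of near-uniformity.
print("std of V field after integration:", round(float(V.std()), 4))
```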

  6. Rationalizing spatial exploration patterns of wild animals and humans through a temporal discounting framework.

    PubMed

    Namboodiri, Vijay Mohan K; Levy, Joshua M; Mihalas, Stefan; Sims, David W; Hussain Shuler, Marshall G

    2016-08-02

    Understanding the exploration patterns of foragers in the wild provides fundamental insight into animal behavior. Recent experimental evidence has demonstrated that path lengths (distances between consecutive turns) taken by foragers are well fitted by a power law distribution. Numerous theoretical contributions have posited that "Lévy random walks"-which can produce power law path length distributions-are optimal for memoryless agents searching a sparse reward landscape. It is unclear, however, whether such a strategy is efficient for cognitively complex agents, from wild animals to humans. Here, we developed a model to explain the emergence of apparent power law path length distributions in animals that can learn about their environments. In our model, the agent's goal during search is to build an internal model of the distribution of rewards in space that takes into account the cost of time to reach distant locations (i.e., temporally discounting rewards). For an agent with such a goal, we find that an optimal model of exploration in fact produces hyperbolic path lengths, which are well approximated by power laws. We then provide support for our model by showing that humans in a laboratory spatial exploration task search space systematically and modify their search patterns under a cost of time. In addition, we find that path length distributions in a large dataset obtained from free-ranging marine vertebrates are well described by our hyperbolic model. Thus, we provide a general theoretical framework for understanding spatial exploration patterns of cognitively complex foragers.

  7. Eruption patterns of the chilean volcanoes Villarrica, Llaima, and Tupungatito

    NASA Astrophysics Data System (ADS)

    Muñoz, Miguel

    1983-09-01

    The historical eruption records of three Chilean volcanoes have been subjected to many statistical tests, and none have been found to differ significantly from random, or Poissonian, behaviour. The statistical analysis shows rough conformity with the descriptions determined from the eruption rate functions. It is possible that a constant eruption rate describes the activity of Villarrica; Llaima and Tupungatito present complex eruption rate patterns that appear, however, to have no statistical significance. Questions related to loading and extinction processes and to the existence of shallow secondary magma chambers to which magma is supplied from a deeper system are also addressed. The analysis and the computation of the serial correlation coefficients indicate that the three series may be regarded as stationary renewal processes. None of the test statistics indicates rejection of the Poisson hypothesis at a level less than 5%, but the coefficient of variation for the eruption series at Llaima is significantly different from the value expected for a Poisson process. Also, the estimates of the normalized spectrum of the counting process for the three series suggest a departure from the random model, but the deviations are not found to be significant at the 5% level. Kolmogorov-Smirnov and chi-squared test statistics, applied directly to ascertain the probability P with which the random Poisson model fits the data, indicate that there is significant agreement in the case of Villarrica (P = 0.59) and Tupungatito (P = 0.3). Even though the P-value for Llaima is a marginally significant 0.1 (which is equivalent to rejecting the Poisson model at the 90% confidence level), the series suggests that nonrandom features are possibly present in the eruptive activity of this volcano.
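
    The style of check described above can be sketched as follows, using synthetic repose intervals rather than the actual eruption catalogues: for a stationary Poisson process the intervals are exponential with coefficient of variation near 1 and negligible serial correlation, which can be probed with a Kolmogorov-Smirnov test (here against an exponential with the estimated mean, ignoring the bias that parameter estimation introduces).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic repose intervals (years) standing in for an eruption record.
intervals = rng.exponential(scale=3.0, size=60)

cv = intervals.std(ddof=1) / intervals.mean()            # ~1 for a Poisson process
lag1 = np.corrcoef(intervals[:-1], intervals[1:])[0, 1]  # ~0 for a renewal process

# KS test against an exponential with the estimated mean repose time.
ks_stat, p_value = stats.kstest(intervals, "expon", args=(0, intervals.mean()))

print(f"coefficient of variation: {cv:.2f}")
print(f"lag-1 serial correlation: {lag1:.2f}")
print(f"KS statistic: {ks_stat:.3f}, p-value: {p_value:.3f}")
```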

  8. Frustration in Condensed Matter and Protein Folding

    NASA Astrophysics Data System (ADS)

    Li, Z.; Tanner, S.; Conroy, B.; Owens, F.; Tran, M. M.; Boekema, C.

    2014-03-01

    By means of computer modeling, we are studying frustration in condensed matter and protein folding, including the influence of temperature and Thomson-figure formation. Frustration is due to competing interactions in a disordered state. The key issue is how the particles interact to reach the lowest frustration. The relaxation for frustration is mostly a power function (randomly assigned pattern) or an exponential function (regular patterns like Thomson figures). For the atomic Thomson model, frustration is predicted to decrease with the formation of Thomson figures at zero kelvin. We attempt to apply our frustration modeling to protein folding and dynamics. We investigate the homogeneous protein frustration that would cause the speed of the protein folding to increase. Increase of protein frustration (where frustration and hydrophobicity interplay with protein folding) may lead to a protein mutation. Research is supported by WiSE@SJSU and AFC San Jose.

  9. Functional Nonlinear Mixed Effects Models For Longitudinal Image Data

    PubMed Central

    Luo, Xinchao; Zhu, Lixing; Kong, Linglong; Zhu, Hongtu

    2015-01-01

    Motivated by studying large-scale longitudinal image data, we propose a novel functional nonlinear mixed effects modeling (FNMEM) framework to model the nonlinear spatial-temporal growth patterns of brain structure and function and their association with covariates of interest (e.g., time or diagnostic status). Our FNMEM explicitly quantifies a random nonlinear association map of individual trajectories. We develop an efficient estimation method to estimate the nonlinear growth function and the covariance operator of the spatial-temporal process. We propose a global test and a simultaneous confidence band for some specific growth patterns. We conduct Monte Carlo simulation to examine the finite-sample performance of the proposed procedures. We apply FNMEM to investigate the spatial-temporal dynamics of white-matter fiber skeletons in a national database for autism research. Our FNMEM may provide a valuable tool for charting the developmental trajectories of various neuropsychiatric and neurodegenerative disorders. PMID:26213453

  10. Complex patterns of abnormal heartbeats

    NASA Technical Reports Server (NTRS)

    Schulte-Frohlinde, Verena; Ashkenazy, Yosef; Goldberger, Ary L.; Ivanov, Plamen Ch; Costa, Madalena; Morley-Davies, Adrian; Stanley, H. Eugene; Glass, Leon

    2002-01-01

    Individuals having frequent abnormal heartbeats interspersed with normal heartbeats may be at an increased risk of sudden cardiac death. However, mechanistic understanding of such cardiac arrhythmias is limited. We present a visual and qualitative method to display statistical properties of abnormal heartbeats. We introduce dynamical "heartprints" which reveal characteristic patterns in long clinical records encompassing approximately 10^5 heartbeats and may provide information about underlying mechanisms. We test if these dynamics can be reproduced by model simulations in which abnormal heartbeats are generated (i) randomly, (ii) at a fixed time interval following a preceding normal heartbeat, or (iii) by an independent oscillator that may or may not interact with the normal heartbeat. We compare the results of these three models and test their limitations to comprehensively simulate the statistical features of selected clinical records. This work introduces methods that can be used to test mathematical models of arrhythmogenesis and to develop a new understanding of underlying electrophysiologic mechanisms of cardiac arrhythmia.

  11. Representation of Time-Relevant Common Data Elements in the Cancer Data Standards Repository: Statistical Evaluation of an Ontological Approach

    PubMed Central

    Chen, Henry W; Du, Jingcheng; Song, Hsing-Yi; Liu, Xiangyu; Jiang, Guoqian

    2018-01-01

    Background Today, there is an increasing need to centralize and standardize electronic health data within clinical research as the volume of data continues to balloon. Domain-specific common data elements (CDEs) are emerging as a standard approach to clinical research data capturing and reporting. Recent efforts to standardize clinical study CDEs have been of great benefit in facilitating data integration and data sharing. The importance of the temporal dimension of clinical research studies has been well recognized; however, very few studies have focused on the formal representation of temporal constraints and temporal relationships within clinical research data in the biomedical research community. In particular, temporal information can be extremely powerful to enable high-quality cancer research. Objective The objective of the study was to develop and evaluate an ontological approach to represent the temporal aspects of cancer study CDEs. Methods We used CDEs recorded in the National Cancer Institute (NCI) Cancer Data Standards Repository (caDSR) and created a CDE parser to extract time-relevant CDEs from the caDSR. Using the Web Ontology Language (OWL)–based Time Event Ontology (TEO), we manually derived representative patterns to semantically model the temporal components of the CDEs using an observing set of randomly selected time-related CDEs (n=600) to create a set of TEO ontological representation patterns. In evaluating TEO’s ability to represent the temporal components of the CDEs, this set of representation patterns was tested against two test sets of randomly selected time-related CDEs (n=425). Results It was found that 94.2% (801/850) of the CDEs in the test sets could be represented by the TEO representation patterns. Conclusions In conclusion, TEO is a good ontological model for representing the temporal components of the CDEs recorded in caDSR. Our representative model can harness the Semantic Web reasoning and inferencing functionalities and present a means for temporal CDEs to be machine-readable, streamlining meaningful searches. PMID:29472179

  12. Fuzzy Markov random fields versus chains for multispectral image segmentation.

    PubMed

    Salzenstein, Fabien; Collet, Christophe

    2006-11-01

    This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.

  13. Mum, why do you keep on growing? Impacts of environmental variability on optimal growth and reproduction allocation strategies of annual plants.

    PubMed

    De Lara, Michel

    2006-05-01

    In their 1990 paper "Optimal reproductive efforts and the timing of reproduction of annual plants in randomly varying environments", Amir and Cohen considered stochastic environments consisting of i.i.d. sequences in an optimal allocation discrete-time model. We suppose here that the sequence of environmental factors is more generally described by a Markov chain. Moreover, we discuss the connection between the time interval of the discrete-time dynamic model and the ability of the plant to rebuild completely its vegetative body (from reserves). We formulate a stochastic optimization problem covering the so-called linear and logarithmic fitness (corresponding to variation within and between years), which yields optimal strategies. For "linear maximizers", we analyse how optimal strategies depend upon the environmental variability type: constant, random stationary, random i.i.d., random monotonous. We provide general patterns in terms of targets and thresholds, including both determinate and indeterminate growth. We also provide a partial result on the comparison between "linear maximizers" and "log maximizers". Numerical simulations are provided, giving a hint of the effect of different mathematical assumptions.

  14. Light scattering microscopy measurements of single nuclei compared with GPU-accelerated FDTD simulations

    NASA Astrophysics Data System (ADS)

    Stark, Julian; Rothe, Thomas; Kieß, Steffen; Simon, Sven; Kienle, Alwin

    2016-04-01

    Single cell nuclei were investigated using two-dimensional angularly and spectrally resolved scattering microscopy. We show that even for a qualitative comparison of experimental and theoretical data, the standard Mie model of a homogeneous sphere proves to be insufficient. Hence, an accelerated finite-difference time-domain method using a graphics processor unit and domain decomposition was implemented to analyze the experimental scattering patterns. The measured cell nuclei were modeled as single spheres with randomly distributed spherical inclusions of different size and refractive index representing the nucleoli and clumps of chromatin. Taking into account the nuclear heterogeneity of a large number of inclusions yields a qualitative agreement between experimental and theoretical spectra and illustrates the impact of the nuclear micro- and nanostructure on the scattering patterns.

  15. Synchronous behaviour in network model based on human cortico-cortical connections.

    PubMed

    Protachevicz, Paulo Ricardo; Borges, Rafael Ribaski; Reis, Adriane da Silva; Borges, Fernando da Silva; Iarosz, Kelly Cristina; Caldas, Ibere Luiz; Lameu, Ewandson Luiz; Macau, Elbert Einstein Nehrer; Viana, Ricardo Luiz; Sokolov, Igor M; Ferrari, Fabiano A S; Kurths, Jürgen; Batista, Antonio Marcos

    2018-06-22

    We consider a network topology according to the cortico-cortical connection network of the human brain, where each cortical area is composed of a random network of adaptive exponential integrate-and-fire neurons. Depending on the parameters, this neuron model can exhibit spike or burst patterns. As a diagnostic tool to identify spike and burst patterns we utilise the coefficient of variation of the neuronal inter-spike interval. In our neuronal network, we verify the existence of spike and burst synchronisation in different cortical areas. Our simulations show that the network arrangement, i.e., its rich-club organisation, plays an important role in the transition of the areas from desynchronous to synchronous behaviours. © 2018 Institute of Physics and Engineering in Medicine.
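
    The diagnostic mentioned above, the coefficient of variation (CV) of the inter-spike interval, is simple to compute; the sketch below contrasts a synthetic regular-spiking train with a synthetic bursting train, with the classification threshold chosen only for illustration.

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of inter-spike intervals."""
    isi = np.diff(np.sort(spike_times))
    return isi.std(ddof=1) / isi.mean()

rng = np.random.default_rng(7)

# Regular spiking: nearly periodic spikes with small jitter.
regular = np.cumsum(10.0 + 0.5 * rng.standard_normal(200))

# Bursting: short intra-burst intervals separated by long quiescent gaps.
bursting = np.cumsum(np.where(rng.uniform(size=200) < 0.8, 2.0, 60.0))

for name, train in [("regular", regular), ("bursting", bursting)]:
    cv = isi_cv(train)
    label = "burst-like" if cv > 0.5 else "spike-like"   # illustrative threshold
    print(f"{name:9s}  CV = {cv:.2f}  ->  {label}")
```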

  16. Light scattering microscopy measurements of single nuclei compared with GPU-accelerated FDTD simulations.

    PubMed

    Stark, Julian; Rothe, Thomas; Kieß, Steffen; Simon, Sven; Kienle, Alwin

    2016-04-07

    Single cell nuclei were investigated using two-dimensional angularly and spectrally resolved scattering microscopy. We show that even for a qualitative comparison of experimental and theoretical data, the standard Mie model of a homogeneous sphere proves to be insufficient. Hence, an accelerated finite-difference time-domain method using a graphics processor unit and domain decomposition was implemented to analyze the experimental scattering patterns. The measured cell nuclei were modeled as single spheres with randomly distributed spherical inclusions of different size and refractive index representing the nucleoli and clumps of chromatin. Taking into account the nuclear heterogeneity of a large number of inclusions yields a qualitative agreement between experimental and theoretical spectra and illustrates the impact of the nuclear micro- and nanostructure on the scattering patterns.

  17. Investigating species co-occurrence patterns when species are detected imperfectly

    USGS Publications Warehouse

    MacKenzie, D.I.; Bailey, L.L.; Nichols, J.D.

    2004-01-01

    1. Over the last 30 years there has been a great deal of interest in investigating patterns of species co-occurrence across a number of locations, which has led to the development of numerous methods to determine whether there is evidence that a particular pattern may not have occurred by random chance. 2. A key aspect that seems to have been largely overlooked is the possibility that species may not always be detected at a location when present, which leads to 'false absences' in a species presence/absence matrix that may cause incorrect inferences to be made about co-occurrence patterns. Furthermore, many of the published methods for investigating patterns of species co-occurrence do not account for potential differences in the site characteristics that may partially (at least) explain non-random patterns (e.g. due to species having similar/different habitat preferences). 3. Here we present a statistical method for modelling co-occurrence patterns between species while accounting for imperfect detection and site characteristics. This method requires that multiple presence/absence surveys for the species be conducted over a reasonably short period of time at most sites. The method yields unbiased estimates of probabilities of occurrence, and is practical when the number of species is small (< 4). 4. To illustrate the method we consider data collected on two terrestrial salamander species, Plethodon jordani and members of the Plethodon glutinosus complex, collected in the Great Smoky Mountains National Park, USA. We find no evidence that the species do not occur independently at sites once site elevation has been allowed for, although we find some evidence of a statistical interaction between species in terms of detectability that we suggest may be due to changes in relative abundances.

  18. Quasi-random array imaging collimator

    DOEpatents

    Fenimore, E.E.

    1980-08-20

    A hexagonally shaped quasi-random no-two-holes-touching imaging collimator. The quasi-random array imaging collimator eliminates contamination from small angle off-axis rays by using a no-two-holes-touching pattern which simultaneously provides for a self-supporting array increasing throughput by elimination of a substrate. The present invention also provides maximum throughput using hexagonally shaped holes in a hexagonal lattice pattern for diffraction limited applications. Mosaicking is also disclosed for reducing fabrication effort.

  19. Random array grid collimator

    DOEpatents

    Fenimore, E.E.

    1980-08-22

    A hexagonally shaped quasi-random no-two-holes-touching grid collimator. The quasi-random array grid collimator eliminates contamination from small angle off-axis rays by using a no-two-holes-touching pattern which simultaneously provides for a self-supporting array, increasing throughput by elimination of a substrate. The present invention also provides maximum throughput using hexagonally shaped holes in a hexagonal lattice pattern for diffraction limited applications. Mosaicking is also disclosed for reducing fabrication effort.

  20. Spatial pattern enhances ecosystem functioning in an African savanna.

    PubMed

    Pringle, Robert M; Doak, Daniel F; Brody, Alison K; Jocqué, Rudy; Palmer, Todd M

    2010-05-25

    The finding that regular spatial patterns can emerge in nature from local interactions between organisms has prompted a search for the ecological importance of these patterns. Theoretical models have predicted that patterning may have positive emergent effects on fundamental ecosystem functions, such as productivity. We provide empirical support for this prediction. In dryland ecosystems, termite mounds are often hotspots of plant growth (primary productivity). Using detailed observations and manipulative experiments in an African savanna, we show that these mounds are also local hotspots of animal abundance (secondary and tertiary productivity): insect abundance and biomass decreased with distance from the nearest termite mound, as did the abundance, biomass, and reproductive output of insect-eating predators. Null-model analyses indicated that at the landscape scale, the evenly spaced distribution of termite mounds produced dramatically greater abundance, biomass, and reproductive output of consumers across trophic levels than would be obtained in landscapes with randomly distributed mounds. These emergent properties of spatial pattern arose because the average distance from an arbitrarily chosen point to the nearest feature in a landscape is minimized in landscapes where the features are hyper-dispersed (i.e., uniformly spaced). This suggests that the linkage between patterning and ecosystem functioning will be common to systems spanning the range of human management intensities. The centrality of spatial pattern to system-wide biomass accumulation underscores the need to conserve pattern-generating organisms and mechanisms, and to incorporate landscape patterning in efforts to restore degraded habitats and maximize the delivery of ecosystem services.

  1. Robust image retrieval from noisy inputs using lattice associative memories

    NASA Astrophysics Data System (ADS)

    Urcid, Gonzalo; Nieves-V., José Angel; García-A., Anmi; Valdiviezo-N., Juan Carlos

    2009-02-01

    Lattice associative memories, also known as morphological associative memories, are fully connected feedforward neural networks with no hidden layers, whose computation at each node is carried out with lattice algebra operations. These networks are a relatively recent development in the field of associative memories that has proven to be an alternative way to work with sets of pattern pairs, for which the storage and retrieval stages use minimax algebra. Different associative memory models have been proposed to cope with the problem of pattern recall under input degradations, such as occlusions or random noise, where input patterns can be composed of binary or real valued entries. In comparison to these and other artificial neural network memories, lattice algebra based memories display better performance for storage and recall capability; however, the computational techniques devised to achieve that purpose require additional processing or provide partial success when inputs are presented with undetermined noise levels. Robust retrieval capability of an associative memory model is usually expressed by a high percentage of perfect recalls from non-perfect input. The procedure described here uses noise masking defined by simple lattice operations together with appropriate metrics, such as the normalized mean squared error or signal to noise ratio, to boost the recall performance of either the min or max lattice auto-associative memories. Using a single lattice associative memory, illustrative examples are given that demonstrate the enhanced retrieval of correct gray-scale image associations from inputs corrupted with random noise.
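
    A minimal sketch of a lattice (min) auto-associative memory of the kind discussed above, without the noise-masking step the paper adds: storage and recall use only min/max and addition. The patterns and the noise model are illustrative, and the exact robustness guarantees depend on conditions not reproduced here.

```python
import numpy as np

def build_min_memory(X):
    """Min memory: W_ij = min_k (x_i^k - x_j^k) over stored patterns x^k (rows of X)."""
    return np.min(X[:, :, None] - X[:, None, :], axis=0)

def recall(W, x):
    """Max-plus product: y_i = max_j (W_ij + x_j)."""
    return np.max(W + x[None, :], axis=1)

rng = np.random.default_rng(3)

# A few gray-scale-like patterns (vectors of intensities in [0, 255]).
X = rng.integers(0, 256, size=(4, 64)).astype(float)
W = build_min_memory(X)

# Perfect recall of an undistorted stored pattern.
print("exact recall ok:", np.allclose(recall(W, X[0]), X[0]))

# Erosive (only-decreasing) noise tends to be pushed back toward the stored pattern.
noisy = X[0] - rng.integers(0, 40, size=64)
print("max recall deviation under erosive noise:",
      round(float(np.abs(recall(W, noisy) - X[0]).max()), 2))
```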

  2. Non-random co-occurrence of native and exotic plant species in Mediterranean grasslands

    NASA Astrophysics Data System (ADS)

    de Miguel, José M.; Martín-Forés, Irene; Acosta-Gallo, Belén; del Pozo, Alejandro; Ovalle, Carlos; Sánchez-Jardón, Laura; Castro, Isabel; Casado, Miguel A.

    2016-11-01

    Invasion by exotic species in Mediterranean grasslands has determined assembly patterns of native and introduced species, knowledge of which provides information on the ecological processes underlying these novel communities. We considered grasslands from Spain and Chile. For each country we considered the whole grassland community and we split species into two subsets: in Chile, species were classified as natives or colonizers (i.e. exotics); in Spain, species were classified as exclusives (present in Spain but not in Chile) or colonizers (Spanish natives and exotics into Chile). We used null models and co-occurrence indices calculated in each country for each one of 15 sites distributed along a precipitation gradient and subjected to similar silvopastoral exploitation. We compared values of species co-occurrence between countries and between species subsets (natives/colonizers in Chile; exclusives/colonizers in Spain) within each country, and we characterised them according to climatic variables. We hypothesized that: a) the different coexistence times of the species in the two regions should give rise to communities presenting a spatial pattern further from random in Spain than in Chile, and b) the co-occurrence patterns in the grasslands are affected by mesoclimatic factors in both regions. The patterns of co-occurrence are similar in Spain and Chile, mostly showing a spatial pattern more segregated than expected by random. The colonizer species are more segregated in Spain than in Chile, possibly determined by the longer residence time of the species in the source area than in the invaded one. The segregation of species in Chile is related to water availability, with species being less segregated in habitats with greater water deficit; in Spain no relationship with climatic variables was found. Our results suggest that the process of alteration of the original Chilean communities following invasion has not prevented the assembly of native and colonizer species.

  3. [Variables related to the emergence of differential patterns in work motivation].

    PubMed

    Arrieta, Carlos; Navarro, José; Vicente, Susana

    2008-11-01

    Several longitudinal studies have shown that motivation at work acts chaotically. In very few cases, it may be linear or random. However, the factors that might explain why these different patterns emerge have not been analysed to date. In this exploratory study, we interviewed 73 employees whose motivational patterns were previously known. The results revealed that chaotic patterns were associated with high levels of motivation, self-efficacy beliefs, and perceptions of instrumentality, and also with intrinsic personal goal orientation and a perception of high work control. Linear patterns were associated with extrinsic goals and a perception of work as difficult, and random patterns were linked to high flexibility at work.

  4. Research on three-phase traffic flow modeling based on interaction range

    NASA Astrophysics Data System (ADS)

    Zeng, Jun-Wei; Yang, Xu-Gang; Qian, Yong-Sheng; Wei, Xu-Ting

    2017-12-01

    On the basis of the multiple velocity difference effect (MVDE) model and under short-range interaction, a new three-phase traffic flow model (S-MVDE) is proposed by carefully considering how the relationship between the speeds of two adjacent cars influences the running state of the rear car. The random slowing rule in the MVDE model is modified in order to emphasize the influence of the interaction between two vehicles on the probability of vehicle deceleration. A single-lane model without a bottleneck structure is simulated under periodic boundary conditions, and it is shown that the traffic flow simulated by the S-MVDE model generates the synchronous flow of three-phase traffic theory. Under open boundary conditions, the model is extended by adding an on-ramp; the congestion patterns caused by the bottleneck are simulated at different main-road and on-ramp flow rates and compared with the traffic congestion patterns observed by Kerner et al., and the results are found to be consistent with the congestion characteristics of three-phase traffic flow theory.

  5. The Hot (Invisible?) Hand: Can Time Sequence Patterns of Success/Failure in Sports Be Modeled as Repeated Random Independent Trials?

    PubMed Central

    Yaari, Gur; Eisenmann, Shmuel

    2011-01-01

    The long lasting debate initiated by Gilovich, Vallone and Tversky in 1985 is revisited: does a “hot hand” phenomenon exist in sports? Hereby we come back to one of the cases analyzed by the original study, but with a much larger data set: all free throws taken during five regular seasons of the National Basketball Association (NBA). Evidence supporting the existence of the “hot hand” phenomenon is provided. However, while statistical traces of this phenomenon are observed in the data, an open question still remains: are these non-random patterns a result of “success breeds success” and “failure breeds failure” mechanisms or simply “better” and “worse” periods? Although free-throw data are not adequate to answer this question in a definite way, we speculate, based on them, that the latter is the dominant cause behind the appearance of the “hot hand” phenomenon in the data. PMID:21998630

  6. The hot (invisible?) hand: can time sequence patterns of success/failure in sports be modeled as repeated random independent trials?

    PubMed

    Yaari, Gur; Eisenmann, Shmuel

    2011-01-01

    The long lasting debate initiated by Gilovich, Vallone and Tversky in 1985 is revisited: does a "hot hand" phenomenon exist in sports? Hereby we come back to one of the cases analyzed by the original study, but with a much larger data set: all free throws taken during five regular seasons of the National Basketball Association (NBA). Evidence supporting the existence of the "hot hand" phenomenon is provided. However, while statistical traces of this phenomenon are observed in the data, an open question still remains: are these non-random patterns a result of "success breeds success" and "failure breeds failure" mechanisms or simply "better" and "worse" periods? Although free-throw data are not adequate to answer this question in a definite way, we speculate, based on them, that the latter is the dominant cause behind the appearance of the "hot hand" phenomenon in the data.

  7. Distinct aggregation patterns and fluid porous phase in a 2D model for colloids with competitive interactions

    NASA Astrophysics Data System (ADS)

    Bordin, José Rafael

    2018-04-01

    In this paper we explore the self-assembly patterns in a two dimensional colloidal system using extensive Langevin Dynamics simulations. The pair potential proposed to model the competitive interaction has a short-range length scale between first neighbors and a second characteristic length scale between third neighbors. We investigate how the temperature and colloidal density affect the assembled morphologies. The potential shows aggregate patterns similar to those observed in previous works, such as clusters, stripes and a porous phase. Nevertheless, we observe at high densities and temperatures a porous mesophase with a high mobility, which we name the fluid porous phase, while at lower temperatures the porous structure is rigid. Triangular packing was observed for the colloids and pores in both the solid and fluid porous phases. Our results show that the porous structure is well defined for a large range of temperature and density, and that the fluid porous phase is a consequence of the competitive interaction and the random forces from the Langevin Dynamics.

  8. Robustness of Modeling of Out-of-Service Gas Mechanical Face Seal

    NASA Technical Reports Server (NTRS)

    Green, Itzhak

    2007-01-01

    Gas-lubricated mechanical face seals are ubiquitous in many high performance applications such as compressors and gas turbines. The literature contains various analyses of seals having orderly face patterns (radial taper, waves, spiral grooves, etc.). These are useful for design purposes and for performance predictions. However, seals returning from service (or from testing) inevitably contain wear tracks and warped faces that depart from the aforementioned orderly patterns. Questions then arise as to the heat generated at the interface, leakage rates, axial displacement and tilts, minimum film thickness, contact forces, etc. This work describes an analysis of seals that may inherit any (i.e., random) face pattern. A comprehensive computer code is developed, based upon the Newton-Raphson method, which solves for the equilibrium of the axial force and tilting moments that are generated by asperity contact and fluid film effects. A contact mechanics model is incorporated along with a finite volume method that solves the compressible Reynolds equation. Results are presented for a production seal that has sustained a testing cycle.

  9. Speckle lithography for fabricating Gaussian, quasi-random 2D structures and black silicon structures.

    PubMed

    Bingi, Jayachandra; Murukeshan, Vadakke Matham

    2015-12-18

    Laser speckle pattern is a granular structure formed due to random coherent wavelet interference and generally considered as noise in optical systems including photolithography. Contrary to this, in this paper, we use the speckle pattern to generate predictable and controlled Gaussian random structures and quasi-random structures photo-lithographically. The random structures made using this proposed speckle lithography technique are quantified based on speckle statistics, radial distribution function (RDF) and fast Fourier transform (FFT). The control over the speckle size, density and speckle clustering facilitates the successful fabrication of black silicon with different surface structures. The controllability and tunability of randomness makes this technique a robust method for fabricating predictable 2D Gaussian random structures and black silicon structures. These structures can enhance the light trapping significantly in solar cells and hence enable improved energy harvesting. Further, this technique can enable efficient fabrication of disordered photonic structures and random media based devices.

  10. Gender, Alcohol Consumption Patterns, and Engagement in Sexually Intimate Behaviors among Adolescents and Young Adults in Nha Trang, Viet Nam

    ERIC Educational Resources Information Center

    Kaljee, Linda M.; Green, Mackenzie S.; Zhan, Min; Riel, Rosemary; Lerdboon, Porntip; Lostutter, Ty W.; Tho, Le Huu; Luong, Vo Van; Minh, Truong Tan

    2011-01-01

    A randomly selected cross-sectional survey was conducted with 880 youth (16 to 24 years) in Nha Trang City to assess relationships between alcohol consumption and sexual behaviors. A timeline followback method was employed. Chi-square, generalized logit modeling and logistic regression analyses were performed. Of the sample, 78.2% male and 56.1%…

  11. Information processing in dendrites I. Input pattern generalisation.

    PubMed

    Gurney, K N

    2001-10-01

    In this paper and its companion, we address the question as to whether there are any general principles underlying information processing in the dendritic trees of biological neurons. In order to address this question, we make two assumptions. First, the key architectural feature of dendrites responsible for many of their information processing abilities is the existence of independent sub-units performing local non-linear processing. Second, any general functional principles operate at a level of abstraction in which neurons are modelled by Boolean functions. To accommodate these assumptions, we therefore define a Boolean model neuron-the multi-cube unit (MCU)-which instantiates the notion of the discrete functional sub-unit. We then use this model unit to explore two aspects of neural functionality: generalisation (in this paper) and processing complexity (in its companion). Generalisation is dealt with from a geometric viewpoint and is quantified using a new metric-the set of order parameters. These parameters are computed for threshold logic units (TLUs), a class of random Boolean functions, and MCUs. Our interpretation of the order parameters is consistent with our knowledge of generalisation in TLUs and with the lack of generalisation in randomly chosen functions. Crucially, the order parameters for MCUs imply that these functions possess a range of generalisation behaviour. We argue that this supports the general thesis that dendrites facilitate input pattern generalisation despite any local non-linear processing within functionally isolated sub-units.

  12. Global patterns and predictions of seafloor biomass using random forests.

    PubMed

    Wei, Chih-Lin; Rowe, Gilbert T; Escobar-Briones, Elva; Boetius, Antje; Soltwedel, Thomas; Caley, M Julian; Soliman, Yousria; Huettmann, Falk; Qu, Fangyuan; Yu, Zishan; Pitcher, C Roland; Haedrich, Richard L; Wicksten, Mary K; Rex, Michael A; Baguley, Jeffrey G; Sharma, Jyotsna; Danovaro, Roberto; MacDonald, Ian R; Nunnally, Clifton C; Deming, Jody W; Montagna, Paul; Lévesque, Mélanie; Weslawski, Jan Marcin; Wlodarska-Kowalczuk, Maria; Ingole, Baban S; Bett, Brian J; Billett, David S M; Yool, Andrew; Bluhm, Bodil A; Iken, Katrin; Narayanaswamy, Bhavani E

    2010-12-30

    A comprehensive seafloor biomass and abundance database has been constructed from 24 oceanographic institutions worldwide within the Census of Marine Life (CoML) field projects. The machine-learning algorithm, Random Forests, was employed to model and predict seafloor standing stocks from surface primary production, water-column integrated and export particulate organic matter (POM), seafloor relief, and bottom water properties. The predictive models explain 63% to 88% of stock variance among the major size groups. Individual and composite maps of predicted global seafloor biomass and abundance are generated for bacteria, meiofauna, macrofauna, and megafauna (invertebrates and fishes). Patterns of benthic standing stocks were positive functions of surface primary production and delivery of the particulate organic carbon (POC) flux to the seafloor. At a regional scale, the census maps illustrate that integrated biomass is highest at the poles, on continental margins associated with coastal upwelling and with broad zones associated with equatorial divergence. Lowest values are consistently encountered on the central abyssal plains of major ocean basins. The shift of biomass dominance groups with depth is shown to be affected by the decrease in average body size rather than abundance, presumably due to decrease in quantity and quality of food supply. This biomass census and associated maps are vital components of mechanistic deep-sea food web models and global carbon cycling, and as such provide fundamental information that can be incorporated into evidence-based management.
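
    A hedged sketch of the modelling step described above, using synthetic predictors in place of the CoML covariates (the variable names and the synthetic response are placeholders, not the database fields): a Random Forests regressor is fitted to log biomass and its explained variance and feature importances are reported.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Placeholder environmental predictors (not the actual CoML covariates).
primary_production = rng.lognormal(mean=5.0, sigma=0.5, size=n)
poc_flux = primary_production * rng.lognormal(mean=-1.0, sigma=0.3, size=n)
depth = rng.uniform(200, 6000, size=n)
bottom_temp = rng.uniform(-1, 4, size=n)

X = np.column_stack([primary_production, poc_flux, depth, bottom_temp])
# Synthetic "biomass" that increases with POC flux and decreases with depth.
log_biomass = (0.8 * np.log(poc_flux) - 0.0004 * depth
               + 0.1 * rng.standard_normal(n))

X_tr, X_te, y_tr, y_te = train_test_split(X, log_biomass, random_state=1)
rf = RandomForestRegressor(n_estimators=300, random_state=1)
rf.fit(X_tr, y_tr)

print("explained variance (R^2) on held-out data:", round(rf.score(X_te, y_te), 2))
print("feature importances:", np.round(rf.feature_importances_, 2))
```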

  13. Global Patterns and Predictions of Seafloor Biomass Using Random Forests

    PubMed Central

    Wei, Chih-Lin; Rowe, Gilbert T.; Escobar-Briones, Elva; Boetius, Antje; Soltwedel, Thomas; Caley, M. Julian; Soliman, Yousria; Huettmann, Falk; Qu, Fangyuan; Yu, Zishan; Pitcher, C. Roland; Haedrich, Richard L.; Wicksten, Mary K.; Rex, Michael A.; Baguley, Jeffrey G.; Sharma, Jyotsna; Danovaro, Roberto; MacDonald, Ian R.; Nunnally, Clifton C.; Deming, Jody W.; Montagna, Paul; Lévesque, Mélanie; Weslawski, Jan Marcin; Wlodarska-Kowalczuk, Maria; Ingole, Baban S.; Bett, Brian J.; Billett, David S. M.; Yool, Andrew; Bluhm, Bodil A.; Iken, Katrin; Narayanaswamy, Bhavani E.

    2010-01-01

    A comprehensive seafloor biomass and abundance database has been constructed from 24 oceanographic institutions worldwide within the Census of Marine Life (CoML) field projects. The machine-learning algorithm, Random Forests, was employed to model and predict seafloor standing stocks from surface primary production, water-column integrated and export particulate organic matter (POM), seafloor relief, and bottom water properties. The predictive models explain 63% to 88% of stock variance among the major size groups. Individual and composite maps of predicted global seafloor biomass and abundance are generated for bacteria, meiofauna, macrofauna, and megafauna (invertebrates and fishes). Patterns of benthic standing stocks were positive functions of surface primary production and delivery of the particulate organic carbon (POC) flux to the seafloor. At a regional scale, the census maps illustrate that integrated biomass is highest at the poles, on continental margins associated with coastal upwelling and with broad zones associated with equatorial divergence. Lowest values are consistently encountered on the central abyssal plains of major ocean basins. The shift of biomass dominance groups with depth is shown to be affected by the decrease in average body size rather than abundance, presumably due to decrease in quantity and quality of food supply. This biomass census and associated maps are vital components of mechanistic deep-sea food web models and global carbon cycling, and as such provide fundamental information that can be incorporated into evidence-based management. PMID:21209928

  14. Quantifying the evolution of individual scientific impact.

    PubMed

    Sinatra, Roberta; Wang, Dashun; Deville, Pierre; Song, Chaoming; Barabási, Albert-László

    2016-11-04

    Despite the frequent use of numerous quantitative indicators to gauge the professional impact of a scientist, little is known about how scientific impact emerges and evolves in time. Here, we quantify the changes in impact and productivity throughout a career in science, finding that impact, as measured by influential publications, is distributed randomly within a scientist's sequence of publications. This random-impact rule allows us to formulate a stochastic model that uncouples the effects of productivity, individual ability, and luck and unveils the existence of universal patterns governing the emergence of scientific success. The model assigns a unique individual parameter Q to each scientist, which is stable during a career, and it accurately predicts the evolution of a scientist's impact, from the h-index to cumulative citations, and independent recognitions, such as prizes. Copyright © 2016, American Association for the Advancement of Science.

  15. Fat-tailed fluctuations in the size of organizations: the role of social influence.

    PubMed

    Mondani, Hernan; Holme, Petter; Liljeros, Fredrik

    2014-01-01

    Organizational growth processes have consistently been shown to exhibit a fatter-than-Gaussian growth-rate distribution in a variety of settings. Long periods of relatively small changes are interrupted by sudden changes in all size scales. This kind of extreme event can have important consequences for the development of biological and socio-economic systems. Existing models do not derive this aggregated pattern from agent actions at the micro level. We develop an agent-based simulation model on a social network. We take our departure from a model by Schwarzkopf et al. on a scale-free network. We reproduce the fat-tailed pattern out of internal dynamics alone, and also find that it is robust with respect to network topology. Thus, the social network and the local interactions are a prerequisite for generating the pattern, but not the network topology itself. We further extend the model with a parameter δ that weights the relative fraction of an individual's neighbours belonging to a given organization, representing a contextual aspect of social influence. In the lower limit of this parameter, the fraction is irrelevant and the choice of organization is random. In the upper limit of the parameter, the largest fraction quickly dominates, leading to a winner-takes-all situation. We recover the real pattern as an intermediate case between these two extremes.

  16. Extended fault inversion with random slipmaps: A resolution test for the 2012 Mw 7.6 Nicoya, Costa Rica earthquake from a Popperian inversion strategy.

    NASA Astrophysics Data System (ADS)

    Ángel López Comino, José; Stich, Daniel; Ferreira, Ana M. G.; Morales Soto, José

    2015-04-01

    The inversion of seismic data for extended fault slip distributions provides us with detailed models of earthquake sources. The validity of the solutions depends on the fit between observed and synthetic seismograms generated with the source model. However, there may exist more than one model that fits the data in a similar way, leading to a multiplicity of solutions. This underdetermined problem has been analyzed and studied by several authors, who agree that inverting for a single best model may become overly dependent on the details of the procedure. We have addressed this resolution problem by using a global search that scans the solution domain using random slipmaps, applying a Popperian inversion strategy that involves the generation of a representative set of slip distributions. The proposed technique solves the forward problem for a large set of models, calculating their corresponding synthetic seismograms. Then, we propose to perform extended fault inversion through falsification, that is, to falsify inappropriate trial models that do not reproduce the data within a reasonable level of mismodelling. The surviving trial models form our set of coequal solutions. In this way, any ambiguities that might exist can be detected by inspecting the solutions, allowing for an efficient assessment of the resolution. The solution set may contain only members with similar slip distributions, or else uncover some fundamental ambiguity such as, for example, different patterns of main slip patches or different patterns of rupture propagation. For a feasibility study, the proposed resolution test has been evaluated using teleseismic body wave recordings from the September 5th 2012 Nicoya, Costa Rica earthquake. Note that the inversion strategy can be applied to any type of seismic, geodetic or tsunami data for which we can handle the forward problem. A 2D von Karman distribution is used to describe the spectrum of heterogeneity in slipmaps, and we generate possible models by spectral synthesis with random phases, keeping the rake angle, rupture velocity and slip velocity function fixed. The 2012 Nicoya earthquake turns out to be relatively well constrained from 50 teleseismic waveforms. The solution set contains 252 out of 10,000 trial models with normalized L1-fit within 5 percent of the global minimum. The set includes only similar solutions (a single centred slip patch) with minor differences. Uncertainties are related to the details of the slip maximum, including the amount of peak slip (2 m to 3.5 m), as well as the characteristics of peripheral slip below 1 m. Synthetic tests suggest that slip patterns like Nicoya may be a fortunate case, while it may be more difficult to unambiguously reconstruct more distributed slip from teleseismic data.
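
    The random slipmap generation mentioned above can be sketched as spectral synthesis with random phases; the power-spectrum exponent, correlation length and scaling below are one common von Karman-like convention, not necessarily the paper's parameterization.

```python
import numpy as np

def random_slipmap(nx=64, ny=32, dx=1.0, corr_len=10.0, hurst=0.75, seed=0):
    """Random slip distribution via spectral synthesis: amplitude spectrum from a
    von Karman-like power spectrum P(k) ~ (1 + (k*a)^2)^(-(H+1)), random phases."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, d=dx) * 2 * np.pi
    ky = np.fft.fftfreq(ny, d=dx) * 2 * np.pi
    K = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    amplitude = (1.0 + (K * corr_len) ** 2) ** (-(hurst + 1) / 2.0)
    phase = rng.uniform(0, 2 * np.pi, size=(ny, nx))
    # Taking the real part of the inverse FFT enforces a real-valued field.
    field = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
    # Shift and scale to a non-negative slip distribution (metres, illustrative).
    field -= field.min()
    return 3.5 * field / field.max()

slip = random_slipmap()
print("slip map shape:", slip.shape)
print("peak slip (m):", round(float(slip.max()), 2),
      " mean slip (m):", round(float(slip.mean()), 2))
```

    In a falsification workflow, many such maps would be drawn with different seeds, forward-modelled, and rejected or retained according to their misfit to the observed waveforms.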

  17. Modeling missing data in knowledge space theory.

    PubMed

    de Chiusole, Debora; Stefanutti, Luca; Anselmi, Pasquale; Robusto, Egidio

    2015-12-01

    Missing data are a well known issue in statistical inference, because some responses may be missing, even when data are collected carefully. The problem that arises in these cases is how to deal with missing data. In this article, the missingness is analyzed in knowledge space theory, and in particular when the basic local independence model (BLIM) is applied to the data. Two extensions of the BLIM to missing data are proposed: The former, called ignorable missing BLIM (IMBLIM), assumes that missing data are missing completely at random; the latter, called missing BLIM (MissBLIM), introduces specific dependencies of the missing data on the knowledge states, thus assuming that the missing data are missing not at random. The IMBLIM and the MissBLIM modeled the missingness in a satisfactory way, in both a simulation study and an empirical application, depending on the process that generates the missingness: If the missing data-generating process is of type missing completely at random, then either IMBLIM or MissBLIM provides an adequate fit to the data. However, if the pattern of missingness is functionally dependent upon unobservable features of the data (e.g., missing answers are more likely to be wrong), then only a correctly specified model of the missingness distribution provides an adequate fit to the data. (c) 2015 APA, all rights reserved.

  18. Novel layered clustering-based approach for generating ensemble of classifiers.

    PubMed

    Rahman, Ashfaqur; Verma, Brijesh

    2011-05-01

    This paper introduces a novel concept for creating an ensemble of classifiers. The concept is based on generating an ensemble of classifiers through clustering of data at multiple layers. The ensemble classifier model generates a set of alternative clusterings of a dataset at different layers by randomly initializing the clustering parameters and trains a set of base classifiers on the patterns in different clusters at different layers. A test pattern is classified by first finding the appropriate cluster at each layer and then using the corresponding base classifier. The decisions obtained at different layers are fused into a final verdict using majority voting. As the base classifiers are trained on overlapping patterns at different layers, the proposed approach achieves diversity among the individual classifiers. Identification of difficult-to-classify patterns through clustering, as well as achievement of diversity through layering, leads to better classification results, as evidenced by the experimental results.
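
    A small sketch of the layered idea, a simplified reading rather than the authors' implementation: each layer clusters the training data with a different random initialization, a base classifier is trained per cluster, and a test point is classified by majority vote over the layer-wise predictions. The dataset, cluster counts and base learner are illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

layers = []
for layer_id in range(5):                        # five alternative clusterings
    km = KMeans(n_clusters=4, n_init=10, random_state=layer_id)
    labels = km.fit_predict(X_tr)
    classifiers = {}
    for c in np.unique(labels):
        clf = DecisionTreeClassifier(max_depth=4, random_state=layer_id)
        clf.fit(X_tr[labels == c], y_tr[labels == c])
        classifiers[c] = clf
    layers.append((km, classifiers))

def predict(x):
    """Find the appropriate cluster in each layer, then majority-vote."""
    x = x.reshape(1, -1)
    votes = []
    for km, cluster_classifiers in layers:
        cluster = int(km.predict(x)[0])
        votes.append(int(cluster_classifiers[cluster].predict(x)[0]))
    return int(np.bincount(votes).argmax())

pred = np.array([predict(x) for x in X_te])
print("ensemble accuracy:", round(float((pred == y_te).mean()), 3))
```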

  19. Pattern Selection and Super-Patterns in Opinion Dynamics

    NASA Astrophysics Data System (ADS)

    Ben-Naim, Eli; Scheel, Arnd

    We study pattern formation in the bounded confidence model of opinion dynamics. In this random process, opinion is quantified by a single variable. Two agents may interact and reach a fair compromise, but only if their difference of opinion falls below a fixed threshold. Starting from a uniform distribution of opinions with compact support, a traveling wave forms and it propagates from the domain boundary into the unstable uniform state. Consequently, the system reaches a steady state with isolated clusters that are separated by a distance larger than the interaction range. These clusters form a quasi-periodic pattern where the sizes of the clusters and the separations between them are nearly constant. We obtain analytically the average separation L between clusters. Interestingly, there are also very small quasi-periodic modulations in the size of the clusters. The spatial periods of these modulations are a series of integers that follow from the continued-fraction representation of the irrational average separation L.
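
    A minimal agent-based sketch of the bounded confidence process described above (a Deffuant-Weisbuch-style pairwise variant; the analysis above concerns the continuum rate equation, so details differ): agents compromise only when their opinions lie within the threshold, and isolated clusters separated by more than the interaction range emerge from an initially uniform distribution.

```python
import numpy as np

rng = np.random.default_rng(5)

n_agents, eps, steps = 2000, 0.05, 300000
opinions = rng.uniform(0, 1, n_agents)          # compact support [0, 1]

for _ in range(steps):
    i, j = rng.integers(0, n_agents, 2)
    if i != j and abs(opinions[i] - opinions[j]) < eps:
        mean = 0.5 * (opinions[i] + opinions[j])   # fair compromise
        opinions[i] = opinions[j] = mean

# Identify clusters as groups of opinions separated by gaps larger than eps.
sorted_ops = np.sort(opinions)
breaks = np.where(np.diff(sorted_ops) > eps)[0]
clusters = np.split(sorted_ops, breaks + 1)
centers = [float(c.mean()) for c in clusters]

print("number of clusters:", len(centers))
print("average separation between neighbouring clusters:",
      round(float(np.mean(np.diff(centers))), 3))
```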

  20. Punctuated equilibrium dynamics in human communications

    NASA Astrophysics Data System (ADS)

    Peng, Dan; Han, Xiao-Pu; Wei, Zong-Wen; Wang, Bing-Hong

    2015-10-01

    A minimal model based on a network incorporating individual interactions is proposed to study the non-Poisson statistical properties of human behavior: individuals in the system interact with their neighbors, the probability of an individual acting correlates with its activity, and all individuals involved in an action change their activities randomly. The model reproduces a variety of spatial-temporal patterns observed in empirical studies of human daily communications, providing insight into various human activities and embracing a range of realistic social interacting systems, in particular an intriguing bimodal phenomenon. This model bridges priority queueing theory and punctuated equilibrium dynamics, and our modeling and analysis are likely to shed light on non-Poisson phenomena in many complex systems.
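
    A loose numerical reading of these ingredients is sketched below (Python with NumPy and networkx). The network choice, the proportional-to-activity acting rule, the selection of a single random neighbor, and the random activity reset are assumed details for illustration; the paper's exact update rules may differ.

      # Loose sketch: agents on a random network act with probability proportional to
      # an "activity" value, an acting agent interacts with a random neighbour, and
      # both then redraw their activities at random ("punctuated" resets).
      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(2)
      G = nx.erdos_renyi_graph(200, 0.05, seed=2)
      activity = rng.random(G.number_of_nodes())
      events = {i: [] for i in G.nodes}              # event times per agent

      for t in range(20000):
          probs = activity / activity.sum()
          i = rng.choice(G.number_of_nodes(), p=probs)   # more active agents act more often
          nbrs = list(G.neighbors(i))
          if not nbrs:
              continue
          j = rng.choice(nbrs)
          events[i].append(t)
          events[j].append(t)
          activity[i], activity[j] = rng.random(), rng.random()  # random activity reset

      # Inter-event times for one agent; heavy tails or bimodality are the kind of
      # non-Poisson signatures discussed in the abstract.
      iet = np.diff(events[0])
      print(iet[:20])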

  1. Dynamic Assessment of Water Quality Based on a Variable Fuzzy Pattern Recognition Model

    PubMed Central

    Xu, Shiguo; Wang, Tianxiang; Hu, Suduan

    2015-01-01

    Water quality assessment is an important foundation of water resource protection and is affected by many indicators. The dynamic and fuzzy changes of water quality lead to problems for proper assessment. This paper explores an assessment method that follows these changes in water quality. The proposed method is based on the variable fuzzy pattern recognition (VFPR) model and combines the analytic hierarchy process (AHP) model with the entropy weight (EW) method. The proposed method was applied to dynamically assess the water quality of Biliuhe Reservoir (Dalian, China). The results show that the water quality level is between levels 2 and 3, and is worse in August or September owing to increasing water temperature and rainfall. Weights and methods are compared and random errors in the values of the indicators are analyzed. It is concluded that the proposed method has the advantages of dynamism, fuzzification and stability by considering the interval influence of multiple indicators and using the average level characteristic values of four models as results. PMID:25689998

  2. Dynamic assessment of water quality based on a variable fuzzy pattern recognition model.

    PubMed

    Xu, Shiguo; Wang, Tianxiang; Hu, Suduan

    2015-02-16

    Water quality assessment is an important foundation of water resource protection and is affected by many indicators. The dynamic and fuzzy changes of water quality lead to problems for proper assessment. This paper explores an assessment method that follows these changes in water quality. The proposed method is based on the variable fuzzy pattern recognition (VFPR) model and combines the analytic hierarchy process (AHP) model with the entropy weight (EW) method. The proposed method was applied to dynamically assess the water quality of Biliuhe Reservoir (Dalian, China). The results show that the water quality level is between levels 2 and 3, and is worse in August or September owing to increasing water temperature and rainfall. Weights and methods are compared and random errors in the values of the indicators are analyzed. It is concluded that the proposed method has the advantages of dynamism, fuzzification and stability by considering the interval influence of multiple indicators and using the average level characteristic values of four models as results.
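
    The entropy weight (EW) step referenced in both records is a standard calculation. The sketch below (Python/NumPy) shows the usual form of that computation on a made-up indicator matrix; it is not the paper's data or its combined AHP/VFPR procedure.

      # Sketch of the standard entropy weight (EW) computation; the indicator matrix
      # below is a made-up illustration, not Biliuhe Reservoir data.
      import numpy as np

      # rows = monitoring samples, columns = water-quality indicators
      X = np.array([[2.1, 0.30, 6.5],
                    [1.8, 0.45, 7.2],
                    [2.6, 0.25, 6.9],
                    [2.3, 0.50, 7.5]])

      P = X / X.sum(axis=0)                                 # share of each sample per indicator
      n = X.shape[0]
      entropy = -(P * np.log(P)).sum(axis=0) / np.log(n)    # entropy of each indicator
      weights = (1 - entropy) / (1 - entropy).sum()         # low-entropy indicators weigh more
      print("entropy weights:", weights)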

  3. Spatial pattern affects diversity-productivity relationships in experimental meadow communities

    NASA Astrophysics Data System (ADS)

    Lamošová, Tereza; Doležal, Jiří; Lanta, Vojtěch; Lepš, Jan

    2010-05-01

    Plant species create aggregations of conspecifics as a consequence of limited seed dispersal, clonal growth and a heterogeneous environment. Such intraspecific aggregation increases the importance of intraspecific competition relative to interspecific competition, which may slow down competitive exclusion and promote species coexistence. To examine how spatial aggregation impacts the functioning of experimental assemblages of varying species richness, eight perennial grassland species of different growth form were grown in random and aggregated patterns in monocultures, two-, four-, and eight-species mixtures. In mixtures with an aggregated pattern, monospecific clumps were interspecifically segregated. Mixed model ANOVA was used to test (i) how the total productivity and productivity of individual species is affected by the number of species in a mixture, and (ii) how these relationships are affected by the spatial pattern of sown plants. The main patterns of productivity response to species richness conform to other studies: non-transgressive overyielding is omnipresent (the productivity of mixtures is higher than the average of their constituent species so that the net diversity, selection and complementarity effects are positive), whereas transgressive overyielding is found only in a minority of cases (average of log(overyielding) being close to zero or negative). The theoretical prediction that plants in a random pattern should produce more than in an aggregated pattern (in the aggregated pattern, distances to conspecific neighbours are smaller and competition among neighbours consequently stronger) was confirmed in monocultures of all the eight species. The situation is more complicated in mixtures, probably as a consequence of the complex interplay between interspecific and intraspecific competition. The most productive species (Achillea, Holcus, Plantago) were competitively superior and increased their relative productivity with mixture richness. The intraspecific competition of these species is stronger than that of most other species. The aggregated pattern in the full mixture increased the survival of subordinate species, and consequently, we conclude that an aggregated pattern can promote species coexistence (or at least postpone competitive exclusion), particularly in comparison with homogeneously sown mixtures.

  4. Why the leopard got its spots: relating pattern development to ecology in felids

    PubMed Central

    Allen, William L.; Cuthill, Innes C.; Scott-Samuel, Nicholas E.; Baddeley, Roland

    2011-01-01

    A complete explanation of the diversity of animal colour patterns requires an understanding of both the developmental mechanisms generating them and their adaptive value. However, only two previous studies, which involved computer-generated evolving prey, have attempted to make this link. This study examines variation in the camouflage patterns displayed on the flanks of many felids. After controlling for the effects of shared ancestry using a fully resolved molecular phylogeny, this study shows how phenotypes from plausible felid coat pattern generation mechanisms relate to ecology. We found that likelihood of patterning and pattern attributes, such as complexity and irregularity, were related to felids' habitats, arboreality and nocturnality. Our analysis also indicates that disruptive selection is a likely explanation for the prevalence of melanistic forms in Felidae. Furthermore, we show that there is little phylogenetic signal in the visual appearance of felid patterning, indicating that camouflage adapts to ecology over relatively short time scales. Our method could be applied to any taxon with colour patterns that can reasonably be matched to reaction–diffusion and similar models, where the kinetics of the reaction between two or more initially randomly dispersed morphogens determines the outcome of pattern development. PMID:20961899
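
    The reaction-diffusion mechanism invoked here, in which two initially randomly dispersed morphogens react and diffuse, can be illustrated with a generic Gray-Scott simulation (Python/NumPy). This is a standard textbook system with classic spot-forming parameters, not the specific felid coat pattern generation mechanism evaluated in the paper.

      # Minimal Gray-Scott reaction-diffusion sketch: two "morphogens" starting from a
      # nearly uniform state with small random perturbations self-organize into spots.
      import numpy as np

      def laplacian(Z):
          return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                  np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

      n = 128
      Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065        # classic spot-forming parameters
      U = np.ones((n, n))
      V = np.zeros((n, n))
      U += 0.02 * np.random.default_rng(0).standard_normal((n, n))   # random fluctuations
      V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.5                           # small seed

      for _ in range(5000):
          uvv = U * V * V
          U += Du * laplacian(U) - uvv + F * (1 - U)
          V += Dv * laplacian(V) + uvv - (F + k) * V

      print("pattern contrast in V:", V.max() - V.min())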

  5. Encoding plaintext by Fourier transform hologram in double random phase encoding using fingerprint keys

    NASA Astrophysics Data System (ADS)

    Takeda, Masafumi; Nakano, Kazuya; Suzuki, Hiroyuki; Yamaguchi, Masahiro

    2012-09-01

    It has been shown that biometric information can be used as a cipher key for binary data encryption by applying double random phase encoding. In such methods, binary data are encoded in a bit pattern image, and the decrypted image becomes a plain image when the key is genuine; otherwise, decrypted images become random images. In some cases, images decrypted by imposters may not be fully random, such that the blurred bit pattern can be partially observed. In this paper, we propose a novel bit coding method based on a Fourier transform hologram, which makes images decrypted by imposters more random. Computer experiments confirm that the method increases the randomness of images decrypted by imposters while keeping the false rejection rate as low as in the conventional method.
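
    Classical double random phase encoding itself is straightforward to sketch with FFTs (Python/NumPy). In the sketch below the phase keys are plain uniform random phases and the plaintext is a stand-in bit-pattern image; the fingerprint-derived keys and the Fourier-transform-hologram bit coding proposed in the paper are not reproduced.

      # Generic double random phase encoding (DRPE) sketch with numpy FFTs.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 64
      plain = (rng.random((n, n)) > 0.5).astype(float)      # stand-in bit-pattern image

      key1 = np.exp(2j * np.pi * rng.random((n, n)))        # input-plane phase key
      key2 = np.exp(2j * np.pi * rng.random((n, n)))        # Fourier-plane phase key

      cipher = np.fft.ifft2(np.fft.fft2(plain * key1) * key2)

      # Decryption with the genuine keys recovers the plaintext amplitude; a wrong
      # key2 yields a noise-like image.
      decrypted = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(key2)) * np.conj(key1)
      print("max reconstruction error:", np.abs(np.abs(decrypted) - plain).max())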

  6. Universal scaling in the branching of the tree of life.

    PubMed

    Herrada, E Alejandro; Tessone, Claudio J; Klemm, Konstantin; Eguíluz, Víctor M; Hernández-García, Emilio; Duarte, Carlos M

    2008-07-23

    Understanding the patterns and processes of the diversification of life on the planet is a key challenge of science. The Tree of Life represents such diversification processes through the evolutionary relationships among the different taxa, and can be extended down to intra-specific relationships. Here we examine the topological properties of a large set of interspecific and intraspecific phylogenies and show that the branching patterns follow allometric rules conserved across the different levels in the Tree of Life, all significantly departing from those expected from the standard null models. The finding of non-random universal patterns of phylogenetic differentiation suggests that similar evolutionary forces drive diversification across the broad range of scales, from macro-evolutionary to micro-evolutionary processes, shaping the diversity of life on the planet.

  7. Emergence of robustness in networks of networks

    NASA Astrophysics Data System (ADS)

    Roth, Kevin; Morone, Flaviano; Min, Byungjoon; Makse, Hernán A.

    2017-06-01

    A model of interdependent networks of networks (NONs) was introduced recently [Proc. Natl. Acad. Sci. (USA) 114, 3849 (2017), 10.1073/pnas.1620808114] in the context of brain activation to identify the neural collective influencers in the brain NON. Here we investigate the emergence of robustness in such a model, and we develop an approach to derive an exact expression for the random percolation transition in Erdös-Rényi NONs of this kind. Analytical calculations are in agreement with numerical simulations, and highlight the robustness of the NON against random node failures, which thus presents a new robust universality class of NONs. The key aspect of this robust NON model is that a node can be activated even if it does not belong to the giant mutually connected component, thus allowing the NON to be built from below the percolation threshold, which is not possible in previous models of interdependent networks. Interestingly, the phase diagram of the model unveils particular patterns of interconnectivity for which the NON is most vulnerable, thereby marking the boundary above which the robustness of the system improves with increasing dependency connections.

  8. Dense modifiable interconnections utilizing photorefractive volume holograms

    NASA Astrophysics Data System (ADS)

    Psaltis, Demetri; Qiao, Yong

    1990-11-01

    This report describes an experimental two-layer optical neural network built at Caltech. The system uses photorefractive volume holograms to implement dense, modifiable synaptic interconnections and liquid crystal light valves (LCLVs) to perform nonlinear thresholding operations. Kanerva's Sparse Distributed Memory was implemented using this network and its ability to recognize handwritten characters of the alphabet (A-Z) has been demonstrated experimentally. According to Kanerva's model, the first layer has fixed, random interconnection weights and the second layer is trained by the sum-of-outer-products rule. After training, the recognition rates of the network on the training set (104 patterns) and test set (520 patterns) are 100 and 50 percent, respectively.
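
    The two-layer scheme described here (a fixed random first layer with thresholding, and a second layer trained by the sum-of-outer-products rule) can be sketched numerically (Python/NumPy). The sizes, the stand-in binary patterns, and the labels below are arbitrary, and the optical hardware is of course not modeled.

      # Numerical sketch of a fixed-random-first-layer network trained with the
      # sum-of-outer-products rule on the second layer.
      import numpy as np

      rng = np.random.default_rng(0)
      n_in, n_hidden, n_classes, n_train = 100, 1000, 26, 104

      X = (rng.random((n_train, n_in)) > 0.5).astype(float)         # stand-in patterns
      labels = rng.integers(0, n_classes, n_train)
      Y = np.eye(n_classes)[labels]                                  # one-hot targets

      W1 = rng.standard_normal((n_in, n_hidden))                     # fixed random weights
      H = (X @ W1 > 0).astype(float)                                 # thresholded hidden layer

      W2 = H.T @ Y                                                   # sum of outer products

      def classify(x):
          h = (x @ W1 > 0).astype(float)
          return int(np.argmax(h @ W2))

      train_acc = np.mean([classify(x) == c for x, c in zip(X, labels)])
      print("training-set accuracy:", train_acc)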

  9. Fluctuations of the transcription factor ATML1 generate the pattern of giant cells in the Arabidopsis sepal

    PubMed Central

    Meyer, Heather M; Teles, José; Formosa-Jordan, Pau; Refahi, Yassin; San-Bento, Rita; Ingram, Gwyneth; Jönsson, Henrik; Locke, James C W; Roeder, Adrienne H K

    2017-01-01

    Multicellular development produces patterns of specialized cell types. Yet, it is often unclear how individual cells within a field of identical cells initiate the patterning process. Using live imaging, quantitative image analyses and modeling, we show that during Arabidopsis thaliana sepal development, fluctuations in the concentration of the transcription factor ATML1 pattern a field of identical epidermal cells to differentiate into giant cells interspersed between smaller cells. We find that ATML1 is expressed in all epidermal cells. However, its level fluctuates in each of these cells. If ATML1 levels surpass a threshold during the G2 phase of the cell cycle, the cell will likely enter a state of endoreduplication and become giant. Otherwise, the cell divides. Our results demonstrate a fluctuation-driven patterning mechanism for how cell fate decisions can be initiated through a random yet tightly regulated process. DOI: http://dx.doi.org/10.7554/eLife.19131.001 PMID:28145865

  10. Increased entropy of signal transduction in the cancer metastasis phenotype.

    PubMed

    Teschendorff, Andrew E; Severini, Simone

    2010-07-30

    The statistical study of biological networks has led to important novel biological insights, such as the presence of hubs and hierarchical modularity. There is also a growing interest in studying the statistical properties of networks in the context of cancer genomics. However, relatively little is known as to what network features differ between the cancer and normal cell physiologies, or between different cancer cell phenotypes. Based on the observation that frequent genomic alterations underlie a more aggressive cancer phenotype, we asked if such an effect could be detectable as an increase in the randomness of local gene expression patterns. Using a breast cancer gene expression data set and a model network of protein interactions we derive constrained weighted networks defined by a stochastic information flux matrix reflecting expression correlations between interacting proteins. Based on this stochastic matrix we propose and compute an entropy measure that quantifies the degree of randomness in the local pattern of information flux around single genes. By comparing the local entropies in the non-metastatic versus metastatic breast cancer networks, we here show that breast cancers that metastasize are characterised by a small yet significant increase in the degree of randomness of local expression patterns. We validate this result in three additional breast cancer expression data sets and demonstrate that local entropy better characterises the metastatic phenotype than other non-entropy based measures. We show that increases in entropy can be used to identify genes and signalling pathways implicated in breast cancer metastasis and provide examples of de-novo discoveries of gene modules with known roles in apoptosis, immune-mediated tumour suppression, cell-cycle and tumour invasion. Importantly, we also identify a novel gene module within the insulin growth factor signalling pathway, alteration of which may predispose the tumour to metastasize. These results demonstrate that a metastatic cancer phenotype is characterised by an increase in the randomness of the local information flux patterns. Measures of local randomness in integrated protein interaction mRNA expression networks may therefore be useful for identifying genes and signalling pathways disrupted in one phenotype relative to another. Further exploration of the statistical properties of such integrated cancer expression and protein interaction networks will be a fruitful endeavour.
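
    The local entropy idea, a per-gene measure of how evenly "information flux" is spread over a gene's interaction neighbourhood, can be sketched as follows (Python/NumPy). The construction below (row-normalizing stand-in edge weights and computing a normalized Shannon entropy per node) is a reading of the general approach, not the paper's exact stochastic information flux matrix derived from expression correlations.

      # Sketch of a per-node "local entropy" over a row-stochastic weight matrix.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 50
      A = (rng.random((n, n)) < 0.1).astype(float)       # stand-in interaction network
      A = np.triu(A, 1)
      A = A + A.T                                        # undirected, no self-loops
      W = A * rng.random((n, n))                         # stand-in edge weights

      def local_entropy(W):
          ent = np.full(W.shape[0], np.nan)
          for i in range(W.shape[0]):
              w = W[i, W[i] > 0]
              if w.size < 2:
                  continue
              p = w / w.sum()                            # stochastic "flux" out of node i
              ent[i] = -(p * np.log(p)).sum() / np.log(w.size)   # normalized to [0, 1]
          return ent

      print("mean local entropy:", np.nanmean(local_entropy(W)))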

  11. Investigation of factors affecting the injury severity of single-vehicle rollover crashes: A random-effects generalized ordered probit model.

    PubMed

    Anarkooli, Alireza Jafari; Hosseinpour, Mehdi; Kardar, Adele

    2017-09-01

    Rollover crashes are responsible for a notable number of serious injuries and fatalities; hence, they are of great concern to transportation officials and safety researchers. However, only a few published studies have analyzed the factors associated with severity outcomes of rollover crashes. This research has two objectives. The first objective is to investigate the effects of various factors, of which some have been rarely reported in the existing studies, on the injury severities of single-vehicle (SV) rollover crashes based on six-year crash data collected on the Malaysian federal roads. A random-effects generalized ordered probit (REGOP) model is employed in this study to analyze injury severity patterns caused by rollover crashes. The second objective is to examine the performance of the proposed approach, REGOP, for modeling rollover injury severity outcomes. To this end, a mixed logit (MXL) model is also fitted in this study because of its popularity in injury severity modeling. Regarding the effects of the explanatory variables on the injury severity of rollover crashes, the results reveal that factors including dark without supplemental lighting, rainy weather condition, light truck vehicles (e.g., sport utility vehicles, vans), heavy vehicles (e.g., bus, truck), improper overtaking, vehicle age, traffic volume and composition, number of travel lanes, speed limit, undulating terrain, presence of central median, and unsafe roadside conditions are positively associated with more severe SV rollover crashes. On the other hand, unpaved shoulder width, area type, driver occupation, and number of access points are found to be significant variables decreasing the probability of being killed or severely injured (i.e., KSI) in rollover crashes. Land use and side friction are significant and positively associated only with the slight-injury category. These findings provide valuable insights into the causes and factors affecting the injury severity patterns of rollover crashes, and thus can help develop effective countermeasures to reduce the severity of rollover crashes. The model comparison results show that the REGOP model is found to outperform the MXL model in terms of goodness-of-fit measures, and also is significantly superior to other extensions of ordered probit models, including generalized ordered probit and random-effects ordered probit (REOP) models. As a result, this research introduces REGOP as a promising tool for future research focusing on crash injury severity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Concurrent design of quasi-random photonic nanostructures

    PubMed Central

    Lee, Won-Kyu; Yu, Shuangcheng; Engel, Clifford J.; Reese, Thaddeus; Rhee, Dongjoon; Chen, Wei

    2017-01-01

    Nanostructured surfaces with quasi-random geometries can manipulate light over broadband wavelengths and wide ranges of angles. Optimization and realization of stochastic patterns have typically relied on serial, direct-write fabrication methods combined with real-space design. However, this approach is not suitable for customizable features or scalable nanomanufacturing. Moreover, trial-and-error processing cannot guarantee fabrication feasibility because processing–structure relations are not included in conventional designs. Here, we report wrinkle lithography integrated with concurrent design to produce quasi-random nanostructures in amorphous silicon at wafer scales that achieved over 160% light absorption enhancement from 800 to 1,200 nm. The quasi-periodicity of patterns, materials filling ratio, and feature depths could be independently controlled. We statistically represented the quasi-random patterns by Fourier spectral density functions (SDFs) that could bridge the processing–structure and structure–performance relations. Iterative search of the optimal structure via the SDF representation enabled concurrent design of nanostructures and processing. PMID:28760975

  13. GEM-CEDAR Challenge: Poynting Flux at DMSP and Modeled Joule Heat

    NASA Technical Reports Server (NTRS)

    Rastaetter, Lutz; Shim, Ja Soon; Kuznetsova, Maria M.; Kilcommons, Liam M.; Knipp, Delores J.; Codrescu, Mihail; Fuller-Rowell, Tim; Emery, Barbara; Weimer, Daniel R.; Cosgrove, Russell

    2016-01-01

    Poynting flux into the ionosphere measures the electromagnetic energy coming from the magnetosphere. This energy flux can vary greatly between quiet times and geomagnetically active times. As part of the Geospace Environment Modeling-Coupling, Energetics and Dynamics of Atmospheric Regions (GEM-CEDAR) modeling challenge, physics-based models of the 3-D ionosphere and ionospheric electrodynamics solvers of magnetosphere models that specify Joule heat, as well as empirical models specifying Poynting flux, were run for six geomagnetic storm events of varying intensity. We compared model results with Poynting flux values along the DMSP-15 satellite track computed from ion drift meter and magnetic field observations. Although it is a different quantity, Joule heat can in practice be correlated with incoming Poynting flux because the energy is dissipated primarily at high latitudes where the Poynting flux is deposited. Within the physics-based model group, we find mixed results, with some models overestimating Joule heat and some models agreeing better with observed Poynting flux rates as integrated over auroral passes. In contrast, empirical models tend to underestimate integrated Poynting flux values. Modeled Joule heat or Poynting flux patterns often resemble the observed Poynting flux patterns on a large scale, but amplitudes can differ by a factor of 2 or more owing to the highly localized nature of observed Poynting flux deposition, which is not captured by the models. In addition, the positioning of modeled patterns appears to be randomly shifted relative to the observed Poynting flux energy input. This study is the first to compare Poynting flux and Joule heat in a large variety of models of the ionosphere.

  14. Optical side-effects of fs-laser treatment in refractive surgery investigated by means of a model eye

    PubMed Central

    Ackermann, Roland; Kammel, Robert; Merker, Marina; Kamm, Andreas; Tünnermann, Andreas; Nolte, Stefan

    2013-01-01

    Optical side-effects of fs-laser treatment in refractive surgery are investigated by means of a model eye. We show that rainbow glare is the predominant perturbation, which can be avoided by randomly distributing laser spots within the lens. For corneal applications such as fs-LASIK, even a regular grid with spot-to-spot distances of ~3 µm is sufficient to minimize rainbow glare perception. Contrast sensitivity is affected when the lens is treated with large 3D patterns. PMID:23413236

  15. Testing treatment effect in schizophrenia clinical trials with heavy patient dropout using latent class growth mixture models.

    PubMed

    Kong, Fanhui; Chen, Yeh-Fong

    2016-07-01

    By examining the outcome trajectories of patients who drop out for different reasons in schizophrenia trials, we note that although patients are recruited under the same protocol and have compatible baseline characteristics, they may respond differently even to the same treatment. Some patients show consistent improvement while others only have temporary relief. This creates different patient subpopulations characterized by their response and dropout patterns. At the same time, those who continue to improve seem to be more likely to complete the study, while those who only experience temporary relief have a higher chance of dropping out. Such a phenomenon appears to be quite general in schizophrenia clinical trials. This simultaneous inhomogeneity in both patient response and dropout patterns creates a missing-not-at-random scenario and therefore results in biases when statistical methods based on the missing-at-random assumption are used to test treatment efficacy. In this paper, we propose to use the latent class growth mixture model, which is a special case of the latent mixture model, to conduct the statistical analyses in such situations. This model allows us to take the inhomogeneity among subpopulations into consideration and make more accurate inferences on the treatment effect at any visit time. Compared with conventional statistical methods such as the mixed-effects model for repeated measures, we demonstrate through simulations that the proposed latent mixture model approach gives better control of the Type I error rate in testing the treatment effect. Published 2016. This article is a U.S. Government work and is in the public domain in the USA. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Clustering of galaxies near damped Lyman-alpha systems with (z) = 2.6

    NASA Technical Reports Server (NTRS)

    Wolfe, A. M.

    1993-01-01

    The galaxy two-point correlation function, xi, at (z) = 2.6 is determined by comparing the number of Ly-alpha-emitting galaxies in narrowband CCD fields selected for the presence of damped Ly-alpha absorption to their number in randomly selected control fields. Comparisons between the presented determination of (xi), a density-weighted volume average of xi, and model predictions for (xi) at large redshifts show that models in which the clustering pattern is fixed in proper coordinates are highly unlikely, while better agreement is obtained if the clustering pattern is fixed in comoving coordinates. Therefore, clustering of Ly-alpha-emitting galaxies around damped Ly-alpha systems at large redshifts is strong. It is concluded that the faint blue galaxies are drawn from a parent population different from normal galaxies, the presumed offspring of damped Ly-alpha systems.

  17. The inverse niche model for food webs with parasites

    USGS Publications Warehouse

    Warren, Christopher P.; Pascual, Mercedes; Lafferty, Kevin D.; Kuris, Armand M.

    2010-01-01

    Although parasites represent an important component of ecosystems, few field and theoretical studies have addressed the structure of parasites in food webs. We evaluate the structure of parasitic links in an extensive salt marsh food web, with a new model distinguishing parasitic links from non-parasitic links among free-living species. The proposed model is an extension of the niche model for food web structure, motivated by the potential role of size (and related metabolic rates) in structuring food webs. The proposed extension captures several properties observed in the data, including patterns of clustering and nestedness, better than does a random model. By relaxing specific assumptions, we demonstrate that two essential elements of the proposed model are the similarity of a parasite's hosts and the increasing degree of parasite specialization, along a one-dimensional niche axis. Thus, inverting one of the basic rules of the original model, the one determining consumers' generality appears critical. Our results support the role of size as one of the organizing principles underlying niche space and food web topology. They also strengthen the evidence for the non-random structure of parasitic links in food webs and open the door to addressing questions concerning the consequences and origins of this structure.
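
    For context, the original niche model that this paper extends can be generated in a few lines (Python/NumPy), following the standard Williams-Martinez construction: each species receives a niche value, a feeding range drawn from a beta distribution calibrated to the target connectance, and a feeding-range centre below its own niche value. The parasite-specific "inverse" rules are described in the paper itself and are not reproduced here.

      # Sketch of the original niche model food-web generator (standard construction).
      import numpy as np

      def niche_model(S=30, C=0.1, seed=0):
          rng = np.random.default_rng(seed)
          n = rng.random(S)                              # niche value of each species
          beta = 1.0 / (2.0 * C) - 1.0
          r = n * rng.beta(1.0, beta, S)                 # feeding range, calibrated to C
          c = rng.uniform(r / 2.0, n)                    # feeding-range centre
          adj = np.zeros((S, S), dtype=int)              # adj[i, j] = 1 if i eats j
          for i in range(S):
              adj[i] = (n >= c[i] - r[i] / 2.0) & (n <= c[i] + r[i] / 2.0)
          return adj

      web = niche_model()
      print("realized connectance:", web.sum() / web.shape[0] ** 2)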

  18. Modeling carbachol-induced hippocampal network synchronization using hidden Markov models

    NASA Astrophysics Data System (ADS)

    Dragomir, Andrei; Akay, Yasemin M.; Akay, Metin

    2010-10-01

    In this work we studied the neural state transitions undergone by the hippocampal neural network using a hidden Markov model (HMM) framework. We first employed a measure based on the Lempel-Ziv (LZ) estimator to characterize the changes in the hippocampal oscillation patterns in terms of their complexity. These oscillations correspond to different modes of hippocampal network synchronization induced by the cholinergic agonist carbachol in the CA1 region of the mouse hippocampus. HMMs are then used to model the dynamics of the LZ-derived complexity signals as first-order Markov chains. Consequently, the signals corresponding to our oscillation recordings can be segmented into a sequence of statistically discriminated hidden states. The segmentation is used for detecting transitions in neural synchronization modes in data recorded from wild-type and triple transgenic mouse models (3xTG) of Alzheimer's disease (AD). Our data suggest that transition from the low-frequency (delta range) continuous oscillation mode into the high-frequency (theta range) oscillation mode, exhibiting repeated burst-type patterns, always occurs through a mode resembling a mixture of the two patterns, continuous with burst. The relatively random patterns of oscillation during this mode may reflect the fact that the neuronal network undergoes re-organization. Further insight into the time durations of these modes (retrieved via the HMM segmentation of the LZ-derived signals) reveals that the mixed mode lasts significantly longer (p < 10^-4) in 3xTG AD mice. These findings, coupled with the documented cholinergic neurotransmission deficits in the 3xTG mice model, may be highly relevant for the case of AD.
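
    Segmenting a one-dimensional complexity signal into hidden states can be sketched with the hmmlearn package (Python); the synthetic three-regime trace below stands in for the LZ-derived complexity signals, and the three-state Gaussian HMM is an assumption for illustration rather than the paper's fitted model.

      # Sketch: segment a 1-D "complexity" trace into hidden states with a Gaussian HMM.
      import numpy as np
      from hmmlearn import hmm

      rng = np.random.default_rng(0)
      # Synthetic complexity trace: low, mixed, and high regimes back to back.
      signal = np.concatenate([rng.normal(0.2, 0.03, 300),
                               rng.normal(0.5, 0.05, 300),
                               rng.normal(0.8, 0.03, 300)])
      X = signal.reshape(-1, 1)

      model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=100,
                              random_state=0)
      model.fit(X)
      states = model.predict(X)                      # most likely hidden-state sequence

      # Durations spent in each inferred mode (the quantity compared between groups
      # in the abstract).
      changes = np.flatnonzero(np.diff(states)) + 1
      segments = np.split(states, changes)
      print([(int(s[0]), len(s)) for s in segments])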

  19. Rationalizing spatial exploration patterns of wild animals and humans through a temporal discounting framework

    PubMed Central

    Namboodiri, Vijay Mohan K.; Levy, Joshua M.; Mihalas, Stefan; Sims, David W.; Hussain Shuler, Marshall G.

    2016-01-01

    Understanding the exploration patterns of foragers in the wild provides fundamental insight into animal behavior. Recent experimental evidence has demonstrated that path lengths (distances between consecutive turns) taken by foragers are well fitted by a power law distribution. Numerous theoretical contributions have posited that “Lévy random walks”—which can produce power law path length distributions—are optimal for memoryless agents searching a sparse reward landscape. It is unclear, however, whether such a strategy is efficient for cognitively complex agents, from wild animals to humans. Here, we developed a model to explain the emergence of apparent power law path length distributions in animals that can learn about their environments. In our model, the agent’s goal during search is to build an internal model of the distribution of rewards in space that takes into account the cost of time to reach distant locations (i.e., temporally discounting rewards). For an agent with such a goal, we find that an optimal model of exploration in fact produces hyperbolic path lengths, which are well approximated by power laws. We then provide support for our model by showing that humans in a laboratory spatial exploration task search space systematically and modify their search patterns under a cost of time. In addition, we find that path length distributions in a large dataset obtained from free-ranging marine vertebrates are well described by our hyperbolic model. Thus, we provide a general theoretical framework for understanding spatial exploration patterns of cognitively complex foragers. PMID:27385831

  20. Assessing randomness and complexity in human motion trajectories through analysis of symbolic sequences

    PubMed Central

    Peng, Zhen; Genewein, Tim; Braun, Daniel A.

    2014-01-01

    Complexity is a hallmark of intelligent behavior consisting both of regular patterns and random variation. To quantitatively assess the complexity and randomness of human motion, we designed a motor task in which we translated subjects' motion trajectories into strings of symbol sequences. In the first part of the experiment participants were asked to perform self-paced movements to create repetitive patterns, copy pre-specified letter sequences, and generate random movements. To investigate whether the degree of randomness can be manipulated, in the second part of the experiment participants were asked to perform unpredictable movements in the context of a pursuit game, where they received feedback from an online Bayesian predictor guessing their next move. We analyzed symbol sequences representing subjects' motion trajectories with five common complexity measures: predictability, compressibility, approximate entropy, Lempel-Ziv complexity, as well as effective measure complexity. We found that subjects' self-created patterns were the most complex, followed by drawing movements of letters and self-paced random motion. We also found that participants could change the randomness of their behavior depending on context and feedback. Our results suggest that humans can adjust both complexity and regularity in different movement types and contexts and that this can be assessed with information-theoretic measures of the symbolic sequences generated from movement trajectories. PMID:24744716
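
    One of the measures listed, Lempel-Ziv complexity, can be computed from a symbol string by counting the phrases of an incremental (LZ78-style) parsing. The sketch below (Python) is one common variant of that count, applied to made-up repetitive and random strings; it is an illustration, not the authors' code.

      # LZ78-style incremental parsing: each phrase is the shortest prefix of the
      # remaining sequence not seen before; the phrase count grows faster for random
      # sequences than for repetitive ones.
      import numpy as np

      def lz_complexity(seq):
          phrases = set()
          count, i = 0, 0
          while i < len(seq):
              j = i + 1
              while seq[i:j] in phrases and j < len(seq):
                  j += 1                     # extend until the phrase is new
              phrases.add(seq[i:j])
              count += 1
              i = j
          return count

      rng = np.random.default_rng(0)
      repetitive = "ab" * 200
      random_seq = "".join(rng.choice(list("abcd"), 400))
      print("repetitive:", lz_complexity(repetitive), "random:", lz_complexity(random_seq))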

  1. Cost effective solution using inverse lithography OPC for DRAM random contact layer

    NASA Astrophysics Data System (ADS)

    Jun, Jinhyuck; Hwang, Jaehee; Choi, Jaeseung; Oh, Seyoung; Park, Chanha; Yang, Hyunjo; Dam, Thuc; Do, Munhoe; Lee, Dong Chan; Xiao, Guangming; Choi, Jung-Hoe; Lucas, Kevin

    2017-04-01

    Many different advanced devices and design layers currently employ double patterning technology (DPT) as a means to overcome lithographic and OPC limitations at low k1 values. Certainly device layers with k1 value below 0.25 require DPT or other pitch splitting methodologies. DPT has also been used to improve patterning of certain device layers with k1 values slightly above 0.25, due to the difficulty of achieving sufficient pattern fidelity with only a single exposure. Unfortunately, this broad adoption of DPT also came with a significant increase in patterning process cost. In this paper, we discuss the development of a single patterning technology process using an integrated Inverse Lithography Technology (ILT) flow for mask synthesis. A single patterning technology flow will reduce the manufacturing cost for a k1 > 0.25 full chip random contact layer in a memory device by replacing the more expensive DPT process with the ILT flow, while also maintaining good lithographic production quality and manufacturable OPC/RET production metrics. This new integrated flow consists of applying ILT to the difficult core region and traditional rule-based assist features (RBAFs) with OPC to the peripheral region of a DRAM contact layer. Comparisons of wafer results between the ILT process and the non-ILT process showed the lithographic benefits of ILT and its ability to enable a robust single patterning process for this low-k1 device layer. Advanced modeling with a negative tone develop (NTD) process achieved the accuracy levels needed for ILT to control feature shapes through dose and focus. Details of these aforementioned results are described in the paper.

  2. A 3D Monte Carlo model of radiation affecting cells, and its application to neuronal cells and GCR irradiation

    NASA Astrophysics Data System (ADS)

    Ponomarev, Artem; Sundaresan, Alamelu; Kim, Angela; Vazquez, Marcelo E.; Guida, Peter; Kim, Myung-Hee; Cucinotta, Francis A.

    A 3D Monte Carlo model of radiation transport in matter is applied to study the effect of heavy ion radiation on human neuronal cells. Central nervous system effects, including cognitive impairment, are suspected from the heavy ion component of galactic cosmic radiation (GCR) during space missions. The model can count, for instance, the number of direct hits from ions, which will have the greatest effect on the cells. For comparison, the remote hits, which are received through δ-rays from the projectile traversing space outside the volume of the cell, are also simulated and their contribution is estimated. To simulate tissue effects from irradiation, cellular matrices of neuronal cells, which were derived from confocal microscopy, were simulated in our model. To produce this realistic model of the brain tissue, image segmentation was used to identify cells in images of cell cultures. The segmented cells were inserted pixel by pixel into the modeled physical space, which represents a volume of interacting cells with periodic boundary conditions (PBCs). PBCs were used to extrapolate the model results to the macroscopic tissue structures. Specific spatial patterns for cell apoptosis are expected from GCR, as heavy ions produce concentrated damage along their trajectories. The apoptotic cell patterns were modeled based on the action cross sections for apoptosis, which were estimated from the available experimental data. The cell patterns were characterized with an autocorrelation function, whose values are higher for non-random cell patterns, and the values of the autocorrelation function were compared for X-ray and Fe ion irradiations. The autocorrelation function indicates the directionality effects present in apoptotic neuronal cells from GCR.

  3. Modeling change in potential landscape vulnerability to forest insect and pathogen disturbances: methods for forested subwatersheds sampled in the midscale interior Columbia River basin assessment.

    Treesearch

    Paul F. Hessburg; Bradley G. Smith; Craig A. Miller; Scott D. Kreiter; R. Brion Salter

    1999-01-01

    In the interior Columbia River basin midscale ecological assessment, including portions of the Klamath and Great Basins, we mapped and characterized historical and current vegetation composition and structure of 337 randomly sampled subwatersheds (9500 ha average size) in 43 subbasins (404 000 ha average size). We compared landscape patterns, vegetation structure and...

  4. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    NASA Astrophysics Data System (ADS)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  5. A Single Mechanism Can Account for Human Perception of Depth in Mixed Correlation Random Dot Stereograms

    PubMed Central

    Cumming, Bruce G.

    2016-01-01

    In order to extract retinal disparity from a visual scene, the brain must match corresponding points in the left and right retinae. This computationally demanding task is known as the stereo correspondence problem. The initial stage of the solution to the correspondence problem is generally thought to consist of a correlation-based computation. However, recent work by Doi et al. suggests that human observers can see depth in a class of stimuli where the mean binocular correlation is 0 (half-matched random dot stereograms). Half-matched random dot stereograms are made up of an equal number of correlated and anticorrelated dots, and the binocular energy model—a well-known model of V1 binocular complex cells—fails to signal disparity here. This has led to the proposition that a second, match-based computation must be extracting disparity in these stimuli. Here we show that a straightforward modification to the binocular energy model—adding a point output nonlinearity—is by itself sufficient to produce cells that are disparity-tuned to half-matched random dot stereograms. We then show that a simple decision model using this single mechanism can reproduce psychometric functions generated by human observers, including reduced performance for large disparities and rapidly updating dot patterns. The model makes predictions about how performance should change with dot size in half-matched stereograms and temporal alternation in correlation, which we test in human observers. We conclude that a single correlation-based computation, based directly on already-known properties of V1 neurons, can account for the literature on mixed correlation random dot stereograms. PMID:27196696

  6. Applying machine learning methods for characterization of hexagonal prisms from their 2D scattering patterns - an investigation using modelled scattering data

    NASA Astrophysics Data System (ADS)

    Salawu, Emmanuel Oluwatobi; Hesse, Evelyn; Stopford, Chris; Davey, Neil; Sun, Yi

    2017-11-01

    Better understanding and characterization of cloud particles, whose properties and distributions affect climate and weather, are essential for the understanding of present climate and climate change. Since imaging cloud probes have limitations of optical resolution, especially for small particles (with diameter < 25 μm), instruments like the Small Ice Detector (SID) probes, which capture high-resolution spatial light scattering patterns from individual particles down to 1 μm in size, have been developed. In this work, we have proposed a method using Machine Learning techniques to estimate simulated particles' orientation-averaged projected sizes (PAD) and aspect ratio from their 2D scattering patterns. The two-dimensional light scattering patterns (2DLSP) of hexagonal prisms are computed using the Ray Tracing with Diffraction on Facets (RTDF) model. The 2DLSP cover the same angular range as the SID probes. We generated 2DLSP for 162 hexagonal prisms at 133 orientations for each. In a first step, the 2DLSP were transformed into rotation-invariant Zernike moments (ZMs), which are particularly suitable for analyses of pattern symmetry. Then we used ZMs, summed intensities, and root mean square contrast as inputs to the advanced Machine Learning methods. We created one random forests classifier for predicting prism orientation, 133 orientation-specific (OS) support vector classification models for predicting the prism aspect-ratios, 133 OS support vector regression models for estimating prism sizes, and another 133 OS Support Vector Regression (SVR) models for estimating the size PADs. We have achieved a high accuracy of 0.99 in predicting prism aspect ratios, and a low value of normalized mean square error of 0.004 for estimating the particle's size and size PADs.

  7. DeepDeath: Learning to predict the underlying cause of death with Big Data.

    PubMed

    Hassanzadeh, Hamid Reza; Ying Sha; Wang, May D

    2017-07-01

    Multiple cause-of-death data provides a valuable source of information that can be used to enhance health standards by predicting health-related trajectories in societies with large populations. These data are often available in large quantities across U.S. states and require Big Data techniques to uncover complex hidden patterns. We design two different classes of models suitable for large-scale analysis of mortality data: a Hadoop-based ensemble of random forests trained over N-grams, and DeepDeath, a deep classifier based on a recurrent neural network (RNN). We apply both classes to the mortality data provided by the National Center for Health Statistics and show that while both perform significantly better than the random classifier, the deep model, which utilizes long short-term memory networks (LSTMs), surpasses the N-gram based models and is capable of learning the temporal aspect of the data without a need for building ad-hoc, expert-driven features.

  8. A null model for microbial diversification

    PubMed Central

    Straub, Timothy J.

    2017-01-01

    Whether prokaryotes (Bacteria and Archaea) are naturally organized into phenotypically and genetically cohesive units comparable to animal or plant species remains contested, frustrating attempts to estimate how many such units there might be, or to identify the ecological roles they play. Analyses of gene sequences in various closely related prokaryotic groups reveal that sequence diversity is typically organized into distinct clusters, and processes such as periodic selection and extensive recombination are understood to be drivers of cluster formation (“speciation”). However, observed patterns are rarely compared with those obtainable with simple null models of diversification under stochastic lineage birth and death and random genetic drift. Via a combination of simulations and analyses of core and phylogenetic marker genes, we show that patterns of diversity for the genera Escherichia, Neisseria, and Borrelia are generally indistinguishable from patterns arising under a null model. We suggest that caution should thus be taken in interpreting observed clustering as a result of selective evolutionary forces. Unknown forces do, however, appear to play a role in Helicobacter pylori, and some individual genes in all groups fail to conform to the null model. Taken together, we recommend the presented birth−death model as a null hypothesis in prokaryotic speciation studies. It is only when the real data are statistically different from the expectations under the null model that some speciation process should be invoked. PMID:28630293
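
    The kind of neutral null model recommended here can be illustrated with a minimal constant-rate birth-death (Gillespie-style) simulation of lineage counts (Python/NumPy). Rates and the stopping time below are arbitrary, and the random-genetic-drift component of the paper's full model is not included.

      # Minimal constant-rate birth-death simulation of lineage counts through time,
      # a toy version of a neutral diversification null model.
      import numpy as np

      def birth_death(birth=1.0, death=0.5, t_max=10.0, seed=0):
          rng = np.random.default_rng(seed)
          t, n = 0.0, 1                                    # start from a single lineage
          history = [(t, n)]
          while t < t_max and n > 0:
              rate = n * (birth + death)
              t += rng.exponential(1.0 / rate)             # waiting time to next event
              n += 1 if rng.random() < birth / (birth + death) else -1
              history.append((t, n))
          return history

      hist = birth_death()
      print("final number of lineages:", hist[-1][1])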

  9. Maximizing lipocalin prediction through balanced and diversified training set and decision fusion.

    PubMed

    Nath, Abhigyan; Subbiah, Karthikeyan

    2015-12-01

    Lipocalins are short in sequence length and perform several important biological functions. These proteins have less than 20% sequence similarity among paralogs. Experimentally identifying them is an expensive and time-consuming process. Computational methods based on sequence similarity for allocating putative members to this family also remain elusive because of the low sequence similarity among its members. Consequently, machine learning methods become a viable alternative for their prediction, using the underlying sequence- or structure-derived features as the input. Ideally, any machine-learning-based prediction method must be trained with all possible variations in the input feature vector (all the sub-class input patterns) to achieve perfect learning. Near-perfect learning can be achieved by training the model with diverse types of input instances belonging to different regions of the entire input space. Furthermore, prediction performance can be improved by balancing the training set, as imbalanced data sets tend to bias predictions towards the majority class and its sub-classes. This paper aims to achieve (i) high generalization ability without classification bias, through diversified and balanced training sets, and (ii) enhanced prediction accuracy, by combining the results of individual classifiers with an appropriate fusion scheme. Instead of creating the training set randomly, we first used the unsupervised Kmeans clustering algorithm to create diversified clusters of input patterns and built a diversified and balanced training set by selecting an equal number of patterns from each of these clusters. Finally, a probability-based classifier fusion scheme was applied to a boosted random forest algorithm (which produced greater sensitivity) and a K-nearest-neighbour algorithm (which produced greater specificity) to achieve better predictive performance than that of the individual base classifiers. The performance of models trained on the Kmeans-preprocessed training set is far better than that of models trained on randomly generated training sets. The proposed method achieved a sensitivity of 90.6%, specificity of 91.4% and accuracy of 91.0% on the first test set, and a sensitivity of 92.9%, specificity of 96.2% and accuracy of 94.7% on the second blind test set. These results establish that diversifying the training set improves the performance of predictive models through superior generalization ability, and that balancing the training set improves prediction accuracy. For smaller data sets, unsupervised Kmeans-based sampling can be a more effective technique for increasing generalization than the usual random splitting method. Copyright © 2015 Elsevier Ltd. All rights reserved.
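
    The two central ideas, sampling an equal number of patterns from each KMeans cluster to form the training set and fusing two base classifiers by averaging their class probabilities, are sketched below (Python with scikit-learn). A plain random forest stands in for the boosted random forest, the data are synthetic, and all sizes are illustrative.

      # (i) build the training set by drawing equally from each KMeans cluster;
      # (ii) fuse a random forest and a k-nearest-neighbour classifier by averaging
      #      their class probabilities.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2],
                                 random_state=0)
      X_pool, X_te, y_pool, y_te = train_test_split(X, y, random_state=0)

      # (i) diversified, balanced sampling from clusters of the training pool
      km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X_pool)
      rng = np.random.default_rng(0)
      per_cluster = 30
      idx = np.concatenate([
          rng.choice(np.flatnonzero(km.labels_ == c),
                     size=min(per_cluster, np.sum(km.labels_ == c)), replace=False)
          for c in range(8)])
      X_tr, y_tr = X_pool[idx], y_pool[idx]

      # (ii) probability-based fusion of the two base classifiers
      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
      fused = 0.5 * (rf.predict_proba(X_te) + knn.predict_proba(X_te))
      pred = fused.argmax(axis=1)
      print("fused accuracy:", np.mean(pred == y_te))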

  10. Prediction of black box warning by mining patterns of Convergent Focus Shift in clinical trial study populations using linked public data.

    PubMed

    Ma, Handong; Weng, Chunhua

    2016-04-01

    To link public data resources for predicting post-marketing drug safety label changes by analyzing the Convergent Focus Shift patterns among drug testing trials. We identified 256 top-selling prescription drugs between 2003 and 2013 and divided them into 83 BBW drugs (drugs with at least one black box warning label) and 173 ROBUST drugs (drugs without any black box warning label) based on their FDA black box warning (BBW) records. We retrieved 7499 clinical trials that each had at least one of these drugs for intervention from the ClinicalTrials.gov. We stratified all the trials by pre-marketing or post-marketing status, study phase, and study start date. For each trial, we retrieved drug and disease concepts from clinical trial summaries to model its study population using medParser and SNOMED-CT. Convergent Focus Shift (CFS) pattern was calculated and used to assess the temporal changes in study populations from pre-marketing to post-marketing trials for each drug. Then we selected 68 candidate drugs, 18 with BBW warning and 50 without, that each had at least nine pre-marketing trials and nine post-marketing trials for predictive modeling. A random forest predictive model was developed to predict BBW acquisition incidents based on CFS patterns among these drugs. Pre- and post-marketing trials of BBW and ROBUST drugs were compared to look for their differences in CFS patterns. Among the 18 BBW drugs, we consistently observed that the post-marketing trials focused more on recruiting patients with medical conditions previously unconsidered in the pre-marketing trials. In contrast, among the 50 ROBUST drugs, the post-marketing trials involved a variety of medications for testing their associations with target intervention(s). We found it feasible to predict BBW acquisitions using different CFS patterns between the two groups of drugs. Our random forest predictor achieved an AUC of 0.77. We also demonstrated the feasibility of the predictor for identifying long-term BBW acquisition events without compromising prediction accuracy. This study contributes a method for post-marketing pharmacovigilance using Convergent Focus Shift (CFS) patterns in clinical trial study populations mined from linked public data resources. These signals are otherwise unavailable from individual data resources. We demonstrated the added value of linked public data and the feasibility of integrating ClinicalTrials.gov summaries and drug safety labels for post-marketing surveillance. Future research is needed to ensure better accessibility and linkage of heterogeneous drug safety data for efficient pharmacovigilance. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Forging patterns and making waves from biology to geology: a commentary on Turing (1952) ‘The chemical basis of morphogenesis’

    PubMed Central

    Ball, Philip

    2015-01-01

    Alan Turing was neither a biologist nor a chemist, and yet the paper he published in 1952, ‘The chemical basis of morphogenesis’, on the spontaneous formation of patterns in systems undergoing reaction and diffusion of their ingredients has had a substantial impact on both fields, as well as in other areas as disparate as geomorphology and criminology. Motivated by the question of how a spherical embryo becomes a decidedly non-spherical organism such as a human being, Turing devised a mathematical model that explained how random fluctuations can drive the emergence of pattern and structure from initial uniformity. The spontaneous appearance of pattern and form in a system far away from its equilibrium state occurs in many types of natural process, and in some artificial ones too. It is often driven by very general mechanisms, of which Turing's model supplies one of the most versatile. For that reason, these patterns show striking similarities in systems that seem superficially to share nothing in common, such as the stripes of sand ripples and of pigmentation on a zebra skin. New examples of ‘Turing patterns' in biology and beyond are still being discovered today. This commentary was written to celebrate the 350th anniversary of the journal Philosophical Transactions of the Royal Society. PMID:25750229

  12. Design space exploration for early identification of yield limiting patterns

    NASA Astrophysics Data System (ADS)

    Li, Helen; Zou, Elain; Lee, Robben; Hong, Sid; Liu, Square; Wang, JinYan; Du, Chunshan; Zhang, Recco; Madkour, Kareem; Ali, Hussein; Hsu, Danny; Kabeel, Aliaa; ElManhawy, Wael; Kwan, Joe

    2016-03-01

    In order to resolve the causality dilemma of which comes first, accurate design rules or real designs, this paper presents a flow for exploring the layout design space to identify, early on, problematic patterns that will negatively affect yield. A new random layout generation method called the Layout Schema Generator (LSG) is reported in this paper; it generates realistic, design-like layouts without any design rule violations. Lithography simulation is then used on the generated layout to discover the potentially problematic patterns (hotspots). These hotspot patterns are further explored by randomly inducing feature and context variations in the identified hotspots through a flow called the Hotspot Variation Flow (HSV). Simulation is then performed on this expanded set of layout clips to identify further problematic patterns. These patterns are then classified into forbidden patterns, which should be included in the design rule checker, and legal patterns, which need better handling in the RET recipes and processes.

  13. A simple spatiotemporal rabies model for skunk and bat interaction in northeast Texas.

    PubMed

    Borchering, Rebecca K; Liu, Hao; Steinhaus, Mara C; Gardner, Carl L; Kuang, Yang

    2012-12-07

    We formulate a simple partial differential equation model in an effort to qualitatively reproduce the spread dynamics and spatial pattern of rabies in northeast Texas with overlapping reservoir species (skunks and bats). Most existing models ignore reservoir species or represent them with patch models based on ordinary differential equations. In our model, we incorporate interspecies rabies infection in addition to random movement of the rabid population. We apply this model to the confirmed case data from northeast Texas with most parameter values obtained or computed from the literature. Results of simulations using both our skunk-only model and our skunk-and-bat model demonstrate that the model with overlapping reservoir species more accurately reproduces the progression of rabies spread in northeast Texas. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Multiple Mating, Paternity and Complex Fertilisation Patterns in the Chokka Squid Loligo reynaudii

    PubMed Central

    Naud, Marie-Jose; Sauer, Warwick H. H.; McKeown, Niall J.; Shaw, Paul W.

    2016-01-01

    Polyandry is widespread and influences patterns of sexual selection, with implications for sexual conflict over mating. Assessing sperm precedence patterns is a first step towards understanding sperm competition within a female and elucidating the roles of male- and female-controlled factors. In this study behavioural field data and genetic data were combined to investigate polyandry in the chokka squid Loligo reynaudii. Microsatellite DNA-based paternity analysis revealed multiple paternity to be the norm, with 79% of broods sired by at least two males. Genetic data also determined that the male who was guarding the female at the moment of sampling was a sire in 81% of the families tested, highlighting mate guarding as a successful male tactic with postcopulatory benefits linked to a sperm deposition site giving privileged access to extruded egg strings. As females lay multiple eggs in capsules (egg strings) in which their position is not altered during maturation, it is possible to describe the spatial/temporal sequence of fertilisation and sperm precedence. There were four different patterns of fertilisation found among the tested egg strings: 1) unique sire; 2) dominant sire, with one or more rare sires; 3) randomly mixed paternity (two or more sires); and 4) a distinct switch in paternity occurring along the egg string. The latter pattern cannot be explained by a random use of stored sperm, and suggests postcopulatory female sperm choice. Collectively the data indicate multiple levels of male- and female-controlled influences on sperm precedence, and highlight squid as interesting models to study the interplay between sexual and natural selection. PMID:26872354

  15. Developmental stage related patterns of codon usage and genomic GC content: searching for evolutionary fingerprints with models of stem cell differentiation

    PubMed Central

    2007-01-01

    Background The usage of synonymous codons shows considerable variation among mammalian genes. How and why this usage is non-random are fundamental biological questions and remain controversial. It is also important to explore whether mammalian genes that are selectively expressed at different developmental stages bear different molecular features. Results In two models of mouse stem cell differentiation, we established correlations between codon usage and the patterns of gene expression. We found that the optimal codons exhibited variation (AT- or GC-ending codons) in different cell types within the developmental hierarchy. We also found that genes that were enriched (developmental-pivotal genes) or specifically expressed (developmental-specific genes) at different developmental stages had different patterns of codon usage and local genomic GC (GCg) content. Moreover, at the same developmental stage, developmental-specific genes generally used more GC-ending codons and had higher GCg content compared with developmental-pivotal genes. Further analyses suggest that the model of translational selection might be consistent with the developmental stage-related patterns of codon usage, especially for the AT-ending optimal codons. In addition, our data show that after human-mouse divergence, the influence of selective constraints is still detectable. Conclusion Our findings suggest that developmental stage-related patterns of gene expression are correlated with codon usage (GC3) and GCg content in stem cell hierarchies. Moreover, this paper provides evidence for the influence of natural selection at synonymous sites in the mouse genome and novel clues for linking the molecular features of genes to their patterns of expression during mammalian ontogenesis. PMID:17349061
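
    One of the quantities discussed above, GC content at third codon positions (GC3), is straightforward to compute from a coding sequence; the short function below is a minimal sketch using an invented example sequence.

```python
# Simple illustration of the codon-usage quantity discussed above: GC content at
# third (synonymous) codon positions (GC3) of a coding sequence.  The sequence is invented.
def gc3(cds):
    """Fraction of G or C at the third position of each complete codon."""
    third = cds.upper()[2::3]                      # every third base, starting at position 3
    usable = [b for b in third if b in "ACGT"]
    return sum(b in "GC" for b in usable) / len(usable) if usable else float("nan")

print(round(gc3("ATGGCCGTAAAGCTGTTTCCC"), 2))      # 5 of 7 third positions are G/C -> 0.71
```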

  16. The Use of Compressive Sensing to Reconstruct Radiation Characteristics of Wide-Band Antennas from Sparse Measurements

    DTIC Science & Technology

    2015-06-01

    of uniform- versus nonuniform -pattern reconstruction, of transform function used, and of minimum randomly distributed measurements needed to...the radiation-frequency pattern’s reconstruction using uniform and nonuniform randomly distributed samples even though the pattern error manifests...5 Fig. 3 The nonuniform compressive-sensing reconstruction of the radiation

  17. Influence of Schizotypy on Responding and Contingency Awareness on Free-Operant Schedules of Reinforcement

    ERIC Educational Resources Information Center

    Randell, Jordan; Searle, Rob; Reed, Phil

    2012-01-01

    Schedules of reinforcement typically produce reliable patterns of behaviour, and one factor that can cause deviations from these normally reliable patterns is schizotypy. Low scorers on the unusual experiences subscale of the Oxford-Liverpool Inventory of Feelings and Experiences performed as expected on a yoked random-ratio (RR), random-interval…

  18. Speckle lithography for fabricating Gaussian, quasi-random 2D structures and black silicon structures

    PubMed Central

    Bingi, Jayachandra; Murukeshan, Vadakke Matham

    2015-01-01

    Laser speckle is a granular intensity pattern formed by the interference of random coherent wavelets and is generally considered noise in optical systems, including photolithography. Contrary to this, in this paper, we use the speckle pattern to generate predictable and controlled Gaussian random structures and quasi-random structures photo-lithographically. The random structures made using this proposed speckle lithography technique are quantified based on speckle statistics, radial distribution function (RDF) and fast Fourier transform (FFT). The control over the speckle size, density and speckle clustering facilitates the successful fabrication of black silicon with different surface structures. The controllability and tunability of randomness makes this technique a robust method for fabricating predictable 2D Gaussian random structures and black silicon structures. These structures can enhance light trapping significantly in solar cells and hence enable improved energy harvesting. Further, this technique can enable efficient fabrication of disordered photonic structures and random media based devices. PMID:26679513

  19. Autonomous change of behavior for environmental context: An intermittent search model with misunderstanding search pattern

    NASA Astrophysics Data System (ADS)

    Murakami, Hisashi; Gunji, Yukio-Pegio

    2017-07-01

    Although foraging patterns have long been predicted to adapt optimally to environmental conditions, supporting empirical evidence has emerged only in recent years. This evidence suggests that the search strategy of animals is open to change so that animals can flexibly respond to their environment. In this study, we began with a simple computational model that possesses the principal features of an intermittent strategy, i.e., careful local searches separated by longer steps as a mechanism for relocation, where an agent in the model follows a rule to switch between the two phases but may misunderstand this rule, i.e., the agent follows an ambiguous switching rule. Thanks to this ambiguity, the agent's foraging strategy can continuously change. First, we demonstrate that our model can exhibit an optimal change of strategy from Brownian-type to Lévy-type depending on the prey density, and we investigate the distribution of time intervals for switching between the phases. Moreover, we show that the model can display higher search efficiency than a correlated random walk.
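
    A minimal sketch of an intermittent two-phase walker is given below: a strongly turning local-search phase alternates with nearly straight relocation steps, and the phase switches stochastically. The switching probability and step lengths are illustrative assumptions, not the rule used in the cited model.

```python
# Toy intermittent-search walker: slow, diffusive "local search" steps alternate with
# long, nearly ballistic "relocation" steps; switching is stochastic (illustrative only).
import random, math

def intermittent_walk(n_steps, p_switch=0.05, local_step=0.1, relocate_step=3.0):
    x = y = 0.0
    phase = "local"
    heading = random.uniform(0, 2 * math.pi)
    path = [(x, y)]
    for _ in range(n_steps):
        if random.random() < p_switch:              # stochastic switching between phases
            phase = "relocate" if phase == "local" else "local"
            heading = random.uniform(0, 2 * math.pi)
        if phase == "local":
            heading += random.gauss(0, 1.0)         # strong turning -> Brownian-like search
            step = local_step
        else:
            heading += random.gauss(0, 0.1)         # nearly straight relocation
            step = relocate_step
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path.append((x, y))
    return path

path = intermittent_walk(10_000)
print("net displacement:", round(math.hypot(*path[-1]), 1))
```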

  20. Integrating association data and disease dynamics: an illustration using African Buffalo in Kruger National Park

    USGS Publications Warehouse

    Cross, Paul C.; James O, Lloyd-Smith; Bowers, Justin A.; Hay, Craig T.; Hofmeyr, Markus; Getz, Wayne M.

    2004-01-01

    Recognition is a prerequisite for non-random association amongst individuals. We explore how non-random association patterns (i.e. who spends time with whom) affect disease dynamics. We estimated the amount of time individuals spent together per month using radio-tracking data from African buffalo and incorporated these data into a dynamic social network model. The dynamic nature of the network has a strong influence on simulated disease dynamics particularly for diseases with shorter infectious periods. Cluster analyses of the association data demonstrated that buffalo herds were not as well defined as previously thought. Associations were more tightly clustered in 2002 than 2003, perhaps due to drier conditions in 2003. As a result, diseases may spread faster during drought conditions due to increased population mixing. Association data are often collected but this is the first use of empirical data in a network disease model in a wildlife population.

  1. Cinematic Operation of the Cerebral Cortex Interpreted via Critical Transitions in Self-Organized Dynamic Systems

    PubMed Central

    Kozma, Robert; Freeman, Walter J.

    2017-01-01

    Measurements of local field potentials over the cortical surface and the scalp of animals and human subjects reveal intermittent bursts of beta and gamma oscillations. During the bursts, narrow-band metastable amplitude modulation (AM) patterns emerge for a fraction of a second and ultimately dissolve into the broad-band random background activity. The burst process depends on previously learnt conditioned stimuli (CS); thus different AM patterns may emerge in response to different CS. This observation leads to our cinematic theory of cognition, in which perception happens in discrete steps manifested in the sequence of AM patterns. Our article summarizes findings from the past decades on experimental evidence for the cinematic theory of cognition and relevant mathematical models. We treat cortices as dissipative systems that self-organize themselves near a critical level of activity that is a non-equilibrium metastable state. Criticality is arguably a key aspect of brains in their rapid adaptation, reconfiguration, high storage capacity, and sensitive response to external stimuli. Self-organized criticality (SOC) became an important concept to describe neural systems. We argue that transitions from one AM pattern to the other require the concept of phase transitions, extending beyond the dynamics described by SOC. We employ random graph theory (RGT) and percolation dynamics as fundamental mathematical approaches to model fluctuations in the cortical tissue. Our results indicate that perceptions are formed through a phase transition from a disorganized (high entropy) to a well-organized (low entropy) state, which explains the swiftness of the emergence of the perceptual experience in response to learned stimuli. PMID:28352218

  2. Smoking patterns and stimulus control in intermittent and daily smokers.

    PubMed

    Shiffman, Saul; Dunbar, Michael S; Li, Xiaoxue; Scholl, Sarah M; Tindle, Hilary A; Anderson, Stewart J; Ferguson, Stuart G

    2014-01-01

    Intermittent smokers (ITS) - who smoke less than daily - comprise an increasing proportion of adult smokers. Their smoking patterns challenge theoretical models of smoking motivation, which emphasize regular and frequent smoking to maintain nicotine levels and avoid withdrawal, yet these patterns have gone largely unexamined. We characterized smoking patterns among 212 ITS (smoking 4-27 days per month) compared to 194 daily smokers (DS; smoking 5-30 cigarettes daily) who monitored situational antecedents of smoking using ecological momentary assessment. Subjects recorded each cigarette on an electronic diary, and situational variables were assessed in a random subset (n=21,539 smoking episodes); parallel assessments were obtained by beeping subjects at random when they were not smoking (n=26,930 non-smoking occasions). Compared to DS, ITS' smoking was more strongly associated with being away from home, being in a bar, drinking alcohol, socializing, being with friends and acquaintances, and when others were smoking. Mood had only modest effects in either group. DS' and ITS' smoking were substantially and equally suppressed by smoking restrictions, although ITS more often cited self-imposed restrictions. ITS' smoking was consistently more associated with environmental cues and contexts, especially those associated with positive or "indulgent" smoking situations. Stimulus control may be an important influence in maintaining smoking and making quitting difficult among ITS.

  3. Association and Host Selectivity in Multi-Host Pathogens

    PubMed Central

    Malpica, José M.; Sacristán, Soledad; Fraile, Aurora; García-Arenal, Fernando

    2006-01-01

    The distribution of multi-host pathogens over their host range conditions their population dynamics and structure. Also, host co-infection by different pathogens may have important consequences for the evolution of hosts and pathogens, and host-pathogen co-evolution. Hence it is of interest to know if the distribution of pathogens over their host range is random, or if there are associations between hosts and pathogens, or between pathogens sharing a host. To analyse these issues we propose indices for the observed patterns of host infection by pathogens, and for the observed patterns of co-infection, and tests to analyse if these patterns conform to randomness or reflect associations. Applying these tests to the prevalence of five plant viruses on 21 wild plant species evidenced host-virus associations: most hosts and viruses were selective for viruses and hosts, respectively. Interestingly, the more host-selective viruses were the more prevalent ones, suggesting that host specialisation is a successful strategy for multi-host pathogens. Analyses also showed that viruses tended to associate positively in co-infected hosts. The developed indices and tests provide the tools to analyse how strong and common are these associations among different groups of pathogens, which will help to understand and model the population biology of multi-host pathogens. PMID:17183670
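
    The abstract's idea of testing observed infection patterns against randomness can be illustrated with a simple permutation (null-model) test that shuffles virus records across hosts while preserving the margins. The count matrix and the chi-square-like index below are invented for illustration and are not the indices proposed in the paper.

```python
# Hedged sketch of a permutation test for non-random host-virus association:
# reassign infections at random (keeping host and virus totals fixed) and compare an
# association index against its null distribution.  Data and index are illustrative.
import numpy as np

rng = np.random.default_rng(0)
# rows = host species, columns = viruses; entries = number of infected plants sampled
counts = rng.poisson(lam=[[5, 1, 0], [1, 4, 2], [0, 2, 6], [3, 3, 3]])

def association_index(m):
    # chi-square-like departure from the independence expectation
    expected = m.sum(1, keepdims=True) * m.sum(0, keepdims=True) / m.sum()
    return ((m - expected) ** 2 / np.where(expected == 0, 1, expected)).sum()

obs = association_index(counts)
flat_hosts = np.repeat(np.arange(counts.shape[0]), counts.sum(1))
flat_virus = np.repeat(np.arange(counts.shape[1]), counts.sum(0))
null = []
for _ in range(2000):
    perm = rng.permutation(flat_virus)            # randomly reassign virus labels
    m = np.zeros_like(counts)
    np.add.at(m, (flat_hosts, perm), 1)
    null.append(association_index(m))
p_value = (np.sum(np.array(null) >= obs) + 1) / (len(null) + 1)
print(f"observed index {obs:.1f}, permutation p-value {p_value:.3f}")
```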

  4. Non-random nature of spontaneous mIPSCs in mouse auditory brainstem neurons revealed by recurrence quantification analysis

    PubMed Central

    Leao, Richardson N; Leao, Fabricio N; Walmsley, Bruce

    2005-01-01

    A change in the spontaneous release of neurotransmitter is a useful indicator of processes occurring within presynaptic terminals. Linear techniques (e.g. Fourier transform) have been used to analyse spontaneous synaptic events in previous studies, but such methods are inappropriate if the timing pattern is complex. We have investigated spontaneous glycinergic miniature synaptic currents (mIPSCs) in principal cells of the medial nucleus of the trapezoid body. The random versus deterministic (or periodic) nature of mIPSCs was assessed using recurrence quantification analysis. Nonlinear methods were then used to quantify any detected determinism in spontaneous release, and to test for chaotic or fractal patterns. Modelling demonstrated that this procedure is much more sensitive in detecting periodicities than conventional techniques. mIPSCs were found to exhibit periodicities that were abolished by blockade of internal calcium stores with ryanodine, suggesting calcium oscillations in the presynaptic inhibitory terminals. Analysis indicated that mIPSC occurrences were chaotic in nature. Furthermore, periodicities were less evident in congenitally deaf mice than in normal mice, indicating that appropriate neural activity during development is necessary for the expression of deterministic chaos in mIPSC patterns. We suggest that chaotic oscillations of mIPSC occurrences play a physiological role in signal processing in the auditory brainstem. PMID:16271982
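
    A minimal sketch of recurrence quantification applied to a series of inter-event intervals is shown below: it builds a thresholded recurrence matrix and reports the recurrence rate and a crude determinism measure. The synthetic interval series, the threshold choice, and the absence of delay embedding are simplifying assumptions.

```python
# Minimal recurrence-quantification sketch: recurrence plot of inter-event intervals,
# recurrence rate, and the fraction of recurrent points lying on diagonal lines (>= 2).
import numpy as np

rng = np.random.default_rng(1)
intervals = rng.exponential(1.0, 500) + 0.3 * np.sin(np.arange(500) / 5)  # weakly periodic toy series

eps = 0.2 * intervals.std()                               # recurrence threshold (assumption)
R = (np.abs(intervals[:, None] - intervals[None, :]) < eps).astype(int)
np.fill_diagonal(R, 0)

recurrence_rate = R.sum() / (R.size - len(R))

# recurrent points that have a recurrent diagonal neighbour (diagonal line length >= 2)
diag_neighbour = np.zeros_like(R)
diag_neighbour[:-1, :-1] |= R[1:, 1:]
diag_neighbour[1:, 1:] |= R[:-1, :-1]
determinism = (R & diag_neighbour).sum() / max(R.sum(), 1)

print(f"recurrence rate {recurrence_rate:.3f}, determinism {determinism:.3f}")
```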

  5. A Healthy Dietary Pattern Reduces Lung Cancer Risk: A Systematic Review and Meta-Analysis.

    PubMed

    Sun, Yanlai; Li, Zhenxiang; Li, Jianning; Li, Zengjun; Han, Jianjun

    2016-03-04

    Diet and nutrients play an important role in cancer development and progress; a healthy dietary pattern has been found to be associated with several types of cancer. However, the association between a healthy eating pattern and lung cancer risk is still unclear. Therefore, we conducted a systematic review with meta-analysis to evaluate whether a healthy eating pattern might reduce lung cancer risk. We identified relevant studies from the PubMed and Embase databases up to October 2015, and the relative risks were extracted and combined by the fixed-effects model when no substantial heterogeneity was observed; otherwise, the random-effects model was employed. Subgroup and publication bias analyses were also performed. Finally, eight observational studies were included in the meta-analysis. The pooled relative risk of lung cancer for the highest vs. lowest category of healthy dietary pattern was 0.81 (95% confidence interval, CI: 0.75-0.86), and no significant heterogeneity was detected. The relative risks (RRs) for non-smokers, former smokers and current smokers were 0.89 (95% CI: 0.63-1.27), 0.74 (95% CI: 0.62-0.89) and 0.86 (95% CI: 0.79-0.93), respectively. The results remained stable in subgroup analyses by other confounders and sensitivity analysis. The results of our meta-analysis suggest that a healthy dietary pattern is associated with a lower lung cancer risk, and they provide more beneficial evidence for changing the diet pattern in the general population.
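
    The random-effects pooling referred to above can be sketched with the standard DerSimonian-Laird estimator applied to log relative risks; the study values below are made up for illustration and are not the studies included in the review.

```python
# DerSimonian-Laird random-effects pooling of log relative risks (illustrative inputs).
import math

# (RR, lower 95% CI, upper 95% CI) for a few hypothetical studies
studies = [(0.78, 0.65, 0.94), (0.85, 0.71, 1.02), (0.74, 0.60, 0.91), (0.90, 0.76, 1.07)]

log_rr = [math.log(rr) for rr, lo, hi in studies]
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for rr, lo, hi in studies]
w = [1 / s**2 for s in se]                                    # inverse-variance (fixed-effect) weights

fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_rr))  # Cochran heterogeneity statistic
df = len(studies) - 1
tau2 = max(0.0, (Q - df) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))

w_re = [1 / (s**2 + tau2) for s in se]                        # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_re, log_rr)) / sum(w_re)
se_pooled = math.sqrt(1 / sum(w_re))
print(f"pooled RR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se_pooled):.2f}-{math.exp(pooled + 1.96 * se_pooled):.2f})")
```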

  6. Use of health insurance claim patterns to identify patients using nonsteroidal anti-inflammatory drugs for rheumatoid arthritis.

    PubMed

    Bernard, Marie-Agnès; Bénichou, Jacques; Blin, Patrick; Weill, Alain; Bégaud, Bernard; Abouelfath, Abdelilah; Moore, Nicholas; Fourrier-Réglat, Annie

    2012-06-01

    To determine healthcare claim patterns associated with the use of nonsteroidal anti-inflammatory drugs (NSAIDs) for rheumatoid arthritis (RA). The CADEUS study randomly identified NSAID users within the French health insurance database. One-year claims data were extracted, and NSAID indication was obtained from prescribers. Logistic regression was used in a development sample to identify claim patterns predictive of RA, and the models were applied to a validation sample. Analyses were stratified on the dispensation of immunosuppressive agents or specific antirheumatism treatment, and the area under the receiver operating characteristic curve was used to estimate discriminant power. NSAID indication was provided for 26,259 of the 45,217 patients included in the CADEUS cohort; it was RA for 956 patients. Two models were constructed using the development sample (n = 13,143), stratifying on the dispensation of an immunosuppressive agent or specific antirheumatism treatment. Discriminant power was high for both models (AUC > 0.80) and was not statistically different from that found when applied to the validation sample (n = 13,116). The models derived from this study may help to identify patients prescribed NSAIDs who are likely to have RA in claims databases without medical data such as treatment indication. Copyright © 2012 John Wiley & Sons, Ltd.

  7. On optimal current patterns for electrical impedance tomography.

    PubMed

    Demidenko, Eugene; Hartov, Alex; Soni, Nirmal; Paulsen, Keith D

    2005-02-01

    We develop a statistical criterion for optimal patterns in planar circular electrical impedance tomography. These patterns minimize the total variance of the estimation for the resistance or conductance matrix. It is shown that trigonometric patterns (Isaacson, 1986), originally derived from the concept of distinguishability, are a special case of our optimal statistical patterns. New optimal random patterns are introduced. Recovering the electrical properties of the measured body is greatly simplified when optimal patterns are used. The Neumann-to-Dirichlet map and the optimal patterns are derived for a homogeneous medium with an arbitrary distribution of the electrodes on the periphery. As a special case, optimal patterns are developed for a practical EIT system with a finite number of electrodes. For a general nonhomogeneous medium, with no a priori restriction, the optimal patterns for the resistance and conductance matrix are the same. However, for a homogeneous medium, the best current pattern is the worst voltage pattern and vice versa. We study the effect of the number and the width of the electrodes on the estimate of resistivity and conductivity in a homogeneous medium. We confirm experimentally that the optimal patterns produce minimum conductivity variance in a homogeneous medium. Our statistical model is able to discriminate between a homogenous agar phantom and one with a 2 mm air hole with error probability (p-value) 1/1000.
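
    The trigonometric current patterns mentioned in the abstract are easy to write down for a ring of L electrodes at angles theta_l = 2*pi*l/L; the snippet below only generates those L-1 orthogonal patterns and does not reproduce the statistical optimality analysis.

```python
# Generate the classical trigonometric current patterns for a circular EIT electrode ring.
# The electrode count is an arbitrary illustrative choice.
import numpy as np

L = 16                                        # number of electrodes (illustrative)
theta = 2 * np.pi * np.arange(L) / L

patterns = []
for k in range(1, L // 2 + 1):
    patterns.append(np.cos(k * theta))        # cosine pattern of spatial frequency k
    if k < L // 2:
        patterns.append(np.sin(k * theta))    # sine pattern (omitted at k = L/2, where it vanishes)
patterns = np.array(patterns)                 # L-1 linearly independent current patterns

print(patterns.shape)                         # (15, 16)
print(np.allclose(patterns.sum(axis=1), 0))   # each pattern injects zero net current
```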

  8. Application of Fractal theory for crash rate prediction: Insights from random parameters and latent class tobit models.

    PubMed

    Chand, Sai; Dixit, Vinayak V

    2018-03-01

    The repercussions from congestion and accidents on major highways can have significant negative impacts on the economy and environment. It is a primary objective of transport authorities to minimize the likelihood of these phenomena taking place, to improve safety and overall network performance. In this study, we use the Hurst Exponent metric from Fractal Theory as a congestion indicator for crash-rate modeling. We analyze one month of traffic speed data at several monitor sites along the M4 motorway in Sydney, Australia and assess congestion patterns with the Hurst Exponent of speed (H_speed). Random Parameters and Latent Class Tobit models were estimated to examine the effect of congestion on historical crash rates, while accounting for unobserved heterogeneity. Using a latent class modeling approach, the motorway sections were probabilistically classified into two segments, based on the presence of entry and exit ramps. This will allow transportation agencies to implement appropriate safety/traffic countermeasures when addressing accident hotspots or inadequately managed sections of motorway. Copyright © 2017 Elsevier Ltd. All rights reserved.
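
    The Hurst exponent used as a congestion indicator can be estimated with classical rescaled-range (R/S) analysis; the sketch below applies it to a synthetic speed series, and the window sizes and series are illustrative assumptions rather than the study's data.

```python
# Rescaled-range (R/S) estimate of the Hurst exponent of a (synthetic) speed series.
import numpy as np

rng = np.random.default_rng(2)
speed = 100 - 0.1 * np.cumsum(rng.normal(0, 1, 4096))        # toy speed series

def rescaled_range(x):
    y = np.cumsum(x - x.mean())
    r = y.max() - y.min()
    s = x.std()
    return r / s if s > 0 else np.nan

sizes = [16, 32, 64, 128, 256, 512]                          # window sizes (assumption)
rs = []
for n in sizes:
    chunks = [speed[i:i + n] for i in range(0, len(speed) - n + 1, n)]
    rs.append(np.nanmean([rescaled_range(c) for c in chunks]))

H, _ = np.polyfit(np.log(sizes), np.log(rs), 1)              # slope of log(R/S) vs log(window size)
print(f"estimated Hurst exponent: {H:.2f}")
```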

  9. Modeling and Simulation of Upset-Inducing Disturbances for Digital Systems in an Electromagnetic Reverberation Chamber

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report describes a modeling and simulation approach for disturbance patterns representative of the environment experienced by a digital system in an electromagnetic reverberation chamber. The disturbance is modeled by a multi-variate statistical distribution based on empirical observations. Extended versions of the Rejection Sampling and Inverse Transform Sampling techniques are developed to generate multi-variate random samples of the disturbance. The results show that Inverse Transform Sampling returns samples with higher fidelity relative to the empirical distribution. This work is part of an ongoing effort to develop a resilience assessment methodology for complex safety-critical distributed systems.
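
    The two sampling techniques named in the abstract can be sketched in their basic one-dimensional form as below; the multi-variate extensions developed in the report and the actual disturbance measurements are not reproduced, and the gamma-distributed stand-in data are an assumption.

```python
# One-dimensional sketches of inverse transform sampling and rejection sampling from an
# empirical distribution.  The "observed" data are a synthetic stand-in.
import numpy as np

rng = np.random.default_rng(3)
observed = rng.gamma(shape=2.0, scale=1.5, size=10_000)      # stand-in for empirical amplitudes

def inverse_transform_sample(data, n):
    """Map uniform draws through the empirical quantile function."""
    sorted_data = np.sort(data)
    u = rng.uniform(0, 1, n)
    return np.interp(u, np.linspace(0, 1, len(sorted_data)), sorted_data)

def rejection_sample(data, n, bins=100):
    """Accept/reject uniform proposals against a histogram density estimate."""
    hist, edges = np.histogram(data, bins=bins, density=True)
    f_max = hist.max()
    out = []
    while len(out) < n:
        x = rng.uniform(edges[0], edges[-1])
        idx = min(max(np.searchsorted(edges, x) - 1, 0), bins - 1)
        if rng.uniform(0, f_max) < hist[idx]:
            out.append(x)
    return np.array(out)

print(round(np.mean(inverse_transform_sample(observed, 5000)), 2),
      round(np.mean(rejection_sample(observed, 5000)), 2))
```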

  10. Computational wavelength resolution for in-line lensless holography: phase-coded diffraction patterns and wavefront group-sparsity

    NASA Astrophysics Data System (ADS)

    Katkovnik, Vladimir; Shevkunov, Igor; Petrov, Nikolay V.; Egiazarian, Karen

    2017-06-01

    In-line lensless holography is considered with a random phase modulation at the object plane. The forward wavefront propagation is modelled using the Fourier transform with the angular spectrum transfer function. The multiple intensities (holograms) recorded by the sensor are random due to the random phase modulation and are noisy, with a Poissonian noise distribution. It is shown by computational experiments that high-accuracy reconstructions can be achieved with resolution going up to two-thirds of the wavelength. Relative to the sensor pixel size, this is super-resolution by a factor of 32. The algorithm designed for optimal super-resolution phase/amplitude reconstruction from Poissonian data is based on the general methodology developed for phase retrieval with pixel-wise resolution in V. Katkovnik, "Phase retrieval from noisy data based on sparse approximation of object phase and amplitude", http://www.cs.tut.fi/lasip/DDT/index3.html.

  11. Dimensional Reduction for the General Markov Model on Phylogenetic Trees.

    PubMed

    Sumner, Jeremy G

    2017-03-01

    We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.

  12. Surface plasmon enhanced cell microscopy with blocked random spatial activation

    NASA Astrophysics Data System (ADS)

    Son, Taehwang; Oh, Youngjin; Lee, Wonju; Yang, Heejin; Kim, Donghyun

    2016-03-01

    We present surface plasmon enhanced fluorescence microscopy with random spatial sampling using patterned block of silver nanoislands. Rigorous coupled wave analysis was performed to confirm near-field localization on nanoislands. Random nanoislands were fabricated in silver by temperature annealing. By analyzing random near-field distribution, average size of localized fields was found to be on the order of 135 nm. Randomly localized near-fields were used to spatially sample F-actin of J774 cells (mouse macrophage cell-line). Image deconvolution algorithm based on linear imaging theory was established for stochastic estimation of fluorescent molecular distribution. The alignment between near-field distribution and raw image was performed by the patterned block. The achieved resolution is dependent upon factors including the size of localized fields and estimated to be 100-150 nm.

  13. Network properties of interstitial cells of Cajal affect intestinal pacemaker activity and motor patterns, according to a mathematical model of weakly coupled oscillators.

    PubMed

    Wei, Ruihan; Parsons, Sean P; Huizinga, Jan D

    2017-03-01

    What is the central question of this study? What are the effects of interstitial cells of Cajal (ICC) network perturbations on intestinal pacemaker activity and motor patterns? What is the main finding and its importance? Two-dimensional modelling of the ICC pacemaker activity according to a phase model of weakly coupled oscillators showed that network properties (coupling strength between oscillators, frequency gradient and frequency noise) strongly influence pacemaker network activity and subsequent motor patterns. The model explains motor patterns observed in physiological conditions and provides predictions and testable hypotheses for effects of ICC loss and frequency modulation on the motor patterns. Interstitial cells of Cajal (ICC) are the pacemaker cells of gut motility and are associated with motility disorders. Interstitial cells of Cajal form a network, but the contributions of its network properties to gut physiology and dysfunction are poorly understood. We modelled an ICC network as a two-dimensional network of weakly coupled oscillators with a frequency gradient and showed changes over time in video and graphical formats. Model parameters were obtained from slow-wave-driven contraction patterns in the mouse intestine and pacemaker slow-wave activities from the cat intestine. Marked changes in propagating oscillation patterns (including changes from propagation to non-propagating) were observed by changing network parameters (coupling strength between oscillators, the frequency gradient and frequency noise), which affected synchronization, propagation velocity and occurrence of dislocations (termination of an oscillation). Complete uncoupling of a circumferential ring of oscillators caused the proximal and distal section to desynchronize, but complete synchronization was maintained with only a single oscillator connecting the sections with high enough coupling. The network of oscillators could withstand loss; even with 40% of oscillators lost randomly within the network, significant synchronization and anterograde propagation remained. A local increase in pacemaker frequency diminished anterograde propagation; the effects were strongly dependent on location, frequency gradient and coupling strength. In summary, the model puts forth the hypothesis that fundamental changes in oscillation patterns (ICC slow-wave activity or circular muscle contractions) can occur through physiological modulation of network properties. Strong evidence is provided to accept the ICC network as a system of coupled oscillators. © 2016 The Authors. Experimental Physiology © 2016 The Physiological Society.
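
    The weakly coupled oscillator idea can be illustrated with a one-dimensional chain of phase oscillators having a proximal-to-distal frequency gradient, frequency noise, and nearest-neighbour sine coupling. The 1-D geometry, parameter values, and read-out below are illustrative simplifications of the two-dimensional model described in the abstract.

```python
# 1-D chain of weakly coupled phase oscillators with a frequency gradient.
# Plateaus in the effective frequencies indicate frequency-locked segments; jumps
# between plateaus correspond to dislocations.  All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n, dt = 60, 0.005
freq = np.linspace(2.0, 1.5, n) * 2 * np.pi + rng.normal(0, 0.05, n)  # gradient + frequency noise
K = 3.0                                                               # coupling strength
theta = rng.uniform(0, 2 * np.pi, n)

def step(theta):
    coupling = np.zeros(n)
    coupling[1:] += np.sin(theta[:-1] - theta[1:])    # pull from proximal neighbour
    coupling[:-1] += np.sin(theta[1:] - theta[:-1])   # pull from distal neighbour
    return theta + dt * (freq + K * coupling)

for _ in range(20000):                                # discard transient
    theta = step(theta)
start = theta.copy()
for _ in range(20000):                                # measurement window
    theta = step(theta)

eff_freq = (theta - start) / (20000 * dt) / (2 * np.pi)
print(np.round(eff_freq, 2))
```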

  14. Non-Deterministic Modelling of Food-Web Dynamics

    PubMed Central

    Planque, Benjamin; Lindstrøm, Ulf; Subbey, Sam

    2014-01-01

    A novel approach to model food-web dynamics, based on a combination of chance (randomness) and necessity (system constraints), was presented by Mullon et al. in 2009. Based on simulations for the Benguela ecosystem, they concluded that observed patterns of ecosystem variability may simply result from basic structural constraints within which the ecosystem functions. To date, and despite the importance of these conclusions, this work has received little attention. The objective of the present paper is to replicate this original model and evaluate the conclusions that were derived from its simulations. For this purpose, we revisit the equations and input parameters that form the structure of the original model and implement a comparable simulation model. We restate the model principles and provide a detailed account of the model structure, equations, and parameters. Our model can reproduce several ecosystem dynamic patterns: pseudo-cycles, variation and volatility, diet, stock-recruitment relationships, and correlations between species biomass series. The original conclusions are supported to a large extent by the current replication of the model. Model parameterisation and computational aspects remain difficult and these need to be investigated further. Hopefully, the present contribution will make this approach available to a larger research community and will promote the use of non-deterministic-network-dynamics models as ‘null models of food-webs’ as originally advocated. PMID:25299245

  15. Thin films with disordered nanohole patterns for solar radiation absorbers

    NASA Astrophysics Data System (ADS)

    Fang, Xing; Lou, Minhan; Bao, Hua; Zhao, C. Y.

    2015-06-01

    The radiation absorption in thin films with three types of disordered nanohole patterns, i.e., random position, non-uniform radius, and amorphous pattern, is numerically investigated by finite-difference time-domain (FDTD) simulations. Disorder can alter the absorption spectra and has an impact on the broadband absorption performance. Compared to random-position and non-uniform-radius nanoholes, the amorphous pattern induces much better integrated absorption. The power density spectra indicate that amorphous-pattern nanoholes reduce the symmetry and provide more resonance modes, which are desirable for broadband absorption. The application conditions for amorphous-pattern nanoholes show that they are most appropriate for absorption enhancement in weakly absorbing materials. Amorphous silicon thin films with disordered nanohole patterns are applied in solar radiation absorbers. Four configurations of thin films with different nanohole patterns show that interference between layers in the absorbers changes the absorption performance. Therefore, it is necessary to optimize the whole radiation absorber, even though a single thin film with amorphous-pattern nanoholes has already reached optimal absorption.

  16. Determination of the stacking fault density in highly defective single GaAs nanowires by means of coherent diffraction imaging

    NASA Astrophysics Data System (ADS)

    Davtyan, Arman; Biermanns, Andreas; Loffeld, Otmar; Pietsch, Ullrich

    2016-06-01

    Coherent x-ray diffraction imaging is used to measure diffraction patterns from individual highly defective nanowires, showing a complex speckle pattern instead of well-defined Bragg peaks. The approach is tested for nanowires of 500 nm diameter and 500 nm height predominantly composed of zinc-blende (ZB) and twinned zinc-blende (TZB) phase domains. Phase retrieval is used to reconstruct the measured 2-dimensional intensity patterns recorded from single nanowires with 3.48 nm and 0.98 nm spatial resolution. Whereas the speckle amplitudes and distribution are perfectly reconstructed, no unique solution could be obtained for the phase structure. The number of phase switches is found to be proportional to the number of measured speckles and follows a narrow number distribution. Using data with 0.98 nm spatial resolution, the mean number of phase switches is in reasonable agreement with estimates taken from TEM. However, since the resolved phase domain is still 3-4 times larger than a single GaAs bilayer, we explain the ambiguity of the phase reconstruction by the fact that, depending on the starting phase and the sequence of subroutines used during phase retrieval, the retrieved phase domain hosts a different sequence of randomly stacked bilayers. Modelling possible arrangements of bilayer sequences within a phase domain demonstrates that the complex speckle patterns measured can indeed be explained by the random arrangement of the ZB and TZB phase domains.

  17. Metal1 patterning study for random-logic applications with 193i, using calibrated OPC for litho and etch

    NASA Astrophysics Data System (ADS)

    Mailfert, Julien; Van de Kerkhove, Jeroen; De Bisschop, Peter; De Meyer, Kristin

    2014-03-01

    A Metal1-layer (M1) patterning study is conducted on 20nm node (N20) for random-logic applications. We quantified the printability performance on our test vehicle for N20, corresponding to Poly/M1 pitches of 90/64nm, and with a selected minimum M1 gap size of 70nm. The Metal1 layer is patterned with 193nm immersion lithography (193i) using Negative Tone Developer (NTD) resist, and a double-patterning Litho-Etch-Litho-Etch (LELE) process. Our study is based on Logic test blocks that we OPCed with a combination of calibrated models for litho and for etch. We report the Overlapping Process Window (OPW), based on a selection of test structures measured after-etch. We find that most of the OPW limiting structures are EOL (End-of-Line) configurations. Further analysis of these individual OPW limiters will reveal that they belong to different types, such as Resist 3D (R3D) and Mask 3D (M3D) sensitive structures, limiters related to OPC (Optical Proximity Corrections) options such as assist placement, or the choice of CD metrics and tolerances for calculation of the process windows itself. To guide this investigation, we will consider a `reference OPC' case to be compared with other solutions. In addition, rigorous simulations and OPC verifications will complete the after-etch measurements to help us to validate our experimental findings.

  18. Coupling Poisson rectangular pulse and multiplicative microcanonical random cascade models to generate sub-daily precipitation timeseries

    NASA Astrophysics Data System (ADS)

    Pohle, Ina; Niebisch, Michael; Müller, Hannes; Schümberg, Sabine; Zha, Tingting; Maurer, Thomas; Hinz, Christoph

    2018-07-01

    To simulate the impacts of within-storm rainfall variabilities on fast hydrological processes, long precipitation time series with high temporal resolution are required. Due to limited availability of observed data such time series are typically obtained from stochastic models. However, most existing rainfall models are limited in their ability to conserve rainfall event statistics which are relevant for hydrological processes. Poisson rectangular pulse models are widely applied to generate long time series of alternating precipitation events durations and mean intensities as well as interstorm period durations. Multiplicative microcanonical random cascade (MRC) models are used to disaggregate precipitation time series from coarse to fine temporal resolution. To overcome the inconsistencies between the temporal structure of the Poisson rectangular pulse model and the MRC model, we developed a new coupling approach by introducing two modifications to the MRC model. These modifications comprise (a) a modified cascade model ("constrained cascade") which preserves the event durations generated by the Poisson rectangular model by constraining the first and last interval of a precipitation event to contain precipitation and (b) continuous sigmoid functions of the multiplicative weights to consider the scale-dependency in the disaggregation of precipitation events of different durations. The constrained cascade model was evaluated in its ability to disaggregate observed precipitation events in comparison to existing MRC models. For that, we used a 20-year record of hourly precipitation at six stations across Germany. The constrained cascade model showed a pronounced better agreement with the observed data in terms of both the temporal pattern of the precipitation time series (e.g. the dry and wet spell durations and autocorrelations) and event characteristics (e.g. intra-event intermittency and intensity fluctuation within events). The constrained cascade model also slightly outperformed the other MRC models with respect to the intensity-frequency relationship. To assess the performance of the coupled Poisson rectangular pulse and constrained cascade model, precipitation events were stochastically generated by the Poisson rectangular pulse model and then disaggregated by the constrained cascade model. We found that the coupled model performs satisfactorily in terms of the temporal pattern of the precipitation time series, event characteristics and the intensity-frequency relationship.
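
    The core of a multiplicative microcanonical random cascade, exact mass conservation at every branching, can be sketched in a few lines; the beta-distributed weights, the cascade depth, and the event total below are illustrative, and the paper's "constrained cascade" and sigmoid weight functions are not reproduced.

```python
# Minimal multiplicative microcanonical random cascade: an event total is split
# recursively into halves with random weights (w, 1-w), so mass is conserved exactly.
import numpy as np

rng = np.random.default_rng(6)

def cascade(volume, levels):
    series = np.array([volume])
    for _ in range(levels):
        w = rng.beta(2.0, 2.0, size=len(series))      # random splitting weights in (0, 1)
        left, right = series * w, series * (1 - w)
        series = np.column_stack([left, right]).ravel()
    return series

event_total = 24.0                                     # total event depth, e.g. 24 mm
fine = cascade(event_total, levels=5)                  # 2**5 = 32 sub-intervals
print(len(fine), round(fine.sum(), 6))                 # mass conserved: sums back to 24.0
```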

  19. Syntactic sequencing in Hebbian cell assemblies.

    PubMed

    Wennekers, Thomas; Palm, Günther

    2009-12-01

    Hebbian cell assemblies provide a theoretical framework for the modeling of cognitive processes that grounds them in the underlying physiological neural circuits. Recently we have presented an extension of cell assemblies by operational components which makes it possible to model aspects of language, rules, and complex behaviour. In the present work we study the generation of syntactic sequences using operational cell assemblies timed by unspecific trigger signals. Syntactic patterns are implemented in terms of hetero-associative transition graphs in attractor networks which cause a directed flow of activity through the neural state space. We provide regimes for parameters that enable an unspecific excitatory control signal to switch reliably between attractors in accordance with the implemented syntactic rules. If several target attractors are possible in a given state, noise in the system in conjunction with a winner-takes-all mechanism can randomly choose a target. Disambiguation can also be guided by context signals or specific additional external signals. Given a permanently elevated level of external excitation the model can enter an autonomous mode, where it generates temporal grammatical patterns continuously.

  20. Patterns of crop cover under future climates.

    PubMed

    Porfirio, Luciana L; Newth, David; Harman, Ian N; Finnigan, John J; Cai, Yiyong

    2017-04-01

    We study changes in crop cover under future climate and socio-economic projections. This study is not only organised around the global and regional adaptation or vulnerability to climate change but also includes the influence of projected changes in socio-economic, technological and biophysical drivers, especially regional gross domestic product. The climatic data are obtained from simulations of RCP4.5 and 8.5 by four global circulation models/earth system models from 2000 to 2100. We use Random Forest, an empirical statistical model, to project the future crop cover. Our results show that, at the global scale, increases and decreases in crop cover cancel each other out. Crop cover in the Northern Hemisphere is projected to be impacted more by future climate than in the Southern Hemisphere because of the disparity in the warming rate and precipitation patterns between the two Hemispheres. We found that crop cover in temperate regions is projected to decrease more than in tropical regions. We identified regions of concern and opportunities for climate change adaptation and investment.
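
    The statistical step of the study, mapping climate and socio-economic predictors to crop cover with a Random Forest, can be sketched as below with scikit-learn; the synthetic predictors, response surface, and settings are placeholders, not the study's data or configuration.

```python
# Random-forest regression of fractional crop cover on climate and socio-economic
# predictors, using entirely synthetic data for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000
X = np.column_stack([
    rng.normal(15, 8, n),       # mean annual temperature (deg C)
    rng.gamma(2.0, 400.0, n),   # annual precipitation (mm)
    rng.lognormal(9, 1, n),     # regional GDP per capita (hypothetical units)
])
# toy response: crop cover peaks at mild temperatures and rises with precipitation
crop_cover = np.clip(0.3 + 0.015 * (20 - np.abs(X[:, 0] - 18))
                     + 0.0002 * X[:, 1] + rng.normal(0, 0.08, n), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, crop_cover, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(X_tr, y_tr)
print("held-out R^2:", round(rf.score(X_te, y_te), 2))
print("feature importances (temp, precip, GDP):", np.round(rf.feature_importances_, 2))
```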

  1. Clines in quantitative traits: The role of migration patterns and selection scenarios

    PubMed Central

    Geroldinger, Ludwig; Bürger, Reinhard

    2015-01-01

    The existence, uniqueness, and shape of clines in a quantitative trait under selection toward a spatially varying optimum is studied. The focus is on deterministic diploid two-locus n-deme models subject to various migration patterns and selection scenarios. Migration patterns may exhibit isolation by distance, as in the stepping-stone model, or random dispersal, as in the island model. The phenotypic optimum may change abruptly in a single environmental step, more gradually, or not at all. Symmetry assumptions are imposed on phenotypic optima and migration rates. We study clines in the mean, variance, and linkage disequilibrium (LD). Clines result from polymorphic equilibria. The possible equilibrium configurations are determined as functions of the migration rate. Whereas for weak migration, many polymorphic equilibria may be simultaneously stable, their number decreases with increasing migration rate. Also, for intermediate migration rates polymorphic equilibria are in general not unique; however, for loci of equal effects the corresponding clines in the mean, variance, and LD are unique. For sufficiently strong migration, no polymorphism is maintained. Both migration pattern and selection scenario exert strong influence on the existence and shape of clines. The results for discrete demes are compared with those from models in which space varies continuously and dispersal is modeled by diffusion. Comparisons with previous studies, which investigated clines under neutrality or under linkage equilibrium, are performed. If there is no long-distance migration, the environment does not change abruptly, and linkage is not very tight, populations are almost everywhere close to linkage equilibrium. PMID:25446959

  2. Stochastic tools hidden behind the empirical dielectric relaxation laws

    NASA Astrophysics Data System (ADS)

    Stanislavsky, Aleksander; Weron, Karina

    2017-03-01

    The paper is devoted to recent advances in stochastic modeling of anomalous kinetic processes observed in dielectric materials which are prominent examples of disordered (complex) systems. Theoretical studies of dynamical properties of ‘structures with variations’ (Goldenfield and Kadanoff 1999 Science 284 87-9) require application of such mathematical tools—by means of which their random nature can be analyzed and, independently of the details distinguishing various systems (dipolar materials, glasses, semiconductors, liquid crystals, polymers, etc), the empirical universal kinetic patterns can be derived. We begin with a brief survey of the historical background of the dielectric relaxation study. After a short outline of the theoretical ideas providing the random tools applicable to modeling of relaxation phenomena, we present probabilistic implications for the study of the relaxation-rate distribution models. In the framework of the probability distribution of relaxation rates we consider description of complex systems, in which relaxing entities form random clusters interacting with each other and single entities. Then we focus on stochastic mechanisms of the relaxation phenomenon. We discuss the diffusion approach and its usefulness for understanding of anomalous dynamics of relaxing systems. We also discuss extensions of the diffusive approach to systems under tempered random processes. Useful relationships among different stochastic approaches to the anomalous dynamics of complex systems allow us to get a fresh look at this subject. The paper closes with a final discussion on achievements of stochastic tools describing the anomalous time evolution of complex systems.

  3. Stochastic and Statistical Analysis of Utility Revenues and Weather Data Analysis for Consumer Demand Estimation in Smart Grids

    PubMed Central

    Ali, S. M.; Mehmood, C. A; Khan, B.; Jawad, M.; Farid, U; Jadoon, J. K.; Ali, M.; Tareen, N. K.; Usman, S.; Majid, M.; Anwar, S. M.

    2016-01-01

    In the smart grid paradigm, consumer demands are random and time-dependent, and are best described by stochastic probabilities. The stochastically varying consumer demands have put the policy makers and supplying agencies in a demanding position for optimal generation management. The utility revenue functions are highly dependent on the consumer deterministic and stochastic demand models. Sudden drifts in weather parameters affect the living standards of the consumers, which in turn influence the power demands. Considering the above, we analyzed stochastically and statistically the effect of random consumer demands on the fixed and variable revenues of the electrical utilities. Our work presents a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent random consumer demands. Moreover, the Gaussian probability outcomes for the utility revenues are based on the varying consumer demand data patterns. Furthermore, Standard Monte Carlo (SMC) simulations are performed to validate the accuracy of the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provided a relationship between the dependent variable (demand) and independent variables (weather data) for utility load management, generation control, and network expansion. PMID:27314229

  4. Stochastic and Statistical Analysis of Utility Revenues and Weather Data Analysis for Consumer Demand Estimation in Smart Grids.

    PubMed

    Ali, S M; Mehmood, C A; Khan, B; Jawad, M; Farid, U; Jadoon, J K; Ali, M; Tareen, N K; Usman, S; Majid, M; Anwar, S M

    2016-01-01

    In the smart grid paradigm, consumer demands are random and time-dependent, and are best described by stochastic probabilities. The stochastically varying consumer demands have put the policy makers and supplying agencies in a demanding position for optimal generation management. The utility revenue functions are highly dependent on the consumer deterministic and stochastic demand models. Sudden drifts in weather parameters affect the living standards of the consumers, which in turn influence the power demands. Considering the above, we analyzed stochastically and statistically the effect of random consumer demands on the fixed and variable revenues of the electrical utilities. Our work presents a Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent random consumer demands. Moreover, the Gaussian probability outcomes for the utility revenues are based on the varying consumer demand data patterns. Furthermore, Standard Monte Carlo (SMC) simulations are performed to validate the accuracy of the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provided a relationship between the dependent variable (demand) and independent variables (weather data) for utility load management, generation control, and network expansion.

  5. On the use of hidden Markov models for gaze pattern modeling

    NASA Astrophysics Data System (ADS)

    Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph

    2016-05-01

    Some of the conventional metrics derived from gaze patterns (on computer screens) to study visual attention, engagement and fatigue are saccade counts, nearest neighbor index (NNI) and duration of dwells/fixations. Each of these metrics has drawbacks in modeling the behavior of gaze patterns; one such drawback comes from the fact that some portions on the screen are not as important as some other portions on the screen. This is addressed by computing the eye gaze metrics corresponding to important areas of interest (AOI) on the screen. There are some challenges in developing accurate AOI based metrics: firstly, the definition of AOI is always fuzzy; secondly, it is possible that the AOI may change adaptively over time. Hence, there is a need to introduce eye-gaze metrics that are aware of the AOI in the field of view; at the same time, the new metrics should be able to automatically select the AOI based on the nature of the gazes. In this paper, we propose a novel way of computing NNI based on continuous hidden Markov models (HMM) that model the gazes as 2D Gaussian observations (x-y coordinates of the gaze) with the mean at the center of the AOI and covariance that is related to the concentration of gazes. The proposed modeling allows us to accurately compute the NNI metric in the presence of multiple, undefined AOI on the screen in the presence of intermittent casual gazing that is modeled as random gazes on the screen.
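
    A sketch of the proposed idea, assuming the third-party hmmlearn package is available, is given below: gaze samples are modelled as 2-D Gaussian emissions whose fitted means act as automatically discovered areas of interest, with one broad component standing in for casual gazing. The synthetic gaze data and the choice of three states are assumptions.

```python
# Fit a Gaussian HMM to 2-D gaze coordinates so that emission means/covariances act as
# automatically discovered areas of interest (AOI).  Requires the hmmlearn package.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(8)
aoi_a = rng.normal([200, 300], 15, size=(400, 2))      # dwell near one AOI
aoi_b = rng.normal([700, 450], 20, size=(400, 2))      # dwell near another AOI
casual = rng.uniform([0, 0], [1024, 768], size=(200, 2))
gaze = np.vstack([aoi_a, casual[:100], aoi_b, casual[100:]])

model = hmm.GaussianHMM(n_components=3, covariance_type="full", n_iter=200, random_state=0)
model.fit(gaze)
states = model.predict(gaze)

for k in range(3):
    mean = model.means_[k]
    spread = np.sqrt(np.trace(model.covars_[k]) / 2)   # rough radius of the fitted component
    print(f"state {k}: centre {mean.round(0)}, spread ~{spread:.0f} px, "
          f"{np.mean(states == k):.0%} of samples")
```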

  6. Particle Dynamics in the Sea: Processes of Production and Loss Governing the Abundance of Marine Snow

    DTIC Science & Technology

    1990-01-05

    pumping systems. CARDER, STEWARD and BETZER (1982) describe a holographic device (HMV = "holographic microvelocimeter": COSTELLO, YOUNG, CARDER and BETZER...similar to aggregate porosities determined using collision calculations based on random particle trajectories in computer models (Tambo and Watanabe...Similarly, sinking patterns of particles, behavior of zooplankton and processes occurring at boundary layers may be observed directly.

  7. Self-Injurious Behavior: An Animal Model of an Autism Endophenotype

    DTIC Science & Technology

    2012-01-01

    time there was a visible release of the pasta (not a drop) or a reformation of the digits holding the pasta via motor patterns of flexion/extension...review of 18 pasta trials, nine trials randomly selected from each experimental group. Behaviors included on the code sheet were number of drops...failure to contact reaches, angling with head tilt, abnormal posture, use of a unilateral paw technique, and twirling of the pasta. Specific descriptions

  8. Transfer Learning for Adaptive Relation Extraction

    DTIC Science & Technology

    2011-09-13

    other NLP tasks; however, the supervised learning approach fails when there is not a sufficient amount of labeled data for training, which is often the case...always 12 Syntactic Pattern Relation Instance Relation Type (Subtype) arg-2 arg-1 Arab leaders OTHER-AFF (Ethnic) his father PER-SOC (Family) South...for x. For sequence labeling tasks in NLP, the linear-chain conditional random field has been rather successful. It is an undirected graphical model in

  9. Population pharmacokinetics of valnemulin in swine.

    PubMed

    Zhao, D H; Zhang, Z; Zhang, C Y; Liu, Z C; Deng, H; Yu, J J; Guo, J P; Liu, Y H

    2014-02-01

    This study was carried out in 121 pigs to develop a population pharmacokinetic (PPK) model by oral (p.o.) administration of valnemulin at a single dose of 10 mg/kg. Serum biochemistry parameters of each pig were determined prior to drug administration. Three to five blood samples were collected at random time points, but uniformly distributed in the absorption, distribution, and elimination phases of drug disposition. Plasma concentrations of valnemulin were determined by high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS). The concentration-time data were fitted to PPK models using nonlinear mixed effect modeling (NONMEM) with G77 FORTRAN compiler. NONMEM runs were executed using Wings for NONMEM. Fixed effects of weight, age, sex as well as biochemistry parameters, which may influence the PK of valnemulin, were investigated. The drug concentration-time data were adequately described by a one-compartmental model with first-order absorption. A random effect model of valnemulin revealed a pattern of log-normal distribution, and it satisfactorily characterized the observed interindividual variability. The distribution of random residual errors, however, suggested an additive model for the initial phase (<12 h) followed by a combined model that consists of both proportional and additive features (≥ 12 h), so that the intra-individual variability could be sufficiently characterized. Covariate analysis indicated that body weight had a conspicuous effect on valnemulin clearance (CL/F). The featured population PK values of Ka , V/F and CL/F were 0.292/h, 63.0 L and 41.3 L/h, respectively. © 2013 John Wiley & Sons Ltd.
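
    The structural part of the model, one-compartment disposition with first-order absorption, can be written directly from the reported typical values (Ka = 0.292/h, V/F = 63.0 L, CL/F = 41.3 L/h); the dose below follows the 10 mg/kg design for a hypothetical 25 kg pig, and the NONMEM random-effects and residual-error components are not reproduced.

```python
# One-compartment model with first-order absorption after a single oral dose, using the
# reported typical population parameters; body weight is an illustrative assumption.
import math

ka, v_f, cl_f = 0.292, 63.0, 41.3          # /h, L, L/h (typical population values from the abstract)
ke = cl_f / v_f                            # apparent elimination rate constant (/h)
dose_mg = 10.0 * 25.0                      # 10 mg/kg for a hypothetical 25 kg pig

def conc(t_h):
    """Plasma concentration (mg/L) at time t_h hours after dosing."""
    return (dose_mg * ka) / (v_f * (ka - ke)) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

for t in (1, 2, 4, 8, 12, 24):
    print(f"t = {t:2d} h  C = {conc(t):.3f} mg/L")
```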

  10. Global patterns of tropical forest fragmentation

    NASA Astrophysics Data System (ADS)

    Taubert, Franziska; Fischer, Rico; Groeneveld, Jürgen; Lehmann, Sebastian; Müller, Michael S.; Rödig, Edna; Wiegand, Thorsten; Huth, Andreas

    2018-02-01

    Remote sensing enables the quantification of tropical deforestation with high spatial resolution. This in-depth mapping has led to substantial advances in the analysis of continent-wide fragmentation of tropical forests. Here we identified approximately 130 million forest fragments in three continents that show surprisingly similar power-law size and perimeter distributions as well as fractal dimensions. Power-law distributions have been observed in many natural phenomena such as wildfires, landslides and earthquakes. The principles of percolation theory provide one explanation for the observed patterns, and suggest that forest fragmentation is close to the critical point of percolation; simulation modelling also supports this hypothesis. The observed patterns emerge not only from random deforestation, which can be described by percolation theory, but also from a wide range of deforestation and forest-recovery regimes. Our models predict that additional forest loss will result in a large increase in the total number of forest fragments—at maximum by a factor of 33 over 50 years—as well as a decrease in their size, and that these consequences could be partly mitigated by reforestation and forest protection.
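
    The percolation argument can be illustrated with a toy experiment: clear grid cells at random and measure the remaining forest clusters; near the 2-D site-percolation threshold (about 0.59 cover) the fragment-size distribution becomes heavy-tailed. Grid size and cover values are arbitrary choices, and this sketch does not reproduce the remote-sensing analysis.

```python
# Toy site-percolation experiment: random forest cover on a grid, fragment statistics
# via connected-component labelling (requires SciPy).
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(9)
for cover in (0.75, 0.59, 0.45):
    forest = rng.random((512, 512)) < cover          # True = forested cell
    clusters, n = label(forest)                      # 4-connected forest fragments
    sizes = np.bincount(clusters.ravel())[1:]        # drop the background label 0
    print(f"cover {cover:.2f}: {n:6d} fragments, "
          f"largest {sizes.max():7d} cells, mean {sizes.mean():8.1f}")
```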

  11. Evaluation of a multi-point method for determining acoustic impedance

    NASA Technical Reports Server (NTRS)

    Jones, Michael G.; Parrott, Tony L.

    1988-01-01

    An investigation was conducted to explore potential improvements provided by a Multi-Point Method (MPM) over the Standing Wave Method (SWM) and Two-Microphone Method (TMM) for determining acoustic impedance. A wave propagation model was developed to model the standing wave pattern in an impedance tube. The acoustic impedance of a test specimen was calculated from a best fit of this standing wave pattern to pressure measurements obtained along the impedance tube centerline. Three measurement spacing distributions were examined: uniform, random, and selective. Calculated standing wave patterns match the point pressure measurement distributions with good agreement for a reflection factor magnitude range of 0.004 to 0.999. Comparisons of results using 2, 3, 6, and 18 measurement points showed that the most consistent results are obtained when using at least 6 evenly spaced pressure measurements per half-wavelength. Also, data were acquired with broadband noise added to the discrete frequency noise and impedances were calculated using the MPM and TMM algorithms. The results indicate that the MPM will be superior to the TMM in the presence of significant broadband noise levels associated with mean flow.

  12. Selecting for extinction: nonrandom disease-associated extinction homogenizes amphibian biotas.

    PubMed

    Smith, Kevin G; Lips, Karen R; Chase, Jonathan M

    2009-10-01

    Studying the patterns in which local extinctions occur is critical to understanding how extinctions affect biodiversity at local, regional and global spatial scales. To understand the importance of patterns of extinction at a regional spatial scale, we use data from extirpations associated with a widespread pathogenic agent of amphibian decline, Batrachochytrium dendrobatidis (Bd) as a model system. We apply novel null model analyses to these data to determine whether recent extirpations associated with Bd have resulted in selective extinction and homogenization of diverse tropical American amphibian biotas. We find that Bd-associated extinctions in this region were nonrandom and disproportionately, but not exclusively, affected low-occupancy and endemic species, resulting in homogenization of the remnant amphibian fauna. The pattern of extirpations also resulted in phylogenetic homogenization at the family level and ecological homogenization of reproductive mode and habitat association. Additionally, many more species were extirpated from the region than would be expected if extirpations occurred randomly. Our results indicate that amphibian declines in this region are an extinction filter, reducing regional amphibian biodiversity to highly similar relict assemblages and ultimately causing amplified biodiversity loss at regional and global scales.

  13. Paleolithic nutrition for metabolic syndrome: systematic review and meta-analysis.

    PubMed

    Manheimer, Eric W; van Zuuren, Esther J; Fedorowicz, Zbys; Pijl, Hanno

    2015-10-01

    Paleolithic nutrition, which has attracted substantial public attention lately because of its putative health benefits, differs radically from dietary patterns currently recommended in guidelines, particularly in terms of its recommendation to exclude grains, dairy, and nutritional products of industry. We evaluated whether a Paleolithic nutritional pattern improves risk factors for chronic disease more than do other dietary interventions. We conducted a systematic review of randomized controlled trials (RCTs) that compared the Paleolithic nutritional pattern with any other dietary pattern in participants with one or more of the 5 components of metabolic syndrome. Two reviewers independently extracted study data and assessed risk of bias. Outcome data were extracted from the first measurement time point (≤6 mo). A random-effects model was used to estimate the average intervention effect. The quality of the evidence was rated with the use of the Grading of Recommendations Assessment, Development and Evaluation approach. Four RCTs that involved 159 participants were included. The 4 control diets were based on distinct national nutrition guidelines but were broadly similar. Paleolithic nutrition resulted in greater short-term improvements than did the control diets (random-effects model) for waist circumference (mean difference: -2.38 cm; 95% CI: -4.73, -0.04 cm), triglycerides (-0.40 mmol/L; 95% CI: -0.76, -0.04 mmol/L), systolic blood pressure (-3.64 mm Hg; 95% CI: -7.36, 0.08 mm Hg), diastolic blood pressure (-2.48 mm Hg; 95% CI: -4.98, 0.02 mm Hg), HDL cholesterol (0.12 mmol/L; 95% CI: -0.03, 0.28 mmol/L), and fasting blood sugar (-0.16 mmol/L; 95% CI: -0.44, 0.11 mmol/L). The quality of the evidence for each of the 5 metabolic components was moderate. The home-delivery (n = 1) and dietary recommendation (n = 3) RCTs showed similar effects with the exception of greater improvements in triglycerides relative to the control with the home delivery. None of the RCTs evaluated an improvement in quality of life. The Paleolithic diet resulted in greater short-term improvements in metabolic syndrome components than did guideline-based control diets. The available data warrant additional evaluations of the health benefits of Paleolithic nutrition. This systematic review was registered at PROSPERO (www.crd.york.ac.uk/PROSPERO) as CRD42014015119. © 2015 American Society for Nutrition.

  14. Far field beam pattern of one MW combined beam of laser diode array amplifiers for space power transmission

    NASA Technical Reports Server (NTRS)

    Kwon, Jin H.; Lee, Ja H.

    1989-01-01

    The far-field beam pattern and the power-collection efficiency are calculated for a multistage laser-diode-array amplifier consisting of about 200,000 5-W laser diode arrays with random distributions of phase and orientation errors and random diode failures. From the numerical calculation it is found that the far-field beam pattern is little affected by random failures of up to 20 percent of the laser diodes, with reference to an 80 percent receiving efficiency in the center spot. The random phase differences among laser diodes due to probable manufacturing errors can be tolerated up to about 0.2 times the wavelength. The maximum allowable orientation error is about 20 percent of the diffraction angle of a single laser diode aperture (about 1 cm). The preliminary results indicate that the amplifier could be used for space beam-power transmission with an efficiency of about 80 percent for a moderate-size (3-m-diameter) receiver placed at a distance of less than 50,000 km.

  15. The spatial pattern of suicide in the US in relation to deprivation, fragmentation and rurality.

    PubMed

    Congdon, Peter

    2011-01-01

    Analysis of geographical patterns of suicide and psychiatric morbidity has demonstrated the impact of latent ecological variables (such as deprivation, rurality). Such latent variables may be derived by conventional multivariate techniques from sets of observed indices (for example, by principal components), by composite variable methods or by methods which explicitly consider the spatial framework of areas and, in particular, the spatial clustering of latent risks and outcomes. This article considers a latent random variable approach to explaining geographical contrasts in suicide in the US; and it develops a spatial structural equation model incorporating deprivation, social fragmentation and rurality. The approach allows for such latent spatial constructs to be correlated both within and between areas. Potential effects of area ethnic mix are also included. The model is applied to male and female suicide deaths over 2002–06 in 3142 US counties.

  16. Separation of man-made and natural patterns in high-altitude imagery of agricultural areas

    NASA Technical Reports Server (NTRS)

    Samulon, A. S.

    1975-01-01

    A nonstationary linear digital filter is designed and implemented which extracts the natural features from high-altitude imagery of agricultural areas. Essentially, from an original image a new image is created which displays information related to soil properties, drainage patterns, crop disease, and other natural phenomena, and contains no information about crop type or row spacing. A model is developed to express the recorded brightness in a narrow-band image in terms of man-made and natural contributions and which describes statistically the spatial properties of each. The form of the minimum mean-square error linear filter for estimation of the natural component of the scene is derived and a suboptimal filter is implemented. Nonstationarity of the two-dimensional random processes contained in the model requires a unique technique for deriving the optimum filter. Finally, the filter depends on knowledge of field boundaries. An algorithm for boundary location is proposed, discussed, and implemented.

  17. Determining Scale-dependent Patterns in Spatial and Temporal Datasets

    NASA Astrophysics Data System (ADS)

    Roy, A.; Perfect, E.; Mukerji, T.; Sylvester, L.

    2016-12-01

    Spatial and temporal datasets of interest to Earth scientists often contain plots of one variable against another, e.g., rainfall magnitude vs. time or fracture aperture vs. spacing. Such data, composed of distributions of events along a transect/timeline along with their magnitudes, can display persistent or antipersistent trends, as well as random behavior, that may contain signatures of underlying physical processes. Lacunarity is a technique that was originally developed for multiscale analysis of data. In a recent study we showed that lacunarity can be used for revealing changes in scale-dependent patterns in fracture spacing data. Here we present a further improvement in our technique, with lacunarity applied to various non-binary datasets composed of event spacings and magnitudes. We test our technique on a set of four synthetic datasets, three of which are based on an autoregressive model and have magnitudes at every point along the "timeline", thus representing antipersistent, persistent, and random trends. The fourth dataset is made up of five clusters of events, each containing a set of random magnitudes. The concept of the lacunarity ratio, LR, is introduced; this is the lacunarity of a given dataset normalized to the lacunarity of its random counterpart. It is demonstrated that LR can successfully delineate scale-dependent changes in terms of antipersistence and persistence in the synthetic datasets. This technique is then applied to three different types of data: a hundred-year rainfall record from Knoxville, TN, USA, a set of varved sediments from the Marca Shale, and a set of fracture aperture and spacing data from NE Mexico. While the rainfall data and varved sediments both appear to be persistent at small scales, at larger scales they both become random. On the other hand, the fracture data show antipersistence at small scales (within clusters) and random behavior at large scales. Such differences in scale-dependent behavior, whether from antipersistence to random, from persistence to random, or otherwise, may be related to differences in the physicochemical properties and processes contributing to multiscale datasets.
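
    A minimal sketch of gliding-box lacunarity and the lacunarity ratio described above is given below. The series values are invented, and shuffling the series is assumed here as the "random counterpart"; the authors' exact normalization may differ.

```python
import numpy as np

def lacunarity(series, box_size):
    """Gliding-box lacunarity of a 1-D magnitude series: <M^2> / <M>^2."""
    s = np.asarray(series, dtype=float)
    masses = np.array([s[i:i + box_size].sum()
                       for i in range(len(s) - box_size + 1)])
    return masses.var() / masses.mean() ** 2 + 1.0

def lacunarity_ratio(series, box_size, n_shuffles=200, seed=0):
    """LR: lacunarity of the data normalized by that of a shuffled counterpart."""
    rng = np.random.default_rng(seed)
    lac_data = lacunarity(series, box_size)
    lac_rand = np.mean([lacunarity(rng.permutation(series), box_size)
                        for _ in range(n_shuffles)])
    return lac_data / lac_rand

# Illustrative clustered event magnitudes; LR near 1 means the series is
# indistinguishable from its shuffled counterpart at that box size, while
# departures flag scale-dependent structure.
rng = np.random.default_rng(7)
x = np.concatenate([np.zeros(40), rng.gamma(2.0, 1.0, 10), np.zeros(40)])
print([round(lacunarity_ratio(x, r), 2) for r in (2, 5, 10, 20)])
```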

  18. Does diet-beverage intake affect dietary consumption patterns? Results from the Choose Healthy Options Consciously Everyday (CHOICE) randomized clinical trial123

    PubMed Central

    Piernas, Carmen; Tate, Deborah F; Wang, Xiaoshan

    2013-01-01

    Background: Little is understood about the effect of increased consumption of low-calorie sweeteners in diet beverages on dietary patterns and energy intake. Objective: We investigated whether energy intakes and dietary patterns were different in subjects who were randomly assigned to substitute caloric beverages with either water or diet beverages (DBs). Design: Participants from the Choose Healthy Options Consciously Everyday randomized clinical trial (a 6-mo, 3-arm study) were included in the analysis [water group: n = 106 (94% women); DB group: n = 104 (82% women)]. For energy, macronutrient, and food and beverage intakes, we investigated the main effects of time, treatment, and the treatment-by-time interaction by using mixed models. Results: Overall, the macronutrient composition changed in both groups without significant differences between groups over time. Both groups reduced absolute intakes of total daily energy, carbohydrates, fat, protein, saturated fat, total sugar, added sugar, and other carbohydrates. The DB group decreased energy from all beverages more than the water group did only at month 3 (P-group-by-time < 0.05). Although the water group had a greater reduction in grain intake at month 3 and a greater increase in fruit and vegetable intake at month 6 (P-group-by-time < 0.05), the DB group had a greater reduction in dessert intake than the water group did at month 6 (P-group-by-time < 0.05). Conclusions: Participants in both intervention groups showed positive changes in energy intakes and dietary patterns. The DB group showed decreases in most caloric beverages and specifically reduced more desserts than the water group did. Our study does not provide evidence to suggest that short-term consumption of DBs, compared with water, increases preferences for sweet foods and beverages. This trial was registered at clinicaltrials.gov as NCT01017783. PMID:23364015

  19. Phase information contained in meter-scale SAR images

    NASA Astrophysics Data System (ADS)

    Datcu, Mihai; Schwarz, Gottfried; Soccorsi, Matteo; Chaabouni, Houda

    2007-10-01

    The properties of single look complex SAR satellite images have already been analyzed by many investigators. A common belief is that, apart from inverse SAR methods or polarimetric applications, no information can be gained from the phase of each pixel. This belief is based on the assumption that we obtain uniformly distributed random phases when a sufficient number of small-scale scatterers are mixed in each image pixel. However, the random phase assumption no longer holds for typical high resolution urban remote sensing scenes, when a limited number of prominent human-made scatterers with near-regular shape and sub-meter size lead to correlated phase patterns. If the pixel size shrinks to a critical threshold of about 1 meter, the reflectance of built-up urban scenes becomes dominated by typical metal reflectors, corner-like structures, and multiple scattering. The resulting phases are hard to model, but one can try to classify a scene based on the phase characteristics of neighboring image pixels. We provide a "cooking recipe" of how to analyze existing phase patterns that extend over neighboring pixels.

  20. Distribution of Orientation Selectivity in Recurrent Networks of Spiking Neurons with Different Random Topologies

    PubMed Central

    Sadeh, Sadra; Rotter, Stefan

    2014-01-01

    Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity. PMID:25469704

  1. Geographic analysis of vaccine uptake in a cluster-randomized controlled trial in Hue, Vietnam.

    PubMed

    Ali, Mohammad; Thiem, Vu Dinh; Park, Jin-Kyung; Ochiai, Rion Leon; Canh, Do Gia; Danovaro-Holliday, M Carolina; Kaljee, Linda M; Clemens, John D; Acosta, Camilo J

    2007-09-01

    This paper identifies spatial patterns and predictors of vaccine uptake in a cluster-randomized controlled trial in Hue, Vietnam. Data for this study result from the integration of demographic surveillance, vaccine record, and geographic data of the study area. A multi-level cross-classified (non-hierarchical) model was used for analyzing the non-nested nature of individuals' ecological data. Vaccine uptake was unevenly distributed in space and there was spatial variability among predictors of vaccine uptake. Vaccine uptake was higher among students with younger, male, or illiterate family heads. Students from households with higher per-capita income were less likely to participate in the trial. Residency south of the river or further from a hospital/polyclinic was associated with higher vaccine uptake. Younger students were more likely to be vaccinated than older students in high- or low-risk areas, but not in the entire study area. The findings are important for the management of vaccine campaigns during a trial and for interpretation of disease patterns during vaccine-efficacy evaluation.

  2. Bayesian Hierarchical Random Intercept Model Based on Three Parameter Gamma Distribution

    NASA Astrophysics Data System (ADS)

    Wirawati, Ika; Iriawan, Nur; Irhamah

    2017-06-01

    Hierarchical data structures are common throughout many areas of research. Previously, the hierarchical nature of such data was often overlooked in analysis. The appropriate statistical analysis to handle this type of data is the hierarchical linear model (HLM). This article focuses only on the random intercept model (RIM), a subclass of HLM. This model assumes that the intercepts of the lowest-level models vary among groups while their slopes are fixed. The differences in intercepts are assumed to be affected by variables at the upper level; the intercepts are therefore regressed against those upper-level variables as predictors. This paper demonstrates the proposed two-level RIM by modeling per capita household expenditure in Maluku Utara, with five characteristics at the first level and three district/city characteristics at the second level. The per capita household expenditure data at the first level were captured by a three-parameter Gamma distribution. The model is therefore more complex, owing to the interaction of many parameters representing the hierarchical structure and the distribution pattern of the data. To simplify parameter estimation, a computational Bayesian method coupled with a Markov chain Monte Carlo (MCMC) algorithm and Gibbs sampling is employed.
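
    The basic two-level random-intercept structure can be illustrated on simulated data. The sketch below uses a frequentist mixed model (statsmodels MixedLM) as a stand-in; the paper itself fits a Bayesian model with a three-parameter Gamma likelihood via MCMC and Gibbs sampling, and all variable names and values here are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_districts, n_households = 8, 60

# Simulate a two-level structure: household expenditure varies around a
# district-specific intercept that itself depends on a district-level covariate.
district_cov = rng.normal(size=n_districts)                  # upper-level predictor
district_intercept = 2.0 + 0.8 * district_cov + rng.normal(0, 0.3, n_districts)

rows = []
for d in range(n_districts):
    hh_income = rng.normal(size=n_households)                # first-level predictor
    y = district_intercept[d] + 0.5 * hh_income + rng.normal(0, 0.5, n_households)
    rows.append(pd.DataFrame({"district": d, "income": hh_income,
                              "district_cov": district_cov[d], "expenditure": y}))
data = pd.concat(rows, ignore_index=True)

# Random-intercept model: intercepts vary by district, slopes are fixed.
model = smf.mixedlm("expenditure ~ income + district_cov", data,
                    groups=data["district"])
print(model.fit().summary())
```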

  3. Decreased resting-state brain activity complexity in schizophrenia characterized by both increased regularity and randomness.

    PubMed

    Yang, Albert C; Hong, Chen-Jee; Liou, Yin-Jay; Huang, Kai-Lin; Huang, Chu-Chung; Liu, Mu-En; Lo, Men-Tzung; Huang, Norden E; Peng, Chung-Kang; Lin, Ching-Po; Tsai, Shih-Jen

    2015-06-01

    Schizophrenia is characterized by heterogeneous pathophysiology. Using multiscale entropy (MSE) analysis, which enables capturing complex dynamics of time series, we characterized MSE patterns of blood-oxygen-level-dependent (BOLD) signals across different time scales and determined whether BOLD activity in patients with schizophrenia exhibits increased complexity (increased entropy in all time scales), decreased complexity toward regularity (decreased entropy in all time scales), or decreased complexity toward uncorrelated randomness (high entropy in short time scales followed by decayed entropy as the time scale increases). We recruited 105 patients with schizophrenia with an age of onset between 18 and 35 years and 210 age- and sex-matched healthy volunteers. Results showed that MSE of BOLD signals in patients with schizophrenia exhibited two routes of decreased BOLD complexity toward either regular or random patterns. Reduced BOLD complexity toward regular patterns was observed in the cerebellum and temporal, middle, and superior frontal regions, and reduced BOLD complexity toward randomness was observed extensively in the inferior frontal, occipital, and postcentral cortices as well as in the insula and middle cingulum. Furthermore, we determined that the two types of complexity change were associated differently with psychopathology; specifically, the regular type of BOLD complexity change was associated with positive symptoms of schizophrenia, whereas the randomness type of BOLD complexity was associated with negative symptoms of the illness. These results collectively suggested that resting-state dynamics in schizophrenia exhibit two routes of pathologic change toward regular or random patterns, which contribute to the differences in syndrome domains of psychosis in patients with schizophrenia. © 2015 Wiley Periodicals, Inc.
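
    A minimal sketch of multiscale entropy, the coarse-graining plus sample-entropy procedure underlying the analysis above, is shown below. It is applied to a synthetic white-noise signal rather than BOLD data, and the parameter choices (m = 2, r = 0.15) are common defaults rather than those of the study.

```python
import numpy as np

def sample_entropy(x, m=2, tol=0.2):
    """SampEn = -ln(A/B) using Chebyshev distance and an absolute tolerance."""
    x = np.asarray(x, dtype=float)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - m)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        n = len(templates)
        return (np.sum(dist <= tol) - n) / 2.0    # unordered pairs, no self-matches

    b = count_matches(m)        # matching pairs of length-m templates
    a = count_matches(m + 1)    # matching pairs of length-(m+1) templates
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

def multiscale_entropy(x, scales=range(1, 6), m=2, r=0.15):
    """Coarse-grain at each scale, then compute SampEn of each coarse-grained series."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()           # tolerance fixed from the original series
    out = []
    for s in scales:
        n = (len(x) // s) * s
        coarse = x[:n].reshape(-1, s).mean(axis=1)
        out.append(sample_entropy(coarse, m, tol))
    return out

# White noise shows high entropy at scale 1 that decays with scale (the "random"
# route); strongly regular signals keep low entropy across all scales.
rng = np.random.default_rng(0)
print(np.round(multiscale_entropy(rng.normal(size=600)), 2))
```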

  4. Determining individual variation in growth and its implication for life-history and population processes using the empirical Bayes method.

    PubMed

    Vincenzi, Simone; Mangel, Marc; Crivelli, Alain J; Munch, Stephan; Skaug, Hans J

    2014-09-01

    The differences in demographic and life-history processes between organisms living in the same population have important consequences for ecological and evolutionary dynamics. Modern statistical and computational methods allow the investigation of individual and shared (among homogeneous groups) determinants of the observed variation in growth. We use an Empirical Bayes approach to estimate individual and shared variation in somatic growth using a von Bertalanffy growth model with random effects. To illustrate the power and generality of the method, we consider two populations of marble trout Salmo marmoratus living in Slovenian streams, where individually tagged fish have been sampled for more than 15 years. We use year-of-birth cohort, population density during the first year of life, and individual random effects as potential predictors of the von Bertalanffy growth function's parameters k (rate of growth) and L∞ (asymptotic size). Our results showed that size ranks were largely maintained throughout marble trout lifetime in both populations. According to the Akaike Information Criterion (AIC), the best models showed different growth patterns for year-of-birth cohorts as well as the existence of substantial individual variation in growth trajectories after accounting for the cohort effect. For both populations, models including density during the first year of life showed that growth tended to decrease with increasing population density early in life. Model validation showed that predictions of individual growth trajectories using the random-effects model were more accurate than predictions based on mean size-at-age of fish.
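
    The von Bertalanffy growth function and the way individual random effects can enter it are sketched below; the lognormal individual deviations and the density effect are illustrative assumptions and do not reproduce the authors' Empirical Bayes estimation.

```python
import numpy as np

def von_bertalanffy(age, L_inf, k, t0=0.0):
    """von Bertalanffy growth function: expected length at a given age."""
    return L_inf * (1.0 - np.exp(-k * (age - t0)))

# Simulate individual growth trajectories with lognormal random effects on
# L_inf and k around population-level means, plus a density penalty on early
# growth; all parameter values are illustrative.
rng = np.random.default_rng(42)
ages = np.arange(1, 9)
pop_Linf, pop_k = 300.0, 0.35      # population means (mm, 1/yr), assumed
density_effect = -0.05             # growth penalty per unit density, assumed
density = 1.2                      # density experienced in the first year of life

for fish in range(3):
    Linf_i = pop_Linf * np.exp(rng.normal(0, 0.10))
    k_i = pop_k * np.exp(rng.normal(0, 0.15) + density_effect * density)
    print(np.round(von_bertalanffy(ages, Linf_i, k_i), 1))
```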

  5. Refining Time-Activity Classification of Human Subjects Using the Global Positioning System.

    PubMed

    Hu, Maogui; Li, Wei; Li, Lianfa; Houston, Douglas; Wu, Jun

    2016-01-01

    Detailed spatial location information is important in accurately estimating personal exposure to air pollution. The Global Positioning System (GPS) has been widely used in tracking personal paths and activities. Previous researchers have developed time-activity classification models based on GPS data, most of which were developed for specific regions. An adaptive model for time-location classification can be widely applied to air pollution studies that use GPS to track individual-level time-activity patterns. Time-activity data were collected for seven days using GPS loggers and accelerometers from thirteen adult participants from Southern California under free-living conditions. We developed an automated model based on random forests to classify major time-activity patterns (i.e. indoor, outdoor-static, outdoor-walking, and in-vehicle travel). Sensitivity analysis was conducted to examine the contribution of the accelerometer data and the supplemental spatial data (i.e. roadway and tax parcel data) to the accuracy of time-activity classification. Our model was evaluated using both leave-one-fold-out and leave-one-subject-out methods. Maximum speeds in averaging time intervals of 7 and 5 minutes, and distance to primary highways with limited access were found to be the three most important variables in the classification model. Leave-one-fold-out cross-validation showed an overall accuracy of 99.71%. Sensitivities varied from 84.62% (outdoor walking) to 99.90% (indoor). Specificities varied from 96.33% (indoor) to 99.98% (outdoor static). The exclusion of accelerometer and ambient light sensor variables caused a slight loss in sensitivity for outdoor walking, but little loss in overall accuracy. However, leave-one-subject-out cross-validation showed considerable loss in sensitivity for outdoor static and outdoor walking conditions. The random forests classification model can achieve high accuracy for the four major time-activity categories. The model also performed well with just GPS, road and tax parcel data. However, caution is warranted when generalizing the model developed from a small number of subjects to other populations.
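
    A minimal sketch of the modeling step, random-forest classification with fold-based versus subject-grouped cross-validation, is shown below using scikit-learn. The feature names and synthetic values are hypothetical stand-ins for the variables described (maximum speeds, distance to highways, accelerometer counts, ambient light).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-epoch features standing in for those described in the study.
X = np.column_stack([
    rng.gamma(2.0, 5.0, n),      # max_speed_5min (km/h)
    rng.gamma(2.0, 5.0, n),      # max_speed_7min (km/h)
    rng.exponential(500, n),     # dist_to_highway (m)
    rng.poisson(200, n),         # accel_counts
    rng.uniform(0, 1000, n),     # ambient_light (lux)
])
y = rng.integers(0, 4, n)          # 0 indoor, 1 outdoor-static, 2 outdoor-walking, 3 in-vehicle
subjects = rng.integers(0, 13, n)  # subject id, for leave-one-subject-out style CV

clf = RandomForestClassifier(n_estimators=500, random_state=0)

# Plain k-fold accuracy (analogous to leave-one-fold-out) versus folds grouped by
# subject (analogous to leave-one-subject-out); with real data the gap between the
# two reveals how well the model generalizes to unseen subjects.
print(cross_val_score(clf, X, y, cv=5).mean())
print(cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5), groups=subjects).mean())
```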

  6. Neutral null models for diversity in serial transfer evolution experiments.

    PubMed

    Harpak, Arbel; Sella, Guy

    2014-09-01

    Evolution experiments with microorganisms coupled with genome-wide sequencing now allow for the systematic study of population genetic processes under a wide range of conditions. In learning about these processes in natural, sexual populations, neutral models that describe the behavior of diversity and divergence summaries have played a pivotal role. It is therefore natural to ask whether neutral models, suitably modified, could be useful in the context of evolution experiments. Here, we introduce coalescent models for polymorphism and divergence under the most common experimental evolution assay, a serial transfer experiment. This relatively simple setting allows us to address several issues that could affect diversity patterns in evolution experiments, whether selection is operating or not: the transient behavior of neutral polymorphism in an experiment beginning from a single clone, the effects of randomness in the timing of cell division and noisiness in population size in the dilution stage. In our analyses and discussion, we emphasize the implications for experiments aimed at measuring diversity patterns and making inferences about population genetic processes based on these measurements. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.

  7. A location-based multiple point statistics method: modelling the reservoir with non-stationary characteristics

    NASA Astrophysics Data System (ADS)

    Yin, Yanshu; Feng, Wenjie

    2017-12-01

    In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.

  8. A framework for the identification of long-term social avoidance in longitudinal datasets

    PubMed Central

    Levengood, Alexis; Foroughirad, Vivienne; Mann, Janet; Krzyszczyk, Ewa

    2017-01-01

    Animal sociality is of significant interest to evolutionary and behavioural ecologists, with efforts focused on the patterns, causes and fitness outcomes of social preference. However, individual social patterns are the consequence of both attraction to (preference for) and avoidance of conspecifics. Despite this, social avoidance has received far less attention than social preference. Here, we detail the necessary steps to generate a spatially explicit, iterative null model which can be used to identify non-random social avoidance in longitudinal studies of social animals. We specifically identify and detail parameters which will influence the validity of the model. To test the usability of this model, we applied it to two longitudinal studies of social animals (Eastern water dragons (Intellegama lesueurii) and bottlenose dolphins (Tursiops aduncus)) to identify the presence of social avoidances. Using this model allowed us to identify the presence of social avoidances in both species. We hope that the framework presented here inspires interest in addressing this critical gap in our understanding of animal sociality, in turn allowing for a more holistic understanding of social interactions, relationships and structure. PMID:28879006

  9. A framework for the identification of long-term social avoidance in longitudinal datasets.

    PubMed

    Strickland, Kasha; Levengood, Alexis; Foroughirad, Vivienne; Mann, Janet; Krzyszczyk, Ewa; Frère, Celine H

    2017-08-01

    Animal sociality is of significant interest to evolutionary and behavioural ecologists, with efforts focused on the patterns, causes and fitness outcomes of social preference. However, individual social patterns are the consequence of both attraction to (preference for) and avoidance of conspecifics. Despite this, social avoidance has received far less attention than social preference. Here, we detail the necessary steps to generate a spatially explicit, iterative null model which can be used to identify non-random social avoidance in longitudinal studies of social animals. We specifically identify and detail parameters which will influence the validity of the model. To test the usability of this model, we applied it to two longitudinal studies of social animals (Eastern water dragons (Intellegama lesueurii) and bottlenose dolphins (Tursiops aduncus)) to identify the presence of social avoidances. Using this model allowed us to identify the presence of social avoidances in both species. We hope that the framework presented here inspires interest in addressing this critical gap in our understanding of animal sociality, in turn allowing for a more holistic understanding of social interactions, relationships and structure.

  10. Numerical experiments with model monophyletic and paraphyletic taxa

    NASA Technical Reports Server (NTRS)

    Sepkoski, J. J., Jr.; Kendrick, D. C.; Sepkoski, J. J., Jr. (Principal Investigator)

    1993-01-01

    The problem of how accurately paraphyletic taxa versus monophyletic (i.e., holophyletic) groups (clades) capture underlying species patterns of diversity and extinction is explored with Monte Carlo simulations. Phylogenies are modeled as stochastic trees. Paraphyletic taxa are defined in an arbitrary manner by randomly choosing progenitors and clustering all descendants not belonging to other taxa. These taxa are then examined to determine which are clades, and the remaining paraphyletic groups are dissected to discover monophyletic subgroups. Comparisons of diversity patterns and extinction rates between modeled taxa and lineages indicate that paraphyletic groups can adequately capture lineage information under a variety of conditions of diversification and mass extinction. This suggests that these groups constitute more than mere "taxonomic noise" in this context. But, strictly monophyletic groups perform somewhat better, especially with regard to mass extinctions. However, when low levels of paleontologic sampling are simulated, the veracity of clades deteriorates, especially with respect to diversity, and modeled paraphyletic taxa often capture more information about underlying lineages. Thus, for studies of diversity and taxic evolution in the fossil record, traditional paleontologic genera and families need not be rejected in favor of cladistically-defined taxa.

  11. On the distribution of interspecies correlation for Markov models of character evolution on Yule trees.

    PubMed

    Mulder, Willem H; Crawford, Forrest W

    2015-01-07

    Efforts to reconstruct phylogenetic trees and understand evolutionary processes depend fundamentally on stochastic models of speciation and mutation. The simplest continuous-time model for speciation in phylogenetic trees is the Yule process, in which new species are "born" from existing lineages at a constant rate. Recent work has illuminated some of the structural properties of Yule trees, but it remains mostly unknown how these properties affect sequence and trait patterns observed at the tips of the phylogenetic tree. Understanding the interplay between speciation and mutation under simple models of evolution is essential for deriving valid phylogenetic inference methods and gives insight into the optimal design of phylogenetic studies. In this work, we derive the probability distribution of interspecies covariance under Brownian motion and Ornstein-Uhlenbeck models of phenotypic change on a Yule tree. We compute the probability distribution of the number of mutations shared between two randomly chosen taxa in a Yule tree under discrete Markov mutation models. Our results suggest summary measures of phylogenetic information content, illuminate the correlation between site patterns in sequences or traits of related organisms, and provide heuristics for experimental design and reconstruction of phylogenetic trees. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Random technique to encode complex valued holograms with on axis reconstruction onto phase-only displays.

    PubMed

    Luis Martínez Fuentes, Jose; Moreno, Ignacio

    2018-03-05

    A new technique for encoding the amplitude and phase of diffracted fields in digital holography is proposed. It is based on a random spatial multiplexing of two phase-only diffractive patterns. The first one is the phase information of the intended pattern, while the second one is a diverging optical element whose purpose is the control of the amplitude. A random number determines the choice between these two diffractive patterns at each pixel, and the amplitude information of the desired field governs its discrimination threshold. This proposed technique is computationally fast and does not require iterative methods, and the complex field reconstruction appears on axis. We experimentally demonstrate this new encoding technique with holograms implemented onto a flicker-free phase-only spatial light modulator (SLM), which allows the axial generation of such holograms. The experimental verification includes the phase measurement of generated patterns with a phase-shifting polarization interferometer implemented in the same experimental setup.
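
    The random spatial multiplexing rule described above can be sketched directly in NumPy: at each pixel, a uniform random draw compared against the normalized amplitude selects either the target phase or a diverging-lens phase. The wavelength, pixel pitch, and focal length below are illustrative values, not those of the experiment.

```python
import numpy as np

def encode_complex_field(field, wavelength=633e-9, pixel=8e-6, focal=0.5, seed=0):
    """Encode a complex field onto a phase-only SLM by random spatial multiplexing.

    Pixels where a uniform random draw falls below the normalized amplitude keep
    the target phase; the remaining pixels receive a diverging-lens phase that
    sends their light away from the on-axis reconstruction.
    """
    rng = np.random.default_rng(seed)
    amp = np.abs(field) / np.abs(field).max()          # amplitude in [0, 1]
    phase = np.angle(field)

    ny, nx = field.shape
    y, x = np.meshgrid((np.arange(ny) - ny / 2) * pixel,
                       (np.arange(nx) - nx / 2) * pixel, indexing="ij")
    lens_phase = -np.pi * (x**2 + y**2) / (wavelength * focal)   # diverging lens
    lens_phase = np.angle(np.exp(1j * lens_phase))     # wrap to [-pi, pi]

    keep = rng.random(field.shape) < amp               # amplitude sets the threshold
    return np.where(keep, phase, lens_phase)

# Example target: a tilted plane wave with a Gaussian amplitude envelope.
ny, nx = 256, 256
yy, xx = np.mgrid[0:ny, 0:nx]
target = np.exp(-((xx - nx / 2)**2 + (yy - ny / 2)**2) / (2 * 40.0**2)) \
         * np.exp(1j * 2 * np.pi * xx / 16)
slm_phase = encode_complex_field(target)
print(slm_phase.shape, slm_phase.min(), slm_phase.max())
```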

  13. The contribution of competition to tree mortality in old-growth coniferous forests

    USGS Publications Warehouse

    Das, A.; Battles, J.; Stephenson, N.L.; van Mantgem, P.J.

    2011-01-01

    Competition is a well-documented contributor to tree mortality in temperate forests, with numerous studies documenting a relationship between tree death and the competitive environment. Models frequently rely on competition as the only non-random mechanism affecting tree mortality. However, for mature forests, competition may cease to be the primary driver of mortality. We use a large, long-term dataset to study the importance of competition in determining tree mortality in old-growth forests on the western slope of the Sierra Nevada of California, U.S.A. We make use of the comparative spatial configuration of dead and live trees, changes in tree spatial pattern through time, and field assessments of contributors to an individual tree's death to quantify competitive effects. Competition was apparently a significant contributor to tree mortality in these forests. Trees that died tended to be in more competitive environments than trees that survived, and suppression frequently appeared as a factor contributing to mortality. On the other hand, based on spatial pattern analyses, only three of 14 plots demonstrated compelling evidence that competition was dominating mortality. Most of the rest of the plots fell within the expectation for random mortality, and three fit neither the random nor the competition model. These results suggest that while competition is often playing a significant role in tree mortality processes in these forests, it only infrequently governs those processes. In addition, the field assessments indicated a substantial presence of biotic mortality agents in trees that died. While competition is almost certainly important, demographics in these forests cannot accurately be characterized without a better grasp of other mortality processes. In particular, we likely need a better understanding of biotic agents and their interactions with one another and with competition. © 2011.

  14. Planform structure of turbulent Rayleigh-Benard convection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theerthan, S.A.; Arakeri, J.H.

    The planform structure of turbulent Rayleigh-Benard convection is obtained from visualizing a liquid crystal sheet stuck to the bottom hot surface. The bottom plate of the convection cell is Plexiglas and the top plate is glass. Water is the test liquid and the Rayleigh number is 4 × 10⁷. The planform pattern reveals randomly moving hot streaks surrounded by cold regions suggesting that turbulent Rayleigh-Benard convection is dominated by quasi-two-dimensional randomly moving plumes. Simultaneous temperature traces from two vertically separated thermocouples indicate that these plumes may be inclined forward in the direction of horizontal motion. The periodic eruption of thermals observed by Sparrow et al. and which forms the basis of Howard's model is not observed.

  15. A Design Principle for an Autonomous Post-translational Pattern Formation.

    PubMed

    Sugai, Shuhei S; Ode, Koji L; Ueda, Hiroki R

    2017-04-25

    Previous autonomous pattern-formation models often assumed complex molecular and cellular networks. This theoretical study, however, shows that a system composed of one substrate with multisite phosphorylation and a pair of kinase and phosphatase can generate autonomous spatial information, including complex stripe patterns. All (de-)phosphorylation reactions are described with a generic Michaelis-Menten scheme, and all species freely diffuse without pre-existing gradients. Computational simulation upon >23,000,000 randomly generated parameter sets revealed the design motifs of cyclic reaction and enzyme sequestration by slow-diffusing substrates. These motifs constitute short-range positive and long-range negative feedback loops to induce Turing instability. The width and height of spatial patterns can be controlled independently by distinct reaction-diffusion processes. Therefore, multisite reversible post-translational modification can be a ubiquitous source for various patterns without requiring other complex regulations such as autocatalytic regulation of enzymes and is applicable to molecular mechanisms for inducing subcellular localization of proteins driven by post-translational modifications. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
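
    The paper's model is a kinase-phosphatase network acting on a multisite substrate; as a generic illustration of how short-range activation and long-range inhibition (slow versus fast diffusion) can produce Turing-type patterns, the sketch below instead runs the well-known Gray-Scott reaction-diffusion system with commonly cited parameter values, not the paper's reaction scheme.

```python
import numpy as np

def laplacian(z):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z)

def gray_scott(n=128, steps=10000, Du=0.16, Dv=0.08, F=0.035, k=0.065, seed=0):
    """Illustrative Gray-Scott simulation; the differing diffusion rates (Du > Dv)
    play the role of long-range inhibition versus short-range activation."""
    rng = np.random.default_rng(seed)
    U = np.ones((n, n))
    V = np.zeros((n, n))
    m = n // 2
    U[m-10:m+10, m-10:m+10] = 0.50     # perturbed square so patterns can nucleate
    V[m-10:m+10, m-10:m+10] = 0.25
    U += 0.02 * rng.random((n, n))
    V += 0.02 * rng.random((n, n))
    for _ in range(steps):
        uvv = U * V * V
        U += Du * laplacian(U) - uvv + F * (1.0 - U)
        V += Dv * laplacian(V) + uvv - (F + k) * V
    return U, V

U, V = gray_scott()
print(V.mean(), V.max())   # spatial structure in V indicates pattern formation
```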

  16. Improving teacher perceptions of parent involvement patterns: Findings from a group randomized trial.

    PubMed

    Herman, Keith C; Reinke, Wendy M

    2017-03-01

    For children with the most serious and persistent academic and behavior problems, parent involvement in education, particularly teacher perceptions of involvement, is essential to avert their expected long-term negative outcomes. Despite the widespread interest in and perceived importance of parent involvement in education, however, few experimental studies have evaluated programs and practices to promote it. In this group randomized trial, we examined the effects of the Incredible Years Teacher Classroom Management program (IY TCM) on teacher perceptions of contact and comfort with parents. One hundred five classrooms with 1818 students were randomly assigned to an IY TCM or to a control, business as usual condition. Measures of key constructs included teacher ratings of parent and student behaviors, direct observations in the classroom, and a standardized academic achievement test. Latent transition analysis (LTA) was used to identify patterns of involvement over time and to determine if intervention condition predicted postintervention patterns and transitions. Four patterns of involvement were identified at baseline and at follow-up; parents of students with academic and behavior problems were most likely to be in classes with the least adaptive involvement patterns. Intervention status predicted group membership at follow-up. Specifically, intervention classroom parents were significantly more likely to transition to more adaptive teacher-rated parenting profiles at follow-up compared to control classroom parents. This is the first randomized trial we are aware of that has found that teacher training can alter teacher perceptions of parent involvement patterns. Clinical implications for students with behavior and academic problems are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Examining drivers' eye glance patterns during distracted driving: Insights from scanning randomness and glance transition matrix.

    PubMed

    Wang, Yuan; Bao, Shan; Du, Wenjun; Ye, Zhirui; Sayer, James R

    2017-12-01

    Visual attention to the driving environment is of great importance for road safety. Eye glance behavior has been used as an indicator of distracted driving. This study examined and quantified drivers' glance patterns and features during distracted driving. Data from an existing naturalistic driving study were used. Entropy rate was calculated and used to assess the randomness associated with drivers' scanning patterns. A glance-transition proportion matrix was defined to quantify visual search patterns transitioning among four main eye glance locations while driving (i.e., forward on-road, phone, mirrors and others). All measurements were calculated within a 5-s time window under both cell phone and non-cell phone use conditions. Results of the glance data analyses showed different patterns between distracted and non-distracted driving, featured by a higher entropy rate value and highly biased attention transferring between forward and phone locations during distracted driving. Drivers in general had a higher number of glance transitions, and their on-road glance duration was significantly shorter during distracted driving when compared to non-distracted driving. Results suggest that drivers have a higher scanning randomness/disorder level and shift their main attention from surrounding areas towards the phone area when engaging in visual-manual tasks. Drivers' visual search patterns during visual-manual distraction with a high scanning randomness and a high proportion of eye glance transitions towards the location of the phone provide insight into driver distraction detection. This will help to inform the design of in-vehicle human-machine interface/systems. Copyright © 2017. Published by Elsevier Ltd.
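
    The two measures above, the glance-transition proportion matrix and the entropy rate of the resulting Markov chain, can be sketched as follows; the glance sequence is invented for illustration.

```python
import numpy as np

LOCATIONS = ["road", "phone", "mirrors", "other"]

def transition_matrix(sequence, n_states=4):
    """Row-normalized glance-transition proportion matrix."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(sequence[:-1], sequence[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def entropy_rate(P):
    """Entropy rate of a stationary Markov chain: H = -sum_i pi_i sum_j P_ij log2 P_ij."""
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    pi = pi / pi.sum()                           # stationary distribution
    logP = np.where(P > 0, np.log2(P, where=P > 0), 0.0)
    return float(-np.sum(pi[:, None] * P * logP))

# Invented glance-location sequence within a short window
# (0 = road, 1 = phone, 2 = mirrors, 3 = other).
seq = [0, 1, 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 1, 0, 2, 0, 1, 0, 0]
P = transition_matrix(seq)
print(np.round(P, 2))
print(round(entropy_rate(P), 3))
```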

  18. Protein subcellular location pattern classification in cellular images using latent discriminative models.

    PubMed

    Li, Jieyue; Xiong, Liang; Schneider, Jeff; Murphy, Robert F

    2012-06-15

    Knowledge of the subcellular location of a protein is crucial for understanding its functions. The subcellular pattern of a protein is typically represented as the set of cellular components in which it is located, and an important task is to determine this set from microscope images. In this article, we address this classification problem using confocal immunofluorescence images from the Human Protein Atlas (HPA) project. The HPA contains images of cells stained for many proteins; each is also stained for three reference components, but there are many other components that are invisible. Given one such cell, the task is to classify the pattern type of the stained protein. We first randomly select local image regions within the cells, and then extract various carefully designed features from these regions. This region-based approach enables us to explicitly study the relationship between proteins and different cell components, as well as the interactions between these components. To achieve these two goals, we propose two discriminative models that extend logistic regression with structured latent variables. The first model allows the same protein pattern class to be expressed differently according to the underlying components in different regions. The second model further captures the spatial dependencies between the components within the same cell so that we can better infer these components. To learn these models, we propose a fast approximate algorithm for inference, and then use gradient-based methods to maximize the data likelihood. In the experiments, we show that the proposed models help improve the classification accuracies on synthetic data and real cellular images. The best overall accuracy we report in this article for classifying 942 proteins into 13 classes of patterns is about 84.6%, which to our knowledge is the best so far. In addition, the dependencies learned are consistent with prior knowledge of cell organization. http://murphylab.web.cmu.edu/software/.

  19. Identification and Simulation of Subsurface Soil patterns using hidden Markov random fields and remote sensing and geophysical EMI data sets

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Wellmann, Florian; Verweij, Elizabeth; von Hebel, Christian; van der Kruk, Jan

    2017-04-01

    Lateral and vertical spatial heterogeneity of subsurface properties such as soil texture and structure influences the available water and resource supply for crop growth. High-resolution mapping of subsurface structures using non-invasive geo-referenced geophysical measurements, like electromagnetic induction (EMI), enables a characterization of 3D soil structures, which have shown correlations to remote sensing information of the crop states. The benefit of EMI is that it can return 3D subsurface information; however, its spatial coverage is limited due to the labor-intensive measurement procedure. Although active and passive sensors mounted on air- or space-borne platforms return only 2D images, they cover much larger spatial extents. Combining both approaches provides us with a potential pathway to extend the detailed 3D geophysical information to a larger area by using remote sensing information. In this study, we aim to extract and provide insights into the spatial and statistical correlation of the geophysical and remote sensing observations of the soil/vegetation continuum system. To this end, two key points need to be addressed: 1) how to detect and recognize the geometric patterns (i.e., spatial heterogeneity) from multiple data sets, and 2) how to quantitatively describe the statistical correlation between remote sensing information and geophysical measurements. In the current study, the spatial domain is restricted to shallow depths up to 3 meters, and the geostatistical database contains normalized difference vegetation index (NDVI) derived from RapidEye satellite images and apparent electrical conductivities (ECa) measured from multi-receiver EMI sensors for nine depths of exploration ranging from 0 to 2.7 m. The integrated data sets are mapped into both the physical space (i.e. the spatial domain) and feature space (i.e. a two-dimensional space framed by the NDVI and the ECa data). Hidden Markov Random Fields (HMRF) are employed to model the underlying heterogeneities in the spatial domain, and finite Gaussian mixture models are adopted to quantitatively describe the statistical patterns in terms of center vectors and covariance matrices in feature space. A recently developed parallel stochastic clustering algorithm is adopted to implement the HMRF models and the Markov chain Monte Carlo based Bayesian inference. Certain spatial patterns such as buried paleo-river channels covered by shallow sediments are investigated as typical examples. The results indicate that the geometric patterns of the subsurface heterogeneity can be represented and quantitatively characterized by HMRF. Furthermore, the statistical patterns of the NDVI and the EMI data from the soil/vegetation-continuum system can be inferred and analyzed in a quantitative manner.
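
    The feature-space half of this analysis, fitting a finite Gaussian mixture to joint NDVI-ECa samples and reading off per-cluster center vectors and covariance matrices, can be sketched with scikit-learn as below. The data are synthetic, and a full HMRF would additionally impose a spatial Markov prior on the cluster labels, which this sketch omits.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic stand-in for the feature space: each sample is (NDVI, ECa at one depth),
# drawn from two hypothetical soil units (e.g., paleo-channel fill vs. surrounding sediment).
unit_a = rng.multivariate_normal([0.65, 12.0], [[0.002, 0.01], [0.01, 4.0]], 400)
unit_b = rng.multivariate_normal([0.45, 30.0], [[0.003, -0.02], [-0.02, 9.0]], 400)
X = np.vstack([unit_a, unit_b])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
labels = gmm.predict(X)                  # cluster membership in feature space

print("center vectors:\n", np.round(gmm.means_, 2))
print("covariance matrices:\n", np.round(gmm.covariances_, 3))
# A hidden Markov random field would additionally smooth `labels` over the map
# coordinates, so that spatially adjacent samples prefer the same cluster.
```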

  20. Examination of Cognitive Function During Six Months of Calorie Restriction: Results of a Randomized Controlled Trial

    PubMed Central

    Martin, Corby K.; Anton, Stephen D.; Han, Hongmei; York-Crowe, Emily; Redman, Leanne M.; Ravussin, Eric; Williamson, Donald A.

    2009-01-01

    Background Calorie restriction increases longevity in many organisms, and calorie restriction or its mimetic might increase longevity in humans. It is unclear if calorie restriction/dieting contributes to cognitive impairment. During this randomized controlled trial, the effect of 6 months of calorie restriction on cognitive functioning was tested. Methods Participants (n = 48) were randomized to one of four groups: (1) control (weight maintenance), (2) calorie restriction (CR; 25% restriction), (3) CR plus structured exercise (CR + EX, 12.5% restriction plus 12.5% increased energy expenditure via exercise), or (4) low-calorie diet (LCD; 890 kcal/d diet until 15% weight loss, followed by weight maintenance). Cognitive tests (verbal memory, visual memory, attention/concentration) were conducted at baseline and months 3 and 6. Mixed linear models tested if cognitive function changed significantly from baseline to months 3 and 6, and if this change differed by group. Correlation analysis was used to determine if average daily energy deficit (quantified from change in body energy stores) was associated with change in cognitive test performance for the three dieting groups combined. Results No consistent pattern of verbal memory, visual retention/memory, or attention/concentration deficits emerged during the trial. Daily energy deficit was not significantly associated with change in cognitive test performance. Conclusions This randomized controlled trial suggests that calorie restriction/dieting was not associated with a consistent pattern of cognitive impairment. These conclusions must be interpreted in the context of study limitations, namely small sample size and limited statistical power. Previous reports of cognitive impairment might reflect sampling biases or information processing biases. PMID:17518698

  1. Spatio-temporal conditional inference and hypothesis tests for neural ensemble spiking precision

    PubMed Central

    Harrison, Matthew T.; Amarasingham, Asohan; Truccolo, Wilson

    2014-01-01

    The collective dynamics of neural ensembles create complex spike patterns with many spatial and temporal scales. Understanding the statistical structure of these patterns can help resolve fundamental questions about neural computation and neural dynamics. Spatio-temporal conditional inference (STCI) is introduced here as a semiparametric statistical framework for investigating the nature of precise spiking patterns from collections of neurons that is robust to arbitrarily complex and nonstationary coarse spiking dynamics. The main idea is to focus statistical modeling and inference, not on the full distribution of the data, but rather on families of conditional distributions of precise spiking given different types of coarse spiking. The framework is then used to develop families of hypothesis tests for probing the spatio-temporal precision of spiking patterns. Relationships among different conditional distributions are used to improve multiple hypothesis testing adjustments and to design novel Monte Carlo spike resampling algorithms. Of special note are algorithms that can locally jitter spike times while still preserving the instantaneous peri-stimulus time histogram (PSTH) or the instantaneous total spike count from a group of recorded neurons. The framework can also be used to test whether first-order maximum entropy models with possibly random and time-varying parameters can account for observed patterns of spiking. STCI provides a detailed example of the generic principle of conditional inference, which may be applicable in other areas of neurostatistical analysis. PMID:25380339
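
    The sketch below is not the STCI algorithms themselves, but a simple resampling scheme in the same spirit: permuting neuron identities within each time bin produces surrogate rasters that exactly preserve the instantaneous total spike count of the ensemble while destroying precise cross-neuron patterns.

```python
import numpy as np

def shuffle_within_bins(raster, seed=0):
    """Permute neuron identities independently in each time bin of a 0/1 raster.

    Column sums (the instantaneous total spike count over the ensemble) are
    preserved exactly; precise spatio-temporal patterns across neurons are not.
    """
    rng = np.random.default_rng(seed)
    shuffled = raster.copy()
    for t in range(raster.shape[1]):
        shuffled[:, t] = rng.permutation(raster[:, t])
    return shuffled

# Tiny example raster: 4 neurons x 10 time bins.
rng = np.random.default_rng(1)
raster = (rng.random((4, 10)) < 0.3).astype(int)
surrogate = shuffle_within_bins(raster)
print(raster.sum(axis=0))      # total spikes per bin...
print(surrogate.sum(axis=0))   # ...identical after shuffling
```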

  2. Bayesian spatial transformation models with applications in neuroimaging data

    PubMed Central

    Miranda, Michelle F.; Zhu, Hongtu; Ibrahim, Joseph G.

    2013-01-01

    Summary: The aim of this paper is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. Our STMs include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov Random Field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. PMID:24128143

  3. Stronger tests of mechanisms underlying geographic gradients of biodiversity: insights from the dimensionality of biodiversity.

    PubMed

    Stevens, Richard D; Tello, J Sebastián; Gavilanez, María Mercedes

    2013-01-01

    Inference involving diversity gradients typically is gathered by mechanistic tests involving single dimensions of biodiversity such as species richness. Nonetheless, because traits such as geographic range size, trophic status or phenotypic characteristics are tied to a particular species, mechanistic effects driving broad diversity patterns should manifest across numerous dimensions of biodiversity. We develop an approach of stronger inference based on numerous dimensions of biodiversity and apply it to evaluate one such putative mechanism: the mid-domain effect (MDE). Species composition of 10,000-km² grid cells was determined by overlaying geographic range maps of 133 noctilionoid bat taxa. We determined empirical diversity gradients in the Neotropics by calculating species richness and three indices each of phylogenetic, functional and phenetic diversity for each grid cell. We also created 1,000 simulated gradients of each examined metric of biodiversity based on a MDE model to estimate patterns expected if species distributions were randomly placed within the Neotropics. For each simulation run, we regressed the observed gradient onto the MDE-expected gradient. If a MDE drives empirical gradients, then coefficients of determination from such an analysis should be high, the intercept no different from zero and the slope no different than unity. Species richness gradients predicted by the MDE fit empirical patterns. The MDE produced strong spatially structured gradients of taxonomic, phylogenetic, functional and phenetic diversity. Nonetheless, expected values generated from the MDE for most dimensions of biodiversity exhibited poor fit to most empirical patterns. The MDE cannot account for most empirical patterns of biodiversity. Fuller understanding of latitudinal gradients will come from simultaneous examination of relative effects of random, environmental and historical mechanisms to better understand distribution and abundance of the current biota.
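
    A one-dimensional sketch of the mid-domain null model is given below: ranges of fixed sizes are placed at random positions within a bounded domain, expected richness is averaged over simulations, and the observed gradient is regressed on the expected one. The range sizes and the "observed" gradient are invented for illustration.

```python
import numpy as np

def mde_expected_richness(range_sizes, n_cells=100, n_sims=1000, seed=0):
    """Expected richness per cell when ranges are placed at random within the domain."""
    rng = np.random.default_rng(seed)
    richness = np.zeros((n_sims, n_cells))
    for s in range(n_sims):
        for size in range_sizes:
            start = rng.integers(0, n_cells - size + 1)   # range must fit in the domain
            richness[s, start:start + size] += 1
    return richness.mean(axis=0)

# Invented range sizes (in cells) for a hypothetical fauna of 133 species.
rng = np.random.default_rng(1)
range_sizes = rng.integers(5, 60, size=133)
expected = mde_expected_richness(range_sizes)

# An invented "observed" gradient stands in for empirical data; regress observed
# on expected. Under a pure MDE, slope ~ 1, intercept ~ 0, and R^2 is high.
observed = expected * 1.1 + rng.normal(0, 2.0, expected.size)
slope, intercept = np.polyfit(expected, observed, 1)
r2 = np.corrcoef(expected, observed)[0, 1] ** 2
print(round(slope, 2), round(intercept, 2), round(r2, 2))
```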

  4. Stronger Tests of Mechanisms Underlying Geographic Gradients of Biodiversity: Insights from the Dimensionality of Biodiversity

    PubMed Central

    Stevens, Richard D.; Tello, J. Sebastián; Gavilanez, María Mercedes

    2013-01-01

    Inference involving diversity gradients typically is gathered by mechanistic tests involving single dimensions of biodiversity such as species richness. Nonetheless, because traits such as geographic range size, trophic status or phenotypic characteristics are tied to a particular species, mechanistic effects driving broad diversity patterns should manifest across numerous dimensions of biodiversity. We develop an approach of stronger inference based on numerous dimensions of biodiversity and apply it to evaluate one such putative mechanism: the mid-domain effect (MDE). Species composition of 10,000-km2 grid cells was determined by overlaying geographic range maps of 133 noctilionoid bat taxa. We determined empirical diversity gradients in the Neotropics by calculating species richness and three indices each of phylogenetic, functional and phenetic diversity for each grid cell. We also created 1,000 simulated gradients of each examined metric of biodiversity based on a MDE model to estimate patterns expected if species distributions were randomly placed within the Neotropics. For each simulation run, we regressed the observed gradient onto the MDE-expected gradient. If a MDE drives empirical gradients, then coefficients of determination from such an analysis should be high, the intercept no different from zero and the slope no different than unity. Species richness gradients predicted by the MDE fit empirical patterns. The MDE produced strong spatially structured gradients of taxonomic, phylogenetic, functional and phenetic diversity. Nonetheless, expected values generated from the MDE for most dimensions of biodiversity exhibited poor fit to most empirical patterns. The MDE cannot account for most empirical patterns of biodiversity. Fuller understanding of latitudinal gradients will come from simultaneous examination of relative effects of random, environmental and historical mechanisms to better understand distribution and abundance of the current biota. PMID:23451099

  5. Day-roost tree selection by northern long-eared bats—What do non-roost tree comparisons and one year of data really tell us?

    USGS Publications Warehouse

    Silvis, Alexander; Ford, W. Mark; Britzke, Eric R.

    2015-01-01

    Bat day-roost selection often is described through comparisons of day-roosts with randomly selected, and assumed unused, trees. Relatively few studies, however, look at patterns of multi-year selection or compare day-roosts used across years. We explored day-roost selection using 2 years of roost selection data for female northern long-eared bats (Myotis septentrionalis) on the Fort Knox Military Reservation, Kentucky, USA. We compared characteristics of randomly selected non-roost trees and day-roosts using a multinomial logistic model and day-roost species selection using chi-squared tests. We found that factors differentiating day-roosts from non-roosts and day-roosts between years varied. Day-roosts differed from non-roosts in the first year of data in all measured factors, but only in size and decay stage in the second year. Between years, day-roosts differed in size and canopy position, but not decay stage. Day-roost species selection was non-random and did not differ between years. Although bats used multiple trees, our results suggest that there were additional unused trees that were suitable as roosts at any time. Day-roost selection pattern descriptions will be inadequate if based only on a single year of data, and inferences of roost selection based only on comparisons of roost to non-roosts should be limited.

  7. Topology and the universe

    NASA Astrophysics Data System (ADS)

    Gott, J. Richard, III

    1998-09-01

    Topology may play an important role in cosmology in several different ways. First, Einstein's field equations tell us about the local geometry of the universe but not about its topology. Therefore, the universe may be multiply connected. Inflation predicts that the fluctuations that made clusters and groups of galaxies arose from random quantum fluctuations in the early universe. These should be Gaussian random phase. This can be tested by quantitatively measuring the topology of large-scale structure in the universe using the genus statistic. If the original fluctuations were Gaussian random phase then the structure we see today should have a spongelike topology. A number of studies by our group and others have shown that this is indeed the case. Future tests using the Sloan Digital Sky Survey should be possible. Microwave background fluctuations should also exhibit a characteristic symmetric pattern of hot and cold spots. The COBE data are consistent with this pattern and the MAP and PLANCK satellites should provide a definitive test. If the original inflationary state was metastable then it should decay by making an infinite number of open inflationary bubble universes. This model makes a specific prediction for the power spectrum of fluctuations in the microwave background which can be checked by the MAP and PLANCK satellites. Finally, Gott and Li have proposed how a multiply connected cosmology with an early epoch of closed timelike curves might allow the universe to be its own mother.

  8. Needle Tip Position and Bevel Direction Have No Effect in the Fluoroscopic Epidural Spreading Pattern in Caudal Epidural Injections: A Randomized Trial

    PubMed Central

    Kwon, Won Kyoung; Kim, Ah Na; Lee, Pil Moo; Park, Cheol Hwan; Kim, Jae Hun

    2016-01-01

    Background. Caudal epidural steroid injections (CESIs) are an effective treatment for pain. If the injection spreads in a specific pattern depending on the needle position or bevel direction, it would be possible to inject the agent into a specific and desired area. Objectives. We conducted a prospective randomized trial to determine if the needle position and bevel direction have any effect on the epidural spreading pattern in CESI. Methods. Demographic data of the patients were collected. During CESI, the needle position (middle or lateral) and direction (ventral or dorsal) were randomly allocated. Following fluoroscopy-guided injection of 4 mL of contrast media and 10 mL of injectates, the epidural spreading patterns (ventral or dorsal, bilateral or lateral) were imaged. Results. In the 210 CESIs performed, the needle tip position and bevel direction did not influence the epidural spreading patterns at the L4-5 and L5-S1 disc levels. A history of lumbar spine surgery was associated with a significantly limited spread to each disc level. A midline needle tip position was more effective than the lateral position in spreading to the distant disc levels. Conclusions. Neither the needle tip position nor the bevel direction affected the epidural drug spreading pattern during CESI. PMID:27445609

  9. Image Correlation Pattern Optimization for Micro-Scale In-Situ Strain Measurements

    NASA Technical Reports Server (NTRS)

    Bomarito, G. F.; Hochhalter, J. D.; Cannon, A. H.

    2016-01-01

    The accuracy and precision of digital image correlation (DIC) is a function of three primary ingredients: image acquisition, image analysis, and the subject of the image. Development of the first two (i.e. image acquisition techniques and image correlation algorithms) has led to widespread use of DIC; however, fewer developments have been focused on the third ingredient. Typically, subjects of DIC images are mechanical specimens with either a natural surface pattern or a pattern applied to the surface. Research in the area of DIC patterns has primarily been aimed at identifying which surface patterns are best suited for DIC, by comparing patterns to each other. Because the easiest and most widespread methods of applying patterns have a high degree of randomness associated with them (e.g., airbrush, spray paint, particle decoration, etc.), less effort has been spent on exact construction of ideal patterns. With the development of patterning techniques such as microstamping and lithography, patterns can be applied to a specimen pixel by pixel from a patterned image. In these cases, especially because the patterns are reused many times, an optimal pattern is sought such that error introduced into DIC from the pattern is minimized. DIC consists of tracking the motion of an array of nodes from a reference image to a deformed image. Every pixel in the images has an associated intensity (grayscale) value, with discretization depending on the bit depth of the image. Because individual pixel matching by intensity value yields a non-unique, scale-dependent problem, subsets around each node are used for identification. A correlation criterion is used to find the best match of a particular subset of a reference image within a deformed image. The reader is referred to the references for enumerations of typical correlation criteria. As illustrated by Schreier and Sutton, and by Lu and Cary, systematic errors can be introduced by representing the underlying deformation with under-matched shape functions. An important implication, as discussed by Sutton et al., is that in the presence of highly localized deformations (e.g., crack fronts), error can be reduced by minimizing the subset size. In other words, smaller subsets allow the more accurate resolution of localized deformations. Conversely, the choice of optimal subset size has been widely studied, and the general consensus is that larger subsets with more information content are less prone to random error. Thus, an optimal subset size balances the systematic error from under-matched deformations with random error from measurement noise. The alternative approach pursued in the current work is to choose a small subset size and optimize the information content within (i.e., optimizing an applied DIC pattern), rather than finding an optimal subset size. In the literature, many pattern quality metrics have been proposed, e.g., sum of square intensity gradient (SSSIG), mean subset fluctuation, gray level co-occurrence, autocorrelation-based metrics, and speckle-based metrics. The majority of these metrics were developed to quantify the quality of common pseudo-random patterns after they have been applied, and were not created with the intent of pattern generation. As such, it is found that none of the metrics examined in this study are fit to be the objective function of a pattern generation optimization. In some cases, such as with speckle-based metrics, application to pixel-by-pixel patterns is ill-conditioned and requires somewhat arbitrary extensions.
In other cases, such as with the SSSIG, it is shown that trivial solutions exist for the optimum of the metric which are ill-suited for DIC (such as a checkerboard pattern). In the current work, a multi-metric optimization method is proposed whereby quality is viewed as a combination of individual quality metrics. Specifically, the SSSIG and two autocorrelation metrics are used, which have generally competing objectives. Thus, each metric can be viewed as a constraint imposed upon the others, thereby precluding the achievement of their trivial solutions. In this way, optimization produces a pattern that balances the benefits of multiple quality metrics. The resulting pattern, along with randomly generated patterns, is subjected to numerical deformations and analyzed with DIC software. The optimal pattern is shown to outperform the randomly generated patterns.
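
    The sketch below illustrates two of the pattern-quality ideas discussed above, a sum of squared intensity gradients (SSSIG-style measure) and a simple autocorrelation-based measure, and shows why a checkerboard can score highly on the former while being poorly suited for correlation; both functions are simplified illustrations, not the authors' implementations.

```python
import numpy as np

rng = np.random.default_rng(0)

def sssig(img):
    """Sum of squared intensity gradients: more gradient content within a
    subset generally means more correlatable texture."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.sum(gx**2 + gy**2))

def autocorr_sharpness(img):
    """Crude autocorrelation metric: inverse of the largest off-center value
    of the normalized (circular) autocorrelation; higher means more unique."""
    f = np.fft.fft2(img - img.mean())
    ac = np.fft.fftshift(np.fft.ifft2(f * np.conj(f)).real)
    ac = ac / ac.max()
    c0, c1 = np.array(ac.shape) // 2
    ac[c0, c1] = 0.0                      # suppress the trivial zero-shift peak
    return 1.0 / (ac.max() + 1e-9)

speckle = rng.integers(0, 256, (21, 21))                   # random 21x21 subset
checker = (np.indices((21, 21)).sum(axis=0) % 2) * 255     # trivial SSSIG "optimum"

for name, pat in [("random speckle", speckle), ("checkerboard", checker)]:
    print(f"{name}: SSSIG = {sssig(pat):.0f}, autocorr sharpness = {autocorr_sharpness(pat):.2f}")
```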

  10. Normal and tumoral melanocytes exhibit q-Gaussian random search patterns.

    PubMed

    da Silva, Priscila C A; Rosembach, Tiago V; Santos, Anésia A; Rocha, Márcio S; Martins, Marcelo L

    2014-01-01

    In multicellular organisms, cell motility is central in all morphogenetic processes, tissue maintenance, wound healing and immune surveillance. Hence, failures in its regulation potentiate numerous diseases. Here, cell migration assays on plastic 2D surfaces were performed using normal (Melan A) and tumoral (B16F10) murine melanocytes in random motility conditions. The trajectories of the centroids of the cell perimeters were tracked through time-lapse microscopy. The statistics of these trajectories were analyzed by building velocity and turn angle distributions, as well as velocity autocorrelations and the scaling of mean-squared displacements. We find that these cells exhibit a crossover from normal to super-diffusive motion without angular persistence at long time scales. Moreover, these melanocytes move with non-Gaussian velocity distributions. This major finding indicates that amongst those animal cells supposedly migrating through Lévy walks, some may instead perform q-Gaussian walks. Furthermore, our results reveal that B16F10 cells infected by mycoplasmas exhibit essentially the same diffusivity as their healthy counterparts. Finally, a q-Gaussian random walk model was proposed to account for these melanocytic migratory traits. Simulations based on this model correctly describe the crossover to super-diffusivity in the cell migration tracks.
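
    A toy version of such a q-Gaussian walk can be simulated as below, assuming the standard correspondence between q-Gaussians with 1 < q < 3 and rescaled Student's t distributions with nu = (3 - q)/(q - 1) degrees of freedom; the parameter values are arbitrary and the sketch is not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(7)

def q_gaussian_steps(q, n, scale=1.0):
    """Draw 2D step velocities from a q-Gaussian (1 < q < 3) via its equivalence
    to a rescaled Student's t with nu = (3 - q) / (q - 1) degrees of freedom."""
    nu = (3.0 - q) / (q - 1.0)
    return scale * rng.standard_t(nu, size=(n, 2))

# Uncorrelated 2D walk with q-Gaussian velocities (no angular persistence)
q, n_steps = 1.5, 5000
track = np.cumsum(q_gaussian_steps(q, n_steps), axis=0)

# Mean-squared displacement MSD(tau) ~ tau^alpha: alpha ~ 1 indicates normal
# diffusion, alpha > 1 indicates super-diffusion.
lags = np.unique(np.logspace(0, 3, 20).astype(int))
msd = np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1)) for lag in lags])
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
print(f"fitted MSD exponent alpha = {alpha:.2f}")
```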

  11. Modeling Elevation and Aspect Controls on Emerging Ecohydrologic Processes and Ecosystem Patterns Using the Component-based Landlab Framework

    NASA Astrophysics Data System (ADS)

    Nudurupati, S. S.; Istanbulluoglu, E.; Adams, J. M.; Hobley, D. E. J.; Gasparini, N. M.; Tucker, G. E.; Hutton, E. W. H.

    2014-12-01

    Topography plays a commanding role in the organization of ecohydrologic processes and resulting vegetation patterns. In the southwestern United States, climate conditions lead to terrain aspect- and elevation-controlled ecosystems, with mesic north-facing and xeric south-facing vegetation types, and changes in biodiversity as a function of elevation, from shrublands at low desert elevations, to mixed grass/shrublands at mid elevations, and forests at high elevations and ridge tops. These observed patterns have been attributed to differences in topography-mediated local soil moisture availability, micro-climatology, and life history processes of plants that control the chances of plant establishment and survival. While ecohydrologic models represent local vegetation dynamics in sufficient detail at sub-hourly time scales, plant life history and competition for space and resources have not been adequately represented in models. In this study we develop an ecohydrologic cellular automata model within the Landlab component-based modeling framework. This model couples local vegetation dynamics (biomass production, death) with plant establishment and competition processes for resources and space. The model is used to study vegetation organization in a semiarid New Mexico catchment where elevation and hillslope aspect play a defining role in plant types. Processes that lead to the observed plant types across the landscape are examined by initializing the domain with randomly assigned plant types and systematically changing model parameters that couple plant response with soil moisture dynamics. Climate perturbation experiments are conducted to examine the plant response in space and time. Understanding these inherently transient ecohydrologic systems is critical to improve predictions of climate change impacts on ecosystems.

  12. Reproductive pair correlations and the clustering of organisms.

    PubMed

    Young, W R; Roberts, A J; Stuhne, G

    2001-07-19

    Clustering of organisms can be a consequence of social behaviour, or of the response of individuals to chemical and physical cues. Environmental variability can also cause clustering: for example, marine turbulence transports plankton and produces chlorophyll concentration patterns in the upper ocean. Even in a homogeneous environment, nonlinear interactions between species can result in spontaneous pattern formation. Here we show that a population of independent, random-walking organisms ('Brownian bugs'), reproducing by binary division and dying at constant rates, spontaneously aggregates. Using an individual-based model, we show that clusters form out of spatially homogeneous initial conditions without environmental variability, predator-prey interactions, kinesis or taxis. The clustering mechanism is reproductively driven: birth must always be adjacent to a living organism. This clustering can overwhelm diffusion and create non-Poissonian correlations between pairs of organisms (parent and offspring), leading to the emergence of patterns.
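
    The 'Brownian bug' mechanism is simple enough to sketch directly; the following toy simulation (with arbitrary rates and a crude variance-to-mean clustering check) is an illustration of the idea, not the authors' individual-based model.

```python
import numpy as np

rng = np.random.default_rng(3)

n0, steps = 500, 200
birth = death = 0.05       # per-individual, per-step probabilities
sigma = 0.005              # diffusion step standard deviation (unit square)

pos = rng.random((n0, 2))
for _ in range(steps):
    # diffuse (periodic boundaries keep the domain comparable over time)
    pos = (pos + rng.normal(0.0, sigma, pos.shape)) % 1.0
    # deaths at a constant rate
    pos = pos[rng.random(len(pos)) > death]
    # births by binary division: offspring start at the parent's location,
    # which is the reproductive correlation that drives the clustering
    parents = pos[rng.random(len(pos)) < birth]
    pos = np.vstack([pos, parents])

# Variance-to-mean ratio of counts per grid box: ~1 for a spatially random
# (Poisson) pattern, >1 indicates clustering.
counts, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=20, range=[[0, 1], [0, 1]])
print(f"N = {len(pos)}, variance/mean of box counts = {counts.var() / counts.mean():.2f}")
```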

  13. Recognition of building group patterns in topographic maps based on graph partitioning and random forest

    NASA Astrophysics Data System (ADS)

    He, Xianjin; Zhang, Xinchang; Xin, Qinchuan

    2018-02-01

    Recognition of building group patterns (i.e., the arrangement and form exhibited by a collection of buildings at a given mapping scale) is important to the understanding and modeling of geographic space and is hence essential to a wide range of downstream applications such as map generalization. Most of the existing methods develop rigid rules based on the topographic relationships between building pairs to identify building group patterns, and thus their applications are often limited. This study proposes a method to identify a variety of building group patterns that allows for map generalization. The method first identifies building group patterns from potential building clusters based on a machine-learning algorithm and further partitions the building clusters with no recognized patterns based on the graph partitioning method. The proposed method is applied to the datasets of three cities that are representative of the complex urban environment in Southern China. Assessment of the results based on the reference data suggests that the proposed method is able to recognize both regular (e.g., the collinear, curvilinear, and rectangular patterns) and irregular (e.g., the L-shaped, H-shaped, and high-density patterns) building group patterns well, given that the correctness values are consistently nearly 90% and the completeness values are all above 91% for the three study areas. The proposed method shows promise in automated recognition of building group patterns for map generalization.
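
    The machine-learning step can be pictured as a standard multi-class classifier over per-cluster features; the sketch below uses scikit-learn's random forest on synthetic placeholder features and labels (the actual features, graph construction, and partitioning step of the paper are not reproduced here).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(5)

# Hypothetical per-cluster features derived from a building graph, e.g. mean
# nearest-neighbour distance, orientation consistency, size variance, etc.;
# labels stand for pattern classes (0 = none, 1 = collinear, 2 = rectangular, 3 = L-shaped).
n = 600
X = rng.normal(size=(n, 5))
y = rng.integers(0, 4, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```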

  14. Adherence to Mediterranean diet and risk of developing cognitive disorders: An updated systematic review and meta-analysis of prospective cohort studies

    PubMed Central

    Wu, Lei; Sun, Dali

    2017-01-01

    Recent articles have presented inconsistent findings on the impact of the Mediterranean diet on the occurrence of cognitive disorders; therefore, we performed an updated systematic review and meta-analysis to evaluate the potential association and dose-response pattern with accumulating evidence. We searched PubMed and Embase for records relevant to this topic. A generic inverse-variance method was used to pool the outcome data for the continuous variable, and for the categories of high vs. low and median vs. low Mediterranean diet score, with a random-effects model. A generalized least-squares trend estimation model was used to estimate the potential dose-response pattern of the Mediterranean diet score on incident cognitive disorders. We identified 9 cohort studies involving 34,168 participants. Compared with the lowest category, the pooled analysis showed that the highest Mediterranean diet score was inversely associated with the development of cognitive disorders, with a pooled RR (95% CI) of 0.79 (0.70, 0.90). A Mediterranean diet score in the median category was not significantly associated with cognitive disorders. Dose-response analysis indicated an approximately linear relationship between the Mediterranean diet score and the incident risk of cognitive disorders. Further randomized controlled trials are warranted to confirm the observed association in different populations. PMID:28112268
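
    The random-effects pooling step can be illustrated with a DerSimonian-Laird estimator on made-up study-level relative risks; the numbers below are placeholders, not the studies analyzed in the review.

```python
import numpy as np

# Hypothetical per-study relative risks and 95% CIs for highest vs. lowest
# Mediterranean-diet adherence (values invented for illustration).
rr      = np.array([0.75, 0.85, 0.70, 0.95, 0.80])
ci_low  = np.array([0.60, 0.70, 0.52, 0.78, 0.62])
ci_high = np.array([0.94, 1.03, 0.94, 1.16, 1.03])

# Work on the log scale; standard errors recovered from the CI width
y = np.log(rr)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
w = 1.0 / se**2

# DerSimonian-Laird estimate of the between-study variance tau^2
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects pooled estimate and its confidence interval
w_star = 1.0 / (se**2 + tau2)
y_re = np.sum(w_star * y) / np.sum(w_star)
se_re = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled RR = {np.exp(y_re):.2f} "
      f"(95% CI {np.exp(y_re - 1.96 * se_re):.2f}-{np.exp(y_re + 1.96 * se_re):.2f})")
```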

  15. A Western Dietary Pattern Increases Prostate Cancer Risk: A Systematic Review and Meta-Analysis.

    PubMed

    Fabiani, Roberto; Minelli, Liliana; Bertarelli, Gaia; Bacci, Silvia

    2016-10-12

    Dietary patterns have recently been applied to examine the relationship between eating habits and prostate cancer (PC) risk. While the associations between PC risk and the glycemic index and Mediterranean score have been reviewed, no meta-analysis is currently available on dietary patterns defined by "a posteriori" methods. A literature search was carried out (PubMed, Web of Science) to identify studies reporting the relationship between dietary patterns and PC risk. Relevant dietary patterns were selected, and the risk estimates were pooled with a random-effects model. Multivariable-adjusted odds ratios (ORs), for a first-percentile increase in dietary pattern score, were combined by a dose-response meta-analysis. Twelve observational studies were included in the meta-analysis, which identified a "Healthy pattern" and a "Western pattern". The Healthy pattern was not related to PC risk (OR = 0.96; 95% confidence interval (CI): 0.88-1.04) while the Western pattern significantly increased it (OR = 1.34; 95% CI: 1.08-1.65). In addition, the "Carbohydrate pattern", which was analyzed in four articles, was positively associated with a higher PC risk (OR = 1.64; 95% CI: 1.35-2.00). A significant linear trend between the Western (p = 0.011) and Carbohydrate (p = 0.005) pattern scores and increasing PC risk was observed. The small number of studies included in the meta-analysis suggests that further investigation is necessary to support these findings.

  16. Behavioral effects of environmental enrichment on harbor seals (Phoca vitulina concolor) and gray seals (Halichoerus grypus)

    USGS Publications Warehouse

    Hunter, S.A.; Bay, M.S.; Martin, M.L.; Hatfield, J.S.

    2002-01-01

    Zoos and aquariums have been incorporating environmental enrichment into their animal care programs for the past 30 years to increase mental stimulation and promote natural behaviors. However, most attempts to document the effects of enrichment on animal behavior have focused on terrestrial mammals. Staff at the National Aquarium in Baltimore conducted an investigation of the behavioral effects of enrichment on the seven harbor seals and two gray seals housed in the aquarium's outdoor seal exhibit. We expected that enrichment would change the amount of time the animals spent engaged in specific behaviors. The behaviors recorded were: resting in water, resting hauled out, maintenance, breeding display, breeding behavior, aggression, pattern swimming, random swimming, exploration, and out of sight. Activity levels (random swimming and exploration) were expected to increase, while stereotypic behaviors (pattern swimming) were expected to decrease. The frequency and duration of behaviors were documented for 90 hr in both the control phase (without enrichment) and the experimental phase (with enrichment). Statistically significant differences (P<0.05) in the time spent in pattern swimming, random swimming, exploration, and out of sight were observed between the two phases. With enrichment, pattern swimming and out of sight decreased, while random swimming and exploration behavior increased. These findings demonstrate that enrichment can promote behaviors (random swimming and exploration) that are likely to be normal for phocids in the wild, and that may contribute to the behavioral complexity of these seals in captivity.

  17. Behavioral effects of environmental enrichment on harbor seals (Phoca vitulina concolor) and gray seals (Halichoerus grypus)

    USGS Publications Warehouse

    Hunter, S.A.; Bay, M.S.; Martin, M.L.; Hatfield, J.S.

    2002-01-01

    Zoos and aquariums have been incorporating environmental enrichment into their animal care programs for the past 30 years to increase mental stimulation and promote natural behaviors. However, most attempts to document the effects of enrichment on animal behavior have focused on terrestrial mammals. Staff at the National Aquarium in Baltimore conducted an investigation of the behavioral effects of enrichment on the seven harbor seals and two gray seals housed in the aquarium's outdoor seal exhibit. We expected that enrichment would change the amount of time the animals spent engaged in specific behaviors. The behaviors recorded were: resting in water, resting hauled out, maintenance, breeding display, breeding behavior, aggression, pattern swimming, random swimming, exploration, and out of sight. Activity levels (random swimming and exploration) were expected to increase, while stereotypic behaviors (pattern swimming) were expected to decrease. The frequency and duration of behaviors were documented for 90 hr in both the control phase (without enrichment) and the experimental phase (with enrichment). Statistically significant differences (P < 0.05) in the time spent in pattern swimming, random swimming, exploration, and out of sight were observed between the two phases. With enrichment, pattern swimming and out of sight decreased, while random swimming and exploration behavior increased. These findings demonstrate that enrichment can promote behaviors (random swimming and exploration) that are likely to be normal for phocids in the wild, and that may contribute to the behavioral complexity of these seals in captivity. © 2002 Wiley-Liss, Inc.

  18. Identifying key demographic parameters of a small island-associated population of Indo-Pacific bottlenose dolphins (Reunion, Indian Ocean).

    PubMed

    Dulau, Violaine; Estrade, Vanessa; Fayan, Jacques

    2017-01-01

    Photo-identification surveys of Indo-Pacific bottlenose dolphins were conducted from 2009 to 2014 off Reunion Island (55°33'E, 21°07'S), in the Indian Ocean. Robust Design models were applied to produce the most reliable estimate of population abundance and survival rate, while accounting for temporary emigration from the survey area (west coast). The sampling scheme consisted of a five-month (June-October) sampling period in each year of the study. The overall population size at Reunion was estimated to be 72 individuals (SE = 6.17, 95%CI = 61-85), based on a random temporary emigration (γ") of 0.096 and a proportion of 0.70 (SE = 0.03) distinct individuals. The annual survival rate was 0.93 (±0.018 SE, 95%CI = 0.886-0.958) and was constant over time and between sexes. Models considering gender groups indicated different movement patterns between males and females. Males showed null or quasi-null temporary emigration (γ" = γ' < 0.01), while females showed a random temporary emigration (γ") of 0.10, suggesting that a small proportion of females was outside the survey area during each primary sampling period. Sex-specific temporary emigration patterns were consistent with movement and residency patterns observed in other areas. The Robust Design approach provided an appropriate sampling scheme for deriving island-associated population parameters, while allowing survey effort to be restricted both spatially (i.e., the west coast only) and temporally (five months per year). Although abundance and survival were stable over the six years, the small population size of fewer than 100 individuals suggested that this population is highly vulnerable. Priority should be given to reducing any potential impact of human activity on the population and its habitat.

  19. Growth dynamics explain the development of spatiotemporal burst activity of young cultured neuronal networks in detail.

    PubMed

    Gritsun, Taras A; le Feber, Joost; Rutten, Wim L C

    2012-01-01

    A typical property of isolated cultured neuronal networks of dissociated rat cortical cells is synchronized spiking, called bursting, starting about one week after plating, when the dissociated cells have sufficiently sent out their neurites and formed enough synaptic connections. This paper is the third in a series of three on simulation models of cultured networks. Our two previous studies [26], [27] have shown that random recurrent network activity models generate intra- and inter-bursting patterns similar to experimental data. The networks were noise- or pacemaker-driven and had Izhikevich neuronal elements with only short-term plastic (STP) synapses (so, no long-term potentiation, LTP, or depression, LTD, was included). However, elevated pre-phases (burst leaders) and after-phases of burst main shapes, which usually arise during the development of the network, were not yet simulated in sufficient detail. This lack of detail may be due to the fact that the random models completely lacked network topology and a growth model. Therefore, the present paper adds, for the first time, a growth model to the activity model, to give the network a time-dependent topology and to explain burst shapes in more detail. Again, no LTP or LTD mechanisms are included. The integrated growth-activity model yielded realistic bursting patterns. The automatic adjustment of various mutually interdependent network parameters is one of the major advantages of our current approach. Spatio-temporal bursting activity was validated against experimental data. Depending on network size, wave reverberation mechanisms were seen along the network boundaries, which may explain the generation of phases of elevated firing before and after the main phase of the burst shape. In summary, the results show that adding topology and growth explains burst shapes in great detail and suggests that young networks still lack, or do not need, LTP or LTD mechanisms.

  20. Selective investment promotes cooperation in public goods game

    NASA Astrophysics Data System (ADS)

    Li, Jing; Wu, Te; Zeng, Gang; Wang, Long

    2012-08-01

    Most previous investigations of the spatial Public Goods Game assume that individuals treat neighbors equivalently, which is in sharp contrast with realistic situations, where bias is ubiquitous. We construct a model to study how a selective investment mechanism affects the evolution of cooperation. Cooperators selectively contribute to just a fraction of their neighbors. According to the interaction results, the investment network can be adapted. Three patterns of selecting investees are considered. In the random pattern, cooperators choose their investees among the neighbors equiprobably. In the social-preference pattern, cooperators tend to invest in individuals possessing many social ties. In the wealth-preference pattern, cooperators are more likely to invest in neighbors with higher payoffs. Our results show that the selective investment mechanism robustly boosts the emergence and maintenance of cooperation. Cooperation is somewhat hampered under the latter two patterns, and we show that anti-social-preference or anti-wealth-preference selection of investees can accelerate cooperation to some extent. Furthermore, the theoretical analysis of our mechanism on double-star networks coincides with the simulation results. We hope our findings shed light on the emergence of cooperation among adaptive populations.

  1. Pattern does not equal process: what does patch occupancy really tell us about metapopulation dynamics?

    PubMed

    Clinchy, Michael; Haydon, Daniel T; Smith, Andrew T

    2002-04-01

    Patch occupancy surveys are commonly used to parameterize metapopulation models. If isolation predicts patch occupancy, this is generally attributed to a balance between distance-dependent recolonization and spatially independent extinctions. We investigated whether similar patterns could also be generated by a process of spatially correlated extinctions following a unique colonization event (analogous to nonequilibrium processes in island biogeography). We simulated effects of spatially correlated extinctions on patterns of patch occupancy among pikas (Ochotona princeps) at Bodie, California, using randomly located extinction disks to represent the likely effects of predation. Our simulations produced similar patterns to those cited as evidence of balanced metapopulation dynamics. Simulations using a variety of disk sizes and patch configurations confirmed that our results are potentially applicable to a broad range of species and sites. Analyses of the observed patterns of patch occupancy at Bodie revealed little evidence of rescue effects and strong evidence that most recolonizations are ephemeral in nature. Persistence will be overestimated if static or declining patterns of patch occupancy are mistakenly attributed to dynamically stable metapopulation processes. Consequently, simple patch occupancy surveys should not be considered as substitutes for detailed experimental tests of hypothesized population processes, particularly when conservation concerns are involved.
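
    The disk-extinction idea lends itself to a compact simulation; the sketch below applies random 'predation' disks to a synthetic patch network and then asks whether isolation appears to predict occupancy even though no distance-dependent recolonization was involved (all coordinates, radii, and counts are arbitrary, not the Bodie configuration).

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import pointbiserialr

rng = np.random.default_rng(11)

# Synthetic patch network: every patch starts occupied, then spatially
# correlated extinctions are applied as randomly located disks.
n_patches = 150
xy = rng.random((n_patches, 2)) * 10.0        # patch coordinates
occupied = np.ones(n_patches, dtype=bool)

n_disks, radius = 6, 1.2
for centre in rng.random((n_disks, 2)) * 10.0:
    occupied &= np.linalg.norm(xy - centre, axis=1) > radius

# Isolation: distance from each patch to the nearest other occupied patch
d = cdist(xy, xy[occupied])
d[d == 0] = np.inf
isolation = d.min(axis=1)

# Correlation between occupancy and isolation, mimicking the pattern usually
# read as evidence of distance-dependent recolonization
r, p = pointbiserialr(occupied.astype(int), isolation)
print(f"occupied: {occupied.sum()}/{n_patches}, point-biserial r = {r:.2f}, p = {p:.3f}")
```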

  2. Synaptic Scaling in Combination with Many Generic Plasticity Mechanisms Stabilizes Circuit Connectivity

    PubMed Central

    Tetzlaff, Christian; Kolodziejski, Christoph; Timme, Marc; Wörgötter, Florentin

    2011-01-01

    Synaptic scaling is a slow process that modifies synapses, keeping the firing rate of neural circuits in specific regimes. Together with other processes, such as conventional synaptic plasticity in the form of long term depression and potentiation, synaptic scaling changes the synaptic patterns in a network, ensuring diverse, functionally relevant, stable, and input-dependent connectivity. How synaptic patterns are generated and stabilized, however, is largely unknown. Here we formally describe and analyze synaptic scaling based on results from experimental studies and demonstrate that the combination of different conventional plasticity mechanisms and synaptic scaling provides a powerful general framework for regulating network connectivity. In addition, we design several simple models that reproduce experimentally observed synaptic distributions as well as the observed synaptic modifications during sustained activity changes. These models predict that the combination of plasticity with scaling generates globally stable, input-controlled synaptic patterns, also in recurrent networks. Thus, in combination with other forms of plasticity, synaptic scaling can robustly yield neuronal circuits with high synaptic diversity, which potentially enables robust dynamic storage of complex activation patterns. This mechanism is even more pronounced when considering networks with a realistic degree of inhibition. Synaptic scaling combined with plasticity could thus be the basis for learning structured behavior even in initially random networks. PMID:22203799

  3. Network approach to patterns in stratocumulus clouds

    NASA Astrophysics Data System (ADS)

    Glassmeier, Franziska; Feingold, Graham

    2017-10-01

    Stratocumulus clouds (Sc) have a significant impact on the amount of sunlight reflected back to space, with important implications for Earth’s climate. Representing Sc and their radiative impact is one of the largest challenges for global climate models. Sc fields self-organize into cellular patterns and thus lend themselves to analysis and quantification in terms of natural cellular networks. Based on large-eddy simulations of Sc fields, we present a first analysis of the geometric structure and self-organization of Sc patterns from this network perspective. Our network analysis shows that the Sc pattern is scale-invariant as a consequence of entropy maximization that is known as Lewis’s Law (scaling parameter: 0.16) and is largely independent of the Sc regime (cloud-free vs. cloudy cell centers). Cells are, on average, hexagonal with a neighbor number variance of about 2, and larger cells tend to be surrounded by smaller cells, as described by an Aboav-Weaire parameter of 0.9. The network structure is neither completely random nor characteristic of natural convection. Instead, it emerges from Sc-specific versions of cell division and cell merging that are shaped by cell expansion. This is shown with a heuristic model of network dynamics that incorporates our physical understanding of cloud processes.
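
    The cellular-network statistics mentioned here (neighbour-number variance and a Lewis-type area scaling) can be computed from a Voronoi tessellation of cell centres; the sketch below uses random points as a stand-in for detected cloud-cell centres and is only a rough illustration of the analysis, not the authors' pipeline.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(2)

pts = rng.random((2000, 2))        # stand-in for cloud-cell centres
vor = Voronoi(pts)

# Neighbour relations from Voronoi ridges (each ridge joins two input points)
neigh = [set() for _ in range(len(pts))]
for a, b in vor.ridge_points:
    neigh[a].add(b)
    neigh[b].add(a)

def cell_area(i):
    region = vor.regions[vor.point_region[i]]
    if len(region) == 0 or -1 in region:
        return np.nan                          # open cell on the domain boundary
    v = vor.vertices[region]
    x, y = v[:, 0], v[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

areas = np.array([cell_area(i) for i in range(len(pts))])
inner = ~np.isnan(areas)
n_sides = np.array([len(neigh[i]) for i in range(len(pts))])[inner]
areas = areas[inner]

# Neighbour-number variance (reported as about 2 for Sc fields) and a
# Lewis-type check: mean normalized cell area as a function of side number.
print("neighbour-number variance:", round(float(n_sides.var()), 2))
for n in range(4, 9):
    sel = n_sides == n
    if sel.any():
        print(f"{n}-sided cells: mean area / <area> = {areas[sel].mean() / areas.mean():.2f}")
```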

  4. Evolving building blocks of rhythm: how human cognition creates music via cultural transmission.

    PubMed

    Ravignani, Andrea; Thompson, Bill; Grossi, Thomas; Delgado, Tania; Kirby, Simon

    2018-03-06

    Why does musical rhythm have the structure it does? Musical rhythm, in all its cross-cultural diversity, exhibits commonalities across world cultures. Traditionally, music research has been split into two fields. Some scientists focused on musicality, namely the human biocognitive predispositions for music, with an emphasis on cross-cultural similarities. Other scholars investigated music, seen as a cultural product, focusing on the variation in world musical cultures. Recent experiments found deep connections between music and musicality, reconciling these opposing views. Here, we address the question of how individual cognitive biases affect the process of cultural evolution of music. Data from two experiments are analyzed using two complementary techniques. In the experiments, participants hear drumming patterns and imitate them. These patterns are then given to the same or another participant to imitate. The structure of these initially random patterns is tracked along experimental "generations." Frequentist statistics show how participants' biases are amplified by cultural transmission, making drumming patterns more structured. Structure is achieved faster in transmission within rather than between participants. A Bayesian model approximates the motif structures participants learned and created. Our data and models suggest that individual biases for musicality may shape the cultural transmission of musical rhythm. © 2018 New York Academy of Sciences.

  5. Network approach to patterns in stratocumulus clouds.

    PubMed

    Glassmeier, Franziska; Feingold, Graham

    2017-10-03

    Stratocumulus clouds (Sc) have a significant impact on the amount of sunlight reflected back to space, with important implications for Earth's climate. Representing Sc and their radiative impact is one of the largest challenges for global climate models. Sc fields self-organize into cellular patterns and thus lend themselves to analysis and quantification in terms of natural cellular networks. Based on large-eddy simulations of Sc fields, we present a first analysis of the geometric structure and self-organization of Sc patterns from this network perspective. Our network analysis shows that the Sc pattern is scale-invariant as a consequence of entropy maximization that is known as Lewis's Law (scaling parameter: 0.16) and is largely independent of the Sc regime (cloud-free vs. cloudy cell centers). Cells are, on average, hexagonal with a neighbor number variance of about 2, and larger cells tend to be surrounded by smaller cells, as described by an Aboav-Weaire parameter of 0.9. The network structure is neither completely random nor characteristic of natural convection. Instead, it emerges from Sc-specific versions of cell division and cell merging that are shaped by cell expansion. This is shown with a heuristic model of network dynamics that incorporates our physical understanding of cloud processes.

  6. Network approach to patterns in stratocumulus clouds

    PubMed Central

    Feingold, Graham

    2017-01-01

    Stratocumulus clouds (Sc) have a significant impact on the amount of sunlight reflected back to space, with important implications for Earth’s climate. Representing Sc and their radiative impact is one of the largest challenges for global climate models. Sc fields self-organize into cellular patterns and thus lend themselves to analysis and quantification in terms of natural cellular networks. Based on large-eddy simulations of Sc fields, we present a first analysis of the geometric structure and self-organization of Sc patterns from this network perspective. Our network analysis shows that the Sc pattern is scale-invariant as a consequence of entropy maximization that is known as Lewis’s Law (scaling parameter: 0.16) and is largely independent of the Sc regime (cloud-free vs. cloudy cell centers). Cells are, on average, hexagonal with a neighbor number variance of about 2, and larger cells tend to be surrounded by smaller cells, as described by an Aboav–Weaire parameter of 0.9. The network structure is neither completely random nor characteristic of natural convection. Instead, it emerges from Sc-specific versions of cell division and cell merging that are shaped by cell expansion. This is shown with a heuristic model of network dynamics that incorporates our physical understanding of cloud processes. PMID:28904097

  7. Animal-to-animal variability in the phasing of the crustacean cardiac motor pattern: an experimental and computational analysis

    PubMed Central

    Williams, Alex H.; Kwiatkowski, Molly A.; Mortimer, Adam L.; Marder, Eve; Zeeman, Mary Lou

    2013-01-01

    The cardiac ganglion (CG) of Homarus americanus is a central pattern generator that consists of two oscillatory groups of neurons: “small cells” (SCs) and “large cells” (LCs). We have shown that SCs and LCs begin their bursts nearly simultaneously but end their bursts at variable phases. This variability contrasts with many other central pattern generator systems in which phase is well maintained. To determine both the consequences of this variability and how CG phasing is controlled, we modeled the CG as a pair of Morris-Lecar oscillators coupled by electrical and excitatory synapses and constructed a database of 15,000 simulated networks using random parameter sets. These simulations, like our experimental results, displayed variable phase relationships, with the bursts beginning together but ending at variable phases. The model suggests that the variable phasing of the pattern has important implications for the functional role of the excitatory synapses. In networks in which the two oscillators had similar duty cycles, the excitatory coupling functioned to increase cycle frequency. In networks with disparate duty cycles, it functioned to decrease network frequency. Overall, we suggest that the phasing of the CG may vary without compromising appropriate motor output and that this variability may critically determine how the network behaves in response to manipulations. PMID:23446690

  8. Overlay improvement by exposure map based mask registration optimization

    NASA Astrophysics Data System (ADS)

    Shi, Irene; Guo, Eric; Chen, Ming; Lu, Max; Li, Gordon; Li, Rivan; Tian, Eric

    2015-03-01

    Along with the increased miniaturization of semiconductor electronic devices, the design rules of advanced semiconductor devices shrink dramatically. [1] One of the main challenges of the lithography step is layer-to-layer overlay control. Furthermore, as DPT (Double Patterning Technology) has been adopted for advanced technology nodes such as 28nm and 14nm, the corresponding overlay budget becomes even tighter. [2][3] After in-die mask registration (pattern placement) measurement was introduced, model analysis with a KLA SOV (sources of variation) tool showed that the registration difference between masks is a significant error source of wafer layer-to-layer overlay for the 28nm process. [4][5] Optimizing mask registration would therefore substantially improve wafer overlay performance. It has been reported that a laser-based registration control (RegC) process can be applied after pattern generation or after pellicle mounting to allow fine tuning of the mask registration. [6] In this paper we propose a novel method of mask registration correction that can be applied before mask writing, based on the mask exposure map and considering mask chip layout, writing sequence, and pattern density distribution. Our experimental data show that if the pattern density on the mask is kept at a low level, the in-die mask registration residual error (3 sigma) stays under 5nm regardless of the blank type and the related writer POSCOR (position correction) file applied; this indicates that random error induced by material or equipment occupies a relatively fixed portion of the mask registration error budget. In production, comparison of the mask registration difference across critical layers reveals that the registration residual error of line/space layers with higher pattern density is consistently much larger than that of contact-hole layers with lower pattern density. Additionally, the mask registration difference between layers with similar pattern density can also stay under 5nm. We assume that the mask registration error, excluding random error, is mostly induced by charge accumulation during mask writing, which can be estimated from the surrounding exposed pattern density. Multi-loading test results show that, with an x-direction writing sequence, mask registration behavior in the x direction is mainly related to the sequence direction, whereas mask registration in the y direction is strongly affected by the pattern density distribution map. This confirms that part of the mask registration error is due to charging from the nearby environment. If the exposure sequence is chip by chip, as in a typical multi-chip layout, mask registration in both the x and y directions is affected analogously, which is also confirmed by production data. Therefore, we set up a simple model to predict the mask registration error based on the mask exposure map and, if needed, correct it with the given POSCOR (position correction) file for advanced mask writing.

  9. Multi-INT Complex Event Processing using Approximate, Incremental Graph Pattern Search

    DTIC Science & Technology

    2012-06-01

    Reported benchmark: total execution time for 10 executions each of 5 random pattern searches on synthetic data sets of 1,000 to 100,000 RDF triples, comparing the approximate graph pattern search algorithm against SPARQL queries (initial performance comparisons, 2011).

  10. Compensation for Lithography Induced Process Variations during Physical Design

    NASA Astrophysics Data System (ADS)

    Chin, Eric Yiow-Bing

    This dissertation addresses the challenge of designing robust integrated circuits in the deep sub micron regime in the presence of lithography process variability. By extending and combining existing process and circuit analysis techniques, flexible software frameworks are developed to provide detailed studies of circuit performance in the presence of lithography variations such as focus and exposure. Applications of these software frameworks to select circuits demonstrate the electrical impact of these variations and provide insight into variability aware compact models that capture the process dependent circuit behavior. These variability aware timing models abstract lithography variability from the process level to the circuit level and are used to estimate path level circuit performance with high accuracy with very little overhead in runtime. The Interconnect Variability Characterization (IVC) framework maps lithography induced geometrical variations at the interconnect level to electrical delay variations. This framework is applied to one dimensional repeater circuits patterned with both 90nm single patterning and 32nm double patterning technologies, under the presence of focus, exposure, and overlay variability. Studies indicate that single and double patterning layouts generally exhibit small variations in delay (between 1--3%) due to self compensating RC effects associated with dense layouts and overlay errors for layouts without self-compensating RC effects. The delay response of each double patterned interconnect structure is fit with a second order polynomial model with focus, exposure, and misalignment parameters with 12 coefficients and residuals of less than 0.1ps. The IVC framework is also applied to a repeater circuit with cascaded interconnect structures to emulate more complex layout scenarios, and it is observed that the variations on each segment average out to reduce the overall delay variation. The Standard Cell Variability Characterization (SCVC) framework advances existing layout-level lithography aware circuit analysis by extending it to cell-level applications utilizing a physically accurate approach that integrates process simulation, compact transistor models, and circuit simulation to characterize electrical cell behavior. This framework is applied to combinational and sequential cells in the Nangate 45nm Open Cell Library, and the timing response of these cells to lithography focus and exposure variations demonstrate Bossung like behavior. This behavior permits the process parameter dependent response to be captured in a nine term variability aware compact model based on Bossung fitting equations. For a two input NAND gate, the variability aware compact model captures the simulated response to an accuracy of 0.3%. The SCVC framework is also applied to investigate advanced process effects including misalignment and layout proximity. The abstraction of process variability from the layout level to the cell level opens up an entire new realm of circuit analysis and optimization and provides a foundation for path level variability analysis without the computationally expensive costs associated with joint process and circuit simulation. The SCVC framework is used with slight modification to illustrate the speedup and accuracy tradeoffs of using compact models. With variability aware compact models, the process dependent performance of a three stage logic circuit can be estimated to an accuracy of 0.7% with a speedup of over 50,000. 
Path level variability analysis also provides an accurate estimate (within 1%) of ring oscillator period in well under a second. Another significant advantage of variability aware compact models is that they can be easily incorporated into existing design methodologies for design optimization. This is demonstrated by applying cell swapping on a logic circuit to reduce the overall delay variability along a circuit path. By including these variability aware compact models in cell characterization libraries, design metrics such as circuit timing, power, area, and delay variability can be quickly assessed to optimize for the correct balance of all design metrics, including delay variability. Deterministic lithography variations can be easily captured using the variability aware compact models described in this dissertation. However, another prominent source of variability is random dopant fluctuations, which affect transistor threshold voltage and in turn circuit performance. The SCVC framework is utilized to investigate the interactions between deterministic lithography variations and random dopant fluctuations. Monte Carlo studies show that the output delay distribution in the presence of random dopant fluctuations is dependent on lithography focus and exposure conditions, with a 3.6 ps change in standard deviation across the focus exposure process window. This indicates that the electrical impact of random variations is dependent on systematic lithography variations, and this dependency should be included for precise analysis.

  11. The incidence of the different sources of noise on the uncertainty in radiochromic film dosimetry using single channel and multichannel methods

    NASA Astrophysics Data System (ADS)

    González-López, Antonio; Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen

    2017-11-01

    The influence of the various sources of noise on the uncertainty in radiochromic film (RCF) dosimetry using single channel and multichannel methods is investigated in this work. These sources of noise are extracted from pixel value (PV) readings and dose maps. Pieces of an RCF were each irradiated to different uniform doses, ranging from 0 to 1092 cGy. Then, the pieces were read at two resolutions (72 and 150 ppp) with two flatbed scanners: Epson 10000XL and Epson V800, representing two states of technology. Noise was extracted as described in ISO 15739 (2013), separating its distinct constituents: random noise and fixed pattern (FP) noise. Regarding the PV maps, FP noise is the main source of noise for both models of digitizer. Also, the standard deviation of the random noise in the 10000XL model is almost twice that of the V800 model. In the dose maps, the FP noise is smaller in the multichannel method than in the single channel ones. However, random noise is higher in this method, throughout the dose range. In the multichannel method, FP noise is reduced, as a consequence of this method’s ability to eliminate channel independent perturbations. However, the random noise increases, because the dose is calculated as a linear combination of the doses obtained by the single channel methods. The values of the coefficients of this linear combination are obtained in the present study, and the root of the sum of their squares is shown to range between 0.9 and 1.9 over the dose range studied. These results indicate the random noise to play a fundamental role in the uncertainty of RCF dosimetry: low levels of random noise are required in the digitizer to fully exploit the advantages of the multichannel dosimetry method. This is particularly important for measuring high doses at high spatial resolutions.
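
    The separation of random and fixed-pattern noise from repeated readings can be illustrated generically as below; this is the usual averaging decomposition, not the exact ISO 15739 procedure, and the simulated magnitudes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated repeated scans of a uniformly irradiated film piece: a fixed
# pattern (identical in every scan) plus per-scan random noise.
h, w, n_scans = 200, 200, 10
fixed_pattern = rng.normal(0.0, 4.0, (h, w))                    # scanner/film FP component
scans = 1000.0 + fixed_pattern + rng.normal(0.0, 2.0, (n_scans, h, w))

# Random (temporal) noise: pixelwise std over repeats, averaged over the image.
random_noise = scans.std(axis=0, ddof=1).mean()
# Fixed-pattern noise: spatial std of the pixelwise mean image (averaging over
# repeats suppresses the random component by roughly sqrt(n_scans)).
fp_noise = scans.mean(axis=0).std(ddof=1)

print(f"random noise ~ {random_noise:.2f} PV, fixed-pattern noise ~ {fp_noise:.2f} PV")
```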

  12. Taking a(c)count of eye movements: Multiple mechanisms underlie fixations during enumeration.

    PubMed

    Paul, Jacob M; Reeve, Robert A; Forte, Jason D

    2017-03-01

    We habitually move our eyes when we enumerate sets of objects. It remains unclear whether saccades are directed for numerosity processing as distinct from object-oriented visual processing (e.g., object saliency, scanning heuristics). Here we investigated the extent to which enumeration eye movements are contingent upon the location of objects in an array, and whether fixation patterns vary with enumeration demands. Twenty adults enumerated random dot arrays twice: first to report the set cardinality and second to judge the perceived number of subsets. We manipulated the spatial location of dots by presenting arrays at 0°, 90°, 180°, and 270° orientations. Participants required a similar time to enumerate the set or the perceived number of subsets in the same array. Fixation patterns were systematically shifted in the direction of array rotation, and distributed across similar locations when the same array was shown on multiple occasions. We modeled fixation patterns and dot saliency using a simple filtering model and show participants judged groups of dots in close proximity (2°-2.5° visual angle) as distinct subsets. Modeling results are consistent with the suggestion that enumeration involves visual grouping mechanisms based on object saliency, and specific enumeration demands affect spatial distribution of fixations. Our findings highlight the importance of set computation, rather than object processing per se, for models of numerosity processing.

  13. Addressing the Challenges of Obtaining Functional Outcomes in Traumatic Brain Injury Research: Missing Data Patterns, Timing of Follow-Up, and Three Prognostic Models

    PubMed Central

    Morrison, Laurie J.; Devlin, Sean M.; Bulger, Eileen M.; Brasel, Karen J.; Sheehan, Kellie; Minei, Joseph P.; Kerby, Jeffrey D.; Tisherman, Samuel A.; Rizoli, Sandro; Karmy-Jones, Riyad; van Heest, Rardi; Newgard, Craig D.

    2014-01-01

    Abstract Traumatic brain injury (TBI) is common and debilitating. Randomized trials of interventions for TBI ideally assess effectiveness by using long-term functional neurological outcomes, but such outcomes are difficult to obtain and costly. If there is little change between functional status at hospital discharge versus 6 months, then shorter-term outcomes may be adequate for use in future clinical trials. Using data from a previously published multi-center, randomized, placebo-controlled TBI clinical trial, we evaluated patterns of missing outcome data, changes in functional status between hospital discharge and 6 months, and three prognostic models to predict long-term functional outcome from covariates available at hospital discharge (functional measures, demographics, and injury characteristics). The Resuscitation Outcomes Consortium Hypertonic Saline trial enrolled 1282 TBI patients, obtaining the primary outcome of 6-month Glasgow Outcome Score Extended (GOSE) for 85% of patients, but missing the primary outcome for the remaining 15%. Patients with missing outcomes had less-severe injuries, higher neurological function at discharge (GOSE), and shorter hospital stays than patients whose GOSE was obtained. Of 1066 (83%) patients whose GOSE was obtained both at hospital discharge and at 6-months, 71% of patients had the same dichotomized functional status (severe disability/death vs. moderate/no disability) after 6 months as at discharge, 28% had an improved functional status, and 1% had worsened. Performance was excellent (C-statistic between 0.88 and 0.91) for all three prognostic models and calibration adequate for two models (p values, 0.22 and 0.85). Our results suggest that multiple imputation of the standard 6-month GOSE may be reasonable in TBI research when the primary outcome cannot be obtained through other means. PMID:24552494

  14. Addressing the challenges of obtaining functional outcomes in traumatic brain injury research: missing data patterns, timing of follow-up, and three prognostic models.

    PubMed

    Zelnick, Leila R; Morrison, Laurie J; Devlin, Sean M; Bulger, Eileen M; Brasel, Karen J; Sheehan, Kellie; Minei, Joseph P; Kerby, Jeffrey D; Tisherman, Samuel A; Rizoli, Sandro; Karmy-Jones, Riyad; van Heest, Rardi; Newgard, Craig D

    2014-06-01

    Traumatic brain injury (TBI) is common and debilitating. Randomized trials of interventions for TBI ideally assess effectiveness by using long-term functional neurological outcomes, but such outcomes are difficult to obtain and costly. If there is little change between functional status at hospital discharge versus 6 months, then shorter-term outcomes may be adequate for use in future clinical trials. Using data from a previously published multi-center, randomized, placebo-controlled TBI clinical trial, we evaluated patterns of missing outcome data, changes in functional status between hospital discharge and 6 months, and three prognostic models to predict long-term functional outcome from covariates available at hospital discharge (functional measures, demographics, and injury characteristics). The Resuscitation Outcomes Consortium Hypertonic Saline trial enrolled 1282 TBI patients, obtaining the primary outcome of 6-month Glasgow Outcome Score Extended (GOSE) for 85% of patients, but missing the primary outcome for the remaining 15%. Patients with missing outcomes had less-severe injuries, higher neurological function at discharge (GOSE), and shorter hospital stays than patients whose GOSE was obtained. Of 1066 (83%) patients whose GOSE was obtained both at hospital discharge and at 6-months, 71% of patients had the same dichotomized functional status (severe disability/death vs. moderate/no disability) after 6 months as at discharge, 28% had an improved functional status, and 1% had worsened. Performance was excellent (C-statistic between 0.88 and 0.91) for all three prognostic models and calibration adequate for two models (p values, 0.22 and 0.85). Our results suggest that multiple imputation of the standard 6-month GOSE may be reasonable in TBI research when the primary outcome cannot be obtained through other means.
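
    Evaluating such prognostic models typically comes down to discrimination (the C-statistic) and calibration; the sketch below fits a logistic model on simulated discharge-time covariates and reports both, with all variables and coefficients invented for illustration rather than taken from the ROC Hypertonic Saline data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(9)

# Simulated discharge-time covariates and a dichotomized 6-month outcome
# (1 = severe disability/death, 0 = moderate/no disability).
n = 1000
X = np.column_stack([
    rng.integers(1, 9, n),        # discharge GOSE
    rng.normal(40, 18, n),        # age
    rng.normal(25, 12, n),        # injury severity score
])
logit = -3.0 - 0.6 * X[:, 0] + 0.03 * X[:, 1] + 0.05 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]

# Discrimination (C-statistic) plus a crude two-bin calibration check
print(f"C-statistic = {roc_auc_score(y, p):.2f}")
for lo, hi in [(0.0, 0.5), (0.5, 1.0)]:
    sel = (p >= lo) & (p < hi)
    if sel.any():
        print(f"risk {lo:.1f}-{hi:.1f}: mean predicted {p[sel].mean():.2f}, observed {y[sel].mean():.2f}")
```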

  15. Automatic Estimation of Osteoporotic Fracture Cases by Using Ensemble Learning Approaches.

    PubMed

    Kilic, Niyazi; Hosgormez, Erkan

    2016-03-01

    Ensemble learning methods are among the most powerful tools for pattern classification problems. In this paper, the effects of ensemble learning methods and some physical bone densitometry parameters on osteoporotic fracture detection were investigated. Six feature set models were constructed including different physical parameters, and they were fed into the ensemble classifiers as input features. As ensemble learning techniques, bagging, gradient boosting and the random subspace method (RSM) were used. Instance-based learning (IBk) and random forest (RF) classifiers were applied to the six feature set models. The patients were classified into three groups, osteoporosis, osteopenia and control (healthy), using the ensemble classifiers. Total classification accuracy and F-measure were also used to evaluate the diagnostic performance of the proposed ensemble classification system. The classification accuracy reached 98.85% for model 6 (five BMD + five T-score values) using the RSM-RF classifier. The findings of this paper suggest that patients can be warned before a bone fracture occurs, simply by examining physical parameters that can be measured without invasive procedures.
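
    In scikit-learn terms, the ensemble techniques named above can be assembled roughly as follows; the feature matrix and labels are synthetic placeholders, the random subspace method is expressed as bagging over feature subsets, and gradient boosting is omitted for brevity.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Hypothetical feature set: five BMD and five T-score values per patient,
# with three classes (0 = control, 1 = osteopenia, 2 = osteoporosis).
n = 300
X = rng.normal(size=(n, 10))
y = rng.integers(0, 3, n)

models = {
    # bagging of an instance-based (k-NN, cf. IBk) base learner
    "bagging + kNN": BaggingClassifier(KNeighborsClassifier(), n_estimators=50, random_state=0),
    # random subspace method: ensemble members trained on random feature subsets
    "RSM + kNN": BaggingClassifier(KNeighborsClassifier(), n_estimators=50,
                                   bootstrap=False, max_features=0.5, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    f1 = cross_val_score(model, X, y, cv=5, scoring="f1_macro").mean()
    print(f"{name}: accuracy = {acc:.2f}, macro F-measure = {f1:.2f}")
```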

  16. Detecting Beer Intake by Unique Metabolite Patterns.

    PubMed

    Gürdeniz, Gözde; Jensen, Morten Georg; Meier, Sebastian; Bech, Lene; Lund, Erik; Dragsted, Lars Ove

    2016-12-02

    Evaluation of the health related effects of beer intake is hampered by the lack of accurate tools for assessing intakes (biomarkers). Therefore, we identified plasma and urine metabolites associated with recent beer intake by untargeted metabolomics and established a characteristic metabolite pattern representing raw materials and beer production as a qualitative biomarker of beer intake. In a randomized, crossover, single-blinded meal study (MSt1), 18 participants were given, one at a time, four different test beverages: strong, regular, and nonalcoholic beers and a soft drink. Four participants were assigned to have two additional beers (MSt2). In addition to plasma and urine samples, test beverages, wort, and hops extract were analyzed by UPLC-QTOF. A unique metabolite pattern reflecting beer metabolome, including metabolites derived from beer raw material (i.e., N-methyl tyramine sulfate and the sum of iso-α-acids and tricyclohumols) and the production process (i.e., pyro-glutamyl proline and 2-ethyl malate), was selected to establish a compliance biomarker model for detection of beer intake based on MSt1. The model predicted the MSt2 samples collected before and up to 12 h after beer intake correctly (AUC = 1). A biomarker model including four metabolites representing both beer raw materials and production steps provided a specific and accurate tool for measurement of beer consumption.

  17. Insights in connecting phenotypes in bacteria to coevolutionary information

    NASA Astrophysics Data System (ADS)

    Cheng, Ryan; Morcos, Faruck; Hayes, Ryan; Helm, Rodney; Levine, Herbert; Onuchic, Jose

    It has long been known that protein sequences are far from random. These sequences have been evolutionarily selected to maintain their ability to fold into stable, three-dimensional folded structures as well as their ability to form macromolecular assemblies, perform catalytic functions, etc. For these reasons, there exist quantifiable mutational patterns in the collection of sequence data for a protein family arising from the need to maintain favorable residue-residue interactions to facilitate folding as well as cellular function. Here, we focus on studying the correlated mutational patterns that give rise to interaction specificity in bacterial two-component signaling (TCS) systems. TCS proteins have evolved to be able to preferentially bind and transfer a phosphate group to their signaling partner while avoiding phosphotransfer with non-partners. We infer a Potts model Hamiltonian governing the correlated mutational patterns that are observed in the sequence data of TCS partners and apply this model to recently published in vivo mutational data. Our findings further support the notion that statistical models built from sequence data can be used to predict bacterial phenotypes as well as engineer interaction specificity between non-partner TCS proteins. This research has been supported by the NSF INSPIRE Award (MCB-1241332) and by the CTBP sponsored by the NSF (Grant PHY- 1427654).
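
    For reference, the inferred model is typically written in the standard direct-coupling form (the exact gauge and regularization used in the study may differ):

        % Potts Hamiltonian over an aligned sequence sigma = (sigma_1, ..., sigma_L),
        % with pairwise couplings J_ij and local fields h_i inferred from the MSA:
        H(\sigma) = -\sum_{1 \le i < j \le L} J_{ij}(\sigma_i, \sigma_j) - \sum_{i=1}^{L} h_i(\sigma_i),
        \qquad P(\sigma) = \frac{1}{Z} e^{-H(\sigma)}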

  18. Using ecological null models to assess the potential for marine protected area networks to protect biodiversity.

    PubMed

    Semmens, Brice X; Auster, Peter J; Paddack, Michelle J

    2010-01-27

    Marine protected area (MPA) networks have been proposed as a principal method for conserving biological diversity, yet patterns of diversity may ultimately complicate or compromise the development of such networks. We show how a series of ecological null models can be applied to assemblage data across sites in order to identify non-random biological patterns likely to influence the effectiveness of MPA network design. We use fish census data from Caribbean fore-reefs as a test system and demonstrate that: 1) site assemblages were nested, such that species found on sites with relatively few species were subsets of those found on sites with relatively many species, 2) species co-occurred across sites more than expected by chance once species-habitat associations were accounted for, and 3) guilds were most evenly represented at the richest sites and richness among all guilds was correlated (i.e., species and trophic diversity were closely linked). These results suggest that the emerging Caribbean marine protected area network will likely be successful at protecting regional diversity even if planning is largely constrained by insular, inventory-based design efforts. By recasting ecological null models as tests of assemblage patterns likely to influence management action, we demonstrate how these classic tools of ecological theory can be brought to bear in applied conservation problems.

  19. Dynamic defense and network randomization for computer systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavez, Adrian R.; Stout, William M. S.; Hamlet, Jason R.

    The various technologies presented herein relate to determining that a network attack is taking place, and further to adjusting one or more network parameters such that the network becomes dynamically configured. A plurality of machine learning algorithms are configured to recognize an active attack pattern. Notification of the attack can be generated, and knowledge gained from the detected attack pattern can be utilized to improve the knowledge of the algorithms to detect a subsequent attack vector(s). Further, network settings and application communications can be dynamically randomized, wherein artificial diversity converts control systems into moving targets that help mitigate the early reconnaissance stages of an attack. An attack(s) based upon a known static address(es) of a critical infrastructure network device(s) can be mitigated by the dynamic randomization. Network parameters that can be randomized include IP addresses, application port numbers, paths data packets navigate through the network, application randomization, etc.
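
    A toy sketch of one moving-target ingredient mentioned above, periodic randomization of application port assignments, is given below; it is purely illustrative and is not the patented dynamic-defense implementation.

        # Toy sketch: periodically remap application ports to fresh random values
        # drawn from a pool (illustrative only; application names are made up).
        import random

        def randomize_ports(apps, low=20000, high=60000, seed=None):
            """Assign each application a distinct random port from [low, high)."""
            rng = random.Random(seed)
            ports = rng.sample(range(low, high), len(apps))
            return dict(zip(apps, ports))

        apps = ["historian", "scada-gateway", "hmi"]
        print(randomize_ports(apps, seed=0))   # randomization epoch 0
        print(randomize_ports(apps, seed=1))   # randomization epoch 1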

  20. Smoking Patterns and Stimulus Control in Intermittent and Daily Smokers

    PubMed Central

    Shiffman, Saul; Dunbar, Michael S.; Li, Xiaoxue; Scholl, Sarah M.; Tindle, Hilary A.; Anderson, Stewart J.; Ferguson, Stuart G.

    2014-01-01

    Intermittent smokers (ITS) – who smoke less than daily – comprise an increasing proportion of adult smokers. Their smoking patterns challenge theoretical models of smoking motivation, which emphasize regular and frequent smoking to maintain nicotine levels and avoid withdrawal, but yet have gone largely unexamined. We characterized smoking patterns among 212 ITS (smoking 4–27 days per month) compared to 194 daily smokers (DS; smoking 5–30 cigarettes daily) who monitored situational antecedents of smoking using ecological momentary assessment. Subjects recorded each cigarette on an electronic diary, and situational variables were assessed in a random subset (n = 21,539 smoking episodes); parallel assessments were obtained by beeping subjects at random when they were not smoking (n = 26,930 non-smoking occasions). Compared to DS, ITS' smoking was more strongly associated with being away from home, being in a bar, drinking alcohol, socializing, being with friends and acquaintances, and when others were smoking. Mood had only modest effects in either group. DS' and ITS' smoking were substantially and equally suppressed by smoking restrictions, although ITS more often cited self-imposed restrictions. ITS' smoking was consistently more associated with environmental cues and contexts, especially those associated with positive or “indulgent” smoking situations. Stimulus control may be an important influence in maintaining smoking and making quitting difficult among ITS. PMID:24599056

  1. Phylogenetic patterns of climatic, habitat and trophic niches in a European avian assemblage

    PubMed Central

    Pearman, Peter B; Lavergne, Sébastien; Roquet, Cristina; Wüest, Rafael; Zimmermann, Niklaus E; Thuiller, Wilfried

    2014-01-01

    Aim The origins of ecological diversity in continental species assemblages have long intrigued biogeographers. We apply phylogenetic comparative analyses to disentangle the evolutionary patterns of ecological niches in an assemblage of European birds. We compare phylogenetic patterns in trophic, habitat and climatic niche components. Location Europe. Methods From polygon range maps and handbook data we inferred the realized climatic, habitat and trophic niches of 405 species of breeding birds in Europe. We fitted Pagel's lambda and kappa statistics, and conducted analyses of disparity through time to compare temporal patterns of ecological diversification on all niche axes together. All observed patterns were compared with expectations based on neutral (Brownian) models of niche divergence. Results In this assemblage, patterns of phylogenetic signal (lambda) suggest that related species resemble each other less in regard to their climatic and habitat niches than they do in their trophic niche. Kappa estimates show that ecological divergence does not gradually increase with divergence time, and that this punctualism is stronger in climatic niches than in habitat and trophic niches. Observed niche disparity markedly exceeds levels expected from a Brownian model of ecological diversification, thus providing no evidence for past phylogenetic niche conservatism in these multivariate niches. Levels of multivariate disparity are greatest for the climatic niche, followed by disparity of the habitat and the trophic niches. Main conclusions Phylogenetic patterns in the three niche components differ within this avian assemblage. Variation in evolutionary rates (degree of gradualism, constancy through the tree) and/or non-random macroecological sampling probably lead here to differences in the phylogenetic structure of niche components. Testing hypotheses on the origin of these patterns requires more complete phylogenetic trees of the birds, and extended ecological data on different niche components for all bird species. PMID:24790525

  2. Spatial statistical analysis of basal stem root disease under natural field epidemic of oil palm

    NASA Astrophysics Data System (ADS)

    Kamu, Assis; Phin, Chong Khim; Seman, Idris Abu; Wan, Hoong Hak; Mun, Ho Chong

    2015-02-01

    Oil palm, scientifically known as Elaeis guineensis Jacq., is the most important commodity crop in Malaysia and has contributed greatly to the economic growth of the country. As far as disease is concerned, Basal Stem Rot (BSR), caused by Ganoderma boninense, remains the most important disease in the industry and is the most widely studied oil palm disease in Malaysia. However, there are still few studies of the spatial and temporal pattern of the disease, especially under natural field epidemic conditions in oil palm plantations. The objective of this study is to identify the spatial pattern of BSR disease under a natural field epidemic using two geospatial analytical techniques: quadrat analysis for the first-order properties of point pattern analysis and nearest-neighbor analysis (NNA) for the second-order properties. Two study sites with trees of different ages were selected; both are located in Tawau, Sabah, and are managed by the same company. The results showed that at least one of the point pattern analyses used, NNA (the second-order properties), confirmed that the disease follows complete spatial randomness. This suggests that the disease does not spread from tree to tree and that palm age does not play a significant role in determining the spatial pattern of the disease. Knowledge of the spatial pattern of the disease should help disease management programs and the industry in the future. The statistical modelling is expected to help identify the right model for estimating the yield loss of oil palm due to BSR disease in the future.
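
    A minimal sketch of a Clark-Evans style nearest-neighbor test of complete spatial randomness is shown below; the coordinates are synthetic and the exact NNA variant used in the study may differ.

        # Clark-Evans nearest-neighbor index for complete spatial randomness (CSR);
        # synthetic "diseased palm" coordinates in a 100 m x 100 m block.
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(0)
        pts = rng.uniform(0, 100, size=(200, 2))
        n, area = len(pts), 100.0 * 100.0
        density = n / area

        dist, _ = cKDTree(pts).query(pts, k=2)        # k=2: nearest neighbor other than self
        r_obs = dist[:, 1].mean()
        r_exp = 0.5 / np.sqrt(density)                # expected mean NN distance under CSR
        se = 0.26136 / np.sqrt(n * density)           # standard error under CSR
        R, z = r_obs / r_exp, (r_obs - r_exp) / se
        print(f"R = {R:.2f} (R ~ 1: CSR, R < 1: clustered, R > 1: regular), z = {z:.2f}")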

  3. Ten-Year Employment Patterns of Working Age Individuals After Moderate to Severe Traumatic Brain Injury: A National Institute on Disability and Rehabilitation Research Traumatic Brain Injury Model Systems Study.

    PubMed

    Cuthbert, Jeffrey P; Pretz, Christopher R; Bushnik, Tamara; Fraser, Robert T; Hart, Tessa; Kolakowsky-Hayner, Stephanie A; Malec, James F; O'Neil-Pirozzi, Therese M; Sherer, Mark

    2015-12-01

    To describe the 10-year patterns of employment for individuals of working age discharged from a Traumatic Brain Injury Model Systems (TBIMS) center between 1989 and 2009. Secondary data analysis. Inpatient rehabilitation centers. Patients aged 16 to 55 years (N=3618) who were not retired at injury, received inpatient rehabilitation at a TBIMS center, were discharged alive between 1989 and 2009, and had at least 3 completed follow-up interviews at postinjury years 1, 2, 5, and 10. Not applicable. Employment. Patterns of employment were generated using a generalized linear mixed model, where these patterns were transformed into temporal trajectories of probability of employment via random effects modeling. Covariates demonstrating significant relations to growth parameters that govern the trajectory patterns were similar to those noted in previous cross-sectional research and included age, sex, race/ethnicity, education, preinjury substance misuse, preinjury vocational status, and days of posttraumatic amnesia. The calendar year in which the injury occurred also greatly influenced trajectories. An interactive tool was developed to provide visualization of all postemployment trajectories, with many showing decreasing probabilities of employment between 5 and 10 years postinjury. These results highlight that postinjury employment after moderate to severe traumatic brain injury (TBI) is a dynamic process, with varied patterns of employment for individuals with specific characteristics. The overall decline in trajectories of probability of employment between 5 and 10 years postinjury suggests that moderate to severe TBI may have unfavorable chronic effects and that employment outcome is highly influenced by national labor market forces. Additional research targeting the underlying drivers of the decline between 5 and 10 years postinjury is recommended, as are interventions that target influencing factors. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  4. Using a spatially-distributed hydrologic biogeochemistry model with nitrogen transport to study the spatial variation of carbon stocks and fluxes in a Critical Zone Observatory

    NASA Astrophysics Data System (ADS)

    Shi, Y.; Eissenstat, D. M.; He, Y.; Davis, K. J.

    2017-12-01

    Most current biogeochemical models are 1-D and represent one point in space. Therefore, they cannot resolve topographically driven land surface heterogeneity (e.g., lateral water flow, soil moisture, soil temperature, solar radiation) or the spatial pattern of nutrient availability. A spatially distributed forest biogeochemical model with nitrogen transport, Flux-PIHM-BGC, has been developed by coupling a 1-D mechanistic biogeochemical model, Biome-BGC (BBGC), with a spatially distributed land surface hydrologic model, Flux-PIHM, and adding an advection-dominated nitrogen transport module. Flux-PIHM is a coupled, physically based model that incorporates a land-surface scheme into the Penn State Integrated Hydrologic Model (PIHM). The land surface scheme is adapted from the Noah land surface model and is augmented with a topographic solar radiation module. Flux-PIHM is able to represent the link between groundwater and the surface energy balance, as well as land surface heterogeneities caused by topography. In the coupled Flux-PIHM-BGC model, each Flux-PIHM model grid is coupled to a 1-D BBGC model, while nitrogen is transported among model grids via surface and subsurface water flow. In each grid, Flux-PIHM provides BBGC with soil moisture, soil temperature, and solar radiation, while BBGC provides Flux-PIHM with spatially distributed leaf area index. The coupled Flux-PIHM-BGC model has been implemented at the Susquehanna/Shale Hills Critical Zone Observatory. The model-predicted aboveground vegetation carbon and soil carbon distributions generally agree with the macro patterns observed within the watershed. The importance of abiotic variables (including soil moisture, soil temperature, solar radiation, and soil mineral nitrogen) in predicting aboveground carbon distribution is calculated using a random forest. The result suggests that the spatial pattern of aboveground carbon is controlled by the distribution of soil mineral nitrogen. A Flux-PIHM-BGC simulation without the nitrogen transport module is also executed; it fails to predict the spatial patterns of vegetation carbon, which indicates the importance of a nitrogen transport module in spatially distributed ecohydrologic modeling.
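
    The kind of variable-importance ranking described above can be sketched with a random forest regressor as follows; the variable names, units, and data are placeholders rather than CZO model output.

        # Hedged sketch: rank abiotic predictors of aboveground carbon with a
        # random forest; the synthetic response is dominated by mineral nitrogen.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        n_grid = 500
        X = np.column_stack([
            rng.uniform(0.1, 0.4, n_grid),    # soil moisture
            rng.uniform(5, 15, n_grid),       # soil temperature (deg C)
            rng.uniform(100, 300, n_grid),    # solar radiation (W m-2)
            rng.uniform(0, 5, n_grid),        # soil mineral nitrogen (g N m-2)
        ])
        y = 2.0 * X[:, 3] + 0.5 * X[:, 0] + rng.normal(0, 0.2, n_grid)

        rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
        names = ["soil moisture", "soil temperature", "solar radiation", "mineral N"]
        for name, imp in sorted(zip(names, rf.feature_importances_), key=lambda p: -p[1]):
            print(f"{name:16s} {imp:.2f}")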

  5. Dinucleotide controlled null models for comparative RNA gene prediction.

    PubMed

    Gesell, Tanja; Washietl, Stefan

    2008-05-27

    Comparative prediction of RNA structures can be used to identify functional noncoding RNAs in genomic screens. It was shown recently by Babak et al. [BMC Bioinformatics. 8:33] that RNA gene prediction programs can be biased by the genomic dinucleotide content, in particular those programs using a thermodynamic folding model including stacking energies. As a consequence, there is need for dinucleotide-preserving control strategies to assess the significance of such predictions. While there have been randomization algorithms for single sequences for many years, the problem has remained challenging for multiple alignments and there is currently no algorithm available. We present a program called SISSIz that simulates multiple alignments of a given average dinucleotide content. Meeting additional requirements of an accurate null model, the randomized alignments are on average of the same sequence diversity and preserve local conservation and gap patterns. We make use of a phylogenetic substitution model that includes overlapping dependencies and site-specific rates. Using fast heuristics and a distance based approach, a tree is estimated under this model which is used to guide the simulations. The new algorithm is tested on vertebrate genomic alignments and the effect on RNA structure predictions is studied. In addition, we directly combined the new null model with the RNAalifold consensus folding algorithm giving a new variant of a thermodynamic structure based RNA gene finding program that is not biased by the dinucleotide content. SISSIz implements an efficient algorithm to randomize multiple alignments preserving dinucleotide content. It can be used to get more accurate estimates of false positive rates of existing programs, to produce negative controls for the training of machine learning based programs, or as standalone RNA gene finding program. Other applications in comparative genomics that require randomization of multiple alignments can be considered. SISSIz is available as open source C code that can be compiled for every major platform and downloaded here: http://sourceforge.net/projects/sissiz.

  6. A feature-based developmental model of the infant brain in structural MRI.

    PubMed

    Toews, Matthew; Wells, William M; Zöllei, Lilla

    2012-01-01

    In this paper, anatomical development is modeled as a collection of distinctive image patterns localized in space and time. A Bayesian posterior probability is defined over a random variable of subject age, conditioned on data in the form of scale-invariant image features. The model is automatically learned from a large set of images exhibiting significant variation, used to discover anatomical structure related to age and development, and fit to new images to predict age. The model is applied to a set of 230 infant structural MRIs of 92 subjects acquired at multiple sites over an age range of 8-590 days. Experiments demonstrate that the model can be used to identify age-related anatomical structure, and to predict the age of new subjects with an average error of 72 days.
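
    One convenient way to write such a posterior, assuming conditionally independent features (the paper's exact likelihood may differ), is:

        % Posterior over subject age t given scale-invariant features f_1, ..., f_K:
        p(t \mid f_1, \dots, f_K) \propto p(t) \prod_{k=1}^{K} p(f_k \mid t),
        \qquad \hat{t} = \arg\max_t \, p(t \mid f_1, \dots, f_K)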

  7. Simple cellular automaton model for traffic breakdown, highway capacity, and synchronized flow.

    PubMed

    Kerner, Boris S; Klenov, Sergey L; Schreckenberg, Michael

    2011-10-01

    We present a simple cellular automaton (CA) model for two-lane roads explaining the physics of traffic breakdown, highway capacity, and synchronized flow. The model consists of the rules "acceleration," "deceleration," "randomization," and "motion" of the Nagel-Schreckenberg CA model as well as "overacceleration through lane changing to the faster lane," "comparison of vehicle gap with the synchronization gap," and "speed adaptation within the synchronization gap" of Kerner's three-phase traffic theory. We show that these few rules of the CA model can appropriately simulate fundamental empirical features of traffic breakdown and highway capacity found in traffic data measured over years in different countries, like characteristics of synchronized flow, the existence of the spontaneous and induced breakdowns at the same bottleneck, and associated probabilistic features of traffic breakdown and highway capacity. Single-vehicle data derived in model simulations show that synchronized flow first occurs and then self-maintains due to a spatiotemporal competition between speed adaptation to a slower speed of the preceding vehicle and passing of this slower vehicle. We find that the application of simple dependences of randomization probability and synchronization gap on driving situation allows us to explain the physics of moving synchronized flow patterns and the pinch effect in synchronized flow as observed in real traffic data.
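
    For orientation, a minimal single-lane Nagel-Schreckenberg update (acceleration, deceleration, randomization, motion) is sketched below; the two-lane, three-phase rules described in the abstract are added on top of these and are not reproduced here.

        # Minimal Nagel-Schreckenberg cellular automaton on a circular single-lane road.
        import numpy as np

        def nasch_step(pos, vel, road_length, v_max=5, p_slow=0.3, rng=None):
            if rng is None:
                rng = np.random.default_rng()
            order = np.argsort(pos)
            pos, vel = pos[order], vel[order]
            gaps = (np.roll(pos, -1) - pos - 1) % road_length   # empty cells ahead
            vel = np.minimum(vel + 1, v_max)                    # acceleration
            vel = np.minimum(vel, gaps)                         # deceleration (no collisions)
            slow = rng.random(len(vel)) < p_slow                # randomization
            vel = np.where(slow, np.maximum(vel - 1, 0), vel)
            pos = (pos + vel) % road_length                     # motion
            return pos, vel

        rng = np.random.default_rng(0)
        road, n_cars = 100, 25
        pos = rng.choice(road, n_cars, replace=False)
        vel = np.zeros(n_cars, dtype=int)
        for _ in range(200):
            pos, vel = nasch_step(pos, vel, road, rng=rng)
        print("mean speed:", vel.mean())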

  8. Synchronization in Random Pulse Oscillator Networks

    NASA Astrophysics Data System (ADS)

    Brown, Kevin; Hermundstad, Ann

    Motivated by synchronization phenomena in neural systems, we study synchronization of random networks of coupled pulse oscillators. We begin by considering binomial random networks whose nodes have intrinsic linear dynamics. We quantify order in the network spiking dynamics using a new measure: the normalized Lempel-Ziv complexity (LZC) of the nodes' spike trains. Starting from a globally synchronized state, we see two broad classes of behaviors. In one ("temporally random"), the LZC is high and nodes spike independently with no coherent pattern. In another ("temporally regular"), the network does not globally synchronize but instead forms coherent, repeating population firing patterns with low LZC. No topological feature of the network reliably predicts whether an individual network will show temporally random or regular behavior; however, we find evidence that degree heterogeneity in binomial networks has a strong effect on the resulting state. To confirm these findings, we generate random networks with independently adjustable degree mean and variance. We find that the likelihood of temporally random behavior increases as degree variance increases. Our results indicate the subtle and complex relationship between network structure and dynamics.
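
    As a rough stand-in for the complexity measure used above, the sketch below counts phrases in an LZ78-style incremental parsing of a binary spike train and normalizes by n/log2(n); the LZ76 variant typically used for spike trains differs in detail.

        # LZ78-style phrase count as a simple Lempel-Ziv complexity proxy.
        import math, random

        def lz_phrase_count(bits):
            phrases, current, count = set(), "", 0
            for b in bits:
                current += b
                if current not in phrases:
                    phrases.add(current)
                    count += 1
                    current = ""
            return count + (1 if current else 0)    # count a trailing partial phrase

        def normalized_lzc(bits):
            n = len(bits)
            return lz_phrase_count(bits) * math.log2(n) / n

        random.seed(0)
        regular = "01" * 50                                        # periodic spike train
        noisy = "".join(random.choice("01") for _ in range(100))   # random spike train
        print(normalized_lzc(regular), normalized_lzc(noisy))      # lower for the regular train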

  9. Nestedness of desert bat assemblages: species composition patterns in insular and terrestrial landscapes.

    PubMed

    Frick, Winifred F; Hayes, John P; Heady, Paul A

    2009-01-01

    Nested patterns of community composition exist when species at depauperate sites are subsets of those occurring at sites with more species. Nested subset analysis provides a framework for analyzing species occurrences to determine non-random patterns in community composition and potentially identify mechanisms that may shape faunal assemblages. We examined nested subset structure of desert bat assemblages on 20 islands in the southern Gulf of California and at 27 sites along the Baja California peninsula coast, the presumable source pool for the insular faunas. Nested structure was analyzed using a conservative null model that accounts for expected variation in species richness and species incidence across sites (fixed row and column totals). Associations of nestedness and island traits, such as size and isolation, as well as species traits related to mobility, were assessed to determine the potential role of differential extinction and immigration abilities as mechanisms of nestedness. Bat faunas were significantly nested in both the insular and terrestrial landscape and island size was significantly correlated with nested structure, such that species on smaller islands tended to be subsets of species on larger islands, suggesting that differential extinction vulnerabilities may be important in shaping insular bat faunas. The role of species mobility and immigration abilities is less clearly associated with nestedness in this system. Nestedness in the terrestrial landscape is likely due to stochastic processes related to random placement of individuals and this may also influence nested patterns on islands, but additional data on abundances will be necessary to distinguish among these potential mechanisms.

  10. A Linear Stochastic Dynamical Model of ENSO. Part II: Analysis.

    NASA Astrophysics Data System (ADS)

    Thompson, C. J.; Battisti, D. S.

    2001-02-01

    In this study the behavior of a linear, intermediate model of ENSO is examined under stochastic forcing. The model was developed in a companion paper (Part I) and is derived from the Zebiak-Cane ENSO model. Four variants of the model are used whose stabilities range from slightly damped to moderately damped. Each model is run as a simulation while being perturbed by noise that is uncorrelated (white) in space and time. The statistics of the model output show the moderately damped models to be more realistic than the slightly damped models. The moderately damped models have power spectra that are quantitatively quite similar to observations, and a seasonal pattern of variance that is qualitatively similar to observations. All models produce ENSOs that are phase locked to the annual cycle, and all display the 'spring barrier' characteristic in their autocorrelation patterns, though in the models this 'barrier' occurs during the summer and is less intense than in the observations (inclusion of nonlinear effects is shown to partially remedy this deficiency). The more realistic models also show a decadal variability in the lagged autocorrelation pattern that is qualitatively similar to observations. Analysis of the models shows that the greatest part of the variability comes from perturbations that project onto the first singular vector, which then grow rapidly into the ENSO mode. Essentially, the model output represents many instances of the ENSO mode, with random phase and amplitude, stimulated by the noise through the optimal transient growth of the singular vectors. The limit of predictability for each model is calculated and it is shown that the more realistic (moderately damped) models have worse potential predictability (9-15 months) than the deterministic chaotic models that have been studied widely in the literature. The predictability limits are strongly correlated with the stability of the models' ENSO mode; the more highly damped models have much shorter limits of predictability. A comparison of the two most realistic models shows that even though these models have similar statistics, they have very different predictability limits. The models have a strong seasonal dependence to their predictability limits. The results of this study (with the companion paper) suggest that the linear, stable dynamical model of ENSO is indeed a plausible hypothesis for the observed ENSO. With very reasonable levels of stochastic forcing, the model produces realistic levels of variance, has a realistic spectrum, and qualitatively reproduces the observed seasonal pattern of variance, the autocorrelation pattern, and the ENSO-like decadal variability.
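
    A toy analogue of such a stochastically forced, damped linear mode is easy to simulate: integrate a damped oscillator driven by white noise and estimate its power spectrum. The parameters below are illustrative only and are not taken from the Zebiak-Cane-derived model.

        # Damped oscillator x'' + 2*gamma*x' + omega0^2*x = white noise (Euler-Maruyama),
        # followed by a Welch power-spectrum estimate.
        import numpy as np
        from scipy.signal import welch

        rng = np.random.default_rng(0)
        dt, n_steps = 0.05, 200_000            # time step in years
        omega0 = 2 * np.pi / 4.0               # ~4-year oscillation
        gamma, sigma = 0.3, 1.0                # damping rate and noise amplitude

        x, v = 0.0, 0.0
        xs = np.empty(n_steps)
        for i in range(n_steps):
            a = -2 * gamma * v - omega0**2 * x + sigma * rng.normal() / np.sqrt(dt)
            v += a * dt
            x += v * dt
            xs[i] = x

        freq, psd = welch(xs, fs=1 / dt, nperseg=8192)
        print("spectral peak near", round(freq[np.argmax(psd)], 2), "cycles per year")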

  11. Mechanical Stress Induces Remodeling of Vascular Networks in Growing Leaves

    PubMed Central

    Bar-Sinai, Yohai; Julien, Jean-Daniel; Sharon, Eran; Armon, Shahaf; Nakayama, Naomi; Adda-Bedia, Mokhtar; Boudaoud, Arezki

    2016-01-01

    Differentiation into well-defined patterns and tissue growth are recognized as key processes in organismal development. However, it is unclear whether patterns are passively, homogeneously dilated by growth or whether they remodel during tissue expansion. Leaf vascular networks are well-fitted to investigate this issue, since leaves are approximately two-dimensional and grow manyfold in size. Here we study experimentally and computationally how vein patterns affect growth. We first model the growing vasculature as a network of viscoelastic rods and consider its response to external mechanical stress. We use the so-called texture tensor to quantify the local network geometry and reveal that growth is heterogeneous, resembling non-affine deformations in composite materials. We then apply mechanical forces to growing leaves after veins have differentiated, which respond by anisotropic growth and reorientation of the network in the direction of external stress. External mechanical stress appears to make growth more homogeneous, in contrast with the model with viscoelastic rods. However, we reconcile the model with experimental data by incorporating randomness in rod thickness and a threshold in the rod growth law, making the rods viscoelastoplastic. Altogether, we show that the higher stiffness of veins leads to their reorientation along external forces, along with a reduction in growth heterogeneity. This process may lead to the reinforcement of leaves against mechanical stress. More generally, our work contributes to a framework whereby growth and patterns are coordinated through the differences in mechanical properties between cell types. PMID:27074136

  12. Colony patterning and collective hyphal growth of filamentous fungi

    NASA Astrophysics Data System (ADS)

    Matsuura, Shu

    2002-11-01

    Colony morphology of wild and mutant strains of Aspergillus nidulans at various nutrient and agar levels was investigated. Two types of colony patterning were found for these strains. One type produced uniform colonies at all nutrient and agar levels tested, and the other exhibited morphological change into disordered ramified colonies at low nutrient levels. Both types showed highly condensed compact colonies at high nutrient levels on low agar media that was highly diffusive. Disordered colonies were found to develop with low hyphal extension rates at low nutrient levels. To understand basic pattern selection rules, a colony model with three parameters, i.e., the initial nutrient level and the step length of nutrient random walk as the external parameters, and the frequency of nutrient uptake as an internal parameter, was constructed. At low nutrient levels, with decreasing nutrient uptake frequency under diffusive conditions, the model colony exhibited onsets of disordered ramification. Further, in the growth process of A. nidulans, reduction of hyphal extension rate due to a population effect of hyphae was found when hyphae form three-dimensional dense colonies, as compared to the case in which hyphal growth was restricted into two-dimensional space. A hyphal population effect was introduced in the colony model. Thickening of colony periphery due to the population effect became distinctive as the nutrient diffusion effect was raised at high nutrient levels with low hyphal growth rate. It was considered that colony patterning and onset of disorder were strongly governed by the combination of nutrient diffusion and hyphal growth rate.

  13. Super-resolution photoacoustic microscopy using joint sparsity

    NASA Astrophysics Data System (ADS)

    Burgholzer, P.; Haltmeier, M.; Berer, T.; Leiss-Holzinger, E.; Murray, T. W.

    2017-07-01

    We present an imaging method that uses the random optical speckle patterns that naturally emerge as light propagates through strongly scattering media as a structured illumination source for photoacoustic imaging. Our approach, termed blind structured illumination photoacoustic microscopy (BSIPAM), was inspired by recent work in fluorescence microscopy where super-resolution imaging was demonstrated using multiple unknown speckle illumination patterns. We extend this concept to the multiple scattering domain using photoacoustics (PA), with the speckle pattern serving to generate ultrasound. The optical speckle pattern that emerges as light propagates through diffuse media provides structured illumination to an object placed behind a scattering wall. The photoacoustic signal produced by such illumination is detected using a focused ultrasound transducer. We demonstrate through both simulation and experiment that by acquiring multiple photoacoustic images, each produced by a different random and unknown speckle pattern, an image of an absorbing object can be reconstructed with a spatial resolution far exceeding that of the ultrasound transducer. We experimentally and numerically demonstrate a gain in resolution of more than a factor of two by using multiple speckle illuminations. The variations in the photoacoustic signals generated with random speckle patterns are utilized in BSIPAM using a novel reconstruction algorithm. Exploiting joint sparsity, this algorithm is capable of reconstructing the absorbing structure from measured PA signals with a resolution close to the speckle size. Random excitation for photoacoustic imaging can also be produced by small absorbing particles, including contrast agents, which flow through small vessels. In such a set-up, the joint sparsity is generated by the fact that all the particles move in the same vessels; structured illumination is not necessary in that case.

  14. Multiple imputation for assessment of exposures to drinking water contaminants: evaluation with the Atrazine Monitoring Program.

    PubMed

    Jones, Rachael M; Stayner, Leslie T; Demirtas, Hakan

    2014-10-01

    Drinking water may contain pollutants that harm human health. Pollutant monitoring may occur quarterly, annually, or less frequently, depending upon the pollutant, the pollutant concentration, and the community water system. However, birth and other health outcomes are associated with narrow time-windows of exposure. Infrequent monitoring impedes linkage between water quality and health outcomes for epidemiological analyses. The objective was to evaluate the performance of multiple imputation to fill in water quality values between measurements in community water systems (CWSs). The multiple imputation method was implemented in a simulated setting using data from the Atrazine Monitoring Program (AMP, 2006-2009 in five Midwestern states). Values were deleted from the AMP data to leave one measurement per month. Four patterns reflecting drinking water monitoring regulations were used to delete months of data in each CWS: three patterns were missing at random and one pattern was missing not at random. Synthetic health outcome data were created using a linear and a Poisson exposure-response relationship with five levels of hypothesized association, respectively. The multiple imputation method was evaluated by comparing the exposure-response relationships estimated based on multiply imputed data with the hypothesized association. The four patterns deleted 65-92% of the months of atrazine observations in the AMP data. Even with these high rates of missing information, our procedure was able to recover most of the missing information when the synthetic health outcome was included for missing at random patterns and for missing not at random patterns with low-to-moderate exposure-response relationships. Multiple imputation appears to be an effective method for filling in water quality values between measurements. Copyright © 2014 Elsevier Inc. All rights reserved.
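
    A generic sketch of this kind of evaluation is shown below using scikit-learn's IterativeImputer with posterior sampling to produce several imputed datasets; the data, missingness pattern, and imputer are illustrative stand-ins and not the MI procedure used in the study.

        # Simulate monthly atrazine-like concentrations, delete most months, impute,
        # and compare imputed values to the withheld truth (illustrative only).
        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        rng = np.random.default_rng(0)
        n_cws, n_months = 50, 48
        season = np.sin(2 * np.pi * np.arange(n_months) / 12)
        truth = np.exp(0.8 * season + rng.normal(0, 0.3, (n_cws, n_months)))

        observed = truth.copy()
        mask = rng.random(observed.shape) < 0.7       # ~70% of months unmonitored
        observed[mask] = np.nan

        imputations = [
            IterativeImputer(sample_posterior=True, random_state=m).fit_transform(observed)
            for m in range(5)                          # several imputed datasets
        ]
        rmse = np.mean([np.sqrt(np.mean((imp[mask] - truth[mask]) ** 2)) for imp in imputations])
        print(f"mean RMSE over 5 imputations: {rmse:.3f}")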

  15. Graph theoretical model of a sensorimotor connectome in zebrafish.

    PubMed

    Stobb, Michael; Peterson, Joshua M; Mazzag, Borbala; Gahtan, Ethan

    2012-01-01

    Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.
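
    The style of comparison described above (clustering and path length versus size-matched random graphs) can be sketched with networkx as follows; the stand-in graph is a small Watts-Strogatz network, not the 2,616-neuron zebrafish model.

        # Compare clustering and shortest-path length of a network against an
        # Erdos-Renyi random graph with the same number of nodes and edges.
        import networkx as nx

        g = nx.watts_strogatz_graph(n=500, k=10, p=0.1, seed=0)   # stand-in "connectome"
        n, m = g.number_of_nodes(), g.number_of_edges()

        def summarize(graph):
            giant = graph.subgraph(max(nx.connected_components(graph), key=len))
            return nx.average_clustering(graph), nx.average_shortest_path_length(giant)

        c_obs, l_obs = summarize(g)
        c_rnd, l_rnd = summarize(nx.gnm_random_graph(n, m, seed=0))
        print(f"clustering   {c_obs:.3f} vs random {c_rnd:.3f}")
        print(f"path length  {l_obs:.2f} vs random {l_rnd:.2f}")
        # Small-world signature: clustering much larger than random, similar path length.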

  16. Hot-spot model for accretion disc variability as random process. II. Mathematics of the power-spectrum break frequency

    NASA Astrophysics Data System (ADS)

    Pecháček, T.; Goosmann, R. W.; Karas, V.; Czerny, B.; Dovčiak, M.

    2013-08-01

    Context. We study some general properties of accretion disc variability in the context of stationary random processes. In particular, we are interested in mathematical constraints that can be imposed on the functional form of the Fourier power-spectrum density (PSD) that exhibits a multiply broken shape and several local maxima. Aims: We develop a methodology for determining the regions of the model parameter space that can in principle reproduce a PSD shape with a given number and position of local peaks and breaks of the PSD slope. Given the vast space of possible parameters, it is an important requirement that the method is fast in estimating the PSD shape for a given parameter set of the model. Methods: We generated and discuss the theoretical PSD profiles of a shot-noise-type random process with exponentially decaying flares. Then we determined conditions under which one, two, or more breaks or local maxima occur in the PSD. We calculated positions of these features and determined the changing slope of the model PSD. Furthermore, we considered the influence of the modulation by the orbital motion for a variability pattern assumed to result from an orbiting-spot model. Results: We suggest that our general methodology can be useful for describing non-monotonic PSD profiles (such as the trend seen, on different scales, in exemplary cases of the high-mass X-ray binary Cygnus X-1 and the narrow-line Seyfert galaxy Ark 564). We adopt a model where these power spectra are reproduced as a superposition of several Lorentzians with varying amplitudes in the X-ray-band light curve. Our general approach can help in constraining the model parameters and in determining which parts of the parameter space are accessible under various circumstances.
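
    For a single component of such a model, a flare shape A*exp(-t/tau) occurring at Poisson rate lambda gives, by Campbell's theorem, a Lorentzian power spectrum with one break; the multiply broken shapes analyzed in the paper arise from superposing several such terms with different timescales. A hedged statement of the single-component form:

        % One-sided PSD of shot noise built from exponentially decaying flares:
        S(f) = \frac{2 \lambda A^{2} \tau^{2}}{1 + (2\pi f \tau)^{2}},
        \qquad f_{\mathrm{break}} \simeq \frac{1}{2\pi\tau}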

  17. Inverse-Probability-Weighted Estimation for Monotone and Nonmonotone Missing Data.

    PubMed

    Sun, BaoLuo; Perkins, Neil J; Cole, Stephen R; Harel, Ofer; Mitchell, Emily M; Schisterman, Enrique F; Tchetgen Tchetgen, Eric J

    2018-03-01

    Missing data is a common occurrence in epidemiologic research. In this paper, 3 data sets with induced missing values from the Collaborative Perinatal Project, a multisite US study conducted from 1959 to 1974, are provided as examples of prototypical epidemiologic studies with missing data. Our goal was to estimate the association of maternal smoking behavior with spontaneous abortion while adjusting for numerous confounders. At the same time, we did not necessarily wish to evaluate the joint distribution among potentially unobserved covariates, which is seldom the subject of substantive scientific interest. The inverse probability weighting (IPW) approach preserves the semiparametric structure of the underlying model of substantive interest and clearly separates the model of substantive interest from the model used to account for the missing data. However, IPW often will not result in valid inference if the missing-data pattern is nonmonotone, even if the data are missing at random. We describe a recently proposed approach to modeling nonmonotone missing-data mechanisms under missingness at random to use in constructing the weights in IPW complete-case estimation, and we illustrate the approach using 3 data sets described in a companion article (Am J Epidemiol. 2018;187(3):568-575).
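
    A minimal sketch of IPW complete-case estimation for a monotone missing-data pattern is given below; variable names, effect sizes, and the missingness model are simulated placeholders, not the Collaborative Perinatal Project analysis.

        # Fit a model for the probability of being a complete case, weight complete
        # cases by its inverse, then fit the weighted outcome model (all simulated).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 2000
        smoking = rng.binomial(1, 0.4, n)
        age = rng.normal(27, 5, n)
        p_outcome = 1 / (1 + np.exp(-(-2.0 + 0.5 * smoking + 0.02 * age)))
        outcome = rng.binomial(1, p_outcome)

        # Missingness depends only on observed variables (missing at random)
        p_complete = 1 / (1 + np.exp(-(0.5 + 0.8 * smoking - 0.03 * age)))
        complete = rng.random(n) < p_complete

        design = sm.add_constant(np.column_stack([smoking, age]))
        pi = sm.Logit(complete.astype(int), design).fit(disp=0).predict(design)
        weights = 1.0 / pi[complete]

        ipw_fit = sm.GLM(outcome[complete], design[complete],
                         family=sm.families.Binomial(), freq_weights=weights).fit()
        print(ipw_fit.params)   # const, smoking, age (log-odds scale)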

  18. Inverse-Probability-Weighted Estimation for Monotone and Nonmonotone Missing Data

    PubMed Central

    Sun, BaoLuo; Perkins, Neil J; Cole, Stephen R; Harel, Ofer; Mitchell, Emily M; Schisterman, Enrique F; Tchetgen Tchetgen, Eric J

    2018-01-01

    Missing data is a common occurrence in epidemiologic research. In this paper, 3 data sets with induced missing values from the Collaborative Perinatal Project, a multisite US study conducted from 1959 to 1974, are provided as examples of prototypical epidemiologic studies with missing data. Our goal was to estimate the association of maternal smoking behavior with spontaneous abortion while adjusting for numerous confounders. At the same time, we did not necessarily wish to evaluate the joint distribution among potentially unobserved covariates, which is seldom the subject of substantive scientific interest. The inverse probability weighting (IPW) approach preserves the semiparametric structure of the underlying model of substantive interest and clearly separates the model of substantive interest from the model used to account for the missing data. However, IPW often will not result in valid inference if the missing-data pattern is nonmonotone, even if the data are missing at random. We describe a recently proposed approach to modeling nonmonotone missing-data mechanisms under missingness at random to use in constructing the weights in IPW complete-case estimation, and we illustrate the approach using 3 data sets described in a companion article (Am J Epidemiol. 2018;187(3):568–575). PMID:29165557

  19. Threshold model of cascades in empirical temporal networks

    NASA Astrophysics Data System (ADS)

    Karimi, Fariba; Holme, Petter

    2013-08-01

    Threshold models try to explain the consequences of social influence like the spread of fads and opinions. Along with models of epidemics, they constitute a major theoretical framework of social spreading processes. In threshold models on static networks, an individual changes her state if a certain fraction of her neighbors has done the same. When there are strong correlations in the temporal aspects of contact patterns, it is useful to represent the system as a temporal network. In such a system, not only contacts but also the time of the contacts are represented explicitly. In many cases, bursty temporal patterns slow down disease spreading. However, as we will see, this is not a universal truth for threshold models. In this work we propose an extension of Watts’s classic threshold model to temporal networks. We do this by assuming that an agent is influenced by contacts which lie a certain time into the past. I.e., the individuals are affected by contacts within a time window. In addition to thresholds in the fraction of contacts, we also investigate the number of contacts within the time window as a basis for influence. To elucidate the model’s behavior, we run the model on real and randomized empirical contact datasets.
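
    A minimal sketch of such a time-window threshold rule on a timestamped contact list is given below; the contact list, window, and threshold are illustrative, and the published model's update details may differ.

        # A node adopts once at least a fraction `phi` of its contacts within the
        # last `window` time units are with already-adopted nodes (toy example).
        from collections import defaultdict

        def run_threshold(contacts, seeds, phi=0.5, window=3):
            """contacts: iterable of (t, u, v) tuples sorted by time t."""
            adopted = set(seeds)
            recent = defaultdict(list)                # node -> [(time, neighbor), ...]
            for t, u, v in contacts:
                for node, other in ((u, v), (v, u)):
                    recent[node] = [(s, nb) for s, nb in recent[node] if t - s <= window]
                    recent[node].append((t, other))
                    if node not in adopted:
                        hits = sum(nb in adopted for _, nb in recent[node])
                        if hits / len(recent[node]) >= phi:
                            adopted.add(node)
            return adopted

        contacts = [(1, "A", "B"), (2, "B", "C"), (3, "A", "C"),
                    (4, "C", "D"), (6, "B", "D"), (7, "D", "E")]
        print(run_threshold(contacts, seeds={"A"}))   # cascade from seed A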

  20. In Search of Meaning: Are School Rampage Shootings Random and Senseless Violence?

    PubMed

    Madfis, Eric

    2017-01-02

    This article discusses Joel Best's ( 1999 ) notion of random violence and applies his concepts of pointlessness, patternlessness, and deterioration to the reality about multiple-victim school shootings gleaned from empirical research about the phenomenon. Best describes how violence is rarely random, as scholarship reveals myriad observable patterns, lots of discernable motives and causes, and often far too much fear-mongering over how bad society is getting and how violent we are becoming. In contrast, it is vital that the media, scholars, and the public better understand crime patterns, criminal motivations, and the causes of fluctuating crime rates. As an effort toward such progress, this article reviews the academic literature on school rampage shootings and explores the extent to which these attacks are and are not random acts of violence.

  1. Social patterns revealed through random matrix theory

    NASA Astrophysics Data System (ADS)

    Sarkar, Camellia; Jalan, Sarika

    2014-11-01

    Despite the tremendous advancements in the field of network theory, very few studies have taken into consideration the interaction weights that emerge naturally in all real-world systems. Using random matrix analysis of a weighted social network, we demonstrate the profound impact of interaction weights on emerging structural properties. The analysis reveals that randomness existing in a particular time frame affects the decisions of individuals, granting them more freedom of choice in situations of financial security. While the structural organization of the networks remains the same throughout all datasets, random matrix theory provides insight into the interaction patterns of individuals in the society in situations of crisis. It has also been contemplated that individual accountability in terms of weighted interactions remains a key to success unless segregation of tasks comes into play.

  2. Network Sampling and Classification: An Investigation of Network Model Representations

    PubMed Central

    Airoldi, Edoardo M.; Bai, Xue; Carley, Kathleen M.

    2011-01-01

    Methods for generating a random sample of networks with desired properties are important tools for the analysis of social, biological, and information networks. Algorithm-based approaches to sampling networks have received a great deal of attention in recent literature. Most of these algorithms are based on simple intuitions that associate the full features of connectivity patterns with specific values of only one or two network metrics. Substantive conclusions are crucially dependent on this association holding true. However, the extent to which this simple intuition holds true is not yet known. In this paper, we examine the association between the connectivity patterns that a network sampling algorithm aims to generate and the connectivity patterns of the generated networks, measured by an existing set of popular network metrics. We find that different network sampling algorithms can yield networks with similar connectivity patterns. We also find that the alternative algorithms for the same connectivity pattern can yield networks with different connectivity patterns. We argue that conclusions based on simulated network studies must focus on the full features of the connectivity patterns of a network instead of on the limited set of network metrics for a specific network type. This fact has important implications for network data analysis: for instance, implications related to the way significance is currently assessed. PMID:21666773

  3. All-optical animation projection system with rotating fieldstone.

    PubMed

    Ishii, Yuko; Takayama, Yoshihisa; Kodate, Kashiko

    2007-06-11

    A simple and compact rewritable holographic memory system using a fieldstone of Ulexite is proposed. The role of the fieldstone is to impose random patterns on the reference beam to record plural images with the random-reference multiplexing scheme. The operations for writing and reading holograms are carried out by simply rotating the fieldstone in one direction. One of the features of this approach is found in a way to generate random patterns without computer drawings. The experimental study confirms that our system enables the smooth readout of the stored images one after another so that the series of reproduced images are projected as an animation.

  4. All-optical animation projection system with rotating fieldstone

    NASA Astrophysics Data System (ADS)

    Ishii, Yuko; Takayama, Yoshihisa; Kodate, Kashiko

    2007-06-01

    A simple and compact rewritable holographic memory system using a fieldstone of Ulexite is proposed. The role of the fieldstone is to impose random patterns on the reference beam to record plural images with the random-reference multiplexing scheme. The operations for writing and reading holograms are carried out by simply rotating the fieldstone in one direction. One of the features of this approach is found in a way to generate random patterns without computer drawings. The experimental study confirms that our system enables the smooth readout of the stored images one after another so that the series of reproduced images are projected as an animation.

  5. Use of Biometrics within Sub-Saharan Refugee Communities

    DTIC Science & Technology

    2013-12-01

    Biometrics typically comprises fingerprint patterns, iris pattern recognition, and facial recognition as a means of establishing an individual's identity...authentication because it identifies an individual based on mathematical analysis of the random pattern visible within the iris.

  6. Cryptotomography: reconstructing 3D Fourier intensities from randomly oriented single-shot diffraction patterns (CXIDB ID 9)

    DOE Data Explorer

    Loh, Ne-Te Duane

    2011-08-01

    These 2000 single-shot diffraction patterns were either background scattering only or hits (background scattering plus diffraction signal from sub-micron ellipsoidal particles at random, undetermined orientations). Candidate hits were identified by eye, and the remainder were presumed to be background. The 54 usable, background-subtracted hits in this set (procedure in the referenced article) were used to reconstruct the 3D diffraction intensities of the average ellipsoidal particle.

  7. Identification of cultivars and validation of genetic relationships in Mangifera indica L. using RAPD markers.

    PubMed

    Schnell, R J; Ronning, C M; Knight, R J

    1995-02-01

    Twenty-five accessions of mango were examined for random amplified polymorphic DNA (RAPD) genetic markers with 80 10-mer random primers. Of the 80 primers screened, 33 did not amplify, 19 were monomorphic, and 28 gave reproducible, polymorphic DNA amplification patterns. Eleven primers were selected from the 28 for the study. The number of bands generated was primer- and genotype-dependent, and ranged from 1 to 10. No primer gave unique banding patterns for each of the 25 accessions; however, ten different combinations of 2 primer banding patterns produced unique fingerprints for each accession. A maternal half-sib (MHS) family was included among the 25 accessions to see if genetic relationships could be detected. RAPD data were used to generate simple matching coefficients, which were analyzed phenetically and by means of principal coordinate analysis (PCA). The MHS clustered together in both the phenetic and the PCA while the randomly selected accessions were scattered with no apparent pattern. The uses of RAPD analysis for Mangifera germ plasm classification and clonal identification are discussed.

  8. Quantitative analysis of skin flap blood flow in the rat using laser Doppler velocimetry.

    PubMed Central

    Marks, N J

    1985-01-01

    Two experiments carried out on rat skin flaps are described, where microvascular flow has been measured noninvasively by a laser Doppler velocimeter. Using this technique it is possible to define the limits of an axial pattern flap in terms of microvascular flow; this was found to increase when the flap is elevated. 'Random-pattern' perfusion is defined by a fall in flow. This recovers sequentially along the flap, and at a constant rate at all sites. A differential in microvascular perfusion is thus maintained along a random-pattern flap for at least the first postoperative week. In a second experiment it is shown that there appears to be a linear relationship between the reduction in skin blood flow in a random-pattern flap and the distance from the base at which the measurements are made. It is suggested that these data support the view that the blood flow in a skin flap recovers primarily from its base rather than via peripheral neovascularization, and that this is due to vascular collaterals opening within the flap rather than to a relaxation of sympathetic tone. PMID:3156992

  9. The Lévy flight paradigm: random search patterns and mechanisms.

    PubMed

    Reynolds, A M; Rhodes, C J

    2009-04-01

    Over recent years there has been an accumulation of evidence from a variety of experimental, theoretical, and field studies that many organisms use a movement strategy approximated by Lévy flights when they are searching for resources. Lévy flights are random movements that can maximize the efficiency of resource searches in uncertain environments. This is a highly significant finding because it suggests that Lévy flights provide a rigorous mathematical basis for separating out evolved, innate behaviors from environmental influences. We discuss recent developments in random-search theory, as well as the many different experimental and data collection initiatives that have investigated search strategies. Methods for trajectory construction and robust data analysis procedures are presented. The key to prediction and understanding does, however, lie in the elucidation of mechanisms underlying the observed patterns. We discuss candidate neurological, olfactory, and learning mechanisms for the emergence of Lévy flight patterns in some organisms, and note that convergence of behaviors along such different evolutionary pathways is not surprising given the energetic efficiencies that Lévy flight movement patterns confer.
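
    The heavy-tailed step lengths that distinguish Lévy movement from Brownian movement can be generated by inverse-transform sampling of a power law; the exponent, cutoff, and comparison below are illustrative.

        # Draw power-law (Levy-like) step lengths, P(l) ~ l**(-mu) for l >= l_min,
        # and contrast them with thin-tailed (Brownian-like) steps.
        import numpy as np

        def levy_steps(n, mu=2.0, l_min=1.0, rng=None):
            if rng is None:
                rng = np.random.default_rng()
            u = rng.random(n)
            return l_min * u ** (-1.0 / (mu - 1.0))

        rng = np.random.default_rng(0)
        levy = levy_steps(10_000, mu=2.0, rng=rng)
        brownian = np.abs(rng.normal(0, 1.0, 10_000))
        print(f"Levy:     median {np.median(levy):.2f}, max {levy.max():.1f}")
        print(f"Brownian: median {np.median(brownian):.2f}, max {brownian.max():.1f}")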

  10. A bioavailable strontium isoscape for Western Europe: A machine learning approach

    PubMed Central

    von Holstein, Isabella C. C.; Laffoon, Jason E.; Willmes, Malte; Liu, Xiao-Ming; Davies, Gareth R.

    2018-01-01

    Strontium isotope ratios (87Sr/86Sr) are gaining considerable interest as a geolocation tool and are now widely applied in archaeology, ecology, and forensic research. However, their application for provenance requires the development of baseline models predicting surficial 87Sr/86Sr variations (“isoscapes”). A variety of empirically-based and process-based models have been proposed to build terrestrial 87Sr/86Sr isoscapes but, in their current forms, those models are not mature enough to be integrated with continuous-probability surface models used in geographic assignment. In this study, we aim to overcome those limitations and to predict 87Sr/86Sr variations across Western Europe by combining process-based models and a series of remote-sensing geospatial products into a regression framework. We find that random forest regression significantly outperforms other commonly used regression and interpolation methods, and efficiently predicts the multi-scale patterning of 87Sr/86Sr variations by accounting for geological, geomorphological and atmospheric controls. Random forest regression also provides an easily interpretable and flexible framework to integrate different types of environmental auxiliary variables required to model the multi-scale patterning of 87Sr/86Sr variability. The method is transferable to different scales and resolutions and can be applied to the large collection of geospatial data available at local and global levels. The isoscape generated in this study provides the most accurate 87Sr/86Sr predictions in bioavailable strontium for Western Europe (R2 = 0.58 and RMSE = 0.0023) to date, as well as a conservative estimate of spatial uncertainty by applying quantile regression forest. We anticipate that the method presented in this study combined with the growing numbers of bioavailable 87Sr/86Sr data and satellite geospatial products will extend the applicability of the 87Sr/86Sr geo-profiling tool in provenance applications. PMID:29847595

  11. Astronaut activity in weightlessness and unsupported space

    NASA Technical Reports Server (NTRS)

    Ivanov, Y. A.; Popov, V. A.; Kachaturyants, L. S.

    1975-01-01

    The performance ability of a human operator in prolonged weightless conditions was studied by the following methods: (1) psychophysiological analysis of certain operations; (2) analysis of the dynamic characteristics of a man included in a model control system with direct and delayed feedback; (3) evaluation of the singularities of analysis and the quality of working memory when working with outlines of patterned and random lines; and (4) biomechanical analysis of spatial orientation and motor activity in unsupported space.

  12. Characterizing Ship Navigation Patterns Using Automatic Identification System (AIS) Data in the Baltic Sea

    DTIC Science & Technology

    in the Saint Petersburg area. We use three random forest models that differ in their use of past information to predict a vessel's next port of visit...network where past information is used to more accurately predict the future state. The transitional probabilities change when predictor variables are...added that reach deeper into the past. Our findings suggest that successful prediction of the movement of a vessel depends on having accurate information on its recent history.

  13. Random Boolean networks for autoassociative memory: Optimization and sequential learning

    NASA Astrophysics Data System (ADS)

    Sherrington, D.; Wong, K. Y. M.

    Conventional neural networks are based on synaptic storage of information, even when the neural states are discrete and bounded. In general, the set of potential local operations is much greater. Here we discuss some aspects of the properties of networks of binary neurons with more general Boolean functions controlling the local dynamics. Two specific aspects are emphasised: (i) optimization in the presence of noise, and (ii) a simple model for short-term memory exhibiting primacy and recency in the recall of sequentially taught patterns.
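
    The following is a minimal, generic sketch of a synchronous random Boolean network (arbitrary lookup-table rules rather than synaptic weighted sums); it illustrates the kind of dynamics discussed, not the authors' optimized autoassociative construction.

    ```python
    # Generic random Boolean network: N binary units, each driven by a random Boolean
    # function of K randomly chosen inputs, updated synchronously.
    import numpy as np

    rng = np.random.default_rng(1)
    N, K = 64, 3
    inputs = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])  # wiring
    tables = rng.integers(0, 2, size=(N, 2 ** K))                                # Boolean rules

    def step(state):
        """One synchronous update of all units."""
        new = np.empty_like(state)
        for i in range(N):
            idx = 0
            for bit in state[inputs[i]]:        # encode the K input bits as a table index
                idx = (idx << 1) | int(bit)
            new[i] = tables[i, idx]
        return new

    state = rng.integers(0, 2, size=N)
    for t in range(20):
        state = step(state)
    print(state)
    ```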

  14. Regular Wave Propagation Out of Noise in Chemical Active Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alonso, S.; Sendina-Nadal, I.; Perez-Munuzuri, V.

    2001-08-13

    A pacemaker, regularly emitting chemical waves, is created out of noise when an excitable photosensitive Belousov-Zhabotinsky medium, strictly unable to autonomously initiate autowaves, is forced with a spatiotemporal patterned random illumination. These experimental observations are also reproduced numerically by using a set of reaction-diffusion equations for an activator-inhibitor model, and further analytically interpreted in terms of genuine coupling effects arising from parametric fluctuations. Within the same framework we also address situations of noise-sustained propagation in subexcitable media.

  15. Hot spots in the microwave sky

    NASA Technical Reports Server (NTRS)

    Vittorio, Nicola; Juszkiewicz, Roman

    1987-01-01

    The assumption that the cosmic background fluctuations can be approximated as a random Gaussian field implies specific predictions for the radiation temperature pattern. Using this assumption, the abundances and angular sizes are calculated for regions of various levels of brightness expected to appear in the sky. Different observational strategies are assessed in the context of these results. Calculations for both large-angle and small-angle anisotropy generated by scale-invariant fluctuations in a flat universe are presented. Also discussed are simple generalizations to open cosmological models.

  16. Determining Individual Variation in Growth and Its Implication for Life-History and Population Processes Using the Empirical Bayes Method

    PubMed Central

    Vincenzi, Simone; Mangel, Marc; Crivelli, Alain J.; Munch, Stephan; Skaug, Hans J.

    2014-01-01

    The differences in demographic and life-history processes between organisms living in the same population have important consequences for ecological and evolutionary dynamics. Modern statistical and computational methods allow the investigation of individual and shared (among homogeneous groups) determinants of the observed variation in growth. We use an Empirical Bayes approach to estimate individual and shared variation in somatic growth using a von Bertalanffy growth model with random effects. To illustrate the power and generality of the method, we consider two populations of marble trout Salmo marmoratus living in Slovenian streams, where individually tagged fish have been sampled for more than 15 years. We use year-of-birth cohort, population density during the first year of life, and individual random effects as potential predictors of the von Bertalanffy growth function's parameters k (rate of growth) and L∞ (asymptotic size). Our results showed that size ranks were largely maintained throughout marble trout lifetime in both populations. According to the Akaike Information Criterion (AIC), the best models showed different growth patterns for year-of-birth cohorts as well as the existence of substantial individual variation in growth trajectories after accounting for the cohort effect. For both populations, models including density during the first year of life showed that growth tended to decrease with increasing population density early in life. Model validation showed that predictions of individual growth trajectories using the random-effects model were more accurate than predictions based on mean size-at-age of fish. PMID:25211603
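
    A minimal sketch of the growth model itself, assuming the standard von Bertalanffy form L(t) = L∞(1 − exp(−k(t − t0))) with lognormal individual random effects on k and L∞; the Empirical Bayes estimation machinery of the paper is not reproduced here, and all parameter values are illustrative.

    ```python
    # Simulate size-at-age trajectories that differ among individuals through random
    # effects on the von Bertalanffy parameters k and Linf.
    import numpy as np

    def vbgf(t, linf, k, t0=0.0):
        return linf * (1.0 - np.exp(-k * (t - t0)))

    rng = np.random.default_rng(2)
    n_fish, ages = 50, np.arange(0, 8)
    k_i    = 0.35 * np.exp(rng.normal(0.0, 0.15, n_fish))   # individual growth rates
    linf_i = 300.0 * np.exp(rng.normal(0.0, 0.10, n_fish))  # individual asymptotic sizes (mm)

    lengths = np.array([vbgf(ages, L, k) for L, k in zip(linf_i, k_i)])
    print(lengths.shape)  # (50 fish, 8 ages): trajectories differing among individuals
    ```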

  17. Suspicious activity recognition in infrared imagery using Hidden Conditional Random Fields for outdoor perimeter surveillance

    NASA Astrophysics Data System (ADS)

    Rogotis, Savvas; Ioannidis, Dimosthenis; Tzovaras, Dimitrios; Likothanassis, Spiros

    2015-04-01

    The aim of this work is to present a novel approach for automatic recognition of suspicious activities in outdoor perimeter surveillance systems based on infrared video processing. Through the combination of size, speed, and appearance-based features, like the Center-Symmetric Local Binary Patterns, short-term actions are identified and serve as input, along with user location, for modeling target activities using the theory of Hidden Conditional Random Fields. HCRFs are used to directly link a set of observations to the most appropriate activity label and as such to discriminate high risk activities (e.g., trespassing) from zero risk activities (e.g., loitering outside the perimeter). Experimental results demonstrate the effectiveness of our approach in identifying suspicious activities for video surveillance systems.

  18. Bayesian spatial transformation models with applications in neuroimaging data.

    PubMed

    Miranda, Michelle F; Zhu, Hongtu; Ibrahim, Joseph G

    2013-12-01

    The aim of this article is to develop a class of spatial transformation models (STM) to spatially model the varying association between imaging measures in a three-dimensional (3D) volume (or 2D surface) and a set of covariates. The proposed STM include a varying Box-Cox transformation model for dealing with the issue of non-Gaussian distributed imaging data and a Gaussian Markov random field model for incorporating spatial smoothness of the imaging data. Posterior computation proceeds via an efficient Markov chain Monte Carlo algorithm. Simulations and real data analysis demonstrate that the STM significantly outperforms the voxel-wise linear model with Gaussian noise in recovering meaningful geometric patterns. Our STM is able to reveal important brain regions with morphological changes in children with attention deficit hyperactivity disorder. © 2013, The International Biometric Society.

  19. Investigating Factorial Invariance of Latent Variables Across Populations When Manifest Variables Are Missing Completely

    PubMed Central

    Widaman, Keith F.; Grimm, Kevin J.; Early, Dawnté R.; Robins, Richard W.; Conger, Rand D.

    2013-01-01

    Difficulties arise in multiple-group evaluations of factorial invariance if particular manifest variables are missing completely in certain groups. Ad hoc analytic alternatives can be used in such situations (e.g., deleting manifest variables), but some common approaches, such as multiple imputation, are not viable. At least 3 solutions to this problem are viable: analyzing differing sets of variables across groups, using pattern mixture approaches, and a new method using random number generation. The latter solution, proposed in this article, is to generate pseudo-random normal deviates for all observations for manifest variables that are missing completely in a given sample and then to specify multiple-group models in a way that respects the random nature of these values. An empirical example is presented in detail comparing the 3 approaches. The proposed solution can enable quantitative comparisons at the latent variable level between groups using programs that require the same number of manifest variables in each group. PMID:24019738
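
    A minimal sketch of the data-side step only, under the assumption that one group never measured one manifest variable; the multiple-group model specification that "respects the random nature of these values" is software-specific and omitted.

    ```python
    # Fill a manifest variable that is missing completely in one group with pseudo-random
    # standard-normal deviates so both groups have the same set of columns.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)
    group_a = pd.DataFrame(rng.normal(size=(100, 3)), columns=["x1", "x2", "x3"])
    group_b = pd.DataFrame(rng.normal(size=(80, 2)),  columns=["x1", "x2"])  # x3 never measured

    group_b["x3"] = rng.normal(size=len(group_b))  # placeholder deviates, not real data
    # Downstream, loadings and intercepts for x3 in group B would be fixed rather than
    # estimated, so these values carry no information about the latent variables.
    ```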

  20. Random catalytic reaction networks

    NASA Astrophysics Data System (ADS)

    Stadler, Peter F.; Fontana, Walter; Miller, John H.

    1993-03-01

    We study networks that are a generalization of replicator (or Lotka-Volterra) equations. They model the dynamics of a population of object types whose binary interactions determine the specific type of interaction product. Such a system always reduces its dimension to a subset that contains production pathways for all of its members. The network equation can be rewritten at a level of collectives in terms of two basic interaction patterns: replicator sets and cyclic transformation pathways among sets. Although the system contains well-known cases that exhibit very complicated dynamics, the generic behavior of randomly generated systems is found (numerically) to be extremely robust: convergence to a globally stable rest point. It is easy to tailor networks that display replicator interactions where the replicators are entire self-sustaining subsystems, rather than structureless units. A numerical scan of random systems highlights the special properties of elementary replicators: they reduce the effective interconnectedness of the system, resulting in enhanced competition, and strong correlations between the concentrations.
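
    A small numerical sketch of a catalytic network of the kind described, assuming the common form dx_k/dt = Σ_{i,j} A[i,j,k] x_i x_j − x_k Φ(x), where the flux term Φ keeps concentrations on the simplex; the exact equations and parameters of the paper may differ.

    ```python
    # Each ordered pair (i, j) produces one randomly chosen product type k; the outflow
    # term keeps the total concentration normalized to 1.
    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(4)
    n = 10
    product = rng.integers(0, n, size=(n, n))          # product type of each interaction (i, j)
    A = np.zeros((n, n, n))
    for i in range(n):
        for j in range(n):
            A[i, j, product[i, j]] = 1.0

    def rhs(t, x):
        flux = np.einsum("ijk,i,j->k", A, x, x)        # production of each type k
        return flux - x * flux.sum()                   # outflow keeps sum(x) = 1

    x0 = rng.dirichlet(np.ones(n))
    sol = solve_ivp(rhs, (0.0, 200.0), x0, rtol=1e-8, atol=1e-10)
    print(np.round(sol.y[:, -1], 3))                   # typically converges to a stable rest point
    ```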

  1. Boundary lubrication of heterogeneous surfaces and the onset of cavitation in frictional contacts

    PubMed Central

    Savio, Daniele; Pastewka, Lars; Gumbsch, Peter

    2016-01-01

    Surfaces can be slippery or sticky depending on surface chemistry and roughness. We demonstrate in atomistic simulations that regular and random slip patterns on a surface lead to pressure excursions within a lubricated contact that increase quadratically with decreasing contact separation. This is captured well by a simple hydrodynamic model including wall slip. We predict with this model that pressure changes for larger length scales and realistic frictional conditions can easily reach cavitation thresholds and significantly change the load-bearing capacity of a contact. Cavitation may therefore be the norm, not the exception, under boundary lubrication conditions. PMID:27051871

  2. Spatial structures in a simple model of population dynamics for parasite-host interactions

    NASA Astrophysics Data System (ADS)

    Dong, J. J.; Skinner, B.; Breecher, N.; Schmittmann, B.; Zia, R. K. P.

    2015-08-01

    Spatial patterning can be crucially important for understanding the behavior of interacting populations. Here we investigate a simple model of parasite and host populations in which parasites are random walkers that must come into contact with a host in order to reproduce. We focus on the spatial arrangement of parasites around a single host, and we derive using analytics and numerical simulations the necessary conditions placed on the parasite fecundity and lifetime for the population's long-term survival. We also show that the parasite population can be pushed to extinction by a large drift velocity, but, counterintuitively, a small drift velocity generally increases the parasite population.
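
    A toy simulation in the spirit of the setup described (random-walking parasites with a finite lifetime that reproduce on contact with a single host); the lattice geometry, fecundity, and lifetime below are placeholders, not the authors' parameters.

    ```python
    # Parasites are 2D lattice random walkers that die after a fixed lifetime and spawn
    # offspring only when they step onto the host site at the origin.
    import numpy as np

    rng = np.random.default_rng(5)
    lifetime, fecundity, max_steps, cap = 60, 1, 400, 20000
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

    walkers = [{"pos": np.array([0, 0]), "age": 0} for _ in range(20)]
    for t in range(max_steps):
        newborn, survivors = [], []
        for w in walkers:
            w["pos"] = w["pos"] + moves[rng.integers(4)]
            w["age"] += 1
            if tuple(w["pos"]) == (0, 0):              # contact with the host at the origin
                newborn += [{"pos": np.array([0, 0]), "age": 0} for _ in range(fecundity)]
            if w["age"] < lifetime:
                survivors.append(w)
        walkers = survivors + newborn
        if not walkers or len(walkers) > cap:          # extinction, or stop once clearly surviving
            break
    print("steps run:", t + 1, "  population:", len(walkers))
    ```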

  3. Sectoral transitions - modeling the development from agrarian to service economies

    NASA Astrophysics Data System (ADS)

    Lutz, Raphael; Spies, Michael; Reusser, Dominik E.; Kropp, Jürgen P.; Rybski, Diego

    2013-04-01

    We consider the sectoral composition of a country's GDP, i.e., the partitioning into agrarian, industrial, and service sectors. Exploring a simple system of differential equations, we characterise the transfer of GDP shares between the sectors in the course of economic development. The model fits the majority of countries with 4 country-specific parameters. Relating the agrarian to the industrial sector, a data collapse over all countries and all years supports the applicability of our approach. Depending on the parameter ranges, country development exhibits different transfer properties. Most countries follow 3 of 8 characteristic paths. The types are not random but show distinct geographic and development patterns.
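
    The abstract does not give the authors' equations, so the sketch below is only an illustrative three-compartment transfer of GDP shares (agrarian to industrial to service) with made-up country-specific rates; it shows the kind of system meant, not the actual model.

    ```python
    # Shares stay on the simplex a + i + s = 1 while GDP is transferred between sectors.
    import numpy as np
    from scipy.integrate import solve_ivp

    def shares(t, y, k_ai, k_is):
        a, i, s = y
        return [-k_ai * a, k_ai * a - k_is * i, k_is * i]

    sol = solve_ivp(shares, (0, 100), [0.7, 0.2, 0.1], args=(0.05, 0.03),
                    t_eval=np.linspace(0, 100, 11))
    print(np.round(sol.y.T, 3))   # each row sums to 1: agrarian share falls, service share rises
    ```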

  4. Task 1, Fractal characteristics of drainage patterns observed in the Appalachian Valley and Ridge and Plateau provinces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, T.; Dominic, J.; Halverson, J.

    1996-04-10

    The Appalachian Valley and Ridge and Plateau provinces exhibit distinctly different drainage patterns. The patterns appear to be controlled by varying influences of local structural and lithologic variability. Drainage patterns in the Valley and Ridge study area can be classified as a combination of dendritic and trellis arrangements. The patterns vary over short distances in both the strike and dip directions. In the Granny Creek area of the Appalachian Plateau, drainage patterns are predominantly dendritic. The possibility that these drainage patterns have fractal characteristics was evaluated by box-counting. Results obtained from box counting do not yield a well-defined fractal regime in either area. In the Valley and Ridge a space-filling, or random, regime (D=2) is observed for boxes with side-lengths of 300 meters and greater. Below 300 meters, large changes in D occur between consecutively smaller box sizes. From side lengths of 300 to 150m, 150 to 75m, and 75 to 38m, D is measured at 1.77, 1.39, and 1.08, respectively. For box sizes less than 38m the fractal dimension is 1 or less. While the log-log response of the box counting data is nonlinear and does not define a fractal regime, the curves offer the possibility of characterizing non-fractal patterns. The rate at which D drops outside the random regime correlates to drainage density. D in areas with a smaller density of drainage segments fell toward saturation (D=1) more abruptly. The break-away point from the random regime and the transition to the saturated regime may provide useful information about the relative lengths of stream segments.
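
    A minimal sketch of standard box counting on a binary raster (not the authors' code): local slopes of log N(s) against log(1/s) between consecutive box sizes give the size-dependent D values discussed above.

    ```python
    # Count occupied boxes N(s) at successively smaller box sizes s and read off the
    # local fractal dimension between consecutive sizes.
    import numpy as np

    def box_count(img, size):
        """Number of size x size boxes containing at least one nonzero pixel."""
        h, w = img.shape
        count = 0
        for r in range(0, h, size):
            for c in range(0, w, size):
                if img[r:r + size, c:c + size].any():
                    count += 1
        return count

    rng = np.random.default_rng(6)
    img = rng.random((512, 512)) < 0.05               # stand-in for a rasterized drainage map
    sizes = [256, 128, 64, 32, 16, 8]
    counts = [box_count(img, s) for s in sizes]

    logs, logn = np.log(1.0 / np.array(sizes)), np.log(counts)
    local_D = np.diff(logn) / np.diff(logs)           # slope between consecutive box sizes
    print(np.round(local_D, 2))
    ```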

  5. [Vis-NIR spectroscopic pattern recognition combined with SG smoothing applied to breed screening of transgenic sugarcane].

    PubMed

    Liu, Gui-Song; Guo, Hao-Song; Pan, Tao; Wang, Ji-Hua; Cao, Gan

    2014-10-01

    Based on Savitzky-Golay (SG) smoothing screening, principal component analysis (PCA) combined with separately supervised linear discriminant analysis (LDA) and unsupervised hierarchical clustering analysis (HCA) were used for non-destructive visible and near-infrared (Vis-NIR) detection for breed screening of transgenic sugarcane. A random and stability-dependent framework of calibration, prediction, and validation was proposed. A total of 456 samples of sugarcane leaves at the elongation stage were collected from the field, comprising 306 transgenic (positive) samples containing the Bt and Bar genes and 150 non-transgenic (negative) samples. A total of 156 samples (negative 50 and positive 106) were randomly selected as the validation set; the remaining samples (negative 100 and positive 200, a total of 300 samples) were used as the modeling set, and then the modeling set was subdivided into calibration (negative 50 and positive 100, a total of 150 samples) and prediction sets (negative 50 and positive 100, a total of 150 samples) 50 times. The number of SG smoothing points was expanded, while some modes of higher derivative were removed because of small absolute value, and a total of 264 smoothing modes were used for screening. The pairwise combinations of the first three principal components were used, and then the optimal combination of principal components was selected according to the model effect. Based on all divisions of calibration and prediction sets and all SG smoothing modes, the SG-PCA-LDA and SG-PCA-HCA models were established, and the model parameters were optimized based on the average prediction effect over all divisions to produce modeling stability. Finally, model validation was performed on the validation set. With SG smoothing, the modeling accuracy and stability of PCA-LDA and PCA-HCA were significantly improved. For the optimal SG-PCA-LDA model, the recognition rates of positive and negative validation samples were 94.3% and 96.0%, respectively; for the optimal SG-PCA-HCA model they were 92.5% and 98.0%. Vis-NIR spectroscopic pattern recognition combined with SG smoothing could be used for accurate recognition of transgenic sugarcane leaves, and provided a convenient screening method for transgenic sugarcane breeding.
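
    An illustrative SG-PCA-LDA pipeline on synthetic spectra; the smoothing window, polynomial order, derivative, and data below are placeholders rather than the optimized SG mode or sugarcane measurements reported in the paper.

    ```python
    # Savitzky-Golay smoothing of Vis-NIR spectra, PCA for compression, then LDA for
    # two-class screening (transgenic vs non-transgenic).
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import FunctionTransformer

    rng = np.random.default_rng(7)
    n, p = 300, 400                                   # samples x wavelengths (synthetic)
    labels = rng.integers(0, 2, n)                    # 0 = non-transgenic, 1 = transgenic
    base = np.sin(np.linspace(0, 6, p))
    X = base + 0.02 * labels[:, None] * np.linspace(0, 1, p) + 0.05 * rng.normal(size=(n, p))

    sg = FunctionTransformer(lambda A: savgol_filter(A, window_length=21, polyorder=2,
                                                     deriv=1, axis=1))
    model = make_pipeline(sg, PCA(n_components=3), LinearDiscriminantAnalysis())

    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
    model.fit(X_tr, y_tr)
    print("validation recognition rate:", round(model.score(X_te, y_te), 3))
    ```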

  6. Systematic Onset of Periodic Patterns in Random Disk Packings

    NASA Astrophysics Data System (ADS)

    Topic, Nikola; Pöschel, Thorsten; Gallas, Jason A. C.

    2018-04-01

    We report evidence of a surprising systematic onset of periodic patterns in very tall piles of disks deposited randomly between rigid walls. Independently of the pile width, periodic structures are always observed in monodisperse deposits containing up to 10⁷ disks. The probability density function of the lengths of disordered transient phases that precede the onset of periodicity displays an approximately exponential tail. These disordered transients may become very large when the channel width grows without bound. For narrow channels, the probability density of finding periodic patterns of a given period displays a series of discrete peaks, which, however, are washed out completely when the channel width grows.

  7. Characterizing pixel and point patterns with a hyperuniformity disorder length.

    PubMed

    Chieco, A T; Dreyfus, R; Durian, D J

    2017-09-01

    We introduce the concept of a "hyperuniformity disorder length" h that controls the variance of volume fraction fluctuations for randomly placed windows of fixed size. In particular, fluctuations are determined by the average number of particles within a distance h from the boundary of the window. We first compute special expectations and bounds in d dimensions, and then illustrate the range of behavior of h versus window size L by analyzing several different types of simulated two-dimensional pixel patterns, where particle positions are stored as a binary digital image in which pixels have value zero if empty and one if they contain a particle. The first are random binomial patterns, where pixels are randomly flipped from zero to one with probability equal to area fraction. These have long-ranged density fluctuations, and simulations confirm the exact result h=L/2. Next we consider vacancy patterns, where a fraction f of particles on a lattice are randomly removed. These also display long-range density fluctuations, but with h=(L/2)(f/d) for small f, and h=L/2 for f→1. And finally, for a hyperuniform system with no long-range density fluctuations, we consider "Einstein patterns," where each particle is independently displaced from a lattice site by a Gaussian-distributed amount. For these, at large L, h approaches a constant equal to about half the root-mean-square displacement in each dimension. Then we turn to gray-scale pixel patterns that represent simulated arrangements of polydisperse particles, where the volume of a particle is encoded in the value of its central pixel. And we discuss the continuum limit of point patterns, where pixel size vanishes. In general, we thus propose to quantify particle configurations not just by the scaling of the density fluctuation spectrum but rather by the real-space spectrum of h(L) versus L. We call this approach "hyperuniformity disorder length spectroscopy".

  8. Forecasting of Seasonal Rainfall using ENSO and IOD teleconnection with Classification Models

    NASA Astrophysics Data System (ADS)

    De Silva, T.; Hornberger, G. M.

    2017-12-01

    Seasonal to annual forecasts of precipitation patterns are very important for water infrastructure management. In particular, such forecasts can be used to inform decisions about the operation of multipurpose reservoir systems in the face of changing climate conditions. Success in making useful forecasts often is achieved by considering climate teleconnections such as the El Niño-Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD), as related to sea surface temperature variations. We present an analysis to explore the utility of using rainfall relationships in Sri Lanka with ENSO and IOD to predict rainfall in the Mahaweli river basin. Forecasting of rainfall as classes - above normal, normal, and below normal - can be useful for water resource management decision making. Quadratic discriminant analysis (QDA) and random forest models are used to identify the patterns of rainfall classes with respect to ENSO and IOD indices. These models can be used to forecast the likelihood of areal rainfall anomalies using predicted climate indices. Results can be used for decisions regarding allocation of water for agriculture and electricity generation within the Mahaweli project of Sri Lanka.
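
    A minimal sketch with synthetic ENSO and IOD indices (not the Sri Lankan data): three rainfall classes predicted with quadratic discriminant analysis and a random forest under cross-validation.

    ```python
    # Classify seasonal rainfall as below normal / normal / above normal from two
    # climate indices using QDA and a random forest.
    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(8)
    n = 200
    X = np.column_stack([rng.normal(size=n), rng.normal(size=n)])    # [ENSO index, IOD index]
    score = -0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.5 * rng.normal(size=n)
    y = np.digitize(score, [-0.5, 0.5])                              # 0=below, 1=normal, 2=above

    for name, clf in [("QDA", QuadraticDiscriminantAnalysis()),
                      ("RF ", RandomForestClassifier(n_estimators=300, random_state=0))]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(name, "cross-validated accuracy:", round(acc, 2))
    ```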

  9. Contact stiffness of regularly patterned multi-asperity interfaces

    NASA Astrophysics Data System (ADS)

    Li, Shen; Yao, Quanzhou; Li, Qunyang; Feng, Xi-Qiao; Gao, Huajian

    2018-02-01

    Contact stiffness is a fundamental mechanical index of solid surfaces and relevant to a wide range of applications. Although the correlation between contact stiffness, contact size and load has long been explored for single-asperity contacts, our understanding of the contact stiffness of rough interfaces is less clear. In this work, the contact stiffness of hexagonally patterned multi-asperity interfaces is studied using a discrete asperity model. We confirm that the elastic interaction among asperities is critical in determining the mechanical behavior of rough contact interfaces. More importantly, in contrast to the common wisdom that the interplay of asperities is solely dictated by the inter-asperity spacing, we show that the number of asperities in contact (or equivalently, the apparent size of contact) also plays an indispensable role. Based on the theoretical analysis, we propose a new parameter for gauging the closeness of asperities. Our theoretical model is validated by a set of experiments. To facilitate the application of the discrete asperity model, we present a general equation for contact stiffness estimation of regularly rough interfaces, which is further proved to be applicable for interfaces with single-scale random roughness.

  10. Describing spatial pattern in stream networks: A practical approach

    USGS Publications Warehouse

    Ganio, L.M.; Torgersen, C.E.; Gresswell, R.E.

    2005-01-01

    The shape and configuration of branched networks influence ecological patterns and processes. Recent investigations of network influences in riverine ecology stress the need to quantify spatial structure not only in a two-dimensional plane, but also in networks. An initial step in understanding data from stream networks is discerning non-random patterns along the network. On the other hand, data collected in the network may be spatially autocorrelated and thus not suitable for traditional statistical analyses. Here we provide a method that uses commercially available software to construct an empirical variogram to describe spatial pattern in the relative abundance of coastal cutthroat trout in headwater stream networks. We describe the mathematical and practical considerations involved in calculating a variogram using a non-Euclidean distance metric to incorporate the network pathway structure in the analysis of spatial variability, and use a non-parametric technique to ascertain if the pattern in the empirical variogram is non-random.
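
    A minimal sketch (not the authors' workflow) of an empirical semivariogram computed from a user-supplied pairwise distance matrix, so that hydrologic network distances can replace Euclidean ones; the distance matrix and abundances below are synthetic, and the non-parametric test for non-randomness is omitted.

    ```python
    # Empirical semivariogram: bin pairwise squared differences by a supplied distance
    # matrix (which could be a network/hydrologic distance rather than Euclidean).
    import numpy as np

    def empirical_variogram(dist, values, bin_edges):
        """Return lag centers and semivariance 0.5*mean((z_i - z_j)^2) per distance bin."""
        i, j = np.triu_indices(len(values), k=1)
        d = dist[i, j]
        sq = 0.5 * (values[i] - values[j]) ** 2
        gamma = []
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            mask = (d >= lo) & (d < hi)
            gamma.append(sq[mask].mean() if mask.any() else np.nan)
        centers = 0.5 * (np.asarray(bin_edges[:-1]) + np.asarray(bin_edges[1:]))
        return centers, np.array(gamma)

    rng = np.random.default_rng(9)
    n = 40
    dist = rng.uniform(0, 10, size=(n, n)); dist = (dist + dist.T) / 2; np.fill_diagonal(dist, 0)
    abundance = rng.poisson(5, size=n).astype(float)      # stand-in for relative trout abundance
    lags, gamma = empirical_variogram(dist, abundance, np.linspace(0, 10, 6))
    print(np.round(gamma, 2))
    ```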

  12. In silico prediction of Tetrahymena pyriformis toxicity for diverse industrial chemicals with substructure pattern recognition and machine learning methods.

    PubMed

    Cheng, Feixiong; Shen, Jie; Yu, Yue; Li, Weihua; Liu, Guixia; Lee, Philip W; Tang, Yun

    2011-03-01

    There is an increasing need for the rapid safety assessment of chemicals by both industries and regulatory agencies throughout the world. In silico techniques are practical alternatives in environmental hazard assessment. This is especially true for addressing the persistence, bioaccumulation, and toxicity potentials of organic chemicals. Tetrahymena pyriformis toxicity is often used as a toxic endpoint. In this study, 1571 diverse unique chemicals were collected from the literature, composing the largest diverse data set for T. pyriformis toxicity. Classification predictive models of T. pyriformis toxicity were developed by substructure pattern recognition and different machine learning methods, including support vector machine (SVM), C4.5 decision tree, k-nearest neighbors, and random forest. The results of a 5-fold cross-validation showed that the SVM method performed better than the other algorithms. The overall predictive accuracies of the SVM classification model with a radial basis function kernel were 92.2% for the 5-fold cross-validation and 92.6% for the external validation set, respectively. Furthermore, several representative substructure patterns for characterizing T. pyriformis toxicity were also identified via information gain analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
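
    A minimal sketch with synthetic fingerprint bits standing in for substructure patterns (not the 1571-chemical data set): an RBF-kernel SVM evaluated by 5-fold cross-validation, as in the study design.

    ```python
    # Substructure patterns encoded as binary fingerprint bits, classified as toxic /
    # non-toxic with an RBF-kernel SVM under 5-fold cross-validation.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(10)
    n, n_bits = 400, 128
    X = rng.integers(0, 2, size=(n, n_bits)).astype(float)     # presence/absence of substructures
    y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n) > 2.5).astype(int)  # toy toxicity label

    svm = SVC(kernel="rbf", C=10.0, gamma="scale")
    print("5-fold CV accuracy:", round(cross_val_score(svm, X, y, cv=5).mean(), 3))
    ```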

  13. Test of “Facilitation” vs. “Proximal Process” Moderator Models for the Effects of Multisystemic Therapy on Adolescents with Severe Conduct Problem

    PubMed Central

    Weiss, Bahr; Han, Susan S.; Tran, Nam T.; Gallop, Robert; Ngo, Victoria K.

    2014-01-01

    The present study identified moderators of Multisystemic Therapy’s (MST) effects on adolescent conduct problems, considering “facilitation” and “proximal process” moderation models. The sample included 164 adolescents (mean age=14.6 years; 83% male) randomly assigned to receive MST or services as usual; parent, youth, and teacher reports of adolescent functioning were obtained. A number of significant moderators were identified. Proximal process moderation patterns were identified (e.g., families with parents with lower levels of adaptive child discipline skills gained more from MST), but the majority of significant interactions showed a facilitation moderation pattern with, for instance, higher levels of adaptive functioning in families and parents appearing to facilitate MST (i.e., greater benefits from MST were found for these families). This facilitation pattern may reflect such families being more capable of and/or more motivated to use the resources provided by MST. It is suggested that factors consistently identified as facilitation moderators may serve as useful foci for MST’s strength-based levers of change approach. Other implications of these findings for individualized treatment also are discussed. PMID:25387903

  14. Heterogeneity in Early Responses in ALLHAT (Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial).

    PubMed

    Dhruva, Sanket S; Huang, Chenxi; Spatz, Erica S; Coppi, Andreas C; Warner, Frederick; Li, Shu-Xia; Lin, Haiqun; Xu, Xiao; Furberg, Curt D; Davis, Barry R; Pressel, Sara L; Coifman, Ronald R; Krumholz, Harlan M

    2017-07-01

    Randomized trials of hypertension have seldom examined heterogeneity in response to treatments over time and the implications for cardiovascular outcomes. Understanding this heterogeneity, however, is a necessary step toward personalizing antihypertensive therapy. We applied trajectory-based modeling to data on 39 763 study participants of the ALLHAT (Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial) to identify distinct patterns of systolic blood pressure (SBP) response to randomized medications during the first 6 months of the trial. Two trajectory patterns were identified: immediate responders (85.5%), on average, had a decreasing SBP, whereas nonimmediate responders (14.5%), on average, had an initially increasing SBP followed by a decrease. Compared with those randomized to chlorthalidone, participants randomized to amlodipine (odds ratio, 1.20; 95% confidence interval [CI], 1.10-1.31), lisinopril (odds ratio, 1.88; 95% CI, 1.73-2.03), and doxazosin (odds ratio, 1.65; 95% CI, 1.52-1.78) had higher adjusted odds ratios associated with being a nonimmediate responder (versus immediate responder). After multivariable adjustment, nonimmediate responders had a higher hazard ratio of stroke (hazard ratio, 1.49; 95% CI, 1.21-1.84), combined cardiovascular disease (hazard ratio, 1.21; 95% CI, 1.11-1.31), and heart failure (hazard ratio, 1.48; 95% CI, 1.24-1.78) during follow-up between 6 months and 2 years. The SBP response trajectories provided superior discrimination for predicting downstream adverse cardiovascular events than classification based on difference in SBP between the first 2 measurements, SBP at 6 months, and average SBP during the first 6 months. Our findings demonstrate heterogeneity in response to antihypertensive therapies and show that chlorthalidone is associated with more favorable initial response than the other medications. © 2017 American Heart Association, Inc.

  15. Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.

    PubMed

    Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong

    2018-03-01

    The traffic safety research has developed spatiotemporal models to explore the variations in the spatial pattern of crash risk over time. Many studies observed notable benefits associated with the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research for the comparison of different temporal treatments and their interaction with spatial component. This study developed four spatiotemporal models with varying complexity due to the different temporal treatments such as (I) linear time trend; (II) quadratic time trend; (III) Autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction which allows greater flexibility compared to the traditional linear space-time interaction. The mixture component allows the accommodation of global space-time interaction as well as the departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of mixture models based on the diverse criteria pertaining to goodness-of-fit, cross-validation and evaluation based on in-sample data for predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification which was evidently more complex due to the addition of information borrowed from neighboring years, but this addition of parameters allowed significant advantage at posterior deviance which subsequently benefited overall fit to crash data. The Base models were also developed to study the comparison between the proposed mixture and traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models due to the advantages of much lower deviance. For cross-validation comparison of predictive accuracy, linear time trend model was adjudged the best as it recorded the highest value of log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data for model development. Under each criterion, observed crash counts were compared with three types of data containing Bayesian estimated, normal predicted, and model replicated ones. The linear model again performed the best in most scenarios except one case of using model replicated data and two cases involving prediction without including random effects. These phenomena indicated the mediocre performance of linear trend when random effects were excluded for evaluation. This might be due to the flexible mixture space-time interaction which can efficiently absorb the residual variability escaping from the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models as the mixture ones generated more precise estimated crash counts across all four models, suggesting that the advantages associated with mixture component at model fit were transferable to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of random effect models which validates the importance of incorporating the correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Predicting assemblages and species richness of endemic fish in the upper Yangtze River.

    PubMed

    He, Yongfeng; Wang, Jianwei; Lek-Ang, Sithan; Lek, Sovan

    2010-09-01

    The present work describes the ability of two modeling methods, Classification and Regression Tree (CART) and Random Forest (RF), to predict endemic fish assemblages and species richness in the upper Yangtze River, and then to identify the determinant environmental factors contributing to the models. The models included 24 predictor variables and 2 response variables (fish assemblage and species richness) for a total of 46 site units. The predictive quality of the modeling approaches was judged with a leave-one-out validation procedure. CART and RF assigned site units to the correct fish assemblage with average success rates of 60.9% and 71.7%, respectively, and explained 73% and 84% of the variance in species richness. RF proved to be better than CART in terms of accuracy and efficiency in ecological applications. In both cases, the mixed models including both land cover and river characteristic variables were more powerful than either individual one in explaining the endemic fish distribution pattern in the upper Yangtze River. For instance, altitude, slope, length, discharge, runoff, farmland, and alpine and sub-alpine meadow played important roles in driving the observed endemic fish assemblage structure, while farmland, slope grassland, discharge, runoff, altitude, and drainage area were most important in explaining the observed patterns of endemic species richness. Therefore, the various effects of human activity on natural aquatic ecosystems, in particular flow modification of the river and land use changes, may have a considerable effect on the endemic fish distribution patterns on a regional scale. Copyright 2010 Elsevier B.V. All rights reserved.
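
    A minimal sketch with synthetic site data (not the Yangtze data set): a regression tree and a random forest predicting species richness from environmental predictors, judged by leave-one-out validation as described above.

    ```python
    # CART and RF regression for species richness, evaluated with leave-one-out.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(11)
    n_sites, n_env = 46, 8                        # 46 site units, as in the study
    X = rng.normal(size=(n_sites, n_env))         # altitude, slope, discharge, land cover, ...
    richness = np.clip(5 + 2 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n_sites), 0, None)

    for name, model in [("CART", DecisionTreeRegressor(max_depth=4, random_state=0)),
                        ("RF  ", RandomForestRegressor(n_estimators=300, random_state=0))]:
        pred = cross_val_predict(model, X, richness, cv=LeaveOneOut())
        print(name, "leave-one-out R2:", round(r2_score(richness, pred), 2))
    ```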

  17. Layout decomposition of self-aligned double patterning for 2D random logic patterning

    NASA Astrophysics Data System (ADS)

    Ban, Yongchan; Miloslavsky, Alex; Lucas, Kevin; Choi, Soo-Han; Park, Chul-Hong; Pan, David Z.

    2011-04-01

    Self-aligned double patterning (SADP) has been adopted as a promising solution for sub-30nm technology nodes due to its lower overlay problem and better process tolerance. SADP is in production use for 1D dense patterns with good pitch control such as NAND Flash memory applications, but it is still challenging to apply SADP to 2D random logic patterns. The favored type of SADP for complex logic interconnects is a two-mask approach using a core mask and a trim mask. In this paper, we first describe layout decomposition methods of spacer-type double patterning lithography, then report a type of SADP-compliant layout, and finally report SADP applications on a Samsung 22nm SRAM layout. For SADP decomposition, we propose several SADP-aware layout coloring algorithms and a method of generating lithography-friendly core mask patterns. Experimental results on 22nm node designs show that our proposed layout decomposition for SADP effectively decomposes any given layout.
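
    As a simplified illustration only (the SADP-specific core/trim rules in the paper are more involved), layout decomposition is often cast as coloring a conflict graph; the sketch below two-colors features whose spacing violates the same-mask rule and reports when no legal coloring exists.

    ```python
    # Two-coloring of a conflict graph via BFS: features that conflict must receive
    # different colors; an odd cycle means the layout is not two-colorable.
    from collections import deque

    def two_color(n_features, conflicts):
        """conflicts: list of (i, j) pairs that may not share a mask."""
        adj = [[] for _ in range(n_features)]
        for i, j in conflicts:
            adj[i].append(j)
            adj[j].append(i)
        color = [None] * n_features
        for start in range(n_features):
            if color[start] is not None:
                continue
            color[start] = 0
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if color[v] is None:
                        color[v] = 1 - color[u]
                        queue.append(v)
                    elif color[v] == color[u]:
                        return None                    # odd cycle: not decomposable
        return color

    print(two_color(4, [(0, 1), (1, 2), (2, 3)]))      # [0, 1, 0, 1]
    print(two_color(3, [(0, 1), (1, 2), (2, 0)]))      # None (coloring conflict)
    ```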

  18. Directed assembly of gold nanowires on silicon via reorganization and simultaneous fusion of randomly distributed gold nanoparticles.

    PubMed

    Reinhardt, Hendrik M; Bücker, Kerstin; Hampp, Norbert A

    2015-05-04

    Laser-induced reorganization and simultaneous fusion of nanoparticles is introduced as a versatile concept for pattern formation on surfaces. The process takes advantage of a phenomenon called laser-induced periodic surface structures (LIPSS) which originates from periodically alternating photonic fringe patterns in the near-field of solids. Associated photonic fringe patterns are shown to reorganize randomly distributed gold nanoparticles on a silicon wafer into periodic gold nanostructures. Concomitant melting due to optical heating facilitates the formation of continuous structures such as periodic gold nanowire arrays. Generated patterns can be converted into secondary structures using directed assembly or self-organization. This includes for example the rotation of gold nanowire arrays by arbitrary angles or their fragmentation into arrays of aligned gold nanoparticles.

  19. Pattern of brain injury and depressed heart rate variability in newborns with hypoxic ischemic encephalopathy.

    PubMed

    Metzler, Marina; Govindan, Rathinaswamy; Al-Shargabi, Tareq; Vezina, Gilbert; Andescavage, Nickie; Wang, Yunfei; du Plessis, Adre; Massaro, An N

    2017-09-01

    Background Decreased heart rate variability (HRV) is a measure of autonomic dysfunction and brain injury in newborns with hypoxic ischemic encephalopathy (HIE). This study aimed to characterize the relationship between HRV and brain injury pattern using magnetic resonance imaging (MRI) in newborns with HIE undergoing therapeutic hypothermia. Methods HRV metrics were quantified in the time domain (αS, αL, and root mean square at short (RMSS) and long (RMSL) timescales) and frequency domain (relative low- (LF) and high-frequency (HF) power) over 24-27 h of life. The brain injury pattern shown by MRI was classified as no injury, pure cortical/white matter injury, mixed watershed/mild basal ganglia injury, predominant basal ganglia or global injury, and death. HRV metrics were compared across brain injury pattern groups using a random-effects mixed model. Results Data from 74 infants were analyzed. Brain injury pattern was significantly associated with the degree of HRV suppression. Specifically, negative associations were observed between the pattern of brain injury and RMSS (estimate -0.224, SE 0.082, P=0.006), RMSL (estimate -0.189, SE 0.082, P=0.021), and LF power (estimate -0.044, SE 0.016, P=0.006). Conclusion Degree of HRV depression is related to the pattern of brain injury. HRV monitoring may provide insights into the pattern of brain injury at the bedside.
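
    A minimal sketch of the frequency-domain step (not the authors' pipeline): relative low-frequency power of a beat-to-beat interval series from a Welch spectrum of the evenly resampled tachogram; the band edges below are conventional adult values used only as placeholders, since neonatal bands differ.

    ```python
    # Relative LF power from a synthetic RR-interval series via an evenly resampled
    # tachogram and a Welch power spectral density.
    import numpy as np
    from scipy.signal import welch
    from scipy.interpolate import interp1d

    rng = np.random.default_rng(12)
    rr = 0.45 + 0.02 * rng.standard_normal(600)          # synthetic beat-to-beat intervals (s)
    beat_times = np.cumsum(rr)

    fs = 4.0                                             # resampling rate (Hz)
    t = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    rr_even = interp1d(beat_times, rr)(t)

    f, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
    df = f[1] - f[0]

    def band_power(lo, hi):
        m = (f >= lo) & (f < hi)
        return psd[m].sum() * df

    lf, hf = band_power(0.04, 0.15), band_power(0.15, 0.40)   # placeholder band edges
    print("relative LF power:", round(lf / (lf + hf), 3))
    ```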

  20. PATTERN OF BRAIN INJURY AND DEPRESSED HEART RATE VARIABILITY IN NEWBORNS WITH HYPOXIC ISCHEMIC ENCEPHALOPATHY

    PubMed Central

    Metzler, Marina; Govindan, Rathinaswamy; Al-Shargabi, Tareq; Vezina, Gilbert; Andescavage, Nickie; Wang, Yunfei; du Plessis, Adre; Massaro, An N

    2017-01-01

    Background Decreased heart rate variability (HRV) is a measure of autonomic dysfunction and brain injury in newborns with hypoxic ischemic encephalopathy (HIE). This study aimed to characterize the relationship between HRV and brain injury pattern by MRI in newborns with HIE undergoing therapeutic hypothermia. Methods HRV metrics were quantified in the time domain (αS, αL, and root mean square at short [RMSS] and long [RMSL] time scales) and frequency domain (relative low-[LF] and high-frequency [HF] power) during the time period 24–27 hours of life. Brain injury pattern by MRI was classified as no injury, pure cortical/white matter injury, mixed watershed/mild basal nuclei injury, predominant basal nuclei or global injury, and died. HRV metrics were compared across brain injury pattern groups using a random effects mixed model. Results Data from 74 infants were analyzed. Brain injury pattern was significantly associated with degree of HRV suppression. Specifically, negative associations were observed between pattern of brain injury and RMSS (estimate −0.224, SE 0.082, p=0.006), RMSL (estimate −0.189, SE 0.082, p=0.021), and LF power (estimate −0.044, SE 0.016, p=0.006). Conclusion Degree of HRV depression is related to pattern of brain injury. HRV monitoring may provide insights into pattern of brain injury at the bedside. PMID:28376079
