Sample records for distributed processing model

  1. Hierarchical species distribution models

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2016-01-01

    Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.
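
    A minimal Python sketch of the point-process view described in this abstract (all names and values are illustrative, not from the paper): presence-only points are simulated from an inhomogeneous Poisson point process with a log-linear intensity by thinning a dominating homogeneous process.

      import numpy as np

      rng = np.random.default_rng(0)

      def covariate(x, y):
          # Hypothetical environmental surface on the unit square.
          return np.sin(3 * x) + y

      beta0, beta1 = 3.0, 1.2                 # log-intensity coefficients (illustrative)
      lam_max = np.exp(beta0 + beta1 * 2.0)   # bound on the intensity over [0, 1]^2

      # Dominating homogeneous Poisson process, then thinning.
      n = rng.poisson(lam_max)
      pts = rng.random((n, 2))
      lam = np.exp(beta0 + beta1 * covariate(pts[:, 0], pts[:, 1]))
      keep = rng.random(n) < lam / lam_max
      print(keep.sum(), "presence points retained")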

  2. Negative Binomial Process Count and Mixture Modeling.

    PubMed

    Zhou, Mingyuan; Carin, Lawrence

    2015-02-01

    The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
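
    A small numerical check (a sketch, not the authors' code) of the gamma-Poisson construction at the heart of the NB process: Poisson counts whose rates are gamma distributed are marginally negative binomial.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      r, p = 3.0, 0.4                      # NB dispersion and probability parameters
      lam = rng.gamma(shape=r, scale=p / (1 - p), size=200_000)  # gamma rate measure
      counts = rng.poisson(lam)            # Poisson draws given the gamma rates

      # Compare empirical frequencies against the NB probability mass function.
      ks = np.arange(0, 15)
      emp = np.array([(counts == k).mean() for k in ks])
      thy = stats.nbinom.pmf(ks, r, 1 - p)
      print(np.abs(emp - thy).max())       # should be small (sampling noise only)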

  3. Does lake size matter? Combining morphology and process modeling to examine the contribution of lake classes to population-scale processes

    USGS Publications Warehouse

    Winslow, Luke A.; Read, Jordan S.; Hanson, Paul C.; Stanley, Emily H.

    2014-01-01

    With lake abundances in the thousands to millions, creating an intuitive understanding of the distribution of morphology and processes in lakes is challenging. To improve researchers’ understanding of large-scale lake processes, we developed a parsimonious mathematical model based on the Pareto distribution to describe the distribution of lake morphology (area, perimeter and volume). While debate continues over which mathematical representation best fits any one distribution of lake morphometric characteristics, we recognize the need for a simple, flexible model to advance understanding of how the interaction between morphometry and function dictates scaling across large populations of lakes. These models make clear the relative contribution of lakes to the total amount of lake surface area, volume, and perimeter. They also highlight the critical Pareto slopes at which total perimeter, area and volume would be evenly distributed across lake size-classes: 0.63, 1 and 1.12, respectively. These models of morphology can be used in combination with models of process to create overarching “lake population” level models of process. To illustrate this potential, we combine the model of surface area distribution with a model of carbon mass accumulation rate. We found that even if smaller lakes contribute relatively less to total surface area than larger lakes, the increasing carbon accumulation rate with decreasing lake size is strong enough to bias the distribution of carbon mass accumulation towards smaller lakes. This analytical framework provides a relatively simple approach to upscaling morphology and process that is easily generalizable to other ecosystem processes.
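
    A short sketch of the scaling argument (assumed parameterization, illustrative values): under a Pareto size distribution with slope c, the share of total lake area per logarithmic size class is roughly constant when c = 1, the critical slope for area quoted above.

      import numpy as np

      rng = np.random.default_rng(1)
      c = 1.0                                   # Pareto slope for area (critical value)
      a_min = 0.01                              # smallest lake area, km^2 (illustrative)
      areas = a_min * (rng.pareto(c, size=1_000_000) + 1.0)

      edges = np.logspace(-2, 2, 9)             # logarithmic size classes
      class_area, _ = np.histogram(areas, bins=edges, weights=areas)
      print(np.round(class_area / class_area.sum(), 3))   # roughly equal shares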

  4. Multi-model comparison on the effects of climate change on tree species in the eastern U.S.: results from an enhanced niche model and process-based ecosystem and landscape models

    Treesearch

    Louis R. Iverson; Frank R. Thompson; Stephen Matthews; Matthew Peters; Anantha Prasad; William D. Dijak; Jacob Fraser; Wen J. Wang; Brice Hanberry; Hong He; Maria Janowiak; Patricia Butler; Leslie Brandt; Chris Swanston

    2016-01-01

    Context. Species distribution models (SDMs) establish statistical relationships between the current distribution of species and key attributes, whereas process-based models simulate ecosystem and tree species dynamics based on representations of physical and biological processes. TreeAtlas, which uses the DISTRIB SDM, and Linkages and LANDIS PRO, process...

  5. Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.

    PubMed

    Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J

    2018-05-24

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.

  6. Comparison of a species distribution model and a process model from a hierarchical perspective to quantify effects of projected climate change on tree species

    Treesearch

    Jeffrey E. Schneiderman; Hong S. He; Frank R. Thompson; William D. Dijak; Jacob S. Fraser

    2015-01-01

    Tree species distribution and abundance are affected by forces operating across a hierarchy of ecological scales. Process and species distribution models have been developed emphasizing forces at different scales. Understanding model agreement across hierarchical scales provides perspective on prediction uncertainty and ultimately enables policy makers and managers to...

  7. Red mud flocculation process in alumina production

    NASA Astrophysics Data System (ADS)

    Fedorova, E. R.; Firsov, A. Yu

    2018-05-01

    The process of thickening and washing red mud is a bottleneck of alumina production. The existing automated systems of thickening process control involve stabilizing the parameters of the primary technological circuits of the thickener. A promising direction of research is the creation and improvement of models and systems for model-based control of the thickening process. However, the known models do not fully consider the presence of perturbing effects, in particular the particle size distribution in the feed and the distribution of floccules by size after the aggregation process in the feed barrel. The article is devoted to the basic concepts and terms used in writing the population balance algorithm. The population balance model is implemented in the MatLab environment. The result of the simulation is the particle size distribution after the flocculation process. This model allows one to predict the size distribution of floccules after the aggregation of red mud in the feed barrel. Mud from Jamaican bauxite served as the industrial sample of red mud; a Cytec Industries HX-3000 series flocculant at a concentration of 0.5% was used. In the simulation, model constants obtained in a tubular tank in the laboratories of CSIRO (Australia) were used.

  8. Transient Properties of Probability Distribution for a Markov Process with Size-dependent Additive Noise

    NASA Astrophysics Data System (ADS)

    Yamada, Yuhei; Yamazaki, Yoshihiro

    2018-04-01

    This study considered a stochastic model for cluster growth in a Markov process with cluster-size-dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and the derivation of the distributions is discussed.

  9. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    NASA Astrophysics Data System (ADS)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention paid to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned by an affinity propagation clustering algorithm into several clusters. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
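
    A minimal sketch (not the paper's implementation; data and similarity choice are assumptions) of the two-step decomposition described above: cluster the controlled variables with affinity propagation, then screen candidate inputs for one cluster with canonical correlation analysis.

      import numpy as np
      from sklearn.cluster import AffinityPropagation
      from sklearn.cross_decomposition import CCA

      rng = np.random.default_rng(0)
      # Synthetic plant: two hidden subsystems, each driving 4 controlled variables.
      U = rng.normal(size=(500, 6))                       # candidate input variables
      Y = np.hstack([U[:, :3] @ rng.normal(size=(3, 4)),
                     U[:, 3:] @ rng.normal(size=(3, 4))])
      Y += 0.1 * rng.normal(size=(500, 8))                # controlled variables

      # Step 1: partition controlled variables via affinity propagation on |correlation|.
      S = np.abs(np.corrcoef(Y.T))
      labels = AffinityPropagation(affinity="precomputed", random_state=0).fit(S).labels_

      # Step 2: rank candidate inputs for one subsystem by canonical correlation.
      Y_sub = Y[:, labels == labels[0]]
      scores = []
      for j in range(U.shape[1]):
          cca = CCA(n_components=1).fit(U[:, [j]], Y_sub)
          u_c, y_c = cca.transform(U[:, [j]], Y_sub)
          scores.append(abs(np.corrcoef(u_c[:, 0], y_c[:, 0])[0, 1]))
      print(labels, np.argsort(scores)[::-1])             # clusters and input ranking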

  10. Lindley frailty model for a class of compound Poisson processes

    NASA Astrophysics Data System (ADS)

    Kadilar, Gamze Özel; Ata, Nihal

    2013-10-01

    The Lindley distribution has gained importance in survival analysis for its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model, in which misspecified or omitted covariates are described by an unobservable random variable. Although the distribution of the frailty is generally assumed to be continuous, it is appropriate to consider discrete frailty distributions in some circumstances. In this paper, frailty models with a discrete compound Poisson process for Lindley distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to an earthquake data set from Turkey is examined.
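
    A sketch of sampling Lindley-distributed failure times via the standard mixture representation (parameter values are illustrative, not taken from the paper): Lindley(theta) is Exp(theta) with probability theta/(theta+1), and Gamma(2, theta) otherwise.

      import numpy as np

      rng = np.random.default_rng(0)
      theta, n = 1.5, 100_000
      mix = rng.random(n) < theta / (theta + 1.0)
      t = np.where(mix,
                   rng.exponential(1.0 / theta, n),       # Exp(theta) component
                   rng.gamma(2.0, 1.0 / theta, n))        # Gamma(2, theta) component
      print(t.mean(), (theta + 2) / (theta * (theta + 1)))  # empirical vs exact mean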

  11. Modelling rate distributions using character compatibility: implications for morphological evolution among fossil invertebrates.

    PubMed

    Wagner, Peter J

    2012-02-23

    Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution.

  12. A "total parameter estimation" method in the varification of distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Wang, M.; Qin, D.; Wang, H.

    2011-12-01

    Conventionally, hydrological models are used for runoff or flood forecasting, so model parameters are commonly estimated from discharge measurements at the catchment outlets. With the advancement of hydrological science and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKE SHE, and WEP, have gradually become the mainstream models in hydrology. However, the assessment of distributed hydrological models and the determination of their parameters still rely on runoff and, occasionally, groundwater level measurements. It is essential in many countries, including China, to understand the local and regional water cycle: not only do we need to simulate the runoff generation process for flood forecasting in wet areas, we also need to grasp the water cycle pathways and the consumption and transformation processes in arid and semi-arid regions for conservation and integrated water resources management. As a distributed hydrological model can simulate the physical processes within a catchment, we can obtain a more realistic representation of the actual water cycle within the simulation model. Runoff is the combined result of various hydrological processes, so using runoff alone for parameter estimation is inherently problematic and makes accuracy difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of rainfall and is concentrated in the rainy season from June to August each year. During other months, many of the perennial rivers within the river basin dry up. Thus, runoff simulation alone does not fully utilize the distributed hydrological model in arid and semi-arid regions. This paper proposes a "total parameter estimation" method to verify distributed hydrological models across various water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe River Basin in China. The application results demonstrate that this comprehensive testing method is very useful in the development of a distributed hydrological model and that it provides a new way of thinking in hydrological science.

  13. Modeling the VARTM Composite Manufacturing Process

    NASA Technical Reports Server (NTRS)

    Song, Xiao-Lan; Loos, Alfred C.; Grimsley, Brian W.; Cano, Roberto J.; Hubert, Pascal

    2004-01-01

    A comprehensive simulation model of the Vacuum Assisted Resin Transfer Molding (VARTM) composite manufacturing process has been developed. For isothermal resin infiltration, the model incorporates submodels which describe cure of the resin and changes in resin viscosity due to cure, resin flow through the reinforcement preform and distribution medium, and compaction of the preform during the infiltration. The accuracy of the model was validated by measuring the flow patterns during resin infiltration of flat preforms. The modeling software was used to evaluate the effects of the distribution medium on resin infiltration of a flat preform. Different distribution medium configurations were examined using the model and the results were compared with data collected during resin infiltration of a carbon fabric preform. The results of the simulations show that the approach used to model the distribution medium can significantly affect the predicted resin infiltration times. Resin infiltration into the preform can be accurately predicted only when the distribution medium is modeled correctly.

  14. The model of drugs distribution dynamics in biological tissue

    NASA Astrophysics Data System (ADS)

    Ginevskij, D. A.; Izhevskij, P. V.; Sheino, I. N.

    2017-09-01

    The dose distribution in Neutron Capture Therapy follows the distribution of 10B in the tissue. Modern models of the pharmacokinetics of drugs describe the processes occurring in conditioned "chambers" (blood-organ-tumor), but fail to describe the spatial distribution of the drug in the tumor and in normal tissue. A mathematical model of the spatial distribution dynamics of drugs in tissue, depending on the concentration of the drug in the blood, was developed. The modeling method represents the biological structure as a randomly inhomogeneous medium in which the 10B distribution occurs. The parameters of the model that cannot be determined rigorously in experiments are taken as quantities governed by the laws of independent random processes. Estimates of the 10B distribution in the tumor and healthy tissue, inside and outside the cells, are obtained.

  15. Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.

    PubMed

    Renner, Ian W; Warton, David I

    2013-03-01

    Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature.
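
    A minimal illustration (assumed synthetic setup, not the paper's analysis) of the equivalence exploited above: presence records gridded to cell counts can be fit with a Poisson log-linear model, i.e. a discretized Poisson point process.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n_cells = 2_000
      X = sm.add_constant(rng.normal(size=(n_cells, 2)))   # environmental covariates
      beta = np.array([-2.0, 1.0, -0.5])                   # true log-intensity coefficients
      counts = rng.poisson(np.exp(X @ beta))               # presences per grid cell

      fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
      print(fit.params)   # recovers beta up to sampling noise; the intercept is the
                          # scale-dependent term that distinguishes MAXENT's output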

  16. Effects of species biological traits and environmental heterogeneity on simulated tree species distribution shifts under climate change

    Treesearch

    Wen J. Wang; Hong S. He; Frank R. Thompson; Martin A. Spetich; Jacob S. Fraser

    2018-01-01

    Demographic processes (fecundity, dispersal, colonization, growth, and mortality) and their interactions with environmental changes are not well represented in current climate-distribution models (e.g., niche and biophysical process models) and constitute a large uncertainty in projections of future tree species distribution shifts. We investigate how species biological...

  17. A distributed computing model for telemetry data processing

    NASA Astrophysics Data System (ADS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-05-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  18. A distributed computing model for telemetry data processing

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Scott, Kevin L.; Weismuller, Steven P.

    1994-01-01

    We present a new approach to distributing processed telemetry data among spacecraft flight controllers within the control centers at NASA's Johnson Space Center. This approach facilitates the development of application programs which integrate spacecraft-telemetered data and ground-based synthesized data, then distributes this information to flight controllers for analysis and decision-making. The new approach combines various distributed computing models into one hybrid distributed computing model. The model employs both client-server and peer-to-peer distributed computing models cooperating to provide users with information throughout a diverse operations environment. Specifically, it provides an attractive foundation upon which we are building critical real-time monitoring and control applications, while simultaneously lending itself to peripheral applications in playback operations, mission preparations, flight controller training, and program development and verification. We have realized the hybrid distributed computing model through an information sharing protocol. We shall describe the motivations that inspired us to create this protocol, along with a brief conceptual description of the distributed computing models it employs. We describe the protocol design in more detail, discussing many of the program design considerations and techniques we have adopted. Finally, we describe how this model is especially suitable for supporting the implementation of distributed expert system applications.

  19. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling.
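
    A schematic sketch of the batch-parallel pattern described above (a generic Python stand-in, not the Java Parallel Processing Framework system from the paper; the model run is a placeholder, not a MODFLOW call):

      from concurrent.futures import ProcessPoolExecutor
      import random

      def run_realization(seed):
          # Stand-in for one stochastic groundwater model run (placeholder compute).
          rng = random.Random(seed)
          return sum(rng.gauss(0.0, 1.0) for _ in range(100_000))

      if __name__ == "__main__":
          # 500 Monte Carlo realizations spread over 10 worker processes.
          with ProcessPoolExecutor(max_workers=10) as pool:
              results = list(pool.map(run_realization, range(500)))
          print(len(results), "realizations processed")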

  20. Modelling rate distributions using character compatibility: implications for morphological evolution among fossil invertebrates

    PubMed Central

    Wagner, Peter J.

    2012-01-01

    Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution. PMID:21795266

  21. How can model comparison help improving species distribution models?

    PubMed

    Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle

    2013-01-01

    Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could, however, be improved by integrating a more realistic representation of the species' resistance to water stress, for instance, which advocates for continued efforts to understand and explicitly formulate the impact of climatic conditions and variations on these processes.

  22. How Can Model Comparison Help Improving Species Distribution Models?

    PubMed Central

    Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle

    2013-01-01

    Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could, however, be improved by integrating a more realistic representation of the species' resistance to water stress, for instance, which advocates for continued efforts to understand and explicitly formulate the impact of climatic conditions and variations on these processes. PMID:23874779

  23. Caveats for correlative species distribution modeling

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Stohlgren, Thomas J.; Kumar, Sunil; Morisette, Jeffrey T.; Holcombe, Tracy R.

    2015-01-01

    Correlative species distribution models are becoming commonplace in the scientific literature and public outreach products, displaying locations, abundance, or suitable environmental conditions for harmful invasive species, threatened and endangered species, or species of special concern. Accurate species distribution models are useful for efficient and adaptive management and conservation, research, and ecological forecasting. Yet, these models are often presented without fully examining or explaining the caveats for their proper use and interpretation and are often implemented without understanding the limitations and assumptions of the model being used. We describe common pitfalls, assumptions, and caveats of correlative species distribution models to help novice users and end users better interpret these models. Four primary caveats corresponding to different phases of the modeling process, each with supporting documentation and examples, include: (1) all sampling data are incomplete and potentially biased; (2) predictor variables must capture distribution constraints; (3) no single model works best for all species, in all areas, at all spatial scales, and over time; and (4) the results of species distribution models should be treated like a hypothesis to be tested and validated with additional sampling and modeling in an iterative process.

  24. Probabilistic properties of injection induced seismicity - implications for the seismic hazard analysis

    NASA Astrophysics Data System (ADS)

    Lasocki, Stanislaw; Urban, Pawel; Kwiatek, Grzegorz; Martinez-Garzón, Particia

    2017-04-01

    Injection induced seismicity (IIS) is an undesired dynamic rockmass response to massive fluid injections. This includes reactions, among others, to hydro-fracturing for shale gas exploitation. The complexity and changeability of the technological factors that induce IIS may result in significant deviations of the observed distributions of seismic process parameters from the models which perform well for natural, tectonic seismic processes. Classic formulations of probabilistic seismic hazard analysis in natural seismicity assume the seismic marked point process to be a stationary Poisson process, whose marks - magnitudes - are governed by a Gutenberg-Richter born exponential distribution. It is well known that the use of an inappropriate earthquake occurrence model and/or an inappropriate magnitude distribution model leads to significant systematic errors of hazard estimates. It is therefore of paramount importance to check whether these assumptions on the seismic process, commonly used in natural seismicity, can be safely used in IIS hazard problems or not. Seismicity accompanying shale gas operations is widely studied in the framework of the project "Shale Gas Exploration and Exploitation Induced Risks" (SHEER). Here we present results of SHEER project investigations of such seismicity from Oklahoma and of a proxy of such seismicity - IIS data from The Geysers geothermal field. We attempt to answer the following questions: Do IIS earthquakes follow the Gutenberg-Richter distribution law, so that the magnitude distribution can be modelled by an exponential distribution? Is the occurrence process of IIS earthquakes Poissonian? Is it segmentally Poissonian? If yes, how are these segments linked to cycles of technological operations? Statistical tests indicate that the Gutenberg-Richter born exponential distribution model for magnitude is, in general, inappropriate. The magnitude distribution can be complex, multimodal, with no ready-to-use functional model. In this connection, we recommend using non-parametric kernel estimators of the magnitude distribution in hazard analyses. The earthquake occurrence process of IIS is not a Poisson process. When earthquake occurrences are influenced by a multitude of inducing factors, the interevent time distribution can be modelled by the Weibull distribution, supporting a negative ageing property of the process. When earthquake occurrences are due to a specific injection activity, the earthquake rate depends directly on the injection rate and responds immediately to changes of the injection rate. Furthermore, this response is not limited to correlated variations of the seismic activity but also concerns significant changes of the shape of the interevent time distribution. Unlike the event rate, the shape of the magnitude distribution does not exhibit correlation with the injection rate. This work was supported within SHEER: "Shale Gas Exploration and Exploitation Induced Risks" project funded from the Horizon 2020 R&I Framework Programme, call H2020-LCE 16-2014-1, and within statutory activities No3841/E-41/S/2016 of the Ministry of Science and Higher Education of Poland.
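
    A brief sketch (simulated times, illustrative parameters; not the SHEER data analysis) of fitting a Weibull law to interevent times, as recommended above; a fitted shape below 1 corresponds to the negative ageing property.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      dt = stats.weibull_min.rvs(0.7, scale=2.0, size=3_000, random_state=rng)

      shape, loc, scale = stats.weibull_min.fit(dt, floc=0.0)  # fix location at zero
      print(shape, scale)   # shape < 1 indicates a decreasing hazard (negative ageing)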

  25. A Framework for Distributed Problem Solving

    NASA Astrophysics Data System (ADS)

    Leone, Joseph; Shin, Don G.

    1989-03-01

    This work explores a distributed problem solving (DPS) approach, namely the AM/AG model, to cooperative memory recall. The AM/AG model is a hierarchic social system metaphor for DPS based on Mintzberg's model of organizations. At the core of the model are information flow mechanisms named amplification and aggregation. Amplification is a process of expounding a given task, called an agenda, into a set of subtasks with a magnified degree of specificity and distributing them to multiple processing units downward in the hierarchy. Aggregation is a process of combining the results reported from multiple processing units into a unified view, called a resolution, and promoting the conclusion upward in the hierarchy. The combination of amplification and aggregation can account for a memory recall process which primarily relies on the ability to make associations between vast amounts of related concepts, sort out the combined results, and promote the most plausible ones. The amplification process is discussed in detail and an implementation of it is presented. The process is illustrated by an example.

  26. The ‘hit’ phenomenon: a mathematical model of human dynamics interactions as a stochastic process

    NASA Astrophysics Data System (ADS)

    Ishii, Akira; Arakaki, Hisashi; Matsuda, Naoya; Umemura, Sanae; Urushidani, Tamiko; Yamagata, Naoya; Yoshida, Narihiko

    2012-06-01

    A mathematical model for the ‘hit’ phenomenon in entertainment within a society is presented as a stochastic process of human dynamics interactions. The model uses only the advertisement budget time distribution as an input, and word-of-mouth (WOM), represented by posts on social network systems, is used as data to make a comparison with the calculated results. The unit of time is days. The WOM distribution in time is found to be very close to the revenue distribution in time. Calculations for the Japanese motion picture market based on the mathematical model agree well with the actual revenue distribution in time.

  27. The Evolution of Grain Size Distribution in Explosive Rock Fragmentation - Sequential Fragmentation Theory Revisited

    NASA Astrophysics Data System (ADS)

    Scheu, B.; Fowler, A. C.

    2015-12-01

    Fragmentation is a ubiquitous phenomenon in many natural and engineering systems. It is the process by which an initially competent medium, solid or liquid, is broken up into a population of constituents. Examples occur in collisions and impacts of asteroids/meteorites, explosion-driven fragmentation of munitions on a battlefield, fragmentation of magma in a volcanic conduit causing explosive volcanic eruptions, and break-up of liquid drops. Besides the mechanism of fragmentation, the resulting frequency-size distribution of the generated constituents is of central interest. Initially such distributions were fitted empirically using lognormal, Rosin-Rammler and Weibull distributions (e.g. Brown & Wohletz 1995). The sequential fragmentation theory (Brown 1989, Wohletz et al. 1989, Wohletz & Brown 1995) and the application of fractal theory to fragmentation products (Turcotte 1986, Perfect 1997, Perugini & Kueppers 2012) attempt to overcome this shortcoming by providing a more physical basis for the applied distribution. Both rely on an at least partially scale-invariant and thus self-similar random fragmentation process. Here we provide a stochastic model for the evolution of grain size distribution during the explosion process. Our model is based on laboratory experiments in which volcanic rock samples explode naturally when rapidly depressurized from initial pressures of several MPa to ambient conditions. The physics governing this fragmentation process has been successfully modelled and the observed fragmentation pattern could be numerically reproduced (Fowler et al. 2010). The fragmentation of these natural rocks leads to grain size distributions which vary depending on the experimental starting conditions. Our model provides a theoretical description of these different grain size distributions. It combines a sequential model of the type outlined by Turcotte (1986), generalized to cater for the explosive process appropriate here, with a description of the fracturing events in which the rock fragments and a recipe for the production of fines, as observed in the experiments. To our knowledge, this implementation of a deterministic fracturing process into a stochastic (sequential) model is unique; further, it provides the model with some forecasting power.
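
    To make the sequential idea concrete, here is a toy cascade (an illustration of sequential fragmentation in general, not the authors' calibrated model; threshold and break probability are assumptions): fragments split repeatedly at random mass fractions until they fall below a strength-related threshold, producing a skewed population with abundant fines.

      import numpy as np

      rng = np.random.default_rng(0)
      fragments = [1.0]        # start from one block of unit mass
      threshold = 1e-3         # below this mass a fragment no longer breaks
      p_break = 0.7            # chance an eligible fragment breaks per generation

      for _ in range(30):      # fragmentation generations
          nxt = []
          for m in fragments:
              if m > threshold and rng.random() < p_break:
                  f = rng.random()                 # random mass split
                  nxt += [m * f, m * (1.0 - f)]
              else:
                  nxt.append(m)
          fragments = nxt

      sizes = np.array(fragments)
      print(len(sizes), np.median(sizes))          # many fines, strongly skewed sizes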

  28. A Neurally Plausible Parallel Distributed Processing Model of Event-Related Potential Word Reading Data

    ERIC Educational Resources Information Center

    Laszlo, Sarah; Plaut, David C.

    2012-01-01

    The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between…

  29. Land transportation model for supply chain manufacturing industries

    NASA Astrophysics Data System (ADS)

    Kurniawan, Fajar

    2017-12-01

    Supply chain is a system that integrates production, inventory, distribution and information processes to increase productivity and minimize costs. Transportation is an important part of the supply chain system, especially for supporting the distribution of materials, work-in-process products and final products. Jakarta, in fact, serves as the distribution center of the manufacturing industries for the industrial area. The transportation system has a large influence on the efficiency of the supply chain process. The main problem faced in Jakarta is traffic congestion, which affects distribution times. Based on a system dynamics model, there are several scenarios that can provide solutions to minimize distribution time, and thereby cost, such as the construction of ports approaching industrial areas other than Tanjung Priok, the widening of road facilities, the development of the railway system, and the development of distribution centers.

  30. Different modelling approaches to evaluate nitrogen transport and turnover at the watershed scale

    NASA Astrophysics Data System (ADS)

    Epelde, Ane Miren; Antiguedad, Iñaki; Brito, David; Jauch, Eduardo; Neves, Ramiro; Garneau, Cyril; Sauvage, Sabine; Sánchez-Pérez, José Miguel

    2016-08-01

    This study presents the simulation of hydrological processes and nutrient transport and turnover processes using two integrated numerical models: the Soil and Water Assessment Tool (SWAT) (Arnold et al., 1998), an empirical and semi-distributed numerical model, and Modelo Hidrodinâmico (MOHID) (Neves, 1985), a physics-based and fully distributed numerical model. This work shows that both models reproduce water and nitrate exportation at the watershed scale satisfactorily on annual and daily bases, with MOHID providing slightly better results. At the watershed scale, both SWAT and MOHID simulated the denitrification amount similarly and satisfactorily. However, as the MOHID model was the only one able to reproduce adequately the spatial variation of the soil hydrological conditions and the water table level fluctuation, it proved to be the only model able to reproduce the spatial variation of nutrient cycling processes that depend on the soil hydrological conditions, such as denitrification. This demonstrates the strength of fully distributed, physics-based models for simulating the spatial variability of nutrient cycling processes that depend on the hydrological conditions of the soils.

  31. The social architecture of capitalism

    NASA Astrophysics Data System (ADS)

    Wright, Ian

    2005-02-01

    A dynamic model of the social relations between workers and capitalists is introduced. The model self-organises into a dynamic equilibrium with statistical properties that are in close qualitative and in many cases quantitative agreement with a broad range of known empirical distributions of developed capitalism, including the power-law firm size distribution, the Laplace firm and GDP growth distribution, the lognormal firm demises distribution, the exponential recession duration distribution, the lognormal-Pareto income distribution, and the gamma-like firm rate-of-profit distribution. Normally these distributions are studied in isolation, but this model unifies and connects them within a single causal framework. The model also generates business cycle phenomena, including fluctuating wage and profit shares in national income about values consistent with empirical studies. The generation of an approximately lognormal-Pareto income distribution and an exponential-Pareto wealth distribution demonstrates that the power-law regime of the income distribution can be explained by an additive process on a power-law network that models the social relation between employers and employees organised in firms, rather than a multiplicative process that models returns to investment in financial markets. A testable consequence of the model is the conjecture that the rate-of-profit distribution is consistent with a parameter-mix of a ratio of normal variates with means and variances that depend on a firm size parameter that is distributed according to a power-law.

  32. RockFall analyst: A GIS extension for three-dimensional and spatially distributed rockfall hazard modeling

    NASA Astrophysics Data System (ADS)

    Lan, Hengxing; Derek Martin, C.; Lim, C. H.

    2007-02-01

    Geographic information system (GIS) modeling is used in combination with three-dimensional (3D) rockfall process modeling to assess rockfall hazards. A GIS extension, RockFall Analyst (RA), which is capable of effectively handling large amounts of geospatial information relative to rockfall behaviors, has been developed in ArcGIS using ArcObjects and C#. The 3D rockfall model considers dynamic processes on a cell plane basis. It uses inputs of distributed parameters in terms of raster and polygon features created in GIS. Two major components are included in RA: particle-based rockfall process modeling and geostatistics-based rockfall raster modeling. Rockfall process simulation results, 3D rockfall trajectories and their velocity features either for point seeders or polyline seeders are stored in 3D shape files. Distributed raster modeling, based on 3D rockfall trajectories and a spatial geostatistical technique, represents the distribution of spatial frequency, the flying and/or bouncing height, and the kinetic energy of falling rocks. A distribution of rockfall hazard can be created by taking these rockfall characteristics into account. A barrier analysis tool is also provided in RA to aid barrier design. An application of these modeling techniques to a case study is provided. The RA has been tested in ArcGIS 8.2, 8.3, 9.0 and 9.1.

  33. A model for the distributed storage and processing of large arrays

    NASA Technical Reports Server (NTRS)

    Mehrota, P.; Pratt, T. W.

    1983-01-01

    A conceptual model for parallel computations on large arrays is developed. The model provides a set of language concepts appropriate for processing arrays which are generally too large to fit in the primary memories of a multiprocessor system. The semantic model is used to represent arrays on a concurrent architecture in such a way that the performance realities inherent in the distributed storage and processing can be adequately represented. An implementation of the large array concept as an Ada package is also described.

  34. A gentle introduction to quantile regression for ecologists

    USGS Publications Warehouse

    Cade, B.S.; Noon, B.R.

    2003-01-01

    Quantile regression is a way to estimate the conditional quantiles of a response variable distribution in the linear model, providing a more complete view of possible causal relationships between variables in ecological processes. Typically, not all the factors that affect ecological processes are measured and included in the statistical models used to investigate relationships between variables associated with those processes. As a consequence, there may be a weak or no predictive relationship between the mean of the response variable (y) distribution and the measured predictive factors (X). Yet there may be stronger, useful predictive relationships with other parts of the response variable distribution. This primer relates quantile regression estimates to prediction intervals in parametric error distribution regression models (e.g. least squares), and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of the estimates for homogeneous and heterogeneous regression models.
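
    A quick sketch of conditional-quantile estimation in the linear model (simulated data, not from the primer): the response has x-dependent error variance, the situation where upper quantiles carry information the mean does not.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      x = rng.uniform(0, 10, 500)
      y = 1.0 + 0.5 * x + rng.normal(0, 0.2 + 0.3 * x)   # heteroscedastic response
      X = sm.add_constant(x)

      for q in (0.1, 0.5, 0.9):
          res = sm.QuantReg(y, X).fit(q=q)
          print(q, res.params)   # slope grows with q under this heterogeneity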

  35. Monte Carlo based toy model for fission process

    NASA Astrophysics Data System (ADS)

    Kurniadi, R.; Waris, A.; Viridi, S.

    2014-09-01

    There are many models and calculation techniques for obtaining a visible image of the fission yield process. In particular, fission yield can be calculated using two approaches, namely a macroscopic approach and a microscopic approach. This work proposes another approach in which the nucleus is treated as a toy model; hence, the process does not represent the real fission process in nature completely. The toy model is formed by a Gaussian distribution of random numbers that randomizes distances, such as the distance between a particle and a central point. The scission process is started by smashing the compound nucleus central point into two parts, a left central point and a right central point. These three points have different Gaussian distribution parameters, namely means (μCN, μL, μR) and standard deviations (σCN, σL, σR). By overlaying the three distributions, the numbers of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant. The smashing process is then repeated with σL and σR changed randomly.

  36. Assessing Performance Tradeoffs in Undersea Distributed Sensor Networks

    DTIC Science & Technology

    2006-09-01

    time. We refer to this process as track-before-detect (see [5] for a description), since the final determination of a target presence is not made until... expressions for probability of successful search and probability of false search for modeling the track-before-detect process. We then describe a numerical... random manner (randomly sampled from a uniform distribution). II. SENSOR NETWORK PERFORMANCE MODELS We model the process of track-before-detect by

  37. Design of distributed systems of hydrolithosphere processes management. A synthesis of distributed management systems

    NASA Astrophysics Data System (ADS)

    Pershin, I. M.; Pervukhin, D. A.; Ilyushin, Y. V.; Afanaseva, O. V.

    2017-10-01

    The paper considers the important problem of designing distributed systems for the management of hydrolithosphere processes. The control actions on the hydrolithosphere processes under consideration are implemented by a set of extraction wells. The article shows a method of defining approximation links for describing the dynamic characteristics of hydrolithosphere processes. The structure of the distributed regulators used in the management systems for the considered processes is presented. The paper analyses the results of the synthesis of the distributed management system and the results of modelling the closed-loop control system with respect to the parameters of the hydrolithosphere process.

  38. Managing distribution changes in time series prediction

    NASA Astrophysics Data System (ADS)

    Matias, J. M.; Gonzalez-Manteiga, W.; Taboada, J.; Ordonez, C.

    2006-07-01

    When a problem is modeled statistically, a single distribution model is usually postulated that is assumed to be valid for the entire space. Nonetheless, this practice may be somewhat unrealistic in certain application areas, in which the conditions of the process that generates the data may change; as far as we are aware, however, no techniques have been developed to tackle this problem. This article proposes a technique for modeling and predicting this change in time series with a view to improving estimates and predictions. The technique is applied, among other models, to the recently proposed hypernormal distribution. When tested on real data from a range of stock market indices, the technique produces better results than when a single distribution model is assumed to be valid for the entire period of time studied. Moreover, when a global model is postulated, it is highly recommended to select the hypernormal distribution parameter in the same likelihood maximization process.

  39. Multiphysics Modeling and Simulations of Mil A46100 Armor-Grade Martensitic Steel Gas Metal Arc Welding Process

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Ramaswami, S.; Snipes, J. S.; Yen, C.-F.; Cheeseman, B. A.; Montgomery, J. S.

    2013-10-01

    A multiphysics computational model has been developed for the conventional Gas Metal Arc Welding (GMAW) joining process and used to analyze butt-welding of MIL A46100, a prototypical high-hardness armor martensitic steel. The model consists of five distinct modules, each covering a specific aspect of the GMAW process, i.e., (a) dynamics of welding-gun behavior; (b) heat transfer from the electric arc and mass transfer from the electrode to the weld; (c) development of thermal and mechanical fields during the GMAW process; (d) the associated evolution and spatial distribution of the material microstructure throughout the weld region; and (e) the final spatial distribution of the as-welded material properties. To make the newly developed GMAW process model applicable to MIL A46100, the basic physical-metallurgy concepts and principles for this material have to be investigated and properly accounted for/modeled. The newly developed GMAW process model enables establishment of the relationship between the GMAW process parameters (e.g., open circuit voltage, welding current, electrode diameter, electrode-tip/weld distance, filler-metal feed speed, and gun travel speed), workpiece material chemistry, and the spatial distribution of as-welded material microstructure and properties. The predictions of the present GMAW model pertaining to the spatial distribution of the material microstructure and properties within the MIL A46100 weld region are found to be consistent with general expectations and prior observations.

  40. First-Passage-Time Distribution for Variable-Diffusion Processes

    NASA Astrophysics Data System (ADS)

    Barney, Liberty; Gunaratne, Gemunu H.

    2017-05-01

    First-passage-time distribution, which presents the likelihood of a stock reaching a pre-specified price at a given time, is useful in establishing the value of financial instruments and in designing trading strategies. The first-passage-time distribution for Wiener processes has a single peak, while that for stocks exhibits a notable second peak within a trading day. This feature has only been discussed sporadically - often dismissed as due to insufficient or incorrect data, or circumvented by conversion to tick time - and to the best of our knowledge has not been explained in terms of the underlying stochastic process. It was shown previously that intra-day variations in the market can be modeled by a stochastic process containing two variable-diffusion processes (Hua et al., Physica A 419:221-233, 2015). We show here that the first-passage-time distribution of this two-stage variable-diffusion model does exhibit a behavior similar to the empirical observation. In addition, we find that an extended model incorporating overnight price fluctuations exhibits intra- and inter-day behavior similar to that of empirical first-passage-time distributions.
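
    A Monte Carlo sketch of a first-passage-time distribution for the single-peak baseline case (a constant-diffusion Wiener process with illustrative parameters, not the paper's two-stage variable-diffusion model), checked against the closed-form inverse Gaussian law:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      drift, sigma, barrier, dt = 0.1, 0.2, 1.0, 0.01
      n_paths, n_steps = 2_000, 5_000           # horizon T = 50 time units

      steps = drift * dt + sigma * np.sqrt(dt) * rng.normal(size=(n_paths, n_steps))
      paths = steps.cumsum(axis=1)
      hit = (paths >= barrier).argmax(axis=1)   # first index at/above the barrier
      valid = paths.max(axis=1) >= barrier      # drop the few paths that never hit
      fpt = hit[valid] * dt

      # Closed-form benchmark: inverse Gaussian with mean barrier/drift and
      # shape parameter (barrier/sigma)**2, in scipy's parameterization.
      lam = (barrier / sigma) ** 2
      ig = stats.invgauss(mu=(barrier / drift) / lam, scale=lam)
      print(fpt.mean(), ig.mean())              # both close to 10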

  41. Parallel computing method for simulating hydrological processes of large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the best-known global environmental problems. It has altered the distribution of watershed hydrological processes in time and space, especially in the world's large rivers. Watershed hydrological process simulation based on physically based distributed hydrological models can produce better results than lumped models. However, such simulation involves a large amount of calculation, especially for large rivers, and thus needs huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. To address this problem, current parallel methods mostly parallelize the computation in the space and time dimensions: they calculate the natural features in order, based on the distributed hydrological model, by grid cell (unit or sub-basin) from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of the distributed hydrological model with methods adopting distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility, which means it can make full use of the available computing and storage resources when resources are limited, and computing efficiency improves linearly as computing resources increase. This method can satisfy the parallel computing requirements of hydrological process simulation in small, medium and large rivers.

  42. Working toward integrated models of alpine plant distribution.

    PubMed

    Carlson, Bradley Z; Randin, Christophe F; Boulangeat, Isabelle; Lavergne, Sébastien; Thuiller, Wilfried; Choler, Philippe

    2013-10-01

    Species distribution models (SDMs) have been frequently employed to forecast the response of alpine plants to global changes. Efforts to model alpine plant distribution have thus far been primarily based on a correlative approach, in which ecological processes are implicitly addressed through a statistical relationship between observed species occurrences and environmental predictors. Recent evidence, however, highlights the shortcomings of correlative SDMs, especially in alpine landscapes where plant species tend to be decoupled from atmospheric conditions in micro-topographic habitats and are particularly exposed to geomorphic disturbances. While alpine plants respond to the same limiting factors as plants found at lower elevations, alpine environments impose a particular set of scale-dependent and hierarchical drivers that shape the realized niche of species and that require explicit consideration in a modelling context. Several recent studies in the European Alps have successfully integrated both correlative and process-based elements into distribution models of alpine plants, but for the time being a single integrative modelling framework that includes all key drivers remains elusive. As a first step in working toward a comprehensive integrated model applicable to alpine plant communities, we propose a conceptual framework that structures the primary mechanisms affecting alpine plant distributions. We group processes into four categories, including multi-scalar abiotic drivers, gradient dependent species interactions, dispersal and spatial-temporal plant responses to disturbance. Finally, we propose a methodological framework aimed at developing an integrated model to better predict alpine plant distribution.

  3. The class of L ∩ D and its application to renewal reward process

    NASA Astrophysics Data System (ADS)

    Kamışlık, Aslı Bektaş; Kesemen, Tülay; Khaniyev, Tahir

    2018-01-01

    The class L ∩ D is generated by the intersection of two important subclasses of heavy-tailed distributions: the long-tailed distributions and the dominated varying distributions. This class is itself an important member of the heavy-tailed distributions and has principal application areas, especially in renewal, renewal reward, and random walk processes. The aim of this study is to review some well-known and less well-known results on renewal functions generated by the class L ∩ D and to apply them to a special renewal reward process known in the literature as a semi-Markovian inventory model of type (s, S). In particular, we focus on the Pareto distribution, which belongs to the L ∩ D subclass of heavy-tailed distributions. As a first step, we obtain asymptotic results for the renewal function generated by a Pareto distribution from the class L ∩ D, using well-known results of Embrechts and Omey [1]. We then apply the results obtained for the Pareto distribution to renewal reward processes. As an application, we investigate the inventory model of type (s, S) when demands follow a Pareto distribution from the class L ∩ D. We obtain an asymptotic expansion for the ergodic distribution function and, finally, an asymptotic expansion for the nth-order moments of the distribution of this process.
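    The renewal function H(t) = E[N(t)] for Pareto durations is easy to check numerically against the elementary renewal theorem. A minimal Monte Carlo sketch with illustrative parameter values (not the paper's asymptotic expansion):

```python
# Monte Carlo estimate of the renewal function H(t) = E[N(t)] when
# inter-renewal times are Pareto (a member of L ∩ D).
import numpy as np

rng = np.random.default_rng(0)
alpha, x_m = 2.5, 1.0        # Pareto shape and scale (finite mean: alpha > 1)

def renewal_count(t, rng):
    """Number of renewals in (0, t] with Pareto(alpha, x_m) durations."""
    total, n = 0.0, 0
    while True:
        total += x_m * (1.0 - rng.random()) ** (-1.0 / alpha)  # inverse CDF
        if total > t:
            return n
        n += 1

t = 50.0
H = np.mean([renewal_count(t, rng) for _ in range(20_000)])
# Elementary renewal theorem: H(t)/t -> 1/mean, mean = alpha*x_m/(alpha - 1).
print(H, t * (alpha - 1) / (alpha * x_m))
```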

  4. Single- versus dual-process models of lexical decision performance: insights from response time distributional analysis.

    PubMed

    Yap, Melvin J; Balota, David A; Cortese, Michael J; Watson, Jason M

    2006-12-01

    This article evaluates 2 competing models that address the decision-making processes mediating word recognition and lexical decision performance: a hybrid 2-stage model of lexical decision performance and a random-walk model. In 2 experiments, nonword type and word frequency were manipulated across 2 contrasts (pseudohomophone-legal nonword and legal-illegal nonword). When nonwords became more wordlike (i.e., BRNTA vs. BRANT vs. BRANE), response latencies to nonwords were slowed and the word frequency effect increased. More important, distributional analyses revealed that the Nonword Type × Word Frequency interaction was modulated by different components of the response time distribution, depending on the specific nonword contrast. A single-process random-walk model was able to account for this particular set of findings more successfully than the hybrid 2-stage model. (c) 2006 APA, all rights reserved.
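    A toy version of the single-process account: a discrete random walk between a "word" and a "nonword" boundary, where lowering the drift (e.g., for wordlike nonwords or low-frequency words) stretches the slow tail of the RT distribution. All parameters below are illustrative, not fitted values from the article:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_walk_rt(drift, boundary=20, step_sd=1.0, t_residual=300, dt_ms=5):
    """Return (response, RT in ms) for one simulated trial."""
    evidence, steps = 0.0, 0
    while abs(evidence) < boundary:
        evidence += drift + step_sd * rng.standard_normal()
        steps += 1
    response = "word" if evidence >= boundary else "nonword"
    return response, t_residual + steps * dt_ms

# Lower drift lengthens the distribution tail more than it shifts its
# leading edge -- the kind of distributional signature analysed above.
rts_hi = [random_walk_rt(0.8)[1] for _ in range(5000)]
rts_lo = [random_walk_rt(0.4)[1] for _ in range(5000)]
print(np.percentile(rts_hi, [10, 50, 90]))
print(np.percentile(rts_lo, [10, 50, 90]))
```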

  5. A Single-Boundary Accumulator Model of Response Times in an Addition Verification Task

    PubMed Central

    Faulkenberry, Thomas J.

    2017-01-01

    Current theories of mathematical cognition offer competing accounts of the interplay between encoding and calculation in mental arithmetic. Additive models propose that manipulations of problem format do not interact with the cognitive processes used in calculation. Alternatively, interactive models suppose that format manipulations have a direct effect on calculation processes. In the present study, we tested these competing models by fitting participants' RT distributions in an arithmetic verification task with a single-boundary accumulator model (the shifted Wald distribution). We found that in addition to providing a more complete description of RT distributions, the accumulator model afforded a potentially more sensitive test of format effects. Specifically, we found that format affected drift rate, which implies that problem format has a direct impact on calculation processes. These data give further support for an interactive model of mental arithmetic. PMID:28769853
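    The shifted Wald is a three-parameter inverse Gaussian, so a fit of the kind described can be sketched with scipy's `invgauss`, whose `loc` parameter plays the role of the shift (non-decision time). The data and parameter values below are synthetic and illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic "RTs" (in seconds) from a shifted Wald distribution
rt = stats.invgauss.rvs(mu=0.5, loc=0.25, scale=1.0, size=2000,
                        random_state=rng)

# Maximum-likelihood fit of the three parameters
mu_hat, loc_hat, scale_hat = stats.invgauss.fit(rt)
# In accumulator terms (one common mapping): loc ~ non-decision shift,
# while drift rate and response threshold are recovered from mu and scale.
print(f"mu={mu_hat:.3f}, shift={loc_hat:.3f}, scale={scale_hat:.3f}")
```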

  6. The Equilibrium Allele Frequency Distribution for a Population with Reproductive Skew

    PubMed Central

    Der, Ricky; Plotkin, Joshua B.

    2014-01-01

    We study the population genetics of two neutral alleles under reversible mutation in a model that features a skewed offspring distribution, called the Λ-Fleming–Viot process. We describe the shape of the equilibrium allele frequency distribution as a function of the model parameters. We show that the mutation rates can be uniquely identified from this equilibrium distribution, but the form of the offspring distribution cannot itself always be so identified. We introduce an estimator for the mutation rate that is consistent, independent of the form of reproductive skew. We also introduce a two-allele infinite-sites version of the Λ-Fleming–Viot process, and we use it to study how reproductive skew influences standing genetic diversity in a population. We derive asymptotic formulas for the expected number of segregating sites as a function of sample size and offspring distribution. We find that the Wright–Fisher model minimizes the equilibrium genetic diversity, for a given mutation rate and variance effective population size, compared to all other Λ-processes. PMID:24473932
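    The Wright–Fisher benchmark mentioned in the abstract is straightforward to simulate: binomial resampling with two-way mutation, whose equilibrium allele-frequency distribution is Beta-shaped under diffusion theory. A minimal sketch with illustrative population size and mutation rates (the Λ-process generalization replaces the binomial resampling step):

```python
import numpy as np

rng = np.random.default_rng(3)
N, mu01, mu10 = 1000, 1e-3, 1e-3   # gene copies; mutation rates 0->1, 1->0

def step(x):
    """One Wright-Fisher generation with reversible two-way mutation."""
    p = x / N
    p = p * (1 - mu10) + (1 - p) * mu01   # mutation before resampling
    return rng.binomial(N, p)

x, freqs = N // 2, []
for gen in range(50_000):
    x = step(x)
    if gen > 5_000 and gen % 20 == 0:     # discard burn-in, thin the chain
        freqs.append(x / N)

# Diffusion theory predicts a Beta(2*N*mu01, 2*N*mu10)-shaped equilibrium.
print(np.mean(freqs), np.var(freqs))
```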

  7. Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes.

    PubMed

    Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V

    2013-04-01

    Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.
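    For reference, the basic two-stage model (transcription, mRNA decay, translation, protein decay) whose protein distribution the mapping treats exactly can be simulated directly with the Gillespie algorithm. Rates below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
k_m, g_m, k_p, g_p = 2.0, 1.0, 10.0, 0.1   # rates per unit time

def simulate(t_end):
    """Gillespie simulation; returns the protein count at t_end."""
    t, m, p = 0.0, 0, 0
    while True:
        rates = np.array([k_m, g_m * m, k_p * m, g_p * p])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t >= t_end:
            return p
        event = rng.choice(4, p=rates / total)
        if event == 0:   m += 1    # transcription
        elif event == 1: m -= 1    # mRNA decay
        elif event == 2: p += 1    # translation
        else:            p -= 1    # protein decay

proteins = [simulate(50.0) for _ in range(500)]
# Steady-state mean is (k_m/g_m)*(k_p/g_p) = 200 for these rates.
print(np.mean(proteins), np.var(proteins))
```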

  8. Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes

    NASA Astrophysics Data System (ADS)

    Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V.

    2013-04-01

    Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.

  9. Simulating fail-stop in asynchronous distributed systems

    NASA Technical Reports Server (NTRS)

    Sabel, Laura; Marzullo, Keith

    1994-01-01

    The fail-stop failure model appears frequently in the distributed systems literature. However, in an asynchronous distributed system, the fail-stop model cannot be implemented. In particular, it is impossible to reliably detect crash failures in an asynchronous system. In this paper, we show that it is possible to specify and implement a failure model that is indistinguishable from the fail-stop model from the point of view of any process within an asynchronous system. We give necessary conditions for a failure model to be indistinguishable from the fail-stop model, and derive lower bounds on the amount of process replication needed to implement such a failure model. We present a simple one-round protocol for implementing one such failure model, which we call simulated fail-stop.

  10. Unified theory for stochastic modelling of hydroclimatic processes: Preserving marginal distributions, correlation structures, and intermittency

    NASA Astrophysics Data System (ADS)

    Papalexiou, Simon Michael

    2018-05-01

    Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
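    The core "parent Gaussian" idea is easy to sketch: simulate a correlated standard-Gaussian process, then back-transform it through the normal CDF and a target inverse CDF so the output has the desired marginal. The AR(1) coefficient and gamma marginal below are illustrative, and the paper's key step (the parametric correlation transformation that adjusts the parent correlations so the target correlations are matched exactly) is omitted here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, rho = 10_000, 0.8

z = np.empty(n)                    # parent Gaussian AR(1) process
z[0] = rng.standard_normal()
eps = rng.standard_normal(n) * np.sqrt(1 - rho**2)
for t in range(1, n):
    z[t] = rho * z[t - 1] + eps[t]

u = stats.norm.cdf(z)                        # uniform marginals
x = stats.gamma.ppf(u, a=0.7, scale=2.0)     # target marginal back-transform

print(stats.gamma.fit(x, floc=0)[0])         # marginal shape is preserved
print(np.corrcoef(x[:-1], x[1:])[0, 1])      # lag-1 correlation is damped vs rho
```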

  11. Optimization of Gas Metal Arc Welding (GMAW) Process for Maximum Ballistic Limit in MIL A46100 Steel Welded All-Metal Armor

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Ramaswami, S.; Snipes, J. S.; Yavari, R.; Yen, C.-F.; Cheeseman, B. A.

    2015-01-01

    Our recently developed multi-physics computational model for the conventional gas metal arc welding (GMAW) joining process has been upgraded with respect to its predictive capabilities regarding the process optimization for the attainment of maximum ballistic limit within the weld. The original model consists of six modules, each dedicated to handling a specific aspect of the GMAW process, i.e., (a) electro-dynamics of the welding gun; (b) radiation-/convection-controlled heat transfer from the electric arc to the workpiece and mass transfer from the filler metal consumable electrode to the weld; (c) prediction of the temporal evolution and the spatial distribution of thermal and mechanical fields within the weld region during the GMAW joining process; (d) the resulting temporal evolution and spatial distribution of the material microstructure throughout the weld region; (e) spatial distribution of the as-welded material mechanical properties; and (f) spatial distribution of the material ballistic limit. In the present work, the model is upgraded through the introduction of the seventh module in recognition of the fact that identification of the optimum GMAW process parameters relative to the attainment of the maximum ballistic limit within the weld region entails the use of advanced optimization and statistical sensitivity analysis methods and tools. The upgraded GMAW process model is next applied to the case of butt welding of MIL A46100 (a prototypical high-hardness armor-grade martensitic steel) workpieces using filler metal electrodes made of the same material. The predictions of the upgraded GMAW process model pertaining to the spatial distribution of the material microstructure and ballistic limit-controlling mechanical properties within the MIL A46100 butt weld are found to be consistent with general expectations and prior observations.

  12. A renewal jump-diffusion process with threshold dividend strategy

    NASA Astrophysics Data System (ADS)

    Li, Bo; Wu, Rong; Song, Min

    2009-06-01

    In this paper, we consider a jump-diffusion risk process with the threshold dividend strategy. Both the distributions of the inter-arrival times and the claims are assumed to be in the class of phase-type distributions. The expected discounted dividend function and the Laplace transform of the ruin time are discussed. Motivated by Asmussen [S. Asmussen, Stationary distributions for fluid flow models with or without Brownian noise, Stochastic Models 11 (1) (1995) 21-49], instead of studying the original process, we study the constructed fluid flow process and their closed-form formulas are obtained in terms of matrix expression. Finally, numerical results are provided to illustrate the computation.

  13. Modelling spatiotemporal distribution patterns of earthworms in order to indicate hydrological soil processes

    NASA Astrophysics Data System (ADS)

    Palm, Juliane; Klaus, Julian; van Schaik, Loes; Zehe, Erwin; Schröder, Boris

    2010-05-01

    Soils provide central ecosystem functions in recycling nutrients, detoxifying harmful chemicals, and regulating microclimate and local hydrological processes. The internal regulation of these functions, and therefore the development of healthy and fertile soils, mainly depends on the functional diversity of plants and animals. Soil organisms drive essential processes such as litter decomposition, nutrient cycling, water dynamics, and soil structure formation. Disturbances by different soil management practices (e.g., soil tillage, fertilization, pesticide application) affect the distribution and abundance of soil organisms and hence influence regulating processes. The strong relationship between environmental conditions and soil organisms gives us the opportunity to link spatiotemporal distribution patterns of indicator species with the potential provision of essential soil processes on different scales. Earthworms are key organisms for soil function and affect, among other things, water dynamics and solute transport in soils. Through their burrowing activity, earthworms increase the number of macropores by building semi-permanent burrow systems. In the unsaturated zone, earthworm burrows act as preferential flow pathways and affect water infiltration, surface, subsurface, and matrix flow, as well as the transport of water and solutes into deeper soil layers. The different ecological earthworm types differ in their importance for these flows. Deep-burrowing anecic earthworm species (e.g., Lumbricus terrestris) affect vertical flow and thus increase the risk of groundwater contamination with agrochemicals. In contrast, horizontally burrowing endogeic (e.g., Aporrectodea caliginosa) and epigeic species (e.g., Lumbricus rubellus) increase water conductivity and the diffuse distribution of water and solutes in the upper soil layers. The question of which processes are more relevant is pivotal for soil management and risk assessment. Thus, finding relevant environmental predictors that explain the distribution and dynamics of the different ecological earthworm types can help us understand where and when these processes are relevant in the landscape. We therefore develop species distribution models, which are a useful tool to predict spatiotemporal distributions of earthworm occurrence and abundance under changing environmental conditions. At the field scale, geostatistical distribution maps have shown that the spatial distribution of earthworms depends on soil parameters such as food supply, soil moisture, and bulk density, but with different patterns for earthworm stages (adult, juvenile) and ecological types (anecic, endogeic, epigeic). At the landscape scale, earthworm distribution seems to be strongly controlled by management- and disturbance-related factors. Our study presents different modelling approaches for predicting distribution patterns of earthworms in the Weiherbach area, an agricultural site in Kraichtal (Baden-Württemberg, Germany). We carried out field studies on arable fields differing in soil management practices (conventional, conservational), soil properties (organic matter content, texture, soil moisture), and topography (slope, elevation) in order to identify predictors for earthworm occurrence, abundance, and biomass. Our earthworm distribution models consider all ecological groups as well as different life stages, accounting for the fact that the activity of juveniles sometimes differs from that of adults. Within our BIOPORE project, our final goal is to couple our distribution models with population dynamics models and a preferential flow model into an integrated ecohydrological model, in order to analyse feedbacks between earthworm engineering and transport characteristics affecting the functioning of (agro-)ecosystems.

  14. Using a spatially-distributed hydrologic biogeochemistry model with a nitrogen transport module to study the spatial variation of carbon processes in a Critical Zone Observatory

    DOE PAGES

    Shi, Yuning; Eissenstat, David M.; He, Yuting; ...

    2018-05-12

    Terrestrial carbon processes are affected by soil moisture, soil temperature, nitrogen availability and solar radiation, among other factors. Most of the current ecosystem biogeochemistry models represent one point in space, and have limited characterization of hydrologic processes. Therefore, these models can resolve neither the topographically driven spatial variability of water, energy, and nutrients, nor their effects on carbon processes. A spatially-distributed land surface hydrologic biogeochemistry model, Flux-PIHM-BGC, is developed by coupling the Biome-BGC model with a physically-based land surface hydrologic model, Flux-PIHM. In the coupled system, each Flux-PIHM model grid couples a 1-D Biome-BGC model. In addition, a topographic solar radiation module and an advection-driven nitrogen transport module are added to represent the impact of topography on nutrient transport and solar energy distribution. Because Flux-PIHM is able to simulate lateral groundwater flow and represent the land surface heterogeneities caused by topography, Flux-PIHM-BGC is capable of simulating the complex interaction among water, energy, nutrients, and carbon in time and space. The Flux-PIHM-BGC model is tested at the Susquehanna/Shale Hills Critical Zone Observatory. Model results show that distributions of carbon and nitrogen stocks and fluxes are strongly affected by topography and landscape position, and tree growth is nitrogen limited. The predicted aboveground and soil carbon distributions generally agree with the macro patterns observed. Although the model underestimates the spatial variation, the predicted watershed average values are close to the observations. Lastly, the coupled Flux-PIHM-BGC model provides an important tool to study spatial variations in terrestrial carbon and nitrogen processes and their interactions with environmental factors, and to predict the spatial structure of the responses of ecosystems to climate change.

  15. Using a spatially-distributed hydrologic biogeochemistry model with a nitrogen transport module to study the spatial variation of carbon processes in a Critical Zone Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Yuning; Eissenstat, David M.; He, Yuting

    Terrestrial carbon processes are affected by soil moisture, soil temperature, nitrogen availability and solar radiation, among other factors. Most of the current ecosystem biogeochemistry models represent one point in space, and have limited characterization of hydrologic processes. Therefore, these models can resolve neither the topographically driven spatial variability of water, energy, and nutrients, nor their effects on carbon processes. A spatially-distributed land surface hydrologic biogeochemistry model, Flux-PIHM-BGC, is developed by coupling the Biome-BGC model with a physically-based land surface hydrologic model, Flux-PIHM. In the coupled system, each Flux-PIHM model grid couples a 1-D Biome-BGC model. In addition, a topographic solar radiation module and an advection-driven nitrogen transport module are added to represent the impact of topography on nutrient transport and solar energy distribution. Because Flux-PIHM is able to simulate lateral groundwater flow and represent the land surface heterogeneities caused by topography, Flux-PIHM-BGC is capable of simulating the complex interaction among water, energy, nutrients, and carbon in time and space. The Flux-PIHM-BGC model is tested at the Susquehanna/Shale Hills Critical Zone Observatory. Model results show that distributions of carbon and nitrogen stocks and fluxes are strongly affected by topography and landscape position, and tree growth is nitrogen limited. The predicted aboveground and soil carbon distributions generally agree with the macro patterns observed. Although the model underestimates the spatial variation, the predicted watershed average values are close to the observations. Lastly, the coupled Flux-PIHM-BGC model provides an important tool to study spatial variations in terrestrial carbon and nitrogen processes and their interactions with environmental factors, and to predict the spatial structure of the responses of ecosystems to climate change.

  16. Working toward integrated models of alpine plant distribution

    PubMed Central

    Carlson, Bradley Z.; Randin, Christophe F.; Boulangeat, Isabelle; Lavergne, Sébastien; Thuiller, Wilfried; Choler, Philippe

    2014-01-01

    Species distribution models (SDMs) have been frequently employed to forecast the response of alpine plants to global changes. Efforts to model alpine plant distribution have thus far been primarily based on a correlative approach, in which ecological processes are implicitly addressed through a statistical relationship between observed species occurrences and environmental predictors. Recent evidence, however, highlights the shortcomings of correlative SDMs, especially in alpine landscapes where plant species tend to be decoupled from atmospheric conditions in micro-topographic habitats and are particularly exposed to geomorphic disturbances. While alpine plants respond to the same limiting factors as plants found at lower elevations, alpine environments impose a particular set of scale-dependent and hierarchical drivers that shape the realized niche of species and that require explicit consideration in a modelling context. Several recent studies in the European Alps have successfully integrated both correlative and process-based elements into distribution models of alpine plants, but for the time being a single integrative modelling framework that includes all key drivers remains elusive. As a first step in working toward a comprehensive integrated model applicable to alpine plant communities, we propose a conceptual framework that structures the primary mechanisms affecting alpine plant distributions. We group processes into four categories, including multi-scalar abiotic drivers, gradient dependent species interactions, dispersal and spatial–temporal plant responses to disturbance. Finally, we propose a methodological framework aimed at developing an integrated model to better predict alpine plant distribution. PMID:24790594

  17. A fortran program for Monte Carlo simulation of oil-field discovery sequences

    USGS Publications Warehouse

    Bohling, Geoffrey C.; Davis, J.C.

    1993-01-01

    We have developed a program for performing Monte Carlo simulation of oil-field discovery histories. A synthetic parent population of fields is generated as a finite sample from a distribution of specified form. The discovery sequence then is simulated by sampling without replacement from this parent population in accordance with a probabilistic discovery process model. The program computes a chi-squared deviation between synthetic and actual discovery sequences as a function of the parameters of the discovery process model, the number of fields in the parent population, and the distributional parameters of the parent population. The program employs the three-parameter log gamma model for the distribution of field sizes and employs a two-parameter discovery process model, allowing the simulation of a wide range of scenarios. © 1993.
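    The core of such a discovery-process model is successive sampling without replacement with probability proportional to (a power of) field size, so larger fields tend to be found earlier. A minimal sketch, using a lognormal parent population as a stand-in for the paper's three-parameter log gamma (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
sizes = rng.lognormal(mean=3.0, sigma=1.5, size=200)   # synthetic parent population

def discovery_sequence(sizes, n_draws, rng, beta=1.0):
    """Successive sampling: P(next = i) proportional to size_i ** beta."""
    remaining = list(range(len(sizes)))
    seq = []
    for _ in range(n_draws):
        w = sizes[remaining] ** beta
        i = rng.choice(len(remaining), p=w / w.sum())
        seq.append(sizes[remaining.pop(i)])
    return np.array(seq)

seq = discovery_sequence(sizes, 50, rng)
print(seq[:10].mean(), seq[-10:].mean())   # early discoveries tend to be larger
```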

  18. Degradation data analysis based on a generalized Wiener process subject to measurement error

    NASA Astrophysics Data System (ADS)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure, and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by maximum likelihood estimation (MLE). The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is carried out to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach yields reasonable results with enhanced inference precision.
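    The data-generating structure such a model targets can be sketched directly: a drifted Wiener process on a transformed time scale, observed through additive Gaussian measurement error. Parameter values and the power-law time transformation below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, sigma_eps, q = 0.8, 0.3, 0.1, 1.2   # drift, diffusion, error sd,
                                               # time-transformation exponent
t = np.linspace(0.0, 10.0, 101)
tau = t ** q                                   # transformed time scale
inc = rng.normal(mu * np.diff(tau), sigma * np.sqrt(np.diff(tau)))
x = np.concatenate([[0.0], np.cumsum(inc)])    # true (latent) degradation path
y = x + rng.normal(0.0, sigma_eps, size=x.size)   # observations with error

threshold = 6.0   # failure threshold for the first-hitting-time definition
fht = t[np.argmax(x >= threshold)] if (x >= threshold).any() else np.inf
print(f"first hitting time of the true path: {fht}")
```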

  19. C. botulinum inactivation kinetics implemented in a computational model of a high-pressure sterilization process.

    PubMed

    Juliano, Pablo; Knoerzer, Kai; Fryer, Peter J; Versteeg, Cornelis

    2009-01-01

    High-pressure, high-temperature (HPHT) processing is effective for microbial spore inactivation using mild preheating, followed by rapid volumetric compression heating and cooling on pressure release, enabling much shorter processing times than conventional thermal processing for many food products. A computational thermal fluid dynamic (CTFD) model has been developed to model all processing steps, including the vertical pressure vessel, an internal polymeric carrier, and food packages in an axis-symmetric geometry. Heat transfer and fluid dynamic equations were coupled to four selected kinetic models for the inactivation of C. botulinum; the traditional first-order kinetic model, the Weibull model, an nth-order model, and a combined discrete log-linear nth-order model. The models were solved to compare the resulting microbial inactivation distributions. The initial temperature of the system was set to 90 degrees C and pressure was selected at 600 MPa, holding for 220 s, with a target temperature of 121 degrees C. A representation of the extent of microbial inactivation throughout all processing steps was obtained for each microbial model. Comparison of the models showed that the conventional thermal processing kinetics (not accounting for pressure) required shorter holding times to achieve a 12D reduction of C. botulinum spores than the other models. The temperature distribution inside the vessel resulted in a more uniform inactivation distribution when using a Weibull or an nth-order kinetics model than when using log-linear kinetics. The CTFD platform could illustrate the inactivation extent and uniformity provided by the microbial models. The platform is expected to be useful to evaluate models fitted into new C. botulinum inactivation data at varying conditions of pressure and temperature, as an aid for regulatory filing of the technology as well as in process and equipment design.
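    The contrast between log-linear and Weibull survivor curves drives the holding-time differences reported above. A minimal sketch of the two kinetic forms at a fixed lethal condition, with illustrative parameters rather than fitted C. botulinum values:

```python
import numpy as np

def log10_survivors_first_order(t, D):
    """Classical log-linear thermal kinetics: log10(N/N0) = -t / D."""
    return -t / D

def log10_survivors_weibull(t, delta, p):
    """Weibull model: log10(N/N0) = -(t / delta) ** p."""
    return -(t / delta) ** p

for ti in np.linspace(0, 220, 12):             # holding time, s
    print(f"{ti:6.1f} s  first-order: {log10_survivors_first_order(ti, 18):6.2f}"
          f"  Weibull: {log10_survivors_weibull(ti, 25, 0.8):6.2f}")
# A 12D process requires the holding time at which log10(N/N0) reaches -12,
# which differs between the two forms when p != 1.
```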

  20. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, used as optimization criteria, should locate the same unknown optimum; discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients, with the log-likelihood estimator being slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results, but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
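    For a Gaussian predictive distribution, both optimization scores have closed forms, which makes the comparison concrete. A minimal sketch (illustrative values; in a real post-processing fit, mu and sigma would be linear functions of ensemble statistics and either score would be summed over training cases and minimized):

```python
import numpy as np
from scipy import stats

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of N(mu, sigma^2) at observation y
    (Gneiting & Raftery form)."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * stats.norm.cdf(z) - 1)
                    + 2 * stats.norm.pdf(z) - 1 / np.sqrt(np.pi))

def nll_gaussian(y, mu, sigma):
    """Negative log-likelihood, the score minimized by ML estimation."""
    return -stats.norm.logpdf(y, loc=mu, scale=sigma)

y, mu, sigma = 2.1, 1.5, 0.8
print(crps_gaussian(y, mu, sigma), nll_gaussian(y, mu, sigma))
```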

  1. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    PubMed

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.

  2. Renewal processes based on generalized Mittag-Leffler waiting times

    NASA Astrophysics Data System (ADS)

    Cahoy, Dexter O.; Polito, Federico

    2013-03-01

    The fractional Poisson process has recently attracted experts from several fields of study. Its natural generalization of the ordinary Poisson process made the model more appealing for real-world applications. In this paper, we generalized the standard and fractional Poisson processes through the waiting time distribution, and showed their relations to an integral operator with a generalized Mittag-Leffler function in the kernel. The waiting times of the proposed renewal processes have the generalized Mittag-Leffler and stretched-squashed Mittag-Leffler distributions. Note that the generalizations naturally provide greater flexibility in modeling real-life renewal processes. Algorithms to simulate sample paths and to estimate the model parameters are derived. Note also that these procedures are necessary to make these models more usable in practice. State probabilities and other qualitative or quantitative features of the models are also discussed.

  3. Incorporating uncertainty in predictive species distribution modelling.

    PubMed

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  4. A randomised approach for NARX model identification based on a multivariate Bernoulli distribution

    NASA Astrophysics Data System (ADS)

    Bianchi, F.; Falsone, A.; Prandini, M.; Piroddi, L.

    2017-04-01

    The identification of polynomial NARX models is typically performed by incremental model building techniques. These methods assess the importance of each regressor based on the evaluation of partial individual models, which may ultimately lead to erroneous model selections. A more robust assessment of the significance of a specific model term can be obtained by considering ensembles of models, as done by the RaMSS algorithm. In that context, the identification task is formulated in a probabilistic fashion and a Bernoulli distribution is employed to represent the probability that a regressor belongs to the target model. Then, samples of the model distribution are collected to gather reliable information to update it, until convergence to a specific model. The basic RaMSS algorithm employs multiple independent univariate Bernoulli distributions associated to the different candidate model terms, thus overlooking the correlations between different terms, which are typically important in the selection process. Here, a multivariate Bernoulli distribution is employed, in which the sampling of a given term is conditioned by the sampling of the others. The added complexity inherent in considering the regressor correlation properties is more than compensated by the achievable improvements in terms of accuracy of the model selection process.

  5. Modeling Stochastic Variability in the Numbers of Surviving Salmonella enterica, Enterohemorrhagic Escherichia coli, and Listeria monocytogenes Cells at the Single-Cell Level in a Desiccated Environment

    PubMed Central

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso

    2016-01-01

    Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. IMPORTANCE We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. PMID:27940547
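    The closure property underlying this model is that a Poisson number of cells, each surviving independently with probability s(t), leaves a Poisson(λ·s(t)) number of survivors (binomial thinning). A minimal simulation of that property, with an assumed Weibull-type decay for s(t) and illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(8)
lam0, delta, p, t = 2.0, 5.0, 1.2, 8.0
s_t = 10 ** (-(t / delta) ** p)           # per-cell survival probability at time t

n0 = rng.poisson(lam0, size=100_000)      # initial cells per well, Poisson(lam0)
survivors = rng.binomial(n0, s_t)         # independent survival of each cell

lam_t = lam0 * s_t
print(survivors.mean(), lam_t)                   # Poisson mean matches lam0*s(t)
print((survivors == 0).mean(), np.exp(-lam_t))   # fraction of sterile wells
```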

  6. Modeling Stochastic Variability in the Numbers of Surviving Salmonella enterica, Enterohemorrhagic Escherichia coli, and Listeria monocytogenes Cells at the Single-Cell Level in a Desiccated Environment.

    PubMed

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu

    2017-02-15

    Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures because the presence of even one bacterium can cause foodborne disease. The results demonstrate that the variability in the numbers of surviving bacteria was described as a Poisson distribution by use of the model developed by use of the Poisson process. Description of the number of surviving bacteria as a probability distribution rather than as the point estimates used in a deterministic approach can provide a more realistic estimation of risk. The probability model should be useful for estimating the quantitative risk of bacterial survival during inactivation. Copyright © 2017 Koyama et al.

  7. Predicting fundamental and realized distributions based on thermal niche: A case study of a freshwater turtle

    NASA Astrophysics Data System (ADS)

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco; Ribeiro, Bruno R.

    2018-04-01

    Species distribution models (SDM) have been broadly used in ecology to address theoretical and practical problems. Currently, there are two main approaches to generate SDMs: (i) correlative models, which are based on species occurrences and environmental predictor layers, and (ii) process-based models, which are constructed from species' functional traits and physiological tolerances. The distributions estimated by each approach are based on different components of the species niche. Predictions of correlative models approximate species' realized niches, while predictions of process-based models are more akin to the species' fundamental niche. Here, we integrated the predictions of fundamental and realized distributions of the freshwater turtle Trachemys dorbigni. The fundamental distribution was estimated using data on T. dorbigni's egg incubation temperature, and the realized distribution was estimated using species occurrence records. Both types of distribution were estimated using the same regression approaches (logistic regression and support vector machines), considering both macroclimatic and microclimatic temperatures. The realized distribution of T. dorbigni was generally nested in its fundamental distribution, reinforcing the theoretical assumption that a species' realized niche is a subset of its fundamental niche. Both modelling algorithms produced similar results, but microtemperature gave better results than macrotemperature for the incubation model. Finally, our results reinforce the conclusion that species' realized distributions are constrained by factors other than just thermal tolerances.

  8. A convergent model for distributed processing of Big Sensor Data in urban engineering networks

    NASA Astrophysics Data System (ADS)

    Parygin, D. S.; Finogeev, A. G.; Kamaev, V. A.; Finogeev, A. A.; Gnedkova, E. P.; Tyukov, A. P.

    2017-01-01

    The development and study of a convergent model of grid, cloud, fog, and mobile computing for analytical Big Sensor Data processing are reviewed. The model is intended for building monitoring systems for spatially distributed objects and processes in urban engineering networks. The proposed approach is a convergence model for organizing distributed data processing. The fog computing model is used for processing and aggregating sensor data at network nodes and/or industrial controllers, with program agents loaded to perform the primary processing and data aggregation tasks. The grid and cloud computing models are used for mining and accumulating integral indicators. The computing cluster has a three-tier architecture: the main server at the first level, a cluster of SCADA system servers at the second level, and a set of GPU cards supporting the Compute Unified Device Architecture at the third level. The mobile computing model is applied to visualize the results of the analysis with elements of augmented reality and geo-information technologies. The integrated indicators are transferred to the data center and accumulated in a multidimensional storage for data mining and knowledge extraction.

  9. Distributed parameterization of complex terrain

    NASA Astrophysics Data System (ADS)

    Band, Lawrence E.

    1991-03-01

    This paper addresses the incorporation of high resolution topography, soils and vegetation information into the simulation of land surface processes in atmospheric circulation models (ACM). Recent work has concentrated on detailed representation of one-dimensional exchange processes, implicitly assuming surface homogeneity over the atmospheric grid cell. Two approaches that could be taken to incorporate heterogeneity are the integration of a surface model over distributed, discrete portions of the landscape, or over a distribution function of the model parameters. However, the computational burden and parameter-intensive nature of current land surface models in ACM limit the number of independent model runs and parameterizations that are feasible for operational purposes. Therefore, simplifications in the representation of the vertical exchange processes may be necessary to incorporate the effects of landscape variability and horizontal divergence of energy and water. The strategy is then to trade off the detail and rigor of point exchange calculations for the ability to repeat those calculations over extensive, complex terrain. It is clear the parameterization process for this approach must be automated such that large spatial databases collected from remotely sensed images, digital terrain models and digital maps can be efficiently summarized and transformed into the appropriate parameter sets. Ideally, the landscape should be partitioned into surface units that maximize between-unit variance while minimizing within-unit variance, although it is recognized that some level of surface heterogeneity will be retained at all scales. Therefore, the geographic data processing necessary to automate the distributed parameterization should be able to estimate or predict parameter distributional information within each surface unit.

  10. Baldovin-Stella stochastic volatility process and Wiener process mixtures

    NASA Astrophysics Data System (ADS)

    Peirano, P. P.; Challet, D.

    2012-08-01

    Starting from inhomogeneous time scaling and linear decorrelation between successive price returns, Baldovin and Stella recently proposed a powerful and consistent way to build a model describing the time evolution of a financial index. We first make it fully explicit by using Student distributions instead of power law-truncated Lévy distributions and show that the analytic tractability of the model extends to the larger class of symmetric generalized hyperbolic distributions and provide a full computation of their multivariate characteristic functions; more generally, we show that the stochastic processes arising in this framework are representable as mixtures of Wiener processes. The basic Baldovin and Stella model, while mimicking well volatility relaxation phenomena such as the Omori law, fails to reproduce other stylized facts such as the leverage effect or some time reversal asymmetries. We discuss how to modify the dynamics of this process in order to reproduce real data more accurately.
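    The representability of returns as mixtures of Gaussian/Wiener components can be illustrated with the Student case: drawing the variance from an inverse-gamma (equivalently, dividing a standard normal by the square root of a scaled chi-squared) reproduces a Student-t return exactly. A minimal check with illustrative degrees of freedom:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
nu, n = 4.0, 100_000

w = rng.chisquare(nu, size=n) / nu        # mixing variable for the variance
x = rng.standard_normal(n) / np.sqrt(w)   # conditionally Gaussian return

# Kolmogorov-Smirnov test against the Student-t with nu degrees of freedom
print(stats.kstest(x, "t", args=(nu,)))
```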

  11. Extended Poisson process modelling and analysis of grouped binary data.

    PubMed

    Faddy, Malcolm J; Smith, David M

    2012-05-01

    A simple extension of the Poisson process results in binomially distributed counts of events in a time interval. A further extension generalises this to probability distributions under- or over-dispersed relative to the binomial distribution. Substantial levels of under-dispersion are possible with this modelling, but only modest levels of over-dispersion - up to Poisson-like variation. Although simple analytical expressions for the moments of these probability distributions are not available, approximate expressions for the mean and variance are derived, and used to re-parameterise the models. The modelling is applied in the analysis of two published data sets, one showing under-dispersion and the other over-dispersion. More appropriate assessment of the precision of estimated parameters and reliable model checking diagnostics follow from this more general modelling of these data sets. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. A Decision Model for Supporting Task Allocation Processes in Global Software Development

    NASA Astrophysics Data System (ADS)

    Lamersdorf, Ansgar; Münch, Jürgen; Rombach, Dieter

    Today, software-intensive systems are increasingly being developed in a globally distributed way. However, besides its benefit, global development also bears a set of risks and problems. One critical factor for successful project management of distributed software development is the allocation of tasks to sites, as this is assumed to have a major influence on the benefits and risks. We introduce a model that aims at improving management processes in globally distributed projects by giving decision support for task allocation that systematically regards multiple criteria. The criteria and causal relationships were identified in a literature study and refined in a qualitative interview study. The model uses existing approaches from distributed systems and statistical modeling. The article gives an overview of the problem and related work, introduces the empirical and theoretical foundations of the model, and shows the use of the model in an example scenario.

  13. Two stochastic models useful in petroleum exploration

    NASA Technical Reports Server (NTRS)

    Kaufman, G. M.; Bradley, P. G.

    1972-01-01

    A model of the petroleum exploration process that empirically tests the hypothesis that, at an early stage in the exploration of a basin, the process behaves like sampling without replacement is proposed, along with a model of the spatial distribution of petroleum reservoirs that conforms to observed facts. In developing the model of discovery, the following topics are discussed: probabilistic proportionality, the likelihood function, and maximum likelihood estimation. In addition, the spatial model is described, which is defined as a stochastic process generating values of a sequence of random variables in a way that simulates the frequency distribution of the areal extent, geographic location, and shape of oil deposits.

  14. Count distribution for mixture of two exponentials as renewal process duration with applications

    NASA Astrophysics Data System (ADS)

    Low, Yeh Ching; Ong, Seng Huat

    2016-06-01

    A count distribution is presented by considering a renewal process where the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model over dispersion, a feature often found in observed count data. The computation of the probabilities and renewal function (expected number of renewals) are examined. Parameter estimation by the method of maximum likelihood is considered with applications of the count distribution to real frequency count data exhibiting over dispersion. It is shown that the mixture of exponentials count distribution fits over dispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
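    The overdispersion such a count distribution captures is easy to see by simulation: a renewal process with hyperexponential (two-component exponential mixture) durations yields counts whose variance exceeds the mean. A minimal sketch with illustrative mixture weights and rates:

```python
import numpy as np

rng = np.random.default_rng(10)
w, lam1, lam2, t_end = 0.3, 0.2, 2.0, 20.0   # weight, rates, observation window

def count(rng):
    """Renewals in (0, t_end] with mixture-of-two-exponentials durations."""
    t, n = 0.0, 0
    while True:
        lam = lam1 if rng.random() < w else lam2
        t += rng.exponential(1.0 / lam)
        if t > t_end:
            return n
        n += 1

counts = np.array([count(rng) for _ in range(20_000)])
print(counts.mean(), counts.var())   # variance > mean: overdispersion,
                                     # unlike a Poisson process
```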

  15. Characterizing and reducing equifinality by constraining a distributed catchment model with regional signatures, local observations, and process understanding

    NASA Astrophysics Data System (ADS)

    Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten

    2017-07-01

    Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.

  16. Exact solutions for network rewiring models

    NASA Astrophysics Data System (ADS)

    Evans, T. S.

    2007-03-01

    Evolving networks with a constant number of edges may be modelled using a rewiring process. These models are used to describe many real-world processes including the evolution of cultural artifacts such as family names, the evolution of gene variations, and the popularity of strategies in simple econophysics models such as the minority game. The model is closely related to Urn models used for glasses, quantum gravity and wealth distributions. The full mean field equation for the degree distribution is found and its exact solution and generating solution are given.

  17. A novel modeling approach to the mixing process in twin-screw extruders

    NASA Astrophysics Data System (ADS)

    Kennedy, Amedu Osaighe; Penlington, Roger; Busawon, Krishna; Morgan, Andy

    2014-05-01

    In this paper, a theoretical model for the mixing process in a self-wiping co-rotating twin-screw extruder, combining statistical techniques and mechanistic modelling, is proposed. The approach was to examine the mixing process in the local zones via the residence time distribution and the flow dynamics, from which predictive models of the mean residence time and mean time delay were determined. Increasing the feed rate at constant screw speed was found to narrow the residence time distribution curve, reduce the mean residence time and time delay, and increase the degree of fill. Increasing the screw speed at constant feed rate was found to narrow the residence time distribution curve and decrease the degree of fill in the extruder, thus increasing the time delay. An experimental investigation was also carried out to validate the modeling approach.

  18. Volcanic Ash Data Assimilation System for Atmospheric Transport Model

    NASA Astrophysics Data System (ADS)

    Ishii, K.; Shimbori, T.; Sato, E.; Tokumoto, T.; Hayashi, Y.; Hashimoto, A.

    2017-12-01

    The Japan Meteorological Agency (JMA) runs two operational volcanic ash forecasts: the Volcanic Ash Fall Forecast (VAFF) and the Volcanic Ash Advisory (VAA). In these operations, the forecasts are calculated by atmospheric transport models that include the advection process, the turbulent diffusion process, the gravitational fall process, and the (wet/dry) deposition process. The initial distribution of volcanic ash in the models is the most important, yet most uncertain, factor. In operations, the model of Suzuki (1983), with many empirical assumptions, is adopted for the initial distribution. This adversely affects the reconstruction of actual eruption plumes. We are developing a volcanic ash data assimilation system using weather radar and meteorological satellite observations, in order to improve the initial distribution for the atmospheric transport models. Our data assimilation system is based on the three-dimensional variational method (3D-Var). The analysis variables are ash concentration and size distribution parameters, which are mutually independent. The radar observations are expected to provide three-dimensional parameters such as ash concentration and parameters of the ash particle size distribution. The satellite observations, on the other hand, are anticipated to provide two-dimensional parameters of ash clouds such as mass loading, top height and particle effective radius. In this study, we estimate the thickness of ash clouds using the vertical wind shear from JMA numerical weather prediction and apply it to the volcanic ash data assimilation system.

  19. Modified allocation capacitated planning model in blood supply chain management

    NASA Astrophysics Data System (ADS)

    Mansur, A.; Vanany, I.; Arvitrida, N. I.

    2018-04-01

    Blood supply chain management (BSCM) is a complex management process that involves many cooperating stakeholders. BSCM involves four echelons: blood collection or procurement, production, inventory, and distribution. This research develops an optimization model for blood distribution planning. The efficiencies of decentralization and centralization policies in a blood distribution chain are compared by optimizing the amount of blood delivered from a blood center to a blood bank. The model is developed from the allocation problem of a capacitated planning model. In the first stage, transportation capacity and cost are considered to create an initial capacitated planning model. Then, inventory holding and shortage costs are added to the model. These additional inventory cost parameters make the model more realistic and accurate.
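
    The allocation structure described here can be sketched as a small linear program. The sketch below uses hypothetical demands, costs, and capacity, and scipy's linprog rather than whatever solver the authors used; each bank's demand mismatch is split into surplus and shortage variables so that holding and shortage costs enter the objective.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical single blood center serving three blood banks
        demand = np.array([120.0, 80.0, 60.0])    # units demanded per bank
        trans_cost = np.array([1.0, 2.5, 4.0])    # transport cost per unit
        hold, short = 0.5, 10.0                   # holding / shortage cost per unit
        capacity = 220.0                          # units available at the center

        n = len(demand)
        # Variables: shipped x_j, surplus u_j, shortage v_j (all >= 0)
        c = np.concatenate([trans_cost, np.full(n, hold), np.full(n, short)])
        # Balance per bank: x_j - u_j + v_j = d_j
        A_eq = np.hstack([np.eye(n), -np.eye(n), np.eye(n)])
        # Center capacity: sum_j x_j <= capacity
        A_ub = np.hstack([np.ones((1, n)), np.zeros((1, 2 * n))])

        res = linprog(c, A_ub=A_ub, b_ub=[capacity], A_eq=A_eq, b_eq=demand,
                      bounds=(0, None))
        print(res.x[:n], res.fun)   # shipments per bank and total cost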

  20. Mesocell study area snow distributions for the Cold Land Processes Experiment (CLPX)

    Treesearch

    Glen E. Liston; Christopher A. Hiemstra; Kelly Elder; Donald W. Cline

    2008-01-01

    The Cold Land Processes Experiment (CLPX) had a goal of describing snow-related features over a wide range of spatial and temporal scales. This required linking disparate snow tools and datasets into one coherent, integrated package. Simulating realistic high-resolution snow distributions and features requires a snow-evolution modeling system (SnowModel) that can...

  1. Method and Tool for Design Process Navigation and Automatic Generation of Simulation Models for Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Nakano, Masaru; Kubota, Fumiko; Inamori, Yutaka; Mitsuyuki, Keiji

    Manufacturing system designers should concentrate on designing and planning manufacturing systems instead of spending their effort on creating simulation models to verify the design. This paper proposes a method, and a supporting tool, that navigates designers through the engineering process and generates the simulation model automatically from the design results. The design agent also supports collaborative design projects among different companies or divisions with distributed engineering and distributed simulation techniques. The idea was implemented and applied to a factory planning process.

  2. Virtual Sensor Web Architecture

    NASA Astrophysics Data System (ADS)

    Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.

    2006-12-01

    NASA envisions the development of smart sensor webs: intelligent, integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation-processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include: (i) rich descriptions of sensors as services, based on semantic markup languages like OWL and SensorML; (ii) service-oriented workflow composition and repair for simple and ensemble models; (iii) event-driven workflow execution based on event-based and distributed workflow management mechanisms; and (iv) development of autonomous model-interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (the COSEC framework) that is being extended to create VSICS.

  3. Research and Design of the Three-tier Distributed Network Management System Based on COM / COM + and DNA

    NASA Astrophysics Data System (ADS)

    Liang, Likai; Bi, Yushen

    Considering the distributed network management system's demands for high distributivity, extensibility, and reusability, a framework model of a three-tier distributed network management system based on COM/COM+ and DNA is proposed, which adopts software component technology and the N-tier application software framework design idea. We also give a concrete design plan for each layer of this model. Finally, we discuss the internal running process of each layer in the distributed network management system's framework model.

  4. Effects of species biological traits and environmental heterogeneity on simulated tree species distribution shifts under climate change.

    PubMed

    Wang, Wen J; He, Hong S; Thompson, Frank R; Spetich, Martin A; Fraser, Jacob S

    2018-09-01

    Demographic processes (fecundity, dispersal, colonization, growth, and mortality) and their interactions with environmental changes are not well represented in current climate-distribution models (e.g., niche and biophysical process models) and constitute a large uncertainty in projections of future tree species distribution shifts. We investigate how species biological traits and environmental heterogeneity affect species distribution shifts. We used a species-specific, spatially explicit forest dynamics model, LANDIS PRO, which incorporates site-scale tree species demography and competition, landscape-scale dispersal and disturbances, and regional-scale abiotic controls, to simulate the distribution shifts of four representative tree species with distinct biological traits in the central hardwood forest region of the United States. Our results suggested that biological traits (e.g., dispersal capacity, maturation age) were important for determining tree species distribution shifts. Environmental heterogeneity, on average, reduced shift rates by 8% compared to perfect environmental conditions. The average distribution shift rates ranged from 24 to 200 m per year under climate change scenarios, implying that many tree species may not be able to keep up with climate change because of limited dispersal capacity, long generation times, and environmental heterogeneity. We suggest that climate-distribution models should include species demographic processes (e.g., fecundity, dispersal, colonization), biological traits (e.g., dispersal capacity, maturation age), and environmental heterogeneity (e.g., habitat fragmentation) to improve future predictions of species distribution shifts in response to changing climates. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. A road map for integrating eco-evolutionary processes into biodiversity models.

    PubMed

    Thuiller, Wilfried; Münkemüller, Tamara; Lavergne, Sébastien; Mouillot, David; Mouquet, Nicolas; Schiffers, Katja; Gravel, Dominique

    2013-05-01

    The demand for projections of the future distribution of biodiversity has triggered an upsurge in modelling at the crossroads between ecology and evolution. Despite the enthusiasm around these so-called biodiversity models, most approaches are still criticised for not integrating key processes known to shape species ranges and community structure. Developing an integrative modelling framework for biodiversity distribution promises to improve the reliability of predictions and to give a better understanding of the eco-evolutionary dynamics of species and communities under changing environments. In this article, we briefly review some eco-evolutionary processes and interplays among them, which are essential to provide reliable projections of species distributions and community structure. We identify gaps in theory, quantitative knowledge and data availability hampering the development of an integrated modelling framework. We argue that model development relying on a strong theoretical foundation is essential to inspire new models, manage complexity and maintain tractability. We support our argument with an example of a novel integrated model for species distribution modelling, derived from metapopulation theory, which accounts for abiotic constraints, dispersal, biotic interactions and evolution under changing environmental conditions. We hope such a perspective will motivate exciting and novel research, and challenge others to improve on our proposed approach. © 2013 John Wiley & Sons Ltd/CNRS.

  6. A two-stage predictive model to simultaneous control of trihalomethanes in water treatment plants and distribution systems: adaptability to treatment processes.

    PubMed

    Domínguez-Tello, Antonio; Arias-Borrego, Ana; García-Barrera, Tamara; Gómez-Ariza, José Luis

    2017-10-01

    Trihalomethanes (TTHMs) and other disinfection by-products (DBPs) are formed in drinking water by the reaction of chlorine with organic precursors contained in the source water, in two consecutive and linked stages: the first at the treatment plant, and the second along the distribution system (DS), by reaction of residual chlorine with organic precursors not removed. Following this approach, this study aimed at developing a two-stage empirical model for predicting the formation of TTHMs in the water treatment plant and subsequently their evolution along the water distribution system (WDS). The aim of the two-stage model was to improve the predictive capability for a wide range of water treatment and distribution system scenarios. The two-stage model was developed using multiple regression analysis from a database (January 2007 to July 2012) covering three different treatment processes (conventional and advanced) in the water supply system of the Aljaraque area (southwest Spain). The new model was then validated using a recent database from the same water supply system (January 2011 to May 2015). The validation results indicated no significant difference between predicted and observed values of TTHM (R² = 0.874, analytical variance <17%). The new model was applied to three different supply systems with different treatment processes and different characteristics. Acceptable predictions were obtained in the three distribution systems studied, proving the adaptability of the new model to the boundary conditions. Finally, the predictive capability of the new model was compared with 17 other models selected from the literature, showing satisfactory prediction results and excellent adaptability to treatment processes.
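
    A schematic of the two-stage regression structure, on synthetic data (the predictor set and coefficients are assumptions for illustration, not the authors' actual variables): stage 1 fits TTHM formed at the plant; stage 2 models evolution along the distribution system with the stage-1 prediction as an input.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        # Synthetic predictors (assumed, for illustration only)
        toc = rng.uniform(1, 5, n)          # total organic carbon, mg/L
        dose = rng.uniform(0.5, 3, n)       # chlorine dose, mg/L
        temp = rng.uniform(5, 30, n)        # water temperature, deg C
        res_time = rng.uniform(1, 72, n)    # residence time in the DS, h
        res_cl = rng.uniform(0.1, 1.5, n)   # residual chlorine, mg/L

        tthm_plant = 5 + 8 * toc + 6 * dose + 0.9 * temp + rng.normal(0, 4, n)
        tthm_ds = tthm_plant + 0.4 * res_time * res_cl + rng.normal(0, 4, n)

        # Stage 1: TTHM formed in the treatment plant
        X1 = np.column_stack([np.ones(n), toc, dose, temp])
        b1, *_ = np.linalg.lstsq(X1, tthm_plant, rcond=None)
        # Stage 2: evolution along the distribution system, driven by the
        # stage-1 prediction plus reaction of residual chlorine over time
        X2 = np.column_stack([np.ones(n), X1 @ b1, res_time * res_cl])
        b2, *_ = np.linalg.lstsq(X2, tthm_ds, rcond=None)
        print(b1.round(2), b2.round(2))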

  7. Group-oriented coordination models for distributed client-server computing

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.; Hughes, Craig S.

    1994-01-01

    This paper describes group-oriented control models for distributed client-server interactions. These models transparently coordinate requests for services that involve multiple servers, such as queries across distributed databases. Specific capabilities include: decomposing and replicating client requests; dispatching request subtasks or copies to independent, networked servers; and combining server results into a single response for the client. The control models were implemented by combining request broker and process group technologies with an object-oriented communication middleware tool. The models are illustrated in the context of a distributed operations support application for space-based systems.

  8. Thermal analysis of void cavity for heat pipe receiver under microgravity

    NASA Astrophysics Data System (ADS)

    Gui, Xiaohong; Song, Xiange; Nie, Baisheng

    2017-04-01

    Based on a theoretical analysis of the PCM (phase change material) solidification process, an improved model in which the void cavity distribution tends toward the high-temperature region is established. Numerical results are compared with NASA (National Aeronautics and Space Administration) results. The analysis shows that the outer wall temperature, the melting ratio of the PCM, and the temperature gradient of the PCM canister differ greatly under different void cavity distributions. The form of the void distribution has a great effect on the phase change process. Based on simulation results under the improved void cavity distribution model, the phase change heat transfer process in the thermal storage container is analyzed. The main goal of the improved PCM canister design is to reduce the concentrated distribution of void cavities by adding foam metal into the phase change material.

  9. Internal Catchment Process Simulation in a Snow-Dominated Basin: Performance Evaluation with Spatiotemporally Variable Runoff Generation and Groundwater Dynamics

    NASA Astrophysics Data System (ADS)

    Kuras, P. K.; Weiler, M.; Alila, Y.; Spittlehouse, D.; Winkler, R.

    2006-12-01

    Hydrologic models have been increasingly used in forest hydrology to overcome the limitations of paired watershed experiments, where vegetative recovery and natural variability obscure the inferences and conclusions that can be drawn from such studies. Models, however, are also plagued by uncertainty stemming from a limited understanding of hydrological processes in forested catchments, and parameter equifinality is a common concern. This has created the necessity to improve our understanding of how hydrological systems work, through the development of hydrological measures, analyses and models that address the question: are we getting the right answers for the right reasons? Hence, physically based, spatially distributed hydrologic models should be validated with high-quality experimental data describing multiple concurrent internal catchment processes under a range of hydrologic regimes. The distributed hydrology soil vegetation model (DHSVM) frequently used in forest management applications is an example of a process-based model used to address the aforementioned circumstances, and this study takes a novel approach by collectively examining the ability of a pre-calibrated model application to realistically simulate outlet flows along with the spatiotemporal variation of internal catchment processes, including continuous groundwater dynamics at 9 locations, stream and road network flow at 67 locations for six individual days throughout the freshet, and pre-melt-season snow distribution. Model efficiency was improved over prior evaluations due to continuous efforts to improve the quality of meteorological data in the watershed. Road and stream network flows were very well simulated for a range of hydrological conditions, and the spatial distribution of the pre-melt-season snowpack was in general agreement with observed values. The model was effective in simulating the spatial variability of subsurface flow generation, except at locations where strong stream-groundwater interactions existed, as the model is not capable of simulating such processes and subsurface flows always drain to the stream network. Overall, the model has proven quite capable of realistically simulating internal catchment processes in the watershed, which creates more confidence in future model applications exploring the effects of various forest management scenarios on the watershed's hydrological processes.

  10. A THREE-DIMENSIONAL MODEL ASSESSMENT OF THE GLOBAL DISTRIBUTION OF HEXACHLOROBENZENE

    EPA Science Inventory

    The distributions of persistent organic pollutants (POPs) in the global environment have been studied typically with box/fugacity models with simplified treatments of atmospheric transport processes [1]. Such models are incapable of simulating the complex three-dimensional mechanis...

  11. Temporal patterns of mental model convergence: implications for distributed teams interacting in electronic collaboration spaces.

    PubMed

    McComb, Sara; Kennedy, Deanna; Perryman, Rebecca; Warner, Norman; Letsky, Michael

    2010-04-01

    Our objective is to capture temporal patterns in mental model convergence processes and differences in these patterns between distributed teams using an electronic collaboration space and face-to-face teams with no interface. Distributed teams, as sociotechnical systems, collaborate via technology to work on their task. The way in which they process information to inform their mental models may be examined via team communication and may unfold differently than it does in face-to-face teams. We conducted our analysis on 32 three-member teams working on a planning task. Half of the teams worked as distributed teams in an electronic collaboration space, and the other half worked face-to-face without an interface. Using event history analysis, we found temporal interdependencies among the initial convergence points of the multiple mental models we examined. Furthermore, the timing of mental model convergence and the onset of task work discussions were related to team performance. Differences existed in the temporal patterns of convergence and task work discussions across conditions. Distributed teams interacting via an electronic interface and face-to-face teams with no interface converged on multiple mental models, but their communication patterns differed. In particular, distributed teams with an electronic interface required less overall communication, converged on all mental models later in their life cycles, and exhibited more linear cognitive processes than did face-to-face teams interacting verbally. Managers need unique strategies for facilitating communication and mental model convergence depending on teams' degrees of collocation and access to an interface, which in turn will enhance team performance.

  12. Applying simulation model to uniform field space charge distribution measurements by the PEA method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y.; Salama, M.M.A.

    1996-12-31

    Signals measured under uniform fields by the pulsed electroacoustic (PEA) method have been processed by a deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has recently been proposed in which the deconvolution is eliminated. However, the surface charge cannot be represented well by this method, because the surface charge has a bandwidth extending from zero to infinity. The bandwidth of the charge distribution must be much narrower than the bandwidth of the PEA system transfer function in order to apply the direct method properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and the resolution of the obtained space charge distributions decrease. To overcome this difficulty, a simulation model is therefore proposed. This paper shows the authors' attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Due to the page limit of the paper, the charge distribution generated by the simulation model is compared to that obtained by the direct method for a set of simulated signals.

  13. Poisson-process generalization for the trading waiting-time distribution in a double-auction mechanism

    NASA Astrophysics Data System (ADS)

    Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico

    2005-05-01

    In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. The main objective of the paper is to generalize the order waiting-time process in order to properly model such empirical evidence. The empirical study is performed on the best-bid and best-ask data of 7 U.S. financial markets, for 30 stock time series. In particular, the statistical properties of trading waiting times have been analyzed, and the quality of fits is evaluated by suitable statistical tests, i.e., by comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of such results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process. Based on empirical trading estimation, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered a generalization of a Poisson process. Moreover, the renewal process can approximate real data, and its implementation in the artificial stock market can reproduce the trading activity in a realistic way.
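
    The key modelling step, fitting a mixture of exponentials to inter-order waiting times, can be sketched with a small EM routine (a generic two-component implementation on synthetic data, not the authors' estimation procedure):

        import numpy as np

        def fit_exp_mixture(t, n_iter=200):
            """EM for a two-component exponential mixture of waiting times."""
            w = np.array([0.5, 0.5])
            lam = np.array([2.0, 0.5]) / t.mean()      # crude initial rates
            for _ in range(n_iter):
                # E-step: responsibility of each component for each waiting time
                dens = w * lam * np.exp(-np.outer(t, lam))
                resp = dens / dens.sum(axis=1, keepdims=True)
                # M-step: reweight components and update their rates
                w = resp.mean(axis=0)
                lam = resp.sum(axis=0) / (resp * t[:, None]).sum(axis=0)
            return w, lam

        rng = np.random.default_rng(1)
        fast = rng.random(5000) < 0.6
        t = np.where(fast, rng.exponential(0.2, 5000), rng.exponential(2.0, 5000))
        print(fit_exp_mixture(t))   # recovers weights ~ (0.6, 0.4), rates ~ (5, 0.5)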

  14. On the generation of log-Lévy distributions and extreme randomness

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2011-10-01

    The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution, they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of deterministic underlying setting, and the latter in the case of stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot’s extreme randomness.
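
    The deterministic-setting half of this argument is easy to check numerically: multiplying many small i.i.d. growth factors makes the log of the final size asymptotically normal, hence the size log-normal. The snippet below is an informal check on synthetic data; producing a log-Lévy limit would instead require heavy-tailed log-increments outside the classical CLT domain.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        # X_T = X_0 * prod_t (1 + g_t): by the CLT, log X_T is asymptotically
        # normal, so X_T is asymptotically log-normal.
        g = rng.uniform(-0.05, 0.06, size=(20_000, 200))
        x_final = np.prod(1.0 + g, axis=1)
        log_x = np.log(x_final)
        print(stats.kstest(log_x, stats.norm(log_x.mean(), log_x.std()).cdf))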

  15. An object-oriented software approach for a distributed human tracking motion system

    NASA Astrophysics Data System (ADS)

    Micucci, Daniela L.

    2003-06-01

    Tracking is a composite job involving the co-operation of autonomous activities which exploit a complex information model and rely on a distributed architecture. Both information and activities must be classified and related in several dimensions: abstraction levels (what is modelled and how information is processed); topology (where the modelled entities are); time (when entities exist); strategy (why something happens); responsibilities (who is in charge of processing the information). A proper object-oriented analysis and design approach leads to a modular architecture where information about conceptual entities is modelled at each abstraction level via classes and intra-level associations, whereas inter-level associations between classes model the abstraction process. Both information and computation are partitioned according to level-specific topological models. They are also placed in a temporal framework modelled by suitable abstractions. Domain-specific strategies control the execution of the computations. Computational components perform both intra-level processing and inter-level information conversion. The paper overviews the phases of the analysis and design process, presents major concepts at each abstraction level, and shows how the resulting design turns into a modular, flexible and adaptive architecture. Finally, the paper sketches how the conceptual architecture can be deployed into a concrete distributed architecture by relying on an experimental framework.

  16. Ballistic-Failure Mechanisms in Gas Metal Arc Welds of Mil A46100 Armor-Grade Steel: A Computational Investigation

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Snipes, J. S.; Galgalikar, R.; Ramaswami, S.; Yavari, R.; Yen, C.-F.; Cheeseman, B. A.

    2014-09-01

    In our recent work, a multi-physics computational model for the conventional gas metal arc welding (GMAW) joining process was introduced. The model is of a modular type and comprises five modules, each designed to handle a specific aspect of the GMAW process, i.e.: (i) electro-dynamics of the welding-gun; (ii) radiation-/convection-controlled heat transfer from the electric-arc to the workpiece and mass transfer from the filler-metal consumable electrode to the weld; (iii) prediction of the temporal evolution and the spatial distribution of thermal and mechanical fields within the weld region during the GMAW joining process; (iv) the resulting temporal evolution and spatial distribution of the material microstructure throughout the weld region; and (v) spatial distribution of the as-welded material mechanical properties. In the present work, the GMAW process model has been upgraded with respect to its predictive capabilities regarding the spatial distribution of the mechanical properties controlling the ballistic-limit (i.e., penetration-resistance) of the weld. The model is upgraded through the introduction of the sixth module in the present work in recognition of the fact that in thick steel GMAW weldments, the overall ballistic performance of the armor may become controlled by the (often inferior) ballistic limits of its weld (fusion and heat-affected) zones. To demonstrate the utility of the upgraded GMAW process model, it is next applied to the case of butt-welding of a prototypical high-hardness armor-grade martensitic steel, MIL A46100. The model predictions concerning the spatial distribution of the material microstructure and ballistic-limit-controlling mechanical properties within the MIL A46100 butt-weld are found to be consistent with prior observations and general expectations.

  17. Statistical methods for investigating quiescence and other temporal seismicity patterns

    USGS Publications Warehouse

    Matthews, M.V.; Reasenberg, P.A.

    1988-01-01

    We propose a statistical model and a technique for objective recognition of one of the most commonly cited seismicity patterns: microearthquake quiescence. We use a Poisson process model for seismicity and define a process with quiescence as one with a particular type of piecewise constant intensity function. From this model, we derive a statistic for testing stationarity against a 'quiescence' alternative. The large-sample null distribution of this statistic is approximated from simulated distributions of appropriate functionals applied to Brownian bridge processes. We point out the restrictiveness of the particular model we propose and of the quiescence idea in general. The fact that there are many point processes which have neither constant nor quiescent rate functions underscores the need to test for and describe nonuniformity thoroughly. We advocate the use of the quiescence test in conjunction with various other tests for nonuniformity and with graphical methods such as density estimation. Ideally, these methods may promote accurate description of temporal seismicity distributions and useful characterizations of interesting patterns. © 1988 Birkhäuser Verlag.
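
    The flavour of such a test can be sketched as follows (a simplified one-sided Kolmogorov-type statistic with a Monte Carlo null, not the authors' exact functional): under a stationary Poisson process the event times are i.i.d. uniform, so an early surplus and late deficit of events pushes the empirical CDF above the uniform CDF.

        import numpy as np

        def quiescence_statistic(event_times, t_end):
            """sqrt(n) * max(empirical CDF - uniform CDF) of normalized event
            times; large values indicate a late deficit (quiescence)."""
            t = np.sort(event_times) / t_end
            n = len(t)
            ecdf = np.arange(1, n + 1) / n
            return np.sqrt(n) * np.max(ecdf - t)

        rng = np.random.default_rng(3)
        t_end = 100.0
        # Catalogue whose rate drops sharply after t = 70 (quiescence)
        quiescent = np.concatenate([rng.uniform(0, 70, 260),
                                    rng.uniform(70, 100, 40)])
        s_obs = quiescence_statistic(quiescent, t_end)
        # Monte Carlo null distribution under stationarity -> p-value
        null = np.array([quiescence_statistic(rng.uniform(0, t_end, 300), t_end)
                         for _ in range(2000)])
        print(s_obs, (null >= s_obs).mean())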

  18. Mathematical model of whole-process calculation for bottom-blowing copper smelting

    NASA Astrophysics Data System (ADS)

    Li, Ming-zhou; Zhou, Jie-min; Tong, Chang-ren; Zhang, Wen-hai; Li, He-song

    2017-11-01

    The distribution law of materials in smelting products is key to cost accounting and contaminant control. However, the distribution law is difficult to determine quickly and accurately by mere sampling and analysis. Mathematical models for material and heat balance in bottom-blowing smelting, converting, anode furnace refining, and electrolytic refining were established based on the principles of material (element) conservation, energy conservation, and control index constraints in copper bottom-blowing smelting. A simulation of the entire bottom-blowing copper smelting process was built on the self-developed MetCal software platform, and a whole-process simulation for an enterprise in China was then conducted. Results indicated that the quantity and composition of unknown materials, as well as heat balance information, can be quickly calculated using the model. Comparison with production data revealed that the model can essentially reflect the distribution law of the materials in bottom-blowing copper smelting. This finding provides theoretical guidance for mastering the performance of the entire process.

  19. A Bayesian alternative for multi-objective ecohydrological model specification

    NASA Astrophysics Data System (ADS)

    Tang, Yating; Marshall, Lucy; Sharma, Ashish; Ajami, Hoori

    2018-01-01

    Recent studies have identified the importance of vegetation processes in terrestrial hydrologic systems. Process-based ecohydrological models combine hydrological, physical, biochemical and ecological processes of the catchments, and as such are generally more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov chain Monte Carlo (MCMC) techniques. The Bayesian approach offers an appealing alternative to traditional multi-objective hydrologic model calibrations by defining proper prior distributions that can be considered analogous to the ad-hoc weighting often prescribed in multi-objective calibration. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological modeling framework based on a traditional Pareto-based model calibration technique. In our study, a Pareto-based multi-objective optimization and a formal Bayesian framework are implemented in a conceptual ecohydrological model that combines a hydrological model (HYMOD) and a modified Bucket Grassland Model (BGM). Simulations focused on one objective (streamflow/LAI) and multiple objectives (streamflow and LAI) with different emphasis defined via the prior distribution of the model error parameters. Results show more reliable outputs for both predicted streamflow and LAI using Bayesian multi-objective calibration with specified prior distributions for error parameters based on results from the Pareto front in the ecohydrological modeling. The methodology implemented here provides insight into the usefulness of multiobjective Bayesian calibration for ecohydrologic systems and the importance of appropriate prior distributions in such approaches.

  20. Parallel Distributed Processing and Lexical-Semantic Effects in Visual Word Recognition: Are a Few Stages Necessary?

    ERIC Educational Resources Information Center

    Borowsky, Ron; Besner, Derek

    2006-01-01

    D. C. Plaut and J. R. Booth presented a parallel distributed processing model that purports to simulate human lexical decision performance. This model (and D. C. Plaut, 1995) offers a single mechanism account of the pattern of factor effects on reaction time (RT) between semantic priming, word frequency, and stimulus quality without requiring a…

  1. Coordinating complex problem-solving among distributed intelligent agents

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1992-01-01

    A process-oriented control model is described for distributed problem solving. The model coordinates the transfer and manipulation of information across independent networked applications, both intelligent and conventional. The model was implemented using SOCIAL, a set of object-oriented tools for distributed computing. Complex sequences of distributed tasks are specified in terms of high-level scripts. Scripts are executed by SOCIAL objects called Manager Agents, which realize an intelligent coordination model that routes individual tasks to suitable server applications across the network. These tools are illustrated in a prototype distributed system for decision support of ground operations for NASA's Space Shuttle fleet.

  2. Statistical description of non-Gaussian samples in the F2 layer of the ionosphere during heliogeophysical disturbances

    NASA Astrophysics Data System (ADS)

    Sergeenko, N. P.

    2017-11-01

    An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is solved in this paper. Time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances: at sufficiently small probability levels there are arbitrarily large deviations from the normal-process model. Therefore, an attempt is made to describe the statistical samples {δfoF2} with a Poisson-based model. For the studied samples, an exponential characteristic function is selected under the assumption that the time series are a superposition of deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic, excessive-asymmetric probability density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to the Kolmogorov criterion, the probabilities of coincidence of the a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. The analysis supports the applicability of a model based on a Poisson random process for the statistical description of the variations {δfoF2}, and for probabilistic estimates of their range, during heliogeophysical disturbances.
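
    A sketch of the descriptive part of such an analysis, with a synthetic heavy-tailed sample standing in for a disturbed-period {δfoF2} series: compute the first four invariants and test the Gaussian hypothesis.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        # Hypothetical disturbed-period sample: heavy-tailed relative deviations
        delta_fof2 = 0.08 * rng.standard_t(df=4, size=2000)

        mean, var = delta_fof2.mean(), delta_fof2.var(ddof=1)
        skew = stats.skew(delta_fof2)
        kurt = stats.kurtosis(delta_fof2)    # excess kurtosis, 0 for a Gaussian
        print(f"mean={mean:.4f} var={var:.5f} skew={skew:.2f} ex.kurtosis={kurt:.2f}")
        # Kolmogorov-type comparison with a fitted normal law
        print(stats.kstest(delta_fof2, stats.norm(mean, np.sqrt(var)).cdf))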

  3. Application of simulation models for the optimization of business processes

    NASA Astrophysics Data System (ADS)

    Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří

    2016-06-01

    The paper deals with the application of modeling and simulation tools to the optimization of business processes, in particular the optimization of signal flow in a security company. Simul8, a discrete-event simulation package that enables the creation of visual models of production and distribution processes, was selected as the modeling tool.

  4. Process-based modeling of species' responses to climate change - a proof of concept using western North American trees

    NASA Astrophysics Data System (ADS)

    Evans, M. E.; Merow, C.; Record, S.; Menlove, J.; Gray, A.; Cundiff, J.; McMahon, S.; Enquist, B. J.

    2013-12-01

    Current attempts to forecast how species' distributions will change in response to climate change suffer from a fundamental trade-off between modeling many species superficially and modeling few species in detail (i.e., between correlative and mechanistic models). The goals of this talk are two-fold: first, we present a Bayesian multilevel modeling framework, dynamic range modeling (DRM), for building process-based forecasts of many species' distributions at a time, designed to address the trade-off between detail and number of distribution forecasts. In contrast to 'species distribution modeling' or 'niche modeling', which uses only species' occurrence data and environmental data, DRMs draw upon demographic data, abundance data, trait data, occurrence data, and GIS layers of climate in a single framework to account for two processes known to influence range dynamics - demography and dispersal. The vision is to use extensive databases on plant demography, distributions, and traits - in the Botanical Information and Ecology Network, the Forest Inventory and Analysis database (FIA), and the International Tree Ring Data Bank - to develop DRMs for North American trees. Second, we present preliminary results from building the core submodel of a DRM - an integral projection model (IPM) - for a sample of dominant tree species in western North America. IPMs are used to infer demographic niches - i.e., the set of environmental conditions under which population growth rate is positive - and to project population dynamics through time. Based on >550,000 data points derived from FIA for nine tree species in western North America, we show IPM-based models of their current and future distributions, and discuss how IPMs can be used to forecast future forest productivity and mortality patterns and to inform efforts at assisted migration.

  5. Numerical simulation of the laser welding process for the prediction of temperature distribution on welded aluminium aircraft components

    NASA Astrophysics Data System (ADS)

    Tsirkas, S. A.

    2018-03-01

    The present investigation is focused on modelling the temperature field in aluminium aircraft components welded by a CO2 laser. A three-dimensional finite element model has been developed to simulate the laser welding process and predict the temperature distribution in T-joint laser-welded plates with fillet material. The simulation of the laser beam welding process was performed using a nonlinear heat transfer analysis based on a keyhole formation model. The model employs the element "birth and death" technique in order to simulate the weld fillet. Various phenomena associated with welding, such as temperature-dependent material properties and heat losses through convection and radiation, were accounted for in the model. The materials considered were the 6056-T78 and 6013-T4 aluminium alloys, commonly used for aircraft components. The temperature distribution during the laser welding process has been calculated numerically and validated by experimental measurements at different locations on the welded structure. The numerical results are in good agreement with the experimental measurements.

  6. Understanding interannual variability in the distribution of, and transport processes affecting, the early life stages of Todarodes pacificus using behavioral-hydrodynamic modeling approaches

    NASA Astrophysics Data System (ADS)

    Kim, Jung Jin; Stockhausen, William; Kim, Suam; Cho, Yang-Ki; Seo, Gwang-Ho; Lee, Joon-Soo

    2015-11-01

    To understand interannual variability in the distribution of the early life stages of Todarodes pacificus summer spawning population, and to identify the key transport processes influencing this variability, we used a coupled bio-physical model that combines an individual-based model (IBM) incorporating ontogenetic vertical migration for paralarval behavior and temperature-dependent survival process with a ROMS oceanographic model. Using the distribution of paralarvae observed in the northern East China Sea (ECS) during several field cruises as an end point, the spawning ground for the summer-spawning population was estimated to extend from southeast Jeju Island to the central ECS near 29°N by running the model backwards in time. Running the model forward, interannual variability in the distribution of paralarvae predicted by the model was consistent with that observed in several field surveys; surviving individuals in the northern ECS were substantially more abundant in late July 2006 than in 2007, in agreement with observed paralarval distributions. The total number of surviving individuals at 60 days after release based on the simulation throughout summer spawning period (June-August) was 20,329 for 2006, compared with 13,816 for 2007. The surviving individuals were mainly distributed in the East/Japan Sea (EJS), corresponding to a pathway following the nearshore branch of the Tsushima Warm Current flowing along the Japanese coast during both years. In contrast, the abundance of surviving individuals was extremely low in 2007 compared to 2006 on the Pacific side of Japan. Interannual variability in transport and survival processes made a substantial impact on not only the abundance of surviving paralarvae, but also on the flux of paralarvae to adjacent waters. Our simulation results for between-year variation in paralarval abundance coincide with recruitment (year n + 1) variability of T. pacificus in the field. The agreement between the simulation and field data indicates our model may be useful for predicting the recruitment of T. pacificus.

  7. Two-part models with stochastic processes for modelling longitudinal semicontinuous data: Computationally efficient inference and modelling the overall marginal mean.

    PubMed

    Yiu, Sean; Tom, Brian Dm

    2017-01-01

    Several researchers have described two-part models with patient-specific stochastic processes for analysing longitudinal semicontinuous data. In theory, such models can offer greater flexibility than the standard two-part model with patient-specific random effects. However, in practice, the high dimensional integrations involved in the marginal likelihood (i.e. integrated over the stochastic processes) significantly complicates model fitting. Thus, non-standard computationally intensive procedures based on simulating the marginal likelihood have so far only been proposed. In this paper, we describe an efficient method of implementation by demonstrating how the high dimensional integrations involved in the marginal likelihood can be computed efficiently. Specifically, by using a property of the multivariate normal distribution and the standard marginal cumulative distribution function identity, we transform the marginal likelihood so that the high dimensional integrations are contained in the cumulative distribution function of a multivariate normal distribution, which can then be efficiently evaluated. Hence, maximum likelihood estimation can be used to obtain parameter estimates and asymptotic standard errors (from the observed information matrix) of model parameters. We describe our proposed efficient implementation procedure for the standard two-part model parameterisation and when it is of interest to directly model the overall marginal mean. The methodology is applied on a psoriatic arthritis data set concerning functional disability.
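
    The computational device at the heart of this approach can be sketched as follows (hypothetical visit times and an assumed OU-type covariance, not the paper's actual model): the high-dimensional integral over the latent Gaussian process reduces to a single multivariate normal CDF evaluation.

        import numpy as np
        from scipy.stats import multivariate_normal

        times = np.array([0.0, 0.5, 1.3, 2.0])        # hypothetical visit times
        # Assumed OU-type covariance of the latent patient-specific process
        cov = np.exp(-np.abs(times[:, None] - times[None, :]))
        upper = np.array([0.2, -0.1, 0.4, 0.0])       # hypothetical integration limits

        # P(W(t1) <= u1, ..., W(t4) <= u4): one MVN CDF call replaces a
        # four-dimensional numerical integration
        print(multivariate_normal(mean=np.zeros(4), cov=cov).cdf(upper))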

  8. The value of oxygen-isotope data and multiple discharge records in calibrating a fully-distributed, physically-based rainfall-runoff model (CRUM3) to improve predictive capability

    NASA Astrophysics Data System (ADS)

    Neill, Aaron; Reaney, Sim

    2015-04-01

    Fully-distributed, physically-based rainfall-runoff models attempt to capture some of the complexity of the runoff processes that operate within a catchment, and have been used to address a variety of issues including water quality and the effect of climate change on flood frequency. Two key issues are prevalent, however, which call into question the predictive capability of such models. The first is parameter equifinality, which can be responsible for large amounts of uncertainty. The second is whether such models make the right predictions for the right reasons - are the processes operating within a catchment correctly represented, or do the predictive abilities of these models result only from the calibration process? The use of additional data sources, such as environmental tracers, has been shown to help address both of these issues, by allowing multi-criteria model calibration to be undertaken, and by permitting a greater understanding of the processes operating in a catchment and hence a more thorough evaluation of how well catchment processes are represented in a model. Using discharge and oxygen-18 data sets, the ability of the fully-distributed, physically-based CRUM3 model to represent the runoff processes in three sub-catchments in Cumbria, NW England has been evaluated. These catchments (Morland, Dacre and Pow) are part of the River Eden demonstration test catchment project. The oxygen-18 data set was first used to derive transit-time distributions and mean residence times of water for each of the catchments, to gain an integrated overview of the types of processes that were operating. A generalised likelihood uncertainty estimation procedure was then used to calibrate the CRUM3 model for each catchment based on a single discharge data set from each catchment. Transit-time distributions and mean residence times of water obtained from the model using the top 100 behavioural parameter sets for each catchment were then compared to those derived from the oxygen-18 data to see how well the model captured catchment dynamics. The value of incorporating the oxygen-18 data set, as well as discharge data sets from multiple as opposed to single gauging stations in each catchment, into the calibration process to improve the predictive capability of the model was then investigated. This was achieved by assessing how much the identifiability of the model parameters and the ability of the model to represent the runoff processes operating in each catchment improved with the inclusion of the additional data sets, relative to the likely costs of obtaining those data sets.
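
    One standard way to derive a mean residence time from oxygen-18 data is the seasonal sine-wave method under an assumed exponential transit-time distribution (an assumption made here for illustration; the abstract does not state which TTD model was used):

        import numpy as np

        def mean_residence_time(amp_in, amp_out, period_days=365.25):
            """Sine-wave method with an exponential transit-time distribution:
            tau = sqrt((A_in/A_out)**2 - 1) / omega, omega = 2*pi/period."""
            omega = 2.0 * np.pi / period_days
            return np.sqrt((amp_in / amp_out) ** 2 - 1.0) / omega

        # Hypothetical seasonal delta-18O amplitudes (permil) in precipitation
        # and in streamflow: stronger damping implies longer residence times
        print(mean_residence_time(amp_in=3.0, amp_out=0.5), "days")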

  9. Contingency theoretic methodology for agent-based web-oriented manufacturing systems

    NASA Astrophysics Data System (ADS)

    Durrett, John R.; Burnell, Lisa J.; Priest, John W.

    2000-12-01

    The development of distributed, agent-based, web-oriented, N-tier information systems (IS) must be supported by a design methodology capable of responding to the convergence of shifts in business process design, organizational structure, and computing and telecommunications infrastructures. We introduce a contingency-theoretic model for the use of open, ubiquitous software infrastructure in the design of flexible organizational IS. Our basic premise is that developers should change the way they view the software design process: from solving a problem to dynamically creating teams of software components. We postulate that developing effective, efficient, flexible, component-based distributed software requires reconceptualizing the current development model. The basic concepts of distributed software design are merged with the environment-causes-structure relationship from contingency theory; the relationship between task uncertainty and organizational information processing from information processing theory; and the concept of inter-process dependencies from coordination theory. Software processes are considered as employees, groups of processes as software teams, and distributed systems as software organizations. Design techniques already used in the design of flexible business processes and well researched in the organizational sciences are presented. Guidelines that can be utilized in the creation of component-based distributed software are discussed.

  10. Invariance in the recurrence of large returns and the validation of models of price dynamics

    NASA Astrophysics Data System (ADS)

    Chang, Lo-Bin; Geman, Stuart; Hsieh, Fushing; Hwang, Chii-Ruey

    2013-08-01

    Starting from a robust, nonparametric definition of large returns (“excursions”), we study the statistics of their occurrences, focusing on the recurrence process. The empirical waiting-time distribution between excursions is remarkably invariant to year, stock, and scale (return interval). This invariance is related to self-similarity of the marginal distributions of returns, but the excursion waiting-time distribution is a function of the entire return process and not just its univariate probabilities. Generalized autoregressive conditional heteroskedasticity (GARCH) models, market-time transformations based on volume or trades, and generalized (Lévy) random-walk models all fail to fit the statistical structure of excursions.
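
    The excursion construction itself is simple to state in code (a generic sketch with a synthetic heavy-tailed return series; the 95% threshold is an arbitrary choice, not the paper's):

        import numpy as np

        def excursion_waiting_times(returns, quantile=0.95):
            """Waiting times (in observation units) between successive
            'excursions', i.e. returns whose magnitude exceeds a high
            empirical quantile."""
            threshold = np.quantile(np.abs(returns), quantile)
            idx = np.flatnonzero(np.abs(returns) > threshold)
            return np.diff(idx)

        rng = np.random.default_rng(11)
        returns = 0.01 * rng.standard_t(df=3, size=100_000)  # synthetic returns
        w = excursion_waiting_times(returns)
        # For an i.i.d. series the waiting times are geometric; clustering of
        # large returns in real data gives a heavier-tailed empirical law.
        print(w.mean(), np.quantile(w, [0.5, 0.9, 0.99]))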

  11. The Coalescent Process in Models with Selection

    PubMed Central

    Kaplan, N. L.; Darden, T.; Hudson, R. R.

    1988-01-01

    Statistical properties of the process describing the genealogical history of a random sample of genes are obtained for a class of population genetics models with selection. For models with selection, in contrast to models without selection, the distribution of this process, the coalescent process, depends on the distribution of the frequencies of alleles in the ancestral generations. If the ancestral frequency process can be approximated by a diffusion, then the mean and the variance of the number of segregating sites due to selectively neutral mutations in random samples can be numerically calculated. The calculations are greatly simplified if the frequencies of the alleles are tightly regulated. If the mutation rates between alleles maintained by balancing selection are low, then the number of selectively neutral segregating sites in a random sample of genes is expected to substantially exceed the number predicted under a neutral model. PMID:3066685

  12. Framework for cascade size calculations on random networks

    NASA Astrophysics Data System (ADS)

    Burkholz, Rebekka; Schweitzer, Frank

    2018-04-01

    We present a framework to calculate the cascade size evolution for a large class of cascade models on random network ensembles in the limit of infinite network size. Our method is exact and applies to network ensembles with almost arbitrary degree distribution and degree-degree correlations and, in the case of threshold models, arbitrary threshold distribution. With our approach, we shift the perspective from the known branching-process approximations to the iterative update of suitable probability distributions. Such distributions are key to capturing cascade dynamics that involve possibly continuous quantities and that depend on the cascade history, e.g., if load is accumulated over time. As a proof of concept, we provide two examples: (a) constant-load models, which cover many of the analytically tractable cascade models, and, as a highlight, (b) a fiber bundle model that was not previously tractable by branching-process approximations. Our derivations cover the whole cascade dynamics, not only the steady state. This allows us to include interventions in time or further model complexity in the analysis.

  13. Agent-Based Computing in Distributed Adversarial Planning

    DTIC Science & Technology

    2010-08-09

    An agent is expected to agree to deviate from its optimal uncoordinated plan only if doing so improves its position. For opponent modeling, we have analyzed the suitability of business process models for creating process models of opponents.

  14. Modelling retention and dispersion mechanisms of bluefin tuna eggs and larvae in the northwest Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Mariani, Patrizio; MacKenzie, Brian R.; Iudicone, Daniele; Bozec, Alexandra

    2010-07-01

    Knowledge of the early life history of most fish species in the Mediterranean Sea is sparse and the processes affecting their recruitment are poorly understood. This is particularly true for bluefin tuna, Thunnus thynnus, even though this species is one of the world's most valued fish species. Here we develop, apply and validate an individual-based coupled biological-physical oceanographic model of fish early life history in the Mediterranean Sea. We first validate the general structure of the coupled model with a 12-day Lagrangian drift study of anchovy (Engraulis encrasicolus) larvae in the Catalan Sea. The model reproduced the drift and growth of anchovy larvae as they drifted along the Catalan coast and yielded similar patterns as those observed in the field. We then applied the model to investigate transport and retention processes affecting the spatial distribution of bluefin tuna eggs and larvae during 1999-2003, and we compared modelled distributions with available field data collected in 2001 and 2003. Modelled and field distributions generally coincided and were patchy at mesoscales (10s-100s km); larvae were most abundant in eddies and along frontal zones. We also identified probable locations of spawning bluefin tuna using hydrographic backtracking procedures; these locations were situated in a major salinity frontal zone and coincided with distributions of an electronically tagged bluefin tuna and commercial bluefin tuna fishing vessels. Moreover, we hypothesized that mesoscale processes are responsible for the aggregation and dispersion mechanisms in the area and showed that these processes were significantly correlated with atmospheric forcing processes over the NW Mediterranean Sea. Interannual variations in average summer air temperature can reduce the intensity of ocean mesoscale processes in the Balearic area and thus potentially affect bluefin tuna larvae. These modelling approaches can increase understanding of bluefin tuna recruitment processes and eventually contribute to the management of bluefin tuna fisheries.

  15. Linguistic Ambiguity in a Connectionist Model for Grammatical Studies.

    ERIC Educational Resources Information Center

    Angelica, Julia; Ney, James W.

    1995-01-01

    Discusses the evolution of the connectionist model of language processing, focusing on the parallel distributed processing (PDP) model proposed by Rumelhart and others (1986) that explains the microstructure of cognition in terms of interactive activation between elementary input, output, and intermediate processing units linked by weighted…

  16. An automated model-based aim point distribution system for solar towers

    NASA Astrophysics Data System (ADS)

    Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen

    2016-05-01

    Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.

  17. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    NASA Astrophysics Data System (ADS)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed, within an integrated GIS modeling environment, a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas, and uses a topographic index to distribute runoff source areas through watersheds; a sketch of the underlying runoff equation is given below. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically-based method that gives realistic results for watersheds with VSA hydrology.
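
    As a point of reference for the CN-VSA discussion above, the classical SCS-CN runoff equation can be written in a few lines of Python. This is a minimal sketch of the textbook relation only; the storm depth and curve number in the example call are illustrative values, not values from the study.

```python
# Classical SCS-CN runoff relation (textbook form, illustrative values).
def scs_cn_runoff(p_mm: float, cn: float) -> float:
    """Runoff depth Q (mm) from storm rainfall P (mm) for a curve number CN."""
    s = 25400.0 / cn - 254.0      # potential maximum retention S (mm)
    ia = 0.2 * s                  # conventional initial abstraction Ia = 0.2*S
    if p_mm <= ia:
        return 0.0                # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_runoff(50.0, 75.0))  # hypothetical 50 mm storm on CN = 75 ground
```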

  18. On the dangers of model complexity without ecological justification in species distribution modeling

    Treesearch

    David M. Bell; Daniel R. Schlaepfer

    2016-01-01

    Although biogeographic patterns are the product of complex ecological processes, the increasing complexity of correlative species distribution models (SDMs) is not always motivated by ecological theory, but by model fit. The validity of model projections, such as shifts in a species’ climatic niche, becomes questionable particularly during extrapolations, such as for...

  19. Modeling of First-Passage Processes in Financial Markets

    NASA Astrophysics Data System (ADS)

    Inoue, Jun-Ichi; Hino, Hikaru; Sazuka, Naoya; Scalas, Enrico

    2010-03-01

    In this talk, we attempt a microscopic modeling of the first-passage process (or first-exit process) of the BUND future by a minority game with market history. We find that the first-passage process of the minority game with an appropriate history length generates the same properties as the BTP future (the middle- and long-term Italian government bonds with fixed interest rates); namely, both first-passage time distributions have a crossover at some specific time scale, as is the case for the Mittag-Leffler function. We also provide a macroscopic (or phenomenological) model of the first-passage process of the BTP future and show analytically that the first-passage time distribution of the simplest mixture of normal compound Poisson processes does not have such a crossover.

  20. Statistical distributions of earthquake numbers: consequence of branching process

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2010-03-01

    We discuss various statistical distributions of earthquake numbers. Previously, we derived several discrete distributions to describe earthquake numbers for the branching model of earthquake occurrence: the Poisson, geometric, logarithmic and negative binomial (NBD) distributions. The theoretical model is the 'birth and immigration' population process. The first three distributions above can be considered special cases of the NBD. In particular, a point branching process along the magnitude (or log seismic moment) axis with independent events (immigrants) explains the magnitude/moment-frequency relation and the NBD of earthquake counts in large time/space windows, as well as the dependence of the NBD parameters on the magnitude threshold (the completeness magnitude of an earthquake catalogue). We discuss applying these distributions, especially the NBD, to approximate event numbers in earthquake catalogues. There are many different representations of the NBD. Most can be traced either to the Pascal distribution or to the mixture of the Poisson distribution with the gamma law; a quick numerical check of the latter representation is sketched below. We discuss the advantages and drawbacks of both representations for statistical analysis of earthquake catalogues. We also consider applying the NBD to earthquake forecasts and describe the limits of its application for the given equations. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrence, the NBD has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We determine the parameter values and their uncertainties for several local and global catalogues, and their subdivisions in various time intervals, magnitude thresholds, spatial windows, and tectonic categories. The theoretical model of how the clustering parameter depends on the corner (maximum) magnitude can be used to predict the future earthquake number distribution in regions where very large earthquakes have not yet occurred.
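
    Since the abstract contrasts the Pascal and Poisson-gamma-mixture representations of the NBD, a short numerical check of the mixture representation may help; the parameter values below are arbitrary.

```python
# Gamma-mixed Poisson vs. direct negative binomial sampling (arbitrary values).
import numpy as np

rng = np.random.default_rng(0)
tau, p = 3.0, 0.4                              # dispersion and probability
lam = rng.gamma(shape=tau, scale=p / (1 - p), size=100_000)
mixed = rng.poisson(lam)                       # Poisson with gamma-random rate
direct = rng.negative_binomial(tau, 1 - p, size=100_000)

print(mixed.mean(), direct.mean())             # both ≈ tau p / (1 - p)
print(mixed.var(), direct.var())               # both ≈ tau p / (1 - p)^2
```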

  1. Don Quixote Pond: A Small Scale Model of Weathering and Salt Accumulation

    NASA Technical Reports Server (NTRS)

    Englert, P.; Bishop, J. L.; Patel, S. N.; Gibson, E. K.; Koeberl, C.

    2015-01-01

    The formation of Don Quixote Pond in the North Fork of Wright Valley, Antarctica, is a model for unique terrestrial calcium, chlorine, and sulfate weathering, accumulation, and distribution processes. The formation of Don Quixote Pond by simple shallow and deep groundwater contrasts with more complex models for Don Juan Pond in the South Fork of Wright Valley. Our study aims to understand the formation of Don Quixote Pond both as a unique terrestrial process and as a model for Ca, Cl, and S weathering and distribution on Mars.

  2. Predicting wildfire occurrence distribution with spatial point process models and its uncertainty assessment: a case study in the Lake Tahoe Basin, USA

    Treesearch

    Jian Yang; Peter J. Weisberg; Thomas E. Dilts; E. Louise Loudermilk; Robert M. Scheller; Alison Stanton; Carl Skinner

    2015-01-01

    Strategic fire and fuel management planning benefits from detailed understanding of how wildfire occurrences are distributed spatially under current climate, and from predictive models of future wildfire occurrence given climate change scenarios. In this study, we fitted historical wildfire occurrence data from 1986 to 2009 to a suite of spatial point process (SPP)...

  3. Multi-site calibration, validation, and sensitivity analysis of the MIKE SHE Model for a large watershed in northern China

    Treesearch

    S. Wang; Z. Zhang; G. Sun; P. Strauss; J. Guo; Y. Tang; A. Yao

    2012-01-01

    Model calibration is essential for hydrologic modeling of large watersheds in a heterogeneous mountain environment. Little guidance is available for model calibration protocols for distributed models that aim at capturing the spatial variability of hydrologic processes. This study used the physically-based distributed hydrologic model, MIKE SHE, to contrast a lumped...

  4. Numerical modeling of laser assisted tape winding process

    NASA Astrophysics Data System (ADS)

    Zaami, Amin; Baran, Ismet; Akkerman, Remko

    2017-10-01

    Laser assisted tape winding (LATW) has become an increasingly popular way of producing new thermoplastic products such as ultra-deep-sea water risers, gas tanks, and structural parts for aerospace applications. Predicting the temperature in LATW has been of great interest, since the temperature at the nip point plays a key role in mechanical interface performance. Modeling the LATW process involves several challenges, such as the interaction of optics and heat transfer. In the current study, the optical behavior of laser radiation on circular surfaces is modeled numerically based on ray tracing and a non-specular reflection model. The non-specular reflection is implemented considering the anisotropic reflective behavior of the fiber-reinforced thermoplastic tape, using a bidirectional reflectance distribution function (BRDF). The proposed model includes a three-dimensional circular geometry, in which the effects of reflection from different ranges of the circular surface, as well as the effect of process parameters on the temperature distribution, are studied. The heat transfer model is constructed using a fully implicit method. The effect of process parameters on the nip-point temperature is examined. Furthermore, several laser distributions, including Gaussian and linear, are examined, which have not previously been considered in the literature.

  5. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    NASA Astrophysics Data System (ADS)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
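
    For context, the equilibrium distribution of a fault-detection time distribution F(t) with finite mean μ has a standard renewal-theory form; the sketch below states it together with the resulting NHPP mean value function. That this is precisely the construction used in the paper is our reading of the abstract, and the notation (ω for the expected total number of faults) is ours.

```latex
% Equilibrium distribution of F(t) with mean \mu, and the NHPP mean value
% function built from it (standard definitions; notation assumed).
F_e(t) = \frac{1}{\mu}\int_0^t \bigl(1 - F(x)\bigr)\,dx, \qquad
\Lambda_e(t) = \omega\, F_e(t)
```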

  6. Concentration-driven models revisited: towards a unified framework to model settling tanks in water resource recovery facilities.

    PubMed

    Torfs, Elena; Martí, M Carmen; Locatelli, Florent; Balemans, Sophie; Bürger, Raimund; Diehl, Stefan; Laurent, Julien; Vanrolleghem, Peter A; François, Pierre; Nopens, Ingmar

    2017-02-01

    A new perspective on the modelling of settling behaviour in water resource recovery facilities is introduced. The ultimate goal is to describe in a unified way the processes taking place both in primary settling tanks (PSTs) and secondary settling tanks (SSTs) for a more detailed operation and control. First, experimental evidence is provided, pointing out distributed particle properties (such as size, shape, density, porosity, and flocculation state) as an important common source of distributed settling behaviour in different settling unit processes and throughout different settling regimes (discrete, hindered and compression settling). Subsequently, a unified model framework that considers several particle classes is proposed in order to describe distributions in settling behaviour as well as the effect of variations in particle properties on the settling process. The result is a set of partial differential equations (PDEs) that are valid from dilute concentrations, where they correspond to discrete settling, to concentrated suspensions, where they correspond to compression settling. Consequently, these PDEs model both PSTs and SSTs.

  7. DAVE: A plug and play model for distributed multimedia application development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mines, R.F.; Friesen, J.A.; Yang, C.L.

    1994-07-01

    This paper presents a model being used for the development of distributed multimedia applications. The Distributed Audio Video Environment (DAVE) was designed to support the development of a wide range of distributed applications. The implementation of this model is described. DAVE is unique in that it combines a simple "plug and play" programming interface, supports both centralized and fully distributed applications, provides device and media extensibility, promotes object reusability, and supports interoperability and network independence. This model enables application developers to easily develop distributed multimedia applications and create reusable multimedia toolkits. DAVE was designed for developing applications such as video conferencing, media archival, remote process control, and distance learning.

  8. SGR-like behaviour of the repeating FRB 121102

    NASA Astrophysics Data System (ADS)

    Wang, F. Y.; Yu, H.

    2017-03-01

    Fast radio bursts (FRBs) are millisecond-duration radio signals occurring at cosmological distances. However, the physical origin of FRBs remains a mystery, and many models have been proposed. Here we study the frequency distributions of peak flux, fluence, duration and waiting time for the repeating FRB 121102. The cumulative distributions of peak flux, fluence and duration show power-law forms. The waiting time distribution also follows a power law and is consistent with a non-stationary Poisson process. These distributions are similar to those of soft gamma repeaters (SGRs). We also use the statistical results to test proposed models for FRBs. These distributions are consistent with the predictions of avalanche models of slowly driven nonlinear dissipative systems.
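
    The kind of distribution analysis described above is straightforward to illustrate. The minimal Python sketch below builds an empirical complementary cumulative distribution and fits a power-law index on log-log axes; the synthetic Pareto sample stands in for the actual FRB 121102 fluences, and the index value is purely illustrative.

```python
# Empirical CCDF and log-log slope fit (synthetic stand-in for FRB fluences).
import numpy as np

rng = np.random.default_rng(1)
fluence = rng.pareto(1.8, 2000) + 1.0           # synthetic power-law sample

x = np.sort(fluence)
ccdf = 1.0 - np.arange(1, x.size + 1) / x.size  # P(X > x)
keep = ccdf > 0                                 # drop the final zero point
slope, _ = np.polyfit(np.log10(x[keep]), np.log10(ccdf[keep]), 1)
print(f"fitted power-law index ≈ {-slope:.2f}")
```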

  9. A one-dimensional sectional model to simulate multicomponent aerosol dynamics in the marine boundary layer 2. Model application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fitzgerald, James W.; Hoppel, William A.; Frick, Glendon M.

    1998-07-01

    The dynamics of aerosols in the marine boundary layer (MBL) are simulated with the marine boundary layer aerosol model (MARBLES), a one-dimensional, multicomponent sectional aerosol model [Fitzgerald et al., this issue; Gelbard et al., this issue]. First, to illustrate how the various aerosol processes influence the particle size distribution, the model was run with one or two processes operating on the same initial size distribution. Because of current interest in the effects of cloud processing of aerosols and exchange of aerosols with the free troposphere (FT) on marine aerosol size distributions, these two processes are examined in considerable detail. The simulations show that the effect of cloud processing (characteristic double-peaked size distribution) in the upper part of the MBL is manifested at the surface on a timescale that is much faster than changes due to exchange with the FT, assuming a typical exchange velocity of 0.6 cm s⁻¹. The model predicts that the FT can be a significant source of particles for the MBL in the size range of the cloud-processing minimum, between the unactivated interstitial particles and the cloud condensation nuclei (CCN) which have grown as a result of conversion of dissolved SO₂ to sulfate in cloud droplets. The model was also used to simulate the evolution of the aerosol size distribution in an air mass advecting from the east coast of the United States out over the ocean for up to 10 days. The modification of a continental aerosol size distribution to one that is remote marine in character occurs on a timescale of 6-8 days. Nucleation was not observed in the base-case 10-day advection simulation, which assumed rather typical meteorological conditions. However, significant nucleation was predicted under a more favorable (albeit atypical) combination of conditions, which included significant precipitation scavenging (5 mm h⁻¹ of rain for 12 hours), colder temperatures by 10 °C (283 K at the surface decreasing to 278 K at 1000 m) and a high DMS flux (40 μmol m⁻² d⁻¹). In a test of model self-initialization, long-term (8-10 days) predictions of marine aerosol size distributions were found to be essentially independent of initial conditions. © 1998 American Geophysical Union

  10. Task allocation model for minimization of completion time in distributed computer systems

    NASA Astrophysics Data System (ADS)

    Wang, Jai-Ping; Steidley, Carl W.

    1993-08-01

    A task in a distributed computing system consists of a set of related modules. Each of the modules will execute on one of the processors of the system and communicate with some other modules. In addition, precedence relationships may exist among the modules. Task allocation is an essential activity in distributed-software design. This activity is of importance to all phases of the development of a distributed system. This paper establishes task completion-time models and task allocation models for minimizing task completion time. Current work in this area is either at the experimental level or without the consideration of precedence relationships among modules. The development of mathematical models for the computation of task completion time and task allocation will benefit many real-time computer applications such as radar systems, navigation systems, industrial process control systems, image processing systems, and artificial intelligence oriented systems.

  11. A mathematical model for generating bipartite graphs and its application to protein networks

    NASA Astrophysics Data System (ADS)

    Nacher, J. C.; Ochiai, T.; Hayashida, M.; Akutsu, T.

    2009-12-01

    Complex systems arise in many different contexts from large communication systems and transportation infrastructures to molecular biology. Most of these systems can be organized into networks composed of nodes and interacting edges. Here, we present a theoretical model that constructs bipartite networks with the particular feature that the degree distribution can be tuned depending on the probability rate of fundamental processes. We then use this model to investigate protein-domain networks. A protein can be composed of up to hundreds of domains. Each domain represents a conserved sequence segment with specific functional tasks. We analyze the distribution of domains in Homo sapiens and Arabidopsis thaliana organisms and the statistical analysis shows that while (a) the number of domain types shared by k proteins exhibits a power-law distribution, (b) the number of proteins composed of k types of domains decays as an exponential distribution. The proposed mathematical model generates bipartite graphs and predicts the emergence of this mixing of (a) power-law and (b) exponential distributions. Our theoretical and computational results show that this model requires (1) growth process and (2) copy mechanism.
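
    As a rough illustration of the growth-plus-copy mechanism the authors identify, the toy generator below grows a bipartite protein-domain structure by three moves: adding a protein with a brand-new domain type, copying an existing protein's domain list, and attaching an existing domain type to a random protein. The move probabilities are arbitrary assumptions, not the paper's fitted rates.

```python
# Toy bipartite growth model with a copy mechanism (illustrative rates).
import random
from collections import Counter

random.seed(0)
proteins = [[0]]                 # each protein is a list of domain types
n_types = 1
for _ in range(10_000):
    r = random.random()
    if r < 0.2:                  # growth: new protein with a new domain type
        proteins.append([n_types])
        n_types += 1
    elif r < 0.6:                # copy: duplicate an existing protein
        proteins.append(list(random.choice(proteins)))
    else:                        # attach an existing domain type to a protein
        random.choice(proteins).append(random.randrange(n_types))

domain_counts = Counter(len(set(p)) for p in proteins)
print(sorted(domain_counts.items())[:10])   # proteins with k domain types
```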

  12. Computational modeling of the pressurization process in a NASP vehicle propellant tank experimental simulation

    NASA Technical Reports Server (NTRS)

    Sasmal, G. P.; Hochstein, J. I.; Wendl, M. C.; Hardy, T. L.

    1991-01-01

    A multidimensional computational model of the pressurization process in a slush hydrogen propellant storage tank was developed and its accuracy evaluated by comparison to experimental data measured for a 5 ft diameter spherical tank. The fluid mechanic, thermodynamic, and heat transfer processes within the ullage are represented by a finite-volume model. The model was shown to be in reasonable agreement with the experimental data. A parameter study was undertaken to examine the dependence of the pressurization process on initial ullage temperature distribution and pressurant mass flow rate. It is shown that for a given heat flux rate at the ullage boundary, the pressurization process is nearly independent of the initial temperature distribution. Significant differences were identified between the ullage temperature and velocity fields predicted for pressurization of slush and those predicted for pressurization of liquid hydrogen. A simplified model of the pressurization process was constructed in search of a dimensionless characterization of the pressurization process. It is shown that the relationship derived from this simplified model collapses all of the pressure history data generated during this study into a single curve.

  13. Self-referenced processing, neurodevelopment and joint attention in autism.

    PubMed

    Mundy, Peter; Gwaltney, Mary; Henderson, Heather

    2010-09-01

    This article describes a parallel and distributed processing model (PDPM) of joint attention, self-referenced processing and autism. According to this model, autism involves early impairments in the capacity for rapid, integrated processing of self-referenced (proprioceptive and interoceptive) and other-referenced (exteroceptive) information. Measures of joint attention have proven useful in research on autism because they are sensitive to the early development of the 'parallel' and integrated processing of self- and other-referenced stimuli. Moreover, joint attention behaviors are a consequence, but also an organizer of the functional development of a distal distributed cortical system involving anterior networks including the prefrontal and insula cortices, as well as posterior neural networks including the temporal and parietal cortices. Measures of joint attention provide early behavioral indicators of atypical development in this parallel and distributed processing system in autism. In addition it is proposed that an early, chronic disturbance in the capacity for integrating self- and other-referenced information may have cascading effects on the development of self awareness in autism. The assumptions, empirical support and future research implications of this model are discussed.

  14. THE ROLE OF FISSION IN NEUTRON STAR MERGERS AND ITS IMPACT ON THE r-PROCESS PEAKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichler, M.; Panov, I.; Rauscher, T.

    2015-07-20

    Comparing observational abundance features with nucleosynthesis predictions of stellar evolution or explosion simulations, we can scrutinize two aspects: (a) the conditions in the astrophysical production site and (b) the quality of the nuclear physics input utilized. We test the abundance features of r-process nucleosynthesis calculations for the dynamical ejecta of neutron star merger simulations based on three different nuclear mass models: the Finite Range Droplet Model, the (quenched version of the) Extended Thomas Fermi Model with Strutinsky Integral, and the Hartree-Fock-Bogoliubov mass model. We make use of corresponding fission barrier heights and compare the impact of four different fission fragment distribution models on the final r-process abundance distribution. In particular, we explore the abundance distribution in the second r-process peak and the rare-earth sub-peak as a function of mass models and fission fragment distributions, as well as the origin of a shift in the third r-process peak position. The latter has been noticed in a number of merger nucleosynthesis predictions. We show that the shift occurs during the r-process freeze-out when neutron captures and β-decays compete and an (n,γ)-(γ,n) equilibrium is no longer maintained. During this phase neutrons originate mainly from fission of material above A = 240. We also investigate the role of β-decay half-lives from recent theoretical advances, which lead either to a smaller amount of fissioning nuclei during freeze-out or a faster (and thus earlier) release of fission neutrons, which can (partially) prevent this shift and has an impact on the second and rare-earth peak as well.

  15. MSEE: Stochastic Cognitive Linguistic Behavior Models for Semantic Sensing

    DTIC Science & Technology

    2013-09-01

    Fragmentary table-of-contents excerpt; the recoverable content covers a Gaussian Process Dynamic Model with Social Network Analysis (GPDM-SNA) for small human group action recognition, an extended GPDM-SNA, and Section 3.2, "Small Human Group Activity Modeling Based on Gaussian Process Dynamic Model and Social Network Analysis (SN-GPDM)". Approved for public release; distribution unlimited.

  16. Level crossings and excess times due to a superposition of uncorrelated exponential pulses

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-01-01

    A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
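
    The starting point for such derivations is usually Rice's formula for the rate of up-crossings of a level by a stationary process, stated below for reference; the notation is ours and the specific form used in the paper may differ.

```latex
% Rice's formula: rate of up-crossings of level a by a stationary process
% X(t), with p the joint density of the process and its time derivative.
\nu^{+}(a) = \int_{0}^{\infty} \dot{x}\,
             p_{X\dot{X}}(a, \dot{x})\, d\dot{x}
```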

  17. Time-evolution of grain size distributions in random nucleation and growth crystallization processes

    NASA Astrophysics Data System (ADS)

    Teran, Anthony V.; Bill, Andreas; Bergmann, Ralf B.

    2010-02-01

    We study the time dependence of the grain size distribution N(r,t) during crystallization of a d-dimensional solid. A partial differential equation, including a source term for nuclei and a growth law for grains, is solved analytically for any dimension d. We discuss solutions obtained for processes described by the Kolmogorov-Avrami-Mehl-Johnson model for random nucleation and growth (RNG). Nucleation and growth are set on the same footing, which leads to a time-dependent decay of both effective rates. We analyze in detail how model parameters, the dimensionality of the crystallization process, and time influence the shape of the distribution. The calculations show that the dynamics of the effective nucleation and effective growth rates play an essential role in determining the final form of the distribution obtained at full crystallization. We demonstrate that for one class of nucleation and growth rates, the distribution evolves in time into the logarithmic-normal (lognormal) form discussed earlier by Bergmann and Bill [J. Cryst. Growth 310, 3135 (2008)]. We also obtain an analytical expression for the finite maximal grain size at all times. The theory allows for the description of a variety of RNG crystallization processes in thin films and bulk materials. Expressions useful for experimental data analysis are presented for the grain size distribution and the moments in terms of fundamental and measurable parameters of the model.
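
    For reference, the lognormal form mentioned above is, in a standard parameterization (the paper's normalization constant and parameters may differ):

```latex
% Lognormal grain size distribution (standard form; normalization assumed).
N(r) = \frac{N_0}{r\,\sigma\sqrt{2\pi}}
       \exp\!\left(-\frac{(\ln r - \mu)^2}{2\sigma^2}\right)
```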

  18. Coordinated control of active and reactive power of distribution network with distributed PV cluster via model predictive control

    NASA Astrophysics Data System (ADS)

    Ji, Yu; Sheng, Wanxing; Jin, Wei; Wu, Ming; Liu, Haitao; Chen, Feng

    2018-02-01

    A coordinated optimal control method for the active and reactive power of a distribution network with distributed PV clusters, based on model predictive control, is proposed in this paper. The method divides the control process into long-time-scale optimal control and short-time-scale optimal control with multi-step optimization. Because the optimization models are non-convex and nonlinear, and therefore hard to solve, they are transformed into a second-order cone programming problem. An improved IEEE 33-bus distribution network system is used to analyse the feasibility and effectiveness of the proposed control method.

  19. Distributed HUC-based modeling with SUMMA for ensemble streamflow forecasting over large regional domains.

    NASA Astrophysics Data System (ADS)

    Saharia, M.; Wood, A.; Clark, M. P.; Bennett, A.; Nijssen, B.; Clark, E.; Newman, A. J.

    2017-12-01

    Most operational streamflow forecasting systems rely on a forecaster-in-the-loop approach in which some parts of the forecast workflow require an experienced human forecaster. But this approach faces challenges surrounding process reproducibility, hindcasting capability, and extension to large domains. The operational hydrologic community is increasingly moving towards 'over-the-loop' (completely automated) large-domain simulations, yet recent developments indicate a widespread lack of community knowledge about the strengths and weaknesses of such systems for forecasting. A realistic representation of land surface hydrologic processes is a critical element for improving forecasts, but often comes at the substantial cost of forecast system agility and efficiency. While popular grid-based models support the distributed representation of land surface processes, intermediate-scale Hydrologic Unit Code (HUC)-based modeling could provide a more efficient and process-aligned spatial discretization, reducing the need for tradeoffs between model complexity and critical forecasting requirements such as ensemble methods and comprehensive model calibration. The National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the USACE to implement, assess, and demonstrate real-time, over-the-loop distributed streamflow forecasting for several large western US river basins and regions. In this presentation, we present early results from short- to medium-range hydrologic and streamflow forecasts for the Pacific Northwest (PNW). We employ real-time 1/16th-degree daily ensemble model forcings as well as downscaled Global Ensemble Forecasting System (GEFS) meteorological forecasts. These datasets drive an intermediate-scale configuration of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) model, which represents the PNW using over 11,700 HUCs. The system produces not only streamflow forecasts (using the MizuRoute channel routing tool) but also distributed model states such as soil moisture and snow water equivalent. We also describe challenges in distributed model-based forecasting, including the application and early results of real-time hydrologic data assimilation.

  20. Studies on thermokinetic of Chlorella pyrenoidosa devolatilization via different models.

    PubMed

    Chen, Zhihua; Lei, Jianshen; Li, Yunbei; Su, Xianfa; Hu, Zhiquan; Guo, Dabin

    2017-11-01

    The thermokinetics of Chlorella pyrenoidosa (CP) devolatilization were investigated based on an iso-conversional model and different distributed activation energy models (DAEM). The iso-conversional analysis showed that CP devolatilization roughly followed a single step with mechanism function f(α) = (1-α)³ and kinetic parameters E₀ = 180.5 kJ/mol and A₀ = 1.5×10¹³ s⁻¹. The logistic distribution was the most suitable activation energy distribution function for CP devolatilization. Although the reaction order n = 3.3 was in accordance with the iso-conversional analysis, the logistic DAEM could not capture the weight-loss features in detail, since it presents the process as a single-step reaction. The non-uniform activation energy distribution in the Miura-Maki DAEM and the non-uniform weight-fraction distribution in the discrete DAEM reflected the weight-loss features; owing to these non-uniform distributions, the Miura-Maki DAEM and the discrete DAEM could describe the weight-loss features. Copyright © 2017 Elsevier Ltd. All rights reserved.
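
    To make the DAEM structure concrete, the sketch below numerically evaluates the conversion α(T) for a logistic activation energy distribution under a constant heating rate. It is a generic DAEM evaluation, not the authors' code; the heating rate, energy grid and spread parameter are illustrative assumptions, with only E₀ and A₀ taken from the abstract.

```python
# Generic DAEM: alpha(T) = 1 - ∫ f(E) exp(-(A0/beta) ∫ exp(-E/RT') dT') dE,
# here with a logistic f(E). Values are illustrative except E0 and A0.
import numpy as np

R = 8.314e-3                         # gas constant, kJ/(mol K)
beta = 10.0 / 60.0                   # heating rate: 10 K/min in K/s (assumed)
A0, E0, s = 1.5e13, 180.5, 8.0       # 1/s, kJ/mol, kJ/mol (spread s assumed)

E = np.linspace(120.0, 240.0, 600)   # activation energy grid, kJ/mol
f = np.exp(-(E - E0) / s) / (s * (1 + np.exp(-(E - E0) / s)) ** 2)  # logistic pdf

T = np.linspace(400.0, 1000.0, 1200) # temperature grid, K
dT = T[1] - T[0]
k = A0 * np.exp(-E[None, :] / (R * T[:, None]))   # Arrhenius rate k(T, E), 1/s
psi = np.cumsum(k, axis=0) * dT / beta            # inner temperature integral
alpha = 1.0 - (f[None, :] * np.exp(-psi)).sum(axis=1) * (E[1] - E[0])
print(alpha[::200])                               # conversion vs. temperature
```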

  1. Distributed fiber optic sensor-enhanced detection and prediction of shrinkage-induced delamination of ultra-high-performance concrete overlay

    NASA Astrophysics Data System (ADS)

    Bao, Yi; Valipour, Mahdi; Meng, Weina; Khayat, Kamal H.; Chen, Genda

    2017-08-01

    This study develops a delamination detection system for smart ultra-high-performance concrete (UHPC) overlays using a fully distributed fiber optic sensor. Three 450 mm (length) × 200 mm (width) × 25 mm (thickness) UHPC overlays were cast over an existing 200 mm thick concrete substrate. The initiation and propagation of delamination due to early-age shrinkage of the UHPC overlay were detected as sudden increases, and their spatial extension, in the distribution of shrinkage-induced strains measured from the sensor based on pulse pre-pump Brillouin optical time domain analysis. The distributed sensor is demonstrated to be effective in detecting delamination openings from microns to hundreds of microns. A three-dimensional finite element model with experimental material properties is proposed to understand the complete delamination process measured from the distributed sensor. The model is validated using the distributed sensor data. The finite element model with cohesive elements for the overlay-substrate interface can predict the complete delamination process.

  2. The scaling of geographic ranges: implications for species distribution models

    USGS Publications Warehouse

    Yackulic, Charles B.; Ginsberg, Joshua R.

    2016-01-01

    There is a need for timely science to inform policy and management decisions; however, we must also strive to provide predictions that best reflect our understanding of ecological systems. Species distributions evolve through time and reflect responses to environmental conditions that are mediated through individual and population processes. Species distribution models that reflect this understanding, and explicitly model dynamics, are likely to give more accurate predictions.

  3. Modeling of Grain Size Distribution of Tsunami Sand Deposits in V-shaped Valley of Numanohama During the 2011 Tohoku Tsunami

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Satake, K.; Goto, T.; Takahashi, T.

    2016-12-01

    Estimating tsunami amplitude from tsunami sand deposits has been a challenge. The grain size distribution of a tsunami sand deposit may correlate with the tsunami inundation process, and further with its source characteristics. In order to test this hypothesis, we need a tsunami sediment transport model that can accurately estimate the grain size distribution of tsunami deposits. Here, we built and validated a tsunami sediment transport model that can simulate grain size distributions. Our numerical model has three layers: a suspended load layer, an active bed layer, and a parent bed layer. The two bed layers contain information about the grain size distribution. This numerical model can handle a wide range of grain sizes, from 0.063 mm (4 φ) to 5.657 mm (-2.5 φ). We apply the numerical model to simulate the sedimentation process during the 2011 Tohoku earthquake in Numanohama, Iwate prefecture, Japan. The grain size distributions at 15 sample points along a 900 m transect from the beach are used to validate the tsunami sediment transport model. The tsunami deposits are dominated by coarse sand with diameters of 0.5-1 mm, and their thickness is up to 25 cm. Our tsunami model reproduces well the observed tsunami run-ups, which range from 16 to 34 m along the steep valley in Numanohama. The shapes of the simulated grain size distributions at many sample points located within 300 m from the shoreline are similar to the observations. The differences between the observed and simulated peaks of the grain size distributions are less than 1 φ. Our results also show that the simulated sand thickness distribution along the transect is consistent with the observation.
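
    The φ values quoted above are on the Krumbein phi scale, a base-2 logarithmic transform of grain diameter in millimetres; a two-line check:

```python
# Krumbein phi scale: phi = -log2(d / 1 mm).
import math

def to_phi(d_mm: float) -> float:
    return -math.log2(d_mm)

print(to_phi(0.063), to_phi(5.657))   # ≈ 4.0 and ≈ -2.5, matching the text
```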

  4. A numerical study of zone-melting process for the thermoelectric material of Bi2Te3

    NASA Astrophysics Data System (ADS)

    Chen, W. C.; Wu, Y. C.; Hwang, W. S.; Hsieh, H. L.; Huang, J. Y.; Huang, T. K.

    2015-06-01

    In this study, a numerical model has been established using commercial software, ProCAST, to simulate the temperature variation/distribution and the resulting microstructure of Bi2Te3 fabricated by the zone-melting technique. An experiment was then conducted to measure the temperature variation/distribution during the zone-melting process to validate the numerical system. The effects of processing parameters, such as heater moving speed and temperature, on the crystallization microstructure are also evaluated numerically. In the experiment, Bi2Te3 powder is filled into a 30 mm diameter quartz cylinder and the heater is set to 800 °C with a moving speed of 12.5 mm/hr. A thermocouple inserted in the Bi2Te3 powder measures the temperature variation/distribution of the zone-melting process. The measured temperatures are compared to the results of the numerical simulation, and the results show that the model and the experiment match well. The model is then used to evaluate crystal formation for the 30 mm diameter Bi2Te3 process; it is found that columnar crystals are obtained when the moving speed is slower than 17.5 mm/hr. Finally, we use the model to predict crystal formation in the zone-melting process for Bi2Te3 with a 45 mm diameter. The results show that it is difficult to grow columnar crystals at a diameter of 45 mm.

  5. A Bayesian Framework of Uncertainties Integration in 3D Geological Model

    NASA Astrophysics Data System (ADS)

    Liang, D.; Liu, X.

    2017-12-01

    3D geological models can describe complicated geological phenomena in an intuitive way, but their application may be limited by uncertain factors. Although great progress has been made over the years, many studies decompose the uncertainties of a geological model and analyze them item by item from each source, ignoring the comprehensive impact of multi-source uncertainties. To evaluate the synthetical uncertainty, we choose probability distributions to quantify uncertainty and propose a Bayesian framework of uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the synthetical uncertainty of a geological model. Uncertainties propagate and accumulate in the modeling process, so the gradual integration of multi-source uncertainty simulates this uncertainty propagation. Bayesian inference accomplishes uncertainty updating in the modeling process. The maximum entropy principle is effective for estimating the prior probability distribution, ensuring that the prior is subject to the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution that evaluates the synthetical uncertainty of the geological model and represents the combined impact of all uncertain factors on its spatial structure. The framework provides a solution for evaluating the synthetical impact of multi-source uncertainties on a geological model and an approach to studying the mechanism of uncertainty propagation in geological modeling.

  6. Self-consistent approach for neutral community models with speciation

    NASA Astrophysics Data System (ADS)

    Haegeman, Bart; Etienne, Rampal S.

    2010-03-01

    Hubbell’s neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is particularly simple, describing speciation as a point-mutation event at the birth of a single individual. The stationary species abundance distribution of the basic model, which can be solved exactly, fits empirical data of distributions of species’ abundances surprisingly well. More realistic speciation models have been proposed, such as the random-fission model in which new species appear by splitting up existing species. However, no analytical solution is available for these models, impeding quantitative comparison with data. Here, we present a self-consistent approximation method for neutral community models with various speciation modes, including random fission. We derive explicit formulas for the stationary species abundance distribution, which agree very well with simulations. We expect that our approximation method will be useful to study other speciation processes in neutral community models as well.
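
    The basic point-mutation model referred to above is simple enough to simulate directly. The sketch below runs zero-sum birth-death dynamics on a local community and prints the resulting species abundance distribution; community size, speciation rate and run length are illustrative choices, not values from the paper.

```python
# Zero-sum neutral dynamics with point-mutation speciation (toy sizes).
import random
from collections import Counter

random.seed(0)
J, nu = 2000, 0.01                      # community size, speciation probability
community = [0] * J                     # start from a single species
next_species = 1
for _ in range(200 * J):                # birth-death events
    i = random.randrange(J)             # a random individual dies
    if random.random() < nu:            # point mutation: a new species appears
        community[i] = next_species
        next_species += 1
    else:                               # else replaced by a random parent's offspring
        community[i] = community[random.randrange(J)]

abundances = sorted(Counter(community).values(), reverse=True)
print(len(abundances), abundances[:10]) # richness and the most abundant species
```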

  7. Self-Referenced Processing, Neurodevelopment and Joint Attention in Autism

    ERIC Educational Resources Information Center

    Mundy, Peter; Gwaltney, Mary; Henderson, Heather

    2010-01-01

    This article describes a parallel and distributed processing model (PDPM) of joint attention, self-referenced processing and autism. According to this model, autism involves early impairments in the capacity for rapid, integrated processing of self-referenced (proprioceptive and interoceptive) and other-referenced (exteroceptive) information.…

  8. Performance of the Heavy Flavor Tracker (HFT) detector in star experiment at RHIC

    NASA Astrophysics Data System (ADS)

    Alruwaili, Manal

    With growing technology, the number of processors is becoming massive, and today's supercomputer-level processing will be available on desktops in the next decade. For mass-scale application software development on the massively parallel computing available on desktops, existing popular languages with large libraries have to be augmented with new constructs and paradigms that exploit massively parallel computing and distributed memory models while retaining user-friendliness. Currently available object-oriented languages for massively parallel computing, such as Chapel, X10 and UPC++, exploit distributed computing, data-parallel computing and thread-parallelism at the process level in the PGAS (Partitioned Global Address Space) memory model. However, they do not incorporate: 1) any extension for object distribution to exploit the PGAS model; 2) the flexibility of migrating or cloning an object between places to exploit load balancing; and 3) the programming paradigms that result from integrating data- and thread-level parallelism with object distribution. In the proposed thesis, I compare different languages in the PGAS model; propose new constructs that extend C++ with object distribution, object migration and object cloning; and integrate PGAS-based process constructs with these extensions on distributed objects. A new paradigm, MIDD (Multiple Invocation Distributed Data), is also presented, in which different copies of the same class can be invoked and work on different elements of distributed data concurrently using remote method invocations. I present the new constructs, their grammar and their behavior. The new constructs are explained using simple programs that utilize them.

  9. Statistical prediction with Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1989-01-01

    A new viewpoint of the processing performed by Kanerva's sparse distributed memory (SDM) is presented. In conditions of near- or over-capacity, where the associative-memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor. Mathematical results are presented which serve as the framework for a new statistical viewpoint of sparse distributed memory and for which the standard formulation of SDM is a special case. This viewpoint suggests possible enhancements to the SDM model, including a procedure for improving the predictiveness of the system based on Holland's work with genetic algorithms, and a method for improving the capacity of SDM even when used as an associative memory.
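
    For readers unfamiliar with SDM, the following compact sketch shows the standard write/read cycle against randomly placed hard locations activated within a Hamming radius. It illustrates the conventional associative-memory regime only, not the statistical-predictor reinterpretation proposed in the record above; all sizes are toy values.

```python
# Compact sketch of Kanerva-style sparse distributed memory (toy sizes).
import numpy as np

rng = np.random.default_rng(0)
N, M, RADIUS = 256, 2000, 111                 # address bits, hard locations, radius
addresses = rng.integers(0, 2, (M, N))        # fixed random hard-location addresses
counters = np.zeros((M, N), dtype=int)        # content counters per location

def write(addr, data):
    active = (addresses != addr).sum(axis=1) <= RADIUS   # Hamming-ball activation
    counters[active] += np.where(data == 1, 1, -1)

def read(addr):
    active = (addresses != addr).sum(axis=1) <= RADIUS
    return (counters[active].sum(axis=0) > 0).astype(int)  # majority vote per bit

pattern = rng.integers(0, 2, N)
write(pattern, pattern)                       # autoassociative store
noisy = pattern.copy(); noisy[:20] ^= 1       # flip 20 address bits
print((read(noisy) == pattern).mean())        # fraction of bits recovered
```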

  10. An experimental paradigm for team decision processes

    NASA Technical Reports Server (NTRS)

    Serfaty, D.; Kleinman, D. L.

    1986-01-01

    The study of distributed information processing and decision making is presently hampered by two factors: (1) the inherent complexity of the mathematical formulation of decentralized problems has prevented the development of models that could be used to predict performance in a distributed environment; and (2) the lack of comprehensive scientific empirical data on human team decision making has hindered the development of significant descriptive models. As part of a comprehensive effort to find a new framework for multihuman decision making problems, a novel experimental research paradigm was developed involving human teams in decision making tasks. Attempts to construct parts of an integrated model with ideas from queueing networks, team theory, distributed estimation and decentralized resource management are described.

  11. The Role of Graphlets in Viral Processes on Networks

    NASA Astrophysics Data System (ADS)

    Khorshidi, Samira; Al Hasan, Mohammad; Mohler, George; Short, Martin B.

    2018-05-01

    Predicting the evolution of viral processes on networks is an important problem with applications arising in biology, the social sciences, and the study of the Internet. In existing works, mean-field analysis based upon degree distribution is used for the prediction of viral spreading across networks of different types. However, it has been shown that degree distribution alone fails to predict the behavior of viruses on some real-world networks and recent attempts have been made to use assortativity to address this shortcoming. In this paper, we show that adding assortativity does not fully explain the variance in the spread of viruses for a number of real-world networks. We propose using the graphlet frequency distribution in combination with assortativity to explain variations in the evolution of viral processes across networks with identical degree distribution. Using a data-driven approach by coupling predictive modeling with viral process simulation on real-world networks, we show that simple regression models based on graphlet frequency distribution can explain over 95% of the variance in virality on networks with the same degree distribution but different network topologies. Our results not only highlight the importance of graphlets but also identify a small collection of graphlets which may have the highest influence over the viral processes on a network.

  12. Model of white oak flower survival and maturation

    Treesearch

    David R. Larsen; Robert A. Cecich

    1997-01-01

    A stochastic model of oak flower dynamics is presented that integrates a number of factors which appear to affect the oak pistillate flower development process. The factors are modeled such that the distribution of the predicted flower populations could have come from the same distribution as the observed flower populations. Factors included in the model are: the range...

  13. A superstatistical model of metastasis and cancer survival

    NASA Astrophysics Data System (ADS)

    Leon Chen, L.; Beck, Christian

    2008-05-01

    We introduce a superstatistical model for the progression statistics of malignant cancer cells. The metastatic cascade is modeled as a complex nonequilibrium system with several macroscopic pathways and inverse-chi-square distributed parameters of the underlying Poisson processes. The predictions of the model are in excellent agreement with observed survival-time probability distributions of breast cancer patients.
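
    A toy version of this superstatistical construction can be simulated in a few lines: each patient receives an inverse-chi-square distributed rate parameter, and survival time is the waiting time of the resulting multi-stage Poisson process. The stage count and degrees of freedom below are illustrative assumptions, not the paper's fitted values.

```python
# Superstatistics sketch: Poisson-process progression with an
# inverse-chi-square distributed rate per patient (illustrative values).
import numpy as np

rng = np.random.default_rng(0)
n_patients, stages, dof = 50_000, 3, 4.0
lam = 1.0 / rng.chisquare(dof, n_patients)           # inverse-chi-square rates
survival = rng.gamma(shape=stages, scale=1.0 / lam)  # time through all stages

print(np.median(survival), np.quantile(survival, 0.9))  # heavy right tail
```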

  14. A new statistical time-dependent model of earthquake occurrence: failure processes driven by a self-correcting model

    NASA Astrophysics Data System (ADS)

    Rotondi, Renata; Varini, Elisa

    2016-04-01

    The long-term recurrence of strong earthquakes is often modelled by the stationary Poisson process for the sake of simplicity, although renewal and self-correcting point processes (with non-decreasing hazard functions) are more appropriate. Short-term models mainly fit earthquake clusters due to the tendency of an earthquake to trigger other earthquakes; in this case, self-exciting point processes with non-increasing hazard are especially suitable. In order to provide a unified framework for analyzing earthquake catalogs, Schoenberg and Bolt proposed the SELC (Short-term Exciting Long-term Correcting) model (BSSA, 2000) and Varini employed a state-space model for estimating the different phases of a seismic cycle (PhD Thesis, 2005). Both attempts are combinations of long- and short-term models, but the results are not completely satisfactory, due to the different scales at which these models appear to operate. In this study, we split a seismic sequence into two groups: the leader events, whose magnitude exceeds a threshold magnitude, and the remaining ones, considered subordinate events. The leader events are assumed to follow a well-known self-correcting point process named the stress release model (Vere-Jones, J. Phys. Earth, 1978; Bebbington & Harte, GJI, 2003; Varini & Rotondi, Env. Ecol. Stat., 2015). In the interval between two subsequent leader events, subordinate events are expected to cluster at the beginning (aftershocks) and at the end (foreshocks) of that interval; hence, they are modeled by a failure process that allows a bathtub-shaped hazard function. In particular, we have examined the generalized Weibull distributions, a large family that contains distributions with different bathtub-shaped hazards as well as the standard Weibull distribution (Lai, Springer, 2014). The model is fitted to a dataset of Italian historical earthquakes and the results of Bayesian inference are shown.

  15. Semiparametric Bayesian classification with longitudinal markers

    PubMed Central

    De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter

    2013-01-01

    Summary We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871
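
    As background, a single draw from a Dirichlet process prior can be sketched via the usual stick-breaking construction; the concentration parameter, base measure and truncation level below are arbitrary choices for illustration, not those of the study.

```python
# Stick-breaking sketch of a (truncated) Dirichlet process draw.
import numpy as np

rng = np.random.default_rng(0)
alpha, K = 2.0, 100                      # concentration, truncation level
v = rng.beta(1.0, alpha, K)              # stick-breaking fractions
w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))  # atom weights
atoms = rng.normal(0.0, 1.0, K)          # atoms from a N(0,1) base measure

samples = rng.choice(atoms, size=10_000, p=w / w.sum())
print(np.unique(samples).size)           # a few atoms carry most of the mass
```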

  16. Identifiability measures to select the parameters to be estimated in a solid-state fermentation distributed parameter model.

    PubMed

    da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G

    2016-07-08

    Process modeling can lead to advantages such as helping in process control, reducing process costs and improving product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. Parameter estimation with a parameter identifiability analysis (PIA) is performed to build an accurate model with optimal parameters. Statistical tests were made to verify the model accuracy with the estimated parameters under different assumptions. The results show that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were nonidentifiable, and better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful in the estimation procedure, since it may reduce the number of parameters to be evaluated. Furthermore, PIA improved the model results, proving to be an important step to take. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016.

  17. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    PubMed

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
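
    The generative model described in the Methods can be sketched directly: draw a variance from an inverse gamma distribution, then draw the EMG sample from a zero-mean Gaussian with that variance. The shape and scale below are illustrative, not the paper's estimates; marginally the samples follow a scaled Student-t, i.e. heavier-tailed than a plain Gaussian.

```python
# EMG generative sketch: Gaussian given an inverse-gamma distributed variance.
import numpy as np

rng = np.random.default_rng(0)
a, b, n = 3.0, 2.0, 100_000                # inverse-gamma shape/scale (assumed)
variance = 1.0 / rng.gamma(a, 1.0 / b, n)  # inverse-gamma(a, b) draws
emg = rng.normal(0.0, np.sqrt(variance))   # one Gaussian sample per variance

# Tail check: noticeably more 3-sigma events than the Gaussian ~0.0027.
print(np.mean(np.abs(emg) > 3 * emg.std()))
```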

  18. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  19. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  20. Multiscale Modeling of Antibody-Drug Conjugates: Connecting Tissue and Cellular Distribution to Whole Animal Pharmacokinetics and Potential Implications for Efficacy.

    PubMed

    Cilliers, Cornelius; Guo, Hans; Liao, Jianshan; Christodolu, Nikolas; Thurber, Greg M

    2016-09-01

    Antibody-drug conjugates exhibit complex pharmacokinetics due to their combination of macromolecular and small molecule properties. These issues range from systemic concerns, such as deconjugation of the small molecule drug during the long antibody circulation time or rapid clearance from nonspecific interactions, to local tumor tissue heterogeneity, cell bystander effects, and endosomal escape. Mathematical models can be used to study the impact of these processes on overall distribution in an efficient manner, and several types of models have been used to analyze varying aspects of antibody distribution including physiologically based pharmacokinetic (PBPK) models and tissue-level simulations. However, these processes are quantitative in nature and cannot be handled qualitatively in isolation. For example, free antibody from deconjugation of the small molecule will impact the distribution of conjugated antibodies within the tumor. To incorporate these effects into a unified framework, we have coupled the systemic and organ-level distribution of a PBPK model with the tissue-level detail of a distributed parameter tumor model. We used this mathematical model to analyze new experimental results on the distribution of the clinical antibody-drug conjugate Kadcyla in HER2-positive mouse xenografts. This model is able to capture the impact of the drug-antibody ratio (DAR) on tumor penetration, the net result of drug deconjugation, and the effect of using unconjugated antibody to drive ADC penetration deeper into the tumor tissue. This modeling approach will provide quantitative and mechanistic support to experimental studies trying to parse the impact of multiple mechanisms of action for these complex drugs.

  1. Multiscale Modeling of Antibody Drug Conjugates: Connecting tissue and cellular distribution to whole animal pharmacokinetics and potential implications for efficacy

    PubMed Central

    Cilliers, Cornelius; Guo, Hans; Liao, Jianshan; Christodolu, Nikolas; Thurber, Greg M.

    2016-01-01

    Antibody drug conjugates exhibit complex pharmacokinetics due to their combination of macromolecular and small molecule properties. These issues range from systemic concerns, such as deconjugation of the small molecule drug during the long antibody circulation time or rapid clearance from non-specific interactions, to local tumor tissue heterogeneity, cell bystander effects, and endosomal escape. Mathematical models can be used to study the impact of these processes on overall distribution in an efficient manner, and several types of models have been used to analyze varying aspects of antibody distribution including physiologically based pharmacokinetic (PBPK) models and tissue-level simulations. However, these processes are quantitative in nature and cannot be handled qualitatively in isolation. For example, free antibody from deconjugation of the small molecule will impact the distribution of conjugated antibodies within the tumor. To incorporate these effects into a unified framework, we have coupled the systemic and organ-level distribution of a PBPK model with the tissue-level detail of a distributed parameter tumor model. We used this mathematical model to analyze new experimental results on the distribution of the clinical antibody drug conjugate Kadcyla in HER2 positive mouse xenografts. This model is able to capture the impact of the drug antibody ratio (DAR) on tumor penetration, the net result of drug deconjugation, and the effect of using unconjugated antibody to drive ADC penetration deeper into the tumor tissue. This modeling approach will provide quantitative and mechanistic support to experimental studies trying to parse the impact of multiple mechanisms of action for these complex drugs. PMID:27287046

  2. Information distribution in distributed microprocessor based flight control systems

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1977-01-01

    This paper presents an optimal control theory that accounts for variable time intervals in the information distribution to control effectors in a distributed microprocessor based flight control system. The theory is developed using a linear process model for the aircraft dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved that provides the control law that minimizes the expected value of a quadratic cost function. An example is presented where the theory is applied to the control of the longitudinal motions of the F8-DFBW aircraft. Theoretical and simulation results indicate that, for the example problem, the optimal cost obtained using a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained using a known uniform information update interval.

  3. Statistical self-similarity of hotspot seamount volumes modeled as self-similar criticality

    USGS Publications Warehouse

    Tebbens, S.F.; Burroughs, S.M.; Barton, C.C.; Naar, D.F.

    2001-01-01

    The processes responsible for hotspot seamount formation are complex, yet the cumulative frequency-volume distribution of hotspot seamounts in the Easter Island/Salas y Gomez Chain (ESC) is found to be well-described by an upper-truncated power law. We develop a model for hotspot seamount formation where uniform energy input produces events initiated on a self-similar distribution of critical cells. We call this model Self-Similar Criticality (SSC). By allowing the spatial distribution of magma migration to be self-similar, the SSC model recreates the observed ESC seamount volume distribution. The SSC model may have broad applicability to other natural systems.
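
    A short numerical illustration of an upper-truncated power law for cumulative frequency-volume data; the functional form N(>=v) = C(v^-a - v_T^-a) is one common truncated parameterization assumed here for illustration, and the constants are not values from the paper:

    ```python
    import numpy as np

    C, a, vT = 100.0, 0.8, 500.0            # amplitude, exponent, truncation volume
    v = np.logspace(0.0, np.log10(vT), 50)  # volumes up to the truncation value
    N = C * (v ** -a - vT ** -a)            # cumulative number of events >= v

    # The curve is a power law for v << vT and rolls off to zero as v -> vT.
    print(f"N(>=1) = {N[0]:.1f}, N(>={vT:.0f}) = {N[-1]:.1f}")
    ```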

  4. A simple model for factory distribution: Historical effect in an industry city

    NASA Astrophysics Data System (ADS)

    Uehara, Takashi; Sato, Kazunori; Morita, Satoru; Maeda, Yasunobu; Yoshimura, Jin; Tainaka, Kei-ichi

    2016-02-01

    The construction and discontinuance processes of factories are complicated problems in sociology. We focus on the spatial and temporal changes of factories in Hamamatsu city, Japan. Real data indicate that the clumping degree of factories decreases as factory density increases. To represent the spatial and temporal changes of factories, we apply the "contact process", a type of cellular automaton. This model roughly explains the dynamics of factory distribution. We also find a "historical effect" in the spatial distribution: recent factories have been dispersed owing to the distribution established during the economic-bubble period. This effect may be related to the heavy shock in the Japanese stock market.

  5. Differential memory in the earth's magnetotail

    NASA Technical Reports Server (NTRS)

    Burkhart, G. R.; Chen, J.

    1991-01-01

    The process of 'differential memory' in the earth's magnetotail is studied in the framework of the modified Harris magnetotail geometry. It is verified that differential memory can generate non-Maxwellian features in the modified Harris field model. The time scales and the potentially observable distribution functions associated with the process of differential memory are investigated, and it is shown that non-Maxwellian distributions can evolve as a test particle response to distribution function boundary conditions in a Harris field magnetotail model. The non-Maxwellian features which arise from distribution function mapping have definite time scales associated with them, which are generally shorter than the earthward convection time scale but longer than the typical Alfven crossing time.

  6. Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing

    DTIC Science & Technology

    2012-12-14

    Discretized Streams: A Fault-Tolerant Model for Scalable Stream Processing. Matei Zaharia; Tathagata Das; Haoyuan Li; Timothy Hunter; Scott Shenker; Ion... However, current programming models for distributed stream processing are relatively low-level, often leaving the user to worry about consistency of...

  7. Modeling Geodetic Processes with Levy α-Stable Distribution and FARIMA

    NASA Astrophysics Data System (ADS)

    Montillet, Jean-Philippe; Yu, Kegen

    2015-04-01

    Over recent years the scientific community has been using the auto regressive moving average (ARMA) model in the modeling of the noise in global positioning system (GPS) time series (daily solution). This work starts by investigating the limits of the ARMA model, which is widely used in signal processing when the measurement noise is white. Since a typical GPS time series consists of geophysical signals (e.g., seasonal signal) and stochastic processes (e.g., coloured and white noise), the ARMA model may be inappropriate. Therefore, the application of the fractional auto-regressive integrated moving average (FARIMA) model is investigated. The simulation results using simulated time series as well as real GPS time series from a few selected stations around Australia show that the FARIMA model fits the time series better than other models when the coloured noise is larger than the white noise. The second part of this work focuses on fitting the GPS time series with the family of Levy α-stable distributions. Using this distribution, a hypothesis test is developed to effectively eliminate coarse outliers from GPS time series, achieving better performance than the rule of thumb of n standard deviations (with n chosen empirically).
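
    A sketch of the outlier-screening idea: model residuals with a heavy-tailed Levy α-stable distribution and flag points outside extreme quantiles instead of applying a fixed n-standard-deviation rule. The stability parameters below are illustrative, not fitted GPS values:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Synthetic heavy-tailed residuals standing in for detrended GPS noise.
    residuals = stats.levy_stable.rvs(1.8, 0.0, scale=1.0, size=2000,
                                      random_state=rng)

    # Flag points beyond the 0.1% and 99.9% quantiles of the assumed model.
    lo, hi = stats.levy_stable.ppf([0.001, 0.999], 1.8, 0.0, scale=1.0)
    outliers = (residuals < lo) | (residuals > hi)
    print(f"flagged {outliers.sum()} of {residuals.size} points")
    ```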

  8. Probabilistic modeling of the fate of Listeria monocytogenes in diced bacon during the manufacturing process.

    PubMed

    Billoir, Elise; Denis, Jean-Baptiste; Cammeau, Natalie; Cornu, Marie; Zuliani, Veronique

    2011-02-01

    To assess the impact of the manufacturing process on the fate of Listeria monocytogenes, we built a generic probabilistic model intended to simulate the successive steps in the process. Contamination evolution was modeled in the appropriate units (breasts, dice, and then packaging units through the successive steps in the process). To calibrate the model, parameter values were estimated from industrial data, from the literature, and based on expert opinion. By means of simulations, the model was explored using a baseline calibration and alternative scenarios, in order to assess the impact of changes in the process and of accidental events. The results are reported as contamination distributions and as the probability that the product will be acceptable with regards to the European regulatory safety criterion. Our results are consistent with data provided by industrial partners and highlight that tumbling is a key step for the distribution of the contamination at the end of the process. Process chain models could provide an important added value for risk assessment models that basically consider only the outputs of the process in their risk mitigation strategies. Moreover, a model calibrated to correspond to a specific plant could be used to optimize surveillance. © 2010 Society for Risk Analysis.

  9. SGR-like behaviour of the repeating FRB 121102

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, F.Y.; Yu, H., E-mail: fayinwang@nju.edu.cn, E-mail: yuhai@smail.nju.edu.cn

    2017-03-01

    Fast radio bursts (FRBs) are millisecond-duration radio signals occurring at cosmological distances. However, the physical mechanism of FRBs remains a mystery, and many models have been proposed. Here we study the frequency distributions of peak flux, fluence, duration and waiting time for the repeating FRB 121102. The cumulative distributions of peak flux, fluence and duration show power-law forms. The waiting time distribution also shows a power-law form and is consistent with a non-stationary Poisson process. These distributions are similar to those of soft gamma repeaters (SGRs). We also use the statistical results to test the proposed models for FRBs. These distributions are consistent with the predictions from avalanche models of slowly driven nonlinear dissipative systems.

  10. Speech Perception as a Cognitive Process: The Interactive Activation Model.

    ERIC Educational Resources Information Center

    Elman, Jeffrey L.; McClelland, James L.

    Research efforts to model speech perception in terms of a processing system in which knowledge and processing are distributed over large numbers of highly interactive--but computationally primitive--elements are described in this report. After discussing the properties of speech that demand a parallel interactive processing system, the report…

  11. Asymptotic Behavior of the Stock Price Distribution Density and Implied Volatility in Stochastic Volatility Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gulisashvili, Archil, E-mail: guli@math.ohiou.ed; Stein, Elias M., E-mail: stein@math.princeton.ed

    2010-06-15

    We study the asymptotic behavior of distribution densities arising in stock price models with stochastic volatility. The main objects of our interest in the present paper are the density of time averages of the squared volatility process and the density of the stock price process in the Stein-Stein and the Heston model. We find explicit formulas for leading terms in asymptotic expansions of these densities and give error estimates. As an application of our results, sharp asymptotic formulas for the implied volatility in the Stein-Stein and the Heston model are obtained.

  12. Multiple Meanings Are Not Necessarily a Disadvantage in Semantic Processing: Evidence from Homophone Effects in Semantic Categorisation

    ERIC Educational Resources Information Center

    Siakaluk, Paul D.; Pexman, Penny M.; Sears, Christopher R.; Owen, William J.

    2007-01-01

    The ambiguity disadvantage (slower processing of ambiguous words relative to unambiguous words) has been taken as evidence for a distributed semantic representational system like that embodied in parallel distributed processing (PDP) models. In the present study, we investigated whether semantic ambiguity slows meaning activation, as PDP models…

  13. Universal scaling for second-class particles in a one-dimensional misanthrope process

    NASA Astrophysics Data System (ADS)

    Rákos, Attila

    2010-06-01

    We consider the one-dimensional Katz-Lebowitz-Spohn (KLS) model, which is a generalization of the totally asymmetric simple exclusion process (TASEP) with nearest neighbour interaction. Using a powerful mapping, the KLS model can be translated into a misanthrope process. In this model, for the repulsive case, it is possible to introduce second-class particles, the number of which is conserved. We study the distance distribution of second-class particles in this model numerically and find that for large distances it decreases as x^(-3/2). This agrees with a previous analytical result of Derrida et al (1993) for the TASEP, where the same asymptotic behaviour was found. We also study the dynamical scaling function of the distance distribution and find that it is universal within this family of models.

  14. Stochastic modeling for neural spiking events based on fractional superstatistical Poisson process

    NASA Astrophysics Data System (ADS)

    Konno, Hidetoshi; Tamura, Yoshiyasu

    2018-01-01

    In neural spike counting experiments, it is known that there are two main features: (i) the counting number has a fractional power-law growth with time and (ii) the waiting time (i.e., the inter-spike-interval) distribution has a heavy tail. The method of superstatistical Poisson processes (SSPPs) is examined to determine whether these main features are properly modeled. Although various mixed/compound Poisson processes are generated by selecting a suitable distribution of the birth-rate of spiking neurons, only the second feature (ii) can be modeled by the method of SSPPs. Namely, the first feature (i), associated with the effect of long memory, cannot be modeled properly. It is then shown that the two main features can be modeled successfully by a class of fractional SSPP (FSSPP).
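
    A minimal sketch of the superstatistical idea: the firing rate is itself a random variable (gamma-distributed here purely for illustration), and inter-spike intervals are exponential given that rate. The mixing broadens the waiting-time distribution relative to a plain Poisson process, which is feature (ii); the long-memory growth of feature (i) is exactly what this simple construction misses:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_trials = 100_000

    rates = rng.gamma(shape=0.8, scale=5.0, size=n_trials)  # fluctuating rate
    waits = rng.exponential(1.0 / rates)                    # ISI given each rate

    print("mixed-process ISI 99th percentile:",
          round(np.quantile(waits, 0.99), 2))
    print("plain Poisson (same mean rate)   :",
          round(np.quantile(rng.exponential(1.0 / rates.mean(), n_trials), 0.99), 2))
    ```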

  15. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  16. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  17. Numerical Uncertainties in the Simulation of Reversible Isentropic Processes and Entropy Conservation.

    NASA Astrophysics Data System (ADS)

    Johnson, Donald R.; Lenzen, Allen J.; Zapotocny, Tom H.; Schaack, Todd K.

    2000-11-01

    A challenge common to weather, climate, and seasonal numerical prediction is the need to simulate accurately reversible isentropic processes in combination with appropriate determination of sources/sinks of energy and entropy. Ultimately, this task includes the distribution and transport of internal, gravitational, and kinetic energies, the energies of water substances in all forms, and the related thermodynamic processes of phase changes involved with clouds, including condensation, evaporation, and precipitation processes. All of the processes noted above involve the entropies of matter, radiation, and chemical substances, conservation during transport, and/or changes in entropies by physical processes internal to the atmosphere. With respect to the entropy of matter, a means to study a model's accuracy in simulating internal hydrologic processes is to determine its capability to simulate the appropriate conservation of potential and equivalent potential temperature as surrogates of dry and moist entropy under reversible adiabatic processes in which clouds form, evaporate, and precipitate. In this study, a statistical strategy utilizing the concept of 'pure error' is set forth to assess the numerical accuracies of models to simulate reversible processes during 10-day integrations of the global circulation corresponding to the global residence time of water vapor. During the integrations, the sums of squared differences between the equivalent potential temperature θe numerically simulated by the governing equations of mass, energy, water vapor, and cloud water and a proxy equivalent potential temperature θte numerically simulated as a conservative property are monitored. Inspection of the differences of θe and θte in time and space and the relative frequency distribution of the differences details bias and random errors that develop from nonlinear numerical inaccuracies in the advection and transport of potential temperature and water substances within the global atmosphere. A series of nine global simulations employing various versions of Community Climate Models CCM2 and CCM3 (all Eulerian spectral numerics, all semi-Lagrangian numerics, and mixed Eulerian spectral and semi-Lagrangian numerics) and the University of Wisconsin-Madison (UW) isentropic-sigma gridpoint model provides an interesting comparison of numerical accuracies in the simulation of reversibility. By day 10, large bias and random differences were identified in the simulation of reversible processes in all of the models except for the UW isentropic-sigma model. The CCM2 and CCM3 simulations yielded systematic differences that varied zonally, vertically, and temporally. Within the comparison, the UW isentropic-sigma model was superior in transporting water vapor and cloud water/ice and in simulating reversibility involving the conservation of dry and moist entropy. The only relative frequency distribution of differences that appeared optimal, in that the distribution remained unbiased and equilibrated with minimal variance as it remained statistically stationary, was the distribution from the UW isentropic-sigma model. All other distributions revealed nonstationary characteristics with spreading and/or shifting of the maxima as the biases and variances of the numerical differences of θe and θte amplified.

  18. Using Dirichlet Processes for Modeling Heterogeneous Treatment Effects across Sites

    ERIC Educational Resources Information Center

    Miratrix, Luke; Feller, Avi; Pillai, Natesh; Pati, Debdeep

    2016-01-01

    Modeling the distribution of site level effects is an important problem, but it is also an incredibly difficult one. Current methods rely on distributional assumptions in multilevel models for estimation. There it is hoped that the partial pooling of site level estimates with overall estimates, designed to take into account individual variation as…

  19. Using a spatially-distributed hydrologic biogeochemistry model to study the spatial variation of carbon processes in a Critical Zone Observatory

    NASA Astrophysics Data System (ADS)

    Shi, Y.; Eissenstat, D. M.; Davis, K. J.; He, Y.

    2016-12-01

    Forest carbon processes are affected by, among other factors, soil moisture, soil temperature, soil nutrients and solar radiation. Most of the current biogeochemical models are 1-D and represent one point in space. Therefore, they cannot resolve the topographically driven hill-slope land surface heterogeneity or the spatial pattern of nutrient availability. A spatially distributed forest ecosystem model, Flux-PIHM-BGC, has been developed by coupling a 1-D mechanistic biogeochemical model Biome-BGC (BBGC) with a spatially distributed land surface hydrologic model, Flux-PIHM. Flux-PIHM is a coupled physically based model, which incorporates a land-surface scheme into the Penn State Integrated Hydrologic Model (PIHM). The land surface scheme is adapted from the Noah land surface model. Flux-PIHM is able to represent the link between groundwater and the surface energy balance, as well as the land surface heterogeneities caused by topography. In the coupled Flux-PIHM-BGC model, each Flux-PIHM model grid couples a 1-D BBGC model, while soil nitrogen is transported among model grids via subsurface water flow. In each grid, Flux-PIHM provides BBGC with soil moisture, soil temperature, and solar radiation information, while BBGC provides Flux-PIHM with leaf area index. The coupled Flux-PIHM-BGC model has been implemented at the Susquehanna/Shale Hills critical zone observatory (SSHCZO). Model results suggest that the vegetation and soil carbon distribution is primarily constrained by nitrogen availability (affected by nitrogen transport via topographically driven subsurface flow), and also constrained by solar radiation and root zone soil moisture. The predicted vegetation and soil carbon distribution generally agrees with the macro pattern observed within the watershed. The coupled ecosystem-hydrologic model provides an important tool to study the impact of topography on watershed carbon processes, as well as the impact of climate change on water resources.

  20. Inferring the parameters of a Markov process from snapshots of the steady state

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Berg, Johannes

    2018-02-01

    We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
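
    A toy sketch of the propagator-likelihood idea for a three-state Markov chain: push the empirical distribution forward one fictitious step with a parameterized transition matrix and maximize the likelihood of the steady-state samples under the propagated distribution. The softmax parameterization and the toy chain are illustrative assumptions, not the paper's construction:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)

    def transition_matrix(theta):
        """Row-stochastic 3x3 matrix from unconstrained parameters via softmax."""
        logits = theta.reshape(3, 3)
        ex = np.exp(logits - logits.max(axis=1, keepdims=True))
        return ex / ex.sum(axis=1, keepdims=True)

    # Ground truth: a random chain and independent samples from its steady state.
    T_true = transition_matrix(rng.normal(size=9))
    evals, evecs = np.linalg.eig(T_true.T)
    pi = np.abs(np.real(evecs[:, np.argmax(np.real(evals))]))
    pi /= pi.sum()
    samples = rng.choice(3, size=5000, p=pi)
    p_emp = np.bincount(samples, minlength=3) / samples.size

    def neg_propagator_loglik(theta):
        propagated = p_emp @ transition_matrix(theta)  # fictitious one-step update
        return -np.sum(np.log(propagated[samples] + 1e-12))

    res = minimize(neg_propagator_loglik, rng.normal(size=9),
                   method="Nelder-Mead", options={"maxiter": 20000})
    # The fitted chain reproduces the empirical steady state; pinning down a
    # unique transition matrix requires a more constrained parameterization.
    print(np.round(p_emp @ transition_matrix(res.x), 3), np.round(p_emp, 3))
    ```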

  1. Gene tree rooting methods give distributions that mimic the coalescent process.

    PubMed

    Tian, Yuan; Kubatko, Laura S

    2014-01-01

    Multi-locus phylogenetic inference is commonly carried out via models that incorporate the coalescent process to model the possibility that incomplete lineage sorting leads to incongruence between gene trees and the species tree. An interesting question that arises in this context is whether data "fit" the coalescent model. Previous work (Rosenfeld et al., 2012) has suggested that rooting of gene trees may account for variation in empirical data that has been previously attributed to the coalescent process. We examine this possibility using simulated data. We show that, in the case of four taxa, the distribution of gene trees observed from rooting estimated gene trees with either the molecular clock or with outgroup rooting can be closely matched by the distribution predicted by the coalescent model with specific choices of species tree branch lengths. We apply commonly-used coalescent-based methods of species tree inference to assess their performance in these situations. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Fully Burdened Cost of Fuel Using Input-Output Analysis

    DTIC Science & Technology

    2011-12-01

    …wide extension of the Bulk Fuels Distribution Model could be used to replace the current seven-step Fully Burdened Cost of Fuel process with a single step, allowing for less complex and…

  3. Wave Processes in Arctic Seas, Observed from TerraSAR-X

    DTIC Science & Technology

    2015-09-30

    …in order to improve wave models as well as ice models applicable to a changing Arctic wave and ice climate. This includes observation and… fields retrieved from the TS-X image swaths. A related contribution: "Wave Climate and Wave Mixing in the Marginal Ice Zones of Arctic Seas, Observations and Modelling", by…

  4. Formal Process Modeling to Improve Human Decision-Making in Test and Evaluation Acoustic Range Control

    DTIC Science & Technology

    2017-09-01

    Test and…ambiguities and identify high-value decision points? This thesis explores how formalization of these experience-based decisions as a process model…representing a T&E event may reveal high-value decision nodes where certain decisions carry more weight or potential for impacts to a successful test. The…

  5. A hierarchical model for estimating the spatial distribution and abundance of animals detected by continuous-time recorders

    USGS Publications Warehouse

    Dorazio, Robert; Karanth, K. Ullas

    2017-01-01

    Motivation: Several spatial capture-recapture (SCR) models have been developed to estimate animal abundance by analyzing the detections of individuals in a spatial array of traps. Most of these models do not use the actual dates and times of detection, even though this information is readily available when using continuous-time recorders, such as microphones or motion-activated cameras. Instead most SCR models either partition the period of trap operation into a set of subjectively chosen discrete intervals and ignore multiple detections of the same individual within each interval, or they simply use the frequency of detections during the period of trap operation and ignore the observed times of detection. Both practices make inefficient use of potentially important information in the data. Model and data analysis: We developed a hierarchical SCR model to estimate the spatial distribution and abundance of animals detected with continuous-time recorders. Our model includes two kinds of point processes: a spatial process to specify the distribution of latent activity centers of individuals within the region of sampling and a temporal process to specify temporal patterns in the detections of individuals. We illustrated this SCR model by analyzing spatial and temporal patterns evident in the camera-trap detections of tigers living in and around the Nagarahole Tiger Reserve in India. We also conducted a simulation study to examine the performance of our model when analyzing data sets of greater complexity than the tiger data. Benefits: Our approach provides three important benefits: First, it exploits all of the information in SCR data obtained using continuous-time recorders. Second, it is sufficiently versatile to allow the effects of both space use and behavior of animals to be specified as functions of covariates that vary over space and time. Third, it allows both the spatial distribution and abundance of individuals to be estimated, effectively providing a species distribution model, even in cases where spatial covariates of abundance are unknown or unavailable. We illustrated these benefits in the analysis of our data, which allowed us to quantify differences between nocturnal and diurnal activities of tigers and to estimate their spatial distribution and abundance across the study area. Our continuous-time SCR model allows an analyst to specify many of the ecological processes thought to be involved in the distribution, movement, and behavior of animals detected in a spatial trapping array of continuous-time recorders. We plan to extend this model to estimate the population dynamics of animals detected during multiple years of SCR surveys.

  6. Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity

    PubMed Central

    Englehardt, James D.

    2015-01-01

    Many complex systems produce outcomes having recurring, power law-like distributions over wide ranges. However, the form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation, and entropy maximization subject to finite positive mean, of the incremental compounding rates. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued theoretically and by simulation to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are distributed exponential. In contrast with the Gaussian result in linear independent systems, the form is driven not by independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study. PMID:26061263
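
    A small sketch of the model-comparison step reported above: fit Weibull and lognormal models to a positive-valued severity sample and compare log-likelihoods. The synthetic data are illustrative and do not reproduce the paper's datasets:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    severity = 10.0 * rng.weibull(0.8, size=3000)   # heavy-tailed toy severities

    wb = stats.weibull_min.fit(severity, floc=0.0)  # (shape, loc, scale)
    ln = stats.lognorm.fit(severity, floc=0.0)      # (shape, loc, scale)

    ll_wb = stats.weibull_min.logpdf(severity, *wb).sum()
    ll_ln = stats.lognorm.logpdf(severity, *ln).sum()
    print(f"log-likelihood: Weibull {ll_wb:.1f} vs lognormal {ll_ln:.1f}")
    ```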

  7. A stochastic diffusion process for Lochner's generalized Dirichlet distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-10-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner's generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations, equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed previously for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled with a more general covariance matrix.

  8. Dynamic Modeling of Yield and Particle Size Distribution in Continuous Bayer Precipitation

    NASA Astrophysics Data System (ADS)

    Stephenson, Jerry L.; Kapraun, Chris

    Process engineers at Alcoa's Point Comfort refinery are using a dynamic model of the Bayer precipitation area to evaluate options in operating strategies. The dynamic model, a joint development effort between Point Comfort and the Alcoa Technical Center, predicts process yields, particle size distributions and occluded soda levels for various flowsheet configurations of the precipitation and classification circuit. In addition to rigorous heat, material and particle population balances, the model includes mechanistic kinetic expressions for particle growth and agglomeration and semi-empirical kinetics for nucleation and attrition. The kinetic parameters have been tuned to Point Comfort's operating data, with excellent matches between the model results and plant data. The model is written for the ACSL dynamic simulation program with specifically developed input/output graphical user interfaces to provide a user-friendly tool. Features such as a seed charge controller enhance the model's usefulness for evaluating operating conditions and process control approaches.

  9. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, the task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It was proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.

  10. Exploring empirical rank-frequency distributions longitudinally through a simple stochastic process.

    PubMed

    Finley, Benjamin J; Kilkki, Kalevi

    2014-01-01

    The frequent appearance of empirical rank-frequency laws, such as Zipf's law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process's complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications.
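
    A sketch of a simple binary multiplicative cascade in the spirit described above (the split rule and depth are assumed for illustration, not taken from the paper); ranking the leaf masses produces a concave log-log rank-frequency curve:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def cascade(mass, depth):
        """Binary multiplicative cascade; returns the leaf masses."""
        leaves = np.array([mass])
        for _ in range(depth):
            w = rng.uniform(0.1, 0.9, size=leaves.size)  # random split fractions
            leaves = np.concatenate([leaves * w, leaves * (1.0 - w)])
        return leaves

    freq = np.sort(cascade(1.0, 12))[::-1]   # 4096 leaves, largest first
    ranks = np.arange(1, freq.size + 1)

    # Slope of the log-log rank-frequency curve over a mid range of ranks.
    mid = slice(10, 1000)
    slope = np.polyfit(np.log(ranks[mid]), np.log(freq[mid]), 1)[0]
    print(f"approximate rank-frequency slope: {slope:.2f}")
    ```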

  11. Physics at a 100 TeV pp Collider: Standard Model Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mangano, M. L.; Zanderighi, G.; Aguilar Saavedra, J. A.

    This report summarises the properties of Standard Model processes at the 100 TeV pp collider. We document the production rates and typical distributions for a number of benchmark Standard Model processes, and discuss new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  12. Technical report. The application of probability-generating functions to linear-quadratic radiation survival curves.

    PubMed

    Kendal, W S

    2000-04-01

    To illustrate how probability-generating functions (PGFs) can be employed to derive a simple probabilistic model for clonogenic survival after exposure to ionizing irradiation. Both repairable and irreparable radiation damage to DNA were assumed to occur by independent (Poisson) processes, at intensities proportional to the irradiation dose. Also, repairable damage was assumed to be either repaired or further (lethally) injured according to a third (Bernoulli) process, with the probability of lethal conversion being directly proportional to dose. Using the algebra of PGFs, these three processes were combined to yield a composite PGF that described the distribution of lethal DNA lesions in irradiated cells. The composite PGF characterized a Poisson distribution with mean αD + βD², where D was dose and α and β were radiobiological constants. This distribution yielded the conventional linear-quadratic survival equation. To test the composite model, the derived distribution was used to predict the frequencies of multiple chromosomal aberrations in irradiated human lymphocytes. The predictions agreed well with observation. This probabilistic model was consistent with single-hit mechanisms, but it was not consistent with binary misrepair mechanisms. A stochastic model for radiation survival has been constructed from elementary PGFs that exactly yields the linear-quadratic relationship. This approach can be used to investigate other simple probabilistic survival models.
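
    A worked numerical example of the derived relationship: if lethal lesions are Poisson-distributed with mean αD + βD², clonogenic survival is the probability of zero lesions, S(D) = exp(-(αD + βD²)). The parameter values below are illustrative orders of magnitude, not fitted constants:

    ```python
    import numpy as np

    alpha, beta = 0.3, 0.03           # Gy^-1 and Gy^-2, illustrative values
    dose = np.linspace(0.0, 10.0, 6)  # doses in Gy

    # Survival = P(zero lethal lesions) for a Poisson count with mean aD + bD^2.
    survival = np.exp(-(alpha * dose + beta * dose ** 2))
    for d, s in zip(dose, survival):
        print(f"D = {d:4.1f} Gy -> S = {s:.3f}")
    ```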

  13. Calculating Formulae of Proportion Factor and Mean Neutron Exposure in the Exponential Expression of Neutron Exposure Distribution

    NASA Astrophysics Data System (ADS)

    Feng-Hua, Zhang; Gui-De, Zhou; Kun, Ma; Wen-Juan, Ma; Wen-Yuan, Cui; Bo, Zhang

    2016-07-01

    Previous studies have shown that, for the three main stages of the development and evolution of asymptotic giant branch (AGB) star s-process models, the neutron exposure distribution (DNE) in the nucleosynthesis region can always be considered as an exponential function, i.e., ρAGB(τ) = C/τ0 exp(-τ/τ0) in an effective range of the neutron exposure values. However, the specific expressions of the proportion factor C and the mean neutron exposure τ0 in the exponential distribution function for different models are not completely determined in the related literature. By dissecting the basic method used to obtain the exponential DNE, and systematically analyzing the solution procedures of neutron exposure distribution functions in different stellar models, the general formulae, as well as their auxiliary equations, for calculating C and τ0 are derived. Given the discrete neutron exposure distribution Pk, the relationships of C and τ0 with the model parameters can be determined. The result of this study effectively solves the problem of analytically calculating the DNE in the current low-mass AGB star s-process nucleosynthesis model of 13C-pocket radiative burning.

  14. Evolving Scale-Free Networks by Poisson Process: Modeling and Degree Distribution.

    PubMed

    Feng, Minyu; Qu, Hong; Yi, Zhang; Xie, Xiurui; Kurths, Jurgen

    2016-05-01

    Since the great mathematician Leonhard Euler initiated the study of graph theory, the network has been one of the most significant research subjects across disciplines. In recent years, the proposition of the small-world and scale-free properties of complex networks in statistical physics made network science intriguing again for many researchers. One of the challenges of network science is to propose rational models for complex networks. In this paper, in order to reveal the influence of the vertex generating mechanism of complex networks, we propose three novel models based on the homogeneous Poisson, nonhomogeneous Poisson and birth-death processes, respectively, which can be regarded as typical scale-free networks and utilized to simulate practical networks. The degree distribution and exponent are analyzed and explained mathematically by different approaches. In the simulation, we display the modeling process, the degree distribution of empirical data by statistical methods, and the reliability of the proposed networks; the results show that our models follow the features of typical complex networks. Finally, some future challenges for complex systems are discussed.
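
    A sketch combining Poisson vertex arrivals with preferential attachment; this is an illustrative construction in the same spirit, not the paper's exact models, and the seed network and rate constants are assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    t_end, lam, m = 200.0, 5.0, 2          # time horizon, arrival rate, edges/vertex

    n_vertices = rng.poisson(lam * t_end)  # homogeneous Poisson arrival count
    degrees = np.ones(m + 1)               # small seed network (assumed)
    for _ in range(n_vertices):
        p = degrees / degrees.sum()
        targets = rng.choice(degrees.size, size=m, replace=False, p=p)
        degrees[targets] += 1              # preferential attachment
        degrees = np.append(degrees, m)    # new vertex arrives with m edges

    vals, counts = np.unique(degrees, return_counts=True)
    tail = vals >= 5
    slope = np.polyfit(np.log(vals[tail]), np.log(counts[tail]), 1)[0]
    print(f"approximate degree-distribution exponent: {-slope:.2f}")
    ```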

  15. Dependence of Snowmelt Simulations on Scaling of the Forcing Processes (Invited)

    NASA Astrophysics Data System (ADS)

    Winstral, A. H.; Marks, D. G.; Gurney, R. J.

    2009-12-01

    The spatial organization and scaling relationships of snow distribution in mountain environs is ultimately dependent on the controlling processes. These processes include interactions between weather, topography, vegetation, snow state, and seasonally-dependent radiation inputs. In large scale snow modeling it is vital to know these dependencies to obtain accurate predictions while reducing computational costs. This study examined the scaling characteristics of the forcing processes and the dependency of distributed snowmelt simulations to their scaling. A base model simulation characterized these processes with 10m resolution over a 14.0 km2 basin with an elevation range of 1474 - 2244 masl. Each of the major processes affecting snow accumulation and melt - precipitation, wind speed, solar radiation, thermal radiation, temperature, and vapor pressure - were independently degraded to 1 km resolution. Seasonal and event-specific results were analyzed. Results indicated that scale effects on melt vary by process and weather conditions. The dependence of melt simulations on the scaling of solar radiation fluxes also had a seasonal component. These process-based scaling characteristics should remain static through time as they are based on physical considerations. As such, these results not only provide guidance for current modeling efforts, but are also well suited to predicting how potential climate changes will affect the heterogeneity of mountain snow distributions.

  16. Distributed Modelling of Stormflow Generation: Assessing the Effect of Ground Cover

    NASA Astrophysics Data System (ADS)

    Jarihani, B.; Sidle, R. C.; Roth, C. H.; Bartley, R.; Wilkinson, S. N.

    2017-12-01

    Understanding the effects of grazing management and land cover changes on surface hydrology is important for water resources and land management. A distributed hydrological modelling platform, wflow (developed as part of Deltares's OpenStreams project), is used to assess the effect of land management practices on runoff generation processes. The model was applied to Weany Creek, a small catchment (13.6 km2) of the Burdekin Basin, North Australia, which is being studied to understand sources of sediment and nutrients to the Great Barrier Reef. Satellite and drone-based ground cover data, high resolution topography from LiDAR, soil properties, and distributed rainfall data were used to parameterise the model. Wflow was used to predict total runoff, peak runoff, time of rise, and lag time for several events of varying magnitudes and antecedent moisture conditions. A nested approach was employed to calibrate the model by using recorded flow hydrographs at three scales: (1) a hillslope sub-catchment; (2) a gullied sub-catchment; and (3) the 13.6 km2 catchment outlet. Model performance was evaluated by comparing observed and predicted stormflow hydrograph attributes using the Nash-Sutcliffe efficiency metric. By using a nested approach, spatiotemporal patterns of overland flow occurrence across the catchment can also be evaluated. The results show that a process-based distributed model can be calibrated to simulate spatial and temporal patterns of runoff generation processes, to help identify dominant processes which may be addressed by land management to improve rainfall retention. The model will be used to assess the effects of ground cover changes due to management practices in grazed lands on storm runoff.
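
    For reference, the Nash-Sutcliffe efficiency used to score the simulated hydrographs is NSE = 1 - Σ(obs - sim)² / Σ(obs - mean(obs))², with values near 1 indicating close agreement; the short arrays below are placeholders, not Weany Creek data:

    ```python
    import numpy as np

    def nse(observed, simulated):
        """Nash-Sutcliffe efficiency of a simulated hydrograph."""
        observed = np.asarray(observed, dtype=float)
        simulated = np.asarray(simulated, dtype=float)
        return 1.0 - np.sum((observed - simulated) ** 2) / \
                     np.sum((observed - observed.mean()) ** 2)

    obs = np.array([0.2, 0.8, 2.5, 1.6, 0.9, 0.4])   # example storm hydrograph
    sim = np.array([0.3, 0.7, 2.2, 1.8, 1.0, 0.5])
    print(f"NSE = {nse(obs, sim):.2f}")
    ```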

  17. Comment on Pisarenko et al., "Characterization of the Tail of the Distribution of Earthquake Magnitudes by Combining the GEV and GPD Descriptions of Extreme Value Theory"

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2016-02-01

    In this short note, I comment on the research of Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) regarding the extreme value theory and statistics in the case of earthquake magnitudes. The link between the generalized extreme value distribution (GEVD) as an asymptotic model for the block maxima of a random variable and the generalized Pareto distribution (GPD) as a model for the peaks over threshold (POT) of the same random variable is presented more clearly. Inappropriately, Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) have neglected to note that the approximations by GEVD and GPD work only asymptotically in most cases. This is particularly the case with the truncated exponential distribution (TED), a popular distribution model for earthquake magnitudes. I explain why the classical models and methods of the extreme value theory and statistics do not work well for truncated exponential distributions. Consequently, caution is needed when these classical methods are used for the estimation of the upper bound magnitude and corresponding parameters. Furthermore, I comment on various issues of statistical inference in Pisarenko et al. and propose alternatives. I argue why GPD and GEVD would work for various types of stochastic earthquake processes in time, and not only for the homogeneous (stationary) Poisson process as assumed by Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014). The crucial point for earthquake magnitudes is the poor convergence of their tail distribution to the GPD, and not the earthquake process over time.

  18. On joint subtree distributions under two evolutionary models.

    PubMed

    Wu, Taoyang; Choi, Kwok Pui

    2016-04-01

    In population and evolutionary biology, hypotheses about micro-evolutionary and macro-evolutionary processes are commonly tested by comparing the shape indices of empirical evolutionary trees with those predicted by neutral models. A key ingredient in this approach is the ability to compute and quantify distributions of various tree shape indices under random models of interest. As a step to meet this challenge, in this paper we investigate the joint distribution of cherries and pitchforks (that is, subtrees with two and three leaves) under two widely used null models: the Yule-Harding-Kingman (YHK) model and the proportional to distinguishable arrangements (PDA) model. Based on two novel recursive formulae, we propose a dynamic approach to numerically compute the exact joint distribution (and hence the marginal distributions) for trees of any size. We also obtained insights into the statistical properties of trees generated under these two models, including a constant correlation between the cherry and the pitchfork distributions under the YHK model, and the log-concavity and unimodality of the cherry distributions under both models. In addition, we show that there exists a unique change point for the cherry distributions between these two models. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Distributed source model for the full-wave electromagnetic simulation of nonlinear terahertz generation.

    PubMed

    Fumeaux, Christophe; Lin, Hungyen; Serita, Kazunori; Withayachumnankul, Withawat; Kaufmann, Thomas; Tonouchi, Masayoshi; Abbott, Derek

    2012-07-30

    The process of terahertz generation through optical rectification in a nonlinear crystal is modeled using discretized equivalent current sources. The equivalent terahertz sources are distributed in the active volume and computed based on a separately modeled near-infrared pump beam. This approach can be used to define an appropriate excitation for full-wave electromagnetic numerical simulations of the generated terahertz radiation. This enables predictive modeling of the near-field interactions of the terahertz beam with micro-structured samples, e.g. in a near-field time-resolved microscopy system. The distributed source model is described in detail, and an implementation in a particular full-wave simulation tool is presented. The numerical results are then validated through a series of measurements on square apertures. The general principle can be applied to other nonlinear processes with possible implementation in any full-wave numerical electromagnetic solver.

  20. Spatio-temporal dynamics and laterality effects of face inversion, feature presence and configuration, and face outline

    PubMed Central

    Marinkovic, Ksenija; Courtney, Maureen G.; Witzel, Thomas; Dale, Anders M.; Halgren, Eric

    2014-01-01

    Although a crucial role of the fusiform gyrus (FG) in face processing has been demonstrated with a variety of methods, converging evidence suggests that face processing involves an interactive and overlapping processing cascade in distributed brain areas. Here we examine the spatio-temporal stages and their functional tuning to face inversion, presence and configuration of inner features, and face contour in healthy subjects during passive viewing. Anatomically-constrained magnetoencephalography (aMEG) combines high-density whole-head MEG recordings and distributed source modeling with high-resolution structural MRI. Each person's reconstructed cortical surface served to constrain noise-normalized minimum norm inverse source estimates. The earliest activity was estimated to the occipital cortex at ~100 ms after stimulus onset and was sensitive to an initial coarse level visual analysis. Activity in the right-lateralized ventral temporal area (inclusive of the FG) peaked at ~160 ms and was largest to inverted faces. Images containing facial features in the veridical and rearranged configuration irrespective of the facial outline elicited intermediate level activity. The M160 stage may provide structural representations necessary for downstream distributed areas to process identity and emotional expression. However, inverted faces additionally engaged the left ventral temporal area at ~180 ms and were uniquely subserved by bilateral processing. This observation is consistent with the dual route model and spared processing of inverted faces in prosopagnosia. The subsequent deflection, peaking at ~240 ms in the anterior temporal areas bilaterally, was largest to normal, upright faces. It may reflect initial engagement of the distributed network subserving individuation and familiarity. These results support dynamic models suggesting that processing of unfamiliar faces in the absence of a cognitive task is subserved by a distributed and interactive neural circuit. PMID:25426044

  1. Multi-temperature model derived from state-to-state kinetics for hypersonic entry in Jupiter atmosphere

    NASA Astrophysics Data System (ADS)

    Colonna, G.; D'Ambrosio, D.; D'Ammando, G.; Pietanza, L. D.; Capitelli, M.

    2014-12-01

    A state-to-state model of H2/He plasmas coupling the master equations for internal distributions of heavy species with the transport equation for the free electrons has been used as a basis for implementing a multi-temperature kinetic model. In the multi-temperature model, internal distributions of heavy particles are Boltzmann, the electron energy distribution function is Maxwellian, and the rate coefficients of the elementary processes become functions of local temperatures associated with the relevant equilibrium distributions. The state-to-state and multi-temperature models have been compared in the case of a homogeneous recombining plasma, reproducing the conditions met during supersonic expansion through converging-diverging nozzles.

  2. Inverse Gaussian gamma distribution model for turbulence-induced fading in free-space optical communication.

    PubMed

    Cheng, Mingjian; Guo, Ya; Li, Jiangting; Zheng, Xiaotong; Guo, Lixin

    2018-04-20

    We introduce an alternative to the gamma-gamma (GG) distribution, called the inverse Gaussian gamma (IGG) distribution, which can efficiently describe moderate-to-strong irradiance fluctuations. The proposed stochastic model is based on a modulation process between small- and large-scale irradiance fluctuations, which are modeled by gamma and inverse Gaussian distributions, respectively. The model parameters of the IGG distribution are directly related to atmospheric parameters. The fits of the IGG, log-normal (LN), and GG distributions to experimental probability density functions in moderate-to-strong turbulence are compared, and the results indicate that the newly proposed IGG model provides an excellent fit to the experimental data. When the receiving aperture diameter is comparable to the atmospheric coherence radius, the proposed IGG model can reproduce the shape of the experimental data, whereas the GG and LN models fail to match it. The fundamental channel statistics of a free-space optical communication system are also investigated in an IGG-distributed turbulent atmosphere, and a closed-form expression for the outage probability of the system is derived with Meijer's G-function.
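
    As a rough, self-contained illustration of the modulation construction described in this record, the sketch below draws normalized irradiance samples as the product of an inverse-Gaussian large-scale factor and a gamma small-scale factor; the shape parameters are arbitrary placeholders rather than fitted atmospheric values.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        n = 100_000

        # Large-scale fluctuations: inverse Gaussian, rescaled to unit mean.
        mu = 0.5  # assumed shape parameter, not a fitted atmospheric value
        large = stats.invgauss(mu).rvs(size=n, random_state=rng) / mu

        # Small-scale fluctuations: gamma with unit mean (shape alpha).
        alpha = 4.0  # assumed small-scale parameter
        small = stats.gamma(alpha, scale=1.0 / alpha).rvs(size=n, random_state=rng)

        # Modulation: IGG-distributed normalized irradiance.
        irradiance = large * small
        si = irradiance.var() / irradiance.mean() ** 2  # scintillation index
        print(f"mean = {irradiance.mean():.3f}, scintillation index = {si:.3f}")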

  3. Alpha spectrometric characterization of process-related particle size distributions from active particle sampling at the Los Alamos National Laboratory uranium foundry

    NASA Astrophysics Data System (ADS)

    Plionis, A. A.; Peterson, D. S.; Tandon, L.; LaMont, S. P.

    2010-03-01

    Uranium particles within the respirable size range pose a significant hazard to the health and safety of workers. Significant differences in the deposition and incorporation patterns of aerosols within the respirable range can be identified and integrated into sophisticated health physics models. Data characterizing the uranium particle size distribution resulting from specific foundry-related processes are needed. Using personal air sampling cascade impactors, particles collected from several foundry processes were sorted by activity median aerodynamic diameter onto various Marple substrates. After an initial gravimetric assessment of each impactor stage, the substrates were analyzed by alpha spectrometry to determine the uranium content of each stage. Alpha spectrometry provides rapid non-destructive isotopic data that can distinguish process uranium from natural sources and the degree of uranium contribution to the total accumulated particle load. In addition, the particle size bins utilized by the impactors provide adequate resolution to determine whether a process particle size distribution is lognormal, bimodal, or trimodal. Data on process uranium particle size values and distributions facilitate the development of more sophisticated and accurate models for internal dosimetry, resulting in an improved understanding of foundry worker health and safety.

  4. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    PubMed

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution, which is demonstrated to give a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
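
    A minimal sketch of the mixing construction this abstract describes: Poisson counts whose random rate is drawn from a gamma distribution (yielding the negative binomial) or from an inverse Gaussian distribution; all parameter values below are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n, mean_rate = 100_000, 5.0

        # Gamma-mixed Poisson = negative binomial.
        gamma_rates = rng.gamma(shape=2.0, scale=mean_rate / 2.0, size=n)
        nb_counts = rng.poisson(gamma_rates)

        # Inverse-Gaussian-mixed Poisson (the alternative mixture considered).
        ig_rates = rng.wald(mean=mean_rate, scale=10.0, size=n)
        ig_counts = rng.poisson(ig_rates)

        for name, c in [("plain Poisson", rng.poisson(mean_rate, n)),
                        ("gamma-Poisson (NB)", nb_counts),
                        ("IG-Poisson", ig_counts)]:
            # Overdispersion shows up as variance exceeding the mean.
            print(f"{name}: mean={c.mean():.2f}, var={c.var():.2f}")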

  5. Modeling the UO2 ex-AUC pellet process and predicting the fuel rod temperature distribution under steady-state operating condition

    NASA Astrophysics Data System (ADS)

    Hung, Nguyen Trong; Thuan, Le Ba; Thanh, Tran Chi; Nhuan, Hoang; Khoai, Do Van; Tung, Nguyen Van; Lee, Jin-Young; Jyothi, Rajesh Kumar

    2018-06-01

    This paper reports the modeling of the uranium dioxide pellet fabrication process from ammonium uranyl carbonate-derived uranium dioxide powder (UO2 ex-AUC powder) and the prediction of the fuel rod temperature distribution. Response surface methodology (RSM) was used to model the process, and the FRAPCON-4.0 code was used to predict the fuel rod temperature under steady-state operating conditions. The fuel rod design of the AP-1000 reactor (Westinghouse Electric Corporation), with pellet fabrication parameters taken from this study, provided the input data for the code. The predictions suggest a relationship between the fabrication parameters of the UO2 pellets and their temperature profile in the nuclear reactor.

  6. Cometary atmospheres: Modeling the spatial distribution of observed neutral radicals

    NASA Technical Reports Server (NTRS)

    Combi, M. R.

    1985-01-01

    Progress on modeling the spatial distributions of cometary radicals is described. The Monte Carlo particle-trajectory model was generalized to include the full time dependencies of the initial comet expansion velocities, nucleus vaporization rates, photochemical lifetimes, and photon emission rates, which enter the problem through the comet's changing heliocentric distance and velocity. The effect of multiple collisions in the transition zone from collisional coupling to true free flow was also included. Currently available observations of the spatial distributions of the neutral radicals, as well as the latest available photochemical data, were re-evaluated. Preliminary exploratory model results testing the effects of various processes on observable spatial distributions are also discussed.

  7. Formulation of the Multi-Hit Model With a Non-Poisson Distribution of Hits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassiliev, Oleg N., E-mail: Oleg.Vassiliev@albertahealthservices.ca

    2012-07-15

    Purpose: We proposed a formulation of the multi-hit single-target model in which the Poisson distribution of hits is replaced by a combination of two distributions: one for the number of particles entering the target and one for the number of hits a particle entering the target produces. Such an approach reflects the fact that radiation damage results from two different random processes: particle emission by a radiation source and interaction of particles with matter inside the target. Methods and Materials: The Poisson distribution is well justified for the first of the two processes. The second distribution depends on how a hit is defined. To test our approach, we assumed that the second distribution was also a Poisson distribution. The two distributions combined result in a non-Poisson distribution. We tested the proposed model by comparing it with previously reported data for DNA single- and double-strand breaks induced by protons and electrons, for survival of a range of cell lines, and for the variation of the initial slopes of survival curves with radiation quality for heavy-ion beams. Results: Analysis of cell survival equations for this new model showed that they have realistic properties overall, such as the initial and high-dose slopes of survival curves, the shoulder, and relative biological effectiveness (RBE). In most cases tested, a better fit of survival curves was achieved with the new model than with the linear-quadratic model. The results also suggest that the proposed approach may extend the multi-hit model beyond its traditional role in the analysis of survival curves to predicting effects of radiation quality and analyzing DNA strand breaks. Conclusions: Our model, although conceptually simple, performed well in all tests. The model was able to consistently fit data for both cell survival and DNA single- and double-strand breaks. It correctly predicted the dependence of radiation effects on parameters of radiation quality.
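
    To make the two-stage construction concrete, here is a small Monte Carlo sketch (with arbitrary rates): the number of particles entering a target is Poisson, each entering particle produces a Poisson number of hits, and the resulting total-hit distribution is overdispersed relative to a plain Poisson with the same mean.

        import numpy as np

        rng = np.random.default_rng(1)
        n_targets = 200_000
        lam_particles = 3.0  # mean particles per target (assumed)
        mu_hits = 1.5        # mean hits per entering particle (assumed)

        particles = rng.poisson(lam_particles, n_targets)
        # Given the particle count, the sum of iid Poisson(mu_hits) hit counts
        # is itself Poisson with rate mu_hits * particles.
        total_hits = rng.poisson(mu_hits * particles)

        plain = rng.poisson(lam_particles * mu_hits, n_targets)
        print(f"compound: mean={total_hits.mean():.2f}, var={total_hits.var():.2f}")
        print(f"plain   : mean={plain.mean():.2f}, var={plain.var():.2f}")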

  8. Modeling and measurements of urban aerosol processes on the neighborhood scale in Rotterdam, Oslo and Helsinki

    NASA Astrophysics Data System (ADS)

    Karl, Matthias; Kukkonen, Jaakko; Keuken, Menno P.; Lützenkirchen, Susanne; Pirjola, Liisa; Hussein, Tareq

    2016-04-01

    This study evaluates the influence of aerosol processes on particle number (PN) concentrations in three major European cities on the temporal scale of 1 h, i.e., on the neighborhood and city scales. We used selected measured particle size distributions from previous campaigns in the cities of Helsinki, Oslo and Rotterdam. The aerosol transformation processes were evaluated using the aerosol dynamics model MAFOR, combined with a simplified treatment of roadside and urban atmospheric dispersion. We compared the model predictions of particle number size distributions with the measured data and conducted sensitivity analyses regarding the influence of various model input variables. We also present a simplified parameterization for aerosol processes, based on the more complex aerosol process computations; this simple model can easily be implemented in both Gaussian and Eulerian urban dispersion models. The aerosol processes considered in this study were (i) the coagulation of particles, (ii) the condensation and evaporation of two organic vapors, and (iii) dry deposition. The chemical transformation of gas-phase compounds was not taken into account. By choosing concentrations and particle size distributions at the roadside as the starting point of the computations, nucleation of gas-phase vapors from the exhaust was regarded as a post-tail-pipe emission, avoiding the need to include nucleation in the process analysis. Dry deposition and coagulation of particles were identified as the most important aerosol dynamic processes controlling the evolution and removal of particles. The error in the contribution of dry deposition to PN losses due to the uncertainty of measured deposition velocities ranges from -76% to +64%. The removal of nanoparticles by coagulation was enhanced considerably when the fractal nature of soot aggregates and the combined effect of van der Waals and viscous interactions were taken into account. The effect of the condensation and evaporation of organic vapors emitted by vehicles on particle numbers and size distributions was examined. Under inefficient dispersion conditions, the model predicts that condensational growth contributes to the evolution of PN from the roadside to the neighborhood scale. The simplified parameterization of aerosol processes predicts the change in particle number concentrations between roadside and urban background within 10% of that predicted by the fully size-resolved MAFOR model.

  9. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    PubMed

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies, and skewness and missingness are commonly observed in such data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with an asymmetric distribution for the model errors. To deal with missingness, we employ an informative missing-data model. We develop joint models that couple a partially linear mixed-effects model for the longitudinal process, a cause-specific proportional hazards model for the competing risks process, and the missing-data process. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study and report some interesting findings. We also conduct simulation studies to validate the proposed method.

  10. Bayesian soft X-ray tomography using non-stationary Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.

    2013-08-01

    In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of the soft X-ray emissivity distribution, along with its associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer spatially resolved soft X-ray profiles, especially in the plasma center, from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability for uncertainty analysis; in consequence, scientists concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumptions can be optimized through a Bayesian Occam's razor formalism, thereby automatically adjusting the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.

  11. Bayesian soft X-ray tomography using non-stationary Gaussian Processes.

    PubMed

    Li, Dong; Svensson, J; Thomsen, H; Medina, F; Werner, A; Wolf, R

    2013-08-01

    In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of the soft X-ray emissivity distribution, along with its associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer spatially resolved soft X-ray profiles, especially in the plasma center, from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is expressed in probabilistic form, which enhances the capability for uncertainty analysis; in consequence, scientists concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumptions can be optimized through a Bayesian Occam's razor formalism, thereby automatically adjusting the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
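
    The record does not give the exact covariance structure used; as a generic illustration of a non-stationary GP, the sketch below draws a sample from a Gibbs kernel, whose input-dependent length scale lets smoothness vary across the domain. The length-scale profile and all constants here are assumptions.

        import numpy as np

        def gibbs_kernel(x1, x2, lengthscale):
            """Gibbs non-stationary kernel with input-dependent length scale."""
            l1, l2 = lengthscale(x1)[:, None], lengthscale(x2)[None, :]
            pre = np.sqrt(2.0 * l1 * l2 / (l1**2 + l2**2))
            return pre * np.exp(-((x1[:, None] - x2[None, :]) ** 2) / (l1**2 + l2**2))

        # Assumed length-scale profile: short near the "plasma center" x = 0,
        # so the sampled function varies fastest there.
        lengthscale = lambda x: 0.05 + 0.3 * np.abs(x)

        x = np.linspace(-1.0, 1.0, 200)
        K = gibbs_kernel(x, x, lengthscale) + 1e-8 * np.eye(x.size)  # jitter
        rng = np.random.default_rng(2)
        sample = rng.multivariate_normal(np.zeros(x.size), K)  # one GP draw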

  12. Processing-microstructure relationships in thermotropic liquid crystalline polymers: Experimental and numerical modeling studies

    NASA Astrophysics Data System (ADS)

    Fang, Jun

    Thermotropic liquid crystalline polymers (TLCPs) are a class of promising engineering materials for demanding structural applications. Their excellent mechanical properties are highly correlated with the underlying molecular orientation states, which may be affected by complex flow fields during melt processing. Thus, understanding and eventually predicting how processing flows impact molecular orientation is a critical step towards rational design work aimed at achieving favorable, balanced physical properties in finished products. This thesis aims to develop a deeper understanding of orientation development in commercial TLCPs during processing by coordinating extensive experimental measurements with numerical computations. In situ measurements of orientation development of LCPs during processing are a focal point of this thesis. An x-ray-capable injection molding apparatus is enhanced and utilized for time-resolved measurements of orientation development in multiple commercial TLCPs during injection molding. Ex situ wide-angle x-ray scattering is also employed for more thorough characterization of molecular orientation distributions in molded plaques. Incompletely injection-molded plaques ("short shots") are studied to gain further insight into the intermediate orientation states during mold filling. Finally, two surface orientation characterization techniques, near-edge x-ray absorption fine structure (NEXAFS) and infrared attenuated total reflectance (FTIR-ATR), are combined to investigate the surface orientation distribution of injection-molded plaques. Surface orientation states are found to be vastly different from their bulk counterparts due to the different kinematics involved in mold filling. In general, complex distributions of orientation in molded plaques reflect the spatially varying competition between shear and extension during mold filling. To complement these experimental measurements, numerical calculations based on the Larson-Doi polydomain model are performed. The implementation of the Larson-Doi model in complex processing flows uses a commercial process modeling software suite (MOLDFLOWRTM), exploiting a nearly exact analogy between the Larson-Doi model and a fiber orientation model that has been widely used in composites processing simulations. The modeling scheme is first verified by predicting many qualitative and quantitative features of molecular orientation distributions in isothermal extrusion-fed channel flows. In coordination with experiments, the model predictions are found to capture many qualitative features observed in injection-molded plaques (including short shots). The final, stringent test of Larson-Doi model performance is the prediction of in situ transient orientation data collected during mold filling. The model yields satisfactory results, though certain numerical approximations limit performance near the mold front.

  13. Parameter-induced uncertainty quantification of crop yields, soil N2O and CO2 emission for 8 arable sites across Europe using the LandscapeDNDC model

    NASA Astrophysics Data System (ADS)

    Santabarbara, Ignacio; Haas, Edwin; Kraus, David; Herrera, Saul; Klatt, Steffen; Kiese, Ralf

    2014-05-01

    When using biogeochemical models to estimate greenhouse gas emissions at site to regional/national levels, the assessment and quantification of the uncertainties of simulation results are of significant importance. The uncertainties in the simulation results of process-based ecosystem models may result from uncertainties in the process parameters that describe the model's processes, from model structure inadequacy, and from uncertainties in the observations. Data for developing and testing the uncertainty analysis were crop yield observations and measurements of soil fluxes of nitrous oxide (N2O) and carbon dioxide (CO2) from 8 arable sites across Europe. Using the process-based biogeochemical model LandscapeDNDC to simulate crop yields and N2O and CO2 emissions, our aim is to assess the simulation uncertainty by setting up a Bayesian framework based on the Metropolis-Hastings algorithm. Using the Gelman convergence criterion and parallel computing techniques, multiple Markov chains run independently in parallel, creating a random walk that estimates the joint model parameter distribution. From this distribution we limit the parameter space, obtain probabilities of parameter values, and uncover the complex dependencies among them. With this parameter distribution, which determines soil-atmosphere C and N exchange, we are able to obtain the parameter-induced uncertainty of the simulation results and compare it with the measurement data.
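
    For readers unfamiliar with the sampler named above, here is a minimal random-walk Metropolis-Hastings loop for a generic log-posterior; the toy target and step size are placeholders, not the LandscapeDNDC configuration.

        import numpy as np

        def metropolis_hastings(log_post, theta0, n_steps=10_000, step=0.2, seed=3):
            """Random-walk Metropolis-Hastings over a parameter vector."""
            rng = np.random.default_rng(seed)
            theta = np.asarray(theta0, dtype=float)
            lp = log_post(theta)
            chain = np.empty((n_steps, theta.size))
            for i in range(n_steps):
                prop = theta + step * rng.standard_normal(theta.size)
                lp_prop = log_post(prop)
                if np.log(rng.random()) < lp_prop - lp:  # accept/reject
                    theta, lp = prop, lp_prop
                chain[i] = theta
            return chain

        # Toy target: standard bivariate normal log-posterior (placeholder).
        chain = metropolis_hastings(lambda t: -0.5 * np.dot(t, t), [0.0, 0.0])
        print(chain.mean(axis=0), chain.std(axis=0))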

  14. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.

    PubMed

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether observed synchronized events are significantly different from those expected by chance. To analyze simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as Poisson processes, implying that the generation of each spike is independent of all other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods, which govern the spike probability of the oncoming action potential based on the time of the last spike, or bursting behavior, which is characterized by short epochs of rapid action potentials followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates the spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that, compared to distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of the significance of joint spike events seem to be inadequate.

  15. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events

    PubMed Central

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether observed synchronized events are significantly different from those expected by chance. To analyze simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as Poisson processes, implying that the generation of each spike is independent of all other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods, which govern the spike probability of the oncoming action potential based on the time of the last spike, or bursting behavior, which is characterized by short epochs of rapid action potentials followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates the spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that, compared to distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of the significance of joint spike events seem to be inadequate. PMID:28066225
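
    A toy version of the Monte Carlo procedure described above (bin width, rates, and trial counts are assumed): it compares the spread of coincidence counts between pairs of independent Poisson spike trains and pairs of gamma-renewal trains with the same firing rate.

        import numpy as np

        rng = np.random.default_rng(4)
        rate, duration, bin_w, n_trials = 20.0, 10.0, 0.005, 2000

        def train(isi_sampler):
            # Generate more ISIs than needed, then truncate to the duration.
            t = np.cumsum(isi_sampler(int(rate * duration * 2)))
            return t[t < duration]

        def coincidences(t1, t2):
            n_bins = int(duration / bin_w)
            b1 = np.histogram(t1, bins=n_bins, range=(0, duration))[0]
            b2 = np.histogram(t2, bins=n_bins, range=(0, duration))[0]
            return np.sum(np.minimum(b1, b2))  # per-bin joint spike events

        samplers = {
            "Poisson": lambda n: rng.exponential(1 / rate, n),
            "gamma (shape 4)": lambda n: rng.gamma(4.0, 1 / (4.0 * rate), n),
        }
        for name, s in samplers.items():
            counts = [coincidences(train(s), train(s)) for _ in range(n_trials)]
            print(f"{name}: mean={np.mean(counts):.1f}, std={np.std(counts):.2f}")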

  16. Exploring Empirical Rank-Frequency Distributions Longitudinally through a Simple Stochastic Process

    PubMed Central

    Finley, Benjamin J.; Kilkki, Kalevi

    2014-01-01

    The frequent appearance of empirical rank-frequency laws, such as Zipf’s law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process’s complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often-observed concave rank-frequency distribution (on a log-log scale), and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications. PMID:24755621
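
    A minimal multiplicative-cascade sketch in the spirit of this record (branching factor and depth are arbitrary): a mass is repeatedly split by random proportions, and the ranked leaf masses display the concave log-log rank-frequency shape mentioned above.

        import numpy as np

        rng = np.random.default_rng(5)

        def cascade(mass=1.0, depth=12):
            """Split mass by a random proportion at each level; return leaf masses."""
            masses = np.array([mass])
            for _ in range(depth):
                p = rng.uniform(0.1, 0.9, size=masses.size)
                masses = np.concatenate([masses * p, masses * (1 - p)])
            return masses

        leaves = np.sort(cascade())[::-1]
        ranks = np.arange(1, leaves.size + 1)
        log_r, log_f = np.log(ranks), np.log(leaves)
        # Concavity on the log-log scale: the slope steepens with rank.
        print("slope near head:", (log_f[9] - log_f[0]) / (log_r[9] - log_r[0]))
        print("slope near tail:", (log_f[-1] - log_f[-10]) / (log_r[-1] - log_r[-10]))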

  17. A stress-induced phase transition model for semi-crystalline shape memory polymer

    NASA Astrophysics Data System (ADS)

    Guo, Xiaogang; Zhou, Bo; Liu, Liwu; Liu, Yanju; Leng, Jinsong

    2014-03-01

    The development of constitutive models for shape memory polymers (SMPs) has been motivated by their increasing applications. During cooling or heating, a phase transition, a continuous, time-dependent process, takes place in semi-crystalline SMPs: individual phases form at different temperatures and in different configurations, transformations between these phases occur, and the shape memory effect emerges. In addition, the stress applied to the SMP is an important factor for crystal melting during the phase transition. In this theory, an ideal phase transition model accounting for stress or pre-strain is the key to describing shape memory behavior. A normally distributed model was therefore established in this research to characterize the volume fraction of each phase in the SMP during the phase transition. In general, experimental results lag (in heating) or lead (in cooling) the ideal situation because of a delay effect during the phase transition, so a correction to the normally distributed model is needed. Furthermore, a nonlinear relationship between stress and the phase transition temperature Tg is also taken into account to establish an accurate normally distributed phase transition model. Finally, a constitutive model taking stress as an influence factor on the phase transition was also established. Compared with other formulations, this new model has fewer parameters and is more accurate. To verify the rationality and accuracy of the new phase transition and constitutive models, simulated and experimental results were compared.
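
    One plausible reading of the "normally distributed model" is that the volume fraction of the frozen (crystalline) phase follows a normal CDF in temperature, centred on a stress-shifted transition temperature; the sketch below encodes that reading with invented constants and an assumed stress dependence.

        import numpy as np
        from scipy.stats import norm

        def frozen_fraction(T, sigma_stress, Tg0=60.0, width=8.0, k=0.05):
            """Volume fraction of the frozen (crystalline) phase vs temperature.

            Tg0, width and k are illustrative constants; the transition
            temperature is shifted nonlinearly by the applied stress
            (assumed functional form, not the paper's).
            """
            Tg = Tg0 + k * np.sqrt(sigma_stress)  # assumed stress dependence
            return norm.cdf(Tg - T, scale=width)  # 1 at low T, 0 at high T

        T = np.linspace(20.0, 100.0, 9)
        print(frozen_fraction(T, sigma_stress=100.0).round(3))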

  18. Nonlinear GARCH model and 1 / f noise

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Ruseckas, J.

    2015-06-01

    Auto-regressive conditionally heteroskedastic (ARCH) family models are still used by practitioners in business and economic policy making as conditional volatility forecasting models, and they continue to attract research interest. In this contribution we consider the well-known GARCH(1,1) process and its nonlinear modifications, reminiscent of the NGARCH model. We investigate the possibility of reproducing power-law statistics, i.e., the probability density function and the power spectral density, using ARCH family models. For this purpose we derive stochastic differential equations from the GARCH processes under consideration. We find the obtained equations to be similar to a general class of stochastic differential equations known to reproduce power-law statistics. We show that the linear GARCH(1,1) process has a power-law distribution, but its power spectral density is Brownian-noise-like. The nonlinear modifications, however, exhibit both a power-law distribution and a power spectral density of the 1/f^β form, including 1/f noise.
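
    For reference, a minimal GARCH(1,1) simulator (coefficients chosen arbitrarily inside the stationarity region alpha + beta < 1); the nonlinear modifications discussed in the paper would alter the variance recursion and are not reproduced here.

        import numpy as np

        def garch11(n, omega=1e-5, alpha=0.08, beta=0.90, seed=6):
            """Simulate returns r_t = sigma_t * z_t with
            sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
            rng = np.random.default_rng(seed)
            r = np.empty(n)
            var = omega / (1.0 - alpha - beta)  # unconditional variance
            for t in range(n):
                r[t] = np.sqrt(var) * rng.standard_normal()
                var = omega + alpha * r[t] ** 2 + beta * var
            return r

        r = garch11(100_000)
        # Excess kurtosis > 0 signals fat tails relative to a Gaussian.
        print("excess kurtosis:", ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3.0)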

  19. A three-dimensional point process model for the spatial distribution of disease occurrence in relation to an exposure source.

    PubMed

    Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten; Schüz, Joachim; Cardis, Elisabeth; Andersen, Per K

    2015-10-15

    We study how to include the spatial distribution of tumours when investigating the relation between brain tumours and exposure to radio-frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating the spatial aggregation of a disease around a source of potential hazard in environmental epidemiology, where the source is now the preferred ear of each phone user. In this context, the spatial distribution is a distribution over a sample of patients rather than over multiple disease cases within one geographical area. We show how the distance relation between tumour and phone can be modelled nonparametrically and, with various parametric functions, how covariates can be included in the model and how to test for the effect of distance. To illustrate the models, we apply them to a subset of the data from the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use.

  20. Demography-based adaptive network model reproduces the spatial organization of human linguistic groups

    NASA Astrophysics Data System (ADS)

    Capitán, José A.; Manrubia, Susanna

    2015-12-01

    The distribution of human linguistic groups presents a number of interesting and nontrivial patterns. The distributions of the number of speakers per language and the area each group covers follow log-normal distributions, while population and area fulfill an allometric relationship. The topology of networks of spatial contacts between different linguistic groups has been recently characterized, showing atypical properties of the degree distribution and clustering, among others. Human demography, spatial conflicts, and the construction of networks of contacts between linguistic groups are mutually dependent processes. Here we introduce an adaptive network model that takes all of them into account and successfully reproduces, using only four model parameters, not only those features of linguistic groups already described in the literature, but also correlations between demographic and topological properties uncovered in this work. Besides their relevance when modeling and understanding processes related to human biogeography, our adaptive network model admits a number of generalizations that broaden its scope and make it suitable to represent interactions between agents based on population dynamics and competition for space.

  1. Demography-based adaptive network model reproduces the spatial organization of human linguistic groups.

    PubMed

    Capitán, José A; Manrubia, Susanna

    2015-12-01

    The distribution of human linguistic groups presents a number of interesting and nontrivial patterns. The distributions of the number of speakers per language and the area each group covers follow log-normal distributions, while population and area fulfill an allometric relationship. The topology of networks of spatial contacts between different linguistic groups has been recently characterized, showing atypical properties of the degree distribution and clustering, among others. Human demography, spatial conflicts, and the construction of networks of contacts between linguistic groups are mutually dependent processes. Here we introduce an adaptive network model that takes all of them into account and successfully reproduces, using only four model parameters, not only those features of linguistic groups already described in the literature, but also correlations between demographic and topological properties uncovered in this work. Besides their relevance when modeling and understanding processes related to human biogeography, our adaptive network model admits a number of generalizations that broaden its scope and make it suitable to represent interactions between agents based on population dynamics and competition for space.

  2. Hierarchical Multinomial Processing Tree Models: A Latent-Trait Approach

    ERIC Educational Resources Information Center

    Klauer, Karl Christoph

    2010-01-01

    Multinomial processing tree models are widely used in many areas of psychology. A hierarchical extension of the model class is proposed, using a multivariate normal distribution of person-level parameters with the mean and covariance matrix to be estimated from the data. The hierarchical model allows one to take variability between persons into…

  3. On the problem of modeling for parameter identification in distributed structures

    NASA Technical Reports Server (NTRS)

    Norris, Mark A.; Meirovitch, Leonard

    1988-01-01

    Structures are often characterized by parameters, such as mass and stiffness, that are spatially distributed. Parameter identification of distributed structures is subject to many of the difficulties involved in the modeling problem, and the choice of the model can greatly affect the results of the parameter identification process. Analogously to control spillover in the control of distributed-parameter systems, identification spillover is shown to exist as well and its effect is to degrade the parameter estimates. Moreover, as in modeling by the Rayleigh-Ritz method, it is shown that, for a Rayleigh-Ritz type identification algorithm, an inclusion principle exists in the identification of distributed-parameter systems as well, so that the identified natural frequencies approach the actual natural frequencies monotonically from above.

  4. Intertime jump statistics of state-dependent Poisson processes.

    PubMed

    Daly, Edoardo; Porporato, Amilcare

    2007-01-01

    A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. The method uses the survivor function obtained from a modified version of the master equation associated with the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model of neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.
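
    State-dependent Poisson jumps can be simulated with the standard thinning (rejection) scheme; the sketch below, with an assumed rate function and toy state dynamics, collects the interarrival times whose distribution the survivor-function method characterizes.

        import numpy as np

        rng = np.random.default_rng(7)

        def simulate_jumps(rate, x0=0.0, t_end=1000.0, rate_max=2.0):
            """Thinning for a Poisson process whose rate depends on the state x.

            Toy dynamics (assumed): x relaxes exponentially between events
            and jumps up by 1 at each accepted event. rate(x) must stay
            below rate_max for the rejection step to be valid.
            """
            t, x, jump_times = 0.0, x0, []
            while t < t_end:
                dt = rng.exponential(1.0 / rate_max)   # candidate waiting time
                t += dt
                x *= np.exp(-0.1 * dt)                 # state relaxes between events
                if rng.random() < rate(x) / rate_max:  # thinning acceptance
                    jump_times.append(t)
                    x += 1.0                           # each jump increments the state
            return np.diff(jump_times)

        isi = simulate_jumps(rate=lambda x: 0.5 + 1.0 / (1.0 + x))
        print("mean interarrival time:", isi.mean())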

  5. Telemedicine and distributed medical intelligence.

    PubMed

    Warner, D; Tichenor, J M; Balch, D C

    1996-01-01

    Recent trends in health care informatics and telemedicine indicate that systems are being developed with a primary focus on technology and business, not on the process of medicine itself. The authors present a new model of health care information, distributed medical intelligence, which promotes the development of an integrative medical communication system addressing the process of providing expert medical knowledge to the point of need. The model incorporates audio, video, high-resolution still images, and virtual reality applications into an integrated medical communications network. Three components of the model (care portals, Docking Station, and the bridge) are described. The implementation of this model at the East Carolina University School of Medicine is also outlined.

  6. A Bayesian Alternative for Multi-objective Ecohydrological Model Specification

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.

    2015-12-01

    Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical and ecological processes of catchments, and are usually more complex and more heavily parameterized than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying uncertainties in hydrological modeling, following the development of Markov Chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize model uncertainty and bias within a Bayesian ecohydrological framework. A formal Bayesian approach is implemented in an ecohydrological model which combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations based on a single-objective likelihood (streamflow or LAI) and on multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between different priors and the corresponding posterior distributions to examine parameter sensitivity. Results show that different prior distributions can strongly influence the posterior distributions of parameters, especially when the available data are limited or the parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits for cases based on multi-objective versus single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to the different data types.
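
    For the Gaussian case, the prior-to-posterior divergence used above has a closed form; the sketch below computes KL(posterior || prior) for univariate normals with placeholder parameter values.

        import numpy as np

        def kld_normal(mu_q, sd_q, mu_p, sd_p):
            """KL(q || p) for univariate normals q = N(mu_q, sd_q^2), p = N(mu_p, sd_p^2)."""
            return (np.log(sd_p / sd_q)
                    + (sd_q**2 + (mu_q - mu_p) ** 2) / (2.0 * sd_p**2) - 0.5)

        # Insensitive parameter: the posterior barely moves from the prior.
        print(kld_normal(0.05, 0.95, 0.0, 1.0))  # small KLD
        # Sensitive parameter: the data shift and sharpen the posterior.
        print(kld_normal(1.50, 0.20, 0.0, 1.0))  # large KLD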

  7. Terrestrial gross carbon dioxide uptake: global distribution and covariation with climate.

    PubMed

    Beer, Christian; Reichstein, Markus; Tomelleri, Enrico; Ciais, Philippe; Jung, Martin; Carvalhais, Nuno; Rödenbeck, Christian; Arain, M Altaf; Baldocchi, Dennis; Bonan, Gordon B; Bondeau, Alberte; Cescatti, Alessandro; Lasslop, Gitta; Lindroth, Anders; Lomas, Mark; Luyssaert, Sebastiaan; Margolis, Hank; Oleson, Keith W; Roupsard, Olivier; Veenendaal, Elmar; Viovy, Nicolas; Williams, Christopher; Woodward, F Ian; Papale, Dario

    2010-08-13

    Terrestrial gross primary production (GPP) is the largest global CO(2) flux driving several ecosystem functions. We provide an observation-based estimate of this flux at 123 +/- 8 petagrams of carbon per year (Pg C year(-1)) using eddy covariance flux data and various diagnostic models. Tropical forests and savannahs account for 60%. GPP over 40% of the vegetated land is associated with precipitation. State-of-the-art process-oriented biosphere models used for climate predictions exhibit a large between-model variation of GPP's latitudinal patterns and show higher spatial correlations between GPP and precipitation, suggesting the existence of missing processes or feedback mechanisms which attenuate the vegetation response to climate. Our estimates of spatially distributed GPP and its covariation with climate can help improve coupled climate-carbon cycle process models.

  8. Importance of vesicle release stochasticity in neuro-spike communication.

    PubMed

    Ramezani, Hamideh; Akan, Ozgur B

    2017-07-01

    The aim of this paper is to propose a stochastic model for the vesicle release process, a part of neuro-spike communication. To this end, we study the biological events occurring in this process and use microphysiological simulations to observe the functionality of these events. Since the most important source of variability in vesicle release probability is the opening of voltage-dependent calcium channels (VDCCs), followed by the influx of calcium ions through these channels, we propose a stochastic model for this event, while using a deterministic model for the other sources of variability. To capture the stochasticity of the calcium influx to the pre-synaptic neuron in our model, we study its statistics and find that it can be modeled by a distribution defined in terms of the Normal and Logistic distributions.

  9. The timing of sequences of saccades in visual search.

    PubMed Central

    Van Loon, E M; Hooge, I Th C; Van den Berg, A V

    2002-01-01

    According to the LATER model (linear approach to thresholds with ergodic rate), the latency of a single saccade in response to target appearance can be understood as a decision process, which is subject to (i) variations in the rate of (visual) information processing; and (ii) the threshold for the decision. We tested whether the LATER model can also be applied to the sequences of saccades in a multiple fixation search, during which latencies of second and subsequent saccades are typically shorter than that of the initial saccade. We found that the distributions of the reciprocal latencies for later saccades, unlike those of the first saccade, are highly asymmetrical, much like a gamma distribution. This suggests that the normal distribution of the rate r, which the LATER model assumes, is not appropriate to describe the rate distributions of subsequent saccades in a scanning sequence. By contrast, the gamma distribution is also appropriate to describe the distribution of reciprocal latencies for the first saccade. The change of the gamma distribution parameters as a function of the ordinal number of the saccade suggests a lowering of the threshold for second and later saccades, as well as a reduction in the number of target elements analysed. PMID:12184827
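
    A quick numerical illustration of the LATER idea with invented threshold and rate values: a normally distributed accumulation rate gives a roughly symmetric reciprocal-latency distribution for the first saccade, while a gamma-distributed rate, as suggested above for later saccades, skews it.

        import numpy as np

        rng = np.random.default_rng(8)
        theta, n = 1.0, 100_000  # decision threshold (arbitrary units)

        # First saccade: accumulation rate r is normal (the LATER assumption).
        r_first = rng.normal(5.0, 1.0, n)
        r_first = r_first[r_first > 0]          # keep physically meaningful rates

        # Later saccades: gamma-distributed rates with the same mean rate.
        r_later = rng.gamma(3.0, 5.0 / 3.0, n)

        for name, r in [("first", r_first), ("later", r_later)]:
            recip = r / theta                   # reciprocal latency: 1/T = r/theta
            skew = ((recip - recip.mean()) ** 3).mean() / recip.std() ** 3
            print(f"{name}: mean latency={(theta / r).mean():.3f}, "
                  f"reciprocal-latency skewness={skew:.2f}")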

  10. Some considerations on the use of ecological models to predict species' geographic distributions

    USGS Publications Warehouse

    Peterjohn, B.G.

    2001-01-01

    Peterson (2001) used Genetic Algorithm for Rule-set Prediction (GARP) models to predict distribution patterns from Breeding Bird Survey (BBS) data. Evaluations of these models should consider inherent limitations of BBS data: (1) BBS methods may not sample species and habitats equally; (2) using BBS data for both model development and testing may overlook poor fit of some models; and (3) BBS data may not provide the desired spatial resolution or capture temporal changes in species distributions. The predictive value of GARP models requires additional study, especially comparisons with distribution patterns from independent data sets. When employed at appropriate temporal and geographic scales, GARP models show considerable promise for conservation biology applications but provide limited inferences concerning processes responsible for the observed patterns.

  11. Transport processes and sound velocity in vibrationally non-equilibrium gas of anharmonic oscillators

    NASA Astrophysics Data System (ADS)

    Rydalevskaya, Maria A.; Voroshilova, Yulia N.

    2018-05-01

    Vibrationally non-equilibrium flows of chemically homogeneous diatomic gases are considered under conditions where the distribution of molecules over vibrational levels differs significantly from the Boltzmann distribution. In such flows, molecular collisions can be divided into two groups: the first group corresponds to "rapid" microscopic processes, whereas the second corresponds to "slow" microscopic processes (whose characteristic times are comparable to or larger than the time scale of gas-dynamic parameter variation). The collisions of the first group form quasi-stationary vibrationally non-equilibrium distribution functions. Model kinetic equations are used to study the transport processes under these conditions. In these equations, the BGK-type approximation is used to model only the collision operators of the first group, which simplifies the derivation of the transport fluxes and the calculation of the kinetic coefficients. Special attention is given to the connection between the formula for the bulk viscosity coefficient and the square of the sound velocity.

  12. An improved approximate-Bayesian model-choice method for estimating shared evolutionary history

    PubMed Central

    2014-01-01

    Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergences times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions The results demonstrate the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet-process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937

  13. Random-Walk Type Model with Fat Tails for Financial Markets

    NASA Astrophysics Data System (ADS)

    Matuttis, Hans-Georg

    Starting from the random-walk model, practices of financial markets are incorporated into the random walk so that fat-tail distributions like those in the high-frequency data of the S&P 500 index are reproduced, even though the individual mechanisms are modeled with normally distributed data. The incorporation of local correlation narrows the distribution for "frequent" events, whereas global correlations due to technical analysis lead to fat tails. Delays of market transactions in the trading process shift the fat-tail probabilities downwards. Such an inclusion of reactions to market fluctuations leads to mini-trends which are distributed with unit variance.

  14. Computation of marginal distributions of peak-heights in electropherograms for analysing single source and mixture STR DNA samples.

    PubMed

    Cowell, Robert G

    2018-05-04

    Current models for single-source and mixture samples, and the probabilistic genotyping software based on them used for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model allelic peak-height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon-number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms.
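
    The PGF-inversion idea works because evaluating a probability generating function at the N-th roots of unity and applying a discrete Fourier transform recovers the first N probabilities; the sketch below applies this to an illustrative compound Poisson PGF, not to the paper's amplification model.

        import numpy as np

        N = 64                                       # probabilities to recover
        z = np.exp(2j * np.pi * np.arange(N) / N)    # N-th roots of unity

        # Illustrative compound Poisson: G(z) = exp(lam * (H(z) - 1)),
        # where H is the PGF of a Poisson(mu) "offspring" count (assumed model).
        lam, mu = 2.0, 1.5
        H = np.exp(mu * (z - 1.0))
        G = np.exp(lam * (H - 1.0))

        # p_k = (1/N) * sum_j G(z_j) * z_j^{-k}, exact if P(X >= N) is negligible.
        pmf = np.fft.fft(G).real / N
        print("total mass:", pmf.sum().round(6))
        print("P(X = 0..4):", pmf[:5].round(4))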

  15. Modeling the isotopic evolution of snowpack and snowmelt: Testing a spatially distributed parsimonious approach.

    PubMed

    Ala-Aho, Pertti; Tetzlaff, Doerthe; McNamara, James P; Laudon, Hjalmar; Kormos, Patrick; Soulsby, Chris

    2017-07-01

    Use of stable water isotopes has become increasingly popular in quantifying water flow paths and travel times in hydrological systems using tracer-aided modeling. In snow-influenced catchments, snowmelt produces a traceable isotopic signal, which differs from the original snowfall isotopic composition because of isotopic fractionation in the snowpack. These fractionation processes in snow are relatively well understood, but representing their spatiotemporal variability in tracer-aided studies remains a challenge. We present a novel, parsimonious modeling method to account for snowpack isotope fractionation and estimate isotope ratios in snowmelt water in a fully spatially distributed manner. Our model introduces two calibration parameters that alone account for the isotopic fractionation caused by sublimation from interception and ground snow storage, and for the snowmelt fractionation progressively enriching the snowmelt runoff. The isotope routines are linked to a generic process-based snow interception-accumulation-melt model, facilitating simulation of spatially distributed snowmelt runoff. We use a synthetic modeling experiment to demonstrate the functionality of the model algorithms in different landscape locations and under different canopy characteristics. We also provide a proof-of-concept model test and successfully reproduce isotopic ratios in snowmelt runoff sampled with snowmelt lysimeters in two long-term experimental catchments with contrasting winter conditions. To our knowledge, the method is the first such tool to allow estimation of the spatially distributed nature of isotopic fractionation in snowpacks and of the resulting isotope ratios in snowmelt runoff. The method can thus provide a useful tool for tracer-aided modeling to better understand the integrated nature of flow, mixing, and transport processes in snow-influenced catchments.

  16. Linking multi-temporal satellite imagery to coastal wetland dynamics and bird distribution

    USGS Publications Warehouse

    Pickens, Bradley A.; King, Sammy L.

    2014-01-01

    Ecosystems are characterized by dynamic ecological processes, such as flooding and fires, but spatial models are often limited to a single measurement in time. The characterization of direct, fine-scale processes affecting animals is potentially valuable for management applications, but these are difficult to quantify over broad extents. Direct predictors are also expected to improve transferability of models beyond the area of study. Here, we investigated the ability of non-static and multi-temporal habitat characteristics to predict marsh bird distributions, while testing model generality and transferability between two coastal habitats. Distribution models were developed for king rail (Rallus elegans), common gallinule (Gallinula galeata), least bittern (Ixobrychus exilis), and purple gallinule (Porphyrio martinica) in fresh and intermediate marsh types in the northern Gulf Coast of Louisiana and Texas, USA. For model development, repeated point count surveys of marsh birds were conducted from 2009 to 2011. Landsat satellite imagery was used to quantify both annual conditions and cumulative, multi-temporal habitat characteristics. We used multivariate adaptive regression splines to quantify bird-habitat relationships for fresh, intermediate, and combined marsh habitats. Multi-temporal habitat characteristics ranked as more important than single-date characteristics, as temporary water was most influential in six of eight models. Predictive power was greater for marsh type-specific models compared to general models and model transferability was poor. Birds in fresh marsh selected for annual habitat characterizations, while birds in intermediate marsh selected for cumulative wetness and heterogeneity. Our findings emphasize that dynamic ecological processes can affect species distribution and species-habitat relationships may differ with dominant landscape characteristics.

  17. Time-Fractional Advection-Dispersion Equation (tFADE) to Quantify Aqueous Phase Contaminant Elution from a Trichloroethene (TCE) NAPL Source Zone in Sand Columns

    NASA Astrophysics Data System (ADS)

    Tick, G. R.; Wei, S.; Sun, H.; Zhang, Y.

    2016-12-01

    Pore-scale heterogeneity, NAPL distribution, and sorption/desorption processes can significantly affect aqueous-phase elution and mass flux in porous-media systems. A scale-independent time-fractional advection-dispersion equation (tFADE) model was applied to simulate elution curves for a series of columns (5 cm, 7 cm, 15 cm, 25 cm, and 80 cm) homogeneously packed with 20/30-mesh sand and containing uniform saturations (7-24%) of NAPL-phase trichloroethene (TCE). An additional set of columns (7 cm and 25 cm) was packed with a heterogeneous distribution of quartz sand, upon which TCE was emplaced by imbibing the immiscible liquid under stable displacement conditions to simulate a spill-type process. The tFADE model represented the experimental elution behavior of systems exhibiting extensive long-term concentration tailing better, and with far fewer parameters, than typical multi-rate mass transfer (MRMT) models. However, the tFADE model was not able to effectively simulate the entire elution curve for systems with short concentration-tailing periods, since it assumes a power-law distribution of the TCE dissolution rate. This limitation may be addressed with a tempered fractional derivative model, which can capture a single-rate mass transfer process and therefore short elution concentration tailing; the numerical solution of the tempered fractional-derivative model in bounded domains, however, remains a challenge and requires further study. Nevertheless, the tFADE model shows excellent promise for understanding impacts on concentration elution behavior in systems where physical heterogeneity, non-uniform NAPL distribution, and pronounced sorption-desorption effects dominate or are present.

  18. Confounding environmental colour and distribution shape leads to underestimation of population extinction risk.

    PubMed

    Fowler, Mike S; Ruokolainen, Lasse

    2013-01-01

    The colour of environmental variability influences the size of population fluctuations when filtered through density-dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments, and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical feedback) models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample skewness and kurtosis, and decreasing mean kurtosis, of these series alter the shape of the frequency distribution of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow-growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risk in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations. We must let the characteristics of known natural environmental covariates (e.g., colour and distribution shape) guide us in our choice of how best to model the impact of coloured environmental variation on population dynamics.
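
    To see the finite-length distribution-shape effect described above, the sketch below generates many short AR(1) series with unit stationary variance and reports how the mean and spread of the sample kurtosis change as the series redden; series lengths and autocorrelation values are arbitrary.

        import numpy as np

        rng = np.random.default_rng(9)
        n_series, length = 2000, 100

        def ar1_series(phi, n, length):
            """Finite AR(1) series with unit stationary variance."""
            x = np.empty((n, length))
            x[:, 0] = rng.standard_normal(n)
            innov_sd = np.sqrt(1.0 - phi**2)
            for t in range(1, length):
                x[:, t] = phi * x[:, t - 1] + innov_sd * rng.standard_normal(n)
            return x

        for phi in (0.0, 0.5, 0.9):  # white -> increasingly red
            x = ar1_series(phi, n_series, length)
            m = x.mean(axis=1, keepdims=True)
            s = x.std(axis=1, keepdims=True)
            kurt = (((x - m) / s) ** 4).mean(axis=1)  # sample kurtosis per series
            print(f"phi={phi}: mean kurtosis={kurt.mean():.2f}, "
                  f"sd of kurtosis={kurt.std():.2f}")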

  19. Enforcement of entailment constraints in distributed service-based business processes.

    PubMed

    Hummer, Waldemar; Gaubatz, Patrick; Strembeck, Mark; Zdun, Uwe; Dustdar, Schahram

    2013-11-01

    A distributed business process is executed in a distributed computing environment. The service-oriented architecture (SOA) paradigm is a popular option for the integration of software services and execution of distributed business processes. Entailment constraints, such as mutual exclusion and binding constraints, are important means to control process execution. Mutually exclusive tasks result from the division of powerful rights and responsibilities to prevent fraud and abuse. In contrast, binding constraints define that a subject who performed one task must also perform the corresponding bound task(s). We aim to provide a model-driven approach for the specification and enforcement of task-based entailment constraints in distributed service-based business processes. Based on a generic metamodel, we define a domain-specific language (DSL) that maps the different modeling-level artifacts to the implementation level. The DSL integrates elements from role-based access control (RBAC) with the tasks that are performed in a business process. Process definitions are annotated using the DSL, and our software platform uses automated model transformations to produce executable WS-BPEL specifications which enforce the entailment constraints. We evaluate the impact of constraint enforcement on runtime performance for five selected service-based processes from the existing literature. Our evaluation demonstrates that the approach correctly enforces task-based entailment constraints at runtime. The performance experiments illustrate that the runtime enforcement operates with an overhead that scales well up to the order of several tens of thousands of logged invocations. Using our DSL annotations, the user-defined process definition remains declarative and clean of security enforcement code. Our approach decouples the concerns of (non-technical) domain experts from the technical details of entailment constraint enforcement. The developed framework integrates seamlessly with WS-BPEL and the Web services technology stack. Our prototype implementation shows the feasibility of the approach, and the evaluation points to future work and further performance optimizations.
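
    The following toy sketch illustrates what runtime enforcement of the two constraint types means. It is purely illustrative: the task and subject names are invented, and it stands in for (rather than reproduces) the paper's DSL-and-WS-BPEL machinery.

    ```python
    # Toy runtime check of task-based entailment constraints; invented task and
    # subject names, standing in for the paper's DSL/WS-BPEL enforcement layer.
    MUTUAL_EXCLUSION = {("approve_order", "issue_payment")}   # never the same subject
    BINDING = {("prepare_contract", "sign_contract")}         # always the same subject

    def allowed(log, subject, task):
        """log maps already-performed tasks of this process instance to subjects."""
        for a, b in MUTUAL_EXCLUSION:
            other = b if task == a else a if task == b else None
            if other is not None and log.get(other) == subject:
                return False      # mutual exclusion violated
        for a, b in BINDING:
            if task == b and a in log and log[a] != subject:
                return False      # binding violated: bound task needs same subject
        return True

    log = {"prepare_contract": "alice"}
    print(allowed(log, "bob", "sign_contract"))    # False: bound to alice
    print(allowed(log, "alice", "sign_contract"))  # True
    ```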

  20. Thermodynamic Model of Spatial Memory

    NASA Astrophysics Data System (ADS)

    Kaufman, Miron; Allen, P.

    1998-03-01

    We develop and test a thermodynamic model of spatial memory. Our model is an application of statistical thermodynamics to cognitive science and is related to applications of the statistical mechanics framework in parallel distributed processing research. Our macroscopic model allows us to evaluate an entropy associated with spatial memory tasks. We find that older adults exhibit higher levels of entropy than younger adults. Thurstone's Law of Categorical Judgment, according to which the discriminal processes along the psychological continuum produced by presentations of a single stimulus are normally distributed, is explained by using a Hooke spring model of spatial memory. We have also analyzed a nonlinear modification of the ideal spring model of spatial memory. This work is supported by NIH/NIA grant AG09282-06.

  1. Evaluate transport processes in MERRA driven chemical transport models using updated 222Rn emission inventories and global observations

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Liu, H.; Crawford, J. H.; Fairlie, T. D.; Chen, G.; Chambers, S. D.; Kang, C. H.; Williams, A. G.; Zhang, K.; Considine, D. B.; Payer Sulprizio, M.; Yantosca, R.

    2015-12-01

    Convective and synoptic processes play a major role in determining the transport and distribution of trace gases and aerosols in the troposphere. The representation of these processes in global models (at ~100-1000 km horizontal resolution) is challenging, because convection is a sub-grid process that must be parameterized, while synoptic processes are close to the grid scale. Depending on the parameterization schemes used in climate models, the role of convection in transporting trace gases and aerosols may vary from model to model. 222Rn is a chemically inert, radioactive gas constantly emitted from soil, with a half-life (3.8 days) comparable to the synoptic timescale, which makes it an effective tracer for convective and synoptic transport. In this study, we evaluate the convective and synoptic transport in two chemical transport models (GMI and GEOS-Chem), both driven by NASA's MERRA reanalysis. Considering the uncertainties in 222Rn emissions, we incorporate two more recent scenarios with regionally varying 222Rn emissions into GEOS-Chem/MERRA and compare the simulation results with those using the relatively uniform 222Rn emissions in the standard model. We evaluate the global distribution and seasonality of 222Rn concentrations simulated by the two models against an extended collection of 222Rn observations from the 1970s to the 2010s. The intercomparison will improve our understanding of the spatial variability in global 222Rn emissions, including the suspected excessive 222Rn emissions in East Asia, and provide useful feedback on 222Rn emission models. We will assess 222Rn vertical distributions at different latitudes in the models using observations at surface sites and in the upper troposphere and lower stratosphere. Results will be compared with previous models driven by other meteorological fields (e.g., fvGCM and GEOS4). Since the decay of 222Rn is the source of 210Pb, a useful radionuclide tracer attached to submicron aerosols, improved understanding of the emissions and transport of 222Rn will provide insights into the transport, distribution, and wet deposition of 210Pb aerosols.

  2. Process Modeling of Composite Materials for Wind-Turbine Rotor Blades: Experiments and Numerical Modeling

    PubMed Central

    Wieland, Birgit; Ropte, Sven

    2017-01-01

    The production of rotor blades for wind turbines is still a predominantly manual process. Process simulation is an adequate way of improving blade quality without a significant increase in production costs. This paper introduces a module for tolerance simulation for rotor-blade production processes. The investigation focuses on the simulation of temperature distribution for one-sided, self-heated tooling and thick laminates. Experimental data from rotor-blade production and down-scaled laboratory tests are presented. Based on influencing factors that are identified, a physical model is created and implemented as a simulation. This provides an opportunity to simulate temperature and cure-degree distribution for two-dimensional cross sections. The aim of this simulation is to support production processes. Hence, it is modelled as an in situ simulation with direct input of temperature data and real-time capability. A monolithic part of the rotor blade, the main girder, is used as an example for presenting the results. PMID:28981458

  3. Empirical comparison of heuristic load distribution in point-to-point multicomputer networks

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Nazief, Bobby A. A.; Reed, Daniel A.

    1990-01-01

    The study compared several load placement algorithms using instrumented programs and synthetic program models. Salient characteristics of these program traces (total computation time, total number of messages sent, and average message time) span two orders of magnitude. Load distribution algorithms determine the initial placement for processes, a precursor to the more general problem of load redistribution. It is found that desirable workload distribution strategies will place new processes globally, rather than locally, to spread processes rapidly, but that local information should be used to refine global placement.

  4. Time-independent models of asset returns revisited

    NASA Astrophysics Data System (ADS)

    Gillemot, L.; Töyli, J.; Kertesz, J.; Kaski, K.

    2000-07-01

    In this study we investigate various well-known time-independent models of asset returns: the simple normal distribution, Student t-distribution, Lévy distribution, truncated Lévy distribution, general stable distribution, mixed diffusion-jump model, and compound normal distribution. For this we use Standard and Poor's 500 index data from the New York Stock Exchange, Helsinki Stock Exchange index data describing a small volatile market, and artificial data. The results indicate that all models, excluding the simple normal distribution, are at least quite reasonable descriptions of the data. Furthermore, the use of differences instead of logarithmic returns tends to make the data look more Lévy-type distributed than they are. This phenomenon is especially evident in the artificial data, which were generated by an inflated random walk process.
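
    A minimal sketch of the kind of model comparison the study performs, using synthetic fat-tailed returns as a stand-in for the index data (the actual study used S&P 500 and Helsinki Stock Exchange series): fit a normal and a Student t model by maximum likelihood and compare log-likelihoods.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Synthetic fat-tailed daily log-returns standing in for real index data.
    returns = stats.t.rvs(df=4, scale=0.01, size=5000, random_state=rng)

    for name, dist in (("normal", stats.norm), ("Student t", stats.t)):
        params = dist.fit(returns)               # maximum-likelihood fit
        ll = np.sum(dist.logpdf(returns, *params))
        print(f"{name:9s} log-likelihood = {ll:.1f}")
    # The fat-tailed model should fit clearly better, echoing the finding that
    # the simple normal distribution is the only model rejected by the data.
    ```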

  5. Mathematical Modelling of Continuous Biotechnological Processes

    ERIC Educational Resources Information Center

    Pencheva, T.; Hristozov, I.; Shannon, A. G.

    2003-01-01

    Biotechnological processes (BTP) are characterized by a complicated structure of organization and interdependent characteristics. Partial differential equations or systems of partial differential equations are used for their behavioural description as objects with distributed parameters. Modelling of substrate without regard to dispersion…

  6. Extending the Community Multiscale Air Quality (CMAQ) Modeling System to Hemispheric Scales: Overview of Process Considerations and Initial Applications

    EPA Science Inventory

    The Community Multiscale Air Quality (CMAQ) modeling system is extended to simulate ozone, particulate matter, and related precursor distributions throughout the Northern Hemisphere. Modeled processes were examined and enhanced to suitably represent the extended space and timesca...

  7. Evaluation of distributed hydrologic impacts of temperature-index and energy-based snow models

    USDA-ARS?s Scientific Manuscript database

    Proper characterizations of snow melt and accumulation processes in the snow-dominated mountain environment are needed to understand and predict spatiotemporal distribution of water cycle components. Two commonly used strategies in modeling of snow accumulation and melt are the full energy based and...

  8. Modelling of convective processes during the Bridgman growth of poly-silicon

    NASA Astrophysics Data System (ADS)

    Popov, V. N.

    2009-09-01

    An original 3D model was used to numerically examine convective heat and mass transfer in the melt during the growth of polycrystalline silicon in a vertical Bridgman configuration. The flow in the liquid was modelled using the Navier-Stokes equations in the Boussinesq approximation. The distribution of dissolved impurities was determined by solving the convective diffusion equation. The effects of non-uniform heating of the lateral wall of the vessel and of the shape of the crystallization front on the structure of melt flows and on the distribution of dissolved impurities in the liquid are examined.

  9. Prediction of the filtrate particle size distribution from the pore size distribution in membrane filtration: Numerical correlations from computer simulations

    NASA Astrophysics Data System (ADS)

    Marrufo-Hernández, Norma Alejandra; Hernández-Guerrero, Maribel; Nápoles-Duarte, José Manuel; Palomares-Báez, Juan Pedro; Chávez-Rojo, Marco Antonio

    2018-03-01

    We present a computational model that describes the diffusion of a hard-sphere colloidal fluid through a membrane. The membrane matrix is modeled as a series of flat parallel planes with circular pores of different sizes and random spatial distribution. This model was employed to determine how the size distribution of the colloidal filtrate depends on the size distributions of both the particles in the feed and the pores of the membrane, as well as to describe the filtration kinetics. A Brownian dynamics simulation study considering normal distributions was developed in order to determine empirical correlations between the parameters that characterize these distributions. The model can also be extended to other distributions such as the log-normal. This study could, therefore, facilitate the selection of membranes for industrial or scientific filtration processes once the size distribution of the feed is known and the expected characteristics of the filtrate have been defined.
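
    A strongly simplified analytical sketch of the idea (not the Brownian dynamics simulation of the paper): treating the membrane as a series of independent planes, a particle of diameter d passes each plane with the probability of meeting a larger pore, so the filtrate density is the feed density weighted by the passage probability. All distribution parameters below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.integrate import trapezoid

    feed = stats.norm(loc=1.0, scale=0.25)   # feed particle diameters (µm)
    pore = stats.norm(loc=1.2, scale=0.15)   # membrane pore diameters (µm)
    n_planes = 5                             # membrane as a stack of parallel planes

    d = np.linspace(0.2, 2.0, 400)
    # A particle of diameter d clears one plane with the probability that the
    # pore it meets is larger; clearing the membrane requires passing every plane.
    passage = pore.sf(d) ** n_planes
    filtrate = feed.pdf(d) * passage
    filtrate /= trapezoid(filtrate, d)       # renormalise to a probability density

    print(f"mean filtrate diameter: {trapezoid(d * filtrate, d):.2f} µm "
          f"(feed mean: 1.00 µm)")
    ```

    The shift of the filtrate mean below the feed mean shows the sieving effect; varying the feed and pore parameters reproduces the kind of empirical correlations the study tabulates.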

  10. Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection.

    PubMed

    Hu, Weiming; Gao, Jun; Wang, Yanguo; Wu, Ou; Maybank, Stephen

    2014-01-01

    Current network intrusion detection systems lack adaptability to frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used, with decision stumps as weak classifiers. In the second algorithm, an improved online Adaboost process is proposed, with online Gaussian mixture models (GMMs) as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node. This combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines (SVMs). The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process that uses decision stumps. Both algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types that are found in other nodes, without sharing the samples of these intrusion types.

  11. Hierarchical Bayesian Model for Combining Geochemical and Geophysical Data for Environmental Applications Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong

    2013-05-01

    Development of a hierarchical Bayesian model to estimate the spatiotemporal distribution of aqueous geochemical parameters associated with in-situ bioremediation, using surface spectral induced polarization (SIP) data and borehole geochemical measurements collected during a bioremediation experiment at a uranium-contaminated site near Rifle, Colorado. The SIP data are first inverted for Cole-Cole parameters, including chargeability, time constant, resistivity at the DC frequency, and dependence factor, at each pixel of two-dimensional grids using a previously developed stochastic method. Correlations between the inverted Cole-Cole parameters and the wellbore-based groundwater chemistry measurements indicative of key metabolic processes within the aquifer (e.g., ferrous iron, sulfate, uranium) were established and used as a basis for petrophysical model development. The developed Bayesian model consists of three levels of statistical sub-models: 1) a data model, providing links between geochemical and geophysical attributes; 2) a process model, describing the spatial and temporal variability of geochemical properties in the subsurface system; and 3) a parameter model, describing prior distributions of various parameters and initial conditions. The unknown parameters are estimated using Markov chain Monte Carlo methods. By combining the temporally distributed geochemical data with the spatially distributed geophysical data, we obtain the spatiotemporal distribution of ferrous iron, sulfate, and sulfide, together with the associated uncertainty information. The obtained results can be used to assess the efficacy of the bioremediation treatment over space and time and to constrain reactive transport models.

  12. Resource depletion promotes automatic processing: implications for distribution of practice.

    PubMed

    Scheel, Matthew H

    2010-12-01

    Recent models of cognition include two processing systems: an automatic system that relies on associative learning, intuition, and heuristics, and a controlled system that relies on deliberate consideration. Automatic processing requires fewer resources and is more likely when resources are depleted. This study showed that prolonged practice on a resource-depleting mental arithmetic task promoted automatic processing on a subsequent problem-solving task, as evidenced by faster responding and more errors. Distribution-of-practice effects (0, 60, 120, or 180 sec. between problems) on rigidity also disappeared when groups had equal time on resource-depleting tasks. These results suggest that distribution-of-practice effects are reducible to resource availability. The discussion includes implications for interpreting discrepancies in the traditional distribution-of-practice effect.

  13. Modeling potential future individual tree-species distributions in the eastern United States under a climate change scenario: a case study with Pinus virginiana

    Treesearch

    Louis R. Iverson; Anantha Prasad; Mark W. Schwartz; Mark W. Schwartz

    1999-01-01

    We are using a deterministic regression tree analysis model (DISTRIB) and a stochastic migration model (SHIFT) to examine potential distributions of ~66 individual species of eastern US trees under a 2 x CO2 climate change scenario. This process is demonstrated for Virginia pine (Pinus virginiana).

  14. Modeling Early-Stage Processes of U-10 Wt.%Mo Alloy Using Integrated Computational Materials Engineering Concepts

    NASA Astrophysics Data System (ADS)

    Wang, Xiaowo; Xu, Zhijie; Soulami, Ayoub; Hu, Xiaohua; Lavender, Curt; Joshi, Vineet

    2017-12-01

    Low-enriched uranium alloyed with 10 wt.% molybdenum (U-10Mo) has been identified as a promising alternative to high-enriched uranium. Manufacturing U-10Mo alloy involves multiple complex thermomechanical processes that pose challenges for computational modeling. This paper describes the application of integrated computational materials engineering (ICME) concepts to integrate three individual modeling components, viz. homogenization, microstructure-based finite element method for hot rolling, and carbide particle distribution, to simulate the early-stage processes of U-10Mo alloy manufacture. The resulting integrated model enables information to be passed between different model components and leads to improved understanding of the evolution of the microstructure. This ICME approach is then used to predict the variation in the thickness of the Zircaloy-2 barrier as a function of the degree of homogenization and to analyze the carbide distribution, which can affect the recrystallization, hardness, and fracture properties of U-10Mo in subsequent processes.

  15. The coalescent process in models with selection and recombination.

    PubMed

    Hudson, R R; Kaplan, N L

    1988-11-01

    The statistical properties of the process describing the genealogical history of a random sample of genes at a selectively neutral locus which is linked to a locus at which natural selection operates are investigated. It is found that the equations describing this process are simple modifications of the equations describing the process assuming that the two loci are completely linked. Thus, the statistical properties of the genealogical process for a random sample at a neutral locus linked to a locus with selection follow from the results obtained for the selected locus. Sequence data from the alcohol dehydrogenase (Adh) region of Drosophila melanogaster are examined and compared to predictions based on the theory. It is found that the spatial distribution of nucleotide differences between Fast and Slow alleles of Adh is very similar to the spatial distribution predicted if balancing selection operates to maintain the allozyme variation at the Adh locus. The spatial distribution of nucleotide differences between different Slow alleles of Adh do not match the predictions of this simple model very well.

  16. Evaluating effectiveness of down-sampling for stratified designs and unbalanced prevalence in Random Forest models of tree species distributions in Nevada

    Treesearch

    Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino

    2012-01-01

    Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...

  17. Serial and parallel attentive visual searches: evidence from cumulative distribution functions of response times.

    PubMed

    Sung, Kyongje

    2008-12-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.

  18. A Distributed Snow Evolution Modeling System (SnowModel)

    NASA Astrophysics Data System (ADS)

    Liston, G. E.; Elder, K.

    2004-12-01

    A spatially distributed snow-evolution modeling system (SnowModel) has been specifically designed to be applicable over a wide range of snow landscapes, climates, and conditions. To reach this goal, SnowModel is composed of four sub-models: MicroMet defines the meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowMass simulates snow depth and water-equivalent evolution, and SnowTran-3D accounts for snow redistribution by wind. While other distributed snow models exist, SnowModel is unique in that it includes a well-tested blowing-snow sub-model (SnowTran-3D) for application in windy arctic, alpine, and prairie environments where snowdrifts are common. These environments comprise 68% of the seasonally snow-covered Northern Hemisphere land surface. SnowModel also accounts for snow processes occurring in forested environments (e.g., canopy interception related processes). SnowModel is designed to simulate snow-related physical processes occurring at spatial scales of 5-m and greater, and temporal scales of 1-hour and greater. These include: accumulation from precipitation; wind redistribution and sublimation; loading, unloading, and sublimation within forest canopies; snow-density evolution; and snowpack ripening and melt. To enhance its wide applicability, SnowModel includes the physical calculations required to simulate snow evolution within each of the global snow classes defined by Sturm et al. (1995), e.g., tundra, taiga, alpine, prairie, maritime, and ephemeral snow covers. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) are used as SnowModel simulation examples to highlight model strengths, weaknesses, and features in forested, semi-forested, alpine, and shrubland environments.

  19. Designing Assessments of Microworld Training for Combat Service Support Staff

    DTIC Science & Technology

    2003-01-01

    …training for distribution management skills as part of a larger project that entailed making changes to the current structure, content, and methods of CSS training. Microworld models are small-scale simulations of organizations and operations. They are useful for training distribution management processes… pilot studies using a microworld model for U.S. Army Reserve (USAR) soldiers in Distribution Management Centers. The degree to which trainees learned…

  1. Learning to read aloud: A neural network approach using sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Joglekar, Umesh Dwarkanath

    1989-01-01

    An attempt is described to solve the problem of text-to-phoneme mapping, which does not appear amenable to solution by standard algorithmic procedures. Experiments based on a model of distributed processing are also described. This model, sparse distributed memory (SDM), can be used in an iterative supervised learning mode to solve the problem. Additional improvements aimed at obtaining better performance are suggested.
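
    A minimal sketch of a Kanerva-style sparse distributed memory, the model class the report refers to: hard locations with random binary addresses are activated within a Hamming radius, writes accumulate ±1 counters, and reads take a majority vote. The sizes and activation radius below are our illustrative choices, not the report's.

    ```python
    import numpy as np

    class SDM:
        """Minimal Kanerva-style sparse distributed memory (illustrative sketch)."""
        def __init__(self, n_locations=1000, dim=256, radius=111, seed=0):
            self.rng = np.random.default_rng(seed)
            self.addresses = self.rng.integers(0, 2, (n_locations, dim))
            self.counters = np.zeros((n_locations, dim), dtype=int)
            self.radius = radius

        def _active(self, address):
            # Activate every hard location within the Hamming radius of the address.
            return np.count_nonzero(self.addresses != address, axis=1) <= self.radius

        def write(self, address, data):
            # Accumulate +1/-1 per bit into the counters of all active locations.
            self.counters[self._active(address)] += 2 * data - 1

        def read(self, address):
            # Majority vote across active locations recovers the stored word.
            return (self.counters[self._active(address)].sum(axis=0) > 0).astype(int)

    rng = np.random.default_rng(1)
    mem = SDM()
    addr = rng.integers(0, 2, 256)
    word = rng.integers(0, 2, 256)
    mem.write(addr, word)
    noisy = addr.copy()
    noisy[rng.choice(256, 20, replace=False)] ^= 1   # cue with 20 corrupted bits
    print("fraction of bits recovered:", np.mean(mem.read(noisy) == word))
    ```

    Recovery from a corrupted cue is what makes SDM attractive for noisy mappings such as text-to-phoneme conversion.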

  2. Evaluation of SCS-CN method using a fully distributed physically based coupled surface-subsurface flow model

    NASA Astrophysics Data System (ADS)

    Shokri, Ali

    2017-04-01

    The hydrological cycle contains a wide range of linked surface and subsurface flow processes. In spite of the natural connections between surface water and groundwater, historically these processes have been studied separately. The current trend in distributed physically based hydrological model development is to couple distributed surface water models with distributed subsurface flow models. This combination results in a better estimation of the temporal and spatial variability of the interaction between surface and subsurface flow. On the other hand, simple lumped models such as the Soil Conservation Service Curve Number (SCS-CN) method are still quite common because of their simplicity. In spite of the popularity of the SCS-CN method, there have always been concerns about its ambiguity in explaining the physical mechanism of rainfall-runoff processes. The aim of this study is to minimize this ambiguity by establishing a method to find an equivalence between the SCS-CN solution and the DrainFlow model, a fully distributed physically based coupled surface-subsurface flow model. In this paper, two hypothetical v-catchment tests are designed, and the direct runoff from a storm event is calculated by both the SCS-CN and DrainFlow models. To find a comparable runoff prediction from SCS-CN and DrainFlow, the variance between the runoff predictions of the two models is minimized by changing the curve number (CN) and initial abstraction (Ia) values. Results of this study have led to a set of lumped model parameters (CN and Ia) for each catchment that is comparable to a set of physically based parameters including hydraulic conductivity, Manning roughness coefficient, ground surface slope, and specific storage. Considering that the lack of physical interpretation of CN and Ia is often argued to be a weakness of the SCS-CN method, the novel method in this paper gives a physical explanation of CN and Ia.
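
    For reference, the SCS-CN runoff calculation that the study compares against is the standard one; a direct implementation (with the conventional Ia = 0.2*S initial abstraction ratio) looks like this:

    ```python
    def scs_cn_runoff(P, CN, ia_ratio=0.2):
        """Direct runoff depth Q (mm) from storm rainfall P (mm):
        S = 25400/CN - 254 (mm), Ia = ia_ratio * S,
        Q = (P - Ia)**2 / (P - Ia + S) when P > Ia, else 0."""
        S = 25400.0 / CN - 254.0
        Ia = ia_ratio * S
        return (P - Ia) ** 2 / (P - Ia + S) if P > Ia else 0.0

    # A 50 mm storm on a catchment with CN = 75 yields roughly 9 mm of runoff:
    print(f"Q = {scs_cn_runoff(50.0, 75):.1f} mm")
    ```

    Minimizing the variance between this prediction and the distributed model's runoff over (CN, Ia) is the calibration step the abstract describes.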

  3. Simulation of product distribution at PT Anugrah Citra Boga by using capacitated vehicle routing problem method

    NASA Astrophysics Data System (ADS)

    Lamdjaya, T.; Jobiliong, E.

    2017-01-01

    PT Anugrah Citra Boga is a food processing company that produces meatballs as its main product. The distribution system for the products must be considered, because it needs to be more efficient in order to reduce shipment costs. The purpose of this research is to optimize distribution time by simulating the distribution channels with the capacitated vehicle routing problem method. First, the distribution route is observed in order to calculate the average speed, time capacity, and shipping costs. Then the model is built using the AIMMS software. The inputs required to simulate the model are customer locations, distances, and process times. Finally, the total distribution cost obtained by the simulation is compared with the historical data. It is concluded that the company can reduce shipping costs by around 4.1%, or Rp 529,800 per month. By using this model, the utilization rate becomes more optimal: the value for the first vehicle falls from the current 104.6% to 88.6% after the simulation, while the utilization rate of the second vehicle increases from 59.8% to 74.1%. The simulation model is able to produce the optimal shipping route subject to time restrictions, vehicle capacity, and fleet size.

  4. Parallel Distributed Processing at 25: Further Explorations in the Microstructure of Cognition

    ERIC Educational Resources Information Center

    Rogers, Timothy T.; McClelland, James L.

    2014-01-01

    This paper introduces a special issue of "Cognitive Science" initiated on the 25th anniversary of the publication of "Parallel Distributed Processing" (PDP), a two-volume work that introduced the use of neural network models as vehicles for understanding cognition. The collection surveys the core commitments of the PDP…

  5. Methods and tools for profiling and control of distributed systems

    NASA Astrophysics Data System (ADS)

    Sukharev, R.; Lukyanchikov, O.; Nikulchev, E.; Biryukov, D.; Ryadchikov, I.

    2018-02-01

    This article is devoted to the profiling and control of distributed systems. Distributed systems have a complex architecture: applications are distributed among various computing nodes, and many network operations are performed. It is therefore important today to develop methods and tools for profiling distributed systems. The article analyzes and standardizes methods for profiling distributed systems that focus on simulation to conduct experiments and build a graph model of the system. The theory of queueing networks is used for simulation modeling of distributed systems receiving and processing user requests. To automate this profiling method, a software application was developed with a modular structure similar to that of a SCADA system.

  6. A Cognitive Processing Account of Individual Differences in Novice Logo Programmers' Conceptualisation and Use of Recursion.

    ERIC Educational Resources Information Center

    Gibbons, Pamela

    1995-01-01

    Describes a study that investigated individual differences in the construction of mental models of recursion in LOGO programming. The learning process was investigated from the perspective of Norman's mental models theory and employed diSessa's ontology regarding distributed, functional, and surrogate mental models, and the Luria model of brain…

  7. Interval timing under a behavioral microscope: Dissociating motivational and timing processes in fixed-interval performance.

    PubMed

    Daniels, Carter W; Sanabria, Federico

    2017-03-01

    The distribution of latencies and interresponse times (IRTs) of rats was compared between two fixed-interval (FI) schedules of food reinforcement (FI 30 s and FI 90 s), and between two levels of food deprivation. Computational modeling revealed that latencies and IRTs were well described by mixture probability distributions embodying two-state Markov chains. Analysis of these models revealed that only a subset of latencies is sensitive to the periodicity of reinforcement, and prefeeding only reduces the size of this subset. The distribution of IRTs suggests that behavior in FI schedules is organized in bouts that lengthen and ramp up in frequency with proximity to reinforcement. Prefeeding slowed down the lengthening of bouts and increased the time between bouts. When concatenated, latency and IRT models adequately reproduced sigmoidal FI response functions. These findings suggest that behavior in FI schedules fluctuates in and out of schedule control; an account of such fluctuation suggests that timing and motivation are dissociable components of FI performance. These mixture-distribution models also provide novel insights on the motivational, associative, and timing processes expressed in FI performance. These processes may be obscured, however, when performance in timing tasks is analyzed in terms of mean response rates.

  8. Persistence of exponential bed thickness distributions in the stratigraphic record: Experiments and theory

    NASA Astrophysics Data System (ADS)

    Straub, K. M.; Ganti, V. K.; Paola, C.; Foufoula-Georgiou, E.

    2010-12-01

    Stratigraphy preserved in alluvial basins houses the most complete record of information necessary to reconstruct past environmental conditions. Indeed, the character of the sedimentary record is inextricably related to the surface processes that formed it. In this presentation we explore how the signals of surface processes are recorded in stratigraphy through the use of physical and numerical experiments. We focus on linking surface processes to stratigraphy in 1D by relating the probability distributions of processes that govern the evolution of depositional systems to the probability distribution of preserved bed thicknesses. In this study we define a bed as a package of sediment bounded above and below by erosional surfaces. In a companion presentation we document heavy-tailed statistics of erosion and deposition from high-resolution temporal elevation data recorded during a controlled physical experiment. However, the heavy tails in the magnitudes of erosional and depositional events are not preserved in the experimental stratigraphy. Similar to many bed thickness distributions reported in field studies, we find that an exponential distribution adequately describes the thicknesses of beds preserved in our experiment. We explore the generation of exponential bed thickness distributions from heavy-tailed surface statistics using 1D numerical models. These models indicate that when the full distribution of elevation fluctuations (both erosional and depositional events) is symmetrical, the resulting distribution of bed thicknesses is exponential in form. Finally, we illustrate that a predictable relationship exists between the coefficient of variation of surface elevation fluctuations and the scale parameter of the resulting exponential distribution of bed thicknesses.
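
    A toy 1D version of this argument (our own illustration, not the experiment itself): evolve a surface with symmetric, heavy-tailed elevation increments plus slight net aggradation, let erosion truncate earlier deposits, and examine the preserved bed thicknesses; for an exponential distribution the mean and standard deviation should roughly coincide.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Symmetric heavy-tailed elevation increments plus slight net aggradation.
    dz = rng.standard_t(df=1.5, size=200_000) + 0.05

    stack, eroded = [], True        # preserved deposits, oldest at the bottom
    for step in dz:
        if step > 0:
            if eroded or not stack:
                stack.append(step)  # an erosional surface below starts a new bed
            else:
                stack[-1] += step   # otherwise amalgamate with the growing bed
            eroded = False
        elif step < 0:
            erosion = -step         # erosion truncates beds from the top down
            while erosion > 0 and stack:
                if stack[-1] <= erosion:
                    erosion -= stack.pop()
                else:
                    stack[-1] -= erosion
                    erosion = 0.0
            eroded = True

    beds = np.array(stack)
    print(f"{beds.size} preserved beds: mean {beds.mean():.2f}, std {beds.std():.2f}")
    ```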

  9. A model for the flux-r.m.s. correlation in blazar variability or the minijets-in-a-jet statistical model

    NASA Astrophysics Data System (ADS)

    Biteau, J.; Giebels, B.

    2012-12-01

    The origin of the very high energy gamma-ray variability of blazar emission remains puzzling. Fast flux variations down to the minute time scale, as observed with H.E.S.S. during flares of the blazar PKS 2155-304, suggest that the variability originates in the jet, where Doppler boosting can be invoked to relax causal constraints on the size of the emission region. The observation of log-normality in the flux distributions should rule out additive processes, such as those resulting from uncorrelated multiple-zone emission models, and favours an origin of the variability in multiplicative processes not unlike those observed in a broad class of accreting systems. We show, using a simple kinematic model, that Doppler boosting of randomly oriented emitting regions generates flux distributions following a Pareto law, that the linear flux-r.m.s. relation found for a single zone holds for a large number of emitting regions, and that the skewed distribution of the total flux is close to a log-normal, despite arising from an additive process.
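
    The kinematic core of this argument is easy to reproduce: for isotropic orientations cos θ is uniform, so the Doppler factor δ = [Γ(1 − β cos θ)]⁻¹ has density ∝ δ⁻², and a flux boosted as F ∝ δⁿ then follows a Pareto-type law. The sketch below, with an assumed bulk Lorentz factor and boosting index of our choosing, also shows the resulting flux-r.m.s. coupling when many regions are summed:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    Gamma, n = 10.0, 4                  # assumed bulk Lorentz factor, boosting index
    beta = np.sqrt(1.0 - 1.0 / Gamma**2)

    def boosted_flux(size):
        # Doppler factor of randomly oriented regions; flux boosted as delta**n.
        delta = 1.0 / (Gamma * (1.0 - beta * rng.uniform(-1.0, 1.0, size)))
        return delta**n

    # Light curve: each time bin sums N independently oriented minijets.
    N, n_bins, w = 100, 5000, 50
    lc = boosted_flux((n_bins, N)).sum(axis=1)

    # Local mean vs. local r.m.s. over windows of w bins: the Pareto-like
    # single-region fluxes make the two track each other (linear flux-r.m.s.).
    means = lc.reshape(-1, w).mean(axis=1)
    stds = lc.reshape(-1, w).std(axis=1)
    print(f"flux-r.m.s. correlation: r = {np.corrcoef(means, stds)[0, 1]:.2f}")
    ```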

  10. SDMtoolbox 2.0: the next generation Python-based GIS toolkit for landscape genetic, biogeographic and species distribution model analyses.

    PubMed

    Brown, Jason L; Bennett, Joseph R; French, Connor M

    2017-01-01

    SDMtoolbox 2.0 is a software package for spatial studies of ecology, evolution, and genetics. The release of SDMtoolbox 2.0 allows researchers to use the most current ArcGIS and MaxEnt software, and reduces the amount of time that would be spent developing common solutions. The central aim of this software is to automate complicated and repetitive spatial analyses in an intuitive graphical user interface. One core tenet is the careful parameterization of species distribution models (SDMs) to maximize each model's discriminatory ability and minimize overfitting. This includes careful processing of occurrence data and environmental data, and careful model parameterization. The program directly interfaces with MaxEnt, one of the most powerful and widely used species distribution modeling programs, although SDMtoolbox 2.0 is not limited to species distribution modeling or restricted to modeling in MaxEnt. Many of the SDM pre- and post-processing tools have 'universal' analogs for use with any modeling software. The current version contains a total of 79 scripts that harness the power of ArcGIS for macroecology, landscape genetics, and evolutionary studies. For example, these tools allow for biodiversity quantification (such as species richness or corrected weighted endemism), generation of least-cost paths and corridors among shared haplotypes, assessment of the significance of spatial randomizations, and enforcement of dispersal limitations of SDMs projected into future climates, to name only a few of the functions contained in SDMtoolbox 2.0. Lastly, dozens of generalized tools exist for batch processing and conversion of GIS data types or formats, which are broadly useful to any ArcMap user.

  11. Real-time physiological monitoring with distributed networks of sensors and object-oriented programming techniques

    NASA Astrophysics Data System (ADS)

    Wiesmann, William P.; Pranger, L. Alex; Bogucki, Mary S.

    1998-05-01

    Remote monitoring of physiologic data from individual high-risk workers distributed over time and space is a considerable challenge, often due to an inadequate capability to accurately integrate large amounts of data into usable information in real time. In this report, we have used the vertical and horizontal organization of the 'fireground' as a framework to design a distributed network of sensors. In this system, sensor output is linked through a hierarchical object-oriented programming process to accurately interpret physiological data, incorporate these data into a synchronous model, and relay processed data, trends, and predictions to members of the fire incident command structure. There are several unique aspects to this approach. The first is a process that accounts for variability in vital parameter values in each individual's normal physiologic response by including an adaptive network in each data process. This information is used by the model in an iterative process to baseline a 'normal' physiologic response to a given stress for each individual and to detect deviations that indicate dysfunction or a significant insult. The second unique capability of the system is that it orders the information for each user, including the subject, local company officers, medical personnel, and the incident commanders. Information can be retrieved and used for training exercises and after-action analysis. Finally, this system can easily be adapted to existing communication and processing links, while incorporating the best parts of current models through the use of object-oriented programming techniques. These modern software techniques are well suited to handling multiple data processes independently over time in a distributed network.

  12. Critical thresholds for eventual extinction in randomly disturbed population growth models.

    PubMed

    Peckham, Scott D; Waymire, Edward C; De Leenheer, Patrick

    2018-02-16

    This paper considers several single species growth models featuring a carrying capacity, which are subject to random disturbances that lead to instantaneous population reduction at the disturbance times. This is motivated in part by growing concerns about the impacts of climate change. Our main goal is to understand whether or not the species can persist in the long run. We consider the discrete-time stochastic process obtained by sampling the system immediately after the disturbances, and find various thresholds for several modes of convergence of this discrete process, including thresholds for the absence or existence of a positively supported invariant distribution. These thresholds are given explicitly in terms of the intensity and frequency of the disturbances on the one hand, and the population's growth characteristics on the other. We also perform a similar threshold analysis for the original continuous-time stochastic process, and obtain a formula that allows us to express the invariant distribution for this continuous-time process in terms of the invariant distribution of the discrete-time process, and vice versa. Examples illustrate that these distributions can differ, and this sends a cautionary message to practitioners who wish to parameterize these and related models using field data. Our analysis relies heavily on a particular feature shared by all the deterministic growth models considered here, namely that their solutions exhibit an exponentially weighted averaging property between a function of the initial condition, and the same function applied to the carrying capacity. This property is due to the fact that these systems can be transformed into affine systems.
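
    The flavour of the result can be illustrated with the logistic equation, whose solution indeed has the averaging property mentioned above: 1/x(t) is an exponentially weighted average of 1/x0 and 1/K. The persistence heuristic printed below, r·(mean gap) + ln(1 − severity) > 0, is our own linearisation at low density, not the paper's exact threshold:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    r, K = 1.0, 100.0                  # logistic growth rate and carrying capacity

    def logistic_flow(x0, t):
        # Closed-form logistic solution; note 1/x(t) = exp(-r*t)/x0 + (1-exp(-r*t))/K,
        # an exponentially weighted average of 1/x0 and 1/K.
        return 1.0 / (np.exp(-r * t) / x0 + (1.0 - np.exp(-r * t)) / K)

    def simulate(mean_gap, severity, n_disturbances=500, x0=50.0):
        """Sample the population immediately after each instantaneous disturbance."""
        x = x0
        for _ in range(n_disturbances):
            gap = rng.exponential(mean_gap)        # random time to next disturbance
            x = (1.0 - severity) * logistic_flow(x, gap)
        return x

    for gap, sev in ((1.0, 0.5), (0.5, 0.7)):      # one persistent, one doomed case
        margin = r * gap + np.log(1.0 - sev)       # heuristic persistence margin
        print(f"mean gap {gap}, severity {sev}: margin {margin:+.2f}, "
              f"population after 500 disturbances ~ {simulate(gap, sev):.3g}")
    ```

    A positive margin leaves the sampled population fluctuating around a positive level, while a negative margin drives it toward extinction, mirroring the threshold behaviour the paper analyses rigorously.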

  13. Flow processes on the catchment scale - modeling of initial structural states and hydrological behavior in an artificial exemplary catchment

    NASA Astrophysics Data System (ADS)

    Maurer, Thomas; Caviedes-Voullième, Daniel; Hinz, Christoph; Gerke, Horst H.

    2017-04-01

    Landscapes that are heavily disturbed or newly formed by either natural processes or human activity are in a state of disequilibrium. Their initial development is thus characterized by highly dynamic processes under all climatic conditions. The primary distribution and structure of the solid phase (i.e., the mineral particles forming the pore space) is one of the decisive factors for the development of the hydrological behavior of the eco-hydrological system and is therefore (co-)determining for its more or less stable final state. The artificially constructed 'Hühnerwasser' catchment (a 6 ha area located in the open-cast lignite mine Welzow-Süd, southern Brandenburg, Germany) is a landscape laboratory where the initial eco-hydrological development has been observed since 2005. The specific formation (or construction) processes generated characteristic sediment structures and distributions, resulting in a spatially heterogeneous initial state of the catchment. We developed a structure generator that simulates the characteristic distribution of the solid phase for such constructed landscapes. The program is able to generate quasi-realistic structures and sediment compositions on multiple spatial levels (1 cm up to 100 m scale). The generated structures can be i) conditioned on actual measurement values (e.g., soil texture and bulk distribution); ii) stochastically generated; or iii) calculated deterministically according to the geology and technical processes at the excavation site. Results are visualized using the GOCAD software package and the free software ParaView. Based on the 3D spatial sediment distributions, effective van Genuchten hydraulic parameters are calculated using pedotransfer functions. The hydraulic behavior of different sediment distributions (i.e., versions or variations of the catchment's porous body) is calculated using a numerical model developed by one of us (Caviedes-Voullième). Observation data from catchment monitoring are available for i) determining the boundary conditions (e.g., precipitation), and ii) the calibration and validation of the model (catchment discharge, groundwater). The analysis of multiple sediment distribution scenarios should allow us to approximately determine the influence of the starting conditions on the initial development of hydrological behavior. We present first flow modeling results for a reference (conditioned) catchment model and variations thereof. We will also give an outlook on the further methodical development of our approach.

  14. Using fuzzy rule-based knowledge model for optimum plating conditions search

    NASA Astrophysics Data System (ADS)

    Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Arzamastsev, A. A.; Glazkov, V. P.; L’vov, A. A.

    2018-03-01

    The paper discusses existing approaches to plating process modeling aimed at decreasing the unevenness of the thickness distribution of the plated coating. However, these approaches do not take into account the experience, knowledge, and intuition of decision-makers when searching for the optimal conditions of the electroplating process. An original approach to the search for optimal electroplating conditions is proposed, which uses a rule-based knowledge model and allows one to reduce the unevenness of the product thickness distribution. The block diagrams of a conventional control system for a galvanic process, as well as of a system based on the production model of knowledge, are considered. It is shown that a fuzzy production model of knowledge in the control system makes it possible to obtain galvanic coatings of a given thickness unevenness with a high degree of adequacy to the experimental data. The described experimental results confirm the theoretical conclusions.

  15. CATS - A process-based model for turbulent turbidite systems at the reservoir scale

    NASA Astrophysics Data System (ADS)

    Teles, Vanessa; Chauveau, Benoît; Joseph, Philippe; Weill, Pierre; Maktouf, Fakher

    2016-09-01

    The Cellular Automata for Turbidite systems (CATS) model is intended to simulate the fine architecture and facies distribution of turbidite reservoirs with a multi-event, process-based approach. The main processes of low-density turbulent turbidity flow are modeled: downslope sediment-laden flow, entrainment of ambient water, and erosion and deposition of several distinct lithologies. This numerical model, derived from Salles (2006) and Salles et al. (2007), proposes a new approach based on the Rouse concentration profile to consider the flow capacity to carry the sediment load in suspension. In CATS, the flow distribution on a given topography is modeled with local rules between neighboring cells (cellular automata) based on potential and kinetic energy balance and diffusion concepts. Input parameters are the initial flow parameters and a 3D topography at depositional time. An overview of CATS capabilities in different contexts is presented and discussed.

  16. The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Li, X.; Khain, A.; Simpson, S.

    2005-01-01

    Cloud microphysics is inevitably affected by the size distribution of smoke particles (CCN, cloud condensation nuclei) below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effect of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, a detailed spectral-bin microphysical scheme was implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation of the explicit spectral-bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops) and several types of ice particles [i.e., pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel, and frozen drops/hail]. Each type is described by a special size distribution function containing many categories (i.e., 33 bins). Atmospheric aerosols are also described using number-density size-distribution functions.
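
    To make the bin bookkeeping concrete, here is a minimal sketch of a 33-bin mass-doubling grid of the kind such schemes use, discretizing an assumed exponential droplet mass distribution. All numbers are illustrative choices of ours, not the GCE scheme's actual values.

    ```python
    import numpy as np

    m0 = 3.35e-14                    # kg, ~ a 2 µm radius droplet (illustrative)
    edges = m0 * 2.0 ** np.arange(34)          # 34 edges -> 33 mass-doubling bins
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centres
    widths = np.diff(edges)

    # Discretize an assumed exponential droplet mass distribution n(m).
    n0, m_mean = 1.0e8, 1.0e-9       # number-density scale and mean droplet mass
    n = n0 * np.exp(-centers / m_mean)

    number = np.sum(n * widths)                # total droplet number
    mass = np.sum(n * centers * widths)        # total condensed water mass
    print(f"{number:.3e} droplets carrying {mass:.3e} kg of water")
    ```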

  17. Raster Data Partitioning for Supporting Distributed GIS Processing

    NASA Astrophysics Data System (ADS)

    Nguyen Thai, B.; Olasz, A.

    2015-08-01

    In the geospatial sector, the big data concept has already had an impact. Several studies apply techniques originating in computer science to the GIS processing of huge amounts of geospatial data; in other studies, geospatial data are considered as having always been big data (Lee and Kang, 2015). Nevertheless, data acquisition methods have improved substantially, increasing not only the amount of raw data but also its spectral, spatial, and temporal resolution. A significant portion of big data is geospatial data, and the size of such data is growing rapidly, at least by 20% every year (Dasgupta, 2013). Given the increasing volume of raw data in different formats and representations, the wealth of information derived from these data sets is what constitutes the truly valuable result. However, computing capability and processing speed still face limitations, even when semi-automatic or automatic procedures are applied to complex geospatial data (Kristóf et al., 2014). Recently, distributed computing has reached many interdisciplinary areas of computer science, including remote sensing and geographic information processing. Cloud computing, even more, requires appropriate processing algorithms for distributing the handling of geospatial big data. The Map-Reduce programming model and distributed file systems have proven their capabilities for processing non-GIS big data, but it is sometimes inconvenient or inefficient to rewrite existing algorithms in the Map-Reduce model, and GIS raster data cannot be partitioned like text-based data, by lines or by bytes. Hence, we look for an alternative solution for data partitioning, data distribution, and execution of existing algorithms without rewriting them, or with only minor modifications. This paper gives a technical overview of currently available distributed computing environments, as well as of GIS (raster) data partitioning, distribution, and distributed processing of GIS algorithms. A proof-of-concept implementation has been made for raster data partitioning, distribution, and processing, and the first performance results have been compared against the commercial software ERDAS IMAGINE 2011 and 2014. Partitioning methods depend heavily on the application area; we may therefore consider data partitioning as a preprocessing step before applying processing services to the data. As a proof of concept, we have implemented a simple tile-based partitioning method that splits an image into smaller grids (NxM tiles), and we compare the processing time against existing methods using NDVI calculation. The concept is demonstrated using our own open source processing framework.
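
    A minimal sketch of the tile-based partitioning idea described above (our own illustration, not the paper's framework): split the raster into N×M tiles, optionally with a halo for window operations, and run the per-tile computation (here NDVI) independently, as a worker node would.

    ```python
    import numpy as np

    def tile(raster, n, m, overlap=0):
        """Split a 2-D raster into n x m tiles (optional halo for window ops),
        yielding (row, col, view) triples that could be shipped to workers."""
        H, W = raster.shape
        hs = [round(i * H / n) for i in range(n + 1)]
        ws = [round(j * W / m) for j in range(m + 1)]
        for i in range(n):
            for j in range(m):
                r0, r1 = max(hs[i] - overlap, 0), min(hs[i + 1] + overlap, H)
                c0, c1 = max(ws[j] - overlap, 0), min(ws[j + 1] + overlap, W)
                yield i, j, raster[r0:r1, c0:c1]

    # Per-tile NDVI = (NIR - RED) / (NIR + RED), computed independently per tile.
    rng = np.random.default_rng(0)
    nir, red = rng.random((1000, 1200)), rng.random((1000, 1200))
    ndvi = {(i, j): (tn - tr) / (tn + tr)
            for (i, j, tn), (_, _, tr) in zip(tile(nir, 4, 4), tile(red, 4, 4))}
    print(len(ndvi), "tiles processed")
    ```

    Because NDVI is a purely per-pixel operation, the tiles need no halo and the results can be mosaicked back without reconciliation; window-based operations would set overlap accordingly.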

  18. Identification and verification of critical performance dimensions. Phase 1 of the systematic process redesign of drug distribution.

    PubMed

    Colen, Hadewig B; Neef, Cees; Schuring, Roel W

    2003-06-01

    Worldwide, patient safety has become a major social policy problem for healthcare organisations. As in other organisations, the patients in our hospital suffer from an inadequate distribution process, as becomes clear from incident reports involving medication errors. Medisch Spectrum Twente is a top primary-care, clinical, teaching hospital. The hospital pharmacy takes care of 1070 internal beds and 1120 beds in an affiliated psychiatric hospital and nursing homes. At the beginning of 1999, our pharmacy group started a large interdisciplinary research project to develop a safe, effective, and efficient drug distribution system through systematic process redesign. The process redesign includes both organisational and technological components. This article describes the identification and verification of critical performance dimensions for the design of drug distribution processes in hospitals (phase 1 of the systematic process redesign of drug distribution). Based on reported errors and related causes, we suggested six generic performance domains. To assess the role of the performance dimensions, we used three approaches: flowcharts, interviews with stakeholders, and review of the existing performance using time studies and medication error studies. We were able to set targets for costs, quality of information, responsiveness, employee satisfaction, and degree of innovation. We still have to establish which drug distribution system represents the best and most cost-effective way of preventing medication errors. We intend to develop an evaluation model, using the critical performance dimensions as a starting point. This model can be used as a simulation template to compare different drug distribution concepts in order to define the differences in quality and cost-effectiveness.

  19. A comparative study of mixed exponential and Weibull distributions in a stochastic model replicating a tropical rainfall process

    NASA Astrophysics Data System (ADS)

    Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah

    2014-11-01

    A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared with that of a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. Graphical assessment revealed that the statistical characteristics of the simulated series for both models compared reasonably well with those of the observed series. However, a further assessment using the AIC, BIC, and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates the proposed model, using a mixed exponential distribution, is the better choice for generating synthetic data for ungauged sites or for sites with insufficient data within the limits of the fitted region.
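
    The proposed rain-cell intensity component is straightforward to sample: with probability p draw from one exponential, otherwise from the other. The parameter values below are illustrative, not the fitted Damansara basin values; the print shows the heavier-than-exponential tail (coefficient of variation above 1) that motivates the mixture.

    ```python
    import numpy as np

    def mixed_exponential(p, mean1, mean2, size, rng):
        """With probability p draw from Exp(mean1), otherwise from Exp(mean2)."""
        choose = rng.random(size) < p
        return np.where(choose,
                        rng.exponential(mean1, size),
                        rng.exponential(mean2, size))

    rng = np.random.default_rng(0)
    # Illustrative parameters: many weak cells, a few intense convective ones.
    x = mixed_exponential(p=0.8, mean1=1.0, mean2=8.0, size=100_000, rng=rng)
    print(f"mean = {x.mean():.2f}, cv = {x.std() / x.mean():.2f} "
          f"(cv = 1 for a single exponential)")
    ```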

  20. Modeling and Simulation of Quenching and Tempering Process in steels

    NASA Astrophysics Data System (ADS)

    Deng, Xiaohu; Ju, Dongying

    Quenching and tempering (Q&T) is a combined heat treatment process used to achieve maximum toughness and ductility at a specified hardness and strength. It is important to develop a mathematical model of the quenching and tempering process in order to satisfy mechanical property requirements at low cost. This paper presents a modified model to predict the structural evolution and hardness distribution during the quenching and tempering of steels. The model takes into account tempering parameters, carbon content, and isothermal and non-isothermal transformations. Moreover, the precipitation of transition carbides, the decomposition of retained austenite, and the precipitation of cementite can each be simulated. Hardness distributions of the quenched and tempered workpiece are predicted by an experimental regression equation. In order to validate the model, it was employed to predict the tempering of 80MnCr5 steel. The predicted precipitation dynamics of transition carbides and cementite are consistent with previous experimental and simulated results from the literature. The model was then implemented within the framework of the simulation code COSMAP to simulate the microstructure, stress, and distortion in the heat-treated component, and applied to simulate the Q&T process of J55 steel. The calculated results show good agreement with the experimental ones, indicating that the model is effective for the simulation of the Q&T process of steels.

  1. Single Plant Root System Modeling under Soil Moisture Variation

    NASA Astrophysics Data System (ADS)

    Yabusaki, S.; Fang, Y.; Chen, X.; Scheibe, T. D.

    2016-12-01

    A prognostic Virtual Plant-Atmosphere-Soil System (vPASS) model is being developed that integrates comprehensively detailed mechanistic single plant modeling with microbial, atmospheric, and soil system processes in its immediate environment. Three broad areas of process module development are targeted: (1) incorporating models for root growth and function, rhizosphere interactions with bacteria and other organisms, and litter decomposition and soil respiration into established porous media flow and reactive transport models; (2) incorporating root/shoot transport, growth, photosynthesis and carbon allocation process models into an integrated plant physiology model; and (3) incorporating transpiration, Volatile Organic Compounds (VOC) emission, particulate deposition and local atmospheric processes into a coupled plant/atmosphere model. The integrated plant ecosystem simulation capability is being developed as open source process modules and associated interfaces under a modeling framework. The initial focus addresses the coupling of root growth, the vascular transport system, and soil under drought scenarios. Two types of root water uptake modeling approaches are tested: continuous root distribution and constitutive root system architecture. The continuous root distribution models are based on spatially averaged root development process parameters, which are relatively straightforward to accommodate in the continuum soil flow and reactive transport module. Conversely, the constitutive root system architecture models use root growth rates, root growth direction, and root branching to evolve explicit root geometries. The branching topologies require more complex data structures and additional input parameters. Preliminary results are presented for root model development and the vascular response to temporal and spatial variations in soil conditions.

  2. Diffusion theory of decision making in continuous report.

    PubMed

    Smith, Philip L

    2016-07-01

    I present a diffusion model for decision making in continuous report tasks, in which a continuous, circularly distributed, stimulus attribute in working memory is matched to a representation of the attribute in the stimulus display. Memory retrieval is modeled as a 2-dimensional diffusion process with vector-valued drift on a disk, whose bounding circle represents the decision criterion. The direction and magnitude of the drift vector describe the identity of the stimulus and the quality of its representation in memory, respectively. The point at which the diffusion exits the disk determines the reported value of the attribute and the time to exit the disk determines the decision time. Expressions for the joint distribution of decision times and report outcomes are obtained by means of the Girsanov change-of-measure theorem, which allows the properties of the nonzero-drift diffusion process to be characterized as a function of a Euclidean-distance Bessel process. Predicted report precision is equal to the product of the decision criterion and the drift magnitude and follows a von Mises distribution, in agreement with the treatment of precision in the working memory literature. Trial-to-trial variability in criterion and drift rate leads, respectively, to direct and inverse relationships between report accuracy and decision times, in agreement with, and generalizing, the standard diffusion model of 2-choice decisions. The 2-dimensional model provides a process account of working memory precision and its relationship with the diffusion model, and a new way to investigate the properties of working memory, via the distributions of decision times. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
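
    The decision process lends itself to direct simulation. The sketch below (an illustration consistent with the model's description, not the author's code) integrates a 2-D diffusion with constant drift until it exits the bounding circle, recording the exit angle as the reported attribute value and the exit time as the decision time; all parameter values are assumed.

        import numpy as np

        rng = np.random.default_rng(0)

        # A particle starts at the disk centre and diffuses with a constant
        # drift vector until it crosses the bounding circle (the decision
        # criterion). Exit angle = report; exit time = decision time.
        def simulate_trial(drift_angle=0.0, drift_mag=1.2, criterion=1.0,
                           sigma=1.0, dt=1e-3, max_steps=200_000):
            v = drift_mag * np.array([np.cos(drift_angle), np.sin(drift_angle)])
            x = np.zeros(2)
            for step in range(1, max_steps + 1):
                x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
                if np.hypot(*x) >= criterion:
                    return np.arctan2(x[1], x[0]), step * dt  # (report, RT)
            raise RuntimeError("no boundary crossing; increase max_steps")

        reports, rts = zip(*(simulate_trial() for _ in range(1000)))
        reports = np.asarray(reports)
        # Report errors concentrate around the true angle (0 here); the model
        # predicts approximately von Mises errors with precision close to
        # criterion * drift magnitude.
        print("circular mean error:", np.angle(np.mean(np.exp(1j * reports))))
        print("mean decision time :", np.mean(rts))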

  3. Optimization of cell seeding in a 2D bio-scaffold system using computational models.

    PubMed

    Ho, Nicholas; Chua, Matthew; Chui, Chee-Kong

    2017-05-01

    The cell expansion process is a crucial part of generating cells on a large scale in a bioreactor system. Hence, it is important to set operating conditions (e.g. initial cell seeding distribution, culture medium flow rate) at an optimal level. Often, the initial cell seeding distribution factor is neglected or overlooked in the design of a bioreactor using conventional seeding distribution methods. This paper proposes a novel seeding distribution method that aims to maximize cell growth and minimize production time/cost. The proposed method utilizes two computational models: the first model represents cell growth patterns, whereas the second model determines optimal initial cell seeding positions for adherent cell expansions. Cell growth simulation with the first model demonstrates that the model can represent various cell types with known probabilities. The second model combines combinatorial optimization, Monte Carlo methods and concepts of the first model, and is used to design a multi-layer 2D bio-scaffold system that increases cell production efficiency in bioreactor applications. Simulation results have shown that the recommended input configurations obtained from the proposed optimization method are optimal, and illustrate the effectiveness of the method. The potential of the proposed seeding distribution method as a useful tool to optimize the cell expansion process in modern bioreactor system applications is highlighted. Copyright © 2017 Elsevier Ltd. All rights reserved.
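
    A toy version of the growth model conveys why the initial seeding distribution matters. In the sketch below (an independent illustration, not the authors' implementation), cells on a lattice divide into empty neighbours with a fixed probability, so clustered seeds suffer contact inhibition and expand more slowly than spread-out seeds; grid size, step count and division probability are assumed values.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy Monte Carlo of adherent cell expansion on a 2-D scaffold grid.
        # Each occupied site divides into a randomly chosen empty neighbour
        # with probability p_div per step.
        def grow(seed_positions, size=50, steps=60, p_div=0.3):
            grid = np.zeros((size, size), dtype=bool)
            for r, c in seed_positions:
                grid[r, c] = True
            for _ in range(steps):
                for r, c in np.argwhere(grid):
                    if rng.random() >= p_div:
                        continue
                    nbrs = [(r + dr, c + dc)
                            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if 0 <= r + dr < size and 0 <= c + dc < size
                            and not grid[r + dr, c + dc]]
                    if nbrs:
                        rr, cc = nbrs[rng.integers(len(nbrs))]
                        grid[rr, cc] = True
            return int(grid.sum())

        # Same number of seeds (16), two different initial distributions.
        clustered = [(25 + dr, 25 + dc) for dr in range(4) for dc in range(4)]
        spread = [(r, c) for r in range(5, 50, 12) for c in range(5, 50, 12)]
        print("clustered seeding ->", grow(clustered), "cells")
        print("spread seeding    ->", grow(spread), "cells")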

  4. Predicting species distributions for conservation decisions

    PubMed Central

    Guisan, Antoine; Tingley, Reid; Baumgartner, John B; Naujokaitis-Lewis, Ilona; Sutcliffe, Patricia R; Tulloch, Ayesha I T; Regan, Tracey J; Brotons, Lluis; McDonald-Madden, Eve; Mantyka-Pringle, Chrystal; Martin, Tara G; Rhodes, Jonathan R; Maggini, Ramona; Setterfield, Samantha A; Elith, Jane; Schwartz, Mark W; Wintle, Brendan A; Broennimann, Olivier; Austin, Mike; Ferrier, Simon; Kearney, Michael R; Possingham, Hugh P; Buckley, Yvonne M

    2013-01-01

    Species distribution models (SDMs) are increasingly proposed to support conservation decision making. However, evidence of SDMs supporting solutions for on-ground conservation problems is still scarce in the scientific literature. Here, we show that successful examples exist but are still largely hidden in the grey literature, and thus less accessible for analysis and learning. Furthermore, the decision framework within which SDMs are used is rarely made explicit. Using case studies from biological invasions, identification of critical habitats, reserve selection and translocation of endangered species, we propose that SDMs may be tailored to suit a range of decision-making contexts when used within a structured and transparent decision-making process. To construct appropriate SDMs to more effectively guide conservation actions, modellers need to better understand the decision process, and decision makers need to provide feedback to modellers regarding the actual use of SDMs to support conservation decisions. This could be facilitated by individuals or institutions playing the role of ‘translators’ between modellers and decision makers. We encourage species distribution modellers to get involved in real decision-making processes that will benefit from their technical input; this strategy has the potential to better bridge theory and practice, and to contribute to improving both scientific knowledge and conservation outcomes. PMID:24134332

  5. Derivation of Markov processes that violate detailed balance

    NASA Astrophysics Data System (ADS)

    Lee, Julian

    2018-03-01

    Time-reversal symmetry of the microscopic laws dictates that the equilibrium distribution of a stochastic process must obey the condition of detailed balance. However, cyclic Markov processes that do not admit equilibrium distributions with detailed balance are often used to model systems driven out of equilibrium by external agents. I show that for a Markov model without detailed balance, an extended Markov model can be constructed, which explicitly includes the degrees of freedom for the driving agent and satisfies the detailed balance condition. The original cyclic Markov model for the driven system is then recovered as an approximation at early times by summing over the degrees of freedom for the driving agent. I also show that the widely accepted expression for the entropy production in a cyclic Markov model is actually a time derivative of an entropy component in the extended model. Further, I present an analytic expression for the entropy component that is hidden in the cyclic Markov model.
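
    The starting point of the argument, a cyclic Markov model that violates detailed balance, is easy to exhibit numerically. The sketch below builds a three-state cycle with unequal clockwise and counter-clockwise rates (illustrative values), computes the stationary distribution, and verifies that the stationary probability fluxes pi_i * K_ij and pi_j * K_ji do not balance.

        import numpy as np

        # Three-state cyclic Markov chain driven in one direction (kp != km),
        # the standard example of a model without detailed balance.
        kp, km = 2.0, 0.5   # clockwise / counter-clockwise rates (assumed)
        n = 3
        K = np.zeros((n, n))
        for i in range(n):
            K[i, (i + 1) % n] = kp
            K[i, (i - 1) % n] = km
        np.fill_diagonal(K, -K.sum(axis=1))   # generator rows sum to zero

        # Stationary distribution: left null vector of the generator.
        w, v = np.linalg.eig(K.T)
        pi = np.real(v[:, np.argmin(np.abs(w))])
        pi /= pi.sum()                        # uniform here, by symmetry

        for i in range(n):
            for j in range(i + 1, n):
                flux_ij, flux_ji = pi[i] * K[i, j], pi[j] * K[j, i]
                print(f"{i}->{j}: {flux_ij:.3f}  {j}->{i}: {flux_ji:.3f}")
        # Unequal fluxes confirm a steady-state probability current around
        # the cycle, i.e. the stationary state is not an equilibrium state.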

  6. Model Calibration in Watershed Hydrology

    NASA Technical Reports Server (NTRS)

    Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2009-01-01

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.

  7. Time interval between successive trading in foreign currency market: from microscopic to macroscopic

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2004-12-01

    Recently, it has been shown that the inter-transaction interval (ITI) distribution of foreign currency rates has a fat tail. In order to understand the statistical properties of the ITI, a dealer model with N interacting agents is proposed. Numerical simulations confirm that the ITI distribution of the dealer model has a power-law tail. A random multiplicative process (RMP) can be approximately derived from the ITI dynamics of the dealer model. Consequently, we conclude that the power-law tail of the ITI distribution of the dealer model is a result of the RMP.
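
    The mechanism invoked here, a random multiplicative process, generically produces power-law tails when the multiplicative contraction is offset by a lower bound. The sketch below is a generic RMP illustration with assumed parameters, not the dealer model itself.

        import numpy as np

        rng = np.random.default_rng(7)

        # Random multiplicative process with a reflecting floor, a standard
        # mechanism that yields power-law tails: x_{t+1} = b_t * x_t with
        # E[ln b] < 0, reflected at a lower bound.
        steps, x, floor = 200_000, 1.0, 0.1
        xs = np.empty(steps)
        for t in range(steps):
            x = max(floor, x * rng.lognormal(mean=-0.05, sigma=0.35))
            xs[t] = x

        # Tail check: the survival function is roughly linear on log-log axes.
        tail = np.sort(xs)[-5000:]
        ranks = np.arange(len(tail), 0, -1)
        slope = np.polyfit(np.log(tail), np.log(ranks), 1)[0]
        print("estimated tail exponent ~", -slope)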

  8. Size distribution of dust grains: A problem of self-similarity

    NASA Technical Reports Server (NTRS)

    Henning, TH.; Dorschner, J.; Guertler, J.

    1989-01-01

    Distribution functions describing the results of natural processes frequently show the shape of power laws, e.g., mass functions of stars and molecular clouds, the velocity spectrum of turbulence, and the size distributions of asteroids, micrometeorites and also interstellar dust grains. It is an open question whether this behavior is simply a result of the chosen mathematical representation of the observational data or reflects a deep-seated principle of nature. The authors suppose the latter to be the case. Using a dust model consisting of silicate and graphite grains, Mathis et al. (1977) showed that the interstellar extinction curve can be represented by taking as a basis a grain radii distribution of power-law type, n(a) ∝ a^(-p) with 3.3 ≤ p ≤ 3.6 (example 1). A different approach to understanding power laws like that in example 1 becomes possible through the theory of self-similar processes (scale invariance). The beta model of turbulence (Frisch et al., 1978) leads in an elementary way to the concept of the self-similarity dimension D, a special case of Mandelbrot's (1977) fractal dimension. In the frame of this beta model, it is supposed that at each stage of a cascade the system decays into N clumps and that only the fraction β of them remains active further on. An important feature of this model is that the active eddies become less and less space-filling. In the following, the authors assume that grain-grain collisions are such a scale-invariant process and that the remaining grains are the inactive (frozen) clumps of the cascade. In this way, a size distribution n(a) da ∝ a^(-(D+1)) da (example 2) results. It seems highly probable that the power-law character of the size distribution of interstellar dust grains is the result of a self-similarity process. We cannot, however, exclude that the process leading to the interstellar grain size distribution is not fragmentation at all. It could be, e.g., diffusion-limited growth as discussed by Sander (1986), who applied the theory of fractal geometry to the classification of non-equilibrium growth processes and obtained D = 2.4 for diffusion-limited aggregation in 3d space.
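
    The cascade argument can be checked directly. The sketch below implements a mass-conserving fragmentation cascade in which each active clump splits into N subclumps and a fraction β of them stays active; the frozen-clump counts per stage then follow an a^(-D) scaling, so that the differential distribution goes as a^(-(D+1)). The values of N and β, and the mapping D = log(Nβ)/log(N^(1/3)) for this particular cascade, are assumptions of the sketch.

        import numpy as np

        # Beta-model fragmentation cascade for grain sizes. Each active clump
        # of radius a splits into N mass-conserving subclumps of radius
        # a / N**(1/3); a fraction beta of the subclumps stays active, the
        # rest freeze out and are counted as final grains.
        N, beta, stages = 8, 0.6, 12
        frozen_sizes, frozen_counts = [], []
        active = 1.0        # expected number of active clumps (weights)
        a = 1.0
        for _ in range(stages):
            a /= N ** (1 / 3)
            children = active * N
            frozen_sizes.append(a)
            frozen_counts.append(children * (1 - beta))
            active = children * beta

        # Counts per cascade stage (a logarithmic size grid) scale as a^-D;
        # dividing by the bin width da ~ a gives the differential law
        # n(a) ~ a^-(D+1) quoted in the record.
        slope = np.polyfit(np.log(frozen_sizes), np.log(frozen_counts), 1)[0]
        D = np.log(N * beta) / np.log(N ** (1 / 3))
        print("cascade slope:", slope, " expected -D:", -D)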

  9. Features of the development of the earth’s surface displacement process during coal mining in the Eastern Donbas

    NASA Astrophysics Data System (ADS)

    Posylniy, Yu V.; Versilov, S. O.; Shurygin, D. N.; Kalinchenko, V. M.

    2017-10-01

    The results of studies of the earth’s surface displacement process caused by the influence of adjacent longwalls are presented. It is established that the actual distributions of soil subsidence along the dip and rise of the seam, with the same boundaries of the settlement process, differ both from each other and from the subsidence distribution recommended by the rules for the protection of structures. Applying a new boundary criterion - a relative subsidence of 0.03 - makes it possible to go from two distributions to a single distribution, which still differs from the subsidence distribution of the protection rules. The use of a new geometrical element - a virtual point of the subsidence trough - allows the actual subsidence distribution to be transformed into the model distribution of the rules for the protection of structures. When the subsidence curves are transformed, the boundary points shift and, consequently, so do the boundary angles.

  10. Sequential data assimilation for a distributed hydrologic model considering different time scale of internal processes

    NASA Astrophysics Data System (ADS)

    Noh, S.; Tachikawa, Y.; Shiiba, M.; Kim, S.

    2011-12-01

    Applications of sequential data assimilation methods have been increasing in hydrology as a way to reduce uncertainty in model predictions. In a distributed hydrologic model, there are many types of state variables, and these variables interact with one another on different time scales. However, a framework for dealing with the delayed response that originates from the different time scales of hydrologic processes has not been thoroughly addressed in hydrologic data assimilation. In this study, we propose a lagged filtering scheme to consider the lagged response of internal states in a distributed hydrologic model, using two filters: particle filtering (PF) and ensemble Kalman filtering (EnKF). The EnKF is a widely used sub-optimal filter that computes efficiently with a limited number of ensemble members but still relies on a Gaussian approximation. The PF can be an alternative, in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles without any assumptions about the nature of the distributions involved. For the PF, an advanced particle regularization scheme is also implemented to preserve the diversity of the particle system. For the EnKF, the ensemble square root filter (EnSRF) is implemented. Each filtering method is parallelized and run on a high-performance computing system. A distributed hydrologic model, the water and energy transfer processes (WEP) model, is applied to the Katsura River catchment, Japan, to demonstrate the applicability of the proposed approaches. Forecasted results via PF and EnKF are compared and analyzed in terms of prediction accuracy and probabilistic adequacy. Discussions focus on the prospects and limitations of each data assimilation method.
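
    For readers unfamiliar with the EnKF side of the comparison, the sketch below shows the standard perturbed-observation EnKF analysis step (a textbook form, not the study's parallel implementation); the state dimension, ensemble size and error levels are illustrative.

        import numpy as np

        rng = np.random.default_rng(11)

        # Stochastic (perturbed-observation) EnKF analysis step.
        def enkf_update(X, y, H, R):
            """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation;
            H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs error."""
            n_ens = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
            P_HT = A @ (H @ A).T / (n_ens - 1)          # P H^T from the ensemble
            S = H @ P_HT + R                            # innovation covariance
            K = P_HT @ np.linalg.inv(S)                 # Kalman gain
            Y = y[:, None] + rng.multivariate_normal(
                np.zeros(len(y)), R, size=n_ens).T      # perturbed observations
            return X + K @ (Y - H @ X)

        # Tiny example: 3 state variables, 1 observed, 50 members.
        X = rng.normal(1.0, 0.5, size=(3, 50))
        H = np.array([[1.0, 0.0, 0.0]])
        R = np.array([[0.1 ** 2]])
        Xa = enkf_update(X, np.array([1.8]), H, R)
        print("prior mean    :", X.mean(axis=1))
        print("posterior mean:", Xa.mean(axis=1))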

  11. Influence of mesh structure on 2D full shallow water equations and SCS Curve Number simulation of rainfall/runoff events

    NASA Astrophysics Data System (ADS)

    Caviedes-Voullième, Daniel; García-Navarro, Pilar; Murillo, Javier

    2012-07-01

    Hydrological simulation of rain-runoff processes is often performed with lumped models which rely on calibration to generate storm hydrographs and study catchment response to rain. In this paper, a distributed, physically-based numerical model is used for runoff simulation in a mountain catchment. This approach offers two advantages. The first is that by using shallow-water equations for runoff flow, there is less freedom to calibrate routing parameters (as compared to, for example, synthetic hydrograph methods). The second is that spatial distributions of water depth and velocity can be obtained. Furthermore, interactions among the various hydrological processes can be modeled in a physically-based approach which may depend on transient and spatially distributed factors. On the other hand, the numerical approach relies on accurate terrain representation and mesh selection, which also significantly affects the computational cost of the simulations. Hence, we investigate the response of a gauged catchment with this distributed approach. The methodology consists of analyzing the effects that the mesh has on the simulations by using a range of meshes. Next, friction is applied to the model and the response to variations and interaction with the mesh is studied. Finally, a first approach with the well-known SCS Curve Number method is studied to evaluate its behavior when coupled with a shallow-water model for runoff flow. The results show that mesh selection is of great importance, since it may affect the results with a magnitude as large as that of physical factors such as friction. Furthermore, results proved to be less sensitive to the spatial distribution of roughness than to mesh properties. Finally, the results indicate that SCS-CN may not be suitable for simulating hydrological processes together with a shallow-water model.

  12. Gsflow-py: An integrated hydrologic model development tool

    NASA Astrophysics Data System (ADS)

    Gardner, M.; Niswonger, R. G.; Morton, C.; Henson, W.; Huntington, J. L.

    2017-12-01

    Integrated hydrologic modeling encompasses a vast number of processes and specifications, variable in time and space, and the development of model datasets can be arduous. Model input construction techniques have not been formalized or made easily reproducible. Creating the input files for integrated hydrologic models (IHMs) requires complex GIS processing of raster and vector datasets from various sources. Developing a stream network topology that is consistent with the model-resolution digital elevation model is important for robust simulation of surface water and groundwater exchanges. Distribution of meteorologic parameters over the model domain is difficult in complex terrain at the model resolution scale, but is necessary to drive realistic simulations. Historically, development of input data for IHMs has required extensive GIS and computer programming expertise, which has restricted the use of IHMs to research groups with available financial, human, and technical resources. Here we present a series of Python scripts that provide a formalized technique for the parameterization and development of integrated hydrologic model inputs for GSFLOW. With some modifications, this process could be applied to any regular-grid hydrologic model. This Python toolkit automates many of the necessary and laborious processes of parameterization, including stream network development and cascade routing, land coverages, and meteorological distribution over the model domain.
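
    As a concrete illustration of the meteorological distribution step that such toolkits automate, the sketch below maps station precipitation onto grid-cell centroids with inverse-distance weighting; this is a generic technique chosen for clarity and is not necessarily the scheme implemented in the toolkit. All coordinates and values are made up.

        import numpy as np

        # Inverse-distance weighting of station values onto a model grid.
        def idw(station_xy, station_vals, grid_xy, power=2.0, eps=1e-9):
            d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :],
                               axis=2)
            w = 1.0 / (d + eps) ** power
            return (w * station_vals).sum(axis=1) / w.sum(axis=1)

        stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])  # coords
        precip = np.array([12.0, 4.0, 20.0])                        # mm/day
        xs, ys = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 8, 4))
        cells = np.column_stack([xs.ravel(), ys.ravel()])
        print(idw(stations, precip, cells).reshape(4, 5).round(1))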

  13. A neurally plausible parallel distributed processing model of event-related potential word reading data.

    PubMed

    Laszlo, Sarah; Plaut, David C

    2012-03-01

    The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between explicit, computational models and physiological data collected during the performance of cognitive tasks, we developed a PDP model of visual word recognition which simulates key results from the ERP reading literature, while simultaneously being able to successfully perform lexical decision, a benchmark task for reading models. Simulations reveal that the model's success depends on the implementation of several neurally plausible features in its architecture, which are sufficiently domain-general to be relevant to cognitive modeling more generally. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Post-processing of multi-model ensemble river discharge forecasts using censored EMOS

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Lisniak, Dmytro; Klein, Bastian

    2014-05-01

    When forecasting water levels and river discharge, ensemble weather forecasts are used as meteorological input to hydrologic process models. As hydrologic models are imperfect and the input ensembles tend to be biased and underdispersed, the output ensemble forecasts for river runoff are typically biased and underdispersed, too. Thus, statistical post-processing is required in order to achieve calibrated and sharp predictions. Standard post-processing methods such as Ensemble Model Output Statistics (EMOS), which have their origins in meteorological forecasting, are now increasingly being used in hydrologic applications. Here we consider two sub-catchments of the River Rhine, for which the forecasting system of the Federal Institute of Hydrology (BfG) uses runoff data that are censored below predefined thresholds. To address this methodological challenge, we develop a censored EMOS method that is tailored to such data. The censored EMOS forecast distribution can be understood as a mixture of a point mass at the censoring threshold and a continuous part based on a truncated normal distribution. Parameter estimates of the censored EMOS model are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over the training dataset. Model fitting on Box-Cox transformed data allows us to take account of the positive skewness of river discharge distributions. In order to achieve realistic forecast scenarios over the entire range of lead-times, multivariate extensions are needed; to this end, we smooth the marginal parameter estimates over lead-times. To obtain realistic scenarios of discharge evolution over time, the marginal distributions also have to be linked with each other: the multivariate dependence structure can either be adopted from the raw ensemble, as in Ensemble Copula Coupling (ECC), or estimated from observations in a training period. The censored EMOS model has been applied to multi-model ensemble forecasts issued on a daily basis over a period of three years. For the two catchments considered, this resulted in well calibrated and sharp forecast distributions over all lead-times from 1 to 114 h. Training observations tended to be better indicators of the dependence structure than the raw ensemble.
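
    The structure of the censored forecast distribution is easy to state in code. The sketch below defines the predictive CDF implied by censoring a normal distribution at a threshold c: all probability mass of N(mu, sigma^2) below c is lumped into a point mass at c, while the continuous part above c follows the normal curve. The parameter values are illustrative stand-ins for minimum-CRPS estimates.

        import numpy as np
        from scipy import stats

        # CDF of a censored-normal forecast: mass below the threshold c is
        # lumped at c; above c the CDF follows the underlying normal.
        def censored_emos_cdf(q, mu, sigma, c):
            q = np.asarray(q, dtype=float)
            return np.where(q < c, 0.0, stats.norm.cdf(q, mu, sigma))

        mu, sigma, c = 1.4, 0.8, 0.5   # assumed post-processed parameters
        print("P(runoff <= c)   =", censored_emos_cdf(c, mu, sigma, c))
        print("P(runoff <= 2.0) =", censored_emos_cdf(2.0, mu, sigma, c))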

  15. Vascular system modeling in parallel environment - distributed and shared memory approaches

    PubMed Central

    Jurczuk, Krzysztof; Kretowski, Marek; Bezy-Wendling, Johanne

    2011-01-01

    The paper presents two approaches in parallel modeling of vascular system development in internal organs. In the first approach, new parts of tissue are distributed among processors and each processor is responsible for perfusing its assigned parts of tissue to all vascular trees. Communication between processors is accomplished by passing messages and therefore this algorithm is perfectly suited for distributed memory architectures. The second approach is designed for shared memory machines. It parallelizes the perfusion process during which individual processing units perform calculations concerning different vascular trees. The experimental results, performed on a computing cluster and multi-core machines, show that both algorithms provide a significant speedup. PMID:21550891

  16. Lipid droplets fusion in adipocyte differentiated 3T3-L1 cells: A Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boschi, Federico, E-mail: federico.boschi@univr.it; Department of Computer Science, University of Verona, Strada Le Grazie 15, 37134 Verona; Rizzatti, Vanni

    Several worldwide human diseases, like obesity, type 2 diabetes, hepatic steatosis, atherosclerosis and other metabolic pathologies, are related to the excessive accumulation of lipids in cells. Lipids accumulate in spherical cellular inclusions called lipid droplets (LDs), whose sizes in adipocytes range from fractions of a micrometer to about one hundred micrometers. It has been suggested that LDs can grow in size due to a fusion process, by which a larger LD is obtained with spherical shape and volume equal to the sum of the progenitors’ ones. In this study, the size distribution of two populations of LDs was analyzed in immature and mature (5-day differentiated) 3T3-L1 adipocytes (first and second populations, respectively) after Oil Red O staining. A Monte Carlo simulation of interaction between LDs was developed in order to quantify the size distribution and the number of fusion events needed to obtain the size distribution of the second population starting from the first one. Four models are presented here, based on different kinds of interaction: a surface-weighted interaction (R2 model), a volume-weighted interaction (R3 model), a random interaction (Random model) and an interaction related to the place where the LDs are born (Nearest model). The last two models mimic quite well the behavior found in the experimental data. This work represents a first step in developing numerical simulations of the LD growth process. Due to the complex phenomena involving LDs (absorption, growth through additional neutral lipid deposition in existing droplets, de novo formation and catabolism), the study focuses on the fusion process. The results suggest that, to obtain the observed size distribution, a number of fusion events comparable with the number of LDs themselves is needed. Moreover, the MC approach proves to be a powerful tool for investigating the LD growth process. Highlights: • We evaluated the role of the fusion process in the synthesis of lipid droplets. • We compared the size distribution of the lipid droplets in immature and mature cells. • We used the Monte Carlo simulation approach, simulating 10 thousand fusion events. • Four different interaction models between the lipid droplets were tested. • The best model mimicking the experimental measures was selected.
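
    The fusion step itself is compact enough to sketch. Below, pairs of droplets are selected with probability proportional to a power of their radii (exponent 2 for the surface-weighted R2 model, 3 for the volume-weighted R3 model, 0 for random pairing) and replaced by one droplet of the combined volume; the initial lognormal sizes, droplet count and event count are assumptions of the sketch, not the measured populations.

        import numpy as np

        rng = np.random.default_rng(5)

        # Volume-conserving lipid-droplet fusion Monte Carlo.
        def fuse(radii, n_events, weight_exponent):
            """weight_exponent 2 -> surface-weighted (R2), 3 -> volume-weighted
            (R3), 0 -> random pairing."""
            radii = list(radii)
            for _ in range(n_events):
                w = np.array(radii) ** weight_exponent
                i, j = rng.choice(len(radii), size=2, replace=False,
                                  p=w / w.sum())
                r_new = (radii[i] ** 3 + radii[j] ** 3) ** (1 / 3)  # volume
                for k in sorted((i, j), reverse=True):
                    radii.pop(k)
                radii.append(r_new)
            return np.array(radii)

        initial = rng.lognormal(mean=0.0, sigma=0.4, size=2000)  # immature LDs
        for name, expo in (("R2", 2), ("R3", 3), ("Random", 0)):
            final = fuse(initial, n_events=1000, weight_exponent=expo)
            print(f"{name}: n={len(final)}, mean radius={final.mean():.2f}")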

  17. Students' Development of Structure Sense for the Distributive Law

    ERIC Educational Resources Information Center

    Schüler-Meyer, Alexander

    2017-01-01

    After being introduced to the distributive law in meaningful contexts, students need to extend its scope of application to unfamiliar expressions. In this article, a process model for the development of structure sense is developed. Building on this model, this article reports on a design research project in which exercise tasks support students…

  18. Atmospheric Ozone 1985. Assessment of our understanding of the processes controlling its present distribution and change, volume 3

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Topics addressed include: assessment models; model predictions of ozone changes; ozone and temperature trends; trace gas effects on climate; kinetics and photochemical data base; spectroscopic data base (infrared to microwave); instrument intercomparisons and assessments; and monthly mean distribution of ozone and temperature.

  19. Distribution of Chinese names

    NASA Astrophysics Data System (ADS)

    Huang, Ding-wei

    2013-03-01

    We present a statistical model for the distribution of Chinese names. Both family names and given names are studied on the same basis. Naively, one would expect the distribution of family names to be very different from that of given names: one is affected mostly by genealogy, while the other can be dominated by cultural effects. However, we find that both distributions can be well described by the same model. Various scaling behaviors can be understood as the result of stochastic processes. The exponents of the different power-law distributions are controlled by a single parameter. We also comment on the significance of full-name repetition in the Chinese population.

  20. Numerical modelling of distributed vibration sensor based on phase-sensitive OTDR

    NASA Astrophysics Data System (ADS)

    Masoudi, A.; Newson, T. P.

    2017-04-01

    A distributed vibration sensor based on phase-sensitive OTDR is numerically modeled. The advantage of modeling the building blocks of the sensor individually and combining the blocks to analyse the behavior of the sensing system is discussed. It is shown that the numerical model can accurately imitate the response of the experimental setup to dynamic perturbations, using a signal processing procedure similar to that used to extract the phase information from the experimental sensing setup.

  1. An overview of current applications, challenges, and future trends in distributed process-based models in hydrology

    USGS Publications Warehouse

    Fatichi, Simone; Vivoni, Enrique R.; Ogden, Fred L; Ivanov, Valeriy Y; Mirus, Benjamin B.; Gochis, David; Downer, Charles W; Camporese, Matteo; Davison, Jason H; Ebel, Brian A.; Jones, Norm; Kim, Jongho; Mascaro, Giuseppe; Niswonger, Richard G.; Restrepo, Pedro; Rigon, Riccardo; Shen, Chaopeng; Sulis, Mauro; Tarboton, David

    2016-01-01

    Process-based hydrological models have a long history dating back to the 1960s. Criticized by some as over-parameterized, overly complex, and difficult to use, a more nuanced view is that these tools are necessary in many situations and, in a certain class of problems, they are the most appropriate type of hydrological model. This is especially the case in situations where knowledge of flow paths or distributed state variables and/or preservation of physical constraints is important. Examples of this include: spatiotemporal variability of soil moisture, groundwater flow and runoff generation, sediment and contaminant transport, or when feedbacks among various Earth system processes or understanding the impacts of climate non-stationarity are of primary concern. These are situations where process-based models excel and other models are unverifiable. This article presents this pragmatic view in the context of existing literature to justify the approach where applicable and necessary. We review how improvements in data availability, computational resources and algorithms have made detailed hydrological simulations a reality. Avenues for the future of process-based hydrological models are presented, suggesting their use as virtual laboratories, for design purposes, and with a powerful treatment of uncertainty.

  2. Process membership in asynchronous environments

    NASA Technical Reports Server (NTRS)

    Ricciardi, Aleta M.; Birman, Kenneth P.

    1993-01-01

    The development of reliable distributed software is simplified by the ability to assume a fail-stop failure model. The emulation of such a model in an asynchronous distributed environment is discussed. The solution proposed, called Strong-GMP, can be supported through a highly efficient protocol, and was implemented as part of a distributed systems software project at Cornell University. The precise definition of the problem, the protocol, correctness proofs, and an analysis of costs are addressed.

  3. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  4. Mistaking geography for biology: inferring processes from species distributions.

    PubMed

    Warren, Dan L; Cardillo, Marcel; Rosauer, Dan F; Bolnick, Daniel I

    2014-10-01

    Over the past few decades, there has been a rapid proliferation of statistical methods that infer evolutionary and ecological processes from data on species distributions. These methods have led to considerable new insights, but they often fail to account for the effects of historical biogeography on present-day species distributions. Because the geography of speciation can lead to patterns of spatial and temporal autocorrelation in the distributions of species within a clade, this can result in misleading inferences about the importance of deterministic processes in generating spatial patterns of biodiversity. In this opinion article, we discuss ways in which patterns of species distributions driven by historical biogeography are often interpreted as evidence of particular evolutionary or ecological processes. We focus on three areas that are especially prone to such misinterpretations: community phylogenetics, environmental niche modelling, and analyses of beta diversity (compositional turnover of biodiversity). Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.

  5. A Model of the Base Civil Engineering Work Request/Work Order Processing System.

    DTIC Science & Technology

    1979-09-01

    changes to the work order processing system. This research identifies the variables that significantly affect the accomplishment time and proposes a... order processing system and its behavior with respect to work order processing time. A conceptual model was developed to describe the work request...work order processing system as a stochastic queueing system in which the processing times and the various distributions are treated as random variables

  6. A phenomenological retention tank model using settling velocity distributions.

    PubMed

    Maruejouls, T; Vanrolleghem, P A; Pelletier, G; Lessard, P

    2012-12-15

    Many authors have observed the influence of the settling velocity distribution on the sedimentation process in retention tanks. However, the behaviour of pollutants in such tanks is not well characterized, especially with respect to their settling velocity distribution. This paper presents a phenomenological modelling study of the way in which the settling velocity distribution of particles in combined sewage changes between entering and leaving an off-line retention tank. The work starts from a previously published model (Lessard and Beck, 1991), which is first implemented in wastewater management modelling software and then tested with full-scale field data for the first time. Next, its performance is improved by integrating the particle settling velocity distribution and adding a description of the resuspension due to pumping during tank emptying. Finally, the potential of the improved model is demonstrated by comparing the results for one more rain event. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Real-time distribution of pelagic fish: combining hydroacoustics, GIS and spatial modelling at a fine spatial scale.

    PubMed

    Muška, Milan; Tušer, Michal; Frouzová, Jaroslava; Mrkvička, Tomáš; Ricard, Daniel; Seďa, Jaromír; Morelli, Federico; Kubečka, Jan

    2018-03-29

    Understanding the spatial distribution of organisms in a heterogeneous environment remains one of the chief issues in ecology. The spatial organization of freshwater fish has been investigated predominantly at large scales, neglecting important local conditions and ecological processes. However, small-scale processes are of essential importance for individual habitat preferences and hence for structuring trophic cascades and species coexistence. In this work, we analysed the real-time spatial distribution of pelagic freshwater fish in the Římov Reservoir (Czechia), observed by hydroacoustics at 3-h intervals over 48 hours, in relation to important environmental predictors. The diurnal cycle proved to be the most significant effect in all spatial models, with generally inverse trends between fish distribution and predictors by day and by night. Our findings highlighted daytime pelagic fish distribution as highly aggregated, with general fish preferences for central, deep and highly illuminated areas, whereas the nighttime distribution was more dispersed and fish preferred nearshore, steeply sloped areas of greater depth. This turnover suggests prominent movements of a significant part of the fish assemblage between pelagic and nearshore areas on a diel basis. In conclusion, hydroacoustics, GIS and spatial modelling proved to be valuable tools for predicting local fish distribution and elucidating its drivers, which has far-reaching implications for understanding freshwater ecosystem functioning.

  8. The size distributions of fragments ejected at a given velocity from impact craters

    NASA Technical Reports Server (NTRS)

    O'Keefe, John D.; Ahrens, Thomas J.

    1987-01-01

    The mass distribution of fragments ejected at a given velocity from impact craters is modeled to allow extrapolation of laboratory, field, and numerical results to large-scale planetary events. The model is semi-empirical in nature and is derived from: (1) numerical calculations of cratering and the resultant mass versus ejection velocity, (2) observed ejecta blanket particle size distributions, (3) an empirical relationship between maximum ejecta fragment size and crater diameter, (4) measurements and theory of maximum ejecta size versus ejecta velocity, and (5) an assumption on the functional form for the distribution of fragments ejected at a given velocity. This model implies that for planetary impacts into competent rock, the distribution of fragments ejected at a given velocity is broad; e.g., 68 percent of the mass of the ejecta at a given velocity consists of fragments having a mass less than 0.1 times the mass of the largest fragment moving at that velocity. The broad distribution suggests that in impact processes additional comminution of ejecta occurs after the initial upward shock has passed, as the ejecta velocity vector rotates from an initially downward orientation. This additional comminution produces the broader size distribution in impact ejecta as compared to that obtained in simple brittle failure experiments.

  9. Modular modeling system for building distributed hydrologic models with a user-friendly software package

    NASA Astrophysics Data System (ADS)

    Wi, S.; Ray, P. A.; Brown, C.

    2015-12-01

    A software package developed to facilitate building distributed hydrologic models in a modular modeling system is presented. The software package provides a user-friendly graphical user interface that eases its practical use in water resources-related research and practice. The modular modeling system organizes the options available to users when assembling models according to the stages of hydrological cycle, such as potential evapotranspiration, soil moisture accounting, and snow/glacier melting processes. The software is intended to be a comprehensive tool that simplifies the task of developing, calibrating, validating, and using hydrologic models through the inclusion of intelligent automation to minimize user effort, and reduce opportunities for error. Processes so far automated include the definition of system boundaries (i.e., watershed delineation), climate and geographical input generation, and parameter calibration. Built-in post-processing toolkits greatly improve the functionality of the software as a decision support tool for water resources system management and planning. Example post-processing toolkits enable streamflow simulation at ungauged sites with predefined model parameters, and perform climate change risk assessment by means of the decision scaling approach. The software is validated through application to watersheds representing a variety of hydrologic regimes.

  10. Model-checking techniques based on cumulative residuals.

    PubMed

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
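
    The idea can be illustrated with a deliberately misspecified regression. In the sketch below, a linear model is fitted to data with a quadratic trend, the cumulative sum of residuals ordered by the covariate is computed, and its supremum is compared against curves generated by random sign-flips of the residuals, a simple stand-in for the zero-mean Gaussian-process reference described in the record.

        import numpy as np

        rng = np.random.default_rng(2)

        n = 300
        x = rng.uniform(0, 1, n)
        y = 1.0 + 2.0 * x ** 2 + rng.normal(0, 0.3, n)  # truth is quadratic

        # Deliberately misspecified linear fit.
        X = np.column_stack([np.ones(n), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta

        # Cumulative residual process over the covariate.
        order = np.argsort(x)
        W_obs = np.cumsum(resid[order]) / np.sqrt(n)

        # Supremum test against sign-flipped reference realizations.
        sup_obs = np.abs(W_obs).max()
        sup_ref = np.array([np.abs(np.cumsum(rng.choice([-1, 1], n) *
                                             resid[order])).max() / np.sqrt(n)
                            for _ in range(1000)])
        print("approx. p-value:", (sup_ref >= sup_obs).mean())
        # A small p-value flags the missing quadratic term in the linear fit.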

  11. Shape modeling with family of Pearson distributions: Langmuir waves

    NASA Astrophysics Data System (ADS)

    Vidojevic, Sonja

    2014-10-01

    Two major effects of Langmuir-wave electric fields on spectral line shapes are the appearance of depressions shifted from the unperturbed line and an additional dynamical line broadening. More realistic and accurate models of Langmuir waves are needed to study these effects with more confidence. In this article we present the distribution shapes of a high-quality data set of Langmuir-wave electric fields observed by the WIND satellite. Using well-developed numerical techniques, the distributions of the empirical measurements are modeled by the family of Pearson distributions. The results suggest that the process of energy conversion between an electron beam and the surrounding plasma is more complex than existing theoretical models assume. If the processes of Langmuir wave generation are better understood, the influence of Langmuir waves on spectral line shapes could be modeled more accurately.

  12. Prediction of temperature and HAZ in thermal-based processes with Gaussian heat source by a hybrid GA-ANN model

    NASA Astrophysics Data System (ADS)

    Fazli Shahri, Hamid Reza; Mahdavinejad, Ramezanali

    2018-02-01

    Thermal-based processes with a Gaussian heat source often produce excessive temperatures which can leave thermally affected layers in specimens. The temperature distribution and Heat Affected Zone (HAZ) of materials are therefore two critical factors influenced by the various process parameters. Measurement of the HAZ thickness and temperature distribution within such processes is not only difficult but also expensive. This research aims to provide knowledge of these factors by predicting the process with a novel combinatory model. In this study, an integrated Artificial Neural Network (ANN) and genetic algorithm (GA) model was used to predict the HAZ and temperature distribution of the specimens. To this end, a full factorial design of experiments was first conducted by applying a Gaussian heat flux to Ti-6Al-4V, and the specimen temperature was measured by infrared thermography. The HAZ width of each sample was investigated by measuring the microhardness. Secondly, the experimental data were used to create a GA-ANN model, and the efficiency of the GA in designing and optimizing the architecture of the ANN was investigated. The GA was used to determine the optimal number of neurons in the hidden layer, and the learning rate and momentum coefficients of the hidden and output layers of the ANN. Finally, the reliability of the models was assessed against the experimental results and statistical indicators. The results demonstrated that the combinatory model predicted the HAZ and temperature more effectively than a trial-and-error ANN model.
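
    The GA-over-ANN-architecture loop can be sketched compactly. Below, a genome encodes (hidden neurons, learning rate, momentum) and evolves under truncation selection, uniform crossover and mutation; the fitness function is a hypothetical smooth surface standing in for the validation error of a trained ANN, and all ranges and GA settings are assumptions of this sketch.

        import numpy as np

        rng = np.random.default_rng(9)

        def fitness(genome):
            n_hidden, lr, momentum = genome
            # Hypothetical surface with an optimum near (24, 0.05, 0.8); in
            # the study this would be the negative validation error of an ANN
            # trained with these settings.
            return -((n_hidden - 24) ** 2 / 400
                     + (np.log10(lr) + 1.3) ** 2
                     + (momentum - 0.8) ** 2)

        def random_genome():
            return np.array([rng.integers(4, 64), 10 ** rng.uniform(-3, 0),
                             rng.uniform(0.0, 0.99)])

        def mutate(g):
            g = g.copy()
            g[0] = np.clip(g[0] + rng.integers(-4, 5), 4, 64)
            g[1] = np.clip(g[1] * 10 ** rng.normal(0, 0.1), 1e-3, 1.0)
            g[2] = np.clip(g[2] + rng.normal(0, 0.05), 0.0, 0.99)
            return g

        pop = [random_genome() for _ in range(30)]
        for gen in range(40):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:10]                     # truncation selection
            children = []
            for _ in range(20):
                a, b = rng.choice(10, size=2, replace=False)
                mask = rng.random(3) < 0.5         # uniform crossover
                children.append(mutate(np.where(mask, parents[a], parents[b])))
            pop = parents + children

        best = max(pop, key=fitness)
        print("best genome (n_hidden, lr, momentum):", best.round(3))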

  13. Semantic Service Design for Collaborative Business Processes in Internetworked Enterprises

    NASA Astrophysics Data System (ADS)

    Bianchini, Devis; Cappiello, Cinzia; de Antonellis, Valeria; Pernici, Barbara

    Modern collaborating enterprises can be seen as borderless organizations whose processes are dynamically transformed and integrated with those of their partners (Internetworked Enterprises, IE), thus enabling the design of collaborative business processes. The adoption of Semantic Web and service-oriented technologies for implementing collaboration in such distributed and heterogeneous environments promises significant benefits. IE can model their own processes independently by using the Software as a Service paradigm (SaaS). Each enterprise maintains a catalog of available services, and these can be shared across IE and reused to build up complex collaborative processes. Moreover, each enterprise can adopt its own terminology and concepts to describe business processes and component services. This creates a requirement to manage semantic heterogeneity in process descriptions which are distributed across different enterprise systems. To enable effective service-based collaboration, IE have to standardize their process descriptions and model them through component services using the same approach and principles. For enabling collaborative business processes across IE, services should be designed following a homogeneous approach, possibly maintaining a uniform level of granularity. In the paper we propose an ontology-based semantic modeling approach to enrich and reconcile the semantics of process descriptions, to facilitate process knowledge management and to enable semantic service design (by discovery, reuse and integration of process elements/constructs). The approach brings together Semantic Web technologies, techniques in process modeling, ontology building and semantic matching in order to provide a comprehensive semantic modeling framework.

  14. Integrating remote sensing with species distribution models; Mapping tamarisk invasions using the Software for Assisted Habitat Modeling (SAHM)

    USGS Publications Warehouse

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Young, Nicholas E.; Stohlgren, Thomas J.; Talbert, Colin; Talbert, Marian; Morisette, Jeffrey; Anderson, Ryan

    2016-01-01

    Early detection of invasive plant species is vital for the management of natural resources and protection of ecosystem processes. The use of satellite remote sensing for mapping the distribution of invasive plants is becoming more common, however conventional imaging software and classification methods have been shown to be unreliable. In this study, we test and evaluate the use of five species distribution model techniques fit with satellite remote sensing data to map invasive tamarisk (Tamarix spp.) along the Arkansas River in Southeastern Colorado. The models tested included boosted regression trees (BRT), Random Forest (RF), multivariate adaptive regression splines (MARS), generalized linear model (GLM), and Maxent. These analyses were conducted using a newly developed software package called the Software for Assisted Habitat Modeling (SAHM). All models were trained with 499 presence points, 10,000 pseudo-absence points, and predictor variables acquired from the Landsat 5 Thematic Mapper (TM) sensor over an eight-month period to distinguish tamarisk from native riparian vegetation using detection of phenological differences. From the Landsat scenes, we used individual bands and calculated Normalized Difference Vegetation Index (NDVI), Soil-Adjusted Vegetation Index (SAVI), and tasseled capped transformations. All five models identified current tamarisk distribution on the landscape successfully based on threshold independent and threshold dependent evaluation metrics with independent location data. To account for model specific differences, we produced an ensemble of all five models with map output highlighting areas of agreement and areas of uncertainty. Our results demonstrate the usefulness of species distribution models in analyzing remotely sensed data and the utility of ensemble mapping, and showcase the capability of SAHM in pre-processing and executing multiple complex models.

  15. Distributed software framework and continuous integration in hydroinformatics systems

    NASA Astrophysics Data System (ADS)

    Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao

    2017-08-01

    When multiple complicated models, multisource structured and unstructured data, and complex requirements analyses are encountered, the platform design and integration of hydroinformatics systems become a challenge. To properly solve these problems, we describe a distributed software framework and its continuous integration process in hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for the joint regulation of water quantity and water quality of a group of lakes in Wuhan, China, is established.

  16. A simple model of the effect of ocean ventilation on ocean heat uptake

    NASA Astrophysics Data System (ADS)

    Nadiga, Balu; Urban, Nathan

    2017-11-01

    Transport of water from the surface mixed layer into the ocean interior is achieved, in large part, by the process of ventilation, a process associated with outcropping isopycnals. Starting from such a configuration of outcropping isopycnals, we derive a simple model of the effect of ventilation on ocean uptake of anomalous radiative forcing. This model can be seen as an improvement on the popular anomaly-diffusing class of energy balance models (AD-EBM) that are routinely employed to analyze and emulate the warming response of both the observed and the simulated Earth system. We demonstrate that neither multi-layer nor continuous-diffusion AD-EBM variants can properly represent both surface warming and the vertical distribution of ocean heat uptake; the new model overcomes this deficiency. The simplicity of the models notwithstanding, the analysis presented and the necessity of the modification are indicative of the role played by processes related to the downwelling branch of the global ocean circulation in shaping the vertical distribution of ocean heat uptake.

  17. Weighted Distances in Scale-Free Configuration Models

    NASA Astrophysics Data System (ADS)

    Adriaans, Erwin; Komjáthy, Júlia

    2018-01-01

    In this paper we study first-passage percolation in the configuration model with an empirical degree distribution that follows a power law with exponent τ ∈ (2, 3). We assign independent and identically distributed (i.i.d.) weights to the edges of the graph. We investigate the weighted distance (the length of the shortest weighted path) between two uniformly chosen vertices, called the typical distance. When the underlying age-dependent branching process approximating the local neighborhoods of vertices produces infinitely many individuals in finite time (a so-called explosive branching process), Baroni, van der Hofstad and the second author showed in Baroni et al. (J Appl Probab 54(1):146-164, 2017) that typical distances converge in distribution to a bounded random variable. The order of magnitude of typical distances remained open for the τ ∈ (2, 3) case when the underlying branching process is not explosive. We close this gap by determining the first order of magnitude of typical distances in this regime for arbitrary, not necessarily continuous, edge-weight distributions that produce a non-explosive age-dependent branching process with infinite-mean power-law offspring distributions. This sequence tends to infinity with the number of vertices and, by choosing an appropriate weight distribution, can be tuned to be any growing function that is O(log log n), where n is the number of vertices in the graph. We show that the result remains valid for the erased configuration model as well, where loops and any second and further edges between two vertices are deleted.

  18. Significantly reducing the processing times of high-speed photometry data sets using a distributed computing model

    NASA Astrophysics Data System (ADS)

    Doyle, Paul; Mtenzi, Fred; Smith, Niall; Collins, Adrian; O'Shea, Brendan

    2012-09-01

    The scientific community is in the midst of a data analysis crisis. The increasing capacity of scientific CCD instrumentation and their falling costs are contributing to an explosive generation of raw photometric data. These data must go through a process of cleaning and reduction before they can be used for high-precision photometric analysis. Many existing data processing pipelines either assume a relatively small dataset or are batch processed by a High Performance Computing centre. A radical overhaul of these processing pipelines is required to allow terabyte-sized datasets to be cleaned and reduced at near-capture rates using an elastic processing architecture. The ability to access computing resources and to allow them to grow and shrink as demand fluctuates is essential, as is exploiting the parallel nature of the datasets. A distributed data processing pipeline is required. It should incorporate lossless data compression, allow for data segmentation and support processing of data segments in parallel. Academic institutes can collaborate to provide an elastic computing model without the requirement for large centralized high performance computing data centres. This paper demonstrates how an order of magnitude (base 10) improvement in overall processing time has been achieved using the "ACN pipeline", a distributed pipeline spanning multiple academic institutes.
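
    The segment-parallel pattern the pipeline exploits is simple to demonstrate on one machine. In the sketch below, independent data segments are reduced concurrently by a process pool; the reduce_segment body is a hypothetical stand-in for the real cleaning and photometry steps, and the data are synthetic.

        import multiprocessing as mp
        import numpy as np

        # A large frame series is split into independent segments that are
        # cleaned/reduced concurrently.
        def reduce_segment(seg):
            seg_id, frames = seg
            calibrated = frames - np.median(frames, axis=0)  # toy "cleaning"
            return seg_id, float(calibrated.std())

        def main():
            rng = np.random.default_rng(0)
            # 8 segments of 100 small frames each, standing in for CCD chunks.
            segments = [(i, rng.normal(100, 5, size=(100, 64, 64)))
                        for i in range(8)]
            with mp.Pool() as pool:
                for seg_id, stat in pool.imap_unordered(reduce_segment,
                                                        segments):
                    print(f"segment {seg_id}: residual scatter {stat:.2f}")

        if __name__ == "__main__":
            main()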

  20. Calculating Formulas of Coefficient and Mean Neutron Exposure in the Exponential Expression of Neutron Exposure Distribution

    NASA Astrophysics Data System (ADS)

    Zhang, F. H.; Zhou, G. D.; Ma, K.; Ma, W. J.; Cui, W. Y.; Zhang, B.

    2015-11-01

    Present studies have shown that, in the main stages of the development and evolution of asymptotic giant branch (AGB) star s-process models, the distributions of neutron exposures in the nucleosynthesis regions can all be expressed by an exponential function, ρ_AGB(τ) = (C/τ₀) exp(−τ/τ₀), over the effective range of values. However, the specific expressions of the proportional coefficient C and the mean neutron exposure τ₀ in this formula for different models are not completely determined in the related literature. By dissecting the basic method for solving the exponential distribution of neutron exposures, and systematically combing through the solution procedure for the exposure distribution in different stellar models, general formulas, together with their auxiliary equations, for calculating C and τ₀ are derived. Given the discrete distribution of neutron exposures P_k, i.e. the mass ratio, relative to the material of the He intershell, of the material that has been exposed to neutrons k (k = 0, 1, 2, …) times when the final distribution is reached, one obtains C = −P₁/ln R and τ₀ = −Δτ/ln R. Here R is the probability that material successively experiences neutron irradiation twice in the He intershell. For the convective nucleosynthesis model (including the Ulrich model and the ¹³C-pocket convective burning model), R is just the overlap factor r, namely the mass ratio of the material that undergoes two successive thermal pulses in the He intershell. For the ¹³C-pocket radiative burning model, R = Σ_{k=1}^∞ P_k. This set of formulas gives the practical correspondence between C or τ₀ and the model parameters. The results of this study effectively solve the problem of analytically calculating the distribution of neutron exposures in the low-mass AGB star s-process nucleosynthesis model with ¹³C-pocket radiative burning.
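
    A quick numerical check of the formulas quoted above, for the ¹³C-pocket radiative burning case where R = Σ_{k≥1} P_k; the discrete exposure distribution P_k and the exposure increment Δτ below are made-up illustrative values.

    import math

    P = {0: 0.40, 1: 0.24, 2: 0.144, 3: 0.0864}  # illustrative P_k (truncated)
    dtau = 0.3                                   # exposure increment (mbarn^-1), assumed

    R = sum(p for k, p in P.items() if k >= 1)   # radiative burning model: R = sum P_k
    C = -P[1] / math.log(R)                      # C    = -P_1 / ln R
    tau0 = -dtau / math.log(R)                   # tau0 = -dtau / ln R
    print(f"R = {R:.4f}, C = {C:.4f}, tau0 = {tau0:.4f} mbarn^-1")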

  1. The Brain as a Distributed Intelligent Processing System: An EEG Study

    PubMed Central

    da Rocha, Armando Freitas; Rocha, Fábio Theoto; Massad, Eduardo

    2011-01-01

    Background Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. Methodology and Principal Findings In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligence Processing System, was used to investigate the correlations between IQ evaluated with the WAIS (Wechsler Adult Intelligence Scale) and the WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. Conclusion The present results support these claims and the neural efficiency hypothesis. PMID:21423657

  2. Pricing foreign equity option under stochastic volatility tempered stable Lévy processes

    NASA Astrophysics Data System (ADS)

    Gong, Xiaoli; Zhuang, Xintian

    2017-10-01

    Considering that financial asset returns exhibit leptokurtosis and asymmetry as well as clustering and heteroskedasticity effects, this paper replaces the log-normal jumps in the Heston stochastic volatility model with the classical tempered stable (CTS) distribution and the normal tempered stable (NTS) distribution to construct stochastic volatility tempered stable Lévy process (TSSV) models. The TSSV model framework permits the infinite-activity jump behavior of return dynamics and the time-varying volatility consistently observed in financial markets by subordinating a tempered stable process to a stochastic volatility process, capturing the leptokurtosis, fat-tailedness, and asymmetry features of returns. By employing the analytical characteristic function and the fast Fourier transform (FFT) technique, the formula for the probability density function (PDF) of TSSV returns is derived, making an analytical formula for foreign equity option (FEO) pricing available. High-frequency financial returns data are employed to verify the effectiveness of the proposed models in reflecting the stylized facts of financial markets. Numerical analysis is performed to investigate the relationship between the corresponding parameters and the implied volatility of the foreign equity option.
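
    The core numerical step described above is Fourier inversion of an analytical characteristic function to obtain the return PDF. The sketch below shows that machinery with a standard normal characteristic function standing in for the lengthier tempered stable one; grid sizes are arbitrary.

    import numpy as np

    def pdf_from_cf(cf, x, u_max=50.0, n_u=4096):
        """f(x) = (1 / 2*pi) * integral of exp(-i*u*x) * cf(u) du, uniform grid."""
        u = np.linspace(-u_max, u_max, n_u)
        du = u[1] - u[0]
        integrand = np.exp(-1j * np.outer(x, u)) * cf(u)
        return (integrand.sum(axis=1) * du).real / (2.0 * np.pi)

    cf_normal = lambda u: np.exp(-0.5 * u ** 2)   # stand-in characteristic function
    x = np.array([-1.0, 0.0, 1.0])
    print(pdf_from_cf(cf_normal, x))              # ~ [0.2420, 0.3989, 0.2420]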

  3. Growth and Filling Regularities of Filamentary Channels in Non-Metallic Inorganic Coatings Under Anodic Oxidation of Valve Metals. Mathematical Modeling

    NASA Astrophysics Data System (ADS)

    Mamaev, A. I.; Mamaeva, V. A.; Kolenchin, N. F.; Chubenko, A. K.; Kovalskaya, Ya. B.; Dolgova, Yu. N.; Beletskaya, E. Yu.

    2015-12-01

    Theoretical models are developed for the growth and filling processes in filamentary channels of nanostructured non-metallic coatings produced by anodizing and microplasma oxidation. Graphical concentration distributions are obtained for channel-reacting anions, cations, and sparingly soluble reaction products depending on the time of electric current transmission and the length of the filamentary channel. Graphical distributions of the velocity of the moving front of the sparingly soluble compound are presented. The resulting model representation improves understanding of the nature of the anodic process and can be used to describe and predict porous anodic film growth and filling. It is shown that the character of filamentary channel growth and filling governs a variety of processes that determine the formation of the phase boundary between the textured metal and the non-metallic inorganic coating.

  4. The Performance Improvement of the Lagrangian Particle Dispersion Model (LPDM) Using Graphics Processing Unit (GPU) Computing

    DTIC Science & Technology

    2017-08-01

    CUDA provides access to the GPU for general-purpose processing and is designed to work easily with multiple programming languages, including Fortran. Approved for public release; distribution unlimited.

  5. On two diffusion neuronal models with multiplicative noise: The mean first-passage time properties

    NASA Astrophysics Data System (ADS)

    D'Onofrio, G.; Lansky, P.; Pirozzi, E.

    2018-04-01

    Two diffusion processes with multiplicative noise, able to model the changes in the neuronal membrane depolarization between two consecutive spikes of a single neuron, are considered and compared. The processes have the same deterministic part but different stochastic components. The differences in the state-dependent variabilities, their asymptotic distributions, and the properties of the first-passage time across a constant threshold are investigated. Closed-form expressions for the mean of the first-passage time of both processes are derived and applied to determine the role played by the parameters involved in the model. It is shown that for some values of the input parameters, the higher variability, given by the second moment, does not imply a shorter mean first-passage time. The reason can be found in the complete shape of the stationary distributions of the two processes. Applications outside neuroscience are also mentioned.
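
    A Monte Carlo sketch of the quantity analyzed above: the first-passage time of a multiplicative-noise diffusion across a constant threshold, simulated with the Euler-Maruyama scheme. The drift and the square-root (Feller-type) noise are illustrative stand-ins for the two models compared in the paper, and all parameter values are assumptions.

    import numpy as np

    rng = np.random.default_rng(42)
    mu, tau, sigma = 1.5, 1.0, 0.4      # drift parameters and noise amplitude
    x0, S = 0.2, 1.0                    # reset depolarization and firing threshold
    dt, t_max, n_paths = 1e-3, 50.0, 2000

    fpt = np.full(n_paths, np.nan)      # NaN marks paths that never cross
    for i in range(n_paths):
        x, t = x0, 0.0
        while t < t_max:
            dw = rng.normal(0.0, np.sqrt(dt))
            # dX = (mu - X/tau) dt + sigma * sqrt(X) dW  (state-dependent noise)
            x += (mu - x / tau) * dt + sigma * np.sqrt(max(x, 0.0)) * dw
            t += dt
            if x >= S:
                fpt[i] = t
                break

    print(f"mean first-passage time: {np.nanmean(fpt):.3f}"
          f" ({np.isnan(fpt).mean():.1%} of paths never crossed)")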

  6. A Methodology for Quantifying Certain Design Requirements During the Design Phase

    NASA Technical Reports Server (NTRS)

    Adams, Timothy; Rhodes, Russel

    2005-01-01

    A methodology for developing and balancing quantitative design requirements for safety, reliability, and maintainability has been proposed. Conceived as the basis of a more rational approach to the design of spacecraft, the methodology would also be applicable to the design of automobiles, washing machines, television receivers, or almost any other commercial product. Heretofore, it has been common practice to start by determining the requirements for reliability of elements of a spacecraft or other system to ensure a given design life for the system. Next, safety requirements are determined by assessing the total reliability of the system and adding redundant components and subsystems necessary to attain safety goals. As thus described, common practice leaves the maintainability burden to fall to chance; therefore, there is no control of recurring costs or of the responsiveness of the system. The means that have been used in assessing maintainability have been oriented toward determining the logistical sparing of components so that the components are available when needed. The process established for developing and balancing quantitative requirements for safety (S), reliability (R), and maintainability (M) derives and integrates NASA's top-level safety requirements and the controls needed to obtain program key objectives for safety and recurring cost. Being quantitative, the process conveniently uses common mathematical models. Even though the process is shown as being worked from the top down, it can also be worked from the bottom up. This process uses three math models: (1) the binomial distribution (greater-than-or-equal-to case), (2) reliability for a series system, and (3) the Poisson distribution (less-than-or-equal-to case). The zero-fail case for the binomial distribution approximates the commonly known exponential distribution or "constant failure rate" distribution. Either model can be used. The binomial distribution was selected for modeling flexibility because it conveniently addresses both the zero-fail and failure cases. The failure case is typically used for unmanned spacecraft as with missiles.
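
    The three math models named above are simple enough to evaluate directly; the numbers below (trial count, element reliabilities, Poisson rate) are made up for illustration. Note how the zero-fail binomial case tracks the exponential, constant-failure-rate approximation.

    from math import comb, exp, factorial

    n, R = 10, 0.99                     # trials and per-trial reliability, assumed

    # (1) binomial, zero-fail (greater-than-or-equal-to) case: P(all n succeed)
    p_zero = comb(n, n) * R ** n * (1 - R) ** 0
    print(f"P(zero failures in {n} trials): {p_zero:.4f}")
    print(f"exponential approximation:      {exp(-n * (1 - R)):.4f}")

    # (2) reliability of a series system: product of element reliabilities
    elements = [0.999, 0.995, 0.99]
    series_r = 1.0
    for r in elements:
        series_r *= r
    print(f"series-system reliability:      {series_r:.4f}")

    # (3) Poisson, less-than-or-equal-to case: P(at most c maintenance events)
    m, c = 2.0, 3                       # expected events and allowed count, assumed
    p_le_c = sum(m ** k * exp(-m) / factorial(k) for k in range(c + 1))
    print(f"P(at most {c} events):           {p_le_c:.4f}")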

  7. Recent topographic evolution and erosion of the deglaciated Washington Cascades inferred from a stochastic landscape evolution model

    NASA Astrophysics Data System (ADS)

    Moon, S.; Shelef, E.; Hilley, G. E.

    2013-12-01

    The Washington Cascades are currently in topographic and erosional disequilibrium after deglaciation around 11-17 ka ago. The topography still shows features inherited from prior alpine glacial processes (e.g., cirques, steep side-valleys, and flat valley bottoms), though postglacial processes are currently denuding this landscape. Our previous study in this area calculated thousand-year-timescale denudation rates using cosmogenic 10Be concentrations (CRN denudation rates) and showed that they were roughly four times higher than million-year-timescale uplift rates. In addition, the spatial distribution of denudation rates showed a good correlation with a factor-of-ten variation in precipitation. We interpreted this correlation as reflecting the sensitivity of landslide triggering in over-steepened deglaciated topography to precipitation, which produced high denudation rates in wet areas that experienced frequent landsliding. We explored this interpretation using a model of postglacial surface processes that predicts the evolution of the topography and denudation rates within the deglaciated Washington Cascades. Specifically, we used the model to understand the controls on, and timescales of, landscape response to changes in the surface process regime after deglaciation. The postglacial adjustment of this landscape is modeled using a geomorphic-transport-law-based numerical model that includes processes of river incision, hillslope diffusion, and stochastic landslides. The surface lowering due to landslides is parameterized using a physically-based slope stability model coupled to a stochastic model of landslide generation. The model parameters for river incision and stochastic landslides are calibrated to the rates and distribution of thousand-year-timescale denudation rates measured from cosmogenic 10Be isotopes. The probability distribution of model parameters required to fit the observed denudation rates shows ranges comparable to previous studies in similar rock types and climatic conditions. The calibrated parameters suggest that the dominant source of river sediments is stochastic landslides. The magnitude of landslide denudation rates is determined by failure density (similar to landslide frequency), while their spatial distribution is largely controlled by precipitation and slope angles. Simulation results show that denudation rates decay over time and take approximately 130-180 ka to reach steady-state rates. This response timescale is longer than glacial/interglacial cycles, suggesting that frequent climatic perturbations during the Quaternary may prevent these types of landscapes from reaching a dynamic equilibrium with postglacial processes.

  8. Vere-Jones' self-similar branching model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saichev, A.; Institute of Geophysics and Planetary Physics, University of California, Los Angeles, California 90095; Sornette, D.

    2005-11-01

    Motivated by its potential application to earthquake statistics as well as by its intrinsic interest in the theory of branching processes, we study the exactly self-similar branching process introduced recently by Vere-Jones. This model extends the ETAS class of conditional self-excited branching point-processes of triggered seismicity by removing the problematic need for a minimum (as well as maximum) earthquake size. To make the theory convergent without the need for the usual ultraviolet and infrared cutoffs, the distribution of magnitudes m′ of first-generation daughters of a mother of magnitude m has two branches, m′ < m with exponent β − d and m′ > m with exponent β + d, where β and d are two positive parameters. We investigate the condition and nature of the subcritical, critical, and supercritical regimes in this model and in an extended version interpolating smoothly between several models. We predict that the distribution of magnitudes of events triggered by a mother of magnitude m over all generations also has two branches, m′ < m with exponent β − h and m′ > m with exponent β + h, where h = d√(1 − s) and s is the fraction of triggered events. This corresponds to a renormalization of the exponent d into h by the hierarchy of successive generations of triggered events. For a significant part of the parameter space, the distribution of magnitudes over a full catalog summed over an average steady flow of spontaneous sources (immigrants) reproduces the distribution of the spontaneous sources with a single branch and is blind to the exponents β, d of the distribution of triggered events. Since the distribution of earthquake magnitudes is usually obtained with catalogs including many sequences, we conclude that the two branches of the distribution of aftershocks are not directly observable and the model is compatible with real seismic catalogs. In summary, the exactly self-similar Vere-Jones model provides an attractive new approach to model triggered seismicity, which alleviates delicate questions on the role of magnitude cutoffs in other non-self-similar models. The new prediction concerning two branches in the distribution of magnitudes of aftershocks could be tested with recently introduced stochastic reconstruction methods, tailored to disentangle the different triggered sequences.

  9. Conceptual model of sediment processes in the upper Yuba River watershed, Sierra Nevada, CA

    USGS Publications Warehouse

    Curtis, J.A.; Flint, L.E.; Alpers, Charles N.; Yarnell, S.M.

    2005-01-01

    This study examines the development of a conceptual model of sediment processes in the upper Yuba River watershed, and we hypothesize how components of the conceptual model may be spatially distributed using a geographic information system (GIS). The conceptual model illustrates key processes controlling sediment dynamics in the upper Yuba River watershed and was tested and revised using field measurements, aerial photography, and low-elevation videography. Field reconnaissance included mass wasting and channel storage inventories, assessment of annual channel change in upland tributaries, and evaluation of the relative importance of sediment sources and transport processes. Hillslope erosion rates throughout the study area are relatively low when compared to more rapidly eroding landscapes such as the Pacific Northwest, and notable hillslope sediment sources include highly erodible andesitic mudflows, serpentinized ultramafics, and unvegetated hydraulic mine pits. Mass wasting dominates surface erosion on the hillslopes; however, erosion of stored channel sediment is the primary contributor to annual sediment yield. We used GIS to spatially distribute the components of the conceptual model and created hillslope erosion potential and channel storage models. The GIS models exemplify the conceptual model in that landscapes with low potential evapotranspiration, sparse vegetation, steep slopes, erodible geology and soils, and high road densities display the greatest hillslope erosion potential, and channel storage increases with increasing stream order. In-channel storage in upland tributaries impacted by hydraulic mining is an exception. Reworking of stored hydraulic mining sediment in low-order tributaries continues to elevate upper Yuba River sediment yields. Finally, we propose that spatially distributing the components of a conceptual model in a GIS framework provides a guide for developing more detailed sediment budgets or numerical models, making it an inexpensive way to develop a roadmap for understanding sediment dynamics at a watershed scale.

  10. Dynamics of assembly production flow

    NASA Astrophysics Data System (ADS)

    Ezaki, Takahiro; Yanagisawa, Daichi; Nishinari, Katsuhiro

    2015-06-01

    Despite recent developments in management theory, maintaining a manufacturing schedule remains difficult because of production delays and fluctuations in demand and supply of materials. The dynamic response of manufacturing systems to such disruptions has rarely been studied. To capture these responses, we investigate a process that models the assembly of parts into end products. The complete assembly process is represented by a directed tree, where the smallest parts are injected at leaves and the end products are removed at the root. A discrete assembly process, represented by a node on the network, integrates parts, which are then sent to the next downstream node as a single part. The model exhibits some intriguing phenomena, including overstock cascades, a phase transition in terms of demand and supply fluctuations, a nonmonotonic distribution of stockout in the network, and the formation of stockout paths and stockout chains. Surprisingly, these rich phenomena result from only the nature of distributed assembly processes. From a physical perspective, these phenomena provide insight into delay dynamics and inventory distributions in large-scale manufacturing systems.

  11. Tropical Oceanic Precipitation Processes Over Warm Pool: 2D and 3D Cloud Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Johnson, D.; Simpson, J.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Rainfall is a key link in the hydrologic cycle as well as the primary heat source for the atmosphere. The vertical distribution of convective latent-heat release modulates the large-scale circulations of the tropics. Furthermore, changes in the moisture distribution at middle and upper levels of the troposphere can affect cloud distributions and cloud liquid water and ice contents. How the incoming solar and outgoing longwave radiation respond to these changes in clouds is a major factor in assessing climate change. Present large-scale weather and climate models simulate these processes only crudely, reducing confidence in their predictions on both global and regional scales. One of the most promising methods to test physical parameterizations used in General Circulation Models (GCMs) and climate models is to use field observations together with Cloud Resolving Models (CRMs). The CRMs use more sophisticated and physically realistic parameterizations of cloud microphysical processes, and allow for their complex interactions with solar and infrared radiative transfer processes. The CRMs can resolve the evolution, structure, and life cycles of individual clouds and cloud systems reasonably well. The major objective of this paper is to investigate the latent heating, moisture, and momentum budgets associated with several convective systems that developed during the TOGA COARE IFA westerly wind burst event (late December 1992). The tool for this study is the Goddard Cumulus Ensemble (GCE) model, which includes a 3-class ice-phase microphysics scheme.

  12. Economic design of control charts considering process shift distributions

    NASA Astrophysics Data System (ADS)

    Vommi, Vijayababu; Kasarapu, Rukmini V.

    2014-09-01

    Process shift is an important input parameter in the economic design of control charts. Earlier control chart designs assumed that a given assignable cause produces a constant shift in the process mean. This assumption has been criticized by many researchers, since it may not be realistic for an assignable cause to produce the same shift every time it occurs. To overcome this difficulty, in the present work, a distribution for the shift parameter is considered instead of a single value for a given assignable cause. Duncan's economic design model for the control chart has been extended to incorporate the distribution of the process shift parameter. It is proposed to minimize the total expected loss-cost to obtain the control chart parameters. Further, three types of process shift distributions, namely positively skewed, uniform, and negatively skewed, are considered, and the situations where it is appropriate to use the suggested methodology are recommended.

  13. Continuously graded extruded polymer composites for energetic applications fabricated using twin-screw extrusion processing technology

    NASA Astrophysics Data System (ADS)

    Gallant, Frederick M.

    A novel method of fabricating functionally graded extruded composite materials is proposed for propellant applications using the technology of continuous processing with a Twin-Screw Extruder. The method is applied to the manufacturing of grains for solid rocket motors in an end-burning configuration with an axial gradient in ammonium perchlorate volume fraction and relative coarse/fine particle size distributions. The fabrication of functionally graded extruded polymer composites with either inert or energetic ingredients has yet to be investigated. The lack of knowledge concerning the processing of these novel materials has necessitated that a number of research issues be addressed. Of primary concern is characterizing and modeling the relationship between the extruder screw geometry, transient processing conditions, and the gradient architecture that evolves in the extruder. Recent interpretations of the Residence Time Distributions (RTDs) and Residence Volume Distributions (RVDs) for polymer composites in the TSE are used to develop new process models for predicting gradient architectures in the direction of extrusion. An approach is developed for characterizing the sections of the extrudate using optical, mechanical, and compositional analysis to determine the gradient architectures. The effects of processing on the burning rate properties of extruded energetic polymer composites are characterized for homogeneous formulations over a range of compositions to determine realistic gradient architectures for solid rocket motor applications. The new process models and burning rate properties that have been characterized in this research effort will be the basis for an inverse design procedure that is capable of determining gradient architectures for grains in solid rocket motors that possess tailored burning rate distributions that conform to user-defined performance specifications.

  14. Modeling preferential water flow and solute transport in unsaturated soil using the active region model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, F.; Wang, K.; Zhang, R.

    2009-03-15

    Preferential flow and solute transport are common processes in the unsaturated soil, in which distributions of soil water content and solute concentrations are often characterized as fractal patterns. An active region model (ARM) was recently proposed to describe the preferential flow and transport patterns. In this study, ARM governing equations were derived to model the preferential soil water flow and solute transport processes. To evaluate the ARM equations, dye infiltration experiments were conducted, in which distributions of soil water content and Cl⁻ concentration were measured. Predicted results using the ARM and the mobile-immobile region model (MIM) were compared with the measured distributions of soil water content and Cl⁻ concentration. Although both the ARM and the MIM are two-region models, they are fundamentally different in their treatment of the flow region. The models were evaluated based on the modeling efficiency (ME). The MIM provided relatively poor predictions of the preferential flow and transport, with negative ME values or positive ME values less than 0.4. In contrast, predicted distributions of soil water content and Cl⁻ concentration using the ARM agreed reasonably well with the experimental data, with ME values higher than 0.8. The results indicated that the ARM successfully captured the macroscopic behavior of preferential flow and solute transport in the unsaturated soil.

  15. Managed traffic evacuation using distributed sensor processing

    NASA Astrophysics Data System (ADS)

    Ramuhalli, Pradeep; Biswas, Subir

    2005-05-01

    This paper presents an integrated sensor network and distributed event processing architecture for managed in-building traffic evacuation during natural and human-caused disasters, including earthquakes, fire and biological/chemical terrorist attacks. The proposed wireless sensor network protocols and distributed event processing mechanisms offer a new distributed paradigm for improving reliability in building evacuation and disaster management. The networking component of the system is constructed using distributed wireless sensors for measuring environmental parameters such as temperature, humidity, and detecting unusual events such as smoke, structural failures, vibration, biological/chemical or nuclear agents. Distributed event processing algorithms will be executed by these sensor nodes to detect the propagation pattern of the disaster and to measure the concentration and activity of human traffic in different parts of the building. Based on this information, dynamic evacuation decisions are taken for maximizing the evacuation speed and minimizing unwanted incidents such as human exposure to harmful agents and stampedes near exits. A set of audio-visual indicators and actuators are used for aiding the automated evacuation process. In this paper we develop integrated protocols, algorithms and their simulation models for the proposed sensor networking and the distributed event processing framework. Also, efficient harnessing of the individually low, but collectively massive, processing abilities of the sensor nodes is a powerful concept behind our proposed distributed event processing algorithms. Results obtained through simulation in this paper are used for a detailed characterization of the proposed evacuation management system and its associated algorithmic components.

  16. Predicting the geographic distribution of a species from presence-only data subject to detection errors

    USGS Publications Warehouse

    Dorazio, Robert M.

    2012-01-01

    Several models have been developed to predict the geographic distribution of a species by combining measurements of covariates of occurrence at locations where the species is known to be present with measurements of the same covariates at other locations where species occurrence status (presence or absence) is unknown. In the absence of species detection errors, spatial point-process models and binary-regression models for case-augmented surveys provide consistent estimators of a species’ geographic distribution without prior knowledge of species prevalence. In addition, these regression models can be modified to produce estimators of species abundance that are asymptotically equivalent to those of the spatial point-process models. However, if species presence locations are subject to detection errors, neither class of models provides a consistent estimator of covariate effects unless the covariates of species abundance are distinct and independently distributed from the covariates of species detection probability. These analytical results are illustrated using simulation studies of data sets that contain a wide range of presence-only sample sizes. Analyses of presence-only data of three avian species observed in a survey of landbirds in western Montana and northern Idaho are compared with site-occupancy analyses of detections and nondetections of these species.

  17. On the Use of the Beta Distribution in Probabilistic Resource Assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olea, Ricardo A., E-mail: olea@usgs.gov

    2011-12-15

    The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval offering wider flexibility in style of variation, thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generation of values following a beta distribution is as straightforward as generating values following a triangular distribution, leaving the selection of parameters as the main impediment to practically considering beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional parameters, use of the beta distribution in stochastic modeling may yield significantly different results, yet better estimates, than the triangular distribution.
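
    In a Monte Carlo assessment, drawing from a rescaled beta is no harder than drawing from a triangular, which is the point made above. The bounds and shape parameters below are arbitrary illustrations.

    import numpy as np

    rng = np.random.default_rng(7)
    low, mode, high = 10.0, 25.0, 100.0    # resource bounds and mode, assumed
    n = 100_000

    tri = rng.triangular(low, mode, high, size=n)

    # beta(a, b) on [0, 1], rescaled to [low, high]; a, b set the style of variation
    a, b = 2.0, 5.0
    beta = low + (high - low) * rng.beta(a, b, size=n)

    print(f"triangular: mean = {tri.mean():6.1f}, P90 = {np.percentile(tri, 90):6.1f}")
    print(f"beta:       mean = {beta.mean():6.1f}, P90 = {np.percentile(beta, 90):6.1f}")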

  18. Aggregate and Individual Replication Probability within an Explicit Model of the Research Process

    ERIC Educational Resources Information Center

    Miller, Jeff; Schwarz, Wolf

    2011-01-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…

  19. Parameter Variability and Distributional Assumptions in the Diffusion Model

    ERIC Educational Resources Information Center

    Ratcliff, Roger

    2013-01-01

    If the diffusion model (Ratcliff & McKoon, 2008) is to account for the relative speeds of correct responses and errors, it is necessary that the components of processing identified by the model vary across the trials of a task. In standard applications, the rate at which information is accumulated by the diffusion process is assumed to be normally…

  20. Evaluation of the whole body physiologically based pharmacokinetic (WB-PBPK) modeling of drugs.

    PubMed

    Munir, Anum; Azam, Shumaila; Fazal, Sahar; Bhatti, A I

    2018-08-14

    Physiologically based pharmacokinetic (PBPK) modeling is a supporting tool in drug discovery and development. Simulations produced by these models help to save time and aid in examining the effects of different variables on the pharmacokinetics of drugs. For this purpose, Sheila and Peters suggested a PBPK model capable of performing simulations to study a given drug's absorption. There is a need to extend this model to the whole body, entailing all the other processes, namely distribution, metabolism, and elimination, besides absorption. The aim of this scientific study is to hypothesize a WB-PBPK model by integrating absorption, distribution, metabolism, and elimination processes with the existing PBPK model. Absorption, distribution, metabolism, and elimination models are designed, integrated with the PBPK model, and validated. For validation purposes, clinical records of a few drugs were collected from the literature. The developed WB-PBPK model is affirmed by comparing the simulations produced by the model against the collected clinical data. It is proposed that the WB-PBPK model may be used in the pharmaceutical industry to create pharmacokinetic profiles of drug candidates for better outcomes, as it is an advanced PBPK model and creates comprehensive PK profiles for drug ADME in concentration-time plots. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Testing the Race Model Inequality in Redundant Stimuli with Variable Onset Asynchrony

    ERIC Educational Resources Information Center

    Gondan, Matthias

    2009-01-01

    In speeded response tasks with redundant signals, parallel processing of the signals is tested by the race model inequality. This inequality states that given a race of two signals, the cumulative distribution of response times for redundant stimuli never exceeds the sum of the cumulative distributions of response times for the single-modality…
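
    The inequality above is easy to check empirically from response-time samples: estimate the three cumulative distributions on a time grid and compare. The gamma distributions below are illustrative stand-ins for single-modality RT data, with the redundant condition simulated as an independent race (for which the inequality holds).

    import numpy as np

    rng = np.random.default_rng(3)
    n = 20_000
    rt_a = rng.gamma(8.0, 30.0, n)      # modality-A response times (ms), assumed
    rt_b = rng.gamma(9.0, 30.0, n)      # modality-B response times (ms), assumed
    rt_ab = np.minimum(rng.gamma(8.0, 30.0, n),
                       rng.gamma(9.0, 30.0, n))   # independent race

    t_grid = np.linspace(100.0, 600.0, 11)
    ecdf = lambda sample, t: (sample[:, None] <= t).mean(axis=0)

    # race model inequality: F_AB(t) - [F_A(t) + F_B(t)] <= 0 for all t
    slack = ecdf(rt_ab, t_grid) - np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_b, t_grid), 1.0)
    print(np.round(slack, 3))           # no positive entries for an independent race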

  2. [GSH fermentation process modeling using entropy-criterion based RBF neural network model].

    PubMed

    Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng

    2008-05-01

    The prediction accuracy and generalization of GSH fermentation process models are often deteriorated by noise in the corresponding experimental data. In order to avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Compared with traditional MSE-criterion-based parameter learning, it considers the whole distribution structure of the training data set in the parameter learning process, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization, and robustness, and thus offers potential merit for application to GSH fermentation process modeling.

  3. Macroergonomic study of food sector company distribution centres.

    PubMed

    García Acosta, Gabriel; Lange Morales, Karen

    2008-07-01

    This study focussed on the design of the work system to be used by a Colombian food sector company for distributing products. It adopted the concept of participative ergonomics, in which people from the commercial, logistics, operations, and occupational health areas worked in conjunction with the industrial designers and ergonomists who methodologically led the project. As a whole, the project was conceived as having five phases: outline, diagnosis, modelling the process, scalability, and instrumentation. The results of the project translate into procedures for selecting and projecting a new distribution centre, the operational process model, a description of ergonomic systems that will enable specific work stations to be designed, and the procedure for adapting existing warehouses. Strategically, this work helped optimise the company's processes and ensure that knowledge would be transferred within it. In turn, it became a primary prevention strategy in the field of health, aimed at reducing occupational risks and improving the quality of life at work.

  4. Leveraging AMI data for distribution system model calibration and situational awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peppanen, Jouni; Reno, Matthew J.; Thakkar, Mohini

    The many new distributed energy resources being installed at the distribution system level require increased visibility into system operations, which will be enabled by distribution system state estimation (DSSE) and situational awareness applications. Reliable and accurate DSSE requires both robust methods for managing the big data provided by smart meters and quality distribution system models. This paper presents intelligent methods for detecting and dealing with missing or inaccurate smart meter data, as well as ways to process the data for different applications. It also presents an efficient and flexible parameter estimation method based on the voltage drop equation and regression analysis to enhance distribution system model accuracy. Finally, it presents a 3-D graphical user interface for advanced visualization of the system state and events. The methods are demonstrated for a university distribution network with a state-of-the-art real-time and historical smart meter data infrastructure.

  5. An ant colony optimization heuristic for an integrated production and distribution scheduling problem

    NASA Astrophysics Data System (ADS)

    Chang, Yung-Chia; Li, Vincent C.; Chiang, Chia-Ju

    2014-04-01

    Make-to-order or direct-order business models that require close interaction between production and distribution activities have been adopted by many enterprises in order to be competitive in demanding markets. This article considers an integrated production and distribution scheduling problem in which jobs are first processed by one of the unrelated parallel machines and then distributed to corresponding customers by capacitated vehicles without intermediate inventory. The objective is to find a joint production and distribution schedule so that the weighted sum of total weighted job delivery time and the total distribution cost is minimized. This article presents a mathematical model for describing the problem and designs an algorithm using ant colony optimization. Computational experiments illustrate that the algorithm developed is capable of generating near-optimal solutions. The computational results also demonstrate the value of integrating production and distribution in the model for the studied problem.

  6. Leveraging AMI data for distribution system model calibration and situational awareness

    DOE PAGES

    Peppanen, Jouni; Reno, Matthew J.; Thakkar, Mohini; ...

    2015-01-15

    The many new distributed energy resources being installed at the distribution system level require increased visibility into system operations, which will be enabled by distribution system state estimation (DSSE) and situational awareness applications. Reliable and accurate DSSE requires both robust methods for managing the big data provided by smart meters and quality distribution system models. This paper presents intelligent methods for detecting and dealing with missing or inaccurate smart meter data, as well as ways to process the data for different applications. It also presents an efficient and flexible parameter estimation method based on the voltage drop equation and regression analysis to enhance distribution system model accuracy. Finally, it presents a 3-D graphical user interface for advanced visualization of the system state and events. The methods are demonstrated for a university distribution network with a state-of-the-art real-time and historical smart meter data infrastructure.

  7. Modeling the curing process of thick-section autoclave cured composites

    NASA Technical Reports Server (NTRS)

    Loos, A. C.; Dara, P. H.

    1985-01-01

    Temperature gradients are significant during the cure of large-area, thick-section composites. Such temperature gradients result in nonuniformly cured parts with high void contents, poor ply compaction, and variations in the fiber/resin distribution. A model was developed to determine the temperature distribution in thick-section autoclave-cured composites. Using the model, along with temperature measurements obtained from the thick-section composites, the effects of various processing parameters on the thermal response of the composites were examined. A one-dimensional heat transfer model was constructed for the composite-tool assembly. The governing differential equations and associated boundary conditions describing one-dimensional unsteady heat conduction in the composite, tool plate, and pressure plate are given. Solution of the thermal model was obtained using an implicit finite-difference technique.
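
    A minimal sketch of the kind of solver described above: backward-Euler (implicit) finite differences for one-dimensional unsteady conduction through a laminate held between fixed-temperature faces. The material properties, dimensions, and boundary temperatures are placeholders, not values from the study.

    import numpy as np

    alpha = 4e-7            # through-thickness thermal diffusivity (m^2/s), assumed
    L, nx = 0.02, 41        # laminate thickness (m) and grid points
    dx = L / (nx - 1)
    dt, n_steps = 1.0, 120  # time step (s) and number of steps

    T = np.full(nx, 20.0)   # initial temperature (deg C)
    T_wall = 180.0          # autoclave-side boundary temperature (deg C)

    # assemble the constant tridiagonal system (I - dt*alpha*D2) T_new = T_old
    r = alpha * dt / dx ** 2
    A = np.zeros((nx, nx))
    for i in range(1, nx - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1.0 + 2.0 * r, -r
    A[0, 0] = A[-1, -1] = 1.0           # Dirichlet boundaries at both faces

    for _ in range(n_steps):
        b = T.copy()
        b[0] = b[-1] = T_wall
        T = np.linalg.solve(A, b)

    print(f"midplane temperature after {n_steps * dt:.0f} s: {T[nx // 2]:.1f} C")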

  8. Numerical modeling of sorption kinetics of organic compounds to soil and sediment particles

    NASA Astrophysics Data System (ADS)

    Wu, Shian-chee; Gschwend, Phillip M.

    1988-08-01

    A numerical model is developed to simulate hydrophobic organic compound sorption kinetics, based on a retarded intraaggregate diffusion conceptualization of this solid-water exchange process. This model was used to ascertain the sensitivity of the sorption process for various sorbates to nonsteady solution concentrations and to polydisperse soil or sediment aggregate particle size distributions. Common approaches to modeling sorption kinetics amount to simplifications of our model and appear justified only when (1) the concentration fluctuations occur on a time scale which matches the sorption timescale of interest and (2) the particle size distribution is relatively narrow. Finally, a means is provided to estimate the extent of approach of a sorbing system to equilibrium as a function of aggregate size, chemical diffusivity and hydrophobicity, and system solids concentration.

  9. Cloud processing of gases and aerosols in the Community Multiscale Air Quality (CMAQ) model: Impacts of extended chemistry

    EPA Science Inventory

    Clouds and fogs can significantly impact the concentration and distribution of atmospheric gases and aerosols through chemistry, scavenging, and transport. This presentation summarizes the representation of cloud processes in the Community Multiscale Air Quality (CMAQ) modeling ...

  10. Land Processes Distributed Active Archive Center (LP DAAC) 25th Anniversary Recognition "A Model for Government Partnerships". LP DAAC "History and a Look Forward"

    NASA Technical Reports Server (NTRS)

    Behnke, Jeanne; Doescher, Chris

    2015-01-01

    This presentation discusses 25 years of interactions between NASA and the USGS to manage a Land Processes Distributed Active Archive Center (LP DAAC) for the purpose of providing users access to NASA's rich collection of Earth science data. The presentation addresses challenges, efforts, and performance metrics.

  11. Watershed Modeling Applications with the Open-Access Modular Distributed Watershed Educational Toolbox (MOD-WET) and Introductory Hydrology Textbook

    NASA Astrophysics Data System (ADS)

    Huning, L. S.; Margulis, S. A.

    2014-12-01

    Traditionally, introductory hydrology courses focus on hydrologic processes as independent or semi-independent concepts that are ultimately integrated into a watershed model near the end of the term. When an "off-the-shelf" watershed model is introduced in the curriculum, this approach can result in a potential disconnect between process-based hydrology and the inherent interconnectivity of processes within the water cycle. In order to curb this disconnect and reduce the learning curve associated with applying hydrologic concepts to complex real-world problems, we developed the open-access Modular Distributed Watershed Educational Toolbox (MOD-WET). The user-friendly, MATLAB-based toolbox contains the same physical equations for hydrological processes (i.e. precipitation, snow, radiation, evaporation, unsaturated flow, infiltration, groundwater, and runoff) that are presented in the companion e-textbook (http://aqua.seas.ucla.edu/margulis_intro_to_hydro_textbook.html) and taught in the classroom. The modular toolbox functions can be used by students to study individual hydrologic processes. These functions are integrated together to form a simple spatially-distributed watershed model, which reinforces a holistic understanding of how hydrologic processes are interconnected and modeled. Therefore, when watershed modeling is introduced, students are already familiar with the fundamental building blocks that have been unified in the MOD-WET model. Extensive effort has been placed on the development of a highly modular and well-documented code that can be run on a personal computer within the commonly-used MATLAB environment. MOD-WET was designed to: 1) increase the qualitative and quantitative understanding of hydrological processes at the basin-scale and demonstrate how they vary with watershed properties, 2) emphasize applications of hydrologic concepts rather than computer programming, 3) elucidate the underlying physical processes that can often be obscured with a complicated "off-the-shelf" watershed model in an introductory hydrology course, and 4) reduce the learning curve associated with analyzing meaningful real-world problems. The open-access MOD-WET and e-textbook have already been successfully incorporated within our undergraduate curriculum.

  12. Modeling Psychological Contract Violation using Dual Regime Models: An Event-based Approach.

    PubMed

    Hofmans, Joeri

    2017-01-01

    A good understanding of the dynamics of psychological contract violation requires theories, research methods and statistical models that explicitly recognize that violation feelings follow from an event that violates one's acceptance limits, after which interpretative processes are set into motion, determining the intensity of these violation feelings. Whereas theories (in the form of the dynamic model of the psychological contract) and research methods (in the form of daily diary research and experience sampling research) are available by now, the statistical tools to model such a two-stage process are still lacking. The aim of the present paper is to fill this gap in the literature by introducing two statistical models, the Zero-Inflated model and the Hurdle model, that closely mimic the theoretical process underlying the elicitation of violation feelings via two model components: a binary distribution that models whether violation has occurred or not, and a count distribution that models how severe the negative impact is. Moreover, covariates can be included for both model components separately, which yields insight into their unique and shared antecedents. By doing this, the present paper offers a methodological-substantive synergy, showing how sophisticated methodology can be used to examine an important substantive issue.
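
    The two-part logic described above is easy to mimic in simulation: a binary component decides whether violation feelings are triggered at all, and a separate count component grades their severity. The covariate, coefficients, and distributions below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(11)
    n = 5_000
    breach = rng.normal(0.0, 1.0, n)      # hypothetical breach-severity covariate

    # binary part: logit model for whether violation feelings occur at all
    p_violation = 1.0 / (1.0 + np.exp(-(-1.0 + 1.2 * breach)))
    violated = rng.random(n) < p_violation

    # count part: severity given violation, shifted to be >= 1 (a simple
    # stand-in for the zero-truncated count component of a hurdle model)
    lam = np.exp(0.5 + 0.6 * breach)
    intensity = np.where(violated, rng.poisson(lam) + 1, 0)

    print(f"share reporting no violation feelings: {(intensity == 0).mean():.1%}")
    print(f"mean intensity among the violated:      {intensity[violated].mean():.2f}")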

  13. Computer models of social processes: the case of migration.

    PubMed

    Beshers, J M

    1967-06-01

    The demographic model is a program for representing births, deaths, migration, and social mobility as social processes in a non-stationary stochastic process (Markovian). Transition probabilities for each age group are stored and then retrieved at the next appearance of that age cohort. In this way new transition probabilities can be calculated as a function of the old transition probabilities and of two successive distribution vectors. Transition probabilities can be calculated to represent effects of the whole age-by-state distribution at any given time period, too. Such effects as saturation or queuing may be represented by a market mechanism; for example, migration between metropolitan areas can be represented as depending upon job supplies and labor markets. Within metropolitan areas, migration can be represented as invasion and succession processes with tipping points (acceleration curves), and the market device has been extended to represent this phenomenon. Thus, the demographic model makes possible the representation of alternative classes of models of demographic processes. With each class of model one can deduce implied time series (varying parameters within the class) and the output of the several classes can be compared to each other and to outside criteria, such as empirical time series.

  14. Model Checking Techniques for Assessing Functional Form Specifications in Censored Linear Regression Models.

    PubMed

    León, Larry F; Cai, Tianxi

    2012-04-01

    In this paper we develop model checking techniques for assessing functional form specifications of covariates in censored linear regression models. These procedures are based on a censored data analog to taking cumulative sums of "robust" residuals over the space of the covariate under investigation. These cumulative sums are formed by integrating certain Kaplan-Meier estimators and may be viewed as "robust" censored data analogs to the processes considered by Lin, Wei & Ying (2002). The null distributions of these stochastic processes can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be generated by computer simulation. Each observed process can then be graphically compared with a few realizations from the Gaussian process. We also develop formal test statistics for numerical comparison. Such comparisons enable one to assess objectively whether an apparent trend seen in a residual plot reflects model misspecification or natural variation. We illustrate the methods with a well-known dataset. In addition, we examine the finite sample performance of the proposed test statistics in simulation experiments. In our simulation experiments, the proposed test statistics have good power to detect misspecification while at the same time controlling the size of the test.

  15. On-line consolidation of thermoplastic composites

    NASA Astrophysics Data System (ADS)

    Shih, Po-Jen

    An on-line consolidation system, which includes a computer-controlled filament winding machine and a consolidation head assembly, has been designed and constructed to fabricate composite parts from thermoplastic towpregs. A statistical approach was used to determine the significant processing parameters and their effect on the mechanical and physical properties of composite cylinders fabricated by on-line consolidation. A central composite experimental design was used to select the processing conditions for manufacturing the composite cylinders. The thickness, density, void content, degree of crystallinity, and interlaminar shear strength (ILSS) were measured for each composite cylinder. Micrographs showed that complete intimate contact and uniform fiber-matrix distribution were achieved. The degree of crystallinity of the cylinders was found to be in the range of 25-30%. Under optimum processing conditions, an ILSS of 58 MPa and a void content of <1% were achieved for APC-2 (PEEK/carbon fiber) composite cylinders. An in-situ measurement system, which uses a slip ring assembly and a computer data acquisition system, was developed to obtain temperature data during winding. Composite cylinders were manufactured with eight K-type thermocouples installed in various locations inside the cylinder. The temperature distribution inside the composite cylinder during winding was measured for different processing conditions. ABAQUS finite element models of the different processes that occur during on-line consolidation were constructed. The first model was used to determine the convective heat transfer coefficient for the hot-air heat source. A convective heat transfer coefficient of 260 W/(m²·K) was obtained by matching the calculated temperature history to the in-situ measurement data. To predict the temperature distribution during winding, an ABAQUS winding simulation model was developed. The winding speed was modeled by incrementally moving the convective boundary conditions around the outer surface of the composite cylinder. A towpreg heating model was constructed to predict the temperature distribution over the cross section of the incoming towpreg. For the analysis of process-induced thermal stresses, a thermoelastic finite element model was constructed. Using the temperature history obtained from the thermal analysis as the initial conditions, the thermal stresses during winding and cooling were investigated.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demeure, I.M.

    The research presented here is concerned with representation techniques and tools to support the design, prototyping, simulation, and evaluation of message-based parallel, distributed computations. The author describes ParaDiGM (Parallel, Distributed computation Graph Model), a visual representation technique for parallel, message-based distributed computations. ParaDiGM provides several views of a computation depending on the aspect of concern. It is made of two complementary submodels: the DCPG (Distributed Computing Precedence Graph) model and the PAM (Process Architecture Model) model. DCPGs are precedence graphs used to express the functionality of a computation in terms of tasks, message-passing, and data. PAM graphs are used to represent the partitioning of a computation into schedulable units or processes, and the pattern of communication among those units. There is a natural mapping between the two models. He illustrates the utility of ParaDiGM as a representation technique by applying it to various computations (e.g., an adaptive global optimization algorithm, the client-server model). ParaDiGM representations are concise. They can be used in documenting the design and the implementation of parallel, distributed computations, in describing such computations to colleagues, and in comparing and contrasting various implementations of the same computation. He then describes VISA (VISual Assistant), a software tool to support the design, prototyping, and simulation of message-based parallel, distributed computations. VISA is based on the ParaDiGM model. In particular, it supports the editing of ParaDiGM graphs to describe the computations of interest, and the animation of these graphs to provide visual feedback during simulations. The graphs are supplemented with various attributes, simulation parameters, and interpretations, which are procedures that can be executed by VISA.

  17. Stochastic models for the Trojan Y-Chromosome eradication strategy of an invasive species.

    PubMed

    Wang, Xueying; Walton, Jay R; Parshad, Rana D

    2016-01-01

    The Trojan Y-Chromosome (TYC) strategy, an autocidal genetic biocontrol method, has been proposed to eliminate invasive alien species. In this work, we develop a Markov jump process model for this strategy, and we verify that there is a positive probability of wild-type females going extinct within a finite time. Moreover, when sex-reversed Trojan females are introduced at a constant population size, we formulate a stochastic differential equation (SDE) model as an approximation to the proposed Markov jump process model. Using the SDE model, we investigate the probability distribution and expectation of the extinction time of wild-type females by solving Kolmogorov equations associated with these statistics. The results indicate how the probability distribution and expectation of the extinction time are shaped by the initial conditions and the model parameters.

  18. Experimental design for dynamics identification of cellular processes.

    PubMed

    Dinh, Vu; Rundell, Ann E; Buzzard, Gregery T

    2014-03-01

    We address the problem of using nonlinear models to design experiments to characterize the dynamics of cellular processes by using the approach of the Maximally Informative Next Experiment (MINE), which was introduced in W. Dong et al. (PLoS ONE 3(8):e3105, 2008) and independently in M.M. Donahue et al. (IET Syst. Biol. 4:249-262, 2010). In this approach, existing data are used to define a probability distribution on the parameters; the next measurement point is the one that yields the largest model output variance with this distribution. Building upon this approach, we introduce the Expected Dynamics Estimator (EDE), which is the expected value, under this distribution, of the output as a function of time. We prove the consistency of this estimator (uniform convergence to the true dynamics) even when the chosen experiments cluster in a finite set of points. We extend this proof of consistency to various practical assumptions on noisy data and moderate levels of model mismatch. Through the derivation and proof, we develop a relaxed version of MINE that is more computationally tractable and robust than the original formulation. The results are illustrated with numerical examples on two nonlinear ordinary differential equation models of biomolecular and cellular processes.
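
    The MINE criterion itself is compact enough to sketch: given parameter draws consistent with the data so far, evaluate the model over a grid of candidate measurement times and pick the time where the output variance is largest. The one-state decay model and the uniform parameter draws below are stand-ins, not the systems studied in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def model(theta, t):
            # toy one-state exponential decay standing in for an ODE solution
            k, x0 = theta
            return x0 * np.exp(-k * t)

        def mine_next_point(theta_samples, t_grid):
            """Return the candidate time with the largest model-output
            variance over the current parameter distribution (MINE)."""
            outputs = np.array([model(th, t_grid) for th in theta_samples])
            return t_grid[np.argmax(outputs.var(axis=0))]

        # draws standing in for the data-constrained parameter distribution
        thetas = np.column_stack([rng.uniform(0.5, 1.5, 500),
                                  rng.uniform(0.8, 1.2, 500)])
        t_next = mine_next_point(thetas, np.linspace(0.0, 5.0, 51))
        print("next measurement at t =", t_next)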

  19. Estimation of the processes controlling variability in phytoplankton pigment distributions on the southeastern U.S. continental shelf

    NASA Technical Reports Server (NTRS)

    Mcclain, Charles R.; Ishizaka, Joji; Hofmann, Eileen E.

    1990-01-01

    Five coastal-zone-color-scanner images from the southeastern U.S. continental shelf are combined with concurrent moored current meter measurements to assess the processes controlling the variability in chlorophyll concentration and distribution in this region. An equation governing the space and time distribution of a nonconservative quantity such as chlorophyll is used in the calculations. The terms of the equation, estimated from observations, show that advective, diffusive, and local processes contribute to the plankton distributions and vary with time and location. The results from this calculation are compared with similar results obtained using a numerical physical-biological model with circulation fields derived from an optimal interpolation of the current meter observations and it is concluded that the two approaches produce different estimates of the processes controlling phytoplankton variability.

  20. A weakly-constrained data assimilation approach to address rainfall-runoff model structural inadequacy in streamflow prediction

    NASA Astrophysics Data System (ADS)

    Lee, Haksu; Seo, Dong-Jun; Noh, Seong Jin

    2016-11-01

    This paper presents a simple yet effective weakly-constrained (WC) data assimilation (DA) approach for hydrologic models which accounts for model structural inadequacies associated with rainfall-runoff transformation processes. Compared to strongly-constrained (SC) DA, WC DA adjusts the control variables less while producing a similarly or more accurate analysis. Hence the adjusted model states are dynamically more consistent with those of the base model. The inadequacy of a rainfall-runoff model was modeled as an additive error to runoff components prior to routing and penalized in the objective function. Two example modeling applications, distributed and lumped, were carried out to investigate the effects of the WC DA approach on DA results. For distributed modeling, the distributed Sacramento Soil Moisture Accounting (SAC-SMA) model was applied to the TIFM7 Basin in Missouri, USA. For lumped modeling, the lumped SAC-SMA model was applied to nineteen basins in Texas. In both cases, the variational DA (VAR) technique was used to assimilate discharge data at the basin outlet. For distributed SAC-SMA, spatially homogeneous error modeling yielded updated states that are spatially much more similar to the a priori states, as quantified by the Earth Mover's Distance (EMD), than spatially heterogeneous error modeling, by up to ∼10 times. DA experiments using both lumped and distributed SAC-SMA modeling indicated that assimilating outlet flow using the WC approach generally produces a smaller mean absolute difference as well as a higher correlation between the a priori and the updated states than the SC approach, while producing a similar or smaller root mean square error of streamflow analysis and prediction. Large differences were found in both the lumped and distributed modeling cases between the updated and the a priori lower zone tension and primary free water contents for both the WC and SC approaches, indicating possible model structural deficiency in describing low flows or evapotranspiration processes for the catchments studied. Also presented are the findings from this study and key issues relevant to WC DA approaches using hydrologic models.
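
    The SC/WC distinction can be illustrated on a toy linear reservoir: an SC formulation would adjust only the initial storage, while the WC sketch below also estimates an additive error term at every time step, penalized in the objective as described above. The model equations, parameter values, and penalty weight are all illustrative assumptions, not the SAC-SMA setup.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)

        # toy reservoir: s[k+1] = a*s[k] + rain[k] + w[k], q[k] = c*s[k],
        # where w[k] is the additive structural error to be estimated
        a, c, nk = 0.9, 0.2, 30
        rain = rng.gamma(2.0, 1.0, nk)
        true_w = 0.3 * np.sin(np.arange(nk) / 3.0)  # hidden structural error
        s = 5.0
        q_obs = np.empty(nk)
        for k in range(nk):
            s = a * s + rain[k] + true_w[k]
            q_obs[k] = c * s + rng.normal(0.0, 0.02)

        def simulate(s0, w):
            s, q = s0, np.empty(nk)
            for k in range(nk):
                s = a * s + rain[k] + w[k]
                q[k] = c * s
            return q

        def wc_cost(z, lam=0.5):
            # control vector: initial storage plus one error term per step
            s0, w = z[0], z[1:]
            misfit = np.sum((simulate(s0, w) - q_obs) ** 2)
            return misfit + lam * np.sum(w ** 2)  # weak-constraint penalty

        z = minimize(wc_cost, np.zeros(nk + 1), method="L-BFGS-B").x
        print("estimated s0:", round(z[0], 2),
              " mean |w|:", round(np.abs(z[1:]).mean(), 3))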

  1. Hierarchical representation of shapes in visual cortex—from localized features to figural shape segregation

    PubMed Central

    Tschechne, Stephan; Neumann, Heiko

    2014-01-01

    Visual structures in the environment are segmented into image regions, and these are combined into a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable of making accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1–V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model thus proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy. PMID:25157228

  3. Ship Detection in SAR Image Based on the Alpha-stable Distribution

    PubMed Central

    Wang, Changcheng; Liao, Mingsheng; Li, Xiaofeng

    2008-01-01

    This paper describes an improved Constant False Alarm Rate (CFAR) ship detection algorithm for spaceborne synthetic aperture radar (SAR) images based on the Alpha-stable distribution model. Typically, the CFAR algorithm uses the Gaussian distribution model to describe the statistical characteristics of a SAR image's background clutter. However, the Gaussian distribution is only valid for multilook SAR images when several radar looks are averaged. As sea clutter in SAR images shows spiky or heavy-tailed characteristics, the Gaussian distribution often fails to describe background sea clutter. In this study, we replace the Gaussian distribution with the Alpha-stable distribution, which is widely used in impulsive or spiky signal processing, to describe the background sea clutter in SAR images. In our proposed algorithm, an initial step for detecting possible ship targets is employed. Then, similar to the typical two-parameter CFAR algorithm, a local process is applied to each pixel identified as a possible target. A RADARSAT-1 image is used to validate this Alpha-stable distribution based algorithm. Meanwhile, known ship location data from the time of the RADARSAT-1 SAR image acquisition are used to validate the ship detection results. Validation results show improvements of the new CFAR algorithm based on the Alpha-stable distribution over the CFAR algorithm based on the Gaussian distribution. PMID:27873794
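
    The core point, that a Gaussian threshold misstates the false-alarm rate on spiky clutter, is easy to reproduce with scipy's alpha-stable distribution. The parameters below are illustrative, and the known values are used in place of a (slow) maximum-likelihood fit; this sketches the thresholding idea only, not the paper's full two-step detection algorithm.

        import numpy as np
        from scipy.stats import levy_stable

        rng = np.random.default_rng(3)

        # synthetic spiky sea clutter: alpha < 2 gives the heavy tails
        alpha, beta, loc, scale = 1.5, 1.0, 1.0, 0.3
        clutter = levy_stable.rvs(alpha, beta, loc=loc, scale=scale,
                                  size=5000, random_state=rng)

        pfa = 1e-3  # design probability of false alarm
        # alpha-stable CFAR threshold from the model's upper quantile
        t_stable = levy_stable.ppf(1.0 - pfa, alpha, beta, loc=loc, scale=scale)
        # two-parameter Gaussian CFAR threshold from the same samples
        t_gauss = clutter.mean() + 3.09 * clutter.std()  # z for pfa ~ 1e-3

        print("observed false-alarm rates:")
        print("  alpha-stable threshold:", np.mean(clutter > t_stable))
        print("  Gaussian threshold    :", np.mean(clutter > t_gauss))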

  4. Weak ergodicity of population evolution processes.

    PubMed

    Inaba, H

    1989-10-01

    The weak ergodic theorems of mathematical demography state that the age distribution of a closed population is asymptotically independent of the initial distribution. In this paper, we provide a new proof of the weak ergodic theorem of the multistate population model with continuous time. The main tool to attain this purpose is a theory of multiplicative processes, which was mainly developed by Garrett Birkhoff, who showed that ergodic properties generally hold for an appropriate class of multiplicative processes. First, we construct a general theory of multiplicative processes on a Banach lattice. Next, we formulate a dynamical model of a multistate population and show that its evolution operator forms a multiplicative process on the state space of the population. Subsequently, we investigate a sufficient condition that guarantees the weak ergodicity of the multiplicative process. Finally, we prove the weak and strong ergodic theorems for the multistate population and resolve the consistency problem.

  5. Classification tree models for predicting distributions of michigan stream fish from landscape variables

    USGS Publications Warehouse

    Steen, P.J.; Zorn, T.G.; Seelbach, P.W.; Schaeffer, J.S.

    2008-01-01

    Traditionally, fish habitat requirements have been described from local-scale environmental variables. However, recent studies have shown that studying landscape-scale processes improves our understanding of what drives species assemblages and distribution patterns across the landscape. Our goal was to learn more about constraints on the distribution of Michigan stream fish by examining landscape-scale habitat variables. We used classification trees and landscape-scale habitat variables to create and validate presence-absence models and relative abundance models for Michigan stream fishes. We developed 93 presence-absence models that on average were 72% correct in making predictions for an independent data set, and we developed 46 relative abundance models that were 76% correct in making predictions for independent data. The models were used to create statewide predictive distribution and abundance maps that have the potential to be used for a variety of conservation and scientific purposes. © Copyright by the American Fisheries Society 2008.
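
    As a sketch of the presence-absence approach, a single classification tree can be trained on landscape-scale predictors and scored on held-out data, mirroring the validation against an independent data set. The predictors, the presence rule, and all values below are synthetic stand-ins for the Michigan data.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(4)

        # synthetic landscape predictors: catchment area (km2),
        # July air temperature (C), and percent urban land cover
        n = 1000
        X = np.column_stack([rng.lognormal(3.0, 1.0, n),
                             rng.normal(20.0, 3.0, n),
                             rng.uniform(0.0, 100.0, n)])
        # made-up presence rule: cool, larger, less-urban streams
        y = ((X[:, 1] < 21) & (X[:, 0] > 15) & (X[:, 2] < 40)).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        tree = DecisionTreeClassifier(max_depth=4, random_state=0)
        tree.fit(X_tr, y_tr)
        print("holdout accuracy:", tree.score(X_te, y_te))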

  6. Explicit simulation of ice particle habits in a Numerical Weather Prediction Model

    NASA Astrophysics Data System (ADS)

    Hashino, Tempei

    2007-05-01

    This study developed a scheme for the explicit simulation of ice particle habits in Numerical Weather Prediction (NWP) models. The scheme is called the Spectral Ice Habit Prediction System (SHIPS), and the goal is to retain the growth history of ice particles in the Eulerian dynamics framework. It diagnoses characteristics of ice particles based on a series of particle property variables (PPVs) that reflect the history of microphysical processes and the transport between mass bins and air parcels in space. Therefore, the categorization of ice particles typically used in bulk microphysical parameterizations and traditional bin models is not necessary, so that errors that stem from the categorization can be avoided. SHIPS predicts polycrystals as well as hexagonal monocrystals based on empirically derived habit frequency and growth rate, and simulates the habit-dependent aggregation and riming processes by use of the stochastic collection equation with predicted PPVs. Idealized two-dimensional simulations were performed with SHIPS in an NWP model. The predicted spatial distribution of ice particle habits and types, and the evolution of particle size distributions, showed good quantitative agreement with observations. This comprehensive model of ice particle properties, distributions, and evolution in clouds can be used to better understand problems facing a wide range of research disciplines, including microphysics processes, radiative transfer in a cloudy atmosphere, data assimilation, and weather modification.

  7. The impact of runoff generation mechanisms on the location of critical source areas

    USGS Publications Warehouse

    Lyon, S.W.; McHale, M.R.; Walter, M.T.; Steenhuis, T.S.

    2006-01-01

    Identifying phosphorus (P) source areas and transport pathways is a key step in decreasing P loading to natural water systems. This study compared the effects of two modeled runoff generation processes - saturation excess and infiltration excess - on total phosphorus (TP) and soluble reactive phosphorus (SRP) concentrations in 10 catchment streams of a Catskill mountain watershed in southeastern New York. The spatial distribution of runoff from forested land and agricultural land was generated for both runoff processes; results of both distributions were consistent with Soil Conservation Service-Curve Number (SCS-CN) theory. These spatial runoff distributions were then used to simulate stream concentrations of TP and SRP through a simple equation derived from an observed relation between P concentration and land use; empirical results indicate that TP and SRP concentrations increased with increasing percentage of agricultural land. Simulated TP and SRP stream concentrations predicted for the 10 catchments were strongly affected by the assumed runoff mechanism. The modeled TP and SRP concentrations produced by saturation excess distribution averaged 31 percent higher and 42 percent higher, respectively, than those produced by the infiltration excess distribution. Misrepresenting the primary runoff mechanism could not only produce erroneous concentrations, it could fail to correctly locate critical source areas for implementation of best management practices. Thus, identification of the primary runoff mechanism is critical in selection of appropriate models in the mitigation of nonpoint source pollution. Correct representation of runoff processes is also critical in the future development of biogeochemical transport models, especially those that address nutrient fluxes.

  8. A distributed Clips implementation: dClips

    NASA Technical Reports Server (NTRS)

    Li, Y. Philip

    1993-01-01

    A distributed version of the Clips language, dClips, was implemented on top of two existing generic distributed messaging systems to show that: (1) it is easy to create a coarse-grained parallel programming environment out of an existing language if a high level messaging system is used; and (2) the computing model of a parallel programming environment can be changed easily if we change the underlying messaging system. dClips processes were first connected with a simple master-slave model. A client-server model with intercommunicating agents was later implemented. The concept of service broker is being investigated.

  9. Integrated measurements and modeling of CO2, CH4, and N2O fluxes using soil microsite frequency distributions

    NASA Astrophysics Data System (ADS)

    Davidson, Eric; Sihi, Debjani; Savage, Kathleen

    2017-04-01

    Soil fluxes of greenhouse gases (GHGs) play a significant role as biotic feedbacks to climate change. Production and consumption of carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) are affected by complex interactions of temperature, moisture, and substrate supply, which are further complicated by spatial heterogeneity of the soil matrix. Models of belowground processes of these GHGs should be internally consistent with respect to the biophysical processes of gaseous production, consumption, and transport within the soil, including the contrasting effects of oxygen (O2) as either substrate or inhibitor. We installed automated chambers to simultaneously measure soil fluxes of CO2 (using LiCor-IRGA), CH4, and N2O (using Aerodyne quantum cascade laser) along soil moisture gradients at the Howland Forest in Maine, USA. Measured fluxes of these GHGs were used to develop and validate a merged model. While originally intended for aerobic respiration, the core structure of the Dual Arrhenius and Michaelis-Menten (DAMM) model was modified by adding M-M and Arrhenius functions for each GHG production and consumption process, and then using the same diffusion functions for each GHG and for O2. The area under a soil chamber was partitioned according to a log-normal probability distribution function, where only a small fraction of microsites had high available-C. The probability distribution of soil C leads to a simulated distribution of heterotrophic respiration, which translates to a distribution of O2 consumption among microsites. Linking microsite consumption of O2 with a diffusion model generates microsite concentrations of O2, which then determine the distribution of microsite production and consumption of CH4 and N2O, and subsequently their microsite concentrations using the same diffusion function. At many moisture values, there are some microsites of production and some of consumption for each gas, and the resulting simulated microsite concentrations of CH4 and N2O range from below ambient to above ambient atmospheric values. As soil moisture or temperature increase, the skewness of the microsite distributions of heterotrophic respiration and CH4 concentrations shifts toward a larger fraction of high values, while the skewness of microsite distributions of O2 and N2O concentrations shifts toward a larger fraction of low values. This approach of probability distribution functions for each gas simulates the importance of microsite hotspots of methanogenesis and N2O reduction at high moisture (and temperature). In addition, the model demonstrates that net consumption of atmospheric CH4 and N2O can occur simultaneously within a chamber due to the distribution of soil microsite conditions, which is consistent with some episodes of measured fluxes. Because soil CO2, N2O and CH4 fluxes are linked through substrate supply and O2 effects, the multiple constraints of simultaneous measurements of all three GHGs proved to be effective when applied to our combined model. Simulating all three GHGs simultaneously in a parsimonious modeling framework provides confidence that the most important mechanisms are skillfully simulated using appropriate parameterization and good process representation.
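
    The microsite idea can be sketched in a few lines: draw substrate carbon for many microsites under one chamber from a log-normal distribution and push each through a dual Michaelis-Menten rate law, the core DAMM form. The parameter values are illustrative, and the O2 feedback, diffusion, and CH4/N2O branches of the full merged model are omitted.

        import numpy as np

        rng = np.random.default_rng(5)

        def damm_resp(substrate, o2, vmax=1.0, km_s=0.1, km_o2=0.1):
            """Dual Michaelis-Menten respiration in substrate and O2
            (illustrative parameter values)."""
            return vmax * (substrate / (km_s + substrate)) * (o2 / (km_o2 + o2))

        # log-normal substrate-C across microsites: most sites are
        # C-poor, a small fraction are hotspots
        substrate = rng.lognormal(mean=-2.0, sigma=1.0, size=10000)
        resp = damm_resp(substrate, o2=0.21)  # ambient O2 everywhere

        share = np.sort(resp)[-500:].sum() / resp.sum()
        print("mean respiration:", round(resp.mean(), 4))
        print("top 5% of microsites supply", round(100 * share, 1), "% of flux")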

  10. How Bright is the Proton? A Precise Determination of the Photon Parton Distribution Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manohar, Aneesh; Nason, Paolo; Salam, Gavin P.; Zanderighi, Giulia

    2016-12-09

    It has become apparent in recent years that it is important, notably for a range of physics studies at the Large Hadron Collider, to have accurate knowledge on the distribution of photons in the proton. We show how the photon parton distribution function (PDF) can be determined in a model-independent manner, using electron-proton (ep) scattering data, in effect viewing the ep → e + X process as an electron scattering off the photon field of the proton. To this end, we consider an imaginary, beyond the Standard Model process with a flavor changing photon-lepton vertex. We write its cross section in two ways: one in terms of proton structure functions, the other in terms of a photon distribution. Requiring their equivalence yields the photon distribution as an integral over proton structure functions. As a result of the good precision of ep data, we constrain the photon PDF at the level of 1%–2% over a wide range of momentum fractions.

  12. Modified parton branching model for multi-particle production in hadronic collisions: Application to SUSY particle branching

    NASA Astrophysics Data System (ADS)

    Yuanyuan, Zhang

    The stochastic branching model of multi-particle production in high-energy collisions has a theoretical basis in perturbative QCD and also successfully describes the experimental data over a wide energy range. However, over the years, little attention has been paid to branching models for supersymmetric (SUSY) particles. In this thesis, a stochastic branching model has been built to describe the evolution of pure supersymmetric particle jets. This model is a modified two-phase stochastic branching process, or more precisely a two-phase Simple Birth Process plus Poisson Process. The general case in which the jets contain both ordinary particle jets and supersymmetric particle jets has also been investigated. We obtain the multiplicity distribution of the general case, which contains a hypergeometric function in its expression. We apply this new multiplicity distribution to the current experimental data of pp collisions at center-of-mass energies √s = 0.9, 2.36, and 7 TeV. The fits show that supersymmetric particles have not participated in branching at current collision energies.

  13. The Role of Aerosols on Precipitation Processes: Cloud Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Li, X.; Matsui, T.

    2012-01-01

    Cloud microphysics is inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effects of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, a detailed spectral-bin microphysical scheme was implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops) and several types of ice particles [i.e., pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel and frozen drops/hail]. Each type is described by a special size distribution function containing many categories (i.e., 33 bins). Atmospheric aerosols are also described using number density size-distribution functions. The model is tested by studying the evolution of deep cloud systems in the west Pacific warm pool region, the sub-tropics (Florida) and midlatitudes using identical thermodynamic conditions but with different concentrations of CCN: a low "clean" concentration and a high "dirty" concentration. Results indicate that the low CCN concentration case produces rainfall at the surface sooner than the high CCN case but has less cloud water mass aloft. Because the spectral-bin model explicitly calculates and allows for the examination of both the mass and number concentration of species in each size category, a detailed analysis of the instantaneous size spectrum can be obtained for these cases. It is shown that since the low CCN case produces fewer droplets, larger sizes develop due to greater condensational and collection growth, leading to a broader size spectrum in comparison to the high CCN case. Sensitivity tests were performed to identify the impact of ice processes, radiation and large-scale influence on cloud-aerosol interactive processes, especially regarding surface rainfall amounts and characteristics (i.e., heavy or convective versus light or stratiform types). In addition, an inert tracer was included to follow the vertical redistribution of aerosols by cloud processes. We will also give a brief review of observational evidence on the role of aerosols in precipitation processes.

  14. Variances in the projections, resulting from CLIMEX, Boosted Regression Trees and Random Forests techniques

    NASA Astrophysics Data System (ADS)

    Shabani, Farzin; Kumar, Lalit; Solhjouy-fard, Samaneh

    2017-08-01

    The aim of this study was to comparatively investigate and evaluate the capabilities of correlative and mechanistic modeling processes, applied to the projection of future distributions of date palm in novel environments, and to establish a method of minimizing uncertainty in the projections of differing techniques. The study area comprises the countries of the Middle East. We compared the mechanistic model CLIMEX (CL) with the correlative models MaxEnt (MX), Boosted Regression Trees (BRT), and Random Forests (RF) to project current and future distributions of date palm (Phoenix dactylifera L.). The Global Climate Model (GCM) CSIRO-Mk3.0 (CS), using the A2 emissions scenario, was selected for making projections. Both indigenous and alien distribution data of the species were utilized in the modeling process. The common areas predicted by MX, BRT, RF, and CL from the CS GCM were extracted and compared to ascertain the projection uncertainty levels of each individual technique. The common areas identified by all four modeling techniques were used to produce a map indicating suitable and unsuitable areas for date palm cultivation in Middle Eastern countries, for the present and for the year 2100. The four modeling approaches predict fairly different distributions. Projections from CL were more conservative than those from MX, and BRT and RF were the most conservative methods in terms of projections for the current time. The combination of the final CL and MX projections for the present and 2100 provides higher certainty concerning those areas that will become highly suitable for future date palm cultivation. According to the four models, cold, hot, and wet stress, with differences on a regional basis, appear to be the major restrictions on future date palm distribution. The results demonstrate variances in the projections resulting from the different techniques. The assessment and interpretation of model projections require reservations, especially for correlative models such as MX, BRT, and RF. Intersections between different techniques may decrease uncertainty in future distribution projections. However, readers should bear in mind that the uncertainties arise largely because future GHG emission scenarios cannot be known with sufficient precision. Suggestions for improving projections through methodology and processing are included.

  15. Spatial perspectives in state-and-transition models: A missing link to land management?

    USDA-ARS?s Scientific Manuscript database

    Conceptual models of alternative states and thresholds are based largely on observations of ecosystem processes at a few points in space. Because the distribution of alternative states in spatially-structured ecosystems is the result of variations in pattern-process interactions at different scales,...

  16. Modeling financial markets by the multiplicative sequence of trades

    NASA Astrophysics Data System (ADS)

    Gontis, V.; Kaulakys, B.

    2004-12-01

    We introduce a stochastic multiplicative point process modeling the trading activity of financial markets. Such a model system exhibits a power-law spectral density S(f) ∝ 1/f^β, scaling as a power of frequency for various values of β between 0.5 and 2. Furthermore, we analyze the relation between the power-law autocorrelations and the origin of the power-law probability distribution of the trading activity. The model reproduces the spectral properties of trading activity and explains the mechanism of power-law distribution in real markets.
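
    A toy version of such a process: let the inter-event time perform a geometric random walk between reflecting bounds, then estimate the spectral exponent from the periodogram of the event counts. The published model's drift and diffusion terms are replaced here by this simplified kernel, and all constants are illustrative.

        import numpy as np

        rng = np.random.default_rng(6)

        # inter-event times diffuse multiplicatively between bounds
        n_events, sigma, lo, hi = 100000, 0.05, 1e-2, 1e1
        tau = np.empty(n_events)
        tau[0] = 1.0
        for k in range(n_events - 1):
            step = tau[k] * np.exp(sigma * rng.normal())
            if step > hi:
                step = hi * hi / step   # reflect at the upper bound
            elif step < lo:
                step = lo * lo / step   # reflect at the lower bound
            tau[k + 1] = step

        events = np.cumsum(tau)
        counts, _ = np.histogram(events, bins=int(events[-1]))  # events/unit time
        spec = np.abs(np.fft.rfft(counts - counts.mean())) ** 2
        freq = np.fft.rfftfreq(len(counts))
        band = (freq > 1e-3) & (freq < 1e-1)
        slope = np.polyfit(np.log(freq[band]), np.log(spec[band]), 1)[0]
        print("estimated spectral exponent beta ~", round(-slope, 2))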

  17. Bayesian analysis of Jolly-Seber type models

    USGS Publications Warehouse

    Matechou, Eleni; Nicholls, Geoff K.; Morgan, Byron J. T.; Collazo, Jaime A.; Lyons, James E.

    2016-01-01

    We propose the use of finite mixtures of continuous distributions in modelling the process by which new individuals, that arrive in groups, become part of a wildlife population. We demonstrate this approach using a data set of migrating semipalmated sandpipers (Calidris pusilla) for which we extend existing stopover models to allow for individuals to have different behaviour in terms of their stopover duration at the site. We demonstrate the use of reversible jump MCMC methods to derive posterior distributions for the model parameters and the models, simultaneously. The algorithm moves between models with different numbers of arrival groups as well as between models with different numbers of behavioural groups. The approach is shown to provide new ecological insights about the stopover behaviour of semipalmated sandpipers but is generally applicable to any population in which animals arrive in groups and potentially exhibit heterogeneity in terms of one or more other processes.

  18. SDMtoolbox 2.0: the next generation Python-based GIS toolkit for landscape genetic, biogeographic and species distribution model analyses

    PubMed Central

    Bennett, Joseph R.; French, Connor M.

    2017-01-01

    SDMtoolbox 2.0 is a software package for spatial studies of ecology, evolution, and genetics. The release of SDMtoolbox 2.0 allows researchers to use the most current ArcGIS software and MaxEnt software, and reduces the amount of time that would be spent developing common solutions. The central aim of this software is to automate complicated and repetitive spatial analyses in an intuitive graphical user interface. One core tenet is the careful parameterization of species distribution models (SDMs) to maximize each model's discriminatory ability and minimize overfitting. This includes careful processing of occurrence data and environmental data, and careful model parameterization. This program directly interfaces with MaxEnt, one of the most powerful and widely used species distribution modeling software programs, although SDMtoolbox 2.0 is not limited to species distribution modeling or restricted to modeling in MaxEnt. Many of the SDM pre- and post-processing tools have 'universal' analogs for use with any modeling software. The current version contains a total of 79 scripts that harness the power of ArcGIS for macroecology, landscape genetics, and evolutionary studies. For example, these tools allow for biodiversity quantification (such as species richness or corrected weighted endemism), generation of least-cost paths and corridors among shared haplotypes, assessment of the significance of spatial randomizations, and enforcement of dispersal limitations of SDMs projected into future climates—to only name a few functions contained in SDMtoolbox 2.0. Lastly, dozens of generalized tools exist for batch processing and conversion of GIS data types or formats, which are broadly useful to any ArcMap user. PMID:29230356

  19. Time distributions of solar energetic particle events: Are SEPEs really random?

    NASA Astrophysics Data System (ADS)

    Jiggens, P. T. A.; Gabriel, S. B.

    2009-10-01

    Solar energetic particle events (SEPEs) can exhibit flux increases of several orders of magnitude over background levels and have always been considered to be random in nature in statistical models with no dependence of any one event on the occurrence of previous events. We examine whether this assumption of randomness in time is correct. Engineering modeling of SEPEs is important to enable reliable and efficient design of both Earth-orbiting and interplanetary spacecraft and future manned missions to Mars and the Moon. All existing engineering models assume that the frequency of SEPEs follows a Poisson process. We present analysis of the event waiting times using alternative distributions described by Lévy and time-dependent Poisson processes and compared these with the usual Poisson distribution. The results show significant deviation from a Poisson process and indicate that the underlying physical processes might be more closely related to a Lévy-type process, suggesting that there is some inherent “memory” in the system. Inherent Poisson assumptions of stationarity and event independence are investigated, and it appears that they do not hold and can be dependent upon the event definition used. SEPEs appear to have some memory indicating that events are not completely random with activity levels varying even during solar active periods and are characterized by clusters of events. This could have significant ramifications for engineering models of the SEP environment, and it is recommended that current statistical engineering models of the SEP environment should be modified to incorporate long-term event dependency and short-term system memory.
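
    A first diagnostic for the Poisson question is to test the event waiting times against an exponential distribution and to inspect their coefficient of variation, which exceeds 1 for clustered processes. The heavy-tailed synthetic waiting times below merely stand in for a real SEPE catalogue.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)

        # synthetic heavy-tailed waiting times (days), a stand-in catalogue
        waits = 5.0 * rng.pareto(1.5, 400)

        # a stationary Poisson process would give exponential waiting times
        ks = stats.kstest(waits, "expon", args=(0.0, waits.mean()))
        print(f"KS vs exponential: D={ks.statistic:.3f}, p={ks.pvalue:.2g}")
        # CV > 1 indicates clustering, i.e., "memory" in the event sequence
        print("coefficient of variation:", round(waits.std() / waits.mean(), 2))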

  20. 45 CFR 310.10 - What are the functional requirements for the Model Tribal IV-D System?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Tribal financial management and expenditure information; (d) Distribute current support and arrearage..., process and monitor accounts receivable on all amounts owed, collected, and distributed with regard to: (1...

  1. 45 CFR 310.10 - What are the functional requirements for the Model Tribal IV-D System?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Tribal financial management and expenditure information; (d) Distribute current support and arrearage..., process and monitor accounts receivable on all amounts owed, collected, and distributed with regard to: (1...

  2. 45 CFR 310.10 - What are the functional requirements for the Model Tribal IV-D System?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Tribal financial management and expenditure information; (d) Distribute current support and arrearage..., process and monitor accounts receivable on all amounts owed, collected, and distributed with regard to: (1...

  3. Degree Distribution of Position-Dependent Ball-Passing Networks in Football Games

    NASA Astrophysics Data System (ADS)

    Narizuka, Takuma; Yamamoto, Ken; Yamazaki, Yoshihiro

    2015-08-01

    We propose a simple stochastic model describing the position-dependent ball-passing network in football (soccer) games. In this network, a player in a certain area in a divided field is a node, and a pass between two nodes corresponds to an edge. Our stochastic process model is characterized by the consecutive choice of a node depending on its intrinsic fitness. We derive an explicit expression for the degree distribution and find that the derived distribution reproduces that for actual data reasonably well.
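
    The consecutive-choice rule is simple to simulate: assign each node an intrinsic fitness and draw the receiver of every pass with probability proportional to that fitness. The exponential fitness law and the counts below are illustrative, not fitted to match data.

        import numpy as np

        rng = np.random.default_rng(8)

        n_nodes, n_passes = 50, 5000
        fitness = rng.exponential(1.0, n_nodes)    # intrinsic node fitness
        p = fitness / fitness.sum()
        receivers = rng.choice(n_nodes, size=n_passes, p=p)
        degree = np.bincount(receivers, minlength=n_nodes)
        print("max degree:", degree.max(),
              " median degree:", int(np.median(degree)))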

  4. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
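
    The characteristic-function route to estimation can be sketched independently of the plasma model: compute the empirical characteristic function of the signal and minimize its distance to a closed-form model characteristic function. A Laplace amplitude model stands in here for a non-positive-definite distribution; the frequency grid and the loss are illustrative choices.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(9)

        def ecf(x, u):
            """Empirical characteristic function of sample x at frequencies u."""
            return np.exp(1j * np.outer(u, x)).mean(axis=1)

        # toy signal with positive and negative values (Laplace distributed)
        data = rng.laplace(loc=0.5, scale=1.2, size=20000)
        u = np.linspace(0.1, 2.0, 40)
        phi_hat = ecf(data, u)

        def loss(params):
            mu, b = params
            phi = np.exp(1j * mu * u) / (1.0 + (b * u) ** 2)  # Laplace CF
            return np.abs(phi - phi_hat).sum()

        fit = minimize(loss, x0=[0.0, 1.0], method="Nelder-Mead")
        print("estimated (mu, b):", fit.x)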

  5. Simulator of Non-homogenous Alumina and Current Distribution in an Aluminum Electrolysis Cell to Predict Low-Voltage Anode Effects

    NASA Astrophysics Data System (ADS)

    Dion, Lukas; Kiss, László I.; Poncsák, Sándor; Lagacé, Charles-Luc

    2018-04-01

    Perfluorocarbons are important contributors to aluminum production greenhouse gas inventories. Tetrafluoromethane and hexafluoroethane are produced in the electrolysis process when a harmful event called anode effect occurs in the cell. This incident is strongly related to the lack of alumina and the current distribution in the cell and can be classified into two categories: high-voltage and low-voltage anode effects. The latter is hard to detect during the normal electrolysis process and, therefore, new tools are necessary to predict this event and minimize its occurrence. This paper discusses a new approach to model the alumina distribution behavior in an electrolysis cell by dividing the electrolytic bath into non-homogenous concentration zones using discrete elements. The different mechanisms related to the alumina distribution are discussed in detail. Moreover, with a detailed electrical model, it is possible to calculate the current distribution among the different anodic assemblies. With this information, the model can evaluate if low-voltage emissions are likely to be present under the simulated conditions. Using the simulator will help the understanding of the role of the alumina distribution which, in turn, will improve the cell energy consumption and stability while reducing the occurrence of high- and low-voltage anode effects.

  6. TSPA 1991: An initial total-system performance assessment for Yucca Mountain; Yucca Mountain Site Characterization Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnard, R.W.; Wilson, M.L.; Dockery, H.A.

    1992-07-01

    This report describes an assessment of the long-term performance of a repository system that contains deeply buried highly radioactive waste; the system is assumed to be located at the potential site at Yucca Mountain, Nevada. The study includes an identification of features, events, and processes that might affect the potential repository, a construction of scenarios based on this identification, a selection of models describing these scenarios (including abstraction of appropriate models from detailed models), a selection of probability distributions for the parameters in the models, a stochastic calculation of radionuclide releases for the scenarios, and a derivation of complementary cumulative distribution functions (CCDFs) for the releases. Releases and CCDFs are calculated for four categories of scenarios: aqueous flow (modeling primarily the existing conditions at the site, with allowances for climate change), gaseous flow, basaltic igneous activity, and human intrusion. The study shows that models of complex processes can be abstracted into more simplified representations that preserve the understanding of the processes and produce results consistent with those of more complex models.

  7. Does climate have heavy tails?

    NASA Astrophysics Data System (ADS)

    Bermejo, Miguel; Mudelsee, Manfred

    2013-04-01

    When we speak of a distribution with heavy tails, we mean that the probability of extreme values is relatively large. Several heavy-tail models are constructed from Poisson processes, which are the most tractable models. Among such processes, some of the most important are the Lévy processes, which are those processes with independent, stationary increments and stochastic continuity. If the random component of a climate process that generates the data exhibits a heavy-tail distribution, and if that fact is ignored by assuming a finite-variance distribution, then there would be serious consequences (in the form, e.g., of bias) for the analysis of extreme values. Yet, it appears to be an open question to what extent and degree climate data exhibit heavy-tail phenomena. We present a study of statistical inference in the presence of heavy-tail distributions. In particular, we explore (1) the estimation of the tail index of the marginal distribution using several estimation techniques (e.g., the Hill estimator, the Pickands estimator) and (2) the power of hypothesis tests. The performance of the different methods is compared on artificial time series by means of Monte Carlo experiments. We systematically apply the heavy-tail inference to observed climate data; in particular, we focus on time series data. We study several proxy and directly observed climate variables from the instrumental period, the Holocene, and the Pleistocene. This work receives financial support from the European Commission (Marie Curie Initial Training Network LINC, No. 289447, within the 7th Framework Programme).
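
    As an example of the tail-index machinery mentioned above, the Hill estimator takes only a few lines: it averages the log-spacings of the k largest order statistics. The Pareto test sample and the choices of k below are illustrative.

        import numpy as np

        rng = np.random.default_rng(10)

        def hill(x, k):
            """Hill estimator of the tail index from the k largest
            order statistics of a positive sample x."""
            xs = np.sort(x)[::-1]
            return 1.0 / np.mean(np.log(xs[:k]) - np.log(xs[k]))

        # Pareto sample with true tail index 2.0
        x = 1.0 + rng.pareto(2.0, 10000)
        for k in (50, 200, 1000):
            print(f"k={k:5d}  Hill tail index = {hill(x, k):.2f}")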

  8. The Dynamics of Power laws: Fitness and Aging in Preferential Attachment Trees

    NASA Astrophysics Data System (ADS)

    Garavaglia, Alessandro; van der Hofstad, Remco; Woeginger, Gerhard

    2017-09-01

    Continuous-time branching processes describe the evolution of a population whose individuals generate a random number of children according to a birth process. Such branching processes can be used to understand preferential attachment models in which the birth rates are linear functions. We are motivated by citation networks, where power-law citation counts are observed as well as aging in the citation patterns. To model this, we introduce fitness and age-dependence in these birth processes. The multiplicative fitness moderates the rate at which children are born, while the aging is integrable, so that an individual receives a finite number of children in its lifetime. We show the existence of a limiting degree distribution for such processes. In the preferential attachment case, where fitness and aging are absent, this limiting degree distribution is known to have power-law tails. We show that the limiting degree distribution has exponential tails for bounded fitnesses in the presence of integrable aging, while the power-law tail is restored when integrable aging is combined with fitnesses that have unbounded support and at most exponential tails. In the absence of integrable aging, such processes are explosive.
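
    A small simulation conveys the competition between attachment, fitness, and integrable aging: each new node attaches to an earlier one with weight (degree + 1) × fitness × exp(-age/T). The aging time scale T and the exponential fitness law are assumptions for illustration, not the paper's general conditions.

        import numpy as np

        rng = np.random.default_rng(11)

        n, T = 5000, 200.0
        fitness = rng.exponential(1.0, n)       # multiplicative fitness
        degree = np.zeros(n, dtype=int)
        for t in range(1, n):
            age = t - np.arange(t)              # node i arrived at time i
            w = (degree[:t] + 1) * fitness[:t] * np.exp(-age / T)
            parent = rng.choice(t, p=w / w.sum())
            degree[parent] += 1

        # tail check: integrable aging thins out the high-degree tail
        for k in (1, 5, 20):
            print(f"P(degree > {k:2d}) = {np.mean(degree > k):.4f}")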

  9. Simulation of groundwater flow in the glacial aquifer system of northeastern Wisconsin with variable model complexity

    USGS Publications Warehouse

    Juckem, Paul F.; Clark, Brian R.; Feinstein, Daniel T.

    2017-05-04

    The U.S. Geological Survey, National Water-Quality Assessment seeks to map estimated intrinsic susceptibility of the glacial aquifer system of the conterminous United States. Improved understanding of the hydrogeologic characteristics that explain spatial patterns of intrinsic susceptibility, commonly inferred from estimates of groundwater age distributions, is sought so that methods used for the estimation process are properly equipped. An important step beyond identifying relevant hydrogeologic datasets, such as glacial geology maps, is to evaluate how incorporation of these resources into process-based models using differing levels of detail could affect resulting simulations of groundwater age distributions and, thus, estimates of intrinsic susceptibility. This report describes the construction and calibration of three groundwater-flow models of northeastern Wisconsin that were developed with differing levels of complexity to provide a framework for subsequent evaluations of the effects of process-based model complexity on estimations of groundwater age distributions for withdrawal wells and streams. Preliminary assessments, which focused on the effects of model complexity on simulated water levels and base flows in the glacial aquifer system, illustrate that simulation of vertical gradients using multiple model layers improves simulated heads more in low-permeability units than in high-permeability units. Moreover, simulation of heterogeneous hydraulic conductivity fields in coarse-grained and some fine-grained glacial materials produced a larger improvement in simulated water levels in the glacial aquifer system compared with simulation of uniform hydraulic conductivity within zones. The relation between base flows and model complexity was less clear; however, the relation generally seemed to follow a similar pattern as water levels. Although increased model complexity resulted in improved calibrations, future application of the models using simulated particle tracking is anticipated to evaluate if these model design considerations are similarly important for understanding the primary modeling objective - to simulate reasonable groundwater age distributions.

  10. Development of operating mode distributions for different types of roadways under different congestion levels for vehicle emission assessment using MOVES.

    PubMed

    Qi, Yi; Padiath, Ameena; Zhao, Qun; Yu, Lei

    2016-10-01

    The Motor Vehicle Emission Simulator (MOVES) quantifies emissions as a function of vehicle modal activities. Hence, the vehicle operating mode distribution is the most vital input for running MOVES at the project level. The preparation of operating mode distributions requires significant effort with respect to data collection and processing. This study develops operating mode distributions for both freeway and arterial facilities under different traffic conditions. For this purpose, in this study, we (1) collected and processed geographic information system (GIS) data, (2) developed a model of CO2 emissions and congestion from observations, and (3) implemented the model to evaluate potential emission changes for a hypothetical roadway accident scenario. This study presents a framework by which practitioners can assess emission levels in the development of different strategies for traffic management and congestion mitigation. This paper prepared the primary input, that is, the operating mode ID distribution, required for running MOVES, and developed models for estimating emissions for different types of roadways under different congestion levels. The results of this study will provide transportation planners and environmental analysts with methods for qualitatively assessing the air quality impacts of different transportation operation and demand management strategies.

  11. Autonomous Robot Navigation in Human-Centered Environments Based on 3D Data Fusion

    NASA Astrophysics Data System (ADS)

    Steinhaus, Peter; Strand, Marcus; Dillmann, Rüdiger

    2007-12-01

    Efficient navigation of mobile platforms in dynamic human-centered environments is still an open research topic. We have already proposed an architecture (MEPHISTO) for a navigation system that is able to fulfill the main requirements of efficient navigation: fast and reliable sensor processing, extensive global world modeling, and distributed path planning. Our architecture uses a distributed system of sensor processing, world modeling, and path planning units. In this article, we present implemented methods in the context of data fusion algorithms for 3D world modeling and real-time path planning. We also show results of the prototypic application of the system at the museum ZKM (center for art and media) in Karlsruhe.

  12. The spectral energy distributions of isolated neutron stars in the resonant cyclotron scattering model

    NASA Astrophysics Data System (ADS)

    Tong, Hao; Xu, Renxin

    2013-03-01

    The X-ray dim isolated neutron stars (XDINSs) are peculiar pulsar-like objects, characterized by spectra that are very well described by a Planck function. In studying their spectral energy distributions, the optical/UV excess is a long-standing problem. Recently, Kaplan et al. (2011) measured the optical/UV excess for all seven sources, which is understandable in the resonant cyclotron scattering (RCS) model previously addressed. The RCS model calculations show that the RCS process can account for the observed optical/UV excess for most sources. The flat spectrum of RX J2143.0+0654 may be due to a contribution from bremsstrahlung emission of the electron system in addition to the RCS process.

  13. Lévy-Student distributions for halos in accelerator beams.

    PubMed

    Cufaro Petroni, Nicola; De Martino, Salvatore; De Siena, Silvio; Illuminati, Fabrizio

    2005-12-01

    We describe the transverse beam distribution in particle accelerators within the controlled, stochastic dynamical scheme of stochastic mechanics (SM) which produces time reversal invariant diffusion processes. This leads to a linearized theory summarized in a Schrödinger-like (SL) equation. The space charge effects have been introduced in recent papers by coupling this S-L equation with the Maxwell equations. We analyze the space-charge effects to understand how the dynamics produces the actual beam distributions, and in particular we show how the stationary, self-consistent solutions are related to the (external and space-charge) potentials both when we suppose that the external field is harmonic (constant focusing), and when we a priori prescribe the shape of the stationary solution. We then proceed to discuss a few other ideas by introducing generalized Student distributions, namely, non-Gaussian, Lévy infinitely divisible (but not stable) distributions. We will discuss this idea from two different standpoints: (a) first by supposing that the stationary distribution of our (Wiener powered) SM model is a Student distribution; (b) by supposing that our model is based on a (non-Gaussian) Lévy process whose increments are Student distributed. We show that in the case (a) the longer tails of the power decay of the Student laws and in the case (b) the discontinuities of the Lévy-Student process can well account for the rare escape of particles from the beam core, and hence for the formation of a halo in intense beams.

  15. The Pitman-Yor Process and an Empirical Study of Choice Behavior

    NASA Astrophysics Data System (ADS)

    Hisakado, Masato; Sano, Fumiaki; Mori, Shintaro

    2018-02-01

    This study discusses choice behavior using a voting model in which voters can obtain information from a finite number of previous r voters. Voters vote for a candidate with a probability proportional to the previous vote ratio, which is visible to the voters. We obtain the Pitman sampling formula as the equilibrium distribution of r votes. We present the model as a process of posting on a bulletin board system, 2ch.net, where users can choose one of many threads to create a post. We explore how this choice depends on the last r posts and the distribution of these last r posts across threads. We conclude that the posting process is described by our voting model with analog herders for a small r, which might correspond to the time horizon of users' responses.
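
    The two-parameter (Pitman-Yor) seating process behind the Pitman sampling formula is short to simulate; the sketch below draws cluster sizes with the standard discount/strength rule and reports how concentrated the choices become. The parameter values are arbitrary, and the finite memory r of the voting model is not represented.

        import numpy as np

        rng = np.random.default_rng(12)

        def pitman_yor_counts(n, alpha=0.5, theta=1.0):
            """Seating process: customer m+1 joins table k with probability
            (n_k - alpha)/(m + theta) and opens a new table with probability
            (theta + alpha * K)/(m + theta)."""
            counts = [1]
            for m in range(1, n):
                probs = np.array([c - alpha for c in counts]
                                 + [theta + alpha * len(counts)]) / (m + theta)
                k = rng.choice(len(probs), p=probs)
                if k == len(counts):
                    counts.append(1)
                else:
                    counts[k] += 1
            return np.array(counts)

        counts = pitman_yor_counts(10000)
        print("tables:", len(counts),
              " largest share:", round(counts.max() / 10000, 3))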

  16. Research on the semi-distributed monthly rainfall runoff model at the Lancang River basin based on DEM

    NASA Astrophysics Data System (ADS)

    Liu, Gang; Zhao, Rong; Liu, Jiping; Zhang, Qingpu

    2007-06-01

    The Lancang River Basin is narrow, and its hydrological and meteorological conditions are highly variable. Rainfall, evaporation, glacial meltwater and groundwater all contribute to runoff, and the dominant source of replenishment changes markedly with the season in different areas of the basin. The characteristics of various distributed and conceptual hydrological models are analyzed. A semi-distributed hydrological model relating monthly runoff to rainfall, temperature and soil type was built for Changdu County using Visual Basic and ArcObjects. The model adopts the spatial discretization used by distributed hydrological models while retaining the principles of a conceptual model. The Changdu sub-catchment is divided into regular cells, and the hydrological and meteorological information, together with land-use classes and slopes extracted from 1:250000 digital elevation models, is distributed over the cells. Rather than resolving the hydro-physical rainfall-runoff process, the model uses a conceptual scheme to simulate each cell's contribution to the runoff of the area, while accounting for evapotranspiration losses and groundwater. The spatial distribution characteristics of monthly runoff in the area are simulated and analyzed with only a few parameters.

  17. Some Technical Implications of Distributed Cognition on the Design of Interactive Learning Environments.

    ERIC Educational Resources Information Center

    Dillenbourg, Pierre

    1996-01-01

    Maintains that diagnosis, explanation, and tutoring, the functions of an interactive learning environment, are collaborative processes. Examines how human-computer interaction can be improved using a distributed cognition framework. Discusses situational and distributed knowledge theories and provides a model on how they can be used to redesign…

  18. Modeling post-wildfire hydrological processes with ParFlow

    NASA Astrophysics Data System (ADS)

    Escobar, I. S.; Lopez, S. R.; Kinoshita, A. M.

    2017-12-01

    Wildfires alter the natural processes within a watershed, such as surface runoff, evapotranspiration rates, and subsurface water storage. Post-fire hydrologic models are typically one-dimensional, empirically-based models or two-dimensional, conceptually-based models with lumped parameter distributions. These models are useful for modeling and predictions at the watershed outlet; however, they do not provide detailed, distributed hydrologic processes at the point scale within the watershed. This research uses ParFlow, a three-dimensional, distributed hydrologic model, to simulate post-fire hydrologic processes by representing the spatial and temporal variability of soil burn severity (via hydrophobicity) and vegetation recovery. Using this approach, we are able to evaluate the change in post-fire water components (surface flow, lateral flow, baseflow, and evapotranspiration). This work builds upon previous field and remote sensing analysis conducted for the 2003 Old Fire Burn in Devil Canyon, located in southern California (USA). The model is initially developed for a hillslope with a 500 m by 1000 m lateral extent. The subsurface reaches 12.4 m and is assigned a variable cell thickness to explicitly consider soil burn severity throughout the stages of recovery and vegetation regrowth. We consider four slope and eight hydrophobic layer configurations. Evapotranspiration is used as a proxy for vegetation regrowth and is represented by the satellite-based Simplified Surface Energy Balance (SSEBOP) product. The pre- and post-fire surface runoff, subsurface storage, and surface storage interactions are evaluated at the point scale. Results will be used as a basis for developing and fine-tuning a watershed-scale model. Long-term simulations will advance our understanding of post-fire hydrological partitioning between water balance components and the spatial variability of watershed processes, providing improved guidance for post-fire watershed management. In reference to the presenter, Isabel Escobar: research is funded by the NASA-DIRECT STEM Program. Travel expenses for this presentation are funded by CSU-LSAMP. CSU-LSAMP is supported by the National Science Foundation under Grant # HRD-1302873 and the CSU Office of the Chancellor.

  19. Smart-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Venkat K; Palmintier, Bryan S; Hodge, Brian S

    The National Renewable Energy Laboratory (NREL), in collaboration with Massachusetts Institute of Technology (MIT), Universidad Pontificia Comillas (Comillas-IIT, Spain) and GE Grid Solutions, is working on an ARPA-E GRID DATA project, titled Smart-DS, to create: 1) High-quality, realistic, synthetic distribution network models, and 2) Advanced tools for automated scenario generation based on high-resolution weather data and generation growth projections. Through these advancements, the Smart-DS project is envisioned to accelerate the development, testing, and adoption of advanced algorithms, approaches, and technologies for sustainable and resilient electric power systems, especially in the realm of U.S. distribution systems. This talk will present the goals and overall approach of the Smart-DS project, including the process of creating the synthetic distribution datasets using the reference network model (RNM) and the comprehensive validation process to ensure network realism, feasibility, and applicability to advanced use cases. The talk will provide demonstrations of early versions of synthetic models, along with the lessons learnt from expert engagements to enhance future iterations. Finally, the scenario generation framework, its development plans, and co-ordination with GRID DATA repository teams to house these datasets for public access will also be discussed.

  20. Modeling Psychological Contract Violation using Dual Regime Models: An Event-based Approach

    PubMed Central

    Hofmans, Joeri

    2017-01-01

    A good understanding of the dynamics of psychological contract violation requires theories, research methods and statistical models that explicitly recognize that violation feelings follow from an event that violates one's acceptance limits, after which interpretative processes are set into motion, determining the intensity of these violation feelings. Whereas theories—in the form of the dynamic model of the psychological contract—and research methods—in the form of daily diary research and experience sampling research—are available by now, the statistical tools to model such a two-stage process are still lacking. The aim of the present paper is to fill this gap in the literature by introducing two statistical models—the Zero-Inflated model and the Hurdle model—that closely mimic the theoretical process underlying the elicitation of violation feelings via two model components: a binary distribution that models whether violation has occurred or not, and a count distribution that models how severe the negative impact is. Moreover, covariates can be included for both model components separately, which yields insight into their unique and shared antecedents. By doing this, the present paper offers a methodological-substantive synergy, showing how sophisticated methodology can be used to examine an important substantive issue. PMID:29163316
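
    Since the Zero-Inflated and Hurdle models are the paper's central proposal, a compact numerical illustration may help. The following is a minimal sketch, assuming toy data and illustrative names, of a hurdle likelihood with a logistic occurrence component and a zero-truncated Poisson intensity component; the paper's count component may differ (e.g., a negative binomial).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

# Hurdle model sketch: a logistic component decides whether a violation
# occurred (y > 0); a zero-truncated Poisson models its intensity.
def hurdle_negloglik(params, X, y):
    k = X.shape[1]
    beta_zero, beta_count = params[:k], params[k:]
    p = expit(X @ beta_zero)          # P(violation occurred)
    lam = np.exp(X @ beta_count)      # intensity of violation feelings
    zero = y == 0
    ll = np.sum(np.log1p(-p[zero]))   # contribution of non-events
    yp, lp, pp = y[~zero], lam[~zero], p[~zero]
    # zero-truncated Poisson log-density for the positive counts
    ll += np.sum(np.log(pp) + yp * np.log(lp) - lp
                 - gammaln(yp + 1) - np.log1p(-np.exp(-lp)))
    return -ll

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = rng.poisson(1.2, 500) * rng.binomial(1, 0.4, 500)   # toy data only
res = minimize(hurdle_negloglik, np.zeros(4), args=(X, y), method="BFGS")
print(res.x)   # [hurdle coefficients | count coefficients]
```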

  1. Rain-rate data base development and rain-rate climate analysis

    NASA Technical Reports Server (NTRS)

    Crane, Robert K.

    1993-01-01

    The single-year rain-rate distribution data available within the archives of Consultative Committee for International Radio (CCIR) Study Group 5 were compiled into a data base for use in rain-rate climate modeling and for the preparation of predictions of attenuation statistics. The four-year set of tip-time sequences provided by J. Goldhirsh for locations near Wallops Island was processed to compile monthly and annual distributions of rain rate and of event durations for intervals above and below preset thresholds. A four-year data set of tropical rain-rate tip-time sequences was acquired from the NASA TRMM program for 30 gauges near Darwin, Australia. It was also processed for inclusion in the CCIR data base and the expanded data base for monthly observations at the University of Oklahoma. The empirical rain-rate distributions (EDFs) accepted for inclusion in the CCIR data base were used to estimate parameters for several rain-rate distribution models: the lognormal model, the Crane two-component model, and the three-parameter model proposed by Moupfuma. The intent of this segment of the study is to obtain a limited set of parameters that can be mapped globally for use in rain attenuation predictions. If the form of the distribution can be established, then perhaps available climatological data can be used to estimate the parameters rather than requiring years of rain-rate observations to set them. The two-component model provided the best fit to the Wallops Island data, but the Moupfuma model provided the best fit to the Darwin data.
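
    As an illustration of fitting one of the candidate laws, the sketch below fits a lognormal to synthetic gauge data by maximum likelihood and reads off an exceedance probability; fitting the two-component or Moupfuma models would be an analogous likelihood exercise. Data and names are illustrative, not from the CCIR archive.

```python
import numpy as np
from scipy import stats

# Fit a lognormal to positive rain rates and evaluate the probability of
# exceeding a threshold; the data here are synthetic stand-ins.
rng = np.random.default_rng(1)
rain_mm_per_h = rng.lognormal(mean=1.0, sigma=1.2, size=2000)

shape, loc, scale = stats.lognorm.fit(rain_mm_per_h, floc=0.0)
mu, sigma = np.log(scale), shape          # parameters of log(rate)

threshold = 25.0                          # mm/h
p_exceed = stats.lognorm.sf(threshold, shape, loc=0.0, scale=scale)
print(f"mu={mu:.2f}, sigma={sigma:.2f}, P(R > {threshold}) = {p_exceed:.4f}")
```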

  2. Design and Analysis of AN Static Aeroelastic Experiment

    NASA Astrophysics Data System (ADS)

    Hou, Ying-Yu; Yuan, Kai-Hua; Lv, Ji-Nan; Liu, Zi-Qiang

    2016-06-01

    Static aeroelastic experiments are very common in the United States and Russia. The objective of a static aeroelastic experiment is to investigate the deformation and loads of an elastic structure in a flow field. Generally speaking, the prerequisite for such an experiment is that the stiffness distribution of the structure be known. This paper describes a method for designing experimental models for the case in which the stiffness distribution and boundary conditions of the real aircraft are both uncertain. The stiffness distribution of the structure is obtained via finite element modeling and simulation, and F141 steel and rigid foam are used to build the elastic model. The design and manufacturing process of static aeroelastic models is presented: a model was designed to reproduce the stiffness of the designed wings, and a set of experiments was designed to check the results. The test results show that the experimental method can effectively accomplish the design of an elastic model. This paper introduces the whole process of the static aeroelastic experiment and analyzes the experimental results, thereby developing a static aeroelasticity experiment technique and establishing an experiment model targeting the swept wing of a certain kind of large-aspect-ratio aircraft.

  3. Fitting statistical distributions to sea duck count data: implications for survey design and abundance estimation

    USGS Publications Warehouse

    Zipkin, Elise F.; Leirness, Jeffery B.; Kinlan, Brian P.; O'Connell, Allan F.; Silverman, Emily D.

    2014-01-01

    Determining appropriate statistical distributions for modeling animal count data is important for accurate estimation of abundance, distribution, and trends. In the case of sea ducks along the U.S. Atlantic coast, managers want to estimate local and regional abundance to detect and track population declines, to define areas of high and low use, and to predict the impact of future habitat change on populations. In this paper, we used a modified marked point process to model survey data that recorded flock sizes of Common eiders, Long-tailed ducks, and Black, Surf, and White-winged scoters. The data come from an experimental aerial survey, conducted by the United States Fish & Wildlife Service (USFWS) Division of Migratory Bird Management, during which east-west transects were flown along the Atlantic Coast from Maine to Florida during the winters of 2009–2011. To model the number of flocks per transect (the points), we compared the fit of four statistical distributions (zero-inflated Poisson, zero-inflated geometric, zero-inflated negative binomial and negative binomial) to data on the number of species-specific sea duck flocks that were recorded for each transect flown. To model the flock sizes (the marks), we compared the fit of flock size data for each species to seven statistical distributions: positive Poisson, positive negative binomial, positive geometric, logarithmic, discretized lognormal, zeta and Yule–Simon. Akaike’s Information Criterion and Vuong’s closeness tests indicated that the negative binomial and discretized lognormal were the best distributions for all species for the points and marks, respectively. These findings have important implications for estimating sea duck abundances as the discretized lognormal is a more skewed distribution than the Poisson and negative binomial, which are frequently used to model avian counts; the lognormal is also less heavy-tailed than the power law distributions (e.g., zeta and Yule–Simon), which are becoming increasingly popular for group size modeling. Choosing appropriate statistical distributions for modeling flock size data is fundamental to accurately estimating population summaries, determining required survey effort, and assessing and propagating uncertainty through decision-making processes.
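
    The model-comparison step can be illustrated compactly. Below is a hedged sketch, on synthetic counts, of scoring two of the candidate flock-size distributions (a positive Poisson and a discretized lognormal) by AIC; the discretized-lognormal parameters use a simple moment-style estimate rather than full maximum likelihood, and the paper's full candidate set is not reproduced.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def aic(loglik, n_params):
    return 2 * n_params - 2 * loglik

# Synthetic flock sizes (always >= 1), standing in for survey data.
counts = np.random.default_rng(2).negative_binomial(1.5, 0.2, size=400) + 1

# Candidate 1: positive (zero-truncated) Poisson, one parameter.
def nll_pospois(lam):
    return -np.sum(stats.poisson.logpmf(counts, lam)
                   - np.log1p(-np.exp(-lam)))
lam_hat = minimize_scalar(nll_pospois, bounds=(1e-3, 100), method="bounded").x

# Candidate 2: discretized lognormal, two parameters.
# P(X = k) = Phi((log(k+1)-mu)/sigma) - Phi((log k - mu)/sigma);
# mu, sigma here are moment-style estimates from log-counts.
mu, sigma = np.log(counts).mean(), np.log(counts).std()
ll_dln = np.sum(np.log(stats.norm.cdf(np.log(counts + 1), mu, sigma)
                       - stats.norm.cdf(np.log(counts), mu, sigma)))

print("AIC positive Poisson:     ", aic(-nll_pospois(lam_hat), 1))
print("AIC discretized lognormal:", aic(ll_dln, 2))
```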

  4. Parallelization of a hydrological model using the message passing interface

    USGS Publications Warehouse

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

    With the increasing knowledge about natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex, with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further hinder rapid modeling and analysis. Using the widely-applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology—Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%–70% (or a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme (between the master and slave processes). Although the computation time cost becomes lower with an increasing number of processes (from two to five), this enhancement diminishes due to the accompanying increase in demand for message-passing procedures between the master and all slave processes. Our case study demonstrates that P-SWAT with a five-process run may reach the maximum speedup, and its performance can be quite stable (fairly independent of project size). Overall, P-SWAT can help substantially reduce the computation time for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
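
    A minimal flavor of the master/slave decomposition can be sketched with mpi4py rather than the authors' Windows MPI setup; `subbasin_job` is a hypothetical stand-in for a SWAT subbasin computation, and the static round-robin split simplifies the paper's tuned work-percentage scheme.

```python
# Run with e.g. `mpiexec -n 5 python pswat_sketch.py`.
from mpi4py import MPI

def subbasin_job(i):
    return i * i   # placeholder for simulating one subbasin

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

tasks = list(range(100))           # subbasins to simulate
my_tasks = tasks[rank::size]       # static round-robin distribution
my_results = [subbasin_job(t) for t in my_tasks]

# The master (rank 0) gathers partial results from every process.
all_results = comm.gather(my_results, root=0)
if rank == 0:
    flat = [r for part in all_results for r in part]
    print(f"{len(flat)} subbasin runs completed on {size} processes")
```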

  5. Development of a web service for analysis in a distributed network.

    PubMed

    Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila

    2014-01-01

    We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. We recently developed and published a web service for model construction and data analysis in a distributed environment. This recent paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. We followed a two-stage development approach by first implementing the backbone system and incrementally improving the user experience through interactions with potential users during the development. Our system went through various stages such as concept proof, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support the cross platform deployment. During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We figured out solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes.

  6. Development of a Web Service for Analysis in a Distributed Network

    PubMed Central

    Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila

    2014-01-01

    Objective: We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. Background: We recently developed and published a web service for model construction and data analysis in a distributed environment. This recent paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. Methods: We followed a two-stage development approach by first implementing the backbone system and incrementally improving the user experience through interactions with potential users during the development. Our system went through various stages such as concept proof, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support the cross platform deployment. Discussion: During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We figured out solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Conclusion: Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes. PMID:25848586
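
    The aggregation idea behind GLORE can be illustrated in a few lines. The sketch below is a minimal, hypothetical rendition in Python, not the published web service: each site computes only the gradient and Hessian of its local logistic log-likelihood, and a coordinator sums them for a Newton update, so no patient-level rows leave a site. The names `site_contribution` and `federated_newton` are illustrative.

```python
import numpy as np

def site_contribution(X, y, beta):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                      # local gradient
    hess = -(X.T * (p * (1 - p))) @ X         # local Hessian
    return grad, hess

def federated_newton(sites, n_features, iters=25):
    beta = np.zeros(n_features)
    for _ in range(iters):
        g = np.zeros(n_features)
        H = np.zeros((n_features, n_features))
        for X, y in sites:                    # each tuple lives at one site
            gi, Hi = site_contribution(X, y, beta)
            g, H = g + gi, H + Hi
        beta = beta - np.linalg.solve(H, g)   # aggregated Newton step
    return beta

rng = np.random.default_rng(3)
sites = [(rng.normal(size=(200, 3)), rng.integers(0, 2, 200)) for _ in range(4)]
print(federated_newton(sites, 3))
```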

  7. A probability distribution model of tooth pits for evaluating time-varying mesh stiffness of pitting gears

    NASA Astrophysics Data System (ADS)

    Lei, Yaguo; Liu, Zongyao; Wang, Delong; Yang, Xiao; Liu, Huan; Lin, Jing

    2018-06-01

    Tooth damage often causes a reduction in gear mesh stiffness. Thus, time-varying mesh stiffness (TVMS) can be treated as an indicator of gear health condition. This study investigates the mesh stiffness variations of a pair of external spur gears with tooth pitting, and proposes a new model for describing tooth pitting based on probability distribution. In the model, considering the appearance and development process of tooth pitting, we model the pitting on the surface of spur gear teeth as a series of pits distributed uniformly in the direction of tooth width and normally in the direction of tooth height. In addition, four pitting degrees, from no pitting to severe pitting, are modeled. Finally, the influences of tooth pitting on TVMS are analyzed in detail, and the proposed model is validated by comparison with a finite element model. The comparison results show that the proposed model is effective for TVMS evaluation of pitting gears.
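
    The pit-placement rule translates directly into a few lines of code. Below is a hedged sketch with illustrative tooth dimensions and pit counts; centering the normal distribution at mid-height is an assumption, not a detail taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_pits(n_pits, width_mm=20.0, height_mm=8.0, sigma_frac=0.15):
    x = rng.uniform(0.0, width_mm, n_pits)               # uniform across width
    y = rng.normal(0.5 * height_mm, sigma_frac * height_mm, n_pits)
    y = np.clip(y, 0.0, height_mm)                       # keep pits on the flank
    return np.column_stack([x, y])

# Pit counts per severity level are illustrative only.
severity_levels = {"none": 0, "slight": 20, "moderate": 80, "severe": 200}
pits = {name: sample_pits(n) for name, n in severity_levels.items()}
print({k: v.shape for k, v in pits.items()})
```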

  8. Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution

    NASA Astrophysics Data System (ADS)

    Baldacchino, Tara; Worden, Keith; Rowson, Jennifer

    2017-02-01

    A novel variational Bayesian mixture of experts model for robust regression of bifurcating and piece-wise continuous processes is introduced. The mixture of experts model is a powerful model which probabilistically splits the input space allowing different models to operate in the separate regions. However, current methods have no fail-safe against outliers. In this paper, a robust mixture of experts model is proposed which consists of Student-t mixture models at the gates and Student-t distributed experts, trained via Bayesian inference. The Student-t distribution has heavier tails than the Gaussian distribution, and so it is more robust to outliers, noise and non-normality in the data. Using both simulated data and real data obtained from the Z24 bridge this robust mixture of experts performs better than its Gaussian counterpart when outliers are present. In particular, it provides robustness to outliers in two forms: unbiased parameter regression models, and robustness to overfitting/complex models.
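
    The robustness mechanism, heavier t tails down-weighting outliers, can be seen even without the full variational mixture-of-experts machinery. The sketch below fits a single Student-t regression by maximum likelihood and compares its slope with ordinary least squares on outlier-contaminated data; it is a deliberate simplification of the paper's model, with all names and data synthetic.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 120)
y = 2.0 + 3.0 * x + rng.normal(0, 0.2, x.size)
y[::15] += 4.0                                  # inject gross outliers

def t_negloglik(params):
    a, b, log_s, log_nu = params
    resid = (y - a - b * x) / np.exp(log_s)
    # log-likelihood of scaled residuals under a Student-t density
    return -np.sum(stats.t.logpdf(resid, df=np.exp(log_nu)) - log_s)

fit = minimize(t_negloglik, [0.0, 0.0, 0.0, 1.0], method="Nelder-Mead")
a_t, b_t = fit.x[:2]
b_ls, a_ls = np.polyfit(x, y, 1)
print(f"t-regression slope {b_t:.2f} vs least-squares slope {b_ls:.2f}")
```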

  9. Fragment size distribution statistics in dynamic fragmentation of laser shock-loaded tin

    NASA Astrophysics Data System (ADS)

    He, Weihua; Xin, Jianting; Zhao, Yongqiang; Chu, Genbai; Xi, Tao; Shui, Min; Lu, Feng; Gu, Yuqiu

    2017-06-01

    This work investigates a geometric statistics method to characterize the size distribution of tin fragments produced in the laser shock-loaded dynamic fragmentation process. In the shock experiments, the ejecta of the tin sample, whose free surface is etched with a V-shaped groove, are collected by the soft-recovery technique. Subsequently, the produced fragments are automatically detected with fine post-shot analysis techniques, including X-ray micro-tomography and an improved watershed method. To characterize the size distributions of the fragments, a theoretical random geometric statistics model based on Poisson mixtures is derived for the dynamic heterogeneous fragmentation problem, which yields a linear combination of exponential distributions. The experimental data on fragment size distributions of the laser shock-loaded tin sample are examined with the proposed theoretical model, and its fitting performance is compared with that of other state-of-the-art fragment size distribution models. The comparison results show that the proposed model provides a far more reasonable fit for the laser shock-loaded tin.

  10. Momentum distributions for ²H(e,e′p)

    DOE PAGES

    Ford, William P.; Jeschonnek, Sabine; Van Orden, J. W.

    2014-12-29

    [Background] A primary goal of deuteron electrodisintegration is the possibility of extracting the deuteron momentum distribution. This extraction is inherently fraught with difficulty, as the momentum distribution is not an observable and the extraction relies on theoretical models dependent on other models as input. [Purpose] We present a new method for extracting the momentum distribution which takes into account a wide variety of model inputs, thus providing a theoretical uncertainty due to the various model constituents. [Method] The calculations presented here use a Bethe-Salpeter-like formalism with a wide variety of bound-state wave functions, form factors, and final-state interactions. We present a method to extract the momentum distributions from experimental cross sections which takes into account the theoretical uncertainty from the various model constituents entering the calculation. [Results] In order to test the extraction, pseudo-data were generated, and the extracted "experimental" distribution, which has theoretical uncertainty from the various model inputs, was compared with the theoretical distribution used to generate the pseudo-data. [Conclusions] In the examples we compared, the original distribution was typically within the error band of the extracted distribution. The input wave functions do contain some outliers, which are discussed in the text, but at least this process can provide an upper bound on the deuteron momentum distribution. Due to the reliance on the theoretical calculation to obtain this quantity, any extraction method should account for the theoretical error inherent in these calculations due to model inputs.

  11. Post-processing of multi-hydrologic model simulations for improved streamflow projections

    NASA Astrophysics Data System (ADS)

    Khajehei, Sepideh; Ahmadalipour, Ali; Moradkhani, Hamid

    2016-04-01

    Hydrologic model outputs are prone to bias and uncertainty due to knowledge deficiencies in models and data. Uncertainty in hydroclimatic projections arises from uncertainty in the hydrologic model as well as the epistemic or aleatory uncertainties in GCM parameterization and development. This study is conducted to: 1) evaluate the recently developed multivariate post-processing method for historical simulations, and 2) assess the effect of post-processing on the uncertainty and reliability of future streamflow projections in both high-flow and low-flow conditions. The first objective is performed for the historical period of 1970-1999. Future streamflow projections are generated for 10 statistically downscaled GCMs from two widely used downscaling methods: Bias Corrected Statistically Downscaled (BCSD) and Multivariate Adaptive Constructed Analogs (MACA), over the period of 2010-2099 for two representative concentration pathways, RCP4.5 and RCP8.5. Three semi-distributed hydrologic models were employed and calibrated at 1/16 degree latitude-longitude resolution for over 100 points across the Columbia River Basin (CRB) in the Pacific Northwest, USA. Streamflow outputs are post-processed through a Bayesian framework based on copula functions. The post-processing approach relies on a transfer function developed from the bivariate joint distribution between observations and simulations in the historical period. Results show that application of the post-processing technique leads to considerably higher accuracy in historical simulations and also reduces model uncertainty in future streamflow projections.

  12. Fast simulation of reconstructed phylogenies under global time-dependent birth-death processes.

    PubMed

    Höhna, Sebastian

    2013-06-01

    Diversification rates and patterns may be inferred from reconstructed phylogenies. Both the time-dependent and the diversity-dependent birth-death process can produce the same observed patterns of diversity over time. To develop and test new models describing the macro-evolutionary process of diversification, generic and fast algorithms to simulate under these models are necessary. Simulations are not only important for testing and developing models but play an influential role in the assessment of model fit. In the present article, I consider as the model a global time-dependent birth-death process where each species has the same rates but rates may vary over time. For this model, I derive the likelihood of the speciation times from a reconstructed phylogenetic tree and show that each speciation event is independent and identically distributed. This fact can be used to simulate efficiently reconstructed phylogenetic trees when conditioning on the number of species, the time of the process or both. I show the usability of the simulation by approximating the posterior predictive distribution of a birth-death process with decreasing diversification rates applied on a published bird phylogeny (family Cettiidae). The methods described in this manuscript are implemented in the R package TESS, available from the repository CRAN (http://cran.r-project.org/web/packages/TESS/). Supplementary data are available at Bioinformatics online.

  13. An Extension of a Parallel-Distributed Processing Framework of Reading Aloud in Japanese: Human Nonword Reading Accuracy Does Not Require a Sequential Mechanism

    ERIC Educational Resources Information Center

    Ikeda, Kenji; Ueno, Taiji; Ito, Yuichi; Kitagami, Shinji; Kawaguchi, Jun

    2017-01-01

    Humans can pronounce a nonword (e.g., rint). Some researchers have interpreted this behavior as requiring a sequential mechanism by which a grapheme-phoneme correspondence rule is applied to each grapheme in turn. However, several parallel-distributed processing (PDP) models in English have simulated human nonword reading accuracy without a…

  14. Microphysical Processes Affecting the Pinatubo Volcanic Plume

    NASA Technical Reports Server (NTRS)

    Hamill, Patrick; Houben, Howard; Young, Richard; Turco, Richard; Zhao, Jingxia

    1996-01-01

    In this paper we consider microphysical processes which affect the formation of sulfate particles and their size distribution in a dispersing cloud. A model for the dispersion of the Mt. Pinatubo volcanic cloud is described. We then consider a single point in the dispersing cloud and study the effects of nucleation, condensation and coagulation on the time evolution of the particle size distribution at that point.

  15. Evidence of Long Range Dependence and Self-similarity in Urban Traffic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thakur, Gautam S; Helmy, Ahmed; Hui, Pan

    2015-01-01

    Transportation simulation technologies should accurately model traffic demand, distribution, and assignment parameters for urban environment simulation. These three parameters significantly impact the transportation engineering benchmark process and are critical in realizing realistic traffic modeling situations. In this paper, we model and characterize the traffic density distribution of thousands of locations around the world. The traffic densities are generated from millions of images collected over several years and processed using computer vision techniques. The resulting traffic density distribution time series are then analyzed. Using goodness-of-fit tests, it is found that the traffic density distributions follow heavy-tail models such as Log-gamma, Log-logistic, and Weibull in over 90% of analyzed locations. Moreover, a heavy tail gives rise to long-range dependence and self-similarity, which we studied by estimating the Hurst exponent (H). Our analysis, based on seven different Hurst estimators, strongly indicates that the traffic distribution patterns are stochastically self-similar (0.5 < H < 1.0). We believe this is an important finding that will influence the design and development of the next generation of traffic simulation techniques and also aid in accurately modeling the traffic engineering of urban systems. In addition, it shall provide a much-needed input for the development of smart cities.
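
    One of the classical Hurst estimators alluded to above is rescaled-range (R/S) analysis. Below is a minimal sketch on toy data; the paper used seven different estimators, and this shows only the R/S variant.

```python
import numpy as np

def rs_hurst(series, min_chunk=8):
    """Estimate the Hurst exponent by classical R/S analysis."""
    series = np.asarray(series, dtype=float)
    n = series.size
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        chunks = series[: n - n % size].reshape(-1, size)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        R = dev.max(axis=1) - dev.min(axis=1)      # range of cumulative deviation
        S = chunks.std(axis=1)                     # chunk standard deviation
        ok = S > 0
        rs_vals.append(np.mean(R[ok] / S[ok]))
        sizes.append(size)
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope                                   # log-log slope = H estimate

rng = np.random.default_rng(6)
print(rs_hurst(rng.normal(size=4096)))   # white noise: expect H near 0.5
```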

  16. Study of temperature distributions in wafer exposure process

    NASA Astrophysics Data System (ADS)

    Lin, Zone-Ching; Wu, Wen-Jang

    During the exposure process of photolithography, the wafer absorbs the exposure energy, which raises its temperature and causes thermal expansion. This phenomenon was often neglected in previous process generations because of its limited effect; in the new generation of processes, however, it may very likely become a factor to be considered. In this paper, a finite element model for analyzing the transient distribution of wafer temperature during exposure is established, under the assumption that the wafer is clamped by a vacuum chuck without warpage. The model is capable of simulating the distribution of wafer temperature under different exposure conditions. The analysis begins with the simulation of the transient behavior of a single exposure region and then examines the response to variations in exposure energy, the interval between exposure locations, and the interval of exposure time under continuous exposure. The simulation results indicate that widening the interval between exposure locations improves the distribution of wafer temperature more than extending the exposure time interval between neighboring image fields. Moreover, as long as the distance between the field centers of two neighboring exposure regions exceeds a straight-line distance of three image-field widths, the interacting thermal effect during wafer exposure can be ignored. The analysis flow proposed in this paper can serve as a supporting reference tool for engineers planning exposure paths.

  17. Phylogenetic analyses suggest that diversification and body size evolution are independent in insects.

    PubMed

    Rainford, James L; Hofreiter, Michael; Mayhew, Peter J

    2016-01-08

    Skewed body size distributions and the high relative richness of small-bodied taxa are a fundamental property of a wide range of animal clades. The evolutionary processes responsible for generating these distributions are well described in vertebrate model systems but have yet to be explored in detail for other major terrestrial clades. In this study, we explore the macro-evolutionary patterns of body size variation across families of Hexapoda (insects and their close relatives), using recent advances in phylogenetic understanding, with the aim of investigating the link between size and diversity within this ancient and highly diverse lineage. The maximum, minimum and mean-log body lengths of hexapod families are all approximately log-normally distributed, consistent with previous studies at lower taxonomic levels, and contrasting with the skewed distributions typical of vertebrate groups. After taking phylogeny and within-tip variation into account, we find no evidence for a negative relationship between diversification rate and body size, suggesting decoupling of the forces controlling these two traits. Likelihood-based modeling of the log-mean body size identifies distinct processes operating within Holometabola and Diptera compared with other hexapod groups, consistent with accelerating rates of size evolution within these clades, while as a whole, hexapod body size evolution is found to be dominated by neutral processes including significant phylogenetic conservatism. Based on our findings, we suggest that the use of models derived from well-studied but atypical clades, such as vertebrates, may lead to misleading conclusions when applied to other major terrestrial lineages. Our results indicate that within hexapods, and within the limits of current systematic and phylogenetic knowledge, insect diversification is generally unfettered by size-biased macro-evolutionary processes, and that these processes over large timescales tend to converge on apparently neutral evolutionary processes. We also identify limitations on the available data within the clade and on modeling approaches for the resolution of trees of higher taxa, which may collectively enhance our understanding of this key component of terrestrial ecosystems.

  18. Simulating the decentralized processes of the human immune system in a virtual anatomy model.

    PubMed

    Sarpe, Vladimir; Jacob, Christian

    2013-01-01

    Many physiological processes within the human body can be perceived and modeled as large systems of interacting particles or swarming agents. The complex processes of the human immune system prove to be challenging to capture and illustrate without proper reference to the spatial distribution of immune-related organs and systems. Our work focuses on physical aspects of immune system processes, which we implement through swarms of agents. This is our first prototype for integrating different immune processes into one comprehensive virtual physiology simulation. Using agent-based methodology and a 3-dimensional modeling and visualization environment (LINDSAY Composer), we present an agent-based simulation of the decentralized processes in the human immune system. The agents in our model - such as immune cells, viruses and cytokines - interact through simulated physics in two different, compartmentalized and decentralized 3-dimensional environments namely, (1) within the tissue and (2) inside a lymph node. While the two environments are separated and perform their computations asynchronously, an abstract form of communication is allowed in order to replicate the exchange, transportation and interaction of immune system agents between these sites. The distribution of simulated processes, that can communicate across multiple, local CPUs or through a network of machines, provides a starting point to build decentralized systems that replicate larger-scale processes within the human body, thus creating integrated simulations with other physiological systems, such as the circulatory, endocrine, or nervous system. Ultimately, this system integration across scales is our goal for the LINDSAY Virtual Human project. Our current immune system simulations extend our previous work on agent-based simulations by introducing advanced visualizations within the context of a virtual human anatomy model. We also demonstrate how to distribute a collection of connected simulations over a network of computers. As a future endeavour, we plan to use parameter tuning techniques on our model to further enhance its biological credibility. We consider these in silico experiments and their associated modeling and optimization techniques as essential components in further enhancing our capabilities of simulating a whole-body, decentralized immune system, to be used both for medical education and research as well as for virtual studies in immunoinformatics.

  19. On the Use of the Beta Distribution in Probabilistic Resource Assessments

    USGS Publications Warehouse

    Olea, R.A.

    2011-01-01

    The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval offering wider flexibility in style of variation, thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generation of values following a beta distribution is as straightforward as generating values following a triangular distribution, leaving the selection of parameters as the main impediment to practically considering beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional parameters, use of the beta distributions in stochastic modeling may yield significantly different results, yet better estimates, than the triangular distribution. © 2011 International Association for Mathematical Geology (outside the USA).
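
    One convenient way to make the two shape parameters as easy to specify as a triangle's mode is to parameterize the beta by a mode and a concentration. This is a hedged illustration of the general idea, not the paper's exact recipe; `beta_from_mode` is an illustrative name.

```python
import numpy as np
from scipy import stats

def beta_from_mode(mode, concentration):
    """Shape parameters for a Beta on (0, 1) with the given mode;
    concentration > 2, and larger values give a tighter distribution."""
    alpha = 1.0 + mode * (concentration - 2.0)
    beta = 1.0 + (1.0 - mode) * (concentration - 2.0)
    return alpha, beta

a, b = beta_from_mode(mode=0.3, concentration=8.0)
samples = stats.beta.rvs(a, b, size=10_000, random_state=7)
print(a, b, samples.mean())   # mode (a-1)/(a+b-2) recovers 0.3
```

    For a variable bounded on an arbitrary interval [L, U], one would sample on (0, 1) and rescale by L + (U - L) * x.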

  20. Mechanistic modeling of pesticide exposure: The missing keystone of honey bee toxicology.

    PubMed

    Sponsler, Douglas B; Johnson, Reed M

    2017-04-01

    The role of pesticides in recent honey bee losses is controversial, partly because field studies often fail to detect effects predicted by laboratory studies. This dissonance highlights a critical gap in the field of honey bee toxicology: there exists little mechanistic understanding of the patterns and processes of exposure that link honey bees to pesticides in their environment. The authors submit that 2 key processes underlie honey bee pesticide exposure: 1) the acquisition of pesticide by foraging bees, and 2) the in-hive distribution of pesticide returned by foragers. The acquisition of pesticide by foraging bees must be understood as the spatiotemporal intersection between environmental contamination and honey bee foraging activity. This implies that exposure is distributional, not discrete, and that a subset of foragers may acquire harmful doses of pesticide while the mean colony exposure would appear safe. The in-hive distribution of pesticide is a complex process driven principally by food transfer interactions between colony members, and this process differs importantly between pollen and nectar. High priority should be placed on applying the extensive literature on honey bee biology to the development of more rigorously mechanistic models of honey bee pesticide exposure. In combination with mechanistic effects modeling, mechanistic exposure modeling has the potential to integrate the field of honey bee toxicology, advancing both risk assessment and basic research. Environ Toxicol Chem 2017;36:871-881. © 2016 SETAC.

  1. Numerical simulation of the alloying process during impulse induction heating of the metal substrate

    NASA Astrophysics Data System (ADS)

    Popov, V. N.

    2017-10-01

    2D numerical modeling of the processes occurring during alloying of the surface metal layer of a substrate is carried out. Heating, phase transition, heat and mass transfer in the molten metal, and solidification of the melt are considered with the aid of the proposed mathematical model. The applicability of high-frequency electromagnetic field impulses for metal heating and melting is studied. The distribution of electromagnetic energy in the metal is described by empirical formulas. Based on the results of numerical experiments, the flow structure in the melt and the distribution of the alloying substances are evaluated.

  2. Barista: A Framework for Concurrent Speech Processing by USC-SAIL

    PubMed Central

    Can, Doğan; Gibson, James; Vaz, Colin; Georgiou, Panayiotis G.; Narayanan, Shrikanth S.

    2016-01-01

    We present Barista, an open-source framework for concurrent speech processing based on the Kaldi speech recognition toolkit and the libcppa actor library. With Barista, we aim to provide an easy-to-use, extensible framework for constructing highly customizable concurrent (and/or distributed) networks for a variety of speech processing tasks. Each Barista network specifies a flow of data between simple actors, concurrent entities communicating by message passing, modeled after Kaldi tools. Leveraging the fast and reliable concurrency and distribution mechanisms provided by libcppa, Barista lets demanding speech processing tasks, such as real-time speech recognizers and complex training workflows, to be scheduled and executed on parallel (and/or distributed) hardware. Barista is released under the Apache License v2.0. PMID:27610047

  3. Method for distributed agent-based non-expert simulation of manufacturing process behavior

    DOEpatents

    Ivezic, Nenad; Potok, Thomas E.

    2004-11-30

    A method for distributed agent-based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
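
    The claimed message loop maps naturally onto a small object model. Below is a hedged sketch with hypothetical class and message names; only the three event types named in the claim are taken from the patent text.

```python
from collections import deque

class ProcessAgent:
    """An agent standing in for one manufacturing process."""
    def __init__(self, name):
        self.name, self.inventory, self.output = name, 0, 0

    def handle(self, event):
        if event == "clock_tick":
            pass                                # advance internal state
        elif event == "resources_received":
            self.inventory += 1
        elif event == "request_for_output" and self.inventory > 0:
            self.inventory -= 1
            self.output += 1

agents = [ProcessAgent(f"process_{i}") for i in range(3)]
queue = deque(["resources_received", "clock_tick", "request_for_output"] * 5)
while queue:                                    # the message loop
    event = queue.popleft()
    for agent in agents:                        # broadcast to every agent
        agent.handle(event)
print([(a.name, a.output) for a in agents])
```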

  4. Barista: A Framework for Concurrent Speech Processing by USC-SAIL.

    PubMed

    Can, Doğan; Gibson, James; Vaz, Colin; Georgiou, Panayiotis G; Narayanan, Shrikanth S

    2014-05-01

    We present Barista, an open-source framework for concurrent speech processing based on the Kaldi speech recognition toolkit and the libcppa actor library. With Barista, we aim to provide an easy-to-use, extensible framework for constructing highly customizable concurrent (and/or distributed) networks for a variety of speech processing tasks. Each Barista network specifies a flow of data between simple actors, concurrent entities communicating by message passing, modeled after Kaldi tools. Leveraging the fast and reliable concurrency and distribution mechanisms provided by libcppa, Barista lets demanding speech processing tasks, such as real-time speech recognizers and complex training workflows, to be scheduled and executed on parallel (and/or distributed) hardware. Barista is released under the Apache License v2.0.

  5. A physically based catchment partitioning method for hydrological analysis

    NASA Astrophysics Data System (ADS)

    Menduni, Giovanni; Riboni, Vittoria

    2000-07-01

    We propose a partitioning method for the topographic surface, which is particularly suitable for hydrological distributed modelling and shallow-landslide distributed modelling. The model provides variable mesh size and appears to be a natural evolution of contour-based digital terrain models. The proposed method allows the drainage network to be derived from the contour lines. The single channels are calculated via a search for the steepest downslope lines. Then, for each network node, the contributing area is determined by means of a search for both steepest upslope and downslope lines. This leads to the basin being partitioned into physically based finite elements delimited by irregular polygons. In particular, the distributed computation of local geomorphological parameters (i.e. aspect, average slope and elevation, main stream length, concentration time, etc.) can be performed easily for each single element. The contributing area system, together with the information on the distribution of geomorphological parameters provide a useful tool for distributed hydrological modelling and simulation of environmental processes such as erosion, sediment transport and shallow landslides.

  6. Seasonal evolution of the Arctic marginal ice zone and its power-law obeying floe size distribution

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Stern, H. L., III; Schweiger, A. J. B.; Steele, M.; Hwang, P. B.

    2017-12-01

    A thickness, floe size, and enthalpy distribution (TFED) sea ice model, implemented numerically into the Pan-arctic Ice-Ocean Modeling and Assimilation System (PIOMAS), is used to investigate the seasonal evolution of the Arctic marginal ice zone (MIZ) and its floe size distribution. The TFED sea ice model, by coupling the Zhang et al. [2015] sea ice floe size distribution (FSD) theory with the Thorndike et al. [1975] ice thickness distribution (ITD) theory, simulates 12-category FSD and ITD explicitly and jointly. A range of ice thickness and floe size observations were used for model calibration and validation. The model creates FSDs that generally obey a power law or upper truncated power law, as observed by satellites and aerial surveys. In this study, we will examine the role of ice fragmentation and lateral melting in altering FSDs in the Arctic MIZ. We will also investigate how changes in FSD impact the seasonal evolution of the MIZ by modifying the thermodynamic processes.

  7. Microbial production of polyhydroxybutyrate with tailor-made properties: an integrated modelling approach and experimental validation.

    PubMed

    Penloglou, Giannis; Chatzidoukas, Christos; Kiparissides, Costas

    2012-01-01

    The microbial production of polyhydroxybutyrate (PHB) is a complex process in which the final quantity and quality of the PHB depend on a large number of process operating variables. Consequently, the design and optimal dynamic operation of a microbial process for the efficient production of PHB with tailor-made molecular properties is an extremely interesting problem. The present study investigates how key process operating variables (i.e., nutritional and aeration conditions) affect the biomass production rate and the PHB accumulation in the cells and its associated molecular weight distribution. A combined metabolic/polymerization/macroscopic modelling approach, relating the process performance and product quality with the process variables, was developed and validated using an extensive series of experiments and measurements. The model predicts the dynamic evolution of the biomass growth, the polymer accumulation, the consumption of carbon and nitrogen sources and the average molecular weights of the PHB in a bioreactor, under batch and fed-batch operating conditions. The proposed integrated model was used for the model-based optimization of the production of PHB with tailor-made molecular properties in Azohydromonas lata bacteria. The process optimization led to a high intracellular PHB accumulation (up to 95% g of PHB per g of DCW) and the production of different grades (i.e., different molecular weight distributions) of PHB. Copyright © 2011 Elsevier Inc. All rights reserved.

  8. Species abundance distributions in neutral models with immigration or mutation and general lifetimes.

    PubMed

    Lambert, Amaury

    2011-07-01

    We consider a general, neutral, dynamical model of biodiversity. Individuals have i.i.d. lifetime durations, which are not necessarily exponentially distributed, and each individual gives birth independently at constant rate λ. Thus, the population size is a homogeneous, binary Crump-Mode-Jagers process (which is not necessarily a Markov process). We assume that types are clonally inherited. We consider two classes of speciation models in this setting. In the immigration model, new individuals of an entirely new species singly enter the population at constant rate μ (e.g., from the mainland into the island). In the mutation model, each individual independently experiences point mutations in its germ line, at constant rate θ. We are interested in the species abundance distribution, i.e., in the numbers, denoted I_n(k) in the immigration model and A_n(k) in the mutation model, of species represented by k individuals, k = 1, 2, ..., n, when there are n individuals in the total population. In the immigration model, we prove that the numbers (I_t(k); k ≥ 1) of species represented by k individuals at time t are independent Poisson variables with parameters as in Fisher's log-series. When conditioning on the total size of the population to equal n, this results in species abundance distributions given by Ewens' sampling formula. In particular, I_n(k) converges as n → ∞ to a Poisson r.v. with mean γ/k, where γ := μ/λ. In the mutation model, as n → ∞, we obtain the almost sure convergence of n^{-1} A_n(k) to a nonrandom explicit constant. In the case of a critical, linear birth-death process, this constant is given by Fisher's log-series; namely, n^{-1} A_n(k) converges to α^k/k, where α := λ/(λ + θ). In both models, the abundances of the most abundant species are briefly discussed.
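
    The immigration-model result lends itself to a quick simulation check. Below is a hedged Gillespie-style sketch of the special case with exponential lifetimes and critical death rate, comparing observed species counts against the Poisson mean γ/k; the rates and time horizon are arbitrary.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(9)
lam, mu, T = 1.0, 2.0, 50.0          # birth = death rate, immigration rate, horizon

t, next_species = 0.0, 0
pop = []                              # species label of each living individual
while True:
    n = len(pop)
    total = 2 * lam * n + mu          # birth + death + immigration rates
    t += rng.exponential(1.0 / total)
    if t > T:
        break
    u = rng.uniform(0.0, total)
    if u < lam * n:                   # birth: copy a uniformly chosen parent
        pop.append(pop[rng.integers(n)])
    elif u < 2 * lam * n:             # death: remove a uniform individual
        pop.pop(rng.integers(n))
    else:                             # immigration: a brand-new species
        pop.append(next_species)
        next_species += 1

abundance = Counter(Counter(pop).values())
gamma = mu / lam
# observed number of species with k individuals vs the Poisson mean gamma/k
print({k: (abundance.get(k, 0), gamma / k) for k in range(1, 6)})
```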

  9. Implications of movement for species distribution models - Rethinking environmental data tools.

    PubMed

    Bruneel, Stijn; Gobeyn, Sacha; Verhelst, Pieterjan; Reubens, Jan; Moens, Tom; Goethals, Peter

    2018-07-01

    Movement is considered an essential process in shaping the distributions of species. Nevertheless, most species distribution models (SDMs) still focus solely on environment-species relationships to predict the occurrence of species. Furthermore, the currently used indirect estimates of movement allow to assess habitat accessibility, but do not provide an accurate description of movement. Better proxies of movement are needed to assess the dispersal potential of individual species and to gain a more practical insight in the interconnectivity of communities. Telemetry techniques are rapidly evolving and highly capable to provide explicit descriptions of movement, but their usefulness for SDMs will mainly depend on the ability of these models to deal with hitherto unconsidered ecological processes. More specifically, the integration of movement is likely to affect the environmental data requirements as the connection between environmental and biological data is crucial to provide reliable results. Mobility implies the occupancy of a continuum of space, hence an adequate representation of both geographical and environmental space is paramount to study mobile species distributions. In this context, environmental models, remote sensing techniques and animal-borne environmental sensors are discussed as potential techniques to obtain suitable environmental data. In order to provide an in-depth review of the aforementioned methods, we have chosen to use the modelling of fish distributions as a case study. The high mobility of fish and the often highly variable nature of the aquatic environment generally complicate model development, making it an adequate subject for research. Furthermore, insight into the distribution of fish is of great interest for fish stock assessments and water management worldwide, underlining its practical relevance. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.

    PubMed

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on the probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of Gaussian come from the statistical information of the best individuals by fast learning rule. A fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performances of the algorithm are examined based upon several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to testify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher dimensional problems, and the FEGEDA exhibits a better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of PMSM and compared with the classical-PID and GA.

  11. A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization

    PubMed Central

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on the probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of Gaussian come from the statistical information of the best individuals by fast learning rule. A fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergent performance. The performances of the algorithm are examined based upon several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and probability model learning process during the evolution, and several two-dimensional and higher dimensional benchmarks are used to testify the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher dimensional problems, and the FEGEDA exhibits a better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of PMSM and compared with the classical-PID and GA. PMID:24892059
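
    The algorithm's loop (fit a Gaussian to the best individuals, resample, keep the elite) is summarized below in a hedged sketch; the paper's fast learning rule is simplified here to plain re-estimation from the selected set, and all names are illustrative.

```python
import numpy as np

def gaussian_eda(objective, dim, pop=100, elite_frac=0.3, iters=60, seed=8):
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim) * 5.0
    best_x, best_f = None, np.inf
    for _ in range(iters):
        X = rng.normal(mean, std, size=(pop, dim))   # sample the Gaussian model
        if best_x is not None:
            X[0] = best_x                            # elitism: keep the incumbent
        f = np.apply_along_axis(objective, 1, X)
        order = np.argsort(f)
        if f[order[0]] < best_f:
            best_x, best_f = X[order[0]].copy(), f[order[0]]
        elite = X[order[: int(elite_frac * pop)]]
        # re-estimate the model from the best individuals
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    return best_x, best_f

sphere = lambda x: float(np.sum(x ** 2))
print(gaussian_eda(sphere, dim=5))
```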

  12. Catchment virtual observatory for sharing flow and transport models outputs: using residence time distribution to compare contrasting catchments

    NASA Astrophysics Data System (ADS)

    Thomas, Zahra; Rousseau-Gueutin, Pauline; Kolbe, Tamara; Abbott, Ben; Marcais, Jean; Peiffer, Stefan; Frei, Sven; Bishop, Kevin; Le Henaff, Geneviève; Squividant, Hervé; Pichelin, Pascal; Pinay, Gilles; de Dreuzy, Jean-Raynald

    2017-04-01

    The distribution of groundwater residence times in a catchment provides synoptic information about catchment functioning (e.g. nutrient retention and removal, hydrograph flashiness). In contrast with interpreted model results, which are often not directly comparable between studies, the residence time distribution is a general output that can be used to compare catchment behaviors and test hypotheses about landscape controls on catchment functioning. To this end, we created a virtual observatory platform called Catchment Virtual Observatory for Sharing Flow and Transport Model Outputs (COnSOrT). The main goal of COnSOrT is to collect outputs from calibrated groundwater models from a wide range of environments. By comparing a wide variety of catchments from different climatic, topographic and hydrogeological contexts, we expect to enhance understanding of catchment connectivity, resilience to anthropogenic disturbance, and overall functioning. The web-based observatory will also provide software tools to analyze model outputs. It will enable modelers to test their models in a wide range of catchment environments to evaluate the generality of their findings and the robustness of their post-processing methods. Researchers with calibrated numerical models can benefit from the observatory by using its post-processing methods to implement new approaches to analyzing their data. Field scientists interested in contributing data could invite modelers associated with the observatory to test their models against observed catchment behavior. COnSOrT will allow meta-analyses with community contributions to generate new understanding and identify promising pathways for moving beyond single-catchment ecohydrology. Keywords: Residence time distribution, Model outputs, Catchment hydrology, Inter-catchment comparison

  13. Distributed run of a one-dimensional model in a regional application using SOAP-based web services

    NASA Astrophysics Data System (ADS)

    Smiatek, Gerhard

    This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.
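
    The record gives no code; as a conceptual analogue in Python (not the article's Perl/SOAP implementation), the sketch below farms independent model runs out to a fixed set of worker hosts with a thread pool. The host names and the `run_model` stand-in are hypothetical placeholders for the remote web-service call:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed
from itertools import cycle

def run_model(host: str, cell_id: int) -> float:
    """Hypothetical stand-in for the SOAP call that would execute the
    one-dimensional model for one grid cell on a remote host."""
    time.sleep(0.01)          # pretend the remote run takes some time
    return cell_id * 0.5      # dummy model output

hosts = ["pc1", "pc2", "pc3", "pc4"]   # assumed worker host names
cells = range(100)                     # grid cells of the regional run

results = {}
with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
    futures = {pool.submit(run_model, host, cell): cell
               for host, cell in zip(cycle(hosts), cells)}
    for fut in as_completed(futures):
        results[futures[fut]] = fut.result()
```

    Round-robin assignment over `cycle(hosts)` mirrors the idea of spreading work across the available web services; a production system would instead dispatch each task to whichever host is idle.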

  14. On the proportional abundance of species: Integrating population genetics and community ecology.

    PubMed

    Marquet, Pablo A; Espinoza, Guillermo; Abades, Sebastian R; Ganz, Angela; Rebolledo, Rolando

    2017-12-01

    The frequencies of genes in interconnected populations and of species in interconnected communities are affected by similar processes, such as birth, death and immigration. The equilibrium distribution of gene frequencies in structured populations has been known since the 1930s, under Wright's metapopulation model, known as the island model. The equivalent distribution for species frequencies (i.e. the species proportional abundance distribution, SPAD) at the metacommunity level, however, is unknown. In this contribution, we develop a stochastic model to account analytically for this distribution. We show that, as for genes, the SPAD follows a beta distribution, which provides a good description of empirical data and applies across a continuum of scales. This stochastic model, based upon a diffusion approximation, provides an alternative to neutral models for the species abundance distribution (SAD), which focus on numbers of individuals instead of proportions, and demonstrates that the relative frequency of genes in local populations and of species within communities follows the same probability law. We hope our contribution will help stimulate the mathematical and conceptual integration of theories in genetics and ecology.
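
    The paper's central claim, that the SPAD follows a beta law, is straightforward to check against data. A minimal sketch with synthetic species counts (all numbers are illustrative assumptions) fits a beta distribution with SciPy and runs a Kolmogorov-Smirnov goodness-of-fit check:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical metacommunity sample: counts of 20 species
counts = rng.negative_binomial(2, 0.05, size=20) + 1
p = counts / counts.sum()          # species proportional abundances

# Fit a beta law to the proportions (support fixed to [0, 1]),
# as the model predicts for the SPAD
a, b, loc, scale = stats.beta.fit(p, floc=0, fscale=1)
ks = stats.kstest(p, "beta", args=(a, b))
print(f"beta(a={a:.2f}, b={b:.2f}), KS p-value = {ks.pvalue:.3f}")
```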

  15. Time series behaviour of the number of Air Asia passengers: A distributional approach

    NASA Astrophysics Data System (ADS)

    Asrah, Norhaidah Mohd; Djauhari, Maman Abdurachman

    2013-09-01

    The common practice in time series analysis is to fit a model and then conduct further analysis on the residuals. However, if we know the distributional behavior of a time series, the analyses involved in model identification, parameter estimation, and model checking become more straightforward. In this paper, we show that the number of Air Asia passengers can be represented as a geometric Brownian motion process. Therefore, instead of using the standard approach to model fitting, we use an appropriate transformation to obtain a stationary, normally distributed and even independent time series. An example forecasting the number of Air Asia passengers is given to illustrate the advantages of the method.
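
    The transformation relied on here is the standard one for geometric Brownian motion: differencing the logarithm of the series yields increments that should be stationary, Gaussian and independent. A minimal sketch, using simulated data in place of the passenger counts and assumed drift and volatility values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated geometric Brownian motion standing in for the monthly
# passenger series (mu and sigma are assumed values)
mu, sigma, n = 0.01, 0.05, 120
log_increments = rng.normal(mu - 0.5 * sigma**2, sigma, n)
passengers = 1.0e6 * np.exp(np.cumsum(log_increments))

# If the series is GBM, the differenced log series should be
# stationary, normally distributed and independent
z = np.diff(np.log(passengers))
print(stats.shapiro(z))                 # normality check
print(stats.pearsonr(z[:-1], z[1:]))    # lag-1 dependence check
```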

  16. Coupled transport, mixing and biogeochemical reactions in fractured media: experimental observations and modelling at the Ploemeur fractured rock observatory

    NASA Astrophysics Data System (ADS)

    Le Borgne, T.; Bochet, O.; Klepikova, M.; Kang, P. K.; Shakas, A.; Aquilina, L.; Dufresne, A.; Linde, N.; Dentz, M.; Bour, O.

    2016-12-01

    Transport processes in fractured media and the associated reactions are governed by multiscale heterogeneity, ranging from fracture wall roughness at small scales to broadly distributed fracture lengths at the network scale. This strong disorder induces a variety of emerging phenomena, including flow channeling, anomalous transport and heat transfer, enhanced mixing and reactive hotspot development. These processes are generally difficult to isolate and monitor in the field because of their high degree of complexity and coupling. We report in situ experimental observations from the Ploemeur fractured rock observatory (http://hplus.ore.fr/en/ploemeur) that provide new insights into the dynamics of transport and reaction processes in fractured media. These include dipole and push-pull tracer tests that allow understanding and modelling of anomalous transport processes characterized by heavy-tailed residence time distributions (Kang et al. 2015), thermal push-pull tests that show the existence of highly channeled flow with a strong control on fracture-matrix exchanges (Klepikova et al. 2016), and time-lapse hydrogeophysical monitoring of saline tracer tests that allows quantifying the distribution of transport length scales governing dispersion processes (Shakas et al. 2016). These transport processes are then shown to induce rapid oxygen delivery and mixing at depth, leading to massive biofilm development (Bochet et al., in prep.). Hence, this presentation will attempt to link these observations made at different scales to quantify and model the coupling between flow channeling, non-Fickian transport, mixing and chemical reactions in fractured media. References: Bochet et al., Biofilm blooms driven by enhanced mixing in fractured rock, in prep.; Klepikova et al. 2016, Heat as a tracer for understanding transport processes in fractured media: theory and field assessment from multi-scale thermal push-pull tracer tests, Water Resour. Res. 52; Shakas et al. 2016, Hydrogeophysical characterization of transport processes in fractured rock by combining push-pull and single-hole ground penetrating radar experiments, Water Resour. Res. 52; Kang et al. 2015, Impact of velocity correlation and distribution on transport in fractured media: field evidence and theoretical model, Water Resour. Res. 51

  17. Airport Facility Queuing Model Validation

    DOT National Transportation Integrated Search

    1977-05-01

    Criteria are presented for selection of analytic models to represent waiting times due to queuing processes. An existing computer model by M.F. Neuts which assumes general nonparametric distributions of arrivals per unit time and service times for a ...

  18. Mission Assurance in a Distributed Environment

    DTIC Science & Technology

    2009-06-01

    • Business Process Modeling Notation (BPMN) – graphical representation of business processes in a workflow • Unified Modeling Language (UML) – standard UML diagrams used to model the system – component, sequence, and activity diagrams

  19. Intercomparison and Evaluation of Global Aerosol Microphysical Properties Among Aerocom Models of a Range of Complexity

    NASA Technical Reports Server (NTRS)

    Mann, G. W.; Carslaw, K. S.; Reddington, C. L.; Pringle, K. J.; Schulz, M.; Asmi, A.; Spracklen, D. V.; Ridley, D. A.; Woodhouse, M. T.; Lee, L. A.; et al.

    2014-01-01

    Many of the next generation of global climate models will include aerosol schemes which explicitly simulate the microphysical processes that determine the particle size distribution. These models enable aerosol optical properties and cloud condensation nuclei (CCN) concentrations to be determined by fundamental aerosol processes, which should lead to a more physically based simulation of aerosol direct and indirect radiative forcings. This study examines the global variation in particle size distribution simulated by 12 global aerosol microphysics models to quantify model diversity and to identify any common biases against observations. Evaluation against size distribution measurements from a new European network of aerosol supersites shows that the mean model agrees quite well with the observations at many sites on the annual mean, but there are some seasonal biases common to many sites. In particular, at many of these European sites, the accumulation mode number concentration is biased low during winter and Aitken mode concentrations tend to be overestimated in winter and underestimated in summer. At high northern latitudes, the models strongly underpredict Aitken and accumulation particle concentrations compared to the measurements, consistent with previous studies that have highlighted the poor performance of global aerosol models in the Arctic. In the marine boundary layer, the models capture the observed meridional variation in the size distribution, which is dominated by the Aitken mode at high latitudes, with an increasing concentration of accumulation particles with decreasing latitude. Considering vertical profiles, the models reproduce the observed peak in total particle concentrations in the upper troposphere due to new particle formation, although modelled peak concentrations tend to be biased high over Europe. Overall, the multimodel-mean data set simulates the global variation of the particle size distribution with a good degree of skill, suggesting that most of the individual global aerosol microphysics models are performing well, although the large model diversity indicates that some models are in poor agreement with the observations. Further work is required to better constrain size-resolved primary and secondary particle number sources, and an improved understanding of nucleation and growth (e.g. the role of nitrate and secondary organics) will improve the fidelity of simulated particle size distributions.

  20. Linking laser scanning to snowpack modeling: Data processing and visualization

    NASA Astrophysics Data System (ADS)

    Teufelsbauer, H.

    2009-07-01

    SnowSim is a newly developed physical snowpack model that can use three-dimensional terrestrial laser scanning data to generate model domains. This greatly simplifies the input and numerical simulation of snow covers in complex terrains. The program can model two-dimensional cross sections of general slopes, with complicated snow distributions. The model predicts temperature distributions and snow settlements in this cross section. Thus, the model can be used for a wide range of problems in snow science and engineering, including numerical investigations of avalanche formation. The governing partial differential equations are solved by means of the finite element method, using triangular elements. All essential data for defining the boundary conditions and evaluating the simulation results are gathered by automatic weather and snow measurement sites. This work focuses on the treatment of these measurements and the simulation results, and presents a pre- and post-processing graphical user interface (GUI) programmed in Matlab.

  1. Data-Driven Robust RVFLNs Modeling of a Blast Furnace Iron-Making Process Using Cauchy Distribution Weighted M-Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Lv, Youbin; Wang, Hong

    Optimal operation of a practical blast furnace (BF) ironmaking process depends largely on good measurement of molten iron quality (MIQ) indices. However, measuring the MIQ online is not feasible with the available techniques. In this paper, a novel data-driven robust modeling approach is proposed for online estimation of MIQ using improved random vector functional-link networks (RVFLNs). Since the output weights of traditional RVFLNs are obtained by the least squares approach, a robustness problem may occur when the training dataset is contaminated with outliers, which affects the modeling accuracy of RVFLNs. To solve this problem, a Cauchy distribution weighted M-estimation based robust RVFLNs is proposed. Since the weights of different outlier data are properly determined by the Cauchy distribution, their corresponding contributions to modeling can be properly distinguished, and thus robust and better modeling results can be achieved. Moreover, given that the BF is a complex nonlinear system with numerous coupled variables, data-driven canonical correlation analysis is employed to identify the most influential components from the multitudinous factors that affect the MIQ indices, in order to reduce the model dimension. Finally, experiments using industrial data and comparative studies have demonstrated that the obtained model produces better modeling and estimating accuracy and stronger robustness than other modeling methods.
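
    The record does not give the estimator's equations, but Cauchy-weighted M-estimation of linear-in-parameters output weights is commonly implemented as iteratively reweighted least squares. A minimal sketch under that assumption (the weight function, tuning constant and toy data are illustrative; in the paper's setting the design matrix X would be the hidden-layer output of the RVFLN):

```python
import numpy as np

def cauchy_irls(X, y, c=2.385, n_iter=50):
    """Robust linear regression via iteratively reweighted least
    squares with Cauchy weights w(r) = 1 / (1 + (r/(c*s))^2)."""
    w = np.ones(len(y))
    for _ in range(n_iter):
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale
        w = 1.0 / (1.0 + (r / (c * s)) ** 2)        # Cauchy downweighting
    return beta

# Toy demo with outliers (hypothetical data, not blast-furnace data)
rng = np.random.default_rng(8)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0.0, 0.1, 200)
y[:10] += 15.0                       # contaminate with outliers
print(cauchy_irls(X, y))             # stays close to [1, 2]
```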

  2. Frequency mismatch in stimulated scattering processes: An important factor for the transverse distribution of scattered light

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Tao; Research Center of Laser Fusion, China Academy of Engineering Physics, Mianyang, Sichuan 621900; Zheng, Jian, E-mail: jzheng@ustc.edu.cn

    2016-06-15

    A 2D cylindrically symmetric model including both diffraction and self-focusing effects is developed to treat the stimulated scattering processes of a single hotspot. The calculated results show that the transverse distribution of the scattered light is sensitive to the longitudinal profiles of the plasma parameters. Analysis of the evolution of the scattered light indicates that it is the frequency mismatch of the coupling, due to plasma inhomogeneity, that determines the transverse distribution of the scattered light.

  3. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  4. Power law versus exponential state transition dynamics: application to sleep-wake architecture.

    PubMed

    Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T

    2010-12-02

    Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
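
    The mimicry problem described here is easy to reproduce: a mixture of exponentials can look linear on a log-log plot. The sketch below (synthetic bout durations with assumed parameters) fits both candidate models with SciPy and compares Kolmogorov-Smirnov fits, the same style of goodness-of-fit check used in the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Mixture of two exponentials (bout durations) that can masquerade
# as a power law on a log-log plot
n = 5000
bouts = np.where(rng.random(n) < 0.7,
                 rng.exponential(2.0, n),
                 rng.exponential(30.0, n))

# Fit both candidate models and compare Kolmogorov-Smirnov statistics
expon_params = stats.expon.fit(bouts, floc=0)
shifted = bouts + 1.0                      # shift support off zero
pareto_params = stats.pareto.fit(shifted, floc=0)
print("exponential:", stats.kstest(bouts, "expon", args=expon_params))
print("power law:  ", stats.kstest(shifted, "pareto", args=pareto_params))
```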

  5. Rapidity distribution of photons from an anisotropic quark-gluon plasma

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Lusaka; Roy, Pradip

    2010-05-01

    We calculate the rapidity distribution of photons due to Compton and annihilation processes from a quark-gluon plasma with pre-equilibrium momentum-space anisotropy. We also include contributions from hadronic matter with late-stage transverse expansion. A phenomenological model has been used for the time evolution of the hard momentum scale, p_hard(τ), and the anisotropy parameter, ξ(τ). As a result of pre-equilibrium momentum-space anisotropy, we find significant modification of the photon rapidity distribution. For example, with the fixed initial condition (FIC) free-streaming (δ=2) interpolating model we observe significant enhancement of the photon rapidity distribution at fixed pT, whereas for the FIC collisionally broadened (δ=2/3) interpolating model the yield increases up to y~1; beyond that, suppression is observed. With the fixed final multiplicity (FFM) free-streaming interpolating model we predict an enhancement of the photon yield that is smaller than in the FIC case. Suppression is always observed for the FFM collisionally broadened interpolating model.

  6. Discrete epidemic models with arbitrary stage distributions and applications to disease control.

    PubMed

    Hernandez-Ceron, Nancy; Feng, Zhilan; Castillo-Chavez, Carlos

    2013-10-01

    W.O. Kermack and A.G. McKendrick introduced in their fundamental paper, A Contribution to the Mathematical Theory of Epidemics, published in 1927, a deterministic model that captured the qualitative dynamic behavior of single infectious disease outbreaks. A Kermack–McKendrick discrete-time general framework, motivated by the emergence of a multitude of models used to forecast the dynamics of epidemics, is introduced in this manuscript. Results that allow us to measure quantitatively the role of classical and general distributions on disease dynamics are presented. The case of the geometric distribution is used to evaluate the impact of waiting-time distributions on epidemiological processes or public health interventions. In short, the geometric distribution is used to set up the baseline or null epidemiological model used to test the relevance of realistic stage-period distribution on the dynamics of single epidemic outbreaks. A final size relationship involving the control reproduction number, a function of transmission parameters and the means of distributions used to model disease or intervention control measures, is computed. Model results and simulations highlight the inconsistencies in forecasting that emerge from the use of specific parametric distributions. Examples, using the geometric, Poisson and binomial distributions, are used to highlight the impact of the choices made in quantifying the risk posed by single outbreaks and the relative importance of various control measures.
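
    As a concrete instance of the geometric baseline, the sketch below implements a deterministic discrete-time SIR-type model in which each infective recovers with a constant per-step probability, so infectious periods are geometrically distributed. It is a simplified illustration with assumed parameter values, not the manuscript's full framework:

```python
import numpy as np

def discrete_sir(beta=0.3, p_rec=0.2, N=10000, I0=10, T=200):
    """Discrete-time SIR in which each infective recovers with constant
    probability p_rec per step, i.e. a geometric infectious period."""
    S, I, R = float(N - I0), float(I0), 0.0
    traj = []
    for _ in range(T):
        p_inf = 1.0 - np.exp(-beta * I / N)  # per-susceptible risk
        new_inf, new_rec = S * p_inf, I * p_rec
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        traj.append((S, I, R))
    return np.array(traj)

traj = discrete_sir()
print("final epidemic size:", traj[-1, 2])  # recovered at the end
```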

  7. Determination of material distribution in heading process of small bimetallic bar

    NASA Astrophysics Data System (ADS)

    Presz, Wojciech; Cacko, Robert

    2018-05-01

    Electrical connectors mostly have silver contacts joined by riveting. To reduce costs, the core of the contact rivet can be replaced with a cheaper material, e.g. copper. There is a wide range of commercially available bimetallic (silver-copper) rivets on the market for the production of contacts. Riveting a bimetallic object, however, creates new conditions in the riveting process. In the analyzed example, the object is of small size and can be placed at the border of microforming. Based on FEM modeling of the loading process of bimetallic rivets with different material distributions, the desired distribution was chosen and the choice justified. Possible material distributions were parameterized with two parameters referring to the desired distribution characteristics. A parameter, the Coefficient of Mutual Interaction of Plastic Deformations, and a method for its determination are proposed. The parameter is determined from two-parameter stress-strain curves and is a function of these parameters and of the range of equivalent strains occurring in the analyzed process. The proposed method was applied to the upsetting process of the bimetallic head of an electrical contact. A nomogram was established to predict the distribution of materials in the head of the rivet and to support the selection of a pair of materials that achieves the desired distribution.

  8. Parallel Distributed Processing Theory in the Age of Deep Networks.

    PubMed

    Bowers, Jeffrey S

    2017-12-01

    Parallel distributed processing (PDP) models in psychology are the precursors of deep networks used in computer science. However, only PDP models are associated with two core psychological claims, namely that all knowledge is coded in a distributed format and cognition is mediated by non-symbolic computations. These claims have long been debated in cognitive science, and recent work with deep networks speaks to this debate. Specifically, single-unit recordings show that deep networks learn units that respond selectively to meaningful categories, and researchers are finding that deep networks need to be supplemented with symbolic systems to perform some tasks. Given the close links between PDP and deep networks, it is surprising that research with deep networks is challenging PDP theory. Copyright © 2017. Published by Elsevier Ltd.

  9. Optimization of chlorine fluxing process for magnesium removal from molten aluminum

    NASA Astrophysics Data System (ADS)

    Fu, Qian

    High-throughput and low operational cost are the keys to a successful industrial process. Much aluminum is now recycled in the form of used beverage cans and this aluminum is of alloys that contain high levels of magnesium. It is common practice to "demag" the metal by injecting chlorine that preferentially reacts with the magnesium. In the conventional chlorine fluxing processes, low reaction efficiency results in excessive reactive gas emissions. In this study, through an experimental investigation of the reaction kinetics involved in this process, a mathematical model is set up for the purpose of process optimization. A feedback controlled chlorine reduction process strategy is suggested for demagging the molten aluminum to the desired magnesium level without significant gas emissions. This strategy also needs the least modification of the existing process facility. The suggested process time will only be slightly longer than conventional methods and chlorine usage and emissions will be reduced. In order to achieve process optimization through novel designs in any fluxing process, a system is necessary for measuring the bubble distribution in liquid metals. An electro-resistivity probe described in the literature has low accuracy and its capability to measure bubble distribution has not yet been fully demonstrated. A capacitance bubble probe was designed for bubble measurements in molten metals. The probe signal was collected and processed digitally. Higher accuracy was obtained by higher discrimination against corrupted signals. A single-size bubble experiment in Belmont metal was designed to reveal the characteristic response of the capacitance probe. This characteristic response fits well with a theoretical model. It is suggested that using a properly designed deconvolution process, the actual bubble size distribution can be calculated. The capacitance probe was used to study some practical bubble generation devices. Preliminary results on bubble distribution generated by a porous plug in Belmont metal showed bubbles much bigger than those in a water model. Preliminary results in molten aluminum showed that the probe was applicable in this harsh environment. An interesting bubble coalescence phenomenon was also observed in both Belmont metal and molten aluminum.

  10. Simulation model of fatigue crack opening/closing phenomena for predicting RPG load under arbitrary stress distribution field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toyosada, M.; Niwa, T.

    1995-12-31

    In this paper, Newman's calculation model is modified to account for the previously neglected effect of the change of stress distribution ahead of a crack, and to leave elastic-plastic material along the crack surface for compatibility with the Dugdale model. In addition, the authors introduce plastic shrinkage at the moment new crack surfaces are generated, due to the release of internal force at the yield stress level during the unloading process. Moreover, the model is extended to arbitrary stress distribution fields. Using the model, the RPG load is simulated for a center-notched specimen under constant amplitude loading with various stress ratios and a decreasing maximum load while keeping the minimum load constant.

  11. Antarctic surface temperature and sea ice biases in coupled climate models linked with cloud and land surface properties

    NASA Astrophysics Data System (ADS)

    Skiles, M.; Painter, T. H.; Marks, D. G.; Hedrick, A. R.

    2014-12-01

    Since 2013 the Airborne Snow Observatory (ASO) has been measuring spatial and temporal distribution of both snow water equivalent and snow albedo, the two most critical properties for understanding snowmelt runoff and timing, across key basins in the Western US. It is generally understood that net solar radiation (as controlled by variations in snow albedo and irradiance) provides the energy available for melt in almost all snow-covered environments. Until now, sparse measurements have restricted the ability to utilize measured net solar radiation in energy balance models, and current process simulations and model prediction of albedo evolution rely on oversimplifications of the processes. Data from ASO offers the unprecedented opportunity to utilize weekly measurements of spatially extensive spectral snow albedo to constrain and update snow albedo in a distributed snowmelt model for the first time. Here, we first investigate the sensitivity of the snow energy balance model SNOBAL to prescribed changes in snow albedo at two instrumented alpine catchments: at the point scale across 10 years at Senator Beck Basin Study Area in the San Juan Mountains, southwestern Colorado, and at the distributed scale across 25 years at Reynolds Creek Experimental Watershed, Idaho. We then compare distributed energy balance and snowmelt results across the ASO measurement record in the Tuolumne Basin in the Sierra Nevada Mountains, California, for model runs with and without integrated snow albedo from ASO.

  12. Local operators in kinetic wealth distribution

    NASA Astrophysics Data System (ADS)

    Andrecut, M.

    2016-05-01

    The statistical mechanics approach to wealth distribution is based on the conservative kinetic multi-agent model for money exchange, where the local interaction rule between the agents is analogous to the elastic particle scattering process. Here, we discuss the role of a class of conservative local operators, and we show that, depending on the values of their parameters, they can be used to generate all the relevant distributions. We also show numerically that in order to generate the power-law tail, a heterogeneous risk-aversion model is required. By changing the parameters of these operators, one can also fine-tune the resulting distributions in order to support the emergence of a more egalitarian wealth distribution.
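
    The operators themselves are not spelled out in this record, but the kinetic exchange family it builds on is short to simulate. In the sketch below (assumed parameter values), the heterogeneous saving propensities `lam` play the role of risk aversion: total wealth is conserved in pairwise random exchanges, and heterogeneity produces a Pareto-like tail:

```python
import numpy as np

rng = np.random.default_rng(4)
N, steps = 1000, 200000
w = np.ones(N)                       # wealth; the total is conserved
lam = rng.uniform(0.0, 0.95, N)      # heterogeneous saving propensity
                                     # (risk aversion) per agent
for _ in range(steps):
    i, j = rng.integers(N), rng.integers(N)
    if i == j:
        continue
    eps = rng.random()
    # each agent keeps a fraction lam of its wealth; the remainder is
    # pooled and randomly split, analogous to elastic scattering
    pool = (1 - lam[i]) * w[i] + (1 - lam[j]) * w[j]
    w[i] = lam[i] * w[i] + eps * pool
    w[j] = lam[j] * w[j] + (1 - eps) * pool

print("ten largest fortunes:", np.sort(w)[-10:])
```

    With a single shared value of `lam` the stationary distribution is gamma-like instead, which matches the abstract's point that heterogeneity is required for the power-law tail.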

  13. Advances in distributed watershed modeling: a review and application of the AgroEcoSystem-Watershed (AgES-W) model

    USDA-ARS?s Scientific Manuscript database

    Progress in the understanding of physical, chemical, and biological processes influencing water quality, coupled with advancements in the collection and analysis of hydrologic data, provide opportunities for significant innovations in the manner and level with which watershed-scale processes may be ...

  14. Challenges and progress in distributed watershed modeling: applications of the AgroEcoSystem-Watershed (AgES-W) model

    USDA-ARS?s Scientific Manuscript database

    Progress in the understanding of physical, chemical, and biological processes influencing water quality, coupled with advances in the collection and analysis of hydrologic data, provide opportunities for significant innovations in the manner and level with which watershed-scale processes may be quan...

  15. Wolf Creek Research Basin Cold Region Process Studies - 1992-2003

    NASA Astrophysics Data System (ADS)

    Janowicz, R.; Hedstrom, N.; Pomeroy, J.; Granger, R.; Carey, S.

    2004-12-01

    The development of hydrological models for northern regions is complicated by cold region processes. Sparse vegetation influences snowpack accumulation, redistribution and melt; frozen ground affects infiltration and runoff; and cold soils in the summer affect evapotranspiration rates. Situated in the upper Yukon River watershed, the 195 km² Wolf Creek Research Basin was instrumented in 1992 to calibrate hydrologic flow models and has since evolved into a comprehensive study of cold region processes and linkages, contributing significantly to hydrological and climate change modelling. Studies include those of precipitation distribution, snowpack accumulation and redistribution, energy balance, snowmelt infiltration, and water balance. Studies of the spatial variability of hydrometeorological data demonstrate the importance of physical parameters for their distribution and for the control of runoff processes. Many studies have also identified the complex interaction of several physical parameters, including topography, vegetation and frozen ground (seasonal or permafrost), as important. They also show that there is a fundamental, underlying spatial structure to the watershed that must be adequately represented in parameterization schemes for scaling and watershed modelling. The specific results of numerous studies are presented.

  16. The Influence of Welding Parameters on the Nugget Formation of Resistance Spot Welding of Inconel 625 Sheets

    NASA Astrophysics Data System (ADS)

    Rezaei Ashtiani, Hamid Reza; Zarandooz, Roozbeh

    2015-09-01

    A 2D axisymmetric electro-thermo-mechanical finite element (FE) model is developed to investigate the effect of current intensity, welding time, and electrode tip diameter on temperature distributions and nugget size in the resistance spot welding (RSW) process of Inconel 625 superalloy sheets, using the ABAQUS commercial software package. A coupled electro-thermal analysis and an uncoupled thermal-mechanical analysis are used to model the process. To improve the accuracy of the simulation, material properties, including physical, thermal, and mechanical properties, are considered temperature dependent. The thickness and diameter of the computed weld nuggets are compared with experimental results and good agreement is observed. The FE model developed in this paper therefore provides suitable predictions of weld nugget quality and shape and of temperature distributions as each process parameter is varied. Utilizing this FE model assists in adjusting RSW parameters, so that expensive experimentation can be avoided. The results show that increasing welding time and current intensity leads to an increase in nugget size and electrode indentation, whereas increasing electrode tip diameter decreases nugget size and electrode indentation.

  17. Multistage degradation modeling for BLDC motor based on Wiener process

    NASA Astrophysics Data System (ADS)

    Yuan, Qingyang; Li, Xiaogang; Gao, Yuankai

    2018-05-01

    Brushless DC motors are widely used, and their working temperatures, regarded as degradation processes, are nonlinear and multistage, so it is necessary to establish a nonlinear degradation model. This research is based on accelerated degradation data of motors, namely their working temperatures. A multistage Wiener model was established by using a transition function to modify the linear model. A normal weighted-average filter (Gaussian filter) was used to improve the estimation of the model parameters. The likelihood function was then maximized for parameter estimation using a numerical optimization method, the simplex method, in a cyclic calculation. The modeling results show that the degradation mechanism changes during the degradation of the high-speed motor. The effectiveness and rationality of the model are verified by comparing the life distribution with that of the widely used nonlinear Wiener model, as well as by comparing Q-Q plots of the residuals. Finally, predictions of motor life are obtained from the life distributions at different times calculated with the multistage model.
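
    The record names the ingredients, a Wiener process whose drift is modified by a transition function, without giving their exact form. A minimal simulation sketch under the assumption of a logistic transition between two drift stages (all parameter values are illustrative):

```python
import numpy as np

def multistage_wiener(t, drifts=(0.02, 0.08), sigma=0.5,
                      t_change=500.0, k=0.05, seed=5):
    """Two-stage Wiener degradation path whose drift shifts from
    drifts[0] to drifts[1] around t_change via a logistic transition
    function that modifies the linear model."""
    rng = np.random.default_rng(seed)
    dt = np.diff(t, prepend=t[0])                   # first step is 0
    s = 1.0 / (1.0 + np.exp(-k * (t - t_change)))   # transition 0 -> 1
    drift = drifts[0] + (drifts[1] - drifts[0]) * s
    dW = sigma * np.sqrt(dt) * rng.standard_normal(len(t))
    return np.cumsum(drift * dt + dW)

t = np.linspace(0.0, 1000.0, 2001)
temperature_rise = multistage_wiener(t)   # e.g. motor temperature drift
```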

  18. Ecological Niche Modeling of Risk Factors for H7N9 Human Infection in China

    PubMed Central

    Xu, Min; Cao, Chunxiang; Li, Qun; Jia, Peng; Zhao, Jian

    2016-01-01

    China was struck by a serious influenza A (H7N9) outbreak in 2013. The first human infection was confirmed in Shanghai City, and the virus soon spread across most of eastern China. Using Geographic Information Systems (GIS) and ecological niche modeling (ENM) methods, this research quantitatively analyzed the relationships between H7N9 occurrence and the main environmental factors, including meteorological variables, human population density, bird migratory routes, wetland distribution, and live poultry farms, markets, and processing factories. Based on these relationships the probability of the presence of H7N9 was predicted. Results indicated that the distribution of live poultry processing factories, live poultry farms, and human population density were the top three determinants of H7N9 human infection, with relative contributions to the model of 39.9%, 17.7% and 17.7%, respectively, while the maximum temperature of the warmest month and mean relative humidity contributed almost nothing to the model. The paper develops an ecological niche model (ENM) that predicts the spatial distribution of H7N9 cases in China from environmental variables. The area under the curve (AUC) values of the model were greater than 0.9 (0.992 for the training samples and 0.961 for the test data). The findings indicated that most of the high-risk areas were distributed in the Yangtze River Delta. These findings are of significance for the Chinese government in enhancing environmental surveillance at multiple human-poultry interfaces in the high-risk areas. PMID:27322296

  20. Comparison of Two Conceptually Different Physically-based Hydrological Models - Looking Beyond Streamflows

    NASA Astrophysics Data System (ADS)

    Rousseau, A. N.; Álvarez; Yu, X.; Savary, S.; Duffy, C.

    2015-12-01

    Most physically-based hydrological models simulate, to various extents, the relevant watershed processes occurring at different spatiotemporal scales. These models use different physical domain representations (e.g., hydrological response units, discretized control volumes) and numerical solution techniques (e.g., finite difference method, finite element method), as well as a variety of approximations for representing the physical processes. Despite the fact that several models have been developed so far, very few inter-comparison studies have checked, beyond streamflows, whether different modeling approaches simulate the other watershed-scale processes in a similar fashion. In this study, PIHM (Qu and Duffy, 2007), a fully coupled, distributed model, and HYDROTEL (Fortin et al., 2001; Turcotte et al., 2003, 2007), a pseudo-coupled, semi-distributed model, were compared to check whether the models could corroborate observed streamflows while also representing other processes, such as evapotranspiration, snow accumulation/melt or infiltration. For this study, the Young Womans Creek watershed, PA, was used to compare: streamflows (channel routing), actual evapotranspiration, snow water equivalent (snow accumulation and melt), infiltration, recharge, shallow water depth above the soil surface (surface flow), lateral flow into the river (surface and subsurface flow) and height of the saturated soil column (subsurface flow). Despite a lack of observed data against which to contrast most of the simulated processes, it can be said that the two models can be used as simulation tools for streamflows, actual evapotranspiration, infiltration, lateral flows into the river, and height of the saturated soil column. However, each process presents particular differences as a result of the physical parameters and modeling approaches used by each model. These differences should be the object of further analyses to definitively confirm or reject modeling hypotheses.

  1. EVALUATION OF MULTIPLE PHARMACOKINETIC MODELING STRUCTURES FOR TRICHLOROETHYLENE

    EPA Science Inventory

    A series of PBPK models were developed for trichloroethylene (TCE) to evaluate biological processes that may affect the absorption, distribution, metabolism and excretion (ADME) of TCE and its metabolites.

  2. A novel multitarget model of radiation-induced cell killing based on the Gaussian distribution.

    PubMed

    Zhao, Lei; Mi, Dong; Sun, Yeqing

    2017-05-07

    The multitarget version of traditional target theory, based on the Poisson distribution, is still used to describe the dose-survival curves of cells after ionizing radiation in radiobiology and radiotherapy. However, noting that ionizing radiation damage is usually the result of two sequential stochastic processes, the probability distribution of the number of damages per cell should follow a compound Poisson distribution, such as Neyman's type A (N. A.) distribution. Considering that the Gaussian distribution can be regarded as an approximation of the N. A. distribution at high flux, a multitarget model based on the Gaussian distribution is proposed to describe cell inactivation effects under low linear energy transfer (LET) radiation at high dose rate. Theoretical analysis and fitting of experimental data indicate that the present theory is superior to the traditional multitarget model and comparable to the linear-quadratic (LQ) model in describing the biological effects of low-LET radiation at high dose rate, and that the parameter ratio in the present model can be used as an alternative indicator of the radiation damage and radiosensitivity of cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
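
    The Gaussian-based model's equations are not reproduced in this record, but the two reference models it is compared against are standard and easy to plot side by side; a sketch with illustrative parameter values:

```python
import numpy as np

def multitarget_survival(D, D0=1.2, n=3.0):
    """Traditional multitarget model: S = 1 - (1 - exp(-D/D0))^n."""
    return 1.0 - (1.0 - np.exp(-D / D0)) ** n

def lq_survival(D, alpha=0.2, beta=0.05):
    """Linear-quadratic model: S = exp(-(alpha*D + beta*D^2))."""
    return np.exp(-(alpha * D + beta * D**2))

D = np.linspace(0.0, 10.0, 101)            # dose (Gy), illustrative
S_mt, S_lq = multitarget_survival(D), lq_survival(D)
# On a semi-log plot the multitarget curve shows a shoulder followed
# by a straight exponential tail; the LQ curve bends continuously.
```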

  3. A Three-Phase Microgrid Restoration Model Considering Unbalanced Operation of Distributed Generation

    DOE PAGES

    Wang, Zeyu; Wang, Jianhui; Chen, Chen

    2016-12-07

    Recent severe outages highlight the urgency of improving grid resiliency in the U.S. Microgrid formation schemes are proposed to restore critical loads after outages occur. Most distribution networks have unbalanced configurations that are not represented in sufficient detail by single-phase models. This study provides a microgrid formation plan that adopts a three-phase network model to represent unbalanced distribution networks. The problem formulation has a quadratic objective function with mixed-integer linear constraints. The three-phase network model enables us to examine the three-phase power outputs of distributed generators (DGs), preventing unbalanced operation that might trip DGs. Because the DG unbalanced operation constraint is non-convex, an iterative process is presented that checks whether the unbalanced operation limits for DGs are satisfied after each iteration of optimization. We also develop a relatively conservative linear approximation of the unbalanced operation constraint to handle larger networks. Compared with the iterative solution process, the conservative linear approximation accelerates the solution process at the cost of sacrificing optimality to a limited extent. Simulations on the IEEE 34-node and IEEE 123-node test feeders indicate that the proposed method yields more practical microgrid formation results. In addition, this paper explores the coordinated operation of DGs and energy storage (ES) installations: the unbalanced three-phase outputs of ESs, combined with the relatively balanced outputs of DGs, can supply unbalanced loads. The case study also validates this DG-ES coordination.

  5. Modeling Neutron stars as r-process sources in Ultra Faint Dwarf galaxies

    NASA Astrophysics Data System (ADS)

    Safarzadeh, Mohammadtaher; Scannapieco, Evan

    2018-06-01

    To explain the high observed abundances of r-process elements in local ultrafaint dwarf (UFD) galaxies, we perform cosmological zoom simulations that include r-process production from neutron star mergers (NSMs). We model star formation stochastically and simulate two different haloes with total masses ≈10^8 M⊙ at z = 6. We find that the final distribution of [Eu/H] versus [Fe/H] is relatively insensitive to the energy by which the r-process material is ejected into the interstellar medium, but strongly sensitive to the environment in which the NSM event occurs. In one halo, the NSM event takes place at the centre of the stellar distribution, leading to high levels of r-process enrichment such as seen in a local UFD, Reticulum II (Ret II). In a second halo, the NSM event takes place outside of the densest part of the galaxy, leading to a more extended r-process distribution. The subsequent star formation occurs in an interstellar medium with shallow levels of r-process enrichment that results in stars with low levels of [Eu/H] compared to Ret II stars, even when the maximum possible r-process mass is assumed to be ejected. This suggests that the natal kicks of neutron stars may also play an important role in determining the r-process abundances in UFD galaxies, a topic that warrants further theoretical investigation.

  6. A Parallel and Distributed Processing Model of Joint Attention, Social-Cognition and Autism

    PubMed Central

    Mundy, Peter; Sullivan, Lisa; Mastergeorge, Ann M.

    2009-01-01

    The impaired development of joint attention is a cardinal feature of autism; therefore, understanding the nature of joint attention is central to research on this disorder. Joint attention may be best defined in terms of an information processing system that begins to develop by 4–6 months of age. This system integrates the parallel processing of internal information about one's own visual attention with external information about the visual attention of other people. This type of joint encoding of information about self and other attention requires the activation of a distributed anterior and posterior cortical attention network. Genetic regulation, in conjunction with self-organizing behavioral activity, guides the development of functional connectivity in this network. With practice in infancy, the joint processing of self-other attention becomes automatically engaged as an executive function. It can be argued that this executive joint attention is fundamental to human learning, as well as to the development of symbolic thought, social cognition and social competence throughout the life span. One advantage of this parallel and distributed processing model of joint attention (PDPM) is that it directly connects theory on social pathology to a range of phenomena in autism associated with neural connectivity, constructivist and connectionist models of cognitive development, early intervention, activity-dependent gene expression, and atypical ocular motor control. PMID:19358304

  7. The nearly neutral and selection theories of molecular evolution under the fisher geometrical framework: substitution rate, population size, and complexity.

    PubMed

    Razeto-Barry, Pablo; Díaz, Javier; Vásquez, Rodrigo A

    2012-06-01

    The general theories of molecular evolution depend on relatively arbitrary assumptions about the relative distribution and rate of advantageous, deleterious, neutral, and nearly neutral mutations. The Fisher geometrical model (FGM) has been used to make distributions of mutations biologically interpretable. We explored an FGM-based molecular model to represent molecular evolutionary processes typically studied by nearly neutral and selection models, but in which distributions and relative rates of mutations with different selection coefficients are a consequence of biologically interpretable parameters, such as the average size of the phenotypic effect of mutations and the number of traits (complexity) of organisms. A variant of the FGM-based model that we called the static regime (SR) represents evolution as a nearly neutral process in which substitution rates are determined by a dynamic substitution process in which the population's phenotype remains around a suboptimum equilibrium fitness produced by a balance between slightly deleterious and slightly advantageous compensatory substitutions. As in previous nearly neutral models, the SR predicts a negative relationship between molecular evolutionary rate and population size; however, SR does not have the unrealistic properties of previous nearly neutral models such as the narrow window of selection strengths in which they work. In addition, the SR suggests that compensatory mutations cannot explain the high rate of fixations driven by positive selection currently found in DNA sequences, contrary to what has been previously suggested. We also developed a generalization of SR in which the optimum phenotype can change stochastically due to environmental or physiological shifts, which we called the variable regime (VR). VR models evolution as an interplay between adaptive processes and nearly neutral steady-state processes. When strong environmental fluctuations are incorporated, the process becomes a selection model in which evolutionary rate does not depend on population size, but is critically dependent on the complexity of organisms and mutation size. For SR as well as VR we found that key parameters of molecular evolution are linked by biological factors, and we showed that they cannot be fixed independently by arbitrary criteria, as has usually been assumed in previous molecular evolutionary models.

  9. On the distribution of interspecies correlation for Markov models of character evolution on Yule trees.

    PubMed

    Mulder, Willem H; Crawford, Forrest W

    2015-01-07

    Efforts to reconstruct phylogenetic trees and understand evolutionary processes depend fundamentally on stochastic models of speciation and mutation. The simplest continuous-time model for speciation in phylogenetic trees is the Yule process, in which new species are "born" from existing lineages at a constant rate. Recent work has illuminated some of the structural properties of Yule trees, but it remains mostly unknown how these properties affect sequence and trait patterns observed at the tips of the phylogenetic tree. Understanding the interplay between speciation and mutation under simple models of evolution is essential for deriving valid phylogenetic inference methods and gives insight into the optimal design of phylogenetic studies. In this work, we derive the probability distribution of interspecies covariance under Brownian motion and Ornstein-Uhlenbeck models of phenotypic change on a Yule tree. We compute the probability distribution of the number of mutations shared between two randomly chosen taxa in a Yule tree under discrete Markov mutation models. Our results suggest summary measures of phylogenetic information content, illuminate the correlation between site patterns in sequences or traits of related organisms, and provide heuristics for experimental design and reconstruction of phylogenetic trees. Copyright © 2014 Elsevier Ltd. All rights reserved.
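
    A Yule tree is simple to simulate directly, which makes distributions derived from it easy to check numerically. In the sketch below (rate and tree size are illustrative), with k extant lineages the waiting time to the next speciation is exponential with rate kλ; under Brownian motion, the trait covariance between two tips is proportional to the time from the root to their most recent common ancestor:

```python
import numpy as np

def yule_times(lam=1.0, n_tips=20, seed=6):
    """Speciation times of a Yule tree grown from a single lineage:
    with k lineages alive, the wait to the next birth is exponential
    with rate k * lam."""
    rng = np.random.default_rng(seed)
    t, times = 0.0, []
    for k in range(1, n_tips):
        t += rng.exponential(1.0 / (k * lam))
        times.append(t)
    return np.array(times)

times = yule_times()
tree_height = times[-1]
# Under Brownian motion with variance rate s2, the trait covariance of
# two tips equals s2 times the root-to-MRCA time of that pair.
```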

  10. Nanofibre distribution in composites manufactured with epoxy reinforced with nanofibrillated cellulose: model prediction and verification

    NASA Astrophysics Data System (ADS)

    Aitomäki, Yvonne; Westin, Mikael; Korpimäki, Jani; Oksman, Kristiina

    2016-07-01

    In this study a model based on simple scattering is developed and used to predict the distribution of nanofibrillated cellulose in composites manufactured by resin transfer moulding (RTM) where the resin contains nanofibres. The model is a Monte Carlo based simulation where nanofibres are randomly chosen from probability density functions for length, diameter and orientation. Their movements are then tracked as they advance through a random arrangement of fibres in defined fibre bundles. The results of the model show that the fabric filters the nanofibres within the first 20 µm unless clear inter-bundle channels are available. The volume fraction of the fabric fibres, flow velocity and size of nanofibre influence this to some extent. To verify the model, an epoxy with 0.5 wt.% Kraft Birch nanofibres was made through a solvent exchange route and stained with a colouring agent. This was infused into a glass fibre fabric using an RTM process. The experimental results confirmed the filtering of the nanofibres by the fibre bundles and their penetration in the fabric via the inter-bundle channels. Hence, the model is a useful tool for visualising the distribution of the nanofibres in composites in this manufacturing process.

  11. A Process-Based Transport-Distance Model of Aeolian Transport

    NASA Astrophysics Data System (ADS)

    Naylor, A. K.; Okin, G.; Wainwright, J.; Parsons, A. J.

    2017-12-01

    We present a new approach to modeling aeolian transport based on transport distance. Particle fluxes are based on statistical probabilities of particle detachment and distributions of transport lengths, which are functions of particle size classes. A computational saltation model is used to simulate transport distances over a variety of sizes. These are fit to an exponential distribution, which has the advantages of computational economy, concordance with current field measurements, and a meaningful relationship to theoretical assumptions about mean and median particle transport distance. This novel approach includes particle-particle interactions, which are important for sustaining aeolian transport and dust emission. Results from this model are compared with results from both bulk- and particle-size-specific transport equations as well as empirical wind tunnel studies. The transport-distance approach has been successfully used for hydraulic processes, and extending this methodology from hydraulic to aeolian transport opens up the possibility of modeling joint transport by wind and water using consistent physics. Particularly in nutrient-limited environments, modeling the joint action of aeolian and hydraulic transport is essential for understanding the spatial distribution of biomass across landscapes and how it responds to climatic variability and change.
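
    A minimal illustration of the transport-distance bookkeeping described above, with assumed parameter values: detachment is a Bernoulli trial per cell, each detached particle hops an exponentially distributed distance, and the flux across a boundary is the count of hops crossing it. Note the handy identities mean = lambda and median = lambda * ln 2 for an Exponential(lambda) hop length:

      import numpy as np

      rng = np.random.default_rng(2)
      n_cells, dx = 1000, 0.1        # 1-D strip of cells (assumed geometry)
      p_detach = 0.05                # detachment probability per cell per step
      mean_hop = 0.8                 # mean transport distance for this size class

      x = (np.arange(n_cells) + 0.5) * dx
      detached = rng.random(n_cells) < p_detach
      hops = rng.exponential(mean_hop, size=detached.sum())
      x_final = x[detached] + hops   # downwind landing positions

      boundary = 50.0
      flux = np.sum((x[detached] < boundary) & (x_final >= boundary))
      print("particles crossing x = 50:", flux)
      print("median hop:", np.median(hops), "~ mean*ln2 =", mean_hop * np.log(2))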

  12. Validating modelled variable surface saturation in the riparian zone with thermal infrared images

    NASA Astrophysics Data System (ADS)

    Glaser, Barbara; Klaus, Julian; Frei, Sven; Frentress, Jay; Pfister, Laurent; Hopp, Luisa

    2015-04-01

    Variable contributing areas and hydrological connectivity have become prominent new concepts for hydrologic process understanding in recent years. The dynamic connectivity within the hillslope-riparian-stream (HRS) system is known to exert a first-order control on discharge generation, and the riparian zone in particular functions as a runoff-buffering or runoff-producing zone. However, despite their importance, the highly dynamic processes of contraction and extension of saturation within the riparian zone and their impact on runoff generation remain only partly understood. In this study, we analysed the potential of a distributed, fully coupled and physically based model (HydroGeoSphere) to represent the spatial and temporal water flux dynamics of a forested headwater HRS system (6 ha) in western Luxembourg. The model was set up and parameterised under consideration of experimentally derived knowledge of catchment structure and was run for a period of four years (October 2010 to August 2014). For model evaluation, we focused especially on the temporally varying spatial patterns of surface saturation. We used ground-based thermal infrared (TIR) imagery to map surface saturation with a high spatial and temporal resolution and collected 20 panoramic snapshots of the riparian zone (ca. 10 by 20 m) under different hydrologic conditions. These TIR panoramas were used in addition to several classical discharge and soil moisture time series for a spatially distributed model validation. In a manual calibration process we optimised model parameters (e.g. porosity, saturated hydraulic conductivity, evaporation depth) to achieve a better agreement between observed and modelled discharges and soil moistures. The subsequent validation of surface saturation patterns by a visual comparison of processed TIR panoramas and corresponding model output panoramas revealed overall good agreement for all but one region that was always too dry in the model. However, quantitative comparisons of modelled and observed saturated pixel percentages, and of their modelled and measured relationships to concurrent discharges, revealed remarkable similarities. During the calibration process we observed that surface saturation patterns were mostly affected by changing the soil properties of the topsoil in the riparian zone, whereas the discharge behaviour did not change substantially at the same time. This effect of various spatial patterns occurring concomitantly with a nearly unchanged integrated response demonstrates the importance of spatially distributed validation data. Our study clearly benefited from using different kinds of data - spatially integrated and distributed, temporally continuous and discrete - for the model evaluation procedure.
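
    The quantitative part of such a comparison reduces to simple pixel bookkeeping, sketched here for stand-in inputs: classify TIR pixels as saturated below a temperature threshold, take the model's surface-saturation mask for the same footprint, and compare saturated-pixel percentages. The arrays, threshold and footprint below are placeholders, not the study's data:

      import numpy as np

      rng = np.random.default_rng(11)
      # stand-ins for one co-registered snapshot pair (assumed 10 m x 20 m footprint)
      tir = 12.0 + 3.0 * rng.random((100, 200))    # TIR brightness temperature, deg C
      model_sat = rng.random((100, 200)) < 0.25    # model surface-saturation mask

      t_thresh = 12.8                              # saturated ground stays cool (assumed)
      obs_sat = tir < t_thresh

      pct_obs = 100.0 * obs_sat.mean()
      pct_mod = 100.0 * model_sat.mean()
      agree = 100.0 * (obs_sat == model_sat).mean()
      print(f"observed {pct_obs:.1f}% vs modelled {pct_mod:.1f}% saturated; "
            f"pixel agreement {agree:.1f}%")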

  13. Analytically Solvable Model of Spreading Dynamics with Non-Poissonian Processes

    NASA Astrophysics Data System (ADS)

    Jo, Hang-Hyun; Perotti, Juan I.; Kaski, Kimmo; Kertész, János

    2014-01-01

    Non-Poissonian bursty processes are ubiquitous in natural and social phenomena, yet little is known about their effects on the large-scale spreading dynamics. In order to characterize these effects, we devise an analytically solvable model of susceptible-infected spreading dynamics in infinite systems for arbitrary inter-event time distributions and for the whole time range. Our model is stationary from the beginning, and the role of the lower bound of inter-event times is explicitly considered. The exact solution shows that for early and intermediate times, the burstiness accelerates the spreading as compared to a Poisson-like process with the same mean and same lower bound of inter-event times. Such behavior is opposite for late-time dynamics in finite systems, where the power-law distribution of inter-event times results in a slower and algebraic convergence to a fully infected state in contrast to the exponential decay of the Poisson-like process. We also provide an intuitive argument for the exponent characterizing algebraic convergence.
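
    The two ingredients the model varies can be generated directly: inter-event times with a hard lower bound t0 that are otherwise power-law distributed (sampled by inverse transform), and a Poisson-like reference with the same mean and the same lower bound (a shifted exponential). The exponent and bound below are assumptions for illustration; the burstiness coefficient is the standard Goh-Barabasi measure, not a quantity from this paper:

      import numpy as np

      rng = np.random.default_rng(3)
      alpha, t0, n = 3.5, 1.0, 200000   # power-law exponent and lower bound (assumed)

      # inverse-transform sampling of p(t) ~ t^(-alpha) for t >= t0
      u = rng.random(n)
      t_burst = t0 * (1.0 - u) ** (-1.0 / (alpha - 1.0))

      # Poisson-like reference with the same mean and the same lower bound:
      # a shifted exponential t = t0 + Exp(scale) with matched mean
      t_pois = t0 + rng.exponential(t_burst.mean() - t0, size=n)

      def burstiness(t):
          # Goh-Barabasi coefficient: 0 for a pure Poisson process, -> 1 when bursty
          return (t.std() - t.mean()) / (t.std() + t.mean())

      print("burstiness, power-law inter-event times: ", burstiness(t_burst))
      print("burstiness, shifted-exponential reference:", burstiness(t_pois))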

  14. Distributed Architecture for the Object-Oriented Method for Interoperability

    DTIC Science & Technology

    2003-03-01

    Collaborative Environment. ... Figure V-2. Distributed OOMI And The Collaboration Centric Paradigm. ... of systems are formed into a system federation to resolve differences in modeling. An OOMI Integrated Development Environment (OOMI IDE) lends ... space for the creation of possible distributed systems is partitioned into User Centric systems, Processing/Storage Centric systems, Implementation...

  15. How School Context and Educator Characteristics Predict Distributed Leadership: A Hierarchical Structural Equation Model with 2013 TALIS Data

    ERIC Educational Resources Information Center

    Liu, Yan; Bellibas, Mehmet Sukru; Printy, Susan

    2018-01-01

    Distributed leadership is a dynamic process and reciprocal interaction of the leader, the subordinates and the situation. This research was inspired by the theoretical framework of Spillane in order to contextualize distributed leadership and compare the variations using the Teaching and Learning International Survey 2013 data. The two-level…

  16. The Impact of Aerosols on Cloud and Precipitation Processes: Cloud-Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Li, X.; Khain, A.; Simpson, S.

    2004-01-01

    Cloud microphysics are inevitably affected by the smoke particle (CCN, cloud condensation nuclei) size distributions below the clouds. Therefore, size distributions parameterized as spectral bin microphysics are needed to explicitly study the effects of atmospheric aerosol concentration on cloud development, rainfall production, and rainfall rates for convective clouds. Recently, two detailed spectral-bin microphysical schemes were implemented into the Goddard Cumulus Ensemble (GCE) model. The formulation for the explicit spectral-bin microphysical processes is based on solving stochastic kinetic equations for the size distribution functions of water droplets (i.e., cloud droplets and raindrops), and several types of ice particles (i.e., pristine ice crystals (columnar and plate-like), snow (dendrites and aggregates), graupel and frozen drops/hail). Each type is described by a special size distribution function containing many categories (i.e. 33 bins). Atmospheric aerosols are also described using number density size-distribution functions. A spectral-bin microphysical model is very expensive from a computational point of view and has only been implemented into the 2D version of the GCE at the present time. The model is tested by studying the evolution of deep cloud systems in the west Pacific warm pool region, in the sub-tropics (Florida) and in the mid-latitudes, using identical thermodynamic conditions but with different concentrations of CCN: a low 'clean' concentration and a high 'dirty' concentration.

  17. Relaxation Time Distribution (RTD) of Spectral Induced Polarization (SIP) data from environmental studies

    NASA Astrophysics Data System (ADS)

    Ntarlagiannis, D.; Ustra, A.; Slater, L. D.; Zhang, C.; Mendonça, C. A.

    2015-12-01

    In this work we present an alternative formulation of the Debye Decomposition (DD) of complex conductivity spectra, with a new set of parameters that are directly related to the continuous Debye relaxation model. The procedure determines the relaxation time distribution (RTD) and two frequency-independent parameters that modulate the induced polarization spectra. The distribution of relaxation times quantifies the contribution of each distinct relaxation process, which can in turn be associated with specific polarization processes and characterized in terms of electrochemical and interfacial parameters as derived from mechanistic models. Synthetic tests show that the procedure can successfully fit spectral induced polarization (SIP) data and accurately recover the RTD. The procedure was applied to different data sets from environmental applications, focusing on sand-clay mixtures artificially contaminated with toluene and on crude oil-contaminated sands undergoing biodegradation. The results identify characteristic relaxation times that can be associated with distinct polarization processes resulting from either the contaminant itself or transformations associated with biodegradation. The inversion results provide information regarding the relative strength and dominant relaxation time of these polarization processes.
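
    A compact sketch of a Debye decomposition in the spirit described above: the complex conductivity spectrum is modelled as a superposition of Debye relaxations on a fixed log-spaced grid of relaxation times, and the non-negative chargeabilities (the RTD) are recovered by non-negative least squares. The conductivity form, the synthetic two-peak spectrum, the grid, and the regularization-free fit are simplifying assumptions, not the authors' exact procedure:

      import numpy as np
      from scipy.optimize import nnls

      # synthetic SIP spectrum: two Debye relaxations (ground truth)
      f = np.logspace(-2, 3, 60)                 # Hz
      w = 2 * np.pi * f
      sigma0 = 0.01                              # S/m, DC conductivity (assumed)

      def debye(w, tau):
          return 1j * w * tau / (1.0 + 1j * w * tau)

      sig = sigma0 * (1 + 0.05 * debye(w, 1.0) + 0.03 * debye(w, 0.01))

      # log-spaced relaxation-time grid and design matrix for NNLS
      taus = np.logspace(-4, 2, 40)
      A = np.array([debye(w, t) for t in taus]).T    # (n_freq, n_tau), complex
      A_ri = np.vstack([A.real, A.imag])             # stack real and imaginary parts
      b_ri = np.concatenate([(sig - sigma0).real, (sig - sigma0).imag])

      coef, _ = nnls(A_ri, b_ri)                     # coef_k = sigma0 * m_k
      rtd = coef / sigma0                            # chargeability per tau bin
      print("recovered peaks near tau =", taus[rtd > 0.5 * rtd.max()])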

  18. A new model for impregnation mechanisms in different GF/PP commingled yarns

    NASA Astrophysics Data System (ADS)

    Klinkmüller, V.; Um, M.-K.; Steffens, M.; Friedrich, K.; Kim, B.-S.

    1994-09-01

    Impregnation mechanisms of different kinds of GF/PP commingled yarns have been studied. As the reinforcing fibres were always the same, a global description has been worked out. Two different mathematical approaches for fibre bed permeability (Kozeny-Carman and Gutowski) were compared. The constants of the applied mathematical models have to stay the same if the fibre reinforcement and the fibre arrangement are the same; neither the kind of matrix nor the fibre volume content may change these constants. Differences in the degree of impregnation after the same process conditions can only be due to different sizes of fibre agglomerations, and thus to the initial distribution of reinforcing fibres and matrix. For an exact determination of impregnation times and conditions, the exact distribution of fibres in the intermediate material and after processing has to be known. This distribution is determined by SEM microscopy and data given by the material supplier. The importance of different process parameters, such as temperature, pressure, and processing time, is weighed by determining the density and mechanical properties of the specimens.
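
    The two permeability expressions compared in the study can be written down directly. A sketch with assumed inputs: the fibre radius, the Kozeny constant k, and the Gutowski parameters k' and Va must be fitted to the actual material, so the values below are placeholders:

      import numpy as np

      r_f = 8.5e-6          # glass fibre radius in m (assumed)
      k_kc = 5.0            # Kozeny constant, transverse flow (placeholder)
      k_g, Va = 0.2, 0.83   # Gutowski constant and max packing fraction (placeholders)

      def kozeny_carman(vf):
          # K = r^2 (1 - Vf)^3 / (4 k Vf^2)
          return r_f**2 * (1 - vf)**3 / (4 * k_kc * vf**2)

      def gutowski(vf):
          # K = r^2/(4 k') * (sqrt(Va/Vf) - 1)^3 / (Va/Vf + 1)
          return r_f**2 / (4 * k_g) * (np.sqrt(Va / vf) - 1)**3 / (Va / vf + 1)

      for vf in (0.4, 0.5, 0.6, 0.7):
          print(f"Vf={vf:.1f}  K_KC={kozeny_carman(vf):.3e}  "
                f"K_G={gutowski(vf):.3e} m^2")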

  19. Integrable Floquet dynamics, generalized exclusion processes and "fused" matrix ansatz

    NASA Astrophysics Data System (ADS)

    Vanicat, Matthieu

    2018-04-01

    We present a general method for constructing integrable stochastic processes, with two-step discrete time Floquet dynamics, from the transfer matrix formalism. The models can be interpreted as a discrete time parallel update. The method can be applied for both periodic and open boundary conditions. We also show how the stationary distribution can be built as a matrix product state. As an illustration we construct parallel discrete time dynamics associated with the R-matrix of the SSEP and of the ASEP, and provide the associated stationary distributions in a matrix product form. We use this general framework to introduce new integrable generalized exclusion processes, where a fixed number of particles is allowed on each lattice site in opposition to the (single particle) exclusion process models. They are constructed using the fusion procedure of R-matrices (and K-matrices for open boundary conditions) for the SSEP and ASEP. We develop a new method, that we named "fused" matrix ansatz, to build explicitly the stationary distribution in a matrix product form. We use this algebraic structure to compute physical observables such as the correlation functions and the mean particle current.

  20. Insight into nuclear body formation of phytochromes through stochastic modelling and experiment.

    PubMed

    Grima, Ramon; Sonntag, Sebastian; Venezia, Filippo; Kircher, Stefan; Smith, Robert W; Fleck, Christian

    2018-05-01

    Spatial relocalization of proteins is crucial for the correct functioning of living cells. An interesting example of spatial ordering is the light-induced clustering of plant photoreceptor proteins. Upon irradiation by white or red light, the red light-active phytochrome, phytochrome B, enters the nucleus and accumulates in large nuclear bodies. The underlying physical process of nuclear body formation remains unclear, but phytochrome B is thought to coagulate via a simple protein-protein binding process. We measure, for the first time, the distribution of the number of phytochrome B-containing nuclear bodies as well as their volume distribution. We show that the experimental data cannot be explained by a stochastic model of nuclear body formation via simple protein-protein binding processes using physically meaningful parameter values. Rather modelling suggests that the data is consistent with a two step process: a fast nucleation step leading to macroparticles followed by a subsequent slow step in which the macroparticles bind to form the nuclear body. An alternative explanation for the observed nuclear body distribution is that the phytochromes bind to a so far unknown molecular structure. We believe it is likely this result holds more generally for other nuclear body-forming plant photoreceptors and proteins. Creative Commons Attribution license.

  1. Investigating the Relative Contributions of Secondary Ice Formation Processes to Ice Crystal Number Concentrations Within Mixed-Phase Clouds

    NASA Astrophysics Data System (ADS)

    Sullivan, S.; Nenes, A.

    2015-12-01

    Measurements of the in-cloud ice nuclei concentration can be three or four orders of magnitude less than those of the in-cloud ice crystal number concentration. Different secondary formation processes, active after initial ice nucleation, have been proposed to explain this discrepancy, but their relative importance, and even the exact physics of each mechanism, are still unclear. We construct a simple bin microphysics model (2IM) including depositional growth, the Hallett-Mossop process, ice-ice collisions, and ice-ice aggregation, with temperature- and supersaturation-dependent efficiencies for each process. 2IM extends the time-lag collision model of Yano and Phillips to additional bins and incorporates the aspect ratio evolution of Jensen and Harrington. Model output and measured ice crystal size distributions are compared to answer three questions: (1) how important is ice-ice aggregation relative to ice-ice collision around -15°C, where the Hallett-Mossop process is no longer active; (2) what process efficiencies lead to the best reproduction of observed ice crystal size distributions; and (3) does ice crystal aspect ratio affect the dominant secondary formation process. The resulting parameterization is intended for eventual use in larger-scale mixed-phase cloud schemes.

  2. Using modelling to predict impacts of sea level rise and increased turbidity on seagrass distributions in estuarine embayments

    NASA Astrophysics Data System (ADS)

    Davis, Tom R.; Harasti, David; Smith, Stephen D. A.; Kelaher, Brendan P.

    2016-11-01

    Climate change induced sea level rise will affect shallow estuarine habitats, which are already under threat from multiple anthropogenic stressors. Here, we present the results of modelling to predict potential impacts of climate change associated processes on seagrass distributions. We use a novel application of relative environmental suitability (RES) modelling to examine relationships between variables of physiological importance to seagrasses (light availability, wave exposure, and current flow) and seagrass distributions within 5 estuarine embayments. Models were constructed separately for Posidonia australis and Zostera muelleri subsp. capricorni using seagrass data from Port Stephens estuary, New South Wales, Australia. Subsequent testing of models used independent datasets from four other estuarine embayments (Wallis Lake, Lake Illawarra, Merimbula Lake, and Pambula Lake) distributed along 570 km of the east Australian coast. Relative environmental suitability models provided adequate predictions for seagrass distributions within Port Stephens and the other estuarine embayments, indicating that they may have broad regional application. Under the predictions of RES models, both sea level rise and increased turbidity are predicted to cause substantial seagrass losses in deeper estuarine areas, resulting in a net shoreward movement of seagrass beds. Seagrass species distribution models developed in this study provide a valuable tool to predict future shifts in estuarine seagrass distributions, allowing identification of areas for protection, monitoring and rehabilitation.

  3. A Knowledge Generation Model via the Hypernetwork

    PubMed Central

    Liu, Jian-Guo; Yang, Guang-Yong; Hu, Zhao-Long

    2014-01-01

    The influence of the statistical properties of the network on the knowledge diffusion has been extensively studied. However, the structure evolution and the knowledge generation processes are always integrated simultaneously. By introducing the Cobb-Douglas production function and treating the knowledge growth as a cooperative production of knowledge, in this paper, we present two knowledge-generation dynamic evolving models based on different evolving mechanisms. The first model, named “HDPH model,” adopts the hyperedge growth and the hyperdegree preferential attachment mechanisms. The second model, named “KSPH model,” adopts the hyperedge growth and the knowledge stock preferential attachment mechanisms. We investigate the effect of the parameters (α, β) on the total knowledge stock of the two models. The hyperdegree distribution of the HDPH model can be theoretically analyzed by the mean-field theory. The analytic result indicates that the hyperdegree distribution of the HDPH model obeys the power-law distribution and the exponent is γ = 2 + 1/m. Furthermore, we present the distributions of the knowledge stock for different parameters (α, β). The findings indicate that our proposed models could be helpful for deeply understanding the scientific research cooperation. PMID:24626143

  4. A knowledge generation model via the hypernetwork.

    PubMed

    Liu, Jian-Guo; Yang, Guang-Yong; Hu, Zhao-Long

    2014-01-01

    The influence of the statistical properties of the network on the knowledge diffusion has been extensively studied. However, the structure evolution and the knowledge generation processes are always integrated simultaneously. By introducing the Cobb-Douglas production function and treating the knowledge growth as a cooperative production of knowledge, in this paper, we present two knowledge-generation dynamic evolving models based on different evolving mechanisms. The first model, named "HDPH model," adopts the hyperedge growth and the hyperdegree preferential attachment mechanisms. The second model, named "KSPH model," adopts the hyperedge growth and the knowledge stock preferential attachment mechanisms. We investigate the effect of the parameters (α,β) on the total knowledge stock of the two models. The hyperdegree distribution of the HDPH model can be theoretically analyzed by the mean-field theory. The analytic result indicates that the hyperdegree distribution of the HDPH model obeys the power-law distribution and the exponent is γ = 2 + 1/m. Furthermore, we present the distributions of the knowledge stock for different parameters (α,β). The findings indicate that our proposed models could be helpful for deeply understanding the scientific research cooperation.
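
    A toy simulation of the hyperdegree preferential attachment mechanism, checking the stated power-law exponent γ = 2 + 1/m. The growth details below (one new node and one new hyperedge of m + 1 nodes per step) are a plain reading of the mechanism, not necessarily the authors' exact algorithm, and the tail-exponent estimate is deliberately crude:

      import numpy as np

      rng = np.random.default_rng(4)
      m, steps = 3, 5000           # existing nodes per new hyperedge; growth steps
      hyperdeg = [1] * (m + 1)     # seed hyperedge joining m + 1 initial nodes

      for _ in range(steps):
          # choose m existing nodes with probability proportional to hyperdegree
          p = np.asarray(hyperdeg, dtype=float)
          chosen = rng.choice(len(hyperdeg), size=m, replace=False, p=p / p.sum())
          for c in chosen:
              hyperdeg[c] += 1
          hyperdeg.append(1)       # the new node joins the new hyperedge

      # crude tail-exponent estimate via a log-log regression of the CCDF
      d = np.sort(np.asarray(hyperdeg))
      ccdf = 1.0 - np.arange(len(d)) / len(d)
      mask = d > 3
      slope = np.polyfit(np.log(d[mask]), np.log(ccdf[mask] + 1e-12), 1)[0]
      print("estimated exponent:", 1 - slope, "  theory:", 2 + 1 / m)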

  5. Hydrological and water quality processes simulation by the integrated MOHID model

    NASA Astrophysics Data System (ADS)

    Epelde, Ane; Antiguedad, Iñaki; Brito, David; Eduardo, Jauch; Neves, Ramiro; Sauvage, Sabine; Sánchez-Pérez, José Miguel

    2016-04-01

    Different modelling approaches have been used in recent decades to study the water quality degradation caused by non-point source pollution. In this study, the MOHID fully distributed and physics-based model has been employed to simulate hydrological processes and nitrogen dynamics in a nitrate vulnerable zone: the Alegria River watershed (Basque Country, Northern Spain). The results of this study indicate that the MOHID code is suitable for hydrological processes simulation at the watershed scale, as the model shows satisfactory performance at simulating the discharge (with NSE: 0.74 and 0.76 during calibration and validation periods, respectively). The agronomical component of the code allowed the simulation of agricultural practices, which led to adequate crop yield simulation in the model. Furthermore, the nitrogen exportation also shows satisfactory performance (with NSE: 0.64 and 0.69 during calibration and validation periods, respectively). While the lack of field measurements does not allow an in-depth evaluation of the nutrient cycling processes, it has been observed that the MOHID model simulates the annual denitrification according to general ranges established for agricultural watersheds (in this study, 9 kg N ha-1 year-1). In addition, the model has simulated coherently the spatial distribution of the denitrification process, which is directly linked to the simulated hydrological conditions. Thus, the model has localized the highest rates near the discharge zone of the aquifer and also where the aquifer thickness is low. These results demonstrate the strength of this model in simulating watershed-scale hydrological processes as well as crop production and the water quality degradation derived from agricultural activity (considering both nutrient exportation and nutrient cycling processes).

  6. Modeling Best Management Practices (BMPs) with HSPF

    EPA Science Inventory

    The Hydrological Simulation Program-Fortran (HSPF) is a semi-distributed watershed model, which simulates hydrology and water quality processes at user-specified spatial and temporal scales. Although HSPF is a comprehensive and highly flexible model, a number of investigators not...

  7. Chemistry-Transport Modeling of the Satellite Observed Distribution of Tropical Tropospheric Ozone

    NASA Technical Reports Server (NTRS)

    Peters, Wouter; Krol, Maarten; Dentener, Frank; Thompson, Anne M.; Lelieveld, Jos; Bhartia, P. K. (Technical Monitor)

    2002-01-01

    We have compared the 14-year record of satellite-derived tropical tropospheric ozone columns (TTOC) from the NIMBUS-7 Total Ozone Mapping Spectrometer (TOMS) to TTOC calculated by a chemistry-transport model (CTM). An objective measure of error, based on the zonal distribution of TTOC in the tropics, is applied to perform this comparison systematically. In addition, the sensitivity of the model to several key processes in the tropics is quantified to select directions for future improvements. The comparisons indicate a widespread, systematic (20%) discrepancy over the tropical Atlantic Ocean, which maximizes during austral spring. Although independent evidence from ozonesondes shows that some of the disagreement is due to a satellite overestimate of TTOC, the Atlantic mismatch is largely due to a misrepresentation of seasonally recurring processes in the model. Only minor differences between the model and observations over the Pacific occur, mostly due to interannual variability not captured by the model. Although chemical processes determine the TTOC extent, dynamical processes dominate the TTOC distribution, as the use of actual meteorology pertaining to the year of observations always leads to a better agreement with TTOC observations than using a random year or a climatology. The modeled TTOC is remarkably insensitive to many model parameters due to efficient feedbacks in the ozone budget. Nevertheless, the simulations would profit from an improved biomass burning calendar, as well as from an increase in NOx abundances in free-tropospheric biomass burning plumes. The model showed the largest response to lightning NOx emissions, but systematic improvements could not be found. The use of multi-year satellite-derived tropospheric data to systematically test and improve a CTM is a promising new addition to existing methods of model validation, and is a first step toward integrating tropospheric satellite observations into global ozone modeling studies. Conversely, the CTM may suggest improvements to evolving satellite retrievals for tropospheric ozone.

  8. Dynamical behavior of susceptible-infected-recovered-susceptible epidemic model on weighted networks

    NASA Astrophysics Data System (ADS)

    Wu, Qingchu; Zhang, Fei

    2018-02-01

    We study susceptible-infected-recovered-susceptible epidemic model in weighted, regular, and random complex networks. We institute a pairwise-type mathematical model with a general transmission rate to evaluate the influence of the link-weight distribution on the spreading process. Furthermore, we develop a dimensionality reduction approach to derive the condition for the contagion outbreak. Finally, we analyze the influence of the heterogeneity of weight distribution on the outbreak condition for the scenario with a linear transmission rate. Our theoretical analysis is in agreement with stochastic simulations, showing that the heterogeneity of link-weight distribution can have a significant effect on the epidemic dynamics.
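
    A minimal mean-field sketch (not the paper's pairwise model) of how link-weight heterogeneity enters: transmission across a link of weight w occurs at rate T(w), so the effective transmissibility is the average of T(w) over the weight distribution, and with a nonlinear T two weight distributions with the same mean give different thresholds. The rates, the saturating form of T, and the well-mixed SIRS endemic formula are all assumptions for illustration:

      import numpy as np

      rng = np.random.default_rng(5)
      k, gamma, xi = 6.0, 1.0, 0.5   # mean degree, recovery and waning rates (assumed)

      def T(w):
          # a saturating, nonlinear per-link transmission rate (an assumption)
          return 1.2 * w / (1.0 + w)

      # two link-weight distributions with the same mean weight of 1.0
      weights = {"homogeneous": np.ones(100000),
                 "heterogeneous": rng.exponential(1.0, 100000)}

      for name, w in weights.items():
          beta = k * T(w).mean()         # effective transmission rate
          R0 = beta / gamma
          # well-mixed SIRS endemic prevalence: I* = (1 - 1/R0) * xi / (xi + gamma)
          I_star = max(0.0, 1 - 1 / R0) * xi / (xi + gamma)
          print(f"{name:13s}: R0 = {R0:.2f}, endemic prevalence = {I_star:.3f}")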

  9. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.

  10. Distributed Peer-to-Peer Target Tracking in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Wang, Sheng; Bi, Dao-Wei; Ma, Jun-Jie

    2007-01-01

    Target tracking is usually a challenging application for wireless sensor networks (WSNs) because it is always computation-intensive and requires real-time processing. This paper proposes a practical target tracking system based on the auto regressive moving average (ARMA) model in a distributed peer-to-peer (P2P) signal processing framework. In the proposed framework, wireless sensor nodes act as peers that perform target detection, feature extraction, classification and tracking, whereas target localization requires the collaboration between wireless sensor nodes for improving the accuracy and robustness. For carrying out target tracking under the constraints imposed by the limited capabilities of the wireless sensor nodes, some practically feasible algorithms, such as the ARMA model and the 2-D integer lifting wavelet transform, are adopted in single wireless sensor nodes due to their outstanding performance and light computational burden. Furthermore, a progressive multi-view localization algorithm is proposed in distributed P2P signal processing framework considering the tradeoff between the accuracy and energy consumption. Finally, a real world target tracking experiment is illustrated. Results from experimental implementations have demonstrated that the proposed target tracking system based on a distributed P2P signal processing framework can make efficient use of scarce energy and communication resources and achieve target tracking successfully.

  11. Lognormal field size distributions as a consequence of economic truncation

    USGS Publications Warehouse

    Attanasi, E.D.; Drew, L.J.

    1985-01-01

    The assumption of lognormal (parent) field size distributions has for a long time been applied to resource appraisal and evaluation of exploration strategy by the petroleum industry. However, frequency distributions estimated with observed data and used to justify this hypothesis are conditional. Examination of various observed field size distributions across basins and over time shows that such distributions should be regarded as the end result of an economic filtering process. Commercial discoveries depend on oil and gas prices and field development costs. Some new fields are eliminated due to location, depths, or water depths. This filtering process is called economic truncation. Economic truncation may occur when predictions of a discovery process are passed through an economic appraisal model. We demonstrate that (1) economic resource appraisals, (2) forecasts of levels of petroleum industry activity, and (3) expected benefits of developing and implementing cost reducing technology are sensitive to assumptions made about the nature of that portion of the (parent) field size distribution subject to economic truncation. © 1985 Plenum Publishing Corporation.
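
    A quick numerical illustration of economic truncation under assumed parameters: sample a lognormal parent distribution of field sizes, keep only fields whose gross value at the prevailing price exceeds a fixed development cost, and compare the observed distribution with the parent one. All numbers are placeholders:

      import numpy as np

      rng = np.random.default_rng(6)
      sizes = rng.lognormal(mean=2.0, sigma=1.5, size=200000)   # parent sizes (MMbbl)

      price, fixed_cost = 20.0, 4000.0   # $/bbl and $MM per development (assumed)
      economic = sizes * price > fixed_cost                     # truncation filter

      print("parent mean size:        ", sizes.mean())
      print("observed (economic) mean:", sizes[economic].mean())
      print("fraction of fields observable:", economic.mean())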

  12. Sojourning with the Homogeneous Poisson Process.

    PubMed

    Liu, Piaomu; Peña, Edsel A

    2016-01-01

    In this pedagogical article, distributional properties, some surprising, pertaining to the homogeneous Poisson process (HPP), when observed over a possibly random window, are presented. Properties of the gap-time that covered the termination time and the correlations among gap-times of the observed events are obtained. Inference procedures, such as estimation and model validation, based on event occurrence data over the observation window, are also presented. We envision that through the results in this paper, a better appreciation of the subtleties involved in the modeling and analysis of recurrent events data will ensue, since the HPP is arguably one of the simplest among recurrent event models. In addition, the use of the theorem of total probability, Bayes theorem, the iterated rules of expectation, variance and covariance, and the renewal equation could be illustrative when teaching distribution theory, mathematical statistics, and stochastic processes at both the undergraduate and graduate levels. This article is targeted towards both instructors and students.
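
    One of the surprising properties alluded to here, the inspection paradox, is easy to verify numerically: for an HPP with rate lambda observed up to a termination time tau, the gap that covers tau has mean length close to 2/lambda rather than 1/lambda, because longer gaps are more likely to straddle any fixed time point. The rate and window below are arbitrary:

      import numpy as np

      rng = np.random.default_rng(7)
      lam, tau, reps = 1.0, 50.0, 5000   # event rate, termination time, replications

      covering = np.empty(reps)
      for r in range(reps):
          t, prev = 0.0, 0.0
          while t <= tau:                # simulate events until we pass tau
              prev = t
              t += rng.exponential(1.0 / lam)
          covering[r] = t - prev         # the gap that covers the termination time

      print("mean covering gap:", covering.mean())   # close to 2/lam, not 1/lam
      print("ordinary mean gap:", 1.0 / lam)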

  13. A flexible cure rate model for spatially correlated survival data based on generalized extreme value distribution and Gaussian process priors.

    PubMed

    Li, Dan; Wang, Xia; Dey, Dipak K

    2016-09-01

    Our present work proposes a new survival model in a Bayesian context to analyze right-censored survival data for populations with a surviving fraction, assuming that the log failure time follows a generalized extreme value distribution. Many applications require a more flexible modeling of covariate information than a simple linear or parametric form for all covariate effects. It is also necessary to include the spatial variation in the model, since it is sometimes unexplained by the covariates considered in the analysis. Therefore, the nonlinear covariate effects and the spatial effects are incorporated into the systematic component of our model. Gaussian processes (GPs) provide a natural framework for modeling potentially nonlinear relationship and have recently become extremely powerful in nonlinear regression. Our proposed model adopts a semiparametric Bayesian approach by imposing a GP prior on the nonlinear structure of continuous covariate. With the consideration of data availability and computational complexity, the conditionally autoregressive distribution is placed on the region-specific frailties to handle spatial correlation. The flexibility and gains of our proposed model are illustrated through analyses of simulated data examples as well as a dataset involving a colon cancer clinical trial from the state of Iowa. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Distributed snow modeling suitable for use with operational data for the American River watershed.

    NASA Astrophysics Data System (ADS)

    Shamir, E.; Georgakakos, K. P.

    2004-12-01

    The mountainous terrain of the American River watershed (~4300 km2) on the western slope of the northern Sierra Nevada is subject to significant variability in the atmospheric forcing that controls the snow accumulation and ablation processes (i.e., precipitation, surface temperature, and radiation). For a hydrologic model that attempts to predict both short- and long-term streamflow discharges, a plausible description of the seasonal and intermittent winter snowpack accumulation and ablation is crucial. At present the NWS-CNRFC operational snow model is implemented in a semi-distributed manner (modeling units of about 100-1000 km2) and therefore lumps distinct spatial variability of snow processes. In this study we attempt to account for the spatial variability of precipitation, temperature, and radiation by constructing a distributed snow accumulation and melting model suitable for use with commonly available sparse data. An adaptation of the NWS-Snow17 energy and mass balance model that is used operationally at the NWS River Forecast Centers is implemented at 1 km2 grid cells with distributed input and model parameters. The input to the model (i.e., precipitation and surface temperature) is interpolated from observed point data. The surface temperature was interpolated over the basin based on adiabatic lapse rates using topographic information, whereas the precipitation was interpolated based on maps of climatic mean annual rainfall distribution acquired from PRISM. The model parameters that control the melting rate due to radiation were interpolated based on aspect. The study was conducted for the entire American basin for the snow seasons of 1999-2000. Validation of the Snow Water Equivalent (SWE) prediction is done by comparison with observations from 12 snow sensors. The Snow Cover Area (SCA) prediction was evaluated by comparison with remotely sensed 500 m daily snow cover derived from MODIS. The results show that the distribution of snow over the area is well captured and that the estimated quantities compare well with the snow gauge observations at high elevations.
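
    A sketch of the two interpolation steps described, with assumed numbers: temperature is extended from a station to each grid cell with a fixed lapse rate, and gauge precipitation is scaled by the ratio of PRISM climatological means at the cell and at the gauge. Elevations, observations, and PRISM values below are placeholders:

      import numpy as np

      # grid-cell elevations (m) and PRISM mean-annual precipitation (mm), assumed
      z_cell = np.array([300.0, 900.0, 1500.0, 2100.0])
      prism_cell = np.array([800.0, 1100.0, 1400.0, 1600.0])

      # observing station: elevation, temperature, gauge precipitation (assumed)
      z_st, T_st, P_st, prism_st = 500.0, 4.0, 12.0, 900.0
      lapse = 6.5 / 1000.0                      # deg C per m, standard lapse rate

      T_cell = T_st - lapse * (z_cell - z_st)   # lapse-rate temperature transfer
      P_cell = P_st * prism_cell / prism_st     # PRISM-ratio precipitation scaling

      for z, T, P in zip(z_cell, T_cell, P_cell):
          print(f"z={z:6.0f} m  T={T:5.2f} C  P={P:5.1f} mm")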

  15. Shallow cumuli ensemble statistics for development of a stochastic parameterization

    NASA Astrophysics Data System (ADS)

    Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs

    2014-05-01

    According to a conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation (LES) model and a cloud tracking algorithm, followed by a conditional sampling of clouds at the cloud-base level, to retrieve the information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory. The stochastic model simulates a compound random process, with the number of convective elements drawn from a Poisson distribution and cloud properties sub-sampled from a generalized ensemble distribution. We study the role of the different cloud subtypes in a shallow convective ensemble and how the diverse cloud properties and cloud lifetimes affect the system macro-state. To what extent does the cloud-base mass flux distribution deviate from the simple Boltzmann distribution, and how does it affect the results from the stochastic model? Is the memory, provided by the finite lifetime of individual clouds, of importance for the ensemble statistics? We also test for the minimal information, given as input to the stochastic model, that is able to reproduce the ensemble mean statistics and the variability of a convective ensemble. An important property of the resulting distribution of the sub-grid convective states is its scale-adaptivity - the smaller the grid size, the broader the compound distribution of the sub-grid states.
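
    The compound random process at the core of such a stochastic scheme is easy to state: draw the number of clouds from a Poisson distribution whose mean scales with the grid-box area, draw each cloud-base mass flux from the ensemble distribution (here simply exponential rather than the paper's generalized form), and sum. The scale-adaptivity falls out because smaller boxes contain fewer clouds and hence show larger relative fluctuations. Parameter values are assumptions:

      import numpy as np

      rng = np.random.default_rng(8)
      mean_m = 0.01      # mean cloud-base mass flux per cloud (assumed units)
      density = 0.05     # mean number of clouds per km^2 (assumed)
      reps = 20000

      for box_km in (100.0, 25.0, 5.0):
          lam = density * box_km**2            # expected cloud count in the box
          N = rng.poisson(lam, size=reps)      # number of convective elements
          M = np.array([rng.exponential(mean_m, n).sum() for n in N])
          print(f"{box_km:5.1f} km box: <N> = {lam:7.1f}, "
                f"relative std of total mass flux = {M.std() / M.mean():.2f}")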

  16. Unifying practice schedules in the timescales of motor learning and performance.

    PubMed

    Verhoeven, F Martijn; Newell, Karl M

    2018-06-01

    In this article, we elaborate from a multiple time scales model of motor learning to examine the independent and integrated effects of massed and distributed practice schedules within- and between-sessions on the persistent (learning) and transient (warm-up, fatigue) processes of performance change. The timescales framework reveals the influence of practice distribution on four learning-related processes: the persistent processes of learning and forgetting, and the transient processes of warm-up decrement and fatigue. The superposition of the different processes of practice leads to a unified set of effects for massed and distributed practice within- and between-sessions in learning motor tasks. This analysis of the interaction between the duration of the interval of practice trials or sessions and parameters of the introduced time scale model captures the unified influence of the between trial and session scheduling of practice on learning and performance. It provides a starting point for new theoretically based hypotheses, and the scheduling of practice that minimizes the negative effects of warm-up decrement, fatigue and forgetting while exploiting the positive effects of learning and retention. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Visualization and prediction of supercritical CO 2 distribution in sandstones during drainage: An in situ synchrotron X-ray micro-computed tomography study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voltolini, Marco; Kwon, Tae-Hyuk; Ajo-Franklin, Jonathan

    Pore-scale distribution of supercritical CO2 (scCO2) exerts significant control on a variety of key hydrologic as well as geochemical processes, including residual trapping and dissolution. Despite such importance, only a small number of experiments have directly characterized the three-dimensional distribution of scCO2 in geologic materials during the invasion (drainage) process. Here, we present a study which couples dynamic high-resolution synchrotron X-ray micro-computed tomography imaging of a scCO2/brine system at in situ pressure/temperature conditions with quantitative pore-scale modeling to allow direct validation of a pore-scale description of scCO2 distribution. The experiment combines high-speed synchrotron radiography with tomography to characterize the brine-saturated sample, the scCO2 breakthrough process, and the partially saturated state of a sandstone sample from the Domengine Formation, a regionally extensive unit within the Sacramento Basin (California, USA). The availability of a 3D dataset allowed us to examine correlations between grain and pore morphometric parameters and the actual distribution of scCO2 in the sample, including an examination of the role of small-scale sedimentary structure on CO2 distribution. The segmented scCO2/brine volume was also used to validate a simple computational model based on the local thickness concept, able to accurately simulate the distribution of scCO2 after drainage. The same method was also used to simulate Hg capillary pressure curves with satisfactory results when compared to the measured ones. Finally, this predictive approach, requiring only a tomographic scan of the dry sample, proved to be an effective route for studying processes related to CO2 invasion structure in geological samples at the pore scale.

  18. Visualization and prediction of supercritical CO 2 distribution in sandstones during drainage: An in situ synchrotron X-ray micro-computed tomography study

    DOE PAGES

    Voltolini, Marco; Kwon, Tae-Hyuk; Ajo-Franklin, Jonathan

    2017-10-21

    Pore-scale distribution of supercritical CO2 (scCO2) exerts significant control on a variety of key hydrologic as well as geochemical processes, including residual trapping and dissolution. Despite such importance, only a small number of experiments have directly characterized the three-dimensional distribution of scCO2 in geologic materials during the invasion (drainage) process. Here, we present a study which couples dynamic high-resolution synchrotron X-ray micro-computed tomography imaging of a scCO2/brine system at in situ pressure/temperature conditions with quantitative pore-scale modeling to allow direct validation of a pore-scale description of scCO2 distribution. The experiment combines high-speed synchrotron radiography with tomography to characterize the brine-saturated sample, the scCO2 breakthrough process, and the partially saturated state of a sandstone sample from the Domengine Formation, a regionally extensive unit within the Sacramento Basin (California, USA). The availability of a 3D dataset allowed us to examine correlations between grain and pore morphometric parameters and the actual distribution of scCO2 in the sample, including an examination of the role of small-scale sedimentary structure on CO2 distribution. The segmented scCO2/brine volume was also used to validate a simple computational model based on the local thickness concept, able to accurately simulate the distribution of scCO2 after drainage. The same method was also used to simulate Hg capillary pressure curves with satisfactory results when compared to the measured ones. Finally, this predictive approach, requiring only a tomographic scan of the dry sample, proved to be an effective route for studying processes related to CO2 invasion structure in geological samples at the pore scale.
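
    A 2-D toy version of the local-thickness idea behind the drainage prediction: the Euclidean distance transform of the pore space gives each pore pixel's largest inscribed-sphere radius, and at a given capillary pressure the non-wetting scCO2 is allowed into regions whose local radius exceeds the Young-Laplace entry radius r_c = 2 gamma cos(theta) / Pc. The random geometry, pixel size, fluid constants, and the neglect of connectivity (no invasion-pathway check) are simplifying assumptions:

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(9)
      # toy 2-D medium: pores = True, grains = False, from smoothed random noise
      field = ndimage.gaussian_filter(rng.random((200, 200)), sigma=3)
      pores = field > np.percentile(field, 60)        # ~40% porosity, assumed

      # local thickness proxy: distance from each pore pixel to the nearest grain
      radius = ndimage.distance_transform_edt(pores)  # in pixel units

      gamma, theta = 0.032, np.deg2rad(20)   # scCO2/brine IFT (N/m), contact angle
      pixel = 2e-6                           # pixel size in m (assumed)
      for Pc in (5e3, 1e4, 2e4):             # capillary pressures in Pa
          r_c = 2 * gamma * np.cos(theta) / Pc   # Young-Laplace entry radius
          invaded = pores & (radius * pixel >= r_c)
          sat = invaded.sum() / pores.sum()
          print(f"Pc = {Pc:7.0f} Pa  r_c = {r_c * 1e6:5.2f} um  "
                f"scCO2 saturation = {sat:.2f}")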

  19. A Modeling Framework for Predicting the Size of Sediments Produced on Hillslopes and Supplied to Channels

    NASA Astrophysics Data System (ADS)

    Sklar, L. S.; Mahmoudi, M.

    2016-12-01

    Landscape evolution models rarely represent sediment size explicitly, despite the importance of sediment size in regulating rates of bedload sediment transport, river incision into bedrock, and many other processes in channels and on hillslopes. A key limitation has been the lack of a general model for predicting the size of sediments produced on hillslopes and supplied to channels. Here we present a framework for such a model, as a first step toward building a 'geomorphic transport law' that balances mechanistic realism with computational simplicity and is widely applicable across diverse landscapes. The goal is to take as inputs landscape-scale boundary conditions such as lithology, climate and tectonics, and predict the spatial variation in the size distribution of sediments supplied to channels across catchments. The model framework has two components. The first predicts the initial size distribution of particles produced by erosion of bedrock underlying hillslopes, while the second accounts for the effects of physical and chemical weathering during transport down slopes and delivery to channels. The initial size distribution can be related to the spacing and orientation of fractures within bedrock, which depend on the stresses and deformation experienced during exhumation and on rock resistance to fracture propagation. Other controls on initial size include the sizes of mineral grains in crystalline rocks, the sizes of cemented particles in clastic sedimentary rocks, and the potential for characteristic size distributions produced by tree throw, frost cracking, and other erosional processes. To model how weathering processes transform the initial size distribution we consider the effects of erosion rate and the thickness of soil and weathered bedrock on hillslope residence time. Residence time determines the extent of size reduction, for given values of model terms that represent the potential for chemical and physical weathering. Chemical weathering potential is parameterized in terms of mean annual precipitation and temperature, and the fraction of soluble minerals. Physical weathering potential can be parameterized in terms of topographic attributes, including slope, curvature and aspect. Finally, we compare model predictions with field data from Inyo Creek in the Sierra Nevada Mtns, USA.

  20. A 3-Component Mixture of Rayleigh Distributions: Properties and Estimation in Bayesian Framework

    PubMed Central

    Aslam, Muhammad; Tahir, Muhammad; Hussain, Zawar; Al-Zahrani, Bander

    2015-01-01

    To study lifetimes of certain engineering processes, a lifetime model which can accommodate the nature of such processes is desired. The mixture models of underlying lifetime distributions are intuitively more appropriate and appealing to model the heterogeneous nature of such processes as compared to simple models. This paper is about studying a 3-component mixture of Rayleigh distributions in a Bayesian perspective. The censored sampling environment is considered due to its popularity in reliability theory and survival analysis. The expressions for the Bayes estimators and their posterior risks are derived under different scenarios. In the case that no or little prior information is available, elicitation of hyperparameters is given. To examine, numerically, the performance of the Bayes estimators using non-informative and informative priors under different loss functions, we have simulated their statistical properties for different sample sizes and test termination times. In addition, to highlight the practical significance, an illustrative example based on a real-life engineering data is also given. PMID:25993475
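
    A short sketch of the data-generating side of such a model under assumed parameters: lifetimes are drawn from a 3-component Rayleigh mixture and then right-censored at a fixed test termination time, the censored sampling environment the paper considers. Weights, scales, and the termination time are placeholders:

      import numpy as np

      rng = np.random.default_rng(10)
      weights = np.array([0.5, 0.3, 0.2])   # mixing proportions (assumed)
      scales = np.array([1.0, 2.5, 5.0])    # Rayleigh scale of each component
      n, t_end = 500, 4.0                   # sample size, test termination time

      comp = rng.choice(3, size=n, p=weights)   # latent component labels
      life = rng.rayleigh(scales[comp])         # complete lifetimes
      observed = np.minimum(life, t_end)        # right-censored observations
      censored = life > t_end

      print("censoring fraction:", censored.mean())
      print("mean observed lifetime:", observed.mean())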
