Bogen, K T
2007-01-30
As reflected in the 2005 USEPA Guidelines for Cancer Risk Assessment, some chemical carcinogens may have a site-specific mode of action (MOA) that is dual, involving mutation in addition to cell-killing induced hyperplasia. Although genotoxicity may contribute to increased risk at all doses, the Guidelines imply that for dual MOA (DMOA) carcinogens, judgment be used to compare and assess results obtained using separate ''linear'' (genotoxic) vs. ''nonlinear'' (nongenotoxic) approaches to low-level risk extrapolation. However, the Guidelines allow the latter approach to be used only when evidence is sufficient to parameterize a biologically based model that reliably extrapolates risk to low levels of concern. The Guidelines thus effectively prevent MOA uncertainty from being characterized and addressed when data are insufficient to parameterize such a model, but otherwise clearly support a DMOA. A bounding factor approach--similar to that used in reference dose procedures for classic toxicity endpoints--can address MOA uncertainty in a way that avoids explicit modeling of low-dose risk as a function of administered or internal dose. Even when a ''nonlinear'' toxicokinetic model cannot be fully validated, implications of DMOA uncertainty on low-dose risk may be bounded with reasonable confidence when target tumor types happen to be extremely rare. This concept was illustrated for the rodent carcinogen naphthalene. Bioassay data, supplemental toxicokinetic data, and related physiologically based pharmacokinetic and 2-stage stochastic carcinogenesis modeling results all clearly indicate that naphthalene is a DMOA carcinogen. Plausibility bounds on rat-tumor-type specific DMOA-related uncertainty were obtained using a 2-stage model adapted to reflect the empirical link between genotoxic and cytotoxic effects of the most potent identified genotoxic naphthalene metabolites, 1,2- and 1,4-naphthoquinone. Resulting bounds each provided the basis for a corresponding
Bogen, K T
2007-05-11
A relatively simple, quantitative approach is proposed to address a specific, important gap in the approach recommended by the USEPA Guidelines for Cancer Risk Assessment to address uncertainty in carcinogenic mode of action of certain chemicals when risk is extrapolated from bioassay data. These Guidelines recognize that some chemical carcinogens may have a site-specific mode of action (MOA) that is dual, involving mutation in addition to cell-killing induced hyperplasia. Although genotoxicity may contribute to increased risk at all doses, the Guidelines imply that for dual MOA (DMOA) carcinogens, judgment be used to compare and assess results obtained using separate 'linear' (genotoxic) vs. 'nonlinear' (nongenotoxic) approaches to low-level risk extrapolation. However, the Guidelines allow the latter approach to be used only when evidence is sufficient to parameterize a biologically based model that reliably extrapolates risk to low levels of concern. The Guidelines thus effectively prevent MOA uncertainty from being characterized and addressed when data are insufficient to parameterize such a model, but otherwise clearly support a DMOA. A bounding factor approach - similar to that used in reference dose procedures for classic toxicity endpoints - can address MOA uncertainty in a way that avoids explicit modeling of low-dose risk as a function of administered or internal dose. Even when a 'nonlinear' toxicokinetic model cannot be fully validated, implications of DMOA uncertainty on low-dose risk may be bounded with reasonable confidence when target tumor types happen to be extremely rare. This concept was illustrated for a likely DMOA rodent carcinogen naphthalene, specifically to the issue of risk extrapolation from bioassay data on naphthalene-induced nasal tumors in rats. Bioassay data, supplemental toxicokinetic data, and related physiologically based pharmacokinetic and 2-stage
Stochastic dynamics of cancer initiation
NASA Astrophysics Data System (ADS)
Foo, Jasmine; Leder, Kevin; Michor, Franziska
2011-02-01
Most human cancer types result from the accumulation of multiple genetic and epigenetic alterations in a single cell. Once the first change (or changes) has arisen, tumorigenesis is initiated and the subsequent emergence of additional alterations drives progression to more aggressive and ultimately invasive phenotypes. Elucidation of the dynamics of cancer initiation is of importance for an understanding of tumor evolution and cancer incidence data. In this paper, we develop a novel mathematical framework to study the processes of cancer initiation. Cells at risk of accumulating oncogenic mutations are organized into small compartments of cells and proliferate according to a stochastic process. During each cell division, an (epi)genetic alteration may arise which leads to a random fitness change, drawn from a probability distribution. Cancer is initiated when a cell gains a fitness sufficiently high to escape from the homeostatic mechanisms of the cell compartment. To investigate cancer initiation during a human lifetime, a 'race' between this fitness process and the aging process of the patient is considered; the latter is modeled as a second stochastic Markov process in an aging dimension. This model allows us to investigate the dynamics of cancer initiation and its dependence on the mutational fitness distribution. Our framework also provides a methodology to assess the effects of different life expectancy distributions on lifetime cancer incidence. We apply this methodology to colorectal tumorigenesis while considering life expectancy data of the US population to inform the dynamics of the aging process. We study how the probability of cancer initiation prior to death, the time until cancer initiation, and the mutational profile of the cancer-initiating cell depend on the shape of the mutational fitness distribution and life expectancy of the population.
Stochastic elimination of cancer cells.
Michor, Franziska; Nowak, Martin A; Frank, Steven A; Iwasa, Yoh
2003-01-01
Tissues of multicellular organisms consist of stem cells and differentiated cells. Stem cells divide to produce new stem cells or differentiated cells. Differentiated cells divide to produce new differentiated cells. We show that such a tissue design can reduce the rate of fixation of mutations that increase the net proliferation rate of cells. It has, however, no consequence for the rate of fixation of neutral mutations. We calculate the optimum relative abundance of stem cells that minimizes the rate of generating cancer cells. There is a critical fraction of stem cell divisions that is required for a stochastic elimination ('wash out') of cancer cells. PMID:14561289
A stochastic model for immunotherapy of cancer
Baar, Martina; Coquille, Loren; Mayer, Hannah; Hölzel, Michael; Rogava, Meri; Tüting, Thomas; Bovier, Anton
2016-01-01
We propose an extension of a standard stochastic individual-based model in population dynamics which broadens the range of biological applications. Our primary motivation is modelling of immunotherapy of malignant tumours. In this context the different actors, T-cells, cytokines or cancer cells, are modelled as single particles (individuals) in the stochastic system. The main expansions of the model are distinguishing cancer cells by phenotype and genotype, including environment-dependent phenotypic plasticity that does not affect the genotype, taking into account the effects of therapy and introducing a competition term which lowers the reproduction rate of an individual in addition to the usual term that increases its death rate. We illustrate the new setup by using it to model various phenomena arising in immunotherapy. Our aim is twofold: on the one hand, we show that the interplay of genetic mutations and phenotypic switches on different timescales as well as the occurrence of metastability phenomena raise new mathematical challenges. On the other hand, we argue why understanding purely stochastic events (which cannot be obtained with deterministic models) may help to understand the resistance of tumours to therapeutic approaches and may have non-trivial consequences on tumour treatment protocols. This is supported through numerical simulations. PMID:27063839
Gompertzian stochastic model with delay effect to cervical cancer growth
NASA Astrophysics Data System (ADS)
Mazlan, Mazma Syahidatul Ayuni binti; Rosli, Norhayati binti; Bahar, Arifah
2015-02-01
In this paper, a Gompertzian stochastic model with time delay is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via the Levenberg-Marquardt method of non-linear least squares. We apply the Milstein scheme to solve the stochastic model numerically. The efficiency of the mathematical model is measured by comparing the simulated results with clinical data on cervical cancer growth. Low values of the Mean-Square Error (MSE) of the Gompertzian stochastic model with delay effect indicate good fits.
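The Milstein discretisation mentioned above can be sketched as follows for a delay-free Gompertzian SDE, dX = r X ln(K/X) dt + σ X dW. This is an illustrative sketch only: the delay term is omitted for brevity, and all parameter values are placeholders, not the authors' fitted estimates.

```python
import numpy as np

def milstein_gompertz(x0, r, K, sigma, T, n, seed=0):
    """Simulate dX = r*X*ln(K/X) dt + sigma*X dW with the Milstein scheme."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        a = r * x[i] * np.log(K / x[i])   # Gompertzian drift
        b = sigma * x[i]                  # multiplicative diffusion
        # Milstein correction: 0.5 * b * b' * (dW^2 - dt), with b'(x) = sigma
        x[i + 1] = x[i] + a * dt + b * dw + 0.5 * sigma * b * (dw**2 - dt)
    return x

# Illustrative run: growth from x0 = 0.1 toward the carrying capacity K = 1
path = milstein_gompertz(x0=0.1, r=0.5, K=1.0, sigma=0.05, T=20.0, n=2000)
```

For multiplicative noise such as σX dW, the Milstein correction term improves the strong convergence order over Euler–Maruyama from 0.5 to 1.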
Second Cancers After Fractionated Radiotherapy: Stochastic Population Dynamics Effects
NASA Technical Reports Server (NTRS)
Sachs, Rainer K.; Shuryak, Igor; Brenner, David; Fakir, Hatim; Hahnfeldt, Philip
2007-01-01
When ionizing radiation is used in cancer therapy it can induce second cancers in nearby organs. Mainly due to longer patient survival times, these second cancers have become of increasing concern. Estimating the risk of solid second cancers involves modeling: because of long latency times, available data is usually for older, obsolescent treatment regimens. Moreover, modeling second cancers gives unique insights into human carcinogenesis, since the therapy involves administering well characterized doses of a well studied carcinogen, followed by long-term monitoring. In addition to putative radiation initiation that produces pre-malignant cells, inactivation (i.e. cell killing), and subsequent cell repopulation by proliferation can be important at the doses relevant to second cancer situations. A recent initiation/inactivation/proliferation (IIP) model characterized quantitatively the observed occurrence of second breast and lung cancers, using a deterministic cell population dynamics approach. To analyze if radiation-initiated pre-malignant clones become extinct before full repopulation can occur, we here give a stochastic version of this IIP model. Combining Monte Carlo simulations with standard solutions for time-inhomogeneous birth-death equations, we show that repeated cycles of inactivation and repopulation, as occur during fractionated radiation therapy, can lead to distributions of pre-malignant cells per patient with variance >> mean, even when pre-malignant clones are Poisson-distributed. Thus fewer patients would be affected, but with a higher probability, than a deterministic model, tracking average pre-malignant cell numbers, would predict. Our results are applied to data on breast cancers after radiotherapy for Hodgkin disease. The stochastic IIP analysis, unlike the deterministic one, indicates: a) initiated, pre-malignant cells can have a growth advantage during repopulation, not just during the longer tumor latency period that follows; b) weekend
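The over-dispersion mechanism described above (variance >> mean emerging from repeated kill-and-regrow cycles) can be sketched with a toy Monte-Carlo. The Poisson initiation, binomial inactivation, and geometric clone-size regrowth below are simplifying assumptions for illustration, not the fitted IIP model of the paper.

```python
import numpy as np

def premalignant_counts(n_fractions=30, init_mean=1.0, surv_frac=0.5,
                        clone_mean=2.0, n_patients=5000, seed=1):
    """Toy kill/regrow cycles: Poisson initiation each fraction, binomial
    cell killing, then each survivor regrows a geometric-sized clone."""
    rng = np.random.default_rng(seed)
    cells = np.zeros(n_patients, dtype=np.int64)
    p_geom = 1.0 / clone_mean                       # geometric mean = 1/p
    for _ in range(n_fractions):
        cells += rng.poisson(init_mean, n_patients)  # initiation
        cells = rng.binomial(cells, surv_frac)       # inactivation
        cells = np.array([rng.geometric(p_geom, c).sum() if c else 0
                          for c in cells])           # stochastic repopulation
    return cells

counts = premalignant_counts()
```

Even though the per-fraction expectations balance (mean kill × mean regrowth = 1), the multiplicative noise compounds across cycles, so the per-patient distribution becomes strongly over-dispersed relative to Poisson.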
Kerns, Sarah L.; Stock, Richard; Stone, Nelson; Buckstein, Michael; Shao, Yongzhao; Campbell, Christopher; Rath, Lynda; De Ruysscher, Dirk; Lammering, Guido; Hixson, Rosetta; Cesaretti, Jamie; Terk, Mitchell; Ostrer, Harry; Rosenstein, Barry S.
2013-01-01
Purpose: To identify single nucleotide polymorphisms (SNPs) associated with development of erectile dysfunction (ED) among prostate cancer patients treated with radiation therapy. Methods and Materials: A 2-stage genome-wide association study was performed. Patients were split randomly into a stage I discovery cohort (132 cases, 103 controls) and a stage II replication cohort (128 cases, 102 controls). The discovery cohort was genotyped using Affymetrix 6.0 genome-wide arrays. The 940 top-ranking SNPs selected from the discovery cohort were genotyped in the replication cohort using Illumina iSelect custom SNP arrays. Results: Twelve SNPs identified in the discovery cohort and validated in the replication cohort were associated with development of ED following radiation therapy (Fisher combined P values 2.1 × 10^-5 to 6.2 × 10^-4). Notably, these 12 SNPs lie in or near genes involved in erectile function or other normal cellular functions (adhesion and signaling) rather than DNA damage repair. In a multivariable model including nongenetic risk factors, the odds ratios for these SNPs ranged from 1.6 to 5.6 in the pooled cohort. There was a striking relationship between the cumulative number of SNP risk alleles an individual possessed and ED status (Somers' D P value = 1.7 × 10^-29). A 1-allele increase in cumulative SNP score increased the odds for developing ED by a factor of 2.2 (P value = 2.1 × 10^-19). The cumulative SNP score model had a sensitivity of 84% and specificity of 75% for prediction of developing ED at the radiation therapy planning stage. Conclusions: This genome-wide association study identified a set of SNPs that are associated with development of ED following radiation therapy. These candidate genetic predictors warrant more definitive validation in an independent cohort.
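As a worked illustration of the cumulative-score idea, the reported per-allele odds ratio of 2.2 converts a risk-allele count into a predicted probability as below. This is not the study's fitted model: the baseline odds value is a made-up placeholder, chosen only to show the arithmetic.

```python
def ed_probability(n_risk_alleles, baseline_odds=0.25, or_per_allele=2.2):
    """Multiply baseline odds by OR^k for k risk alleles, then convert
    odds to a probability. baseline_odds is a hypothetical placeholder."""
    odds = baseline_odds * or_per_allele ** n_risk_alleles
    return odds / (1.0 + odds)
```

The key point the sketch makes concrete: each additional risk allele multiplies the odds (not the probability) by 2.2, so the predicted probability saturates toward 1 as alleles accumulate.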
Towards Predictive Stochastic Dynamical Modeling of Cancer Genesis and Progression
Ao, P.; Galas, D.; Hood, L.; Yin, L.; Zhu, X.M.
2011-01-01
Based on an innovative endogenous network hypothesis on cancer genesis and progression, we have been working towards a quantitative cancer theory from the systems biology perspective. Here we give a brief report on our progress and illustrate that, by combining ideas from evolutionary and molecular biology, mathematics, engineering, and physics, such a quantitative approach is feasible. PMID:20640781
Stochastic Effects in Computational Biology of Space Radiation Cancer Risk
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Pluth, Janis; Harper, Jane; O'Neill, Peter
2007-01-01
Estimating risk from space radiation poses important questions on the radiobiology of protons and heavy ions. We are considering systems biology models to study radiation induced repair foci (RIRF) at low doses, in which less than one track on average traverses the cell, and the subsequent DNA damage processing and signal transduction events. Computational approaches for describing protein regulatory networks coupled to DNA and oxidative damage sites include systems of differential equations, stochastic equations, and Monte-Carlo simulations. We review recent developments in the mathematical description of protein regulatory networks and possible approaches to radiation effects simulation. These include robustness, which states that regulatory networks maintain their functions against external and internal perturbations due to compensating properties of redundancy and molecular feedback controls, and modularity, which leads to general theorems for considering molecules that interact through a regulatory mechanism without exchange of matter, leading to a block diagonal reduction of the connecting pathways. Identifying rate-limiting steps, robustness, and modularity in pathways perturbed by radiation damage are shown to be valid techniques for reducing large molecular systems to realistic computer simulations. Other techniques studied are the use of steady-state analysis, and the introduction of composite molecules or rate-constants to represent small collections of reactants. Applications of these techniques to describe spatial and temporal distributions of RIRF and cell populations following low dose irradiation are described.
Moderate stem-cell telomere shortening rate postpones cancer onset in a stochastic model
NASA Astrophysics Data System (ADS)
Holbek, Simon; Bendtsen, Kristian Moss; Juul, Jeppe
2013-10-01
Mammalian cells are restricted from proliferating indefinitely. Telomeres at the end of each chromosome are shortened at cell division and when they reach a critical length, the cell will enter permanent cell cycle arrest—a state known as senescence. This mechanism is thought to be tumor suppressing, as it helps prevent precancerous cells from dividing uncontrollably. Stem cells express the enzyme telomerase, which elongates the telomeres, thereby postponing senescence. However, unlike germ cells and most types of cancer cells, stem cells only express telomerase at levels insufficient to fully maintain the length of their telomeres, leading to a slow decline in proliferation potential. It is not yet fully understood how this decline influences the risk of cancer and the longevity of the organism. We here develop a stochastic model to explore the role of telomere dynamics in relation to both senescence and cancer. The model describes the accumulation of cancerous mutations in a multicellular organism and creates a coherent theoretical framework for interpreting the results of several recent experiments on telomerase regulation. We demonstrate that the longest average cancer-free lifespan before cancer onset is obtained when stem cells start with relatively long telomeres that are shortened at a steady rate at cell division. Furthermore, the risk of cancer early in life can be reduced by having a short initial telomere length. Finally, our model suggests that evolution will favor a shorter than optimal average cancer-free lifespan in order to postpone cancer onset until late in life.
NASA Astrophysics Data System (ADS)
Warren, Patrick B.
2009-09-01
A recently proposed model for skin cell proliferation [E. Clayton et al., Nature (London) 446, 185 (2007)] is extended to incorporate mitotic autoregulation, and hence homeostasis as a fixed point of the dynamics. Unlimited cell proliferation in such a model can be viewed as a model for carcinogenesis. One way in which this can arise is homeostatic metastability, in which the cell populations escape from the homeostatic basin of attraction by a large but rare stochastic fluctuation. Such an event can be viewed as the final step in a multistage model of carcinogenesis. Homeostatic metastability offers a possible explanation for the peculiar epidemiology of lung cancer in ex-smokers.
NASA Astrophysics Data System (ADS)
Zamani Dahaj, Seyed Alireza; Kumar, Niraj; Sundaram, Bala; Celli, Jonathan; Kulkarni, Rahul
The phenotypic heterogeneity of cancer cells is critical to their survival under stress. A significant contribution to the heterogeneity of cancer cells derives from the epithelial-mesenchymal transition (EMT), a conserved cellular program that is crucial for embryonic development. Several studies have investigated the role of EMT in the growth of early stage tumors into invasive malignancies. Also, EMT has been closely associated with the acquisition of chemoresistance properties in cancer cells. Motivated by these studies, we analyze multi-phenotype stochastic models of the evolution of cancer cell populations under stress. We derive analytical results for time-dependent probability distributions that provide insights into the competing rates underlying phenotypic switching (e.g. during EMT) and the corresponding survival of cancer cells. Experimentally, we evaluate these model-based predictions by imaging human pancreatic cancer cell lines grown with and without cytotoxic agents and measure growth kinetics, survival, morphological changes and (terminal evaluation of) biomarkers with associated epithelial and mesenchymal phenotypes. The results derived suggest approaches for distinguishing between adaptation and selection scenarios for survival in the presence of external stresses.
Figueredo, Grazziela P; Siebers, Peer-Olaf; Owen, Markus R; Reps, Jenna; Aickelin, Uwe
2014-01-01
There is great potential to be explored regarding the use of agent-based modelling and simulation as an alternative paradigm to investigate early-stage cancer interactions with the immune system. It does not suffer from some limitations of ordinary differential equation models, such as the lack of stochasticity, representation of individual behaviours rather than aggregates and individual memory. In this paper we investigate the potential contribution of agent-based modelling and simulation when contrasted with stochastic versions of ODE models using early-stage cancer examples. We seek answers to the following questions: (1) Does this new stochastic formulation produce similar results to the agent-based version? (2) Can these methods be used interchangeably? (3) Do agent-based model outcomes reveal any benefit when compared to the Gillespie results? To answer these research questions we investigate three well-established mathematical models describing interactions between tumour cells and immune elements. These case studies were re-conceptualised under an agent-based perspective and also converted to the Gillespie algorithm formulation. Our interest in this work, therefore, is to establish a methodological discussion regarding the usability of different simulation approaches, rather than provide further biological insights into the investigated case studies. Our results show that it is possible to obtain equivalent models that implement the same mechanisms; however, the incapacity of the Gillespie algorithm to retain individual memory of past events affects the similarity of some results. Furthermore, the emergent behaviour of ABMS produces extra patterns of behaviour in the system, which were not obtained by the Gillespie algorithm. PMID:24752131
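A minimal Gillespie (stochastic simulation algorithm) sketch helps make the comparison above concrete. The linear birth-death process below is the simplest reaction system of the kind such tumour-immune models are converted to; the rates and initial population are illustrative, not from the three case studies.

```python
import numpy as np

def gillespie_birth_death(n0, b, d, t_max, seed=0):
    """Exact stochastic simulation (Gillespie) of a linear birth-death
    process: birth propensity b*n, death propensity d*n."""
    rng = np.random.default_rng(seed)
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < t_max and n > 0:
        rates = np.array([b * n, d * n])    # reaction propensities
        total = rates.sum()
        t += rng.exponential(1.0 / total)   # exponential waiting time
        if t > t_max:
            break
        if rng.random() < rates[0] / total:
            n += 1                          # birth event
        else:
            n -= 1                          # death event
        times.append(t)
        counts.append(n)
    return times, counts

times, counts = gillespie_birth_death(n0=10, b=1.0, d=0.5, t_max=5.0)
```

Note how the algorithm carries no per-individual state, only population counts; this is exactly the "no individual memory" limitation the abstract contrasts with agent-based simulation.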
Zhu, Peican; Aliabadi, Hamidreza Montazeri; Uludağ, Hasan; Han, Jie
2016-01-01
The investigation of vulnerable components in a signaling pathway can contribute to the development of drug therapy addressing aberrations in that pathway. Here, an original signaling pathway is derived from the published literature on breast cancer models. New stochastic logical models are then developed to analyze the vulnerability of the components in multiple signaling sub-pathways involved in this signaling cascade. The computational results are consistent with the experimental results, where the selected proteins were silenced using specific siRNAs and the viability of the cells was analyzed 72 hours after silencing. The genes eIF4E and NFkB are found to have nearly no effect on the relative cell viability, and the genes JAK2, Stat3, S6K, JUN, FOS, Myc, and Mcl1 are effective candidates to influence the relative cell growth. The vulnerabilities of some targets such as Myc and S6K are found to vary significantly depending on the weights of the sub-pathways, indicating that the chosen target may require customization for therapy. When these targets are utilized, the response of breast cancers from different patients will be highly variable because of the known heterogeneities in signaling pathways among the patients. The targets whose vulnerabilities are invariably high might be more universally acceptable targets. PMID:26988076
A stochastic Markov chain model to describe lung cancer growth and metastasis.
Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila A; Nieva, Jorge; Kuhn, Peter
2012-01-01
A stochastic Markov chain model for metastatic progression is developed for primary lung cancer based on a network construction of metastatic sites with dynamics modeled as an ensemble of random walkers on the network. We calculate a transition matrix, with entries (transition probabilities) interpreted as random variables, and use it to construct a circular bi-directional network of primary and metastatic locations based on postmortem tissue analysis of 3827 autopsies on untreated patients documenting all primary tumor locations and metastatic sites from this population. The resulting 50 potential metastatic sites are connected by directed edges with distributed weightings, where the site connections and weightings are obtained by calculating the entries of an ensemble of transition matrices so that the steady-state distribution obtained from the long-time limit of the Markov chain dynamical system corresponds to the ensemble metastatic distribution obtained from the autopsy data set. We condition our search for a transition matrix on an initial distribution of metastatic tumors obtained from the data set. Through an iterative numerical search procedure, we adjust the entries of a sequence of approximations until a transition matrix with the correct steady-state is found (up to a numerical threshold). Since this constrained linear optimization problem is underdetermined, we characterize the statistical variance of the ensemble of transition matrices calculated using the means and variances of their singular value distributions as a diagnostic tool. We interpret the ensemble averaged transition probabilities as (approximately) normally distributed random variables. The model allows us to simulate and quantify disease progression pathways and timescales of progression from the lung position to other sites and we highlight several key findings based on the model. PMID:22558094
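The steady-state condition that the iterative search targets, π = πP, can be checked by simple power iteration. The 4-site transition matrix below is an invented toy, not the paper's 50-site autopsy-derived network; it only illustrates the long-run site-occupancy computation.

```python
import numpy as np

# Toy 4-site transition matrix (rows sum to 1): a primary site plus three
# metastatic sites. Values are illustrative, not from the autopsy data.
P = np.array([
    [0.10, 0.50, 0.30, 0.10],
    [0.20, 0.20, 0.40, 0.20],
    [0.30, 0.30, 0.20, 0.20],
    [0.25, 0.25, 0.25, 0.25],
])

def steady_state(P, tol=1e-12, max_iter=10_000):
    """Long-run occupancy distribution of an ergodic Markov chain:
    iterate pi <- pi @ P until convergence."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt
    return pi

pi = steady_state(P)
```

The paper's harder, underdetermined problem is the inverse one: finding an ensemble of matrices P whose steady state matches an observed metastatic distribution; the forward check above is the inner loop of any such search.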
Hermann, Philipp; Mrkvička, Tomáš; Mattfeldt, Torsten; Minárová, Mária; Helisová, Kateřina; Nicolis, Orietta; Wartner, Fabian; Stehlík, Milan
2015-08-15
Fractals are models of natural processes with many applications in medicine. Recent studies in medicine show that fractals can be applied for cancer detection and the description of the pathological architecture of tumors. This fact is not surprising, as due to their irregular structure, cancerous cells can be interpreted as fractals. Inspired by the Sierpinski carpet, we introduce a flexible parametric model of random carpets. Randomization is introduced through binomial random variables. We provide an algorithm for estimation of the parameters of the model and illustrate theoretical and practical issues in the generation of Sierpinski gaskets and Hausdorff measure calculations. Stochastic geometry models can also serve as models for binary cancer images. Recently, a Boolean model was applied to 200 images of mammary cancer tissue and 200 images of mastopathic tissue. Here, we describe the Quermass-interaction process, which can handle many more variations in the cancer data, and we apply it to the images. It was found that mastopathic tissue deviates significantly more strongly from the Quermass-interaction process, which describes interactions among particles, than mammary cancer tissue does. The Quermass-interaction process serves as a model describing tissue whose structure is broken to a certain level. However, the random fractal model fits well for mastopathic tissue. We provide a novel method for discriminating between mastopathic and mammary cancer tissue on the basis of a complex wavelet-based self-similarity measure, with classification rates of more than 80%. Such a similarity measure relates to the Hurst exponent and fractional Brownian motions. The R package FractalParameterEstimation is developed and introduced in the paper. PMID:25847279
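One way to sketch the binomial randomization of a Sierpinski carpet: at each refinement level, keep each filled cell of the eight non-centre subsquares independently with probability p. This construction, the retention probability, and the depth are illustrative assumptions, not the parametric model of the paper; with p = 1 the classic deterministic carpet is recovered.

```python
import numpy as np

def random_carpet(levels, p, seed=0):
    """Randomized Sierpinski carpet as a boolean grid of side 3**levels.
    The centre ninth is always removed; surviving cells of the other
    eight copies are kept independently with probability p."""
    rng = np.random.default_rng(seed)
    grid = np.ones((1, 1), dtype=bool)
    for _ in range(levels):
        n = grid.shape[0]
        new = np.zeros((3 * n, 3 * n), dtype=bool)
        for i in range(3):
            for j in range(3):
                if i == 1 and j == 1:
                    continue  # centre square is always removed
                keep = rng.random(grid.shape) < p  # Bernoulli retention
                new[i * n:(i + 1) * n, j * n:(j + 1) * n] = grid & keep
        grid = new
    return grid

carpet = random_carpet(levels=3, p=0.8, seed=42)
```

Such boolean grids can stand in for the binary cancer images mentioned in the abstract when prototyping estimation algorithms.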
Mattfeldt, T; Gottfried, H; Schmidt, V; Kestler, H A
2000-05-01
Stereology and stochastic geometry can be used as auxiliary tools for diagnostic purposes in tumour pathology. Whether first-order parameters or stochastic-geometric functions are more important for the classification of the texture of biological tissues is not known. In the present study, volume and surface area per unit reference volume, the pair correlation function and the centred quadratic contact density function of epithelium were estimated in three case series of benign and malignant lesions of glandular tissues. The information provided by the latter functions was summarized by the total absolute areas between the estimated curves and their horizontal reference lines. These areas are considered as indicators of deviation of the tissue texture from a completely uncorrelated volume process and from the Boolean model with convex grains, respectively. We used both areas and the first-order parameters for the classification of cases using artificial neural networks (ANNs). Learning vector quantization and multilayer feedforward networks with backpropagation were applied as neural paradigms. Applications included distinction between mastopathy and mammary cancer (40 cases), between benign prostatic hyperplasia and prostatic cancer (70 cases) and between chronic pancreatitis and pancreatic cancer (60 cases). The same data sets were also classified with linear discriminant analysis. The stereological estimates in combination with ANNs or discriminant analysis provided high accuracy in the classification of individual cases. The question of which category of estimator is the most informative cannot be answered globally, but must be explored empirically for each specific data set. Using learning vector quantization, better results could often be obtained than by multilayer feedforward networks with backpropagation. PMID:10810010
Analysis of retinoblastoma age incidence data using a fully stochastic cancer model
Little, Mark P.; Kleinerman, Ruth A.; Stiller, Charles A.; Li, Guangquan; Kroll, Mary E.; Murphy, Michael F.G.
2011-01-01
Retinoblastoma (RB) is an important ocular malignancy of childhood. It has been commonly accepted for some time that knockout of the two alleles of the RB1 gene is the principal molecular target associated with the occurrence of RB. In this paper, we examine the validity of the two-hit theory for retinoblastoma by comparing the fits of stochastic models with two or more mutational stages. Unlike many such models, our model assumes a fully stochastic stem cell compartment, which is crucial to its behavior. Models are fitted to a population-based dataset comprising 1,553 cases of retinoblastoma for the period 1962–2000 in Great Britain (England, Scotland, Wales). The population incidence of retinoblastoma is best described by a fully stochastic model with two stages, although models with a deterministic stem cell compartment yield equivalent fit; models with three or more stages fit much less well. The results strongly suggest that knockout of the two alleles of the RB1 gene is necessary and may be largely sufficient for the development of retinoblastoma, in support of Knudson's two-hit hypothesis. PMID:21387305
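The qualitative shape of two-hit incidence can be sketched with a back-of-envelope Monte-Carlo. All parameter values below (per-allele knockout probability per division, cell and division counts, cohort size) are invented for illustration and are not the paper's estimates; the independence assumptions are also a deliberate simplification.

```python
import numpy as np

def two_hit_risk(n_cells=1_000_000, mu=1e-6, n_divisions=1000,
                 n_people=10_000, seed=0):
    """Knudson-style sketch: each RB1 allele is knocked out with
    probability mu per division; a tumour requires both hits in one cell.
    Returns the analytic per-person risk and a simulated cohort fraction."""
    rng = np.random.default_rng(seed)
    p_allele = 1.0 - (1.0 - mu) ** n_divisions   # one allele lost
    p_cell = p_allele ** 2                        # both alleles in one cell
    p_person = 1.0 - (1.0 - p_cell) ** n_cells    # any susceptible cell hit
    simulated = (rng.random(n_people) < p_person).mean()
    return p_person, simulated

p_person, simulated = two_hit_risk()
```

Because the per-cell probability is the square of the per-allele probability, halving the mutation rate quarters the per-cell risk, which is the signature of two-stage kinetics that model comparisons exploit.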
Stochastic Tunneling and Metastable States During the Somatic Evolution of Cancer
Ashcroft, Peter; Michor, Franziska; Galla, Tobias
2015-01-01
Tumors initiate when a population of proliferating cells accumulates a certain number and type of genetic and/or epigenetic alterations. The population dynamics of such sequential acquisition of (epi)genetic alterations has been the topic of much investigation. The phenomenon of stochastic tunneling, where an intermediate mutant in a sequence does not reach fixation in a population before generating a double mutant, has been studied using a variety of computational and mathematical methods. However, the field still lacks a comprehensive analytical description since theoretical predictions of fixation times are available only for cases in which the second mutant is advantageous. Here, we study stochastic tunneling in a Moran model. Analyzing the deterministic dynamics of large populations we systematically identify the parameter regimes captured by existing approaches. Our analysis also reveals fitness landscapes and mutation rates for which finite populations are found in long-lived metastable states. These are landscapes in which the final mutant is not the most advantageous in the sequence, and resulting metastable states are a consequence of a mutation–selection balance. The escape from these states is driven by intrinsic noise, and their location affects the probability of tunneling. Existing methods no longer apply. In these regimes it is the escape from the metastable states that is the key bottleneck; fixation is no longer limited by the emergence of a successful mutant lineage. We used the so-called Wentzel–Kramers–Brillouin method to compute fixation times in these parameter regimes, successfully validated by stochastic simulations. Our work fills a gap left by previous approaches and provides a more comprehensive description of the acquisition of multiple mutations in populations of somatic cells. PMID:25624316
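A bare-bones Moran simulation of the 0 → 1 → 2 sequence makes the tunneling event concrete: a run "tunnels" if the double mutant is born before the intermediate type fixes. The population size, mutation rates, and neutral intermediate fitness below are illustrative choices, not parameters from the paper.

```python
import numpy as np

def moran_tunnels(N=50, u1=0.02, u2=0.1, r1=1.0, seed=0):
    """One Moran run over types 0 -> 1 -> 2 (intermediate fitness r1).
    Returns True if a type-2 cell is born before type 1 fixes."""
    rng = np.random.default_rng(seed)
    n1 = 0                                       # intermediate (type-1) cells
    while True:
        w1 = n1 * r1                             # fitness-weighted reproduction
        if rng.random() < w1 / (w1 + (N - n1)):
            if rng.random() < u2:
                return True                      # double mutant: tunneling
            newborn = 1
        else:
            newborn = 1 if rng.random() < u1 else 0
        # a uniformly chosen cell dies and is replaced by the newborn
        n1 += newborn - (rng.random() < n1 / N)
        if n1 == N:
            return False                         # intermediate fixed first

runs = [moran_tunnels(seed=s) for s in range(200)]
tunnel_fraction = sum(runs) / len(runs)
```

With a high second mutation rate relative to 1/N, most runs produce the double mutant while the intermediate is still at low frequency, i.e. the tunneling regime the abstract describes.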
A stochastic model for tumor geometry evolution during radiation therapy in cervical cancer
Liu, Yifang; Lee, Chi-Guhn; Chan, Timothy C. Y.; Cho, Young-Bin; Islam, Mohammad K.
2014-02-15
Purpose: To develop mathematical models to predict the evolution of tumor geometry in cervical cancer undergoing radiation therapy. Methods: The authors develop two mathematical models to estimate tumor geometry change: a Markov model and an isomorphic shrinkage model. The Markov model describes tumor evolution by investigating the change in state (either tumor or nontumor) of voxels on the tumor surface. It assumes that the evolution follows a Markov process. Transition probabilities are obtained using maximum likelihood estimation and depend on the states of neighboring voxels. The isomorphic shrinkage model describes tumor shrinkage or growth in terms of layers of voxels on the tumor surface, instead of modeling individual voxels. The two proposed models were applied to data from 29 cervical cancer patients treated at Princess Margaret Cancer Centre and then compared to a constant volume approach. Model performance was measured using sensitivity and specificity. Results: The Markov model outperformed both the isomorphic shrinkage and constant volume models in terms of the trade-off between sensitivity (target coverage) and specificity (normal tissue sparing). Generally, the Markov model achieved an improvement of a few percentage points in either sensitivity or specificity compared to the other models. The isomorphic shrinkage model was comparable to the Markov approach under certain parameter settings. Convex tumor shapes were easier to predict. Conclusions: By modeling tumor geometry change at the voxel level using a probabilistic model, improvements in target coverage and normal tissue sparing are possible. Our Markov model is flexible and has tunable parameters to adjust model performance to meet a range of criteria. Such a model may support the development of an adaptive paradigm for radiation therapy of cervical cancer.
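The voxel-level Markov idea above, estimating transition probabilities by maximum likelihood conditioned on neighboring voxel states, can be sketched in a simplified form (Python; a 2-D grid with 4-neighbourhoods, whereas the paper works with 3-D tumor surfaces):

```python
def fit_transition_probs(before, after):
    """MLE of P(voxel stays tumor | k tumor neighbours) on a 2-D grid,
    a simplified 4-neighbour analogue of the paper's voxel-level Markov
    model. 'before' and 'after' are nested lists of 0/1 tumor states.
    The MLE is just the empirical stay fraction per neighbour count."""
    counts = [0] * 5
    stays = [0] * 5
    H, W = len(before), len(before[0])
    for i in range(H):
        for j in range(W):
            if before[i][j] != 1:
                continue
            k = sum(before[i + di][j + dj]
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < H and 0 <= j + dj < W)
            counts[k] += 1
            stays[k] += after[i][j]
    return [stays[k] / counts[k] if counts[k] else None for k in range(5)]
```

Fitted probabilities of this kind can then drive a forward simulation of the tumor surface from one treatment fraction to the next.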
Solan, Eilon; Vieille, Nicolas
2015-01-01
In 1953, Lloyd Shapley contributed his paper “Stochastic games” to PNAS. In this paper, he defined the model of stochastic games, which were the first general dynamic model of a game to be defined, and proved that it admits a stationary equilibrium. In this Perspective, we summarize the historical context and the impact of Shapley’s contribution. PMID:26556883
The National Center for Environmental Assessment (NCEA) has conducted and supported research addressing uncertainties in 2-stage clonal growth models for cancer as applied to formaldehyde. In this report, we summarized publications resulting from this research effort, discussed t...
Evaluation of 2-Stage Injection Technique in Children.
Sandeep, Valasingam; Kumar, Manikya; Jyostna, P; Duggi, Vijay
2016-01-01
Effective pain control during local anesthetic injection is the cornerstone of behavior guidance in pediatric dentistry. The aim of this study was to evaluate the practical efficacy of a 2-stage injection technique in reducing injection pain in children. This was a split-mouth, randomized controlled crossover trial. One hundred cooperative children aged 7 to 13 years in need of bilateral local anesthetic injections (inferior alveolar nerve block, posterior superior alveolar nerve block, or maxillary and mandibular buccal infiltrations) for restorative, endodontic, and extraction treatments were recruited for the study. Children were randomly allocated to receive either the 2-stage injection technique or conventional technique at the first appointment. The other technique was used at the successive visit after 1 week. Subjective and objective evaluation of pain was done using the Wong-Baker FACES Pain Rating Scale (FPS) and Sound Eye Motor (SEM) scale, respectively. The comparison of pain scores was done by Wilcoxon signed-rank test. Both FPS and SEM scores were significantly lower when the 2-stage injection technique of local anesthetic nerve block/infiltration was used compared with the conventional technique. The 2-stage injection technique is a simple and effective means of reducing injection pain in children. PMID:26866405
2–stage stochastic Runge–Kutta for stochastic delay differential equations
Rosli, Norhayati; Jusoh Awang, Rahimah; Bahar, Arifah; Yeak, S. H.
2015-05-15
This paper proposes a newly developed one-step derivative-free method, the 2-stage stochastic Runge-Kutta (SRK2) scheme, to approximate the solution of stochastic delay differential equations (SDDEs) with a constant time lag, r > 0. A general formulation of stochastic Runge-Kutta for SDDEs is introduced, and the Stratonovich-Taylor series expansion for the numerical solution of SRK2 is presented. The local truncation error of SRK2 is measured by comparing the Stratonovich-Taylor expansion of the exact solution with that of the computed solution. A numerical experiment is performed to confirm the validity of the method in simulating the strong solution of SDDEs.
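A hedged sketch of a Heun-type 2-stage stochastic Runge-Kutta step for a scalar SDDE with constant lag (Python; the drift, diffusion, and stage coefficients here are illustrative, not the paper's exact SRK2 tableau):

```python
import math
import random

def srk2_sdde(a=-2.0, b=0.1, r=0.5, T=2.0, n=400, hist=1.0, seed=3):
    """Heun-type 2-stage scheme for the scalar SDDE
        dX(t) = a*X(t - r) dt + b*X(t) dW(t)
    with constant history X(t) = hist for t <= 0. Illustrative scheme
    only; assumes r is an integer multiple of the step size h = T/n."""
    rng = random.Random(seed)
    h = T / n
    lag = round(r / h)                 # delay expressed in steps
    xs = [hist] * (lag + 1)            # xs[-1] is X(0); earlier entries: history
    for _ in range(n):
        x = xs[-1]
        xd = xs[-1 - lag]              # X(t - r)
        dW = rng.gauss(0.0, math.sqrt(h))
        # stage 1: Euler predictor
        f1, g1 = a * xd, b * x
        xp = x + f1 * h + g1 * dW
        # stage 2: corrector; the delayed argument advances by one step
        xd2 = xs[-lag] if lag > 0 else xp
        f2, g2 = a * xd2, b * xp
        xs.append(x + 0.5 * (f1 + f2) * h + 0.5 * (g1 + g2) * dW)
    return xs[-1]
```

With a*r = -1 the deterministic part of this example is a stable delay equation, so trajectories decay in an oscillatory fashion from the constant history.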
Kossenko, M M; Hoffman, D A; Thomas, T L
2000-07-01
The Mayak Industrial Association, located in the South Ural Mountains, began operation in 1948 and was the first Russian site for the production and separation of plutonium. During the early days of operation, technological failures resulted in the release of large amounts of radioactive waste into the Techa River. Residents who lived in villages on the banks of the Techa and Iset Rivers were exposed to varying levels of radioactivity. The objective of this study is to assess stochastic (carcinogenic) effects in populations exposed to offsite releases of radioactive materials from the Mayak nuclear facility in Russia. Subjects of the present study are those individuals who lived during the period January 1950 through December 1960 in any of the exposed villages along the Techa River in Chelyabinsk Oblast. Death certificates and cancer incidence data have been routinely collected in the past from a five-rayon catchment area of Chelyabinsk Oblast. The registry of exposed residents along the Techa River assembled and maintained by the Urals Research Center for Radiation Medicine for the past 40 y is the basis for identifying study subjects for this project. Specific study objectives are to evaluate the incidence of cancer among current and former residents of Chelyabinsk Oblast who are in the exposed Techa River cohort; integrate results from the dose-reconstruction study to estimate doses for risk assessment; and develop a structure for maintaining continued follow-up of the cohort for cancer incidence. In the earlier part of our collaborative effort, the focus has been to enhance the cancer morbidity registry by updating it with cancer cases diagnosed through 1997, to conduct a series of validation procedures to ensure completeness and accuracy of the registry, and to reduce the numbers of subjects lost to follow-up. A feasibility study to determine cancer morbidity in migrants from the catchment area has been proposed. Our preliminary analyses of cancer morbidity
Direct vs 2-stage approaches to structured motif finding
2012-01-01
Background The notion of a DNA motif is a mathematical abstraction used to model regions of the DNA (known as Transcription Factor Binding Sites, or TFBSs) that are bound by a given Transcription Factor to regulate gene expression or repression. In turn, DNA structured motifs are a mathematical counterpart that models sets of TFBSs that work in concert in the gene regulation processes of higher eukaryotic organisms. Typically, a structured motif is composed of an ordered set of isolated (or simple) motifs, separated by a variable, but somewhat constrained, number of "irrelevant" base-pairs. Discovering structured motifs in a set of DNA sequences is a computationally hard problem that has been addressed by a number of authors using either a direct approach, or via the preliminary identification and successive combination of simple motifs. Results We describe a computational tool, named SISMA, for the de-novo discovery of structured motifs in a set of DNA sequences. SISMA is an exact, enumerative algorithm, meaning that it finds all the motifs conforming to the specifications. It does so in two stages: first it discovers all the possible component simple motifs, then it combines them in a way that respects the given constraints. We developed SISMA mainly with the aim of understanding the potential benefits of such a 2-stage approach w.r.t. direct methods. In fact, no 2-stage software was available for the general problem of structured motif discovery, only a few tools that solved restricted versions of the problem. We evaluated SISMA against other published tools on a comprehensive benchmark made of both synthetic and real biological datasets. In a significant number of cases SISMA outperformed the competitors, and it exhibited good performance even in most of the cases in which it was inferior. Conclusions A reflection on the results obtained led us to conclude that a 2-stage approach can be implemented with many advantages over direct approaches. Some of these
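The 2-stage idea above, enumerating simple motifs first and then combining them under gap constraints, can be sketched for exact k-mers (Python; a toy enumeration in the spirit of a 2-stage approach, not SISMA's actual algorithm):

```python
from itertools import product

def simple_motifs(seqs, k):
    """Stage 1: exact k-mers that occur in every sequence."""
    common = {seqs[0][i:i + k] for i in range(len(seqs[0]) - k + 1)}
    for s in seqs[1:]:
        common &= {s[i:i + k] for i in range(len(s) - k + 1)}
    return common

def structured_motifs(seqs, k, gap_min, gap_max):
    """Stage 2: pairs (m1, m2) of stage-1 motifs such that every sequence
    contains m1 followed by m2 at a gap in [gap_min, gap_max]."""
    simples = simple_motifs(seqs, k)
    result = []
    for m1, m2 in product(simples, repeat=2):
        ok = True
        for s in seqs:
            found = False
            i = s.find(m1)
            while i != -1 and not found:
                for g in range(gap_min, gap_max + 1):
                    j = i + k + g
                    if s[j:j + k] == m2:
                        found = True
                        break
                i = s.find(m1, i + 1)
            if not found:
                ok = False
                break
        if ok:
            result.append((m1, m2))
    return result
```

The stage-1 filter shrinks the candidate set before the quadratic combination step, which is where a 2-stage method can gain over direct enumeration of full structured patterns.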
NASA Astrophysics Data System (ADS)
Eichhorn, Ralf; Aurell, Erik
2014-04-01
'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response
Bisognano, J.; Leemann, C.
1982-03-01
Stochastic cooling is the damping of betatron oscillations and momentum spread of a particle beam by a feedback system. In its simplest form, a pickup electrode detects the transverse positions or momenta of particles in a storage ring, and the signal produced is amplified and applied downstream to a kicker. The time delay of the cable and electronics is designed to match the transit time of particles along the arc of the storage ring between the pickup and kicker so that an individual particle receives the amplified version of the signal it produced at the pick-up. If there were only a single particle in the ring, it is obvious that betatron oscillations and momentum offset could be damped. However, in addition to its own signal, a particle receives signals from other beam particles. In the limit of an infinite number of particles, no damping could be achieved; we have Liouville's theorem with constant density of the phase space fluid. For a finite, albeit large number of particles, there remains a residue of the single particle damping which is of practical use in accumulating low phase space density beams of particles such as antiprotons. It was the realization of this fact that led to the invention of stochastic cooling by S. van der Meer in 1968. Since its conception, stochastic cooling has been the subject of much theoretical and experimental work. The earliest experiments were performed at the ISR in 1974, with the subsequent ICE studies firmly establishing the stochastic cooling technique. This work directly led to the design and construction of the Antiproton Accumulator at CERN and the beginnings of p anti p colliding beam physics at the SPS. Experiments in stochastic cooling have been performed at Fermilab in collaboration with LBL, and a design is currently under development for a anti p accumulator for the Tevatron.
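The finite-sample mechanism described above, where each particle's own signal provides damping while the rest of the sampled beam adds noise, can be caricatured in a few lines (Python; gain, sample size, and particle counts are invented for illustration):

```python
import math
import random

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def stochastic_cool(n=200, sample=10, gain=1.0, n_kicks=5000, seed=11):
    """Toy stochastic-cooling model: each 'turn' the pickup sees a random
    sample of particles and the kicker corrects every sampled particle by
    -gain * (sample mean offset). A particle damps through its own signal;
    the other sampled particles contribute noise, so the net cooling rate
    scales like 1/sample and vanishes for an infinite beam (Liouville)."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(n_kicks):
        idx = rng.sample(range(n), sample)
        m = sum(x[i] for i in idx) / sample
        for i in idx:
            x[i] -= gain * m
    return rms(x)
```

Per kick, the variance of a sampled particle shrinks by roughly a factor (1 - (2*gain - gain**2)/sample), so cooling works for 0 < gain < 2 and slows as the sample grows.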
2012-01-01
Background Reaction-diffusion based models have been widely used in the literature for modeling the growth of solid tumors. Many of the current models treat both diffusion/consumption of nutrients and cell proliferation. The majority of these models use classical transport/mass conservation equations for describing the distribution of molecular species in tumor spheroids, and Fick's law for describing the flux of uncharged molecules (e.g., oxygen, glucose). Commonly, the equations for cell movement and proliferation are first-order differential equations describing the rate of change of the velocity of the cells with respect to the spatial coordinates as a function of the nutrient gradient. Several modifications of these equations have been developed in the last decade to explicitly indicate that the tumor includes cells, interstitial fluids, and extracellular matrix: these variants model the tumor as a multiphase material with these as the different phases. Most of the current reaction-diffusion tumor models are deterministic and do not model the diffusion as a local state-dependent process in a non-homogeneous medium at the micro- and meso-scale of the intra- and inter-cellular processes, respectively. Furthermore, a stochastic reaction-diffusion model that treats both the diffusive transport of the molecular species of nutrients and chemotherapy drugs and the interactions of the tumor cells with these species is a novel approach. The application of this approach to the case of non-small cell lung cancer treated with gemcitabine is also novel. Methods We present a stochastic reaction-diffusion model of non-small cell lung cancer growth in the specification formalism of the tool Redi, which we recently developed for simulating reaction-diffusion systems. We also describe how a spatial gradient of nutrients and oncological drugs affects the tumor progression. Our model is based on a generalization of Fick's first diffusion law that allows one to model
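A minimal Gillespie-style stochastic simulation of diffusion plus consumption on a 1-D chain of compartments, as a toy stand-in for the stochastic reaction-diffusion approach described above (Python; the species, rates, and geometry are invented, and this is not the Redi formalism):

```python
import random

def ssa_diffusion_decay(n_boxes=10, n0=500, d=1.0, k=0.1,
                        t_end=5.0, seed=5):
    """Gillespie SSA for a nutrient that hops between adjacent boxes
    (rate d per molecule per available direction) and is consumed
    (rate k per molecule). All molecules start in box 0. Returns the
    molecule counts per box at time t_end."""
    rng = random.Random(seed)
    x = [0] * n_boxes
    x[0] = n0
    t = 0.0
    while True:
        n_tot = sum(x)
        if n_tot == 0:
            break
        a_cons = k * n_tot
        hop_w = [x[i] * ((i > 0) + (i < n_boxes - 1)) for i in range(n_boxes)]
        a0 = a_cons + d * sum(hop_w)
        t += rng.expovariate(a0)          # exponential waiting time
        if t > t_end:
            break
        if rng.random() * a0 < a_cons:
            # consumption: remove a uniformly chosen molecule
            i = rng.choices(range(n_boxes), weights=x)[0]
            x[i] -= 1
        else:
            # diffusion: a molecule hops to a random adjacent box
            i = rng.choices(range(n_boxes), weights=hop_w)[0]
            if i == 0:
                j = 1
            elif i == n_boxes - 1:
                j = n_boxes - 2
            else:
                j = i + rng.choice((-1, 1))
            x[i] -= 1
            x[j] += 1
    return x
```

With the consumption rate set to zero the total molecule count is conserved, which makes a convenient consistency check on the hop bookkeeping.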
The 2-stage liver transplant: 3 clinical scenarios.
Gedik, Ender; Bıçakçıoğlu, Murat; Otan, Emrah; İlksen Toprak, Hüseyin; Işık, Burak; Aydın, Cemalettin; Kayaalp, Cüneyt; Yılmaz, Sezai
2015-04-01
The main goal of 2-stage liver transplant is to provide time to obtain a new liver source. We describe our experience of 3 patients with 3 different clinical conditions. A 57-year-old man was retransplanted successfully with this technique due to hepatic artery thrombosis. However, a 38-year-old woman with fulminant toxic hepatitis and a 5-year-old boy with abdominal trauma had poor outcomes. This technique could serve as a rescue therapy for liver transplant patients who have toxic liver syndrome or abdominal trauma. These patients required intensive support during long anhepatic states. The transplant team should decide early whether to use this technique before irreversible conditions develop. PMID:25894175
Rood, A S; McGavran, P D; Aanenson, J W; Till, J E
2001-08-01
Carbon tetrachloride is a degreasing agent that was used at the Rocky Flats Plant (RFP) in Colorado to clean product components and equipment. The chemical is considered a volatile organic compound and a probable human carcinogen. During the time the plant operated (1953-1989), most of the carbon tetrachloride was released to the atmosphere through building exhaust ducts. A smaller amount was released to the air via evaporation from open-air burn pits and ground-surface discharge points. Airborne releases from the plant were conservatively estimated to be equivalent to the amount of carbon tetrachloride consumed annually by the plant, which was estimated to be between 3.6 and 180 Mg per year. This assumption was supported by calculations that showed that most of the carbon tetrachloride discharged to the ground surface would subsequently be released to the atmosphere. Atmospheric transport of carbon tetrachloride from the plant to the surrounding community was estimated using a Gaussian Puff dispersion model (RATCHET). Time-integrated concentrations were estimated for nine hypothetical but realistic exposure scenarios that considered variation in lifestyle, location, age, and gender. Uncertainty distributions were developed for cancer slope factors and atmospheric dispersion factors. These uncertainties were propagated through to the final risk estimate using Monte Carlo techniques. The geometric mean risk estimates varied from 5.2 x 10(-6) for a hypothetical rancher or laborer working near the RFP to 3.4 x 10(-9) for an infant scenario. The distribution of incremental lifetime cancer incidence risk for the hypothetical rancher was between 1.3 x 10(-6) (5% value) and 2.1 x 10(-5) (95% value). These estimates are similar to or exceed estimated cancer risks posed by releases of radionuclides from the site. PMID:11726020
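The Monte Carlo propagation step described above, sampling uncertain factors and multiplying them into a distribution of lifetime risk, can be sketched as follows (Python; the lognormal parameters are invented for illustration, not the study's values):

```python
import math
import random

def mc_risk(n=20_000, seed=4):
    """Monte Carlo uncertainty propagation sketch: incremental lifetime
    cancer risk as the product of a lognormal time-integrated exposure
    factor and a lognormal cancer slope factor. Returns the geometric
    mean and the 5th/95th percentiles of the risk distribution."""
    rng = random.Random(seed)
    risks = sorted(
        math.exp(rng.gauss(math.log(1e-3), 0.5)) *   # exposure factor (assumed)
        math.exp(rng.gauss(math.log(7e-2), 0.7))     # slope factor (assumed)
        for _ in range(n)
    )
    gm = math.exp(sum(math.log(r) for r in risks) / n)
    return gm, risks[int(0.05 * n)], risks[int(0.95 * n)]
```

Because the product of lognormals is lognormal, the geometric mean of the output is the product of the input geometric means, a useful check on the sampling.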
Blaskiewicz, M.
2011-01-01
Stochastic Cooling was invented by Simon van der Meer and was demonstrated at the CERN ISR and ICE (Initial Cooling Experiment). Operational systems were developed at Fermilab and CERN. A complete theory of cooling of unbunched beams was developed, and was applied at CERN and Fermilab. Several new and existing rings employ coasting beam cooling. Bunched beam cooling was demonstrated in ICE and has been observed in several rings designed for coasting beam cooling. High energy bunched beams have proven more difficult. Signal suppression was achieved in the Tevatron, though operational cooling was not pursued at Fermilab. Longitudinal cooling was achieved in the RHIC collider. More recently a vertical cooling system in RHIC cooled both transverse dimensions via betatron coupling.
Brennan,J.M.; Blaskiewicz, M. M.; Severino, F.
2009-05-04
After the success of longitudinal stochastic cooling of bunched heavy ion beam in RHIC, transverse stochastic cooling in the vertical plane of Yellow ring was installed and is being commissioned with proton beam. This report presents the status of the effort and gives an estimate, based on simulation, of the RHIC luminosity with stochastic cooling in all planes.
Fluctuations as stochastic deformation.
Kazinski, P O
2008-04-01
A notion of stochastic deformation is introduced and the corresponding algebraic deformation procedure is developed. This procedure is analogous to the deformation of an algebra of observables like deformation quantization, but for an imaginary deformation parameter (the Planck constant). This method is demonstrated on diverse relativistic and nonrelativistic models with finite and infinite degrees of freedom. It is shown that under stochastic deformation the model of a nonrelativistic particle interacting with the electromagnetic field on a curved background passes into the stochastic model described by the Fokker-Planck equation with the diffusion tensor being the inverse metric tensor. The first stochastic correction to the Newton equations for this system is found. The Klein-Kramers equation is also derived as the stochastic deformation of a certain classical model. Relativistic generalizations of the Fokker-Planck and Klein-Kramers equations are obtained by applying the procedure of stochastic deformation to appropriate relativistic classical models. The analog of the Fokker-Planck equation associated with the stochastic Lorentz-Dirac equation is derived too. The stochastic deformation of the models of a free scalar field and an electromagnetic field is investigated. It turns out that in the latter case the obtained stochastic model describes a fluctuating electromagnetic field in a transparent medium. PMID:18517590
A Stochastic Employment Problem
ERIC Educational Resources Information Center
Wu, Teng
2013-01-01
The Stochastic Employment Problem (SEP) is a variation of the Stochastic Assignment Problem which analyzes the scenario in which one assigns balls into boxes. Balls arrive sequentially, each with a binary vector X = (X_1, X_2, ..., X_n) attached, with the interpretation being that if X_i = 1 the ball…
Stochastic Convection Parameterizations
NASA Technical Reports Server (NTRS)
Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios
2012-01-01
computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection, stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts
Stochastic Processes in Electrochemistry.
Singh, Pradyumna S; Lemay, Serge G
2016-05-17
Stochastic behavior becomes an increasingly dominant characteristic of electrochemical systems as we probe them on the smallest scales. Advances in the tools and techniques of nanoelectrochemistry dictate that stochastic phenomena will become more widely manifest in the future. In this Perspective, we outline the conceptual tools that are required to analyze and understand this behavior. We draw on examples from several specific electrochemical systems where important information is encoded in, and can be derived from, apparently random signals. This Perspective attempts to serve as an accessible introduction to understanding stochastic phenomena in electrochemical systems and outlines why they cannot be understood with conventional macroscopic descriptions. PMID:27120701
Spring, William Joseph
2009-04-13
We consider quantum analogues of n-parameter stochastic processes, associated integrals and martingale properties extending classical results obtained in [1, 2, 3], and quantum results in [4, 5, 6, 7, 8, 9, 10].
Dynamics of Double Stochastic Operators
NASA Astrophysics Data System (ADS)
Saburov, Mansoor
2016-03-01
A double stochastic operator is a generalization of a double stochastic matrix. In this paper, we study the dynamics of double stochastic operators. We give a criterion for the regularity of a double stochastic operator in terms of the absence of periodic points. We provide some examples to show that, in general, a trajectory of a double stochastic operator may converge to any interior point of the simplex.
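The linear special case, iterating a doubly stochastic matrix on the probability simplex, illustrates the regular convergence behavior discussed above (Python; the matrix is an invented example, chosen regular so the trajectory converges to the center of the simplex):

```python
def apply_ds(P, x):
    """One application of a doubly stochastic matrix to a distribution."""
    return [sum(P[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

def iterate_ds(P, x, n=200):
    """Trajectory x, Px, P^2 x, ... For a regular doubly stochastic matrix
    (all entries positive) the trajectory converges to the uniform
    distribution, the center of the simplex. Nonlinear double stochastic
    operators can instead converge to other interior points."""
    for _ in range(n):
        x = apply_ds(P, x)
    return x

# Rows and columns each sum to 1, all entries positive (regular case).
P = [[0.5, 0.3, 0.2],
     [0.3, 0.4, 0.3],
     [0.2, 0.3, 0.5]]
```

Starting from a vertex of the simplex, the iterates contract toward (1/3, 1/3, 1/3) at a rate set by the second-largest eigenvalue modulus.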
NASA Astrophysics Data System (ADS)
Venturi, Daniele
2005-11-01
Stochastic bifurcations and stability of natural convective flows in 2D and 3D enclosures are investigated by the multi-element generalized polynomial chaos (ME-gPC) method (Xiu and Karniadakis, SISC, vol. 24, 2002). The Boussinesq approximation for the variation of physical properties is assumed. The stability analysis is first carried out in a deterministic sense, to determine steady state solutions and primary and secondary bifurcations. Stochastic simulations are then conducted around discontinuities and transitional regimes. It is found that these highly non-linear phenomena can be efficiently captured by the ME-gPC method. Finally, the main findings of the stochastic analysis and their implications for heat transfer will be discussed.
Stochastic Feedforward Control Technique
NASA Technical Reports Server (NTRS)
Halyo, Nesim
1990-01-01
Class of commanded trajectories modeled as stochastic process. Advanced Transport Operating Systems (ATOPS) research and development program conducted by NASA Langley Research Center aimed at developing capabilities for increases in capacities of airports, safe and accurate flight in adverse weather conditions (including wind shear), avoidance of wake vortexes, and reduced consumption of fuel. Advances in techniques for design of modern controls and increased capabilities of digital flight computers coupled with accurate guidance information from Microwave Landing System (MLS). Stochastic feedforward control technique developed within context of ATOPS program.
NASA Astrophysics Data System (ADS)
Pierret, Frédéric
2016-02-01
We derived the equations of Celestial Mechanics governing the variation of the orbital elements under a stochastic perturbation, thereby generalizing the classical Gauss equations. Explicit formulas are given for the semimajor axis, the eccentricity, the inclination, the longitude of the ascending node, the pericenter angle, and the mean anomaly, expressed in terms of the angular momentum vector H per unit mass and the energy E per unit mass. Together, these formulas are called the stochastic Gauss equations, and they are illustrated numerically with an example from satellite dynamics.
Stochastic modeling of rainfall
Guttorp, P.
1996-12-31
We review several approaches in the literature for stochastic modeling of rainfall, and discuss some of their advantages and disadvantages. While stochastic precipitation models have been around at least since the 1850s, the last two decades have seen an increased development of models based (more or less) on the physical processes involved in precipitation. There are interesting questions of scale and measurement that pertain to these modeling efforts. Recent modeling efforts aim at including meteorological variables, and may be useful for regional down-scaling of general circulation models.
STOCHASTIC COOLING FOR BUNCHED BEAMS.
BLASKIEWICZ, M.
2005-05-16
Problems associated with bunched beam stochastic cooling are reviewed. A longitudinal stochastic cooling system for RHIC is under construction and has been partially commissioned. The state of the system and future plans are discussed.
Stochastic entrainment of a stochastic oscillator.
Wang, Guanyu; Peskin, Charles S
2015-11-01
In this work, we consider a stochastic oscillator described by a discrete-state continuous-time Markov chain, in which the states are arranged in a circle, and there is a constant probability per unit time of jumping from one state to the next in a specified direction around the circle. At each of a sequence of equally spaced times, the oscillator has a specified probability of being reset to a particular state. The focus of this work is the entrainment of the oscillator by this periodic but stochastic stimulus. We consider a distinguished limit, in which (i) the number of states of the oscillator approaches infinity, as does the probability per unit time of jumping from one state to the next, so that the natural mean period of the oscillator remains constant, (ii) the resetting probability approaches zero, and (iii) the period of the resetting signal approaches a multiple, by a ratio of small integers, of the natural mean period of the oscillator. In this distinguished limit, we use analytic and numerical methods to study the extent to which entrainment occurs. PMID:26651734
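A sketch of the entrainment experiment described above (Python; the state count, jump rate, and reset probability are illustrative placeholders, and the concentration statistic is a standard circular mean resultant length rather than anything from the paper):

```python
import math
import random

def simulate_entrainment(n_states=50, mean_period=1.0, reset_period=1.0,
                         p_reset=0.2, n_cycles=2000, seed=7):
    """Discrete-state oscillator on a circle: constant-rate jumps through
    n_states (natural mean period = mean_period); at each multiple of
    reset_period the state is reset to 0 with probability p_reset.
    Returns the circular mean resultant length R of the phase sampled at
    reset times (R near 1: entrained; R near 0: uniform phase)."""
    rng = random.Random(seed)
    rate = n_states / mean_period
    state, t = 0, 0.0
    phases = []
    for k in range(1, n_cycles + 1):
        t_reset = k * reset_period
        # advance exponential jump times up to the next reset instant;
        # discarding the partial wait is justified by memorylessness
        while True:
            dt = rng.expovariate(rate)
            if t + dt > t_reset:
                break
            t += dt
            state = (state + 1) % n_states
        t = t_reset
        phases.append(2 * math.pi * state / n_states)
        if rng.random() < p_reset:
            state = 0
    c = sum(math.cos(ph) for ph in phases) / len(phases)
    s = sum(math.sin(ph) for ph in phases) / len(phases)
    return math.hypot(c, s)
```

Comparing a strong resetting stimulus against no resetting shows the phase-locking effect directly: the driven oscillator keeps a concentrated phase, while the free one diffuses around the circle.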
Stochastic Models of Human Growth.
ERIC Educational Resources Information Center
Goodrich, Robert L.
Stochastic difference equations of the Box-Jenkins form provide an adequate family of models on which to base the stochastic theory of human growth processes, but conventional time series identification methods do not apply to available data sets. A method to identify structure and parameters of stochastic difference equation models of human…
Focus on stochastic thermodynamics
NASA Astrophysics Data System (ADS)
Van den Broeck, Christian; Sasa, Shin-ichi; Seifert, Udo
2016-02-01
We introduce the thirty papers collected in this ‘focus on’ issue. The contributions explore conceptual issues within and around stochastic thermodynamics, use this framework for the theoretical modeling and experimental investigation of specific systems, and provide further perspectives on and for this active field.
Tollestrup, A.V.; Dugan, G
1983-12-01
Major headings in this review include: proton sources; antiproton production; antiproton sources and Liouville, the role of the Debuncher; transverse stochastic cooling, time domain; the accumulator; frequency domain; pickups and kickers; Fokker-Planck equation; calculation of constants in the Fokker-Planck equation; and beam feedback. (GHT)
2-stage revision of 120 deep infected hip and knee prostheses using gentamicin-PMMA beads.
Janssen, Daniël M C; Geurts, Jan A P; Jütten, Liesbeth M C; Walenkamp, Geert H I M
2016-08-01
Background and purpose - A 2-stage revision is the most common treatment for late deep prosthesis-related infections and in all cases of septic loosening. However, there is no consensus about the optimal interval between the 2 stages. Patients and methods - We retrospectively studied 120 deep infections of total hip (n = 95) and knee (n = 25) prostheses that had occurred over a period of 25 years. The mean follow-up time was 5 (2-20) years. All infections had been treated with extraction, 1 or more debridements with systemic antibiotics, and implantation of gentamicin-PMMA beads. There had been different time intervals between extraction and reimplantation: median 14 (11-47) days for short-term treatment with uninterrupted hospital stay, and 7 (3-22) months for long-term treatment with temporary discharge. We analyzed the outcome regarding resolution of the infection and clinical results. Results - 88% (105/120) of the infections healed, with no difference in healing rate between short- and long-term treatment. 82 prostheses were reimplanted. In the most recent decade, we treated patients more often with a long-term treatment but reduced the length of time between the extraction and the reimplantation. More reimplantations were performed in long-term treatments than in short-term treatments, despite more having difficult-to-treat infections with worse soft-tissue condition. Interpretation - Patient, wound, and infection considerations resulted in an individualized treatment with different intervals between stages. The 2-stage revision treatment in combination with local gentamicin-PMMA beads gave good results even with difficult prosthesis infections and gentamicin-resistant bacteria. PMID:26822990
Adaptive stochastic cellular automata: Applications
NASA Astrophysics Data System (ADS)
Qian, S.; Lee, Y. C.; Jones, R. D.; Barnes, C. W.; Flake, G. W.; O'Rourke, M. K.; Lee, K.; Chen, H. H.; Sun, G. Z.; Zhang, Y. Q.; Chen, D.; Giles, C. L.
1990-09-01
The stochastic learning cellular automata model has been applied to the problem of controlling unstable systems. Two example unstable systems studied are controlled by an adaptive stochastic cellular automata algorithm with an adaptive critic. The reinforcement learning algorithm and the architecture of the stochastic CA controller are presented. Learning to balance a single pole is discussed in detail. Balancing an inverted double pendulum highlights the power of the stochastic CA approach. The stochastic CA model is compared to conventional adaptive control and artificial neural network approaches.
Stochastic computing with biomolecular automata
NASA Astrophysics Data System (ADS)
Adar, Rivka; Benenson, Yaakov; Linshiz, Gregory; Rosner, Amit; Tishby, Naftali; Shapiro, Ehud
2004-07-01
Stochastic computing has a broad range of applications, yet electronic computers realize its basic step, stochastic choice between alternative computation paths, in a cumbersome way. Biomolecular computers use a different computational paradigm and hence afford novel designs. We constructed a stochastic molecular automaton in which stochastic choice is realized by means of competition between alternative biochemical pathways, and choice probabilities are programmed by the relative molar concentrations of the software molecules coding for the alternatives. Programmable and autonomous stochastic molecular automata have been shown to perform direct analysis of disease-related molecular indicators in vitro and may have the potential to provide in situ medical diagnosis and cure.
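The programming principle described here — choice probabilities set by the relative molar concentrations of the software molecules — can be sketched in silico. This is a toy model of that principle only, not the wet-lab protocol; the path names and the 3:1 ratio are illustrative:

```python
import random

def stochastic_choice(concentrations, rng):
    """Pick a computation path with probability proportional to the molar
    concentration of the 'software' molecule coding for it (in-silico toy,
    not the biochemical implementation)."""
    total = sum(concentrations.values())
    r = rng.random() * total
    for path, c in concentrations.items():
        r -= c
        if r <= 0:
            return path
    return path  # guard against floating-point leftovers

# Two alternative transitions, programmed 3:1 by relative concentration.
rng = random.Random(42)
counts = {"accept": 0, "reject": 0}
for _ in range(10000):
    counts[stochastic_choice({"accept": 3.0, "reject": 1.0}, rng)] += 1
```

Over many runs the empirical branch frequencies approach the programmed concentration ratio, which is the sense in which the automaton's choices are "programmable."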
Ishibashi, Tomoko; Ishikawa, Seiji; Suzuki, Akiko; Miyawaki, Yutaka; Kawano, Tatsuyuki; Makita, Koshi
2016-02-15
Tracheogastric tube fistulas are rare but fatal complications after esophagectomy. Anesthetic management for a patient with this complication is challenging because air leakage and mechanical ventilation may cause aspiration. We present a case report of the anesthetic management of a patient having 2-stage surgical repair combined with endoscopic mucosal resection for a giant carinal tracheogastric tube fistula. The first stage was separation of the gastric tube above the fistula with spontaneous breathing under local anesthesia and sedation. The second stage was complete separation and reconstruction of the digestive tract under epidural and general anesthesia with spontaneous breathing and pressure support before insertion of a decompression tube. PMID:26862719
Stochastic response surface methodology: A study in the human health area
Oliveira, Teresa A.; Oliveira, Amílcar; Leal, Conceição
2015-03-10
In this paper we review Stochastic Response Surface Methodology as a tool for modeling uncertainty in the context of Risk Analysis. An application to survival analysis in the breast cancer context is implemented with R software.
Aerodynamic characteristics of the National Launch System (NLS) 1 1/2 stage launch vehicle
NASA Technical Reports Server (NTRS)
Springer, A. M.; Pokora, D. C.
1994-01-01
The National Aeronautics and Space Administration (NASA) is studying ways of assuring more reliable and cost-effective means to space. One launch system studied was the NLS, which included the 1 1/2 stage vehicle. This document encompasses the aerodynamic characteristics of the 1 1/2 stage vehicle. To support the detailed configuration definition, two wind tunnel tests were conducted in the NASA Marshall Space Flight Center's 14x14-Inch Trisonic Wind Tunnel during 1992. The tests were a static stability and a pressure test, each utilizing 0.004 scale models. The static stability test resulted in the forces and moments acting on the vehicle. The aerodynamics for the reference configuration with and without feedlines and an evaluation of three proposed engine shroud configurations were also determined. The pressure test resulted in pressure distributions over the reference vehicle with and without feedlines including the reference engine shrouds. These pressure distributions were integrated and balanced to the static stability coefficients resulting in distributed aerodynamic loads on the vehicle. The wind tunnel tests covered a Mach range of 0.60 to 4.96. These ascent flight aerodynamic characteristics provide the basis for trajectory and performance analysis, loads determination, and guidance and control evaluation.
Samuelson, P A
1971-02-01
Because a commodity like wheat can be carried forward from one period to the next, speculative arbitrage serves to link its prices at different points of time. Since, however, the size of the harvest depends on complicated probability processes impossible to forecast with certainty, the minimal model for understanding market behavior must involve stochastic processes. The present study, on the basis of the axiom that it is the expected rather than the known-for-certain prices which enter into all arbitrage relations and carryover decisions, determines the behavior of price as the solution to a stochastic-dynamic-programming problem. The resulting stationary time series possesses an ergodic state and normative properties like those often observed for real-world bourses. PMID:16591903
Stochastic ice stream dynamics.
Mantelli, Elisa; Bertagni, Matteo Bernard; Ridolfi, Luca
2016-08-01
Ice streams are narrow corridors of fast-flowing ice that constitute the arterial drainage network of ice sheets. Therefore, changes in ice stream flow are key to understanding paleoclimate, sea level changes, and rapid disintegration of ice sheets during deglaciation. The dynamics of ice flow are tightly coupled to the climate system through atmospheric temperature and snow recharge, which are known to exhibit stochastic variability. Here we focus on the interplay between stochastic climate forcing and ice stream temporal dynamics. Our work demonstrates that realistic climate fluctuations are able to (i) induce the coexistence of dynamic behaviors that would be incompatible in a purely deterministic system and (ii) drive ice stream flow away from the regime expected in a steady climate. We conclude that environmental noise appears to be crucial to interpreting the past behavior of ice sheets, as well as to predicting their future evolution. PMID:27457960
VAWT stochastic wind simulator
Strickland, J.H.
1987-04-01
A stochastic wind simulation for VAWTs (VSTOC) has been developed which yields turbulent wind-velocity fluctuations for rotationally sampled points. This allows three-component wind-velocity fluctuations to be simulated at specified nodal points on the wind-turbine rotor. A first-order convection scheme is used which accounts for the decrease in streamwise velocity as the flow passes through the wind-turbine rotor. The VSTOC simulation is independent of the particular analytical technique used to predict the aerodynamic and performance characteristics of the turbine. The VSTOC subroutine may be used simply as a subroutine in a particular VAWT prediction code or it may be used as a subroutine in an independent processor. The independent processor is used to interact with a version of the VAWT prediction code which is segmented into deterministic and stochastic modules. Using VSTOC in this fashion is very efficient with regard to decreasing computer time for the overall calculation process.
Blaskiewicz, M.; Brennan, J. M.; Cameron, P.; Wei, J.
2003-05-12
Emittance growth due to Intra-Beam Scattering significantly reduces the heavy ion luminosity lifetime in RHIC. Stochastic cooling of the stored beam could improve things considerably by counteracting IBS and preventing particles from escaping the rf bucket [1]. High frequency bunched-beam stochastic cooling is especially challenging but observations of Schottky signals in the 4-8 GHz band indicate that conditions are favorable in RHIC [2]. We report here on measurements of the longitudinal beam transfer function carried out with a pickup kicker pair on loan from FNAL TEVATRON. Results imply that for ions a coasting beam description is applicable and we outline some general features of a viable momentum cooling system for RHIC.
Dorogovtsev, Andrei A
2010-06-29
For sets in a Hilbert space the concept of quadratic entropy is introduced. It is shown that this entropy is finite for the range of a stochastic flow of Brownian particles on R. This implies, in particular, the fact that the total time of the free travel in the Arratia flow of all particles that started from a bounded interval is finite. Bibliography: 10 titles.
Ultimate open pit stochastic optimization
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Caron, Josiane
2013-02-01
Classical open pit optimization (maximum closure problem) is made on block estimates, without directly considering the block grades uncertainty. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than with the classical or simulated pit. The main factor controlling the relative gain of stochastic optimization compared to the classical approach and simulated pit is shown to be the information level as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase both with the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
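The gap between optimizing on expected grades and on expected profits comes from pushing a nonlinear profit function through an average. A minimal numerical sketch with a hypothetical cutoff-style profit function and made-up price/cost figures (not the paper's economic model):

```python
import random

def block_profit(grade, cutoff=1.0, price=10.0, mining_cost=2.0):
    """Hypothetical block economics: revenue only above the ore cutoff."""
    revenue = price * grade if grade >= cutoff else 0.0
    return revenue - mining_cost

rng = random.Random(1)
# Stand-in for conditional simulations of one block's grade (mean below cutoff).
sims = [max(0.0, rng.gauss(0.9, 0.5)) for _ in range(20000)]

mean_grade = sum(sims) / len(sims)
profit_of_mean = block_profit(mean_grade)               # classical input: profit of the expected grade
mean_profit = sum(map(block_profit, sims)) / len(sims)  # stochastic input: expected profit
```

Because part of the grade distribution clears the cutoff even when the mean does not, the expected profit exceeds the profit of the expected grade, which is why the two optimizations select different pits.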
Quantum Spontaneous Stochasticity
NASA Astrophysics Data System (ADS)
Drivas, Theodore; Eyink, Gregory
Classical Newtonian dynamics is expected to be deterministic, but recent fluid turbulence theory predicts that a particle advected at high Reynolds numbers by "nearly rough" flows moves nondeterministically. Small stochastic perturbations to the flow velocity or to the initial data lead to persistent randomness, even in the limit where the perturbations vanish! Such "spontaneous stochasticity" has profound consequences for astrophysics, geophysics, and our daily lives. We show that a similar effect occurs with a quantum particle in a "nearly rough" force, in the semi-classical (large-mass) limit, where spreading of the wave-packet is usually expected to be negligible and the dynamics to be deterministic and Newtonian. Instead, there are non-zero probabilities to observe multiple, non-unique solutions of the classical equations. Although the quantum wave-function remains split, rapid phase oscillations prevent any coherent superposition of the branches. Classical spontaneous stochasticity has not yet been seen in controlled laboratory experiments of fluid turbulence, but the corresponding quantum effects may be observable by current techniques. We suggest possible experiments with neutral atomic-molecular systems in repulsive electric dipole potentials.
Unsteady hot streak simulation through a 1-1/2 stage turbine engine
NASA Astrophysics Data System (ADS)
Takahashi, R. K.; Ni, R. H.
1991-06-01
The temperature redistribution process in a 1-1/2 stage turbine (consisting of a first stator, first rotor, and second stator) was analyzed using an unsteady 3D Euler flow solver. The study concentrated on tracking a hot streak from the inlet of the first stator to the exit of the second stator. The redistribution of the hot streak in the second stator passage was very different from that in the rotor passage, with no signs of temperature segregation in the second stator passage, and with rotor-generated vortices which persist through the second stator passage and partake in redistributing the remains of the hot streak. The unsteady code predicts different time-averaged temperatures and secondary flow in the second stator passage than in the steady multistage code, although the steady code may be sufficient for predicting time-averaged pressure loadings on both rotor and second stator airfoils, and time-averaged secondary flow vortices in the rotor passage.
Qualitative stability of nonstandard 2-stage explicit Runge-Kutta methods of order two
NASA Astrophysics Data System (ADS)
Khalsaraei, M. M.; Khodadosti, F.
2016-02-01
When one solves differential equations modeling physical phenomena, it is of great importance to take physical constraints into account. More precisely, numerical schemes have to be designed such that discrete solutions satisfy the same constraints as exact solutions. Nonstandard finite difference (NSFD) schemes can improve the accuracy and reduce the computational costs of traditional finite difference schemes. In addition, NSFD schemes produce numerical solutions which also exhibit the essential properties of the exact solution. In this paper, a class of nonstandard 2-stage Runge-Kutta methods of order two (we call it nonstandard RK2) is considered. The preservation of some qualitative properties by this class of methods is discussed. In order to illustrate our results, we provide some numerical examples.
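For reference, here is a classical 2-stage order-two scheme (Heun's method) next to one common nonstandard modification, in which the step size in the update is replaced by a denominator function φ(h) = 1 − e⁻ʰ = h + O(h²). This is a generic sketch of the NSFD idea under that assumed φ, not necessarily the specific scheme analyzed in the paper:

```python
import math

def heun_step(f, t, y, h):
    """One classical 2-stage, order-two Runge-Kutta (Heun) step."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def phi(h):
    """A common NSFD denominator function: phi(h) = h + O(h^2)."""
    return 1.0 - math.exp(-h)

def nsfd_heun_step(f, t, y, h):
    """Nonstandard variant: h in the update is replaced by phi(h), which can
    preserve qualitative properties (e.g. positivity, boundedness) of the
    exact solution even for large step sizes."""
    k1 = f(t, y)
    k2 = f(t + h, y + phi(h) * k1)
    return y + 0.5 * phi(h) * (k1 + k2)
```

On the decay problem y' = −y the classical step overshoots badly for large h, while the φ-modified step keeps the iterate positive and decaying, which is the kind of qualitative preservation the abstract refers to.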
The contemporary role of 1 vs. 2-stage repair for proximal hypospadias
Dason, Shawn; Wong, Nathan
2014-01-01
This review discusses the most commonly employed techniques in the repair of proximal hypospadias, highlighting the advantages and disadvantages of single versus staged surgical techniques. Hypospadias can have a spectrum of severity with a urethral meatus ranging from the perineum to the glans. Associated abnormalities are commonly found with proximal hypospadias and encompass a large spectrum, including ventral curvature (VC) up to 50 degrees or more, ventral skin deficiency, a flattened glans, penile torsion and penoscrotal transposition. Our contemporary understanding of hypospadiology comprises a foundation built by experts who have described a number of techniques and their outcomes, combined with survey data detailing practice patterns. The two largest components of hypospadias repair include repair of VC and urethroplasty. VC greater than 20 degrees is considered clinically relevant to warrant surgical correction. To repair VC, the penis is first degloved—a procedure that may reduce or remove curvature by itself in some cases. Residual curvature is then repaired with dorsal plication techniques, transection of the urethral plate, and/or ventral lengthening techniques. Urethroplasty takes the form of 1- or 2-stage repairs. One-stage options include the tubularized incised urethroplasty (TIP) or various graft or flap-based techniques. Two-stage options also include grafts or flaps, including oral mucosal and preputial skin grafting. One-stage repairs are an attractive option in that they may reduce cost, hospital stay, anesthetic risks, and time to the final result. The downside is that these repairs require mastery of multiple techniques, may be more complex, and—depending on technique—have higher complication rates. Two-stage repairs are preferred by the majority of surveyed hypospadiologists. The 2-stage repair is versatile and has satisfactory outcomes, but necessitates a second procedure. Given the lack of clear high-quality evidence
A retrodictive stochastic simulation algorithm
Vaughan, T.G. Drummond, P.D.; Drummond, A.J.
2010-05-20
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
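For context, the "usual predictive stochastic simulation approach" that the retrodictive algorithm complements is the Gillespie direct method. A minimal sketch on a birth-death network (the rates are illustrative, and the retrodictive inference step itself is not reproduced here):

```python
import math
import random

def gillespie(propensities, update, x0, t_max, rng):
    """Predictive Gillespie direct method: draw an exponential waiting time
    from the total propensity, then choose which reaction fires with
    probability proportional to its propensity."""
    t, x, path = 0.0, x0, [(0.0, x0)]
    while t < t_max:
        rates = [a(x) for a in propensities]
        total = sum(rates)
        if total == 0.0:
            break
        t += -math.log(rng.random()) / total
        r = rng.random() * total
        for i, rate in enumerate(rates):
            r -= rate
            if r <= 0.0:
                break
        x = update[i](x)
        path.append((t, x))
    return path

# Birth-death example: 0 -> X at rate 5, X -> 0 at rate 0.5 * x.
rng = random.Random(7)
path = gillespie([lambda x: 5.0, lambda x: 0.5 * x],
                 [lambda x: x + 1, lambda x: x - 1], 0, 50.0, rng)
```

The retrodictive variant described in the abstract runs the corresponding inference backward: given the final state, it samples trajectories consistent with the master equation to weight possible initial states.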
Stochastic calculus in physics
Fox, R.F.
1987-03-01
The relationship of Ito-Stratonovich stochastic calculus to studies of weakly colored noise is explained. A functional calculus approach is used to obtain an effective Fokker-Planck equation for the weakly colored noise regime. In a smooth limit, this representation produces the Stratonovich version of the Ito-Stratonovich calculus for white noise. It also provides an approach to steady state behavior for strongly colored noise. Numerical simulation algorithms are explored, and a novel suggestion is made for efficient and accurate simulation of white noise equations.
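The Itô and Stratonovich interpretations correspond to different discrete integration schemes for white-noise equations. A minimal sketch (Euler-Maruyama for Itô, stochastic Heun for Stratonovich), offered as generic background rather than the paper's functional-calculus treatment:

```python
import math
import random

def euler_maruyama(a, b, x0, dt, n, rng):
    """Ito interpretation: x <- x + a(x) dt + b(x) dW, with dW ~ N(0, dt)."""
    x = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x = x + a(x) * dt + b(x) * dw
    return x

def stratonovich_heun(a, b, x0, dt, n, rng):
    """Stratonovich interpretation: predictor-corrector (stochastic Heun)
    reusing the same Wiener increment in both half-steps."""
    x = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        xp = x + a(x) * dt + b(x) * dw                               # predictor
        x = x + 0.5 * (a(x) + a(xp)) * dt + 0.5 * (b(x) + b(xp)) * dw  # corrector
    return x

# One multiplicative-noise sample path in the Ito sense.
rng = random.Random(0)
x_ito = euler_maruyama(lambda x: -x, lambda x: 0.3 * x, 1.0, 0.01, 1000, rng)
```

With multiplicative noise the two schemes converge to different processes (differing by the usual noise-induced drift), which is exactly the Itô-Stratonovich distinction the abstract discusses in the colored-noise limit.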
Stochastic ontogenetic growth model
NASA Astrophysics Data System (ADS)
West, B. J.; West, D.
2012-02-01
An ontogenetic growth model (OGM) for a thermodynamically closed system is generalized to satisfy both the first and second law of thermodynamics. The hypothesized stochastic ontogenetic growth model (SOGM) is shown to entail the interspecies allometry relation by explicitly averaging the basal metabolic rate and the total body mass over the steady-state probability density for the total body mass (TBM). This is the first derivation of the interspecies metabolic allometric relation from a dynamical model and the asymptotic steady-state distribution of the TBM is fit to data and shown to be inverse power law.
Stochastic thermodynamics of resetting
NASA Astrophysics Data System (ADS)
Fuchs, Jaco; Goldt, Sebastian; Seifert, Udo
2016-03-01
Stochastic dynamics with random resetting leads to a non-equilibrium steady state. Here, we consider the thermodynamics of resetting by deriving the first and second law for resetting processes far from equilibrium. We identify the contributions to the entropy production of the system which arise due to resetting and show that they correspond to the rate with which information is either erased or created. Using Landauer's principle, we derive a bound on the amount of work that is required to maintain a resetting process. We discuss different regimes of resetting, including a Maxwell demon scenario where heat is extracted from a bath at constant temperature.
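A minimal simulation of the kind of process considered: free diffusion interrupted by Poissonian resetting to the origin, which maintains a non-equilibrium steady state. Parameters are illustrative, and the paper's entropy/work bookkeeping is not reproduced here:

```python
import math
import random

def diffuse_with_resetting(D, r, t_max, dt, rng):
    """Brownian particle on a line, reset to the origin at Poisson rate r."""
    x, t = 0.0, 0.0
    step_sd = math.sqrt(2.0 * D * dt)
    while t < t_max:
        if rng.random() < r * dt:   # resetting event this step
            x = 0.0
        x += rng.gauss(0.0, step_sd)
        t += dt
    return x

rng = random.Random(3)
samples = [diffuse_with_resetting(1.0, 1.0, 20.0, 0.01, rng) for _ in range(500)]
msd = sum(x * x for x in samples) / len(samples)
# The resetting steady state has finite mean squared displacement 2*D/r
# (here 2.0), whereas free diffusion at the same time would give 2*D*t = 40.
```

Maintaining this stationary state costs work and produces entropy, which is the budget the first and second laws for resetting processes account for.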
Stochastic Quantum Gas Dynamics
NASA Astrophysics Data System (ADS)
Proukakis, Nick P.; Cockburn, Stuart P.
2010-03-01
We study the dynamics of weakly-interacting finite temperature Bose gases via the Stochastic Gross-Pitaevskii equation (SGPE). As a first step, we demonstrate [jointly with A. Negretti (Ulm, Germany) and C. Henkel (Potsdam, Germany)] that the SGPE provides a significantly better method for generating an equilibrium state than the number-conserving Bogoliubov method (except for low temperatures and small atom numbers). We then study [jointly with H. Nistazakis and D. J. Frantzeskakis (University of Athens, Greece), P. G. Kevrekidis (University of Massachusetts) and T. P. Horikis (University of Ioannina, Greece)] the dynamics of dark solitons in elongated finite temperature condensates. We demonstrate numerical shot-to-shot variations in soliton trajectories (S. P. Cockburn et al., arXiv:0909.1660), finding individual long-lived trajectories as in experiments. In our simulations, these variations arise from fluctuations in the phase and density of the underlying medium. We provide a detailed statistical analysis, proposing regimes for the controlled experimental demonstration of this effect; we also discuss the extent to which simpler models can be used to mimic the features of ensemble-averaged stochastic trajectories.
Stochastic power flow modeling
Not Available
1980-06-01
The stochastic nature of customer demand and equipment failure on large interconnected electric power networks has produced a keen interest in the accurate modeling and analysis of the effects of probabilistic behavior on steady state power system operation. The principal avenue of approach has been to obtain a solution to the steady state network flow equations which adheres both to Kirchhoff's Laws and probabilistic laws, using either combinatorial or functional approximation techniques. Clearly, the present need is to develop sound techniques for producing meaningful data to serve as input. This research has addressed this end and serves to bridge the gap between electric demand modeling, equipment failure analysis, etc., and the area of algorithm development. Therefore, the scope of this work lies squarely on developing an efficient means of producing sensible input information in the form of probability distributions for the many types of solution algorithms that have been developed. Two major areas of development are described in detail: a decomposition of stochastic processes which gives hope of stationarity, ergodicity, and perhaps even normality; and a powerful surrogate probability approach using proportions of time which allows the calculation of joint events from one dimensional probability spaces.
Stochastic blind motion deblurring.
Xiao, Lei; Gregson, James; Heide, Felix; Heidrich, Wolfgang
2015-10-01
Blind motion deblurring from a single image is a highly under-constrained problem with many degenerate solutions. A good approximation of the intrinsic image can, therefore, only be obtained with the help of prior information in the form of (often nonconvex) regularization terms for both the intrinsic image and the kernel. While the best choice of image priors is still a topic of ongoing investigation, this research is made more complicated by the fact that historically each new prior requires the development of a custom optimization method. In this paper, we develop a stochastic optimization method for blind deconvolution. Since this stochastic solver does not require the explicit computation of the gradient of the objective function and uses only efficient local evaluation of the objective, new priors can be implemented and tested very quickly. We demonstrate that this framework, in combination with different image priors, produces results with Peak Signal-to-Noise Ratio (PSNR) values that match or exceed the results obtained by much more complex state-of-the-art blind motion deblurring algorithms. PMID:25974941
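The key implementation point — the solver needs only objective evaluations, never gradients, so a new prior drops in cheaply — can be illustrated with a toy derivative-free sampler. This is a stand-in illustration with a hypothetical quadratic objective, not the paper's deconvolution algorithm:

```python
import random

def stochastic_local_search(f, x0, steps, sigma, rng):
    """Derivative-free minimization by random local perturbations: only
    objective evaluations are needed, so a new (even nonconvex) prior or
    penalty term can be swapped in without deriving gradients."""
    x, fx = list(x0), f(x0)
    for _ in range(steps):
        cand = list(x)
        i = rng.randrange(len(cand))
        cand[i] += rng.gauss(0.0, sigma)   # perturb one coordinate
        fc = f(cand)
        if fc < fx:                        # keep improving samples only
            x, fx = cand, fc
    return x, fx

def quad(v):
    """Toy objective standing in for a deconvolution energy."""
    return sum((vi - 3.0) ** 2 for vi in v)

rng = random.Random(5)
x, fx = stochastic_local_search(quad, [0.0, 0.0], 4000, 0.5, rng)
```

Swapping `quad` for any other evaluable energy changes nothing in the solver, which is the "new priors can be tested very quickly" property the abstract highlights.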
NASA Technical Reports Server (NTRS)
Whitney, W. J.; Behning, F. P.; Moffitt, T. P.; Hotz, G. M.
1980-01-01
The stage group performance of a 4 1/2 stage turbine with an average stage loading factor of 4.66 and high specific work output was determined in cold air at design equivalent speed. The four stage turbine configuration produced design equivalent work output with an efficiency of 0.856, a barely discernible difference from the 0.855 obtained for the complete 4 1/2 stage turbine in a previous investigation. The turbine design procedure embodied the following features: (1) controlled vortex flow, (2) tailored radial work distribution, and (3) control of the location of the boundary-layer transition point on the airfoil suction surface. The efficiency forecast for the 4 1/2 stage turbine was 0.886, and the value predicted using a reference method was 0.862. The stage group performance results were used to determine the individual stage efficiencies for the condition at which design 4 1/2 stage work output was obtained. The efficiencies of stages one and four were about 0.020 lower than the predicted value, that of stage two was 0.014 lower, and that of stage three was about equal to the predicted value. Thus all the stages operated reasonably close to their expected performance levels, and the overall (4 1/2 stage) performance was not degraded by any particularly inefficient component.
Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan
2015-05-19
The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.
Variance decomposition in stochastic simulators
Le Maître, O. P.; Knio, O. M.; Moraes, A.
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
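The core reformulation — each reaction channel driven by its own independent Poisson stream — can be sketched with a per-channel tau-leaping loop. The rates are illustrative; the paper works with exact random time-change representations and the Sobol-Hoeffding decomposition itself, which are not reproduced here:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's multiplicative method (adequate for the small per-step rates here);
    the standard library has no Poisson sampler."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def tau_leap_birth_death(kb, kd, x0, dt, n, rng):
    """Each reaction channel is advanced by its own independent Poisson count
    per step, mirroring the independent-channel representation that lets
    channel-wise contributions to the variance be separated."""
    x = x0
    for _ in range(n):
        births = poisson(kb * dt, rng)      # channel 1: 0 -> X
        deaths = poisson(kd * x * dt, rng)  # channel 2: X -> 0
        x = max(0, x + births - deaths)
    return x

rng = random.Random(11)
finals = [tau_leap_birth_death(10.0, 1.0, 0, 0.05, 400, rng) for _ in range(300)]
mean = sum(finals) / len(finals)
# Steady state of this birth-death model is Poisson with mean kb/kd = 10.
```

Because the two channels consume separate random streams, one can freeze or resample each stream independently, which is the operational handle behind variance-based channel sensitivities.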
Unsteady Aero Computation of a 1 1/2 Stage Large Scale Rotating Turbine
NASA Technical Reports Server (NTRS)
To, Wai-Ming
2012-01-01
This report is the documentation of the work performed for the Subsonic Rotary Wing Project under NASA's Fundamental Aeronautics Program. It was funded through Task Number NNC10E420T under GESS-2 Contract NNC06BA07B in the period of 10/1/2010 to 8/31/2011. The objective of the task is to provide support for the development of variable speed power turbine technology through application of computational fluid dynamics analyses. This includes work elements in mesh generation, multistage URANS simulations, and post-processing of the simulation results for comparison with the experimental data. The unsteady CFD calculations were performed with the TURBO code running in multistage single passage (phase lag) mode. Meshes for the blade rows were generated with the NASA-developed TCGRID code. The CFD performance is assessed and improvements are recommended for future research in this area. To that end, the United Technologies Research Center's 1 1/2 stage Large Scale Rotating Turbine was selected to be the candidate engine configuration for this computational effort because of the completeness and availability of the data.
A 2-stage strategy updating rule promotes cooperation in the prisoner's dilemma game
NASA Astrophysics Data System (ADS)
Fang, Xiang-Sheng; Zhu, Ping; Liu, Run-Ran; Liu, En-Yu; Wei, Gui-Yi
2012-10-01
In this study, we propose a spatial prisoner's dilemma game model with a 2-stage strategy updating rule, and focus on the cooperation behavior of the system. In the first stage, i.e., the pre-learning stage, a focal player decides whether to update his strategy according to the pre-learning factor β and the payoff difference between himself and the average of his neighbors. If the player makes up his mind to update, he enters into the second stage, i.e., the learning stage, and adopts a strategy of a randomly selected neighbor according to the standard Fermi updating rule. The simulation results show that the cooperation level has a non-trivial dependence on the pre-learning factor. Generally, the cooperation frequency decreases as the pre-learning factor increases; but a high cooperation level can be obtained in the intermediate region of -3 < β < -1. We then give some explanations via studying the co-action of pre-learning and learning. Our results may sharpen the understanding of the influence of the strategy updating rule on evolutionary games.
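The two stages can be sketched as a pair of decision rules. Stage 2 is the standard Fermi rule stated in the abstract; the exact functional form of the stage-1 gate below is a hypothetical Fermi-like choice for illustration, since the abstract does not spell it out:

```python
import math
import random

def pre_learning_update(p_focal, p_nbr_avg, beta, rng):
    """Stage 1 (hypothetical form): gate the decision to update on the focal
    player's payoff gap to the neighborhood average, tuned by the
    pre-learning factor beta."""
    gate = 1.0 / (1.0 + math.exp(beta * (p_focal - p_nbr_avg)))
    return rng.random() < gate

def fermi_imitation(p_focal, p_model, K, rng):
    """Stage 2: standard Fermi rule -- adopt the randomly selected neighbor's
    strategy with probability 1 / (1 + exp((P_focal - P_model) / K))."""
    return rng.random() < 1.0 / (1.0 + math.exp((p_focal - p_model) / K))

# Fraction of players entering the learning stage at beta = -2 when the
# focal player trails the neighborhood average by one payoff unit.
rng = random.Random(2)
updates = sum(pre_learning_update(1.0, 2.0, -2.0, rng) for _ in range(1000))
```

In the Fermi rule, a neighbor with a much higher payoff is imitated almost surely and one with a much lower payoff almost never, with the noise parameter K controlling how sharp that transition is.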
Stochastic models of gene expression and post-transcriptional regulation
NASA Astrophysics Data System (ADS)
Pendar, Hodjat; Kulkarni, Rahul; Jia, Tao
2011-10-01
The intrinsic stochasticity of gene expression can give rise to phenotypic heterogeneity in a population of genetically identical cells. Correspondingly, there is considerable interest in understanding how different molecular mechanisms impact the 'noise' in gene expression. Of particular interest are post-transcriptional regulatory mechanisms involving molecules called small RNAs, which control important processes such as development and cancer. We propose and analyze general stochastic models of gene expression and derive exact analytical expressions quantifying the noise in protein distributions [1]. Focusing on specific regulatory mechanisms, we analyze a general model for post-transcriptional regulation of stochastic gene expression [2]. The results obtained provide new insights into the role of post-transcriptional regulation in controlling the noise in gene expression. [1] T. Jia and R. V. Kulkarni, Phys. Rev. Lett. 106, 058102 (2011). [2] T. Jia and R. V. Kulkarni, Phys. Rev. Lett. 105, 018101 (2010).
NASA Astrophysics Data System (ADS)
Umut Caglar, Mehmet; Pal, Ranadip
2012-10-01
Biological systems are inherently stochastic, and thus require probabilistic models to understand and simulate their behavior. However, stochastic models are extremely complex and computationally expensive, which restricts their application to smaller systems. Probabilistic modeling of larger systems can help to reveal the underlying mechanisms of complex diseases, including cancer. The fine-scale stochastic behavior of genetic regulatory networks is often modeled using stochastic master equations. The inherently high computational complexity of stochastic master equation simulation presents a challenge in its application to biological system modeling, even when the model parameters can be properly estimated. In this article, we present a new approach to stochastic model simulation based on Kronecker product analysis and approximation of the Zassenhaus formula for matrix exponentials. Simulation results illustrate the comparative performance of our modeling approach against stochastic master equations, with significantly lower computational complexity. We also provide a stochastic upper bound on the deviation of the steady-state distribution of our model from the steady-state distribution of the stochastic master equation.
Biochemical simulations: stochastic, approximate stochastic and hybrid approaches
2009-01-01
Computer simulations have become an invaluable tool to study the sometimes counterintuitive temporal dynamics of (bio-)chemical systems. In particular, stochastic simulation methods have attracted increasing interest recently. In contrast to the well-known deterministic approach based on ordinary differential equations, they can capture effects that occur due to the underlying discreteness of the systems and random fluctuations in molecular numbers. Numerous stochastic, approximate stochastic and hybrid simulation methods have been proposed in the literature. In this article, they are systematically reviewed in order to guide the researcher and help her find the appropriate method for a specific problem. PMID:19151097
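Of the exact stochastic methods reviewed here, Gillespie's direct method is the canonical example; a self-contained sketch for a birth-death process (rate constants chosen only for illustration):

```python
import math
import random

def gillespie_birth_death(k_birth, k_death, n0, t_end, rng):
    """Exact SSA trajectory for a birth-death process:
    0 -> X at rate k_birth, X -> 0 at rate k_death * n."""
    t, n = 0.0, n0
    times, counts = [t], [n]
    while t < t_end:
        a1 = k_birth
        a2 = k_death * n
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += -math.log(rng.random()) / a0   # exponential waiting time
        if rng.random() * a0 < a1:          # choose reaction by propensity
            n += 1
        else:
            n -= 1
        times.append(t)
        counts.append(n)
    return times, counts

rng = random.Random(42)
_, counts = gillespie_birth_death(10.0, 1.0, 0, 50.0, rng)
# The stationary mean is k_birth / k_death = 10; the tail of the
# trajectory should fluctuate around it.
print(sum(counts[-100:]) / 100)
```

This captures exactly the discreteness and random fluctuations that the deterministic ODE description misses.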
Stochastic reconstruction of sandstones
Manwart; Torquato; Hilfer
2000-07-01
A simulated annealing algorithm is employed to generate a stochastic model for a Berea sandstone and a Fontainebleau sandstone, each with a prescribed two-point probability function, lineal-path function, and "pore size" distribution function. We find that the temperature decrease of the annealing has to be rather quick to yield isotropic and percolating configurations. A comparison of simple morphological quantities indicates good agreement between the reconstructions and the original sandstones. Also, the mean survival time of a random walker in the pore space is reproduced with good accuracy. However, a more detailed investigation by means of local porosity theory shows that there may be significant differences in geometrical connectivity between the reconstructed and the experimental samples. PMID:11088546
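The annealing loop can be illustrated in one dimension; a toy sketch matching only the two-point function of a 64-site binary medium (the paper works in 3-D with several descriptors, so everything below is a simplified stand-in):

```python
import math
import random

def two_point(img, r):
    """Empirical two-point probability S2(r): chance that two sites a
    distance r apart are both pore (value 1), with periodic wrapping."""
    n = len(img)
    return sum(img[i] * img[(i + r) % n] for i in range(n)) / n

def energy(img, target, rmax):
    """Squared mismatch between current and target correlation functions."""
    return sum((two_point(img, r) - target[r]) ** 2 for r in range(rmax))

def anneal(img, target, rmax, steps, t0, cooling, rng):
    """Swap a pore/solid pair (porosity-preserving) and accept with the
    Metropolis rule at a quickly decreasing temperature."""
    e = energy(img, target, rmax)
    temp = t0
    for _ in range(steps):
        i, j = rng.randrange(len(img)), rng.randrange(len(img))
        if img[i] == img[j]:
            continue
        img[i], img[j] = img[j], img[i]
        e_new = energy(img, target, rmax)
        if e_new < e or rng.random() < math.exp((e - e_new) / temp):
            e = e_new
        else:
            img[i], img[j] = img[j], img[i]  # reject: undo swap
        temp *= cooling                      # rapid quench, as in the paper
    return img, e

rng = random.Random(1)
reference = [1 if rng.random() < 0.5 else 0 for _ in range(64)]
target = [two_point(reference, r) for r in range(8)]
start = reference[:]
rng.shuffle(start)
_, final_e = anneal(start, target, 8, 2000, 0.01, 0.999, rng)
print(final_e)
```

The swap move keeps the porosity fixed, so only the prescribed correlation functions drive the reconstruction.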
RES: Regularized Stochastic BFGS Algorithm
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the eigenvalues of the sample functions' Hessians are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
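The interplay of stochastic gradients and a regularized curvature estimate can be caricatured in one dimension, with a secant update standing in for the full BFGS matrix recursion (all constants illustrative, not the RES algorithm itself):

```python
import random

def stoch_grad(x, rng, noise=0.1):
    """Noisy gradient of f(x) = 0.5 * x^2 (true minimizer: x = 0)."""
    return x + rng.gauss(0.0, noise)

def res_sketch(x0, steps, rng, delta=0.1):
    """1-D caricature of RES: secant curvature from stochastic gradient
    differences, floored at delta so steps remain well defined."""
    x = x0
    g = stoch_grad(x, rng)
    h = 1.0                              # curvature (Hessian) estimate
    for _ in range(steps):
        x_new = x - 0.5 * g / h          # damped quasi-Newton step
        g_new = stoch_grad(x_new, rng)
        s, y = x_new - x, g_new - g
        if abs(s) > 1e-12:
            h = max(y / s, delta)        # regularized secant update
        x, g = x_new, g_new
    return x

rng = random.Random(7)
x_final = res_sketch(5.0, 300, rng)
print(x_final)
```

The floor `delta` plays the role of the regularization that keeps the Hessian approximation's eigenvalues bounded away from zero, which is what the convergence guarantee above relies on.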
A stochastic multi-symplectic scheme for stochastic Maxwell equations with additive noise
Hong, Jialin; Zhang, Liying
2014-07-01
In this paper we investigate a stochastic multi-symplectic method for stochastic Maxwell equations with additive noise. Based on the stochastic version of variational principle, we find a way to obtain the stochastic multi-symplectic structure of three-dimensional (3-D) stochastic Maxwell equations with additive noise. We propose a stochastic multi-symplectic scheme and show that it preserves the stochastic multi-symplectic conservation law and the local and global stochastic energy dissipative properties, which the equations themselves possess. Numerical experiments are performed to verify the numerical behaviors of the stochastic multi-symplectic scheme.
de la Peña-López, Roberto; Remolina-Bonilla, Yuly Andrea
2016-09-01
Cancer is a group of diseases which represents a significant public health problem in Mexico and worldwide. In Mexico, neoplasms are the second leading cause of death, and increased morbidity and mortality are expected in the coming decades. Several preventable risk factors for cancer development have been identified, the most relevant being tobacco use, which accounts for 30% of cancer cases, and obesity, associated with another 30%. These factors, in turn, are related to sedentary lifestyles, alcohol abuse, and imbalanced diets. Some agents are well known to cause cancer, such as ionizing radiation and viruses, including the papilloma virus (HPV) and the hepatitis B and C viruses; more recently, environmental pollution exposure and red meat consumption have been identified as carcinogens by the International Agency for Research on Cancer (IARC). The scientific evidence currently available is insufficient to consider milk either a risk factor or a protective factor against different types of cancer. PMID:27603890
Stochastic roots of growth phenomena
NASA Astrophysics Data System (ADS)
De Lauro, E.; De Martino, S.; De Siena, S.; Giorno, V.
2014-05-01
We show that the Gompertz equation describes the evolution in time of the median of a geometric stochastic process. Therefore, we induce that the process itself generates the growth. This result allows us further to exploit a stochastic variational principle to take account of self-regulation of growth through feedback of relative density variations. The conceptually well defined framework so introduced shows its usefulness by suggesting a form of control of growth by exploiting external actions.
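The underlying fact, that the median of a geometric (lognormal) process follows the deterministic exponent of its logarithm rather than the process mean, is easy to check by Monte Carlo; with a time-dependent drift this exponent becomes the Gompertz curve. A sketch with illustrative constants:

```python
import math
import random
import statistics

def gbm_samples(x0, mu, sigma, t, n, rng):
    """Draw n endpoints of geometric Brownian motion at time t."""
    return [x0 * math.exp((mu - 0.5 * sigma ** 2) * t
                          + sigma * math.sqrt(t) * rng.gauss(0.0, 1.0))
            for _ in range(n)]

rng = random.Random(3)
x0, mu, sigma, t = 1.0, 0.2, 0.5, 1.0
xs = gbm_samples(x0, mu, sigma, t, 20000, rng)
empirical = statistics.median(xs)
# For a lognormal variable the median is exp of the mean of the log,
# so it follows the deterministic drift, not the mean of the process.
theory = x0 * math.exp((mu - 0.5 * sigma ** 2) * t)
print(empirical, theory)
```

Replacing the constant drift `mu` with a decaying function of time yields the Gompertz-type median growth discussed above.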
Stochastic superparameterization in quasigeostrophic turbulence
Grooms, Ian; Majda, Andrew J.
2014-08-15
In this article we expand and develop the authors' recent proposed methodology for efficient stochastic superparameterization algorithms for geophysical turbulence. Geophysical turbulence is characterized by significant intermittent cascades of energy from the unresolved to the resolved scales resulting in complex patterns of waves, jets, and vortices. Conventional superparameterization simulates large scale dynamics on a coarse grid in a physical domain, and couples these dynamics to high-resolution simulations on periodic domains embedded in the coarse grid. Stochastic superparameterization replaces the nonlinear, deterministic eddy equations on periodic embedded domains by quasilinear stochastic approximations on formally infinite embedded domains. The result is a seamless algorithm which never uses a small scale grid and is far cheaper than conventional SP, but with significant success in difficult test problems. Various design choices in the algorithm are investigated in detail here, including decoupling the timescale of evolution on the embedded domains from the length of the time step used on the coarse grid, and sensitivity to certain assumed properties of the eddies (e.g. the shape of the assumed eddy energy spectrum). We present four closures based on stochastic superparameterization which elucidate the properties of the underlying framework: a ‘null hypothesis’ stochastic closure that uncouples the eddies from the mean, a stochastic closure with nonlinearly coupled eddies and mean, a nonlinear deterministic closure, and a stochastic closure based on energy conservation. The different algorithms are compared and contrasted on a stringent test suite for quasigeostrophic turbulence involving two-layer dynamics on a β-plane forced by an imposed background shear. The success of the algorithms developed here suggests that they may be fruitfully applied to more realistic situations. They are expected to be particularly useful in providing accurate and
Brennan, J. M.; Blaskiewicz, M.; Mernick, K.
2012-05-20
The full 6-dimensional [x,x'; y,y'; z,z'] stochastic cooling system for RHIC was completed and operational for the FY12 Uranium-Uranium collider run. Cooling enhances the integrated luminosity of the Uranium collisions by a factor of 5, primarily by reducing the transverse emittances but also by cooling in the longitudinal plane to preserve the bunch length. The components have been deployed incrementally over the past several runs, beginning with longitudinal cooling, then cooling in the vertical planes but multiplexed between the Yellow and Blue rings, next cooling both rings simultaneously in vertical (the horizontal plane was cooled by betatron coupling), and now simultaneous horizontal cooling has been commissioned. The system operated between 5 and 9 GHz and with 3 x 10{sup 8} Uranium ions per bunch and produces a cooling half-time of approximately 20 minutes. The ultimate emittance is determined by the balance between cooling and emittance growth from Intra-Beam Scattering. Specific details of the apparatus and mathematical techniques for calculating its performance have been published elsewhere. Here we report on: the method of operation, results with beam, and comparison of results to simulations.
A prospective randomized study of 1- and 2-stage sinus inlay bone grafts: 1-year follow-up.
Wannfors, K; Johansson, B; Hallman, M; Strandkvist, T
2000-01-01
The purpose of the present study was to compare the success of and surgical differences between 1- and 2-stage sinus inlay bone grafts and implants after 1 year in function. Forty edentulous patients, selected according to strict inclusion criteria from consecutive referrals, were allocated to one or the other of the 2 sinus-inlay procedures. Twenty patients received bone blocks fixed by implants to the residual alveolar crest in a 1-stage procedure (group 1). In another 20 patients, particulated bone was condensed against the antral floor and left to heal for 6 months before implants were placed (group 2). An almost equal number of implants was placed in the patients of each group: 76 in the 1-stage procedure and 74 in the 2-stage procedure. Additionally, 72 and 66 implants were placed in the anterior non-grafted regions of group 1 and group 2 patients, respectively. After 1 year in function, a total of 20 implants failed in 1-stage patients versus 11 in 2-stage patients; 16 and 8 of these, respectively, had been placed in grafted bone. The individual risk for implant failure in grafted areas among 1-stage patients was about twice the risk in 2-stage patients (odds ratio 2.3, CI 0.6; 8.5). The risk for implant failure in non-grafted areas was significantly lower (P < .05) than in grafted areas, regardless of the technique used. All but one 1-stage patient received the planned fixed prosthetic restorations, but 1 restoration was redesigned after the first year in function because of a functionally unacceptable prosthetic design. At the 1-year follow-up, one 2-stage patient lost her prosthesis as the result of multiple implant failures. Bruxism and postoperative infections were the only parameters that could be related to implant failure, although this depended on the statistical method used. PMID:11055129
Stacking with stochastic cooling
NASA Astrophysics Data System (ADS)
Caspers, Fritz; Möhl, Dieter
2004-10-01
Accumulation of large stacks of antiprotons or ions with the aid of stochastic cooling is more delicate than cooling a constant intensity beam. Basically the difficulty stems from the fact that the optimized gain and the cooling rate are inversely proportional to the number of particles 'seen' by the cooling system. Therefore, to maintain fast stacking, the newly injected batch has to be strongly 'protected' from the Schottky noise of the stack. Vice versa, the stack has to be efficiently 'shielded' against the high gain cooling system for the injected beam. In the antiproton accumulators with stacking ratios up to 10^5 the problem is solved by radial separation of the injection and the stack orbits in a region of large dispersion. An array of several tapered cooling systems with a matched gain profile provides a continuous particle flux towards the high-density stack core. Shielding of the different systems from each other is obtained both through the spatial separation and via the revolution frequencies (filters). In the 'old AA', where the antiproton collection and stacking was done in one single ring, the injected beam was further shielded during cooling by means of a movable shutter. The complexity of these systems is very high. For more modest stacking ratios, one might use azimuthal rather than radial separation of stack and injected beam. Schematically half of the circumference would be used to accept and cool new beam and the remainder to house the stack. Fast gating is then required between the high gain cooling of the injected beam and the low gain stack cooling. RF-gymnastics are used to merge the pre-cooled batch with the stack, to re-create free space for the next injection, and to capture the new batch. This scheme is less demanding for the storage ring lattice, but at the expense of some reduction in stacking rate. The talk reviews the 'radial' separation schemes and also gives some considerations to the 'azimuthal' schemes.
NASA Technical Reports Server (NTRS)
Whitney, W. J.
1977-01-01
The stage work distribution among the three stages was very close to the design value. The specific work output-mass flow characteristics of the three stages were closely matched. The efficiency of the 3 1/2 stage turbine at design specific work output and design speed was within 0.008 of the estimated value, and this agreement was felt to demonstrate the adequacy of the prediction method in the high stage loading factor regime.
A Stochastic Collocation Algorithm for Uncertainty Analysis
NASA Technical Reports Server (NTRS)
Mathelin, Lionel; Hussaini, M. Yousuff; Zang, Thomas A. (Technical Monitor)
2003-01-01
This report describes a stochastic collocation method to adequately handle physically intrinsic uncertainty in the variables of a numerical simulation. While the standard Galerkin approach to Polynomial Chaos requires multi-dimensional summations over the stochastic basis functions, for instance, the stochastic collocation method collapses those summations to a single one-dimensional summation. This report furnishes the essential algorithmic details of the new stochastic collocation method and provides as a numerical example the solution of the Riemann problem with the stochastic collocation method used for the discretization of the stochastic parameters.
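The collapse to a one-dimensional summation can be seen in the simplest setting: a single Gaussian parameter handled with a three-point Gauss-Hermite rule (the model `exp` is an arbitrary stand-in for a simulation output):

```python
import math

# 3-point Gauss-Hermite rule (physicists' weight exp(-x^2)).
GH_NODES = [0.0, math.sqrt(1.5), -math.sqrt(1.5)]
GH_WEIGHTS = [2.0 * math.sqrt(math.pi) / 3.0,
              math.sqrt(math.pi) / 6.0,
              math.sqrt(math.pi) / 6.0]

def collocation_mean(model):
    """Mean of model(xi) for xi ~ N(0, 1): evaluate the deterministic
    model at a few collocation points and sum the weighted results -
    a one-dimensional summation instead of a Galerkin projection."""
    return sum(w * model(math.sqrt(2.0) * x)
               for x, w in zip(GH_NODES, GH_WEIGHTS)) / math.sqrt(math.pi)

mean = collocation_mean(math.exp)   # exact answer is exp(0.5)
print(mean, math.exp(0.5))
```

Each collocation point is just one deterministic model run, which is what makes the approach non-intrusive.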
Enhanced algorithms for stochastic programming
Krishna, A.S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems, and we describe improvements to the current techniques in both areas. We studied different ways of using importance sampling in the context of stochastic programming by varying the choice of approximation functions used in the method. We concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient: it reduces the problem from finding the mean of a computationally expensive function to finding that of an inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function; this achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, both to provide a starting solution and to speed up the algorithm by making use of the information obtained from the expected value solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
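The idea of combining a cheap piecewise-linear recourse function with sampling can be sketched on a newsvendor-style two-stage problem (all costs and the grid search are illustrative; the dissertation's actual problems are large linear programs):

```python
import random

def recourse(x, demand, penalty=4.0):
    """Second-stage cost: a piecewise-linear shortfall penalty."""
    return penalty * max(demand - x, 0.0)

def saa_cost(x, demands, unit_cost=1.0):
    """Sample-average approximation: first-stage cost + mean recourse."""
    return unit_cost * x + sum(recourse(x, d) for d in demands) / len(demands)

rng = random.Random(11)
demands = [rng.uniform(0.0, 100.0) for _ in range(5000)]

# Expected-value problem: plug in the mean demand first...
mean_d = sum(demands) / len(demands)
ev_x = mean_d
# ...then refine against the sampled stochastic objective on a grid.
best_x = min(range(0, 101), key=lambda x: saa_cost(float(x), demands))

print(best_x, saa_cost(float(best_x), demands), saa_cost(ev_x, demands))
```

The expected-value solution serves as a starting point, and the stochastic solution (here near the 75th demand percentile) improves on it.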
Stochastic models: theory and simulation.
Field, Richard V., Jr.
2008-03-01
Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
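One of the simple sample-generation algorithms alluded to here is the spectral representation of a stationary Gaussian process: a sum of cosines with random phases whose amplitudes follow a prescribed power spectral density. A sketch with an assumed Lorentzian PSD:

```python
import math
import random

def sample_process(n_points, dt, omegas, spectrum, rng):
    """Spectral-representation sample of a zero-mean stationary process:
    one cosine per frequency bin, with independent random phases."""
    d_omega = omegas[1] - omegas[0]
    amps = [math.sqrt(2.0 * spectrum(w) * d_omega) for w in omegas]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in omegas]
    return [sum(a * math.cos(w * k * dt + p)
                for a, w, p in zip(amps, omegas, phases))
            for k in range(n_points)]

rng = random.Random(5)
spectrum = lambda w: 1.0 / (1.0 + w * w)     # one-sided Lorentzian PSD
omegas = [0.05 + 0.1 * i for i in range(100)]
x = sample_process(2000, 0.1, omegas, spectrum, rng)

# The sample variance should approximate the integral of the PSD
# over the truncated band (about arctan(10) here).
var = sum(v * v for v in x) / len(x)
print(var)
```

A sample like `x` can then be fed as an input or boundary condition to a deterministic simulation code, exactly as the report describes.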
Stochastic simulation in systems biology
Székely, Tamás; Burrage, Kevin
2014-01-01
Natural systems are, almost by definition, heterogeneous: this can be either a boon or an obstacle to be overcome, depending on the situation. Traditionally, when constructing mathematical models of these systems, heterogeneity has typically been ignored, despite its critical role. However, in recent years, stochastic computational methods have become commonplace in science. They are able to appropriately account for heterogeneity; indeed, they are based around the premise that systems inherently contain at least one source of heterogeneity (namely, intrinsic heterogeneity). In this mini-review, we give a brief introduction to theoretical modelling and simulation in systems biology and discuss the three different sources of heterogeneity in natural systems. Our main topic is an overview of stochastic simulation methods in systems biology. There are many different types of stochastic methods. We focus on one group that has become especially popular in systems biology, biochemistry, chemistry and physics. These discrete-state stochastic methods do not follow individuals over time; rather they track only total populations. They also assume that the volume of interest is spatially homogeneous. We give an overview of these methods, with a discussion of the advantages and disadvantages of each, and suggest when each is more appropriate to use. We also include references to software implementations of them, so that beginners can quickly start using stochastic methods for practical problems of interest. PMID:25505503
Variance decomposition in stochastic simulators.
Le Maître, O P; Knio, O M; Moraes, A
2015-06-28
This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models. PMID:26133418
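A minimal sketch of the idea: drive each reaction channel of a birth-death process with its own standardized stream (the random-time-change reformulation), then estimate one channel's variance contribution by freezing its stream and averaging over the other. Constants and the brute-force estimator are illustrative, not the paper's algorithm:

```python
import math
import random

def endpoint(k_b, k_d, t_end, rng_b, rng_d):
    """Birth-death endpoint where each reaction channel is driven by its
    own independent unit Poisson stream (next-reaction formulation)."""
    t, n = 0.0, 0
    pb = -math.log(rng_b.random())       # next firing of the birth clock
    pd = -math.log(rng_d.random())       # next firing of the death clock
    ib = idt = 0.0                       # integrated internal times
    while True:
        a_b, a_d = k_b, k_d * n
        dt_b = (pb - ib) / a_b
        dt_d = (pd - idt) / a_d if a_d > 0 else float('inf')
        dt = min(dt_b, dt_d)
        if t + dt > t_end:
            return n
        t += dt
        ib += a_b * dt
        idt += a_d * dt
        if dt_b <= dt_d:
            n += 1
            pb += -math.log(rng_b.random())
        else:
            n -= 1
            pd += -math.log(rng_d.random())

rng = random.Random(0)
inner, outer = 20, 150
all_vals, cond_means = [], []
for _ in range(outer):
    seed_b = rng.randrange(10 ** 9)      # freeze the birth-channel stream
    vals = [endpoint(5.0, 1.0, 2.0, random.Random(seed_b),
                     random.Random(rng.randrange(10 ** 9)))
            for _ in range(inner)]
    all_vals.extend(vals)
    cond_means.append(sum(vals) / inner)

mean = sum(all_vals) / len(all_vals)
total_var = sum((v - mean) ** 2 for v in all_vals) / len(all_vals)
m = sum(cond_means) / outer
birth_effect = sum((c - m) ** 2 for c in cond_means) / outer
print(birth_effect, total_var)
```

The variance of the conditional means approximates the Sobol main effect of the birth channel; the paper's Sobol-Hoeffding machinery makes this decomposition exact and extends it to channel interactions.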
NASA Technical Reports Server (NTRS)
Lacksonen, Thomas A.
1994-01-01
Small space flight project design at NASA Langley Research Center goes through a multi-phase process from preliminary analysis to flight operations. The process ensures that each system achieves its technical objectives with demonstrated quality and within planned budgets and schedules. A key technical component of the early phases is decision analysis, a structured procedure for determining the best of a number of feasible concepts based upon project objectives. Feasible system concepts are generated by the designers and analyzed for schedule, cost, risk, and technical measures. Each performance measure value is normalized between the best and worst values, and a weighted-average score of all measures is calculated for each concept. The concept(s) with the highest scores are retained, while others are eliminated from further analysis. This project automated and enhanced the decision analysis process. Automation was achieved by creating a user-friendly, menu-driven, spreadsheet-macro-based decision analysis software program. The program contains data entry dialog boxes, automated data and output report generation, and automated output chart generation. The enhancements permit stochastic data entry and analysis. Rather than enter single measure values, the designers enter the range and most likely value for each measure and concept. The data can be entered at the system or subsystem level. System-level data can be calculated as sum, maximum, or product functions of the subsystem data. For each concept, the probability distributions of each measure and of the total score are approximated as constant, triangular, normal, or log-normal distributions. Based on these distributions, formulas are derived for the probability that the concept meets any given constraint, the probability that the concept meets all constraints, and the probability that the concept is within a given
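The stochastic enhancement described above maps directly onto the standard library's triangular distribution; a sketch of the weighted-average scoring with hypothetical measure values:

```python
import random

def concept_scores(measures, n_samples, rng):
    """Monte Carlo scores for one concept: each measure is
    (low, mode, high, weight); draw triangular samples and form the
    weighted-average score."""
    total_w = sum(w for _lo, _mode, _hi, w in measures)
    return [sum(w * rng.triangular(lo, hi, mode)
                for lo, mode, hi, w in measures) / total_w
            for _ in range(n_samples)]

rng = random.Random(2)
# Hypothetical cost, schedule, and technical measures normalized to [0, 1].
measures = [(0.2, 0.5, 0.9, 2.0), (0.1, 0.4, 0.6, 1.0), (0.5, 0.8, 1.0, 1.0)]
scores = concept_scores(measures, 10000, rng)

# Probability that the concept clears a score constraint of 0.5:
p_meets = sum(s > 0.5 for s in scores) / len(scores)
print(p_meets)
```

Sampling replaces the closed-form probability approximations with an empirical distribution, at the cost of more evaluations per concept.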
Correlation functions in stochastic inflation
NASA Astrophysics Data System (ADS)
Vennin, Vincent; Starobinsky, Alexei A.
2015-09-01
Combining the stochastic and δN formalisms, we derive non-perturbative analytical expressions for all correlation functions of scalar perturbations in single-field, slow-roll inflation. The standard, classical formulas are recovered as saddle-point limits of the full results. This yields a classicality criterion that shows that stochastic effects are small only if the potential is sub-Planckian and not too flat. The saddle-point approximation also provides an expansion scheme for calculating stochastic corrections to observable quantities perturbatively in this regime. In the opposite regime, we show that a strong suppression in the power spectrum is generically obtained, and we comment on the physical implications of this effect.
Stochastic determination of matrix determinants
NASA Astrophysics Data System (ADS)
Dorn, Sebastian; Enßlin, Torsten A.
2015-07-01
Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, linear operations—matrices—acting on the data are often not accessible directly but are only represented indirectly in form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, there is no stochastic estimate for its determinant. We introduce a probing method for the logarithm of a determinant of a linear operator. Our method rests upon a reformulation of the log-determinant by an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.
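The probing idea can be sketched with a tiny symmetric positive definite operator available only through a matvec routine: estimate ln det A = tr ln A with Rademacher probe vectors, computing ln(A)v via a truncated series (the 2x2 matrix, scaling constant, and series length are illustrative assumptions, not the paper's integral representation):

```python
import math
import random

def matvec(v):
    """The linear operator, available only as a routine: A = [[2, 1], [1, 3]]."""
    return [2 * v[0] + 1 * v[1], 1 * v[0] + 3 * v[1]]

def log_matvec(v, c=4.0, terms=60):
    """ln(A) v via ln(A) = ln(c) I + ln(I - B) with B = I - A/c,
    expanding ln(I - B) = -sum_k B^k / k (valid since ||B|| < 1 here)."""
    def bmv(u):
        au = matvec(u)
        return [u[i] - au[i] / c for i in range(len(u))]
    out = [math.log(c) * x for x in v]
    bk = v[:]
    for k in range(1, terms + 1):
        bk = bmv(bk)
        out = [o - b / k for o, b in zip(out, bk)]
    return out

def logdet_probe(n, n_probes, rng):
    """Stochastic log-determinant: ln det A = tr ln A, estimated as the
    mean of z^T ln(A) z over Rademacher probe vectors z."""
    acc = 0.0
    for _ in range(n_probes):
        z = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        lz = log_matvec(z)
        acc += sum(a * b for a, b in zip(z, lz))
    return acc / n_probes

rng = random.Random(9)
est = logdet_probe(2, 4000, rng)
print(est, math.log(5.0))   # det A = 2*3 - 1*1 = 5
```

Only matrix-vector products are ever used, which is exactly the setting of implicitly defined operators described above.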
Nonlinear optimization for stochastic simulations.
Johnson, Michael M.; Yoshimura, Ann S.; Hough, Patricia Diane; Ammerlahn, Heidi R.
2003-12-01
This report describes research targeting development of stochastic optimization algorithms and their application to mission-critical optimization problems in which uncertainty arises. The first section of this report covers the enhancement of the Trust Region Parallel Direct Search (TRPDS) algorithm to address stochastic responses and the incorporation of the algorithm into the OPT++ optimization library. The second section describes the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of systems analysis tools and motivates the use of stochastic optimization techniques in such non-deterministic simulations. The third section details a batch programming interface designed to facilitate criteria-based or algorithm-driven execution of system-of-system simulations. The fourth section outlines the use of the enhanced OPT++ library and batch execution mechanism to perform systems analysis and technology trade-off studies in the WMD detection and response problem domain.
Mechanical autonomous stochastic heat engines
NASA Astrophysics Data System (ADS)
Serra-Garcia, Marc; Foehr, Andre; Moleron, Miguel; Lydon, Joseph; Chong, Christopher; Daraio, Chiara; Daraio Team
Stochastic heat engines extract work from the Brownian motion of a set of particles out of equilibrium. So far, experimental demonstrations of stochastic heat engines have required extreme operating conditions or nonautonomous external control systems. In this talk, we will present a simple, purely classical, autonomous stochastic heat engine that uses the well-known tension induced nonlinearity in a string. Our engine operates between two heat baths out of equilibrium, and transfers energy from the hot bath to a work reservoir. This energy transfer occurs even if the work reservoir is at a higher temperature than the hot reservoir. The talk will cover a theoretical investigation and experimental results on a macroscopic setup subject to external noise excitations. This system presents an opportunity for the study of non equilibrium thermodynamics and is an interesting candidate for innovative energy conversion devices.
Stochastic Control of Pharmacokinetic Systems
Schumitzky, Alan; Milman, Mark; Katz, Darryl; D'Argenio, David Z.; Jelliffe, Roger W.
1983-01-01
The application of stochastic control theory to the clinical problem of designing a dosage regimen for a pharmacokinetic system is considered. This involves defining a patient-dependent pharmacokinetic model and a clinically appropriate therapeutic goal. Most investigators have attacked the dosage regimen problem by first estimating the values of the patient's unknown model parameters and then controlling the system as if those parameter estimates were in fact the true values. We have developed an alternative approach utilizing stochastic control theory in which the estimation and control phases of the problem are not separated. Mathematical results are given which show that this approach yields significant potential improvement in attaining, for example, therapeutic serum level goals over methods in which estimation and control are separated. Finally, a computer simulation is given for the optimal stochastic control of an aminoglycoside regimen which shows that this approach is feasible for practical applications.
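The gain from not separating estimation and control can be seen in a one-line dosing caricature: under parameter uncertainty, the dose minimizing the expected squared deviation from a target level differs from the certainty-equivalence dose built from a point estimate. A sketch with a hypothetical lognormal parameter spread:

```python
import random

def expected_loss(dose, ks, target):
    """Monte Carlo E[(k * dose - target)^2] over parameter samples k,
    where k maps dose to achieved level in a toy linear model."""
    return sum((k * dose - target) ** 2 for k in ks) / len(ks)

rng = random.Random(4)
target = 10.0
# Uncertain patient parameter (hypothetical lognormal spread).
ks = [rng.lognormvariate(0.0, 0.4) for _ in range(20000)]

mean_k = sum(ks) / len(ks)
mean_k2 = sum(k * k for k in ks) / len(ks)

dose_ce = target / mean_k              # certainty-equivalence dose
dose_sc = target * mean_k / mean_k2    # minimizes the expected loss directly

print(expected_loss(dose_sc, ks, target), expected_loss(dose_ce, ks, target))
```

The stochastic-control dose is systematically smaller whenever the parameter has nonzero variance, and it achieves a lower expected loss, which is the flavor of improvement the abstract reports.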
Stochastic kinetic mean field model
NASA Astrophysics Data System (ADS)
Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.
2016-07-01
This paper introduces a new model for calculating the change in time of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach; however, we have transformed that model into a stochastic approach by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to those of lattice kinetic Monte Carlo (KMC). SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open-source program code is provided).
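The recipe sketched in the abstract, a deterministic mean-field update plus dynamic Langevin noise, can be illustrated in a few lines. This is a schematic one-dimensional toy, not the published SKMF equations; the neighbor-averaging exchange rule, the noise amplitude `amp`, and the clipping to [0, 1] are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 64, 1e-3, 5000
amp = 0.05                       # noise amplitude (a free parameter in SKMF-like schemes)
c = rng.random(n)                # site occupation probabilities on a 1D ring

for _ in range(steps):
    left, right = np.roll(c, 1), np.roll(c, -1)
    # deterministic mean-field drift: relax each site toward its neighbor average
    drift = 0.5 * (left + right) - c
    # dynamic Langevin noise, scaled by sqrt(dt) so its strength is step-size independent
    noise = amp * np.sqrt(dt) * rng.standard_normal(n)
    c = np.clip(c + drift * dt + noise, 0.0, 1.0)
```

Without the noise term this is a plain deterministic mean-field relaxation; the stochastic term is what lets the configuration keep fluctuating, KMC-style, instead of freezing at the smooth fixed point.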
QB1 - Stochastic Gene Regulation
Munsky, Brian
2012-07-23
Summaries of this presentation are: (1) Stochastic fluctuations or 'noise' are present in the cell - random motion and competition between reactants, low copy numbers and quantization of reactants, upstream processes; (2) Fluctuations may be very important - cell-to-cell variability, cell fate decisions (switches), signal amplification or damping, stochastic resonances; and (3) Some tools are available to model these - kinetic Monte Carlo simulations (SSA and variants), moment approximation methods, Finite State Projection. We will see how modeling these reactions can tell us more about the underlying processes of gene regulation.
Stochastic modeling of Lagrangian accelerations
NASA Astrophysics Data System (ADS)
Reynolds, Andy
2002-11-01
It is shown how Sawford's second-order Lagrangian stochastic model (Phys. Fluids A 3, 1577-1586, 1991) for fluid-particle accelerations can be combined with a model for the evolution of the dissipation rate (Pope and Chen, Phys. Fluids A 2, 1437-1449, 1990) to produce a Lagrangian stochastic model that is consistent with both the measured distribution of Lagrangian accelerations (La Porta et al., Nature 409, 1017-1019, 2001) and Kolmogorov's similarity theory. The latter condition is found not to be satisfied when a constant dissipation rate is employed and consistency with prescribed acceleration statistics is enforced through fulfilment of a well-mixed condition.
Stochastic Cooling Developments at GSI
Nolden, F.; Beckert, K.; Beller, P.; Dolinskii, A.; Franzke, B.; Jandewerth, U.; Nesmiyan, I.; Peschke, C.; Petri, P.; Steck, M.; Caspers, F.; Moehl, D.; Thorndahl, L.
2006-03-20
Stochastic Cooling is presently used at the existing storage ring ESR as a first stage of cooling for secondary heavy ion beams. In the frame of the FAIR project at GSI, stochastic cooling is planned to play a major role for the preparation of high quality antiproton and rare isotope beams. The paper describes the existing ESR system, the first stage cooling system at the planned Collector Ring, and will also cover first steps toward the design of an antiproton collection system at the planned RESR ring.
Stochastic Optimization of Complex Systems
Birge, John R.
2014-03-20
This project focused on methodologies for the solution of stochastic optimization problems based on relaxation and penalty methods, Monte Carlo simulation, parallel processing, and inverse optimization. The main results of the project were the development of a convergent method for the solution of models that include expectation constraints as in equilibrium models, improvement of Monte Carlo convergence through the use of a new method of sample batch optimization, the development of new parallel processing methods for stochastic unit commitment models, and the development of improved methods in combination with parallel processing for incorporating automatic differentiation methods into optimization.
Some remarks on Nelson's stochastic field
NASA Astrophysics Data System (ADS)
Lim, S. C.
1980-09-01
An attempt to extend Nelson's stochastic quantization procedure to tensor fields indicates that the results of Guerra et al. on the connection between a Euclidean Markov scalar field and a stochastic scalar field fail to hold for tensor fields.
Theory, technology, and technique of stochastic cooling
Marriner, J.
1993-10-01
The theory and technological implementation of stochastic cooling is described. Theoretical and technological limitations are discussed. Data from existing stochastic cooling systems are shown to illustrate some useful techniques.
Partial ASL extensions for stochastic programming.
Energy Science and Technology Software Center (ESTSC)
2010-03-31
Partially completed extensions for stochastic programming to the AMPL/solver interface library (ASL), intended for modeling and experimenting with stochastic recourse problems. This software is not primarily for military applications.
The Hamiltonian Mechanics of Stochastic Acceleration
Burby, J. W.
2013-07-17
We show how to find the physical Langevin equation describing the trajectories of particles undergoing collisionless stochastic acceleration. These stochastic differential equations retain not only one-, but two-particle statistics, and inherit the Hamiltonian nature of the underlying microscopic equations. This opens the door to using stochastic variational integrators to perform simulations of stochastic interactions such as Fermi acceleration. We illustrate the theory by applying it to two example problems.
Stochastically forced zonal flows
NASA Astrophysics Data System (ADS)
Srinivasan, Kaushik
an approximate equation for the vorticity correlation function that is then solved perturbatively. The Reynolds stress of the perturbative solution can then be expressed as a function of the mean-flow and its y-derivatives. In particular, it is shown that as long as the forcing breaks mirror-symmetry, the Reynolds stress has a wave-like term, as a result of which the mean-flow is governed by a dispersive wave equation. In a separate study, the Reynolds stress induced by an anisotropically forced unbounded Couette flow with uniform shear gamma, on a beta-plane, is calculated in conjunction with the eddy diffusivity of a co-evolving passive tracer. The flow is damped by linear drag on a time scale mu^-1. The stochastic forcing is controlled by a parameter alpha that characterizes whether eddies are elongated along the zonal direction (alpha < 0), the meridional direction (alpha > 0) or are isotropic (alpha = 0). The Reynolds stress varies linearly with alpha and non-linearly and non-monotonically with gamma; but the Reynolds stress is independent of beta. For positive values of alpha, the Reynolds stress displays an "anti-frictional" effect (energy is transferred from the eddies to the mean flow) and a frictional effect for negative values of alpha. With gamma = beta = 0, the meridional tracer eddy diffusivity is v'^2/(2 mu), where v' is the meridional eddy velocity. In general, beta and gamma suppress the diffusivity below v'^2/(2 mu).
Stability of stochastic switched SIRS models
NASA Astrophysics Data System (ADS)
Meng, Xiaoying; Liu, Xinzhi; Deng, Feiqi
2011-11-01
Stochastic stability problems of a stochastic switched SIRS model with or without distributed time delay are considered. By utilizing the Lyapunov methods, sufficient stability conditions of the disease-free equilibrium are established. Stability conditions about the subsystem of the stochastic switched SIRS systems are also obtained.
Stochastic architecture for Hopfield neural nets
NASA Technical Reports Server (NTRS)
Pavel, Sandy
1992-01-01
An expandable stochastic digital architecture for recurrent (Hopfield-like) neural networks is proposed. The main features and basic principles of stochastic processing are presented. The stochastic digital architecture is based on a chip with n fully interconnected neurons with a pipelined, bit-processing structure. For large applications, a flexible way to interconnect many such chips is provided.
Stochastic Resonance and Information Processing
NASA Astrophysics Data System (ADS)
Nicolis, C.
2014-12-01
A dynamical system giving rise to multiple steady states and subjected to noise and a periodic forcing is analyzed from the standpoint of information theory. It is shown that stochastic resonance has a clearcut signature on information entropy, information transfer and other related quantities characterizing information transduction within the system.
Stochastic-field cavitation model
Dumond, J.; Magagnato, F.; Class, A.
2013-07-15
Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian “particles” or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.
Stochastic resonance on a circle
Wiesenfeld, K.; Pierson, D.; Pantazelou, E.; Dames, C.; Moss, F.
1994-04-04
We describe a new realization of stochastic resonance, applicable to a broad class of systems, based on an underlying excitable dynamics with deterministic reinjection. A simple but general theory of such "single-trigger" systems is compared with analog simulations of the Fitzhugh-Nagumo model, as well as experimental data obtained from stimulated sensory neurons in the crayfish.
Universality in stochastic exponential growth.
Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R
2014-07-11
Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth. PMID:25062238
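The contrast the authors draw with geometric Brownian motion can be made concrete: under GBM the ensemble mean grows exponentially in time, and rescaling sizes by that mean collapses the distribution, the two observations the abstract starts from. A minimal sketch, with parameters `mu` and `sigma` chosen arbitrarily for illustration rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, dt = 1.0, 0.4, 1e-3
n_cells, n_steps = 20000, 1000          # evolve an ensemble of "cells" to t = 1

x = np.ones(n_cells)                    # initial sizes
for _ in range(n_steps):
    # Euler step of geometric Brownian motion: dx = mu*x*dt + sigma*x*dW
    x *= 1.0 + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_cells)

t = n_steps * dt
print(x.mean())                         # mean size grows as exp(mu*t)
# x / x.mean() is lognormal with shape independent of the initial size,
# so histograms at different times collapse onto one curve after rescaling
```

The stochastic Hinshelwood cycle of the paper is a reaction-network mechanism, not a GBM, but this sketch shows the scaling signatures the model is built to explain.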
Stochastic cooling: recent theoretical directions
Bisognano, J.
1983-03-01
A kinetic-equation derivation of the stochastic-cooling Fokker-Planck equation of correlation is introduced to describe both the Schottky spectrum and signal suppression. Generalizations to nonlinear gain and coupling between degrees of freedom are presented. Analysis of bunched-beam cooling is included.
NASA Astrophysics Data System (ADS)
Zhang, Ming
2015-10-01
A theory of 2-stage acceleration of Galactic cosmic rays in supernova remnants is proposed. The first stage is accomplished by the supernova shock front, where a power-law spectrum is established up to a certain cutoff energy. It is followed by stochastic acceleration with compressible waves/turbulence in the downstream medium. With a broad k^-2 spectrum for the compressible plasma fluctuations, the rate of stochastic acceleration is constant over a wide range of particle momentum. In this case, the stochastic acceleration process extends the power-law spectrum cutoff energy of Galactic cosmic rays to the knee without changing the spectral slope. This situation happens as long as the rate of stochastic acceleration is faster than 1/5 of the adiabatic cooling rate. A steeper spectrum of compressible plasma fluctuations that concentrates its power in long wavelengths will accelerate cosmic rays to the knee with a small bump before the cutoff in the cosmic-ray energy spectrum. This theory does not require a strong amplification of the magnetic field in the upstream interstellar medium in order to accelerate cosmic rays to the knee energy.
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
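The idea of estimating bounds by sampling can be illustrated on a toy two-stage problem. The newsvendor below stands in for the thesis's hydro-scheduling instances; the profit parameters and the uniform demand distribution are assumptions for illustration only. A sample-average approximation (SAA) gives a candidate solution, and evaluating it on a fresh sample yields the kind of confidence interval a sampling-based stopping rule relies on.

```python
import numpy as np

rng = np.random.default_rng(2)
price, cost = 2.0, 1.0
fractile = (price - cost) / price          # newsvendor critical fractile = 0.5

def profit(q, d):
    # second-stage value: sell min(q, d) at `price`, having ordered q at `cost`
    return price * np.minimum(q, d) - cost * q

# SAA: solve on one demand sample (solution = sample critical fractile) ...
train = rng.uniform(0, 100, size=2000)
q_hat = np.quantile(train, fractile)

# ... then estimate its true expected profit on an independent sample
test = rng.uniform(0, 100, size=50000)
vals = profit(q_hat, test)
est = vals.mean()
half_width = 1.96 * vals.std(ddof=1) / np.sqrt(len(vals))
print(q_hat, est, "+/-", half_width)       # true optimum here: q* = 50, profit = 25
```

Separating the solving sample from the evaluation sample is what makes the interval statement valid; reusing `train` for both would bias the estimate optimistically.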
Stochastic resonance in visual sensitivity.
Kundu, Ajanta; Sarkar, Sandip
2015-04-01
It is well known from psychophysical studies that stochastic resonance, in its simplest threshold paradigm, can be used as a tool to measure the detection sensitivity to fine details in noise-contaminated stimuli. In the present manuscript, we report simulation studies conducted in a similar threshold paradigm of stochastic resonance. We have estimated the contrast sensitivity in detecting noisy sine-wave stimuli, with varying area and spatial frequency, as a function of noise strength. In all the cases, the measured sensitivity attained a peak at intermediate noise strength, indicating the occurrence of stochastic resonance. The peak sensitivity exhibited a strong dependence on area and spatial frequency of the stimulus. We show that the peak contrast sensitivity varies with spatial frequency in a nonmonotonic fashion and that the qualitative nature of the sensitivity variation is in good agreement with the human contrast sensitivity function. We also demonstrate that the peak sensitivity first increases and then saturates with increasing area, in line with the results of psychophysical experiments. Additionally, we show that the critical area, denoting the saturation of contrast sensitivity, decreases with spatial frequency, and that the associated maximum contrast sensitivity varies with spatial frequency in a manner consistent with the results of psychophysical experiments. In all the studies, the sensitivities were elevated via a nonlinear filtering operation called stochastic resonance. Because of this nonlinear effect, it was not guaranteed that the sensitivities estimated at each frequency would agree with the corresponding results of psychophysical experiments; on the contrary, close agreement was observed between our results and the findings of psychophysical investigations. These observations indicate the utility of stochastic resonance in human vision and suggest that this paradigm can be useful in psychophysical studies.
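The threshold paradigm of stochastic resonance is easy to reproduce in simulation: a subthreshold periodic stimulus plus noise is passed through a hard threshold, and detection quality peaks at an intermediate noise strength. A minimal sketch, where the sinusoid amplitude, threshold, and correlation measure are illustrative choices rather than the manuscript's actual stimuli:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
t = np.arange(n)
signal = 0.5 * np.sin(2 * np.pi * t / 100)   # subthreshold: peak 0.5 < threshold 1.0
theta = 1.0

def coherence(noise_std):
    # hard-threshold detector: output 1 whenever signal + noise exceeds theta
    out = (signal + noise_std * rng.standard_normal(n) > theta).astype(float)
    # correlate detector output with the clean signal (a simple detection measure)
    return np.corrcoef(out, signal)[0, 1]

lo, mid, hi = coherence(0.25), coherence(0.6), coherence(5.0)
print(lo, mid, hi)   # small at low and high noise, maximal in between
```

With too little noise the signal never crosses the threshold; with too much, crossings are dominated by noise. The non-monotonic peak in between is the stochastic resonance signature the abstract describes.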
Stochastic Models of Quantum Decoherence
NASA Astrophysics Data System (ADS)
Kennerly, Sam
Suppose a single qubit is repeatedly prepared and evolved under imperfectly-controlled conditions. A drunk model represents uncontrolled interactions on each experimental trial as random or stochastic terms in the qubit's Hamiltonian operator. Time evolution of states is generated by a stochastic differential equation whose sample paths evolve according to the Schrodinger equation. For models with Gaussian white noise which is independent of the qubit's state, the expectation value of the solution obeys a master equation which is identical to the high-temperature limit of the Bloch equation. Drunk models predict that experimental data can appear consistent with decoherence even if qubit states evolve by unitary transformations. Examples are shown in which reversible evolution appears to cause irreversible information loss. This paradox is resolved by distinguishing between the true state of a system and the estimated state inferred from an experimental dataset.
Stochastic scanning multiphoton multifocal microscopy.
Jureller, Justin E; Kim, Hee Y; Scherer, Norbert F
2006-04-17
Multiparticle tracking with scanning confocal and multiphoton fluorescence imaging is increasingly important for elucidating biological function, as in the transport of intracellular cargo-carrying vesicles. We demonstrate a simple rapid-sampling stochastic scanning multifocal multiphoton microscopy (SS-MMM) fluorescence imaging technique that enables multiparticle tracking without specialized hardware at rates 1,000 times greater than conventional single-point raster scanning. Stochastic scanning of a diffractive-optic-generated 10x10 hexagonal array of foci with a white-noise-driven galvanometer yields a scan pattern that is random yet space-filling. SS-MMM creates a more uniformly sampled image with fewer spatio-temporal artifacts than obtained by conventional or multibeam raster scanning. SS-MMM is verified by simulation and experimentally demonstrated by tracking microsphere diffusion in solution. PMID:19516485
Stochastic thermodynamics with information reservoirs.
Barato, Andre C; Seifert, Udo
2014-10-01
We generalize stochastic thermodynamics to include information reservoirs. Such information reservoirs, which can be modeled as a sequence of bits, modify the second law. For example, work extraction from a system in contact with a single heat bath becomes possible if the system also interacts with an information reservoir. We obtain an inequality, and the corresponding fluctuation theorem, generalizing the standard entropy production of stochastic thermodynamics. From this inequality we can derive an information processing entropy production, which gives the second law in the presence of information reservoirs. We also develop a systematic linear response theory for information processing machines. For a unicyclic machine powered by an information reservoir, the efficiency at maximum power can deviate from the standard value of 1/2. For the case where energy is consumed to erase the tape, the efficiency at maximum erasure rate is found to be 1/2. PMID:25375481
Stochastic cooling technology at Fermilab
NASA Astrophysics Data System (ADS)
Pasquinelli, Ralph J.
2004-10-01
The first antiproton cooling systems were installed and commissioned at Fermilab in 1984-1985. In the interim period, there have been several major upgrades, system improvements, and complete reincarnation of cooling systems. This paper will present some of the technology that was pioneered at Fermilab to implement stochastic cooling systems in both the Antiproton Source and Recycler accelerators. Current performance data will also be presented.
Stochastic background of atmospheric cascades
NASA Astrophysics Data System (ADS)
Wilk, G.; Włodarczyk, Z.
1993-06-01
Fluctuations in the atmospheric cascades developing during the propagation of very high energy cosmic rays through the atmosphere are investigated using a stochastic branching model of a pure birth process with immigration. In particular, we show that the multiplicity distributions of secondaries emerging from gamma families are much narrower than those resulting from hadronic families. We argue that the strong intermittent-like behaviour found recently in atmospheric families results from the fluctuations in the cascades themselves and is insensitive to the details of elementary interactions.
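A pure birth process with immigration is straightforward to simulate with Gillespie-style event steps, and its multiplicity distribution is over-dispersed (negative-binomial-like), the kind of broad cascade fluctuation the abstract discusses. The rates below are illustrative choices, not values fitted to cosmic-ray data:

```python
import numpy as np

rng = np.random.default_rng(4)

def cascade_multiplicity(t_max, birth=1.0, immigration=0.5):
    """Pure birth process with immigration, simulated event by event."""
    n, t = 0, 0.0
    while True:
        rate = birth * n + immigration   # total event rate (immigration keeps it > 0)
        t += rng.exponential(1.0 / rate)
        if t > t_max:
            return n
        n += 1                           # either a birth or an immigration: n grows by 1

samples = np.array([cascade_multiplicity(2.0) for _ in range(5000)])
mean, var = samples.mean(), samples.var()
print(mean, var / mean)                  # var/mean well above 1: broader than Poisson
```

Raising the immigration rate relative to the birth rate narrows the distribution toward Poisson, which is the mechanism behind the narrow-versus-broad contrast drawn between gamma and hadronic families.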
Stochastic neural nets and vision
NASA Astrophysics Data System (ADS)
Fall, Thomas C.
1991-03-01
A stochastic neural net shares with conventional neural nets the concept that information is processed by a system consisting of a set of nodes (neurons) connected by weighted links (axons). A conventional neural net takes its inputs on an initial layer of neurons, which fire appropriately; a neuron of the next layer fires depending on the sum of the weights of the axons leading to it from fired neurons of the first layer. The stochastic neural net differs in that the neurons are more complex and the vision activity is a dynamic process. The first layer (viewing layer) of neurons fires stochastically based on the average brightness of the area it sees and then has a refractory period. The viewing layer looks at the image for several clock cycles. The effect is like that of photosensitive sunglasses that darken in bright light: the neurons over the bright areas are most likely in a refractory period (and thus cannot fire) while the neurons over the dark areas are not. Now if we move the sensing layer with respect to the image so that a portion of the neurons formerly over the dark areas are now over the bright, they will likely all fire on that first cycle. Thus, on that cycle, one would see a flash from that portion significantly stronger than from surrounding regions. Movement in the other direction would produce a patch that is darker, but this effect is not as noticeable. These effects are collected in a collection layer. This paper discusses the use of the stochastic neural net for edge detection and segmentation of some simple images.
Discrete stability in stochastic programming
Lepp, R.
1994-12-31
In this lecture we study stability properties of stochastic programs with recourse in which the probability measure is approximated by a sequence of weakly convergent discrete measures. Such a discrete approximation approach allows us to analyze explicitly the behavior of the second-stage correction function. The approach is based on modern functional-analytic methods for the approximation of extremum problems in function spaces, especially on the notion of discrete convergence of vectors to an essentially bounded measurable function.
Symmetry and Stochastic Gene Regulation
NASA Astrophysics Data System (ADS)
Ramos, Alexandre F.; Hornos, José E. M.
2007-09-01
Lorentz-like noncompact Lie symmetry SO(2,1) is found in a spin-boson stochastic model for gene expression. The invariant of the algebra characterizes the switch decay to equilibrium. The azimuthal eigenvalue describes the affinity between the regulatory protein and the gene operator site. Raising and lowering operators are constructed and their actions increase or decrease the affinity parameter. The classification of the noise regime of the gene arises from the group theoretical numbers.
Mechanical Autonomous Stochastic Heat Engine.
Serra-Garcia, Marc; Foehr, André; Molerón, Miguel; Lydon, Joseph; Chong, Christopher; Daraio, Chiara
2016-07-01
Stochastic heat engines are devices that generate work from random thermal motion using a small number of highly fluctuating degrees of freedom. Proposals for such devices have existed for more than a century and include the Maxwell demon and the Feynman ratchet. Only recently have they been demonstrated experimentally, using, e.g., thermal cycles implemented in optical traps. However, recent experimental demonstrations of classical stochastic heat engines are nonautonomous, since they require an external control system that prescribes a heating and cooling cycle and consume more energy than they produce. We present a heat engine consisting of three coupled mechanical resonators (two ribbons and a cantilever) subject to a stochastic drive. The engine uses geometric nonlinearities in the resonating ribbons to autonomously convert a random excitation into a low-entropy, nonpassive oscillation of the cantilever. The engine presents the anomalous heat transport property of negative thermal conductivity, consisting in the ability to passively transfer energy from a cold reservoir to a hot reservoir. PMID:27419553
Multiple fields in stochastic inflation
NASA Astrophysics Data System (ADS)
Assadullahi, Hooshyar; Firouzjahi, Hassan; Noorbala, Mahdiyar; Vennin, Vincent; Wands, David
2016-06-01
Stochastic effects in multi-field inflationary scenarios are investigated. A hierarchy of diffusion equations is derived, the solutions of which yield moments of the numbers of inflationary e-folds. Solving the resulting partial differential equations in multi-dimensional field space is more challenging than in the single-field case. A few tractable examples are discussed, which show that the number of fields is, in general, a critical parameter. When more than two fields are present, for instance, the probability to explore arbitrarily large-field regions of the potential, otherwise inaccessible to single-field dynamics, becomes non-zero. In some configurations, this gives rise to an infinite mean number of e-folds, regardless of the initial conditions. Another difference with respect to single-field scenarios is that multi-field stochastic effects can be large even at sub-Planckian energy. This opens interesting new possibilities for probing quantum effects in inflationary dynamics, since the moments of the numbers of e-folds can be used to calculate the distribution of primordial density perturbations in the stochastic-δN formalism.
AESS: Accelerated Exact Stochastic Simulation
NASA Astrophysics Data System (ADS)
Jenkins, David D.; Peterson, Gregory D.
2011-12-01
The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results.
Program summary:
Program title: AESS
Catalogue identifier: AEJW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: University of Tennessee copyright agreement
No. of lines in distributed program, including test data, etc.: 10 861
No. of bytes in distributed program, including test data, etc.: 394 631
Distribution format: tar.gz
Programming language: C for processors, CUDA for NVIDIA GPUs
Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs. The system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators.
Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS
Classification: 3, 16.12
Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME.
Solution
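Gillespie's direct method, the algorithm AESS accelerates, fits in a dozen lines for the simplest birth-death system (constant production, linear decay), whose stationary distribution is Poisson with mean k/gamma. This is a plain reference sketch of the direct method, not AESS's optimized implementation; the rates and time horizon are arbitrary illustrative choices.

```python
import random

def ssa_birth_death(k=10.0, gamma=1.0, t_end=2000.0, seed=42):
    """Gillespie direct method for production (rate k) and decay (rate gamma*n)."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    weighted, burn = 0.0, 100.0          # time-weighted average after a burn-in
    while t < t_end:
        a1, a2 = k, gamma * n            # reaction propensities
        a0 = a1 + a2
        dt = rng.expovariate(a0)         # exponential waiting time to next event
        if t > burn:
            weighted += n * min(dt, t_end - t)
        t += dt
        # pick the reaction with probability proportional to its propensity
        n += 1 if rng.random() * a0 < a1 else -1
    return weighted / (t_end - burn)

mean = ssa_birth_death()
print(mean)   # steady state is Poisson with mean k/gamma = 10
```

Note that the average is weighted by the holding time in each state; averaging over events instead would over-count the short-lived high-population states.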
2-Stage Classification Modeling
Energy Science and Technology Software Center (ESTSC)
1994-11-01
CIRCUIT2.4 is used to design optimum two-stage classification configurations and operating conditions for energy conservation. It permits simulation of five basic grinding-classification circuits, including one single-stage and four two-stage classification arrangements. Hydrocyclones, spiral classifiers, and sieve band screens can be simulated, and the user may choose the combination of devices for the flowsheet simulation. In addition, the user may select from four classification modeling methods to achieve the goals of a simulation project using the most familiar concepts. Circuit performance is modeled based on classification parameters or equipment operating conditions. A modular approach was taken in designing the program, which allows future addition of other models with relatively minor changes.
Long time behaviour of a stochastic nanoparticle
NASA Astrophysics Data System (ADS)
Étoré, Pierre; Labbé, Stéphane; Lelong, Jérôme
2014-09-01
In this article we are interested in the behaviour of a single ferromagnetic mono-domain particle subjected to an external field with a stochastic perturbation. This model is a first step toward the mathematical understanding of thermal effects on a ferromagnet. In the first part, we present the stochastic model and prove that the associated stochastic differential equation is well defined. The second part is dedicated to the study of the long-time behaviour of the magnetic moment, and in the third part we prove that the stochastic perturbation induces a non-reversibility phenomenon. Last, we illustrate these results through numerical simulations of our stochastic model. The main results presented in this article are, on the one hand, the rate of convergence of the magnetization toward the unique stable equilibrium of the deterministic model and, on the other hand, a sharp estimate of the hysteresis phenomenon induced by the stochastic perturbation (recall that without perturbation the magnetic moment remains constant).
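The article's stochastic Landau-Lifshitz model is beyond a short sketch, but its headline behaviour, convergence of a noisy trajectory toward a stable equilibrium, can be illustrated with an Euler-Maruyama discretization of a simple linear SDE. The equation, coefficients, and the `euler_maruyama` helper below are illustrative stand-ins, not taken from the paper.

```python
import math
import random

def euler_maruyama(x0, theta, sigma, dt, n_steps, rng):
    """Euler-Maruyama discretization of the linear SDE
    dX = -theta * X dt + sigma dW, whose drift pulls X
    toward the stable equilibrium X = 0."""
    x = x0
    path = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
        x += -theta * x * dt + sigma * dw
        path.append(x)
    return path

rng = random.Random(0)
# start far from equilibrium; by t = 20 the drift dominates and the
# trajectory fluctuates in a small noise-driven band around 0
path = euler_maruyama(x0=5.0, theta=2.0, sigma=0.1, dt=0.01, n_steps=2000, rng=rng)
```

The deterministic part decays like exp(-theta*t); the noise keeps the trajectory in a band of width ~ sigma/sqrt(2*theta) around the equilibrium, which is the kind of rate-of-convergence statement the paper makes rigorous.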
Generalized spectral decomposition for stochastic nonlinear problems
Nouy, Anthony; Le Maitre, Olivier P.
2009-01-10
We present an extension of the generalized spectral decomposition method for the resolution of nonlinear stochastic problems. The method consists in the construction of a reduced basis approximation of the Galerkin solution and is independent of the stochastic discretization selected (polynomial chaos, stochastic multi-element or multi-wavelets). Two algorithms are proposed for the sequential construction of the successive generalized spectral modes. They involve decoupled resolutions of a series of deterministic and low-dimensional stochastic problems. Compared to the classical Galerkin method, the algorithms allow for significant computational savings and require minor adaptations of the deterministic codes. The methodology is detailed and tested on two model problems, the one-dimensional steady viscous Burgers equation and a two-dimensional nonlinear diffusion problem. These examples demonstrate the effectiveness of the proposed algorithms which exhibit convergence rates with the number of modes essentially dependent on the spectrum of the stochastic solution but independent of the dimension of the stochastic approximation space.
Ant colony optimization and stochastic gradient descent.
Meuleau, Nicolas; Dorigo, Marco
2002-01-01
In this article, we study the relationship between the two techniques known as ant colony optimization (ACO) and stochastic gradient descent. More precisely, we show that some empirical ACO algorithms approximate stochastic gradient descent in the space of pheromones, and we propose an implementation of stochastic gradient descent that belongs to the family of ACO algorithms. We then use this insight to explore the mutual contributions of the two techniques. PMID:12171633
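Plain stochastic gradient descent, the half of the correspondence that fits in a few lines, can be sketched as follows. The quadratic objective and noise model are illustrative; this is not an ACO implementation.

```python
import random

def sgd(grad_estimate, x0, lr, n_steps, rng):
    """Stochastic gradient descent: repeatedly step against a noisy,
    unbiased estimate of the gradient."""
    x = x0
    for _ in range(n_steps):
        x -= lr * grad_estimate(x, rng)
    return x

# minimize f(x) = (x - 3)^2 using gradients corrupted by zero-mean noise
def noisy_grad(x, rng):
    return 2.0 * (x - 3.0) + rng.gauss(0.0, 0.5)

rng = random.Random(1)
x_final = sgd(noisy_grad, x0=0.0, lr=0.05, n_steps=500, rng=rng)
```

The ACO connection drawn in the article is that pheromone updates can be read as such noisy gradient steps in pheromone space, with the ant sampling process supplying the unbiased gradient estimate.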
Stochastic Vorticity and Associated Filtering Theory
Amirdjanova, A.; Kallianpur, G.
2002-12-19
The focus of this work is on a two-dimensional stochastic vorticity equation for an incompressible homogeneous viscous fluid. We consider a signed measure-valued stochastic partial differential equation for a vorticity process based on the Skorohod-Ito evolution of a system of N randomly moving point vortices. A nonlinear filtering problem associated with the evolution of the vorticity is considered and a corresponding Fujisaki-Kallianpur-Kunita stochastic differential equation for the optimal filter is derived.
Stochastic Turing patterns on a network.
Asslani, Malbor; Di Patti, Francesca; Fanelli, Duccio
2012-10-01
The process of stochastic Turing instability on a scale-free network is discussed for a specific case study: the stochastic Brusselator model. The system is shown to spontaneously differentiate into activator-rich and activator-poor nodes outside the region of parameters classically deputed to the deterministic Turing instability. This phenomenon, as revealed by direct stochastic simulations, is explained analytically and eventually traced back to the finite-size corrections stemming from the inherent graininess of the scrutinized medium. PMID:23214650
Stochastic Turing patterns on a network
NASA Astrophysics Data System (ADS)
Asslani, Malbor; Di Patti, Francesca; Fanelli, Duccio
2012-10-01
The process of stochastic Turing instability on a scale-free network is discussed for a specific case study: the stochastic Brusselator model. The system is shown to spontaneously differentiate into activator-rich and activator-poor nodes outside the region of parameters classically deputed to the deterministic Turing instability. This phenomenon, as revealed by direct stochastic simulations, is explained analytically and eventually traced back to the finite-size corrections stemming from the inherent graininess of the scrutinized medium.
Stochastics In Circumplanetary Dust Dynamics
NASA Astrophysics Data System (ADS)
Spahn, F.; Krivov, A. V.; Sremcevic, M.; Schwarz, U.; Kurths, J.
Charged dust grains in circumplanetary environments experience, beyond various deterministic forces, also stochastic perturbations: e.g., fluctuations of the magnetic field, of the charge of the grains, etc. Here we investigate the dynamics of a dust population in a circular orbit around the planet which is perturbed by a stochastic magnetic field B, modeled by an isotropically Gaussian white noise. The resulting perturbation equations give rise to a modified diffusion of the inclinations and eccentricities, ⟨x²⟩ = D [t ± sin(2nt)/(2n)] (x is an alias for the eccentricity e and the inclination i; t is time). The diffusion coefficient is found to be D = ⟨G⟩²/n, where the gyrofrequency and the orbital frequency are denoted by G and n, respectively. This behavior has been checked by numerical experiments. We have chosen dust grains (1 µm in radius) initially moving in circular orbits around a planet (Jupiter) and integrated their trajectories numerically over their typical lifetimes (100 years). The particles were exposed to a Gaussian fluctuating magnetic field B obeying the same statistical properties as in the analytical treatment. In this case, the theoretical findings have been confirmed: ⟨x²⟩ ≈ D t with a diffusion coefficient D ≈ ⟨G⟩²/n. The theoretical studies showed the statistical properties of B to be of decisive importance. To this aim, we analyzed the magnetic field data measured by the Galileo magnetometer at Jupiter and found almost Gaussian fluctuations of about 5% of the mean field and exponentially decaying correlations. This results in a diffusion in the space of orbital elements of at least 1-5% (variations of inclinations and eccentricity) over the lifetime of the dust grains. For smaller dusty motes stochastics might well dominate the dynamics.
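The abstract's central numerical check, mean-square growth of an orbital element that is linear in time under Gaussian white noise, can be reproduced with a few lines of ensemble simulation. This toy random walk stands in for the full perturbation equations; `ensemble_msd` and all parameter values are illustrative.

```python
import math
import random

def ensemble_msd(n_particles, n_steps, dt, noise_std, rng):
    """Accumulate independent Gaussian white-noise kicks for each
    particle and return the ensemble mean-square displacement."""
    total = 0.0
    for _ in range(n_particles):
        x = 0.0
        for _ in range(n_steps):
            x += noise_std * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        total += x * x
    return total / n_particles

rng = random.Random(7)
dt, n_steps = 0.01, 1000          # total time t = 10
msd = ensemble_msd(500, n_steps, dt, noise_std=1.0, rng=rng)
# white noise gives <x^2> = D * t with D = noise_std**2, i.e. about 10 here
```

The same check applied to noise with exponentially decaying correlations (as measured by Galileo) rather than white noise is what makes the statistical properties of B "of decisive importance".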
Hamilton's principle in stochastic mechanics
NASA Astrophysics Data System (ADS)
Pavon, Michele
1995-12-01
In this paper we establish three variational principles that provide new foundations for Nelson's stochastic mechanics in the case of nonrelativistic particles without spin. The resulting variational picture is much richer and of a different nature with respect to the one previously considered in the literature. We first develop two stochastic variational principles whose Hamilton-Jacobi-like equations are precisely the two coupled partial differential equations that are obtained from the Schrödinger equation (Madelung equations). The two problems are zero-sum, noncooperative, stochastic differential games that are familiar in the control theory literature. They are solved here by means of a new, absolutely elementary method based on Lagrange functionals. For both games the saddle-point equilibrium solution is given by the Nelson's process and the optimal controls for the two competing players are precisely Nelson's current velocity v and osmotic velocity u, respectively. The first variational principle includes as special cases both the Guerra-Morato variational principle [Phys. Rev. D 27, 1774 (1983)] and Schrödinger original variational derivation of the time-independent equation. It also reduces to the classical least action principle when the intensity of the underlying noise tends to zero. It appears as a saddle-point action principle. In the second variational principle the action is simply the difference between the initial and final configurational entropy. It is therefore a saddle-point entropy production principle. From the variational principles it follows, in particular, that both v(x,t) and u(x,t) are gradients of appropriate principal functions. In the variational principles, the role of the background noise has the intuitive meaning of attempting to contrast the more classical mechanical features of the system by trying to maximize the action in the first principle and by trying to increase the entropy in the second. Combining the two variational
Stochastic Models of Human Errors
NASA Technical Reports Server (NTRS)
Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)
2002-01-01
Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human error. Therefore, in order to have a meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process with mathematical models is key to analyzing contributing factors. The objective of this research effort is therefore to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in the processing of the Space Shuttle.
Stochastic thermodynamics of information processing
NASA Astrophysics Data System (ADS)
Cardoso Barato, Andre
2015-03-01
We consider two recent advancements on theoretical aspects of thermodynamics of information processing. First we show that the theory of stochastic thermodynamics can be generalized to include information reservoirs. These reservoirs can be seen as a sequence of bits which has its Shannon entropy changed due to the interaction with the system. Second we discuss bipartite systems, which provide a convenient description of Maxwell's demon. Analyzing a special class of bipartite systems we show that they can be used to study cellular information processing, allowing for the definition of an entropic rate that quantifies how much a cell learns about a fluctuating external environment and that is bounded by the thermodynamic entropy production.
Stochastic solution to quantum dynamics
NASA Technical Reports Server (NTRS)
John, Sarah; Wilson, John W.
1994-01-01
The quantum Liouville equation in the Wigner representation is solved numerically by using Monte Carlo methods. For incremental time steps, the propagation is implemented as a classical evolution in phase space modified by a quantum correction. The correction, which is a momentum jump function, is simulated in the quasi-classical approximation via a stochastic process. The technique, which is developed and validated in two- and three-dimensional momentum space, extends an earlier one-dimensional work. Also, by developing a new algorithm, the application to bound-state motion in an anharmonic quartic potential shows better agreement with exact solutions in two-dimensional phase space.
Constrained Stochastic Extended Redundancy Analysis.
DeSarbo, Wayne S; Hwang, Heungsun; Stadler Blank, Ashley; Kappe, Eelco
2015-06-01
We devise a new statistical methodology called constrained stochastic extended redundancy analysis (CSERA) to examine the comparative impact of various conceptual factors, or drivers, as well as the specific predictor variables that contribute to each driver on designated dependent variable(s). The technical details of the proposed methodology, the maximum likelihood estimation algorithm, and model selection heuristics are discussed. A sports marketing consumer psychology application is provided in a Major League Baseball (MLB) context where the effects of six conceptual drivers of game attendance and their defining predictor variables are estimated. Results compare favorably to those obtained using traditional extended redundancy analysis (ERA). PMID:24327066
Image-based histologic grade estimation using stochastic geometry analysis
NASA Astrophysics Data System (ADS)
Petushi, Sokol; Zhang, Jasper; Milutinovic, Aladin; Breen, David E.; Garcia, Fernando U.
2011-03-01
Background: Low reproducibility of histologic grading of breast carcinoma due to its subjectivity has traditionally diminished the prognostic value of histologic breast cancer grading. The objective of this study is to assess the effectiveness and reproducibility of grading breast carcinomas with automated computer-based image processing that utilizes stochastic geometry shape analysis. Methods: We used histology images stained with Hematoxylin & Eosin (H&E) from invasive mammary carcinoma, no special type cases as a source domain and study environment. We developed a customized hybrid semi-automated segmentation algorithm to cluster the raw image data and reduce the image domain complexity to a binary representation with the foreground representing regions of high density of malignant cells. A second algorithm was developed to apply stochastic geometry and texture analysis measurements to the segmented images and to produce shape distributions, transforming the original color images into a histogram representation that captures their distinguishing properties between various histological grades. Results: Computational results were compared against known histological grades assigned by the pathologist. The Earth Mover's Distance (EMD) similarity metric and the K-Nearest Neighbors (KNN) classification algorithm provided correlations between the high-dimensional set of shape distributions and a priori known histological grades. Conclusion: Computational pattern analysis of histology shows promise as an effective software tool in breast cancer histological grading.
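For one-dimensional histograms the Earth Mover's Distance reduces to the L1 distance between cumulative distributions, which makes the EMD + KNN pipeline easy to sketch. The toy "shape distributions" and grade labels below are invented for illustration and are unrelated to the study's data.

```python
def emd_1d(p, q):
    """Earth Mover's Distance between two equal-mass 1-D histograms:
    in one dimension it equals the L1 distance between the CDFs."""
    running, dist = 0.0, 0.0
    for pi, qi in zip(p, q):
        running += pi - qi
        dist += abs(running)
    return dist

def knn_classify(query, labeled, k):
    """k-nearest-neighbour vote using EMD as the dissimilarity."""
    ranked = sorted(labeled, key=lambda item: emd_1d(query, item[0]))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

# toy "shape distributions": low grade concentrated in early bins,
# high grade in late bins (invented data, purely illustrative)
labeled = [
    ([0.7, 0.2, 0.1, 0.0], "grade1"),
    ([0.6, 0.3, 0.1, 0.0], "grade1"),
    ([0.0, 0.1, 0.2, 0.7], "grade3"),
    ([0.1, 0.1, 0.2, 0.6], "grade3"),
]
label = knn_classify([0.65, 0.25, 0.1, 0.0], labeled, k=3)
```

In the study the histograms are high-dimensional shape distributions extracted from segmented H&E images, but the comparison step has exactly this structure.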
Stochastic models for cell division
NASA Astrophysics Data System (ADS)
Stukalin, Evgeny; Sun, Sean
2013-03-01
The probability of cell division per unit time strongly depends on the age of a cell, i.e., the time elapsed since its birth. The theory of cell populations in the age-time representation is systematically applied to modeling cell division for different spreads in generation times. We use stochastic simulations to address the same issue at the level of individual cells. Our approach, unlike the deterministic theory, enables us to analyze the size fluctuations of cell colonies under different growth conditions (in the absence and presence of cell death, for initially synchronized and asynchronous cell populations, and under conditions of restricted growth). We find a simple quantitative relation between the asymptotic values of the relative size fluctuations around the mean for initially synchronized growing cell populations and the coefficients of variation of the generation times. The effect of the initial age distribution on asynchronous growth of cell cultures is also studied by simulations. The influence of constant cell death on the size fluctuations of cell populations is found to be essential even for small cell death rates, i.e., for realistic growth conditions. The stochastic model is generalized to the biologically relevant case that involves both cell reproduction and cell differentiation.
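A minimal agent-based version of such a simulation, in which each cell divides after a random generation time and colony size is sampled across stochastic replicates, might look like the sketch below. The normal generation-time distribution, the `grow_colony` helper, and all parameter values are illustrative assumptions, not the authors' model.

```python
import random

def grow_colony(n0, t_end, gen_time, rng):
    """Agent-based colony growth: each cell divides after a random
    generation time; returns the number of live cells at t_end."""
    times = [gen_time(rng) for _ in range(n0)]   # pending division times
    while True:
        t_next = min(times)
        if t_next > t_end:
            return len(times)
        i = times.index(t_next)
        # the mother is replaced by two daughters with fresh generation times
        times[i] = t_next + gen_time(rng)
        times.append(t_next + gen_time(rng))

rng = random.Random(3)
gen = lambda r: r.gauss(1.0, 0.1)    # mean generation time 1, CV = 0.1
sizes = [grow_colony(1, 5.0, gen, rng) for _ in range(50)]
mean_size = sum(sizes) / len(sizes)  # roughly 2**5 = 32 cells expected
```

Computing the spread of `sizes` across replicates, rather than just the mean, is what gives access to the size-fluctuation statistics that the deterministic age-time theory cannot provide.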
Stochastic models of viral infection
NASA Astrophysics Data System (ADS)
Chou, Tom
2009-03-01
We develop biophysical models of viral infections from a stochastic process perspective. The entry of enveloped viruses is treated as a stochastic multiple receptor and coreceptor engagement process that can lead to membrane fusion or endocytosis. The probabilities of entry via fusion and endocytosis are computed as functions of the receptor/coreceptor engagement rates. Since membrane fusion and endocytosis entry pathways can lead to very different infection outcomes, we delineate the parameter regimes conducive to each entry pathway. After entry, viral material is biochemically processed and degraded as it is transported towards the nucleus. Productive infections occur only when the material reaches the nucleus in the proper biochemical state. Thus, entry into the nucleus in an infectious state requires the proper timing of the cytoplasmic transport process. We compute the productive infection probability and show its nonmonotonic dependence on both transport speeds and biochemical transformation rates. Our results carry subtle consequences on the dosage and efficacy of antivirals such as reverse transcription inhibitors.
Stochastic Methods for Aircraft Design
NASA Technical Reports Server (NTRS)
Pelz, Richard B.; Ogot, Madara
1998-01-01
The global stochastic optimization method simulated annealing (SA) was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and the roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations needed for an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems including: low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, the FLOPS preliminary design code, another preliminary aircraft design study with vortex lattice aerodynamics, and complete HSR aircraft aerodynamics. In every case, SA provided a simple, robust and reliable optimization method which found optimal designs in on the order of 100 objective function evaluations. Perhaps most importantly, technology from this academic/industrial project has been successfully transferred; this method is the method of choice for optimization problems at Northrop Grumman.
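Simulated annealing itself is compact. The following standalone sketch shows the accept/reject core that lets the search escape local minima; the objective function, cooling schedule, and all parameter values are illustrative and unrelated to the aircraft-design problems above.

```python
import math
import random

def simulated_annealing(f, x0, step, t0, cooling, n_steps, rng):
    """Minimize f by random perturbations, accepting uphill moves
    with probability exp(-delta / T) while T cools geometrically."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    temp = t0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        f_new = f(x_new)
        delta = f_new - fx
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
        temp *= cooling
    return best_x, best_f

# rough objective with many local minima; global minimum is f(0) = 0
def f(x):
    return x * x + 3.0 * math.sin(5.0 * x) ** 2

rng = random.Random(5)
x_best, f_best = simulated_annealing(f, x0=4.0, step=0.5, t0=5.0,
                                     cooling=0.995, n_steps=2000, rng=rng)
```

Reducing the number of calls to `f`, which in CFD/MDO problems is an expensive flow solve, is exactly the modification the project targeted.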
Numerical tests of stochastic tomography
NASA Astrophysics Data System (ADS)
Ru-Shan, Wu; Xiao-Bi, Xie
1991-05-01
The method of stochastic tomography proposed by Wu is tested numerically. This method reconstructs the heterospectra (power spectra of heterogeneities) at all depths of a non-uniform random medium using measured joint transverse-angular coherence functions (JTACF) of transmission fluctuations on an array. The inversion method is based on a constrained least-squares inversion implemented via the singular value decomposition. The inversion is also applicable to reconstructions using transverse coherence functions (TCF) or angular coherence functions (ACF); these are merely special cases of JTACF. Through the analysis of sampling functions and singular values, and through numerical examples of reconstruction using theoretically generated coherence functions, we compare the resolution and robustness of reconstructions using TCF, ACF and JTACF. The JTACF can `focus' the coherence analysis at different depths and therefore has a better depth resolution than TCF and ACF. In addition, the JTACF contains much more information than the sum of TCF and ACF, and has much better noise resistance properties than TCF and ACF. Inversion of JTACF can give a reliable reconstruction of heterospectra at different depths even for data with 20% noise contamination. This demonstrates the feasibility of stochastic tomography using JTACF.
RHIC stochastic cooling motion control
Gassner, D.; DeSanto, L.; Olsen, R.H.; Fu, W.; Brennan, J.M.; Liaw, CJ; Bellavia, S.; Brodowski, J.
2011-03-28
Relativistic Heavy Ion Collider (RHIC) beams are subject to Intra-Beam Scattering (IBS), which causes emittance growth in all three phase-space planes. The only way to increase integrated luminosity is to counteract IBS with cooling during RHIC stores. A stochastic cooling system for this purpose has been developed; it includes moveable pick-ups and kickers in the collider that require precise motion-control mechanics, drives and controllers. Since these moving parts can limit the beam-path aperture, accuracy and reliability are important. Servo, stepper, and DC motors are used to provide actuation solutions for position control. The choice of motion stage, drive motor type, and controls is based on needs defined by the variety of mechanical specifications, the unique performance requirements, and the special needs of remote operation in an accelerator environment. In this report we describe the remote motion-control beam line hardware, position transducers, rack electronics, and software developed for the RHIC stochastic cooling pick-ups and kickers.
Stochastic Modeling of Laminar-Turbulent Transition
NASA Technical Reports Server (NTRS)
Rubinstein, Robert; Choudhari, Meelan
2002-01-01
Stochastic versions of stability equations are developed in order to develop integrated models of transition and turbulence and to understand the effects of uncertain initial conditions on disturbance growth. Stochastic forms of the resonant triad equations, a high Reynolds number asymptotic theory, and the parabolized stability equations are developed.
Variational principles for stochastic fluid dynamics
Holm, Darryl D.
2015-01-01
This paper derives stochastic partial differential equations (SPDEs) for fluid dynamics from a stochastic variational principle (SVP). The paper proceeds by taking variations in the SVP to derive stochastic Stratonovich fluid equations; writing their Itô representation; and then investigating the properties of these stochastic fluid models in comparison with each other, and with the corresponding deterministic fluid models. The circulation properties of the stochastic Stratonovich fluid equations are found to closely mimic those of the deterministic ideal fluid models. As with deterministic ideal flows, motion along the stochastic Stratonovich paths also preserves the helicity of the vortex field lines in incompressible stochastic flows. However, these Stratonovich properties are not apparent in the equivalent Itô representation, because they are disguised by the quadratic covariation drift term arising in the Stratonovich to Itô transformation. This term is a geometric generalization of the quadratic covariation drift term already found for scalar densities in Stratonovich's famous 1966 paper. The paper also derives motion equations for two examples of stochastic geophysical fluid dynamics; namely, the Euler-Boussinesq and quasi-geostrophic approximations.
Bunched Beam Stochastic Cooling and Coherent Lines
Blaskiewicz, M.; Brennan, J. M.
2006-03-20
Strong coherent signals complicate bunched beam stochastic cooling, and development of the longitudinal stochastic cooling system for RHIC required dealing with coherence in heavy ion beams. Studies with proton beams revealed additional forms of coherence. This paper presents data and analysis for both sorts of beams.
From Complex to Simple: Interdisciplinary Stochastic Models
ERIC Educational Resources Information Center
Mazilu, D. A.; Zamora, G.; Mazilu, I.
2012-01-01
We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…
Attainability analysis in stochastic controlled systems
Ryashko, Lev
2015-03-10
A control problem for stochastically forced nonlinear continuous-time systems is considered. We propose a method for constructing a regulator that provides a preassigned probabilistic distribution of random states in stochastic equilibrium. Geometric criteria of controllability are obtained. A constructive technique for the specification of attainability sets is suggested.
Stochastic ion acceleration by beating electrostatic waves.
Jorns, B; Choueiri, E Y
2013-01-01
A study is presented of the stochasticity in the orbit of a single, magnetized ion produced by the particle's interaction with two beating electrostatic waves whose frequencies differ by the ion cyclotron frequency. A second-order Lie transform perturbation theory is employed in conjunction with a numerical analysis of the maximum Lyapunov exponent to determine the velocity conditions under which stochasticity occurs in this dynamical system. Upper and lower bounds in ion velocity are found for stochastic orbits with the lower bound approximately equal to the phase velocity of the slower wave. A threshold condition for the onset of stochasticity that is linear with respect to the wave amplitudes is also derived. It is shown that the onset of stochasticity occurs for beating electrostatic waves at lower total wave energy densities than for the case of a single electrostatic wave or two nonbeating electrostatic waves. PMID:23410446
Wagstaff, Marcus James Dermot; Rooke, Michael; Caplash, Yugesh
2016-01-01
Objectives: To share our experience of an extensive calvarial reconstruction in a severely burn-injured, elderly patient in a 2-stage procedure utilizing a novel biodegradable temporizing matrix (NovoSorb BTM), followed by autograft. Materials and Methods: A 66-year-old patient with 75% full-thickness burns, including 7% total body surface area head and neck, with calvarial exposure of approximately 350 cm2, complicated by acute renal failure and smoke inhalation injury. Exposed calvarium was burred down to diploe and biodegradable temporizing matrix was applied. Over the next 29 days, the biodegradable temporizing matrix integrated by vascular and tissue ingrowth from the diploe. Delamination and grafting occurred, however, at 43 days postimplantation of biodegradable temporizing matrix due to skin graft donor-site constraints. Results: Graft take was complete, yielding a robust and aesthetically pleasing early result (26 days post–graft application). Conclusions: Biodegradable temporizing matrix offers an additional resource for reconstructive surgeons faced with fragile patients and complex wounds. PMID:27222681
Time series modeling with pruned multi-layer perceptron and 2-stage damped least-squares method
NASA Astrophysics Data System (ADS)
Voyant, Cyril; Tamas, Wani; Paoli, Christophe; Balu, Aurélia; Muselli, Marc; Nivet, Marie-Laure; Notton, Gilles
2014-03-01
A Multi-Layer Perceptron (MLP) defines a family of artificial neural networks often used in time series (TS) modeling and forecasting. Because of its "black box" aspect, many researchers refuse to use it. Moreover, the optimization (often based on an exhaustive approach where "all" configurations are tested) and learning phases of this artificial intelligence tool (often based on the Levenberg-Marquardt algorithm, LMA) are weaknesses of this approach (exhaustiveness and local minima). These two tasks must be repeated for each new problem studied, making the process long, laborious and not systematically robust. In this paper a pruning process is proposed. This method allows an input-selection procedure to be carried out during the training phase, activating (or not) inter-node connections in order to verify whether forecasting is improved. We propose to use the popular damped least-squares method iteratively to activate inputs and neurons. A first pass is applied to 10% of the learning sample to determine the weights significantly different from 0 and delete the others. Then a classical batch process based on LMA is used with the new MLP. The validation is done using 25 measured meteorological TS and cross-comparing the prediction results of the classical LMA and the 2-stage LMA.
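The 2-stage idea, first identifying weights significantly different from zero and then refitting, can be mimicked in a few lines. The sketch below substitutes plain full-batch gradient descent and a magnitude threshold for the paper's damped least-squares (LMA) machinery; the data, the threshold, and the `fit_linear` helper are all illustrative assumptions.

```python
import random

def fit_linear(xs, ys, lr, n_steps):
    """Full-batch gradient descent fit of y ~ w1*x1 + w2*x2."""
    w = [0.0, 0.0]
    n = float(len(ys))
    for _ in range(n_steps):
        g = [0.0, 0.0]
        for (x1, x2), y in zip(xs, ys):
            err = w[0] * x1 + w[1] * x2 - y
            g[0] += err * x1 / n
            g[1] += err * x2 / n
        w[0] -= lr * g[0]
        w[1] -= lr * g[1]
    return w

rng = random.Random(11)
# x1 drives the target; x2 is an irrelevant input that pruning should drop
xs = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(200)]
ys = [2.0 * x1 + rng.gauss(0.0, 0.05) for x1, _ in xs]
w = fit_linear(xs, ys, lr=0.5, n_steps=300)
# stage 2: zero out weights not significantly different from 0, keep the rest
pruned = [wi if abs(wi) > 0.1 else 0.0 for wi in w]
```

In the paper the first pass runs on 10% of the learning sample and the surviving connections are then retrained with the classical batch LMA; the sketch keeps only the select-then-refit structure.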
Stochastic models of neuronal dynamics
Harrison, L.M; David, O; Friston, K.J
2005-01-01
Cortical activity is the product of interactions among neuronal populations. Macroscopic electrophysiological phenomena are generated by these interactions. In principle, the mechanisms of these interactions afford constraints on biologically plausible models of electrophysiological responses. In other words, the macroscopic features of cortical activity can be modelled in terms of the microscopic behaviour of neurons. An evoked response potential (ERP) is the mean electrical potential measured from an electrode on the scalp, in response to some event. The purpose of this paper is to outline a population density approach to modelling ERPs. We propose a biologically plausible model of neuronal activity that enables the estimation of physiologically meaningful parameters from electrophysiological data. The model encompasses four basic characteristics of neuronal activity and organization: (i) neurons are dynamic units, (ii) driven by stochastic forces, (iii) organized into populations with similar biophysical properties and response characteristics and (iv) multiple populations interact to form functional networks. This leads to a formulation of population dynamics in terms of the Fokker–Planck equation. The solution of this equation is the temporal evolution of a probability density over state-space, representing the distribution of an ensemble of trajectories. Each trajectory corresponds to the changing state of a neuron. Measurements can be modelled by taking expectations over this density, e.g. mean membrane potential, firing rate or energy consumption per neuron. The key motivation behind our approach is that ERPs represent an average response over many neurons. This means it is sufficient to model the probability density over neurons, because this implicitly models their average state. Although the dynamics of each neuron can be highly stochastic, the dynamics of the density is not. This means we can use Bayesian inference and estimation tools that have
Stochastic inflation and nonlinear gravity
NASA Astrophysics Data System (ADS)
Salopek, D. S.; Bond, J. R.
1991-02-01
We show how nonlinear effects of the metric and scalar fields may be included in stochastic inflation. Our formalism can be applied to non-Gaussian fluctuation models for galaxy formation. Fluctuations with wavelengths larger than the horizon length are governed by a network of Langevin equations for the physical fields. Stochastic noise terms arise from quantum fluctuations that are assumed to become classical at horizon crossing and that then contribute to the background. Using Hamilton-Jacobi methods, we solve the Arnowitt-Deser-Misner constraint equations, which allows us to separate the growing modes from the decaying ones in the drift phase following each stochastic impulse. We argue that the most reasonable choice of time hypersurfaces for the Langevin system during inflation is T = ln(Ha), where H and a are the local values of the Hubble parameter and the scale factor, since T is the natural time for evolving the short-wavelength scalar field fluctuations in an inhomogeneous background. We derive a Fokker-Planck equation which describes how the probability distribution of scalar field values at a given spatial point evolves in T. Analytic Green's-function solutions obtained for a single scalar field self-interacting through an exponential potential are used to demonstrate (1) that if the initial condition of the Hubble parameter is chosen to be consistent with microwave-background limits, H(φ₀)/m_P ≲ 10⁻⁴, then the fluctuations obey Gaussian statistics to high precision, independent of the time hypersurface choice and operator-ordering ambiguities in the Fokker-Planck equation, and (2) that for scales much larger than our present observable patch of the Universe, the distribution is non-Gaussian, with a tail extending to large energy densities; although there are no observable manifestations, it does show eternal inflation. Lattice simulations of our Langevin network for the exponential potential demonstrate how spatial correlations are incorporated. An initially
NASA Astrophysics Data System (ADS)
Mel'nikov, A. V.
1996-10-01
Contents Introduction Chapter I. Basic notions and results from contemporary martingale theory §1.1. General notions of the martingale theory §1.2. Convergence (a.s.) of semimartingales. The strong law of large numbers and the law of the iterated logarithm Chapter II. Stochastic differential equations driven by semimartingales §2.1. Basic notions and results of the theory of stochastic differential equations driven by semimartingales §2.2. The method of monotone approximations. Existence of strong solutions of stochastic equations with non-smooth coefficients §2.3. Linear stochastic equations. Properties of stochastic exponentials §2.4. Linear stochastic equations. Applications to models of the financial market Chapter III. Procedures of stochastic approximation as solutions of stochastic differential equations driven by semimartingales §3.1. Formulation of the problem. A general model and its relation to the classical one §3.2. A general description of the approach to the procedures of stochastic approximation. Convergence (a.s.) and asymptotic normality §3.3. The Gaussian model of stochastic approximation. Averaged procedures and their effectiveness Chapter IV. Statistical estimation in regression models with martingale noises §4.1. The formulation of the problem and classical regression models §4.2. Asymptotic properties of MLS-estimators. Strong consistency, asymptotic normality, the law of the iterated logarithm §4.3. Regression models with deterministic regressors §4.4. Sequential MLS-estimators with guaranteed accuracy and sequential statistical inferences Bibliography
Multiscale Stochastic Simulation and Modeling
James Glimm; Xiaolin Li
2006-01-10
Acceleration-driven instabilities of fluid mixing layers include the classical cases of Rayleigh-Taylor instability, driven by a steady acceleration, and Richtmyer-Meshkov instability, driven by an impulsive acceleration. Our program starts with high-resolution numerical simulation of two (or more) distinct fluids, continues with analytic analysis of these solutions, and leads to the derivation of averaged equations. A striking achievement has been the systematic agreement we obtained between simulation and experiment by using a high-resolution numerical method and improved physical modeling, including surface tension. Our study is accompanied by analysis using stochastic modeling and averaged equations for the multiphase problem. We have quantified the error and uncertainty using statistical modeling methods.
Stochastic thermodynamics for active matter
NASA Astrophysics Data System (ADS)
Speck, Thomas
2016-05-01
The theoretical understanding of active matter, which is driven out of equilibrium by directed motion, is still fragmentary and model-oriented. Stochastic thermodynamics, on the other hand, is a comprehensive theoretical framework for driven systems that allows one to define fluctuating work and heat. We apply these definitions to active matter, assuming that dissipation can be modelled by effective non-conservative forces. We show that, through the work, conjugate extensive and intensive observables can be defined even in non-equilibrium steady states lacking a free energy. As an illustration, we derive the expressions for the pressure and interfacial tension of active Brownian particles. The latter becomes negative despite the observed stable phase separation. We discuss this apparent contradiction, highlighting the role of fluctuations, and we offer a tentative explanation.
Stochastic sensing through covalent interactions
Bayley, Hagan; Shin, Seong-Ho; Luchian, Tudor; Cheley, Stephen
2013-03-26
A system and method for stochastic sensing in which the analyte covalently bonds to the sensor element or an adaptor element. If such bonding is irreversible, the bond may be broken by a chemical reagent. The sensor element may be a protein, such as the engineered P_SH type or αHL protein pore. The analyte may be any reactive analyte, including chemical weapons, environmental toxins and pharmaceuticals. The analyte covalently bonds to the sensor element to produce a detectable signal. Possible signals include change in electrical current, change in force, and change in fluorescence. Detection of the signal allows identification of the analyte and determination of its concentration in a sample solution. Multiple analytes present in the same solution may be detected.
Stochastic dynamics of dengue epidemics
NASA Astrophysics Data System (ADS)
de Souza, David R.; Tomé, Tânia; Pinho, Suani T. R.; Barreto, Florisneide R.; de Oliveira, Mário J.
2013-01-01
We use a stochastic Markovian dynamics approach to describe the spreading of vector-transmitted diseases, such as dengue, and the threshold of the disease. The coexistence space is composed of two structures representing the human and mosquito populations. The human population follows susceptible-infected-recovered (SIR) dynamics and the mosquito population follows susceptible-infected-susceptible (SIS) dynamics. The human infection is caused by infected mosquitoes and vice versa, so that the SIS and SIR dynamics are interconnected. We develop a truncation scheme to solve the evolution equations, from which we get the threshold of the disease and the reproductive ratio. The threshold of the disease is also obtained by performing numerical simulations. We find that for certain values of the infection rates the spreading of the disease is impossible for any death rate of infected mosquitoes.
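The interconnected SIR/SIS dynamics described above can be sketched as an event-driven (Gillespie-type) simulation. This is a minimal illustration, not the authors' model: the function name, the rate constants, and the choice that a dead infected mosquito is replaced by a susceptible one are all assumptions made here for concreteness.

```python
import random

def simulate_dengue(Sh, Ih, Rh, Sm, Im, beta_h=0.3, beta_m=0.3,
                    gamma=0.1, mu=0.05, t_max=500.0, seed=1):
    """Gillespie simulation of coupled SIR (humans) / SIS (mosquitoes)
    dynamics with cross-infection only: humans are infected by infected
    mosquitoes and vice versa. Rate constants are illustrative."""
    rng = random.Random(seed)
    Nh, Nm = Sh + Ih + Rh, Sm + Im
    t = 0.0
    while t < t_max:
        rates = [
            beta_h * Sh * Im / Nm,  # human infection by a mosquito bite
            gamma * Ih,             # human recovery (SIR: I -> R)
            beta_m * Sm * Ih / Nh,  # mosquito infection by biting a human
            mu * Im,                # infected mosquito replaced by a
        ]                           # susceptible one (SIS: I -> S)
        total = sum(rates)
        if total == 0:
            break  # disease has died out in both populations
        t += rng.expovariate(total)
        r = rng.uniform(0, total)
        if r < rates[0]:
            Sh, Ih = Sh - 1, Ih + 1
        elif r < rates[0] + rates[1]:
            Ih, Rh = Ih - 1, Rh + 1
        elif r < rates[0] + rates[1] + rates[2]:
            Sm, Im = Sm - 1, Im + 1
        else:
            Im, Sm = Im - 1, Sm + 1
    return Sh, Ih, Rh, Sm, Im
```

Both population sizes are conserved by construction, and whether an outbreak takes off depends on the product of the two cross-infection rates, mirroring the threshold behavior discussed in the abstract.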
Thermodynamics of stochastic Turing machines
NASA Astrophysics Data System (ADS)
Strasberg, Philipp; Cerrillo, Javier; Schaller, Gernot; Brandes, Tobias
2015-10-01
In analogy to Brownian computers we explicitly show how to construct stochastic models which mimic the behavior of a general-purpose computer (a Turing machine). Our models are discrete state systems obeying a Markovian master equation, which are logically reversible and have a well-defined and consistent thermodynamic interpretation. The resulting master equation, which describes a simple one-step process on an enormously large state space, allows us to thoroughly investigate the thermodynamics of computation for this situation. Especially in the stationary regime we can well approximate the master equation by a simple Fokker-Planck equation in one dimension. We then show that the entropy production rate at steady state can be made arbitrarily small, but the total (integrated) entropy production is finite and grows logarithmically with the number of computational steps.
Heuristic-biased stochastic sampling
Bresina, J.L.
1996-12-31
This paper presents a search technique for scheduling problems, called Heuristic-Biased Stochastic Sampling (HBSS). The underlying assumption behind the HBSS approach is that strictly adhering to a search heuristic often does not yield the best solution and, therefore, exploration off the heuristic path can prove fruitful. Within the HBSS approach, the balance between heuristic adherence and exploration can be controlled according to the confidence one has in the heuristic. By varying this balance, encoded as a bias function, the HBSS approach encompasses a family of search algorithms of which greedy search and completely random search are extreme members. We present empirical results from an application of HBSS to the real-world problem of observation scheduling. These results show that, with a proper bias function, HBSS can readily outperform greedy search.
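The HBSS choice rule is simple enough to sketch directly. In the illustration below (the function name and the example bias functions are my own, not taken from the paper), candidates are ranked by the heuristic and a rank-indexed bias function assigns each rank a selection weight, so greedy search and uniform random search fall out as extreme bias choices:

```python
import random

def hbss_choose(candidates, heuristic, bias, rng):
    """One HBSS decision: rank candidates by the heuristic (best first),
    then select stochastically with probability proportional to the
    bias weight of each rank."""
    ranked = sorted(candidates, key=heuristic)
    weights = [bias(rank) for rank in range(1, len(ranked) + 1)]
    x = rng.uniform(0, sum(weights))
    acc = 0.0
    for cand, w in zip(ranked, weights):
        acc += w
        if x <= acc:
            return cand
    return ranked[-1]

# Example bias functions spanning the family described in the abstract:
greedy = lambda rank: 1.0 if rank == 1 else 0.0   # always take the best rank
uniform = lambda rank: 1.0                         # ignore the heuristic
harmonic = lambda rank: 1.0 / rank                 # favor, but do not obey, it
```

Repeatedly sampling full schedules with, say, the harmonic bias explores a neighborhood of the greedy solution, which is the behavior HBSS exploits.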
Stochastic inversion by ray continuation
Haas, A.; Viallix
1989-05-01
The conventional tomographic inversion consists in minimizing residuals between measured and modelled traveltimes. The process tends to be unstable and some additional constraints are required to stabilize it. The stochastic formulation generalizes the technique and sets it on firmer theoretical bases. The Stochastic Inversion by Ray Continuation (SIRC) is a probabilistic approach, which takes a priori geological information into account and uses probability distributions to characterize data correlations and errors. It makes it possible to tie uncertainties to the results. The estimated parameters are interval velocities and B-spline coefficients used to represent smoothed interfaces. Ray tracing is done by a continuation technique between source and receivers. The ray coordinates are computed from one path to the next by solving a linear system derived from Fermat's principle. The main advantages are fast computations, accurate traveltimes and derivatives. The seismic traces are gathered in CMPs. For a particular CMP, several reflecting elements are characterized by their time gradient measured on the stacked section, and related to a mean emergence direction. The program capabilities are tested on a synthetic example as well as on a field example. The strategy consists in inverting the parameters for one layer, then for the next one down. An inversion step is divided in two parts. First the parameters for the layer concerned are inverted, while the parameters for the upper layers remain fixed. Then all the parameters are reinverted. The velocity-depth section computed by the program together with the corresponding errors can be used directly for the interpretation, as an initial model for depth migration or for the complete inversion program under development.
Extinction of metastable stochastic populations.
Assaf, Michael; Meerson, Baruch
2010-02-01
We investigate the phenomenon of extinction of a long-lived self-regulating stochastic population, caused by intrinsic (demographic) noise. Extinction typically occurs via one of two scenarios depending on whether the absorbing state n=0 is a repelling (scenario A) or attracting (scenario B) point of the deterministic rate equation. In scenario A the metastable stochastic population resides in the vicinity of an attracting fixed point next to the repelling point n=0. In scenario B there is an intermediate repelling point n=n1 between the attracting point n=0 and another attracting point n=n2 in the vicinity of which the metastable population resides. The crux of the theory is a dissipative variant of WKB (Wentzel-Kramers-Brillouin) approximation which assumes that the typical population size in the metastable state is large. Starting from the master equation, we calculate the quasistationary probability distribution of the population sizes and the (exponentially long) mean time to extinction for each of the two scenarios. When necessary, the WKB approximation is complemented (i) by a recursive solution of the quasistationary master equation at small n and (ii) by the van Kampen system-size expansion, valid near the fixed points of the deterministic rate equation. The theory yields both entropic barriers to extinction and pre-exponential factors, and holds for a general set of multistep processes when detailed balance is broken. The results simplify considerably for single-step processes and near the characteristic bifurcations of scenarios A and B. PMID:20365539
Multiple Stochastic Point Processes in Gene Expression
NASA Astrophysics Data System (ADS)
Murugan, Rajamanickam
2008-04-01
We generalize the idea of multiple-stochasticity in chemical reaction systems to gene expression. Using the Chemical Langevin Equation approach we investigate how this multiple-stochasticity can influence the overall molecular number fluctuations. We show that the main sources of this multiple-stochasticity in gene expression could be the randomness in transcription and translation initiation times, which in turn originates from the underlying bio-macromolecular recognition processes such as site-specific DNA-protein interactions, and therefore can be internally regulated by supra-molecular structural factors such as the condensation/super-coiling of DNA. Our theory predicts that (1) in the case of the gene expression system, the variances (φ) introduced by the randomness in transcription and translation initiation times approximately scale with the degree of condensation (s) of DNA or mRNA as φ ∝ s^-6. From the theoretical analysis of the Fano factor as well as the coefficient of variation associated with the protein number fluctuations we predict that (2) unlike the singly-stochastic case, where the Fano factor has been shown to be a monotonous function of translation rate, in the case of multiple-stochastic gene expression the Fano factor is a turnover function with a definite minimum. This in turn suggests that the multiple-stochastic processes can also be well tuned to behave like singly-stochastic point processes by adjusting the rate parameters.
Solving stochastic epidemiological models using computer algebra
NASA Astrophysics Data System (ADS)
Hincapie, Doracelly; Ospina, Juan
2011-06-01
Mathematical modeling in epidemiology is an important tool to understand the ways in which diseases are transmitted and controlled. The mathematical modeling can be implemented via deterministic or stochastic models. Deterministic models are based on small systems of non-linear ordinary differential equations, and stochastic models are based on very large systems of linear differential equations. Deterministic models admit complete, rigorous and automatic analysis of stability, both local and global, from which it is possible to derive algebraic expressions for the basic reproductive number and the corresponding epidemic thresholds using computer algebra software. Stochastic models are more difficult to treat and the analysis of their properties requires complicated considerations in statistical mathematics. In this work we propose to use computer algebra software with the aim of solving epidemic stochastic models such as the SIR model and the carrier-borne model. Specifically, we use Maple to solve these stochastic models in the case of small groups, and we obtain results that do not appear in standard textbooks or in recent books on stochastic models in epidemiology. From our results we derive expressions which coincide with those obtained in the classical texts using advanced procedures in mathematical statistics. Our algorithms can be extended to other stochastic models in epidemiology, and this shows the power of computer algebra software not only for the analysis of deterministic models but also for the analysis of stochastic models. We also perform numerical simulations with our algebraic results and make estimates of basic parameters such as the basic reproductive rate and of the stochastic threshold theorem. We claim that our algorithms and results are important tools to control diseases in a globalized world.
Lobikin, Maria; Lobo, Daniel; Blackiston, Douglas J; Martyniuk, Christopher J; Tkachenko, Elizabeth; Levin, Michael
2015-10-01
Experimentally induced depolarization of resting membrane potential in "instructor cells" in Xenopus laevis embryos causes hyperpigmentation in an all-or-none fashion in some tadpoles due to excess proliferation and migration of melanocytes. We showed that this stochastic process involved serotonin signaling, adenosine 3',5'-monophosphate (cAMP), and the transcription factors cAMP response element-binding protein (CREB), Sox10, and Slug. Transcriptional microarray analysis of embryos taken at stage 15 (early neurula) and stage 45 (free-swimming tadpole) revealed changes in the abundance of 45 and 517 transcripts, respectively, between control embryos and embryos exposed to the instructor cell-depolarizing agent ivermectin. Bioinformatic analysis revealed that the human homologs of some of the differentially regulated genes were associated with cancer, consistent with the induced arborization and invasive behavior of converted melanocytes. We identified a physiological circuit that uses serotonergic signaling between instructor cells, melanotrope cells of the pituitary, and melanocytes to control the proliferation, cell shape, and migration properties of the pigment cell pool. To understand the stochasticity and properties of this multiscale signaling system, we applied a computational machine-learning method that iteratively explored network models to reverse-engineer a stochastic dynamic model that recapitulated the frequency of the all-or-none hyperpigmentation phenotype produced in response to various pharmacological and molecular genetic manipulations. This computational approach may provide insight into stochastic cellular decision-making that occurs during normal development and pathological conditions, such as cancer. PMID:26443706
Immigration-extinction dynamics of stochastic populations
NASA Astrophysics Data System (ADS)
Meerson, Baruch; Ovaskainen, Otso
2013-07-01
How high should the rate of immigration into a stochastic population be in order to significantly reduce the probability of observing the population become extinct? Is there any relation between the population size distributions with and without immigration? Under what conditions can one justify the simple patch-occupancy models, which ignore the population distribution and its dynamics in a patch and treat a patch simply as either occupied or empty? We answer these questions by exactly solving a simple stochastic model obtained by adding steady immigration to a variant of the Verhulst model: a prototypical model of an isolated stochastic population.
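These questions can be made concrete with a birth-death chain. The sketch below (the rate functions and parameter values are illustrative choices, not the paper's exact model) adds a steady immigration rate q to Verhulst-type birth and death rates and computes the exact stationary distribution from detailed balance; pi[0] is then the probability of finding the patch empty, which is precisely the quantity patch-occupancy models reduce the state to.

```python
def stationary_distribution(birth, death, n_max):
    """Stationary distribution of a birth-death chain on {0,...,n_max},
    from detailed balance: pi_n proportional to
    prod_{k=1..n} birth(k-1) / death(k)."""
    weights = [1.0]
    for n in range(1, n_max + 1):
        weights.append(weights[-1] * birth(n - 1) / death(n))
    z = sum(weights)
    return [w / z for w in weights]

# Verhulst-type rates plus a steady immigration rate q (illustrative values):
B, mu, N, q = 2.0, 1.0, 50, 0.5
birth = lambda n: B * n + q               # reproduction + immigration
death = lambda n: mu * n + B * n * n / N  # mortality + intraspecific competition
pi = stationary_distribution(birth, death, 400)
empty_prob = pi[0]  # chance of observing the patch empty
```

With q = 0 the state n = 0 is absorbing and the only true stationary state is extinction; any q > 0 makes the chain irreducible, so a genuine stationary distribution exists and extinction becomes a mere excursion to n = 0.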
A multilevel stochastic collocation method for SPDEs
Gunzburger, Max; Jantsch, Peter; Teckentrup, Aretha; Webster, Clayton
2015-03-10
We present a multilevel stochastic collocation method that, as do multilevel Monte Carlo methods, uses a hierarchy of spatial approximations to reduce the overall computational complexity when solving partial differential equations with random inputs. For approximation in parameter space, a hierarchy of multi-dimensional interpolants of increasing fidelity are used. Rigorous convergence and computational cost estimates for the new multilevel stochastic collocation method are derived and used to demonstrate its advantages compared to standard single-level stochastic collocation approximations as well as multilevel Monte Carlo methods.
Stochastic system identification in structural dynamics
Safak, Erdal
1988-01-01
Recently, new identification methods have been developed by using the concept of optimal-recursive filtering and stochastic approximation. These methods, known as stochastic identification, are based on the statistical properties of the signal and noise, and do not require the assumptions of current methods. The criterion for stochastic system identification is that the difference between the recorded output and the output from the identified system (i.e., the residual of the identification) should be equal to white noise. In this paper, first a brief review of the theory is given. Then, an application of the method is presented by using ambient vibration data from a nine-story building.
Stochastic string models with continuous semimartingales
NASA Astrophysics Data System (ADS)
Bueno-Guerrero, Alberto; Moreno, Manuel; Navas, Javier F.
2015-09-01
This paper reformulates the stochastic string model of Santa-Clara and Sornette using stochastic calculus with continuous semimartingales. We present some new results, such as: (a) the dynamics of the short-term interest rate, (b) the PDE that must be satisfied by the bond price, and (c) an analytic expression for the price of a European bond call option. Additionally, we clarify some important features of the stochastic string model, show its usefulness for pricing derivatives, and establish its equivalence with an infinite-dimensional HJM model for pricing European options.
Stochastic deformation of a thermodynamic symplectic structure
NASA Astrophysics Data System (ADS)
Kazinski, P. O.
2009-01-01
A stochastic deformation of a thermodynamic symplectic structure is studied. The stochastic deformation is analogous to the deformation of an algebra of observables such as deformation quantization, but for an imaginary deformation parameter (the Planck constant). Gauge symmetries of thermodynamics and corresponding stochastic mechanics, which describes fluctuations of a thermodynamic system, are revealed and gauge fields are introduced. A physical interpretation to the gauge transformations and gauge fields is given. An application of the formalism to a description of systems with distributed parameters in a local thermodynamic equilibrium is considered.
Topological charge conservation in stochastic optical fields
NASA Astrophysics Data System (ADS)
Roux, Filippus S.
2016-05-01
The fact that phase singularities in scalar stochastic optical fields are topologically conserved implies the existence of an associated conserved current, which can be expressed in terms of local correlation functions of the optical field and its transverse derivatives. Here, we derive the topological charge current for scalar stochastic optical fields and show that it obeys a conservation equation. We use the expression for the topological charge current to investigate the topological charge flow in inhomogeneous stochastic optical fields with a one-dimensional topological charge density.
Stochastic Stability and Performance Robustness of Linear Multivariable Systems
NASA Technical Reports Server (NTRS)
Ryan, Laurie E.; Stengel, Robert F.
1990-01-01
Stochastic robustness, a simple technique used to estimate the robustness of linear, time-invariant systems, is applied to a single-link robot arm control system. Concepts behind stochastic stability robustness are extended to systems with estimators and to stochastic performance robustness. Stochastic performance robustness measures based on classical design specifications are introduced, and the relationship between stochastic robustness measures and control system design parameters is discussed. The application of stochastic performance robustness, and the relationship between performance objectives and design parameters, are demonstrated by means of an example. The results show stochastic robustness to be a good overall robustness analysis method that can relate robustness characteristics to control system design parameters.
Stochastic pump effect and geometric phases in dissipative and stochastic systems
Sinitsyn, Nikolai
2008-01-01
The success of Berry phases in quantum mechanics stimulated the study of similar phenomena in other areas of physics, including the theory of living cell locomotion and motion of patterns in nonlinear media. More recently, geometric phases have been applied to systems operating in a strongly stochastic environment, such as molecular motors. We discuss such geometric effects in purely classical dissipative stochastic systems and their role in the theory of the stochastic pump effect (SPE).
Modular and Stochastic Approaches to Molecular Pathway Models of ATM, TGF beta, and WNT Signaling
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; O'Neill, Peter; Ponomarev, Artem; Carra, Claudio; Whalen, Mary; Pluth, Janice M.
2009-01-01
Deterministic pathway models that describe the biochemical interactions of a group of related proteins, their complexes, activation through kinases, etc., are often the basis for many systems biology models. Low-dose radiation effects present a unique set of challenges to these models, including the importance of stochastic effects due to the nature of radiation tracks and the small number of molecules activated, and the search for infrequent events that contribute to cancer risks. We have been studying models of the ATM, TGF-beta-Smad and WNT signaling pathways with the goal of applying pathway models to the investigation of low-dose radiation cancer risks. Modeling challenges include the introduction of stochastic models of radiation tracks, their relationships to more than one substrate species that perturb pathways, and the identification of a representative set of enzymes that act on the dominant substrates. Because several pathways are activated concurrently by radiation, the development of a modular pathway approach is of interest.
Bootstrap performance profiles in stochastic algorithms assessment
Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro
2015-03-10
Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
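The core bootstrap step behind such profiles is easy to sketch. The code below is a generic illustration (the function name and sample values are invented): the sampling distribution of a statistic, here the median of per-run objective values of a stochastic solver, is estimated by resampling with replacement, which is exactly the small-sample machinery the profiles are built from.

```python
import random
import statistics

def bootstrap_statistic(sample, stat, n_boot=2000, seed=0):
    """Nonparametric bootstrap: estimate the sampling distribution of
    `stat` by recomputing it on resamples drawn with replacement."""
    rng = random.Random(seed)
    n = len(sample)
    return sorted(stat([rng.choice(sample) for _ in range(n)])
                  for _ in range(n_boot))

# Ten final objective values from independent runs of a stochastic solver:
runs = [0.91, 0.87, 0.95, 0.88, 0.90, 0.93, 0.86, 0.94, 0.89, 0.92]
dist = bootstrap_statistic(runs, statistics.median)
lo, hi = dist[int(0.025 * len(dist))], dist[int(0.975 * len(dist))]  # 95% interval
```

Because the resampled statistic can be anything (median, best-of-k, a quantile), the same loop supports profiles for statistics other than the mean, as the abstract emphasizes.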
Stochastic resonance during a polymer translocation process
NASA Astrophysics Data System (ADS)
Mondal, Debasish; Muthukumar, Murugappan
We study the translocation of a flexible polymer in a confined geometry subjected to a time-periodic external drive in order to explore stochastic resonance. We describe the equilibrium translocation process with a Fokker-Planck formalism and use a discrete two-state model to describe the effect of the external driving force on the translocation dynamics. We observe that no stochastic resonance is possible if the associated free-energy barrier is purely entropic in nature. The polymer chain experiences a stochastic resonance effect only in the presence of an energy threshold in terms of the polymer-pore interaction. Once stochastic resonance is feasible, the chain entropy controls the optimal synchronization conditions significantly.
Stochastic differential equation model to Prendiville processes
Granita; Bahar, Arifah
2015-10-22
The Prendiville process is a variation of the logistic model that assumes a linearly decreasing population growth rate. It is a continuous-time Markov chain (CTMC) taking integer values in a finite interval. The continuous-time Markov chain can be approximated by a stochastic differential equation (SDE). This paper discusses the stochastic differential equation of the Prendiville process. The work starts with the forward Kolmogorov equation of the continuous-time Markov chain of the Prendiville process, which is then formulated as a central-difference approximation. The approximation is then used in the Fokker-Planck equation to relate it to the stochastic differential equation of the Prendiville process. The explicit solution of the Prendiville process is obtained from the stochastic differential equation, from which the mean and variance functions of the Prendiville process follow easily.
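For concreteness, the CTMC itself can be simulated directly and checked against its known equilibrium mean. This is an illustrative sketch with parameter values of my own choosing: on states {n1,...,n2} the Prendiville process has birth rate beta*(n2 - n) and death rate alpha*(n - n1), so the net growth rate decreases linearly in n and the stationary mean is (alpha*n1 + beta*n2)/(alpha + beta).

```python
import random

def simulate_prendiville(n0, n1, n2, alpha, beta, t_max, seed):
    """Gillespie simulation of the Prendiville process on {n1,...,n2}:
    birth rate beta*(n2 - n), death rate alpha*(n - n1), i.e. a
    linearly decreasing net growth rate."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while t < t_max:
        up, down = beta * (n2 - n), alpha * (n - n1)
        total = up + down
        if total == 0.0:
            break
        t += rng.expovariate(total)
        n += 1 if rng.uniform(0.0, total) < up else -1
    return n

# The stationary law is a shifted binomial with mean
# (alpha*n1 + beta*n2) / (alpha + beta) = 25 for the values below.
samples = [simulate_prendiville(10, 0, 50, 1.0, 1.0, 20.0, s) for s in range(200)]
```

The sample mean of many long runs should sit near 25, matching the mean function that the abstract says follows from the explicit SDE solution.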
Quadratic Stochastic Operators with Countable State Space
NASA Astrophysics Data System (ADS)
Ganikhodjaev, Nasir
2016-03-01
In this paper, we provide the classes of Poisson and Geometric quadratic stochastic operators with countable state space, study the dynamics of these operators and discuss their application to economics.
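To make the operator class concrete: a quadratic stochastic operator maps a distribution x over types to V(x)_k = Σ_{i,j} P_{ij,k} x_i x_j, where P_{ij,·} is the offspring distribution for parent types i and j. The sketch below uses a finite three-type Mendelian example for brevity; the paper's Poisson and Geometric operators live on a countable state space, which would replace the finite index sets here.

```python
def qso_step(x, P):
    """One step x -> V(x) of a quadratic stochastic operator:
    V(x)_k = sum_{i,j} P[i][j][k] * x_i * x_j,
    where each P[i][j] is a probability distribution over offspring types."""
    n = len(x)
    return [sum(P[i][j][k] * x[i] * x[j] for i in range(n) for j in range(n))
            for k in range(n)]

# Finite-state illustration: Mendelian inheritance on genotypes (AA, Aa, aa).
AA, Aa, aa = 0, 1, 2
P = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
P[AA][AA][AA] = 1.0
P[AA][Aa][AA] = P[AA][Aa][Aa] = 0.5
P[AA][aa][Aa] = 1.0
P[Aa][Aa][AA] = P[Aa][Aa][aa] = 0.25
P[Aa][Aa][Aa] = 0.5
P[Aa][aa][Aa] = P[Aa][aa][aa] = 0.5
P[aa][aa][aa] = 1.0
for i in range(3):          # symmetrize: offspring law cannot depend on
    for j in range(i):      # the order of the parents
        P[i][j] = P[j][i]

x1 = qso_step([0.5, 0.0, 0.5], P)  # reaches Hardy-Weinberg in one step
```

Iterating `qso_step` traces the trajectory of the operator; for this Mendelian example the Hardy-Weinberg proportions (0.25, 0.5, 0.25) are a fixed point reached after a single step, a classical dynamics fact of quadratic stochastic operators.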
Stochasticity in plant cellular growth and patterning
Meyer, Heather M.; Roeder, Adrienne H. K.
2014-01-01
Plants, along with other multicellular organisms, have evolved specialized regulatory mechanisms to achieve proper tissue growth and morphogenesis. During development, growing tissues generate specialized cell types and complex patterns necessary for establishing the function of the organ. Tissue growth is a tightly regulated process that yields highly reproducible outcomes. Nevertheless, the underlying cellular and molecular behaviors are often stochastic. Thus, how does stochasticity, together with strict genetic regulation, give rise to reproducible tissue development? This review draws examples from plants as well as other systems to explore stochasticity in plant cell division, growth, and patterning. We conclude that stochasticity is often needed to create small differences between identical cells, which are amplified and stabilized by genetic and mechanical feedback loops to begin cell differentiation. These first few differentiating cells initiate traditional patterning mechanisms to ensure regular development. PMID:25250034
Extending Stochastic Network Calculus to Loss Analysis
Yu, Li; Zheng, Jun
2013-01-01
Loss is an important parameter of Quality of Service (QoS). Though stochastic network calculus is a very useful tool for performance evaluation of computer networks, existing studies on stochastic service guarantees mainly focused on the delay and backlog. Some efforts have been made to analyse loss by deterministic network calculus, but there are few results to extend stochastic network calculus for loss analysis. In this paper, we introduce a new parameter named loss factor into stochastic network calculus and then derive the loss bound through the existing arrival curve and service curve via this parameter. We then prove that our result is suitable for the networks with multiple input flows. Simulations show the impact of buffer size, arrival traffic, and service on the loss factor. PMID:24228019
Communication: Embedded fragment stochastic density functional theory
Neuhauser, Daniel; Baer, Roi; Rabani, Eran
2014-07-28
We develop a method in which the electronic densities of small fragments determined by Kohn-Sham density functional theory (DFT) are embedded using stochastic DFT to form the exact density of the full system. The new method preserves the scaling and the simplicity of the stochastic DFT but cures the slow convergence that occurs when weakly coupled subsystems are treated. It overcomes the spurious charge fluctuations that impair the applications of the original stochastic DFT approach. We demonstrate the new approach on a fullerene dimer and on clusters of water molecules and show that the density of states and the total energy can be accurately described with a relatively small number of stochastic orbitals.
Stochastic structure formation in random media
NASA Astrophysics Data System (ADS)
Klyatskin, V. I.
2016-01-01
Stochastic structure formation in random media is considered using examples of elementary dynamical systems related to the two-dimensional geophysical fluid dynamics (Gaussian random fields) and to stochastically excited dynamical systems described by partial differential equations (lognormal random fields). In the latter case, spatial structures (clusters) may form with a probability of one in almost every system realization due to rare events happening with vanishing probability. Problems involving stochastic parametric excitation occur in fluid dynamics, magnetohydrodynamics, plasma physics, astrophysics, and radiophysics. A more complicated stochastic problem dealing with anomalous structures on the sea surface (rogue waves) is also considered, where the random Gaussian generation of sea surface roughness is accompanied by parametric excitation.
Synchronization of noisy systems by stochastic signals
Neiman, A.; Schimansky-Geier, L.; Moss, F.; Schimansky-Geier, L.; Shulgin, B.; Collins, J.J.
1999-07-01
We study, in terms of synchronization, the nonlinear response of noisy bistable systems to a stochastic external signal, represented by Markovian dichotomic noise. We propose a general kinetic model which allows us to conduct a full analytical study of the nonlinear response, including the calculation of cross-correlation measures, the mean switching frequency, and synchronization regions. Theoretical results are compared with numerical simulations of a noisy overdamped bistable oscillator. We show that dichotomic noise can instantaneously synchronize the switching process of the system. We also show that synchronization is most pronounced at an optimal noise level; this effect connects this phenomenon with aperiodic stochastic resonance. Similar synchronization effects are observed for a stochastic neuron model stimulated by a stochastic spike train. © 1999 The American Physical Society
Stochastic description of quantum Brownian dynamics
NASA Astrophysics Data System (ADS)
Yan, Yun-An; Shao, Jiushu
2016-08-01
Classical Brownian motion has been well investigated since the pioneering work of Einstein, which inspired mathematicians to lay the theoretical foundation of stochastic processes. A stochastic formulation for quantum dynamics of dissipative systems described by the system-plus-bath model has been developed and found many applications in chemical dynamics, spectroscopy, quantum transport, and other fields. This article provides a tutorial review of the stochastic formulation for quantum dissipative dynamics. The key idea is to decouple the interaction between the system and the bath by virtue of the Hubbard-Stratonovich transformation or Itô calculus so that the system and the bath are not directly entangled during evolution; rather, they are correlated due to the complex white noises introduced. The influence of the bath on the system is thereby defined by an induced stochastic field, which leads to the stochastic Liouville equation for the system. The exact reduced density matrix can be calculated as the stochastic average in the presence of bath-induced fields. In general, the plain implementation of the stochastic formulation is only useful for short-time dynamics, but not efficient for long-time dynamics, as the statistical errors grow rapidly with time. For linear and other specific systems, the stochastic Liouville equation is a good starting point to derive the master equation. For general systems with decomposable bath-induced processes, the hierarchical approach in the form of a set of deterministic equations of motion is derived based on the stochastic formulation and provides an effective means for simulating the dissipative dynamics. A combination of the stochastic simulation and the hierarchical approach is suggested to solve the zero-temperature dynamics of the spin-boson model. This scheme correctly describes the coherent-incoherent transition (Toulouse limit) at moderate dissipation and predicts a rate dynamics in the overdamped regime. Challenging problems
Complexity and synchronization in stochastic chaotic systems
NASA Astrophysics Data System (ADS)
Son Dang, Thai; Palit, Sanjay Kumar; Mukherjee, Sayan; Hoang, Thang Manh; Banerjee, Santo
2016-02-01
We investigate the complexity of a hyperchaotic dynamical system perturbed by noise and various nonlinear speech and music signals. The complexity is measured by the weighted recurrence entropy of the hyperchaotic and stochastic systems. The synchronization phenomenon between two stochastic systems with complex coupling is also investigated. These criteria are tested on chaotic and perturbed systems by mean conditional recurrence and normalized synchronization error. Numerical results, including surface plots, normalized synchronization errors, and complexity variations, show the effectiveness of the proposed analysis.
Desynchronization of stochastically synchronized chemical oscillators
Snari, Razan; Tinsley, Mark R. E-mail: kshowalt@wvu.edu; Faramarzi, Sadegh; Showalter, Kenneth E-mail: kshowalt@wvu.edu; Wilson, Dan; Moehlis, Jeff; Netoff, Theoden Ivan
2015-12-15
Experimental and theoretical studies are presented on the design of perturbations that enhance desynchronization in populations of oscillators that are synchronized by periodic entrainment. A phase reduction approach is used to determine optimal perturbation timing based upon experimentally measured phase response curves. The effectiveness of the perturbation waveforms is tested experimentally in populations of periodically and stochastically synchronized chemical oscillators. The relevance of the approach to therapeutic methods for disrupting phase coherence in groups of stochastically synchronized neuronal oscillators is discussed.
Structural model uncertainty in stochastic simulation
McKay, M.D.; Morrison, J.D.
1997-09-01
Prediction uncertainty in stochastic simulation models can be described by a hierarchy of components: stochastic variability at the lowest level, input and parameter uncertainty at a higher level, and structural model uncertainty at the top. It is argued that a usual paradigm for analysis of input uncertainty is not suitable for application to structural model uncertainty. An approach more likely to produce an acceptable methodology for analyzing structural model uncertainty is one that uses characteristics specific to the particular family of models.
Desynchronization of stochastically synchronized chemical oscillators.
Snari, Razan; Tinsley, Mark R; Wilson, Dan; Faramarzi, Sadegh; Netoff, Theoden Ivan; Moehlis, Jeff; Showalter, Kenneth
2015-12-01
Experimental and theoretical studies are presented on the design of perturbations that enhance desynchronization in populations of oscillators that are synchronized by periodic entrainment. A phase reduction approach is used to determine optimal perturbation timing based upon experimentally measured phase response curves. The effectiveness of the perturbation waveforms is tested experimentally in populations of periodically and stochastically synchronized chemical oscillators. The relevance of the approach to therapeutic methods for disrupting phase coherence in groups of stochastically synchronized neuronal oscillators is discussed. PMID:26723155
Desynchronization of stochastically synchronized chemical oscillators
NASA Astrophysics Data System (ADS)
Snari, Razan; Tinsley, Mark R.; Wilson, Dan; Faramarzi, Sadegh; Netoff, Theoden Ivan; Moehlis, Jeff; Showalter, Kenneth
2015-12-01
Experimental and theoretical studies are presented on the design of perturbations that enhance desynchronization in populations of oscillators that are synchronized by periodic entrainment. A phase reduction approach is used to determine optimal perturbation timing based upon experimentally measured phase response curves. The effectiveness of the perturbation waveforms is tested experimentally in populations of periodically and stochastically synchronized chemical oscillators. The relevance of the approach to therapeutic methods for disrupting phase coherence in groups of stochastically synchronized neuronal oscillators is discussed.
Sequential decision analysis for nonstationary stochastic processes
NASA Technical Reports Server (NTRS)
Schaefer, B.
1974-01-01
A formulation of the problem of making decisions concerning the state of nonstationary stochastic processes is given. An optimal decision rule, for the case in which the stochastic process is independent of the decisions made, is derived. It is shown that this rule is a generalization of the Bayesian likelihood ratio test, and an analog to Wald's sequential likelihood ratio test is given, in which the optimal thresholds may vary with time.
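Wald's classical (stationary) sequential likelihood ratio test, which the abstract generalizes to time-varying thresholds, can be sketched for a Bernoulli observation stream as follows. The hypotheses, error rates, and sample values here are purely illustrative and are not taken from the report:

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a Bernoulli parameter.

    Accumulates the log-likelihood ratio of H1 (p = p1) vs H0 (p = p0)
    and stops as soon as it crosses either fixed decision threshold.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

# Three leading ones already push the LLR past the upper threshold.
decision, n = sprt([1, 1, 1, 1, 0, 1, 1, 1], p0=0.2, p1=0.8)
print(decision, n)
```

In the nonstationary setting of the abstract, `upper` and `lower` would become functions of time rather than constants.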
Stability of Stochastic Neutral Cellular Neural Networks
NASA Astrophysics Data System (ADS)
Chen, Ling; Zhao, Hongyong
In this paper, we study a class of stochastic neutral cellular neural networks. By constructing a suitable Lyapunov functional and employing the nonnegative semi-martingale convergence theorem we give some sufficient conditions ensuring the almost sure exponential stability of the networks. The results obtained are helpful to design stability of networks when stochastic noise is taken into consideration. Finally, two examples are provided to show the correctness of our analysis.
Automated Flight Routing Using Stochastic Dynamic Programming
NASA Technical Reports Server (NTRS)
Ng, Hok K.; Morando, Alex; Grabbe, Shon
2010-01-01
Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm that reroutes flights in the presence of winds, enroute convective weather, and congested airspace based on stochastic dynamic programming. A stochastic disturbance model incorporates capacity uncertainty into the reroute design process. A trajectory-based airspace demand model is employed for calculating current and future airspace demand. The optimal routes minimize the total expected traveling time, weather incursion, and induced congestion costs. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have smaller deviation probability than the deterministic counterpart when both reroutes have similar total flight distance. The stochastic rerouting algorithm takes into account all convective weather fields with all severity levels while the deterministic algorithm only accounts for convective weather systems exceeding a specified level of severity. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight time, and both have about 1% of travel time crossing congested enroute sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.
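The core idea of expected-cost routing under weather uncertainty can be illustrated with a heavily simplified backward dynamic program on a toy grid, where each cell's cost is its travel time plus (incursion probability × penalty). The grid, allowed moves, and penalty are invented for illustration and are not the NASA model:

```python
# Toy backward dynamic program: fly from (0, 0) to the bottom-right corner
# moving only east or south; each cell contributes its nominal travel time
# plus the expected weather penalty (incursion probability x penalty cost).

def expected_cost_route(time, p_weather, penalty=10.0):
    rows, cols = len(time), len(time[0])
    cell = [[time[r][c] + p_weather[r][c] * penalty for c in range(cols)]
            for r in range(rows)]
    INF = float("inf")
    V = [[INF] * cols for _ in range(rows)]
    V[rows - 1][cols - 1] = cell[rows - 1][cols - 1]
    for r in range(rows - 1, -1, -1):
        for c in range(cols - 1, -1, -1):
            if r == rows - 1 and c == cols - 1:
                continue
            best = INF
            if c + 1 < cols:
                best = min(best, V[r][c + 1])   # move east
            if r + 1 < rows:
                best = min(best, V[r + 1][c])   # move south
            V[r][c] = cell[r][c] + best
    return V[0][0]

time = [[1, 1, 1],
        [1, 1, 1],
        [1, 1, 1]]
clear = [[0.0, 0.0, 0.0] for _ in range(3)]
storm = [[0.0, 0.5, 0.0] for _ in range(3)]   # 50% incursion, middle column
print(expected_cost_route(time, clear))  # 5.0: five cells on any path
print(expected_cost_route(time, storm))  # 10.0: one stormy cell is unavoidable
```

Every monotone path must cross the stormy column exactly once, so the optimal expected cost rises from 5.0 to 10.0.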
Stochastic resonance during a polymer translocation process.
Mondal, Debasish; Muthukumar, M
2016-04-14
We have studied the occurrence of stochastic resonance when a flexible polymer chain undergoes a single-file translocation through a nano-pore separating two spherical cavities, under a time-periodic external driving force. The translocation of the chain is controlled by a free energy barrier determined by chain length, pore length, pore-polymer interaction, and confinement inside the donor and receiver cavities. The external driving force is characterized by a frequency and amplitude. By combining the Fokker-Planck formalism for polymer translocation and a two-state model for stochastic resonance, we have derived analytical formulas for criteria for emergence of stochastic resonance during polymer translocation. We show that no stochastic resonance is possible if the free energy barrier for polymer translocation is purely entropic in nature. The polymer chain exhibits stochastic resonance only in the presence of an energy threshold in terms of polymer-pore interactions. Once stochastic resonance is feasible, the chain entropy controls the optimal synchronization conditions significantly. PMID:27083746
Stochastic resonance during a polymer translocation process
NASA Astrophysics Data System (ADS)
Mondal, Debasish; Muthukumar, M.
2016-04-01
We have studied the occurrence of stochastic resonance when a flexible polymer chain undergoes a single-file translocation through a nano-pore separating two spherical cavities, under a time-periodic external driving force. The translocation of the chain is controlled by a free energy barrier determined by chain length, pore length, pore-polymer interaction, and confinement inside the donor and receiver cavities. The external driving force is characterized by a frequency and amplitude. By combining the Fokker-Planck formalism for polymer translocation and a two-state model for stochastic resonance, we have derived analytical formulas for criteria for emergence of stochastic resonance during polymer translocation. We show that no stochastic resonance is possible if the free energy barrier for polymer translocation is purely entropic in nature. The polymer chain exhibits stochastic resonance only in the presence of an energy threshold in terms of polymer-pore interactions. Once stochastic resonance is feasible, the chain entropy controls the optimal synchronization conditions significantly.
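The two-state reduction invoked above follows, in its textbook form, the McNamara-Wiesenfeld picture: a Kramers switching rate proportional to exp(-ΔV/D) whose signal-to-noise ratio peaks at an intermediate noise strength. A minimal numerical sketch, with all prefactor constants dropped (this is the generic two-state result, not the paper's polymer-specific formulas):

```python
import math

def snr(D, dV=1.0, amp=1.0):
    # Two-state approximation: Kramers switching rate exp(-dV/D),
    # weighted by the (amp/D)^2 signal prefactor; constants dropped.
    return (amp / D) ** 2 * math.exp(-dV / D)

# Scan noise intensities; the SNR is non-monotonic with a single peak.
noise = [0.1 * k for k in range(1, 31)]
best = max(noise, key=snr)
print(round(best, 2))  # maximum near D = dV / 2
```

Setting d(SNR)/dD = 0 for this form gives the peak exactly at D = ΔV/2, the hallmark of stochastic resonance.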
Stochastic models of intracellular transport
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.; Newby, Jay M.
2013-01-01
The interior of a living cell is a crowded, heterogenuous, fluctuating environment. Hence, a major challenge in modeling intracellular transport is to analyze stochastic processes within complex environments. Broadly speaking, there are two basic mechanisms for intracellular transport: passive diffusion and motor-driven active transport. Diffusive transport can be formulated in terms of the motion of an overdamped Brownian particle. On the other hand, active transport requires chemical energy, usually in the form of adenosine triphosphate hydrolysis, and can be direction specific, allowing biomolecules to be transported long distances; this is particularly important in neurons due to their complex geometry. In this review a wide range of analytical methods and models of intracellular transport is presented. In the case of diffusive transport, narrow escape problems, diffusion to a small target, confined and single-file diffusion, homogenization theory, and fractional diffusion are considered. In the case of active transport, Brownian ratchets, random walk models, exclusion processes, random intermittent search processes, quasi-steady-state reduction methods, and mean-field approximations are considered. Applications include receptor trafficking, axonal transport, membrane diffusion, nuclear transport, protein-DNA interactions, virus trafficking, and the self-organization of subcellular structures.
Multidimensional stochastic approximation Monte Carlo.
Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E_1, E_2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_1+E_2) from g(E_1, E_2). PMID:27415383
Lower hybrid wavepacket stochasticity revisited
Fuchs, V.; Krlín, L.; Pánek, R.; Preinhaelter, J.; Seidl, J.; Urban, J.
2014-02-12
Analysis is presented in support of the explanation in Ref. [1] for the observation of relativistic electrons during Lower Hybrid (LH) operation in EC pre-heated plasma at the WEGA stellarator [1,2]. LH power from the WEGA TE11 circular waveguide, 9 cm diameter, un-phased, 2.45 GHz antenna, is radiated into a B ≅ 0.5 T, n_e ≅ 5×10^17 m^-3 plasma at T_e ≅ 10 eV bulk temperature with an EC generated 50 keV component [1]. The fast electrons cycle around flux or drift surfaces with few collisions, sufficient for randomizing phases but insufficient for slowing fast electrons down, and thus repeatedly interact with the rf field close to the antenna mouth, gaining energy in the process. Our antenna calculations reveal a standing electric field pattern at the antenna mouth, with which we formulate the electron dynamics via a relativistic Hamiltonian. A simple approximation of the equations of motion leads to a relativistic generalization of the area-preserving Fermi-Ulam (F-U) map [3], allowing phase-space global stochasticity analysis. At typical WEGA plasma and antenna conditions, the F-U map predicts an LH driven current of about 230 A, at about 225 W of dissipated power, in good agreement with the measurements and analysis reported in [1].
Stochastic phase-change neurons
NASA Astrophysics Data System (ADS)
Tuma, Tomas; Pantazi, Angeliki; Le Gallo, Manuel; Sebastian, Abu; Eleftheriou, Evangelos
2016-08-01
Artificial neuromorphic systems based on populations of spiking neurons are an indispensable tool in understanding the human brain and in constructing neuromimetic computational systems. To reach areal and power efficiencies comparable to those seen in biological systems, electroionics-based and phase-change-based memristive devices have been explored as nanoscale counterparts of synapses. However, progress on scalable realizations of neurons has so far been limited. Here, we show that chalcogenide-based phase-change materials can be used to create an artificial neuron in which the membrane potential is represented by the phase configuration of the nanoscale phase-change device. By exploiting the physics of reversible amorphous-to-crystal phase transitions, we show that the temporal integration of postsynaptic potentials can be achieved on a nanosecond timescale. Moreover, we show that this is inherently stochastic because of the melt-quench-induced reconfiguration of the atomic structure occurring when the neuron is reset. We demonstrate the use of these phase-change neurons, and their populations, in the detection of temporal correlations in parallel data streams and in sub-Nyquist representation of high-bandwidth signals.
Stochastic phase-change neurons.
Tuma, Tomas; Pantazi, Angeliki; Le Gallo, Manuel; Sebastian, Abu; Eleftheriou, Evangelos
2016-08-01
Artificial neuromorphic systems based on populations of spiking neurons are an indispensable tool in understanding the human brain and in constructing neuromimetic computational systems. To reach areal and power efficiencies comparable to those seen in biological systems, electroionics-based and phase-change-based memristive devices have been explored as nanoscale counterparts of synapses. However, progress on scalable realizations of neurons has so far been limited. Here, we show that chalcogenide-based phase-change materials can be used to create an artificial neuron in which the membrane potential is represented by the phase configuration of the nanoscale phase-change device. By exploiting the physics of reversible amorphous-to-crystal phase transitions, we show that the temporal integration of postsynaptic potentials can be achieved on a nanosecond timescale. Moreover, we show that this is inherently stochastic because of the melt-quench-induced reconfiguration of the atomic structure occurring when the neuron is reset. We demonstrate the use of these phase-change neurons, and their populations, in the detection of temporal correlations in parallel data streams and in sub-Nyquist representation of high-bandwidth signals. PMID:27183057
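A software caricature of the mechanism described: an integrate-and-fire neuron whose reset level is randomized, standing in for the melt-quench variability of the phase configuration after each firing. Every parameter below is invented for illustration and does not describe the device itself:

```python
import random

def stochastic_neuron(inputs, threshold=1.0, jitter=0.2, seed=3):
    """Integrate-and-fire neuron with a stochastic reset level,
    mimicking melt-quench variability in a phase-change cell."""
    rng = random.Random(seed)
    u, spikes = 0.0, []
    for t, x in enumerate(inputs):
        u += x                              # temporal integration
        if u >= threshold:
            spikes.append(t)                # fire
            u = rng.uniform(0.0, jitter)    # stochastic reset, not exactly 0
    return spikes

# A constant input stream: spike times drift because each reset differs.
spikes = stochastic_neuron([0.3] * 20)
print(spikes[0], len(spikes))
```

The first spike time is deterministic (no reset has happened yet); every later inter-spike interval inherits the reset randomness, which is the property the paper exploits for detecting temporal correlations across populations.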
Multidimensional stochastic approximation Monte Carlo
NASA Astrophysics Data System (ADS)
Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E_1, E_2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_1+E_2) from g(E_1, E_2).
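The flat-histogram idea behind SAMC can be sketched on a toy model whose density of states is known exactly: E = number of heads among n coins, with g(E) = C(n, E). The gain schedule and step counts below are illustrative, and this one-dimensional caricature is a minimal sketch of the principle, not the multidimensional method of the paper:

```python
import math, random

def samc_coins(n=4, steps=200000, t0=1000, seed=1):
    """SAMC/Wang-Landau-style estimate of the density of states g(E)
    for E = number of heads among n coins (true g(E) = C(n, E))."""
    random.seed(seed)
    log_g = [0.0] * (n + 1)
    state = [0] * n
    E = 0
    for t in range(1, steps + 1):
        i = random.randrange(n)                 # propose flipping one coin
        E_new = E + (1 if state[i] == 0 else -1)
        # flat-histogram acceptance: favor rarely visited energies
        if math.log(random.random()) < log_g[E] - log_g[E_new]:
            state[i] ^= 1
            E = E_new
        gamma = t0 / max(t0, t)                 # decaying gain factor
        log_g[E] += gamma                       # update current energy level
    base = log_g[0]
    return [lg - base for lg in log_g]          # normalize to log g(0) = 0

est = samc_coins()
true = [math.log(math.comb(4, k)) for k in range(5)]
print([round(x, 1) for x in est])
```

With the decaying gain, the estimated log g(E) converges toward log C(4, E) = 0, log 4, log 6, log 4, 0 up to the additive normalization.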
Stochastic slowdown in evolutionary processes.
Altrock, Philipp M; Gokhale, Chaitanya S; Traulsen, Arne
2010-07-01
We examine birth-death processes with state dependent transition probabilities and at least one absorbing boundary. In evolution, this describes selection acting on two different types in a finite population where reproductive events occur successively. If the two types have equal fitness the system performs a random walk. If one type has a fitness advantage it is favored by selection, which introduces a bias (asymmetry) in the transition probabilities. How long does it take until advantageous mutants have invaded and taken over? Surprisingly, we find that the average time of such a process can increase, even if the mutant type always has a fitness advantage. We discuss this finding for the Moran process and develop a simplified model which allows a more intuitive understanding. We show that this effect can occur for weak but nonvanishing bias (selection) in the state dependent transition rates and infer the scaling with system size. We also address the Wright-Fisher model commonly used in population genetics, which shows that this stochastic slowdown is not restricted to birth-death processes. PMID:20866666
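The birth-death machinery the paper builds on can be made concrete for the Moran process: the closed-form fixation probability, plus the mean absorption time obtained by solving the tridiagonal first-step equations. This reproduces standard textbook quantities as a sketch; it is not the paper's slowdown analysis itself:

```python
def moran_fixation(N, r):
    """Fixation probability of one mutant of fitness r in a Moran process
    of size N (closed form, using the ratio gamma = T-/T+ = 1/r)."""
    if r == 1.0:
        return 1.0 / N
    return (1.0 - 1.0 / r) / (1.0 - r ** (-N))

def moran_absorption_time(N, r):
    """Mean steps to absorption (0 or N mutants) from one mutant, solving
    (T+_j + T-_j) t_j - T+_j t_{j+1} - T-_j t_{j-1} = 1, t_0 = t_N = 0."""
    Tp = [r * j * (N - j) / ((r * j + N - j) * N) for j in range(N + 1)]
    Tm = [j * (N - j) / ((r * j + N - j) * N) for j in range(N + 1)]
    n = N - 1
    a = [-Tm[j] for j in range(1, N)]          # sub-diagonal
    b = [Tp[j] + Tm[j] for j in range(1, N)]   # diagonal
    c = [-Tp[j] for j in range(1, N)]          # super-diagonal
    d = [1.0] * n
    for i in range(1, n):                      # Thomas algorithm: forward sweep
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    t = [0.0] * n
    t[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        t[i] = (d[i] - c[i] * t[i + 1]) / b[i]
    return t[0]

N = 10
print(round(moran_fixation(N, 1.0), 3))        # neutral case: 1/N
print(moran_fixation(N, 1.1) > moran_fixation(N, 1.0))
```

The same first-step equations, conditioned on fixation, are what yield the counterintuitive conditional times the paper analyzes.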
Stochastic Resonance In Visual Perception
NASA Astrophysics Data System (ADS)
Simonotto, Enrico
1996-03-01
Stochastic resonance (SR) is a well established physical phenomenon wherein some measure of the coherence of a weak signal can be optimized by random fluctuations, or "noise" (K. Wiesenfeld and F. Moss, Nature 373, 33 (1995)). In all experiments to date the coherence has been measured using numerical analysis of the data, for example, signal-to-noise ratios obtained from power spectra. But can this analysis be replaced by a perceptive task? Previously we had demonstrated this possibility with a numerical model of perceptual bistability applied to the interpretation of ambiguous figures (M. Riani and E. Simonotto, Phys. Rev. Lett. 72, 3120 (1994)). Here I describe an experiment wherein SR is detected in visual perception. A recognizable grayscale photograph was digitized and presented. The picture was then placed beneath a threshold. Every pixel for which the grayscale exceeded the threshold was painted white, and all others black. For a large enough threshold, the picture is unrecognizable, but the addition of a random number to every pixel renders it interpretable (C. Seife and M. Roberts, The Economist 336, 59, July 29 (1995)). However, the addition of dynamical noise to the pixels much enhances an observer's ability to interpret the picture. Here I report the results of psychophysics experiments wherein the effects of both the intensity of the noise and its correlation time were studied.
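The threshold-plus-noise effect described can be reproduced on synthetic data: a subthreshold "grayscale" ramp is invisible after thresholding alone, but averaging many 1-bit frames with fresh noise per frame recovers an image highly correlated with the original. Frame counts, noise level, and the use of a seeded generator are illustrative choices, not the experimental protocol:

```python
import random, math

def threshold_frames(signal, noise_sd, frames=200, thresh=1.0, seed=0):
    """Average many thresholded (1-bit) frames of a subthreshold signal,
    adding independent noise per frame -- the 'dynamical noise' demo."""
    rng = random.Random(seed)
    acc = [0.0] * len(signal)
    for _ in range(frames):
        for i, s in enumerate(signal):
            if s + rng.gauss(0, noise_sd) > thresh:
                acc[i] += 1.0 / frames
    return acc

def corr(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# A grayscale ramp that never crosses the threshold on its own.
signal = [0.1 + 0.8 * i / 63 for i in range(64)]
no_noise = threshold_frames(signal, 1e-9)    # every frame is all black
with_noise = threshold_frames(signal, 0.5)
print(all(v == 0.0 for v in no_noise))       # nothing visible without noise
print(corr(with_noise, signal) > 0.9)        # noise makes the ramp visible
```

Each averaged pixel estimates the probability of crossing the threshold, a monotone function of the underlying gray level, which is why the structure reappears.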
A novel stochastic optimization algorithm.
Li, B; Jiang, W
2000-01-01
This paper presents a new stochastic approach, SAGACIA, based on proper integration of the simulated annealing algorithm (SAA), genetic algorithm (GA), and chemotaxis algorithm (CA) for solving complex optimization problems. SAGACIA combines the advantages of SAA, GA, and CA together. It has the following features: (1) it is not a simple mix of SAA, GA, and CA; (2) it works from a population; (3) it can be easily used to solve optimization problems either with continuous variables or with discrete variables, and it does not need coding and decoding; and (4) it can easily escape from local minima and converge quickly. Good solutions can be obtained in a very short time. The search process of SAGACIA can be explained with Markov chains. In this paper, it is proved that SAGACIA has the property of global asymptotical convergence. SAGACIA has been applied to solve such problems as scheduling, the training of artificial neural networks, and the optimizing of complex functions. In all the test cases, the performance of SAGACIA is better than that of SAA, GA, and CA. PMID:18244742
X. Frank Xu
2010-03-30
Multiscale modeling of stochastic systems, or uncertainty quantification of multiscale modeling, is becoming an emerging research frontier, with rapidly growing engineering applications in nanotechnology, biotechnology, advanced materials, geo-systems, etc. While tremendous efforts have been devoted to either stochastic methods or multiscale methods, little combined work had been done on the integration of multiscale and stochastic methods, and there was no method formally available to tackle multiscale problems involving uncertainties. By developing an innovative Multiscale Stochastic Finite Element Method (MSFEM), this research has made a ground-breaking contribution to the emerging field of Multiscale Stochastic Modeling (MSM) (Fig 1). The theory of MSFEM basically decomposes a boundary value problem of random microstructure into a slow-scale deterministic problem and a fast-scale stochastic one. The slow-scale problem corresponds to common engineering modeling practices where fine-scale microstructure is approximated by certain effective constitutive constants, and it can be solved by using standard numerical solvers. The fast-scale problem evaluates fluctuations of local quantities due to random microstructure, which is important for scale-coupling systems and particularly those involving failure mechanisms. The Green-function-based fast-scale solver developed in this research overcomes the curse of dimensionality commonly met in conventional approaches by proposing a random-field-based orthogonal expansion approach. The MSFEM formulated in this project paves the way to deliver the first computational tool/software on uncertainty quantification of multiscale systems. The applications of MSFEM on engineering problems will directly enhance our modeling capability in materials science (composite materials, nanostructures), geophysics (porous media, earthquake), and biological systems (biological tissues, bones, protein folding). Continuous development of MSFEM will
Replication timing and its emergence from stochastic processes
Bechhoefer, John; Rhind, Nicholas
2012-01-01
The temporal organization of DNA replication has puzzled cell biologists since before the mechanism of replication was understood. The realization that replication timing correlates with important features, such as transcription, chromatin structure and genome evolution, and is misregulated in cancer and aging has only deepened the fascination. Many ideas about replication timing have been proposed, but most have been short on mechanistic detail. However, recent work has begun to elucidate basic principles of replication timing. In particular, mathematical modeling of replication kinetics in several systems has shown that the reproducible replication timing patterns seen in population studies can be explained by stochastic origin firing at the single-cell level. This work suggests that replication timing need not be controlled by a hierarchical mechanism that imposes replication timing from a central regulator, but instead results from simple rules that affect individual origins. PMID:22520729
Random musings on stochastics (Lorenz Lecture)
NASA Astrophysics Data System (ADS)
Koutsoyiannis, D.
2014-12-01
In 1960 Lorenz identified the chaotic nature of atmospheric dynamics, thus highlighting the importance of the discovery of chaos by Poincare, 70 years earlier, in the motion of three bodies. Chaos in the macroscopic world offered a natural way to explain unpredictability, that is, randomness. Concurrently with Poincare's discovery, Boltzmann introduced statistical physics, while soon after Borel and Lebesgue laid the foundation of measure theory, later (in 1930s) used by Kolmogorov as the formal foundation of probability theory. Subsequently, Kolmogorov and Khinchin introduced the concepts of stochastic processes and stationarity, and advanced the concept of ergodicity. All these areas are now collectively described by the term "stochastics", which includes probability theory, stochastic processes and statistics. As paradoxical as it may seem, stochastics offers the tools to deal with chaos, even if it results from deterministic dynamics. As chaos entails uncertainty, it is more informative and effective to replace the study of exact system trajectories with that of probability densities. Also, as the exact laws of complex systems can hardly be deduced by synthesis of the detailed interactions of system components, these laws should inevitably be inferred by induction, based on observational data and using statistics. The arithmetic of stochastics is quite different from that of regular numbers. Accordingly, it needs the development of intuition and interpretations which differ from those built upon deterministic considerations. Using stochastic tools in a deterministic context may result in mistaken conclusions. In an attempt to contribute to a more correct interpretation and use of stochastic concepts in typical tasks of nonlinear systems, several examples are studied, which aim (a) to clarify the difference in the meaning of linearity in deterministic and stochastic context; (b) to contribute to a more attentive use of stochastic concepts (entropy, statistical
Stochastic volatility models and Kelvin waves
NASA Astrophysics Data System (ADS)
Lipton, Alex; Sepp, Artur
2008-08-01
We use stochastic volatility models to describe the evolution of an asset price, its instantaneous volatility and its realized volatility. In particular, we concentrate on the Stein and Stein model (SSM) (1991) for the stochastic asset volatility and the Heston model (HM) (1993) for the stochastic asset variance. By construction, the volatility is not sign definite in SSM and is non-negative in HM. It is well known that both models produce closed-form expressions for the prices of vanilla option via the Lewis-Lipton formula. However, the numerical pricing of exotic options by means of the finite difference and Monte Carlo methods is much more complex for HM than for SSM. Until now, this complexity was considered to be an acceptable price to pay for ensuring that the asset volatility is non-negative. We argue that having negative stochastic volatility is a psychological rather than financial or mathematical problem, and advocate using SSM rather than HM in most applications. We extend SSM by adding volatility jumps and obtain a closed-form expression for the density of the asset price and its realized volatility. We also show that the current method of choice for solving pricing problems with stochastic volatility (via the affine ansatz for the Fourier-transformed density function) can be traced back to the Kelvin method designed in the 19th century for studying wave motion problems arising in fluid dynamics.
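The sign issue contrasted above is easy to exhibit with a seeded Euler simulation: an Ornstein-Uhlenbeck volatility (Stein-Stein) wanders negative, while a "full truncation" Euler scheme for the Heston square-root variance only ever feeds the non-negative part max(v, 0) into the drift and diffusion. All parameter values are illustrative, not calibrated:

```python
import random

def simulate(seed=42, n=5000, dt=0.01):
    """Euler paths: Heston variance (full truncation) vs Stein-Stein
    OU volatility; returns the minimum of each over the path."""
    rng = random.Random(seed)
    kappa, theta_v, eta_v = 1.0, 0.04, 0.4   # Heston parameters (illustrative)
    alpha, m, eta_s = 1.0, 0.2, 0.8          # Stein-Stein parameters
    v, s = theta_v, m
    min_var, min_vol = v, s
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        vp = max(v, 0.0)                     # full truncation: use v+ only
        v += kappa * (theta_v - vp) * dt + eta_v * vp ** 0.5 * dt ** 0.5 * z1
        s += alpha * (m - s) * dt + eta_s * dt ** 0.5 * z2
        min_var = min(min_var, max(v, 0.0))  # variance actually used
        min_vol = min(min_vol, s)
    return min_var, min_vol

min_var, min_vol = simulate()
print(min_var >= 0.0, min_vol < 0.0)
```

The OU volatility's stationary standard deviation here (eta_s / sqrt(2·alpha) ≈ 0.57) dwarfs its mean of 0.2, so negative excursions are routine, illustrating the paper's point that negative volatility in SSM is a matter of interpretation rather than a numerical obstacle.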
Stochastic resonance in models of neuronal ensembles
NASA Astrophysics Data System (ADS)
Chialvo, Dante R.; Longtin, André; Müller-Gerking, Johannes
1997-02-01
Two recently suggested mechanisms for the neuronal encoding of sensory information involving the effect of stochastic resonance with aperiodic time-varying inputs are considered. It is shown, using theoretical arguments and numerical simulations, that the nonmonotonic behavior with increasing noise of the correlation measures used for the so-called aperiodic stochastic resonance (ASR) scenario does not rely on the cooperative effect typical of stochastic resonance in bistable and excitable systems. Rather, ASR with slowly varying signals is more properly interpreted as linearization by noise. Consequently, the broadening of the ``resonance curve'' in the multineuron stochastic resonance without tuning scenario can also be explained by this linearization. Computation of the input-output correlation as a function of both signal frequency and noise for the model system further reveals conditions where noise-induced firing with aperiodic inputs will benefit from stochastic resonance rather than linearization by noise. Thus, our study clarifies the tuning requirements for the optimal transduction of subthreshold aperiodic signals. It also shows that a single deterministic neuron can perform as well as a network when biased into a suprathreshold regime. Finally, we show that the inclusion of a refractory period in the spike-detection scheme produces a better correlation between instantaneous firing rate and input signal.
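The "linearization by noise" effect can be illustrated with a bare threshold-crossing unit, a deliberate simplification of the neuron models discussed above (all numbers here are arbitrary assumptions): with zero noise a subthreshold signal produces no spikes at all, while moderate noise makes the window-averaged firing rate track the slow signal.

```python
import numpy as np

rng = np.random.default_rng(1)
n_win, win = 400, 50                                  # windows x samples/window
t = np.arange(n_win * win)
signal = 0.8 + 0.15 * np.sin(2 * np.pi * t / 2000.0)  # slow, always subthreshold
threshold = 1.0

def firing_rate(noise_std):
    """Fraction of threshold crossings in each window of the noisy signal."""
    x = signal + rng.normal(0.0, noise_std, t.size)
    spikes = (x > threshold).astype(float)
    return spikes.reshape(n_win, win).mean(axis=1)

def rate_signal_corr(noise_std):
    """Correlation between window-averaged rate and window-averaged signal."""
    rate = firing_rate(noise_std)
    if rate.std() == 0.0:                             # no spikes at all
        return 0.0
    s = signal.reshape(n_win, win).mean(axis=1)
    return float(np.corrcoef(rate, s)[0, 1])

c_zero = rate_signal_corr(0.0)    # deterministic: never crosses threshold
c_mid = rate_signal_corr(0.15)    # noise lets the rate encode the signal
```

No bistability or excitable dynamics is involved here, which is the abstract's point: for slow subthreshold inputs the noise simply linearizes the hard threshold.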
Stochastic modeling of the auroral electrojet index
NASA Astrophysics Data System (ADS)
Anh, V. V.; Yong, J. M.; Yu, Z. G.
2008-10-01
Substorms are often identified by bursts of activity in the magnetosphere-ionosphere system characterized by the auroral electrojet (AE) index. The highly complex nature of substorm-related bursts suggests that a stochastic approach is needed. Stochastic models including fractional Brownian motion, linear fractional stable motion, the Fokker-Planck equation and Itô-type stochastic differential equations have been suggested to model the AE index. This paper provides a stochastic model for the AE index in the form of a fractional stochastic differential equation. The long memory of the AE time series is represented by a fractional derivative, while its bursty behavior is modeled by a Lévy noise with an inverse Gaussian marginal distribution. The equation has the form of the classical Stokes-Boussinesq-Basset equation of motion for a spherical particle in a fluid with retarded viscosity. Parameter estimation and approximation schemes are detailed for the simulation of the equation. The fractional order of the equation conforms with the previous finding that the fluctuations of the magnetosphere-ionosphere system as seen in the AE index reflect the fluctuations in the solar wind: they both possess the same extent of long-range dependence. The fractional derivative term introduced to capture the extent of long-range dependence, together with the inverse Gaussian noise input, describes the intermittency inherent in the AE data.
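The long-range-dependence ingredient of such a model can be illustrated in isolation. The sketch below samples fractional Gaussian noise exactly from its covariance by Cholesky factorization; this is a generic textbook method, not the paper's estimation scheme, and the Hurst exponent used is an arbitrary choice. For H > 0.5 the lag-one autocorrelation 0.5(2^{2H} - 2) is positive, reflecting persistence.

```python
import numpy as np

def fgn_cov(n, H):
    """Covariance matrix of unit-variance fractional Gaussian noise,
    r(k) = 0.5*(|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H))."""
    k = np.arange(n, dtype=float)
    r = 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))
    i = np.arange(n)
    return r[np.abs(i[:, None] - i[None, :])]

def sample_fgn(n, H, n_paths, rng):
    """Exact Gaussian sampling: rows are independent fGn series of length n."""
    L = np.linalg.cholesky(fgn_cov(n, H) + 1e-12 * np.eye(n))
    return rng.normal(size=(n_paths, n)) @ L.T

rng = np.random.default_rng(7)
x = sample_fgn(256, 0.8, 400, rng)        # H > 0.5: persistent increments
lag1 = float(np.mean(x[:, :-1] * x[:, 1:]))  # sample estimate of r(1) ~ 0.52
```

Cholesky sampling is O(n^3) and only practical for short series; the point is the positive lag-one correlation, the elementary signature of the long memory the AE model must reproduce.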
Non-Markovian stochastic evolution equations
NASA Astrophysics Data System (ADS)
Costanza, G.
2014-05-01
Non-Markovian continuum stochastic and deterministic equations are derived from a set of discrete stochastic and deterministic evolution equations. Examples are given of discrete evolution equations whose updating rules depend on two or more previous time steps. Among them, the continuum stochastic evolution equation of Newton's second law, the stochastic evolution equation of a wave equation, the stochastic evolution equation for the scalar meson field, etc., are obtained as special cases. Extensions to systems of evolution equations and other extensions are considered and examples are given. The concepts of isomorphism and almost isomorphism are introduced in order to compare the coefficients of the continuum evolution equations obtained from two different smoothing procedures that arise from two different approaches. Usually these discrepancies arise from two sources: on the one hand, the use of different representations of the generalized functions appearing in the models and, on the other hand, the different approaches used to describe the models. These new concepts allow one to overcome controversies that have appeared in the literature over the decades.
Discrete analysis of stochastic NMR. II
NASA Astrophysics Data System (ADS)
Wong, S. T. S.; Roos, M. S.; Newmark, R. D.; Budinger, T. F.
Stochastic NMR is an efficient technique for high-field in vivo imaging and spectroscopic studies where the peak RF power required may be prohibitively high for conventional pulsed NMR techniques. A stochastic NMR experiment excites the spin system with a sequence of RF pulses where the flip angles or the phases of the pulses are samples of a discrete stochastic process. In a previous paper the stochastic experiment was analyzed and analytic expressions for the input-output cross-correlations, average signal power, and signal spectral density were obtained for a general stochastic RF excitation. In this paper specific cases of excitation with random phase, fixed flip angle, and excitation with two random components in quadrature are analyzed. The input-output cross-correlation for these two types of excitations is shown to be Lorentzian. Line broadening is the only spectral distortion as the RF excitation power is increased. The systematic noise power is inversely proportional to the number of data points N used in the spectral reconstruction. The use of a complete maximum length sequence (MLS) may improve the signal-to-systematic-noise ratio by 20 dB relative to random binary excitation, but peculiar features in the higher-order autocorrelations of MLS cause noise-like distortion in the reconstructed spectra when the excitation power is high. The amount of noise-like distortion depends on the choice of the MLS generator.
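A maximum length sequence of the kind used for the excitation can be generated with a linear feedback shift register. The sketch below is illustrative only (it is not the authors' experimental setup, and the tap choice is one of several valid primitive recurrences of degree 4): the recurrence b[t] = b[t-1] XOR b[t-4] has period 2^4 - 1 = 15, and one full period of a degree-n MLS contains 2^(n-1) ones.

```python
def mls(taps, nbits):
    """One period of a binary maximum length sequence from a Fibonacci LFSR.

    taps are 1-indexed positions into the register (position 1 = newest bit);
    taps=(4, 1) implements the primitive recurrence b[t] = b[t-1] XOR b[t-4].
    """
    state = [1] * nbits          # any nonzero seed works
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(state[-1])    # output the oldest bit
        fb = 0
        for tp in taps:
            fb ^= state[tp - 1]
        state = [fb] + state[:-1]
    return seq

seq = mls((4, 1), 4)
n_ones = sum(seq)   # a degree-4 MLS period has 2**3 = 8 ones and 7 zeros
```

The balance property (eight ones, seven zeros per period) is one of the pseudo-random features that makes MLS excitation attractive; the higher-order autocorrelation peculiarities mentioned in the abstract are a separate matter not captured by this sketch.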
EDITORIAL: Stochasticity in fusion plasmas
NASA Astrophysics Data System (ADS)
Finken, K. H.
2006-04-01
In recent years the importance of externally imposed resonant magnetic fields on plasma has become more and more recognized. These fields will cause ergodization at well defined plasma layers and can induce large size islands at rational q-surfaces. A hope for future large scale tokamak devices is the development of a reliable method for mitigating the large ELMs of type 1 ELMy-H-modes by modifying the edge transport. Other topics of interest for fusion reactors are the option of distributing the heat to a large area and optimizing methods for heat and particle exhaust, or the understanding of the transport around tearing mode instabilities. The cluster of papers in this issue of Nuclear Fusion is a successor to the 2004 special issue (Nuclear Fusion 44 S1-122) intended to raise interest in the subject. The contents of this present issue are based on presentations at the Second Workshop on Stochasticity in Fusion Plasmas (SFP) held in Juelich, Germany, 15-17 March 2005. The SFP workshops have been stimulated by the installation of the Dynamic Ergodic Divertor (DED) in the TEXTOR tokamak. It has attracted colleagues working on various plasma configurations such as tokamaks, stellarators or reversed field pinches. The workshop was originally devoted to phenomena on the plasma edge but it has been broadened to transport questions over the whole plasma cross-section. It is a meeting place for experimental and theoretical working groups. The next workshop is planned for February/March 2007 in Juelich, Germany. For details see http://www.fz-juelich.de/sfp/. The content of the workshop is summarized in the following conference summary (K.H. Finken 2006 Nuclear Fusion 46 S107-112). At the workshop experimental results on the plasma transport resulting from ergodization in various devices were presented. Highlights were the results from DIII-D on the mitigation of ELMs (see also T.E. Evans et al 2005 Nuclear Fusion 45 595). Theoretical work was focused around the topics
Stochastic approach to equilibrium and nonequilibrium thermodynamics.
Tomé, Tânia; de Oliveira, Mário J
2015-04-01
We develop the stochastic approach to thermodynamics based on stochastic dynamics, which can be discrete (master equation) and continuous (Fokker-Planck equation), and on two assumptions concerning entropy. The first is the definition of entropy itself and the second the definition of entropy production rate, which is non-negative and vanishes in thermodynamic equilibrium. Based on these assumptions, we study interacting systems with many degrees of freedom in equilibrium or out of thermodynamic equilibrium and how the macroscopic laws are derived from the stochastic dynamics. These studies include the quasiequilibrium processes; the convexity of the equilibrium surface; the monotonic time behavior of thermodynamic potentials, including entropy; the bilinear form of the entropy production rate; the Onsager coefficients and reciprocal relations; and the nonequilibrium steady states of chemical reactions. PMID:25974471
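The two entropy assumptions can be made concrete for a small master equation. The sketch below is a generic illustration with arbitrary rates, not material from the paper: it computes the stationary entropy production rate of a three-state continuous-time chain, which is strictly positive for a biased cycle (a nonequilibrium steady state) and vanishes when the rates satisfy detailed balance.

```python
import numpy as np

def stationary(W):
    """Stationary distribution of a CTMC with rate matrix W
    (W[i, j] = transition rate i -> j, zero diagonal)."""
    n = W.shape[0]
    Q = W - np.diag(W.sum(axis=1))          # generator matrix
    A = np.vstack([Q.T[:-1], np.ones(n)])   # p Q = 0 plus normalization
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

def entropy_production(W):
    """Schnakenberg-type entropy production rate at stationarity:
    0.5 * sum_ij (p_i w_ij - p_j w_ji) * ln[(p_i w_ij)/(p_j w_ji)] >= 0."""
    p = stationary(W)
    n = W.shape[0]
    sigma = 0.0
    for i in range(n):
        for j in range(n):
            if i != j and W[i, j] > 0 and W[j, i] > 0:
                flux = p[i] * W[i, j] - p[j] * W[j, i]
                sigma += 0.5 * flux * np.log((p[i] * W[i, j]) / (p[j] * W[j, i]))
    return sigma

# Three-state cycle biased clockwise: nonequilibrium steady state.
W_neq = np.array([[0.0, 2.0, 0.5],
                  [0.5, 0.0, 2.0],
                  [2.0, 0.5, 0.0]])
# Symmetric rates satisfy detailed balance: equilibrium, zero production.
W_eq = np.array([[0.0, 1.0, 2.0],
                 [1.0, 0.0, 3.0],
                 [2.0, 3.0, 0.0]])
sigma_neq = entropy_production(W_neq)
sigma_eq = entropy_production(W_eq)
```

Each summand is of the form (x - y)·ln(x/y) and hence non-negative, which is exactly the second assumption of the approach: the production rate is non-negative and vanishes only when every flux balances.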
Stochastic Differential Equation of Earthquakes Series
NASA Astrophysics Data System (ADS)
Mariani, Maria C.; Tweneboah, Osei K.; Gonzalez-Huizar, Hector; Serpa, Laura
2016-07-01
This work is devoted to modeling earthquake time series. We propose a stochastic differential equation based on the superposition of independent Ornstein-Uhlenbeck processes driven by a Γ(α, β) process. Superposition of independent Γ(α, β) Ornstein-Uhlenbeck processes offers analytic flexibility and provides a class of continuous-time processes capable of exhibiting long-memory behavior. The stochastic differential equation is applied to the study of earthquakes by fitting the superposed Γ(α, β) Ornstein-Uhlenbeck model to earthquake sequences in South America containing very large events (Mw ≥ 8). We obtained a very good fit of the observed magnitudes of the earthquakes with the stochastic differential equation, which supports the use of this methodology for the study of earthquake sequences.
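The superposition idea can be sketched with a crude simulation. This is not the authors' fitting procedure: the gamma-increment approximation of the background driving Lévy process and all parameter values below are assumptions for illustration. Summing OU components with well-separated decay rates yields an autocorrelation that is a mixture of exponentials, which mimics long memory, and the paths stay non-negative because the gamma jumps are.

```python
import numpy as np

def gamma_ou_path(lam, shape, rate, dt, n_steps, rng, x0=0.0):
    """Crude Euler-type path of an OU process driven by a gamma subordinator:
    X[k+1] = exp(-lam*dt) * X[k] + G_k,  G_k ~ Gamma(shape*lam*dt, 1/rate).
    This is an approximation, not the exact BDLP construction."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    decay = np.exp(-lam * dt)
    jumps = rng.gamma(shape * lam * dt, 1.0 / rate, n_steps)
    for k in range(n_steps):
        x[k + 1] = decay * x[k] + jumps[k]
    return x

rng = np.random.default_rng(3)
dt, n = 0.01, 20000
# Superpose components with widely separated decay rates lam: the mixture
# of exponential autocorrelations imitates long-range dependence.
lams = [0.1, 1.0, 10.0]
X = sum(gamma_ou_path(lam, 2.0, 1.0, dt, n, rng) for lam in lams)
```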
Modeling stochasticity in biochemical reaction networks
NASA Astrophysics Data System (ADS)
Constantino, P. H.; Vlysidis, M.; Smadbeck, P.; Kaznessis, Y. N.
2016-03-01
Small biomolecular systems are inherently stochastic. Indeed, fluctuations of molecular species are substantial in living organisms and may result in significant variation in cellular phenotypes. The chemical master equation (CME) is the most detailed mathematical model that can describe stochastic behaviors. However, because of its complexity the CME has been solved for only a few very small reaction networks. As a result, the contribution of CME-based approaches to biology has been very limited. In this review we discuss the approach of solving the CME by a set of differential equations of probability moments, called moment equations. We present different approaches to produce and to solve these equations, emphasizing the use of factorial moments and the zero information entropy closure scheme. We also provide information on the stability analysis of stochastic systems. Finally, we speculate on the utility of CME-based modeling formalisms, especially in the context of synthetic biology efforts.
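For affine propensities the moment equations close exactly, which can be checked on the simplest production-degradation network (an illustrative example, not one of the networks treated in the review). With birth rate k and per-molecule degradation rate g, the CME gives d<n>/dt = k - g<n> and d<n^2>/dt = k + (2k + g)<n> - 2g<n^2>; the steady state is Poisson, so the Fano factor tends to one.

```python
# Production-degradation network: 0 -> n at rate k, n -> 0 at rate g*n.
# Exact (closed) moment equations of the CME, integrated by forward Euler:
#   d<n>/dt   = k - g*<n>
#   d<n^2>/dt = k + (2*k + g)*<n> - 2*g*<n^2>
k, g = 10.0, 1.0
dt, T = 1e-3, 20.0
m1, m2 = 0.0, 0.0          # start from an empty system
for _ in range(int(T / dt)):
    dm1 = k - g * m1
    dm2 = k + (2 * k + g) * m1 - 2 * g * m2
    m1 += dt * dm1
    m2 += dt * dm2
var = m2 - m1 ** 2
fano = var / m1            # Poisson steady state: variance = mean, Fano ~ 1
```

For nonlinear propensities the moment hierarchy does not close, which is where the factorial-moment and entropy-closure schemes discussed in the review come in.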
Stochastic resonance in geomagnetic polarity reversals.
Consolini, Giuseppe; De Michelis, Paola
2003-02-01
Among noise-induced cooperative phenomena, stochastic resonance plays a particularly important role. In this paper we offer evidence that geomagnetic polarity reversals may be due to a stochastic resonance process. In detail, analyzing the distribution function P(τ) of polarity residence times (chrons), we find evidence of a stochastic synchronization process, i.e., a series of peaks in P(τ) at T(n) ≈ (2n+1)T(Ω)/2 with n=0,1,...,j and T(Ω) ≈ 0.1 Myr. This result is discussed in connection with both the typical time scale of the variation of the Earth's orbital eccentricity and recent results on the typical time scale of long-term climatic variation. PMID:12633403
Stochastic Averaging of Duhem Hysteretic Systems
NASA Astrophysics Data System (ADS)
YING, Z. G.; ZHU, W. Q.; NI, Y. Q.; KO, J. M.
2002-06-01
The response of a Duhem hysteretic system to externally and/or parametrically non-white random excitations is investigated by using the stochastic averaging method. A class of integrable Duhem hysteresis models covering many existing hysteresis models is identified, and the potential energy and dissipated energy of the Duhem hysteretic component are determined. The Duhem hysteretic system under random excitations is replaced equivalently by a non-hysteretic non-linear random system. The averaged Itô stochastic differential equation for the total energy is derived, and the Fokker-Planck-Kolmogorov equation associated with the averaged Itô equation is solved to yield the stationary probability density of the total energy, from which the statistics of the system response can be evaluated. It is observed that the numerical results obtained using the stochastic averaging method are in good agreement with those from digital simulation.
Derivatives of the Stochastic Growth Rate
Steinsaltz, David; Tuljapurkar, Shripad; Horvitz, Carol
2011-01-01
We consider stochastic matrix models for populations driven by random environments which form a Markov chain. The top Lyapunov exponent a, which describes the long-term growth rate, depends smoothly on the demographic parameters (represented as matrix entries) and on the parameters that define the stochastic matrix of the driving Markov chain. The derivatives of a, the “stochastic elasticities”, with respect to changes in the demographic parameters were derived by Tuljapurkar (1990). These results are here extended to a formula for the derivatives with respect to changes in the Markov chain driving the environments. We supplement these formulas with rigorous bounds on computational estimation errors, and with rigorous derivations of both the new and the old formulas. PMID:21463645
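The growth rate a itself can be estimated by the standard normalized-product simulation. This is a generic sketch with made-up matrices and transition probabilities, not the paper's elasticity formulas: a hypothetical two-stage population alternates between a "good" and a "bad" environment according to a Markov chain.

```python
import numpy as np

def stochastic_growth_rate(mats, P, n_steps, rng):
    """Top Lyapunov exponent of a random product of nonnegative matrices,
    with the environment switching by Markov transition matrix P."""
    env = 0
    n = np.ones(mats[0].shape[0])
    n /= n.sum()
    log_growth = 0.0
    for _ in range(n_steps):
        n = mats[env] @ n
        s = n.sum()                 # one-step growth of total population
        log_growth += np.log(s)
        n /= s                      # renormalize to avoid overflow
        env = rng.choice(len(mats), p=P[env])
    return log_growth / n_steps

A = np.array([[0.0, 2.0], [0.7, 0.3]])   # good year (hypothetical Leslie matrix)
B = np.array([[0.0, 0.5], [0.4, 0.1]])   # bad year
P = np.array([[0.7, 0.3], [0.5, 0.5]])
rng = np.random.default_rng(11)
a = stochastic_growth_rate([A, B], P, 50000, rng)

# Sanity check: with a single environment the exponent reduces to the log
# of the dominant eigenvalue of that matrix.
a_single = stochastic_growth_rate([A], np.array([[1.0]]), 5000, rng)
lam = max(abs(np.linalg.eigvals(A)))
```

Differentiating a with respect to matrix entries or to the entries of P, which is what the paper does analytically, would otherwise require rerunning such simulations with perturbed parameters.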
Regeneration of stochastic processes: an inverse method
NASA Astrophysics Data System (ADS)
Ghasemi, F.; Peinke, J.; Sahimi, M.; Rahimi Tabar, M. R.
2005-10-01
We propose a novel inverse method that utilizes a set of data to construct a simple equation that governs the stochastic process for which the data have been measured, hence enabling us to reconstruct the stochastic process. As an example, we analyze the stochasticity in the beat-to-beat fluctuations in the heart rates of healthy subjects as well as those with congestive heart failure. The inverse method distinguishes the two classes of subjects in terms of drift and diffusion coefficients, which behave completely differently for the two classes, and may therefore provide a novel diagnostic tool for detecting congestive heart failure even at the early stages of the disease.
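The reconstruction idea can be sketched on synthetic data. What follows is the generic conditional-moment (Kramers-Moyal) estimator, not the paper's exact procedure, and the Ornstein-Uhlenbeck series stands in for measured heart-rate data: generate a series with known drift and diffusion, then recover both coefficients from conditional moments of the increments.

```python
import numpy as np

# Synthetic "measured" series: an OU process dx = -a*x dt + sqrt(2*D) dW.
a_true, D_true, dt, n = 1.0, 0.5, 0.01, 200000
rng = np.random.default_rng(5)
x = np.empty(n)
x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(2 * D_true * dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - a_true * x[i] * dt + noise[i]

# Conditional-moment estimates on bins of x:
#   D1(x) = <dx | x> / dt        (drift)
#   D2(x) = <dx^2 | x> / (2*dt)  (diffusion)
dx = np.diff(x)
bins = np.linspace(-1.5, 1.5, 21)
idx = np.digitize(x[:-1], bins)
centers, d1_vals, d2_vals = [], [], []
for b in range(1, len(bins)):
    sel = idx == b
    if sel.sum() > 500:                     # only well-populated bins
        centers.append(0.5 * (bins[b - 1] + bins[b]))
        d1_vals.append(dx[sel].mean() / dt)
        d2_vals.append((dx[sel] ** 2).mean() / (2 * dt))
a_hat = -np.polyfit(centers, d1_vals, 1)[0]  # slope of D1(x) ~ -a*x
D_hat = float(np.mean(d2_vals))
```

In the paper's application the shape of the estimated drift and diffusion, rather than two scalar fits, is what separates healthy subjects from those with congestive heart failure.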
Structural factoring approach for analyzing stochastic networks
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Shier, Douglas R.
1991-01-01
The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
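For a network small enough to enumerate, the exact distribution can be obtained by conditioning on every joint realization of the arc lengths. The sketch below is brute-force conditioning rather than the paper's factoring algorithm, which exists precisely to prune this enumeration; the three-node network and its discrete length distributions are made up for illustration.

```python
import heapq
import itertools
from collections import defaultdict

def dijkstra(nodes, lengths, src, dst):
    """Shortest s-t path length for one fixed realization of arc lengths."""
    dist = {v: float("inf") for v in nodes}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for (a, b), w in lengths.items():
            if a == u and d + w < dist[b]:
                dist[b] = d + w
                heapq.heappush(pq, (d + w, b))
    return dist[dst]

# Each arc length is an independent discrete random variable (value: prob).
nodes = ["s", "a", "t"]
arc_pmf = {
    ("s", "a"): {1: 0.5, 3: 0.5},
    ("a", "t"): {1: 0.5, 3: 0.5},
    ("s", "t"): {3: 0.5, 5: 0.5},
}
arcs = list(arc_pmf)
dist_pmf = defaultdict(float)
# Condition on every joint realization and accumulate its probability.
for combo in itertools.product(*(list(arc_pmf[e].items()) for e in arcs)):
    lengths = {e: vp[0] for e, vp in zip(arcs, combo)}
    prob = 1.0
    for vp in combo:
        prob *= vp[1]
    dist_pmf[dijkstra(nodes, lengths, "s", "t")] += prob
```

Complete enumeration costs the product of the arc supports (here 2^3 = 8 cases); the conditional-factoring algorithm in the paper decomposes the network so that far fewer subproblems need to be solved.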
Computational stochastic model of ions implantation
Zmievskaya, Galina I.; Bondareva, Anna L.; Levchenko, Tatiana V.; Maino, Giuseppe
2015-03-10
Implantation of an ion flux into a crystal leads to a phase transition (PT) of the first kind. Damage to the lattice is associated with the clustering of vacancies and gaseous bubbles, as well as with their Brownian motion. A system of Itô stochastic differential equations (SDEs) for the evolution of the stochastic dynamical variables corresponds to a superposition of Wiener processes. Kinetic equations (KEs) in partial derivatives, of Kolmogorov-Feller and Einstein-Smoluchowski type, were formulated for the nucleation of weakly soluble gases in the lattice. According to the theory, the coefficients of the stochastic and kinetic equations are uniquely related. Radiation-stimulated phase transitions are characterized by kinetic distribution functions (DFs) of implanted clusters versus their sizes and the depth of gas penetration into the lattice. As an example, macroscopic kinetic parameters such as the porosity and stress induced in thin metal/dielectric layers by Xe++ irradiation are calculated. Predictions of porosity, important for validating the accumulated stresses in surfaces, can be applied in the restoration of cultural heritage objects.
NASA Technical Reports Server (NTRS)
Sadunas, J. A.; French, E. P.; Sexton, H.
1973-01-01
A 1/25 scale model S-2 stage base region thermal environment test is presented. Analytical results are included which reflect the effects of engine operating conditions, model scale, and turbo-pump exhaust gas injection on the base region thermal environment. Comparisons are made between full scale flight data, model test data, and analytical results. The report is prepared in two volumes. The description of the analytical predictions and comparisons with flight data is presented. A tabulation of the test data is provided.
Pricing foreign equity option with stochastic volatility
NASA Astrophysics Data System (ADS)
Sun, Qi; Xu, Weidong
2015-11-01
In this paper we propose a general foreign equity option pricing framework that unifies the vast foreign equity option pricing literature and incorporates stochastic volatility into foreign equity option pricing. Under our framework, time-changed Lévy processes are used to model the price of the underlying assets of foreign equity options, and closed-form pricing formulas are obtained through the characteristic function methodology. Numerical tests indicate that stochastic volatility has a dramatic effect on foreign equity option prices.
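A degenerate special case of such a framework makes a useful sanity check. With constant volatilities and correlated geometric Brownian motions (no stochastic volatility, no time change), the price of a foreign equity option struck in domestic currency collapses to Black-Scholes on the product X·S, since the converted asset is a domestic tradable. All parameter values below are hypothetical, and the quanto drift adjustment on S is standard for this lognormal setting.

```python
import math
import numpy as np

def bs_call(s0, k, r, sigma, T):
    """Black-Scholes price of a call on a lognormal asset with spot s0."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return s0 * N(d1) - k * math.exp(-r * T) * N(d2)

# Hypothetical market data: S foreign stock, X exchange rate, K domestic strike.
S0, X0, K, T = 100.0, 1.0, 100.0, 1.0
r_d, r_f = 0.02, 0.04
sig_s, sig_x, rho = 0.20, 0.10, 0.30

# Monte Carlo under the domestic risk-neutral measure:
# S carries the quanto drift adjustment -rho*sig_s*sig_x, X drifts at r_d - r_f.
rng = np.random.default_rng(9)
n = 400_000
z1 = rng.standard_normal(n)
z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
ST = S0 * np.exp((r_f - rho * sig_s * sig_x - 0.5 * sig_s ** 2) * T
                 + sig_s * math.sqrt(T) * z1)
XT = X0 * np.exp((r_d - r_f - 0.5 * sig_x ** 2) * T + sig_x * math.sqrt(T) * z2)
price_mc = math.exp(-r_d * T) * float(np.mean(np.maximum(XT * ST - K, 0.0)))

# Closed form: Black-Scholes on X*S with the total volatility of the product.
sig_tot = math.sqrt(sig_s ** 2 + sig_x ** 2 + 2.0 * rho * sig_s * sig_x)
price_bs = bs_call(X0 * S0, K, r_d, sig_tot, T)
```

The time-changed Lévy framework of the paper generalizes exactly this picture, replacing the constant total volatility by a stochastic one and the Gaussian characteristic function by that of the time-changed process.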