Effect of misspecification of gene frequency on the two-point LOD score.
Pal, D K; Durner, M; Greenberg, D A
2001-11-01
In this study, we used computer simulation of simple and complex models to ask: (1) What is the penalty in evidence for linkage when the assumed gene frequency is far from the true gene frequency? (2) If the gene frequency and inheritance model assumed in the analysis are misspecified, can this lead to a higher maximum LOD score than that obtained under the true parameters? Linkage data simulated under simple dominant, recessive, dominant and recessive with reduced penetrance, and additive models were analysed assuming a single locus with both the correct and incorrect dominance model and a range of different gene frequencies. We found that misspecifying the analysis gene frequency led to little penalty in maximum LOD score in all models examined, especially if the assumed gene frequency was lower than the generating one. Analysing linkage data assuming a gene frequency of the order of 0.01 for a dominant gene, and 0.1 for a recessive gene, appears to be a reasonable tactic in the majority of realistic situations, because underestimating the gene frequency, even when the true gene frequency is high, leads to little penalty in the LOD score.
Modeling Age-Related Differences in Immediate Memory Using SIMPLE
ERIC Educational Resources Information Center
Surprenant, Aimee M.; Neath, Ian; Brown, Gordon D. A.
2006-01-01
In the SIMPLE model (Scale Invariant Memory and Perceptual Learning), performance on memory tasks is determined by the locations of items in multidimensional space, and better performance is associated with having fewer close neighbors. Unlike most previous simulations with SIMPLE, the ones reported here used measured, rather than assumed,…
Reliability Analysis of the Gradual Degradation of Semiconductor Devices.
1983-07-20
under the heading of linear models or linear statistical models.[3,4] We have not used this material in this report. Assuming catastrophic failure when...assuming a catastrophic model. In this treatment we first modify our system loss formula and then proceed to the actual analysis. II. ANALYSIS OF...[table of unit indices and failure times T1, T2, ..., Tn] ...and are easily analyzed by simple linear regression. Since we have assumed a log normal/Arrhenius activation
The power to detect linkage in complex disease by means of simple LOD-score analyses.
Greenberg, D A; Abreu, P; Hodge, S E
1998-01-01
Maximum-likelihood analysis (via LOD score) provides the most powerful method for finding linkage when the mode of inheritance (MOI) is known. However, because one must assume an MOI, the application of LOD-score analysis to complex disease has been questioned. Although it is known that one can legitimately maximize the maximum LOD score with respect to genetic parameters, this approach raises three concerns: (1) multiple testing, (2) effect on power to detect linkage, and (3) adequacy of the approximate MOI for the true MOI. We evaluated the power of LOD scores to detect linkage when the true MOI was complex but a LOD score analysis assumed simple models. We simulated data from 14 different genetic models, including dominant and recessive at high (80%) and low (20%) penetrances, intermediate models, and several additive two-locus models. We calculated LOD scores by assuming two simple models, dominant and recessive, each with 50% penetrance, then took the higher of the two LOD scores as the raw test statistic and corrected for multiple tests. We call this test statistic "MMLS-C." We found that the ELODs for MMLS-C are >=80% of the ELOD under the true model when the ELOD for the true model is >=3. Similarly, the power to reach a given LOD score was usually >=80% that of the true model, when the power under the true model was >=60%. These results underscore that a critical factor in LOD-score analysis is the MOI at the linked locus, not that of the disease or trait per se. Thus, a limited set of simple genetic models in LOD-score analysis can work well in testing for linkage. PMID:9718328
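An illustrative Python sketch of how such a corrected statistic might be computed, assuming the standard 0.5χ²(0) + 0.5χ²(1) asymptotic null for a single lod score and a Bonferroni-style correction for taking the maximum of two analyses; the function names and the independence approximation are ours, not the paper's (the two analyses are positively correlated, so the correction is an upper bound).

    import math
    from scipy.stats import chi2

    def lod_to_p(lod):
        # Asymptotic null 0.5*chi2(0) + 0.5*chi2(1): one lod unit equals
        # 2*ln(10) chi-square units; the point mass at 0 contributes nothing.
        x = 2.0 * math.log(10.0) * lod
        return 0.5 * chi2.sf(x, df=1)

    def mmls_c_p(lod_dominant, lod_recessive):
        # Take the larger of the two lod scores, then apply a two-test
        # Bonferroni-style correction.
        raw = max(lod_dominant, lod_recessive)
        return min(1.0, 2.0 * lod_to_p(raw))

    print(mmls_c_p(3.2, 1.1))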
SUSTAIN: A Network Model of Category Learning
ERIC Educational Resources Information Center
Love, Bradley C.; Medin, Douglas L.; Gureckis, Todd M.
2004-01-01
SUSTAIN (Supervised and Unsupervised STratified Adaptive Incremental Network) is a model of how humans learn categories from examples. SUSTAIN initially assumes a simple category structure. If simple solutions prove inadequate and SUSTAIN is confronted with a surprising event (e.g., it is told that a bat is a mammal instead of a bird), SUSTAIN…
Accounting for inherent variability of growth in microbial risk assessment.
Marks, H M; Coleman, M E
2005-04-15
Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. In order to determine the possible risks that occur when there are small numbers of cells, stochastic models of growth are needed that would capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initial small number of cells assumed to be transforming from a stationary to an exponential phase. Two basic sets of microbial assumptions are considered: serial, where it is assumed that cells transform through a lag phase before entering the exponential phase of growth; and parallel, where it is assumed that lag and exponential phases develop in parallel. The model is based on first determining the distribution of the time when growth commences, and then modelling the conditional distribution of the number of cells. For the latter distribution, it is found that a Weibull distribution provides a simple approximation to the conditional distribution of the relative growth, so that the model developed in this paper can be easily implemented in risk assessments using commercial software packages.
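A minimal Monte Carlo sketch of the serial assumption (lag phase, then exponential growth), with deterministic post-lag growth and an assumed exponential lag distribution standing in for the paper's fully stochastic treatment; all parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    n0, mu, t = 5, 0.8, 6.0        # initial cells, specific growth rate (1/h), time (h)
    mean_lag = 2.0                 # mean of the assumed exponential lag distribution (h)

    def replicate():
        lags = rng.exponential(mean_lag, n0)   # each cell draws its own lag
        grow = np.clip(t - lags, 0.0, None)    # time each cell spends growing
        return np.exp(mu * grow).sum()         # deterministic growth after the lag

    counts = np.array([replicate() for _ in range(10_000)])
    rel_growth = counts / n0                   # relative growth across replicate trials
    print(rel_growth.mean(), np.percentile(rel_growth, [5, 50, 95]))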
ERIC Educational Resources Information Center
Savage, Robert; Burgos, Giovani; Wood, Eileen; Piquette, Noella
2015-01-01
The Simple View of Reading (SVR) describes Reading Comprehension as the product of distinct child-level variance in decoding (D) and linguistic comprehension (LC) component abilities. When used as a model for educational policy, distinct classroom-level influences of each of the components of the SVR model have been assumed, but have not yet been…
A Simple Probabilistic Combat Model
2016-06-13
1. INTRODUCTION The Lanchester combat model [1] is a simple way to assess the effects of quantity and quality...case model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons...since the initial condition is very close to the break-even line. What is more interesting is that the probability density tends to concentrate at
Accounting For Gains And Orientations In Polarimetric SAR
NASA Technical Reports Server (NTRS)
Freeman, Anthony
1992-01-01
Calibration method accounts for characteristics of real radar equipment invalidating standard 2 X 2 complex-amplitude R (receiving) and T (transmitting) matrices. Overall gain in each combination of transmitting and receiving channels assumed different even when only one transmitter and one receiver used. One characterizes departure of polarimetric Synthetic Aperture Radar (SAR) system from simple 2 X 2 model in terms of single parameter used to transform measurements into format compatible with simple 2 X 2 model. Data processed by applicable one of several prior methods based on simple model.
Comparing an annual and daily time-step model for predicting field-scale phosphorus loss
USDA-ARS?s Scientific Manuscript database
Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably, ranging from simple empirically based annual time-step models to more complex process-based daily time-step models. While better accuracy is often assumed with more…
Ridge Regression for Interactive Models.
ERIC Educational Resources Information Center
Tate, Richard L.
1988-01-01
An exploratory study of the value of ridge regression for interactive models is reported. Assuming that the linear terms in a simple interactive model are centered to eliminate non-essential multicollinearity, a variety of common models, representing both ordinal and disordinal interactions, are shown to have "orientations" that are…
Calibration of Response Data Using MIRT Models with Simple and Mixed Structures
ERIC Educational Resources Information Center
Zhang, Jinming
2012-01-01
It is common to assume during a statistical analysis of a multiscale assessment that the assessment is composed of several unidimensional subtests or that it has simple structure. Under this assumption, the unidimensional and multidimensional approaches can be used to estimate item parameters. These two approaches are equivalent in parameter…
NASA Technical Reports Server (NTRS)
Palusinski, O. A.; Allgyer, T. T.
1979-01-01
The elimination of Ampholine from the system by establishing the pH gradient with simple ampholytes is proposed. A mathematical model was exercised at the level of the two-component system by using values for mobilities, diffusion coefficients, and dissociation constants representative of glutamic acid and histidine. The constants assumed in the calculations are reported. The predictions of the model and the computer simulations of isoelectric focusing experiments are of direct importance for obtaining Ampholine-free, stable pH gradients.
NASA Astrophysics Data System (ADS)
Vincenzo, F.; Matteucci, F.; Spitoni, E.
2017-04-01
We present a theoretical method for solving the chemical evolution of galaxies by assuming an instantaneous recycling approximation for chemical elements restored by massive stars and the delay time distribution formalism for delayed chemical enrichment by Type Ia Supernovae. The galaxy gas mass assembly history, together with the assumed stellar yields and initial mass function, represents the starting point of this method. We derive a simple and general equation that closely relates the Laplace transforms of the galaxy gas accretion history and star formation history, and that can be used to simplify the problem of retrieving these quantities in galaxy evolution models assuming a linear Schmidt-Kennicutt law. We find that - once the galaxy star formation history has been reconstructed from our assumptions - the differential equation for the evolution of the chemical element X can be suitably solved with classical methods. We apply our model to reproduce the [O/Fe] and [Si/Fe] versus [Fe/H] chemical abundance patterns as observed at the solar neighbourhood by assuming a decaying exponential infall rate of gas and different delay time distributions for Type Ia Supernovae; we also explore the effect of assuming a non-linear Schmidt-Kennicutt law, with the index of the power law being k = 1.4. Although approximate, we conclude that our model with the single-degenerate scenario for Type Ia Supernovae provides the best agreement with the observed set of data. Our method can also be used by other complementary galaxy stellar population synthesis models to predict the chemical evolution of galaxies.
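A Python sketch of the Laplace-transform relation under the stated assumptions: instantaneous recycling, a linear Schmidt-Kennicutt law ψ = νM_gas, and a gas balance dM_gas/dt = I(t) − (1 − R)ψ(t) with M_gas(0) = 0. The exponential infall matches the paper's assumed form; the values of ν, R, and the infall timescale τ are illustrative placeholders.

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    A, tau, nu, R = 1, 2, 1, sp.Rational(3, 10)   # illustrative constants

    infall = A * sp.exp(-t / tau)                 # assumed exponential infall rate
    I_s = sp.laplace_transform(infall, t, s, noconds=True)

    # In Laplace space the gas balance gives psi(s) = nu*I(s) / (s + (1 - R)*nu),
    # so the star formation history follows by inverse transform.
    psi_s = nu * I_s / (s + (1 - R) * nu)
    psi_t = sp.inverse_laplace_transform(psi_s, s, t)
    print(sp.simplify(psi_t))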
Erev, Ido; Ert, Eyal; Plonsky, Ori; Cohen, Doron; Cohen, Oded
2017-07-01
Experimental studies of choice behavior document distinct, and sometimes contradictory, deviations from maximization. For example, people tend to overweight rare events in 1-shot decisions under risk, and to exhibit the opposite bias when they rely on past experience. The common explanations of these results assume that the contradicting anomalies reflect situation-specific processes that involve the weighting of subjective values and the use of simple heuristics. The current article analyzes 14 choice anomalies that have been described by different models, including the Allais, St. Petersburg, and Ellsberg paradoxes, and the reflection effect. Next, it uses a choice prediction competition methodology to clarify the interaction between the different anomalies. It focuses on decisions under risk (known payoff distributions) and under ambiguity (unknown probabilities), with and without feedback concerning the outcomes of past choices. The results demonstrate that it is not necessary to assume situation-specific processes. The distinct anomalies can be captured by assuming high sensitivity to the expected return and 4 additional tendencies: pessimism, bias toward equal weighting, sensitivity to payoff sign, and an effort to minimize the probability of immediate regret. Importantly, feedback increases sensitivity to probability of regret. Simple abstractions of these assumptions, variants of the model Best Estimate and Sampling Tools (BEAST), allow surprisingly accurate ex ante predictions of behavior. Unlike the popular models, BEAST does not assume subjective weighting functions or cognitive shortcuts. Rather, it assumes the use of sampling tools and reliance on small samples, in addition to the estimation of the expected values.
NASA Technical Reports Server (NTRS)
Economos, A. C.; Miquel, J.
1979-01-01
A simple physiological model of mortality kinetics is used to assess the intuitive concept that the aging rates of populations are proportional to their mortality rates. It is assumed that the vitality of an individual can be expressed as a simple summation of the weighted functional capacities of its organs and homeostatic systems that are indispensable for survival. It is shown that the mortality kinetics of a population can be derived by a linear transformation of the frequency distribution of vitality, assuming a uniform constant rate of decline of the physiological functions. A simple comparison of two populations is not possible when they have different vitality frequency distributions. Analysis of the data using the model suggests that the differences in decline of survivorship with age between the military pilot population, a medically insured population, and the control population can be accounted for by the effect of physical selection on the vitality frequency distribution of the screened populations.
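A minimal Python sketch of the model's central idea, assuming a normal initial vitality frequency distribution and a uniform constant rate of decline; death is taken to occur when vitality reaches zero, so survivorship is a linear transformation of the vitality distribution. All numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    v0 = rng.normal(loc=1.0, scale=0.15, size=100_000)   # initial vitality distribution
    v0 = v0[v0 > 0]                                      # discard non-viable draws
    rate = 0.012                                         # uniform decline per year

    # An individual survives to age a iff v0 - rate*a > 0.
    for age in (50, 70, 90):
        print(age, (v0 > rate * age).mean())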
Ambient Scattering from Ring-Symmetric Spacecraft Exhaust Plume.
1987-04-01
spacecraft is shielded from ambient scattering by its own plume. Assuming hard-sphere collisions, the first-collision model is given by a simple...may change upon replacing the hard-sphere approximation by a more realistic collision model. A possible modification of spacecraft charging by the
A simple inertial model for Neptune's zonal circulation
NASA Technical Reports Server (NTRS)
Allison, Michael; Lumetta, James T.
1990-01-01
Voyager imaging observations of zonal cloud-tracked winds on Neptune revealed a strongly subrotational equatorial jet with a speed approaching 500 m/s and generally decreasing retrograde motion toward the poles. The wind data are interpreted with a speculative but revealingly simple model based on steady gradient flow balance and an assumed global homogenization of potential vorticity for shallow layer motion. The prescribed model flow profile relates the equatorial velocity to the mid-latitude shear, in reasonable agreement with the available data, and implies a global horizontal deformation scale L_D of about 3000 km.
Finite Feedback Cycling in Structural Equation Models
ERIC Educational Resources Information Center
Hayduk, Leslie A.
2009-01-01
In models containing reciprocal effects, or longer causal loops, the usual effect estimates assume that any effect touching a loop initiates an infinite cycling of effects around that loop. The real world, in contrast, might permit only finite feedback cycles. I use a simple hypothetical model to demonstrate that if the world permits only a few…
Isothermal Circumstellar Dust Shell Model for Teaching
ERIC Educational Resources Information Center
Robinson, G.; Towers, I. N.; Jovanoski, Z.
2009-01-01
We introduce a model of radiative transfer in circumstellar dust shells. By assuming that the shell is both isothermal and its thickness is small compared to its radius, the model is simple enough for students to grasp and yet still provides a quantitative description of the relevant physical features. The isothermal model can be used in a…
A Positive Stigma for Child Labor?
ERIC Educational Resources Information Center
Patrinos, Harry Anthony; Shafiq, M. Najeeb
2008-01-01
We introduce a simple empirical model that assumes a positive stigma (or norm) towards child labor that is common in some developing countries. We then illustrate our positive stigma model using data from Guatemala. Controlling for several child- and household-level characteristics, we use two instruments for measuring stigma: a child's indigenous…
An Equilibrium Flow Model of a University Campus.
ERIC Educational Resources Information Center
Oliver, Robert M.; Hopkins, David S. P.
This paper develops a simple deterministic model that relates student admissions and enrollments to the final demand for educated students. It includes the effects of dropout rates and student-teacher ratios on student enrollments and faculty staffing levels. Certain technological requirements are assumed known and given. These, as well as the…
The Behavioral Economics of Choice and Interval Timing
Jozefowiez, J.; Staddon, J. E. R.; Cerutti, D. T.
2009-01-01
We propose a simple behavioral economic model (BEM) describing how reinforcement and interval timing interact. The model assumes a Weber-law-compliant logarithmic representation of time. Associated with each represented time value are the payoffs that have been obtained for each possible response. At a given real time, the response with the highest payoff is emitted. The model accounts for a wide range of data from procedures such as simple bisection, metacognition in animals, economic effects in free-operant psychophysical procedures and paradoxical choice in double-bisection procedures. Although it assumes logarithmic time representation, it can also account for data from the time-left procedure usually cited in support of linear time representation. It encounters some difficulties in complex free-operant choice procedures, such as concurrent mixed fixed-interval schedules as well as some of the data on double bisection, that may involve additional processes. Overall, BEM provides a theoretical framework for understanding how reinforcement and interval timing work together to determine choice between temporally differentiated reinforcers. PMID:19618985
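A minimal Python sketch of the model's logarithmic (Weber-law) time code applied to the simple bisection task, with constant-variance Gaussian noise on log duration and choice by proximity to the learned anchors; the full BEM's payoff-learning stage is omitted, and all parameters are illustrative.

    import numpy as np
    from scipy.stats import norm

    t_short, t_long, sigma = 2.0, 8.0, 0.35      # anchor durations (s), log-scale noise
    probes = np.linspace(2.0, 8.0, 13)

    # A probe t is encoded as log(t) + Gaussian noise and called "long"
    # when the encoding is nearer log(t_long) than log(t_short).
    criterion = 0.5 * (np.log(t_short) + np.log(t_long))
    p_long = norm.sf(criterion, loc=np.log(probes), scale=sigma)
    for t, p in zip(probes, p_long):
        print(f"{t:4.1f}s  P(long) = {p:.3f}")   # P = 0.5 near sqrt(2*8) = 4 s

Note that the indifference point falls at the geometric mean of the anchors, a signature of logarithmic time representation.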
Malaria transmission rates estimated from serological data.
Burattini, M. N.; Massad, E.; Coutinho, F. A.
1993-01-01
A mathematical model was used to estimate malaria transmission rates based on serological data. The model is minimally stochastic and assumes an age-dependent force of infection for malaria. The estimated transmission rates were applied to a simple compartmental model in order to mimic malaria transmission. The model showed a good capacity to retrieve the serological and parasite prevalence data. PMID:8270011
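A Python sketch of the general approach, simplified to a constant force of infection for brevity (the paper's is age-dependent): the catalytic form P(a) = 1 − exp(−λa) is fitted to age-stratified seroprevalence. The data values here are invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    age = np.array([1, 3, 5, 10, 15, 20, 30, 40], dtype=float)
    seroprev = np.array([0.05, 0.14, 0.22, 0.38, 0.52, 0.60, 0.74, 0.82])

    def catalytic(a, lam):
        # Probability of being seropositive by age a under a constant
        # force of infection lam (per year).
        return 1.0 - np.exp(-lam * a)

    (lam_hat,), _ = curve_fit(catalytic, age, seroprev, p0=[0.05])
    print("estimated force of infection:", lam_hat)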
Huang, J; Vieland, V J
2001-01-01
It is well known that the asymptotic null distribution of the homogeneity lod score (LOD) does not depend on the genetic model specified in the analysis. When appropriately rescaled, the LOD is asymptotically distributed as 0.5χ²(0) + 0.5χ²(1), regardless of the assumed trait model. However, because locus heterogeneity is a common phenomenon, the heterogeneity lod score (HLOD), rather than the LOD itself, is often used in gene mapping studies. We show here that, in contrast with the LOD, the asymptotic null distribution of the HLOD does depend upon the genetic model assumed in the analysis. In affected sib pair (ASP) data, this distribution can be worked out explicitly as (0.5 - c)χ²(0) + 0.5χ²(1) + cχ²(2), where c depends on the assumed trait model. E.g., for a simple dominant model (HLOD/D), c is a function of the disease allele frequency p: for p = 0.01, c = 0.0006, while for p = 0.1, c = 0.059. For a simple recessive model (HLOD/R), c = 0.098 independently of p. This latter (recessive) distribution turns out to be the same as the asymptotic distribution of the MLS statistic under the possible triangle constraint, which is asymptotically equivalent to the HLOD/R. The null distribution of the HLOD/D is close to that of the LOD, because the weight c on the χ²(2) component is small. These results mean that the cutoff value for a test of size alpha will tend to be smaller for the HLOD/D than for the HLOD/R. For example, the alpha = 0.0001 cutoff (on the lod scale) for the HLOD/D with p = 0.05 is 3.01, while for the LOD it is 3.00, and for the HLOD/R it is 3.27. For general pedigrees, explicit analytical expression of the null HLOD distribution does not appear possible, but it will still depend on the assumed genetic model.
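A short Python check of the stated cutoffs, assuming the usual conversion of a lod score L to a chi-square statistic x = 2 ln(10) L; this reproduces the 3.00 (LOD) and 3.27 (HLOD/R) values quoted above.

    import math
    from scipy.stats import chi2
    from scipy.optimize import brentq

    TWO_LN10 = 2.0 * math.log(10.0)

    def tail(lod, c):
        # Upper tail of (0.5 - c)*chi2(0) + 0.5*chi2(1) + c*chi2(2) at
        # x = 2*ln(10)*lod; the chi2(0) point mass contributes nothing.
        x = TWO_LN10 * lod
        return 0.5 * chi2.sf(x, 1) + c * chi2.sf(x, 2)

    for label, c in [("LOD   ", 0.0), ("HLOD/R", 0.098)]:
        cutoff = brentq(lambda L: tail(L, c) - 1e-4, 1.0, 6.0)
        print(label, round(cutoff, 2))   # expect ~3.00 and ~3.27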
Application of balancing methods in modeling the penicillin fermentation.
Heijnen, J J; Roels, J A; Stouthamer, A H
1979-12-01
This paper shows the application of elementary balancing methods in combination with simple kinetic equations in the formulation of an unstructured model for the fed-batch process for the production of penicillin. The rate of substrate uptake is modeled with a Monod-type relationship. The specific penicillin production rate is assumed to be a function of growth rate. Hydrolysis of penicillin to penicilloic acid is assumed to be first order in penicillin. In simulations with the present model it is shown that the model, although assuming a strict relationship between specific growth rate and penicillin productivity, allows for the commonly observed lag phase in the penicillin concentration curve and the apparent separation between growth and production phase (idiophase-trophophase concept). Furthermore it is shown that the feed rate profile during fermentation is of vital importance in the realization of a high production rate throughout the duration of the fermentation. It is emphasized that the method of modeling presented may also prove rewarding for an analysis of fermentation processes other than the penicillin fermentation.
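A Python sketch of an unstructured fed-batch model in the spirit described: Monod substrate uptake, a specific production rate tied to growth rate, and first-order hydrolysis of penicillin. The particular π(μ) form and all parameter values are illustrative, not the paper's.

    import numpy as np
    from scipy.integrate import solve_ivp

    mu_max, Ks, Yxs = 0.11, 0.05, 0.47     # 1/h, g/L, gX/gS
    k_hyd, F, Sf = 0.01, 0.03, 400.0       # 1/h, L/h feed rate, g/L feed substrate

    def rhs(t, y):
        X, S, P, V = y                          # biomass, substrate, penicillin, volume
        mu = mu_max * S / (Ks + S)              # Monod substrate uptake
        qp = 0.005 * mu / (0.01 + mu)           # assumed production kinetics pi(mu)
        D = F / V                               # dilution by the feed
        dX = mu * X - D * X
        dS = -(mu / Yxs) * X + D * (Sf - S)
        dP = qp * X - k_hyd * P - D * P         # first-order hydrolysis to penicilloic acid
        return [dX, dS, dP, F]

    sol = solve_ivp(rhs, (0.0, 150.0), [1.0, 20.0, 0.0, 7.0], max_step=0.5)
    print("final penicillin concentration:", sol.y[2, -1])

Varying F in this sketch illustrates the paper's point that the feed rate profile largely determines whether a high production rate is sustained.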
Optimal Government Subsidies to Universities in the Face of Tuition and Enrollment Constraints
ERIC Educational Resources Information Center
Easton, Stephen T.; Rockerbie, Duane W.
2008-01-01
This paper develops a simple static model of an imperfectly competitive university operating under government-imposed constraints on the ability to raise tuition fees and increase enrollments. The model has particular applicability to Canadian universities. Assuming an average cost pricing rule, rules for adequate government subsidies (operating…
A Bayesian Beta-Mixture Model for Nonparametric IRT (BBM-IRT)
ERIC Educational Resources Information Center
Arenson, Ethan A.; Karabatsos, George
2017-01-01
Item response models typically assume that the item characteristic (step) curves follow a logistic or normal cumulative distribution function, which are strictly monotone functions of person test ability. Such assumptions can be overly-restrictive for real item response data. We propose a simple and more flexible Bayesian nonparametric IRT model…
Zare, Yasser; Rhim, Sungsoo; Garmabi, Hamid; Rhee, Kyong Yop
2018-04-01
The networks of nanoparticles in nanocomposites cause solid-like behavior demonstrating a constant storage modulus at low frequencies. This study examines the storage modulus of poly (lactic acid)/poly (ethylene oxide)/carbon nanotubes (CNT) nanocomposites. The experimental data of the storage modulus in the plateau regions are obtained by a frequency sweep test. In addition, a simple model is developed to predict the constant storage modulus assuming the properties of the interphase regions and the CNT networks. The model calculations are compared with the experimental results, and the parametric analyses are applied to validate the predictability of the developed model. The calculations properly agree with the experimental data at all polymer and CNT concentrations. Moreover, all parameters acceptably modulate the constant storage modulus. The percentage of the networked CNT, the modulus of networks, and the thickness and modulus of the interphase regions directly govern the storage modulus of nanocomposites. The outputs reveal the important roles of the interphase properties in the storage modulus.
Equilibria of perceptrons for simple contingency problems.
Dawson, Michael R W; Dupuis, Brian
2012-08-01
The contingency between cues and outcomes is fundamentally important to theories of causal reasoning and to theories of associative learning. Researchers have computed the equilibria of Rescorla-Wagner models for a variety of contingency problems, and have used these equilibria to identify situations in which the Rescorla-Wagner model is consistent, or inconsistent, with normative models of contingency. Mathematical analyses that directly compare artificial neural networks to contingency theory have not been performed, because of the assumed equivalence between the Rescorla-Wagner learning rule and the delta rule training of artificial neural networks. However, recent results indicate that this equivalence is not as straightforward as typically assumed, suggesting a strong need for mathematical accounts of how networks deal with contingency problems. One such analysis is presented here, where it is proven that the structure of the equilibrium for a simple network trained on a basic contingency problem is quite different from the structure of the equilibrium for a Rescorla-Wagner model faced with the same problem. However, these structural differences lead to functionally equivalent behavior. The implications of this result for the relationships between associative learning, contingency theory, and connectionism are discussed.
A simple computational algorithm of model-based choice preference.
Toyama, Asako; Katahira, Kentaro; Ohira, Hideki
2017-08-01
A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences-namely, the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
Trending in Probability of Collision Measurements
NASA Technical Reports Server (NTRS)
Vallejo, J. J.; Hejduk, M. D.; Stamey, J. D.
2015-01-01
A simple model is proposed to predict the behavior of Probabilities of Collision (Pc) for conjunction events. The model attempts to predict the location and magnitude of the peak Pc value for an event by assuming the progression of Pc values can be modeled to first order by a downward-opening parabola. To incorporate prior information from a large database of past conjunctions, the Bayes paradigm is utilized, and the operating characteristics of the model are established through a large simulation study. Though the model is simple, it performs well in predicting the temporal location of the peak Pc and thus shows promise as a decision aid in operational conjunction assessment risk analysis.
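A minimal Python sketch of the parabola idea, with an ordinary least-squares fit standing in for the paper's Bayesian machinery; the series of log10(Pc) values is invented for illustration.

    import numpy as np

    days_to_tca = np.array([5.0, 4.5, 4.0, 3.5, 3.0])   # days before closest approach
    log_pc = np.array([-6.8, -6.1, -5.6, -5.3, -5.2])   # invented log10(Pc) history

    a, b, c = np.polyfit(days_to_tca, log_pc, 2)        # log_pc ~ a*t^2 + b*t + c
    assert a < 0, "expected a downward-opening parabola"
    t_peak = -b / (2.0 * a)
    peak = a * t_peak**2 + b * t_peak + c
    print(f"peak log10(Pc) ~ {peak:.2f} at {t_peak:.1f} days to TCA")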
NASA Astrophysics Data System (ADS)
Kotha, Shiva Prasad
Bone mineral and bone organic are assumed to be linearly elastic, brittle materials. A simple micromechanical model based on the shear lag theory is developed to model the stress transfer between the mineral platelets of bone. The bone mineral platelets carry most of the applied load while the organic primarily serves to transfer load between the overlapped mineral platelets by shear. Experiments were done to elucidate the mechanism of failure in bovine cortical bone and to decrease the mineral content of control bone with in-vitro fluoride ion treatments. It was suggested that the failure at the ultrastructural level is due to the transverse failure of bonds between the collagen microfibrils in the organic matrix. However, the shear stress transfer and the axial load bearing capacity of the organic are not impaired. Hence, it is assumed that the shear strain in the matrix increases while the shear stress remains constant at the shear yield stress once the matrix starts yielding at the ends of the bone mineral. When the shear stress over the length of the mineral platelet reaches the shear yield stress, no more applied stress is carried by the bone mineral platelets while the organic matrix carries the increased axial load. The bone fails when the axial stress in the organic reaches its ultimate stress. The bone mineral is assumed to dissolve due to in-vitro fluoride ion treatments and to precipitate calcium fluoride or a fluoroapatite-like material. The amount of dissolution is estimated based on 19F Nuclear Magnetic Resonance or a decrease in the carbonate content of bone. The dissolution of bone mineral is assumed to increase the porosity in the organic. We assume that the elastic modulus and the ultimate strength of the organic decrease due to the increased porosity. A simple empirical model is used to model the decrease in the elastic modulus. The strength is modeled to decrease based on an increase in the cross-sectional area occupied by the porosity. The precipitate is assumed to contribute to the mechanical properties of bone due to friction generated by the Poisson contraction of the organic as it carries axial loads. The resulting stress-strain curve predicted by the model resembles the stress-strain curves obtained in the experiments.
NASA Astrophysics Data System (ADS)
Kato, N.
2017-12-01
Numerical simulations of earthquake cycles are conducted to investigate the origin of complexity of earthquake recurrence. There are two main causes of the complexity. One is self-organized stress heterogeneity due to dynamical effect. The other is the effect of interaction between some fault patches. In the model, friction on the fault is assumed to obey a rate- and state-dependent friction law. Circular patches of velocity-weakening frictional property are assumed on the fault. On the remaining areas of the fault, velocity-strengthening friction is assumed. We consider three models: Single patch model, two-patch model, and three-patch model. In the first model, the dynamical effect is mainly examined. The latter two models take into consideration the effect of interaction as well as the dynamical effect. Complex multiperiodic or aperiodic sequences of slip events occur when slip behavior changes from the seismic to aseismic, and when the degree of interaction between seismic patches is intermediate. The former is observed in all the models, and the latter is observed in the two-patch model and the three-patch model. Evolution of spatial distribution of shear stress on the fault suggests that aperiodicity at the transition from seismic to aseismic slip is caused by self-organized stress heterogeneity. The iteration maps of recurrence intervals of slip events in aperiodic sequences are examined, and they are approximately expressed by simple curves for aperiodicity at the transition from seismic to aseismic slip. In contrast, the iteration maps for aperiodic sequences caused by interaction between seismic patches are scattered and they are not expressed by simple curves. This result suggests that complex sequences caused by different mechanisms may be distinguished.
ERIC Educational Resources Information Center
Brady, Timothy F.; Tenenbaum, Joshua B.
2013-01-01
When remembering a real-world scene, people encode both detailed information about specific objects and higher order information like the overall gist of the scene. However, formal models of change detection, like those used to estimate visual working memory capacity, assume observers encode only a simple memory representation that includes no…
ERIC Educational Resources Information Center
Hayes, Andrew F.; Preacher, Kristopher J.
2010-01-01
Most treatments of indirect effects and mediation in the statistical methods literature and the corresponding methods used by behavioral scientists have assumed linear relationships between variables in the causal system. Here we describe and extend a method first introduced by Stolzenberg (1980) for estimating indirect effects in models of…
Masurel, R J; Gelineau, P; Lequeux, F; Cantournet, S; Montes, H
2017-12-27
In this paper we focus on the role of dynamical heterogeneities in the non-linear response of polymers in the glass transition domain. We start from a simple coarse-grained model that assumes a random distribution of the initial local relaxation times and that quantitatively describes the linear viscoelasticity of a polymer in the glass transition regime. We extend this model to non-linear mechanics assuming a local Eyring stress dependence of the relaxation times. Implementing the model in a finite element mechanics code, we derive the mechanical properties and the local mechanical fields at the beginning of the non-linear regime. The model predicts a narrowing of the distribution of relaxation times and the storage of a part of the mechanical energy (internal stress) transferred to the material during stretching in this temperature range. We show that the stress field is not spatially correlated under and after loading and follows a Gaussian distribution. In addition, the strain field exhibits shear bands, but the strain distribution is narrow. Hence, most of the mechanical quantities can be calculated analytically, in a very good approximation, with the simple assumption that the strain rate is constant.
A simple branching model that reproduces language family and language population distributions
NASA Astrophysics Data System (ADS)
Schwämmle, Veit; de Oliveira, Paulo Murilo Castro
2009-07-01
Human history leaves fingerprints in human languages. Little is known about language evolution and its study is of great importance. Here we construct a simple stochastic model and compare its results to statistical data of real languages. The model is based on the recent finding that language changes occur independently of the population size. We find agreement with the data additionally assuming that languages may be distinguished by having at least one among a finite, small number of different features. This finite set is also used in order to define the distance between two languages, similarly to linguistics tradition since Swadesh.
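A minimal Python branching sketch under the stated premises: splits and feature changes occur at rates independent of population size, and each language carries a small finite feature vector used to define distance. All constants are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    F, V = 8, 4                      # number of features, values per feature
    p_split, p_change, steps = 0.05, 0.1, 120

    langs = [rng.integers(0, V, F)]  # one ancestral language
    for _ in range(steps):
        nxt = []
        for feats in langs:
            feats = feats.copy()
            mask = rng.random(F) < p_change          # changes independent of speakers
            feats[mask] = rng.integers(0, V, int(mask.sum()))
            nxt.append(feats)
            if rng.random() < p_split:               # language splits in two
                nxt.append(feats.copy())
        langs = nxt

    def distance(x, y):
        # Languages are distinguished by differing features (Hamming distance).
        return int((x != y).sum())

    print(len(langs), "languages; sample distance:", distance(langs[0], langs[-1]))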
Nakasaki, Kiyohiko; Ohtaki, Akihito
2002-01-01
Using dog food as a model of the organic waste that comprises composting raw material, the degradation pattern of organic materials was examined by continuously measuring the quantity of CO2 evolved during the composting process in both batch and fed-batch operations. A simple numerical model was made on the basis of three suppositions for describing the organic matter decomposition in the batch operation. First, a certain quantity of carbon in the dog food was assumed to be recalcitrant to degradation in the composting reactor within the retention time allowed. Second, it was assumed that the decomposition rate of carbon is proportional to the quantity of easily degradable carbon, that is, the carbon recalcitrant to degradation was subtracted from the total carbon remaining in the dog food. Third, a certain lag time is assumed to occur before the start of active decomposition of organic matter in the dog food; this lag corresponds to the time required for microorganisms to proliferate and become active. It was then ascertained that the decomposition pattern for the organic matter in the dog food during the fed-batch operation could be predicted by the numerical model with the parameters obtained from the batch operation. This numerical model was modified so that the change in dry weight of composting materials could be obtained. The modified model was found suitable for describing the organic matter decomposition pattern in an actual fed-batch composting operation of the garbage obtained from a restaurant, approximately 10 kg d(-1) loading for 60 d.
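The three suppositions translate into a closed-form curve for a single batch, sketched below in Python with illustrative parameters: a recalcitrant carbon fraction, first-order decay of the easily degradable carbon, and a lag time before active decomposition.

    import numpy as np

    C0, f_recal, k, t_lag = 100.0, 0.35, 0.012, 24.0   # gC, fraction, 1/h, h

    def carbon_remaining(t):
        # Recalcitrant carbon persists; easily degradable carbon decays
        # first-order once the lag (microbial proliferation) has elapsed.
        active = np.clip(t - t_lag, 0.0, None)
        return C0 * f_recal + C0 * (1.0 - f_recal) * np.exp(-k * active)

    t = np.linspace(0.0, 480.0, 9)                     # hours
    cumulative_co2_c = C0 - carbon_remaining(t)        # carbon evolved as CO2
    print(np.round(cumulative_co2_c, 1))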
Spatial surplus production modeling of Atlantic tunas and billfish.
Carruthers, Thomas R; McAllister, Murdoch K; Taylor, Nathan G
2011-10-01
We formulate and simulation-test a spatial surplus production model that provides a basis with which to undertake multispecies, multi-area, stock assessment. Movement between areas is parameterized using a simple gravity model that includes a "residency" parameter that determines the degree of stock mixing among areas. The model is deliberately simple in order to (1) accommodate nontarget species that typically have fewer available data and (2) minimize computational demand to enable simulation evaluation of spatial management strategies. Using this model, we demonstrate that careful consideration of spatial catch and effort data can provide the basis for simple yet reliable spatial stock assessments. If simple spatial dynamics can be assumed, tagging data are not required to reliably estimate spatial distribution and movement. When applied to eight stocks of Atlantic tuna and billfish, the model tracks regional catch data relatively well by approximating local depletions and exchange among high-abundance areas. We use these results to investigate and discuss the implications of using spatially aggregated stock assessment for fisheries in which the distribution of both the population and fishing vary over time.
Third-Degree Price Discrimination Revisited
ERIC Educational Resources Information Center
Kwon, Youngsun
2006-01-01
The author derives the probability that price discrimination improves social welfare, using a simple model of third-degree price discrimination assuming two independent linear demands. The probability that price discrimination raises social welfare increases as the preferences or incomes of consumer groups become more heterogeneous. He derives the…
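A numerical Python sketch of the welfare comparison with two independent linear demands and zero marginal cost; the demand parameters are invented, and a grid search stands in for the closed-form uniform monopoly price.

    import numpy as np

    a = np.array([10.0, 6.0])          # demand intercepts, q_i = a_i - b_i * p
    b = np.array([1.0, 2.0])

    def welfare(prices):
        q = np.clip(a - b * prices, 0.0, None)
        cs = q**2 / (2 * b)            # consumer surplus under linear demand
        ps = prices * q                # producer surplus (zero marginal cost)
        return (cs + ps).sum()

    p_disc = a / (2 * b)               # discriminatory monopoly prices
    grid = np.linspace(0.01, 10.0, 5000)
    profits = [(np.clip(a - b * p, 0.0, None) * p).sum() for p in grid]
    p_uni = grid[int(np.argmax(profits))]

    print("welfare, discrimination:", welfare(p_disc))
    print("welfare, uniform:       ", welfare(np.array([p_uni, p_uni])))

In this instance both markets are served under uniform pricing and total output is unchanged by discrimination, so discrimination strictly lowers welfare; as the demands become more heterogeneous, the weak market may be priced out under uniform pricing, and discrimination can then raise welfare, consistent with the result above.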
Watson, K.; Hummer-Miller, S.
1981-01-01
A method based solely on remote sensing data has been developed to estimate those meteorological effects which are required for thermal-inertia mapping. It assumes that the atmospheric fluxes are spatially invariant and that the solar, sky, and sensible heat fluxes can be approximated by a simple mathematical form. Coefficients are determined by a least-squares method, fitting observational data to our thermal model. A comparison between field measurements and the model-derived flux shows the type of agreement which can be achieved. An analysis of the limitations of the method is also provided.
Two Simple Models for Fracking
NASA Astrophysics Data System (ADS)
Norris, Jaren Quinn
Recent developments in fracking have enabled the recovery of oil and gas from tight shale reservoirs. These developments have also made fracking one of the most controversial environmental issues in the United States. Despite the growing controversy surrounding fracking, there is relatively little publicly available research. This dissertation introduces two simple models for fracking that were developed using techniques from non-linear and statistical physics. The first model assumes that the volume of induced fractures must be equal to the volume of injected fluid. For simplicity, these fractures are assumed to form a spherically symmetric damage region around the borehole. The predicted volumes of water necessary to create a damage region with a given radius are in good agreement with reported values. The second model is a modification of invasion percolation which was previously introduced to model water flooding. The reservoir rock is represented by a regular lattice of local traps that contain oil and/or gas separated by rock barriers. The barriers are assumed to be highly heterogeneous and are assigned random strengths. Fluid is injected from a central site and the weakest rock barrier breaks, allowing fluid to flow into the adjacent site. The process repeats with the weakest barrier breaking and fluid flowing to an adjacent site each time step. Extensive numerical simulations were carried out to obtain statistical properties of the growing fracture network. The network was found to be fractal with fractal dimensions differing slightly from the accepted values for traditional percolation. Additionally, the network follows Horton-Strahler and Tokunaga branching statistics, which have been used to characterize river networks. As with other percolation models, the growth of the network occurs in bursts. These bursts follow a power-law size distribution similar to observed microseismic events. Reservoir stress anisotropy is incorporated into the model by assigning horizontal bonds weaker strengths on average than vertical bonds. Numerical simulations show that increasing bond strength anisotropy tends to reduce the fractal dimension of the growing fracture network and decrease the power-law slope of the burst size distribution. Although simple, these two models are useful for making informed decisions about fracking.
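A Python sketch of the first model's volume balance: injected fluid volume equals the fracture volume of a spherically symmetric damage region, giving R = (3V / 4πφ)^(1/3). The porosity value φ is our assumption, not the dissertation's.

    import math

    V_injected = 10_000.0      # m^3 of fracking fluid
    phi = 0.001                # assumed induced-fracture volume fraction
    R = (3.0 * V_injected / (4.0 * math.pi * phi)) ** (1.0 / 3.0)
    print(f"damage-region radius: {R:.0f} m")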
NASA Astrophysics Data System (ADS)
Christoffersen, J.; Christoffersen, M. R.; Arends, J.
1984-06-01
A model is presented for remineralization of partly demineralized tooth enamel, taking into account the effect of the presence of fluoride ions. The model predicts that, in the absence of precipitation of phases other than calcium hydroxyapatite (HAP) and fluoridized HAP, which are assumed to model enamel, there exists a maximum value of the fluoride concentration gradient, above which lesions cannot be successfully repaired.
Context-dependent decision-making: a simple Bayesian model
Lloyd, Kevin; Leslie, David S.
2013-01-01
Many phenomena in animal learning can be explained by a context-learning process whereby an animal learns about different patterns of relationship between environmental variables. Differentiating between such environmental regimes or ‘contexts’ allows an animal to rapidly adapt its behaviour when context changes occur. The current work views animals as making sequential inferences about current context identity in a world assumed to be relatively stable but also capable of rapid switches to previously observed or entirely new contexts. We describe a novel decision-making model in which contexts are assumed to follow a Chinese restaurant process with inertia and full Bayesian inference is approximated by a sequential-sampling scheme in which only a single hypothesis about current context is maintained. Actions are selected via Thompson sampling, allowing uncertainty in parameters to drive exploration in a straightforward manner. The model is tested on simple two-alternative choice problems with switching reinforcement schedules and the results compared with rat behavioural data from a number of T-maze studies. The model successfully replicates a number of important behavioural effects: spontaneous recovery, the effect of partial reinforcement on extinction and reversal, the overtraining reversal effect, and serial reversal-learning effects. PMID:23427101
Eckert, Kristen A; Carter, Marissa J; Lansingh, Van C; Wilson, David A; Furtado, João M; Frick, Kevin D; Resnikoff, Serge
2015-01-01
To estimate the annual loss of productivity from blindness and moderate to severe visual impairment (MSVI) using simple models (analogous to how a rapid assessment model relates to a comprehensive model) based on minimum wage (MW) and gross national income (GNI) per capita (US$, 2011). Cost of blindness (COB) was calculated for the age group ≥50 years in nine sample countries by assuming the loss of current MW and loss of GNI per capita. It was assumed that all individuals work until 65 years old and that half of visual impairment prevalent in the ≥50 years age group is prevalent in the 50-64 years age group. For cost of MSVI (COMSVI), individual wage and GNI loss of 30% was assumed. Results were compared with the values of the uncorrected refractive error (URE) model of productivity loss. COB (MW method) ranged from $0.1 billion in Honduras to $2.5 billion in the United States, and COMSVI ranged from $0.1 billion in Honduras to $5.3 billion in the US. COB (GNI method) ranged from $0.1 million in Honduras to $7.8 billion in the US, and COMSVI ranged from $0.1 billion in Honduras to $16.5 billion in the US. Most GNI method values were near equivalent to those of the URE model. Although most people with blindness and MSVI live in developing countries, the highest productivity losses are in high income countries. The global economy could improve if eye care were made more accessible and more affordable to all.
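A back-of-envelope Python sketch of the minimum-wage method as described: half of the ≥50 prevalence is assigned to the 50-64 working-age band and valued at the annual minimum wage, with 100% wage loss assumed for blindness and 30% for MSVI. All input numbers are invented.

    blind_50plus = 200_000         # prevalent cases aged >= 50 (invented)
    working_share = 0.5            # assumed half are aged 50-64
    annual_mw = 7_500.0            # US$, annual minimum wage (invented)
    loss_fraction = 1.0            # 1.0 for blindness; 0.3 for MSVI

    cob = blind_50plus * working_share * annual_mw * loss_fraction
    print(f"annual cost of blindness: ${cob / 1e9:.2f} billion")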
A simple and complete model for wind turbine wakes over complex terrain
NASA Astrophysics Data System (ADS)
Rommelfanger, Nick; Rajborirug, Mai; Luzzatto-Fegiz, Paolo
2017-11-01
Simple models for turbine wakes have been used extensively in the wind energy community, both as independent tools, as well as to complement more refined and computationally-intensive techniques. These models typically prescribe empirical relations for how the wake radius grows with downstream distance x and obtain the wake velocity at each x through the application of either mass conservation, or of both mass and momentum conservation (e.g. Katić et al. 1986; Frandsen et al. 2006; Bastankhah & Porté-Agel 2014). Since these models assume a global behavior of the wake (for example, linear spreading with x) they cannot respond to local changes in background flow, as may occur over complex terrain. Instead of assuming a global wake shape, we develop a model by relying on a local assumption for the growth of the turbulent interface. To this end, we introduce to wind turbine wakes the use of the entrainment hypothesis, which has been used extensively in other areas of geophysical fluid dynamics. We obtain two coupled ordinary differential equations for mass and momentum conservation, which can be readily solved with a prescribed background pressure gradient. Our model is in good agreement with published data for the development of wakes over complex terrain.
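A Python sketch of the entrainment-hypothesis formulation, with the two conservation laws reduced to ODEs for a top-hat axisymmetric wake. The edge entrainment velocity u_e = α(U∞ − U_w), the zero-pressure-gradient background, and all constants are our simplifying assumptions; a prescribed pressure gradient would enter through the momentum equation (dp/dx below).

    import numpy as np
    from scipy.integrate import solve_ivp

    U_inf, alpha = 8.0, 0.08          # background speed (m/s), entrainment coefficient
    r0, Uw0 = 50.0, 4.0               # initial wake radius (m) and wake speed (m/s)

    def rhs(x, y):
        m, p = y                      # m = r^2 * Uw (mass), p = r^2 * Uw * (U_inf - Uw)
        Uw = U_inf - p / m
        r = np.sqrt(m / Uw)
        dm = 2.0 * r * alpha * (U_inf - Uw)   # entrainment across the wake edge
        dp = 0.0                      # momentum deficit conserved (no pressure gradient)
        return [dm, dp]

    y0 = [r0**2 * Uw0, r0**2 * Uw0 * (U_inf - Uw0)]
    sol = solve_ivp(rhs, (0.0, 3000.0), y0)
    m, p = sol.y[:, -1]
    print("wake speed at 3 km downstream:", U_inf - p / m)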
Statistical methodologies for the control of dynamic remapping
NASA Technical Reports Server (NTRS)
Saltz, J. H.; Nicol, D. M.
1986-01-01
Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping two qualitatively different kinds of problems. One problem assumes that performance deteriorates gradually; the other assumes that performance deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.
On the joint bimodality of temperature and moisture near stratocumulus cloud tops
NASA Technical Reports Server (NTRS)
Randall, D. A.
1983-01-01
The observed distributions of the thermodynamic variables near stratocumulus top are highly bimodal. Two simple models of sub-grid fractional cloudiness motivated by this observed bimodality are examined. In both models, certain low-order moments of two independent, moist-conservative thermodynamic variables are assumed to be known. The first model is based on the assumption of two discrete populations of parcels: a warm, dry population and a cool, moist population. If only the first and second moments are assumed to be known, the number of unknowns exceeds the number of independent equations. If the third moments are assumed to be known as well, the number of independent equations exceeds the number of unknowns. The second model is based on the assumption of a continuous joint bimodal distribution of parcels, obtained as the weighted sum of two binormal distributions. For this model, the third moments are used to obtain 9 independent nonlinear algebraic equations in 11 unknowns. Two additional equations are needed to determine the covariances within the two subpopulations. In case these two internal covariances vanish, the system of equations can be solved analytically.
Simulated laser fluorosensor signals from subsurface chlorophyll distributions
NASA Technical Reports Server (NTRS)
Venable, D. D.; Khatun, S.; Punjabi, A.; Poole, L.
1986-01-01
A semianalytic Monte Carlo model has been used to simulate laser fluorosensor signals returned from subsurface distributions of chlorophyll. This study assumes the only constituent of the ocean medium is the common coastal zone dinoflagellate Prorocentrum minimum. The concentration is represented by Gaussian distributions in which the location of the distribution maximum and the standard deviation are variable. Most of the qualitative features observed in the fluorescence signal for total chlorophyll concentrations up to 1.0 microg/liter can be accounted for with a simple analytic solution assuming a rectangular chlorophyll distribution function.
Speed and Accuracy in the Processing of False Statements About Semantic Information.
ERIC Educational Resources Information Center
Ratcliff, Roger
1982-01-01
A standard reaction time procedure and a response signal procedure were used on data from eight experiments on semantic verifications. Results suggest that simple models of the semantic verification task that assume a single yes/no dimension on which discrimination is made are not correct. (Author/PN)
Synapse fits neuron: joint reduction by model inversion.
van der Scheer, H T; Doelman, A
2017-08-01
In this paper, we introduce a novel simplification method for dealing with physical systems that can be thought to consist of two subsystems connected in series, such as a neuron and a synapse. The aim of our method is to help find a simple, yet convincing model of the full cascade-connected system, assuming that a satisfactory model of one of the subsystems, e.g., the neuron, is already given. Our method allows us to validate a candidate model of the full cascade against data at a finer scale. In our main example, we apply our method to part of the squid's giant fiber system. We first postulate a simple, hypothetical model of cell-to-cell signaling based on the squid's escape response. Then, given a FitzHugh-type neuron model, we derive the verifiable model of the squid giant synapse that this hypothesis implies. We show that the derived synapse model accurately reproduces synaptic recordings, hence lending support to the postulated, simple model of cell-to-cell signaling, which thus, in turn, can be used as a basic building block for network models.
Kendal, W S
2000-04-01
To illustrate how probability-generating functions (PGFs) can be employed to derive a simple probabilistic model for clonogenic survival after exposure to ionizing irradiation. Both repairable and irreparable radiation damage to DNA were assumed to occur by independent (Poisson) processes, at intensities proportional to the irradiation dose. Also, repairable damage was assumed to be either repaired or further (lethally) injured according to a third (Bernoulli) process, with the probability of lethal conversion being directly proportional to dose. Using the algebra of PGFs, these three processes were combined to yield a composite PGF that described the distribution of lethal DNA lesions in irradiated cells. The composite PGF characterized a Poisson distribution with mean αD + βD², where D was dose and α and β were radiobiological constants. This distribution yielded the conventional linear-quadratic survival equation. To test the composite model, the derived distribution was used to predict the frequencies of multiple chromosomal aberrations in irradiated human lymphocytes. The predictions agreed well with observation. This probabilistic model was consistent with single-hit mechanisms, but it was not consistent with binary misrepair mechanisms. A stochastic model for radiation survival has been constructed from elementary PGFs that exactly yields the linear-quadratic relationship. This approach can be used to investigate other simple probabilistic survival models.
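A Monte Carlo sketch in Python of the PGF construction: irreparable lesions are Poisson with mean a1·D, repairable lesions are Poisson with mean a2·D and are lethally converted with probability c·D (Bernoulli thinning), so lethal lesions are Poisson with mean αD + βD² (α = a1, β = a2·c) and survival is exp(−(αD + βD²)). The constants are illustrative, not the paper's.

    import numpy as np

    rng = np.random.default_rng(7)
    a1, a2, c = 0.15, 5.0, 0.01        # per-Gy lesion intensities, conversion per Gy
    D, n = 4.0, 200_000                # dose (Gy), simulated cells

    irreparable = rng.poisson(a1 * D, n)
    repairable = rng.poisson(a2 * D, n)
    converted = rng.binomial(repairable, c * D)   # lethal misrepair (thinned Poisson)
    lethal = irreparable + converted

    alpha, beta = a1, a2 * c
    print("simulated survival:", (lethal == 0).mean())
    print("LQ prediction:     ", np.exp(-(alpha * D + beta * D**2)))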
A consensus-based dynamics for market volumes
NASA Astrophysics Data System (ADS)
Sabatelli, Lorenzo; Richmond, Peter
2004-12-01
We develop a model of trading orders based on opinion dynamics. The agents may be thought of as the shareholders of a major mutual fund rather than as direct traders. The balance between their buy and sell orders determines the size of the fund order (volume) and has an impact on prices and indexes. We assume agents interact simultaneously with each other through a Sznajd-like interaction. Their degree of connection is determined by the probability of changing opinion independently of what their neighbours are doing. We assume that such a probability may change randomly, after each transaction, by an amount proportional to the relative difference between the volatility then measured and a benchmark that we assume to be an exponential moving average of the past volume values. We show how this simple model is compatible with some of the main statistical features observed for asset volumes in financial markets.
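A minimal simulation sketch of such a dynamic appears below, assuming a one-dimensional Sznajd-type convincing rule and using realized volume in place of measured volatility for brevity; the agent count, noise bounds, feedback gain, and EMA weight are all assumed rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000                       # agents holding buy (+1) or sell (-1) opinions
opinions = rng.choice([-1, 1], size=N)
p_noise = 0.1                  # probability of changing opinion independently
ema, lam = 0.0, 0.05           # exponential moving average of past volumes

volumes = []
for _ in range(5000):
    # Sznajd-like convincing: an agreeing pair imposes its opinion on a neighbour
    i = rng.integers(N - 2)
    if opinions[i] == opinions[i + 1]:
        opinions[i + 2] = opinions[i]
    # independent opinion changes (the agents' "degree of connection")
    flips = rng.random(N) < p_noise
    opinions[flips] *= -1
    volume = abs(opinions.sum())          # imbalance between buy and sell orders
    volumes.append(volume)
    # feedback: adjust p_noise by the relative distance to the EMA benchmark
    ema = (1 - lam) * ema + lam * volume
    if ema > 0:
        p_noise = min(0.5, max(0.01, p_noise * (1 + 0.1 * (volume - ema) / ema)))

print("mean volume:", np.mean(volumes))
```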
Quantifying Confidence in Model Predictions for Hypersonic Aircraft Structures
2015-03-01
of isolating calibrations of models in the network, segmented and simultaneous calibration are compared using the Kullback-Leibler ... value of θ. While not all test statistics are as simple as measuring goodness or badness of fit, their directional interpretations tend to remain ... data quite well, qualitatively. Quantitative goodness-of-fit tests are problematic because they assume a true empirical CDF is being tested or
Coupled Particle Transport and Pattern Formation in a Nonlinear Leaky-Box Model
NASA Technical Reports Server (NTRS)
Barghouty, A. F.; El-Nemr, K. W.; Baird, J. K.
2009-01-01
Effects of particle-particle coupling on particle characteristics in nonlinear leaky-box type descriptions of the acceleration and transport of energetic particles in space plasmas are examined in the framework of a simple two-particle model based on the Fokker-Planck equation in momentum space. In this model, the two particles are assumed coupled via a common nonlinear source term. In analogy with a prototypical mathematical system of diffusion-driven instability, this work demonstrates that steady-state patterns with a strong dependence on the magnetic turbulence but a rather weak one on the coupled particles' attributes can emerge in solutions of a nonlinearly coupled leaky-box model. The insight gained from this simple model may be of wider use and significance to nonlinearly coupled leaky-box type descriptions in general.
Simple model for deriving sdg interacting boson model Hamiltonians: 150Nd example
NASA Astrophysics Data System (ADS)
Devi, Y. D.; Kota, V. K. B.
1993-07-01
A simple and yet useful model for deriving sdg interacting boson model (IBM) Hamiltonians is to assume that single-boson energies derive from identical particle (pp and nn) interactions and proton, neutron single-particle energies, and that the two-body matrix elements for bosons derive from pn interaction, with an IBM-2 to IBM-1 projection of the resulting p-n sdg IBM Hamiltonian. The applicability of this model in generating sdg IBM Hamiltonians is demonstrated, using a single-j-shell Otsuka-Arima-Iachello mapping of the quadrupole and hexadecupole operators in proton and neutron spaces separately and constructing a quadrupole-quadrupole plus hexadecupole-hexadecupole Hamiltonian in the analysis of the spectra, B(E2)'s, and E4 strength distribution in the example of 150Nd.
Landau-Zener transitions and Dykhne formula in a simple continuum model
NASA Astrophysics Data System (ADS)
Dunham, Yujin; Garmon, Savannah
The Landau-Zener model, which describes the interaction between two linearly driven discrete levels, is useful in describing many simple dynamical systems; however, no system is completely isolated from the surrounding environment. Here we examine a generalization of the original Landau-Zener model to study simple environmental influences. We consider a model in which one of the discrete levels is replaced with an energy continuum, and we find that the survival probability for the initially occupied diabatic level is unaffected by the presence of the continuum. This result can be predicted by assuming that each step in the evolution of the diabatic state evolves independently according to the Landau-Zener formula, even in the continuum limit. We also show that, at least for the simplest model, this result can be predicted with the natural generalization of the Dykhne formula for open systems. Finally, we observe dissipation, as the non-escape probability from the discrete levels is no longer equal to one.
Transcription, intercellular variability and correlated random walk.
Müller, Johannes; Kuttler, Christina; Hense, Burkhard A; Zeiser, Stefan; Liebscher, Volkmar
2008-11-01
We develop a simple model for the random distribution of a gene product. It is assumed that the only source of variance is due to switching transcription on and off by a random process. Under the condition that the transition rates between on and off are constant we find that the amount of mRNA follows a scaled Beta distribution. Additionally, a simple positive feedback loop is considered. The simplicity of the model allows for an explicit solution also in this setting. These findings in turn allow, e.g., for easy parameter scans. We find that bistable behavior translates into bimodal distributions. These theoretical findings are in line with experimental results.
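As a quick illustration, the scaled Beta result can be sampled directly; the rate values below and the exact Beta parameterization are assumed for the sketch, not quoted from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical constant rates: promoter switching on/off and mRNA turnover.
# In one common parameterization, the scaled mRNA level x in [0, 1] follows
# Beta(k_on / gamma, k_off / gamma); bimodality appears when both shape
# parameters drop below 1 (slow switching), matching the bistable case.
k_on, k_off, gamma = 0.8, 0.4, 0.2
a, b = k_on / gamma, k_off / gamma

x = rng.beta(a, b, size=100_000)
print(f"mean scaled mRNA: {x.mean():.3f} (theory a/(a+b) = {a / (a + b):.3f})")
print("bimodal" if (a < 1 and b < 1) else "unimodal")
```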
Cosmic microwave background radiation anisotropies in brane worlds.
Koyama, Kazuya
2003-11-28
We propose a new formulation to calculate the cosmic microwave background (CMB) spectrum in the Randall-Sundrum two-brane model based on recent progress in solving the bulk geometry using a low energy approximation. The evolution of the anisotropic stress imprinted on the brane by the 5D Weyl tensor is calculated. An impact of the dark radiation perturbation on the CMB spectrum is investigated in a simple model assuming an initially scale-invariant adiabatic perturbation. The dark radiation perturbation induces isocurvature perturbations, but the resultant spectrum can be quite different from the prediction of simple mixtures of adiabatic and isocurvature perturbations due to Weyl anisotropic stress.
Accurate Modeling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model
NASA Astrophysics Data System (ADS)
Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron; Scoccimarro, Roman
2015-01-01
The large-scale distribution of galaxies can be explained fairly simply by assuming (i) a cosmological model, which determines the dark matter halo distribution, and (ii) a simple connection between galaxies and the halos they inhabit. This conceptually simple framework, called the halo model, has been remarkably successful at reproducing the clustering of galaxies on all scales, as observed in various galaxy redshift surveys. However, none of these previous studies have carefully modeled the systematics and thus truly tested the halo model in a statistically rigorous sense. We present a new accurate and fully numerical halo model framework and test it against clustering measurements from two luminosity samples of galaxies drawn from the SDSS DR7. We show that the simple ΛCDM cosmology + halo model is not able to simultaneously reproduce the galaxy projected correlation function and the group multiplicity function. In particular, the more luminous sample shows significant tension with theory. We discuss the implications of our findings and how this work paves the way for constraining galaxy formation by accurate simultaneous modeling of multiple galaxy clustering statistics.
Theoretical size distribution of fossil taxa: analysis of a null model
Reed, William J; Hughes, Barry D
2007-01-01
Background: This article deals with the theoretical size distribution (of number of sub-taxa) of a fossil taxon arising from a simple null model of macroevolution. Model: New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions) or cataclysmically. In addition, new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion: The size distributions of the pioneering genus (following a cataclysm) and of derived genera are determined. Also the distribution of the number of genera is considered, along with a comparison of the probability of a monospecific genus with that of a monogeneric family. PMID:17376249
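A small Monte Carlo sketch of this null model is given below; the rates and the observation window are assumed for illustration. It follows a single genus founded by one species, with radical speciations founding new genera and therefore not enlarging the focal genus.

```python
import numpy as np

rng = np.random.default_rng(42)
lam, mu, nu = 0.3, 0.25, 0.05   # per-species rates: speciation, background
                                # extinction, radical speciation (all assumed)
T = 50.0                        # observation window, arbitrary time units

def genus_size():
    """Gillespie simulation of one genus founded by a single species."""
    n, t, total = 1, 0.0, 1     # extant species, elapsed time, sub-taxa ever
    while n > 0 and t < T:
        t += rng.exponential(1.0 / (n * (lam + mu + nu)))
        u = rng.random() * (lam + mu + nu)
        if u < lam:
            n += 1              # ordinary speciation stays within the genus
            total += 1
        elif u < lam + mu:
            n -= 1              # background extinction of one species
        # else: radical speciation founds a new genus; this genus is unchanged
    return total

sizes = np.array([genus_size() for _ in range(10_000)])
print("P(monospecific genus) ~", np.mean(sizes == 1))
```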
Elastic and viscoelastic calculations of stresses in sedimentary basins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warpinski, N.R.
This study presents a method for estimating the stress state within reservoirs at depth using a time-history approach for both elastic and viscoelastic rock behavior. Two features of this model are particularly significant for stress calculations. The first is the time-history approach, where we assume that the present in situ stress is a result of the entire history of the rock mass, rather than due only to the present conditions. The model can incorporate: (1) changes in pore pressure due to gas generation; (2) temperature gradients and local thermal episodes; (3) consolidation and diagenesis through time-varying material properties; and (4) varying tectonic episodes. The second feature is the use of a new viscoelastic model. Rather than assume a form of the relaxation function, a complete viscoelastic solution is obtained from the elastic solution through the viscoelastic correspondence principle. Simple rate models are then applied to obtain the final rock behavior. Example calculations for some simple cases are presented that show the contribution of individual stress or strain components. Finally, a complete example of the stress history of rocks in the Piceance basin is attempted. This calculation compares favorably with present-day stress data in this location. This model serves as a predictor for natural fracture genesis, and expected rock fracturing from the model is compared with actual fractures observed in this region. These results show that most current estimates of in situ stress at depth do not incorporate all of the important mechanisms, and a more complete formulation, such as this study's, is required for acceptable stress calculations. The method presented here is general and is applicable to any basin having a relatively simple geologic history. 25 refs., 18 figs.
A GENERATIVE SKETCH OF BURMESE.
ERIC Educational Resources Information Center
BURLING, ROBBINS
ASSUMING THAT A GENERATIVE APPROACH PROVIDES A FAIRLY DIRECT AND SIMPLE DESCRIPTION OF LINGUISTIC DATA, THE AUTHOR TAKES A TRADITIONAL BURMESE GRAMMAR (W. CORNYN'S "OUTLINE OF BURMESE GRAMMAR," REFERRED TO AS OBG THROUGHOUT THE PAPER) AND REWORKS IT INTO A GENERATIVE FRAMEWORK BASED ON A MODEL BY CHOMSKY. THE STUDY IS DIVIDED INTO FIVE SECTIONS,…
The Future of Humanities Labor
ERIC Educational Resources Information Center
Bauerlein, Mark
2008-01-01
"Publish or perish" has long been the formula of academic labor at research universities, but for many humanities professors that imperative has decayed into a simple rule of production. The publish-or-perish model assumed a peer-review process that maintained quality, but more and more it is the bare volume of printed words that counts. When…
Evidence-Based Practices in a Changing World: Reconsidering the Counterfactual in Education Research
ERIC Educational Resources Information Center
Lemons, Christopher J.; Fuchs, Douglas; Gilbert, Jennifer K.; Fuchs, Lynn S.
2014-01-01
Experimental and quasi-experimental designs are used in educational research to establish causality and develop effective practices. These research designs rely on a counterfactual model that, in simple form, calls for a comparison between a treatment group and a control group. Developers of educational practices often assume that the population…
"Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2009-01-01
Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
Electrostatic potential jump across fast-mode collisionless shocks
NASA Technical Reports Server (NTRS)
Mandt, M. E.; Kan, J. R.
1991-01-01
The electrostatic potential jump across fast-mode collisionless shocks is examined by comparing published observations, hybrid simulations, and a simple model, in order to better characterize its dependence on the various shock parameters. In all three, the electrons are assumed to be described by an isotropic power-law equation of state. The observations show that the cross-shock potential jump correlates well with the shock strength but shows very little correlation with other shock parameters. Under this equation of state, the correlation of the potential jump with shock strength follows naturally from the increased shock compression and from an apparent dependence of the power-law exponent on the Mach number, which the observations indicate. It is found that including a Mach-number dependence for the power-law exponent in the electron equation of state in the simple model produces a potential jump that better fits the observations. On the basis of the simulation results and theoretical estimates of the cross-shock potential, it is discussed how the cross-shock potential might be expected to depend on the other shock parameters.
A simple model to estimate the impact of sea-level rise on platform beaches
NASA Astrophysics Data System (ADS)
Taborda, Rui; Ribeiro, Mónica Afonso
2015-04-01
Estimates of future beach evolution in response to sea-level rise are needed to assess coastal vulnerability. A research gap is identified in providing adequate predictive methods to use for platform beaches. This work describes a simple model to evaluate the effects of sea-level rise on platform beaches that relies on the conservation of beach sand volume and assumes an invariant beach profile shape. In closed systems, when compared with the Inundation Model, results show larger retreats; the differences are higher for beaches with wide berms and when the shore platform develops at shallow depths. The application of the proposed model to Cascais (Portugal) beaches, using 21st century sea-level rise scenarios, shows that there will be a significant reduction in beach width.
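The volume-conservation idea can be illustrated with a Bruun-type calculation; the geometry below (berm height, platform depth, active width) and all numbers are assumed for the sketch and are not the authors' formulation.

```python
# Closed-system sand budget with an invariant profile: raising the whole
# active profile by `slr` must be paid for with sand eroded from the berm,
# which translates into a landward retreat of the shoreline.
def beach_retreat(slr, berm_height, platform_depth, active_width):
    active_height = berm_height + platform_depth   # vertical span of the profile
    return slr * active_width / active_height

# Shallower platforms (and wider active zones) amplify the retreat:
for depth in (1.0, 3.0, 6.0):                      # platform depth in metres (assumed)
    r = beach_retreat(slr=0.5, berm_height=2.0, platform_depth=depth,
                      active_width=80.0)
    print(f"platform at {depth:.0f} m depth: retreat ~ {r:.1f} m")
```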
A Complex-Valued Firing-Rate Model That Approximates the Dynamics of Spiking Networks
Schaffer, Evan S.; Ostojic, Srdjan; Abbott, L. F.
2013-01-01
Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons. PMID:24204236
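The contrast between the two model classes can be sketched schematically; the step input, time constant, and the oscillatory part of the complex eigenvalue below are assumed, whereas the paper derives the eigenvalue from the Fokker-Planck expansion rather than positing it.

```python
import numpy as np

tau = 0.02                        # 20 ms relaxation time (assumed)
omega = 2 * np.pi * 40.0          # 40 Hz ringing frequency (assumed)
dt, steps = 1e-4, 3000
I = lambda t: 10.0 if t > 0.05 else 0.0    # step input drive

r = 0.0                           # traditional real-valued rate
z = 0.0 + 0.0j                    # complex-valued rate
for k in range(steps):
    t = k * dt
    r += dt * (-r + I(t)) / tau                       # single real time constant
    z += dt * (-1.0 / tau + 1j * omega) * (z - I(t))  # complex eigenvalue: the
                                                      # transient rings, mimicking
                                                      # partial spike synchrony
print(f"final rates: traditional {r:.2f}, complex-model readout {max(z.real, 0):.2f}")
```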
Crystal structure refinement of reedmergnerite, the boron analog of albite
Clark, J.R.; Appleman, D.E.
1960-01-01
Ordering of boron in a feldspar crystallographic site T1(0) has been found in reedmergnerite, which has silicon-oxygen and sodium-oxygen distances comparable to those in isostructural low albite. If a simple ionic model is assumed, calculated bond strengths yield a considerable charge imbalance in reedmergnerite, an indication of the inadequacy of the model with respect to these complex structures and of the speculative nature of conclusions based on such a model.
Kosmidis, Kosmas; Argyrakis, Panos; Macheras, Panos
2003-07-01
To verify the Higuchi law and study the drug release from cylindrical and spherical matrices by means of Monte Carlo computer simulation. A one-dimensional matrix, based on the theoretical assumptions of the derivation of the Higuchi law, was simulated and its time evolution was monitored. Cylindrical and spherical three-dimensional lattices were simulated with sites at the boundary of the lattice having been denoted as leak sites. Particles were allowed to move inside it using the random walk model. Excluded volume interactions between the particles were assumed. We have monitored the system time evolution for different lattice sizes and different initial particle concentrations. The Higuchi law was verified using the Monte Carlo technique in a one-dimensional lattice. It was found that Fickian drug release from cylindrical matrices can be approximated nicely with the Weibull function. A simple linear relation between the Weibull function parameters and the specific surface of the system was found. Drug release from a matrix, as a result of a diffusion process assuming excluded volume interactions between the drug molecules, can be described using a Weibull function. This model, although approximate and semiempirical, has the benefit of providing a simple physical connection between the model parameters and the system geometry, which was something missing from other semiempirical models.
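A stripped-down version of the one-dimensional experiment is sketched below, with lattice size, initial loading, and run length assumed: particles random-walk subject to single occupancy (excluded volume) and escape through a leak site at one boundary, so early-time release should follow the square-root-of-time Higuchi scaling.

```python
import numpy as np

rng = np.random.default_rng(7)
L = 200
occupied = np.zeros(L, bool)
occupied[: L // 2] = True            # drug initially loads the half nearest the leak
n0 = occupied.sum()

released = []
for _ in range(8000):
    movers = np.flatnonzero(occupied)
    rng.shuffle(movers)
    for site in movers:
        step = site + rng.choice((-1, 1))
        if step < 0:
            occupied[site] = False                    # escape through the leak site
        elif step < L and not occupied[step]:         # excluded volume: one per site
            occupied[site], occupied[step] = False, True
    released.append(n0 - occupied.sum())

q = np.asarray(released, float) / n0
print("Q(2000)/Q(500) ~", q[1999] / q[499], "(2 expected for sqrt-t scaling)")
```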
Acceleration and Velocity Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Truax, Roger
2015-01-01
A simple approach for computing acceleration and velocity of a structure from the strain is proposed in this study. First, deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an autoregressive moving average model. From deflection, slope, and frequencies of the structure, acceleration and velocity of the structure can be obtained using the proposed approach. Simple harmonic motion is assumed for the acceleration computations, and the central difference equation with a linear autoregressive model is used for the computations of velocity. A cantilevered rectangular wing model is used to validate the simple approach. The quality of the computed deflection, acceleration, and velocity values is independent of the number of fibers. The central difference equation with a linear autoregressive model proposed in this study follows the target response with reasonable accuracy. Therefore, the handicap of the backward difference equation, phase shift, is successfully overcome.
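The two computations are easy to make concrete with a toy single-mode signal standing in for the strain-derived deflection (the frequency is taken as known here, whereas the paper estimates it from the strain time histories with an ARMA model):

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 1.0, dt)
f = 5.0                          # dominant mode frequency in Hz (assumed known)
omega = 2 * np.pi * f
y = 0.01 * np.sin(omega * t)     # deflection, as if reconstructed from strain

accel = -(omega ** 2) * y        # simple-harmonic-motion assumption: a = -w^2 y
vel = np.gradient(y, dt)         # central differences at the interior points

print(f"max |a| = {np.abs(accel).max():.3f}, expected {0.01 * omega**2:.3f}")
print(f"max |v| = {np.abs(vel).max():.3f}, expected {0.01 * omega:.3f}")
```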
NASA Astrophysics Data System (ADS)
Milani, G.; Bertolesi, E.
2017-07-01
A simple quasi-analytical holonomic homogenization approach for the non-linear analysis of in-plane loaded masonry walls is presented. The elementary cell (REV) is discretized with 24 triangular elastic constant-stress elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for the mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how the homogenized stress-strain behavior can be evaluated semi-analytically.
A viscoplastic shear-zone model for episodic slow slip events in oceanic subduction zones
NASA Astrophysics Data System (ADS)
Yin, A.; Meng, L.
2016-12-01
Episodic slow slip events occur widely along oceanic subduction zones at brittle-ductile transition depths (~20-50 km). Although efforts have been devoted to unraveling their mechanical origins, the physical controls on the wide range of their recurrence intervals and slip durations remain unclear. In this study we present a simple mechanical model that attempts to account for the observed temporal evolution of slow slip events. In our model we assume that slow slip events occur in a viscoplastic shear zone (i.e., a Bingham material), which has an upper static and a lower dynamic plastic yield strength. We further assume that the hanging wall deformation is approximated as an elastic spring. We envision the shear zone to be initially locked during forward/landward motion but subsequently unlocked when the elastic and gravity-induced stress exceeds the static yield strength of the shear zone. This leads to backward/trenchward motion damped by viscous shear-zone deformation. As the elastic spring progressively loosens, the hanging wall velocity evolves with time and the viscous shear stress eventually reaches the dynamic yield strength. This is followed by the termination of the trenchward motion when the elastic stress is balanced by the dynamic yield strength of the shear zone and gravity. In order to account for the zig-zag slip-history pattern of typical repeated slow slip events, we assume that the shear zone progressively strengthens after each slow slip cycle, possibly caused by dilatancy as commonly assumed or by progressive fault healing through solution-transport mechanisms. We quantify our conceptual model by obtaining simple analytical solutions. Our model results suggest that the duration of the landward motion increases with the down-dip length and the static yield strength of the shear zone, but decreases with the ambient loading velocity and the elastic modulus of the hanging wall. The duration of the backward/trenchward motion depends on the thickness, viscosity, and dynamic yield strength of the shear zone. Our model predicts a linear increase in slip with time during the landward motion and an exponential decrease in slip magnitude during the trenchward motion.
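The cycle can be caricatured with a forward-Euler integration of a spring loading a Bingham element; every value below is assumed for illustration, and the paper instead derives analytical solutions.

```python
import numpy as np

k, eta = 1.0, 0.05          # hanging-wall spring stiffness, shear-zone viscosity
tau_s, tau_d = 1.0, 0.6     # static and dynamic plastic yield strengths
harden = 0.02               # strengthening of tau_s per cycle (dilatancy/healing)
V, dt = 1.0, 1e-3           # ambient loading velocity, time step

stress, t, locked, events = tau_d, 0.0, True, []
for _ in range(300_000):
    if locked:
        stress += k * V * dt                 # landward motion: elastic loading
        if stress >= tau_s:
            locked = False                   # static strength exceeded: unlock
    else:
        v_slip = (stress - tau_d) / eta      # trenchward slip damped by viscosity
        stress -= k * v_slip * dt
        if v_slip < 1e-3:                    # slip arrested near dynamic strength
            events.append(t)
            tau_s += harden                  # progressive strengthening: zig-zag
            locked = True
    t += dt

print("recurrence intervals (lengthening):", np.round(np.diff(events)[:5], 2))
```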
Magneto-hydrodynamic modeling of gas discharge switches
NASA Astrophysics Data System (ADS)
Doiphode, P.; Sakthivel, N.; Sarkar, P.; Chaturvedi, S.
2002-12-01
We have performed one-dimensional, time-dependent magneto-hydrodynamic modeling of fast gas-discharge switches. The model has been applied to both high- and low-pressure switches, involving a cylindrical argon-filled cavity. It is assumed that the discharge is initiated in a small channel near the axis of the cylinder. Joule heating in this channel rapidly raises its temperature and pressure. This drives a radial shock wave that heats and ionizes the surrounding low-temperature region, resulting in progressive expansion of the current channel. Our model is able to reproduce this expansion. However, significant difference of detail is observed, as compared with a simple model reported in the literature. In this paper, we present details of our simulations, a comparison with results from the simple model, and a physical interpretation for these differences. This is a first step towards development of a detailed 2-D model for such switches.
Time-dependent inhomogeneous jet models for BL Lac objects
NASA Technical Reports Server (NTRS)
Marlowe, A. T.; Urry, C. M.; George, I. M.
1992-01-01
Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.
Analytical performance evaluation of SAR ATR with inaccurate or estimated models
NASA Astrophysics Data System (ADS)
DeVore, Michael D.
2004-09-01
Hypothesis testing algorithms for automatic target recognition (ATR) are often formulated in terms of some assumed distribution family. The parameter values corresponding to a particular target class together with the distribution family constitute a model for the target's signature. In practice such models exhibit inaccuracy because of incorrect assumptions about the distribution family and/or because of errors in the assumed parameter values, which are often determined experimentally. Model inaccuracy can have a significant impact on performance predictions for target recognition systems. Such inaccuracy often causes model-based predictions that ignore the difference between assumed and actual distributions to be overly optimistic. This paper reports on research to quantify the effect of inaccurate models on performance prediction and to estimate the effect using only trained parameters. We demonstrate that for large observation vectors the class-conditional probabilities of error can be expressed as a simple function of the difference between two relative entropies. These relative entropies quantify the discrepancies between the actual and assumed distributions and can be used to express the difference between actual and predicted error rates. Focusing on the problem of ATR from synthetic aperture radar (SAR) imagery, we present estimators of the probabilities of error in both ideal and plug-in tests expressed in terms of the trained model parameters. These estimators are defined in terms of unbiased estimates for the first two moments of the sample statistic. We present an analytical treatment of these results and include demonstrations from simulated radar data.
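The ingredients of such an estimator are simple to compute; the sketch below uses univariate Gaussians with assumed moments purely to show the two relative entropies whose difference drives the class-conditional error.

```python
import numpy as np

def kl_gauss(mu_p, var_p, mu_q, var_q):
    """Relative entropy KL(p || q) between two univariate Gaussians."""
    return 0.5 * (np.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

mu_act, var_act = 1.0, 1.0    # actual per-pixel target distribution (assumed)
mu_mod, var_mod = 1.2, 1.5    # inaccurate trained model of it (assumed)
mu_alt, var_alt = 0.0, 1.0    # assumed competing-class model

kl_to_model = kl_gauss(mu_act, var_act, mu_mod, var_mod)
kl_to_alt = kl_gauss(mu_act, var_act, mu_alt, var_alt)
print(f"KL(actual || assumed model)   = {kl_to_model:.4f}")
print(f"KL(actual || competing class) = {kl_to_alt:.4f}")
print(f"difference driving the error  = {kl_to_alt - kl_to_model:.4f}")
```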
Vibrational analysis of vertical axis wind turbine blades
NASA Astrophysics Data System (ADS)
Kapucu, Onur
The goal of this research is to derive a vibration model for a vertical axis wind turbine blade. This model accommodates the effects of varying relative flow angle caused by rotating the blade in the flow field, uses a simple aerodynamic model that assumes constant wind speed and constant rotation rate, and neglects the disturbance of wind due to an upstream blade or post. The blade is modeled as an elastic Euler-Bernoulli beam under transverse bending and twist deflections. Kinetic and potential energy equations for a rotating blade under deflections are obtained, expressed in terms of assumed modal coordinates, and then plugged into Lagrangian equations, where the non-conservative forces are the lift and drag forces and moments. An aeroelastic model for the lift and drag forces on the blade, approximated with third-degree polynomials, is obtained assuming an airfoil under variable angle of attack and airflow magnitudes. A simplified quasi-static airfoil theory is used, in which the lift and drag coefficients are not dependent on the history of the changing angle of attack. Linear terms in the resulting equations of motion are used to conduct a numerical analysis and simulation, where numeric specifications are modified from the Sandia-17m Darrieus wind turbine by Sandia Laboratories.
Interaction dynamics of multiple mobile robots with simple navigation strategies
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1989-01-01
The global dynamic behavior of multiple interacting autonomous mobile robots with simple navigation strategies is studied. Here, the effective spatial domain of each robot is taken to be a closed ball about its mass center. It is assumed that each robot has a specified cone of visibility such that interaction with other robots takes place only when they enter its visibility cone. Based on a particle model for the robots, various simple homing and collision-avoidance navigation strategies are derived. Then, an analysis of the dynamical behavior of the interacting robots in unbounded spatial domains is made. The article concludes with the results of computer simulations studies of two or more interacting robots.
Abreu, P C; Greenberg, D A; Hodge, S E
1999-09-01
Several methods have been proposed for linkage analysis of complex traits with unknown mode of inheritance. These methods include the LOD score maximized over disease models (MMLS) and the "nonparametric" linkage (NPL) statistic. In previous work, we evaluated the increase of type I error when maximizing over two or more genetic models, and we compared the power of MMLS to detect linkage, in a number of complex modes of inheritance, with analysis assuming the true model. In the present study, we compare MMLS and NPL directly. We simulated 100 data sets with 20 families each, using 26 generating models: (1) 4 intermediate models (penetrance of heterozygote between that of the two homozygotes); (2) 6 two-locus additive models; and (3) 16 two-locus heterogeneity models (admixture alpha = 1.0, .7, .5, and .3; alpha = 1.0 replicates simple Mendelian models). For LOD scores, we assumed dominant and recessive inheritance with 50% penetrance. We took the higher of the two maximum LOD scores and subtracted 0.3 to correct for multiple tests (MMLS-C). We compared expected maximum LOD scores and power, using MMLS-C and NPL as well as the true model. Since NPL uses only the affected family members, we also performed an affecteds-only analysis using MMLS-C. The MMLS-C was uniformly more powerful than NPL in most cases we examined (except when linkage information was low) and was close to the results for the true model under locus heterogeneity. We still found better power for the MMLS-C compared with NPL in affecteds-only analysis. The results show that use of two simple modes of inheritance at a fixed penetrance can have more power than NPL when the trait mode of inheritance is complex and when there is heterogeneity in the data set.
Optimal ordering and production policy for a recoverable item inventory system with learning effect
NASA Astrophysics Data System (ADS)
Tsai, Deng-Maw
2012-02-01
This article presents two models for determining an optimal integrated economic order quantity and economic production quantity policy in a recoverable manufacturing environment. The models assume that the unit production time of the recovery process decreases with the increase in total units produced as a result of learning. A fixed proportion of used products are collected from customers and then recovered for reuse. The recovered products are assumed to be in good condition and acceptable to customers. Constant demand can be satisfied by utilising both newly purchased products and recovered products. The aim of this article is to show how to minimise total inventory-related cost. The total cost functions of the two models are derived and two simple search procedures are proposed to determine optimal policy parameters. Numerical examples are provided to illustrate the proposed models. In addition, sensitivity analyses have also been performed and are discussed.
Multibody model reduction by component mode synthesis and component cost analysis
NASA Technical Reports Server (NTRS)
Spanos, J. T.; Mingori, D. L.
1990-01-01
The classical assumed-modes method is widely used in modeling the dynamics of flexible multibody systems. According to the method, the elastic deformation of each component in the system is expanded in a series of spatial and temporal functions known as modes and modal coordinates, respectively. This paper focuses on the selection of component modes used in the assumed-modes expansion. A two-stage component modal reduction method is proposed combining Component Mode Synthesis (CMS) with Component Cost Analysis (CCA). First, each component model is truncated such that the contribution of the high frequency subsystem to the static response is preserved. Second, a new CMS procedure is employed to assemble the system model and CCA is used to further truncate component modes in accordance with their contribution to a quadratic cost function of the system output. The proposed method is demonstrated with a simple example of a flexible two-body system.
Analysis of the free-fall behavior of liquid-metal drops in a gaseous atmosphere
NASA Technical Reports Server (NTRS)
Mccoy, J. Kevin; Markworth, Alan J.; Collings, E. W.; Brodkey, Robert S.
1987-01-01
The free-fall of a liquid-metal drop and heat transfer from the drop to its environment are described for both a gaseous atmosphere and vacuum. A simple model, in which the drop is assumed to fall rectilinearly with behavior like that of a rigid particle, is developed first, then possible causes of deviation from this behavior are discussed. The model is applied to describe solidification of drops in a drop tube. Possible future developments of the model are suggested.
The Location of Sales Offices and the Attraction of Cities.
ERIC Educational Resources Information Center
Holmes, Thomas J.
2005-01-01
This paper examines how manufacturers locate sales offices across cities. Sales office costs are assumed to have four components: a fixed cost, a frictional cost for out-of-town sales, a cost-reducing knowledge spillover related to city size, and an idiosyncratic match quality for each firm-city pair. A simple theoretical model is developed and is…
ERIC Educational Resources Information Center
Kane, Michael
2011-01-01
Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…
SUSTAIN: a network model of category learning.
Love, Bradley C; Medin, Douglas L; Gureckis, Todd M
2004-04-01
SUSTAIN (Supervised and Unsupervised STratified Adaptive Incremental Network) is a model of how humans learn categories from examples. SUSTAIN initially assumes a simple category structure. If simple solutions prove inadequate and SUSTAIN is confronted with a surprising event (e.g., it is told that a bat is a mammal instead of a bird), SUSTAIN recruits an additional cluster to represent the surprising event. Newly recruited clusters are available to explain future events and can themselves evolve into prototypes-attractors-rules. SUSTAIN's discovery of category substructure is affected not only by the structure of the world but by the nature of the learning task and the learner's goals. SUSTAIN successfully extends category learning models to studies of inference learning, unsupervised learning, category construction, and contexts in which identification learning is faster than classification learning.
Experimentally validated modification to Cook-Torrance BRDF model for improved accuracy
NASA Astrophysics Data System (ADS)
Butler, Samuel D.; Ethridge, James A.; Nauyoks, Stephen E.; Marciniak, Michael A.
2017-09-01
The BRDF describes optical scatter off realistic surfaces. The microfacet BRDF model assumes geometric optics but is computationally simple compared to wave optics models. In this work, MERL BRDF data are fitted to the original Cook-Torrance microfacet model and to a modified Cook-Torrance model that uses the polarization factor in place of the mathematically problematic cross-section conversion and geometric attenuation terms. The results provide experimental evidence that this modified Cook-Torrance model leads to improved fits, particularly at large incident and scattered angles. These results are expected to lead to more accurate BRDF modeling for remote sensing.
A Simple, Analytical Model of Collisionless Magnetic Reconnection in a Pair Plasma
NASA Technical Reports Server (NTRS)
Hesse, Michael; Zenitani, Seiji; Kuznetsova, Masha; Klimas, Alex
2011-01-01
A set of conservation equations is utilized to derive balance equations in the reconnection diffusion region of a symmetric pair plasma. The reconnection electric field is assumed to maintain the current density in the diffusion region and to impart thermal energy to the plasma by means of quasi-viscous dissipation. Using these assumptions it is possible to derive a simple set of equations for diffusion region parameters in dependence on inflow conditions and on plasma compressibility. These equations are solved by means of a simple, iterative procedure. The solutions show expected features such as dominance of enthalpy flux in the reconnection outflow, as well as a combination of adiabatic and quasi-viscous heating. Furthermore, the model predicts a maximum reconnection electric field of E* = 0.4, normalized to the parameters at the inflow edge of the diffusion region.
Wake Vortex Prediction Models for Decay and Transport Within Stratified Environments
NASA Astrophysics Data System (ADS)
Switzer, George F.; Proctor, Fred H.
2002-01-01
This paper proposes two simple models to predict vortex transport and decay. The models are determined empirically from results of three-dimensional large eddy simulations and are applicable to wake vortices out of ground effect and not subjected to environmental winds. The large eddy simulations assume a range of ambient turbulence and stratification levels. The models and the results from the large eddy simulations support the hypothesis that the decay of the vortex hazard is decoupled from its change in descent rate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dizier, M.H.; Eliaou, J.F.; Babron, M.C.
In order to investigate the HLA component involved in rheumatoid arthritis (RA), the authors tested genetic models by the marker association-segregation chi-square (MASC) method, using the HLA genotypic distribution observed in a sample of 97 RA patients. First they tested models assuming the involvement of a susceptibility gene linked to the DR locus. They showed that the present data are compatible with a simple model assuming the effect of a recessive allele of a biallelic locus linked to the DR locus and without any assumption of synergistic effect. Then they considered models assuming the direct involvement of the DR allele products, and tested the unifying-shared-epitope hypothesis, which has been proposed. Under this hypothesis the DR alleles are assumed to be directly involved in the susceptibility to the disease because of the presence of similar or identical amino acid sequences in position 70-74 of the third hypervariable region of the DRB1 molecules, shared by the RA-associated DR alleles DR4Dw4, DR4Dw14, and DR1. This hypothesis was strongly rejected with the present data. In the case of the direct involvement of the DR alleles, hypotheses more complex than the unifying-shared-epitope hypothesis would have to be considered. 28 refs., 2 tabs.
Viral kinetic modeling: state of the art
Canini, Laetitia; Perelson, Alan S.
2014-06-25
Viral kinetic modeling has led to increased understanding of the within-host dynamics of viral infections and the effects of therapy. Here we review recent developments in the modeling of viral infection kinetics with emphasis on two infectious diseases: hepatitis C and influenza. We review how viral kinetic modeling has evolved from simple models of viral infections treated with a drug or drug cocktail with an assumed constant effectiveness to models that incorporate drug pharmacokinetics and pharmacodynamics, as well as phenomenological models that simply assume drugs have time-varying effectiveness. We also discuss multiscale models that include intracellular events in viral replication, models of drug resistance, models that include innate and adaptive immune responses, and models that incorporate cell-to-cell spread of infection. Overall, viral kinetic modeling has provided new insights into the understanding of disease progression and the modes of action of several drugs. In conclusion, we expect that viral kinetic modeling will be increasingly used in the coming years to optimize drug regimens in order to improve therapeutic outcomes and treatment tolerability for infectious diseases.
Ferromagnetism in the Hubbard Model with a Gapless Nearly-Flat Band
NASA Astrophysics Data System (ADS)
Tanaka, Akinori
2018-01-01
We present a version of the Hubbard model with a gapless nearly-flat lowest band which exhibits ferromagnetism in two or more dimensions. The model is defined on a lattice obtained by placing a site on each edge of the hypercubic lattice, and electron hopping is assumed to be only between nearest and next-nearest neighbor sites. The lattice, where all the sites are identical, is simple, and the corresponding single-electron band structure, where two cosine-type bands touch without an energy gap, is also simple. We prove that the ground state of the model is unique and ferromagnetic at half-filling of the lower band, if the lower band is nearly flat and the strength of the on-site repulsion is larger than a certain value which is independent of the lattice size. This is the first example of ferromagnetism in three-dimensional non-singular models with a gapless band structure.
NASA Technical Reports Server (NTRS)
Holmes, Thomas; Owe, Manfred; deJeu, Richard
2007-01-01
Two data sets of experimental field observations covering a range of meteorological conditions are used to investigate the possibility of modeling near-surface soil temperature profiles in a bare soil. It is shown that commonly used heat flow methods that assume a constant ground heat flux cannot be used to model the extreme variations in temperature that occur near the surface. This paper proposes a simple approach for modeling the surface soil temperature profile from a single-depth observation. This approach consists of two parts: 1) modeling an instantaneous ground heat flux profile based on net radiation and the ground heat flux at 5 cm depth; 2) using this ground heat flux profile to extrapolate a single temperature observation to a continuous near-surface temperature profile. The new model is validated with an independent data set from a different soil and under a range of meteorological conditions.
Performance Improvement Assuming Complexity
ERIC Educational Resources Information Center
Rowland, Gordon
2007-01-01
Individual performers, work teams, and organizations may be considered complex adaptive systems, while most current human performance technologies appear to assume simple determinism. This article explores the apparent mismatch and speculates on future efforts to enhance performance if complexity rather than simplicity is assumed. Included are…
What is Neptune's D/H ratio really telling us about its water abundance?
NASA Astrophysics Data System (ADS)
Ali-Dib, Mohamad; Lakhlani, Gunjan
2018-05-01
We investigate the deep-water abundance of Neptune using a simple two-component (core + envelope) toy model. The free parameters of the model are the total mass of heavy elements in the planet (Z), the mass fraction of Z in the envelope (fenv), and the D/H ratio of the accreted building blocks (D/Hbuild). We systematically search the allowed parameter space on a grid and constrain it using Neptune's bulk carbon abundance, D/H ratio, and interior structure models. Assuming a solar C/O ratio and cometary D/H for the building blocks forming the planet, we can fit all of the constraints if less than ~15 per cent of Z is in the envelope (f_env^median ~ 7 per cent), and the rest is locked in a solid core. This model predicts a maximum bulk oxygen abundance in Neptune of 65× the solar value. If we assume a C/O of 0.17, corresponding to clathrate-hydrate building blocks, we predict a maximum oxygen abundance of 200× the solar value with a median value of ~140. Thus, both cases lead to an oxygen abundance significantly lower than the preferred value of Cavalié et al. (~540× solar), inferred from model-dependent deep CO observations. Such high water abundances are excluded by our simple but robust model. We attribute this discrepancy to our imperfect understanding of either the interior structure of Neptune or the chemistry of the primordial protosolar nebula.
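The grid-search logic can be sketched as below; every number (reservoir masses, D/H endmembers, the accepted range) is an assumed placeholder rather than a value or constraint from the paper.

```python
import numpy as np

DH_PROTO = 2.0e-5    # protosolar D/H in H2 (approximate literature value)
DH_COMET = 3.0e-4    # cometary water D/H (approximate literature value)
M_ENV_H = 2.0        # Earth masses of H-rich envelope gas (assumed)

fits = []
for Z in np.linspace(10.0, 16.0, 25):           # heavy-element mass, Earth masses
    for f_env in np.linspace(0.0, 0.3, 61):     # fraction of Z mixed into envelope
        z_env = Z * f_env
        # mass-weighted D/H of envelope hydrogen plus icy building blocks:
        dh = (M_ENV_H * DH_PROTO + z_env * DH_COMET) / (M_ENV_H + z_env)
        if 4.0e-5 < dh < 4.5e-5:                # assumed "observed" D/H window
            fits.append((Z, f_env))

if fits:
    f_vals = [f for _, f in fits]
    print(f"{len(fits)} grid points fit; f_env spans "
          f"{min(f_vals):.3f}-{max(f_vals):.3f}")
else:
    print("no grid points fit the assumed window")
```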
Fiber Composite Sandwich Thermostructural Behavior: Computational Simulation
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Aiello, R. A.; Murthy, P. L. N.
1986-01-01
Several computational levels of progressive sophistication/simplification are described to computationally simulate composite sandwich hygral, thermal, and structural behavior. The computational levels of sophistication include: (1) three-dimensional detailed finite element modeling of the honeycomb, the adhesive and the composite faces; (2) three-dimensional finite element modeling of the honeycomb assumed to be an equivalent continuous, homogeneous medium, the adhesive and the composite faces; (3) laminate theory simulation where the honeycomb (metal or composite) is assumed to consist of plies with equivalent properties; and (4) derivations of approximate, simplified equations for thermal and mechanical properties by simulating the honeycomb as an equivalent homogeneous medium. The approximate equations are combined with composite hygrothermomechanical and laminate theories to provide a simple and effective computational procedure for simulating the thermomechanical/thermostructural behavior of fiber composite sandwich structures.
Human sleep and circadian rhythms: a simple model based on two coupled oscillators.
Strogatz, S H
1987-01-01
We propose a model of the human circadian system. The sleep-wake and body temperature rhythms are assumed to be driven by a pair of coupled nonlinear oscillators described by phase variables alone. The novel aspect of the model is that its equations may be solved analytically. Computer simulations are used to test the model against sleep-wake data pooled from 15 studies of subjects living for weeks in unscheduled, time-free environments. On these tests the model performs about as well as the existing models, although its mathematical structure is far simpler.
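A minimal sketch of two coupled phase oscillators of this kind follows, with assumed intrinsic periods and coupling constants: when the frequency mismatch is smaller than the total coupling, the two rhythms phase-lock (entrainment); otherwise the sleep-wake cycle drifts relative to temperature, as in internal desynchrony.

```python
import numpy as np

w_temp = 2 * np.pi / 24.6    # intrinsic temperature-rhythm frequency, rad/h (assumed)
w_sleep = 2 * np.pi / 24.0   # intrinsic sleep-wake frequency, rad/h (assumed)
K = 0.05                     # coupling strength (assumed)
dt, hours = 0.01, 24.0 * 60  # integrate 60 subjective days

theta = np.array([0.0, 1.0])         # phases: [temperature, sleep-wake]
for _ in range(int(hours / dt)):
    dth = np.array([
        w_temp + K * np.sin(theta[1] - theta[0]),
        w_sleep + 2 * K * np.sin(theta[0] - theta[1]),   # asymmetric coupling
    ])
    theta += dt * dth

diff = theta[1] - theta[0]
print(f"sin(locked phase difference) = {np.sin(diff):.3f}, "
      f"theory = {(w_sleep - w_temp) / (3 * K):.3f}")
```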
Kobayashi, Seiji
2002-05-10
A point-spread function (PSF) is commonly used as a model of an optical disk readout channel. However, the model given by the PSF does not contain the quadratic distortion generated by the photo-detection process. We introduce a model for calculating an approximation of the quadratic component of a signal. We show that this model can be further simplified when a read-only-memory (ROM) disk is assumed. We introduce an edge-spread function by which a simple nonlinear model of an optical ROM disk readout channel is created.
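A toy channel sketch contrasting the purely linear PSF model with one carrying an added quadratic photodetection term; the Gaussian PSF shape, the random pit pattern, and the distortion weight are all assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(5)
pits = rng.choice([0.0, 1.0], size=256)       # random ROM pit pattern (assumed)

x = np.arange(-8, 9)
psf = np.exp(-(x / 3.0) ** 2)                 # assumed Gaussian point-spread function
psf /= psf.sum()

linear = np.convolve(pits, psf, mode="same")  # the usual linear PSF channel model
eps = 0.3                                     # assumed quadratic-distortion weight
nonlinear = linear + eps * linear ** 2        # photodetection adds a squared term

print("peak-to-peak, linear model   :", round(np.ptp(linear), 3))
print("peak-to-peak, with distortion:", round(np.ptp(nonlinear), 3))
```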
Modification of the Simons model for calculation of nonradial expansion plumes
NASA Technical Reports Server (NTRS)
Boyd, I. D.; Stark, J. P. W.
1989-01-01
The Simons model is a simple model for calculating the expansion plumes of rockets and thrusters and is a widely used engineering tool for the determination of spacecraft impingement effects. The model assumes that the density of the plume decreases radially from the nozzle exit. Although a high degree of success has been achieved in modeling plumes with moderate Mach numbers, the accuracy obtained under certain conditions is unsatisfactory. A modification made to the model that allows effective description of nonradial behavior in plumes is presented, and the conditions under which its use is preferred are prescribed.
Statistical validity of using ratio variables in human kinetics research.
Liu, Yuanlong; Schutz, Robert W
2003-09-01
The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and to examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and the within- and between-trial correlations between the numerator and denominator variables. It was recommended that researchers compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.
Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas
2004-01-01
Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...
Extinction risk in successional landscapes subject to catastrophic disturbances.
David Boughton; Urmila Malvadkar
2002-01-01
We explore the thesis that stochasticity in successional-disturbance systems can be an agent of species extinction. The analysis uses a simple model of patch dynamics for seral stages in an idealized landscape; each seral stage is assumed to support a specialist biota. The landscape as a whole is characterized by a mean patch birth rate, mean patch size, and mean...
Score tests for independence in semiparametric competing risks models.
Saïd, Mériem; Ghazzali, Nadia; Rivest, Louis-Paul
2009-12-01
A popular model for competing risks postulates the existence of a latent unobserved failure time for each risk. Assuming that these underlying failure times are independent is attractive since it allows standard statistical tools for right-censored lifetime data to be used in the analysis. This paper proposes simple independence score tests for the validity of this assumption when the individual risks are modeled using semiparametric proportional hazards regressions. It assumes that covariates are available, making the model identifiable. The score tests are derived for alternatives that specify that copulas are responsible for a possible dependency between the competing risks. The test statistics are constructed by adding to the partial likelihoods for the individual risks an explanatory variable for the dependency between the risks. A variance estimator is derived by writing the score function and the Fisher information matrix for the marginal models as stochastic integrals. Pitman efficiencies are used to compare test statistics. A simulation study and a numerical example illustrate the methodology proposed in this paper.
NASA Astrophysics Data System (ADS)
Rubinstein, Justin L.; Ellsworth, William L.; Beeler, Nicholas M.; Kilgore, Brian D.; Lockner, David A.; Savage, Heather M.
2012-02-01
The behavior of individual stick-slip events observed in three different laboratory experimental configurations is better explained by a "memoryless" earthquake model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. We make similar findings in the companion manuscript for the behavior of natural repeating earthquakes. Taken together, these results allow us to conclude that the predictions of a characteristic earthquake model that assumes either fixed slip or fixed recurrence interval should be preferred to the predictions of the time- and slip-predictable models for all earthquakes. Given that the fixed slip and recurrence models are the preferred models for all of the experiments we examine, we infer that in an event-to-event sense the elastic rebound model underlying the time- and slip-predictable models does not explain earthquake behavior. This does not indicate that the elastic rebound model should be rejected in a long-term sense, but it should be rejected for short-term predictions. The time- and slip-predictable models likely offer worse predictions of earthquake behavior because they rely on assumptions that are too simple to explain the behavior of earthquakes. Specifically, the time-predictable model assumes a constant failure threshold and the slip-predictable model assumes that there is a constant minimum stress. There is experimental and field evidence that these assumptions are not valid for all earthquakes.
A Simplified Model for Detonation Based Pressure-Gain Combustors
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.
2010-01-01
A time-dependent model is presented which simulates the essential physics of a detonative or otherwise constant volume, pressure-gain combustor for gas turbine applications. The model utilizes simple, global thermodynamic relations to determine an assumed instantaneous and uniform post-combustion state in one of many envisioned tubes comprising the device. A simple, second order, non-upwinding computational fluid dynamic algorithm is then used to compute the (continuous) flowfield properties during the blowdown and refill stages of the periodic cycle which each tube undergoes. The exhausted flow is averaged to provide mixed total pressure and enthalpy which may be used as a cycle performance metric for benefits analysis. The simplicity of the model allows for nearly instantaneous results when implemented on a personal computer. The results compare favorably with higher resolution numerical codes which are more difficult to configure, and more time consuming to operate.
A simple model of space radiation damage in GaAs solar cells
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Stith, J. J.; Stock, L. V.
1983-01-01
A simple model is derived for the radiation damage of shallow junction gallium arsenide (GaAs) solar cells. Reasonable agreement is found between the model and specific experimental studies of radiation effects with electron and proton beams. In particular, the extreme sensitivity of the cell to protons stopping near the cell junction is predicted by the model. The equivalent fluence concept is of questionable validity for monoenergetic proton beams. Angular factors are quite important in establishing the cell sensitivity to incident particle types and energies. A fluence of isotropic incidence 1 MeV electrons (assuming infinite backing) is equivalent to four times the fluence of normal incidence 1 MeV electrons. Spectral factors common to the space radiations are considered, and cover glass thickness required to minimize the initial damage for a typical cell configuration is calculated. Rough equivalence between the geosynchronous environment and an equivalent 1 MeV electron fluence (normal incidence) is established.
A discrete Markov metapopulation model for persistence and extinction of species.
Thompson, Colin J; Shtilerman, Elad; Stone, Lewi
2016-09-07
A simple discrete-generation Markov metapopulation model is formulated for studying the persistence and extinction dynamics of a species in a given region which is divided into a large number of sites or patches. Assuming a linear site occupancy probability from one generation to the next, we obtain exact expressions for the time evolution of the expected number of occupied sites and the mean time to extinction (MTE). Under quite general conditions we show that the MTE is, to leading order, proportional to the logarithm of the initial number of occupied sites, in precise agreement with similar expressions for continuous time-dependent stochastic models. Our key contribution is a novel application of generating function techniques and simple asymptotic methods to obtain a second order asymptotic expression for the MTE which is extremely accurate over the entire range of model parameter values. Copyright © 2016 Elsevier Ltd. All rights reserved.
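The logarithmic MTE scaling is easy to probe numerically. Below is a minimal sketch, assuming a subcritical linear occupancy rule (each of N sites is occupied in the next generation with probability a·n/N); the rule and all parameter values are illustrative stand-ins, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_time_to_extinction(n0, N=1000, a=0.8, reps=2000):
    """Generations until extinction, averaged over `reps` runs, when
    each of the N sites is occupied in the next generation with the
    linear probability a*n/N (subcritical for a < 1)."""
    total = 0
    for _ in range(reps):
        n, t = n0, 0
        while n > 0:
            n = rng.binomial(N, a * n / N)
            t += 1
        total += t
    return total / reps

for n0 in (10, 100, 1000):
    print(n0, mean_time_to_extinction(n0))
# The MTE grows roughly linearly in log(n0), the leading-order
# behaviour quoted in the abstract.
```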
Roll plane analysis of on-aircraft antennas
NASA Technical Reports Server (NTRS)
Burnside, W. D.; Marhefka, R. J.; Byu, C. L.
1974-01-01
Roll plane radiation patterns of on-aircraft antennas are analyzed using high frequency solutions. Aircraft-antenna pattern performance in which the aircraft is modelled in its most basic form is presented. The fuselage is assumed to be a perfectly conducting elliptic cylinder with the antennas mounted near the top or bottom. The wings are simulated by flat plates with arbitrarily many sides, and the engines by circular cylinders. The patterns in each case are verified by measured results taken on simple models as well as scale models of actual aircraft.
Market dynamics and stock price volatility
NASA Astrophysics Data System (ADS)
Li, H.; Rosser, J. B., Jr.
2004-06-01
This paper presents a possible explanation for some of the empirical properties of asset returns within a heterogeneous-agents framework. It turns out that, even if the input fundamental value is assumed to follow a simple Gaussian distribution lacking both fat tails and volatility dependence, these features can show up in the time series of asset returns generated by the model. In this model, profit comparison and switching between heterogeneous agents play the key roles, building a connection between endogenous market dynamics and the emergence of stylized facts.
Wealth Condensation and ``Corruption'' in a Toy Model
NASA Astrophysics Data System (ADS)
Johnston, D.; Burda, Z.; Jurkiewicz, J.; Kaminski, M.; Nowak, M. A.; Papp, G.; Zahed, I.
2005-09-01
We discuss the wealth condensation mechanism in a simple toy economy in which individual agents' wealth is distributed according to a Pareto power law and the overall wealth is fixed. The observed behaviour is the manifestation of a transition which occurs in Zero Range Processes (ZRPs) or ``balls in boxes'' models. An amusing feature of the transition in this context is that the condensation can be induced by increasing the exponent in the power law, which one might naively have assumed would penalise greater wealth more.
Robust stability of second-order systems
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1993-01-01
A feedback linearization technique is used in conjunction with passivity concepts to design robust controllers for space robots. It is assumed that bounded modeling uncertainties exist in the inertia matrix and the vector representing the Coriolis, centripetal, and friction forces. Under these assumptions, the controller guarantees asymptotic tracking of the joint variables. A Lagrangian approach is used to develop a dynamic model for space robots. Closed-loop simulation results are illustrated for a simple case of a single-link planar manipulator with a freely floating base.
A simple model for calculating tsunami flow speed from tsunami deposits
Jaffe, B.E.; Gelfenbuam, G.
2007-01-01
This paper presents a simple model for tsunami sedimentation that can be applied to calculate tsunami flow speed from the thickness and grain size of a tsunami deposit (the inverse problem). For sandy tsunami deposits where grain size and thickness vary gradually in the direction of transport, tsunami sediment transport is modeled as a steady, spatially uniform process. The amount of sediment in suspension is assumed to be in equilibrium with the steady portion of the long-period, slowly varying uprush portion of the tsunami. Spatial flow deceleration is assumed to be small and not to contribute significantly to the tsunami deposit. Tsunami deposits are formed from sediment settling from the water column when flow speeds on land go to zero everywhere at the time of maximum tsunami inundation. There is little erosion of the deposit by the return flow because it is slow and is concentrated in topographic lows. Variations in grain size of the deposit are found to have more effect on calculated tsunami flow speed than deposit thickness. The model is tested using field data collected at Arop, Papua New Guinea, soon after the 1998 tsunami. Speed estimates of 14 m/s at 200 m inland from the shoreline compare favorably with those from a 1-D inundation model and from application of Bernoulli's principle to water levels on buildings left standing after the tsunami. As evidence that the model is applicable to some sandy tsunami deposits, the model reproduces the observed normal grading and vertical variation in sorting and skewness of a deposit formed by the 1998 tsunami.
Uncertainty about fundamentals and herding behavior in the FOREX market
NASA Astrophysics Data System (ADS)
Kaltwasser, Pablo Rovira
2010-03-01
It is traditionally assumed in finance models that the fundamental value of assets is known with certainty. Although this is an appealing simplifying assumption, it is by no means based on empirical evidence. A simple heterogeneous agent model of the exchange rate is presented. In the model, traders do not observe the true underlying fundamental exchange rate and as a consequence they base their trades on beliefs about this variable. Despite the fact that only fundamentalist traders operate in the market, the model belongs to the heterogeneous agent literature, as traders have different beliefs about the fundamental rate.
Complex discrete dynamics from simple continuous population models.
Gamarra, Javier G P; Solé, Ricard V
2002-05-01
Nonoverlapping generations have been classically modelled as difference equations in order to account for the discrete nature of reproductive events. However, other events, such as resource consumption or mortality, are continuous and take place in the within-generation time. Here we consider a more realistic hybrid model: a two-dimensional ODE system for resources and consumers, with discrete events for reproduction. Numerical and analytical approaches showed that the resulting dynamics resemble a Ricker map, including the period-doubling route to chaos. Stochastic simulations with a handling-time parameter for indirect competition of juveniles may affect the qualitative behaviour of the model.
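A minimal semi-discrete sketch of the idea: continuous within-generation mortality followed by a discrete reproduction event yields a Ricker-type return map. The within-generation equation used here (crowding mortality proportional to the initial cohort density) is an illustrative stand-in for the authors' resource-consumer system, not their model.

```python
import numpy as np

def one_generation(C0, b=20.0, a=1.0, dt=1e-3, T=1.0):
    """Within a generation, consumers die continuously at a rate
    proportional to the initial cohort density (dC/dt = -a*C0*C, a
    crowding/indirect-competition term); reproduction then occurs as a
    single discrete event with b offspring per survivor."""
    C = C0
    for _ in range(int(T / dt)):
        C -= a * C0 * C * dt          # simple Euler step
    return b * C                      # -> b*C0*exp(-a*C0): a Ricker map

C, path = 0.1, []
for _ in range(60):
    C = one_generation(C)
    path.append(round(C, 3))
print(path[-10:])   # for b = 20 (growth rate ln 20 ~ 3) the map is chaotic
```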
An interfacial mechanism for cloud droplet formation on organic aerosols
Ruehl, C. R.; Davies, J. F.; Wilson, K. R.
2016-03-25
Accurate predictions of aerosol/cloud interactions require simple, physically accurate parameterizations of the cloud condensation nuclei (CCN) activity of aerosols. Current models assume that organic aerosol species contribute to CCN activity by lowering water activity. We measured droplet diameters at the point of CCN activation for particles composed of dicarboxylic acids or secondary organic aerosol and ammonium sulfate. Droplet activation diameters were 40 to 60% larger than predicted if the organic was assumed to be dissolved within the bulk droplet, suggesting that a new mechanism is needed to explain cloud droplet formation. A compressed film model explains how surface tension depression by interfacial organic molecules can alter the relationship between water vapor supersaturation and droplet size (i.e., the Köhler curve), leading to the larger diameters observed at activation.
Contact problem for an elastic reinforcement bonded to an elastic plate
NASA Technical Reports Server (NTRS)
Erdogan, F.; Civelek, M. B.
1973-01-01
The stiffening layer is treated as an elastic membrane and the base plate is assumed to be an elastic continuum. The bonding between the two materials is assumed to be either one of direct adhesion or through a thin adhesive layer which is treated as a shear spring. The solution for the simple case in which both the stiffener and the base plate are treated as membranes is also given. The contact stress is obtained for a series of numerical examples. In the direct adhesion case the contact stress becomes infinite at the stiffener ends, with a typical square root singularity for the continuum model and behaving as a delta function for the membrane model. In the case of bonding through an adhesive layer the contact stress becomes finite and continuous along the entire contact area.
Thermoelastic damping in thin microrings with two-dimensional heat conduction
NASA Astrophysics Data System (ADS)
Fang, Yuming; Li, Pu
2015-05-01
Accurate determination of thermoelastic damping (TED) is very challenging in the design of micro-resonators. Microrings are widely used in many micro-resonators. In the past, analytical models have been developed to describe the TED effect in microrings. However, in these previous works, heat conduction within the microring is modeled using a one-dimensional approach: the governing equation is solved only for one-dimensional heat conduction along the radial thickness of the microring. This paper presents a simple analytical model for TED in microrings. Two-dimensional heat conduction, with thermoelastic temperature gradients along both the radial thickness and the circumferential direction, is considered in the present model. A two-dimensional heat conduction equation is developed. The solution of the equation is represented by the product of an assumed sine series along the radial thickness and an assumed trigonometric series along the circumferential direction. The analytical results obtained by the present 2-D model show good agreement with numerical (FEM) results. The limitations of the previous 1-D model are assessed.
Estimating Causal Effects with Ancestral Graph Markov Models
Malinsky, Daniel; Spirtes, Peter
2017-01-01
We present an algorithm for estimating bounds on causal effects from observational data which combines graphical model search with simple linear regression. We assume that the underlying system can be represented by a linear structural equation model with no feedback, and we allow for the possibility of latent variables. Under assumptions standard in the causal search literature, we use conditional independence constraints to search for an equivalence class of ancestral graphs. Then, for each model in the equivalence class, we perform the appropriate regression (using causal structure information to determine which covariates to include in the regression) to estimate a set of possible causal effects. Our approach is based on the “IDA” procedure of Maathuis et al. (2009), which assumes that all relevant variables have been measured (i.e., no unmeasured confounders). We generalize their work by relaxing this assumption, which is often violated in applied contexts. We validate the performance of our algorithm on simulated data and demonstrate improved precision over IDA when latent variables are present. PMID:28217244
Cooling and solidification of liquid-metal drops in a gaseous atmosphere
NASA Technical Reports Server (NTRS)
Mccoy, J. K.; Markworth, A. J.; Collings, E. W.; Brodkey, R. S.
1992-01-01
The free fall of a liquid-metal drop, heat transfer from the drop to its environment, and solidification of the drop are described for both gaseous and vacuum atmospheres. A simple model, in which the drop is assumed to fall rectilinearly, with behavior like that of a rigid particle, is developed to describe cooling behavior. Recalescence of supercooled drops is assumed to occur instantaneously when a specified temperature is passed. The effects of solidification and experimental parameters on drop cooling are calculated and discussed. Major results include temperature as a function of time and of drag, the time to complete solidification, and drag as a function of the fraction of the drop solidified.
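A minimal sketch of such a lumped (rigid-particle) cooling model, with convective and radiative losses and instantaneous recalescence at a prescribed nucleation temperature; all material and environmental parameters below are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative parameters for a small liquid-metal drop (not from the paper)
sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
eps   = 0.3       # emissivity
h     = 50.0      # convective coefficient in gas, W m^-2 K^-1 (0 in vacuum)
r     = 1e-3      # drop radius, m
rho, cp = 7000.0, 500.0                      # density, specific heat
A, V  = 4*np.pi*r**2, (4.0/3.0)*np.pi*r**3
T_env, T_nuc, T_melt = 300.0, 1500.0, 1700.0

T, t, dt = 1900.0, 0.0, 1e-3
while T > T_nuc:                              # cooling and supercooling phase
    q = h*(T - T_env) + eps*sigma*(T**4 - T_env**4)   # convection + radiation
    T -= q * A / (rho * cp * V) * dt          # lumped-capacitance Euler step
    t += dt
print(f"nucleation reached at t = {t:.2f} s")
T = T_melt   # instantaneous recalescence back toward the melting point,
             # as assumed in the abstract; solidification follows
```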
Influence of the Mesh Geometry Evolution on Gearbox Dynamics during Its Maintenance
NASA Astrophysics Data System (ADS)
Dąbrowski, Z.; Dziurdź, J.; Klekot, G.
2017-12-01
Toothed gears constitute necessary elements of power transmission systems. They are applied as stationary devices in drive systems of road vehicles, ships and crafts as well as airplanes and helicopters. One of the problems related to toothed gear usage is the determination of their technical state and its evolution. Assuming that the tooth-slip velocity accounts for the vibration and noise generated by cooperating toothed wheels, a simple model of the cooperation of rolling wheels with skew teeth is proposed for analysing the influence of mesh evolution on gear dynamics. In addition, an example is presented of utilising an ordinary coherence function for investigating evolutionary mesh changes related to effects that cannot be described by means of the simple kinematic model.
Searching for orbits around the triple system 45 Eugenia
NASA Astrophysics Data System (ADS)
Mescolotti, B. Y. P. M.; Prado, A. F. B. A.; Chiaradia, A. P. M.; Gomes, V. M.
2017-10-01
Asteroids are small bodies that raise great interest because many of their characteristics are unknown. The present research aims to study orbits for a spacecraft around the triple asteroid 45 Eugenia. The quality of the observations made by the spacecraft depends on the distance the spacecraft keeps from the bodies of the system. A semi-analytical model is used that is simple but able to represent the main characteristics of the system. This model is called the "Precessing Inclined Bi-Elliptical Problem" (PIBEP). A reference system is used that is centered on the main body (Eugenia), with the reference plane assumed to be in the orbital plane of the second most massive body, here called Petit-Prince. The secondary bodies are assumed to be in elliptical orbits. In addition, it is assumed that the orbits of the smaller bodies precess due to the flattening of the main body (J2). This work analyzes orbits for the spacecraft with passages near Petit-Prince and Princesses, the two smaller bodies of the triple system.
The dynamics and fueling of active nuclei
NASA Technical Reports Server (NTRS)
Norman, C.; Silk, J.
1983-01-01
It is generally believed that quasars and active galactic nuclei produce their prodigious luminosities in connection with the release of gravitational energy associated with accretion and infall of matter onto a compact central object. In the present analysis, it is assumed that the central object is a massive black hole. The fact that a black hole provides the deepest possible central potential well does imply that it is the most natural candidate for the central engine. It is also assumed that the quasar is associated with the nucleus of a conventional galaxy. A number of difficulties arise in connection with finding a suitable stellar fueling model. A simple scheme is discussed for resolving these difficulties. Attention is given to fueling in a nonaxisymmetric potential, the effects of a massive accretion disk, and the variability in the disk luminosity caused by star-disk collisions assuming that the energy deposited in the disk is radiated.
Meridional overturning circulations driven by surface wind and buoyancy forcing
NASA Astrophysics Data System (ADS)
Bell, M. J.
2016-02-01
A conceptual picture of the Meridional Overturning Circulation (MOC) is developed using 2- and 3-layer models governed by the planetary geostrophic equations and simple global geometries. The picture has four main elements. First, cold water driven to the surface in the South Atlantic north of Drake Passage by Ekman upwelling is transformed into warmer water by heat input at the surface from the atmosphere. Second, the model's boundary conditions constrain the depths of the isopycnal layers to be almost flat along the eastern boundaries of the ocean. This results in, third, warm water reaching high latitudes in the Northern Hemisphere, where it is transformed into cold water by surface heat loss. Finally, it is assumed that western boundary currents are able to close the circulations. The results from a set of numerical experiments for the upwelling limb in the Southern Hemisphere are summarised in a simple conceptual schematic. Analytical solutions have been found for the downwelling limb assuming the wind stress in the Northern Hemisphere is negligible. Expressions for the depth of the isopycnal interface on the eastern boundary and the strength of the MOC, obtained by combining these solutions in a 2-layer model, are generally consistent with and complementary to those obtained by Gnanadesikan (1999). The MOC in two basins, one of which has a strong halocline, is also discussed.
The problem with simple lumped parameter models: Evidence from tritium mean transit times
NASA Astrophysics Data System (ADS)
Stewart, Michael; Morgenstern, Uwe; Gusyev, Maksym; Maloszewski, Piotr
2017-04-01
Simple lumped parameter models (LPMs) based on assuming homogeneity and stationarity in catchments and groundwater bodies are widely used to model and predict hydrological system outputs. However, most systems are not homogeneous or stationary, and errors resulting from disregard of the real heterogeneity and non-stationarity of such systems are not well understood and rarely quantified. As an example, mean transit times (MTTs) of streamflow are usually estimated from tracer data using simple LPMs. The MTT or transit time distribution of water in a stream reveals basic catchment properties such as water flow paths, storage and mixing. Importantly however, Kirchner (2016a) has shown that there can be large (several hundred percent) aggregation errors in MTTs inferred from seasonal cycles in conservative tracers such as chloride or stable isotopes when they are interpreted using simple LPMs (i.e. a range of gamma models or GMs). Here we show that MTTs estimated using tritium concentrations are similarly affected by aggregation errors due to heterogeneity and non-stationarity when interpreted using simple LPMs (e.g. GMs). The tritium aggregation error arises from the strong nonlinearity between tritium concentrations and MTT, whereas for seasonal tracer cycles it is due to the nonlinearity between tracer cycle amplitudes and MTT. In effect, water from young subsystems in the catchment outweighs water from old subsystems. The main difference between the aggregation errors with the different tracers is that with tritium they apply at much greater ages than they do with seasonal tracer cycles. We stress that the aggregation errors arise when simple LPMs are applied (with simple LPMs the hydrological system is assumed to be a homogeneous whole with parameters representing averages for the system). With well-chosen compound LPMs (which are combinations of simple LPMs) on the other hand, aggregation errors are very much smaller because young and old water flows are treated separately. "Well-chosen" means that the compound LPM is based on hydrologically- and geologically-validated information, and the choice can be assisted by matching simulations to time series of tritium measurements. References: Kirchner, J.W. (2016a): Aggregation in environmental systems - Part 1: Seasonal tracer cycles quantify young water fractions, but not mean transit times, in spatially heterogeneous catchments. Hydrol. Earth Syst. Sci. 20, 279-297. Stewart, M.K., Morgenstern, U., Gusyev, M.A., Maloszewski, P. 2016: Aggregation effects on tritium-based mean transit times and young water fractions in spatially heterogeneous catchments and groundwater systems, and implications for past and future applications of tritium. Submitted to Hydrol. Earth Syst. Sci., 10 October 2016, doi:10.5194/hess-2016-532.
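The tritium aggregation bias described above can be reproduced in a few lines. Assuming a steady tritium input and exponential transit time distributions, a subsystem with mean transit time T outputs the fraction 1/(1 + λT) of the input concentration; the sketch below mixes a young and an old subsystem and then inverts the mixed signal with a single exponential LPM. All numbers are illustrative.

```python
import numpy as np

lam = np.log(2) / 12.32          # tritium decay constant, 1/yr

def ratio(T):
    """Steady-state output/input ratio for an exponential TTD with mean T."""
    return 1.0 / (1.0 + lam * T)

# Heterogeneous catchment: half young water, half old water
T_young, T_old = 2.0, 100.0
mixed = 0.5 * ratio(T_young) + 0.5 * ratio(T_old)

# Invert the mixed signal with a single (homogeneous) exponential LPM
T_apparent = (1.0 / mixed - 1.0) / lam
T_true_mean = 0.5 * (T_young + T_old)

print(f"true mean transit time  : {T_true_mean:.1f} yr")
print(f"apparent MTT (simple LPM): {T_apparent:.1f} yr")
# The apparent MTT (~16 yr) falls far below the true mean (~51 yr):
# young water dominates the mixed signal, the aggregation bias at issue.
```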
Modeling of two-phase porous flow with damage
NASA Astrophysics Data System (ADS)
Cai, Z.; Bercovici, D.
2009-12-01
Two-phase dynamics has been broadly studied in Earth science for convective systems. We investigate the basic physics of compaction with damage theory and present preliminary results for both steady-state and time-dependent transport when melt migrates through a porous medium. In our simple 1-D model, damage plays an important role when we consider the ascent of a melt-rich mixture at constant velocity. Melt segregation becomes more difficult, so that in the steady-state compaction profile the porosity is larger than in simple compaction. A scaling analysis of the compaction equation is performed to predict the behavior of melt segregation with damage. The time-dependent behavior of the compacting system is investigated by looking at solitary wave solutions of the two-phase model. We assume that additional melt is injected into the fractured material through a single pulse with prescribed shape and velocity. The presence of damage allows the pulse to propagate further than in simple compaction. Therefore more melt can be injected into the two-phase mixture, and future applications such as carbon dioxide injection are proposed.
Control-structure interaction study for the Space Station solar dynamic power module
NASA Technical Reports Server (NTRS)
Cheng, J.; Ianculescu, G.; Ly, J.; Kim, M.
1991-01-01
The authors investigate the feasibility of using a conventional PID (proportional plus integral plus derivative) controller design to perform the pointing and tracking functions for the Space Station Freedom solar dynamic power module. Using this simple controller design, the control/structure interaction effects were also studied without assuming frequency bandwidth separation. From the results, the feasibility of a simple solar dynamic control solution with a reduced-order model, which satisfies the basic system pointing and stability requirements, is suggested. However, the conventional control design approach is shown to be very much influenced by the order of reduction of the plant model, i.e., the number of the retained elastic modes from the full-order model. This suggests that, for complex large space structures, such as the Space Station Freedom solar dynamic, the conventional control system design methods may not be adequate.
Inhomogeneity and velocity fields effects on scattering polarization in solar prominences
NASA Astrophysics Data System (ADS)
Milić, I.; Faurobert, M.
2015-10-01
One of the methods for diagnosing vector magnetic fields in solar prominences is the so-called "inversion" of observed polarized spectral lines. This inversion usually assumes a fairly simple generative model, and in this contribution we aim to study the possible systematic errors that are introduced by this assumption. Using a two-dimensional toy model of a prominence, we first demonstrate the importance of multidimensional radiative transfer and horizontal inhomogeneities. These are able to induce a significant level of polarization in Stokes U without the need for a magnetic field. We then compute the emergent Stokes spectrum from a prominence which is pervaded by a vector magnetic field and use a simple one-dimensional model to interpret these synthetic observations. We find that the inferred values for the magnetic field vector generally differ from the original ones. Most importantly, the magnetic field might seem more inclined than it really is.
Quantitative Modeling of Earth Surface Processes
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.
This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
Simple model of surface roughness for binary collision sputtering simulations
NASA Astrophysics Data System (ADS)
Lindsey, Sloan J.; Hobler, Gerhard; Maciążek, Dawid; Postawa, Zbigniew
2017-02-01
It has been shown that surface roughness can strongly influence the sputtering yield - especially at glancing incidence angles, where the inclusion of surface roughness leads to an increase in sputtering yields. In this work, we propose a simple one-parameter model (the "density gradient model") which imitates surface roughness effects. In the model, the target's atomic density is assumed to vary linearly between the actual material density and zero. The layer width is the sole model parameter. The model has been implemented in the binary collision simulator IMSIL and has been evaluated against various geometric surface models for 5 keV Ga ions impinging on an amorphous Si target. To aid the construction of a realistic rough surface topography, we have performed MD simulations of sequential 5 keV Ga impacts on an initially crystalline Si target. We show that our new model effectively reproduces the sputtering yield, with only minor variations in the energy and angular distributions of sputtered particles. The success of the density gradient model is attributed to a reduction of the reflection coefficient, leading to increased sputtering yields, similar in effect to surface roughness.
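A minimal sketch of the one-parameter density ramp described above; the orientation convention and the bulk density value are assumptions for illustration only.

```python
import numpy as np

def density(z, w, n_bulk=5.0e22):
    """Density gradient model: the atomic density ramps linearly from
    zero at the nominal surface (z = 0) to the bulk value at depth w.
    n_bulk is an illustrative Si atomic density in cm^-3; the layer
    width w is the sole model parameter."""
    return n_bulk * np.clip(z / w, 0.0, 1.0)

z = np.linspace(-1.0, 3.0, 9)     # depth in nm; negative = above the surface
print(density(z, w=1.0))          # zero above, linear ramp, then bulk value
```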
Aggregate age-at-marriage patterns from individual mate-search heuristics.
Todd, Peter M; Billari, Francesco C; Simão, Jorge
2005-08-01
The distribution of age at first marriage shows well-known strong regularities across many countries and recent historical periods. We accounted for these patterns by developing agent-based models that simulate the aggregate behavior of individuals who are searching for marriage partners. Past models assumed fully rational agents with complete knowledge of the marriage market; our simulated agents used psychologically plausible simple heuristic mate search rules that adjust aspiration levels on the basis of a sequence of encounters with potential partners. Substantial individual variation must be included in the models to account for the demographically observed age-at-marriage patterns.
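The aspiration-adjustment idea can be sketched in a few lines. The specific rules below (uniform candidate quality, marriage when quality meets the current aspiration level, a fixed decline after each rejection) are illustrative simplifications, not the authors' exact heuristics.

```python
import numpy as np

rng = np.random.default_rng(1)

def ages_at_marriage(n_agents=20000, years=40, start_aspiration=0.9,
                     decline=0.02, start_age=18):
    """Toy aspiration-level heuristic: each year an agent meets one
    candidate of uniform random quality in [0, 1] and marries if the
    quality reaches the current aspiration; after each rejection the
    aspiration declines. Agents still single after `years` are dropped."""
    ages = []
    for _ in range(n_agents):
        aspiration = start_aspiration
        for year in range(years):
            if rng.random() >= aspiration:       # candidate good enough
                ages.append(start_age + year)
                break
            aspiration -= decline                # lower standards with age
    return np.array(ages)

ages = ages_at_marriage()
counts, _ = np.histogram(ages, bins=np.arange(18, 59))
print(counts)    # a skewed, single-peaked curve, qualitatively like the data
```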
Cowell, Robert G
2018-05-04
Current models for single-source and mixture samples, and the probabilistic genotyping software based on them that is used for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model the allelic peak height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms. Copyright © 2018 Elsevier B.V. All rights reserved.
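The inversion step can be illustrated with a generic PGF: evaluating G at the N-th roots of unity and applying a DFT recovers the probability masses. The Poisson PGF below is only a stand-in for the paper's collection/amplification PGFs.

```python
import numpy as np

def pmf_from_pgf(G, N):
    """Recover p_0..p_{N-1} from a probability generating function G by
    evaluating it at the N-th roots of unity and applying a DFT:
    p_k = (1/N) * sum_m G(w^m) * w^(-m*k),  w = exp(2*pi*i/N).
    (Mass beyond N-1 aliases back, so choose N to cover the support.)"""
    m = np.arange(N)
    vals = G(np.exp(2j * np.pi * m / N))
    return np.real(np.fft.fft(vals)) / N

lam = 3.0
G_poisson = lambda z: np.exp(lam * (z - 1.0))   # PGF of Poisson(lam)
p = pmf_from_pgf(G_poisson, 64)
print(p[:6])   # matches exp(-3) * 3**k / k! to rounding error
```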
NASA Astrophysics Data System (ADS)
Aguirre, E. E.; Karchewski, B.
2017-12-01
DC resistivity surveying is a geophysical method that quantifies the electrical properties of the subsurface of the earth by applying a source current between two electrodes and measuring potential differences between electrodes at known distances from the source. Analytical solutions for a homogeneous half-space and simple subsurface models are well known; the former is used to define the concept of apparent resistivity. However, in situ properties are heterogeneous, meaning that simple analytical models are only an approximation, and ignoring such heterogeneity can lead to misinterpretation of survey results, costing time and money. The present study examines the extent to which random variations in electrical properties (i.e. electrical conductivity) affect potential difference readings, and therefore apparent resistivities, relative to an assumed homogeneous subsurface model. We simulate the DC resistivity survey using a Finite Difference (FD) approximation of an appropriate simplification of Maxwell's equations implemented in Matlab. Electrical resistivity values at each node in the simulation were defined as random variables with a given mean and variance, and were assumed to follow a log-normal distribution. The Monte Carlo analysis for a given variance of electrical resistivity was performed until the mean and variance in potential difference measured at the surface converged. Finally, we used the simulation results to examine the relationship between variance in resistivity and variation in surface potential difference (or apparent resistivity) relative to a homogeneous half-space model. For relatively low values of the standard deviation of the material properties (<10% of the mean), we observed a linear correlation between the variance of resistivity and the variance in apparent resistivity.
Mubayi, Anuj; Greenwood, Priscilla E.; Castillo-Chávez, Carlos; Gruenewald, Paul; Gorman, Dennis M.
2009-01-01
Alcohol consumption is a function of social dynamics, environmental contexts, individuals’ preferences and family history. Empirical surveys have focused primarily on identification of risk factors for high-level drinking but have done little to clarify the underlying mechanisms at work. Also, there have been few attempts to apply nonlinear dynamics to the study of these mechanisms and processes at the population level. A simple framework where drinking is modeled as a socially contagious process in low- and high-risk connected environments is introduced. Individuals are classified as light, moderate (assumed mobile), and heavy drinkers. Moderate drinkers provide the link between both environments, that is, they are assumed to be the only individuals drinking in both settings. The focus here is on the effect of moderate drinkers, measured by the proportion of their time spent in “low-” versus “high-” risk drinking environments, on the distribution of drinkers. A simple model within our contact framework predicts that if the relative residence times of moderate drinkers is distributed randomly between low- and high-risk environments then the proportion of heavy drinkers is likely to be higher than expected. However, the full story even in a highly simplified setting is not so simple because “strong” local social mixing tends to increase high-risk drinking on its own. High levels of social interaction between light and moderate drinkers in low-risk environments can diminish the importance of the distribution of relative drinking times on the prevalence of heavy drinking. PMID:20161388
Contact problem for an elastic reinforcement bonded to an elastic plate
NASA Technical Reports Server (NTRS)
Erdogan, F.; Civelek, M. B.
1974-01-01
The contact problem for a thin elastic reinforcement bonded to an elastic plate is considered. The stiffening layer is treated as an elastic membrane and the base plate is assumed to be an elastic continuum. The bonding between the two materials is assumed to be either one of direct adhesion or through a thin adhesive layer which is treated as a shear spring. The solution for the simple case in which both the stiffener and the base plate are treated as membranes is also given. The contact stress is obtained for a series of numerical examples. In the direct adhesion case the contact stress becomes infinite at the stiffener ends with a typical square root singularity for the continuum model and behaving as a delta function for the membrane model. In the case of bonding through an adhesive layer the contact stress becomes finite and continuous along the entire contact area.
Rockfall travel distances theoretical distributions
NASA Astrophysics Data System (ADS)
Jaboyedoff, Michel; Derron, Marc-Henri; Pedrazzini, Andrea
2017-04-01
The probability of propagation of rockfalls is a key part of hazard assessment, because it permits the extrapolation of rockfall runout probabilities either from partial data or purely theoretically. The propagation can be assumed to be frictional, which permits the average propagation to be described by a kinetic energy line corresponding to the loss of energy along the path. But the loss of energy can also be treated as a multiplicative process or a purely random process. The distributions of rockfall block stop points can be deduced from such simple models; they lead to Gaussian, inverse Gaussian, log-normal or negative exponential distributions. The theoretical background is presented, and comparisons of some of these models with existing data indicate that these assumptions are relevant. The results are obtained either from theoretical considerations or by fitting data. They are potentially very useful for rockfall hazard zoning and risk assessment. This approach will need further investigation.
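A minimal sketch of how the candidate stop-point distributions could be compared on runout data; the synthetic data and the AIC ranking below are illustrative, not the authors' procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic runout distances standing in for mapped block stop points
data = stats.lognorm.rvs(s=0.5, scale=100.0, size=300, random_state=rng)

# Candidate stop-point models named in the abstract
candidates = {
    "Gaussian": stats.norm,
    "inverse Gaussian": stats.invgauss,
    "log-normal": stats.lognorm,
    "negative exponential": stats.expon,
}

for name, dist in candidates.items():
    params = dist.fit(data)                      # maximum likelihood fit
    loglik = np.sum(dist.logpdf(data, *params))
    aic = 2 * len(params) - 2 * loglik           # simple model ranking
    print(f"{name:22s} AIC = {aic:8.1f}")
```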
NASA Technical Reports Server (NTRS)
Deshpande, Manohar D.; Dudley, Kenneth
2003-01-01
A simple method is presented to estimate the complex dielectric constants of individual layers of a multilayer composite material. Using the MatLab Optimization Tools, simple MatLab scripts are written to search for the electric properties of individual layers so as to match the measured and calculated S-parameters. Single-layer composite materials formed from materials such as Bakelite, Nomex Felt, Fiber Glass, Woven Composite B and G, Nano Material #0, Cork, and Garlock, of different thicknesses, are tested using the present approach. Assuming the sample thicknesses to be unknown, the present approach is shown to work well in estimating both the dielectric constants and the thicknesses. A number of two-layer composite materials formed by various combinations of the above individual materials are also tested. However, for these the present approach could not provide estimates close to the true values when the thicknesses of the individual layers were assumed to be unknown. This is attributed to the difficulty of modelling the airgaps present between the layers during the S-parameter measurements. A few examples of three-layer composites are also presented.
Unpacking buyer-seller differences in valuation from experience: A cognitive modeling approach.
Pachur, Thorsten; Scheibehenne, Benjamin
2017-12-01
People often indicate a higher price for an object when they own it (i.e., as sellers) than when they do not (i.e., as buyers)-a phenomenon known as the endowment effect. We develop a cognitive modeling approach to formalize, disentangle, and compare alternative psychological accounts (e.g., loss aversion, loss attention, strategic misrepresentation) of such buyer-seller differences in pricing decisions of monetary lotteries. To also be able to test possible buyer-seller differences in memory and learning, we study pricing decisions from experience, obtained with the sampling paradigm, where people learn about a lottery's payoff distribution from sequential sampling. We first formalize different accounts as models within three computational frameworks (reinforcement learning, instance-based learning theory, and cumulative prospect theory), and then fit the models to empirical selling and buying prices. In Study 1 (a reanalysis of published data with hypothetical decisions), models assuming buyer-seller differences in response bias (implementing a strategic-misrepresentation account) performed best; models assuming buyer-seller differences in choice sensitivity or memory (implementing a loss-attention account) generally fared worst. In a new experiment involving incentivized decisions (Study 2), models assuming buyer-seller differences in both outcome sensitivity (as proposed by a loss-aversion account) and response bias performed best. In both Study 1 and 2, the models implemented in cumulative prospect theory performed best. Model recovery studies validated our cognitive modeling approach, showing that the models can be distinguished rather well. In summary, our analysis supports a loss-aversion account of the endowment effect, but also reveals a substantial contribution of simple response bias.
Irradiation and Enhanced Magnetic Braking in Cataclysmic Variables
NASA Astrophysics Data System (ADS)
McCormick, P. J.; Frank, J.
1998-12-01
In previous work we have shown that irradiation driven mass transfer cycles can occur in cataclysmic variables at all orbital periods if an additional angular momentum loss mechanism is assumed. Earlier models simply postulated that the enhanced angular momentum loss was proportional to the mass transfer rate without any specific physical model. In this paper we present a simple modification of magnetic braking which seems to have the right properties to sustain irradiation driven cycles at all orbital periods. We assume that the wind mass loss from the irradiated companion consists of two parts: an intrinsic stellar wind term plus an enhancement that is proportional to the irradiation. The increase in mass flow reduces the specific angular momentum carried away by the flow but nevertheless yields an enhanced rate of magnetic braking. The secular evolution of the binary is then computed numerically with a suitably modified double polytropic code (McCormick & Frank 1998). With the above model and under certain conditions, mass transfer oscillations occur at all orbital periods.
Monte Carlo based statistical power analysis for mediation models: methods and software.
Zhang, Zhiyong
2014-12-01
The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
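A minimal Python sketch of the core idea (the paper's bmem package is in R and covers far more general models): simulate the simple mediation model many times, form a percentile bootstrap confidence interval for the indirect effect a·b in each replication, and estimate power as the rejection rate. Sample sizes, effect sizes, and replication counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def mediation_power(n=100, a=0.3, b=0.3, n_mc=100, n_boot=300, alpha=0.05):
    """Monte Carlo power for the indirect effect a*b in X -> M -> Y,
    using a percentile bootstrap CI in each replication. Increase n_mc
    and n_boot for serious use; small values keep the demo quick."""
    hits = 0
    for _ in range(n_mc):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        ab = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, n)                 # resample cases
            a_hat = np.polyfit(x[idx], m[idx], 1)[0]    # slope of M on X
            b_hat = np.polyfit(m[idx], y[idx], 1)[0]    # slope of Y on M
            ab[i] = a_hat * b_hat
        lo, hi = np.quantile(ab, [alpha / 2, 1 - alpha / 2])
        hits += (lo > 0) or (hi < 0)                    # CI excludes zero
    return hits / n_mc

print("estimated power:", mediation_power())
```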
Cervera, Javier; Manzanares, Jose Antonio; Mafe, Salvador
2015-02-19
We analyze the coupling of model nonexcitable (non-neural) cells assuming that the cell membrane potential is the basic individual property. We obtain this potential on the basis of the inward and outward rectifying voltage-gated channels characteristic of cell membranes. We concentrate on the electrical coupling of a cell ensemble rather than on the biochemical and mechanical characteristics of the individual cells, obtain the map of single cell potentials using simple assumptions, and suggest procedures to collectively modify this spatial map. The response of the cell ensemble to an external perturbation and the consequences of cell isolation, heterogeneity, and ensemble size are also analyzed. The results suggest that simple coupling mechanisms can be significant for the biophysical chemistry of model biomolecular ensembles. In particular, the spatiotemporal map of single cell potentials should be relevant for the uptake and distribution of charged nanoparticles over model cell ensembles and the collective properties of droplet networks incorporating protein ion channels inserted in lipid bilayers.
A Simple Model of Pulsed Ejector Thrust Augmentation
NASA Technical Reports Server (NTRS)
Wilson, Jack; Deloof, Richard L. (Technical Monitor)
2003-01-01
A simple model of thrust augmentation from a pulsed source is described. In the model it is assumed that the flow into the ejector is quasi-steady, and can be calculated using potential flow techniques. The velocity of the flow is related to the speed of the starting vortex ring formed by the jet. The vortex ring properties are obtained from the slug model, knowing the jet diameter, speed and slug length. The model, when combined with experimental results, predicts an optimum ejector radius for thrust augmentation. Data on pulsed ejector performance for comparison with the model was obtained using a shrouded Hartmann-Sprenger tube as the pulsed jet source. A statistical experiment, in which ejector length, diameter, and nose radius were independent parameters, was performed at four different frequencies. These frequencies corresponded to four different slug length to diameter ratios, two below cut-off, and two above. Comparison of the model with the experimental data showed reasonable agreement. Maximum pulsed thrust augmentation is shown to occur for a pulsed source with slug length to diameter ratio equal to the cut-off value.
Testing the uniqueness of mass models using gravitational lensing
NASA Astrophysics Data System (ADS)
Walls, Levi; Williams, Liliya L. R.
2018-06-01
The positions of images produced by the gravitational lensing of background sources provide insight into lens-galaxy mass distributions. Simple elliptical mass density profiles do not agree well with observations of the population of known quads. It has been shown that the most promising way to reconcile this discrepancy is via perturbations away from purely elliptical mass profiles, by assuming two superimposed, somewhat misaligned mass distributions: one is dark matter (DM), the other is a stellar distribution. In this work, we investigate whether mass modelling of individual lenses can reveal if the lenses have this type of complex structure or a simpler elliptical structure. In other words, we test mass-model uniqueness, or how well an extended source lensed by a non-trivial mass distribution can be modeled by a simple elliptical mass profile. We used the publicly available lensing software Lensmodel to generate and numerically model gravitational lenses and “observed” image positions. We then compared “observed” and modeled image positions via the root mean square (RMS) of their difference. We report that, in most cases, the RMS is ≤ 0.05″ when averaged over an extended source. Thus, we show it is possible to fit a smooth mass model to a system that contains a stellar component with varying levels of misalignment with a DM component; hence mass modelling cannot differentiate between simple elliptical and more complex lenses.
Shock wave oscillation driven by turbulent boundary layer fluctuations
NASA Technical Reports Server (NTRS)
Plotkin, K. J.
1972-01-01
Pressure fluctuations due to the interaction of a shock wave with a turbulent boundary layer were investigated. A simple model is proposed in which the shock wave is convected from its mean position by velocity fluctuations in the turbulent boundary layer. The displacement of the shock is assumed to be limited by a linear restoring mechanism. Predictions of the peak root mean square pressure fluctuation and spectral density are in excellent agreement with available experimental data.
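The proposed mechanism corresponds to a first-order linear (low-pass) filter: broadband boundary-layer forcing plus a linear restoring term. A minimal discretized sketch, with illustrative time constant and forcing level:

```python
import numpy as np

rng = np.random.default_rng(4)

# dx/dt = -x/tau + u(t): shock displacement x forced by boundary-layer
# velocity fluctuations u, limited by a linear restoring term (1/tau).
tau, dt, n = 1e-3, 1e-5, 200_000            # illustrative values
u = rng.normal(0.0, 1.0, n) / np.sqrt(dt)   # broadband (white) forcing
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i - 1] + (-x[i - 1] / tau + u[i - 1]) * dt

print("rms shock displacement:", x.std())
# The spectrum of x is flat below f ~ 1/(2*pi*tau) and rolls off as f^-2
# above it: the first-order-filter behavior implied by such a model.
```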
Cacao, Eliedonna; Hada, Megumi; Saganti, Premkumar B; George, Kerry A; Cucinotta, Francis A
2016-01-01
The biological effects of high charge and energy (HZE) particle exposures are of interest in space radiation protection of astronauts and cosmonauts, and in estimating secondary cancer risks for patients undergoing Hadron therapy for primary cancers. The large number of particle types and energies that make up primary or secondary radiation in HZE particle exposures precludes tumor induction studies in animal models for all but a few particle types and energies, thus leading to the use of surrogate endpoints to investigate the details of the radiation quality dependence of relative biological effectiveness (RBE) factors. In this report we make detailed predictions of the charge number and energy dependence of RBEs using a parametric track structure model to represent experimental results for the low dose response for chromosomal exchanges in normal human lymphocyte and fibroblast cells, with comparison to published data for neoplastic transformation and gene mutation. RBEs are evaluated against acute doses of γ-rays near 1 Gy. Models that assume linear or non-targeted effects at low dose are considered. Modest values of RBE (<10) are found for simple exchanges using a linear dose response model; however, in the non-targeted effects model for fibroblast cells large RBE values (>10) are predicted at low doses <0.1 Gy. The radiation quality dependence of RBEs against the effects of acute doses of γ-rays found for neoplastic transformation and gene mutation studies is similar to that found for simple exchanges if a linear response is assumed at low HZE particle doses. Comparisons of the resulting model parameters to those used in the NASA radiation quality factor function are discussed.
Model of a multiverse providing the dark energy of our universe
NASA Astrophysics Data System (ADS)
Rebhan, E.
2017-09-01
It is shown that the dark energy presently observed in our universe can be regarded as the energy of a scalar field driving an inflation-like expansion of a multiverse, with ours being a subuniverse among other parallel universes. A simple model of this multiverse is elaborated: assuming closed space geometry, the origin of the multiverse can be explained by quantum tunneling from nothing; subuniverses are supposed to emerge from local fluctuations of separate inflation fields. The standard concept of tunneling from nothing is extended to the effect that, in addition to an inflationary scalar field, matter is also generated, and that the tunneling leads to an (unstable) equilibrium state. The cosmological principle is assumed to hold from the origin of the multiverse until the first subuniverses emerge. With increasing age of the multiverse, its spatial curvature decays exponentially, so fast that, because the subuniverses share the same space, the flatness problem of our universe resolves by itself. The dark energy density imprinted by the multiverse on our universe is time-dependent, but such that the ratio w = ϱ/(c⁻²p) of its mass density and pressure (times c⁻²) is time-independent and assumes a value −1 + 𝜖 with arbitrary 𝜖 > 0. 𝜖 can be chosen so small that the dark energy model of this paper can be fitted to the current observational data as well as the cosmological constant model.
Dependence of tropical cyclone development on coriolis parameter: A theoretical model
NASA Astrophysics Data System (ADS)
Deng, Liyuan; Li, Tim; Bi, Mingyu; Liu, Jia; Peng, Melinda
2018-03-01
A simple theoretical model was formulated to investigate how tropical cyclone (TC) intensification depends on the Coriolis parameter. The theoretical framework includes a two-layer free atmosphere and an Ekman boundary layer at the bottom. The linkage between the free atmosphere and the boundary layer is through the Ekman pumping vertical velocity, in proportion to the vorticity at the top of the boundary layer. The closure of this linear system assumes a simple relationship between the free-atmosphere diabatic heating and the boundary-layer moisture convergence. Under a set of realistic atmospheric parameter values, the model suggests that the most preferred latitude for TC development is around 5°, without considering other factors. The theoretical result is confirmed by high-resolution WRF model simulations in a zero-mean flow and constant-SST environment on an f-plane with different Coriolis parameters. Given an initially balanced weak vortex, the TC-like vortex intensifies most rapidly at the reference latitude of 5°. Thus, the WRF model simulations confirm the f-dependent characteristics of the TC intensification rate suggested by the theoretical model.
A simple rule for the costs of vigilance: empirical evidence from a social forager.
Cowlishaw, Guy; Lawes, Michael J.; Lightbody, Margaret; Martin, Alison; Pettifor, Richard; Rowcliffe, J. Marcus
2004-01-01
It is commonly assumed that anti-predator vigilance by foraging animals is costly because it interrupts food searching and handling time, leading to a reduction in feeding rate. When food handling does not require visual attention, however, a forager may handle food while simultaneously searching for the next food item or scanning for predators. We present a simple model of this process, showing that when the length of such compatible handling time Hc is long relative to search time S, specifically Hc/S > 1, it is possible to perform vigilance without a reduction in feeding rate. We test three predictions of this model regarding the relationships between feeding rate, vigilance and the Hc/S ratio, with data collected from a wild population of social foragers (samango monkeys, Cercopithecus mitis erythrarchus). These analyses consistently support our model, including our key prediction: as Hc/S increases, the negative relationship between feeding rate and the proportion of time spent scanning becomes progressively shallower. This pattern is more strongly driven by changes in median scan duration than scan frequency. Our study thus provides a simple rule that describes the extent to which vigilance can be expected to incur a feeding rate cost. PMID:15002768
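A minimal formalization of the rule, assuming handling requires no visual attention so that searching and scanning can proceed during handling; the cycle-time expression max(Hc, S + v) is an illustrative reading of the model, not the authors' exact equations.

```python
import numpy as np

def feeding_rate(v, S=1.0, Hc=2.0):
    """Items per second when food handling needs no visual attention:
    handling (Hc s/item) runs concurrently with searching for the next
    item (S s/item) and with scanning (v s/item), so the per-item cycle
    time is max(Hc, S + v)."""
    return 1.0 / np.maximum(Hc, S + v)

for v in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"scanning {v:.1f} s/item -> {feeding_rate(v):.3f} items/s")
# With Hc/S = 2 > 1, up to Hc - S = 1 s of scanning per item is "free";
# only beyond that does vigilance begin to depress the feeding rate.
```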
Sound transmission through lightweight double-leaf partitions: theoretical modelling
NASA Astrophysics Data System (ADS)
Wang, J.; Lu, T. J.; Woodhouse, J.; Langley, R. S.; Evans, J.
2005-09-01
This paper presents theoretical modelling of the sound transmission loss through double-leaf lightweight partitions stiffened with periodically placed studs. First, by assuming that the effect of the studs can be replaced with elastic springs uniformly distributed between the sheathing panels, a simple smeared model is established. Second, periodic structure theory is used to develop a more accurate model taking account of the discrete placing of the studs. Both models treat incident sound waves in the horizontal plane only, for simplicity. The predictions of the two models are compared, to reveal the physical mechanisms determining sound transmission. The smeared model predicts relatively simple behaviour, in which the only conspicuous features are associated with coincidence effects with the two types of structural wave allowed by the partition model, and internal resonances of the air between the panels. In the periodic model, many more features are evident, associated with the structure of pass- and stop-bands for structural waves in the partition. The models are used to explain the effects of incidence angle and of the various system parameters. The predictions are compared with existing test data for steel plates with wooden stiffeners, and good agreement is obtained.
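For context (our illustration, not part of the paper's smeared or periodic models), the classical mass-air-mass resonance of an ideal unstiffened double leaf, one of the low-frequency features any such model must capture, can be computed directly; the panel masses and cavity depth below are assumed example values:

```python
import math

# Classical mass-air-mass resonance of a double-leaf partition at normal
# incidence: the two panels act as masses coupled by the air-cavity spring.
def mass_air_mass_f0(m1, m2, d, rho0=1.21, c0=343.0):
    """m1, m2: panel surface densities [kg/m^2]; d: cavity depth [m]."""
    return (1.0 / (2.0 * math.pi)) * math.sqrt(
        rho0 * c0 ** 2 * (m1 + m2) / (d * m1 * m2))

# e.g. two leaves of ~10 kg/m^2 each with a 100 mm cavity:
print(f"{mass_air_mass_f0(10.0, 10.0, 0.1):.0f} Hz")   # roughly 85 Hz
```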
Computational studies of photoluminescence from disordered nanocrystalline systems
NASA Astrophysics Data System (ADS)
John, George
2000-03-01
The size (d) dependence of emission energies from semiconductor nanocrystallites has been shown to follow an effective exponent (d^-β) determined by the disorder in the system (V. Ranjan, V. A. Singh and G. C. John, Phys. Rev. B 58, 1158 (1998)). Our earlier calculation was based on a simple quantum confinement model assuming a normal distribution of crystallites. This model is now extended to study the effects of realistic systems with a lognormal distribution in particle size, accounting for carrier hopping and nonradiative transitions. Computer simulations of this model performed using the Microcal Origin software can explain several conflicting experimental results reported in the literature.
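A minimal sketch of this kind of simulation (ours, in Python rather than Origin; Eg and C are placeholder values, and hopping/nonradiative effects are omitted): sample a lognormal size distribution and histogram the confined emission energies E(d) = Eg + C/d²:

```python
import numpy as np

# Sample crystallite diameters from a lognormal distribution and build the
# emission spectrum of a simple quantum-confinement model.
rng = np.random.default_rng(0)
Eg, C = 1.12, 3.0                      # [eV], [eV nm^2] -- illustrative only

d = 3.0 * rng.lognormal(mean=0.0, sigma=0.3, size=100_000)  # sizes [nm]
E = Eg + C / d**2                      # confinement blue-shift per particle

hist, edges = np.histogram(E, bins=50)
peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(f"peak emission ≈ {peak:.2f} eV")
```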
Understanding the complex dynamics of stock markets through cellular automata
NASA Astrophysics Data System (ADS)
Qiu, G.; Kandhai, D.; Sloot, P. M. A.
2007-04-01
We present a cellular automaton (CA) model for simulating the complex dynamics of stock markets. Within this model, a stock market is represented by a two-dimensional lattice, of which each vertex stands for a trader. According to typical trading behavior in real stock markets, agents of only two types are adopted: fundamentalists and imitators. Our CA model is based on local interactions, adopting simple rules for representing the behavior of traders and a simple rule for price updating. This model can reproduce, in a simple and robust manner, the main characteristics observed in empirical financial time series. Heavy-tailed return distributions due to large price variations can be generated through the imitating behavior of agents. In contrast to other microscopic simulation (MS) models, our results suggest that it is not necessary to assume a certain network topology in which agents group together, e.g., a random graph or a percolation network. That is, long-range interactions can emerge from local interactions. Volatility clustering, which also leads to heavy tails, seems to be related to the combined effect of a fast and a slow process: the evolution of the influence of news and the evolution of agents’ activity, respectively. In a general sense, these causes of heavy tails and volatility clustering appear to be common among some notable MS models that can confirm the main characteristics of financial markets.
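A heavily stripped-down caricature of such a CA (our sketch; the lattice size, trader fractions, and price-impact constant are assumed, and the full model's news and agent-activity dynamics are omitted):

```python
import numpy as np

# Imitators copy the majority action of their 4 lattice neighbours;
# fundamentalists buy below / sell above an assumed fundamental value.
rng = np.random.default_rng(1)
N, T, p_fund, fundamental = 50, 2000, 0.2, 100.0
is_fund = rng.random((N, N)) < p_fund
action = rng.choice([-1, 1], size=(N, N))          # -1 sell, +1 buy
price, prices = fundamental, []

for _ in range(T):
    neigh = (np.roll(action, 1, 0) + np.roll(action, -1, 0) +
             np.roll(action, 1, 1) + np.roll(action, -1, 1))
    imitate = np.sign(neigh) + (neigh == 0) * action  # follow local majority
    fund = np.where(price < fundamental, 1, -1)       # buy low, sell high
    action = np.where(is_fund, fund, imitate)
    price *= np.exp(1e-4 * action.sum())              # price reacts to net demand
    prices.append(price)

returns = np.diff(np.log(prices))
# excess kurtosis above 3 is a crude indicator of heavy-tailed returns
print("kurtosis proxy:", np.mean(returns**4) / np.mean(returns**2) ** 2)
```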
Simple standard model extension by heavy charged scalar
NASA Astrophysics Data System (ADS)
Boos, E.; Volobuev, I.
2018-05-01
We consider a Standard Model (SM) extension by a heavy charged scalar gauged only under the U(1)Y weak hypercharge gauge group. Such an extension, being gauge invariant with respect to the SM gauge group, is a simple special case of the well-known Zee model. Since the interactions of the charged scalar with the Standard Model fermions turn out to be significantly suppressed compared to the Standard Model interactions, the charged scalar provides an example of a long-lived charged particle that is interesting to search for at the LHC. We present the pair and single production cross sections of the charged scalar at different colliders and the possible decay widths for various boson masses. It is shown that the current ATLAS and CMS searches at 8 and 13 TeV collision energy lead to bounds on the scalar boson mass of about 300-320 GeV. The limits are expected to be much larger at higher collision energies and, assuming 15 ab⁻¹ integrated luminosity, reach about 2.7 TeV at a future 27 TeV LHC, thus covering the most interesting mass region.
Data and methodological problems in establishing state gasoline-conservation targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, D.L.; Walton, G.H.
The Emergency Energy Conservation Act of 1979 gives the President the authority to set gasoline-conservation targets for states in the event of a supply shortage. This paper examines data and methodological problems associated with setting state gasoline-conservation targets. The target-setting method currently used is examined and found to have some flaws. Ways of correcting these deficiencies through the use of Box-Jenkins time-series analysis are investigated. A successful estimation of Box-Jenkins models for all states included the estimation of the magnitude of the supply shortages of 1979 in each state and a preliminary estimation of state short-run price elasticities, which were found to vary about a median value of -0.16. The time-series models identified were very simple in structure and lent support to the simple consumption growth model assumed by the current target method. The authors conclude that the flaws in the current method can be remedied either by replacing the current procedures with time-series models or by using the models in conjunction with minor modifications of the current method.
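To illustrate the kind of Box-Jenkins modelling discussed, a minimal sketch with the modern statsmodels API and synthetic data (the series and the ARIMA order are our assumptions, not the report's fitted models):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
months = pd.date_range("1974-01", periods=72, freq="MS")
# synthetic state gasoline consumption: trend plus noise (hypothetical units)
y = pd.Series(100 + 0.3 * np.arange(72) + rng.normal(0, 2, 72), index=months)

model = ARIMA(y, order=(1, 1, 0))     # a simple differenced growth model
fit = model.fit()
baseline = fit.forecast(steps=12)     # "expected" consumption for target-setting
print(baseline.head())
```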
Data requirements to model creep in 9Cr-1Mo-V steel
NASA Technical Reports Server (NTRS)
Swindeman, R. W.
1988-01-01
Models for creep behavior are helpful in predicting the response of components experiencing stress redistributions due to cyclic loads, and often the analyst would like information that correlates strain rate with history, assuming simple hardening rules such as those based on time or strain. Much progress has been made in the development of unified constitutive equations that include both hardening and softening through the introduction of state variables whose evolutions are history dependent. Although it is difficult to estimate specific data requirements for general application, there are several simple measurements that can be made in the course of creep testing and the results reported in data bases. The issue is whether or not such data could be helpful in developing unified equations, and, if so, how such data should be reported. Data produced on a martensitic 9Cr-1Mo-V-Nb steel were examined with these issues in mind.
S-SPatt: simple statistics for patterns on Markov chains.
Nuel, Grégory
2005-07-01
S-SPatt allows the counting of pattern occurrences in text files and, assuming these texts are generated from a random Markovian source, the computation of the P-value of a given observation using a simple binomial approximation.
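A sketch of the stated binomial approximation (ours, not the S-SPatt source): with per-position occurrence probability p under the Markov model, the pattern count in n positions is approximately Binomial(n, p), giving a one-sided P-value:

```python
from scipy.stats import binom

# P(X >= n_obs) for X ~ Binomial(n_positions, p): the chance of seeing at
# least the observed number of pattern occurrences under the null model.
def pattern_pvalue(n_obs, n_positions, p):
    return binom.sf(n_obs - 1, n_positions, p)

# e.g. observing a pattern 12 times in 10,000 positions when p = 5e-4
# (expected count 5):
print(pattern_pvalue(12, 10_000, 5e-4))   # ~0.005 -> over-represented
```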
Longevity suppresses conflict in animal societies
Port, Markus; Cant, Michael A.
2013-01-01
Models of social conflict in animal societies generally assume that within-group conflict reduces the value of a communal resource. For many animals, however, the primary cost of conflict is increased mortality. We develop a simple inclusive fitness model of social conflict that takes this cost into account. We show that longevity substantially reduces the level of within-group conflict, which can lead to the evolution of peaceful animal societies if relatedness among group members is high. By contrast, peaceful outcomes are never possible in models where the primary cost of social conflict is resource depletion. Incorporating mortality costs into models of social conflict can explain why many animal societies are so remarkably peaceful despite great potential for conflict. PMID:24088564
Correcting the initialization of models with fractional derivatives via history-dependent conditions
NASA Astrophysics Data System (ADS)
Du, Maolin; Wang, Zaihua
2016-04-01
Fractional differential equations are more and more used in modeling memory (history-dependent, non-local, or hereditary) phenomena. Conventional initial values of fractional differential equations are defined at a point, while recent works define initial conditions over histories. Using a simple counter-example, we prove that the conventional initialization of fractional differential equations with a Riemann-Liouville derivative is wrong. The initial values were assumed to be arbitrarily given for a typical fractional differential equation, but we find that one of these values can only be zero. We show that fractional differential equations are of infinite dimension, and that the initial conditions, the initial histories, are defined as functions over intervals. We obtain the equivalent integral equation for the Caputo case. With a simple fractional model of materials, we illustrate that the recovery behavior is correct with the initial creep history, but wrong with initial values at the starting point of the recovery. We demonstrate the application of initial histories by solving a forced fractional Lorenz system numerically.
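For reference, the standard Caputo initial value problem and its equivalent Volterra integral equation for 0 < α < 1 (our rendering of the textbook form consistent with the abstract, not quoted from the paper):

```latex
% Caputo initial value problem and its equivalent Volterra integral equation
\[
{}^{C}D^{\alpha}_{0,t}\, y(t) = f\bigl(t, y(t)\bigr), \qquad y(0) = y_0,
\qquad 0 < \alpha < 1,
\]
\[
y(t) = y_0 + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1}
       f\bigl(s, y(s)\bigr)\, \mathrm{d}s .
\]
```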
NASA Astrophysics Data System (ADS)
Montecinos, S.; Barrientos, P.
2006-03-01
A photochemical model of the atmosphere constitutes a non-linear, non-autonomous dynamical system, forced by the Earth's rotation. Some studies have shown that the region of the mesopause tends towards non-linear responses such as period-doubling cascades and chaos. In these studies, simple approximations for the diurnal variations of the photolysis rates are assumed. The goal of this article is to investigate what happens if more realistic, calculated photolysis rates are introduced. It is found that, if the usual approximations (sinusoidal and step functions) are assumed, the responses of the system are similar: it converges to a 2-day periodic solution. If the more realistic, calculated diurnal cycle is introduced, a new 4-day subharmonic appears.
Deductibles in health insurance
NASA Astrophysics Data System (ADS)
Dimitriyadis, I.; Öney, Ü. N.
2009-11-01
This study is an extension of a simulation study that was developed to determine ruin probabilities in health insurance. The study concentrates on inpatient and outpatient benefits for customers of varying age bands. Loss distributions are modelled through the Allianz tool pack for different classes of insureds. Premiums at different levels of deductibles are derived in the simulation, and ruin probabilities are computed assuming a linear loading on the premium. The increase in the probability of ruin at high levels of the deductible clearly shows the insufficiency of proportional loading in deductible premiums. The PH-transform pricing rule developed by Wang is analyzed as an alternative pricing rule. A simple case, where the insured is assumed to be an exponential-utility decision maker while the insurer's pricing rule is a PH-transform, is also treated.
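A minimal sketch of Wang's PH-transform premium (our example; the exponential loss distribution and the index are assumed): the rule loads the tail by raising the survival function to a power c ∈ (0, 1] before integrating, rather than applying a proportional loading:

```python
import numpy as np
from scipy.integrate import quad

# PH-transform premium: pi = integral over [0, inf) of S(x)**c dx,
# with 0 < c <= 1; smaller c means a heavier tail load.
def ph_premium(survival, c):
    val, _ = quad(lambda x: survival(x) ** c, 0.0, np.inf)
    return val

mu = 1_000.0                                  # assumed mean loss
S = lambda x: np.exp(-x / mu)                 # exponential survival function
print(ph_premium(S, c=1.0))                   # = expected loss, 1000
print(ph_premium(S, c=0.8))                   # = mu / c = 1250, risk-loaded
```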
Spectroscopic measurements of hydrogen ion temperature during divertor recombination
NASA Astrophysics Data System (ADS)
Stotler, D. P.; Skinner, C. H.; Karney, C. F. F.
1999-01-01
We explore the possibility of using the neutral Hα spectral line profile to measure the ion temperature, Ti, in a recombining plasma. Since the Hα emissions due to recombination are larger than those due to other mechanisms, interference from nonrecombining regions contributing to the chord integrated data is insignificant. A Doppler and Stark broadened Hα spectrum is simulated by the DEGAS 2 neutral transport code using assumed plasma conditions. The application of a simple fitting procedure to this spectrum yields an electron density, ne, and Ti consistent with the assumed plasma parameters if the spectrum is dominated by recombination from a region of modest ne variation. General measurements of the ion temperature by Hα spectroscopy appear feasible within the context of a model for the entire divertor plasma.
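A back-of-envelope inversion of a pure Doppler-broadened Hα line (our sketch; the quoted FWHM is an assumed example, and a real spectrum also requires separating the Stark component, as in the paper's fitting procedure):

```python
import math

# Doppler width of H-alpha: FWHM = lambda0 * sqrt(8 ln2 * k T / (m c^2)),
# inverted to give the ion temperature T.
k   = 1.380649e-23      # J/K
c   = 2.99792458e8      # m/s
mH  = 1.6735575e-27     # hydrogen atom mass, kg
lam0 = 656.28e-9        # H-alpha wavelength, m

def ion_temperature(fwhm_m):
    return mH * c**2 / (8.0 * math.log(2.0) * k) * (fwhm_m / lam0) ** 2

T = ion_temperature(0.05e-9)        # an assumed 0.05 nm Doppler FWHM
print(f"T_i ≈ {T:.0f} K ≈ {T / 11604.5:.2f} eV")   # ~11,400 K, about 1 eV
```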
Misconceptions of Mexican Teachers in the Solution of Simple Pendulum
ERIC Educational Resources Information Center
Garcia Trujillo, Luis Antonio; Ramirez Díaz, Mario H.; Rodriguez Castillo, Mario
2013-01-01
Solving for the position of a simple pendulum at any time is apparently one of the simplest and most basic problems in high school and college physics courses. However, because of this apparent simplicity, teachers and physics texts often assume that the solution is immediate, without pausing to reflect on the problem formulation or verifying…
Could the electroweak scale be linked to the large scale structure of the Universe?
NASA Technical Reports Server (NTRS)
Chakravorty, Alak; Massarotti, Alessandro
1991-01-01
We study a model where domain walls are generated through a cosmological phase transition involving a scalar field. We assume the existence of a coupling between the scalar field and dark matter and show that the interaction between domain walls and dark matter leads to an energy-dependent reflection mechanism. For a simple Yukawa coupling, we find that the vacuum expectation value of the scalar field must be approximately 30 GeV-1 TeV in order for the model to be successful in the formation of large-scale 'pancake' structures.
Time behavior of solar flare particles to 5 AU
NASA Technical Reports Server (NTRS)
Haffner, J. W.
1972-01-01
A simple model of solar flare radiation event particle transport is developed to permit the calculation of fluxes and related quantities as a function of distance from the sun (R). This model assumes the particles spiral around the solar magnetic field lines with a constant pitch angle. The particle angular distributions and onset plus arrival times as functions of energy at 1 AU agree with observations if the pitch angle distribution peaks near 90 deg. As a consequence, the time dependence factor is essentially proportional to R^1.7 (R in AU), and the event flux is proportional to R^-2.
Mathematical Model for the Mineralization of Bone
NASA Technical Reports Server (NTRS)
Martin, Bruce
1994-01-01
A mathematical model is presented for the transport and precipitation of mineral in refilling osteons. One goal of this model was to explain calcification 'halos,' in which the bone near the haversian canal is more highly mineralized than the more peripheral lamellae, which have been mineralizing longer. It was assumed that the precipitation rate of mineral is proportional to the difference between the local concentration of calcium ions and an equilibrium concentration, and that the transport of ions is by either diffusion or some other concentration gradient-dependent process. Transport of ions was assumed to be slowed by the accumulation of mineral in the matrix along the transport path. The model also mimics bone apposition, slowing of apposition during refilling, and mineralization lag time. It was found that simple diffusion cannot account for the transport of calcium ions into mineralizing bone, because the diffusion coefficient is two orders of magnitude too low. If a more rapid concentration gradient-driven means of transport exists, the model demonstrates that osteonal geometry and variable rate of refilling work together to produce calcification halos, as well as the primary and secondary calcification effect reported in the literature.
A Martian global groundwater model
NASA Technical Reports Server (NTRS)
Howard, Alan D.
1991-01-01
A global groundwater flow model was constructed for Mars to study hydrologic response under a variety of scenarios, improving and extending earlier simple cross sectional models. The model is capable of treating both steady state and transient flow as well as permeability that is anisotropic in the horizontal dimensions. A single near surface confining layer may be included (representing in these simulations a coherent permafrost layer). Furthermore, in unconfined flow, locations of complete saturation and seepage are determined. The flow model assumes that groundwater gradients are sufficiently low that Dupuit conditions are satisfied and the flow component perpendicular to the ground surface is negligible. The flow equations were solved using a finite difference method employing 10 deg spacing of latitude and longitude.
Recurrence relations in one-dimensional Ising models.
da Conceição, C M Silva; Maia, R N P
2017-09-01
The exact finite-size partition function for the nonhomogeneous one-dimensional (1D) Ising model is found through an approach using algebra operators. Specifically, in this paper we show that the partition function can be computed through a trace from a linear second-order recurrence relation with nonconstant coefficients in matrix form. A relation between the finite-size partition function and the generalized Lucas polynomials is found for the simple homogeneous model, thus establishing a recursive formula for the partition function. This is an important property and it might indicate the possible existence of recurrence relations in higher-dimensional Ising models. Moreover, assuming quenched disorder for the interactions within the model, the quenched averaged magnetic susceptibility displays a nontrivial behavior due to changes in the ferromagnetic concentration probability.
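A direct transfer-matrix sketch (ours), closely related to the recurrence viewpoint of the paper: for the zero-field nonhomogeneous chain, the partition function is the trace of an ordered product of 2×2 bond matrices:

```python
import numpy as np

# Z = Tr( T_1 T_2 ... T_N ) for a periodic chain with bond couplings J_i,
# where T_i = [[e^{b J_i}, e^{-b J_i}], [e^{-b J_i}, e^{b J_i}]], b = beta.
def partition_function(J, beta=1.0):
    Z = np.eye(2)
    for Ji in J:
        T = np.array([[np.exp(beta * Ji), np.exp(-beta * Ji)],
                      [np.exp(-beta * Ji), np.exp(beta * Ji)]])
        Z = Z @ T
    return np.trace(Z)

J = [1.0, 0.5, -0.3, 1.2]              # quenched, nonhomogeneous couplings
print(partition_function(J))

# homogeneous sanity check: Z = (2 cosh J)^N + (2 sinh J)^N at beta = 1
N, Jh = 6, 1.0
assert np.isclose(partition_function([Jh] * N),
                  (2 * np.cosh(Jh)) ** N + (2 * np.sinh(Jh)) ** N)
```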
OBSIFRAC: database-supported software for 3D modeling of rock mass fragmentation
NASA Astrophysics Data System (ADS)
Empereur-Mot, Luc; Villemin, Thierry
2003-03-01
Under stress, fractures in rock masses tend to form fully connected networks. The mass can thus be thought of as a 3D series of blocks produced by fragmentation processes. A numerical model has been developed that uses a relational database to describe such a mass. The model, which assumes the fractures to be plane, allows data from natural networks to test theories concerning fragmentation processes. In the model, blocks are bordered by faces that are composed of edges and vertices. A fracture can originate from a seed point, its orientation being controlled by the stress field specified by an orientation matrix. Alternatively, it can be generated from a discrete set of given orientations and positions. Both kinds of fracture can occur together in a model. From an original simple block, a given fracture produces two simple polyhedral blocks, and the original block becomes compound. Compound and simple blocks created throughout fragmentation are stored in the database. Several fragmentation processes have been studied. In one scenario, a constant proportion of blocks is fragmented at each step of the process. The resulting distribution appears to be fractal, although seed points are random in each fragmented block. In a second scenario, division affects only one random block at each stage of the process, and gives a Weibull volume distribution law. This software can be used for a large number of other applications.
Accounting for nitrogen fixation in simple models of lake nitrogen loading/export.
Ruan, Xiaodan; Schellenger, Frank; Hellweger, Ferdi L
2014-05-20
Coastal eutrophication, an important global environmental problem, is primarily caused by excess nitrogen and management efforts consequently focus on lowering watershed N export (e.g., by reducing fertilizer use). Simple quantitative models are needed to evaluate alternative scenarios at the watershed scale. Existing models generally assume that, for a specific lake/reservoir, a constant fraction of N loading is exported downstream. However, N fixation by cyanobacteria may increase when the N loading is reduced, which may change the (effective) fraction of N exported. Here we present a model that incorporates this process. The model (Fixation and Export of Nitrogen from Lakes, FENL) is based on a steady-state mass balance with loading, output, loss/retention, and N fixation, where the amount fixed is a function of the N/P ratio of the loading (i.e., when N/P is less than a threshold value, N is fixed). Three approaches are used to parametrize and evaluate the model, including microcosm lab experiments, lake field observations/budgets and lake ecosystem model applications. Our results suggest that N export will not be reduced proportionally with N loading, which needs to be considered when evaluating management scenarios.
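A toy steady-state balance in the spirit of FENL (our sketch; the retention fraction and N/P threshold are assumed placeholders, not the paper's calibrated values):

```python
# Steady-state export with N fixation: when the N/P ratio of the loading
# falls below a threshold, cyanobacteria fix nitrogen up to that ratio.
def n_export(n_load, p_load, retention=0.4, np_threshold=20.0):
    """Loads in consistent mass units; np_threshold is a mass N/P ratio."""
    fixation = max(0.0, np_threshold * p_load - n_load)
    return (n_load + fixation) * (1.0 - retention)

# Halving the N loading does not halve the export once fixation kicks in:
print(n_export(n_load=1000.0, p_load=40.0))  # N/P = 25, no fixation -> 600
print(n_export(n_load=500.0,  p_load=40.0))  # N/P = 12.5, fixation  -> 480
```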
Asymmetrical Capacitors for Propulsion
NASA Technical Reports Server (NTRS)
Canning, Francis X.; Melcher, Cory; Winet, Edwin
2004-01-01
Asymmetrical Capacitor Thrusters have been proposed as a source of propulsion. For over eighty years, it has been known that a thrust results when a high voltage is placed across an asymmetrical capacitor, when that voltage causes a leakage current to flow. However, there is surprisingly little experimental or theoretical data explaining this effect. This paper reports on the results of tests of several Asymmetrical Capacitor Thrusters (ACTs). The thrust they produce has been measured for various voltages, polarities, and ground configurations and their radiation in the VHF range has been recorded. These tests were performed at atmospheric pressure and at various reduced pressures. A simple model for the thrust was developed. The model assumed the thrust was due to electrostatic forces on the leakage current flowing across the capacitor. It was further assumed that this current involves charged ions which undergo multiple collisions with air. These collisions transfer momentum. All of the measured data was consistent with this model. Many configurations were tested, and the results suggest general design principles for ACTs to be used for a variety of purposes.
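The leakage-current model described is essentially the classical one-dimensional ion-drift estimate, in which thrust equals current times electrode gap divided by ion mobility; a minimal sketch (ours, with an assumed mobility value):

```python
# Classical ion-drift thrust estimate: F = I * d / mu, where I is the
# leakage current, d the electrode gap, and mu the ion mobility in air.
MU_AIR = 2.0e-4          # m^2 / (V s), typical small-ion mobility in air

def ehd_thrust(current_A, gap_m, mobility=MU_AIR):
    return current_A * gap_m / mobility

# e.g. 50 microamps across a 40 mm gap:
print(f"{ehd_thrust(50e-6, 0.04) * 1e3:.1f} mN")  # 10 mN, about 1 gram-force
```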
Nuclear Structure of the Closed Subshell Nucleus ⁹⁰Zr Studied with the (n,n′γ) Reaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrett, P E; Younes, Y; Becker, J A
States in ⁹⁰Zr have been observed with the (n,n′γ) reaction using both spallation and monoenergetic accelerator-produced neutrons. A scheme comprised of 81 levels and 157 transitions was constructed, concentrating on levels below 5.6 MeV in excitation energy. Spins have been determined by considering data from all experimental studies performed for ⁹⁰Zr. Lifetimes have been deduced using the Doppler-shift attenuation method for many of the states, and transition rates have been obtained. A spherical shell-model interpretation in terms of particle-hole excitations assuming a ⁸⁸Sr closed core is given. In some cases, enhancements in B(M1) and B(E2) values are observed that cannot be explained by assuming simple particle-hole excitations. Shell-model calculations using an extended fpg-shell model space reproduce the spectrum of excited states very well, and the gross features of the B(M1) and B(E2) transition rates. Transition rates for individual levels show discrepancies between calculations and experimental values.
Bacteria as a new model system for aging studies: investigations using light microscopy.
Ackermann, Martin
2008-04-01
Aging (the decline in an individual's condition over time) is at the center of an active research field in medicine and biology. Some very basic questions have, however, remained unresolved, the most fundamental being: do all organisms age? Or are there organisms that would continue to live forever if not killed by external forces? For a long time it was believed that aging only affected organisms such as animals, plants, and fungi. Bacteria, in contrast, were assumed to be potentially immortal, and until recently this assertion remained untested. We used phase-contrast microscopy (on an Olympus BX61) to follow individual bacterial cells over many divisions to prove that some bacteria show a distinction between an aging mother cell and a rejuvenated daughter, and that these bacteria thus age. This indicates that aging is a more fundamental property of organisms than was previously assumed. Bacteria can now be used as a very simple model system for investigating why and how organisms age.
2010-04-01
cylinders is suspected to account for the lateral offset. A simple model of the Magnus effect (ref. 23) indicates that it generates force per… The spin also produced a small but measurable Magnus effect. An extreme cg offset produced stability around small end into the wind. The engine… expected. If we assume the flight data for cable angles are accurate to a fraction of a degree, then a Magnus effect similar to that found for spinning…
New approach to analyzing soil-building systems
Safak, E.
1998-01-01
A new method of analyzing seismic response of soil-building systems is introduced. The method is based on the discrete-time formulation of wave propagation in layered media for vertically propagating plane shear waves. Buildings are modeled as an extension of the layered soil media by assuming that each story in the building is another layer. The seismic response is expressed in terms of wave travel times between the layers, and the wave reflection and transmission coefficients at layer interfaces. The calculation of the response is reduced to a pair of simple finite-difference equations for each layer, which are solved recursively starting from the bedrock. Compared with the commonly used vibration formulation, the wave propagation formulation provides several advantages, including the ability to incorporate soil layers, simplicity of the calculations, improved accuracy in modeling the mass and damping, and better tools for system identification and damage detection.
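A sketch (ours, with assumed numbers) of the basic ingredients of such a formulation: layer impedances, interface reflection/transmission coefficients, and per-layer travel times:

```python
# Each layer (soil layer or building story) carries a shear impedance
# Z = rho * Vs; the interface coefficients follow from continuity of
# stress and velocity for a vertically propagating plane shear wave.
def interface_coeffs(Z_lower, Z_upper):
    """Reflection/transmission for an upgoing wave hitting the interface."""
    r = (Z_lower - Z_upper) / (Z_lower + Z_upper)
    t = 2.0 * Z_lower / (Z_lower + Z_upper)
    return r, t

def travel_time(thickness, Vs):
    return thickness / Vs

# a stiff soil layer beneath a much softer "story layer" (assumed values):
r, t = interface_coeffs(Z_lower=2000.0 * 400.0, Z_upper=300.0 * 150.0)
print(f"r = {r:.2f}, t = {t:.2f}")      # strong contrast -> r near 1
print(travel_time(3.0, 150.0), "s per story")
```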
Theory and application of an approximate model of saltwater upconing in aquifers
McElwee, C.; Kemblowski, M.
1990-01-01
Motion and mixing of salt water and fresh water are vitally important for water-resource development throughout the world. An approximate model of saltwater upconing in aquifers is developed, which results in three non-linear coupled equations for the freshwater zone, the saltwater zone, and the transition zone. The description of the transition zone uses the concept of a boundary layer. This model invokes some assumptions to give a reasonably tractable model, considerably better than the sharp interface approximation but considerably simpler than a fully three-dimensional model with variable density. We assume the validity of the Dupuit-Forchheimer approximation of horizontal flow in each layer. Vertical hydrodynamic dispersion into the base of the transition zone is assumed and concentration of the saltwater zone is assumed constant. Solute in the transition zone is assumed to be moved by advection only. Velocity and concentration are allowed to vary vertically in the transition zone by using shape functions. Several numerical techniques can be used to solve the model equations, and simple analytical solutions can be useful in validating the numerical solution procedures. We find that the model equations can be solved with adequate accuracy using the procedures presented. The approximate model is applied to the Smoky Hill River valley in central Kansas. This model can reproduce earlier sharp interface results as well as evaluate the importance of hydrodynamic dispersion for feeding salt water to the river. We use a wide range of dispersivity values and find that unstable upconing always occurs. Therefore, in this case, hydrodynamic dispersion is not the only mechanism feeding salt water to the river. Calculations imply that unstable upconing and hydrodynamic dispersion could be equally important in transporting salt water. For example, if groundwater flux to the Smoky Hill River were only about 40% of its expected value, stable upconing could exist where hydrodynamic dispersion into a transition zone is the primary mechanism for moving salt water to the river. The current model could be useful in situations involving dense saltwater layers.
What's Next: Recruitment of a Grounded Predictive Body Model for Planning a Robot's Actions.
Schilling, Malte; Cruse, Holk
2012-01-01
Even comparatively simple, reactive systems are able to control complex motor tasks, such as hexapod walking on unpredictable substrate. The capability of such a controller can be improved by introducing internal models of the body and of parts of the environment. Such internal models can be applied as inverse models, as forward models or to solve the problem of sensor fusion. Usually, separate models are used for these functions. Furthermore, separate models are used to solve different tasks. Here we concentrate on internal models of the body as the brain considers its own body the most important part of the world. The model proposed is formed by a recurrent neural network with the property of pattern completion. The model shows a hierarchical structure but nonetheless comprises a holistic system. One and the same model can be used as a forward model, as an inverse model, for sensor fusion, and, with a simple expansion, as a model to internally simulate (new) behaviors to be used for prediction. The model embraces the geometrical constraints of a complex body with many redundant degrees of freedom, and allows finding geometrically possible solutions. To control behavior such as walking, climbing, or reaching, this body model is complemented by a number of simple reactive procedures together forming a procedural memory. In this article, we illustrate the functioning of this network. To this end we present examples for solutions of the forward function and the inverse function, and explain how the complete network might be used for predictive purposes. The model is assumed to be "innate," so learning the parameters of the model is not (yet) considered.
A Simple Model of Cirrus Horizontal Inhomogeneity and Cloud Fraction
NASA Technical Reports Server (NTRS)
Smith, Samantha A.; DelGenio, Anthony D.
1998-01-01
A simple model of horizontal inhomogeneity and cloud fraction in cirrus clouds has been formulated on the basis that all internal horizontal inhomogeneity in the ice mixing ratio is due to variations in the cloud depth, which are assumed to be Gaussian. The use of such a model was justified by the observed relationship between the normalized variability of the ice water mixing ratio (and extinction) and the normalized variability of cloud depth. Using radar cloud depth data as input, the model reproduced well the in-cloud ice water mixing ratio histograms obtained from horizontal runs during the FIRE2 cirrus campaign. For totally overcast cases the histograms were almost Gaussian, but changed as cloud fraction decreased to exponential distributions which peaked at the lowest nonzero ice value for cloud fractions below 90%. Cloud fractions predicted by the model were always within 28% of the observed value. The predicted average ice water mixing ratios were within 34% of the observed values. This model could be used in a GCM to produce the ice mixing ratio probability distribution function and to estimate cloud fraction. It only requires basic meteorological parameters, the depth of the saturated layer and the standard deviation of cloud depth as input.
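A Monte Carlo rendering of the stated assumption (our sketch; the linear depth-to-mixing-ratio constant and the depth statistics are assumed): Gaussian cloud depths truncated at zero yield both the in-cloud mixing-ratio distribution and the cloud fraction:

```python
import numpy as np

# All horizontal variability in ice mixing ratio is attributed to Gaussian
# variations of cloud depth; cloud fraction = P(depth > 0).
rng = np.random.default_rng(0)

def cloud_stats(mean_depth, sd_depth, q_per_m=1e-6, n=100_000):
    depth = rng.normal(mean_depth, sd_depth, n)
    cloudy = depth > 0.0
    q = q_per_m * depth[cloudy]   # mixing ratio ~ depth (assumed linear)
    return cloudy.mean(), q

cf, q = cloud_stats(mean_depth=1500.0, sd_depth=700.0)   # metres
print(f"cloud fraction ≈ {cf:.2f}")   # ≈ P(depth > 0) ≈ 0.98 here
print(f"mean in-cloud q ≈ {q.mean():.2e}")
```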
Modelling the evolution and diversity of cumulative culture
Enquist, Magnus; Ghirlanda, Stefano; Eriksson, Kimmo
2011-01-01
Previous work on mathematical models of cultural evolution has mainly focused on the diffusion of simple cultural elements. However, a characteristic feature of human cultural evolution is the seemingly limitless appearance of new and increasingly complex cultural elements. Here, we develop a general modelling framework to study such cumulative processes, in which we assume that the appearance and disappearance of cultural elements are stochastic events that depend on the current state of culture. Five scenarios are explored: evolution of independent cultural elements, stepwise modification of elements, differentiation or combination of elements and systems of cultural elements. As one application of our framework, we study the evolution of cultural diversity (in time as well as between groups). PMID:21199845
Forces between permanent magnets: experiments and model
NASA Astrophysics Data System (ADS)
González, Manuel I.
2017-03-01
This work describes a very simple, low-cost experimental setup designed for measuring the force between permanent magnets. The experiment consists of placing one of the magnets on a balance, attaching the other magnet to a vertical height gauge, aligning carefully both magnets and measuring the load on the balance as a function of the gauge reading. A theoretical model is proposed to compute the force, assuming uniform magnetisation and based on laws and techniques accessible to undergraduate students. A comparison between the model and the experimental results is made, and good agreement is found at all distances investigated. In particular, it is also found that the force behaves as r^-4 at large distances, as expected.
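For the far-field check (our sketch, not the paper's full uniform-magnetisation model), two coaxial point dipoles reproduce the observed r^-4 law; the dipole moments below are assumed values of the right order for small NdFeB magnets:

```python
import math

# On-axis force between two coaxial magnetic dipoles:
# F = 3 mu0 m1 m2 / (2 pi r^4).
MU0 = 4.0e-7 * math.pi   # T m / A

def dipole_force(m1, m2, r):
    """m1, m2: dipole moments [A m^2]; r: centre-to-centre distance [m]."""
    return 3.0 * MU0 * m1 * m2 / (2.0 * math.pi * r ** 4)

for r in (0.02, 0.04):   # doubling r cuts the force by a factor of 16
    print(f"r = {r} m: F = {dipole_force(0.1, 0.1, r):.4f} N")
```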
NASA Astrophysics Data System (ADS)
Aronica, G. T.; Candela, A.
2007-12-01
In this paper a Monte Carlo procedure for deriving frequency distributions of peak flows using a semi-distributed stochastic rainfall-runoff model is presented. The rainfall-runoff model used here is a very simple one, with a limited number of parameters, and practically does not require any calibration, resulting in a robust tool for catchments which are partially or poorly gauged. The procedure is based on three modules: a stochastic rainfall generator module, a hydrologic loss module and a flood routing module. In the rainfall generator module the rainfall storm, i.e. the maximum rainfall depth for a fixed duration, is assumed to follow the two-component extreme value (TCEV) distribution, whose parameters have been estimated at regional scale for Sicily. The catchment response has been modelled by using the Soil Conservation Service-Curve Number (SCS-CN) method, in a semi-distributed form, for the transformation of total rainfall to effective rainfall, and a simple form of IUH for the flood routing. Here, the SCS-CN method is implemented in probabilistic form with respect to prior-to-storm conditions, allowing the classical iso-frequency assumption between rainfall and peak flow to be relaxed. The procedure is tested on six practical case studies where synthetic FFCs (flood frequency curves) were obtained from model variable distributions by simulating 5000 flood events, combining 5000 values of total rainfall depth for the storm duration and AMC (antecedent moisture conditions). The application of this procedure showed how the Monte Carlo simulation technique can reproduce the observed flood frequency curves with reasonable accuracy over a wide range of return periods, using a simple and parsimonious approach, limited data input and no calibration of the rainfall-runoff model.
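A sketch of the hydrologic-loss step (ours; the gamma storm-depth generator below is a simple stand-in for the regional TCEV model, and the CN/AMC values are assumed):

```python
import numpy as np

# SCS-CN effective rainfall with the usual initial abstraction Ia = 0.2 S,
# where S [mm] is the potential retention implied by the curve number CN.
def scs_cn_runoff(P_mm, CN):
    S = 25400.0 / CN - 254.0
    Ia = 0.2 * S
    return np.where(P_mm > Ia, (P_mm - Ia) ** 2 / (P_mm + 0.8 * S), 0.0)

# Monte Carlo flavour: vary CN with antecedent moisture, vary storm depth.
rng = np.random.default_rng(0)
P  = rng.gamma(2.0, 25.0, 5000)                               # storm depths [mm]
CN = rng.choice([60.0, 75.0, 90.0], 5000, p=[0.3, 0.5, 0.2])  # AMC classes
Q  = scs_cn_runoff(P, CN)
print(f"median runoff {np.median(Q):.1f} mm, "
      f"1%-exceedance {np.quantile(Q, 0.99):.1f} mm")
```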
The star formation history of low-mass disk galaxies: A case study of NGC 300
NASA Astrophysics Data System (ADS)
Kang, Xiaoyu; Zhang, Fenghui; Chang, Ruixiang; Wang, Lang; Cheng, Liantao
2016-01-01
Context. Since NGC 300 is a bulgeless, isolated low-mass galaxy and it has not experienced radial migration during its evolution history, it can be treated as an ideal laboratory to test the simple galactic chemical evolution model. Aims: Our main aim is to investigate the main properties of the star formation history (SFH) of NGC 300 and compare its SFH with that of M 33 to explore the common properties and differences between these two nearby low-mass systems. Methods: We construct a simple chemical evolution model for NGC 300, assuming its disk forms gradually from continuous accretion of primordial gas and including the gas-outflow process. The model allows us to build a bridge between the SFH and observed data of NGC 300, in particular the present-day radial profiles and global observed properties (e.g., cold gas mass, star formation rate, and metallicity). By comparing the model predictions with the corresponding observations, we adopt the classical χ² methodology to find the best combination of the free parameters a, b, and b_out. Results: Our results show that, by assuming an inside-out formation scenario and an appropriate outflow rate, our model reproduces well most of the present-day observational values. The model reproduces not only the radial profiles but also the global observational data for the NGC 300 disk. Our results suggest that NGC 300 may have experienced a rapid growth of its disk. Comparing the best-fitting model-predicted SFH of NGC 300 with that of M 33, we find that the mean stellar age of NGC 300 is older than that of M 33 and that there has been a recent lack of primordial gas infall onto the disk of NGC 300. Our results also imply that the local environment may play a key role in the secular evolution of galaxy disks.
Flow studies in canine artery bifurcations using a numerical simulation method.
Xu, X Y; Collins, M W; Jones, C J
1992-11-01
Three-dimensional flows through canine femoral bifurcation models were predicted under physiological flow conditions by numerically solving the time-dependent three-dimensional Navier-Stokes equations. In the calculations, two models were assumed for the blood: (a) a Newtonian fluid, and (b) a non-Newtonian fluid obeying the power law. The blood vessel wall was assumed to be rigid, this being the only approximation to the prediction model. The numerical procedure utilized a finite volume approach on a finite element mesh to discretize the equations, and the code used (ASTEC) incorporated the SIMPLE velocity-pressure algorithm in performing the calculations. The predicted velocity profiles were in good qualitative agreement with the in vivo measurements recently obtained by Jones et al. The non-Newtonian effects on the bifurcation flow field were also investigated, and no great differences in velocity profiles were observed. This indicates that the non-Newtonian characteristics of the blood might not be an important factor in determining the general flow patterns for these bifurcations, but could have local significance. Current work involves modeling wall distensibility in an empirically valid manner. Predictions accommodating this will permit a true quantitative comparison with experiment.
NASA Astrophysics Data System (ADS)
Balázs, Csaba; Li, Tong
2016-05-01
In this work we perform a comprehensive statistical analysis of the AMS-02 electron, positron fluxes and the antiproton-to-proton ratio in the context of a simplified dark matter model. We include known, standard astrophysical sources and a dark matter component in the cosmic ray injection spectra. To predict the AMS-02 observables we use propagation parameters extracted from observed fluxes of heavier nuclei and the low energy part of the AMS-02 data. We assume that the dark matter particle is a Majorana fermion coupling to third generation fermions via a spin-0 mediator, and annihilating to multiple channels at once. The simultaneous presence of various annihilation channels provides the dark matter model with additional flexibility, and this enables us to simultaneously fit all cosmic ray spectra using a simple particle physics model and coherent astrophysical assumptions. Our results indicate that AMS-02 observations are not only consistent with the dark matter hypothesis within the uncertainties, but adding a dark matter contribution improves the fit to the data. Assuming, however, that dark matter is solely responsible for this improvement of the fit, it is difficult to evade the latest CMB limits in this model.
The 4D-var Estimation of North Korean Rocket Exhaust Emissions Into the Ionosphere
NASA Astrophysics Data System (ADS)
Ssessanga, Nicholas; Kim, Yong Ha; Choi, Byungyu; Chung, Jong-Kyun
2018-03-01
We have developed a four-dimensional variational data assimilation (4D-var) technique and utilized it to reconstruct three-dimensional images of the ionospheric hole created during the Kwangmyongsong-4 rocket launch. Kwangmyongsong-4 was launched southward from North Korea's Sohae space center (124.7°E, 39.6°N) at 00:30 UT on 7 February 2016. The data assimilated were Global Positioning System (GPS) total electron content from the South Korean GPS-receiver network. Due to a lack of publicized information about Kwangmyongsong-4, the rocket was assumed to inherit its technology from previous launches (Taepodong-2). The ionospheric hole was assumed to be created by neutral molecules, water (H2O) and hydrogen (H2), deposited in the exhaust plumes. The dispersion model was developed based on the advection and diffusion equation, and a simple asymmetric diffusion model was assumed. From the analysis, using the adjoint technique, we estimated an ionospheric hole with the largest depletion existing around 6-7 min after launch and gradually recovering within 30 min. These results are in agreement with temporal total electron content analyses of the same event from previous studies. Furthermore, the Kwangmyongsong-4 second-stage exhaust emission rate was estimated as 1.9 × 10²⁶ s⁻¹, of which 40% was H2 and the rest H2O.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Otake, M.; Schull, W.J.
The occurrence of lenticular opacities among atomic bomb survivors in Hiroshima and Nagasaki detected in 1963-1964 has been examined in reference to their γ and neutron doses. A lenticular opacity in this context implies an ophthalmoscopic and slit lamp biomicroscopic defect in the axial posterior aspect of the lens which may or may not interfere measurably with visual acuity. Several different dose-response models were fitted to the data after the effects of age at time of bombing (ATB) were examined. Some postulate the existence of a threshold(s), others do not. All models assume a "background" exists, that is, that some number of posterior lenticular opacities are ascribable to events other than radiation exposure. Among these alternatives we can show that a simple linear γ-neutron relationship which assumes no threshold does not fit the data adequately under the T65 dosimetry, but does fit the recent Oak Ridge and Lawrence Livermore estimates. Other models which envisage quadratic terms in γ and which may or may not assume a threshold are compatible with the data. The "best" fit, that is, the one with the smallest χ² and largest tail probability, is with a "linear γ : linear neutron" model which postulates a γ threshold but no threshold for neutrons. It should be noted that the greatest difference in the dose-response models associated with the three different sets of doses involves the neutron component, as is, of course, to be expected. No effect of neutrons on the occurrence of lenticular opacities is demonstrable with either the Lawrence Livermore or Oak Ridge estimates.
Survival rates of birds of tropical and temperate forests: will the dogma survive?
Karr, J.R.; Nichols, J.D.; Klimkiewicz, M.K.; Brawn, J.D.
1990-01-01
Survival rates of tropical forest birds are widely assumed to be high relative to the survival rates of temperate forest birds. Much life-history theory is based on this assumption despite the lack of empirical data to support it. We provide the first detailed comparison of survival rates of tropical and temperate forest birds based on extensive data bases and modern capture-recapture models. We find no support for the conventional wisdom. Because clutch size is only one component of reproductive rate, the frequently assumed, simple association between clutch size and adult survival rates should not necessarily be expected. Our results emphasize the need to consider components of fecundity in addition to clutch size when comparing the life histories of tropical and temperate birds and suggest similar considerations in the development of vertebrate life-history theory.
Reconstruction phases in the planar three- and four-vortex problems
NASA Astrophysics Data System (ADS)
Hernández-Garduño, Antonio; Shashikanth, Banavara N.
2018-03-01
Pure reconstruction phases (geometric and dynamic) are computed in the N-point-vortex model in the plane, for the cases N = 3 and N = 4. The phases are computed relative to a metric-orthogonal connection on appropriately defined principal fiber bundles. The metric is similar to the kinetic energy metric for point masses but with the masses replaced by vortex strengths. The geometric phases are shown to be proportional to areas enclosed by the closed orbit on the symmetry-reduced spaces. More interestingly, simple formulae are obtained for the dynamic phases, analogous to Montgomery's result for the free rigid body, which show them to be proportional to the time period of the symmetry-reduced closed orbits. For the case N = 3 a non-zero total vortex strength is assumed. For the case N = 4 the vortex strengths are assumed equal.
Comparison of across-frequency integration strategies in a binaural detection model.
Breebaart, Jeroen
2013-11-01
Breebaart et al. [J. Acoust. Soc. Am. 110, 1089-1104 (2001)] reported that the masker bandwidth dependence of detection thresholds for an out-of-phase signal and an in-phase noise masker (N0Sπ) can be explained by principles of integration of information across critical bands. In this paper, different methods for such across-frequency integration process are evaluated as a function of the bandwidth and notch width of the masker. The results indicate that an "optimal detector" model assuming independent internal noise in each critical band provides a better fit to experimental data than a best filter or a simple across-frequency integrator model. Furthermore, the exponent used to model peripheral compression influences the accuracy of predictions in notched conditions.
Wagner, Peter J
2012-02-23
Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution.
Stochastic dynamics of cholera epidemics
NASA Astrophysics Data System (ADS)
Azaele, Sandro; Maritan, Amos; Bertuzzo, Enrico; Rodriguez-Iturbe, Ignacio; Rinaldo, Andrea
2010-05-01
We describe the predictions of an analytically tractable stochastic model for cholera epidemics following a single initial outbreak. The exact model relies on a set of assumptions that may restrict the generality of the approach and yet provides a realm of powerful tools and results. Without resorting to the depletion of susceptible individuals, as usually assumed in deterministic susceptible-infected-recovered models, we show that a simple stochastic equation for the number of ill individuals provides a mechanism for the decay of the epidemics occurring on the typical time scale of seasonality. The model is shown to provide a reasonably accurate description of the empirical data of the 2000/2001 cholera epidemic which took place in the KwaZulu-Natal Province, South Africa, with possibly notable epidemiological implications.
Serial recall of colors: Two models of memory for serial order applied to continuous visual stimuli.
Peteranderl, Sonja; Oberauer, Klaus
2018-01-01
This study investigated the effects of serial position and temporal distinctiveness on serial recall of simple visual stimuli. Participants observed lists of five colors presented at varying, unpredictably ordered interitem intervals, and their task was to reproduce the colors in their order of presentation by selecting colors on a continuous-response scale. To control for the possibility of verbal labeling, articulatory suppression was required in one of two experimental sessions. The predictions were derived through simulation from two computational models of serial recall: SIMPLE represents the class of temporal-distinctiveness models, whereas SOB-CS represents event-based models. According to temporal-distinctiveness models, items that are temporally isolated within a list are recalled more accurately than items that are temporally crowded. In contrast, event-based models assume that the time intervals between items do not affect recall performance per se, although free time following an item can improve memory for that item because of extended time for encoding. The experimental and the simulated data were fit to an interference measurement model to measure the tendency to confuse items with other items nearby on the list (the locality constraint) in people as well as in the models. The continuous-reproduction performance showed a pronounced primacy effect with no recency, as well as some evidence for transpositions obeying the locality constraint. Though not entirely conclusive, this evidence favors event-based models over a role for temporal distinctiveness. There was also a strong detrimental effect of articulatory suppression, suggesting that verbal codes can be used to support serial-order memory of simple visual stimuli.
The infection rate of Daphnia magna by Pasteuria ramosa conforms with the mass-action principle.
Regoes, R R; Hottinger, J W; Sygnarski, L; Ebert, D
2003-10-01
In simple epidemiological models that describe the interaction of hosts with their parasites, the infection process is commonly assumed to be governed by the law of mass action, i.e. it is assumed that the infection rate depends linearly on the densities of the host and the parasite. The mass-action assumption, however, can be problematic if certain aspects of the host-parasite interaction are very pronounced, such as spatial compartmentalization, host immunity which may protect from infection with low doses, or host heterogeneity with regard to susceptibility to infection. As deviations from a mass-action infection rate have consequences for the dynamics of the host-parasite system, it is important to test for the appropriateness of the mass-action assumption in a given host-parasite system. In this paper, we examine the relationship between the infection rate and the parasite inoculum for the water flea Daphnia magna and its bacterial parasite Pasteuria ramosa. We measured the fraction of infected hosts after exposure to 14 different doses of the parasite. We find that the observed relationship between the fraction of infected hosts and the parasite dose is largely consistent with an infection process governed by the mass-action principle. However, we have evidence for a subtle but significant deviation from a simple mass-action infection model, which can be explained either by some antagonistic effects of the parasite spores during the infection process, or by heterogeneity in the hosts' susceptibility with regard to infection.
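The mass-action prediction is easy to confront with dose-response data: a constant per-spore infection hazard gives p(D) = 1 - exp(-sD). A fitting sketch (ours, with hypothetical data, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# Under mass action, each spore independently infects with the same small
# probability, so the fraction infected at dose D is 1 - exp(-s D).
def mass_action(D, s):
    return 1.0 - np.exp(-s * D)

# hypothetical dose-response data (illustrative only):
dose = np.array([1e2, 1e3, 1e4, 1e5])
frac = np.array([0.05, 0.35, 0.90, 1.00])

(s_hat,), _ = curve_fit(mass_action, dose, frac, p0=[1e-4])
print(f"estimated per-spore infectivity s ≈ {s_hat:.2e}")
# systematic deviations of data from this curve would point to spore
# antagonism or host heterogeneity, as discussed in the abstract.
```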
Comparison of geometrical shock dynamics and kinematic models for shock-wave propagation
NASA Astrophysics Data System (ADS)
Ridoux, J.; Lardjane, N.; Monasse, L.; Coulouvrat, F.
2018-03-01
Geometrical shock dynamics (GSD) is a simplified model for nonlinear shock-wave propagation, based on the decomposition of the shock front into elementary ray tubes. Assuming small changes in the ray tube area, and neglecting the effect of the post-shock flow, a simple relation linking the local curvature and velocity of the front, known as the A-M rule, is obtained. More recently, a new simplified model, referred to as the kinematic model, was proposed. This model is obtained by combining the three-dimensional Euler equations and the Rankine-Hugoniot relations at the front, which leads to an equation for the normal variation of the shock Mach number at the wave front. In the same way as GSD, the kinematic model is closed by neglecting the post-shock flow effects. Although each model's approach is different, we prove their structural equivalence: the kinematic model can be rewritten under the form of GSD with a specific A-M relation. Both models are then compared through a wide variety of examples including experimental data or Eulerian simulation results when available. Attention is drawn to the simple cases of compression ramps and diffraction over convex corners. The analysis is completed by the more complex cases of the diffraction over a cylinder, a sphere, a mound, and a trough.
Vector-based model of elastic bonds for simulation of granular solids.
Kuzkin, Vitaly A; Asonov, Igor E
2012-11-01
A model (further referred to as the V model) for the simulation of granular solids, such as rocks, ceramics, concrete, nanocomposites, and agglomerates, composed of bonded particles (rigid bodies), is proposed. It is assumed that the bonds, usually representing some additional gluelike material connecting particles, cause both forces and torques acting on the particles. Vectors rigidly connected with the particles are used to describe the deformation of a single bond. The expression for potential energy of the bond and corresponding expressions for forces and torques are derived. Formulas connecting parameters of the model with longitudinal, shear, bending, and torsional stiffnesses of the bond are obtained. It is shown that the model makes it possible to describe any values of the bond stiffnesses exactly; that is, the model is applicable for the bonds with arbitrary length/thickness ratio. Two different calibration procedures depending on bond length/thickness ratio are proposed. It is shown that parameters of the model can be chosen so that under small deformations the bond is equivalent to either a Bernoulli-Euler beam or a Timoshenko beam or short cylinder connecting particles. Simple analytical expressions, relating parameters of the V model with geometrical and mechanical characteristics of the bond, are derived. Two simple examples of computer simulation of thin granular structures using the V model are given.
Bounding filter - A simple solution to lack of exact a priori statistics.
NASA Technical Reports Server (NTRS)
Nahi, N. E.; Weiss, I. M.
1972-01-01
Wiener and Kalman-Bucy estimation problems assume that models describing the signal and noise stochastic processes are exactly known. When this modeling information, i.e., the signal and noise spectral densities for Wiener filter and the signal and noise dynamic system and disturbing noise representations for Kalman-Bucy filtering, is inexactly known, then the filter's performance is suboptimal and may even exhibit apparent divergence. In this paper a system is designed whereby the actual estimation error covariance is bounded by the covariance calculated by the estimator. Therefore, the estimator obtains a bound on the actual error covariance which is not available, and also prevents its apparent divergence.
Mutual Comparative Filtering for Change Detection in Videos with Unstable Illumination Conditions
NASA Astrophysics Data System (ADS)
Sidyakin, Sergey V.; Vishnyakov, Boris V.; Vizilter, Yuri V.; Roslov, Nikolay I.
2016-06-01
In this paper we propose a new approach for change detection and moving-object detection in videos with unstable, abrupt illumination changes. This approach is based on mutual comparative filters and background normalization. We give the definitions of mutual comparative filters and outline their strong advantage for change detection purposes. The presented approach allows us to deal with changing illumination conditions in a simple and efficient way, and avoids the drawbacks of models that assume particular color transformation laws. The proposed procedure can be used to improve a number of background modelling methods that are not specifically designed to work under illumination changes.
Physically based model for extracting dual permeability parameters using non-Newtonian fluids
NASA Astrophysics Data System (ADS)
Abou Najm, M. R.; Basset, C.; Stewart, R. D.; Hauswirth, S.
2017-12-01
Dual permeability models are effective for the assessment of flow and transport in structured soils with two dominant structures. The major challenge to those models remains the ability to determine appropriate and unique parameters through affordable, simple, and non-destructive methods. This study investigates the use of water and a non-Newtonian fluid in saturated flow experiments to derive physically based parameters required for improved flow predictions using dual permeability models. We assess the ability of these two fluids to accurately estimate the representative pore sizes in dual-domain soils, by determining the effective pore sizes of macropores and micropores. We developed two sub-models that solve for the effective macropore size assuming either cylindrical (e.g., biological pores) or planar (e.g., shrinkage cracks and fissures) pore geometries, with the micropores assumed to be represented by a single effective radius. Furthermore, the model solves for the percent contribution to flow (w_i) corresponding to the representative macro- and micropores. A user-friendly solver was developed to numerically solve the system of equations, given that relevant non-Newtonian viscosity models lack forms conducive to analytical integration. The proposed dual-permeability model is a unique attempt to derive physically based parameters capable of measuring dual hydraulic conductivities, and therefore may be useful in reducing parameter uncertainty and improving hydrologic model predictions.
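To make the inversion idea concrete, here is a toy Python sketch under strong simplifying assumptions: both domains are bundles of cylindrical capillaries, water follows Hagen-Poiseuille flow, the non-Newtonian test fluids follow power-law (Ostwald-de Waele) tube flow, and three test fluids are used so the four unknowns are identifiable. The geometry, fluid properties, and use of a generic root finder are illustrative choices, not the authors' formulation:

```python
# Toy dual-domain pore-size inversion: cylindrical capillary bundles,
# Hagen-Poiseuille water flow, and power-law tube flow for the test fluids.
# All numbers are synthetic; real data would replace Q_w and Q_nn.
import numpy as np
from scipy.optimize import fsolve

G = 1000.0                    # applied pressure gradient (Pa/m)
MU_W = 1e-3                   # water viscosity (Pa s)
FLUIDS = [(0.5, 0.8), (2.0, 0.6), (5.0, 0.4)]   # (K [Pa s^n], n) per fluid

def q_water(r):
    return np.pi * r**4 * G / (8.0 * MU_W)

def q_powerlaw(r, K, n):
    return np.pi * n / (3.0 * n + 1.0) * r**3 * (G * r / (2.0 * K)) ** (1.0 / n)

# Synthetic "measurements" from a known truth (macro 0.2 mm, micro 0.05 mm).
N1t, r1t, N2t, r2t = 1000.0, 2e-4, 1e5, 5e-5
Q_w = N1t * q_water(r1t) + N2t * q_water(r2t)
Q_nn = [N1t * q_powerlaw(r1t, K, n) + N2t * q_powerlaw(r2t, K, n)
        for K, n in FLUIDS]

def residuals(x):
    N1, r1, N2, r2 = np.exp(x)    # log-unknowns keep everything positive
    res = [(N1 * q_water(r1) + N2 * q_water(r2)) / Q_w - 1.0]
    res += [(N1 * q_powerlaw(r1, K, n) + N2 * q_powerlaw(r2, K, n)) / Q - 1.0
            for (K, n), Q in zip(FLUIDS, Q_nn)]
    return res

N1, r1, N2, r2 = np.exp(fsolve(residuals, np.log([500.0, 1e-4, 5e4, 2e-5])))
w_macro = N1 * q_water(r1) / Q_w    # fractional contribution to water flow
print(f"macro r = {r1:.1e} m, micro r = {r2:.1e} m, w_macro = {w_macro:.2f}")
```

With the synthetic data generated from the "true" bundle, the solver should recover the generating radii and a macropore flow fraction near 0.7, though convergence of a generic root finder depends on the starting guess.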
Lateral interactions and non-equilibrium in surface kinetics
NASA Astrophysics Data System (ADS)
Menzel, Dietrich
2016-08-01
Studies modelling reactions between surface species frequently use Langmuir kinetics, assuming that the layer is in internal equilibrium and that the chemical potential of adsorbates corresponds to that of an ideal gas. Coverage dependences of reacting species and of site blocking are usually treated with simple power-law coverage dependences (linear in the simplest case), neglecting that lateral interactions are strong in adsorbate and co-adsorbate layers and may influence kinetics considerably. My research group has in the past investigated many co-adsorbate systems and simple reactions in them. We have collected a number of examples where strong deviations from simple coverage dependences exist, in blocking, promoting, and selecting reactions. Interactions can range from those between next neighbors to larger distances, and can be quite complex. In addition, internal equilibrium in the layer, as well as equilibrium distributions over product degrees of freedom, can be violated. The latter effect leads to non-equipartition of energy over molecular degrees of freedom (for products) or non-equal response to those degrees of freedom (for reactants). While such behavior can usually be described by dynamic or kinetic models, the deeper reasons require detailed theoretical analysis. Here, a selection of such cases is reviewed to exemplify these points.
Nonlinear multiplicative dendritic integration in neuron and network models
Zhang, Danke; Li, Yuanqing; Rasch, Malte J.; Wu, Si
2013-01-01
Neurons receive inputs from thousands of synapses distributed across dendritic trees of complex morphology. It is known that dendritic integration of excitatory and inhibitory synapses can be highly non-linear in reality and can depend heavily on the exact location and spatial arrangement of inhibitory and excitatory synapses on the dendrite. Despite this known fact, most neuron models used in artificial neural networks today still describe only the voltage potential of a single somatic compartment and assume a simple linear summation of all individual synaptic inputs. We here suggest a new biophysically motivated derivation of a single-compartment model that integrates the non-linear effects of shunting inhibition, where an inhibitory input on the route of an excitatory input to the soma cancels or “shunts” the excitatory potential. In particular, our integration of non-linear dendritic processing into the neuron model follows a simple multiplicative rule, suggested recently by experiments, and allows for strict mathematical treatment of network effects. Using our new formulation, we further devised a spiking network model where inhibitory neurons act as global shunting gates, and show that the network exhibits persistent activity in a low firing regime. PMID:23658543
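A minimal point-neuron sketch of such a multiplicative shunting rule is given below; the specific gain form g_e/(1 + c·g_i) is an illustrative assumption, not the paper's exact derivation:

```python
# A point-neuron input rule with multiplicative shunting: inhibition lying on
# the path of an excitatory input scales that input down rather than
# subtracting from it. The gain form 1/(1 + c*g_i) is illustrative.

def somatic_drive(exc, inh_on_path, c=1.0):
    """exc: excitatory conductances; inh_on_path: total inhibitory
    conductance lying between each synapse and the soma."""
    return sum(g_e / (1.0 + c * g_i) for g_e, g_i in zip(exc, inh_on_path))

# Same total inhibition, different placement: on-path inhibition vetoes its
# excitatory partner, off-path inhibition (g_i = 0) leaves it untouched.
print(somatic_drive([1.0, 1.0], [4.0, 0.0]))  # 1.2
print(somatic_drive([1.0, 1.0], [2.0, 2.0]))  # ~0.67
```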
NASA Astrophysics Data System (ADS)
Gualdesi, Lavinio
2017-04-01
Mooring lines in the ocean might be seen as a pretty simple seamanlike activity. Connecting valuable scientific instrumentation to them transforms this simple activity into a sophisticated engineering effort that needs to be accurately designed, developed, deployed, monitored and, hopefully, recovered with its precious load of scientific data. This work is a historical travel along the efforts carried out by scientists all over the world to successfully predict mooring line behaviour through both mathematical simulation and experimental verification. It is at first glance unexpected how many factors one must observe to get closer and closer to a real ocean situation. Most models apply equally to mooring lines and towed-body line equations. Numerous references are provided, starting from the oldest one, due to Isaac Newton: in his "Philosophiae Naturalis Principia Mathematica" (1687) the English scientist, while discussing the law of motion for bodies in a resistant medium, envisages a hyperbolic fitting to the phenomenon, including asymptotic behaviour in non-resistant media. A non-exhaustive set of mathematical simulations of mooring line trajectory prediction is listed hereunder to document how the subject has been under scientific focus for almost a century.
- Pode (1951): before the diffusion of personal computers, a tabular calculus of cable geometry used by generations of engineers, with the following limitations and approximations: tangential drag coefficients were assumed to be negligible, and a steady current flow was assumed, as in the towed configuration.
- Chabra (1982): a finite element method that assumes an arbitrary deflection angle for the top section and calculates equilibrium equations down to the sea floor, iterating up to a compliant solution.
- Gualdesi (1987): ANAMOOR, a Fortran program based on the iterative methods above, including experimental data from an intensive mooring campaign and a database of experimental drag coefficients obtained in a wind tunnel for the instrumentation and verified in ocean moorings.
- Dangov (1987): a set of Fortran routines, due to a Canadian scientist, to analyse discrepancies between model and experimental data caused by strumming effects on the mooring line. Acoustic Doppler current profiler data were adopted for the first time as model input.
- Skop and O'Hara (1968): static analysis of a three-dimensional multi-leg model.
- Knutson (1987): a model developed at the David Taylor Model Basin based on towed models.
- Henry Berteaux (1990): SFMOOR, an iterative FEM analysis fully fitted with a mooring-components database, developed by a WHOI scientist.
- Henry Berteaux (1990): SSMOOR, the same model applied to sub-surface moorings.
- Gobat and Grosenbaugh (1998): a fully developed method based on strip theory, by WHOI scientists. Experimental validation results are not known.
Traas, T P; Luttik, R; Jongbloed, R H
1996-08-01
In previous studies, the risk of toxicant accumulation in food chains was used to calculate quality criteria for surface water and soil. A simple algorithm was used to calculate maximum permissible concentrations [MPC = no-observed-effect concentration/bioconcentration factor (NOEC/BCF)]. These studies were limited to simple food chains. This study presents a method to calculate MPCs for more complex food webs of predators, expanding the previous method. First, toxicity data (NOECs) for several compounds were corrected for differences between laboratory animals and animals in the wild. Second, for each compound, these NOECs were assumed to be a sample from a log-logistic distribution of mammalian and avian NOECs. Third, bioaccumulation factors (BAFs) for major food items of predators were collected and assumed to derive from different log-logistic distributions of BAFs. Fourth, MPCs for each compound were calculated using Monte Carlo sampling from the NOEC and BAF distributions. An uncertainty analysis for cadmium was performed to identify the most uncertain parameters of the model. Model analysis indicated that most of the prediction uncertainty of the model can be ascribed to uncertainty in species sensitivity as expressed by the NOECs. A very small proportion of model uncertainty is contributed by BAFs from food webs. Correction factors for the conversion of NOECs from laboratory conditions to the field have some influence on the final value of the MPC5, but the total prediction uncertainty of the MPC is quite large. It is concluded that the uncertainty in species sensitivity is quite large; since toxicity testing on mammalian or avian predators is ethically undesirable, using this uncertainty in the proposed method for calculating MPC distributions cannot be avoided. The fifth percentile of the MPC (MPC5) is suggested as a safe value for top predators.
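The Monte Carlo step lends itself to a compact sketch. The following Python fragment assumes log-logistic NOEC and BAF distributions with invented parameters and reads off the fifth percentile of the resulting MPC distribution:

```python
# Monte Carlo MPC distribution: MPC = NOEC / BAF with both factors drawn
# from log-logistic distributions. Parameter values are invented.
import numpy as np

rng = np.random.default_rng(1)

def log_logistic(median, shape, size, rng):
    # If X is log-logistic, ln X is logistic(loc=ln median, scale=1/shape).
    return median * np.exp(rng.logistic(0.0, 1.0 / shape, size))

noec = log_logistic(median=10.0, shape=2.5, size=100_000, rng=rng)  # mg/kg
baf = log_logistic(median=3.0, shape=4.0, size=100_000, rng=rng)    # kg/kg
mpc = noec / baf
print(f"MPC5 = {np.percentile(mpc, 5):.2f} mg/kg")   # 5th percentile as the
                                                     # safe value for predators
```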
USDA-ARS?s Scientific Manuscript database
Simple sequence repeat (SSR) markers are widely used tools for inferences about genetic diversity, phylogeography and spatial genetic structure. Their applications assume that variation among alleles is essentially caused by an expansion or contraction of the number of repeats and that, accessorily,...
NASA Astrophysics Data System (ADS)
De Geeter, N.; Crevecoeur, G.; Leemans, A.; Dupré, L.
2015-01-01
In transcranial magnetic stimulation (TMS), an applied alternating magnetic field induces an electric field in the brain that can interact with the neural system. It is generally assumed that this induced electric field is the crucial effect exciting a certain region of the brain. More specifically, it is the component of this field parallel to the neuron’s local orientation, the so-called effective electric field, that can initiate neuronal stimulation. Deeper insights into the stimulation mechanisms can be acquired through extensive TMS modelling. Most models study simple representations of neurons with assumed geometries, whereas we embed realistic neural trajectories computed using tractography based on diffusion tensor images. This way of modelling ensures a more accurate spatial distribution of the effective electric field that is, in addition, patient and case specific. The case study of this paper focuses on single-pulse stimulation of the left primary motor cortex with a standard figure-of-eight coil. Including realistic neural geometry in the model demonstrates the strong and localized variations of the effective electric field between the tracts themselves and along them, due to the interplay of factors such as the tract’s position and orientation in relation to the TMS coil, the neural trajectory and its course along the white and grey matter interface. Furthermore, the influence of changes in the coil orientation is studied. Investigating the impact of tissue anisotropy confirms that its contribution is not negligible. Moreover, assuming isotropic tissues leads to errors of the same size as rotating or tilting the coil by 10 degrees. In contrast, the model proves to be less sensitive to the poorly known tissue conductivity values.
Learning versus correct models: influence of model type on the learning of a free-weight squat lift.
McCullagh, P; Meyer, K N
1997-03-01
It has been assumed that demonstrating the correct movement is the best way to impart task-relevant information. However, empirical verification with simple laboratory skills has shown that using a learning model (showing an individual in the process of acquiring the skill to be learned) may accelerate skill acquisition and increase retention more than using a correct model. The purpose of the present study was to compare the effectiveness of viewing correct versus learning models on the acquisition of a sport skill (free-weight squat lift). Forty female participants were assigned to four learning conditions: physical practice receiving feedback, learning model with model feedback, correct model with model feedback, and learning model without model feedback. Results indicated that viewing either a correct or learning model was equally effective in learning correct form in the squat lift.
A simple hyperbolic model for communication in parallel processing environments
NASA Technical Reports Server (NTRS)
Stoica, Ion; Sultan, Florin; Keyes, David
1994-01-01
We introduce a model for communication costs in parallel processing environments, called the 'hyperbolic model,' which generalizes two-parameter dedicated-link models in an analytically simple way. Dedicated interprocessor links parameterized by a latency and a transfer rate that are independent of load are assumed by many existing communication models; such models are unrealistic for workstation networks. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes that initiate the sending and receiving of the information and in which internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. The direction of graph edges specifies the flow of the information carried through messages. Each CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. The parameters are evaluated in the limits of very large and very small messages. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining an approximation for the service time that is exact in both the large- and small-message limits. The model is validated on a dedicated Ethernet network of workstations by experiments with communication subprograms arising in scientific applications, for which a tight fit of the model predictions with actual measurements of the communication and synchronization time between end processes is demonstrated. The model is then used to evaluate the performance of two simple parallel scientific applications from partial differential equations: domain decomposition and time-parallel multigrid. In an appropriate limit, we also show the compatibility of the hyperbolic model with the recently proposed LogP model.
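The fragment below sketches the two ideas this abstract turns on: a two-parameter service time with a fixed overhead t0 dominating small messages and a per-byte time g dominating large ones, and the reduction of two serial CBs by adding both limits. The hyperbolic interpolating function sqrt(t0^2 + (g*m)^2) is an assumption with the right asymptotes, not necessarily the paper's exact form:

```python
# Two-parameter service-time model, exact in the small- and large-message
# limits, plus a serial-composition reduction that preserves both limits.
import math

def service_time(m, t0, g):
    return math.sqrt(t0**2 + (g * m) ** 2)   # -> t0 as m->0, -> g*m as m->inf

def reduce_serial(cb_a, cb_b):
    # Two communication blocks traversed in sequence: both limits add,
    # so the reduced block keeps the exact asymptotic behavior.
    (t0a, ga), (t0b, gb) = cb_a, cb_b
    return (t0a + t0b, ga + gb)

link = reduce_serial((1e-4, 1e-8), (5e-5, 4e-8))   # e.g. protocol stack + wire
print(service_time(1_000_000, *link))               # ~0.05 s for a 1 MB message
```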
A semiparametric spatio-temporal model for solar irradiance data
Patrick, Joshua D.; Harvill, Jane L.; Hansen, Clifford W.
2016-03-01
Here, we evaluate semiparametric spatio-temporal models for global horizontal irradiance at high spatial and temporal resolution. These models represent the spatial domain as a lattice and are capable of predicting irradiance at lattice points, given data measured at other lattice points. Using data from a 1.2 MW PV plant located in Lanai, Hawaii, we show that a semiparametric model can be more accurate than simple interpolation between sensor locations. We investigate spatio-temporal models with separable and nonseparable covariance structures and find no evidence to support assuming a separable covariance structure. These results indicate a promising approach for modeling irradiance at high spatial resolution consistent with available ground-based measurements. Moreover, this kind of modeling may find application in design, valuation, and operation of fleets of utility-scale photovoltaic power systems.
An instrumental electrode model for solving EIT forward problems.
Zhang, Weida; Li, David
2014-10-01
An instrumental electrode model (IEM) capable of describing the performance of electrical impedance tomography (EIT) systems in the MHz frequency range has been proposed. Compared with the commonly used complete electrode model (CEM), which assumes ideal front-end interfaces, the proposed model considers the effects of non-ideal components in the front-end circuits. This introduces an extra boundary condition in the forward model and offers more accurate modelling of EIT systems. We have demonstrated its performance using simple geometry structures and compared the results with the CEM and full Maxwell methods. The IEM can provide a significantly more accurate approximation than the CEM in the MHz frequency range, where the full Maxwell methods are favoured over the quasi-static approximation. The improved electrode model will facilitate the future characterization and front-end design of real-world EIT systems.
AN ANALYTIC RADIATIVE-CONVECTIVE MODEL FOR PLANETARY ATMOSPHERES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Tyler D.; Catling, David C., E-mail: robinson@astro.washington.edu
2012-09-20
We present an analytic one-dimensional radiative-convective model of the thermal structure of planetary atmospheres. Our model assumes that thermal radiative transfer is gray and can be represented by the two-stream approximation. Model atmospheres are assumed to be in hydrostatic equilibrium, with a power-law scaling between the atmospheric pressure and the gray thermal optical depth. The convective portions of our models are taken to follow adiabats that account for condensation of volatiles through a scaling parameter to the dry adiabat. By combining these assumptions, we produce simple, analytic expressions that allow calculations of the atmospheric pressure-temperature profile, as well as expressions for the profiles of thermal radiative flux and convective flux. We explore the general behaviors of our model. These investigations encompass (1) worlds where atmospheric attenuation of sunlight is weak, which we show tend to have relatively high radiative-convective boundaries; (2) worlds with some attenuation of sunlight throughout the atmosphere, which we show can produce either shallow or deep radiative-convective boundaries, depending on the strength of sunlight attenuation; and (3) strongly irradiated giant planets (including hot Jupiters), where we explore the conditions under which these worlds acquire detached convective regions in their mid-tropospheres. Finally, we validate our model and demonstrate its utility through comparisons to the average observed thermal structure of Venus, Jupiter, and Titan, and by comparing computed flux profiles to more complex models.
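A stripped-down numerical version of this construction, using the classic Eddington gray radiative-equilibrium profile, a power-law tau(p), and a dry adiabat patched on below the first convectively unstable level, looks as follows; all parameter values are illustrative rather than the paper's:

```python
# Gray two-stream radiative equilibrium (Eddington form), hydrostatic
# power-law tau(p), and a dry adiabat stitched on where the radiative
# profile becomes convectively unstable. Illustrative parameters.
import numpy as np

T_eff = 255.0        # effective temperature (K)
tau0, n = 2.0, 2.0   # optical depth at p0 and its pressure-scaling exponent
p0 = 1e5             # reference (surface) pressure (Pa)
kappa = 2.0 / 7.0    # R/cp for a diatomic atmosphere

p = np.logspace(3, 5, 400)                   # 10 hPa .. 1000 hPa, top down
tau = tau0 * (p / p0) ** n
T_rad = (0.75 * T_eff**4 * (tau + 2.0 / 3.0)) ** 0.25

# Radiative-convective boundary: first level (from the top) where the
# radiative profile is steeper than the adiabat, d ln T / d ln p > kappa.
dlnT_dlnp = np.gradient(np.log(T_rad), np.log(p))
i_rc = np.argmax(dlnT_dlnp > kappa)
T = T_rad.copy()
T[i_rc:] = T_rad[i_rc] * (p[i_rc:] / p[i_rc]) ** kappa   # follow the adiabat
print(f"RC boundary near {p[i_rc]/100:.0f} hPa, surface T = {T[-1]:.1f} K")
```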
Investigating the Effect of Damage Progression Model Choice on Prognostics Performance
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhury, Indranil; Narasimhan, Sriram; Saha, Sankalita; Saha, Bhaskar; Goebel, Kai
2011-01-01
The success of model-based approaches to systems health management depends largely on the quality of the underlying models. In model-based prognostics, it is especially the quality of the damage progression models, i.e., the models describing how damage evolves as the system operates, that determines the accuracy and precision of remaining useful life predictions. Several common forms of these models are generally assumed in the literature, but are often not supported by physical evidence or physics-based analysis. In this paper, using a centrifugal pump as a case study, we develop different damage progression models. In simulation, we investigate how model changes influence prognostics performance. Results demonstrate that, in some cases, simple damage progression models are sufficient. But, in general, the results show a clear need for damage progression models that are accurate over long time horizons under varied loading conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eken, Tuna; Mayeda, Kevin; Hofstetter, Abraham; Gok, Rengin; Orgulu, Gonca; Turkelli, Niyazi
A recently developed coda magnitude methodology was applied to selected broadband stations in Turkey for the purpose of testing the coda method in a large, laterally complex region. As found in other, albeit smaller, regions, coda envelope amplitude measurements are significantly less variable than distance-corrected direct wave measurements (i.e., L_g and surface waves), by roughly a factor of 3 to 4. Despite strong lateral crustal heterogeneity in Turkey, they found that the region could be adequately modeled assuming a simple 1-D, radially symmetric path correction. After calibrating the stations ISP, ISKB and MALT for local and regional distances, single-station moment-magnitude estimates (M_W) derived from the coda spectra were in excellent agreement with those determined from multistation waveform modeling inversions, exhibiting a data standard deviation of 0.17. Though the calibration was validated using large events, the results of the calibration will extend M_W estimates to significantly smaller events which could not otherwise be waveform modeled. The successful application of the method is remarkable considering the significant lateral complexity in Turkey and the simple assumptions used in the coda method.
Seismic Safety Of Simple Masonry Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guadagnuolo, Mariateresa; Faella, Giuseppe
2008-07-08
Several masonry buildings comply with the rules for simple buildings provided by seismic codes. For these buildings explicit safety verifications are not compulsory if specific code rules are fulfilled; it is assumed that their fulfilment ensures a suitable seismic behaviour of buildings and thus adequate safety under earthquakes. Italian and European seismic codes differ in their requirements for simple masonry buildings, mostly concerning the building typology, the building geometry and the acceleration at the site. Obviously, a large percentage of the buildings deemed simple by the codes should satisfy the numerical safety verification, so that no confusion or uncertainty arises for the designers who must use the codes. This paper aims at evaluating the seismic response of some simple unreinforced masonry buildings that comply with the provisions of the new Italian seismic code. Two-story buildings with different geometries are analysed, and results from nonlinear static analyses performed by varying the acceleration at the site are presented and discussed. Indications on the congruence between the code rules and the results of numerical analyses performed according to the code itself are supplied; in this context, the results can contribute to improving the seismic code requirements.
Parallel constraint satisfaction in memory-based decisions.
Glöckner, Andreas; Hodges, Sara D
2011-01-01
Three studies sought to investigate decision strategies in memory-based decisions and to test the predictions of the parallel constraint satisfaction (PCS) model for decision making (Glöckner & Betsch, 2008). Time pressure was manipulated and the model was compared against simple heuristics (take the best and equal weight) and a weighted additive strategy. From PCS we predicted that fast intuitive decision making is based on compensatory information integration and that decision time increases and confidence decreases with increasing inconsistency in the decision task. In line with these predictions we observed a predominant usage of compensatory strategies under all time-pressure conditions and even with decision times as short as 1.7 s. For a substantial number of participants, choices and decision times were best explained by PCS, but there was also evidence for use of simple heuristics. The time-pressure manipulation did not significantly affect decision strategies. Overall, the results highlight intuitive, automatic processes in decision making and support the idea that human information-processing capabilities are less severely bounded than often assumed.
The coalescent process in models with selection and recombination.
Hudson, R R; Kaplan, N L
1988-11-01
The statistical properties of the process describing the genealogical history of a random sample of genes at a selectively neutral locus which is linked to a locus at which natural selection operates are investigated. It is found that the equations describing this process are simple modifications of the equations describing the process assuming that the two loci are completely linked. Thus, the statistical properties of the genealogical process for a random sample at a neutral locus linked to a locus with selection follow from the results obtained for the selected locus. Sequence data from the alcohol dehydrogenase (Adh) region of Drosophila melanogaster are examined and compared to predictions based on the theory. It is found that the spatial distribution of nucleotide differences between Fast and Slow alleles of Adh is very similar to the spatial distribution predicted if balancing selection operates to maintain the allozyme variation at the Adh locus. The spatial distribution of nucleotide differences between different Slow alleles of Adh do not match the predictions of this simple model very well.
Sea-ice floe-size distribution in the context of spontaneous scaling emergence in stochastic systems
NASA Astrophysics Data System (ADS)
Herman, Agnieszka
2010-06-01
Sea-ice floe-size distribution (FSD) in ice-pack covered seas influences many aspects of ocean-atmosphere interactions. However, data concerning FSD in the polar oceans are still sparse, and the processes shaping the observed FSD properties are poorly understood. Typically, power-law FSDs are assumed, although no feasible explanation has been provided either for this or for other properties of the observed distributions. Consequently, no model exists capable of predicting FSD parameters in any particular situation. Here I show that the observed FSDs can be well represented by a truncated Pareto distribution P(x) = x^(-1-α) exp[(1-α)/x], which is an emergent property of a certain group of multiplicative stochastic systems described by the generalized Lotka-Volterra (GLV) equation. Building upon this recognition, the possibility of developing a simple agent-based GLV-type sea-ice model is considered. Contrary to simple power-law FSDs, GLV gives consistent estimates of the total floe perimeter, as well as a floe-area distribution in agreement with observations.
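The quoted distribution is easy to explore numerically: for large x it decays as the pure power law x^(-1-α), while the exponential factor suppresses small floes and produces a rollover. A short sketch with an illustrative α:

```python
# Truncated Pareto FSD, P(x) ~ x**(-1-alpha) * exp((1-alpha)/x):
# power-law tail at large x, exponential rollover at small x.
import numpy as np

alpha = 1.8
x = np.logspace(-1, 2, 500)                      # floe size, arbitrary units
pdf = x ** (-1 - alpha) * np.exp((1 - alpha) / x)
pdf /= np.trapz(pdf, x)                          # normalize on this support

x_mode = x[np.argmax(pdf)]                       # analytic: (alpha-1)/(alpha+1)
print(f"rollover near x = {x_mode:.2f}; tail exponent = {-1 - alpha}")
```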
Flavours and infra-red instability in holography
NASA Astrophysics Data System (ADS)
Kundu, Arnab
2017-11-01
With a simple gravitational model in five dimensions, defined by Einstein gravity with a negative cosmological constant, coupled to a Dirac-Born-Infeld and a Chern-Simons term, we explore the fate of BF-bound violation for a probe scalar field and a fluctuation mode of the corresponding geometry. We assume this simple model to capture the dynamics of a strongly coupled SU(N_c) gauge theory with N_f fundamental matter, which in the limit O(N_c) ~ O(N_f) and with a non-vanishing matter density, is holographically described by an AdS2 geometry in the IR. We demonstrate that superconductor/superfluid instabilities are facilitated and spontaneous breaking of translational invariance is inhibited with increasing values of (N_f/N_c). This is similar, in spirit, to known results in large-N_c Quantum Chromodynamics with N_f quarks and a non-vanishing density, in which the chiral density wave phase becomes suppressed and superconducting instabilities become favoured as the number of quarks is increased.
NASA Astrophysics Data System (ADS)
Halliwell, C. M.; McKay, W. A.
1994-02-01
The impact of liquid effluent discharges, from both existing nuclear power stations and from a possible future pressurized water reactor (PWR), on the levels of radioactivity in Welsh Severn coastal waters has been addressed in this study through the use of a simple box model. If a PWR were in operation at Hinkley Point, and assuming that the existing discharges into the estuary remained the same as in 1989, the levels of the most radiologically significant radionuclide, 137Cs, in seawater along the Welsh shoreline are predicted to increase by 7% (inner estuary), 7% (Welsh outer estuary) and 5% (inner channel), and in sediment by 0.3, 1.3 and 2% respectively. The radiation dose rate from 137Cs to members of the coastal population alone would show only a marginal increase due to these changes, and would remain less than 1% of the internationally recognized limit.
NASA Astrophysics Data System (ADS)
Perez, R. J.; Shevalier, M.; Hutcheon, I.
2004-05-01
Gas solubility is of considerable interest, not only for the theoretical understanding of vapor-liquid equilibria, but also due to extensive applications in combined geochemical, engineering, and environmental problems, such as greenhouse gas sequestration. Reliable models for gas solubility calculations in salt waters and hydrocarbons are also valuable when evaluating fluid inclusions saturated with gas components. We have modeled the solubility of methane, ethane, hydrogen, carbon dioxide, hydrogen sulfide, and five other gases in a water-brine-hydrocarbon system by solving a non-linear system of equations composed of modified Henry's Law constants (HLCs) and gas fugacities, assuming binary mixtures. HLCs are a function of pressure, temperature, brine salinity, and hydrocarbon density. Experimental data on vapor pressures and mutual solubilities of binary mixtures provide the basis for the calibration of the proposed model. It is demonstrated that, by using the Setchenow equation, only a relatively simple modification of the pure-water model is required to assess the solubility of gases in brine solutions. Henry's Law constants for gases in hydrocarbons are derived using regular solution theory and Ostwald coefficients available from the literature. We present a set of two-parameter polynomial expressions, which allow simple computation and formulation of the model. Our calculations show that solubility predictions using modified HLCs are acceptable within 0 to 250 °C, 1 to 150 bar, salinities up to 5 molar, and gas concentrations up to 4 molar. Our model is currently being used in the IEA Weyburn CO2 monitoring and storage project.
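The Setchenow correction mentioned here is a one-line model: brine solubility is the pure-water solubility scaled down exponentially in salt molality. A minimal sketch, with a rough literature-style salting-out coefficient for CO2 that should be treated as illustrative:

```python
# Setchenow (salting-out) correction: log10(S_pure / S_brine) = k * m,
# so S_brine = S_pure * 10**(-k*m). The CO2 coefficient is a rough value.

def brine_solubility(s_pure, k_setchenow, molality):
    """Gas solubility in brine from the pure-water value (same units)."""
    return s_pure * 10 ** (-k_setchenow * molality)

# CO2 at ~1 molal NaCl: roughly a 25% reduction versus pure water.
print(brine_solubility(s_pure=1.0, k_setchenow=0.12, molality=1.0))  # ~0.76
```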
NASA Astrophysics Data System (ADS)
Wübbeler, Gerd; Bodnar, Olha; Elster, Clemens
2018-02-01
Weighted least-squares estimation is commonly applied in metrology to fit models to measurements that are accompanied with quoted uncertainties. The weights are chosen in dependence on the quoted uncertainties. However, when data and model are inconsistent in view of the quoted uncertainties, this procedure does not yield adequate results. When it can be assumed that all uncertainties ought to be rescaled by a common factor, weighted least-squares estimation may still be used, provided that a simple correction of the uncertainty obtained for the estimated model is applied. We show that these uncertainties and credible intervals are robust, as they do not rely on the assumption of a Gaussian distribution of the data. Hence, common software for weighted least-squares estimation may still safely be employed in such a case, followed by a simple modification of the uncertainties obtained by that software. We also provide means of checking the assumptions of such an approach. The Bayesian regression procedure is applied to analyze the CODATA values for the Planck constant published over the past decades in terms of three different models: a constant model, a straight line model and a spline model. Our results indicate that the CODATA values may not have yet stabilized.
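The "common rescaling factor" approach can be sketched in a few lines: perform the weighted least-squares fit, then inflate the resulting uncertainty by the Birge ratio sqrt(chi2/dof) when the data are mutually inconsistent. The numbers below merely resemble published Planck-constant values (in units of 1e-34 J s) and are for illustration:

```python
# WLS fit of a constant model, followed by the Birge-ratio correction of
# the uncertainty when the data are inconsistent with the quoted u's.
import numpy as np

y = np.array([6.62606957, 6.62607015, 6.62606896, 6.62607004])  # illustrative
u = np.array([0.00000029, 0.00000013, 0.00000033, 0.00000021])  # quoted u's

w = 1.0 / u**2
est = np.sum(w * y) / np.sum(w)          # WLS estimate of the constant
u_est = np.sqrt(1.0 / np.sum(w))         # its standard WLS uncertainty

chi2 = np.sum(w * (y - est) ** 2)
dof = len(y) - 1
birge = np.sqrt(chi2 / dof)              # common uncertainty rescaling factor
u_corrected = u_est * max(birge, 1.0)    # inflate only if data are inconsistent
print(est, u_est, birge, u_corrected)
```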
Updates on Force Limiting Improvements
NASA Technical Reports Server (NTRS)
Kolaini, Ali R.; Scharton, Terry
2013-01-01
The following conventional force limiting methods, currently practiced in deriving force limiting specifications, assume a one-dimensional translational source and load apparent masses: the simple TDOF model; semi-empirical force limits; the apparent mass method, etc.; and the impedance method. Uncorrelated motion of the mounting points for components mounted on panels, and correlated but out-of-phase motions of the support structures, are important and should be considered in deriving force limiting specifications. In this presentation, "rock-n-roll" motions of components supported by panels are discussed, which lead to more realistic force limiting specifications.
Necessary optimality conditions for infinite dimensional state constrained control problems
NASA Astrophysics Data System (ADS)
Frankowska, H.; Marchini, E. M.; Mazzola, M.
2018-06-01
This paper is concerned with first order necessary optimality conditions for state constrained control problems in separable Banach spaces. Assuming inward pointing conditions on the constraint, we give a simple proof of Pontryagin maximum principle, relying on infinite dimensional neighboring feasible trajectories theorems proved in [20]. Further, we provide sufficient conditions guaranteeing normality of the maximum principle. We work in the abstract semigroup setting, but nevertheless we apply our results to several concrete models involving controlled PDEs. Pointwise state constraints (as positivity of the solutions) are allowed.
Entropic Repulsion Between Fluctuating Surfaces
NASA Astrophysics Data System (ADS)
Janke, W.
The statistical mechanics of fluctuating surfaces plays an important role in a variety of physical systems, ranging from biological membranes to world sheets of strings in theories of fundamental interactions. In many applications it is a good approximation to assume that the surfaces possess no tension. Their statistical properties are then governed by curvature energies only, which allow for gigantic out-of-plane undulations. These fluctuations are the “entropic” origin of long-range repulsive forces in layered surface systems. Theoretical estimates of these forces for simple model surfaces are surveyed and compared with recent Monte Carlo simulations.
CMB ISW-lensing bispectrum from cosmic strings
NASA Astrophysics Data System (ADS)
Yamauchi, Daisuke; Sendouda, Yuuiti; Takahashi, Keitaro
2014-02-01
We study the effect of weak lensing by cosmic (super-)strings on the higher-order statistics of the cosmic microwave background (CMB). A cosmic string segment is expected to cause weak lensing as well as an integrated Sachs-Wolfe (ISW) effect, the so-called Gott-Kaiser-Stebbins (GKS) effect, to the CMB temperature fluctuation, which are thus naturally cross-correlated. We point out that, in the presence of such a correlation, yet another kind of the post-recombination CMB temperature bispectra, the ISW-lensing bispectra, will arise in the form of products of the auto- and cross-power spectra. We first present an analytic method to calculate the autocorrelation of the temperature fluctuations induced by the strings, and the cross-correlation between the temperature fluctuation and the lensing potential, both due to the string network. In our formulation, the evolution of the string network is assumed to be characterized by the simple analytic model, the velocity-dependent one-scale model, and the intercommutation probability is properly incorporated in order to characterize the possible superstringy nature. Furthermore, the obtained power spectra are dominated by the Poisson-distributed string segments, whose correlations are assumed to satisfy the simple relations. We then estimate the signal-to-noise ratios of the string-induced ISW-lensing bispectra and discuss the detectability of such CMB signals from the cosmic string network. It is found that in the case of the smaller string tension, Gμ ≪ 10^-7, the ISW-lensing bispectrum induced by a cosmic string network can constrain the string-model parameters even more tightly than the purely GKS-induced bispectrum in the ongoing and future CMB observations on small scales.
Modelling of capital asset pricing by considering the lagged effects
NASA Astrophysics Data System (ADS)
Sukono; Hidayat, Y.; Bon, A. Talib bin; Supian, S.
2017-01-01
In this paper the problem of modelling the Capital Asset Pricing Model (CAPM) with lagged effects is discussed. It is assumed that asset returns are influenced by the market return and the return of risk-free assets. The relationship between asset returns, the market return, and the return of risk-free assets is analysed using a CAPM regression equation and a distributed-lag CAPM regression equation. Building on the distributed-lag CAPM regression equation, this paper also develops a regression equation for the Koyck-transformed CAPM. The results show that the Koyck-transformed CAPM regression equation has the advantage of simplicity, as it requires only three parameters, compared with the distributed-lag CAPM regression equation.
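A sketch of the Koyck-transformed regression on simulated data: a geometric distributed lag of market returns collapses to three parameters, an intercept, a market beta, and the lag weight lambda multiplying the previous asset return. Everything below is illustrative:

```python
# Koyck-transformed CAPM sketch: r_t = c + beta * rm_t + lam * r_{t-1} + e_t,
# estimated by ordinary least squares on simulated returns.
import numpy as np

rng = np.random.default_rng(0)
T, beta_true, lam_true = 500, 1.2, 0.4
rm = rng.normal(0.0, 0.01, T)                  # market excess returns
r = np.zeros(T)                                # asset excess returns
for t in range(1, T):
    r[t] = (0.0002 + beta_true * rm[t] + lam_true * r[t - 1]
            + rng.normal(0.0, 0.005))

X = np.column_stack([np.ones(T - 1), rm[1:], r[:-1]])   # [c, rm_t, r_{t-1}]
c, beta, lam = np.linalg.lstsq(X, r[1:], rcond=None)[0]
print(f"beta = {beta:.2f}, lambda = {lam:.2f}")         # ~1.2 and ~0.4
```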
Replication of Cancellation Orders Using First-Passage Time Theory in Foreign Currency Market
NASA Astrophysics Data System (ADS)
Boilard, Jean-François; Kanazawa, Kiyoshi; Takayasu, Hideki; Takayasu, Misako
Our research focuses on the annihilation dynamics of limit orders in a spot foreign currency market for various currency pairs. We analyze the cancellation order distribution conditioned on the normalized distance from the mid-price, where the normalized distance is defined as the final distance divided by the initial distance. To reproduce the real data, we introduce two simple models that assume the market price moves randomly and cancellation occurs either after a fixed time t or according to a Poisson process. Our models qualitatively reproduce the basic statistical properties of cancellation orders in the data when limit orders are cancelled according to the Poisson process. We briefly discuss the implications of our findings for the construction of more detailed microscopic models.
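The Poisson-cancellation variant is straightforward to simulate: each order starts a fixed distance from the mid-price, the mid-price diffuses, and the order is cancelled after an exponentially distributed lifetime. A minimal sketch with invented parameters:

```python
# Random-walk mid-price + Poisson cancellation: record the normalized
# distance (final / initial) of each order at the moment it is cancelled.
import numpy as np

rng = np.random.default_rng(7)
n, d0, sigma, rate = 100_000, 10.0, 1.0, 0.05   # orders, ticks, tick/step, 1/steps

lifetimes = rng.exponential(1.0 / rate, n).astype(int) + 1
# Sum of k unit steps ~ N(0, k), so the walk is sampled in one shot:
final = np.abs(d0 + rng.normal(0.0, sigma, n) * np.sqrt(lifetimes))
normalized = final / d0                          # < 1: price approached order

print(f"median normalized distance = {np.median(normalized):.2f}")
```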
Non-monotonicity and divergent time scale in Axelrod model dynamics
NASA Astrophysics Data System (ADS)
Vazquez, F.; Redner, S.
2007-04-01
We study the evolution of the Axelrod model for cultural diversity, a prototypical non-equilibrium process that exhibits rich dynamics and a dynamic phase transition between diversity and an inactive state. We consider a simple version of the model in which each individual possesses two features that can assume q possibilities. Within a mean-field description in which each individual has just a few interaction partners, we find a phase transition at a critical value q_c between an active, diverse state for q < q_c and a frozen state. For q ≲ q_c, the density of active links is non-monotonic in time and the asymptotic approach to the steady state is controlled by a time scale that diverges as (q - q_c)^(-1/2).
On the Mass Distribution of Animal Species
NASA Astrophysics Data System (ADS)
Redner, Sidney; Clauset, Aaron; Schwab, David
2009-03-01
We develop a simple diffusion-reaction model to account for the broad and asymmetric distribution of adult body masses for species within related taxonomic groups. The model assumes three basic evolutionary features that control body mass: (i) a fixed lower limit that is set by metabolic constraints, (ii) a species extinction risk that is a weakly increasing function of body mass, and (iii) cladogenetic diffusion, in which daughter species have a slight tendency toward larger mass. The steady-state solution for the distribution of species masses in this model can be expressed in terms of the Airy function. This solution gives mass distributions that are in good agreement with data on 4002 terrestrial mammal species from the late Quaternary and 8617 extant bird species.
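A toy Monte Carlo version of these three ingredients, with invented rates, shows the qualitative behavior (a broad, right-skewed spread of log-masses above the lower wall); the paper's analytic steady state is the Airy-function solution:

```python
# Toy cladogenetic diffusion: (i) hard lower wall on log-mass, (ii) extinction
# risk rising weakly with mass, (iii) daughter masses drifting slightly larger.
import numpy as np

rng = np.random.default_rng(42)
x_min = 0.0                          # log10 of the metabolic minimum mass
species = list(np.full(200, x_min + 1.0))
for _ in range(3000):
    x = species[rng.integers(len(species))]
    if rng.random() < 0.01 + 0.02 * (x - x_min):       # (ii) extinction
        if len(species) > 1:
            species.remove(x)
    else:                                              # (iii) cladogenesis
        daughter = max(x_min, x + rng.normal(0.05, 0.3))   # (i) lower wall
        species.append(daughter)

masses = np.array(species)
print(f"{len(masses)} species; median log-mass = {np.median(masses):.2f}")
```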
Interactive Reliability Model for Whisker-toughened Ceramics
NASA Technical Reports Server (NTRS)
Palko, Joseph L.
1993-01-01
Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.
Incentives for Optimal Multi-level Allocation of HIV Prevention Resources
Malvankar, Monali M.; Zaric, Gregory S.
2013-01-01
HIV/AIDS prevention funds are often allocated at multiple levels of decision-making. Optimal allocation of HIV prevention funds maximizes the number of HIV infections averted. However, decision makers often allocate using simple heuristics such as proportional allocation. We evaluate the impact of using incentives to encourage optimal allocation in a two-level decision-making process. We model an incentive-based decision-making process consisting of an upper-level decision maker allocating funds to a single lower-level decision maker who then distributes funds to local programs. We assume that the lower-level utility function is linear in the amount of the budget received from the upper level, the fraction of funds reserved for proportional allocation, and the number of infections averted. We assume that the upper-level objective is to maximize the number of infections averted. We illustrate with an example using data from California, U.S. PMID:23766551
Sn ion energy distributions of ns- and ps-laser produced plasmas
NASA Astrophysics Data System (ADS)
Bayerle, A.; Deuzeman, M. J.; van der Heijden, S.; Kurilovich, D.; de Faria Pinto, T.; Stodolna, A.; Witte, S.; Eikema, K. S. E.; Ubachs, W.; Hoekstra, R.; Versolato, O. O.
2018-04-01
Ion energy distributions arising from laser-produced plasmas of Sn are measured over a wide laser parameter space. Planar-solid and liquid-droplet targets are exposed to infrared laser pulses with energy densities between 1 J cm^-2 and 4 kJ cm^-2 and durations spanning 0.5 ps to 6 ns. The measured ion energy distributions are compared to two self-similar solutions of a hydrodynamic approach assuming isothermal expansion of the plasma plume into vacuum. For planar and droplet targets exposed to ps-long pulses, we find good agreement between the experimental results and the self-similar solution of a semi-infinite simple planar plasma configuration with an exponential density profile. The ion energy distributions resulting from solid Sn exposed to ns-pulses agree with solutions of a limited-mass model that assumes a Gaussian-shaped initial density profile.
Wagner, Peter J.
2012-01-01
Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution. PMID:21795266
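The flavor of this comparison can be reproduced with standard maximum-likelihood fits: fit gamma and lognormal distributions to a set of rates and compare information criteria. The rates below are simulated, so the lognormal should win by construction; in the paper the inputs come from observed character compatibility:

```python
# Gamma vs. lognormal rate-distribution comparison via maximum likelihood.
# Both fits fix loc=0 and return the same number of parameters, so the AIC
# comparison reduces to a likelihood comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rates = rng.lognormal(mean=-1.0, sigma=0.9, size=200)   # simulated rates

def aic(dist, data):
    params = dist.fit(data, floc=0.0)
    ll = np.sum(dist.logpdf(data, *params))
    return 2 * len(params) - 2 * ll

print("gamma   AIC:", aic(stats.gamma, rates))
print("lognorm AIC:", aic(stats.lognorm, rates))   # lower AIC wins
```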
A Black-Scholes Approach to Satisfying the Demand in a Failure-Prone Manufacturing System
NASA Technical Reports Server (NTRS)
Chavez-Fuentes, Jorge R.; Gonzalez, Oscar R.; Gray, W. Steven
2007-01-01
The goal of this paper is to use a financial model and a hedging strategy in a systems application. In particular, the classical Black-Scholes model, which was developed in 1973 to find the fair price of a financial contract, is adapted to satisfy an uncertain demand in a manufacturing system when one of two production machines is unreliable. This financial model together with a hedging strategy are used to develop a closed formula for the production strategies of each machine. The strategy guarantees that the uncertain demand will be met in probability at the final time of the production process. It is assumed that the production efficiency of the unreliable machine can be modeled as a continuous-time stochastic process. Two simple examples illustrate the result.
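For reference, the classical Black-Scholes call-price formula that the paper adapts is reproduced below as a self-contained function; how the authors map machine efficiency and uncertain demand onto these inputs is specific to the paper and not shown here:

```python
# Classical Black-Scholes (1973) price of a European call option.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """European call: spot S, strike K, maturity T (years), rate r, vol sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(f"{bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2):.2f}")  # ~10.45
```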
NASA Technical Reports Server (NTRS)
Smialek, James L.
2002-01-01
An equation has been developed to model the iterative scale growth and spalling process that occurs during cyclic oxidation of high-temperature materials. Parabolic scale growth and spalling of a constant surface area fraction have been assumed. Interfacial spallation of only the thickest segments was also postulated. This simplicity allowed for representation by a simple deterministic summation series. Inputs are the parabolic growth rate constant, the spall area fraction, oxide stoichiometry, and cycle duration. Outputs include the net weight change behavior, as well as the total amount of oxygen and metal consumed, the total amount of oxide spalled, and the mass fraction of oxide spalled. The outputs all follow typical well-behaved trends with the inputs and are in good agreement with previous interfacial models.
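The structure of such a model can be sketched as a short loop: within each cycle the retained scale regrows parabolically, then a constant fraction of it spalls; the net specimen weight change is the oxygen held in the retained scale minus the metal lost in the spalled oxide. Parameter values and the uniform-spall simplification below are illustrative:

```python
# Cyclic-oxidation sketch: parabolic regrowth of the retained scale each
# cycle, followed by spallation of a constant fraction of the scale.
import math

kp = 0.01      # parabolic rate constant (mg^2 cm^-4 h^-1)
dt = 1.0       # cycle duration (h)
fa = 0.05      # fraction of the scale spalled each cycle
s_o = 0.47     # oxygen mass fraction of the oxide (e.g. ~0.47 for Al2O3)

scale, spalled = 0.0, 0.0        # retained oxide, cumulative spalled oxide
for _ in range(500):
    scale = math.sqrt(scale**2 + kp * dt)   # parabolic regrowth
    loss = fa * scale                        # spalled this cycle
    scale -= loss
    spalled += loss

# Net weight change = oxygen in retained scale - metal lost in spalled oxide.
net = s_o * scale - (1 - s_o) * spalled
print(f"retained = {scale:.2f}, spalled = {spalled:.2f}, "
      f"net = {net:+.2f} mg/cm^2")
```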
Pre-operative prediction of surgical morbidity in children: comparison of five statistical models.
Cooper, Jennifer N; Wei, Lai; Fernandez, Soledad A; Minneci, Peter C; Deans, Katherine J
2015-02-01
The accurate prediction of surgical risk is important to patients and physicians. Logistic regression (LR) models are typically used to estimate these risks. However, in the fields of data mining and machine learning, many alternative classification and prediction algorithms have been developed. This study aimed to compare the performance of LR to several data mining algorithms for predicting 30-day surgical morbidity in children. We used the 2012 National Surgical Quality Improvement Program-Pediatric dataset to compare the performance of (1) an LR model that assumed linearity and additivity (simple LR model), (2) an LR model incorporating restricted cubic splines and interactions (flexible LR model), (3) a support vector machine, (4) a random forest, and (5) boosted classification trees for predicting surgical morbidity. The ensemble-based methods showed significantly higher accuracy, sensitivity, specificity, PPV, and NPV than the simple LR model. However, none of the models performed better than the flexible LR model in terms of the aforementioned measures or in model calibration or discrimination. Support vector machines, random forests, and boosted classification trees do not show better performance than LR for predicting pediatric surgical morbidity. After further validation, the flexible LR model derived in this study could be used to assist with clinical decision-making based on patient-specific surgical risks.
Fully Resolved Simulations of 3D Printing
NASA Astrophysics Data System (ADS)
Tryggvason, Gretar; Xia, Huanxiong; Lu, Jiacai
2017-11-01
Numerical simulations of Fused Deposition Modeling (FDM) (or Fused Filament Fabrication), where a filament of hot, viscous polymer is deposited to "print" a three-dimensional object, layer by layer, are presented. A finite volume/front tracking method is used to follow the injection, cooling, solidification and shrinking of the filament. The injection of the hot melt is modeled using a volume source, combined with a nozzle, modeled as an immersed boundary, that follows a prescribed trajectory. The viscosity of the melt depends on the temperature and the shear rate, and the polymer becomes immobile as its viscosity increases. As the polymer solidifies, the stress is found by assuming a hyperelastic constitutive equation. The method is described and its accuracy and convergence properties are tested by grid refinement studies for a simple setup involving two short filaments, one on top of the other. The effect of the various injection parameters, such as nozzle velocity and injection velocity, is briefly examined, and the applicability of the approach to simulating the construction of simple multilayer objects is shown. The role of fully resolved simulations for additive manufacturing and their use for novel processes and as the "ground truth" for reduced-order models is discussed.
Bomphrey, Richard J; Taylor, Graham K; Lawson, Nicholas J; Thomas, Adrian L.R
2005-01-01
Actuator disc models of insect flight are concerned solely with the rate of momentum transfer to the air that passes through the disc. These simple models assume that an even pressure is applied across the disc, resulting in a uniform downwash distribution. However, a correction factor, k, is often included to correct for the difference in efficiency between the assumed even downwash distribution, and the real downwash distribution. In the absence of any empirical measurements of the downwash distribution behind a real insect, the values of k used in the literature have been necessarily speculative. Direct measurement of this efficiency factor is now possible, and could be used to compare the relative efficiencies of insect flight across the Class. Here, we use Digital Particle Image Velocimetry to measure the instantaneous downwash distribution, mid-downstroke, of a tethered desert locust (Schistocerca gregaria). By integrating the downwash distribution, we are thereby able to provide the first direct empirical measurement of k for an insect. The measured value of k=1.12 corresponds reasonably well with that predicted by previous theoretical studies. PMID:16849240
Numerical and Experimental Studies on Impact Loaded Concrete Structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saarenheimo, Arja; Hakola, Ilkka; Karna, Tuomo
2006-07-01
An experimental set-up has been constructed for medium-scale impact tests. The main objective of this effort is to provide data for the calibration and verification of numerical models of a loading scenario in which an aircraft impacts a nuclear power plant. One goal is to develop and put into use numerical methods for predicting the response of reinforced concrete structures to impacts of deformable projectiles that may contain combustible liquid ('fuel'). The loading and the structural behaviour, such as the collapse mechanism and the damage grade, are predicted by simple analytical methods and by the non-linear FE method. In the so-called Riera method, the behavior of the missile material is assumed to be rigid-plastic or rigid visco-plastic. Using elastic-plastic and elastic visco-plastic material models, calculations are carried out with the ABAQUS/Explicit finite element code, assuming an axisymmetric deformation mode for the missile. With both methods, typically, the impact force time history, the velocity of the missile rear end and the missile shortening during the impact were recorded for comparison.
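For context, the Riera approach mentioned above idealizes the missile as rigid-plastic: the force on the target is F(t) = Pc(x) + mu(x)·v(t)^2, while the crushing force alone decelerates the still-rigid portion. A minimal sketch for a uniform missile with invented numbers:

```python
# Riera rigid-plastic impact force for a uniform missile:
# F(t) = Pc + mu * v(t)**2, with only Pc decelerating the uncrushed part.
import numpy as np

L, m_tot, Pc, v0 = 10.0, 2000.0, 2.0e6, 150.0   # m, kg, N (crush force), m/s
mu = m_tot / L                                   # mass per unit length (kg/m)

dt, x, v, t = 1e-5, 0.0, v0, 0.0
history = []
while v > 0.0 and x < L:
    F = Pc + mu * v**2              # force on the target while crushing
    history.append((t, F))
    m_rigid = m_tot - mu * x        # uncrushed mass still moving
    v += -Pc / m_rigid * dt         # only Pc decelerates the rigid part
    x += v * dt
    t += dt

print(f"peak force = {max(f for _, f in history)/1e6:.1f} MN, "
      f"crushed length = {x:.1f} m")
```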
DOE Office of Scientific and Technical Information (OSTI.GOV)
Overton, J.H.; Jarabek, A.M.
1989-01-01
The U.S. EPA advocates the assessment of health-effects data and the calculation of inhaled reference doses as benchmark values for gauging the systemic toxicity of inhaled gases. The assessment often requires an inter- or intra-species dose extrapolation from no-observed-adverse-effect-level (NOAEL) exposure concentrations in animals to human-equivalent NOAEL exposure concentrations. To achieve this, a dosimetric extrapolation procedure was developed based on the form or type of equations that describe the uptake and disposition of inhaled volatile organic compounds (VOCs) in physiologically based pharmacokinetic (PB-PK) models. The procedure assumes allometric scaling of most physiological parameters and that the value of the time-integrated human arterial blood concentration must be limited to no more than that of the experimental animals. The scaling assumption replaces the need for most parameter values and allows the derivation of a simple formula for dose extrapolation of VOCs that gives equivalent or more conservative exposure concentration values than those that would be obtained using a PB-PK model in which scaling was assumed.
Cunningham, J C; Sinka, I C; Zavaliangos, A
2004-08-01
In this first of two articles on the modeling of tablet compaction, the experimental inputs related to the constitutive model of the powder and the powder/tooling friction are determined. The continuum-based analysis of tableting makes use of an elasto-plastic model, which incorporates the elements of yield, plastic flow potential, and hardening, to describe the mechanical behavior of microcrystalline cellulose over the range of densities experienced during tableting. Specifically, a modified Drucker-Prager/cap plasticity model, which includes material parameters such as cohesion, internal friction, and hydrostatic yield pressure that evolve with the internal state variable relative density, was applied. Linear elasticity is assumed with the elastic parameters, Young's modulus, and Poisson's ratio dependent on the relative density. The calibration techniques were developed based on a series of simple mechanical tests including diametrical compression, simple compression, and die compaction using an instrumented die. The friction behavior is measured using an instrumented die and the experimental data are analyzed using the method of differential slices. The constitutive model and frictional properties are essential experimental inputs to the finite element-based model described in the companion article. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 93:2022-2039, 2004
Proton facility economics: the importance of "simple" treatments.
Johnstone, Peter A S; Kerstiens, John; Helsper, Richard
2012-08-01
Given the cost and debt incurred to build a modern proton facility, impetus exists to minimize treatment of patients with complex setups because of their slower throughput. The aim of this study was to determine how many "simple" cases are necessary given different patient loads simply to recoup construction costs and debt service, without beginning to cover salaries, utilities, beam costs, and so on. Simple cases are ones that can be performed quickly because of an easy setup for the patient or because the patient is to receive treatment to just one or two fields. A "standard" construction cost and debt for 1-, 3-, and 4-gantry facilities were calculated from public documents of facilities built in the United States, with 100% of the construction funded through standard 15-year financing at 5% interest. A clinical best case (that each room was completely scheduled with patients over a 14-hour workday) was assumed, and a statistical analysis was modeled with debt, case mix, and payer mix moving independently. Treatment times and reimbursement data from the investigators' facility for varying complexities of patients were extrapolated for varying numbers treated daily. Revenue of $X per treatment was assumed both for pediatric cases (a mix of Medicaid and private payer) and for state Medicare simple case rates. Private payer reimbursement averages $1.75X per treatment. The number of simple patients required daily to cover construction and debt service costs was then derived. A single gantry treating only complex or pediatric patients would need to apply 85% of its treatment slots simply to service debt. However, that same room could cover its debt treating 4 hours of simple patients, thus opening more slots for complex and pediatric patients. A 3-gantry facility treating only complex and pediatric cases would not have enough treatment slots to recoup construction and debt service costs at all. For a 4-gantry center focusing on complex and pediatric cases alone, there would not be enough treatment slots to cover even 60% of debt service. Personnel and recurring costs and profit further reduce the business case for treating more complex patients. Debt is not variable with capacity. Absent philanthropy, financing a modern proton center requires treating a case load emphasizing simple patients even before operating costs are covered and any profit is achieved. Copyright © 2012 American College of Radiology. Published by Elsevier Inc. All rights reserved.
How pigeons discriminate the relative frequency of events.
Keen, R; Machado, A
1999-09-01
This study examined how pigeons discriminate the relative frequencies of events when the events occur serially. In a discrete-trials procedure, 6 pigeons were shown one light nf times and then another nl times. Next, they received food for choosing the light that had occurred the least number of times during the sample. At issue were (a) how the discrimination was related to two variables, the difference between the frequencies of the two lights, D = nf - nl, and the total number of lights in the sample, T = nf + nl; and (b) whether a simple mathematical model of the discrimination process could account for the data. In contrast with models that assume that pigeons count the stimulus lights, engage in mental arithmetic on numerons, or remember the number of stimuli, the present model assumed only that the influence of a sample stimulus on choice increases linearly when the stimulus is presented, but decays exponentially when the stimulus is absent. The results showed that, overall, the pigeons discriminated the relative frequencies well. Their accuracy always increased with the absolute value of the difference D and, for D > 0, it decreased with T. Performance also showed clear recency, primacy, and contextual effects. The model accounted well for the major trends in the data.
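The trace model described above is simple enough to simulate directly. The sketch below assumes illustrative growth and decay parameters (the paper's fitted values are not reproduced here) and lets choice probability follow the relative weakness of the two traces, since the task rewards picking the less frequent light; note that accuracy falls as |D| shrinks at fixed T, as the abstract reports.

```python
import random

def trial(nf, nl, growth=1.0, decay=0.9):
    """Simulate one sample: first light shown nf times, then the other nl times.

    Each light's trace grows linearly when shown and decays exponentially
    when absent; the model then prefers the light with the weaker trace."""
    wf = wl = 0.0
    for _ in range(nf):
        wf = decay * wf + growth   # first light shown: its trace grows
        wl = decay * wl            # the absent light's trace decays
    for _ in range(nl):
        wl = decay * wl + growth
        wf = decay * wf
    p_first = wl / (wf + wl)       # probability of choosing the first light
    chose_first = random.random() < p_first
    return chose_first == (nf < nl)

for nf, nl in [(2, 10), (4, 8), (5, 7)]:   # same total T, shrinking |D|
    acc = sum(trial(nf, nl) for _ in range(20000)) / 20000
    print(f"nf={nf}, nl={nl}: accuracy {acc:.2f}")
```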
Radiation effects induced in pin photodiodes by 40- and 85-MeV protons
NASA Technical Reports Server (NTRS)
Becher, J.; Kernell, R. L.; Reft, C. S.
1985-01-01
PIN photodiodes were bombarded with 40- and 85-MeV protons to a fluence of 1.5 × 10^11 p/sq cm, and the resulting change in spectral response in the near infrared was determined. The photocurrent, dark current and pulse amplitude were measured as a function of proton fluence. Changes in these three measured properties are discussed in terms of changes in the diode's spectral response, minority carrier diffusion length and depletion width. A simple model of induced radiation effects is presented which is in good agreement with the experimental results. The model assumes that incident protons produce charged defects within the depletion region simulating donor type impurities.
A collision probability analysis of the double-heterogeneity problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hebert, A.
1993-10-01
A practical collision probability model is presented for the description of geometries with many levels of heterogeneity. Regular regions of the macrogeometry are assumed to contain a stochastic mixture of spherical grains or cylindrical tubes. Simple expressions for the collision probabilities in the global geometry are obtained as a function of the collision probabilities in the macro- and microgeometries. This model was successfully implemented in the collision probability kernel of the APOLLO-1, APOLLO-2, and DRAGON lattice codes for the description of a broad range of reactor physics problems. Resonance self-shielding and depletion calculations in the microgeometries are possible because each microregion is explicitly represented.
Inherent limitations of probabilistic models for protein-DNA binding specificity
Ruan, Shuxiang
2017-01-01
The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
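The objects discussed above are easy to make concrete. The sketch below builds a toy position weight matrix (all numbers invented), computes the independent-position site probability, and contrasts it with a saturating, Fermi-Dirac-style occupancy, illustrating the non-linear affinity-to-binding-probability relationship the abstract identifies as the main source of error; the pseudo-energy and the chemical-potential value beta_mu are assumptions for illustration.

```python
import numpy as np

bases = "ACGT"
pwm = np.array([                      # rows: positions, cols: A, C, G, T
    [0.70, 0.10, 0.10, 0.10],
    [0.10, 0.70, 0.10, 0.10],
    [0.10, 0.10, 0.70, 0.10],
    [0.25, 0.25, 0.25, 0.25],
])

def pwm_prob(site):
    """Independent-position probability of a site under the PWM."""
    return float(np.prod([pwm[j, bases.index(b)] for j, b in enumerate(site)]))

def occupancy(site, beta_mu=2.0):
    """Saturating binding probability: high-affinity sites pile up near 1 at
    high protein concentration, so occupancy is not proportional to the
    PWM probability."""
    energy = -np.log(pwm_prob(site))  # pseudo-energy derived from the PWM
    return 1.0 / (1.0 + np.exp(energy - beta_mu))

for site in ["ACGT", "CCGT", "TAGT"]:
    print(site, f"PWM prob {pwm_prob(site):.4f}", f"occupancy {occupancy(site):.3f}")
```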
Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions
NASA Astrophysics Data System (ADS)
Soltani, S. S.; Cvetkovic, V.; Destouni, G.
2017-12-01
The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for dispersion at two levels: morphological dispersion and macro-dispersion. The study gives new insights into shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as the "dynamic model", with corresponding KPA computations for three different real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully-integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between "dynamic" and "kinematic pathway" models than the zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models. This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow hillslope compartments are controlled by topography, and therefore application and further development of the simple "kinematic pathway" approach is promising for their modeling.
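A common semi-analytical building block for this kind of pathway computation is the inverse Gaussian travel time density for advection-dispersion along a single pathway, mixed over a pathway length distribution. The sketch below assumes that form together with invented velocity, Peclet number, and exponential pathway length parameters; the paper's exact formulation is not reproduced here.

```python
import numpy as np

def inverse_gaussian_pdf(t, tau, Pe):
    """Travel time density for advection-dispersion along one pathway with
    mean travel time tau and Peclet number Pe."""
    return np.sqrt(Pe * tau / (4.0 * np.pi * t**3)) * \
           np.exp(-Pe * (t - tau)**2 / (4.0 * tau * t))

t = np.linspace(0.01, 60.0, 2000)              # time [yr]
Pe = 10.0                                      # assumed effective Peclet number
lengths = np.random.exponential(500.0, 5000)   # assumed pathway lengths [m]
v = 50.0                                       # assumed effective velocity [m/yr]
taus = lengths / v

# catchment-scale density = average over the pathway (morphological) ensemble
pdf = np.mean([inverse_gaussian_pdf(t, tau, Pe) for tau in taus], axis=0)
print("mean travel time ~", round(float(np.trapz(t * pdf, t)), 1), "yr")
```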
Pore Structure Model for Predicting Elastic Wavespeeds in Fluid-Saturated Sandstones
NASA Astrophysics Data System (ADS)
Zimmerman, R. W.; David, E. C.
2011-12-01
During hydrostatic compression, in the elastic regime, ultrasonic P and S wave velocities measured on rock cores generally increase with pressure, and reach asymptotic values at high pressures. The pressure dependence of seismic velocities is generally thought to be due to the closure of compliant cracks, in which case the high-pressure velocities must reflect only the influence of the non-closable, equant "pores". Assuming that pores can be represented by spheroids, we can relate the elastic properties to the pore structure using an effective medium theory. Moreover, the closure pressure of a thin crack-like pore is directly proportional to its aspect ratio. Hence, our first aim is to use the pressure dependence of seismic velocities to invert the aspect ratio distribution. We use a simple analytical algorithm developed by Zimmerman (Compressibility of Sandstones, 1991), which can be used with any effective medium theory. Previous works have used overly restrictive assumptions, such as assuming that the stiff pores are spherical, or that the interactions between pores can be neglected. Here, we assume that the rock contains an exponential distribution of crack aspect ratios, and one family of stiff pores having an aspect ratio lying somewhere between 0.01 and 1. We develop our model in two versions, using the Differential Scheme and the Mori-Tanaka scheme. The inversion is done using data obtained in dry experiments, since pore fluids have a strong effect on velocities and tend to mask the effect of the pore geometry. This avoids a complicated joint inversion of dry and wet data, such as that done by Cheng and Toksoz (JGR, 1979). Our results show that for many sets of data on sandstones, we can fit the dry velocities very well. Our second aim is to predict the saturated velocities from our pore structure model, noting that at a given differential stress, the pore structure should be the same as for a dry test. Our results show that the Biot-Gassmann predictions always underpredict the rock stiffness and that, for ultrasonic measurements performed at high frequencies (~MHz), it is more accurate to use the results from effective medium theories, which implicitly assume that the fluid is trapped in the pores. Hence, we use the aspect ratio distribution inverted from dry data, but this time introducing fluid into the pores. For a good number of experimental data sets on sandstones, our predictions for the saturated velocities match the experimental data well. This validates the use of a spheroidal model for pores. The results are only very weakly dependent on the choice of the effective medium theory. We conclude that our method, which remains relatively simple, is a useful tool for extracting the pore aspect ratio distribution, as well as for predicting the saturated velocities of sandstones.
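For reference, the proportionality invoked above between closure pressure and aspect ratio is usually written, for a thin penny-shaped crack (whether the authors use this exact prefactor is an assumption here), as

\[ P_{\mathrm{close}} \approx \frac{\pi E_0 \alpha}{4\,(1 - \nu_0^2)} \]

where α is the crack aspect ratio and E_0, ν_0 are the Young's modulus and Poisson's ratio of the solid matrix. For example, with E_0 = 30 GPa and ν_0 = 0.2, a crack of aspect ratio α = 10^-3 closes near π × 30 GPa × 10^-3 / (4 × 0.96) ≈ 25 MPa, which is why the velocities measured at high pressure isolate the stiff, non-closable pores.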
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrinec, S.M.; Russell, C.T.
1995-06-01
The shape of the dayside magnetopause has been studied from both a theoretical and an empirical perspective for several decades. Early theoretical studies of the magnetopause shape assumed an inviscid interaction and normal pressure balance along the entire boundary, with the interior magnetic field and magnetopause currents being solved self-consistently and iteratively, using the Biot-Savart Law. The derived shapes are complicated, due to asymmetries caused by the nature of the dipole field and the direction of flow of the solar wind. These models contain a weak field region or cusp through which the solar wind has direct access to the ionosphere. More recent MHD model results have indicated that the closed magnetic field lines of the dayside magnetosphere can be dragged tailward of the terminator plane, so that there is no direct access of the magnetosheath to the ionosphere. Most empirical studies have assumed that the magnetopause can be approximated by a simple conic section with a specified number of coefficients, which are determined by least squares fits to spacecraft crossing positions. Thus most empirical models resemble more the MHD models than the more complex shape of the Biot-Savart models. In this work, the authors examine empirically the effect of the cusp regions on the shape of the dayside magnetopause, and they test the accuracy of these models. They find that during periods of northward IMF, crossings of the magnetopause that are close to one of the cusp regions are observed at distances closer to Earth than crossings in the equatorial plane. This result is consistent with the results of the inviscid Biot-Savart models and suggests that the magnetopause is less viscous than is assumed in many MHD models. 28 refs., 4 figs., 1 tab.
Some methodological issues in the longitudinal analysis of demographic data.
Krishnan, P
1982-12-01
Most demographic data are macro (or aggregate) in nature. Some relevant methodological issues are presented here in a time series study using aggregate data. The micro-macro distinction is relative. Time enters into the micro and macro variables in different ways. A simple micro model of rural-urban migration is given. Method 1 is to assume homogeneity in behavior. Method 2 is a Bayesian estimation. A discussion of the results follows. Time series models of aggregate data are given. The nature of the model--predictive or explanatory--must be decided on. Explanatory models in longitudinal studies have been developed. Ways to go to the micro level from the macro are discussed. The aggregation-disaggregation problem in demography is not similar to that in econometrics. To understand small populations, separate micro level data have to be collected and analyzed and appropriate models developed. Both types of models have their uses.
Mixed Poisson distributions in exact solutions of stochastic autoregulation models.
Iyer-Biswas, Srividya; Jayaprakash, C
2014-11-01
In this paper we study the interplay between stochastic gene expression and system design using simple stochastic models of autoactivation and autoinhibition. Using the Poisson representation, a technique whose particular usefulness in the context of nonlinear gene regulation models we elucidate, we find exact results for these feedback models in the steady state. Further, we exploit this representation to analyze the parameter spaces of each model, determine which dimensionless combinations of rates are the shape determinants for each distribution, and thus demarcate where in the parameter space qualitatively different behaviors arise. These behaviors include power-law-tailed distributions, bimodal distributions, and sub-Poisson distributions. We also show how these distribution shapes change when the strength of the feedback is tuned. Using our results, we reexamine how well the autoinhibition and autoactivation models serve their conventionally assumed roles as paradigms for noise suppression and noise exploitation, respectively.
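The mixed-Poisson structure at the heart of this paper has a textbook special case that is easy to check numerically: a Poisson whose rate is itself Gamma-distributed is exactly a negative binomial, a super-Poissonian distribution. The sketch below verifies the super-Poisson behavior by Monte Carlo; the autoregulation models above generalize the mixing density beyond the Gamma case, and all parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
shape, scale, n = 2.0, 5.0, 200_000
lam = rng.gamma(shape, scale, n)      # fluctuating (random) transcription rate
counts = rng.poisson(lam)             # copy numbers: a Gamma-mixed Poisson

fano = counts.var() / counts.mean()   # negative binomial gives 1 + scale = 6
print(f"Fano factor {fano:.2f} (a pure Poisson would give 1.0)")
```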
Plasma Model V&V of Collisionless Electrostatic Shock
NASA Astrophysics Data System (ADS)
Martin, Robert; Le, Hai; Bilyeu, David; Gildea, Stephen
2014-10-01
A simple 1D electrostatic collisionless shock was selected as an initial validation and verification test case for a new plasma modeling framework under development at the Air Force Research Laboratory's In-Space Propulsion branch (AFRL/RQRS). Cross verification between PIC, Vlasov, and Fluid plasma models within the framework along with expected theoretical results will be shown. The non-equilibrium velocity distributions (VDF) captured by PIC and Vlasov will be compared to each other and the assumed VDF of the fluid model at selected points. Validation against experimental data from the University of California, Los Angeles double-plasma device will also be presented along with current work in progress at AFRL/RQRS towards reproducing the experimental results using higher fidelity diagnostics to help elucidate differences between model results and between the models and original experiment. DISTRIBUTION A: Approved for public release; unlimited distribution; PA (Public Affairs) Clearance Number 14332.
Electrical description of N2 capacitively coupled plasmas with the global model
NASA Astrophysics Data System (ADS)
Cao, Ming-Lu; Lu, Yi-Jia; Cheng, Jia; Ji, Lin-Hong; Engineering Design Team
2016-10-01
N2 discharges in a commercial capacitively coupled plasma reactor are modelled by a combination of an equivalent circuit and the global model, for a range of gas pressures of 1-4 Torr. The ohmic and inductive plasma bulk and the capacitive sheath are represented as LCR elements, with electrical characteristics determined by plasma parameters. The electron density and electron temperature are obtained from the global model, in which a Maxwellian electron distribution is assumed. Voltages and currents are recorded by a VI probe installed after the match network. Using the measured voltage as an input, the current flowing through the discharge volume is calculated from the electrical model and shows excellent agreement with the measurements. The experimentally verified electrical model provides a simple and accurate description of the relationship between the external electrical parameters and the plasma properties, which can serve as a guideline for process window planning in industrial applications.
A mathematical model for lactate transport to red blood cells.
Wahl, Patrick; Yue, Zengyuan; Zinner, Christoph; Bloch, Wilhelm; Mester, Joachim
2011-03-01
A simple mathematical model for the transport of lactate from plasma to red blood cells (RBCs) during and after exercise is proposed based on our experimental studies of the lactate concentrations in RBCs and in plasma. In addition to the influx associated with the plasma-to-RBC lactate concentration gradient, it is argued that an efflux must exist. The efflux rate is assumed to be proportional to the lactate concentration in RBCs. This simple model is justified by the comparison between the model-predicted results and observations: for all 33 cases (11 subjects and 3 different warm-up conditions), the model-predicted time courses of lactate concentrations in RBCs are generally in good agreement with observations, and the model-predicted ratios between lactate concentrations in RBCs and in plasma at the peak of lactate concentration in RBCs are very close to the observed values. Two constants, the influx rate coefficient C1 and the efflux rate coefficient C2, are involved in the present model. They are determined by the best fit to observations. Although the exact electro-chemical mechanism for the efflux remains to be figured out in future research, the good agreement of the present model with observations suggests that the efflux must get stronger as the lactate concentration in RBCs increases. The physiological meanings of C1 and C2 as well as their potential applications are discussed.
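Read literally, the two-coefficient model above is a single linear ODE, d[RBC]/dt = C1([plasma] − [RBC]) − C2[RBC]. The sketch below integrates it against an invented plasma lactate time course; the coefficient values and the Gaussian-pulse plasma profile are assumptions for illustration, not the paper's fitted values.

```python
import numpy as np

def rbc_lactate(t, plasma, C1=0.15, C2=0.10, rbc0=0.5):
    """Euler integration of d[RBC]/dt = C1*([plasma] - [RBC]) - C2*[RBC]."""
    rbc = np.empty_like(plasma)
    rbc[0] = rbc0
    for i in range(1, len(t)):
        dt = t[i] - t[i-1]
        drdt = C1 * (plasma[i-1] - rbc[i-1]) - C2 * rbc[i-1]
        rbc[i] = rbc[i-1] + dt * drdt
    return rbc

t = np.linspace(0.0, 30.0, 301)                      # minutes
plasma = 1.0 + 8.0 * np.exp(-((t - 10.0) / 5.0)**2)  # mock plasma lactate [mmol/L]
rbc = rbc_lactate(t, plasma)
print(f"RBC/plasma ratio at the RBC peak: {(rbc / plasma)[np.argmax(rbc)]:.2f}")
```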
A simple model for the evolution of melt pond coverage on permeable Arctic sea ice
NASA Astrophysics Data System (ADS)
Popović, Predrag; Abbot, Dorian
2017-05-01
As the melt season progresses, sea ice in the Arctic often becomes permeable enough to allow for nearly complete drainage of meltwater that has collected on the ice surface. Melt ponds that remain after drainage are hydraulically connected to the ocean and correspond to regions of sea ice whose surface is below sea level. We present a simple model for the evolution of melt pond coverage on such permeable sea ice floes in which we allow for spatially varying ice melt rates and assume the whole floe is in hydrostatic balance. The model is represented by two simple ordinary differential equations, where the rate of change of pond coverage depends on the pond coverage. All the physical parameters of the system are summarized by four strengths that control the relative importance of the terms in the equations. The model both fits observations and allows us to understand the behavior of melt ponds in a way that is often not possible with more complex models. Examples of insights we can gain from the model are that (1) the pond growth rate is more sensitive to changes in bare sea ice albedo than changes in pond albedo, (2) ponds grow more slowly on smoother ice, and (3) ponds respond most strongly to freeboard sinking on first-year ice and sidewall melting on multiyear ice. We also show that under a global warming scenario, pond coverage would increase, decreasing the overall ice albedo and leading to ice thinning that is likely comparable to thinning due to direct forcing. Since melt pond coverage is one of the key parameters controlling the albedo of sea ice, understanding the mechanisms that control the distribution of pond coverage will help improve large-scale model parameterizations and sea ice forecasts in a warming climate.
Service, Elisabet; Maury, Sini
2015-01-01
Working memory (WM) has been described as an interface between cognition and action, or a system for access to a limited amount of information needed in complex cognition. Access to morphological information is needed for comprehending and producing sentences. The present study probed WM for morphologically complex word forms in Finnish, a morphologically rich language. We studied monomorphemic (boy), inflected (boy+’s), and derived (boy+hood) words in three tasks. Simple span, immediate serial recall of words (Experiment 1), is assumed to rely mainly on information in the focus of attention. Sentence span, a dual task combining sentence reading with recall of the last word (Experiment 2) or of a word not included in the sentence (Experiment 3), is assumed to involve establishment of a search set in long-term memory for fast activation into the focus of attention. Recall was best for monomorphemic and worst for inflected word forms, with performance on derived words in between. However, there was an interaction between word type and experiment, suggesting that complex span is more sensitive to morphological complexity in derivations than simple span. This was explored in a within-subjects Experiment 4 combining all three tasks. An interaction between morphological complexity and task was replicated. Both inflected and derived forms increased load in WM. In simple span, recall of inflectional forms resulted in form errors. Complex span tasks were more sensitive to morphological load in derived words, possibly resulting from interference from morphological neighbors in the mental lexicon. The results are best understood as involving competition among inflectional forms when binding words from input into an output structure, and competition from morphological neighbors in secondary memory during cumulative retrieval-encoding cycles. Models of verbal recall need to be able to represent morphological as well as phonological and semantic information. PMID:25642181
Modeling Hydrothermal Activity on Enceladus
NASA Astrophysics Data System (ADS)
Stamper, T., Jr.; Farough, A.
2017-12-01
Cassini's mass spectrometer data and gravitational field measurements imply water-rock interactions around the porous core of Enceladus. Using such data we characterize global heat and fluid transport properties of the core and model the ongoing hydrothermal activity on Enceladus. We assume that within the global ocean beneath the surface ice, seawater percolates downward into the core where it is heated and rises to the ocean floor where it emanates in the form of diffuse discharge. We utilize the data from Hsu et al., [2015] with models of diffuse flow in seafloor hydrothermal systems by Lowell et al., [2015] to characterize the global heat transport properties of Enceladus's core. Based on direct observations the gravitational acceleration (g) is calculated to be 0.123 m s^-2. We assume the fluid's density (ρ) is 10^3 kg m^-3 and the specific heat of the fluid (cf) is 4000 J kg^-1 °C^-1. From these values the effective thermal diffusivity (a*) is calculated as 10^-6 m^2 s^-1. We also assume the coefficient of thermal expansion of the fluid (αf) and the kinematic viscosity of the fluid (ν) to be 10^-4 °C^-1 and 10^-6 m^2 s^-1, respectively. The estimated Rayleigh number (Ra) ranges between 0.11 and 2468.0, for core porosity (φ) of 5-15%, permeability (k) between 10^-12 and 10^-8 m^2, temperatures between 90 and 200 °C, and a depth of fluid circulation of 100 m. High values of the Rayleigh number cause vigorous convection within the core of Enceladus. Numerical modeling of reactive transport in multicomponent, multiphase systems is required to obtain a full understanding of the characteristics and evolution of the hydrothermal system on Enceladus, but simple scaling laws can provide insight into the physics of water-rock interactions.
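The quoted Ra range can be reproduced with the standard porous-medium Rayleigh number, Ra = αf g ΔT k H / (ν a*); the exact expression the authors use is an assumption here, but with the listed values and a circulation depth H = 100 m it recovers both endpoints:

```python
# Listed values: g, thermal expansion alpha, kinematic viscosity nu,
# thermal diffusivity kappa (a*), and circulation depth H.
g, alpha, nu, kappa, H = 0.123, 1e-4, 1e-6, 1e-6, 100.0

def rayleigh(dT, k):
    """Porous-medium Rayleigh number (assumed form)."""
    return alpha * g * dT * k * H / (nu * kappa)

print(rayleigh(90.0, 1e-12))   # ~0.11  (low end of the quoted range)
print(rayleigh(200.0, 1e-8))   # ~2460  (high end, quoted as 2468)
```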
Action-Based Dynamical Modelling For The Milky Way Disk
NASA Astrophysics Data System (ADS)
Trick, Wilma; Rix, Hans-Walter; Bovy, Jo
2016-09-01
We present Road Mapping, a full-likelihood dynamical modelling machinery that aims to recover the Milky Way's (MW) gravitational potential from large samples of stars in the Galactic disk. Road Mapping models the observed positions and velocities of stars with a parameterized, action-based distribution function (DF) in a parameterized axisymmetric gravitational potential (Binney & McMillan 2011, Binney 2012, Bovy & Rix 2013). In anticipation of the Gaia data release in autumn, we have fully tested Road Mapping and demonstrated its robustness against the breakdown of its assumptions. Using large suites of mock data, we investigated in isolated test cases how the modelling would be affected if the data's true potential or DF was not included in the families of potentials and DFs assumed by Road Mapping, or if we misjudged measurement errors or the spatial selection function (SF) (Trick et al., submitted to ApJ). We found that the potential can be robustly recovered, given the limitations of the assumed potential model, even for minor misjudgments in DF or SF, or for proper motion errors or distances known to within 10%. We were also able to demonstrate that Road Mapping is still successful if the strong assumption of axisymmetry breaks down (Trick et al., in preparation). Data drawn from a high-resolution simulation (D'Onghia et al. 2013) of a MW-like galaxy with pronounced spiral arms neither follow the assumed simple DF nor come from an axisymmetric potential. We found that as long as the survey volume is large enough, Road Mapping gives good average constraints on the galaxy's potential. We are planning to apply Road Mapping to a real data set, the Tycho-2 catalogue (Hog et al. 2000), very soon, and might be able to present some preliminary results already at the conference.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baloković, M.; Harrison, F. A.; Esmerian, C. J.
2015-02-10
Measurements of the high-energy cut-off in the coronal continuum of active galactic nuclei have long been elusive for all but a small number of the brightest examples. We present a direct measurement of the cut-off energy in the nuclear continuum of the nearby Seyfert 1.9 galaxy MCG-05-23-016 with unprecedented precision. The high sensitivity of NuSTAR up to 79 keV allows us to clearly disentangle the spectral curvature of the primary continuum from that of its reflection component. Using a simple phenomenological model for the hard X-ray spectrum, we constrain the cut-off energy to 116 (+6/−5) keV with 90% confidence. Testing for more complex models and nuisance parameters that could potentially influence the measurement, we find that the cut-off is detected robustly. We further use simple Comptonized plasma models to provide independent constraints for both the kinetic temperature of the electrons in the corona and its optical depth. At the 90% confidence level, we find kT_e = 29 ± 2 keV and τ_e = 1.23 ± 0.08 assuming a slab (disk-like) geometry, and kT_e = 25 ± 2 keV and τ_e = 3.5 ± 0.2 assuming a spherical geometry. Both geometries are found to fit the data equally well and their two principal physical parameters are correlated in both cases. With the optical depth in the τ_e ≳ 1 regime, the data are pushing the currently available theoretical models of the Comptonized plasma to the limits of their validity. Since the spectral features and variability arising from the inner accretion disk have been observed previously in MCG-05-23-016, the inferred high optical depth implies that a spherical or disk-like corona cannot be homogeneous.
Basic research on design analysis methods for rotorcraft vibrations
NASA Technical Reports Server (NTRS)
Hanagud, S.
1991-01-01
The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was ensured by restricting the identification process to designated physical parameters only and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used. In this case, different damping values might be associated with riveted, bolted and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.
Anisotropic Poroelasticity in a Rock With Cracks
NASA Astrophysics Data System (ADS)
Wong, Teng-Fong
2017-10-01
Deformation of a saturated rock in the field and laboratory may occur in a broad range of conditions, ranging from undrained to drained. The poromechanical response is often anisotropic, and in a brittle rock, closely related to preexisting and stress-induced cracks. This can be modeled as a rock matrix embedded with an anisotropic system of cracks. Assuming microisotropy, expressions for three of the poroelastic coefficients of a transversely isotropic rock were derived in terms of the crack density tensor. Together with published results for the five effective elastic moduli, this provides a complete micromechanical description of the eight independent poroelastic coefficients of such a cracked rock. Relatively simple expressions were obtained for the Skempton pore pressure tensor, which allow one to infer the crack density tensor from undrained measurement in the laboratory, and also to infer the Biot-Willis effective stress coefficients. The model assumes a dilute concentration of noninteractive penny-shaped cracks, and it shows good agreement with experimental data for Berea sandstone, with crack density values up to 0.6. Whereas predictions on the storage coefficient and normal components of the elastic stiffness tensor also seem reasonable, significant discrepancy between model and measurement was observed regarding the off-diagonal and shear components of the stiffness. A plausible model has been proposed for the development of very strong anisotropy in the undrained response of a fault zone, and the model here places geometric constraints on the associated fracture system.
A model for warfare in stratified small-scale societies: The effect of within-group inequality.
Pandit, Sagar; Pradhan, Gauri; van Schaik, Carel
2017-01-01
In order to predict the features of non-raiding human warfare in small-scale, socially stratified societies, we study a coalitionary model of war that assumes that individuals participate voluntarily because their decisions serve to maximize fitness. Individual males join the coalition if war results in a net economic and thus fitness benefit. Within the model, viable offensive war ensues if the attacking coalition of males can overpower the defending coalition. We assume that the two groups will eventually fuse after a victory, with ranks arranged according to the fighting abilities of all males and that the new group will adopt the winning group's skew in fitness payoffs. We ask whether asymmetries in skew, group size and the amount of resources controlled by a group affect the likelihood of successful war. The model shows, other things being equal, that (i) egalitarian groups are more likely to defeat their more despotic enemies, even when these are stronger, (ii) defection to enemy groups will be rare, unless the attacked group is far more despotic than the attacking one, and (iii) genocidal war is likely under a variety of conditions, in particular when the group under attack is more egalitarian. This simple optimality model accords with several empirically observed correlations in human warfare. Its success underlines the important role of egalitarianism in warfare. PMID:29228014
Compact stars in the non-minimally coupled electromagnetic fields to gravity
NASA Astrophysics Data System (ADS)
Sert, Özcan
2018-03-01
We investigate the gravitational models with the non-minimal Y(R)F^2 coupling of electromagnetic fields to gravity, in order to describe charged compact stars, where Y(R) denotes a function of the Ricci curvature scalar R and F^2 denotes the Maxwell invariant term. We determine a two-parameter family of exact spherically symmetric static solutions and the corresponding non-minimal model without assuming any relation between the energy density of matter and the pressure. We give the mass-radius and electric charge-radius ratios and the surface gravitational redshift which are obtained from the boundary conditions. We reach a wide range of possibilities for the parameters k and α in these solutions. Lastly, we show that the models can describe compact stars even in the simpler case α = 3.
Analytical model for the threshold voltage of III-V nanowire transistors including quantum effects
NASA Astrophysics Data System (ADS)
Marin, E. G.; Ruiz, F. G.; Tienda-Luna, I. M.; Godoy, A.; Gámiz, F.
2014-02-01
In this work we propose an analytical model for the threshold voltage (VT) of III-V cylindrical nanowires that takes into account the two-dimensional quantum confinement of the carriers, the Fermi-Dirac statistics, the wave-function penetration into the gate insulator, and the non-parabolicity of the conduction band structure. A simple expression for VT is obtained by assuming some suitable approximations. The model results are compared to those of a 2D self-consistent Schrödinger-Poisson solver, demonstrating a good fit for different III-V materials, insulator thicknesses, and nanowire sizes with diameters down to 5 nm. The VT dependence on the confinement effective mass is discussed. The different contributions to VT are analyzed, showing significant variations among different III-V materials.
NASA Astrophysics Data System (ADS)
Lemus-Mondaca, Roberto A.; Vega-Gálvez, Antonio; Zambra, Carlos E.; Moraga, Nelson O.
2017-01-01
A 3D model considering heat and mass transfer for food dehydration inside a direct contact dryer is studied. The k-ε model is used to describe the turbulent air flow. The sample's thermophysical properties, such as density, specific heat, and thermal conductivity, are assumed to vary non-linearly with temperature. The FVM and the SIMPLE algorithm, implemented in a FORTRAN code, are used. Unsteady velocity, temperature, moisture, kinetic energy, and dissipation rate results for the air flow are presented, as are temperature and moisture values for the food. The validation procedure includes a comparison with experimental and numerical temperature and moisture content results, reaching a deviation of 7-10%. In addition, this turbulent k-ε model provided a better understanding of the transport phenomena inside the dryer and sample.
Compression strength of composite primary structural components
NASA Technical Reports Server (NTRS)
Johnson, Eric R.
1994-01-01
The linear elastic response is determined for an internally pressurized, long circular cylindrical shell stiffened on the inside by a regular arrangement of identical stringers and identical rings. Periodicity of this configuration permits the analysis of a portion of the shell wall centered over a generic stringer-ring joint; i.e., a unit cell model. The stiffeners are modeled as discrete beams, and the stringer is assumed to have a symmetrical cross section and the ring an asymmetrical section. Asymmetry causes out-of-plane bending and torsion of the ring. Displacements are assumed as truncated double Fourier series plus simple terms in the axial coordinate to account for the closed-end pressure vessel effect (a non-periodic effect). The interacting line loads between the stiffeners and the inside shell wall are Lagrange multipliers in the formulation, and they are also assumed as truncated Fourier series. Displacement continuity constraints between the stiffeners and shell along the contact lines are satisfied point-wise. Equilibrium is imposed by the principle of virtual work. A composite material crown panel from the fuselage of a large transport aircraft is the numerical example. The distributions of the interacting line loads, and the out-of-plane bending moment and torque in the ring, are strongly dependent on modeling the deformations due to transverse shear and cross-sectional warping of the ring in torsion. This paper contains the results from the semiannual report on research on 'Pressure Pillowing of an Orthogonally Stiffened Cylindrical Shell'. The results of the new work are illustrated in the included appendix.
NASA Astrophysics Data System (ADS)
Albert, Carlo; Ulzega, Simone; Stoop, Ruedi
2016-04-01
Measured time-series of both precipitation and runoff are known to exhibit highly non-trivial statistical properties. For making reliable probabilistic predictions in hydrology, it is therefore desirable to have stochastic models with output distributions that share these properties. When parameters of such models have to be inferred from data, we also need to quantify the associated parametric uncertainty. For non-trivial stochastic models, however, this latter step is typically very demanding, both conceptually and numerically, and almost never done in hydrology. Here, we demonstrate that methods developed in statistical physics make a large class of stochastic differential equation (SDE) models amenable to a full-fledged Bayesian parameter inference. For concreteness we demonstrate these methods by means of a simple yet non-trivial toy SDE model. We consider a natural catchment that can be described by a linear reservoir, at the scale of observation. All the neglected processes are assumed to happen at much shorter time-scales and are therefore modeled with a Gaussian white noise term, the standard deviation of which is assumed to scale linearly with the system state (water volume in the catchment). Even for constant input, the outputs of this simple non-linear SDE model show a wealth of desirable statistical properties, such as fat-tailed distributions and long-range correlations. Standard algorithms for Bayesian inference fail, for models of this kind, because their likelihood functions are extremely high-dimensional intractable integrals over all possible model realizations. The use of Kalman filters is illegitimate due to the non-linearity of the model. Particle filters could be used but become increasingly inefficient with growing number of data points. Hamiltonian Monte Carlo algorithms allow us to translate this inference problem to the problem of simulating the dynamics of a statistical mechanics system and give us access to the most sophisticated methods that have been developed in the statistical physics community over the last few decades. We demonstrate that such methods, along with automated differentiation algorithms, allow us to perform a full-fledged Bayesian inference, for a large class of SDE models, in a highly efficient and largely automated manner. Furthermore, our algorithm is highly parallelizable. For our toy model, discretized with a few hundred points, a full Bayesian inference can be performed in a matter of seconds on a standard PC.
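The toy model described reads, in stochastic form, dV = (I − V/τ)dt + σV dW: a linear reservoir with multiplicative noise whose amplitude scales with the state. A minimal Euler-Maruyama sketch of the forward model only (not the Hamiltonian Monte Carlo inference), with all parameter values invented:

```python
import numpy as np

rng = np.random.default_rng(0)
I, tau, sigma = 1.0, 10.0, 0.3        # input, residence time, noise scale (assumed)
dt, n = 0.01, 200_000

V = np.empty(n)
V[0] = I * tau                        # start at the deterministic steady state
for i in range(1, n):
    dW = rng.normal(0.0, np.sqrt(dt))                 # Brownian increment
    V[i] = V[i-1] + (I - V[i-1] / tau) * dt + sigma * V[i-1] * dW
    V[i] = max(V[i], 0.0)             # water volume cannot go negative

out = V / tau                         # outflow of the linear reservoir
z = (out - out.mean()) / out.std()
print(f"mean outflow {out.mean():.3f}, skewness {np.mean(z**3):.2f} (fat right tail)")
```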
Validation analysis of probabilistic models of dietary exposure to food additives.
Gilsenan, M B; Thompson, R L; Lambe, J; Gibney, M J
2003-10-01
The validity of a range of simple conceptual models designed specifically for the estimation of food additive intakes using probabilistic analysis was assessed. Modelled intake estimates that fell below traditional conservative point estimates of intake and above 'true' additive intakes (calculated from a reference database at brand level) were considered to be in a valid region. Models were developed for 10 food additives by combining food intake data, the probability of an additive being present in a food group and additive concentration data. Food intake and additive concentration data were entered as raw data or as a lognormal distribution, and the probability of an additive being present was entered based on the per cent brands or the per cent eating occasions within a food group that contained an additive. Since the three model components assumed two possible modes of input, the validity of eight (2^3) model combinations was assessed. All model inputs were derived from the reference database. An iterative approach was employed in which the validity of individual model components was assessed first, followed by validation of full conceptual models. While the distribution of intake estimates from models fell below conservative intakes, which assume that the additive is present at maximum permitted levels (MPLs) in all foods in which it is permitted, intake estimates were not consistently above 'true' intakes. These analyses indicate the need for more complex models for the estimation of food additive intakes using probabilistic analysis. Such models should incorporate information on market share and/or brand loyalty.
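One of the model combinations described (lognormal food intake, per-food-group presence probability, lognormal concentration) can be sketched as a simple Monte Carlo. Every distribution parameter below is invented for illustration and does not come from the reference database.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                                 # simulated consumers
food_groups = [                             # assumed per-group inputs:
    dict(mu=4.0, sigma=0.6, p_present=0.30, #  lognormal intake [g/day],
         c_mu=2.0, c_sigma=0.5),            #  presence probability,
    dict(mu=3.0, sigma=0.8, p_present=0.10, #  lognormal concentration [mg/kg]
         c_mu=3.0, c_sigma=0.4),
]

intake = np.zeros(n)
for fg in food_groups:
    grams = rng.lognormal(fg["mu"], fg["sigma"], n)
    present = rng.random(n) < fg["p_present"]          # additive in this group?
    conc = rng.lognormal(fg["c_mu"], fg["c_sigma"], n) # mg/kg
    intake += grams / 1000.0 * present * conc          # mg/day

print(f"mean {intake.mean():.2f} mg/day, "
      f"97.5th percentile {np.percentile(intake, 97.5):.2f} mg/day")
```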
Simulations of induced-charge electro-osmosis in microfluidic devices
NASA Astrophysics Data System (ADS)
Ben, Yuxing
2005-03-01
Theories of nonlinear electrokinetic phenomena generally assume a uniform, neutral bulk electrolyte in contact with a polarizable thin double layer near a metal or dielectric surface, which acts as a "capacitor skin". Induced-charge electro-osmosis (ICEO) is the general effect of nonlinear electro-osmotic slip, when an applied electric field acts on its own induced (diffuse) double-layer charge. In most theoretical and experimental work, ICEO has been studied in very simple geometries, such as colloidal spheres and planar, periodic micro-electrode arrays. Here we use finite-element simulations to predict how more complicated geometries of polarizable surfaces and/or electrodes yield flow profiles with subtle dependence on the amplitude and frequency of the applied voltage. We also consider how the simple model equations break down, due to surface conduction, bulk diffusion, and concentration polarization, for large applied voltages (as in most experiments).
Nick-free formation of reciprocal heteroduplexes: a simple solution to the topological problem.
Wilson, J H
1979-01-01
Because the individual strands of DNA are intertwined, formation of heteroduplex structures between duplexes--as in presumed recombination intermediates--presents a topological puzzle, known as the winding problem. Previous approaches to this problem have assumed that single-strand breaks are required to permit formation of fully coiled heteroduplexes. This paper describes a simple, nick-free solution to the winding problem that satisfies all topological constraints. Homologous duplexes associated by their minor-groove surfaces can switch strand pairing to form reciprocal heteroduplexes that coil together into a compact, four-stranded helix throughout the region of pairing. Model building shows that this fused heteroduplex structure is plausible, being composed entirely of right-handed primary helices with Watson-Crick base pairing throughout. Its simplicity of formation, structural symmetry, and high degree of specificity are suggestive of a natural mechanism for alignment by base pairing between intact homologous duplexes. Implications for genetic recombination are discussed. PMID:291028
On electromechanical instability in semicrystalline polymer
NASA Astrophysics Data System (ADS)
Yong, Huadong; Zhou, Youhe
2013-10-01
Semicrystalline polymers are promising materials for actuators and capacitors. In response to the electric field, the polymer undergoes large deformation. Based on a simple model, the critical electric field in the polymer is investigated in the present paper. The polymer is assumed to be incompressible and specified by the power law relation. Using the stability condition of the determinant of the Hessian, the critical electric field can be obtained. Comparing the results from prestress with prestrain, it is shown that the critical electric field is related to the hardening exponent N and may be restricted by the necking instability.
Theoretical Analysis of a Pulse Tube Regenerator
NASA Technical Reports Server (NTRS)
Roach, Pat R.; Kashani, Ali; Lee, J. M.; Cheng, Pearl L. (Technical Monitor)
1995-01-01
A theoretical analysis of the behavior of a typical pulse tube regenerator has been carried out. Assuming simple sinusoidal oscillations, the static and oscillatory pressures, velocities and temperatures have been determined for a model that includes a compressible gas and imperfect thermal contact between the gas and the regenerator matrix. For realistic material parameters, the analysis reveals that the pressure and velocity oscillations are largely independent of details of the thermal contact between the gas and the solid matrix. Only the temperature oscillations depend on this contact. Suggestions for optimizing the design of a regenerator are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimura, Fujio; Kuwagata, Tuneo
1995-02-01
The thermally induced local circulation over a periodic valley is simulated by a two-dimensional numerical model that does not include condensational processes. During the daytime of a clear, calm day, heat is transported from the mountainous region to the valley area by anabatic wind and its return flow. The specific humidity is, however, transported in an inverse manner. The horizontal exchange rate of sensible heat has a horizontal scale similarity, as long as the horizontal scale is less than a critical width of about 100 km. The sensible heat accumulated in an atmospheric column over an arbitrary point can be estimated by a simple model termed the uniform mixed-layer model (UML). The model assumes that the potential temperature is both vertically and horizontally uniform in the mixed layer, even over the complex terrain. The UML model is valid only when the horizontal scale of the topography is less than the critical width and the maximum difference in the elevation of the topography is less than about 1500 m. Latent heat is accumulated over the mountainous region while the atmosphere becomes dry over the valley area. When the horizontal scale is close to the critical width, the largest amount of humidity is accumulated during the late afternoon over the mountainous region. 18 refs., 15 figs., 1 tab.
Crises and Collective Socio-Economic Phenomena: Simple Models and Challenges
NASA Astrophysics Data System (ADS)
Bouchaud, Jean-Philippe
2013-05-01
Financial and economic history is strewn with bubbles and crashes, booms and busts, crises and upheavals of all sorts. Understanding the origin of these events is arguably one of the most important problems in economic theory. In this paper, we review recent efforts to include heterogeneities and interactions in models of decision. We argue that the so-called Random Field Ising model (RFIM) provides a unifying framework to account for many collective socio-economic phenomena that lead to sudden ruptures and crises. We discuss different models that can capture potentially destabilizing self-referential feedback loops, induced either by herding, i.e. reference to peers, or trending, i.e. reference to the past, and that account for some of the phenomenology missing in the standard models. We discuss some empirically testable predictions of these models, for example robust signatures of RFIM-like herding effects, or the logarithmic decay of spatial correlations of voting patterns. One of the most striking results, inspired by statistical physics methods, is that Adam Smith's invisible hand can fail badly at solving simple coordination problems. We also insist on the issue of time-scales, that can be extremely long in some cases, and prevent socially optimal equilibria from being reached. As a theoretical challenge, the study of so-called "detailed-balance" violating decision rules is needed to decide whether conclusions based on current models (that all assume detailed-balance) are indeed robust and generic.
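A minimal version of the RFIM decision dynamics discussed above: each agent's binary choice s_i = sign(f_i + J·m + F) combines an idiosyncratic field f_i, imitation of the average choice m, and a global incentive F. Sweeping F up and then down shows the sudden jumps and hysteresis ("crises") the abstract refers to; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, J = 100_000, 1.5                   # J > sqrt(pi/2) ~ 1.25 puts the model in
f = rng.normal(0.0, 1.0, N)           # the discontinuous (crisis-prone) regime

def sweep(Fs, m=-1.0):
    """Quasi-static sweep of the global incentive F; at each step, iterate the
    mean choice m to a fixed point of m = mean(sign(f + J*m + F))."""
    ms = []
    for F in Fs:
        while True:
            m_new = np.mean(np.where(f + J * m + F > 0.0, 1.0, -1.0))
            if m_new == m:
                break
            m = m_new
        ms.append(m)
    return np.array(ms)

Fs = np.linspace(-2.0, 2.0, 81)
up, down = sweep(Fs), sweep(Fs[::-1])[::-1]
print("largest jump on the upward sweep:", np.abs(np.diff(up)).max().round(2))
print("hysteresis loop width at F = 0:", (down[40] - up[40]).round(2))
```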
Estimation of Critical Gap Based on Raff's Definition
Guo, Rui-jun; Wang, Xiao-jing; Wang, Wan-xiang
2014-01-01
Critical gap is an important parameter used to calculate the capacity and delay of minor road in gap acceptance theory of unsignalized intersections. At an unsignalized intersection with two one-way traffic flows, it is assumed that two events are independent between vehicles' arrival of major stream and vehicles' arrival of minor stream. The headways of major stream follow M3 distribution. Based on Raff's definition of critical gap, two calculation models are derived, which are named M3 definition model and revised Raff's model. Both models use total rejected coefficient. Different calculation models are compared by simulation and new models are found to be valid. The conclusion reveals that M3 definition model is simple and valid. Revised Raff's model strictly obeys the definition of Raff's critical gap and its application field is more extensive than Raff's model. It can get a more accurate result than the former Raff's model. The M3 definition model and revised Raff's model can derive accordant result. PMID:25574160
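Raff's original graphical definition, on which both models above build, finds the gap size t where the count of accepted gaps shorter than t equals the count of rejected gaps longer than t. The sketch below applies that definition to synthetic gap data (exponential major-stream headways, normally distributed driver thresholds; all values invented):

```python
import numpy as np

rng = np.random.default_rng(3)
gaps = rng.exponential(6.0, 2000)                # major-stream headways [s]
true_tc = 4.0
driver_tc = rng.normal(true_tc, 0.8, gaps.size)  # each driver's own threshold
accepted = np.sort(gaps[gaps > driver_tc])
rejected = np.sort(gaps[gaps <= driver_tc])

t = np.linspace(0.0, 12.0, 1201)
n_acc_shorter = np.searchsorted(accepted, t)                 # accepted gaps < t
n_rej_longer = rejected.size - np.searchsorted(rejected, t)  # rejected gaps > t
tc = t[np.argmin(np.abs(n_acc_shorter - n_rej_longer))]      # curves cross here
print(f"Raff critical gap estimate: {tc:.2f} s (drivers' mean threshold {true_tc} s)")
```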
Furubayashi, Taro
2018-01-01
The emergence and dominance of parasitic replicators are among the major hurdles for the proliferation of primitive replicators. Compartmentalization of replicators is proposed to relieve the parasite dominance; however, it remains unclear under what conditions simple compartmentalization uncoupled with internal reaction secures the long-term survival of a population of primitive replicators against incessant parasite emergence. Here, we investigate the sustainability of a compartmentalized host-parasite replicator (CHPR) system undergoing periodic washout-mixing cycles, by constructing a mathematical model and performing extensive simulations. We describe sustainable landscapes of the CHPR system in the parameter space and elucidate the mechanism of phase transitions between sustainable and extinct regions. Our findings revealed that a large population size of compartments, a high mixing intensity, and a modest amount of nutrients are important factors for the robust survival of replicators. We also found two distinctive sustainable phases with different mixing intensities. These results suggest that a population of simple host–parasite replicators assumed before the origin of life can be sustained by a simple compartmentalization with periodic washout-mixing processes. PMID:29373536
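The washout-mixing cycle described above can be caricatured in a few lines: compartments amplify hosts (and, where hosts are present, parasites with a replication advantage), then everything is pooled, diluted, and redistributed as Poisson-distributed seeds. All rates below (growth factors, seed densities, mutation rate) are invented and the sketch is far simpler than the paper's model; it only illustrates how compartment-level selection can hold parasites in check.

```python
import numpy as np

rng = np.random.default_rng(5)
M, rounds = 1000, 200                 # number of compartments, washout cycles
growth_h, adv = 5.0, 2.0              # host fold-growth; parasite advantage
lam_h, lam_p = 2.0, 0.1               # mean host/parasite seeds per compartment

h = rng.poisson(lam_h, M).astype(float)
p = rng.poisson(lam_p, M).astype(float)
for _ in range(rounds):
    # within-compartment amplification: parasites need a host to replicate
    has_host = h > 0
    p = np.where(has_host, p * growth_h * adv, p)
    h = np.where(has_host, h * growth_h, h)
    # washout-mixing: pool everything, dilute back to the seed density
    tot = h.sum() + p.sum()
    frac_p = p.sum() / tot if tot > 0 else 0.0
    seeds = rng.poisson(lam_h + lam_p, M)
    p = rng.binomial(seeds, frac_p).astype(float)
    h = (seeds - p).astype(float)
    # incessant parasite emergence: rare mutation of hosts into parasites
    mut = rng.binomial(h.astype(int), 1e-3)
    h -= mut
    p += mut
print(f"final host fraction: {h.sum() / (h.sum() + p.sum()):.2f}")
```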
Shot-Noise Limited Single-Molecule FRET Histograms: Comparison between Theory and Experiments†
Nir, Eyal; Michalet, Xavier; Hamadani, Kambiz M.; Laurence, Ted A.; Neuhauser, Daniel; Kovchegov, Yevgeniy; Weiss, Shimon
2011-01-01
We describe a simple approach and present a straightforward numerical algorithm to compute the best fit shot-noise limited proximity ratio histogram (PRH) in single-molecule fluorescence resonance energy transfer diffusion experiments. The key ingredient is the use of the experimental burst size distribution, as obtained after burst search through the photon data streams. We show how the use of an alternated laser excitation scheme and a correspondingly optimized burst search algorithm eliminates several potential artifacts affecting the calculation of the best fit shot-noise limited PRH. This algorithm is tested extensively on simulations and simple experimental systems. We find that dsDNA data exhibit a wider PRH than expected from shot noise alone and tentatively account for it by assuming a small Gaussian distribution of distances with an average standard deviation of 1.6 Å. Finally, we briefly mention the results of a future publication and illustrate them with a simple two-state model system (a DNA hairpin), for which the kinetic transition rates between the open and closed conformations are extracted. PMID:17078646
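The shot-noise limited PRH follows from binomial partitioning of photons: given the experimental burst size distribution, a burst of n photons yields an acceptor count distributed as Binomial(n, E). A minimal Monte Carlo sketch under that assumption (the paper's burst search and alternated-excitation corrections are omitted, and the burst size distribution below is assumed):

```python
import numpy as np

def shot_noise_prh(burst_sizes, mean_E, bins=50, rng=None):
    """Monte Carlo shot-noise limited proximity ratio histogram:
    each burst of n photons contributes Binomial(n, E) acceptor photons."""
    rng = rng or np.random.default_rng(0)
    n = np.asarray(burst_sizes)
    n_acceptor = rng.binomial(n, mean_E)
    proximity_ratio = n_acceptor / n
    return np.histogram(proximity_ratio, bins=bins, range=(0, 1))

# Burst sizes as they might come out of a burst search (assumed heavy-tailed)
rng = np.random.default_rng(0)
bursts = rng.pareto(2.0, 20000).astype(int) + 30   # at least 30 photons per burst
hist, edges = shot_noise_prh(bursts, mean_E=0.55, rng=rng)
centers = 0.5 * (edges[:-1] + edges[1:])
width = np.sqrt(np.average((centers - 0.55) ** 2, weights=hist))
print(f"shot-noise-only PRH standard deviation: {width:.3f}")
```

A measured PRH wider than this Monte Carlo width is what motivates the additional Gaussian distance distribution invoked in the abstract.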
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenberg, J. M.
2003-07-16
In a previous paper the author and Demay advanced a model to explain the melt fracture instability observed when molten linear polymer melts are extruded in a capillary rheometer operating under the controlled condition that the inlet flow rate was held constant. The model postulated that the melts were a slightly compressible viscous fluid and allowed for slipping of the melt at the wall. The novel feature of that model was the use of an empirical switch law which governed the amount of wall slip. The model successfully accounted for the oscillatory behavior of the exit flow rate, typically referred to as the melt fracture instability, but did not simultaneously yield the fine scale spatial oscillations in the melt typically referred to as shark skin. In this note a new model is advanced which simultaneously explains the melt fracture instability and shark skin phenomena. The model postulates that the polymer is a slightly compressible linearly viscous fluid but assumes no slip boundary conditions at the capillary wall. In simple shear the shear stress τ and strain rate d are assumed to be related by d = Fτ, where F ranges between F₂ and F₁ > F₂. A strain rate dependent yield function is introduced and this function governs whether F evolves towards F₂ or F₁. This model accounts for the empirical observation that at high shears polymers align and slide more easily than at low shears and explains both the melt fracture and shark skin phenomena.
Gamal El-Dien, Omnia; Ratcliffe, Blaise; Klápště, Jaroslav; Porth, Ilga; Chen, Charles; El-Kassaby, Yousry A.
2016-01-01
The open-pollinated (OP) family testing combines the simplest known progeny evaluation and quantitative genetics analyses as candidates’ offspring are assumed to represent independent half-sib families. The accuracy of genetic parameter estimates is often questioned as the assumption of “half-sibling” in OP families may often be violated. We compared the pedigree- vs. marker-based genetic models by analysing 22-yr height and 30-yr wood density for 214 white spruce [Picea glauca (Moench) Voss] OP families represented by 1694 individuals growing on one site in Quebec, Canada. Assuming half-sibling, the pedigree-based model was limited to estimating the additive genetic variances which, in turn, were grossly overestimated as they were confounded by very minor dominance and major additive-by-additive epistatic genetic variances. In contrast, the implemented genomic pairwise realized relationship models allowed the disentanglement of additive from all nonadditive factors through genetic variance decomposition. The marker-based models produced more realistic narrow-sense heritability estimates and, for the first time, allowed estimating the dominance and epistatic genetic variances from OP testing. In addition, the genomic models showed better prediction accuracies compared to pedigree models and were able to predict individual breeding values for new individuals from untested families, which was not possible using the pedigree-based model. Clearly, the use of marker-based relationship approach is effective in estimating the quantitative genetic parameters of complex traits even under simple and shallow pedigree structure. PMID:26801647
Mazoure, Bogdan; Caraus, Iurie; Nadon, Robert; Makarenkov, Vladimir
2018-06-01
Data generated by high-throughput screening (HTS) technologies are prone to spatial bias. Traditionally, bias correction methods used in HTS assume either a simple additive or, more recently, a simple multiplicative spatial bias model. These models do not, however, always provide an accurate correction of measurements in wells located at the intersection of rows and columns affected by spatial bias. The measurements in these wells depend on the nature of interaction between the involved biases. Here, we propose two novel additive and two novel multiplicative spatial bias models accounting for different types of bias interactions. We describe a statistical procedure that allows for detecting and removing different types of additive and multiplicative spatial biases from multiwell plates. We show how this procedure can be applied by analyzing data generated by the four HTS technologies (homogeneous, microorganism, cell-based, and gene expression HTS), the three high-content screening (HCS) technologies (area, intensity, and cell-count HCS), and the only small-molecule microarray technology available in the ChemBank small-molecule screening database. The proposed methods are included in the AssayCorrector program, implemented in R, and available on CRAN.
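For intuition, the simple additive model decomposes a well measurement as x_ij = mu + r_i + c_j + e_ij, and the row and column biases can be stripped with Tukey's median polish. A minimal sketch of that baseline correction (the paper's interaction-aware models, detection tests, and the AssayCorrector implementation are not reproduced here):

```python
import numpy as np

def median_polish(plate, n_iter=10):
    """Remove additive row/column biases: x_ij = mu + r_i + c_j + e_ij."""
    resid = plate.astype(float).copy()
    row = np.zeros(plate.shape[0])
    col = np.zeros(plate.shape[1])
    for _ in range(n_iter):
        rmed = np.median(resid, axis=1); resid -= rmed[:, None]; row += rmed
        cmed = np.median(resid, axis=0); resid -= cmed[None, :]; col += cmed
    r_shift, c_shift = np.median(row), np.median(col)
    overall = r_shift + c_shift
    row -= r_shift
    col -= c_shift
    return overall, row, col, resid   # bias-corrected plate = overall + resid

# 8x12 plate with one biased row and one biased column
rng = np.random.default_rng(3)
plate = rng.normal(100, 5, (8, 12))
plate[2, :] += 20.0   # row bias
plate[:, 7] += 15.0   # column bias
mu, r, c, resid = median_polish(plate)
print(f"detected row-2 bias ~ {r[2]:.1f}, column-7 bias ~ {c[7]:.1f}")
```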
A geometric model for initial orientation errors in pigeon navigation.
Postlethwaite, Claire M; Walker, Michael M
2011-01-21
All mobile animals respond to gradients in signals in their environment, such as light, sound, odours and magnetic and electric fields, but it remains controversial how they might use these signals to navigate over long distances. The Earth's surface is essentially two-dimensional, so two stimuli are needed to act as coordinates for navigation. However, no environmental fields are known to be simple enough to act as perpendicular coordinates on a two-dimensional grid. Here, we propose a model for navigation in which we assume that an animal has a simplified 'cognitive map' in which environmental stimuli act as perpendicular coordinates. We then investigate how systematic deviation of the contour lines of the environmental signals from a simple orthogonal arrangement can cause errors in position determination and lead to systematic patterns of directional errors in initial homing directions taken by pigeons. The model reproduces patterns of initial orientation errors seen in previously collected data from homing pigeons, predicts that errors should increase with distance from the loft, and provides a basis for efforts to identify further sources of orientation errors made by homing pigeons. Copyright © 2010 Elsevier Ltd. All rights reserved.
A nonequilibrium model for a moderate pressure hydrogen microwave discharge plasma
NASA Technical Reports Server (NTRS)
Scott, Carl D.
1993-01-01
This document describes a simple nonequilibrium energy exchange and chemical reaction model to be used in a computational fluid dynamics calculation for a hydrogen plasma excited by microwaves. The model takes into account the exchange between the electrons and excited states of molecular and atomic hydrogen. Specifically, electron-translation, electron-vibration, translation-vibration, ionization, and dissociation are included. The model assumes three temperatures, translational/rotational, vibrational, and electron, each describing a Boltzmann distribution for its respective energy mode. The energy from the microwave source is coupled to the energy equation via a source term that depends on an effective electric field which must be calculated outside the present model. This electric field must be found by coupling the results of the fluid dynamics and kinetics solution with a solution to Maxwell's equations that includes the effects of the plasma permittivity. The solution to Maxwell's equations is not within the scope of this present paper.
From individual choice to group decision-making
NASA Astrophysics Data System (ADS)
Galam, Serge; Zucker, Jean-Daniel
2000-12-01
Some universal features of group decision-making are independent of both the social nature of the individuals making the decision and the nature of the decision itself. On this basis a simple magnet-like model is built. Pair interactions are introduced to measure the degree of exchange among individuals while discussing. An external uniform field is included to account for possible outside pressure. Individual biases with respect to the issue at stake are also included using local random fields. A unique postulate of minimum conflict is assumed. The model is then solved with emphasis on its psycho-sociological implications, and counter-intuitive results are obtained. At this stage no new physical technicality is involved; instead, the full psycho-sociological implications of the model are drawn, and a few cases are detailed to illustrate them. In addition, several numerical experiments based on our model are shown to give insight into the dynamics of the model and to suggest further research directions.
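The minimum-conflict postulate can be read as minimizing an Ising-like cost over binary choices s_i = +/-1, with pair exchange J, external pressure B, and random local fields h_i. A minimal sketch by exhaustive enumeration for a small group (parameter names and values are our assumptions, for illustration only):

```python
import itertools
import numpy as np

def min_conflict_choice(J, B, h):
    """Enumerate all choice configurations s_i = +/-1 of a small, fully
    connected group and return the one minimizing the 'conflict'
    H = -J*sum_{i<j} s_i s_j - B*sum_i s_i - sum_i h_i s_i."""
    best, best_H = None, np.inf
    for s in itertools.product((-1, 1), repeat=len(h)):
        s = np.array(s)
        pair = np.sum(np.triu(np.outer(s, s), k=1))   # sum over i < j
        H = -J * pair - B * s.sum() - np.dot(h, s)
        if H < best_H:
            best, best_H = s, H
    return best, best_H

rng = np.random.default_rng(7)
h = rng.normal(0.0, 1.0, 9)          # individual biases (local random fields)
for B in (0.0, 0.5):                 # outside pressure
    s, H = min_conflict_choice(J=0.4, B=B, h=h)
    print(f"B={B}: group decision (sum of choices) = {s.sum():+d}, H = {H:.2f}")
```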
Robust image modeling techniques with an image restoration application
NASA Astrophysics Data System (ADS)
Kashyap, Rangasami L.; Eom, Kie-Bum
1988-08-01
A robust parameter-estimation algorithm for a nonsymmetric half-plane (NSHP) autoregressive model, where the driving noise is a mixture of a Gaussian and an outlier process, is presented. The convergence of the estimation algorithm is proved. An algorithm to estimate parameters and original image intensity simultaneously from the impulse-noise-corrupted image, where the model governing the image is not available, is also presented. The robustness of the parameter estimates is demonstrated by simulation. Finally, an algorithm to restore realistic images is presented. The entire image generally does not obey a simple image model, but a small portion (e.g., 8 x 8) of the image is assumed to obey an NSHP model. The original image is divided into windows and the robust estimation algorithm is applied for each window. The restoration algorithm is tested by comparing it to traditional methods on several different images.
Konovalov, Arkady; Krajbich, Ian
2016-01-01
Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. PMID:27511383
Prettejohn, Brenton J.; Berryman, Matthew J.; McDonnell, Mark D.
2011-01-01
Many simulations of networks in computational neuroscience assume completely homogenous random networks of the Erdös–Rényi type, or regular networks, despite it being recognized for some time that anatomical brain networks are more complex in their connectivity and can, for example, exhibit the “scale-free” and “small-world” properties. We review the most well known algorithms for constructing networks with given non-homogeneous statistical properties and provide simple pseudo-code for reproducing such networks in software simulations. We also review some useful mathematical results and approximations associated with the statistics that describe these network models, including degree distribution, average path length, and clustering coefficient. We demonstrate how such results can be used as partial verification and validation of implementations. Finally, we discuss a sometimes overlooked modeling choice that can be crucially important for the properties of simulated networks: that of network directedness. The most well known network algorithms produce undirected networks, and we emphasize this point by highlighting how simple adaptations can instead produce directed networks. PMID:21441986
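As one example of the constructions reviewed, Barabási-Albert preferential attachment produces a scale-free degree distribution, and the same growth loop yields a directed variant if each new edge is stored only from the newcomer to its target. A minimal sketch (the paper provides pseudo-code; this Python rendering is ours):

```python
import random

def barabasi_albert(n, m, directed=False, seed=0):
    """Grow a scale-free network: each new node attaches m edges to existing
    nodes with probability proportional to their current degree."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    targets = list(range(m))   # the first new node attaches to the m seed nodes
    repeated = []              # each node appears once per unit of degree
    for new in range(m, n):
        for t in targets:
            adj[new].add(t)
            if not directed:
                adj[t].add(new)         # undirected: store both directions
            repeated.extend([new, t])
        targets = set()                 # degree-proportional sampling of m
        while len(targets) < m:         # distinct targets for the next node
            targets.add(rng.choice(repeated))
        targets = list(targets)
    return adj

g = barabasi_albert(2000, m=3)
degs = sorted((len(v) for v in g.values()), reverse=True)
print("max/median degree:", degs[0], degs[len(degs) // 2])  # heavy tail expected
```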
NASA Astrophysics Data System (ADS)
Asfahani, J.; Tlas, M.
2015-10-01
An easy and practical method for interpreting residual gravity anomalies due to simple geometrically shaped models such as cylinders and spheres has been proposed in this paper. This proposed method is based on both the deconvolution technique and the simplex algorithm for linear optimization to most effectively estimate the model parameters, e.g., the depth from the surface to the center of a buried structure (sphere or horizontal cylinder) or the depth from the surface to the top of a buried object (vertical cylinder), and the amplitude coefficient from the residual gravity anomaly profile. The method was tested on synthetic data sets corrupted by different white Gaussian random noise levels to demonstrate the capability and reliability of the method. The results acquired show that the estimated parameter values derived by this proposed method are close to the assumed true parameter values. The validity of this method is also demonstrated using real field residual gravity anomalies from Cuba and Sweden. Comparable and acceptable agreement is shown between the results derived by this method and those derived from real field data.
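For a buried sphere, the residual anomaly along a profile takes the simple form g(x) = A·z/(x² + z²)^(3/2), so the depth z and amplitude coefficient A can be recovered by a simplex-type (Nelder-Mead) fit. A minimal sketch with scipy's generic optimizer (the paper's deconvolution step is not reproduced; the synthetic parameter values are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def sphere_anomaly(x, A, z):
    """Residual gravity of a buried sphere: center depth z, amplitude A."""
    return A * z / (x**2 + z**2) ** 1.5

# Synthetic profile with A=120, z=10, plus white Gaussian noise
rng = np.random.default_rng(5)
x = np.linspace(-50, 50, 101)
g_obs = sphere_anomaly(x, 120.0, 10.0) + rng.normal(0, 0.02, x.size)

loss = lambda p: np.sum((g_obs - sphere_anomaly(x, p[0], p[1])) ** 2)
fit = minimize(loss, x0=[50.0, 5.0], method="Nelder-Mead")
print("estimated A, z:", np.round(fit.x, 2))   # close to the true 120, 10
```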
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
Donoso-Bravo, A; Retamal, C; Carballa, M; Ruiz-Filippi, G; Chamy, R
2009-01-01
The effect of temperature on the kinetic parameters involved in the main reactions of the anaerobic digestion process was studied. Batch tests with starch, glucose and acetic acid as substrates for hydrolysis, acidogenesis and methanogenesis, respectively, were performed in a temperature range between 15 and 45 degrees C. First order kinetics was assumed to determine the hydrolysis rate constant, while Monod and Haldane kinetics were considered for acidogenesis and methanogenesis, respectively. The results showed that the anaerobic process is strongly influenced by temperature, with acidogenesis being the most affected. The Cardinal Temperature Model 1 with an inflection point (CTM1) fitted the experimental data properly over the whole temperature range, except for the maximum degradation rate of acidogenesis. A simple case study assessing the effect of temperature on the performance of an anaerobic CSTR indicated that with relatively simple substrates, like starch, the limiting reaction would change depending on temperature; however, when more complex substrates are used (e.g. sewage sludge), hydrolysis might more quickly become the limiting step.
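For reference, the CTM1 of Rosso and co-workers expresses a rate parameter through three cardinal temperatures T_min, T_opt and T_max. A minimal sketch under that commonly cited parameterization (the cardinal values below are assumed for illustration, not the paper's fitted constants):

```python
import numpy as np

def ctm1(T, k_opt, T_min, T_opt, T_max):
    """Cardinal Temperature Model 1 (Rosso-type): rate k(T), zero outside
    [T_min, T_max], with maximum k_opt reached at T_opt."""
    T = np.asarray(T, float)
    num = (T - T_max) * (T - T_min) ** 2
    den = (T_opt - T_min) * ((T_opt - T_min) * (T - T_opt)
                             - (T_opt - T_max) * (T_opt + T_min - 2 * T))
    return np.where((T > T_min) & (T < T_max), k_opt * num / den, 0.0)

# Illustrative cardinal temperatures for a mesophilic process (assumed values)
T = np.arange(15, 46)
k = ctm1(T, k_opt=0.35, T_min=5.0, T_opt=37.0, T_max=45.0)
print(f"k(20 C) = {k[T == 20][0]:.3f}, k(37 C) = {k[T == 37][0]:.3f}")
```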
Probability distribution functions for intermittent scrape-off layer plasma fluctuations
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.
2018-03-01
A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed to be exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple; thus, estimating model parameters requires an approach based on the characteristic function rather than the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
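For the reference case of the model (one-sided exponential pulses with exponentially distributed amplitudes), the stationary signal is Gamma distributed with shape given by the intermittency parameter, so its characteristic function is (1 - i⟨A⟩u)^(-γ). A minimal sketch of estimating (γ, ⟨A⟩) from the empirical characteristic function of synthetic data (the non-positive-definite amplitude cases discussed above are not covered here):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
gamma_true, A_mean = 2.5, 1.0   # intermittency parameter, mean pulse amplitude
signal = rng.gamma(gamma_true, A_mean, 20_000)   # stationary signal samples

u = np.linspace(0.05, 5.0, 60)
ecf = np.exp(1j * np.outer(u, signal)).mean(axis=1)   # empirical char. function

def model_cf(u, gamma, scale):
    """Characteristic function of a Gamma distribution."""
    return (1.0 - 1j * scale * u) ** (-gamma)

loss = lambda p: np.sum(np.abs(ecf - model_cf(u, p[0], p[1])) ** 2)
fit = minimize(loss, x0=[1.0, 0.5], method="Nelder-Mead")
print("estimated shape, scale:", np.round(fit.x, 2))   # ~ (2.5, 1.0)
```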
Field, Edward H.
2015-01-01
A methodology is presented for computing elastic‐rebound‐based probabilities in an unsegmented fault or fault system, which involves computing along‐fault averages of renewal‐model parameters. The approach is less biased and more self‐consistent than a logical extension of that applied most recently for multisegment ruptures in California. It also enables the application of magnitude‐dependent aperiodicity values, which the previous approach does not. Monte Carlo simulations are used to analyze long‐term system behavior, which is generally found to be consistent with that of physics‐based earthquake simulators. Results cast doubt that recurrence‐interval distributions at points on faults look anything like traditionally applied renewal models, a fact that should be considered when interpreting paleoseismic data. We avoid such assumptions by changing the "probability of what" question (from offset at a point to the occurrence of a rupture, assuming it is the next event to occur). The new methodology is simple, although not perfect in terms of recovering long‐term rates in Monte Carlo simulations. It represents a reasonable, improved way to represent first‐order elastic‐rebound predictability, assuming it is there in the first place, and for a system that clearly exhibits other unmodeled complexities, such as aftershock triggering.
Swimming with stiff legs at low Reynolds number.
Takagi, Daisuke
2015-08-01
Locomotion at low Reynolds number is not possible with cycles of reciprocal motion, an example being the oscillation of a single pair of rigid paddles or legs. Here, I demonstrate the possibility of swimming with two or more pairs of legs. They are assumed to oscillate collectively in a metachronal wave pattern in a minimal model based on slender-body theory for Stokes flow. The model predicts locomotion in the direction of the traveling wave, as commonly observed along the body of free-swimming crustaceans. The displacement of the body and the swimming efficiency depend on the number of legs, the amplitude, and the phase of oscillations. This study shows that paddling legs with distinct orientations and phases offers a simple mechanism for driving flow.
An instructive model of entropy
NASA Astrophysics Data System (ADS)
Zimmerman, Seth
2010-09-01
This article first notes the misinterpretation of a common thought experiment, and the misleading comment that 'systems tend to flow from less probable to more probable macrostates'. It analyses the experiment, generalizes it and introduces a new tool of investigation, the simplectic structure. A time-symmetric model is built upon this structure, yielding several non-intuitive results. The approach is combinatorial rather than statistical, and assumes that entropy is equivalent to 'missing information'. The intention of this article is not only to present interesting results, but also, by deliberately starting with a simple example and developing it through proof and computer simulation, to clarify the often confusing subject of entropy. The article should be particularly stimulating to students and instructors of discrete mathematics or undergraduate physics.
NASA Technical Reports Server (NTRS)
Rosner, R.; An, C.-H.; Musielak, Z. E.; Moore, R. L.; Suess, S. T.
1991-01-01
A simple qualitative model for the origin of the coronal and mass-loss dividing lines separating late-type giants and supergiants with and without hot, X-ray-emitting corona, and with and without significant mass loss is discussed. The basic physical effects considered are the necessity of magnetic confinement for hot coronal material on the surface of such stars and the large reflection efficiency for Alfven waves in cool exponential atmospheres. The model assumes that the magnetic field geometry of these stars changes across the observed 'dividing lines' from being mostly closed on the high effective temperature side to being mostly open on the low effective temperature side.
Chimera regimes in a ring of oscillators with local nonlinear interaction
NASA Astrophysics Data System (ADS)
Shepelev, Igor A.; Zakharova, Anna; Vadivasova, Tatiana E.
2017-03-01
One of the important problems concerning chimera states is the conditions for their existence and stability. Until now, it was assumed that chimeras could arise only in ensembles with a nonlocal character of interactions; however, this assumption is not exactly right, and in some special cases chimeras can be realized for a local type of coupling [1-3]. We propose a simple model of an ensemble with local coupling in which chimeras are realized. This model is a ring of linear oscillators with local nonlinear unidirectional interaction. Chimera structures in the ring are found using computer simulations for a wide range of parameter values. A diagram of the regimes on the plane of control parameters is plotted, and scenarios of chimera destruction are studied as the parameters are changed.
Solid phase extraction of copper(II) by fixed bed procedure on cation exchange complexing resins.
Pesavento, Maria; Sturini, Michela; D'Agostino, Girolamo; Biesuz, Raffaela
2010-02-19
The efficiency of metal ion recovery by solid phase extraction (SPE) in complexing resin columns is predicted by a simple model based on two parameters reflecting the sorption equilibria and kinetics of the metal ion on the considered resin. The parameter related to the adsorption equilibria was evaluated by the Gibbs-Donnan model, and that related to the kinetics by assuming that ion exchange is the rate-determining step of adsorption. The predicted parameters make it possible to evaluate the breakthrough volume of the considered metal ion, Cu(II), from different kinds of complexing resins and under different conditions, such as acidity and ionic composition. Copyright 2009. Published by Elsevier B.V.
The IDEA model: A single equation approach to the Ebola forecasting challenge.
Tuite, Ashleigh R; Fisman, David N
2018-03-01
Mathematical modeling is increasingly accepted as a tool that can inform disease control policy in the face of emerging infectious diseases, such as the 2014-2015 West African Ebola epidemic, but little is known about the relative performance of alternate forecasting approaches. The RAPIDD Ebola Forecasting Challenge (REFC) tested the ability of eight mathematical models to generate useful forecasts in the face of simulated Ebola outbreaks. We used a simple, phenomenological single-equation model (the "IDEA" model), which relies only on case counts, in the REFC. Model fits were performed using a maximum likelihood approach. We found that the model performed reasonably well relative to other more complex approaches, with performance metrics ranked on average 4th or 5th among participating models. IDEA appeared better suited to long- than short-term forecasts, and could be fit using nothing but reported case counts. Several limitations were identified, including difficulty in identifying epidemic peak (even retrospectively), unrealistically precise confidence intervals, and difficulty interpolating daily case counts when using a model scaled to epidemic generation time. More realistic confidence intervals were generated when case counts were assumed to follow a negative binomial, rather than Poisson, distribution. Nonetheless, IDEA represents a simple phenomenological model, easily implemented in widely available software packages that could be used by frontline public health personnel to generate forecasts with accuracy that approximates that which is achieved using more complex methodologies. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
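The IDEA model writes the incident case count at epidemic generation t as I(t) = (R0/(1+d)^t)^t, with a single discount parameter d. A minimal sketch of a Poisson maximum-likelihood fit of (R0, d) to synthetic counts (the REFC data and the negative-binomial variant mentioned above are not reproduced):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def idea_incidence(t, R0, d):
    """IDEA model: expected incident cases at epidemic generation t."""
    return (R0 / (1.0 + d) ** t) ** t

def neg_log_lik(params, t, cases):
    R0, d = params
    if R0 <= 1.0 or d < 0:
        return np.inf
    mu = idea_incidence(t, R0, d)
    return -np.sum(cases * np.log(mu) - mu - gammaln(cases + 1.0))  # Poisson

# Synthetic outbreak generated from the model itself
t = np.arange(1, 15)
rng = np.random.default_rng(4)
cases = rng.poisson(idea_incidence(t, R0=2.0, d=0.03))
fit = minimize(neg_log_lik, x0=[1.5, 0.01], args=(t, cases),
               method="Nelder-Mead")
print("fitted R0, d:", np.round(fit.x, 3))   # ~ (2.0, 0.03)
```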
Random noise effects in pulse-mode digital multilayer neural networks.
Kim, Y C; Shanblatt, M A
1995-01-01
A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
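The core stochastic-computing primitive is that an AND gate applied to two independent Bernoulli pulse streams produces a stream whose pulse rate is the product of the input probabilities, with accuracy set by the stream length. A minimal sketch of that primitive and its empirical error (the DMNN architecture and the hypergeometric analysis are not reproduced):

```python
import numpy as np

def pulse_stream(p, n, rng):
    """Bernoulli pulse sequence encoding probability p."""
    return rng.random(n) < p

def stochastic_multiply(p, q, n=4096, rng=None):
    """Multiply two probabilities with a single AND gate on pulse streams."""
    rng = rng or np.random.default_rng(0)
    return np.mean(pulse_stream(p, n, rng) & pulse_stream(q, n, rng))

rng = np.random.default_rng(0)
p, q = 0.6, 0.35
trials = [stochastic_multiply(p, q, n=4096, rng=rng) for _ in range(1000)]
print(f"true p*q = {p*q:.4f}, mean = {np.mean(trials):.4f}, "
      f"std = {np.std(trials):.4f}")   # the error variance shrinks as 1/n
```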
Analytic derivation of bacterial growth laws from a simple model of intracellular chemical dynamics.
Pandey, Parth Pratim; Jain, Sanjay
2016-09-01
Experiments have found that the growth rate and certain other macroscopic properties of bacterial cells in steady-state cultures depend upon the medium in a surprisingly simple manner; these dependencies are referred to as 'growth laws'. Here we construct a dynamical model of interacting intracellular populations to understand some of the growth laws. The model has only three population variables: an amino acid pool, a pool of enzymes that transport an external nutrient and produce the amino acids, and ribosomes that catalyze their own and the enzymes' production from the amino acids. We assume that the cell allocates its resources between the enzyme sector and the ribosomal sector to maximize its growth rate. We show that the empirical growth laws follow from this assumption and derive analytic expressions for the phenomenological parameters in terms of the more basic model parameters. Interestingly, the maximization of the growth rate of the cell as a whole implies that the cell allocates resources to the enzyme and ribosomal sectors in inverse proportion to their respective 'efficiencies'. The work introduces a mathematical scheme in which the cellular growth rate can be explicitly determined and shows that two large parameters, the number of amino acid residues per enzyme and per ribosome, are useful for making approximations.
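The "inverse proportion to efficiencies" result can be illustrated with a stylized two-sector reduction (our construction for illustration, not the paper's three-variable ODE system): growth is limited by the slower of the nutrient-conversion flux and the translation flux, and the optimal allocation balances the two.

```python
import numpy as np

# Stylized reduction: enzymes import/convert nutrient at rate ~ k_E*(1-phi),
# ribosomes translate at rate ~ k_R*phi; steady growth is set by the slower
# flux, so lambda(phi) = min(k_E*(1-phi), k_R*phi).
k_E, k_R = 2.0, 5.0                    # sector 'efficiencies' (assumed values)
phi = np.linspace(0.0, 1.0, 10_001)    # fraction allocated to ribosomes
growth = np.minimum(k_E * (1 - phi), k_R * phi)
phi_star = phi[np.argmax(growth)]
print(f"optimal ribosome allocation: {phi_star:.3f} "
      f"(balance point k_E/(k_E+k_R) = {k_E / (k_E + k_R):.3f})")
print(f"max growth rate: {growth.max():.3f} "
      f"(harmonic form k_E*k_R/(k_E+k_R) = {k_E * k_R / (k_E + k_R):.3f})")
```

The optimum sits where the two fluxes are equal, so the ribosome share grows with the enzyme efficiency and shrinks with the ribosome efficiency, in line with the allocation result stated in the abstract.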
On the importance of incorporating sampling weights in ...
Occupancy models are used extensively to assess wildlife-habitat associations and to predict species distributions across large geographic regions. Occupancy models were developed as a tool to properly account for imperfect detection of a species. Current guidelines on survey design requirements for occupancy models focus on the number of sample units and the pattern of revisits to a sample unit within a season. We focus on the sampling design or how the sample units are selected in geographic space (e.g., stratified, simple random, unequal probability, etc). In a probability design, each sample unit has a sample weight which quantifies the number of sample units it represents in the finite (oftentimes areal) sampling frame. We demonstrate the importance of including sampling weights in occupancy model estimation when the design is not a simple random sample or equal probability design. We assume a finite areal sampling frame as proposed for a national bat monitoring program. We compare several unequal and equal probability designs and varying sampling intensity within a simulation study. We found the traditional single season occupancy model produced biased estimates of occupancy and lower confidence interval coverage rates compared to occupancy models that accounted for the sampling design. We also discuss how our findings inform the analyses proposed for the nascent North American Bat Monitoring Program and other collaborative synthesis efforts that propose h
Multi-regime transport model for leaching behavior of heterogeneous porous materials.
Sanchez, F; Massry, I W; Eighmy, T; Kosson, D S
2003-01-01
Utilization of secondary materials in civil engineering applications (e.g. as substitutes for natural aggregates or binder constituents) requires assessment of the physical and environmental properties of the product. Environmental assessment often necessitates evaluation of the potential for constituent release through leaching. Currently most leaching models used to estimate long-term field performance assume that the species of concern is uniformly dispersed in a homogeneous porous material. However, waste materials are often comprised of distinct components such as coarse or fine aggregates in a cement concrete or waste encapsulated in a stabilized matrix. The specific objectives of the research presented here were to (1) develop a one-dimensional, multi-regime transport model (i.e. MRT model) to describe the release of species from heterogeneous porous materials and (2) evaluate simple limit cases using the model for species whose release is not dependent on pH. Two different idealized model systems were considered: (1) a porous material contaminated with the species of interest and containing inert aggregates and (2) a porous material containing the contaminant of interest only in the aggregates. The effect of three factors on constituent release was examined: (1) the volume fraction of material occupied by the aggregates compared to a homogeneous porous material, (2) the aggregate size and (3) differences in mass transfer rates between the binder and the aggregates. Simulation results confirmed that assuming homogeneous materials when evaluating the release of contaminants from porous waste materials may result in erroneous long-term field performance assessment.
NASA Astrophysics Data System (ADS)
Kotrlová, A.; Šrámková, E.; Török, G.; Stuchlík, Z.; Goluchová, K.
2017-11-01
In our previous work (Paper I) we applied several models of high-frequency quasi-periodic oscillations (HF QPOs) to estimate the spin of the central compact object in three Galactic microquasars, assuming the possibility that the central compact body is a super-spinning object (or a naked singularity) with external spacetime described by Kerr geometry with a dimensionless spin parameter a ≡ cJ/GM² > 1. Here we extend our consideration, and in a consistent way investigate implications of a set of ten resonance models so far discussed only in the context of a < 1. The same physical arguments as in Paper I are applied to these models, i.e. only a small deviation of the spin estimate from a = 1, a ≳ 1, is assumed for a favoured model. For five of these models that involve Keplerian and radial epicyclic oscillations we find the existence of a unique specific QPO excitation radius. Consequently, there is a simple behaviour of the dimensionless frequency M × νU(a) represented by a single continuous function having solely one maximum close to a ≳ 1. Only one of these models is compatible with the expectation of a ≳ 1. The other five models that involve the radial and vertical epicyclic oscillations imply the existence of multiple resonant radii. This signifies a more complicated behaviour of M × νU(a) that cannot be represented by single functions. Each of these five models is compatible with the expectation of a ≳ 1.
The Mass-dependent Star Formation Histories of Disk Galaxies: Infall Model Versus Observations
NASA Astrophysics Data System (ADS)
Chang, R. X.; Hou, J. L.; Shen, S. Y.; Shu, C. G.
2010-10-01
We introduce a simple model to explore the star formation histories of disk galaxies. We assume that the disk originates and grows through continuous gas infall, with the gas infall rate parameterized by a Gaussian formula with one free parameter, the infall-peak time tp. The Kennicutt star formation law is adopted to describe how much cold gas turns into stars, and the gas outflow process is also considered in our model. We find that, at a given galactic stellar mass M*, a model adopting a late infall-peak time tp results in blue colors, low metallicity, high specific star formation rate (SFR), and high gas fraction, while the gas outflow rate mainly influences the gas-phase metallicity and the star formation efficiency mainly influences the gas fraction. Motivated by the local observed scaling relations, we "construct" a mass-dependent model by assuming that low-mass galaxies have a later infall-peak time tp and a larger gas outflow rate than massive systems. It is shown that this model agrees not only with the local observations, but also with the observed correlations between specific SFR (SFR/M*) and galactic stellar mass M* at intermediate redshifts z < 1. A comparison between the Gaussian-infall model and the exponential-infall model is also presented: the exponential-infall model predicts a higher SFR at early stages and a lower SFR at later stages than the Gaussian-infall model. Our results suggest that the Gaussian infall rate may be more reasonable in describing the gas cooling process than the exponential infall rate, especially for low-mass systems.
Direct dark matter search by annual modulation in XMASS-I
NASA Astrophysics Data System (ADS)
Abe, K.; Hiraide, K.; Ichimura, K.; Kishimoto, Y.; Kobayashi, K.; Kobayashi, M.; Moriyama, S.; Nakahata, M.; Norita, T.; Ogawa, H.; Sekiya, H.; Takachio, O.; Takeda, A.; Yamashita, M.; Yang, B. S.; Kim, N. Y.; Kim, Y. D.; Tasaka, S.; Fushimi, K.; Liu, J.; Martens, K.; Suzuki, Y.; Xu, B. D.; Fujita, R.; Hosokawa, K.; Miuchi, K.; Onishi, Y.; Oka, N.; Takeuchi, Y.; Kim, Y. H.; Lee, J. S.; Lee, K. B.; Lee, M. K.; Fukuda, Y.; Itow, Y.; Kegasa, R.; Kobayashi, K.; Masuda, K.; Takiya, H.; Nishijima, K.; Nakamura, S.; Xmass Collaboration
2016-08-01
A search for dark matter was conducted by looking for an annual modulation signal due to the Earth's rotation around the Sun using XMASS, a single-phase liquid xenon detector. The data used for this analysis comprised 359.2 live days times 832 kg of exposure accumulated between November 2013 and March 2015. Assuming Weakly Interacting Massive Particle (WIMP) dark matter elastically scattering on the target nuclei, an exclusion upper limit on the WIMP-nucleon cross section of 4.3 × 10⁻⁴¹ cm² at 8 GeV/c² was obtained, and we exclude almost all the DAMA/LIBRA allowed region in the 6 to 16 GeV/c² range at ∼10⁻⁴⁰ cm². The result of a simple modulation analysis, without assuming any specific dark matter model but including electron/γ events, showed a slight negative amplitude. The p-values obtained with two independent analyses are 0.014 and 0.068 for the null hypothesis, respectively. We obtained 90% C.L. upper bounds that can be used to test various models. This is the first extensive annual modulation search probing this region with an exposure comparable to that of DAMA/LIBRA.
Embracing complexity: theory, cases and the future of bioethics.
Wilson, James
2014-01-01
This paper reflects on the relationship between theory and practice in bioethics, by using various concepts drawn from debates on innovation in healthcare research--in particular debates around how best to connect up blue skies 'basic' research with practical innovations that can improve human lives. It argues that it is a mistake to assume that the most difficult and important questions in bioethics are the most abstract ones, and also a mistake to assume that getting clear about abstract cases will automatically be of much help in getting clear about more complex cases. It replaces this implicitly linear model with a more complex one that draws on the idea of translational research in healthcare. On the translational model, there is a continuum of cases from the most simple and abstract (thought experiments) to the most concrete and complex (real world cases). Insights need to travel in both directions along this continuum--from the more abstract to the more concrete and from the more concrete to the more abstract. The paper maps out some difficulties in moving from simpler to more complex cases, and in doing so makes recommendations about the future of bioethics.
The Effects of Neutral Gas Release on Vehicle Charging: Experiment and Theory
NASA Astrophysics Data System (ADS)
Walker, D. N.; Amatucci, W. E.; Bowles, J. H.; Fernsler, R. F.; Siefring, C. L.; Antoniades, J. A.; Keskinen, M. J.
1998-11-01
This paper describes an experimental and theoretical research effort related to the mitigation of spacecraft charging by Neutral Gas Release (NGR). The Space Power Experiments Aboard Rockets programs (SPEAR I and III) [Mandel et al., 1998; Berg et al., 1995] and other earlier efforts have demonstrated that NGR is an effective method of controlling discharges in space. The laboratory experiments were conducted in the large volume Space Physics Simulation Chamber (SPSC) at the Naval Research Laboratory (NRL). A realistic near-earth space environment can be simulated in this device, for which minimum scaling needs to be performed to relate the data to space plasma regimes. This environment is similar to that encountered by LEO spacecraft, e.g., the Space Station, Shuttle, and high inclination satellites. The experimental arrangement consists of an aluminum cylinder which can be biased to high negative voltage (0.4 kV
Charge transfer kinetics at the solid-solid interface in porous electrodes
NASA Astrophysics Data System (ADS)
Bai, Peng; Bazant, Martin Z.
2014-04-01
Interfacial charge transfer is widely assumed to obey the Butler-Volmer kinetics. For certain liquid-solid interfaces, the Marcus-Hush-Chidsey theory is more accurate and predictive, but it has not been applied to porous electrodes. Here we report a simple method to extract the charge transfer rates in carbon-coated LiFePO4 porous electrodes from chronoamperometry experiments, obtaining curved Tafel plots that contradict the Butler-Volmer equation but fit the Marcus-Hush-Chidsey prediction over a range of temperatures. The fitted reorganization energy matches the Born solvation energy for electron transfer from carbon to the iron redox site. The kinetics are thus limited by electron transfer at the solid-solid (carbon-LixFePO4) interface rather than by ion transfer at the liquid-solid interface, as previously assumed. The proposed experimental method generalizes Chidsey’s method for phase-transforming particles and porous electrodes, and the results show the need to incorporate Marcus kinetics in modelling batteries and other electrochemical systems.
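The Marcus-Hush-Chidsey rate is an integral of a Marcus Gaussian over the Fermi distribution of electrode electrons; it saturates at large overpotential and therefore bends the Tafel plot, unlike the Butler-Volmer exponential. A numerical sketch in units of k_BT (the sign convention and the reorganization energy value below are assumptions for illustration, not the paper's fitted parameters):

```python
import numpy as np

x = np.linspace(-60.0, 60.0, 4001)   # electron energies in units of k_B*T
dx = x[1] - x[0]

def k_mhc(eta, lam):
    """Marcus-Hush-Chidsey rate (arbitrary prefactor): Marcus Gaussian
    integrated over the Fermi occupation of electrode states."""
    integrand = np.exp(-(x - lam + eta) ** 2 / (4.0 * lam)) / (1.0 + np.exp(x))
    return integrand.sum() * dx

etas = np.linspace(0.5, 25.0, 40)
lam = 8.0                            # reorganization energy ~ 8 k_B*T (assumed)
tafel = np.log([k_mhc(e, lam) for e in etas])
slope_lo = (tafel[5] - tafel[0]) / (etas[5] - etas[0])
slope_hi = (tafel[-1] - tafel[-6]) / (etas[-1] - etas[-6])
# Butler-Volmer would keep a constant slope alpha (~0.5); MHC bends over.
print(f"MHC Tafel slope: {slope_lo:.2f} at low eta -> {slope_hi:.2f} at high eta")
```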
Symmetry relations in charmless B{yields}PPP decays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gronau, Michael; Rosner, Jonathan L.; Enrico Fermi Institute and Department of Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, Illinois 60637
2005-11-01
Strangeness-changing decays of B mesons to three-body final states of pions and kaons are studied, assuming that they are dominated by a ΔI = 0 penguin amplitude with flavor structure b → s. Numerous isospin relations for B → Kππ and for underlying quasi-two-body decays are compared successfully with experiment, in some cases resolving ambiguities in fitting resonance parameters. The only exception is a somewhat small branching ratio noted in B⁰ → K*⁰π⁰, interpreted in terms of destructive interference between a penguin amplitude and an enhanced electroweak penguin contribution. Relations for B decays into three kaons are derived in terms of final states involving K_S or K_L, assuming that φK-subtracted decay amplitudes are symmetric in K and K̄, as has been observed experimentally. Rates due to nonresonant backgrounds are studied using a simple model, which may reduce discrete ambiguities in Dalitz plot analyses.
Performance Metrics, Error Modeling, and Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling
2016-01-01
A common set of statistical metrics has been used to summarize the performance of models or measurements: the most widely used ones are bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
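The mapping the authors describe is easy to verify: under the linear error model x = a + b·t + ε with ε ~ N(0, σ²), the bias is a + (b-1)·mean(t), the mean square error adds the variance terms, and the correlation is b·std(t)/std(x). A minimal numeric check of that derivation:

```python
import numpy as np

rng = np.random.default_rng(8)
a, b, sigma = 0.5, 1.2, 2.0            # linear error model parameters
t = rng.normal(10.0, 3.0, 200_000)     # reference ("truth")
x = a + b * t + rng.normal(0.0, sigma, t.size)   # measurement

mu_t, var_t = t.mean(), t.var()
# Metrics predicted analytically from (a, b, sigma):
bias_pred = a + (b - 1.0) * mu_t
mse_pred = bias_pred**2 + (b - 1.0) ** 2 * var_t + sigma**2
corr_pred = b * np.sqrt(var_t) / np.sqrt(b**2 * var_t + sigma**2)
# Metrics computed directly from the data:
err = x - t
print(f"bias: {err.mean():.3f} vs predicted {bias_pred:.3f}")
print(f"MSE : {(err**2).mean():.3f} vs predicted {mse_pred:.3f}")
print(f"corr: {np.corrcoef(x, t)[0, 1]:.4f} vs predicted {corr_pred:.4f}")
```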
Relaxed Poisson cure rate models.
Rodrigues, Josemar; Cordeiro, Gauss M; Cancho, Vicente G; Balakrishnan, N
2016-03-01
The purpose of this article is to make the standard promotion cure rate model (Yakovlev and Tsodikov) more flexible by assuming that the number of lesions or altered cells after a treatment follows a fractional Poisson distribution (Laskin). It is proved that the well-known Mittag-Leffler relaxation function (Berberan-Santos) is a simple way to obtain a new cure rate model that is a compromise between the promotion and geometric cure rate models, allowing for superdispersion. The relaxed cure rate model developed here can thus be considered a natural and less restrictive extension of the popular Poisson cure rate model at the cost of an additional parameter, and a competitor to negative-binomial cure rate models (Rodrigues et al.). Some mathematical properties of a proper relaxed Poisson density are explored. A simulation study and an illustration of the proposed cure rate model from the Bayesian point of view are finally presented. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ninokata, H.; Deguchi, A.; Kawahara, A.
1995-09-01
A new void drift model for the subchannel analysis method is presented for the thermohydraulics calculation of two-phase flows in rod bundles, where the flow model uses a two-fluid formulation for the conservation of mass, momentum and energy. The void drift model is constructed based on experimental data obtained with air-water as working fluids in a geometrically simple test section of two interconnected circular channels. The void drift force is assumed to be the origin of the void drift velocity components of the two-phase cross-flow in the gap area between two adjacent rods and to overcome the momentum exchanges at the phase interface and the wall-fluid interface. This void drift force is implemented in the cross-flow momentum equations. Computational results have been successfully compared to available experimental data, including 3x3 rod bundle data.
A simple model for remineralization of subsurface lesions in tooth enamel
NASA Astrophysics Data System (ADS)
Christoffersen, J.; Christoffersen, M. R.; Arends, J.
1982-12-01
A model for remineralization of subsurface lesions in tooth enamel is presented. The important assumption on which the model is based is that the rate-controlling process is the crystal surface process by which ions are incorporated in the crystallites; that is, the transport of ions through small holes in the so-called intact surface layer does not influence the rate of mineral uptake at the crystal surface. Further, the density of mineral in the lesion is assumed to increase down the lesion, when the remineralization process is started. It is shown that the dimension of the initial holes in the enamel surface layer must be larger than the dimension of the individual crystallites in order to prevent the formation of arrested lesions. Theoretical expressions for the progress of remineralization are given. The suggested model emphasizes the need for measurements of mineral densities in the lesion, prior to, and during the lesion repair.
A Local-Realistic Model of Quantum Mechanics Based on a Discrete Spacetime
NASA Astrophysics Data System (ADS)
Sciarretta, Antonio
2018-01-01
This paper presents a realistic, stochastic, and local model that reproduces nonrelativistic quantum mechanics (QM) results without using its mathematical formulation. The proposed model only uses integer-valued quantities and operations on probabilities, in particular assuming a discrete spacetime under the form of a Euclidean lattice. Individual (spinless) particle trajectories are described as random walks. Transition probabilities are simple functions of a few quantities that are either randomly associated to the particles during their preparation, or stored in the lattice nodes they visit during the walk. QM predictions are retrieved as probability distributions of similarly-prepared ensembles of particles. The scenarios considered to assess the model comprise the free particle, constant external force, harmonic oscillator, particle in a box, the Delta potential, particle on a ring, and particle on a sphere, and include quantization of energy levels and angular momentum, as well as momentum entanglement.
Computer modeling and simulation in inertial confinement fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCrory, R.L.; Verdon, C.P.
1989-03-01
The complex hydrodynamic and transport processes associated with the implosion of an inertial confinement fusion (ICF) pellet place considerable demands on numerical simulation programs. Processes associated with implosion can usually be described using relatively simple models, but their complex interplay requires that programs model most of the relevant physical phenomena accurately. Most hydrodynamic codes used in ICF incorporate a one-fluid, two-temperature model. Electrons and ions are assumed to flow as one fluid (no charge separation). Due to the relatively weak coupling between the ions and electrons, each species is treated separately in terms of its temperature. In this paper we describe some of the major components associated with an ICF hydrodynamics simulation code. To serve as an example we draw heavily on a two-dimensional Lagrangian hydrodynamic code (ORCHID) written at the University of Rochester's Laboratory for Laser Energetics. 46 refs., 19 figs., 1 tab.
A non-LTE model for the Jovian methane infrared emissions at high spectral resolution
NASA Technical Reports Server (NTRS)
Halthore, Rangasayi N.; Allen, J. E., Jr.; Decola, Philip L.
1994-01-01
High resolution spectra of Jupiter in the 3.3 micrometer region have so far failed to reveal either the continuum or the line emissions that can be unambiguously attributed to the ν₃ band of methane (Drossart et al. 1993; Kim et al. 1991). ν₃ line intensities predicted with the help of two simple non-Local Thermodynamic Equilibrium (LTE) models -- a two-level model and a three-level model, using experimentally determined relaxation coefficients -- are shown to be one to three orders of magnitude, respectively, below the 3-sigma noise level of these observations. Predicted ν₄ emission intensities are consistent with observed values. If the methane mixing ratio below the homopause is assumed to be 2 × 10⁻³, a value of about 300 K is derived as an upper limit to the temperature of the high stratosphere at microbar levels.
Stabilizing effect of cannibalism in a two stages population model.
Rault, Jonathan; Benoît, Eric; Gouzé, Jean-Luc
2013-03-01
In this paper we build a prey-predator model with a discrete weight structure for the predator. This model conserves the number of individuals and the biomass, and both growth and reproduction of the predator depend on the food ingested. Moreover, the model allows cannibalism, meaning that the predator can eat the prey but also other predators. We focus on a simple version with two weight classes or stages (larvae and adults) and present some general mathematical results. In the last part, we assume that the dynamics of the prey is fast compared to that of the predator in order to go further in the analysis, and we eventually conclude that under some conditions cannibalism can stabilize the system: more precisely, an unstable equilibrium without cannibalism becomes almost globally stable with some cannibalism. Some numerical simulations are done to illustrate this result.
Internal friction and mode relaxation in a simple chain model.
Fugmann, S; Sokolov, I M
2009-12-21
We consider the equilibrium relaxation properties of the end-to-end distance and of the principal components in a one-dimensional polymer chain model with nonlinear interaction between the beads. While for the single-well potentials these properties are similar to the ones of a Rouse chain, for the double-well interaction potentials, modeling internal friction, they differ vastly from the ones of the harmonic chain at intermediate times and intermediate temperatures. This minimal description within a one-dimensional model mimics the relaxation properties found in much more complex polymer systems. Thus, the relaxation time of the end-to-end distance may grow by orders of magnitude at intermediate temperatures. The principal components (whose directions are shown to coincide with the normal modes of the harmonic chain, whatever interaction potential is assumed) not only display larger relaxation times but also subdiffusive scaling.
Modeling gene expression measurement error: a quasi-likelihood approach
Strimmer, Korbinian
2003-01-01
Background: Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results: Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions: The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637
Risk perception in epidemic modeling
NASA Astrophysics Data System (ADS)
Bagnoli, Franco; Liò, Pietro; Sguanci, Luca
2007-12-01
We investigate the effects of risk perception in a simple model of epidemic spreading. We assume that the perception of the risk of being infected depends on the fraction of neighbors that are ill. The effect of this factor is to decrease the infectivity, that therefore becomes a dynamical component of the model. We study the problem in the mean-field approximation and by numerical simulations for regular, random, and scale-free networks. We show that for homogeneous and random networks, there is always a value of perception that stops the epidemics. In the “worst-case” scenario of a scale-free network with diverging input connectivity, a linear perception cannot stop the epidemics; however, we show that a nonlinear increase of the perception risk may lead to the extinction of the disease. This transition is discontinuous, and is not predicted by the mean-field analysis.
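A minimal sketch of the mechanism on a crude random graph: an SIS-type epidemic in which each susceptible's per-contact infectivity is reduced by a perception factor depending on the fraction of ill neighbors (the exponential reduction below is an assumed form for illustration, and the graph construction is ours, not the paper's network ensembles):

```python
import numpy as np

def sis_with_perception(adj, tau=0.3, J=0.0, steps=200, seed=0):
    """Synchronous SIS dynamics where perceived risk lowers infectivity:
    per-contact infection probability tau * exp(-J * (ill neighbors)/k)."""
    rng = np.random.default_rng(seed)
    n = len(adj)
    ill = rng.random(n) < 0.1
    for _ in range(steps):
        new = ill.copy()
        for i in range(n):
            n_ill = sum(ill[j] for j in adj[i])
            if ill[i]:
                new[i] = False                       # recover each step
            elif n_ill:
                k = len(adj[i])
                p = tau * np.exp(-J * n_ill / k)     # perception-reduced infectivity
                new[i] = rng.random() < 1 - (1 - p) ** n_ill
        ill = new
    return ill.mean()

# Crude random graph with ~6 contacts per node
rng = np.random.default_rng(1)
n, k = 400, 6
adj = [list(set(rng.choice(n, k)) - {i}) for i in range(n)]
for J in (0.0, 2.0, 6.0):
    print(f"J={J}: endemic fraction ~ {sis_with_perception(adj, J=J):.2f}")
```

Raising the perception strength J drives the endemic fraction toward zero, which is the stopping effect described for homogeneous and random networks.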
Silicate Inclusions in the Kodaikanal IIE Iron Meteorite
NASA Technical Reports Server (NTRS)
Kurat, G.; Varela, M. E.; Zinner, E.
2005-01-01
Silicate inclusions in iron meteorites display an astonishing chemical and mineralogical variety, ranging from chondritic to highly fractionated, silica- and alkali-rich assemblages. In spite of this, their origin is commonly considered to be a simple one: mixing of silicates, fractionated or unfractionated, with metal. The latter had to be liquid in order to accommodate the former in a pore-free way, which all models accomplish by assuming shock melting. IIE iron meteorites are particularly interesting because they contain an exotic zoo of silicate inclusions, including some chemically strongly fractionated ones. They also pose a formidable conundrum: young silicates are enclosed by very old metal. This and many other incompatibilities between models and reality forced the formulation of an alternative genetic model for irons. Here we present preliminary findings from our study of Kodaikanal silicate inclusions.
Frank, Steven A.
2010-01-01
We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions, and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased or random stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background by which we can understand why these neutral generative models are so successful. I show where the classic patterns come from, such as the Poisson pattern, the normal or Gaussian pattern, and many others. Each classic pattern was often discovered by a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present a simple and consistent informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern. PMID:19538344
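The exponential case quoted above follows from a short, standard maximum-entropy calculation, sketched here for x >= 0 with fixed mean mu:

    \max_p \; -\int_0^\infty p(x)\ln p(x)\,dx
    \quad\text{s.t.}\quad \int_0^\infty p(x)\,dx = 1, \qquad \int_0^\infty x\,p(x)\,dx = \mu,

    \mathcal{L} = -\int p\ln p + \alpha\Big(1 - \int p\Big) + \lambda\Big(\mu - \int x\,p\Big)
    \;\Rightarrow\; -\ln p - 1 - \alpha - \lambda x = 0
    \;\Rightarrow\; p(x) \propto e^{-\lambda x} .

Normalization and the mean constraint then give p(x) = (1/mu) e^{-x/mu}. Adding a variance constraint yields the Gaussian, and constraining the geometric mean (the expectation of ln x) yields the power law, exactly as stated above.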
Liu, Danping; Yeung, Edwina H; McLain, Alexander C; Xie, Yunlong; Buck Louis, Germaine M; Sundaram, Rajeshwari
2017-09-01
Imperfect follow-up in longitudinal studies commonly leads to missing outcome data that can potentially bias the inference when the missingness is nonignorable; that is, the propensity of missingness depends on missing values in the data. In the Upstate KIDS Study, we seek to determine if the missingness of child development outcomes is nonignorable, and how a simple model assuming ignorable missingness would compare with more complicated models for a nonignorable mechanism. To correct for nonignorable missingness, the shared random effects model (SREM) jointly models the outcome and the missing mechanism. However, the computational complexity and lack of software packages has limited its practical applications. This paper proposes a novel two-step approach to handle nonignorable missing outcomes in generalized linear mixed models. We first analyse the missing mechanism with a generalized linear mixed model and predict values of the random effects; then, the outcome model is fitted adjusting for the predicted random effects to account for heterogeneity in the missingness propensity. Extensive simulation studies suggest that the proposed method is a reliable approximation to SREM, with a much faster computation. The nonignorability of missing data in the Upstate KIDS Study is estimated to be mild to moderate, and the analyses using the two-step approach or SREM are similar to the model assuming ignorable missingness. The two-step approach is a computationally straightforward method that can be conducted as sensitivity analyses in longitudinal studies to examine violations to the ignorable missingness assumption and the implications relative to health outcomes. © 2017 John Wiley & Sons Ltd.
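A numpy-only caricature of the two-step idea. The paper fits a proper generalized linear mixed model in step 1; here a shrunken per-subject logit stands in for the predicted random effect, purely for illustration, and all simulated quantities are assumptions of this sketch:

    import numpy as np

    rng = np.random.default_rng(1)
    n_subj, n_vis = 500, 6
    b = rng.normal(0.0, 1.0, n_subj)                      # latent shared random effect
    x = rng.normal(size=(n_subj, n_vis))
    y = 1.0 + 0.5 * x + b[:, None] + rng.normal(size=(n_subj, n_vis))
    p_obs = 1.0 / (1.0 + np.exp(-(1.0 - 1.5 * b[:, None])))   # nonignorable missingness
    obs = rng.random((n_subj, n_vis)) < p_obs

    # Step 1: predict each subject's missingness random effect
    # (shrunken logit of the observed-visit fraction, a stand-in for a GLMM fit)
    frac = (obs.sum(axis=1) + 0.5) / (n_vis + 1.0)
    bhat = np.log(frac / (1.0 - frac))
    bhat = (bhat - bhat.mean()) / bhat.std()

    # Step 2: outcome model adjusted for the predicted random effect
    z = np.repeat(bhat, n_vis).reshape(n_subj, n_vis)
    X_adj = np.column_stack([np.ones(obs.sum()), x[obs], z[obs]])
    X_naive = X_adj[:, :2]
    beta_adj = np.linalg.lstsq(X_adj, y[obs], rcond=None)[0]
    beta_naive = np.linalg.lstsq(X_naive, y[obs], rcond=None)[0]
    # the adjusted intercept typically moves toward the generating value of 1.0
    print("naive intercept:", beta_naive[0], "adjusted intercept:", beta_adj[0])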
Adventures in heterotic string phenomenology
NASA Astrophysics Data System (ADS)
Dundee, George Benjamin
In this Dissertation, we consider three topics in the study of effective field theories derived from orbifold compactifications of the heterotic string. In Chapter 2 we provide a primer for those interested in building models based on orbifold compactifications of the heterotic string. In Chapter 3, we analyze gauge coupling unification in the context of heterotic strings on anisotropic orbifolds. This construction is very much analogous to effective five-dimensional orbifold GUT field theories. Our analysis assumes three fundamental scales: the string scale, M_S, a compactification scale, M_C, and a mass scale for some of the vector-like exotics, M_EX; the other exotics are assumed to get mass at M_S. In the particular models analyzed, we show that gauge coupling unification is not possible with M_EX = M_C and in fact we require M_EX << M_C ~ 3 × 10^16 GeV. We find that about 10% of the parameter space has a proton lifetime (from dimension-six gauge exchange) 10^33 yr ≲ tau(p → pi^0 e^+) ≲ 10^36 yr, which is potentially observable by the next generation of proton decay experiments; 80% of the parameter space gives proton lifetimes below Super-K bounds. In Chapter 4, we examine the relationship between the string coupling constant, g_STRING, and the grand unified gauge coupling constant, alpha_GUT, in the models of Chapter 3. We find that the requirement that the theory be perturbative provides a non-trivial constraint on these models. Interestingly, there is a correlation between the proton decay rate (due to dimension-six operators) and the string coupling constant in this class of models. Finally, we make some comments concerning the extension of these models to the six- (and higher-) dimensional case. In Chapter 5, we discuss the issues of supersymmetry breaking and moduli stabilization within the context of E8 ⊗ E8 heterotic orbifold constructions and, in particular, we focus on the class of "mini-landscape" models. These theories contain a non-Abelian hidden gauge sector which generates a non-perturbative superpotential leading to supersymmetry breaking and moduli stabilization. We demonstrate this effect in a simple model which contains many of the features of the more general construction. In addition, we argue that once supersymmetry is broken in a restricted sector of the theory, then all moduli are stabilized by supergravity effects. Finally, we obtain the low energy superparticle spectrum resulting from this simple model.
Modeling and Analysis of Ultrarelativistic Heavy Ion Collisions
NASA Astrophysics Data System (ADS)
McCormack, William; Pratt, Scott
2014-09-01
High-energy collisions of heavy ions, such as gold, copper, or uranium, serve as an important means of studying quantum chromodynamic matter. When relativistic nuclei collide, a hot, energetic fireball of dissociated partonic matter is created; this super-hadronic matter is believed to be the quark gluon plasma (QGP), which is theorized to have comprised the universe immediately following the big bang. As the fireball expands and cools, it reaches freeze-out temperatures, and quarks hadronize into baryons and mesons. To characterize this super-hadronic matter, one can use balance functions, a means of studying correlations due to local charge conservation. In particular, the simple model used in this research assumed two waves of localized charge-anticharge production, with an abrupt transition from the QGP stage to hadronization. Balance functions were constructed as the sum of these two charge production components, and four parameters were manipulated to match the model's output with experimental data taken from the STAR Collaboration at RHIC. Results show that the chemical composition of the super-hadronic matter is consistent with that of a thermally equilibrated QGP. An MSU REU Project.
A solvable model of Vlasov-kinetic plasma turbulence in Fourier-Hermite phase space
NASA Astrophysics Data System (ADS)
Adkins, T.; Schekochihin, A. A.
2018-02-01
A class of simple kinetic systems is considered, described by the one-dimensional Vlasov-Landau equation with Poisson or Boltzmann electrostatic response and an energy source. Assuming a stochastic electric field, a solvable model is constructed for the phase-space turbulence of the particle distribution. The model is a kinetic analogue of the Kraichnan-Batchelor model of chaotic advection. The solution of the model is found in Fourier-Hermite space and shows that the free-energy flux from low to high Hermite moments is suppressed, with phase mixing cancelled on average by anti-phase-mixing (stochastic plasma echo). This implies that Landau damping is an ineffective route to dissipation (i.e. to thermalisation of electric energy via velocity space). The full Fourier-Hermite spectrum is derived. Its asymptotics are m^(-3/2) at low wavenumbers and high Hermite moments, and m^(-1/2) k^(-2) at low Hermite moments and high wavenumbers. These conclusions hold at wavenumbers below a certain cutoff (analogue of the Kolmogorov scale), which increases with the amplitude of the stochastic electric field and scales as the inverse square of the collision rate. The energy distribution and flows in phase space are a simple and, therefore, useful example of competition between phase mixing and nonlinear dynamics in kinetic turbulence, reminiscent of more realistic but more complicated multi-dimensional systems that have not so far been amenable to complete analytical solution.
Nishiura, Hiroshi
2011-02-16
Real-time forecasting of epidemics, especially forecasting based on a likelihood-based approach, is understudied. This study aimed to develop a simple method that can be used for real-time epidemic forecasting. A discrete-time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming the linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. The quality of the forecasts made before the epidemic peak appears largely to depend on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak, with all the observed data points falling within the uncertainty bounds. Real-time forecasting using the discrete-time stochastic model with its simple computation of the uncertainty bounds was successful. Because of the simple model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available as part of the disease surveillance.
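A minimal sketch of this style of forecast, assuming a one-week generation interval and Poisson offspring. The paper computes the uncertainty bounds analytically through chains of conditional offspring distributions; they are approximated here by Monte Carlo, and the weekly counts are invented:

    import numpy as np

    def forecast(weekly, horizon=4, n_sim=10_000, seed=2):
        rng = np.random.default_rng(seed)
        cases = np.asarray(weekly, dtype=float)
        R = cases[1:].sum() / cases[:-1].sum()   # crude ML reproduction number
        last = np.full(n_sim, cases[-1])
        paths = np.empty((n_sim, horizon))
        for t in range(horizon):
            last = rng.poisson(R * last)         # Poisson offspring per week
            paths[:, t] = last
        lo, med, hi = np.percentile(paths, [2.5, 50.0, 97.5], axis=0)
        return R, lo, med, hi

    R, lo, med, hi = forecast([3, 7, 12, 28, 51, 90])
    print(f"R ~ {R:.2f}; next-week 95% bounds: [{lo[0]:.0f}, {hi[0]:.0f}]")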
Energy Dependence of Synchrotron X-Ray Rims in Tycho's Supernova Remnant
NASA Technical Reports Server (NTRS)
Tran, Aaron; Williams, Brian J.; Petre, Robert; Ressler, Sean M.; Reynolds, Stephen P.
2015-01-01
Several young supernova remnants exhibit thin X-ray bright rims of synchrotron radiation at their forward shocks. Thin rims require strong magnetic field amplification beyond simple shock compression if rim widths are only limited by electron energy losses. But magnetic field damping behind the shock could produce similarly thin rims with less extreme field amplification. Variation of rim width with energy may thus discriminate between competing influences on rim widths. We measured rim widths around Tycho's supernova remnant in 5 energy bands using an archival 750 ks Chandra observation. Rims narrow with increasing energy and are well described by either loss-limited or damped scenarios, so X-ray rim width-energy dependence does not uniquely specify a model. But radio counterparts to thin rims are not loss-limited and better reflect magnetic field structure. Joint radio and X-ray modeling favors magnetic damping in Tycho's SNR with damping lengths approximately 1-5% of the remnant radius and magnetic field strengths of approximately 50-400 μG assuming Bohm diffusion. X-ray rim widths are approximately 1% of the remnant radius, somewhat smaller than the inferred damping lengths. Electron energy losses are important in all models of X-ray rims, suggesting that the distinction between loss-limited and damped models is blurred in soft X-rays. All loss-limited and damping models require magnetic fields of approximately 20 μG or more, affirming the necessity of magnetic field amplification beyond simple compression.
Hou, Chen; Amunugama, Kaushalya
2015-07-01
The relationship between energy expenditure and longevity has been a central theme in aging studies. Empirical studies have yielded controversial results, which cannot be reconciled by existing theories. In this paper, we present a simple theoretical model based on first principles of energy conservation and allometric scaling laws. The model takes into consideration the energy tradeoffs between life history traits and the efficiency of energy utilization, and offers quantitative and qualitative explanations for a set of seemingly contradictory empirical results. We show that oxidative metabolism can affect cellular damage and longevity in different ways in animals with different life histories and under different experimental conditions. Qualitative data and the linearity between energy expenditure, cellular damage, and lifespan assumed in previous studies are not sufficient to understand the complexity of the relationships. Our model provides a theoretical framework for quantitative analyses and predictions. The model is supported by a variety of empirical studies, including studies on the cellular damage profile during ontogeny; the intra- and inter-specific correlations between body mass, metabolic rate, and lifespan; and the effects on lifespan of (1) diet restriction and genetic modification of growth hormone, (2) cold and exercise stresses, and (3) manipulation of antioxidants. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
A minimalist feedback-regulated model for galaxy formation during the epoch of reionization
NASA Astrophysics Data System (ADS)
Furlanetto, Steven R.; Mirocha, Jordan; Mebane, Richard H.; Sun, Guochao
2017-12-01
Near-infrared surveys have now determined the luminosity functions of galaxies at 6 ≲ z ≲ 8 to impressive precision and identified a number of candidates at even earlier times. Here, we develop a simple analytic model to describe these populations that allows physically motivated extrapolation to earlier times and fainter luminosities. We assume that galaxies grow through accretion on to dark matter haloes, which we model by matching haloes at fixed number density across redshift, and that stellar feedback limits the star formation rate. We allow for a variety of feedback mechanisms, including regulation through supernova energy and momentum from radiation pressure. We show that reasonable choices for the feedback parameters can fit the available galaxy data, which in turn substantially limits the range of plausible extrapolations of the luminosity function to earlier times and fainter luminosities: for example, the global star formation rate declines rapidly (by a factor of ∼20 from z = 6 to 15 in our fiducial model), but the bright galaxies accessible to observations decline even faster (by a factor ≳ 400 over the same range). Our framework helps us develop intuition for the range of expectations permitted by simple models of high-z galaxies that build on our understanding of 'normal' galaxy evolution. We also provide predictions for galaxy measurements by future facilities, including James Webb Space Telescope and Wide-Field Infrared Survey Telescope.
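The flavour of the feedback regulation can be captured in a few lines. The functional form below, a star formation efficiency suppressed by a wind mass-loading that scales as a power of circular velocity (gamma = 2 for energy-driven and gamma = 1 for momentum-driven winds), is a common parameterization of this idea; the normalizations are illustrative, not the paper's fitted values:

    import numpy as np

    f_b = 0.16                                      # cosmic baryon fraction

    def fstar(v_c, eta0=1.0, v0=100.0, gamma=2.0):
        # fraction of accreted baryons turned into stars when feedback
        # ejects eta = eta0*(v_c/v0)**(-gamma) units of gas per unit of
        # star formation (v_c in km/s)
        eta = eta0 * (v_c / v0) ** (-gamma)
        return 1.0 / (1.0 + eta)

    def sfr(mdot_halo, v_c, **kw):
        # star formation rate for a halo accreting mdot_halo (Msun/yr)
        return fstar(v_c, **kw) * f_b * mdot_halo

    for v in (50.0, 100.0, 200.0):
        print(v, fstar(v), sfr(10.0, v))

In this picture small haloes (low v_c) convert almost none of their gas into stars, which is what drives the steep decline of the bright end at early times described above.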
Convenient models of the atmosphere: optics and solar radiation
NASA Astrophysics Data System (ADS)
Alexander, Ginsburg; Victor, Frolkis; Irina, Melnikova; Sergey, Novikov; Dmitriy, Samulenkov; Maxim, Sapunov
2017-11-01
Simple optical models of the clear and cloudy atmosphere are proposed. Four versions of atmospheric aerosol content are considered: a complete lack of aerosols in the atmosphere, a low background concentration (500 cm^-3), a high concentration (2000 cm^-3), and a very high content of particles (5000 cm^-3). In the cloud scenario, an external-mixture model is assumed. The values of optical thickness and single scattering albedo for 13 wavelengths are calculated in the short wavelength range of 0.28-0.90 µm, with the molecular absorption bands simulated with a triangle function. A comparison of the proposed optical parameters with results of various measurements and retrievals (lidar measurement, sampling, processing of radiation measurements) is presented. For the cloudy atmosphere, single-layer and two-layer atmospheric models are proposed. It is found that cloud optical parameters calculated assuming the "external mixture" agree with values retrieved from airborne observations. The results of calculating hemispherical fluxes of the reflected and transmitted solar radiation and the radiative divergence are obtained with the Delta-Eddington approach. The calculation is done for surface albedo values of 0, 0.5, and 0.9 and for spectral albedo values of a sandy surface. Four values of the solar zenith angle are taken: 0°, 30°, 40° and 60°. The obtained values are compared with data from airborne radiative observations. Estimates of the local instantaneous radiative forcing of atmospheric aerosols and clouds for the considered models are presented together with the heating rates.
NASA Astrophysics Data System (ADS)
Gauthier, D.; Hutchinson, D. J.
2012-04-01
We present simple estimates of the maximum possible critical length of damage or fracture in a weak snowpack layer required to maintain the propagation that leads to avalanche release, based on observations of 'en-echelon' slab fractures during avalanche release. These slab fractures may be preserved in situ if the slab does not slide down slope. The en-echelon fractures are spaced evenly, normally with one every one to ten metres or more. We consider a simple two-dimensional model of a slab and weak layer, with an upslope fracture propagating in the weak layer, and examine the relationship between the weak layer and the en-echelon slab fractures. We assume that the slab fracture occurs in tension and initiates at either the base or the surface of the slab, in the area of peak tensile stress at the tip of the weak layer fracture. We also assume that, at the time the slab is completely bisected by fracture, the propagation in the weak layer will arrest spontaneously if it has not advanced beyond the critical length. In this scenario, en-echelon slab fractures may only form when the weak layer fracture repeatedly exceeds the critical length; otherwise, there could be only a single slab fracture. We estimate the position of the weak layer fracture at the time of slab bisection using the slab thickness and the ratio between the fracture speeds in the weak layer and the slab. We show that in this simple model en-echelon fractures only form if the slab thickness multiplied by the velocity ratio is greater than the critical length. Of course, the critical length must also be less than the en-echelon spacing. It follows that the first relationship must be valid independent of the occurrence of en-echelon fractures, although the speed ratio may be process-dependent and difficult to estimate. We use this method to calculate maximum critical lengths for propagation in actual avalanches with and without en-echelon fractures, and discuss the implications for comparing competing propagation models. Furthermore, we discuss the possible applications to other cases of progressive basal failure and en-echelon fracturing, e.g. the ribbed flow bowls or so-called 'thumbprint' morphology which sometimes develops during landsliding in sensitive clay soils.
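The criterion reduces to simple arithmetic; the numbers below are invented for illustration:

    # en-echelon fractures can only form when H * (c_wl / c_slab) > L_c,
    # so observing en-echelon fractures bounds the critical length from above
    def critical_length_bound(slab_thickness_m, weak_layer_speed, slab_speed):
        return slab_thickness_m * (weak_layer_speed / slab_speed)

    # e.g. a 0.5 m slab with weak-layer fracture four times faster than slab fracture:
    print(critical_length_bound(0.5, 80.0, 20.0))   # -> 2.0 m upper bound on L_c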
Stochastic theory of log-periodic patterns
NASA Astrophysics Data System (ADS)
Canessa, Enrique
2000-12-01
We introduce an analytical model based on birth-death clustering processes to help in understanding the empirical log-periodic corrections to power-law scaling and the finite-time singularity reported in several domains including rupture, earthquakes, world population and financial systems. In our stochastic theory, log-periodicities are a consequence of transient clusters induced by an entropy-like term that may reflect the amount of co-operative information carried by the state of a large system of different species. The clustering completion rates for the system are assumed to be given by a simple linear death process. The singularity at t_0 is derived in terms of birth-death clustering coefficients.
NASA Technical Reports Server (NTRS)
Weller, T.
1977-01-01
The applicability and adequacy of several computer techniques in satisfactorily predicting the nonlinear/inelastic response of angle-ply laminates were evaluated. The analytical predictions were correlated with the results of a test program on the inelastic response under axial compression of a large variety of graphite-epoxy and boron-epoxy angle-ply laminates. These comparison studies indicate that neither of the abovementioned analyses can satisfactorily predict either the mode of response or the ultimate stress value corresponding to a particular angle-ply laminate configuration. Consequently, the simple failure mechanisms assumed in the analytical models were also not verified.
NASA Technical Reports Server (NTRS)
Wilson, Lonnie A.
1987-01-01
Bragg-cell receivers are employed in specialized Electronic Warfare (EW) applications for the measurement of frequency. Bragg-cell receiver characteristics are fully characterized for simple RF emitter signals. This receiver is early in its development cycle when compared to the IFM receiver. Functional mathematical models for the Bragg-cell receiver are derived and presented in this report. Theoretical analysis and digital computer signal processing results are presented for the Bragg-cell receiver. Probability density function analyses are performed for the output frequency. The probability density function distributions are observed to depart from the assumed distributions for wideband and complex RF signals. This analysis is significant for high-resolution, fine-grain EW Bragg-cell receiver systems.
Effect of gamma-ray irradiation on the surface states of MOS tunnel junctions
NASA Technical Reports Server (NTRS)
Ma, T. P.; Barker, R. C.
1974-01-01
Gamma-ray irradiation with doses up to 8 megarad produces no significant change in either the C(V) or the G(V) characteristics of MOS tunnel junctions with intermediate oxide thicknesses (40-60 Å), whereas the expected flat-band shift toward negative electrode voltages occurs in control thick-oxide capacitors. A simple tunneling model would explain the results if the radiation-generated hole traps are assumed to lie below the valence band of the silicon. The experiments also suggest that the observed radiation-generated interface states in conventional MOS devices are not due to radiation damage of the silicon surface.
Possible relation between pulsar rotation and evolution of magnetic inclination
NASA Astrophysics Data System (ADS)
Tian, Jun
2018-05-01
Pulsar timing is observed to differ from the predictions of simple magnetic dipole radiation. We choose eight pulsars whose braking indices have been reliably determined. Assuming the smaller values of the braking index are dominated by the secular evolution of the magnetic inclination, we calculate the rate of increase of the magnetic inclination for each pulsar. We find a possible relation between the rotation frequency of each pulsar and the inferred evolution of the magnetic inclination. Due to the model-dependent fit of the magnetic inclination and other effects, more observational indicators for the rate of change of the magnetic inclination are needed to test the relation.
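The calculation sketched above presumably follows the standard dipole spin-down route (a reconstruction, not necessarily the paper's exact derivation): with spin frequency nu, inclination alpha, and spin-down law nu-dot proportional to -nu^3 sin^2(alpha), differentiating gives

    \dot{\nu} = -K\,\nu^{3}\sin^{2}\alpha
    \quad\Rightarrow\quad
    n \equiv \frac{\nu\,\ddot{\nu}}{\dot{\nu}^{2}} = 3 + 2\,\frac{\nu}{\dot{\nu}}\,\dot{\alpha}\,\cot\alpha ,
    \qquad\text{so}\qquad
    \dot{\alpha} = \frac{(n-3)\,\dot{\nu}\,\tan\alpha}{2\,\nu} .

Since nu-dot < 0, a measured braking index n < 3 translates into alpha-dot > 0, i.e. a growing inclination, which is the quantity inferred above.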
Novel Models of Visual Topographic Map Alignment in the Superior Colliculus
El-Ghazawi, Tarek A.; Triplett, Jason W.
2016-01-01
The establishment of precise neuronal connectivity during development is critical for sensing the external environment and informing appropriate behavioral responses. In the visual system, many connections are organized topographically, which preserves the spatial order of the visual scene. The superior colliculus (SC) is a midbrain nucleus that integrates visual inputs from the retina and primary visual cortex (V1) to regulate goal-directed eye movements. In the SC, topographically organized inputs from the retina and V1 must be aligned to facilitate integration. Previously, we showed that retinal input instructs the alignment of V1 inputs in the SC in a manner dependent on spontaneous neuronal activity; however, the mechanism of activity-dependent instruction remains unclear. To begin to address this gap, we developed two novel computational models of visual map alignment in the SC that incorporate distinct activity-dependent components. First, a Correlational Model assumes that V1 inputs achieve alignment with established retinal inputs through simple correlative firing mechanisms. A second Integrational Model assumes that V1 inputs contribute to the firing of SC neurons during alignment. Both models accurately replicate in vivo findings in wild type, transgenic and combination mutant mouse models, suggesting either activity-dependent mechanism is plausible. In silico experiments reveal distinct behaviors in response to weakening retinal drive, providing insight into the nature of the system governing map alignment depending on the activity-dependent strategy utilized. Overall, we describe novel computational frameworks of visual map alignment that accurately model many aspects of the in vivo process and propose experiments to test them. PMID:28027309
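A cartoon of the Correlational Model's core assumption, written as a Hebbian update. This sketch, including all sizes, rates, and the Gaussian activity events, is the editor's illustration, not the authors' published code:

    import numpy as np

    rng = np.random.default_rng(3)
    n_axis = 50
    W_ret = np.eye(n_axis)                           # established retinotopic map onto SC
    W_v1 = np.full((n_axis, n_axis), 1.0 / n_axis)   # initially unrefined V1 -> SC weights

    for _ in range(5000):
        center = rng.integers(n_axis)                # a spontaneous activity event
        wave = np.exp(-0.5 * ((np.arange(n_axis) - center) / 2.0) ** 2)
        ret, v1 = wave, wave                         # retina and V1 carry matching activity
        sc = W_ret @ ret                             # SC is driven by retinal input
        W_v1 += 0.01 * np.outer(sc, v1)              # correlational (Hebbian) strengthening
        W_v1 /= W_v1.sum(axis=1, keepdims=True)      # competitive normalization

    # fraction of SC neurons whose strongest V1 input matches their retinal
    # input (near 1, with deviations only at the map edges)
    print(np.mean(np.argmax(W_v1, axis=1) == np.arange(n_axis)))

An Integrational variant would additionally let W_v1 contribute to sc during training; probing both rules under weakened retinal drive is exactly the kind of in silico experiment the abstract describes.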
Quantifying seismic anisotropy induced by small-scale chemical heterogeneities
NASA Astrophysics Data System (ADS)
Alder, C.; Bodin, T.; Ricard, Y.; Capdeville, Y.; Debayle, E.; Montagner, J. P.
2017-12-01
Observations of seismic anisotropy are usually used as a proxy for lattice-preferred orientation (LPO) of anisotropic minerals in the Earth's mantle. In this way, seismic anisotropy observed in tomographic models provides important constraints on the geometry of mantle deformation associated with thermal convection and plate tectonics. However, in addition to LPO, small-scale heterogeneities that cannot be resolved by long-period seismic waves may also produce anisotropy. The observed (i.e. apparent) anisotropy is then a combination of an intrinsic and an extrinsic component. Assuming the Earth's mantle exhibits petrological inhomogeneities at all scales, tomographic models built from long-period seismic waves may thus display extrinsic anisotropy. In this paper, we investigate the relation between the amplitude of seismic heterogeneities and the level of induced S-wave radial anisotropy as seen by long-period seismic waves. We generate some simple 1-D and 2-D isotropic models that exhibit a power spectrum of heterogeneities as what is expected for the Earth's mantle, that is, varying as 1/k, with k the wavenumber of these heterogeneities. The 1-D toy models correspond to simple layered media. In the 2-D case, our models depict marble-cake patterns in which an anomaly in shear wave velocity has been advected within convective cells. The long-wavelength equivalents of these models are computed using upscaling relations that link properties of a rapidly varying elastic medium to properties of the effective, that is, apparent, medium as seen by long-period waves. The resulting homogenized media exhibit extrinsic anisotropy and represent what would be observed in tomography. In the 1-D case, we analytically show that the level of anisotropy increases with the square of the amplitude of heterogeneities. This relation is numerically verified for both 1-D and 2-D media. In addition, we predict that 10 per cent of chemical heterogeneities in 2-D marble-cake models can induce more than 3.9 per cent of extrinsic radial S-wave anisotropy. We thus predict that a non-negligible part of the observed anisotropy in tomographic models may be the result of unmapped small-scale heterogeneities in the mantle, mainly in the form of fine layering, and that caution should be taken when interpreting observed anisotropy in terms of LPO and mantle deformation. This effect may be particularly strong in the lithosphere where chemical heterogeneities are assumed to be the strongest.
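The 1-D result can be reproduced in a few lines with Backus (1962) homogenization, which gives the long-wavelength transversely isotropic equivalent of a finely layered isotropic stack. With ±10 per cent binary velocity perturbations, the extrinsic radial S-wave anisotropy xi = N/L comes out near the 4 per cent level quoted above; the layer values below are illustrative:

    import numpy as np

    rng = np.random.default_rng(4)
    vs0, amp, n = 4.5, 0.10, 4000                 # km/s, 10% perturbation, layer count
    vs = vs0 * (1.0 + amp * rng.choice([-1.0, 1.0], n))
    rho = 3.3                                     # g/cm^3, held uniform for simplicity
    mu = rho * vs ** 2                            # shear moduli of the fine layers

    # Backus-average effective moduli for S waves in a layered medium:
    L = 1.0 / np.mean(1.0 / mu)                   # <1/mu>^-1, vertically polarized
    N = np.mean(mu)                               # <mu>,     horizontally polarized
    xi = N / L
    print(f"xi = N/L = {xi:.4f}  ({100 * (xi - 1):.2f}% radial S anisotropy)")

Because L is a harmonic mean and N an arithmetic mean, xi - 1 grows with the variance of the layer moduli, which is the square-of-amplitude scaling derived analytically in the paper.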
Kelvin-Voigt model of wave propagation in fragmented geomaterials with impact damping
NASA Astrophysics Data System (ADS)
Khudyakov, Maxim; Pasternak, Elena; Dyskin, Arcady
2017-04-01
When a wave propagates through real materials, energy dissipation occurs. The effect of energy loss in homogeneous materials can be accounted for by using simple viscous models. However, a reliable model representing the effect in fragmented geomaterials has not been established yet. The main reason is the mechanism by which vibrations are transmitted between the elements (fragments) of these materials. It is hypothesised that the fragments strike against each other in the process of oscillation, and that the impacts lead to the energy loss. We assume that the energy loss is well represented by the restitution coefficient. The principal element of this concept is the interaction of two adjacent blocks. We model it by a simple linear oscillator (a mass on an elastic spring) with an additional condition: each time the system travels through the neutral point, where the displacement is equal to zero, the velocity is multiplied by the restitution coefficient, which characterises an impact of the fragments. This additional condition renders the system non-linear. We show that the behaviour of such a model, averaged over times much larger than the system period, can approximately be represented by a conventional linear oscillator with linear damping characterised by a damping coefficient expressible through the restitution coefficient. Based on this, wave propagation at times considerably greater than the resonance period of oscillations of the neighbouring blocks can be modelled using the Kelvin-Voigt model. The wave velocities and the dispersion relations are obtained.
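A direct simulation of the described block interaction, with an illustrative restitution coefficient r. Since all the energy is kinetic at the neutral point, each crossing multiplies the amplitude by r, so the envelope matches a viscously damped oscillator with equivalent damping ratio zeta = -ln(r)/pi:

    import numpy as np

    def impact_oscillator(r=0.9, omega=2 * np.pi, dt=1e-4, t_end=10.0):
        x, v, xs = 1.0, 0.0, []
        for _ in range(int(t_end / dt)):
            a = -omega ** 2 * x
            x_new = x + v * dt + 0.5 * a * dt ** 2
            v += 0.5 * (a - omega ** 2 * x_new) * dt   # velocity-Verlet update
            if x * x_new < 0.0:                        # crossing the neutral point:
                v *= r                                 # impact between fragments
            x = x_new
            xs.append(x)
        return np.array(xs)

    xs = impact_oscillator()
    # after 10 s at 1 Hz there have been 20 crossings, so the envelope is r**20;
    # compare the simulated final amplitude against that prediction
    print(np.abs(xs[-10000:]).max(), 0.9 ** 20)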
NASA Astrophysics Data System (ADS)
Roubinet, D.; Russian, A.; Dentz, M.; Gouze, P.
2017-12-01
Characterizing and modeling hydrodynamic reactive transport in fractured rock are critical challenges for various research fields and applications including environmental remediation, geological storage, and energy production. To this end, we consider a recently developed time domain random walk (TDRW) approach, which is adapted to reproduce anomalous transport behaviors and capture heterogeneous structural and physical properties. This method is also very well suited to optimize numerical simulations by memory-shared massive parallelization and provide numerical results at various scales. So far, the TDRW approach has been applied for modeling advective-diffusive transport with mass transfer between mobile and immobile regions and simple (theoretical) reactions in heterogeneous porous media represented as single continuum domains. We extend this approach to dual-continuum representations considering a highly permeable fracture network embedded into a poorly permeable rock matrix with heterogeneous geochemical reactions occurring in both geological structures. The resulting numerical model enables us to extend the range of the modeled heterogeneity scales with an accurate representation of solute transport processes and no assumption on the Fickianity of these processes. The proposed model is compared to existing particle-based methods that are usually used to model reactive transport in fractured rocks assuming a homogeneous surrounding matrix, and is used to evaluate the impact of the matrix heterogeneity on the apparent reaction rates for different 2D and 3D simple-to-complex fracture network configurations.
Mechanical model for a collagen fibril pair in extracellular matrix.
Chan, Yue; Cox, Grant M; Haverkamp, Richard G; Hill, James M
2009-04-01
In this paper, we model the mechanics of a collagen pair in the connective tissue extracellular matrix that exists in abundance throughout animals, including the human body. This connective tissue comprises repeated units of two main structures, namely collagens as well as axial, parallel and regular anionic glycosaminoglycan between collagens. The collagen fibril can be modeled by Hooke's law, whereas anionic glycosaminoglycan behaves more like a rubber-band rod and as such can be better modeled by the worm-like chain model. While both computer simulations and continuum mechanics models have been used to investigate the behavior of this connective tissue, authors typically either assume a simple form of the molecular potential energy or entirely ignore the microscopic structure of the connective tissue. Here, we apply basic physical methodologies and simple applied mathematical modeling techniques to describe the collagen pair quantitatively. We found that the growth of fibrils was intimately related to the maximum length of the anionic glycosaminoglycan and the relative displacement of two adjacent fibrils, which in turn was closely related to the effectiveness of the anionic glycosaminoglycan in transmitting forces between fibrils. These findings reveal the importance of the anionic glycosaminoglycan in maintaining the structural shape of the connective tissue extracellular matrix and eventually the shape modulus of human tissues. We also found that some macroscopic properties, like the maximum molecular energy and the breaking fraction of the collagen, were related to the microscopic characteristics of the anionic glycosaminoglycan.
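For concreteness, the two constitutive ingredients combined above can be written in their simplest standard forms. The Marko-Siggia interpolation below is one common worm-like-chain expression and is an assumption of this note, not necessarily the exact variant used in the paper:

    F_{\text{fibril}} = k\,\Delta x \qquad \text{(Hooke's law for the collagen fibril)}

    F_{\text{GAG}}(x) = \frac{k_{B}T}{P}\left[\frac{1}{4\,(1 - x/L_{c})^{2}} - \frac{1}{4} + \frac{x}{L_{c}}\right]
    \qquad \text{(worm-like chain; } P = \text{persistence length, } L_{c} = \text{contour length)}

The divergence of the worm-like-chain force as x approaches L_c is what makes the maximum glycosaminoglycan length the controlling quantity for force transmission between adjacent fibrils.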
Using computational modeling of river flow with remotely sensed data to infer channel bathymetry
Nelson, Jonathan M.; McDonald, Richard R.; Kinzel, Paul J.; Shimizu, Y.
2012-01-01
As part of an ongoing investigation into the use of computational river flow and morphodynamic models for the purpose of correcting and extending remotely sensed river datasets, a simple method for inferring channel bathymetry is developed and discussed. The method is based on an inversion of the equations expressing conservation of mass and momentum to develop equations that can be solved for depth given known values of vertically averaged velocity and water-surface elevation. The ultimate goal of this work is to combine imperfect remotely sensed data on river planform, water-surface elevation and water-surface velocity in order to estimate depth and other physical parameters of river channels. In this paper, the technique is examined using synthetic data sets that are developed directly from the application of forward two- and three-dimensional flow models. These data sets are constrained to satisfy conservation of mass and momentum, unlike typical remotely sensed field data sets. This provides a better understanding of the process and also allows assessment of how simple inaccuracies in remotely sensed estimates might propagate into depth estimates. The technique is applied to three simple cases: first, depth is extracted from a synthetic dataset of vertically averaged velocity and water-surface elevation; second, depth is extracted from the same data set but with a normally distributed random error added to the water-surface elevation; third, depth is extracted from a synthetic data set for the same river reach using computed water-surface velocities (in place of depth-integrated values) and water-surface elevations. In each case, the extracted depths are compared to the actual measured depths used to construct the synthetic data sets (with two- and three-dimensional flow models). Even very small errors in water-surface elevation and velocity degrade the depth estimates and cannot be recovered. Errors in depth estimates associated with assuming water-surface velocities equal to depth-integrated velocities are substantial, but can be reduced with simple corrections.
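A 1-D toy version of the inversion makes the error-propagation point concrete: in steady flow, continuity alone gives h = q/u, so fractional errors in the observed velocity map directly into fractional errors in the inferred depth. All values below are synthetic:

    import numpy as np

    rng = np.random.default_rng(5)
    x = np.linspace(0.0, 1000.0, 201)
    h_true = 2.0 + 0.5 * np.sin(2 * np.pi * x / 250.0)   # synthetic bathymetry, m
    q = 3.0                                              # unit discharge, m^2/s
    u = q / h_true                                       # exact depth-averaged velocity
    u_obs = u * (1.0 + 0.02 * rng.normal(size=u.size))   # 2% "remote sensing" noise
    h_est = q / u_obs                                    # inverted depth
    print(np.max(np.abs(h_est - h_true) / h_true))       # roughly 2% depth error persists

The full method adds the momentum equation so that water-surface elevation also constrains depth, but the same sensitivity to input error carries over.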
Neogene rotations and quasicontinuous deformation of the Pacific Northwest continental margin
England, Philip; Wells, Ray E.
1991-01-01
Paleomagnetically determined rotations about vertical axes of 15 to 12 Ma flows of the Miocene Columbia River Basalt Group of Oregon and Washington decrease smoothly with distance from the plate margin, consistent with a simple physical model for continental deformation that assumes the lithosphere behaves as a thin layer of fluid. The average rate of northward translation of the continental margin since 15 Ma calculated from the rotations, using this model, is about 15 mm/yr, which suggests that much of the tangential motion between the Juan de Fuca and North American plates since middle Miocene time has been taken up by deformation of North America. The fluid-like character of the large-scale deformation implies that the brittle upper crust follows the motions of the deeper parts of the lithosphere.
Inheritance of magma ocean differentiation during lunar origin by giant impact
NASA Technical Reports Server (NTRS)
Warren, Paul H.
1992-01-01
The giant impact model for the Moon has won widespread support. It seems to satisfactorily explain the high angular momentum of the Earth-Moon system, and the strong depletion of FeNi in the Moon. This model is usually assumed to entail no significant fractionation of nonvolatile lithophile elements relative to a simple binary mixture of impactor silicates plus protoearth silicates. Although the Earth may have been hot enough before the impact to be completely molten, analysis of the likely number and timing of major impacts in the prehistory of the impactor indicates that a fully molten, undifferentiated condition for that relatively small body is unlikely. Given selective sampling by the giant impact, any significant vertical differentiation within the noncore portion of the impactor would have been largely inherited by the Moon.
An investigation into the causes of stratospheric ozone loss in the southern Australasian region
NASA Astrophysics Data System (ADS)
Lehmann, P.; Karoly, D. J.; Newmann, P. A.; Clarkson, T. S.; Matthews, W. A.
1992-07-01
Measurements of total ozone at Macquarie Island (55 deg S, 159 deg E) reveal statistically significant reductions of approximately twelve percent during July to September when comparing the mean levels for 1987-90 with those in the seventies. In order to investigate the possibility that these ozone changes may not be a result of dynamic variability of the stratosphere, a simple linear model of ozone was created from statistical analysis of tropopause height and isentropic transient eddy heat flux, which were assumed representative of the dominant dynamic influences. Comparison of measured and modeled ozone indicates that the recent downward trend in ozone at Macquarie Island is not related to stratospheric dynamic variability and therefore suggests another mechanism, possibly changes in photochemical destruction of ozone.
NASA Astrophysics Data System (ADS)
Bose, Sanjay K.; Gordon, J. J.
The modeling and analysis of a system providing integrated voice/data services to mobile terminals over a power-limited satellite channel are discussed. The mobiles use slotted Aloha random access to send requests for channel assignments to a central station. For successful requests, the actual transmission of voice/data within a call is done using the channel assigned for this purpose by the central station. The satellite channel is assumed to be power limited. Taking into account the known burstiness of voice sources (which use a voice-activated switch), the central station overassigns channels so that the average total power is below the power limit of the satellite transponder. The performance of this model is analyzed. Certain simple, static control strategies for improving performance are also proposed.
A simple method for EEG guided transcranial electrical stimulation without models.
Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q; Dmochowski, Jacek; Bikson, Marom
2016-06-01
There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES) but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head-models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and makes strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG, (1) selects locations to be used for stimulation and (2) determines the current applied to each electrode. Each step is performed based solely on the EEG with no need for head models or source localization. Cortical dipoles represent idealized brain targets. EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangential or radial to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a 'gold standard' numerically optimized dose of tES that assumes perfect understanding of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as optimization criterion. Model-free approaches evaluated include (1) voltage-to-voltage, (2) voltage-to-current; (3) Laplacian; and two Ad-Hoc techniques (4) dipole sink-to-sink; and (5) sink to concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head. Our approach is verified directly only for a theoretically localized source, but may be potentially applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.
A simple method for EEG guided transcranial electrical stimulation without models
NASA Astrophysics Data System (ADS)
Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q.; Dmochowski, Jacek; Bikson, Marom
2016-06-01
Objective. There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES) but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head-models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and makes strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. Approach. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG, (1) selects locations to be used for stimulation and (2) determines the current applied to each electrode. Each step is performed based solely on the EEG with no need for head models or source localization. Cortical dipoles represent idealized brain targets. EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangential or radial to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a ‘gold standard’ numerically optimized dose of tES that assumes perfect understanding of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as optimization criterion. Main results. Model-free approaches evaluated include (1) voltage-to-voltage, (2) voltage-to-current; (3) Laplacian; and two Ad-Hoc techniques (4) dipole sink-to-sink; and (5) sink to concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head. Significance. Our approach is verified directly only for a theoretically localized source, but may be potentially applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.
26 CFR 1.651(b)-1 - Deduction for distributions to beneficiaries.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Deduction for distributions to beneficiaries. In computing its taxable income, a simple trust is allowed a...), relating to tax-exempt interest, foreign income, and excluded dividends. For example: Assume that the...
Mantle Flow in the Western United States Constrained by Seismic Anisotropy
NASA Astrophysics Data System (ADS)
Niday, W.; Humphreys, E.
2017-12-01
Shear wave splitting, caused by the lattice preferred orientation (LPO) of olivine crystals under shear deformation, provides a useful constraint on numerical models of mantle flow. Although it is sometimes assumed that shear wave splitting fast directions correspond with mantle flow directions, this is only true in simple shear flows that do not vary strongly in space or time. Observed shear wave splitting in the western United States is complex and inconsistent with simple shear driven by North American and Pacific plate motion, suggesting that the effects of time-dependent subduction history and spatial heterogeneity are important. Liu and Stegman (2011) reproduce the pattern of fast seismic anomalies below the western US from Farallon subduction history, and Chaparro and Stegman (2017) reproduce the circular anisotropy field below the Great Basin. We extend this to consider anisotropic structure outside the Great Basin and evaluate the density and viscosity of seismic anomalies such as slabs and Yellowstone. We use the mantle convection code ASPECT to simulate 3D buoyancy-driven flow in the mantle below the western US, and predict LPO using the modeled flow fields. We present results from a suite of models varying the sub-lithospheric structures of the western US and constraints on density and viscosity variations in the upper mantle.
On measuring community participation in research.
Khodyakov, Dmitry; Stockdale, Susan; Jones, Andrea; Mango, Joseph; Jones, Felica; Lizaola, Elizabeth
2013-06-01
Active participation of community partners in research aspects of community-academic partnered projects is often assumed to have a positive impact on the outcomes of such projects. The value of community engagement in research, however, cannot be empirically determined without good measures of the level of community participation in research activities. Based on our recent evaluation of community-academic partnered projects centered around behavioral health issues, this article uses semistructured interview and survey data to outline two complementary approaches to measuring the level of community participation in research: a "three-model" approach that differentiates between the levels of community participation, and a Community Engagement in Research Index (CERI) that offers a multidimensional view of community engagement in the research process. The primary goal of this article is to present and compare these approaches, discuss their strengths and limitations, summarize the lessons learned, and offer directions for future research. We find that whereas the three-model approach is a simple measure of the perception of community participation in research activities, CERI allows for a more nuanced understanding by capturing multiple aspects of such participation. Although additional research is needed to validate these measures, our study makes a significant contribution by illustrating the complexity of measuring community participation in research and the lack of reliability in simple scores offered by the three-model approach.
Satellite and in situ monitoring data used for modeling of forest vegetation reflectance
NASA Astrophysics Data System (ADS)
Zoran, M. A.; Savastru, R. S.; Savastru, D. M.; Miclos, S. I.; Tautan, M. N.; Baschir, L.
2010-10-01
As climatic variability and anthropogenic stressors grow continuously, proper criteria for forest vegetation assessment must be defined. Satellite imagery is a very useful tool for characterizing the current and future state of forest vegetation. Vegetation can be distinguished in remote sensing data from most other (mainly inorganic) materials by virtue of its notable absorption in the red and blue segments of the visible spectrum, its higher green reflectance and, especially, its very strong reflectance in the near-IR. Vegetation reflectance varies with sun zenith angle, view zenith angle, and terrain slope angle. To correct for these effects in visible and near-infrared light, a simple physical model of vegetation reflectance was developed, assuming a homogeneous and closed vegetation canopy with randomly oriented leaves. The model was applied and validated for the Cernica forested area, near Bucharest, using two ASTER satellite scenes acquired within minutes of one another, one nadir and one off-nadir; for band 3, lying in the near-infrared, most radiance differences between the two scenes can be attributed to the BRDF effect. Other satellite data (MODIS, Landsat TM and ETM, as well as IKONOS) have been used for different NDVI and classification analyses.
On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models
NASA Astrophysics Data System (ADS)
Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.
2017-12-01
Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
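The spirit of the formula can be illustrated with synthetic (observation-minus-forecast, ensemble-variance) pairs: regressing squared innovations on the ensemble sample variance yields a weight on the ensemble part, with the offset playing the role of the static/climatological contribution. The distributions below are the editor's assumptions, not the paper's:

    import numpy as np

    rng = np.random.default_rng(6)
    n = 50_000
    true_var = rng.gamma(4.0, 0.25, n)               # flow-dependent error variance
    s2 = true_var * rng.gamma(8.0, 1.0 / 8.0, n)     # noisy ensemble sample variance
    omf = rng.normal(0.0, np.sqrt(true_var))         # observation-minus-forecast draws

    # E[(o-f)^2 | s^2] ~ static_part + w * s^2, recovered by least squares
    A = np.column_stack([np.ones(n), s2])
    (static_part, w), *_ = np.linalg.lstsq(A, omf ** 2, rcond=None)
    print(f"weight on ensemble variance ~ {w:.2f}; static offset ~ {static_part:.2f}")

The noisier the ensemble variance relative to the true flow-dependent variance, the smaller the recovered ensemble weight, which is exactly the intuition behind hybrid covariance weighting.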
A simple code for use in shielding and radiation dosage analyses
NASA Technical Reports Server (NTRS)
Wan, C. C.
1972-01-01
A simple code for use in analyses of gamma radiation effects in laminated materials is described. Simple, good geometry is assumed, so that all multiple collision and scattering events are excluded from consideration. The code is capable of handling laminates of up to six layers. However, for laminates of more than six layers, the same code may be used to incorporate two additional layers at a time, making use of punched-tape outputs from previous computations on all preceding layers. Spectra of the attenuated radiation are obtained as both printed output and punched-tape output, as desired.
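The good-geometry assumption makes the core computation a one-liner per energy group: only uncollided photons are counted, so the transmitted intensity falls off as exp(-sum(mu_i * t_i)) over the layers. The attenuation coefficients below are rounded values near 1 MeV and should be treated as illustrative:

    import numpy as np

    mu = np.array([0.17, 0.80, 0.17])   # 1/cm at ~1 MeV: Al, Pb, Al (approximate)
    t = np.array([0.5, 1.0, 0.5])       # layer thicknesses in cm (up to six layers)
    transmission = np.exp(-np.sum(mu * t))
    print(f"uncollided transmission: {transmission:.3f}")

Chaining two additional layers at a time, as the report describes, simply multiplies in further exp(-mu*t) factors per energy group.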
Evidence for a Significant Level of Extrinsic Anisotropy Due to Heterogeneities in the Mantle.
NASA Astrophysics Data System (ADS)
Alder, C.; Bodin, T.; Ricard, Y. R.; Capdeville, Y.; Debayle, E.; Montagner, J. P.
2017-12-01
Observations of seismic anisotropy are used as a proxy for lattice-preferred orientation (LPO) of anisotropic minerals in the Earth's mantle. In this way, it provides important constraints on the geometry of mantle deformation. However, in addition to LPO, small-scale heterogeneities that cannot be resolved by long-period seismic waves may also produce anisotropy. The observed (i.e. apparent) anisotropy is then a combination of an intrinsic and an extrinsic component. Assuming the Earth's mantle exhibits petrological inhomogeneities at all scales, tomographic models built from long-period seismic waves may thus display extrinsic anisotropy. Here, we investigate the relation between the amplitude of seismic heterogeneities and the level of induced S-wave radial anisotropy as seen by long-period seismic waves. We generate some simple 1D and 2D isotropic models that exhibit a power spectrum of heterogeneities as what is expected for the Earth's mantle, i.e. varying as 1/k, with k the wavenumber of these heterogeneities. The 1D toy models correspond to simple layered media. In the 2D case, our models depict marble-cake patterns in which an anomaly in S-wave velocity has been advected within convective cells. The long-wavelength equivalents of these models are computed using upscaling relations that link properties of a rapidly varying elastic medium to properties of the effective, i.e. apparent, medium as seen by long-period waves. The resulting homogenized media exhibit extrinsic anisotropy and represent what would be observed in tomography. In the 1D case, we analytically show that the level of anisotropy increases with the square of the amplitude of heterogeneities. This relation is numerically verified for both 1D and 2D media. In addition, we predict that 10 % of chemical heterogeneities in 2D marble-cake models can induce more than 3.9 % of extrinsic radial S-wave anisotropy. We thus predict that a non-negligible part of the observed anisotropy in tomographic models may be the result of unmapped small-scale heterogeneities in the mantle, mainly in the form of fine layering, and that caution should be taken when interpreting observed anisotropy in terms of LPO and mantle deformation. This effect may be particularly strong in the lithosphere where chemical heterogeneities are assumed to be the strongest.
A revised model of ex-vivo reduction of hexavalent chromium in human and rodent gastric juices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlosser, Paul M., E-mail: schlosser.paul@epa.gov; Sasso, Alan F.
Chronic oral exposure to hexavalent chromium (Cr-VI) in drinking water has been shown to induce tumors in the mouse gastrointestinal (GI) tract and rat oral cavity. The same is not true for trivalent chromium (Cr-III). Thus reduction of Cr-VI to Cr-III in gastric juices is considered a protective mechanism, and it has been suggested that the difference between the rate of reduction among mice, rats, and humans could explain or predict differences in sensitivity to Cr-VI. We evaluated previously published models of gastric reduction and believe that they do not fully describe the data on reduction as a function of Cr-VI concentration, time, and (in humans) pH. The previous models are parsimonious in assuming only a single reducing agent in rodents and describing pH-dependence using a simple function. We present a revised model that assumes three pools of reducing agents in rats and mice with pH-dependence based on known speciation chemistry. While the revised model uses more fitted parameters than the original model, they are adequately identifiable given the available data, and the fit of the revised model to the full range of data is shown to be significantly improved. Hence the revised model should provide better predictions of Cr-VI reduction when integrated into a corresponding PBPK model. - Highlights: • Hexavalent chromium (Cr-VI) reduction in gastric juices is a key detoxifying step. • pH-dependent Cr-VI reduction rates are explained using known chemical speciation. • Reduction in rodents appears to involve multiple pools of electron donors. • Reduction appears to continue after 60 min, although more slowly than initial rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanna, Steven R.; Baja, Emmanuel; Flaherty, Julia E.
2008-01-30
A simple urban dispersion model is tested that is based on the Gaussian plume model and the Briggs urban dispersion curves. A key aspect of the model is that an initial dispersion coefficient (sigma) of 40 m is assumed to apply in the x, y, and z directions in built-up downtown areas. This initial sigma accounts for mixing in the local street canyon and/or building wakes. At short distances (i.e., when the release is in the same street canyon as the receptor and there are no obstructions in between), the initial lateral sigma is assumed to be less, 10 m. Observations from tracer experiments during the Madison Square Garden 2005 (MSG05) field study are used for model testing. MSG05 took place in a 1 km by 1 km area in Manhattan surrounding Madison Square Garden. Six different perfluorocarbon tracer (PFT) gases were released concurrently from five different locations around MSG, and concentrations in the air were observed by 20 samplers near the surface and seven samplers on building tops. There were two separate continuous 60 minute tracer release periods on each day, beginning at 9 am and at 11:30 am. Releases took place on two separate days (March 10 and 14). The samplers provided 30 minute averaged PFT concentrations from 9 am through 2 pm. This analysis focuses on the maximum 60-minute averaged PFT gas concentration at each sampler location for each PFT for each release period. Stability was assumed to be nearly neutral, because of the moderate winds and the mechanical mixing generated by the buildings. Input wind direction was the average observed building-top wind direction (285° on March 10 and 315° on March 14). Input wind speed was the average street-level observed wind speed (1.5 m/s for both days). To be considered in the evaluation, both the observed and predicted concentrations had to exceed the threshold. Concentrations normalized by source release rate, C/Q, were tested. For all PFTs, samplers, and release times, the median observed and predicted C/Q are within 40% of each other, and 43% of the time the concentration predictions are within a factor of two of the observations. The scatter plots show that the typical error is about the same magnitude as the mean concentration. When only the surface observations are considered, the performance is better, with the median observed and predicted C/Qs within 10% of each other. The overall 60 minute-averaged maximum C/Q is underpredicted by about 40% for the surface samplers and is overpredicted by about 25% for the building-top samplers.
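A minimal sketch of the model's central construction, a Gaussian plume whose dispersion coefficients combine an initial 40 m sigma with urban curves, is given below; the Briggs neutral-stability fits are common textbook approximations and the distances are illustrative, so the numbers should not be read as the study's results.

```python
# Sketch: normalized concentration C/Q from a ground-level release using a
# Gaussian plume with an initial sigma of 40 m added to the Briggs urban
# neutral-stability curves. The Briggs fits below are textbook approximations.
import numpy as np

def briggs_urban_neutral(x):
    """Approximate Briggs urban dispersion coefficients (m), neutral stability."""
    sig_y = 0.16 * x / np.sqrt(1.0 + 0.0004 * x)
    sig_z = 0.14 * x / np.sqrt(1.0 + 0.0003 * x)
    return sig_y, sig_z

def c_over_q(x, u=1.5, sigma0=40.0):
    """Centerline ground-level C/Q (s/m^3) at downwind distance x (m)."""
    sig_y, sig_z = briggs_urban_neutral(x)
    sig_y = np.hypot(sig_y, sigma0)       # add the initial sigma in quadrature
    sig_z = np.hypot(sig_z, sigma0)
    # ground release, ground receptor, full reflection from the surface
    return 1.0 / (np.pi * u * sig_y * sig_z)

for x in (100.0, 250.0, 500.0, 1000.0):
    print(f"x = {x:6.0f} m   C/Q = {c_over_q(x):.2e} s/m^3")
```

The initial sigma dominates at short range, which is why the model is insensitive to the exact Briggs curve until the plume grows well beyond the street-canyon scale.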
NASA Astrophysics Data System (ADS)
Chandran, Benjamin D. G.; Hollweg, Joseph V.
2009-12-01
We study the propagation, reflection, and turbulent dissipation of Alfvén waves in coronal holes and the solar wind. We start with the Heinemann-Olbert equations, which describe non-compressive magnetohydrodynamic fluctuations in an inhomogeneous medium with a background flow parallel to the background magnetic field. Following the approach of Dmitruk et al., we model the nonlinear terms in these equations using a simple phenomenology for the cascade and dissipation of wave energy and assume that there is much more energy in waves propagating away from the Sun than in waves propagating toward the Sun. We then solve the equations analytically for waves with periods of hours and longer to obtain expressions for the wave amplitudes and turbulent heating rate as a function of heliocentric distance. We also develop a second approximate model that includes waves with periods of roughly one minute to one hour, which undergo less reflection than the longer-period waves, and compare our models to observations. Our models generalize the phenomenological model of Dmitruk et al. by accounting for the solar wind velocity, so that the turbulent heating rate can be evaluated from the coronal base out past the Alfvén critical point, that is, throughout the region in which most of the heating and acceleration occurs. The simple analytical expressions that we obtain can be used to incorporate Alfvén-wave reflection and turbulent heating into fluid models of the solar wind.
Modeling Creep Effects within SiC/SiC Turbine Components
NASA Technical Reports Server (NTRS)
DiCarlo, J. A.; Lang, J.
2008-01-01
Anticipating the implementation of advanced SiC/SiC ceramic composites into the hot section components of future gas turbine engines, the primary objective of this on-going study is to develop physics-based analytical and finite-element modeling tools to predict the effects of constituent creep on SiC/SiC component service life. A second objective is to understand how to possibly select and manipulate constituent materials, processes, and geometries in order to minimize these effects. In initial studies aimed at SiC/SiC components experiencing through-thickness stress gradients, creep models were developed that allowed an understanding of detrimental residual stress effects that can develop globally within the component walls. It was assumed that the SiC/SiC composites behaved as isotropic visco-elastic materials with temperature-dependent creep behavior as experimentally measured in-plane in the fiber direction of advanced thin-walled 2D SiC/SiC panels. The creep models and their key results are discussed assuming state-of-the-art SiC/SiC materials within a simple cylindrical thin-walled tubular structure, which is currently being employed to model creep-related effects for turbine airfoil leading edges subjected to through-thickness thermal stress gradients. Improvements in the creep models are also presented which focus on constituent behavior with more realistic non-linear stress dependencies in order to predict such key creep-related SiC/SiC properties as time-dependent matrix stress, constituent creep and content effects on composite creep rates and rupture times, and stresses on fiber and matrix during and after creep.
NASA Astrophysics Data System (ADS)
Zhai, Guang; Shirzaei, Manoochehr
2017-12-01
Geodetic observations of surface deformation associated with volcanic activities can be used to constrain volcanic source parameters and their kinematics. Simple analytical models, such as point and spherical sources, are widely used to model deformation data. The inherent nature of oversimplified model geometries makes them unable to explain fine details of surface deformation. Current nonparametric, geometry-free inversion approaches resolve the distributed volume change, assuming it varies smoothly in space, which may detect artificial volume change outside magmatic source regions. To obtain a physically meaningful representation of an irregular volcanic source, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations, namely, outliers in the background crust. First, surface deformation data are inverted using a hybrid L1- and L2-norm regularization scheme to solve for sparse volume change distributions. Next, a boundary element method is implemented to solve for the displacement discontinuity distribution of the reservoir, which satisfies a uniform pressure boundary condition. The inversion approach is thoroughly validated using benchmark and synthetic tests, of which the results show that source dimension, depth, and shape can be recovered appropriately. We apply this modeling scheme to deformation observed at Kilauea summit for periods of uplift and subsidence leading to and following the 2007 Father's Day event. We find that the magmatic source geometries for these periods are statistically distinct, which may be an indicator that magma is released from isolated compartments due to large differential pressure leading to the rift intrusion.
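The hybrid L1/L2 regularization step can be illustrated with an ISTA (iterative soft-thresholding) solver on a toy linear problem; the operator, weights, and source pattern below are synthetic assumptions, not the authors' implementation.

```python
# Sketch: hybrid L1/L2 (elastic-net style) inversion for sparse volume
# change, solved with ISTA on a toy linear problem d = G m + noise.
# The operator G and all weights are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n_data, n_src = 60, 200
G = rng.standard_normal((n_data, n_src)) / np.sqrt(n_data)
m_true = np.zeros(n_src)
m_true[[30, 31, 95]] = (1.0, 0.8, -0.5)   # a few localized volume changes
d = G @ m_true + 0.01 * rng.standard_normal(n_data)

lam1, lam2 = 0.02, 0.01                   # L1 (sparsity) and L2 (damping) weights
step = 1.0 / (np.linalg.norm(G, 2) ** 2 + lam2)  # ISTA step from the Lipschitz constant
m = np.zeros(n_src)
for _ in range(2000):
    grad = G.T @ (G @ m - d) + lam2 * m   # gradient of the smooth part
    z = m - step * grad
    m = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)  # soft threshold

print("nonzeros recovered:", np.flatnonzero(np.abs(m) > 0.05))
```

The soft-threshold step is what promotes well-localized (sparse) volume change, in contrast to smoothness-regularized inversions that spread volume change outside the source region.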
Magnetic properties of Proxima Centauri b analogues
NASA Astrophysics Data System (ADS)
Zuluaga, Jorge I.; Bustamante, Sebastian
2018-03-01
The discovery of a planet around the closest star to our Sun, Proxima Centauri, represents a quantum leap in the testability of exoplanetary models. Unlike any other discovered exoplanet, models of Proxima b could be contrasted against near-future telescopic observations and far-future in-situ measurements. In this paper we aim at predicting the planetary radius and the magnetic properties (dynamo lifetime and magnetic dipole moment) of Proxima b analogues (solid planets with masses of ∼1-3 M⊕, rotation periods of several days, and habitable conditions). For this purpose we build a grid of planetary models with a wide range of compositions and masses. For each point in the grid we run the planetary evolution model developed in Zuluaga et al. (2013). Our model assumes small orbital eccentricity, negligible tidal heating, and Earth-like radiogenic mantle element abundances. We devise a statistical methodology to estimate the posterior distribution of the desired planetary properties assuming simple prior distributions for the orbital inclination and bulk composition. Our model predicts that Proxima b would have a mass 1.3 ≤ Mp ≤ 2.3 M⊕ and a radius Rp = 1.4^{+0.3}_{-0.2} R⊕. In our simulations, most Proxima b analogues develop intrinsic dynamos that last for ≥4 Gyr (the estimated age of the host star). If alive, the dynamo of Proxima b has a dipole moment ℳdip in the range 0.32-2.9 ℳdip,⊕. These results are not restricted to Proxima b but also apply to Earth-like planets having similar observed properties.
Finding Mount Everest and handling voids.
Storch, Tobias
2011-01-01
Evolutionary algorithms (EAs) are randomized search heuristics that solve problems successfully in many cases. Their behavior is often described in terms of strategies to find a high location on Earth's surface. Unfortunately, many digital elevation models describing it contain void elements, i.e., elements not assigned an elevation. Therefore, we design and analyze simple EAs with different strategies to handle such partially defined functions. They are experimentally investigated on a dataset describing the elevation of Earth's surface. The largest value found by an EA within a certain runtime is measured, and the median over a few runs is computed and compared for the different EAs. For the dataset, the distribution of void elements seems to be neither random nor adversarial; they are so-called semirandomly distributed. To deepen our understanding of the behavior of the different EAs, they are considered theoretically on well-known pseudo-Boolean functions transferred to partially defined ones. These modifications are also performed in a semirandom way. The typical runtime until an optimum is found by an EA is analyzed, namely bounded from above and below, and compared for the different EAs. We find that for the random model it is a good strategy to assume that a void element has a worse function value than all previous elements, whereas for the adversary model it is a good strategy to assume that a void element has the best function value of all previous elements.
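The two void-handling strategies can be sketched with a (1+1)-EA on a partially defined OneMax-style function; the void pattern and all parameters below are illustrative toy choices, not the benchmark used in the paper.

```python
# Sketch: a (1+1)-EA on OneMax with "void" (undefined) points, comparing the
# two strategies from the abstract: treat a void as worse than everything
# seen so far (pessimistic), or as the best value seen so far (optimistic).
import random

def run(n=60, void_rate=0.2, optimistic=False, seed=0, budget=20000):
    rng = random.Random(seed)
    voids = {i for i in range(n) if rng.random() < void_rate}  # void "pattern"

    def value(x):
        # toy semirandom rule: a point is void if its number of ones hits a void level
        ones = sum(x)
        return None if ones in voids and ones != n else ones

    x = [rng.randint(0, 1) for _ in range(n)]
    best = value(x)
    for step in range(budget):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]          # standard bit mutation
        fy = value(y)
        if fy is None:
            accept = optimistic        # optimistic: a void is assumed best so far
        else:
            accept = best is None or fy >= best
        if accept:
            x = y
            best = fy if fy is not None else best
        if best == n:
            return step
    return budget

print("pessimistic strategy, steps to optimum:", run(optimistic=False))
print("optimistic strategy,  steps to optimum:", run(optimistic=True))
```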
Binary neutron star merger rate via the luminosity function of short gamma-ray bursts
NASA Astrophysics Data System (ADS)
Paul, Debdutta
2018-07-01
The luminosity function of short gamma ray bursts (GRBs) is modelled by using the available catalogue data of all short GRBs (sGRBs) detected till 2017 October. The luminosities are estimated via the `pseudo-redshifts' obtained from the `Yonetoku correlation', assuming a standard delay distribution between the cosmic star formation rate and the production rate of their progenitors. While the simple power law is ruled out to high confidence, the data are fit well both by exponential cutoff power law and broken power law models. Using the derived parameters of these models along with conservative values in the jet opening angles seen from afterglow observations, the true rate of sGRBs is derived. Assuming a sGRB is produced from each binary neutron star merger (BNSM), the rate of gravitational wave (GW) detections from these mergers is derived for the past, present, and future configurations of the GW detector networks. Stringent lower limits of 1.87 yr^{-1} for the aLIGO-VIRGO, and 3.11 yr^{-1} for the upcoming aLIGO-VIRGO-KAGRA-LIGO/India configurations are thus derived for the BNSM rate at 68 per cent confidence. The BNSM rates calculated from this work and that independently inferred from the observation of the only confirmed BNSM observed till date are shown to have a mild tension; however, the scenario that all BNSMs produce sGRBs cannot be ruled out.
Aad, G.; Abbott, B.; Abdallah, J.; ...
2015-10-29
This paper reviews and extends searches for the direct pair production of the scalar supersymmetric partners of the top and bottom quarks in proton-proton collisions collected by the ATLAS collaboration during the LHC Run 1. Most of the analyses use 20 fb^{-1} of collisions at a centre-of-mass energy of √s = 8 TeV, although in some cases an additional 4.7 fb^{-1} of collision data at √s = 7 TeV are used. New analyses are introduced to improve the sensitivity to specific regions of the model parameter space. Since no evidence of third-generation squarks is found, exclusion limits are derived by combining several analyses and are presented both in a simplified model framework, assuming simple decay chains, and within the context of more elaborate phenomenological supersymmetric models.
Atmospheric Fragmentation of the Canyon Diablo Meteoroid
NASA Technical Reports Server (NTRS)
Pierazzo, E.; Artemieva, N. A.
2005-01-01
About 50 kyr ago the impact of an iron meteoroid excavated Meteor Crater, Arizona, the first terrestrial structure widely recognized as a meteorite impact crater. Recent studies of ballistically dispersed impact melts from Meteor Crater indicate a compositionally unusually heterogeneous impact melt with high SiO2 and exceptionally high (10 to 25% on average) levels of projectile contamination. These are observations that must be explained by any theoretical modeling of the impact event. Simple atmospheric entry models for an iron meteorite similar to Canyon Diablo indicate that the surface impact speed should have been around 12 km/s [Melosh, personal comm.], not the 15-20 km/s generally assumed in previous impact models. This may help explain the unusual characteristics of the impact melt at Meteor Crater. We present alternative initial estimates of the motion in the atmosphere of an iron projectile similar to Canyon Diablo, to constrain the initial conditions of the impact event that generated Meteor Crater.
Statistical wiring of thalamic receptive fields optimizes spatial sampling of the retinal image
Wang, Xin; Sommer, Friedrich T.; Hirsch, Judith A.
2014-01-01
It is widely assumed that mosaics of retinal ganglion cells establish the optimal representation of visual space. However, relay cells in the visual thalamus often receive convergent input from several retinal afferents and, in cat, outnumber ganglion cells. To explore how the thalamus transforms the retinal image, we built a model of the retinothalamic circuit using experimental data and simple wiring rules. The model shows how the thalamus might form a resampled map of visual space with the potential to facilitate detection of stimulus position in the presence of sensor noise. Bayesian decoding conducted with the model provides support for this scenario. Despite its benefits, however, resampling introduces image blur, thus impairing edge perception. Whole-cell recordings obtained in vivo suggest that this problem is mitigated by arrangements of excitation and inhibition within the receptive field that effectively boost contrast borders, much like strategies used in digital image processing. PMID:24559681
Modeling and Scaling of the Distribution of Trade Avalanches in a STOCK Market
NASA Astrophysics Data System (ADS)
Kim, Hyun-Joo
We study the trading activity in the Korea Stock Exchange by considering trade avalanches. A series of successive trades with small inter-trade time intervals is regarded as a trade avalanche, whose size s is defined as the number of trades in the series. We measure the distribution of trade avalanche sizes P(s) and find that it follows the power-law behavior P(s) ~ s^{-α} with the exponent α ≈ 2 for the two stocks with the largest number of trades. A simple stochastic model which describes the power-law behavior of the distribution of trade avalanche sizes is introduced. In the model it is assumed that some trades induce accompanying trades, which results in trade avalanches, and we find that the distribution of trade avalanche sizes also follows a power-law behavior with the exponent α ≈ 2.
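The avalanche construction itself is simple to state in code: successive trades closer together than a threshold interval are merged into one avalanche, and the run length is its size. The sketch below applies this to synthetic bursty timestamps and fits the tail exponent; the threshold and the arrival process are assumptions for illustration, not the exchange data.

```python
# Sketch: defining trade avalanches from a series of trade times. Successive
# trades separated by less than dt_max join one avalanche; the avalanche size
# is the number of trades in the run. Timestamps here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
# toy trade times: bursty arrivals made of clustered exponential inter-arrivals
starts = np.cumsum(rng.exponential(10.0, size=3000))
times = np.sort(np.concatenate(
    [s + np.cumsum(rng.exponential(0.2, size=rng.integers(1, 40)))
     for s in starts]))

dt_max = 1.0                               # "small trade time interval" threshold
gaps = np.diff(times)
# run boundaries wherever the gap exceeds the threshold; run lengths are sizes
bounds = np.flatnonzero(np.concatenate(([True], gaps > dt_max, [True])))
sizes = np.diff(bounds)

# log-binned density of P(s); a power law P(s) ~ s**(-alpha) is a straight line
hist, edges = np.histogram(sizes, bins=np.logspace(0, np.log10(sizes.max()), 15))
density = hist / np.diff(edges)
centers = np.sqrt(edges[:-1] * edges[1:])
mask = hist > 0
alpha = -np.polyfit(np.log(centers[mask]), np.log(density[mask]), 1)[0]
print(f"estimated avalanche-size exponent alpha ~ {alpha:.2f}")
```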
A simple tandem disk model for a cross-wind machine
NASA Astrophysics Data System (ADS)
Healey, J. V.
The relative power coefficients, area expansion ratio, and crosswind forces for a crosswind turbine, e.g., the Darrieus, were examined with a tandem-disk, single-streamtube model. The upwind disk is assumed to be rectangular and the downwind disk is modeled as filling the wake of the upwind disk. Velocity and force triangles are devised for the factors operating at each blade. Attention was given to the NACA 0012 and 0018, and Go 735 and 420 airfoils as blades, with Reynolds numbers just under 500,000. The 0018 was found to be the best airfoil, followed by the 0012, the 735, and, very far behind in terms of the power coefficient, the 420. The forces on the two disks were calculated to be equal at low tip speed ratios with symmetrical airfoils, while the Go cambered profiles yielded negative values upwind in the same conditions.
Coupling surface and mantle dynamics: A novel experimental approach
NASA Astrophysics Data System (ADS)
Kiraly, Agnes; Faccenna, Claudio; Funiciello, Francesca; Sembroni, Andrea
2015-05-01
Recent modeling shows that surface processes, such as erosion and deposition, may drive the deformation of the Earth's surface, interfering with deeper crustal and mantle signals. To investigate the coupling between surface and deep processes, we designed a three-dimensional laboratory apparatus to analyze the role of erosion and sedimentation triggered by deep mantle instability. The setup is scaled to the natural gravity field and is based on a thin viscous sheet model, with the mantle and lithosphere simulated by Newtonian viscous glucose syrup and silicone putty, respectively. The surface process is simulated assuming a simple erosion law producing the downhill flow of a thin viscous material away from high topography. The deep mantle upwelling is triggered by the rise of a buoyant sphere. The results of these models, along with the parametric analysis, show how surface processes influence uplift velocity and topography signals.
NASA Technical Reports Server (NTRS)
Ball, Danny (Technical Monitor); Pagitz, M.; Pellegrino, Xu S.
2004-01-01
This paper presents a computational study of the stability of simple lobed balloon structures. Two approaches are presented, one based on a wrinkled material model and one based on a variable Poisson's ratio model that eliminates compressive stresses iteratively. The first approach is used to investigate the stability of both a single isotensoid and a stack of four isotensoids, for perturbations of infinitesimally small amplitude. It is found that both structures are stable for global deformation modes, but unstable for local modes at sufficiently large pressure. Both structures are stable if an isotropic model is assumed. The second approach is used to investigate the stability of the isotensoid stack for large shape perturbations, taking into account contact between different surfaces. For this structure a distorted, stable configuration is found. It is also found that the volume enclosed by this configuration is smaller than that enclosed by the undistorted structure.
NASA Technical Reports Server (NTRS)
Wiley, P. H.; Bostian, C. W.; Stutzman, W. L.
1973-01-01
The influence of polarization on millimeter wave propagation is investigated from both an experimental and a theoretical viewpoint. First, previous theoretical and experimental work relating to the attenuation and depolarization of millimeter waves by rainfall is discussed. Considerable detail is included in the literature review. Next, a theoretical model is developed to predict the cross-polarization level during rainfall from the path-average rain rate and the scattered field from a single raindrop. Finally, data from the VPI and SU depolarization experiment are presented as verification of the new model, and a comparison is made with other theories and experiments. Aspects of the new model are: (1) spherical rather than plane waves are assumed, (2) the average drop diameter is used rather than a drop size distribution, and (3) it is simple enough that the effect of changing one or more parameters on the cross-polarization level is easily seen.
Thermodynamic properties derived from the free volume model of liquids
NASA Technical Reports Server (NTRS)
Miller, R. I.
1974-01-01
An equation of state and expressions for the isothermal compressibility, thermal expansion coefficient, heat capacity, and entropy of liquids have been derived from the free volume model partition function suggested by Turnbull. The simple definition of the free volume is used, and it is assumed that the specific volume is directly related to the cube of the intermolecular separation by a proportionality factor which is found to be a function of temperature and pressure as well as specific volume. When values of the proportionality factor are calculated from experimental data for real liquids, it is found to be approximately constant over ranges of temperature and pressure which correspond to the dense liquid phase. This result provides a single-parameter method for calculating dense liquid thermodynamic properties and is consistent with the fact that the free volume model is designed to describe liquids near the solidification point.
A quantile regression model for failure-time data with time-dependent covariates
Gorfine, Malka; Goldberg, Yair; Ritov, Ya’acov
2017-01-01
Since survival data occur over time, important covariates that we wish to consider often also change over time. Such covariates are referred to as time-dependent covariates. Quantile regression offers flexible modeling of survival data by allowing the covariates to vary with quantiles. This article provides a novel quantile regression model accommodating time-dependent covariates, for analyzing survival data subject to right censoring. Our simple estimation technique assumes the existence of instrumental variables. In addition, we present a doubly-robust estimator in the sense of Robins and Rotnitzky (1992, Recovery of information and adjustment for dependent censoring using surrogate markers. In: Jewell, N. P., Dietz, K. and Farewell, V. T. (editors), AIDS Epidemiology. Boston: Birkhäuser, pp. 297-331). The asymptotic properties of the estimators are rigorously studied. Finite-sample properties are demonstrated by a simulation study. The utility of the proposed methodology is demonstrated using the Stanford heart transplant dataset. PMID:27485534
Casero-Alonso, V; López-Fidalgo, J; Torsney, B
2017-01-01
Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights.
Does Spike-Timing-Dependent Synaptic Plasticity Couple or Decouple Neurons Firing in Synchrony?
Knoblauch, Andreas; Hauser, Florian; Gewaltig, Marc-Oliver; Körner, Edgar; Palm, Günther
2012-01-01
Spike synchronization is thought to have a constructive role for feature integration, attention, associative learning, and the formation of bidirectionally connected Hebbian cell assemblies. By contrast, theoretical studies on spike-timing-dependent plasticity (STDP) report an inherently decoupling influence of spike synchronization on synaptic connections of coactivated neurons. For example, bidirectional synaptic connections as found in cortical areas could be reproduced only by assuming realistic models of STDP and rate coding. We resolve this conflict by theoretical analysis and simulation of various simple and realistic STDP models that provide a more complete characterization of conditions when STDP leads to either coupling or decoupling of neurons firing in synchrony. In particular, we show that STDP consistently couples synchronized neurons if key model parameters are matched to physiological data: First, synaptic potentiation must be significantly stronger than synaptic depression for small (positive or negative) time lags between presynaptic and postsynaptic spikes. Second, spike synchronization must be sufficiently imprecise, for example, within a time window of 5–10 ms instead of 1 ms. Third, axonal propagation delays should not be much larger than dendritic delays. Under these assumptions synchronized neurons will be strongly coupled leading to a dominance of bidirectional synaptic connections even for simple STDP models and low mean firing rates at the level of spontaneous activity. PMID:22936909
Xu, Lei; Jeavons, Peter
2015-11-01
Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
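A minimal simulation conveys the flavor of such one-bit protocols in a complete network: candidates broadcast with some probability, and a silent candidate that hears at least one message withdraws. This sketch mirrors the stated message restrictions, not the paper's exact algorithm; the probability p and the network size are illustrative assumptions.

```python
# Sketch: leader election in a complete network with one-bit broadcasts.
# In each round every remaining candidate speaks with probability p; a
# candidate that stays silent while hearing someone else drops out.
# Nodes only distinguish "silence" from "one or more messages".
import random

def elect(n, p=0.5, seed=0):
    rng = random.Random(seed)
    candidates = list(range(n))
    rounds = 0
    while len(candidates) > 1:
        rounds += 1
        speakers = [c for c in candidates if rng.random() < p]
        if speakers:                      # silent candidates heard someone: drop out
            candidates = speakers
        # if nobody spoke, the round is wasted and everyone stays in
    return candidates[0], rounds

leader, rounds = elect(n=64)
print(f"leader {leader} elected after {rounds} rounds")
```

With p = 1/2 the candidate set roughly halves each productive round, so the expected number of rounds grows logarithmically with the network size.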
Marinsky, J.A.; Reddy, M.M.
1984-01-01
We summarize here experimental studies of proton and metal ion binding to a peat and a humic acid. Data analysis is based on a unified physico-chemical model for the reaction of simple ions with polyelectrolytes employing a modified Henderson-Hasselbalch equation. Peat exhibited an apparent intrinsic acid dissociation constant of 10^{-4.05}, and apparent intrinsic metal ion binding constants of 400 for cadmium ion, 600 for zinc ion, 4000 for copper ion, and 20000 for lead ion. A humic acid was found to have an apparent intrinsic proton binding constant of 10^{-2.6}. Copper ion binding to this humic acid sample occurred at two types of sites. The first site exhibited reaction characteristics which were independent of solution pH and required the interaction of two ligands on the humic acid matrix to simultaneously complex with each copper ion. The second complex species is assumed to be a simple monodentate copper ion-carboxylate species with a stability constant of 18. © 1984.
Ostrom, Elinor; Janssen, Marco A.; Anderies, John M.
2007-01-01
In the context of governance of human–environment interactions, a panacea refers to a blueprint for a single type of governance system (e.g., government ownership, privatization, community property) that is applied to all environmental problems. The aim of this special feature is to provide theoretical analysis and empirical evidence to caution against the tendency, when confronted with pervasive uncertainty, to believe that scholars can generate simple models of linked social–ecological systems and deduce general solutions to the overuse of resources. Practitioners and scholars who fall into panacea traps falsely assume that all problems of resource governance can be represented by a small set of simple models, because they falsely perceive that the preferences and perceptions of most resource users are the same. Readers of this special feature will become acquainted with many cases in which panaceas fail. The articles provide an excellent overview of why they fail. Furthermore, the articles in this special feature address how scholars and public officials can increase the prospects for future sustainable resource use by facilitating a diagnostic approach in selecting appropriate starting points for governance and monitoring, as well as by learning from the outcomes of new policies and adapting in light of effective feedback. PMID:17881583
Simplified adaptive control of an orbiting flexible spacecraft
NASA Astrophysics Data System (ADS)
Maganti, Ganesh B.; Singh, Sahjendra N.
2007-10-01
The paper presents the design of a new simple adaptive system for the rotational maneuver and vibration suppression of an orbiting spacecraft with flexible appendages. A moment generating device located on the central rigid body of the spacecraft is used for the attitude control. It is assumed that the system parameters are unknown and the truncated model of the spacecraft has finite but arbitrary dimension. In addition, only the pitch angle and its derivative are measured and elastic modes are not available for feedback. The control output variable is chosen as the linear combination of the pitch angle and the pitch rate. Exploiting the hyper minimum phase nature of the spacecraft, a simple adaptive control law is derived for the pitch angle control and elastic mode stabilization. The adaptation rule requires only four adjustable parameters and the structure of the control system does not depend on the order of the truncated spacecraft model. For the synthesis of control system, the measured output error and the states of a third-order command generator are used. Simulation results are presented which show that in the closed-loop system adaptive output regulation is accomplished in spite of large parameter uncertainties and disturbance input.
Biomimetic Models for An Ecological Approach to Massively-Deployed Sensor Networks
NASA Technical Reports Server (NTRS)
Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin, Chunsheng
2005-01-01
Promises of ubiquitous control of the physical environment by massively-deployed wireless sensor networks open avenues for new applications that will redefine the way we live and work. Due to small size and low cost of sensor devices, visionaries promise systems enabled by deployment of massive numbers of sensors ubiquitous throughout our environment working in concert. Recent research has concentrated on developing techniques for performing relatively simple tasks with minimal energy expense, assuming some form of centralized control. Unfortunately, centralized control is not conducive to parallel activities and does not scale to massive size networks. Execution of simple tasks in sparse networks will not lead to the sophisticated applications predicted. We propose a new way of looking at massively-deployed sensor networks, motivated by lessons learned from the way biological ecosystems are organized. We demonstrate that in such a model, fully distributed data aggregation can be performed in a scalable fashion in massively deployed sensor networks, where motes operate on local information, making local decisions that are aggregated across the network to achieve globally-meaningful effects. We show that such architectures may be used to facilitate communication and synchronization in a fault-tolerant manner, while balancing workload and required energy expenditure throughout the network.
NASA Astrophysics Data System (ADS)
Magnani, Federico; Dewar, Roderick C.; Borghetti, Marco
2009-04-01
Leakage (spillover) refers to the unintended negative (positive) consequences of forest carbon (C) management in one area on C storage elsewhere. For example, the local C storage benefit of less intensive harvesting in one area may be offset, partly or completely, by intensified harvesting elsewhere in order to meet global timber demand. We present the results of a theoretical study aimed at identifying the key factors determining leakage and spillover, as a prerequisite for more realistic numerical studies. We use a simple model of C storage in managed forest ecosystems and their wood products to derive approximate analytical expressions for the leakage induced by decreasing the harvesting frequency of existing forest, and the spillover induced by establishing new plantations, assuming a fixed total wood production from local and remote (non-local) forests combined. We find that leakage and spillover depend crucially on the growth rates, wood product lifetimes and woody litter decomposition rates of local and remote forests. In particular, our results reveal critical thresholds for leakage and spillover, beyond which effects of forest management on remote C storage exceed local effects. Order of magnitude estimates of leakage indicate its potential importance at global scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, C.S.; Colestock, P.
1989-05-01
The highly anisotropic particle distribution function of minority tail ions driven by ion-cyclotron resonance heating at the fundamental harmonic is calculated in a two-dimensional velocity space. It is assumed that the heating is strong enough to drive most of the resonant ions above the ion-electron critical slowing-down energy. Simple analytic expressions for the tail distribution are obtained for the case when the Doppler effect is sufficiently large to flatten the sharp pitch angle dependence in the bounce-averaged quasilinear heating coefficient, D_b, and for the case when D_b is assumed to be constant in pitch angle and energy. It is found that a simple constant-D_b solution can be used instead of the more complicated sharp-D_b solution for many analytic purposes. 4 refs., 4 figs.
Bar, Nadav S.; Skogestad, Sigurd; Marçal, Jose M.; Ulanovsky, Nachum; Yovel, Yossi
2015-01-01
Animal flight requires fine motor control. However, it is unknown how flying animals rapidly transform noisy sensory information into adequate motor commands. Here we developed a sensorimotor control model that explains vertebrate flight guidance with high fidelity. This simple model accurately reconstructed complex trajectories of bats flying in the dark. The model implies that in order to apply appropriate motor commands, bats have to estimate not only the angle-to-target, as was previously assumed, but also the angular velocity (“proportional-derivative” controller). Next, we conducted experiments in which bats flew in light conditions. When using vision, bats altered their movements, reducing the flight curvature. This change was explained by the model via reduction in sensory noise under vision versus pure echolocation. These results imply a surprising link between sensory noise and movement dynamics. We propose that this sensory-motor link is fundamental to motion control in rapidly moving animals under different sensory conditions, on land, sea, or air. PMID:25629809
Blanton, Hart; Jaccard, James
2006-01-01
Theories that posit multiplicative relationships between variables are common in psychology. A. G. Greenwald et al. recently presented a theory that explicated relationships between group identification, group attitudes, and self-esteem. Their theory posits a multiplicative relationship between concepts when predicting a criterion variable. Greenwald et al. suggested analytic strategies to test their multiplicative model that researchers might assume are appropriate for testing multiplicative models more generally. The theory and analytic strategies of Greenwald et al. are used as a case study to show the strong measurement assumptions that underlie certain tests of multiplicative models. It is shown that the approach used by Greenwald et al. can lead to declarations of theoretical support when the theory is wrong as well as rejection of the theory when the theory is correct. A simple strategy for testing multiplicative models that makes weaker measurement assumptions than the strategy proposed by Greenwald et al. is suggested and discussed.
Age-dependence of the average and equivalent refractive indices of the crystalline lens
Charman, W. Neil; Atchison, David A.
2013-01-01
Lens average and equivalent refractive indices are required for purposes such as lens thickness estimation and optical modeling. We modeled the refractive index gradient as a power function of the normalized distance from lens center. Average index along the lens axis was estimated by integration. Equivalent index was estimated by raytracing through a model eye to establish ocular refraction, and then backward raytracing to determine the constant refractive index yielding the same refraction. Assuming center and edge indices remained constant with age, at 1.415 and 1.37 respectively, average axial refractive index increased (1.408 to 1.411) and equivalent index decreased (1.425 to 1.420) with age increase from 20 to 70 years. These values agree well with experimental estimates based on different techniques, although the latter show considerable scatter. The simple model of index gradient gives reasonable estimates of average and equivalent lens indices, although refinements in modeling and measurements are required. PMID:24466474
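The average-index calculation reduces to integrating the assumed power-function profile across the normalized lens radius. The sketch below uses the center and edge indices quoted above (1.415 and 1.37) with illustrative gradient exponents g; the specific g values are assumptions, not the fitted age-dependent ones.

```python
# Sketch: average axial refractive index of a lens whose index falls from the
# center value n_c to the edge value n_e as a power function of normalized
# distance r from the lens center: n(r) = n_c - (n_c - n_e) * r**g.
from scipy.integrate import quad

n_c, n_e = 1.415, 1.37                     # center and edge indices (held fixed)

def n_profile(r, g):
    return n_c - (n_c - n_e) * r**g

for g in (3.0, 6.0, 12.0):                 # gradient steepens as g grows
    n_avg, _ = quad(n_profile, 0.0, 1.0, args=(g,))
    print(f"g = {g:5.1f}   average axial index = {n_avg:.4f}")
# Analytically the integral is n_c - (n_c - n_e)/(g + 1): a steeper gradient
# (larger g) keeps more of the lens near the center index, raising the average
# axial index, the age trend reported in the abstract (1.408 -> 1.411).
```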
NASA Astrophysics Data System (ADS)
Kari, Leif
2017-09-01
The constitutive equations of chemically and physically ageing rubber in the audible frequency range are modelled as a function of ageing temperature, ageing time, actual temperature, time and frequency. The constitutive equations are derived by assuming nearly incompressible material with elastic spherical response and viscoelastic deviatoric response, using a Mittag-Leffler relaxation function of fractional derivative type, the main advantage being the minimum number of material parameters needed to successfully fit experimental data over a broad frequency range. The material is furthermore assumed essentially entropic and thermo-mechanically simple, using a modified Williams-Landel-Ferry shift function to take into account temperature dependence and physical ageing, with the fractional free volume evolution modelled by a nonlinear fractional differential equation with a relaxation time identical to that of the stress response and related to the fractional free volume by the Doolittle equation. Physical ageing is a reversible ageing process, including trapping and freeing of polymer chain ends, polymer chain reorganizations and free volume changes. In contrast, chemical ageing is an irreversible process, mainly attributed to oxygen reacting with the polymer network, either damaging the network by scission or forming new polymer links. The chemical ageing is modelled by inner variables that are determined by inner fractional evolution equations. Finally, the model parameters are fitted to measurement results for natural rubber over a broad audible frequency range, and various parameter studies are performed, including comparison with results obtained by ordinary, non-fractional ageing evolution differential equations.
Do fungi need to be included within environmental radiation protection assessment models?
Guillén, J; Baeza, A; Beresford, N A; Wood, M D
2017-09-01
Fungi are used as biomonitors of forest ecosystems, having comparatively high uptakes of anthropogenic and naturally occurring radionuclides. However, whilst they are known to accumulate radionuclides, they are not typically considered in radiological assessment tools for environmental (non-human biota) assessment. In this paper the total dose rate to fungi is estimated using the ERICA Tool, assuming different fruiting body geometries: a single ellipsoid and more complex geometries considering the different components of the fruiting body and their differing radionuclide contents based upon measurement data. Anthropogenic and naturally occurring radionuclide concentrations from the Mediterranean ecosystem (Spain) were used in this assessment. The total estimated weighted dose rate was in the range 0.31-3.4 μGy/h (5th-95th percentile), similar to natural exposure rates reported for other wild groups. The total estimated dose was dominated by internal exposure, especially from 226Ra and 210Po. Differences in dose rate between complex geometries and a simple ellipsoid model were negligible; therefore, the simple ellipsoid model is recommended for assessing dose rates to fungal fruiting bodies. Fungal mycelium was also modelled, assuming a long filament. Using these geometries, assessments for fungal fruiting bodies and mycelium under different scenarios (post-accident, planned release and existing exposure) were conducted, each based on available monitoring data. The estimated total dose rate in each case was below the ERICA screening benchmark dose, except for the example post-accident existing exposure scenario (the Chernobyl Exclusion Zone), for which a dose rate in excess of 35 μGy/h was estimated for the fruiting body. The estimated mycelium dose rate in this post-accident existing exposure scenario was close to the 400 μGy/h benchmark for plants, although fungi are generally considered to be less radiosensitive than plants. Further research on appropriate mycelium geometries and their radionuclide content is required. Based on the assessments presented in this paper, there is no need to recommend that fungi be added to the existing assessment tools and frameworks; if required, some tools allow a geometry representing fungi to be created and used within a dose assessment.
Accounting for groundwater in stream fish thermal habitat responses to climate change
Snyder, Craig D.; Hitt, Nathaniel P.; Young, John A.
2015-01-01
Forecasting climate change effects on aquatic fauna and their habitat requires an understanding of how water temperature responds to changing air temperature (i.e., thermal sensitivity). Previous efforts to forecast climate effects on brook trout habitat have generally assumed uniform air-water temperature relationships over large areas that cannot account for groundwater inputs and other processes that operate at finer spatial scales. We developed regression models that accounted for groundwater influences on thermal sensitivity from measured air-water temperature relationships within forested watersheds in eastern North America (Shenandoah National Park, USA, 78 sites in 9 watersheds). We used these reach-scale models to forecast climate change effects on stream temperature and brook trout thermal habitat, and compared our results to previous forecasts based upon large-scale models. Observed stream temperatures were generally less sensitive to air temperature than previously assumed, and we attribute this to the moderating effect of shallow groundwater inputs. Predicted groundwater temperatures from air-water regression models corresponded well to observed groundwater temperatures elsewhere in the study area. Predictions of brook trout future habitat loss derived from our fine-grained models were far less pessimistic than those from prior models developed at coarser spatial resolutions. However, our models also revealed spatial variation in thermal sensitivity within and among catchments resulting in a patchy distribution of thermally suitable habitat. Habitat fragmentation due to thermal barriers therefore may have an increasingly important role for trout population viability in headwater streams. Our results demonstrate that simple adjustments to air-water temperature regression models can provide a powerful and cost-effective approach for predicting future stream temperatures while accounting for effects of groundwater.
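The reach-scale building block is an ordinary air-water temperature regression whose slope is the thermal sensitivity; shallow groundwater inputs depress the slope below 1. The sketch below fits this regression to synthetic weekly data for two hypothetical reaches; all data and coefficients are invented for illustration.

```python
# Sketch: reach-scale thermal sensitivity as the slope of a weekly
# water-vs-air temperature regression; slopes well below 1 suggest
# groundwater buffering. Data here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(3)
air = 10.0 + 8.0 * np.sin(2 * np.pi * np.arange(52) / 52) + rng.normal(0, 1.5, 52)

def fit_sensitivity(air, water):
    slope, intercept = np.polyfit(air, water, 1)
    return slope, intercept

# two hypothetical reaches: weak vs strong shallow-groundwater influence
water_runoff = 2.0 + 0.85 * air + rng.normal(0, 0.8, 52)   # sensitive reach
water_gw     = 6.0 + 0.35 * air + rng.normal(0, 0.8, 52)   # buffered reach

for name, w in (("runoff-dominated", water_runoff), ("groundwater-fed", water_gw)):
    s, b = fit_sensitivity(air, w)
    print(f"{name:18s} slope = {s:.2f}, intercept = {b:.2f} degC")
# Projected stream warming scales as slope * projected air warming, so the
# buffered reach warms far less for the same climate signal, which is why
# uniform large-scale air-water relationships overstate habitat loss.
```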
Is cosmic acceleration proven by local cosmological probes?
NASA Astrophysics Data System (ADS)
Tutusaus, I.; Lamine, B.; Dupays, A.; Blanchard, A.
2017-06-01
Context. The cosmological concordance model (ΛCDM) matches the cosmological observations exceedingly well. This model has become the standard cosmological model with the evidence for an accelerated expansion provided by the type Ia supernovae (SNIa) Hubble diagram. However, the robustness of this evidence has been addressed recently with somewhat diverging conclusions. Aims: The purpose of this paper is to assess the robustness of the conclusion that the Universe is indeed accelerating if we rely only on low-redshift (z ≲ 2) observations, that is to say with SNIa, baryonic acoustic oscillations, measurements of the Hubble parameter at different redshifts, and measurements of the growth of matter perturbations. Methods: We used the standard statistical procedure of minimizing the χ2 function for the different probes to quantify the goodness of fit of a model for both ΛCDM and a simple nonaccelerated low-redshift power law model. In this analysis, we do not assume that supernovae intrinsic luminosity is independent of the redshift, which has been a fundamental assumption in most previous studies that cannot be tested. Results: We have found that, when SNIa intrinsic luminosity is not assumed to be redshift independent, a nonaccelerated low-redshift power law model is able to fit the low-redshift background data as well as, or even slightly better, than ΛCDM. When measurements of the growth of structures are added, a nonaccelerated low-redshift power law model still provides an excellent fit to the data for all the luminosity evolution models considered. Conclusions: Without the standard assumption that supernovae intrinsic luminosity is independent of the redshift, low-redshift probes are consistent with a nonaccelerated universe.
NASA Astrophysics Data System (ADS)
Mitsui, Takahito; Crucifix, Michel
2017-04-01
The last glacial period was punctuated by a series of abrupt climate shifts, the so-called Dansgaard-Oeschger (DO) events. The frequency of DO events varied in time, supposedly because of changes in background climate conditions. Here, the influence of external forcings on DO events is investigated with statistical modelling. We assume two types of simple stochastic dynamical systems models (double-well potential-type and oscillator-type), forced by the northern hemisphere summer insolation change and/or the global ice volume change. The model parameters are estimated by using the maximum likelihood method with the NGRIP Ca^{2+} record. The stochastic oscillator model with at least the ice volume forcing reproduces well the sample autocorrelation function of the record and the frequency changes of warming transitions in the last glacial period across MISs 2, 3, and 4. The model performance is improved with the additional insolation forcing. The BIC scores also suggest that the ice volume forcing is relatively more important than the insolation forcing, though the strength of evidence depends on the model assumption. Finally, we simulate the average number of warming transitions in the past four glacial periods, assuming the model can be extended beyond the last glacial, and compare the result with an Iberian margin sea-surface temperature (SST) record (Martrat et al. in Science 317(5837): 502-507, 2007). The simulation result supports the previous observation that abrupt millennial-scale climate changes in the penultimate glacial (MIS 6) are less frequent than in the last glacial (MISs 2-4). On the other hand, it suggests that the number of abrupt millennial-scale climate changes in older glacial periods (MISs 6, 8, and 10) might be larger than inferred from the SST record.
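A minimal version of the double-well variant can be written as a forced overdamped SDE integrated with Euler-Maruyama; the forcing shape, noise level, and transition threshold below are illustrative assumptions rather than the maximum-likelihood estimates from the NGRIP fit.

```python
# Sketch: a forced double-well stochastic model of the type fitted to the
# NGRIP record: dx = -(x**3 - x + F(t)) dt + sigma dW, where F(t) is a slow
# external forcing (e.g., scaled ice volume). Values are illustrative.
import numpy as np

rng = np.random.default_rng(5)
dt, T = 0.01, 2000.0
n = int(T / dt)
t = np.linspace(0.0, T, n)
F = 0.15 * np.sin(2 * np.pi * t / 500.0)   # slow background forcing
sigma = 0.35                               # noise intensity

x = np.empty(n)
x[0] = -1.0                                # start in the "stadial" well
for i in range(n - 1):
    drift = -(x[i] ** 3 - x[i] + F[i])     # gradient of the double-well potential
    x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# count abrupt "warming" transitions: upward crossings of x = 0
crossings = np.sum((x[:-1] < 0.0) & (x[1:] >= 0.0))
print(f"upward transitions in the run: {crossings}")
# When |F| is large, one well becomes shallow and transitions cluster,
# mimicking the modulation of DO-event frequency by background climate.
```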
A frictional population model of seismicity rate change
Gomberg, J.; Reasenberg, P.; Cocco, M.; Belardinelli, M.E.
2005-01-01
We study models of seismicity rate changes caused by the application of a static stress perturbation to a population of faults and discuss our results with respect to the model proposed by Dieterich (1994). These models assume a distribution of nucleation sites (e.g., faults) obeying rate-state frictional relations that fail at a constant rate under tectonic loading alone, and predict that a positive static stress step at time t0 will cause an immediate increase in seismicity rate that decays according to Omori's law. We show one way in which the Dieterich model may be constructed from simple general ideas, illustrated using numerically computed synthetic seismicity and a mathematical formulation. We show that seismicity rate changes predicted by these models (1) depend on the particular relationship between the clock-advanced failure and fault maturity, (2) are largest for the faults closest to failure at t0, (3) depend strongly on which state evolution law the faults obey, and (4) are insensitive to some types of population heterogeneity. We also find that if individual faults fail repeatedly and populations are finite, at timescales much longer than typical aftershock durations, quiescence follows a seismicity rate increase regardless of the specific frictional relations. For the examined models the quiescence duration is comparable to the ratio of stress change to stressing rate, Δτ/τ̇, which occurs after a time comparable to the average recurrence interval of the individual faults in the population and repeats in the absence of any new load perturbations; this simple model may partly explain observations of repeated clustering of earthquakes.
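For reference, the Dieterich (1994) rate response that these population models are compared against has a compact closed form, sketched below with illustrative parameter values (the formula is standard; the numbers are not from this study).

```python
# Sketch: the Dieterich (1994) response of a fault population's seismicity
# rate R to a static Coulomb stress step dtau at t = 0:
#   R(t) = r / (1 + (exp(-dtau / (A*sigma)) - 1) * exp(-t / t_a)),
# with background rate r and aftershock duration t_a = A*sigma / tau_dot.
# Parameter values below are illustrative.
import numpy as np

r = 1.0              # background rate (events/day)
a_sigma = 0.1        # A*sigma (MPa)
tau_dot = 1e-4       # tectonic stressing rate (MPa/day)
dtau = 0.5           # static stress step (MPa)
t_a = a_sigma / tau_dot

for t in (0.0, 1.0, 10.0, 100.0, 10 * t_a):
    R = r / (1.0 + (np.exp(-dtau / a_sigma) - 1.0) * np.exp(-t / t_a))
    print(f"t = {t:10.1f} d   R/r = {R / r:10.2f}")
# Early times: R/r ~ exp(dtau/(A*sigma)), an Omori-like elevated rate;
# for t >> t_a the rate relaxes back toward the background value r.
```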
Two Back Stress Hardening Models in Rate Independent Rigid Plastic Deformation
NASA Astrophysics Data System (ADS)
Yun, Su-Jin
In the present work, constitutive relations based on the combination of two back stresses are developed using the Armstrong-Frederick, Phillips, and Ziegler type hardening rules. Various evolutions of the kinematic hardening parameter can be obtained by means of a simple combination of back stress rates using the rule of mixtures. Thus, a wide range of plastic deformation behavior can be depicted depending on the dominant back stress evolution. The ultimate back stress is also determined for the present combined kinematic hardening models. Since a kinematic hardening rule is assumed in the finite deformation regime, the stress rate is co-rotated with respect to the spin of the substructure obtained by incorporating the plastic spin concept. A comparison of the various co-rotational rates is also included. Assuming rigid plasticity, the continuum body consists of an elastic deformation zone and a plastic deformation zone, forming a hybrid finite element formulation. The plastic deformation behavior is then investigated under various loading conditions with an assumption of the J2 deformation theory. The plastic deformation localization turns out to be strongly dependent on the description of the back stress evolution and its associated hardening parameters. An analysis of shear deformation with fixed boundaries is carried out to examine the deformation localization behavior and the evolution of state variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McHugh, P.R.; Ramshaw, J.D.
MAGMA is a FORTRAN computer code designed to model viscous flow in in situ vitrification melt pools. It models three-dimensional, incompressible, viscous flow and heat transfer. The momentum equation is coupled to the temperature field through the buoyancy force terms arising from the Boussinesq approximation. All fluid properties, except density, are assumed variable. Density is assumed constant except in the buoyancy force terms in the momentum equation. A simple melting model based on the enthalpy method allows the study of melt front progression and latent heat effects. An indirect addressing scheme used in the numerical solution of the momentum equation avoids unnecessary calculations in cells devoid of liquid. Two-dimensional calculations can be performed using either rectangular or cylindrical coordinates, while three-dimensional calculations use rectangular coordinates. All derivatives are approximated by finite differences. The incompressible Navier-Stokes equations are solved using a new fully implicit iterative technique, while the energy equation is differenced explicitly in time. Spatial derivatives are written in conservative form using a uniform, rectangular, staggered mesh based on the marker-and-cell placement of variables. Convective terms are differenced using a weighted average of centered and donor cell differencing to ensure numerical stability. Complete descriptions of the MAGMA governing equations, numerics, code structure, and code verification are provided. 14 refs.
Deflections of Uniformly Loaded Floors. A Beam-Spring Analog.
1984-09-01
Joist floor systems have long been analyzed and designed by assuming that the joists act as simple beams in carrying the design load. This simple method neglects many ... Recently, the FEAFLO program was used to predict the behavior of floors constructed with joists whose properties were determined in ... uniform joist properties. The floor with nailed sheathing was designated N-3, and the floor with the sheathing attached by means of a rigid ...
Hahn, Melinda W; O'Melia, Charles R
2004-01-01
The deposition and reentrainment of particles in porous media have been examined theoretically and experimentally. A Brownian Dynamics/Monte Carlo (MC/BD) model has been developed that simulates the movement of Brownian particles near a collector under "unfavorable" chemical conditions and allows deposition in primary and secondary minima. A simple Maxwell approach has been used to estimate particle attachment efficiency by assuming deposition in the secondary minimum and calculating the probability of reentrainment. The MC/BD simulations and the Maxwell calculations support an alternative view of the deposition and reentrainment of Brownian particles under unfavorable chemical conditions. These calculations indicate that deposition into and subsequent release from secondary minima can explain reported discrepancies between classic model predictions that assume irreversible deposition in a primary well and experimentally determined deposition efficiencies that are orders of magnitude larger than Interaction Force Boundary Layer (IFBL) predictions. The commonly used IFBL model, for example, is based on the notion of transport over an energy barrier into the primary well and does not address contributions of secondary minimum deposition. A simple Maxwell model based on deposition into and reentrainment from secondary minima is much more accurate in predicting deposition rates for column experiments at low ionic strengths. It also greatly reduces the substantial particle size effects inherent in IFBL models, wherein particle attachment rates are predicted to decrease significantly with increasing particle size. This view is consistent with recent work by others addressing the composition and structure of the first few nanometers at solid-water interfaces including research on modeling water at solid-liquid interfaces, surface speciation, interfacial force measurements, and the rheological properties of concentrated suspensions. It follows that deposition under these conditions will depend on the depth of the secondary minimum and that some transition between secondary and primary depositions should occur when the height of the energy barrier is on the order of several kT. When deposition in secondary minima predominates, observed deposition should increase with increasing ionic strength, particle size, and Hamaker constant. Since an equilibrium can develop between bound and bulk particles, the collision efficiency α can no longer be considered a constant for a given physical and chemical system. Rather, in many cases it can decrease over time until it eventually reaches zero as equilibrium is established.
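The Maxwell estimate referred to above has a compact closed form: treating the particle's kinetic energy as Maxwellian, the attached fraction is the probability that this energy is below the secondary-minimum depth. The sketch below evaluates that expression for a range of assumed well depths; the depths are illustrative, not computed from specific DLVO parameters.

```python
# Sketch: the simple Maxwell estimate of attachment efficiency alpha:
# a particle in a secondary minimum of depth phi (in kT units) stays
# attached if its Brownian kinetic energy is below phi. For a Maxwellian
# speed distribution the attached fraction is
#   alpha(phi) = erf(sqrt(phi)) - (2/sqrt(pi)) * sqrt(phi) * exp(-phi).
import math

def alpha_maxwell(phi_kT):
    s = math.sqrt(phi_kT)
    return math.erf(s) - 2.0 / math.sqrt(math.pi) * s * math.exp(-phi_kT)

for phi in (0.5, 1.0, 2.0, 4.0, 8.0):      # illustrative well depths
    print(f"well depth = {phi:4.1f} kT   alpha = {alpha_maxwell(phi):.3f}")
# Deeper secondary minima (higher ionic strength, larger particles or Hamaker
# constants) retain a larger fraction of particles, so alpha rises toward 1,
# the opposite size trend to the IFBL barrier-crossing picture.
```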
Fahnline, John B
2016-12-01
An equivalent source method is developed for solving transient acoustic boundary value problems. The method assumes the boundary surface is discretized in terms of triangular or quadrilateral elements and that the solution is represented using the acoustic fields of discrete sources placed at the element centers. Also, the boundary condition is assumed to be specified for the normal component of the surface velocity as a function of time, and the source amplitudes are determined to match the known elemental volume velocity vector at a series of discrete time steps. Equations are given for marching-on-in-time schemes to solve for the source amplitudes at each time step for simple, dipole, and tripole source formulations. Several example problems are solved to illustrate the results and to validate the formulations, including problems with closed boundary surfaces where long-time numerical instabilities typically occur. A simple relationship between the simple and dipole source amplitudes in the tripole source formulation is derived so that the source radiates primarily in the direction of the outward surface normal. The tripole source formulation is shown to eliminate interior acoustic resonances and long-time numerical instabilities.
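The marching-on-in-time structure can be sketched generically. Assuming the retarded influence matrices A[k] relating past source amplitudes to the present elemental volume velocities have already been assembled (their construction from the simple, dipole, or tripole fields is the substance of the method and is omitted here), the amplitudes follow one step at a time:

    import numpy as np

    def march_on_in_time(A, b):
        """Solve sum_k A[k] @ q[n-k] = b[n] for source amplitudes q[n].
        A: list of (m, m) influence matrices (A[0] couples current sources
        to current boundary data); b: (nt, m) known boundary data."""
        nt, m = b.shape
        q = np.zeros((nt, m))
        A0_inv = np.linalg.inv(A[0])          # reused at every time step
        for n in range(nt):
            rhs = b[n].copy()
            for k in range(1, min(n + 1, len(A))):
                rhs -= A[k] @ q[n - k]        # subtract the retarded history
            q[n] = A0_inv @ rhs
        return q

    A = [np.array([[1.0]]), np.array([[0.5]])]
    print(march_on_in_time(A, np.ones((4, 1))).ravel())  # [1.0, 0.5, 0.75, 0.625]

The long-time instabilities mentioned above show up in such schemes as slowly growing oscillations in q; the tripole formulation is the paper's remedy.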
2011-01-01
Background Real-time forecasting of epidemics, especially those based on a likelihood-based approach, is understudied. This study aimed to develop a simple method that can be used for the real-time epidemic forecasting. Methods A discrete time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming the linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. Results The quality of the forecasts made before the epidemic peak appears largely to depend on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak with all the observed data points falling within the uncertainty bounds. Conclusions Real-time forecasting using the discrete time stochastic model with its simple computation of the uncertainty bounds was successful. Because of the simplistic model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available as part of the disease surveillance. PMID:21324153
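A stripped-down analogue of such a forecast can be written in a few lines. The sketch below assumes plain Poisson offspring with a point estimate of the reproduction number R, ignoring the reporting-interval and conditional-measurement machinery of the actual model; the spread across simulated chains stands in for the uncertainty bounds.

    import numpy as np

    rng = np.random.default_rng(1)

    def forecast(weekly_cases, R, horizon=8, n_sims=5000):
        """Project weekly incidence forward with a Poisson branching process;
        each path draws next week's count from Poisson(R * current week)."""
        paths = np.empty((n_sims, horizon))
        current = np.full(n_sims, weekly_cases[-1], dtype=float)
        for t in range(horizon):
            current = rng.poisson(R * current)
            paths[:, t] = current
        return np.percentile(paths, [2.5, 50.0, 97.5], axis=0)

    lo, median, hi = forecast([120, 210, 390], R=1.4)
    print(median)   # median projected weekly incidence over the next 8 weeks

As the abstract notes, forecasts made this way stand or fall on the validity of the parameter estimates, here the assumed R.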
Evaporation from a partially wet forest canopy
NASA Technical Reports Server (NTRS)
Hancock, N. H.; Sellers, P. J.; Crowther, J. M.
1983-01-01
The results of experimental studies of water storage in a Sitka-spruce canopy are presented and analyzed in terms of model simulations of evaporation. Wet-branch cantilever deflection was measured along with meteorological data on three days in August, 1976, to determine the relationship of canopy evaporation to wind speed and (hence) aerodynamic resistance. Two versions of a simple unilayer model of sensible and latent heat transport from a partially wet canopy were tested in the data analysis: model F1 forbids the exchange of heat between wet and dry foliage surfaces; model F2 assumes that this exchange is highly efficient. Model F1 is found to give results consistent with the rainfall-interception model of Rutter et al. (1971, 1975, 1977), but model F2 gives results which are more plausible and correspond to the multilayer simulations of Sellers and Lockwood (1981) and the experimental findings of Hancock and Crowther (1979). It is inferred that the role of eddy diffusivity for water vapor is enhanced relative to momentum transport, and that the similarity hypothesis used in conventional models may fail in the near vicinity of a forest canopy.
NASA Astrophysics Data System (ADS)
Giordano, V.; Chisari, C.; Rizzano, G.; Latour, M.
2017-10-01
The main aim of this work is to understand how the prediction of the seismic performance of moment-resisting (MR) steel frames depends on the modelling of their dissipative zones when the structure geometry (number of stories and bays) and seismic excitation source vary. In particular, a parametric analysis involving 4 frames was carried out, and, for each one, the full-strength beam-to-column connections were modelled according to 4 numerical approaches with different degrees of sophistication (Smooth Hysteretic Model, Bouc-Wen, Hysteretic and simple Elastic-Plastic models). Subsequently, Incremental Dynamic Analyses (IDA) were performed by considering two different earthquakes (Spitak and Kobe). The preliminary results collected so far indicate that the influence of the joint modelling on the overall frame response is negligible up to interstorey drift ratio values equal to those conservatively assumed by the codes to define conventional collapse (0.03 rad). Conversely, if more realistic ultimate interstorey drift values are considered for the q-factor evaluation, the influence of joint modelling can be significant, and thus may require accurate modelling of the joint's cyclic behavior.
Tang, Yongqiang
2017-12-01
Control-based pattern mixture models (PMM) and delta-adjusted PMMs are commonly used as sensitivity analyses in clinical trials with non-ignorable dropout. These PMMs assume that the statistical behavior of outcomes varies by pattern in the experimental arm in the imputation procedure, but the imputed data are typically analyzed by a standard method such as the primary analysis model. In the multiple imputation (MI) inference, Rubin's variance estimator is generally biased when the imputation and analysis models are uncongenial. One objective of the article is to quantify the bias of Rubin's variance estimator in the control-based and delta-adjusted PMMs for longitudinal continuous outcomes. These PMMs assume the same observed data distribution as the mixed effects model for repeated measures (MMRM). We derive analytic expressions for the MI treatment effect estimator and the associated Rubin's variance in these PMMs and MMRM as functions of the maximum likelihood estimator from the MMRM analysis and the observed proportion of subjects in each dropout pattern when the number of imputations is infinite. The asymptotic bias is generally small or negligible in the delta-adjusted PMM, but can be sizable in the control-based PMM. This indicates that the inference based on Rubin's rule is approximately valid in the delta-adjusted PMM. A simple variance estimator is proposed to ensure asymptotically valid MI inferences in these PMMs, and compared with the bootstrap variance. The proposed method is illustrated by the analysis of an antidepressant trial, and its performance is further evaluated via a simulation study. © 2017, The International Biometric Society.
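For reference, Rubin's rules combine the m imputed analyses as below; it is the total variance T = W + (1 + 1/m)B whose bias under uncongenial imputation and analysis models the article quantifies. A minimal sketch:

    import numpy as np

    def rubin_combine(estimates, variances):
        """Pool m multiply-imputed treatment-effect estimates by Rubin's rules;
        returns the pooled estimate and the total variance T = W + (1 + 1/m)B."""
        est = np.asarray(estimates, dtype=float)
        m = est.size
        qbar = est.mean()                                # pooled point estimate
        W = np.asarray(variances, dtype=float).mean()    # within-imputation variance
        B = est.var(ddof=1)                              # between-imputation variance
        return qbar, W + (1.0 + 1.0 / m) * B

    print(rubin_combine([1.8, 2.1, 1.9, 2.2], [0.25, 0.24, 0.26, 0.25]))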
Effect of lensing non-Gaussianity on the CMB power spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Antony; Pratten, Geraint, E-mail: antony@cosmologist.info, E-mail: geraint.pratten@gmail.com
2016-12-01
Observed CMB anisotropies are lensed, and the lensed power spectra can be calculated accurately assuming the lensing deflections are Gaussian. However, the lensing deflections are actually slightly non-Gaussian due to both non-linear large-scale structure growth and post-Born corrections. We calculate the leading correction to the lensed CMB power spectra from the non-Gaussianity, which is determined by the lensing bispectrum. Assuming no primordial non-Gaussianity, the lowest-order result gives ∼ 0.3% corrections to the BB and EE polarization spectra on small scales. However we show that the effect on EE is reduced by about a factor of two by higher-order Gaussian lensing smoothing, rendering the total effect safely negligible for the foreseeable future. We give a simple analytic model for the signal expected from skewness of the large-scale lensing field; the effect is similar to a net demagnification and hence a small change in acoustic scale (and therefore out of phase with the dominant lensing smoothing that predominantly affects the peaks and troughs of the power spectrum).
Spectroscopic Measurements of Hydrogen Ion Temperature During Divertor Recombination
NASA Astrophysics Data System (ADS)
Stotler, D. P.; Skinner, C. H.; Karney, C. F. F.
1998-11-01
We explore the possibility of using the neutral H_α spectral line profile to measure the ion temperature Ti in a recombining plasma. Since the H_α emissions due to recombination are larger than those due to other mechanisms, interference from non-recombining regions contributing to the chord integrated data is insignificant. A chord integrated, Doppler and Stark broadened H_α spectrum is simulated by the DEGAS 2 Monte Carlo neutral transport code (D. Stotler and C. Karney, Contrib. Plasma Phys. 34, 392 (1994)) using assumed plasma conditions. The application of a simple fitting procedure to this spectrum yields an average electron density ne and Ti consistent with the assumed plasma parameters if the spectrum is dominated by recombination from a region of modest ne variation. The interpretation of experimental data is complicated by Zeeman splitting and light reflection off surfaces. Ion temperature measurements by H_α spectroscopy appear feasible within the context of a model for the entire divertor plasma that takes these effects into account.
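The Doppler half of such a fit reduces to a closed-form inversion: a thermal Gaussian has FWHM Δλ = λ₀ √(8 ln 2 · kT / mc²), so a measured width gives T directly once the Stark component has been separated out, as in the fitting procedure described. A minimal sketch with SI constants:

    import math

    K_B = 1.380649e-23      # J/K
    M_H = 1.6735575e-27     # hydrogen atom mass, kg
    C   = 2.99792458e8      # m/s

    def ti_from_doppler_fwhm(fwhm_m, lambda0_m=656.28e-9, mass_kg=M_H):
        """Temperature (K) from the Gaussian Doppler FWHM of a spectral line,
        inverting FWHM = lambda0 * sqrt(8 ln2 * kT / (m c^2))."""
        return mass_kg * C**2 / (8.0 * math.log(2.0) * K_B) * (fwhm_m / lambda0_m) ** 2

    # A 0.5 Angstrom H_alpha width corresponds to roughly 1 eV
    print(ti_from_doppler_fwhm(0.5e-10) / 11604.5, "eV")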
Dynamic motion of red blood cells in simple shear flow
NASA Astrophysics Data System (ADS)
Sui, Y.; Chew, Y. T.; Roy, P.; Cheng, Y. P.; Low, H. T.
2008-11-01
A three-dimensional numerical model is proposed to simulate the dynamic motion of red blood cells (RBCs) in simple shear flow. The RBCs are approximated by ghost cells consisting of Newtonian liquid drops enclosed by Skalak membranes which take into account the membrane shear elasticity and the membrane area incompressibility. The RBCs have an initially biconcave discoid resting shape, and the internal liquid is assumed to have the same physical properties as the matrix fluid. The simulation is based on a hybrid method, in which the immersed boundary concept is introduced into the framework of the lattice Boltzmann method, and a finite element model is incorporated to obtain the forces acting on the nodes of the cell membrane which is discretized into flat triangular elements. The dynamic motion of RBCs is investigated in simple shear flow under a broad range of shear rates. At large shear rates, the cells are found to carry out a swinging motion, in which periodic inclination oscillation and shape deformation superimpose on the membrane tank treading motion. With the shear rate decreasing, the swinging amplitude of the cell increases, and finally triggers a transition to tumbling motion. This is the first direct numerical simulation that predicts both the swinging motion of the RBCs and the shear rate induced transition, which have been observed in a recent experiment. It is also found that as the mode changes from swinging to tumbling, the apparent viscosity of the suspension increases monotonically.
Global Langevin model of multidimensional biomolecular dynamics.
Schaudinnus, Norbert; Lickert, Benjamin; Biswas, Mithun; Stock, Gerhard
2016-11-14
Molecular dynamics simulations of biomolecular processes are often discussed in terms of diffusive motion on a low-dimensional free energy landscape F(x). To provide a theoretical basis for this interpretation, one may invoke the system-bath ansatz à la Zwanzig. That is, by assuming a time scale separation between the slow motion along the system coordinate x and the fast fluctuations of the bath, a memory-free Langevin equation can be derived that describes the system's motion on the free energy landscape F(x), which is damped by a friction field and driven by a stochastic force that is related to the friction via the fluctuation-dissipation theorem. While the theoretical formulation of Zwanzig typically assumes a highly idealized form of the bath Hamiltonian and the system-bath coupling, one would like to extend the approach to realistic data-based biomolecular systems. Here a practical method is proposed to construct an analytically defined global model of structural dynamics. Given a molecular dynamics simulation and adequate collective coordinates, the approach employs an "empirical valence bond"-type model which is suitable to represent multidimensional free energy landscapes as well as an approximate description of the friction field. Adopting alanine dipeptide and a three-dimensional model of heptaalanine as simple examples, the resulting Langevin model is shown to reproduce the results of the underlying all-atom simulations. Because the Langevin equation can also be shown to satisfy the underlying assumptions of the theory (such as a delta-correlated Gaussian-distributed noise), the global model provides a correct, albeit empirical, realization of Zwanzig's formulation. As an application, the model can be used to investigate the dependence of the system on parameter changes and to predict the effect of site-selective mutations on the dynamics.
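In its simplest one-dimensional, constant-friction form, the resulting equation of motion can be integrated with an Euler-Maruyama step. The sketch below assumes an illustrative double-well landscape F(x) = (x^2 - 1)^2 and constant rather than field-dependent friction, so it is a caricature of the multidimensional model described, not the authors' method.

    import numpy as np

    rng = np.random.default_rng(0)

    def overdamped_langevin(grad_F, gamma, kT, x0, dt, n_steps):
        """Euler-Maruyama integration of dx = -F'(x)/gamma dt + sqrt(2kT/gamma) dW;
        the noise amplitude is tied to the friction gamma through the
        fluctuation-dissipation theorem, as in the Zwanzig picture."""
        x = np.empty(n_steps + 1)
        x[0] = x0
        sigma = np.sqrt(2.0 * kT * dt / gamma)
        for i in range(n_steps):
            x[i + 1] = x[i] - grad_F(x[i]) / gamma * dt + sigma * rng.normal()
        return x

    # Illustrative double-well landscape F(x) = (x^2 - 1)^2
    traj = overdamped_langevin(lambda x: 4.0 * x * (x * x - 1.0),
                               gamma=1.0, kT=0.3, x0=-1.0, dt=1e-3, n_steps=200_000)
    print(f"fraction of time in the right-hand well: {(traj > 0).mean():.2f}")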
How clear-sky polarization varies with wavelength in the visible-NIR
NASA Astrophysics Data System (ADS)
Pust, Nathan J.; Shaw, Joseph A.
2013-10-01
Because of the increasing variety of applications for polarization imaging and sensing, there is a growing need for information about polarization phenomenology in the natural environment, including the spectral distribution of polarization in the atmosphere. A computer model that has been validated in comparisons with measurements from our all-sky polarization imager has been used here to simulate the spectrum of clear-sky polarization at many locations around the world, with a wide variety of underlying surface-reflectance and aerosol conditions. This study of the skylight polarization spectral variability shows that there is no simple spectrum that can be assumed or predicted without knowledge of the atmospheric aerosol properties and underlying surface reflectance.
Constraining the noncommutative spectral action via astrophysical observations.
Nelson, William; Ochoa, Joseph; Sakellariadou, Mairi
2010-09-03
The noncommutative spectral action extends our familiar notion of commutative spaces, using the data encoded in a spectral triple on an almost commutative space. Varying a rather simple action, one can derive all of the standard model of particle physics in this setting, in addition to a modified version of Einstein-Hilbert gravity. In this Letter we use observations of pulsar timings, assuming that no deviation from general relativity has been observed, to constrain the gravitational sector of this theory. While the bounds on the coupling constants remain rather weak, they are comparable to existing bounds on deviations from general relativity in other settings and are likely to be further constrained by future observations.
NASA Technical Reports Server (NTRS)
Brooks, D. R.
1980-01-01
Orbit dynamics of the solar occultation technique for satellite measurements of the Earth's atmosphere are described. A one-year mission is simulated and the orbit and mission design implications are discussed in detail. Geographical coverage capabilities are examined parametrically for a range of orbit conditions. The hypothetical mission is used to produce a simulated one-year data base of solar occultation measurements; each occultation event is assumed to produce a single number, or 'measurement', and some statistical properties of the data set are examined. A simple model is fitted to the data to demonstrate a procedure for examining global distributions of atmospheric constituents with the solar occultation technique.
Hyper-Ramsey spectroscopy with probe-laser-intensity fluctuations
NASA Astrophysics Data System (ADS)
Beloy, K.
2018-03-01
We examine the influence of probe-laser-intensity fluctuations on hyper-Ramsey spectroscopy. We assume, as is appropriate for relevant cases of interest, that the probe-laser intensity I determines both the Rabi frequency (∝ √I) and the frequency shift to the atomic transition (∝ I) during probe-laser interactions with the atom. The spectroscopic signal depends on these two quantities that covary with fluctuations in the probe-laser intensity. Introducing a simple model for the fluctuations, we find that the signature robustness of the hyper-Ramsey method can be compromised. Taking the Yb+ electric octupole clock transition as an example, we quantify the clock error under different levels of probe-laser-intensity fluctuations.
Urey prize lecture: On the diversity of plausible planetary systems
NASA Technical Reports Server (NTRS)
Lissauer, J. J.
1995-01-01
Models of planet formation and of the orbital stability of planetary systems are used to predict the variety of planetary and satellite systems that may be present within our galaxy. A new approximate global criterion for orbital stability of planetary systems, based on an extension of the local resonance overlap criterion, is proposed. This criterion implies that at least some of Uranus' small inner moons are significantly less massive than predicted by estimates based on Voyager volumes and densities assumed to equal that of Miranda. Simple calculations (neglecting planetary gravity) suggest that giant planets which accrete substantial amounts of gas while their envelopes are extremely distended ultimately rotate rapidly in the prograde direction.
Goldsztein, Guillermo H.
2016-01-01
Consider a person standing on a platform that oscillates laterally, i.e. to the right and left of the person. Assume the platform satisfies Hooke’s law. As the platform moves, the person reacts and moves its body attempting to keep its balance. We develop a simple model to study this phenomenon and show that the person, while attempting to keep its balance, may do positive work on the platform and increase the amplitude of its oscillations. The studies in this article are motivated by the oscillations in pedestrian bridges that are sometimes observed when large crowds cross them. PMID:27304857
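The mechanism can be caricatured with a delayed-feedback oscillator: a corrective force that opposes platform displacement but acts with a reaction delay acquires a component in phase with the platform velocity, and that component feeds energy into the swing. The sketch below is illustrative only; the mass, stiffness, gain, and delay are assumptions, not values from the paper.

    import numpy as np

    M, k, c = 2.0e5, 8.0e6, 2.0e3   # platform mass (kg), stiffness (N/m), damping (N s/m)
    G, tau = 4.0e4, 0.3             # balance-feedback gain (N/m) and reaction delay (s)
    dt, n = 1e-3, 60_000

    x = np.zeros(n)                 # platform displacement history
    v, work = 0.01, 0.0
    lag = int(tau / dt)
    for i in range(1, n):
        f = -G * x[i - lag] if i > lag else 0.0   # person pushes against where the platform was
        a = (-k * x[i - 1] - c * v + f) / M
        v += a * dt
        x[i] = x[i - 1] + v * dt
        work += f * v * dt                        # running work done on the platform

    print(f"net work by the person: {work:.1f} J, "
          f"late amplitude: {abs(x[-10000:]).max():.4f} m")

With these numbers the delayed force overcomes the structural damping, the net work is positive, and the oscillation amplitude grows, which is the qualitative outcome the model predicts.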
Numerical Simulation of the Detonation of Condensed Explosives
NASA Astrophysics Data System (ADS)
Wang, Cheng; Ye, Ting; Ning, Jianguo
The detonation process of a condensed explosive was simulated using a finite difference method. Euler equations were applied to describe the detonation flow field, an ignition and growth model for the chemical reaction, and Jones-Wilkins-Lee (JWL) equations of state for the unreacted explosive and the detonation products. Based on the simple mixture rule that assumes the reacting explosive to be a mixture of the reactant and product components, 1D and 2D codes were developed to simulate the detonation process of the high explosive PBX9404. The numerical results are in good agreement with the experimental results, which demonstrates that the finite difference method, mixture rule and chemical reaction model proposed in this paper are adequate and feasible.
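The JWL products equation of state used here has a standard closed form, p(V, E) = A(1 - ω/(R₁V)) e^(-R₁V) + B(1 - ω/(R₂V)) e^(-R₂V) + ωE/V, with V the relative volume and E the energy per unit initial volume. A minimal evaluation follows; the PBX-9404-like constants are illustrative only, as published values vary between sources.

    import math

    def jwl_pressure(V, E, A, B, R1, R2, omega):
        """Jones-Wilkins-Lee equation of state for detonation products;
        V is relative volume v/v0, E is energy per unit initial volume,
        and A, B carry pressure units."""
        return (A * (1.0 - omega / (R1 * V)) * math.exp(-R1 * V)
                + B * (1.0 - omega / (R2 * V)) * math.exp(-R2 * V)
                + omega * E / V)

    # Illustrative PBX-9404-like constants in GPa (values differ between sources)
    print(jwl_pressure(V=1.0, E=10.2, A=852.4, B=18.0, R1=4.6, R2=1.3, omega=0.38))

In a reactive-flow code the mixture pressure blends the unreacted and product equations of state through the reaction progress variable of the ignition and growth model.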
Adversarial risk analysis with incomplete information: a level-k approach.
Rothschild, Casey; McLay, Laura; Guikema, Seth
2012-07-01
This article proposes, develops, and illustrates the application of level-k game theory to adversarial risk analysis. Level-k reasoning, which assumes that players play strategically but have bounded rationality, is useful for operationalizing a Bayesian approach to adversarial risk analysis. It can be applied in a broad class of settings, including settings with asynchronous play and partial but incomplete revelation of early moves. Its computational and elicitation requirements are modest. We illustrate the approach with an application to a simple defend-attack model in which the defender's countermeasures are revealed with a probability less than one to the attacker before he decides on how or whether to attack. © 2011 Society for Risk Analysis.
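A minimal sketch of level-k play in a 2x2 defend-attack game follows; the payoff matrix is an assumption for illustration, and the article's probabilistic revelation of the defender's countermeasures is omitted.

    import numpy as np

    # rows = defender {no countermeasure, deploy}; cols = attacker {hold, attack}
    U_def = np.array([[0.0, -10.0],
                      [-2.0, -4.0]])
    U_att = np.array([[0.0, 6.0],
                      [0.0, -3.0]])

    def best_response(payoff, opp_mix, for_rows):
        """Pure best response given the opponent's mixed strategy."""
        expected = payoff @ opp_mix if for_rows else payoff.T @ opp_mix
        out = np.zeros_like(expected)
        out[np.argmax(expected)] = 1.0
        return out

    def level_k(k):
        """Level-0 randomizes uniformly; level-k best responds to level-(k-1)."""
        mix_def = mix_att = np.array([0.5, 0.5])
        for _ in range(k):
            mix_def, mix_att = (best_response(U_def, mix_att, for_rows=True),
                                best_response(U_att, mix_def, for_rows=False))
        return mix_def, mix_att

    print(level_k(2))   # defender deploys; a level-2 attacker holds

The elicitation burden is indeed modest: a payoff matrix and an assumed reasoning level for the opponent, nothing more.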
Effects of meteoroid fragmentation on radar observations of meteor trails
NASA Astrophysics Data System (ADS)
Elford, W. Graham; Campbell, L.
2001-11-01
Radar reflections from meteor trails often differ from the predictions of simple models. There is general consensus that these differences are probably the result of fragmentation of the meteoroid. Several examples taken from different types of meteor radar observations are considered in order to test the validity of the fragmentation hypothesis. The absence of the expected Fresnel oscillations in many observations of transverse scatter from meteor trails is readily explained by assuming a number of ablating fragments spread out along the trails. Observations of amplitude fluctuations in head echoes from "down-the-beam" meteoroids are explained by gross fragmentation of a meteoroid into two or more pieces. Another down-the-beam event is modeled by simulation of the differential retardation of two fragments of different mass, giving reasonable agreement between the observed and predicted radar signals.
Stochastic Modeling Approach to the Incubation Time of Prionic Diseases
NASA Astrophysics Data System (ADS)
Ferreira, A. S.; da Silva, M. A.; Cressoni, J. C.
2003-05-01
Transmissible spongiform encephalopathies are neurodegenerative diseases for which prions are the attributed pathogenic agents. A widely accepted theory assumes that prion replication is due to a direct interaction between the pathologic (PrPSc) form and the host-encoded (PrPC) conformation, in a kind of autocatalytic process. Here we show that the overall features of the incubation time of prion diseases are readily obtained if the prion reaction is described by a simple mean-field model. An analytical expression for the incubation time distribution then follows by associating the rate constant to a stochastic variable log normally distributed. The incubation time distribution is then also shown to be log normal and fits the observed BSE (bovine spongiform encephalopathy) data very well. Computer simulation results also yield the correct BSE incubation time distribution at low PrPC densities.
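The stochastic step is reproducible in a few lines: with autocatalytic growth PrPSc(t) ∝ exp(kt), the incubation time to a fixed pathological burden is t = ln(N*/N₀)/k, so a log-normally distributed rate constant yields a log-normal incubation time. The parameter values below are illustrative, not fitted to the BSE data.

    import numpy as np

    rng = np.random.default_rng(42)

    k = rng.lognormal(mean=np.log(0.05), sigma=0.3, size=100_000)  # growth rates, 1/day
    t = np.log(1e6) / k                                            # incubation times, days

    log_t = np.log(t)
    print(f"geometric mean {np.exp(log_t.mean()):.0f} days, log-sd {log_t.std():.2f}")

Since log t = log(ln N*/N₀) - log k, the log-standard deviation of the incubation time equals that of the rate constant.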
Constraints on cosmic ray propagation in the galaxy
NASA Technical Reports Server (NTRS)
Cordes, James M.
1992-01-01
The goal was to derive a more detailed picture of magnetohydrodynamic turbulence in the interstellar medium and its effects on cosmic ray propagation. To do so, radio astronomical observations (scattering and Faraday rotation) were combined with solar system spacecraft observations of MHD turbulence, simulations of wave propagation, and modeling of the galactic distribution. A more sophisticated model was developed for the galactic distribution of electron density turbulence. Faraday rotation measure data were analyzed to constrain magnetic field fluctuations in the ISM. VLBI observations of compact sources behind the supernova remnant CTA1 were acquired. Simple calculations were made of the turbulence energies, assuming a direct link between electron density and magnetic field variations. A simulation is outlined of cosmic ray propagation through the galaxy using the above results.
Running of featureful primordial power spectra
NASA Astrophysics Data System (ADS)
Gariazzo, Stefano; Mena, Olga; Miralles, Victor; Ramírez, Héctor; Boubekeur, Lotfi
2017-06-01
Current measurements of the temperature and polarization anisotropy power spectra of the cosmic microwave background (CMB) seem to indicate that the naive expectation for the slow-roll hierarchy within the simplest inflationary paradigm may not be respected in nature. We show that a primordial power spectrum with localized features could in principle give rise to the observed slow-roll anarchy when fitted to a featureless power spectrum. From a model comparison perspective, and assuming that nature has chosen a featureless primordial power spectrum, we find that, while with mock Planck data there is only weak evidence against a model with localized features, upcoming CMB missions may provide compelling evidence against such a nonstandard primordial power spectrum. This evidence could be reinforced if a featureless primordial power spectrum is independently confirmed from bispectrum and/or galaxy clustering measurements.
NASA Astrophysics Data System (ADS)
Kajikawa, K.; Funaki, K.; Shikimachi, K.; Hirano, N.; Nagaya, S.
2010-11-01
AC losses in a superconductor strip are numerically evaluated by means of a finite element method formulated with a current vector potential. The expressions of AC losses in an infinite slab that corresponds to a simple model of infinitely stacked strips are also derived theoretically. It is assumed that the voltage-current characteristics of the superconductors are represented by Bean's critical state model. The typical operation pattern of a Superconducting Magnetic Energy Storage (SMES) coil with direct and alternating transport currents in an external AC magnetic field is taken into account as the electromagnetic environment for both the single strip and the infinite slab. By using the obtained results of AC losses, the influences of the transport currents on the total losses are discussed quantitatively.
Karamisheva, Ralica D; Islam, M A
2005-01-01
Assuming that settling takes place in two zones (a constant rate zone and a variable rate zone), a model using four parameters accounting for the nature of the water-suspension system has been proposed for describing batch sedimentation processes. The sludge volume index (SVI) has been expressed in terms of these parameters. Some disadvantages of the SVI application as a design parameter have been pointed out, and it has been shown that a relationship between zone settling velocity and sludge concentration is more consistent for describing the settling behavior and for the design of settling tanks. The permissible overflow rate has been related to the technological parameters of the secondary settling tank by simple working equations. The graphical representations of these equations could be used to optimize the design and operation of secondary settling tanks.
Applying a Particle-only Model to the HL Tau Disk
NASA Astrophysics Data System (ADS)
Tabeshian, Maryam; Wiegert, Paul A.
2018-04-01
Observations have revealed rich structures in protoplanetary disks, offering clues about their embedded planets. Due to the complexities introduced by the abundance of gas in these disks, modeling their structure in detail is computationally intensive, requiring complex hydrodynamic codes and substantial computing power. It would be advantageous if computationally simpler models could provide some preliminary information on these disks. Here we apply a particle-only model (that we developed for gas-poor debris disks) to the gas-rich disk, HL Tauri, to address the question of whether such simple models can inform the study of these systems. Assuming three potentially embedded planets, we match HL Tau’s radial profile fairly well and derive best-fit planetary masses and orbital radii (0.40, 0.02, 0.21 Jupiter masses for the planets orbiting a 0.55 M⊙ star at 11.22, 29.67, 64.23 au). Our derived parameters are comparable to those estimated by others, except for the mass of the second planet. Our simulations also reproduce some narrower gaps seen in the ALMA image away from the orbits of the planets. The nature of these gaps is debated but, based on our simulations, we argue they could result from planet–disk interactions via mean-motion resonances, and need not contain planets. Our results suggest that a simple particle-only model can be used as a first step to understanding dynamical structures in gas disks, particularly those formed by planets, and determine some parameters of their hidden planets, serving as useful initial inputs to hydrodynamic models which are needed to investigate disk and planet properties more thoroughly.
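The resonance argument is easy to check from Kepler's third law: a p:q mean-motion resonance with a planet of semi-major axis a_p lies at a_p (p/q)^(2/3) outside the orbit and a_p (q/p)^(2/3) inside it. A quick computation for the best-fit semi-major axes quoted above (the choice of low-order ratios is ours):

    planets_au = [11.22, 29.67, 64.23]

    def resonance_radii(a_p, ratios=((2, 1), (3, 2), (5, 3))):
        """Nominal exterior and interior mean-motion resonance radii (au)."""
        out = {}
        for p, q in ratios:
            out[f"{p}:{q} ext"] = round(a_p * (p / q) ** (2.0 / 3.0), 1)
            out[f"{p}:{q} int"] = round(a_p * (q / p) ** (2.0 / 3.0), 1)
        return out

    for a in planets_au:
        print(a, resonance_radii(a))

Gaps that coincide with these radii could then be resonant features rather than the orbits of additional planets.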
NASA Technical Reports Server (NTRS)
Lacis, A. A.; Wang, W. C.; Hansen, J. E.
1979-01-01
A radiative transfer method appropriate for use in simple climate models and three-dimensional global climate models was developed. It is fully interactive with climate changes, such as in the temperature-pressure profile, cloud distribution, and atmospheric composition, and it is accurate throughout the troposphere and stratosphere. The vertical inhomogeneity of the atmosphere is accounted for by assuming a correlation of gaseous k-distributions at different pressures and temperatures. Line-by-line calculations are made to demonstrate that the method is remarkably accurate. The method is then used in a one-dimensional radiative-convective climate model to study the effect of cirrus clouds on surface temperature. It is shown that an increase in cirrus cloud cover can cause a significant warming of the troposphere and the Earth's surface, by the mechanism of an enhanced greenhouse effect. The dependence of this phenomenon on cloud optical thickness, altitude, and latitude is investigated.
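The heart of the correlated-k treatment can be sketched directly: band transmission is a weighted sum over the k-distribution, and the assumed correlation across pressures and temperatures lets optical depths accumulate g-point by g-point through an inhomogeneous column. The numbers below are toy values for illustration.

    import numpy as np

    def band_transmission(k_g, w_g, path_u):
        """Correlated-k band transmission through an inhomogeneous column.
        k_g: (n_layers, n_g) absorption coefficients per g-point and layer;
        w_g: (n_g,) quadrature weights summing to 1; path_u: (n_layers,)
        absorber amounts.  The correlation assumption is that each g-point
        keeps its rank in every layer."""
        tau_g = (k_g * path_u[:, None]).sum(axis=0)   # optical depth per g-point
        return float(w_g @ np.exp(-tau_g))

    k_g = np.array([[0.1, 1.0, 10.0, 100.0],    # layer 1
                    [0.2, 2.0, 20.0, 200.0]])   # layer 2
    w_g = np.array([0.4, 0.3, 0.2, 0.1])
    print(band_transmission(k_g, w_g, path_u=np.array([0.05, 0.02])))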
Unsteady solute-transport simulation in streamflow using a finite-difference model
Land, Larry F.
1978-01-01
This report documents a rather simple, general purpose, one-dimensional, one-parameter, mass-transport model for field use. The model assumes a well-mixed conservative solute that may be coming from an unsteady source and is moving in unsteady streamflow. The quantity of solute being transported is in the units of concentration. Results are reported as such. An implicit finite-difference technique is used to solve the mass transport equation. It consists of creating a tridiagonal matrix and using the Thomas algorithm to solve the matrix for the unknown concentrations at the new time step. The computer program presented is designed to compute the concentration of a water-quality constituent at any point and at any preselected time in a one-dimensional stream. The model is driven by the inflowing concentration of solute at the upstream boundary and is influenced by the solute entering the stream from tributaries and lateral ground-water inflow and from a source or sink. (Woodard-USGS)
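The Thomas algorithm mentioned above is short enough to show in full. The following is a generic implementation for illustration, not the report's program:

    def thomas_solve(a, b, c, d):
        """Solve a tridiagonal system: a = sub-diagonal (a[0] unused),
        b = diagonal, c = super-diagonal (c[-1] unused), d = right-hand side.
        Returns the unknowns, e.g. the concentrations at the new time step."""
        n = len(d)
        cp, dp = [0.0] * n, [0.0] * n
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                     # forward elimination
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = [0.0] * n
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):            # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Symmetric test system whose exact solution is all ones
    print(thomas_solve(a=[0, 1, 1, 1], b=[4, 4, 4, 4], c=[1, 1, 1, 0], d=[5, 6, 6, 5]))

Each time step of the transport model then costs O(n), which is what makes the implicit scheme practical.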
Helicon thruster plasma modeling: Two-dimensional fluid-dynamics and propulsive performances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahedo, Eduardo; Navarro-Cavalle, Jaume
2013-04-15
An axisymmetric macroscopic model of the magnetized plasma flow inside the helicon thruster chamber is derived, assuming that the power absorbed from the helicon antenna emission is known. Ionization, confinement, subsonic flows, and production efficiency are discussed in terms of design and operation parameters. Analytical solutions and simple scaling laws for ideal plasma conditions are obtained. The chamber model is then matched with a model of the external magnetic nozzle in order to characterize the whole plasma flow and assess thruster performances. Thermal, electric, and magnetic contributions to thrust are evaluated. The energy balance provides the power conversion between ions and electrons in chamber and nozzle, and the power distribution among beam power, ionization losses, and wall losses. Thruster efficiency is assessed, and the main causes of inefficiency are identified. The thermodynamic behavior of the collisionless electron population in the nozzle is acknowledged to be poorly known and crucial for a complete plasma expansion and good thrust efficiency.
CaveCAD: a tool for architectural design in immersive virtual environments
NASA Astrophysics Data System (ADS)
Schulze, Jürgen P.; Hughes, Cathleen E.; Zhang, Lelin; Edelstein, Eve; Macagno, Eduardo
2014-02-01
Existing 3D modeling tools were designed to run on desktop computers with monitor, keyboard and mouse. To make 3D modeling possible with mouse and keyboard, many 3D interactions, such as point placement or translations of geometry, had to be mapped to the 2D parameter space of the mouse, possibly supported by mouse buttons or keyboard keys. We hypothesize that, had the designers of these existing systems been able to assume immersive virtual reality systems as their target platforms, they would have designed 3D interactions much more intuitively. In collaboration with professional architects, we created a simple but complete 3D modeling tool for virtual environments from the ground up, using direct 3D interaction wherever possible and adequate. In this publication, we present our approaches to interactions for typical 3D modeling functions, such as geometry creation, modification of existing geometry, and assignment of surface materials. We also discuss preliminary user experiences with this system.
Lerner, Itamar; Bentin, Shlomo; Shriki, Oren
2014-01-01
Semantic priming has long been recognized to reflect, along with automatic semantic mechanisms, the contribution of controlled strategies. However, previous theories of controlled priming were mostly qualitative, lacking common grounds with modern mathematical models of automatic priming based on neural networks. Recently, we have introduced a novel attractor network model of automatic semantic priming with latching dynamics. Here, we extend this work to show how the same model can also account for important findings regarding controlled processes. Assuming the rate of semantic transitions in the network can be adapted using simple reinforcement learning, we show how basic findings attributed to controlled processes in priming can be achieved, including their dependency on stimulus onset asynchrony and relatedness proportion and their unique effect on associative, category-exemplar, mediated and backward prime-target relations. We discuss how our mechanism relates to the classic expectancy theory and how it can be further extended in future developments of the model. PMID:24890261
Cherdieu, Mélaine; Versace, Rémy; Rey, Amandine E; Vallet, Guillaume T; Mazza, Stéphanie
2018-06-01
Numerous studies have explored the effect of sleep on memory. It is well known that a period of sleep, compared to a similar period of wakefulness, protects memories from interference, improves performance, and might also reorganize memory traces in a way that encourages creativity and rule extraction. It is assumed that these benefits come from the reactivation of brain networks, mainly involving the hippocampal structure, as well as from their synchronization with neocortical networks during sleep, thereby underpinning sleep-dependent memory consolidation and reorganization. However, this memory reorganization is difficult to explain within classical memory models. The present paper aims to describe whether the influence of sleep on memory could be explained using a multiple trace memory model that is consistent with the concept of embodied cognition: the Act-In (activation-integration) memory model. We propose an original approach to the results observed in sleep research on the basis of two simple mechanisms, namely activation and integration. Copyright © 2017 Elsevier Ltd. All rights reserved.
On the spectrum and polarization of magnetar flare emission
NASA Astrophysics Data System (ADS)
Taverna, R.; Turolla, R.
2017-12-01
Bursts and flares are among the distinctive observational manifestations of magnetars, isolated neutron stars endowed with an ultra-strong magnetic field (B ≈ 10¹⁴-10¹⁵ G). It is believed that these events arise in a hot electron-positron plasma, injected in the magnetosphere due to a magnetic field instability, which remains trapped within the closed magnetic field lines (the “trapped-fireball” model). We have developed a simple radiative transfer model to simulate magnetar flare emission in the case of a steady trapped fireball. We assume that magnetic Thomson scattering is the dominant source of opacity in the fireball medium, and neglect contributions from second-order radiative processes. The spectra we obtained in the 1-100 keV energy range are in broad agreement with those of available observations. The large degree of polarization (≳ 80%) predicted by our model should be easily measured by new-generation X-ray polarimeters, like IXPE, XIPE and eXTP, allowing one to confirm the model predictions.
Sudden spreading of infections in an epidemic model with a finite seed fraction
NASA Astrophysics Data System (ADS)
Hasegawa, Takehisa; Nemoto, Koji
2018-03-01
We study a simple case of the susceptible-weakened-infected-removed model in regular random graphs in a situation where an epidemic starts from a finite fraction of initially infected nodes (seeds). Previous studies have shown that, assuming a single seed, this model exhibits a kind of discontinuous transition at a certain value of the infection rate. Performing Monte Carlo simulations and evaluating approximate master equations, we find that the present model has two critical infection rates for the case with a finite seed fraction. At the first critical rate the system shows a percolation transition of clusters composed of removed nodes, and at the second critical rate, which is larger than the first one, a giant cluster suddenly grows and the order parameter jumps even though it has already been rising. Numerical evaluation of the master equations shows that such sudden epidemic spreading does occur if the degree of the underlying network is large and the seed fraction is small.
The Evolution of the Exponent of Zipf's Law in Language Ontogeny
Baixeries, Jaume; Elvevåg, Brita; Ferrer-i-Cancho, Ramon
2013-01-01
It is well-known that word frequencies arrange themselves according to Zipf's law. However, little is known about how the parameters of the law depend on the complexity of the communication system. Many models of the evolution of language assume that the exponent of the law remains constant as the complexity of a communication system increases. Using longitudinal studies of child language, we analysed the word rank distribution for the speech of children and adults participating in conversations. The adults typically included family members (e.g., parents) or the investigators conducting the research. Our analysis of the evolution of Zipf's law yields two main unexpected results. First, in children the exponent of the law tends to decrease over time while this tendency is weaker in adults, thus suggesting this is not a mere mirror effect of adult speech. Second, although the exponent of the law is more stable in adults, their exponents fall below 1, which is the typical value of the exponent assumed in both children and adults. Our analysis also shows a tendency of the mean length of utterances (MLU), a simple estimate of syntactic complexity, to increase as the exponent decreases. The parallel evolution of the exponent and a simple indicator of syntactic complexity (MLU) supports the hypothesis that the exponent of Zipf's law and linguistic complexity are inter-related. The assumption that Zipf's law for word ranks is a power-law with a constant exponent of one in both adults and children needs to be revised. PMID:23516390
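The simplest way to estimate the exponent is ordinary least squares on log frequency versus log rank; the paper's own estimation method may differ, so the sketch below is for illustration only.

    import numpy as np
    from collections import Counter

    def zipf_exponent(tokens):
        """Estimate alpha in f(r) ~ r^(-alpha) from a token sequence by a
        log-log linear fit of frequency against rank."""
        freqs = np.array(sorted(Counter(tokens).values(), reverse=True), float)
        ranks = np.arange(1, len(freqs) + 1)
        slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
        return -slope

    print(zipf_exponent("the cat sat on the mat the cat ran".split()))

Applied to transcripts at successive ages, such a fit would trace the drift of the exponent that the study reports.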
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, P.; Sigut, T. A. A.; Landstreet, J. D., E-mail: ppatel54@uwo.ca
2017-02-20
We investigate the physical properties of the inner gaseous disks of three hot Herbig B2e stars, HD 76534, HD 114981, and HD 216629, by modeling CFHT-ESPaDOns spectra using non-LTE radiative transfer codes. We assume that the emission lines are produced in a circumstellar disk heated solely by photospheric radiation from the central star in order to test whether the optical and near-infrared emission lines can be reproduced without invoking magnetospheric accretion. The inner gaseous disk density was assumed to follow a simple power-law in the equatorial plane, and we searched for models that could reproduce observed lines of H i (Hα and Hβ), He i, Ca ii, and Fe ii. For the three stars, good matches were found for all emission line profiles individually; however, no density model based on a single power-law was able to reproduce all of the observed emission lines. Among the single power-law models, the one with the gas density varying as ∼10⁻¹⁰ (R*/R)³ g cm⁻³ in the equatorial plane of a 25 R* (0.78 au) disk did the best overall job of representing the optical emission lines of the three stars. This model implies a mass for the Hα-emitting portion of the inner gaseous disk of ∼10⁻⁹ M*. We conclude that the optical emission line spectra of these HBe stars can be qualitatively reproduced by a ≈1 au, geometrically thin, circumstellar disk of negligible mass compared to the central star in Keplerian rotation and radiative equilibrium.
Resonant Tidal Excitation of Internal Waves in the Earth's Fluid Core
NASA Technical Reports Server (NTRS)
Tyler, Robert H.; Kuang, Weijia
2014-01-01
It has long been speculated that there is a stably stratified layer below the core-mantle boundary, and two recent studies have improved the constraints on the parameters describing this stratification. Here we consider the dynamical implications of this layer using a simplified model. We first show that the stratification in this surface layer has sensitive control over the rate at which tidal energy is transferred to the core. We then show that when the stratification parameters from the recent studies are used in this model, a resonant configuration arises whereby tidal forces perform elevated rates of work in exciting core flow. Specifically, the internal wave speeds derived from the two independent studies (150 and 155 m/s) are in remarkable agreement with the speed (152 m/s) required for excitation of the primary normal mode of oscillation as calculated from full solutions of the Laplace Tidal Equations applied to a reduced-gravity idealized model representing the stratified layer. In evaluating this agreement it is noteworthy that the idealized model assumed may be regarded as the most reduced representation of the stratified dynamics of the layer, in that there are no non-essential dynamical terms in the governing equations assumed. While it is certainly possible that a more realistic treatment may require additional dynamical terms or coupling, it is also clear that this reduced representation includes no freedom for coercing the correlation described. This suggests that one must accept either (1) that tidal forces resonantly excite core flow and this is predicted by a simple model, or (2) that either the independent estimates or the dynamical model does not accurately portray the core surface layer and there has simply been an unlikely coincidence between three estimates of a stratification parameter which would otherwise have a broad plausible range.
Baroukh, Caroline; Muñoz-Tamayo, Rafael; Steyer, Jean-Philippe; Bernard, Olivier
2014-01-01
Metabolic modeling is a powerful tool to understand, predict and optimize bioprocesses, particularly when they imply intracellular molecules of interest. Unfortunately, the use of metabolic models for time varying metabolic fluxes is hampered by the lack of experimental data required to define and calibrate the kinetic reaction rates of the metabolic pathways. For this reason, metabolic models are often used under the balanced growth hypothesis. However, for some processes such as the photoautotrophic metabolism of microalgae, the balanced-growth assumption appears to be unreasonable because their circadian cycle is synchronized to the daily light. Yet, understanding microalgae metabolism is necessary to optimize the production yield of bioprocesses based on this microorganism, for example the production of third-generation biofuels. In this paper, we propose DRUM, a new dynamic metabolic modeling framework that handles the non-balanced growth condition and hence accumulation of intracellular metabolites. The first stage of the approach consists in splitting the metabolic network into sub-networks describing reactions which are spatially close, and which are assumed to satisfy the balanced growth condition. The remaining metabolites, which interconnect the sub-networks, behave dynamically. Then, thanks to Elementary Flux Mode analysis, each sub-network is reduced to macroscopic reactions, for which simple kinetics are assumed. Finally, an Ordinary Differential Equation system is obtained to describe substrate consumption, biomass production, product excretion and accumulation of some internal metabolites. DRUM was applied to the accumulation of lipids and carbohydrates of the microalga Tisochrysis lutea under day/night cycles. The resulting model describes accurately experimental data obtained in day/night conditions. It efficiently predicts the accumulation and consumption of lipids and carbohydrates. PMID:25105494
Analysis of non-equilibrium phenomena in inductively coupled plasma generators
NASA Astrophysics Data System (ADS)
Zhang, W.; Lani, A.; Panesi, M.
2016-07-01
This work addresses the modeling of non-equilibrium phenomena in inductively coupled plasma discharges. In the proposed computational model, the electromagnetic induction equation is solved together with the set of Navier-Stokes equations in order to compute the electromagnetic and flow fields, accounting for their mutual interaction. Semi-classical statistical thermodynamics is used to determine the plasma thermodynamic properties, while transport properties are obtained from kinetic principles, with the method of Chapman and Enskog. Particle ambipolar diffusive fluxes are found by solving the Stefan-Maxwell equations with a simple iterative method. Two physico-mathematical formulations are used to model the chemical reaction processes: (1) A Local Thermodynamics Equilibrium (LTE) formulation and (2) a thermo-chemical non-equilibrium (TCNEQ) formulation. In the TCNEQ model, thermal non-equilibrium between the translational energy mode of the gas and the vibrational energy mode of individual molecules is accounted for. The electronic states of the chemical species are assumed in equilibrium with the vibrational temperature, whereas the rotational energy mode is assumed to be equilibrated with translation. Three different physical models are used to account for the coupling of chemistry and energy transfer processes. Numerical simulations obtained with the LTE and TCNEQ formulations are used to characterize the extent of non-equilibrium of the flow inside the Plasmatron facility at the von Karman Institute. Each model was tested using different kinetic mechanisms to assess the sensitivity of the results to variations in the reaction parameters. A comparison of temperatures and composition profiles at the outlet of the torch demonstrates that the flow is in non-equilibrium for operating conditions characterized by pressures below 30 000 Pa, frequency 0.37 MHz, input power 80 kW, and mass flow 8 g/s.
Pollitz, F.F.; Schwartz, D.P.
2008-01-01
We construct a viscoelastic cycle model of plate boundary deformation that includes the effect of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs into the viscoelastic cycle model. This simple formulation allows construction of stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). Stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture periodicities, slip distributions, time of last earthquake (for prehistoric ruptures), and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that future rupture probabilities of area faults in the coming decades are the highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of the Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by the Working Group on California Earthquake Probabilities.
NASA Astrophysics Data System (ADS)
Osczevski, Randall J.
2014-08-01
Ben Shabat et al. (Int J Biometeorol 56(4):639-51, 2013) present revised charts for wind chill equivalent temperatures (WCET) and facial skin temperatures (FST) that differ significantly from currently accepted charts. They credit these differences to their more sophisticated calculation model and to the human-based equation that it used for finding the convective heat transfer coefficient (Ben Shabat and Shitzer, Int J Biometeorol 56:639-651, 2012). Because a version of the simple model that was used to create the current charts accurately reproduces their results when it uses the human-based equation, the differences that they found must be entirely due to this equation. In deriving it, Ben Shabat and Shitzer assumed that all of the heat transfer from the surface of their cylindrical model was due to forced convection alone. Because several modes of heat transfer were occurring in the human experiments they were attempting to simulate, notably radiation, their coefficients are actually total external heat transfer coefficients, not purely convective ones, as the calculation models assume. Data from the one human experiment that used heat flux sensors supports this conclusion and exposes the hazard of using a numerical model with several adjustable parameters that cannot be measured. Because the human-based equation is faulty, the values in the proposed charts are not correct. The equation that Ben Shabat et al. (Int J Biometeorol 56(4):639-51, 2013) propose to calculate WCET should not be used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zentler, J.M.
The following is a survey of boundary conditions available to the JASON user for solving 2-D electrostatics problems. Simple examples are included for illumination. It is assumed that the reader has some familiarity with JASON terminology, such as "PC" and "CC", and has access to the JASON user's manual.
Designing for time-dependent material response in spacecraft structures
NASA Technical Reports Server (NTRS)
Hyer, M. W.; Oleksuk, Lynda L. S.; Bowles, D. E.
1992-01-01
To study the influence on overall deformations of the time-dependent constitutive properties of fiber-reinforced polymeric matrix composite materials being considered for use in orbiting precision segmented reflectors, simple sandwich beam models are developed. The beam models include layers representing the face sheets, the core, and the adhesive bonding of the face sheets to the core. A three-layer model lumps the adhesive layers with the face sheets or core, while a five-layer model considers the adhesive layers explicitly. The deformation response of the three-layer and five-layer sandwich beam models to a midspan point load is studied. This elementary loading leads to a simple analysis, and it is easy to create this loading in the laboratory. Using the correspondence principle of viscoelasticity, the models representing the elastic behavior of the two beams are transformed into time-dependent models. Representative cases of time-dependent material behavior for the facesheet material, the core material, and the adhesive are used to evaluate the influence of these constituents being time-dependent on the deformations of the beam. As an example of the results presented, if it is assumed that, as a worst case, the polymer-dominated shear properties of the core behave as a Maxwell fluid such that under constant shear stress the shear strain increases by a factor of 10 in 20 years, then it is shown that the beam deflection increases by a factor of 1.4 during that time. In addition to quantitative conclusions, several assumptions are discussed which simplify the analyses for use with more complicated material models. Finally, it is shown that the simpler three-layer model suffices in many situations.
Modelling food and population dynamics in honey bee colonies.
Khoury, David S; Barron, Andrew B; Myerscough, Mary R
2013-01-01
Honey bees (Apis mellifera) are increasingly in demand as pollinators for various key agricultural food crops, but globally honey bee populations are in decline, and honey bee colony failure rates have increased. This scenario highlights a need to understand the conditions in which colonies flourish and in which colonies fail. To aid this investigation we present a compartment model of bee population dynamics to explore how food availability and bee death rates interact to determine colony growth and development. Our model uses simple differential equations to represent the transitions of eggs laid by the queen to brood, then hive bees and finally forager bees, and the process of social inhibition that regulates the rate at which hive bees begin to forage. We assume that food availability can influence both the number of brood successfully reared to adulthood and the rate at which bees transition from hive duties to foraging. The model predicts complex interactions between food availability and forager death rates in shaping colony fate. Low death rates and high food availability result in stable bee populations at equilibrium (with population size strongly determined by forager death rate) but consistently increasing food reserves. At higher death rates food stores in a colony settle at a finite equilibrium reflecting the balance of food collection and food use. When forager death rates exceed a critical threshold the colony fails but residual food remains. Our model presents a simple mathematical framework for exploring the interactions of food and forager mortality on colony fate, and provides the mathematical basis for more involved simulation models of hive performance.
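To make the compartment structure concrete, here is a minimal sketch of a hive-bee/forager/food model with food-limited eclosion and social inhibition of recruitment. The functional forms and parameter values are illustrative assumptions in the spirit of the abstract, not the authors' published equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

L = 2000.0   # maximum egg-laying rate, bees/day (assumed)
w = 27000.0  # half-saturation food level for brood rearing, g (assumed)
a = 0.25     # maximum recruitment rate to foraging, 1/day (assumed)
s = 0.75     # strength of social inhibition by foragers (assumed)
m = 0.24     # forager death rate, 1/day (assumed)
c = 0.1      # food collected per forager, g/day (assumed)
g = 0.007    # food consumed per bee, g/day (assumed)

def rhs(t, y):
    H, F, f = y                                         # hive bees, foragers, stored food (g)
    eclosion = L * f / (f + w)                          # food-limited emergence of adults
    recruit = H * max(a - s * F / (H + F + 1e-9), 0.0)  # social inhibition of recruitment
    return [eclosion - recruit,                         # dH/dt
            recruit - m * F,                            # dF/dt
            c * F - g * (H + F)]                        # df/dt; f <= 0 signals failure here

sol = solve_ivp(rhs, (0, 500), [8000.0, 4000.0, 1000.0])
print("hive bees, foragers, food at day 500:", np.round(sol.y[:, -1], 1))
```

Raising the assumed death rate m past its critical value makes the forager pool, and eventually the whole sketch colony, collapse, which is the qualitative behavior the abstract describes.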
Nonlinear Dynamic Models in Advanced Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry
2002-01-01
To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
Manuel, D G; Ho, T H; Harper, S; Anderson, G M; Lynch, J; Rosella, L C
2014-07-01
Most individual preventive therapies potentially narrow or widen health disparities depending on the difference in community effectiveness across socioeconomic position (SEP). The equity tipping point (defined as the point at which health disparities become larger) can be calculated by varying components of community effectiveness such as baseline risk of disease, intervention coverage and/or intervention efficacy across SEP. We used a simple modelling approach to estimate the community effectiveness of diabetes prevention across SEP in Canada under different scenarios of intervention coverage. Five-year baseline diabetes risk differed between the lowest and highest income groups by 1.76%. Assuming complete coverage across all income groups, the difference was reduced to 0.90% (144 000 cases prevented) with lifestyle interventions and 1.24% (88 100 cases prevented) with pharmacotherapy. The equity tipping point was estimated to be a coverage difference of 30% for preventive interventions (100% and 70% coverage among the highest and lowest income earners, respectively). Disparities in diabetes risk could be measurably reduced if existing interventions were equally adopted across SEP. However, disparities in coverage could lead to increased inequity in risk. Simple modelling approaches can be used to examine the community effectiveness of individual preventive interventions and their potential to reduce (or increase) disparities. The equity tipping point can be used as a critical threshold for disparities analyses.
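The tipping-point calculation itself is simple arithmetic on group-specific risks, coverage, and efficacy. The sketch below uses assumed baseline risks chosen to reproduce the 1.76% gap quoted above; the efficacy value and group risks are placeholders, not the study's inputs.

```python
def post_risk(baseline, coverage, efficacy):
    """5-year risk after offering an intervention to a fraction `coverage`
    of a group, with relative risk reduction `efficacy`."""
    return baseline * (1.0 - coverage * efficacy)

# Assumed group risks reproducing the quoted 1.76% baseline gap; assumed efficacy.
risk_low, risk_high = 0.0600, 0.0424   # lowest / highest income groups
efficacy = 0.49                        # relative risk reduction (lifestyle, assumed)

# Equal (complete) uptake narrows the gap...
gap_full = post_risk(risk_low, 1.0, efficacy) - post_risk(risk_high, 1.0, efficacy)
print(f"gap with equal coverage: {gap_full:.4f}")

# ...but unequal uptake can widen it; scan for the equity tipping point.
baseline_gap = risk_low - risk_high
for pct in range(0, 55, 5):
    cov_low = 1.0 - pct / 100          # lower coverage among low-income earners
    gap = post_risk(risk_low, cov_low, efficacy) - post_risk(risk_high, 1.0, efficacy)
    if gap > baseline_gap:
        print(f"disparity exceeds baseline at a coverage gap of ~{pct}%")
        break
```

With these assumed inputs the scan crosses the baseline disparity at a coverage gap of about 30%, consistent with the threshold reported in the abstract.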
Regression dilution bias: tools for correction methods and sample size calculation.
Berglund, Lars
2012-08-01
Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
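The core correction is the classical one: divide the attenuated slope by a reliability ratio estimated from repeated measurements in the reliability study. A minimal sketch on synthetic data (all values assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

# Main study: continuous outcome y regressed on an error-prone risk factor.
n, true_slope, sigma_e = 1000, 0.5, 0.8
x_true = rng.normal(0.0, 1.0, n)
x_obs = x_true + rng.normal(0.0, sigma_e, n)         # single noisy measurement
y = true_slope * x_true + rng.normal(0.0, 0.5, n)
naive = np.polyfit(x_obs, y, 1)[0]                   # attenuated slope

# Reliability study: two measurements on m other individuals.
m = 200
t = rng.normal(0.0, 1.0, m)
x1 = t + rng.normal(0.0, sigma_e, m)
x2 = t + rng.normal(0.0, sigma_e, m)
error_var = np.var(x1 - x2, ddof=1) / 2.0            # within-person error variance
total_var = np.var(np.concatenate([x1, x2]), ddof=1)
reliability = 1.0 - error_var / total_var            # attenuation factor estimate

print(f"naive slope {naive:.3f}, corrected {naive / reliability:.3f} (true {true_slope})")
```

The naive slope is biased toward zero by roughly the reliability ratio, and dividing by the estimated ratio recovers the true slope, which is the essence of the correction the article describes.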
The Relationship Between School Holidays and Transmission of Influenza in England and Wales
Jackson, Charlotte; Vynnycky, Emilia; Mangtani, Punam
2016-01-01
School closure is often considered as an influenza control measure, but its effects on transmission are poorly understood. We used 2 approaches to estimate how school holidays affect the contact parameter (the per capita rate of contact sufficient for infection transmission) for influenza using primary care data from England and Wales (1967–2000). Firstly, we fitted an age-structured susceptible-infectious-recovered model to each year's data to estimate the proportional change in the contact parameter during school holidays as compared with termtime. Secondly, we calculated the percentage difference in the contact parameter between holidays and termtime from weekly values of the contact parameter, estimated directly from simple mass-action models. Estimates were combined using random-effects meta-analysis, where appropriate. From fitting to the data, the difference in the contact parameter among children aged 5–14 years during holidays as compared with termtime ranged from a 36% reduction to a 17% increase; estimates were too heterogeneous for meta-analysis. Based on the simple mass-action model, the contact parameter was 17% (95% confidence interval: 10, 25) lower during holidays than during termtime. Results were robust to the assumed proportions of infections that were reported and individuals who were susceptible when the influenza season started. We conclude that school closure may reduce transmission during influenza outbreaks. PMID:27744384
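Under mass action, the weekly contact parameter can be estimated directly as incidence divided by the product of susceptibles and infectious, and then compared between holiday and term weeks. A sketch with made-up inputs:

```python
import numpy as np

def weekly_beta(cases, S, I):
    """Mass-action estimate: beta_t = C_t / (S_t * I_t) for each week t."""
    return np.asarray(cases, float) / (np.asarray(S, float) * np.asarray(I, float))

cases = [120, 150, 90, 60]          # weekly new infections (fabricated)
S = [9.0e5, 8.9e5, 8.9e5, 8.8e5]    # susceptibles (fabricated)
I = [400, 480, 350, 250]            # infectious (fabricated)
holiday = np.array([False, False, True, True])

beta = weekly_beta(cases, S, I)
change = 100 * (beta[holiday].mean() / beta[~holiday].mean() - 1)
print(f"contact parameter change during holidays: {change:+.1f}%")
```

The study's second approach is essentially this calculation applied over decades of weekly data, with the holiday/termtime contrasts pooled by meta-analysis.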
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eken, T; Mayeda, K; Hofstetter, A
A recently developed coda magnitude methodology was applied to selected broadband stations in Turkey to test the coda method in a large, laterally complex region. As found in other, albeit smaller, regions, coda envelope amplitude measurements are significantly less variable than distance-corrected direct wave measurements (i.e., L_g and surface waves), by roughly a factor of 3 to 4. Despite strong lateral crustal heterogeneity in Turkey, we found that the region could be adequately modeled assuming a simple 1-D, radially symmetric path correction for 10 narrow frequency bands ranging from 0.02 to 2.0 Hz. For higher frequencies, however, 2-D path corrections will be necessary and will be the subject of a future study. After calibrating the stations ISP, ISKB, and MALT for local and regional distances, single-station moment-magnitude estimates (M_w) derived from the coda spectra were in excellent agreement with those determined from multi-station waveform modeling inversions of long-period data, exhibiting a data standard deviation of 0.17. Though the calibration was validated using large events, the results of the calibration will extend M_w estimates to significantly smaller events that could not otherwise be waveform modeled due to poor signal-to-noise ratio at long periods and sparse station coverage. The successful application of the method is remarkable considering the significant lateral complexity in Turkey and the simple assumptions used in the coda method.
Evaluating the multimedia fate of organic chemicals: A level III fugacity model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackay, D.; Paterson, S.
A multimedia model is developed and applied to selected organic chemicals in evaluative and real regional environments. The model employs the fugacity concept and treats four bulk compartments: air, water, soil, and bottom sediment, which consist of subcompartments of varying proportions of air, water, and mineral and organic matter. Chemical equilibrium is assumed to apply within (but not between) each bulk compartment. Expressions are included for emissions, advective flows, degrading reactions, and interphase transport by diffusive and non-diffusive processes. Input to the model consists of a description of the environment, the physical-chemical and reaction properties of the chemical, and emission rates. For steady-state conditions the solution is a simple algebraic expression. The model is applied to six chemicals in the region of southern Ontario and the calculated fate and concentrations are compared with observations. The results suggest that the model may be used to determine the processes that control the environmental fate of chemicals in a region and provide approximate estimates of relative media concentrations.
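At steady state the level III balance is linear in the compartment fugacities, so the whole model reduces to solving one small linear system: emissions plus intermedia inputs balance reaction, advection, and intermedia losses in each bulk compartment. A sketch with placeholder D values (all numbers assumed, not from the paper):

```python
import numpy as np

comps = ["air", "water", "soil", "sediment"]
E = np.array([10.0, 2.0, 1.0, 0.0])            # emissions, mol/h (assumed)
# D[i][j]: intermedia transport D value from i to j, mol/(Pa h); 0 = no exchange
D = np.array([[0, 50, 30, 0],
              [40, 0, 0, 20],
              [25, 5, 0, 0],
              [0, 15, 0, 0]], float)
D_loss = np.array([200.0, 80.0, 30.0, 10.0])   # reaction + advection D values (assumed)

# Balance for compartment i: f_i * (D_loss_i + sum_j D_ij) = E_i + sum_j D_ji * f_j
A = np.diag(D_loss + D.sum(axis=1)) - D.T
f = np.linalg.solve(A, E)                      # fugacities, Pa
for name, fi in zip(comps, f):
    print(f"{name:8s} f = {fi:.4g} Pa")
```

Concentrations then follow from C = f·Z with compartment-specific fugacity capacities Z, which is where the subcompartment composition enters.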
A creep cavity growth model for creep-fatigue life prediction of a unidirectional W/Cu composite
NASA Astrophysics Data System (ADS)
Kim, Young-Suk; Verrilli, Michael J.; Halford, Gary R.
1992-05-01
A microstructural model was developed to predict creep-fatigue life in a (0)₄, 9 volume percent tungsten fiber-reinforced copper matrix composite at the temperature of 833 K. The mechanism of failure of the composite is assumed to be governed by the growth of quasi-equilibrium cavities in the copper matrix of the composite, based on the microscopically observed failure mechanisms. The methodology uses a cavity growth model developed for prediction of creep fracture. Instantaneous values of strain rate and stress in the copper matrix during fatigue cycles were calculated and incorporated in the model to predict cyclic life. The stress in the copper matrix was determined by use of a simple two-bar model for the fiber and matrix during cyclic loading. The model successfully predicted the composite creep-fatigue life under tension-tension cyclic loading through the use of this instantaneous matrix stress level. Inclusion of additional mechanisms such as cavity nucleation, grain boundary sliding, and the effect of fibers on matrix-stress level would result in more generalized predictions of creep-fatigue life.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landkamer, Lee L.; Harvey, Ronald W.; Scheibe, Timothy D.
A new colloid transport model is introduced that is conceptually simple but captures the essential features of complicated attachment and detachment behavior of colloids when conditions of secondary minimum attachment exist. This model eliminates the empirical concept of collision efficiency; the attachment rate is computed directly from colloid filtration theory. Also, a new paradigm for colloid detachment based on colloid population heterogeneity is introduced. Assuming the dispersion coefficient can be estimated from tracer behavior, this model has only two fitting parameters: (1) the fraction of colloids that attach irreversibly and (2) the rate at which reversibly attached colloids leave the surface. These two parameters were correlated to physical parameters that control colloid transport such as the depth of the secondary minimum and pore water velocity. Given this correlation, the model serves as a heuristic tool for exploring the influence of physical parameters such as surface potential and fluid velocity on colloid transport. This model can be extended to heterogeneous systems characterized by both primary and secondary minimum deposition by simply increasing the fraction of colloids that attach irreversibly.
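A zero-dimensional caricature of the two-parameter kinetics may help: attachment at a filtration-theory rate, a fraction attaching irreversibly, and first-order detachment of the reversible pool. All rates below are assumed illustrations, not the paper's fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_att = 0.5    # attachment rate from filtration theory, 1/h (treated as known)
f_irr = 0.3    # fraction attaching irreversibly (fit parameter 1, assumed)
k_det = 0.05   # detachment rate of reversible pool, 1/h (fit parameter 2, assumed)

def rhs(t, y):
    c, s_rev, s_irr = y            # aqueous, reversibly attached, irreversibly attached
    attach = k_att * c
    detach = k_det * s_rev
    return [-attach + detach,
            (1 - f_irr) * attach - detach,
            f_irr * attach]

sol = solve_ivp(rhs, (0, 48), [1.0, 0.0, 0.0], t_eval=np.linspace(0, 48, 7))
print(np.round(sol.y, 3))          # normalized concentrations over two days
```

In the full transport model these kinetics sit inside an advection-dispersion equation; the batch version above just isolates the attachment/detachment bookkeeping that the two fitting parameters control.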
Evaluation of a computational model to predict elbow range of motion
Nishiwaki, Masao; Johnson, James A.; King, Graham J. W.; Athwal, George S.
2014-01-01
Computer models capable of predicting elbow flexion and extension range of motion (ROM) limits would be useful for assisting surgeons in improving the outcomes of surgical treatment of patients with elbow contractures. A simple and robust computer-based model was developed that predicts elbow joint ROM using bone geometries calculated from computed tomography image data. The model assumes a hinge-like flexion-extension axis, and that elbow passive ROM limits can be based on terminal bony impingement. The model was validated against experimental results from a cadaveric specimen, and was able to predict the flexion and extension limits of the intact joint to within 0° and 3°, respectively. The model was also able to predict the flexion and extension limits to within 1° and 2°, respectively, when simulated osteophytes were inserted into the joint. Future studies based on this approach will be used for the prediction of elbow flexion-extension ROM in patients with primary osteoarthritis to help identify motion-limiting hypertrophic osteophytes, and will eventually permit real-time computer-assisted navigated excisions. PMID:24841799
Model reduction in a subset of the original states
NASA Technical Reports Server (NTRS)
Yae, K. H.; Inman, D. J.
1992-01-01
A model reduction method is investigated to provide a smaller structural dynamic model for subsequent structural control design. A structural dynamic model is assumed to be derived from finite element analysis. It is first converted into the state space form, and is further reduced by the internal balancing method. Through the co-ordinate transformation derived from the states that are deleted during reduction, the reduced model is finally expressed with the states that are members of the original states. Therefore, the states in the final reduced model represent the degrees of freedom of the nodes that are selected by the designer. The procedure provides a more practical implementation of model reduction for applications in which specific nodes, such as sensor and/or actuator attachment points, are to be retained in the reduced model. Thus, it ensures that the reduced model is under the same input and output condition as the original physical model. The procedure is applied to two simple examples and comparisons are made between the full and reduced order models. The method can be applied to a linear, continuous and time-invariant model of structural dynamics with nonproportional viscous damping.
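The internal-balancing step the abstract builds on can be sketched with Gramians and Hankel singular values on a toy structural model; the subsequent back-transformation to a subset of the original physical states described above is not reproduced here. All matrices are assumed examples.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# Toy damped two-mass structural model in state-space form (assumed, stable).
A = np.array([[0, 1, 0, 0],
              [-4, -0.1, 2, 0],
              [0, 0, 0, 1],
              [2, 0, -6, -0.2]], float)
B = np.array([[0.0], [1.0], [0.0], [0.0]])   # actuator on mass 1
C = np.array([[0.0, 0.0, 1.0, 0.0]])         # sensor on mass 2

# Gramians: A Wc + Wc A' = -B B'  and  A' Wo + Wo A = -C' C
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

R = cholesky(Wc, lower=True)                 # Wc = R R'
U, s, _ = svd(R.T @ Wo @ R)
print("Hankel singular values:", np.sqrt(s)) # small values -> states safe to truncate

T = R @ U / s**0.25                          # balancing transformation
A_bal = np.linalg.inv(T) @ A @ T             # balanced realization, ready to truncate
```

Truncating the balanced states with small Hankel singular values gives the reduced model; the paper's contribution is the extra transformation that expresses that reduced model in retained physical coordinates, such as sensor and actuator nodes.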
Scaling laws of passive-scalar diffusion in the interstellar medium
NASA Astrophysics Data System (ADS)
Colbrook, Matthew J.; Ma, Xiangcheng; Hopkins, Philip F.; Squire, Jonathan
2017-05-01
Passive-scalar mixing (metals, molecules, etc.) in the turbulent interstellar medium (ISM) is critical for abundance patterns of stars and clusters, galaxy and star formation, and cooling from the circumgalactic medium. However, the fundamental scaling laws remain poorly understood in the highly supersonic, magnetized, shearing regime relevant for the ISM. We therefore study the full scaling laws governing passive-scalar transport in idealized simulations of supersonic turbulence. Using simple phenomenological arguments for the variation of diffusivity with scale based on Richardson diffusion, we propose a simple fractional diffusion equation to describe the turbulent advection of an initial passive scalar distribution. These predictions agree well with the measurements from simulations, and vary with turbulent Mach number in the expected manner, remaining valid even in the presence of a large-scale shear flow (e.g. rotation in a galactic disc). The evolution of the scalar distribution is not the same as obtained using simple, constant 'effective diffusivity' as in Smagorinsky models, because the scale dependence of turbulent transport means an initially Gaussian distribution quickly develops highly non-Gaussian tails. We also emphasize that these are mean scalings that apply only to ensemble behaviours (assuming many different, random scalar injection sites): individual Lagrangian 'patches' remain coherent (poorly mixed) and simply advect for a large number of turbulent flow-crossing times.
Abrahamsen, B; Hansen, T B; Høgsberg, I M; Pedersen, F B; Beck-Nielsen, H
1996-01-01
Dual X-ray absorptiometry (DXA) performs noninvasive assessment of bone and soft tissue with high precision. However, soft tissue algorithms assume that 73.2% of the lean body mass is water, a potential source of error in fluid retention. We evaluated DXA (model QDR-2000; Hologic Inc, Waltham, MA), bioelectrical impedance analysis (BIA), and simple anthropometry in 19 patients (9 women and 10 men, mean age 46 y) before and after hemodialysis, removing 0.9-4.3 L (mean 2.8 L) of ultrafiltrate. The reduction in fat-free mass (FFM) measured by DXA was highly correlated with the ultrafiltrate, as determined by the reduction in gravimetric weight (r = 0.975, P < 0.0001; SEE: 233 g), whereas BIA was considerably less accurate in assessing FFM reductions (r = 0.66, P < 0.01; SEE: 757 g). Lumbar bone mineral density (BMD) was unaffected by dialysis, as were whole-body fat and BMD. Whole-body bone mineral content, however, was estimated to be 0.6% lower after dialysis. None of the simple anthropometric measurements correlated significantly with the reduction in FFM. In an unmodified clinical setting, DXA appears to be superior to other simple noninvasive methods for determining body composition, particularly when the emphasis is on repeated measurements.
NASA Astrophysics Data System (ADS)
Madani, Nima; Kimball, John S.; Running, Steven W.
2017-11-01
In the light use efficiency (LUE) approach to estimating gross primary productivity (GPP), plant productivity is linearly related to absorbed photosynthetically active radiation, assuming that plants absorb and convert solar energy into biomass at a maximum LUE (LUEmax) rate, which is taken to vary conservatively within a given biome type. However, it has been shown that photosynthetic efficiency can vary within biomes. In this study, we used 149 global CO2 flux towers to derive the optimum LUE (LUEopt) under prevailing climate conditions for each tower location, stratified according to model training and test sites. Unlike LUEmax, LUEopt varies according to heterogeneous landscape characteristics and species traits. The LUEopt data showed large spatial variability within and between biome types, so that a simple biome classification explained only 29% of LUEopt variability over 95 global tower training sites. The use of explanatory variables in a mixed effect regression model explained 62.2% of the spatial variability in tower LUEopt data. The resulting regression model was used for global extrapolation of the LUEopt data and GPP estimation. The GPP estimated using the new LUEopt map showed significant improvement relative to global tower data, including a 15% R2 increase and 34% root-mean-square error reduction relative to baseline GPP calculations derived from biome-specific LUEmax constants. The new global LUEopt map is expected to improve the performance of LUE-based GPP algorithms for better assessment and monitoring of global terrestrial productivity and carbon dynamics.
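The underlying LUE bookkeeping is a one-line product; the study's contribution is replacing a fixed biome LUEmax with a site-level LUEopt from an environmental regression. A trivial sketch with placeholder numbers (the regression itself is not reproduced):

```python
def gpp(par, fpar, lue):
    """GPP (g C m-2 d-1) from incident PAR (MJ m-2 d-1), fraction absorbed,
    and light use efficiency (g C MJ-1)."""
    return lue * par * fpar

lue_max_biome = 1.05   # fixed biome-level constant (assumed)
lue_opt_site = 1.32    # site value from an environmental regression (assumed)

print("biome LUEmax GPP:", gpp(par=9.5, fpar=0.62, lue=lue_max_biome))
print("site LUEopt GPP: ", gpp(par=9.5, fpar=0.62, lue=lue_opt_site))
```

The reported R2 and RMSE gains come entirely from the spatial variation in the lue argument, everything else in the calculation being unchanged.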
Large-eddy simulations with wall models
NASA Technical Reports Server (NTRS)
Cabot, W.
1995-01-01
The near-wall viscous and buffer regions of wall-bounded flows generally require a large expenditure of computational resources to be resolved adequately, even in large-eddy simulation (LES). Often as much as 50% of the grid points in a computational domain are devoted to these regions. The dense grids that this implies also generally require small time steps for numerical stability and/or accuracy. It is commonly assumed that the inner wall layers are near equilibrium, so that the standard logarithmic law can be applied as the boundary condition for the wall stress well away from the wall, for example, in the logarithmic region, obviating the need to expend large amounts of grid points and computational time in this region. This approach is commonly employed in LES of planetary boundary layers, and it has also been used for some simple engineering flows. In order to calculate accurately a wall-bounded flow with coarse wall resolution, one requires the wall stress as a boundary condition. The goal of this work is to determine the extent to which equilibrium and boundary layer assumptions are valid in the near-wall regions, to develop models for the inner layer based on such assumptions, and to test these modeling ideas in some relatively simple flows with different pressure gradients, such as channel flow and flow over a backward-facing step. Ultimately, models that perform adequately in these situations will be applied to more complex flow configurations, such as an airfoil.
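The equilibrium wall-stress boundary condition amounts to inverting the logarithmic law for the friction velocity at the first off-wall grid point. A minimal sketch, assuming the usual constants (kappa = 0.41, B = 5.2) and placeholder flow values:

```python
import numpy as np
from scipy.optimize import brentq

kappa, B = 0.41, 5.2   # standard log-law constants (assumed)

def wall_stress(U, y, nu, rho=1.0):
    """Invert U/u_tau = (1/kappa) ln(y u_tau / nu) + B for u_tau,
    then return the wall shear stress tau_w = rho * u_tau**2."""
    f = lambda u_tau: U / u_tau - (np.log(y * u_tau / nu) / kappa + B)
    u_tau = brentq(f, 1e-6 * U, U)   # bracketed root find for the friction velocity
    return rho * u_tau**2

# LES-resolved velocity U at the first grid point y off the wall (values assumed).
print(wall_stress(U=10.0, y=0.05, nu=1e-5))
```

This is the stress that would be fed back to the LES as the wall boundary condition; the work described above asks when such equilibrium assumptions hold, e.g. under the pressure gradients of a backward-facing step.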
NASA Astrophysics Data System (ADS)
Multhaup, K.; Spohn, T.
2007-08-01
A thermal history model developed for medium-sized icy satellites containing silicate rock at low volume fractions is applied to Charon and five satellites of Uranus. The model assumes stagnant lid convection in homogeneously accreted bodies either confined to a spherical shell or encompassing the whole interior below the immobile surface layer. We employ a simple model for accretion assuming that infalling planetesimals deposit a fraction of their kinetic energy as heat at the instantaneous surface of the growing moon. Rheology parameters are chosen to match those of ice I, although the satellites under consideration likely contain admixtures of lighter constituents. Consequences thereof are discussed. Thermal evolution calculations considering radiogenic heating by long-lived isotopes suggest that Ariel, Umbriel, Titania, Oberon and Charon may have started to differentiate after a few hundred million years of evolution. Results for Miranda - the smallest satellite of Uranus - however, indicate that it never convected or differentiated. Miranda's interior temperature was found to be not even close to the melting temperatures of reasonable mixtures of water and ammonia. This finding is in contrast to its heavily modified surface and supports theories that propose alternative heating mechanisms such as early tidal heating. Except for Miranda, our results lend support to differentiated icy satellite models. We also point out parallels to previously published results obtained for several of Saturn's icy satellites (Multhaup and Spohn, 2007). The predicted early histories of Ariel, Umbriel and Charon are evocative of Dione's and Rhea's, while Miranda's resembles that of Mimas.
A Step-by-Step Picture of Pulsed (Time-Domain) NMR.
ERIC Educational Resources Information Center
Schwartz, Leslie J.
1988-01-01
Discusses a method for teaching pulsed (time-domain) NMR principles that is as simple and pictorial as possible. Uses xyz-coordinate figures and presents theoretical explanations using a Fourier-transform spectrum. Assumes no previous knowledge of quantum mechanics on the part of students. Usable for undergraduates. (MVL)
Astrometric Observation of MACHO Gravitational Microlensing
NASA Technical Reports Server (NTRS)
Boden, A. F.; Shao, M.; Van Buren, D.
1997-01-01
This paper discusses the prospects for astrometric observation of MACHO gravitational microlensing events. We derive the expected astrometric observables for a simple microlensing event assuming a dark MACHO, and demonstrate that accurate astrometry can determine the lens mass, distance, and proper motion in a very general fashion.
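For scale, the basic astrometric observable is the centroid shift of the unresolved image pair, delta(u) = u/(u² + 2) · theta_E, which peaks at u = √2. The sketch below evaluates it for an assumed halo-MACHO geometry, not a configuration taken from the paper:

```python
import numpy as np

G, c = 6.674e-11, 2.998e8          # SI units
PC, MSUN = 3.086e16, 1.989e30
RAD2MAS = 180 / np.pi * 3600e3     # radians -> milliarcseconds

def theta_E_mas(M, D_l, D_s):
    """Angular Einstein radius for lens mass M at distance D_l, source at D_s."""
    return np.sqrt(4 * G * M / c**2 * (D_s - D_l) / (D_l * D_s)) * RAD2MAS

# Assumed scenario: 0.5 solar-mass dark lens at 10 kpc, source star at 50 kpc.
tE = theta_E_mas(0.5 * MSUN, 10e3 * PC, 50e3 * PC)
u = np.sqrt(2.0)                   # impact parameter giving the maximal shift
print(f"theta_E ~ {tE:.2f} mas; peak centroid shift ~ {u / (u**2 + 2) * tE:.2f} mas")
```

The sub-milliarcsecond shifts this yields are why interferometric astrometry is needed, and measuring both the shift and the photometric timescale is what breaks the mass-distance-velocity degeneracy.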
The feasibility of inverting for flow in the lowermost mantle (Invited)
NASA Astrophysics Data System (ADS)
Nowacki, A.; Walpole, J.; Wookey, J. M.; Walker, A.; Forte, A. M.; Masters, G.; Kendall, J. M.
2013-12-01
The core-mantle boundary (CMB) marks the largest change in physical properties within the Earth. Furthermore, in the few hundred kilometres above the CMB (the region known as D″), the largest lateral variations in seismic wave speed outside the upper mantle are observed. Observations of shear wave splitting in D″ show that these variations depend not only on position, but also on wave propagation direction and polarisation; that is, strong seismic anisotropy is a pervasive feature of D″, just as in the upper mantle (UM). As in the UM, it is frequently argued that alignment of anisotropic minerals due to flow is the cause. Were this the case, this anisotropy could be used to retrieve the recent strain history of the lowermost mantle. Recent modelling of mineral alignment in D″ [1,2] has shown that quite simple models of mantle flow do not produce simple anisotropy, so one must use as much information about the type and orientation of anisotropy as possible. Global inversion for radial anisotropy permits complete coverage of the CMB but has so far relied on core-diffracted waves (Sdiff), which are challenging to interpret accurately [3]. The assumption of radial anisotropy may also be too restrictive [4]. Shear wave splitting studies do not impose any assumed type of anisotropy but have traditionally been limited in their geographical scope. We present the results of a consistent analysis of core-reflected shear waves (ScS) for shear wave splitting, producing near-global coverage [5] of D″. Over 12,000 individual measurements are made, from ~470 events. Along well-studied paths, such as beneath the Caribbean, our results agree excellently with previous work. Elsewhere, a full range of fast orientations is observed, indicating that simple SV-SH comparisons may not accurately reflect the elasticity present. We compare these results to candidate models of D″ anisotropy assuming a simple flow model derived from geophysical observables. A number of different mechanisms (different slip systems causing alignment of MgSiO3-perovskite, post-perovskite or MgO) are possible, hence we compute the expected seismic response for several. To accurately recover the wave field, no constraints on symmetry or type of anisotropy are possible, so we make use of the spectral element method. It is necessary to model wave propagation at the correct frequencies (~0.2 Hz), so computations must be performed on thousands of CPUs, using TBs of memory. We use a modified version of SPECFEM3D_GLOBE which does not require disk I/O, removing the main computational bottleneck. This suite of results allows us to contemplate the challenges to be faced in recovering dynamics from measurements of seismic anisotropy in the lowermost mantle. While robustly testing competing models of flow and deformation is within reach, direct inversion is still very much a work in progress. [1] Walker et al. (2011) Geochem., Geophys., Geosys., 12:Q10006. [2] Wenk et al. (2011) Earth Planet. Sci. Lett., 306:33-45. [3] Maupin (1994) Phys. Earth Planet. Inter., 87:1-32. [4] Nowacki et al. (2010) Nature, 467:1091-1095. [5] Houser et al. (2008) Geophys. J. Int., 174:195-212.
Modeling the evolution of channel shape: Balancing computational efficiency with hydraulic fidelity
Wobus, C.W.; Kean, J.W.; Tucker, G.E.; Anderson, R. Scott
2008-01-01
The cross-sectional shape of a natural river channel controls the capacity of the system to carry water off a landscape, to convey sediment derived from hillslopes, and to erode its bed and banks. Numerical models that describe the response of a landscape to changes in climate or tectonics therefore require formulations that can accommodate evolution of channel cross-sectional geometry. However, fully two-dimensional (2-D) flow models are too computationally expensive to implement in large-scale landscape evolution models, while available simple empirical relationships between width and discharge do not adequately capture the dynamics of channel adjustment. We have developed a simplified 2-D numerical model of channel evolution in a cohesive, detachment-limited substrate subject to steady, unidirectional flow. Erosion is assumed to be proportional to boundary shear stress, which is calculated using an approximation of the flow field in which log-velocity profiles are assumed to apply along vectors that are perpendicular to the local channel bed. Model predictions of the velocity structure, peak boundary shear stress, and equilibrium channel shape compare well with predictions of a more sophisticated but more computationally demanding ray-isovel model. For example, the mean velocities computed by the two models are consistent to within ≈3%, and the predicted peak shear stress is consistent to within ≈7%. Furthermore, the shear stress distributions predicted by our model compare favorably with available laboratory measurements for prescribed channel shapes. A modification to our simplified code in which the flow includes a high-velocity core allows the model to be extended to estimate shear stress distributions in channels with large width-to-depth ratios. Our model is efficient enough to incorporate into large-scale landscape evolution codes and can be used to examine how channels adjust both cross-sectional shape and slope in response to tectonic and climatic forcing. Copyright 2008 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Basu, A.; Das, B.; Middya, T. R.; Bhattacharya, D. P.
2017-01-01
The phonon growth characteristics in a degenerate semiconductor have been calculated under the condition of low lattice temperature. If the lattice temperature is high, the energy of the intravalley acoustic phonon is negligibly small compared to the average thermal energy of the electrons, so one can traditionally assume the electron-phonon collisions to be elastic and approximate the Bose-Einstein (B.E.) distribution for the phonons by the simple equipartition law. At low lattice temperatures, however, the interaction of the non-equilibrium electrons with the acoustic phonons becomes inelastic and the simple equipartition law for the phonon distribution is not valid. Hence the present analysis takes into account the inelastic collisions and the complete form of the B.E. distribution. The high-field distribution function of the carriers, given by the Fermi-Dirac (F.D.) function at the field-dependent carrier temperature, has been approximated by a well-tested model that overcomes the intrinsic problem of correctly evaluating the integrals involving products and powers of the Fermi function. The results thus obtained are therefore more reliable than rough estimates based on the exact F.D. function combined with oversimplified approximations.
Pathogen evolution and the immunological niche
Cobey, Sarah
2014-01-01
Host immunity is a major driver of pathogen evolution and thus a major determinant of pathogen diversity. Explanations for pathogen diversity traditionally assume simple interactions between pathogens and the immune system, a view encapsulated by the susceptible–infected–recovered (SIR) model. However, there is growing evidence that the complexity of many host–pathogen interactions is dynamically important. This revised perspective requires broadening the definition of a pathogen's immunological phenotype, or what can be thought of as its immunological niche. After reviewing evidence that interactions between pathogens and host immunity drive much of pathogen evolution, I introduce the concept of a pathogen's immunological phenotype. Models that depart from the SIR paradigm demonstrate the utility of this perspective and show that it is particularly useful in understanding vaccine-induced evolution. This paper highlights questions in immunology, evolution, and ecology that must be answered to advance theories of pathogen diversity. PMID:25040161
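For reference, the SIR baseline the review argues is too simple can be written in a few lines; richer immunological-niche models replace the single recovered class with strain-specific or partially cross-reactive immunity. Rates are assumed placeholders:

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1   # transmission and recovery rates, 1/day (assumed)

def sir(t, y):
    S, I, R = y
    return [-beta * S * I,            # susceptibles infected by mass action
            beta * S * I - gamma * I, # infectious compartment
            gamma * I]                # recovered, with lifelong sterilizing immunity

sol = solve_ivp(sir, (0, 365), [0.999, 0.001, 0.0], t_eval=np.linspace(0, 365, 8))
print(np.round(sol.y, 4))
```

The paper's point is that everything immunological in this sketch is compressed into the single gamma transition and the absorbing R class, which is exactly what the "immunological niche" perspective unpacks.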
Bioturbation, advection, and diffusion of a conserved tracer in a laboratory flume
NASA Astrophysics Data System (ADS)
Work, P. A.; Moore, P. R.; Reible, D. D.
2002-06-01
Laboratory experiments indicating the relative influences of advection, diffusion, and bioturbation on transport of NaCl tracer between a stream and streambed are described. Data were collected in a recirculating flume housing a box filled with test sediments. Peclet numbers ranged from 0 to 1.5. Sediment components included a medium sand (d50 = 0.31 mm), kaolinite, and topsoil. Lumbriculus variegatus were introduced as bioturbators. Conductivity probes were employed to document the flux of the tracer solution out of the bed. Measurements are compared to one-dimensional effective diffusion models assuming one or two horizontal sediment layers. These simple models provide a good indication of tracer half-life in the bed if a suitable effective diffusion coefficient is chosen but underpredict initial flux and overpredict flux at long times. Organism activity was limited to the upper reaches of the sediment test box but eventually exerts a secondary influence on flux from deeper regions.
Oxygen transfer rate estimation in oxidation ditches from clean water measurements.
Abusam, A; Keesman, K J; Meinema, K; Van Straten, G
2001-06-01
Standard methods for determining the oxygen transfer rate are based on assumptions that are not valid for oxidation ditches. This paper presents a realistic and simple new method for estimating the oxygen transfer rate in oxidation ditches from clean water measurements. The new method uses a loop-of-CSTRs model, which can easily be incorporated within control algorithms, for modelling oxidation ditches. Further, this method assumes zero oxygen transfer rates (KLa) in the unaerated CSTRs. Application of a formal estimation procedure to real data revealed that the aeration constant (k = KLa·VA, where VA is the volume of the aerated CSTR) can be determined significantly more accurately than KLa and VA. Therefore, the new method estimates k instead of KLa. In application to real data, this method proved to be more accurate than the commonly used Dutch standard method (STORA, 1980).
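The loop-of-CSTRs structure is easy to sketch: N identical tanks in a closed circulation loop, with oxygen transfer only in the aerated tank, as the method assumes. Parameter values below are illustrative, not from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

N, V, Q = 8, 250.0, 4000.0   # number of tanks, tank volume m3, circulation m3/h (assumed)
KLa, C_sat = 6.0, 9.1        # transfer coefficient 1/h in tank 0; DO saturation mg/L

def rhs(t, C):
    dC = Q / V * (np.roll(C, 1) - C)   # each tank fed by the previous one in the loop
    dC[0] += KLa * (C_sat - C[0])      # aeration in tank 0 only; KLa = 0 elsewhere
    return dC

# Clean-water reaeration test from a fully deoxygenated start.
sol = solve_ivp(rhs, (0, 2), np.zeros(N), t_eval=[0, 0.5, 1.0, 2.0])
print(np.round(sol.y.T, 2))            # DO profiles approaching saturation
```

Fitting measured DO curves with this structure identifies the lumped constant k = KLa·VA well, while KLa and VA individually stay poorly determined, which is the practical point the abstract makes.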
Contaminated water delivery as a simple and effective method of experimental Salmonella infection
O’Donnell, Hope; Pham, Oanh H.; Benoun, Joseph M.; Ravesloot-Chávez, Marietta M.; McSorley, Stephen J.
2016-01-01
Aims: In most infectious disease models, it is assumed that gavage needle infection is the most reliable means of pathogen delivery to the gastrointestinal tract. However, this methodology can cause esophageal tearing and induces stress in experimental animals, both of which have the potential to impact early infection and the subsequent immune response. Materials and Methods: C57BL/6 mice were orally infected with virulent Salmonella Typhimurium SL1344 either by intragastric gavage preceded by sodium bicarbonate, or by contamination of drinking water. Results: We demonstrate that water contamination delivery of Salmonella is equivalent to gavage inoculation in providing a consistent model of infection. Furthermore, exposure of mice to contaminated drinking water for as little as 4 hours allowed maximal mucosal and systemic infection, suggesting an abbreviated window exists for natural intestinal entry. Conclusions: Together, these data question the need for gavage delivery for infection with oral pathogens. PMID:26439708
Modelling the excitation field of an optical resonator
NASA Astrophysics Data System (ADS)
Romanini, Daniele
2014-06-01
Assuming the paraxial approximation, we derive efficient recursive expressions for the projection coefficients of a Gaussian beam over the Gauss-Hermite transverse electromagnetic (TEM) modes of an optical cavity. While previous studies considered cavities with cylindrical symmetry, our derivation accounts for "simple" astigmatism and ellipticity, which makes it possible to treat more realistic optical systems. The resulting expansion of the Gaussian beam over the cavity TEM modes provides accurate simulation of the excitation field distribution inside the cavity, in transmission, and in reflection. In particular, this requires including counter-propagating TEM modes, which are usually neglected in textbooks. As an illustrative application to a complex case, we simulate reentrant cavity configurations where Herriott spots are obtained at the cavity output. We show that the case of an astigmatic cavity is also easily modelled. To our knowledge, such relevant applications are usually treated under the simplified geometrical-optics approximation, or using heavier numerical methods.
The economics of bootstrapping space industries - Development of an analytic computer model
NASA Technical Reports Server (NTRS)
Goldberg, A. H.; Criswell, D. R.
1982-01-01
A simple economic model of 'bootstrapping' industrial growth in space and on the Moon is presented. An initial space manufacturing facility (SMF) is assumed to consume lunar materials to enlarge the productive capacity in space. After reaching a predetermined throughput, the enlarged SMF is devoted to products which generate revenue continuously in proportion to the accumulated output mass (such as space solar power stations). Present discounted value and physical estimates for the general factors of production (transport, capital efficiency, labor, etc.) are combined to explore optimum growth in terms of maximized discounted revenues. It is found that 'bootstrapping' reduces the fractional cost of off-Earth transport to a space industry and permits more efficient use of a given transport fleet. It is concluded that more attention should be given to structuring 'bootstrapping' scenarios in which 'learning while doing' can be more fully incorporated in program analysis.
The very low frequency power spectrum of Centaurus X-3
NASA Technical Reports Server (NTRS)
Gruber, D. E.
1988-01-01
The long-term variability of Cen X-3 on time scales ranging from days to years has been examined by combining data obtained by the HEAO 1 A-4 instrument with data from Vela 5B. A simple interpretation of the data is made in terms of the standard alpha-disk model of accretion disk structure and dynamics. Assuming that the low-frequency variance represents the inherent variability of the mass transfer from the companion, the decline in power at higher frequencies results from the leveling of radial structure in the accretion disk through viscous mixing. The shape of the observed power spectrum is shown to be in excellent agreement with a calculation based on a simplified form of this model. The observed low-frequency power spectrum of Cen X-3 is consistent with a disk in which viscous mixing occurs about as rapidly as possible and on the largest scale possible.
Photosynthetic capacity regulation is uncoupled from nutrient limitation
NASA Astrophysics Data System (ADS)
Smith, N. G.; Keenan, T. F.; Prentice, I. C.; Wang, H.
2017-12-01
Ecosystem and Earth system models need information on leaf-level photosynthetic capacity, but to date typically rely on empirical estimates and an assumed dependence on nitrogen supply. Recent evidence suggests that leaf nitrogen is actively controlled though plant responses to photosynthetic demand. Here, we propose and test a theory of demand-driven coordination of photosynthetic processes, and use it to assess the relative roles of nutrient supply and photosynthetic demand. The theory captured 63% of observed variability in a global dataset of Rubisco carboxylation capacity (Vcmax; 3,939 values at 219 sites), suggesting that environmentally regulated biophysical costs and light availability are the first-order drivers of photosynthetic capacity. Leaf nitrogen, on the other hand, was a weak secondary driver of Vcmax, explaining less than 6% of additional observed variability. We conclude that leaf nutrient allocation is primarily driven by demand. Our theory offers a simple, robust strategy for dynamically predicting leaf-level photosynthetic capacity in global models.
Identification and impact of discoverers in online social systems
Medo, Matúš; Mariani, Manuel S.; Zeng, An; Zhang, Yi-Cheng
2016-01-01
Understanding the behavior of users in online systems is of essential importance for sociology, system design, e-commerce, and beyond. Most existing models assume that individuals in diverse systems, ranging from social networks to e-commerce platforms, tend toward what is already popular. We propose a statistical time-aware framework to identify the users who differ from the usual behavior by being repeatedly and persistently among the first to collect the items that later become hugely popular. Since these users effectively discover future hits, we refer to them as discoverers. We use the proposed framework to demonstrate that discoverers are present in a wide range of real systems. Once identified, discoverers can be used to predict the future success of new items. We finally introduce a simple network model which reproduces the discovery patterns observed in the real data. Our results open the door to quantitative study of detailed temporal patterns in social systems. PMID:27687588
Saturn systems holddown acoustic efficiency and normalized acoustic power spectrum.
NASA Technical Reports Server (NTRS)
Gilbert, D. W.
1972-01-01
Saturn systems field acoustic data are used to derive mid- and far-field prediction parameters for rocket engine noise. The data were obtained during Saturn vehicle launches at the Kennedy Space Center. The data base is a sorted set of acoustic data measured during the period 1961 through 1971 for Saturn system launches SA-1 through AS-509. The model assumes hemispherical radiation from a simple source located at the intersection of the longitudinal axis of each booster and the engine exit plane. The model parameters are evaluated only during vehicle holddown. The normalized acoustic power spectrum and efficiency for each system are isolated as a composite from the data using linear numerical methods; their specific definitions allow this separation. The resulting power spectra are nondimensionalized as a function of rocket engine parameters. The nondimensional Saturn system acoustic spectra and efficiencies are compared as a function of Strouhal number with power spectra from other systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pereira, S.H.; Pinho, A.S.S.; Silva, J.M. Hoff da
In this work the exact Friedmann-Robertson-Walker (FRW) equations for an Elko spinor field coupled to gravity in an Einstein-Cartan framework are presented. The torsion functions coupling the Elko field spin-connection to gravity can be solved exactly and the FRW equations for the system assume a relatively simple form. In the limit of a slowly varying Elko spinor field there is a relevant contribution to the field equations acting exactly as a time-varying cosmological term Λ(t) = Λ* + 3βH², where Λ* and β are constants. Observational data using luminosity distances from supernova magnitudes constrain the parameters Ω_m and β, which leads to a lower limit on the Elko mass. Such a model then mimics the effects of a dark energy fluid, here sourced by the Elko spinor field. The density perturbations in the linear regime were also studied in the pseudo-Newtonian formalism.
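As a worked step (standard flat-FRW algebra, assumed here rather than quoted from the paper), the running term can be absorbed into the Friedmann equation:

```latex
H^2 = \frac{8\pi G}{3}\rho + \frac{\Lambda(t)}{3},
\qquad
\Lambda(t) = \Lambda_* + 3\beta H^2
\;\Longrightarrow\;
(1-\beta)\,H^2 = \frac{8\pi G}{3}\rho + \frac{\Lambda_*}{3}
```

For β < 1 the Elko contribution simply rescales the effective matter density and constant term, which is why the supernova distance data constrain Ω_m and β jointly rather than independently.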
Transmission Parameters of the 2001 Foot and Mouth Epidemic in Great Britain
Chis Ster, Irina; Ferguson, Neil M.
2007-01-01
Despite intensive ongoing research, key aspects of the spatial-temporal evolution of the 2001 foot and mouth disease (FMD) epidemic in Great Britain (GB) remain unexplained. Here we develop a Markov Chain Monte Carlo (MCMC) method for estimating epidemiological parameters of the 2001 outbreak for a range of simple transmission models. We make the simplifying assumption that infectious farms were completely observed in 2001, equivalent to assuming that farms that were proactively culled but not diagnosed with FMD were not infectious, even if some were infected. We estimate how transmission parameters varied through time, highlighting the impact of the control measures on the progression of the epidemic. We demonstrate statistically significant evidence for assortative contact patterns between animals of the same species. Predictive risk maps of the transmission potential in different geographic areas of GB are presented for the fitted models. PMID:17551582
On the modulation of X ray fluxes in thunderstorms
NASA Technical Reports Server (NTRS)
Mccarthy, Michael P.; Parks, George K.
1992-01-01
The production of X-ray fluxes in thunderstorms has been attributed to bremsstrahlung. Assuming this, another question arises: how can a thunderstorm modulate the number density of electrons that are sufficiently energetic to produce X-rays? As a partial answer, the effects of typical thunderstorm electric fields on a background population of energetic electrons, such as those produced by cosmic ray secondaries and their decays or by the decay of airborne radionuclides, are considered. The observed variation of X-ray flux is shown to be accounted for by a simple model involving typical electric field strengths. A necessary background electron number density is found from the model and is determined to be more than 2 orders of magnitude higher than that available from radon decay and a factor of 8 higher than that available from cosmic ray secondaries. The ionization enhancement due to energetic electrons and X-rays is discussed.
NASA Astrophysics Data System (ADS)
Dias, R. G.; Gouveia, J. D.
2015-11-01
We present a method for constructing exact localized many-body eigenstates of the Hubbard model in decorated lattices, both for U = 0 and U → ∞. These states are localized with respect to both hole and particle movement. The starting point of the method is the construction of a plaquette, or a set of plaquettes, with a higher symmetry than that of the whole lattice. Using a simple set of rules, the tight-binding localized state in such a plaquette can be divided, folded and unfolded into new plaquette geometries. This set of rules is also valid for the construction of a localized state for one hole in the U → ∞ limit of the same plaquette, assuming a spin configuration which is a uniform linear combination of all possible permutations of the set of spins in the plaquette.
The use of analysis of variance procedures in biological studies
Williams, B.K.
1987-01-01
The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.
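The order dependence that causes this confusion for unbalanced data is easy to demonstrate with standard computing procedures. A sketch using statsmodels on fabricated unbalanced two-factor data (design, effects, and cell counts all assumed):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
cell_n = {"AX": 12, "AY": 3, "AZ": 8, "BX": 2, "BY": 10, "BZ": 5}  # unbalanced cells

rows = []
for a in "AB":
    for b in "XYZ":
        mu = 1.0 * (a == "B") + 0.5 * (b == "Z")   # assumed true effects
        rows += [(a, b, rng.normal(mu, 1.0)) for _ in range(cell_n[a + b])]
df = pd.DataFrame(rows, columns=["a", "b", "y"])

m = ols("y ~ C(a) * C(b)", data=df).fit()
print(sm.stats.anova_lm(m, typ=1))   # sequential SS: depends on factor order
print(sm.stats.anova_lm(m, typ=2))   # main effects adjusted for each other
print(sm.stats.anova_lm(m, typ=3))   # cell-means-style hypotheses
```

Comparing the three tables for the same fit shows concretely that each "type" tests a different hypothesis about the cell means, which is the paper's central clarification.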
Temporal correlation functions of concentration fluctuations: an anomalous case.
Lubelski, Ariel; Klafter, Joseph
2008-10-09
We calculate, within the framework of the continuous time random walk (CTRW) model, multiparticle temporal correlation functions of concentration fluctuations (CCF) in systems that display anomalous subdiffusion. The subdiffusion stems from the nonstationary nature of the CTRW waiting times, which also lead to aging and ergodicity breaking. Due to aging, a system of diffusing particles tends to slow down as time progresses, and therefore, the temporal correlation functions strongly depend on the initial time of measurement. As a consequence, time averages of the CCF differ from ensemble averages, displaying therefore ergodicity breaking. We provide a simple example that demonstrates the difference between these two averages, a difference that might be amenable to experimental tests. We focus on the case of ensemble averaging and assume that the preparation time of the system coincides with the starting time of the measurement. Our analytical calculations are supported by computer simulations based on the CTRW model.
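The aging mechanism behind these results can be demonstrated with a few lines of simulation: heavy-tailed waiting times make the statistics of an increment depend on when the measurement window starts. A minimal sketch (parameters assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
alpha = 0.6   # waiting-time tail exponent, 0 < alpha < 1 (assumed)

def window_increment(t_start, duration):
    """Displacement accumulated in [t_start, t_start + duration) by one CTRW walker."""
    t, x = 0.0, 0
    while t < t_start + duration:
        t += rng.random() ** (-1.0 / alpha)   # Pareto waiting time, minimum 1
        if t_start <= t < t_start + duration:
            x += rng.choice((-1, 1))          # unbiased unit jump
    return x

# Aging: a window of fixed length yields smaller variance the later it starts.
for t_start in (0.0, 1e3, 1e4):
    var = np.var([window_increment(t_start, 100.0) for _ in range(2000)])
    print(f"start {t_start:>7.0f}: increment variance over 100 time units = {var:.2f}")
```

Because the walkers slow down with age, a time average over one long trajectory no longer matches an ensemble average over many fresh trajectories, which is the ergodicity breaking the abstract describes for the concentration correlation functions.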
NASA Astrophysics Data System (ADS)
McNeill, A.; Fitzsimmons, A.; Jedicke, R.; Wainscoat, R.; Denneau, L.; Vereš, P.; Magnier, E.; Chambers, K. C.; Kaiser, N.; Waters, C.
2016-07-01
The rotational state of asteroids is controlled by various physical mechanisms, including collisions, internal damping and the Yarkovsky-O'Keefe-Radzievskii-Paddack effect. We have analysed the changes in magnitude between consecutive detections of ~60,000 asteroids measured by the Panoramic Survey Telescope and Rapid Response System 1 (Pan-STARRS1) survey during its first 18 months of operations. We have attempted to explain the derived brightness changes physically and through the application of a simple model. We have found a tendency towards smaller magnitude variations with decreasing diameter for objects of 1 < D < 8 km. Assuming the shape distribution of objects in this size range to be independent of size and composition, our model suggests a population with average axial ratios 1 : 0.85 ± 0.13 : 0.71 ± 0.13, with larger objects more likely to have spin axes perpendicular to the orbital plane.
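For intuition, the quoted axial ratios map directly to a maximum lightcurve amplitude for a triaxial ellipsoid viewed equator-on; a quick geometric check (viewing geometry assumed):

```python
import numpy as np

a, b, c = 1.0, 0.85, 0.71     # mean axial ratios quoted above
amplitude = 2.5 * np.log10(a / b)   # peak-to-trough amplitude, equator-on view
print(f"maximum lightcurve amplitude ~ {amplitude:.2f} mag")
```

This gives roughly 0.18 mag, setting the scale of the consecutive-detection magnitude changes that the survey analysis works with.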
Search for massive long-lived particles decaying semileptonically in the LHCb detector.
Aaij, R; Adeva, B; Adinolfi, M; Ajaltouni, Z; Akar, S; Albrecht, J; Alessio, F; Alexander, M; Ali, S; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amerio, S; Amhis, Y; An, L; Anderlini, L; Andreassi, G; Andreotti, M; Andrews, J E; Appleby, R B; Archilli, F; d'Argent, P; Arnau Romeu, J; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Baalouch, M; Babuschkin, I; Bachmann, S; Back, J J; Badalov, A; Baesso, C; Baker, S; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Baszczyk, M; Batozskaya, V; Batsukh, B; Battista, V; Bay, A; Beaucourt, L; Beddow, J; Bedeschi, F; Bediaga, I; Bel, L J; Bellee, V; Belloli, N; Belous, K; Belyaev, I; Ben-Haim, E; Bencivenni, G; Benson, S; Benton, J; Berezhnoy, A; Bernet, R; Bertolin, A; Betancourt, C; Betti, F; Bettler, M-O; van Beuzekom, M; Bezshyiko, Ia; Bifani, S; Billoir, P; Bird, T; Birnkraut, A; Bitadze, A; Bizzeti, A; Blake, T; Blanc, F; Blouw, J; Blusk, S; Bocci, V; Boettcher, T; Bondar, A; Bondar, N; Bonivento, W; Bordyuzhin, I; Borgheresi, A; Borghi, S; Borisyak, M; Borsato, M; Bossu, F; Boubdir, M; Bowcock, T J V; Bowen, E; Bozzi, C; Braun, S; Britsch, M; Britton, T; Brodzicka, J; Buchanan, E; Burr, C; Bursche, A; Buytaert, J; Cadeddu, S; Calabrese, R; Calvi, M; Calvo Gomez, M; Camboni, A; Campana, P; Campora Perez, D H; Capriotti, L; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carniti, P; Carson, L; Carvalho Akiba, K; Casse, G; Cassina, L; Castillo Garcia, L; Cattaneo, M; Cauet, Ch; Cavallero, G; Cenci, R; Chamont, D; Charles, M; Charpentier, Ph; Chatzikonstantinidis, G; Chefdeville, M; Chen, S; Cheung, S-F; Chobanova, V; Chrzaszcz, M; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coco, V; Cogan, J; Cogneras, E; Cogoni, V; Cojocariu, L; Collazuol, G; Collins, P; Comerma-Montells, A; Contu, A; Cook, A; Coombs, G; Coquereau, S; Corti, G; Corvo, M; Costa Sobral, C M; Couturier, B; Cowan, G A; Craik, D C; Crocombe, A; Cruz Torres, M; Cunliffe, S; Currie, R; D'Ambrosio, C; Da Cunha Marinho, F; Dall'Occo, E; Dalseno, J; David, P N Y; Davis, A; De Aguiar Francisco, O; De Bruyn, K; De Capua, S; De Cian, M; De Miranda, J M; De Paula, L; De Serio, M; De Simone, P; Dean, C-T; Decamp, D; Deckenhoff, M; Del Buono, L; Demmer, M; Dendek, A; Derkach, D; Deschamps, O; Dettori, F; Dey, B; Di Canto, A; Dijkstra, H; Dordei, F; Dorigo, M; Dosil Suárez, A; Dovbnya, A; Dreimanis, K; Dufour, L; Dujany, G; Dungs, K; Durante, P; Dzhelyadin, R; Dziurda, A; Dzyuba, A; Déléage, N; Easo, S; Ebert, M; Egede, U; Egorychev, V; Eidelman, S; Eisenhardt, S; Eitschberger, U; Ekelhof, R; Eklund, L; Ely, S; Esen, S; Evans, H M; Evans, T; Falabella, A; Farley, N; Farry, S; Fay, R; Fazzini, D; Ferguson, D; Fernandez Prieto, A; Ferrari, F; Ferreira Rodrigues, F; Ferro-Luzzi, M; Filippov, S; Fini, R A; Fiore, M; Fiorini, M; Firlej, M; Fitzpatrick, C; Fiutowski, T; Fleuret, F; Fohl, K; Fontana, M; Fontanelli, F; Forshaw, D C; Forty, R; Franco Lima, V; Frank, M; Frei, C; Fu, J; Funk, W; Furfaro, E; Färber, C; Gallas Torreira, A; Galli, D; Gallorini, S; Gambetta, S; Gandelman, M; Gandini, P; Gao, Y; Garcia Martin, L M; García Pardiñas, J; Garra Tico, J; Garrido, L; Garsed, P J; Gascon, D; Gaspar, C; Gavardi, L; Gazzoni, G; Gerick, D; Gersabeck, E; Gersabeck, M; Gershon, T; Ghez, Ph; Gianì, S; Gibson, V; Girard, O G; Giubega, L; Gizdov, K; Gligorov, V V; Golubkov, D; Golutvin, A; Gomes, A; Gorelov, I V; Gotti, C; Gándara, M Grabalosa; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graverini, E; Graziani, G; Grecu, A; Griffith, 
P; Grillo, L; Gruberg Cazon, B R; Grünberg, O; Gushchin, E; Guz, Yu; Gys, T; Göbel, C; Hadavizadeh, T; Hadjivasiliou, C; Haefeli, G; Haen, C; Haines, S C; Hall, S; Hamilton, B; Han, X; Hansmann-Menzemer, S; Harnew, N; Harnew, S T; Harrison, J; Hatch, M; He, J; Head, T; Heister, A; Hennessy, K; Henrard, P; Henry, L; van Herwijnen, E; Heß, M; Hicheur, A; Hill, D; Hombach, C; Hopchev, H; Hulsbergen, W; Humair, T; Hushchyn, M; Hussain, N; Hutchcroft, D; Idzik, M; Ilten, P; Jacobsson, R; Jaeger, A; Jalocha, J; Jans, E; Jawahery, A; Jiang, F; John, M; Johnson, D; Jones, C R; Joram, C; Jost, B; Jurik, N; Kandybei, S; Kanso, W; Karacson, M; Kariuki, J M; Karodia, S; Kecke, M; Kelsey, M; Kenzie, M; Ketel, T; Khairullin, E; Khanji, B; Khurewathanakul, C; Kirn, T; Klaver, S; Klimaszewski, K; Koliiev, S; Kolpin, M; Komarov, I; Koopman, R F; Koppenburg, P; Kosmyntseva, A; Kozachuk, A; Kozeiha, M; Kravchuk, L; Kreplin, K; Kreps, M; Krokovny, P; Kruse, F; Krzemien, W; Kucewicz, W; Kucharczyk, M; Kudryavtsev, V; Kuonen, A K; Kurek, K; Kvaratskheliya, T; Lacarrere, D; Lafferty, G; Lai, A; Lanfranchi, G; Langenbruch, C; Latham, T; Lazzeroni, C; Le Gac, R; van Leerdam, J; Leflat, A; Lefrançois, J; Lefèvre, R; Lemaitre, F; Lemos Cid, E; Leroy, O; Lesiak, T; Leverington, B; Li, T; Li, Y; Likhomanenko, T; Lindner, R; Linn, C; Lionetto, F; Liu, X; Loh, D; Longstaff, I; Lopes, J H; Lucchesi, D; Lucio Martinez, M; Luo, H; Lupato, A; Luppi, E; Lupton, O; Lusiani, A; Lyu, X; Machefert, F; Maciuc, F; Maev, O; Maguire, K; Malde, S; Malinin, A; Maltsev, T; Manca, G; Mancinelli, G; Manning, P; Maratas, J; Marchand, J F; Marconi, U; Marin Benito, C; Marino, P; Marks, J; Martellotti, G; Martin, M; Martinelli, M; Martinez Santos, D; Martinez Vidal, F; Martins Tostes, D; Massacrier, L M; Massafferri, A; Matev, R; Mathad, A; Mathe, Z; Matteuzzi, C; Mauri, A; Maurin, B; Mazurov, A; McCann, M; McCarthy, J; McNab, A; McNulty, R; Meadows, B; Meier, F; Meissner, M; Melnychuk, D; Merk, M; Merli, A; Michielin, E; Milanes, D A; Minard, M-N; Mitzel, D S; Mogini, A; Molina Rodriguez, J; Monroy, I A; Monteil, S; Morandin, M; Morawski, P; Mordà, A; Morello, M J; Moron, J; Morris, A B; Mountain, R; Muheim, F; Mulder, M; Mussini, M; Muster, B; Müller, D; Müller, J; Müller, K; Müller, V; Naik, P; Nakada, T; Nandakumar, R; Nandi, A; Nasteva, I; Needham, M; Neri, N; Neubert, S; Neufeld, N; Neuner, M; Nguyen, T D; Nguyen-Mau, C; Nieswand, S; Niet, R; Nikitin, N; Nikodem, T; Novoselov, A; O'Hanlon, D P; Oblakowska-Mucha, A; Obraztsov, V; Ogilvy, S; Oldeman, R; Onderwater, C J G; Otalora Goicochea, J M; Otto, A; Owen, P; Oyanguren, A; Pais, P R; Palano, A; Palombo, F; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Pappalardo, L L; Parker, W; Parkes, C; Passaleva, G; Pastore, A; Patel, G D; Patel, M; Patrignani, C; Pearce, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perret, P; Pescatore, L; Petridis, K; Petrolini, A; Petrov, A; Petruzzo, M; Picatoste Olloqui, E; Pietrzyk, B; Pikies, M; Pinci, D; Pistone, A; Piucci, A; Playfer, S; Plo Casasus, M; Poikela, T; Polci, F; Poluektov, A; Polyakov, I; Polycarpo, E; Pomery, G J; Popov, A; Popov, D; Popovici, B; Poslavskii, S; Potterat, C; Price, E; Price, J D; Prisciandaro, J; Pritchard, A; Prouve, C; Pugatch, V; Puig Navarro, A; Punzi, G; Qian, W; Quagliani, R; Rachwal, B; Rademacker, J H; Rama, M; Ramos Pernas, M; Rangel, M S; Raniuk, I; Ratnikov, F; Raven, G; Redi, F; Reichert, S; Dos Reis, A C; Remon Alepuz, C; Renaudin, V; Ricciardi, S; Richards, S; Rihl, M; Rinnert, K; Rives 
Molina, V; Robbe, P; Rodrigues, A B; Rodrigues, E; Rodriguez Lopez, J A; Rodriguez Perez, P; Rogozhnikov, A; Roiser, S; Rollings, A; Romanovskiy, V; Romero Vidal, A; Ronayne, J W; Rotondo, M; Rudolph, M S; Ruf, T; Ruiz Valls, P; Saborido Silva, J J; Sadykhov, E; Sagidova, N; Saitta, B; Salustino Guimaraes, V; Sanchez Mayordomo, C; Sanmartin Sedes, B; Santacesaria, R; Santamarina Rios, C; Santimaria, M; Santovetti, E; Sarti, A; Satriano, C; Satta, A; Saunders, D M; Savrina, D; Schael, S; Schellenberg, M; Schiller, M; Schindler, H; Schlupp, M; Schmelling, M; Schmelzer, T; Schmidt, B; Schneider, O; Schopper, A; Schubert, K; Schubiger, M; Schune, M-H; Schwemmer, R; Sciascia, B; Sciubba, A; Semennikov, A; Sergi, A; Serra, N; Serrano, J; Sestini, L; Seyfert, P; Shapkin, M; Shapoval, I; Shcheglov, Y; Shears, T; Shekhtman, L; Shevchenko, V; Siddi, B G; Silva Coutinho, R; Silva de Oliveira, L; Simi, G; Simone, S; Sirendi, M; Skidmore, N; Skwarnicki, T; Smith, E; Smith, I T; Smith, J; Smith, M; Snoek, H; Soares Lavra, L; Sokoloff, M D; Soler, F J P; Souza De Paula, B; Spaan, B; Spradlin, P; Sridharan, S; Stagni, F; Stahl, M; Stahl, S; Stefko, P; Stefkova, S; Steinkamp, O; Stemmle, S; Stenyakin, O; Stevenson, S; Stoica, S; Stone, S; Storaci, B; Stracka, S; Straticiuc, M; Straumann, U; Sun, L; Sutcliffe, W; Swientek, K; Syropoulos, V; Szczekowski, M; Szumlak, T; T'Jampens, S; Tayduganov, A; Tekampe, T; Teklishyn, M; Tellarini, G; Teubert, F; Thomas, E; van Tilburg, J; Tilley, M J; Tisserand, V; Tobin, M; Tolk, S; Tomassetti, L; Tonelli, D; Topp-Joergensen, S; Toriello, F; Tournefier, E; Tourneur, S; Trabelsi, K; Traill, M; Tran, M T; Tresch, M; Trisovic, A; Tsaregorodtsev, A; Tsopelas, P; Tully, A; Tuning, N; Ukleja, A; Ustyuzhanin, A; Uwer, U; Vacca, C; Vagnoni, V; Valassi, A; Valat, S; Valenti, G; Vallier, A; Vazquez Gomez, R; Vazquez Regueiro, P; Vecchi, S; van Veghel, M; Velthuis, J J; Veltri, M; Veneziano, G; Venkateswaran, A; Vernet, M; Vesterinen, M; Viaud, B; Vieira, D; Vieites Diaz, M; Viemann, H; Vilasis-Cardona, X; Vitti, M; Volkov, V; Vollhardt, A; Voneki, B; Vorobyev, A; Vorobyev, V; Voß, C; de Vries, J A; Vázquez Sierra, C; Waldi, R; Wallace, C; Wallace, R; Walsh, J; Wang, J; Ward, D R; Wark, H M; Watson, N K; Websdale, D; Weiden, A; Whitehead, M; Wicht, J; Wilkinson, G; Wilkinson, M; Williams, M; Williams, M P; Williams, M; Williams, T; Wilson, F F; Wimberley, J; Wishahi, J; Wislicki, W; Witek, M; Wormser, G; Wotton, S A; Wraight, K; Wyllie, K; Xie, Y; Xing, Z; Xu, Z; Yang, Z; Yao, Y; Yin, H; Yu, J; Yuan, X; Yushchenko, O; Zarebski, K A; Zavertyaev, M; Zhang, L; Zhang, Y; Zhang, Y; Zhelezov, A; Zheng, Y; Zhu, X; Zhukov, V; Zucchelli, S
2017-01-01
A search is presented for massive long-lived particles decaying into a muon and two quarks. The dataset consists of proton-proton interactions at centre-of-mass energies of 7 and 8 TeV, corresponding to integrated luminosities of 1 and 2 fb⁻¹, respectively. The analysis is performed assuming a set of production mechanisms with simple topologies, including the production of a Higgs-like particle decaying into two long-lived particles. The mass range from 20 to 80 GeV/c² and lifetimes from 5 to 100 ps are explored. Results are also interpreted in terms of neutralino production in different R-parity-violating supersymmetric models, with masses in the 23-198 GeV/c² range. No excess above the background expectation is observed and upper limits are set on the production cross-section for various points in the parameter space of theoretical models.
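To make the limit-setting step concrete, here is a minimal sketch of how an upper limit on a production cross-section can be derived from a single-bin counting experiment, assuming a flat prior on the signal yield. It is an illustration only: the observed count, background expectation, luminosity, and efficiency below are hypothetical, and the actual LHCb analysis uses a more sophisticated procedure with systematic uncertainties.

```python
import numpy as np
from math import exp, factorial

def poisson_pmf(n, mu):
    # Poisson probability of observing n counts given mean mu.
    return mu**n * exp(-mu) / factorial(n)

def upper_limit_sigma(n_obs, bkg, lumi, eff, cl=0.95):
    """Bayesian 95% CL upper limit on a signal cross-section for a
    single-bin counting experiment, with a flat prior on the yield."""
    s_grid = np.linspace(0.0, 50.0, 5001)        # signal-yield grid
    ds = s_grid[1] - s_grid[0]
    like = np.array([poisson_pmf(n_obs, s + bkg) for s in s_grid])
    post = like / (like.sum() * ds)              # normalised posterior
    cdf = np.cumsum(post) * ds
    s_up = s_grid[np.searchsorted(cdf, cl)]      # yield at the CL quantile
    return s_up / (lumi * eff)                   # convert to cross-section

# Hypothetical inputs: 2 observed events, 1.5 expected background,
# 2 fb^-1 of data, and a 10% signal efficiency.
print(upper_limit_sigma(n_obs=2, bkg=1.5, lumi=2.0, eff=0.10))
```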
Identification and impact of discoverers in online social systems
NASA Astrophysics Data System (ADS)
Medo, Matúš; Mariani, Manuel S.; Zeng, An; Zhang, Yi-Cheng
2016-09-01
Understanding the behavior of users in online systems is of essential importance for sociology, system design, e-commerce, and beyond. Most existing models assume that individuals in diverse systems, ranging from social networks to e-commerce platforms, tend to favor what is already popular. We propose a statistical time-aware framework to identify the users who depart from this usual behavior by being repeatedly and persistently among the first to collect the items that later become hugely popular. Since these users effectively discover future hits, we refer to them as discoverers. We use the proposed framework to demonstrate that discoverers are present in a wide range of real systems. Once identified, discoverers can be used to predict the future success of new items. We finally introduce a simple network model which reproduces the discovery patterns observed in the real data. Our results open the door to the quantitative study of detailed temporal patterns in social systems.
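As an illustration of the kind of time-aware statistic the abstract describes, the sketch below flags users who are disproportionately often among the earliest collectors of items that later become hits, scored against a binomial null model. The function name, the (user, item, timestamp) event format, and the early_frac/hit_quantile thresholds are assumptions made for this sketch, not the authors' exact framework.

```python
import numpy as np
from scipy.stats import binom

def discoverer_pvalues(events, early_frac=0.05, hit_quantile=0.95):
    """For each user, test whether they are among the first `early_frac`
    collectors of eventual hits more often than chance would allow.
    `events` is a list of (user, item, timestamp) tuples."""
    by_item = {}
    for u, i, t in events:
        by_item.setdefault(i, []).append((t, u))
    pop = {i: len(ev) for i, ev in by_item.items()}
    hit_cut = np.quantile(list(pop.values()), hit_quantile)
    hits = {i for i, p in pop.items() if p >= hit_cut}

    n_tries, n_early = {}, {}
    for i, ev in by_item.items():
        if i not in hits:
            continue
        ev.sort()                                   # order by collection time
        cutoff = max(1, int(early_frac * len(ev)))  # size of the "early" window
        for rank, (t, u) in enumerate(ev):
            n_tries[u] = n_tries.get(u, 0) + 1
            if rank < cutoff:
                n_early[u] = n_early.get(u, 0) + 1

    # Under the null, each collection of a hit is "early" with prob early_frac;
    # small p-values mark candidate discoverers.
    return {u: binom.sf(n_early.get(u, 0) - 1, n_tries[u], early_frac)
            for u in n_tries}

# Tiny demo with made-up events.
demo = [("u1", "a", 1), ("u2", "a", 2), ("u3", "a", 4),
        ("u1", "b", 1), ("u2", "b", 3), ("u3", "b", 5)]
print(discoverer_pvalues(demo, early_frac=0.5, hit_quantile=0.0))
```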
Contaminated water delivery as a simple and effective method of experimental Salmonella infection.
O'Donnell, Hope; Pham, Oanh H; Benoun, Joseph M; Ravesloot-Chávez, Marietta M; McSorley, Stephen J
2015-01-01
In most infectious disease models, it is assumed that gavage needle infection is the most reliable means of pathogen delivery to the GI tract. However, this methodology can cause esophageal tearing and induces stress in experimental animals, both of which have the potential to impact early infection and the subsequent immune response. C57BL/6 mice were orally infected with virulent Salmonella Typhimurium SL1344 either by intragastric gavage preceded by sodium bicarbonate, or by contamination of drinking water. We demonstrate that water contamination delivery of Salmonella is equivalent to gavage inoculation in providing a consistent model of infection. Furthermore, exposure of mice to contaminated drinking water for as little as 4 h allowed maximal mucosal and systemic infection, suggesting an abbreviated window exists for natural intestinal entry. Together, these data question the need for gavage delivery for infection with oral pathogens.
Leconte, Jérémy; Wu, Hanbo; Menou, Kristen; Murray, Norman
2015-02-06
Planets in the habitable zone of lower-mass stars are often assumed to be in a state of tidally synchronized rotation, which would considerably affect their putative habitability. Although thermal tides cause Venus to rotate retrogradely, simple scaling arguments tend to attribute this peculiarity to the massive Venusian atmosphere. Using a global climate model, we show that even a relatively thin atmosphere can drive terrestrial planets' rotation away from synchronicity. We derive a more realistic atmospheric tide model that predicts four asynchronous equilibrium spin states, two being stable, when the amplitude of the thermal tide exceeds a threshold that is met for habitable Earth-like planets with a 1-bar atmosphere around stars more massive than ~0.5 to 0.7 solar mass. Thus, many recently discovered terrestrial planets could exhibit asynchronous spin-orbit rotation, even with a thin atmosphere.
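The bifurcation behind these equilibria can be illustrated with a toy torque balance: a gravitational tide that damps the spin toward synchronous rotation competing with a thermal atmospheric tide that has a Maxwell-like frequency response. The sketch below finds the equilibria numerically and classifies their stability; the constants K_g, K_a, and sigma0 are purely illustrative, and the paper's fuller tide model (which yields four asynchronous states) is more detailed than this two-term balance.

```python
import numpy as np

# sigma = omega - n is the spin rate relative to synchronous rotation.
# Gravitational tide: linear restoring torque toward sigma = 0.
# Thermal tide: Maxwell-like response that peaks near sigma0.
def total_torque(sigma, K_g=1.0, K_a=2.5, sigma0=1.0):
    grav = -K_g * sigma
    thermal = K_a * sigma / (1.0 + (sigma / sigma0) ** 2)
    return grav + thermal

sig = np.linspace(-5.0, 5.0, 100000)
tq = total_torque(sig)
roots = sig[:-1][np.sign(tq[:-1]) != np.sign(tq[1:])]   # sign changes
for r in roots:
    slope = (total_torque(r + 1e-4) - total_torque(r - 1e-4)) / 2e-4
    print(f"equilibrium at sigma = {r:+.3f}, "
          f"{'stable' if slope < 0 else 'unstable'}")
```

With these constants the synchronous state (sigma = 0) is unstable and two stable asynchronous equilibria appear at ±sigma0·sqrt(K_a/K_g − 1), which is the qualitative mechanism the abstract describes: once the thermal tide amplitude K_a exceeds the threshold set by K_g, synchronous rotation is no longer an attractor.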
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bezler, P.; Hartzman, M.; Reich, M.
1980-08-01
A set of benchmark problems and solutions has been developed for verifying the adequacy of computer programs used for the dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations, all of which are assumed to exhibit linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components, and internal force and moment components. Solutions to the associated static anchor-point-motion problems are not included.
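For readers unfamiliar with the method, the core computation can be sketched in a few lines: solve the generalized eigenproblem K·φ = ω²·M·φ for frequencies and mode shapes, form the participation factors for uniform support motion, read spectral accelerations off a design spectrum, and combine peak modal responses (here by SRSS). The 3-degree-of-freedom mass, stiffness, and spectrum values below are made-up illustrations, not one of the report's benchmark problems.

```python
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 2.0, 1.0])                        # lumped masses
K = np.array([[ 400., -200.,    0.],
              [-200.,  400., -200.],
              [   0., -200.,  200.]])                # shear-building stiffness

w2, phi = eigh(K, M)                                # K phi = w^2 M phi
freq_hz = np.sqrt(w2) / (2 * np.pi)

r = np.ones(3)                                      # uniform support motion
gamma = (phi.T @ M @ r) / np.diag(phi.T @ M @ phi)  # participation factors

# Hypothetical design spectrum: spectral acceleration per mode (m/s^2).
Sa = np.interp(freq_hz, [0.5, 2.0, 10.0], [2.5, 6.0, 3.0])
u_modal = phi * (gamma * Sa / w2)                   # peak modal displacements
u_srss = np.sqrt((u_modal ** 2).sum(axis=1))        # SRSS combination
print(freq_hz, gamma, u_srss)
```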
Constraints on a scale-dependent bias from galaxy clustering
NASA Astrophysics Data System (ADS)
Amendola, L.; Menegoni, E.; Di Porto, C.; Corsi, M.; Branchini, E.
2017-01-01
We forecast the future constraints on scale-dependent parametrizations of galaxy bias and their impact on the estimate of cosmological parameters from the power spectrum of galaxies measured in a spectroscopic redshift survey. For the latter we assume a wide survey at relatively large redshifts, similar to the planned Euclid survey, as the baseline for future experiments. To assess the impact of the bias we perform a Fisher matrix analysis, and we adopt two different parametrizations of scale-dependent bias. The fiducial models for galaxy bias are calibrated using mock catalogs of Hα-emitting galaxies mimicking the expected properties of the objects that will be targeted by the Euclid survey. Our analysis yields two main results. First, allowing for a scale-dependent bias does not significantly increase the errors on the other cosmological parameters, apart from the rms amplitude of density fluctuations, σ8, and the growth index γ, whose uncertainties increase by a factor of up to 2, depending on the bias model adopted. Second, we find that the linear bias parameter b0 can be estimated to within 1%-2% at various redshifts, regardless of the fiducial model. The nonlinear bias parameters have significantly larger errors that depend on the model adopted. Despite this, in the more realistic scenarios, departures from the simple linear bias prescription can be detected with ~2σ significance at each redshift explored. Finally, we use the Fisher matrix formalism to assess the impact of assuming an incorrect bias model and find that the systematic errors induced on the cosmological parameters are similar to or even larger than the statistical ones.
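The forecasting machinery itself is compact. The following is a minimal sketch of a Fisher-matrix forecast for a toy galaxy power spectrum with a scale-dependent bias b(k) = b0 + b2·k², using numerical derivatives and Gaussian errors on band powers. The spectrum shape, mode counts, and fiducial values are invented for illustration and do not correspond to the Euclid-like survey assumed in the paper.

```python
import numpy as np

def model(k, theta):
    A, b0, b2 = theta                          # amplitude and bias parameters
    pm = A * k / (1.0 + (k / 0.1) ** 2) ** 2   # toy matter spectrum shape
    return (b0 + b2 * k ** 2) ** 2 * pm        # biased galaxy spectrum

k = np.linspace(0.01, 0.2, 40)                 # wavenumbers in h/Mpc
theta0 = np.array([2.0e4, 1.4, 5.0])           # fiducial parameters
n_modes = 1.0e4 * k ** 2                       # toy mode counts per k-bin
sigma = model(k, theta0) * np.sqrt(2.0 / n_modes)   # Gaussian P(k) errors

def deriv(i, eps=1e-4):
    # Central finite difference of the model w.r.t. parameter i.
    dt = np.zeros(3); dt[i] = eps * theta0[i]
    return (model(k, theta0 + dt) - model(k, theta0 - dt)) / (2 * dt[i])

D = np.array([deriv(i) for i in range(3)])
F = (D[:, None, :] * D[None, :, :] / sigma ** 2).sum(axis=-1)  # Fisher matrix
err = np.sqrt(np.diag(np.linalg.inv(F)))       # marginalised 1-sigma errors
print(dict(zip(["A", "b0", "b2"], err)))
```

Inverting the full Fisher matrix gives marginalized errors, so the printed uncertainty on b0 automatically accounts for its degeneracy with the amplitude and the nonlinear bias term, which is exactly the kind of trade-off the abstract quantifies.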
Landkamer, Lee L.; Harvey, Ronald W.; Scheibe, Timothy D.; Ryan, Joseph N.
2013-01-01
A colloid transport model is introduced that is conceptually simple yet captures the essential features of colloid transport and retention in saturated porous media when colloid retention is dominated by the secondary minimum, because an electrostatic barrier inhibits substantial deposition in the primary minimum. This model is based on conventional colloid filtration theory (CFT) but eliminates the empirical concept of attachment efficiency: the colloid deposition rate is computed directly from CFT by assuming that all predicted interceptions of colloids by collectors result in at least temporary deposition in the secondary minimum. A new paradigm for colloid re-entrainment, based on colloid population heterogeneity, is also introduced. To accomplish this, the initial colloid population is divided into two fractions. One fraction, by virtue of its physicochemical characteristics (e.g., size and charge), will always be re-entrained after capture in a secondary minimum. The remaining fraction, again as a result of its physicochemical characteristics, will be retained “irreversibly” when captured by a secondary minimum. Assuming the dispersion coefficient can be estimated from tracer behavior, this model has only two fitting parameters: (1) the fraction of the initial colloid population that will be retained “irreversibly” upon interception by a secondary minimum, and (2) the rate at which reversibly retained colloids leave the secondary minimum. These two parameters were correlated to the depth of the Derjaguin-Landau-Verwey-Overbeek (DLVO) secondary energy minimum and the pore-water velocity, two physical controls on colloid transport. Given this correlation, the model serves as a heuristic tool for exploring the influence of physical parameters such as surface potential and fluid velocity on colloid transport.
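The retention kinetics implied by this two-fraction picture can be written down directly. Below is a minimal sketch for a single well-mixed cell (the advection-dispersion transport terms are omitted for clarity): colloids deposit into the secondary minimum at a CFT-derived rate, a fraction f_irr of interceptions is irreversible, and the reversibly held remainder re-entrains at rate k_re; f_irr and k_re correspond to the model's two fitting parameters. All rate values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_dep = 0.5     # CFT deposition rate into the secondary minimum (1/h)
f_irr = 0.3     # fraction retained irreversibly once intercepted
k_re  = 0.2     # re-entrainment rate of reversibly held colloids (1/h)

def rhs(t, y):
    c, s_rev, s_irr = y                    # aqueous, reversible, irreversible
    dc = -k_dep * c + k_re * s_rev
    ds_rev = (1.0 - f_irr) * k_dep * c - k_re * s_rev
    ds_irr = f_irr * k_dep * c
    return [dc, ds_rev, ds_irr]            # mass-conserving kinetics

sol = solve_ivp(rhs, (0.0, 24.0), [1.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 24.0, 7)
print(sol.sol(t)[2])                       # irreversibly retained mass vs time
```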
NASA Technical Reports Server (NTRS)
Butner, Harold M.
1999-01-01
Our understanding of the interrelationship between the collapsing cloud envelope and the disk has been greatly altered. While the dominant star formation models invoke free-fall collapse and an r^{-1.5} density profile, other star formation models are possible. These models invoke either different cloud starting conditions or the mediating effects of magnetic fields to alter the cloud geometry during collapse. To test these models, it is necessary to understand the envelope's physical structure. However, the discovery of disks around young stellar objects, based on millimeter observations, complicates a simple interpretation of the emission: depending on the wavelength, the disk or the envelope could dominate a star's emission. In addition, the discovery of planets around other stars has made understanding the disks important in their own right. Many star formation models predict that disks should form naturally as the star is forming. In many cases, the information we derive about disk properties depends implicitly on the assumed envelope properties. How to understand the two components and their interaction with each other is a key problem in current star formation research.
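A small worked example shows why the assumed density profile matters when interpreting envelope emission: for a power-law envelope ρ(r) = ρ0·(r/r0)^(−p), the enclosed mass scales as r^(3−p), so the free-fall value p = 1.5 and the alternatives predicted by other collapse models give measurably different mass distributions. The normalization values below are arbitrary.

```python
import numpy as np

def enclosed_mass(r, rho0=1e-18, r0=1e16, p=1.5):
    # M(r) = integral of 4*pi*r'^2 * rho0*(r'/r0)**(-p) dr'
    #      = 4*pi*rho0*r0**p * r**(3-p) / (3-p), valid for p < 3.
    return 4 * np.pi * rho0 * r0**p * r**(3 - p) / (3 - p)

r = np.logspace(15, 17, 5)                 # radii in cm (illustrative)
for p in (1.0, 1.5, 2.0):
    # Fraction of the total envelope mass enclosed at each radius:
    print(p, enclosed_mass(r, p=p) / enclosed_mass(r[-1], p=p))
```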
Bosak, A; Chernyshov, D; Vakhrushev, Sergey; Krisch, M
2012-01-01
The available body of experimental data in terms of the relaxor-specific component of diffuse scattering is critically analysed and a collection of related models is reviewed; the sources of experimental artefacts and consequent failures of modelling efforts are enumerated. Furthermore, it is shown that the widely used concept of polar nanoregions as individual static entities is incompatible with the experimental diffuse scattering results. Based on the synchrotron diffuse scattering three-dimensional data set taken for the prototypical ferroelectric relaxor lead magnesium niobate-lead titanate (PMN-PT), a new parameterization of diffuse scattering in relaxors is presented and a simple phenomenological picture is proposed to explain the unusual properties of the relaxor behaviour. The model assumes a specific slowly changing displacement pattern, which is indirectly controlled by the low-energy acoustic phonons of the system. The model provides a qualitative but rather detailed explanation of temperature, pressure and electric-field dependence of diffuse neutron and X-ray scattering, as well as of the existence of a hierarchy in the relaxation times of these materials.