A SIMPLE MODEL FOR THE UPTAKE, TRANSLOCATION, AND ACCUMULATION OF PERCHLORATE IN TOBACCO PLANTS
A simple mathematical model is being developed to describe the uptake, translocation, and accumulation of perchlorate in tobacco plants. The model defines a plant as a set of compartments, consisting of mass balance differential equations and plant-specific physiological paramet...
ZIMOD: A Simple Computer Model of the Zimbabwean Economy.
ERIC Educational Resources Information Center
Knox, Jon; And Others
1988-01-01
This paper describes a rationale for the construction and use of a simple consistency model of the Zimbabwean economy that incorporates an input-output matrix. The model is designed to investigate alternative industrial strategies and their consequences for the balance of payments, consumption, and overall gross domestic product growth for a…
A simple-source model of military jet aircraft noise
NASA Astrophysics Data System (ADS)
Morgan, Jessica; Gee, Kent L.; Neilsen, Tracianne; Wall, Alan T.
2010-10-01
The jet plumes produced by military jet aircraft radiate significant amounts of noise. A need to better understand the characteristics of the turbulence-induced aeroacoustic sources has motivated the present study. The purpose of the study is to develop a simple-source model of jet noise that can be compared to the measured data. The study is based on acoustic data collected near a tied-down F-22 Raptor. The simplest model consisted of adjusting the origin of a monopole above a rigid planar reflector until the locations of the predicted and measured interference nulls matched. The model has since developed into an extended Rayleigh distribution of partially correlated monopoles, which fits the measured data from the F-22 significantly better. The results and basis for the model match the current prevailing theory that jet noise consists of both correlated and uncorrelated sources. In addition, this simple-source model conforms to the theory that the peak source location moves upstream with increasing frequency and at lower engine conditions.
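The first step described above, a monopole over a rigid reflector, can be sketched numerically. In this minimal illustration (not the authors' code; the source height, receiver geometry, and sound speed are assumed values), the rigid plane is replaced by an in-phase image source, and destructive interference nulls occur where the path difference between the direct and reflected rays is an odd number of half wavelengths.

```python
import cmath
import math

def pressure_amplitude(freq, src_h, rec_h, dist, c=343.0):
    """|p| at a receiver for a unit-strength monopole above a rigid
    plane, modelled as the source plus an in-phase image source."""
    k = 2 * math.pi * freq / c
    r_direct = math.hypot(dist, rec_h - src_h)
    r_image = math.hypot(dist, rec_h + src_h)
    p = (cmath.exp(1j * k * r_direct) / r_direct
         + cmath.exp(1j * k * r_image) / r_image)
    return abs(p)

def null_frequencies(src_h, rec_h, dist, n_nulls=3, c=343.0):
    """Frequencies of destructive interference: the direct/reflected
    path difference equals an odd number of half wavelengths."""
    dr = math.hypot(dist, rec_h + src_h) - math.hypot(dist, rec_h - src_h)
    return [(2 * n + 1) * c / (2 * dr) for n in range(n_nulls)]
```

Matching the predicted null locations to measured ones by adjusting the source origin is then a one-parameter fit over `src_h` and `dist`.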
NASA Astrophysics Data System (ADS)
Blume-Kohout, Robin; Zurek, Wojciech H.
2006-06-01
We lay a comprehensive foundation for the study of redundant information storage in decoherence processes. Redundancy has been proposed as a prerequisite for objectivity, the defining property of classical objects. We consider two ensembles of states for a model universe consisting of one system and many environments: the first consisting of arbitrary states, and the second consisting of “singly branching” states consistent with a simple decoherence model. Typical states from the random ensemble do not store information about the system redundantly, but information stored in branching states has a redundancy proportional to the environment’s size. We compute the specific redundancy for a wide range of model universes, and fit the results to a simple first-principles theory. Our results show that the presence of redundancy divides information about the system into three parts: classical (redundant); purely quantum; and the borderline, undifferentiated or “nonredundant,” information.
An egalitarian network model for the emergence of simple and complex cells in visual cortex
Tao, Louis; Shelley, Michael; McLaughlin, David; Shapley, Robert
2004-01-01
We explain how simple and complex cells arise in a large-scale neuronal network model of the primary visual cortex of the macaque. Our model consists of ≈4,000 integrate-and-fire, conductance-based point neurons, representing the cells in a small, 1-mm2 patch of an input layer of the primary visual cortex. In the model the local connections are isotropic and nonspecific, and convergent input from the lateral geniculate nucleus confers cortical cells with orientation and spatial phase preference. The balance between lateral connections and lateral geniculate nucleus drive determines whether individual neurons in this recurrent circuit are simple or complex. The model reproduces qualitatively the experimentally observed distributions of both extracellular and intracellular measures of simple and complex response. PMID:14695891
Using of Video Modeling in Teaching a Simple Meal Preparation Skill for Pupils of Down Syndrome
ERIC Educational Resources Information Center
AL-Salahat, Mohammad Mousa
2016-01-01
The current study aimed to identify the impact of video modeling upon teaching three pupils with Down syndrome the skill of preparing a simple meal (sandwich), where the training was conducted in a separate classroom in schools of normal students. The training consisted of (i) watching the video of an intellectually disabled pupil, who is…
pyhector: A Python interface for the simple climate model Hector
Willner, Sven N.; Hartin, Corinne; Gieseke, Robert
2017-04-01
Here, pyhector is a Python interface for the simple climate model Hector (Hartin et al. 2015) developed in C++. Simple climate models like Hector can, for instance, be used in the analysis of scenarios within integrated assessment models like GCAM, in the emulation of complex climate models, and in uncertainty analyses. Hector is an open-source, object-oriented, simple global climate carbon-cycle model. Its carbon cycle consists of a one-pool atmosphere, three terrestrial pools which can be broken down into finer biomes or regions, and four carbon pools in the ocean component. The terrestrial carbon cycle includes primary production and respiration fluxes. The ocean carbon cycle circulates carbon via a simplified thermohaline circulation, calculating air-sea fluxes as well as the marine carbonate system. The model input is time series of greenhouse gas emissions; as example scenarios for these, the pyhector package contains the Representative Concentration Pathways (RCPs).
Honda, Hidehito; Matsuka, Toshihiko; Ueda, Kazuhiro
2017-05-01
Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in terms of attribute substitution in heuristic use (Kahneman & Frederick, 2005). In this framework, it is predicted that people will rely on heuristic or knowledge-based inference depending on the subjective difficulty of the inference task. We conducted competitive tests of binary choice inference models representing simple heuristics (fluency and familiarity heuristics) and knowledge-based inference models. We found that a simple heuristic model (especially a familiarity heuristic model) explained inference patterns for subjectively difficult inference tasks, and that a knowledge-based inference model explained subjectively easy inference tasks. These results were consistent with the predictions of the attribute substitution framework. Issues on usage of simple heuristics and psychological processes are discussed. Copyright © 2016 Cognitive Science Society, Inc.
SEEPLUS: A SIMPLE ONLINE CLIMATE MODEL
NASA Astrophysics Data System (ADS)
Tsutsui, Junichi
A web application for a simple climate model - SEEPLUS (a Simple climate model to Examine Emission Pathways Leading to Updated Scenarios) - has been developed. SEEPLUS consists of carbon-cycle and climate-change modules, through which it provides the information infrastructure required to perform climate-change experiments, even on millennial timescales. The main objective of this application is to share the latest scientific knowledge acquired from climate modeling studies among the different stakeholders involved in climate-change issues. Both the carbon-cycle and climate-change modules employ impulse response functions (IRFs) for their key processes, thereby enabling the model to integrate the outcome from an ensemble of complex climate models. The current IRF parameters and forcing manipulation are basically consistent with, or within an uncertainty range of, the understanding of certain key aspects such as the equilibrium climate sensitivity and ocean CO2 uptake data documented in representative literature. The carbon-cycle module enables inverse calculation to determine the emission pathway required in order to attain a given concentration pathway, thereby providing a flexible way to compare the module with more advanced modeling studies. The module also enables analytical evaluation of its equilibrium states, thereby facilitating the long-term planning of global warming mitigation.
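The IRF idea at the core of such a model can be illustrated with a toy calculation (not the SEEPLUS code; the single response timescale and sensitivity below are invented for illustration): the temperature anomaly is the discrete convolution of a radiative-forcing series with an exponential impulse response.

```python
import math

def temperature_response(forcing, sensitivity=0.8, tau=30.0):
    """Temperature anomaly (K) from a yearly radiative-forcing series
    (W/m^2), via convolution with an exponential impulse response.
    sensitivity and tau are illustrative, not calibrated, values."""
    temps = []
    for t in range(len(forcing)):
        dT = sum(forcing[s] * (sensitivity / tau) * math.exp(-(t - s) / tau)
                 for s in range(t + 1))
        temps.append(dT)
    return temps
```

Under a constant step forcing the response relaxes toward sensitivity times the forcing, which is the behaviour an IRF-based module exploits to emulate an ensemble of complex models with a handful of fitted parameters.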
Dense simple plasmas as high-temperature liquid simple metals
NASA Technical Reports Server (NTRS)
Perrot, F.
1990-01-01
The thermodynamic properties of dense plasmas considered as high-temperature liquid metals are studied. An attempt is made to show that the neutral pseudoatom picture of liquid simple metals may be extended for describing plasmas in ranges of densities and temperatures where their electronic structure remains 'simple'. The primary features of the model when applied to plasmas include the temperature-dependent self-consistent calculation of the electron charge density and the determination of a density and temperature-dependent ionization state.
ON-LINE CALCULATOR: FORWARD CALCULATION JOHNSON ETTINGER MODEL
On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
ON-LINE CALCULATOR: JOHNSON ETTINGER VAPOR INTRUSION MODEL
On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
A simple marriage model for the power-law behaviour in the frequency distributions of family names
NASA Astrophysics Data System (ADS)
Wu, Hao-Yun; Chou, Chung-I.; Tseng, Jie-Jun
2011-01-01
In many countries, the frequency distributions of family names are found to decay as a power law with an exponent ranging from 1.0 to 2.2. In this work, we propose a simple marriage model which can reproduce this power-law behaviour. Our model, based on the evolution of families, consists of the growth of big families and the formation of new families. Preliminary results from the model show that the name distributions are in good agreement with empirical data from Taiwan and Norway.
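A minimal sketch of this kind of model (an illustration under assumed parameters, not the authors' implementation): each time step either founds a new family with small probability or adds a person to an existing family chosen in proportion to its size, the preferential growth mechanism that produces power-law name distributions.

```python
import random

def simulate_family_names(steps=20000, new_family_prob=0.05, seed=1):
    """Growth-of-families sketch: each step either founds a new family
    (a new name) or, with the complementary probability, adds a person
    to an existing family chosen proportionally to its size."""
    rng = random.Random(seed)
    sizes = [1]      # sizes[i] = number of members of family i
    members = [0]    # one entry per person: their family id
    for _ in range(steps):
        if rng.random() < new_family_prob:
            sizes.append(1)
            members.append(len(sizes) - 1)
        else:
            fam = rng.choice(members)  # size-proportional choice
            sizes[fam] += 1
            members.append(fam)
    return sizes
```

Ranking the resulting sizes and plotting frequency against size on log-log axes exhibits the heavy tail; the exponent depends on the new-family probability.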
Electrical conductivity of metal powders under pressure
NASA Astrophysics Data System (ADS)
Montes, J. M.; Cuevas, F. G.; Cintas, J.; Urban, P.
2011-12-01
A model for calculating the electrical conductivity of a compressed powder mass consisting of oxide-coated metal particles has been derived. A theoretical tool previously developed by the authors, the so-called `equivalent simple cubic system', was used in the model deduction. This tool is based on relating the actual powder system to an equivalent one consisting of deforming spheres packed in a simple cubic lattice, which is much easier to examine. The proposed model relates the effective electrical conductivity of the powder mass under compression to its level of porosity. Other physically measurable parameters in the model are the conductivities of the metal and oxide constituting the powder particles, their radii, the mean thickness of the oxide layer and the tap porosity of the powder. Two additional parameters controlling the effect of the descaling of the particle oxide layer were empirically introduced. The proposed model was experimentally verified by measurements of the electrical conductivity of aluminium, bronze, iron, nickel and titanium powders under pressure. The consistency between theoretical predictions and experimental results was reasonably good in all cases.
Simple analytical model of a thermal diode
NASA Astrophysics Data System (ADS)
Kaushik, Saurabh; Kaushik, Sachin; Marathe, Rahul
2018-05-01
Recently, much attention has been given to the manipulation of heat by constructing thermal devices such as thermal diodes, transistors, and logic gates. Many of the proposed models have an asymmetry that leads to the desired effect, and non-linear interactions among the particles are also essential; such models, however, lack an analytical understanding. Here we propose a simple, analytically solvable model of a thermal diode. Our model consists of classical spins in contact with multiple heat baths and constant external magnetic fields. Interestingly, the magnetic field is the only parameter required to obtain the heat-rectification effect.
A computational method for optimizing fuel treatment locations
Mark A. Finney
2006-01-01
Modeling and experiments have suggested that spatial fuel treatment patterns can influence the movement of large fires. On simple theoretical landscapes consisting of two fuel types (treated and untreated) optimal patterns can be analytically derived that disrupt fire growth efficiently (i.e. with less area treated than random patterns). Although conceptually simple,...
NASA Astrophysics Data System (ADS)
Cochrane, C. J.; Lenahan, P. M.; Lelis, A. J.
2009-03-01
We have identified a magnetic resonance spectrum associated with minority carrier lifetime killing defects in device quality 4H SiC through magnetic resonance measurements in bipolar junction transistors using spin dependent recombination (SDR). The SDR spectrum has nine distinguishable lines; it is, within experimental error, essentially isotropic with four distinguishable pairs of side peaks symmetric about the strong center line. The line shape is, within experimental error, independent of bias voltage and recombination current. The large amplitude and spacing of the inner pair of side peaks and three more widely separated pairs of side peaks are not consistent with either a simple silicon or carbon vacancy or a carbon or silicon antisite. This indicates that the lifetime killing defect is not a simple defect but a defect aggregate. The spectrum is consistent with a multidefect cluster with an electron spin S =1/2. (The observed spectrum has not been reported previously in the magnetic resonance literature on SiC.) A fairly strong argument can be made in terms of a first order model linking the SDR spectrum to a divacancy or possibly a vacancy/antisite pair. The SDR amplitude versus gate voltage is semiquantitatively consistent with a very simple model in which the defect is uniformly distributed within the depletion region of the base/collector junction and is also the dominating recombination center. The large relative amplitude of the SDR response is more nearly consistent with a Kaplan-Solomon-Mott-like model for spin dependent recombination than the Lepine model.
On the Bayesian Nonparametric Generalization of IRT-Type Models
ERIC Educational Resources Information Center
San Martin, Ernesto; Jara, Alejandro; Rolin, Jean-Marie; Mouchart, Michel
2011-01-01
We study the identification and consistency of Bayesian semiparametric IRT-type models, where the uncertainty on the abilities' distribution is modeled using a prior distribution on the space of probability measures. We show that for the semiparametric Rasch Poisson counts model, simple restrictions ensure the identification of a general…
New approach in the quantum statistical parton distribution
NASA Astrophysics Data System (ADS)
Sohaily, Sozha; Vaziri (Khamedi), Mohammad
2017-12-01
An attempt to find simple parton distribution functions (PDFs) based on a quantum statistical approach is presented. The PDFs described by the statistical model have very interesting physical properties that help in understanding the structure of partons. The longitudinal portion of the distribution functions is given by applying the maximum entropy principle. An interesting and simple approach to determining the statistical variables exactly, without fitting and fixing parameters, is surveyed. Analytic expressions for the x-dependent PDFs are obtained over the whole x region [0, 1], and the computed distributions are consistent with experimental observations. The agreement with experimental data gives a robust confirmation of the simple statistical model presented here.
QSAR modelling using combined simple competitive learning networks and RBF neural networks.
Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E
2018-04-01
The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performances than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
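The two-phase structure can be sketched in a few lines (illustrative only; the learning rate, kernel width, and data in the example are invented, and the real study fits output weights on QSAR descriptors): phase 1 moves the winning centre towards each sample, and phase 2 computes the Gaussian activations an RBF network would combine linearly in its output layer.

```python
import math

def scl_centres(data, k=2, epochs=50, lr=0.1):
    """Phase 1 sketch: simple competitive learning. The winning centre
    (nearest to the sample) moves a fraction lr towards that sample."""
    centres = [list(data[j]) for j in range(k)]  # deterministic init
    for _ in range(epochs):
        for x in data:
            win = min(range(k),
                      key=lambda j: sum((a - c) ** 2
                                        for a, c in zip(x, centres[j])))
            centres[win] = [c + lr * (a - c)
                            for a, c in zip(x, centres[win])]
    return centres

def rbf_features(x, centres, sigma=1.0):
    """Phase 2 sketch: Gaussian activations of an RBF hidden layer."""
    return [math.exp(-sum((a - c) ** 2 for a, c in zip(x, ctr))
                     / (2 * sigma ** 2))
            for ctr in centres]
```

On two well-separated clusters, the centres settle near the cluster means, which is exactly the role the SCL network plays in placing RBF centres.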
The Money-Creation Model: An Alternative Pedagogy.
ERIC Educational Resources Information Center
Thornton, Mark; And Others
1991-01-01
Presents a teaching model that is consistent with the traditional approach to demonstrating the expansion and contraction of the money supply. Suggests that the model provides a simple and convenient visual image of changes in the monetary system. Describes the model as juxtaposing the behavior of the moneyholding public with that of the…
Estimating linear temporal trends from aggregated environmental monitoring data
Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.
2017-01-01
Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling and process variation to be separated. We used simulated time series to compare linear trend estimates from three state-space models, a simple linear regression model, and an autoregressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program. We specifically estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression had the best performance of all the models considered because it was best able to recover parameters and had consistent numerical convergence. Conversely, the simple linear regression did the worst job of estimating population sizes in a given year. The state-space models did not estimate trends well, but estimated population sizes best when the models converged. We found that a simple linear regression performed better than the more complex autoregressive and state-space models when used to analyze aggregated environmental monitoring data.
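For the simple linear regression referred to above, the trend estimate is just an ordinary least-squares slope over the yearly aggregated series. A self-contained sketch (the yearly data in the usage note are made up, not the study's data):

```python
def linear_trend(years, values):
    """Ordinary least-squares slope and intercept for a yearly series
    of aggregated (e.g., site-averaged) observations."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    sxx = sum((x - mean_x) ** 2 for x in years)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x
```

Because the aggregation averages away within-year sampling variation, this estimator recovers the trend parameter well even though it cannot separate sampling from process variation, consistent with the study's finding.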
Doubly self-consistent field theory of grafted polymers under simple shear in steady state.
Suo, Tongchuan; Whitmore, Mark D
2014-03-21
We present a generalization of the numerical self-consistent mean-field theory of polymers to the case of grafted polymers under simple shear. The general theoretical framework is presented, and then applied to three different chain models: rods, Gaussian chains, and finitely extensible nonlinear elastic (FENE) chains. The approach is self-consistent at two levels. First, for any flow field, the polymer density profile and effective potential are calculated self-consistently in a manner similar to the usual self-consistent field theory of polymers, except that the calculation is inherently two-dimensional even for a laterally homogeneous system. Second, through the use of a modified Brinkman equation, the flow field and the polymer profile are made self-consistent with respect to each other. For all chain models, we find that reasonable levels of shear cause the chains to tilt, but it has very little effect on the overall thickness of the polymer layer, causing a small decrease for rods, and an increase of no more than a few percent for the Gaussian and FENE chains. Using the FENE model, we also probe the individual bond lengths, bond correlations, and bond angles along the chains, the effects of the shear on them, and the solvent and bonded stress profiles. We find that the approximations needed within the theory for the Brinkman equation affect the bonded stress, but none of the other quantities.
NASA Astrophysics Data System (ADS)
Charnay, B.; Bézard, B.; Baudino, J.-L.; Bonnefoy, M.; Boccaletti, A.; Galicher, R.
2018-02-01
We developed a simple, physical, and self-consistent cloud model for brown dwarfs and young giant exoplanets. We compared different parametrizations for the cloud particle size, by fixing either particle radii or the mixing efficiency (the parameter f_sed), or by estimating particle radii from simple microphysics. The cloud scheme with simple microphysics appears to be the best parametrization by successfully reproducing the observed photometry and spectra of brown dwarfs and young giant exoplanets. In particular, it reproduces the L–T transition, due to the condensation of silicate and iron clouds below the visible/near-IR photosphere. It also reproduces the reddening observed for low-gravity objects, due to an increase of cloud optical depth for low gravity. In addition, we found that the cloud greenhouse effect shifts chemical equilibrium, increasing the abundances of species stable at high temperature. This effect should significantly contribute to the strong variation of methane abundance at the L–T transition and to the methane depletion observed on young exoplanets. Finally, we predict the existence of a continuum of brown dwarfs and exoplanets for absolute J magnitude = 15–18 and J-K color = 0–3, due to the evolution of the L–T transition with gravity. This self-consistent model therefore provides a general framework to understand the effects of clouds and appears well-suited for atmospheric retrievals.
Towards a Simple Constitutive Model for Bread Dough
NASA Astrophysics Data System (ADS)
Tanner, Roger I.
2008-07-01
Wheat flour dough is an example of a soft solid material consisting of a gluten (rubbery) network with starch particles as a filler. The volume fraction of the starch filler is high, typically 60%. A computer-friendly constitutive model has been lacking for this type of material, and here we report on progress towards finding such a model. The model must describe the response to small strains, simple shearing starting from rest, simple elongation, biaxial straining, recoil, and various other transient flows. A viscoelastic Lodge-type model involving a damage function, which depends on strain from an initial reference state, fits the given data well, and it is also able to predict the thickness at exit from dough sheeting, which has been a long-standing unsolved puzzle. The model also shows an apparent rate-dependent yield stress, although no explicit yield stress is built into the model. This behaviour agrees with the early (1934) observations of Schofield and Scott Blair on dough recoil after unloading.
Intelligent traffic signals : extending the range of self-organization in the BML model.
DOT National Transportation Integrated Search
2013-04-01
The two-dimensional traffic model of Biham, Middleton and Levine (Phys. Rev. A, 1992) is a simple cellular automaton that exhibits a wide range of complex behavior. It consists of both northbound and eastbound cars traveling on a rectangular arra...
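One synchronous update of the BML automaton can be written compactly. This is a sketch under the usual conventions (eastbound cars move on even time steps, northbound on odd ones, wrapping on a torus, and a car moves only into a currently empty cell):

```python
def bml_step(grid, t):
    """One synchronous update of the Biham-Middleton-Levine automaton.
    grid cells hold 'E' (eastbound), 'N' (northbound), or '.' (empty);
    'E' cars move east on even t, 'N' cars move north on odd t."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    kind = 'E' if t % 2 == 0 else 'N'
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != kind:
                continue
            # target cell: east for 'E', north (row - 1) for 'N', torus wrap
            nr, nc = (r, (c + 1) % cols) if kind == 'E' else ((r - 1) % rows, c)
            if grid[nr][nc] == '.':
                new[nr][nc] = kind
                new[r][c] = '.'
    return new
```

Iterating this rule from random initial conditions reproduces the model's characteristic free-flowing, jammed, and intermediate phases as car density varies.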
Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics
Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro; ...
2017-01-03
In this paper, we present a consistent implicit incompressible smoothed particle hydrodynamics (I2SPH) discretization of Navier–Stokes, Poisson–Boltzmann, and advection–diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two- and three-dimensional electrokinetic flows in simple or complex geometries. The accuracy and convergence of the consistent I2SPH are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. Lastly, the new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
Self-organization, the cascade model, and natural hazards.
Turcotte, Donald L; Malamud, Bruce D; Guzzetti, Fausto; Reichenbach, Paola
2002-02-19
We consider the frequency-size statistics of two natural hazards, forest fires and landslides. Both appear to satisfy power-law (fractal) distributions to a good approximation under a wide variety of conditions. Two simple cellular-automata models have been proposed as analogs for this observed behavior, the forest fire model for forest fires and the sand pile model for landslides. The behavior of these models can be understood in terms of a self-similar inverse cascade. For the forest fire model the cascade consists of the coalescence of clusters of trees; for the sand pile model the cascade consists of the coalescence of metastable regions.
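The sand pile analog mentioned above is straightforward to implement. A minimal sketch of the standard toppling rule (open boundaries assumed, so grains falling off the edge are lost): any cell holding four or more grains gives one grain to each of its neighbours, repeating until the whole grid is stable.

```python
def relax(grid):
    """Topple a sand pile until every cell holds fewer than 4 grains.
    Open boundaries: grains pushed off the edge are lost."""
    rows, cols = len(grid), len(grid[0])
    unstable = True
    while unstable:
        unstable = False
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] >= 4:
                    grid[r][c] -= 4
                    unstable = True
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols:
                            grid[nr][nc] += 1
    return grid
```

Dropping grains one at a time onto a large grid and recording the number of topplings per drop yields the power-law avalanche-size statistics that motivate the landslide analogy.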
A survey of commercial object-oriented database management systems
NASA Technical Reports Server (NTRS)
Atkins, John
1992-01-01
The object-oriented data model is the culmination of over thirty years of database research. Initially, database research focused on the need to provide information in a consistent and efficient manner to the business community. Early data models such as the hierarchical model and the network model met the goal of consistent and efficient access to data and were substantial improvements over simple file mechanisms for storing and accessing data. However, these models required highly skilled programmers to provide access to the data. Consequently, in the early 1970s E. F. Codd, an IBM research computer scientist, proposed a new data model based on the simple mathematical notion of the relation. This model is known as the Relational Model. In the relational model, data is represented in flat tables (or relations) which have no physical or internal links between them. The simplicity of this model fostered the development of powerful but relatively simple query languages that made data directly accessible to the general database user. Except for large, multi-user database systems, a database professional was in general no longer necessary. Database professionals found that traditional data in the form of character data, dates, and numeric data were easily represented and managed via the relational model. Commercial relational database management systems proliferated and the performance of relational databases improved dramatically. However, there was a growing community of potential database users whose needs were not met by the relational model. These users needed to store data with data types not available in the relational model and required a far richer modelling environment than the relational model provided. Indeed, the complexity of the objects to be represented in the model mandated a new approach to database technology. The Object-Oriented Model was the result.
Statistical Mechanics of the US Supreme Court
NASA Astrophysics Data System (ADS)
Lee, Edward D.; Broedersz, Chase P.; Bialek, William
2015-07-01
We build simple models for the distribution of voting patterns in a group, using the Supreme Court of the United States as an example. The maximum entropy model consistent with the observed pairwise correlations among justices' votes, an Ising spin glass, agrees quantitatively with the data. While all correlations (perhaps surprisingly) are positive, the effective pairwise interactions in the spin glass model have both signs, recovering the intuition that ideologically opposite justices negatively influence one another. Despite the competing interactions, a strong tendency toward unanimity emerges from the model, organizing the voting patterns in a relatively simple "energy landscape." Besides unanimity, other energy minima in this landscape, or maxima in probability, correspond to prototypical voting states, such as the ideological split or a tightly correlated, conservative core. The model correctly predicts the correlation of justices with the majority and gives us a measure of their influence on the majority decision. These results suggest that simple models, grounded in statistical physics, can capture essential features of collective decision making quantitatively, even in a complex political context.
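The pairwise maximum entropy model has a simple exact form for small groups. A toy sketch (not the fitted Supreme Court model; the fields h and couplings J in the usage note are illustrative): each voting configuration s in {-1, +1}^n gets Boltzmann weight exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j), and for nine justices the 512 states can be enumerated exactly.

```python
import itertools
import math

def state_probabilities(h, J):
    """Exact Boltzmann distribution of a pairwise maximum-entropy
    (Ising) model: P(s) proportional to
    exp(sum_i h[i]*s[i] + sum_{i<j} J[i][j]*s[i]*s[j])."""
    n = len(h)
    states = list(itertools.product((-1, 1), repeat=n))
    weights = []
    for s in states:
        logw = sum(h[i] * s[i] for i in range(n))
        logw += sum(J[i][j] * s[i] * s[j]
                    for i in range(n) for j in range(i + 1, n))
        weights.append(math.exp(logw))
    z = sum(weights)
    return {s: w / z for s, w in zip(states, weights)}
```

With zero fields and uniformly positive couplings, the two unanimous states are the deepest energy minima, mirroring the tendency toward unanimity described above; sign-mixed couplings carve out additional minima such as the ideological split.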
A Simple Model of Global Aerosol Indirect Effects
NASA Technical Reports Server (NTRS)
Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter
2013-01-01
Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but these lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to preindustrial cloud condensation nuclei concentration, preindustrial accumulation mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of present-day AIEs as low as -5 W/sq m and as high as -0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence of AIE uncertainty on uncertainty in parameter values.
Synapse fits neuron: joint reduction by model inversion.
van der Scheer, H T; Doelman, A
2017-08-01
In this paper, we introduce a novel simplification method for dealing with physical systems that can be thought to consist of two subsystems connected in series, such as a neuron and a synapse. The aim of our method is to help find a simple, yet convincing model of the full cascade-connected system, assuming that a satisfactory model of one of the subsystems, e.g., the neuron, is already given. Our method allows us to validate a candidate model of the full cascade against data at a finer scale. In our main example, we apply our method to part of the squid's giant fiber system. We first postulate a simple, hypothetical model of cell-to-cell signaling based on the squid's escape response. Then, given a FitzHugh-type neuron model, we derive the verifiable model of the squid giant synapse that this hypothesis implies. We show that the derived synapse model accurately reproduces synaptic recordings, hence lending support to the postulated, simple model of cell-to-cell signaling, which thus, in turn, can be used as a basic building block for network models.
NASA Astrophysics Data System (ADS)
Johari, A. H.; Muslim
2018-05-01
An experiential learning model using a simple physics kit was implemented to assess the improvement in senior high school students' attitudes toward physics on the topic of fluids. This study aims to obtain a description of the increase in students' attitudes toward physics. The research method was a quasi-experiment with a non-equivalent pretest-posttest control group design. Two tenth-grade classes were involved: 28 students in the experimental class and 26 in the control class. The increase in attitude toward physics was measured using an attitude scale consisting of 18 questions. The experimental class showed an average gain of 86.5%, meeting the criterion that almost all students improved, compared with 53.75% in the control class, meeting the criterion that half of the students improved. This result shows that experiential learning using a simple physics kit can improve attitude toward physics compared to experiential learning without the kit.
Doubly self-consistent field theory of grafted polymers under simple shear in steady state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suo, Tongchuan; Whitmore, Mark D., E-mail: mark-whitmore@umanitoba.ca
2014-03-21
We present a generalization of the numerical self-consistent mean-field theory of polymers to the case of grafted polymers under simple shear. The general theoretical framework is presented, and then applied to three different chain models: rods, Gaussian chains, and finitely extensible nonlinear elastic (FENE) chains. The approach is self-consistent at two levels. First, for any flow field, the polymer density profile and effective potential are calculated self-consistently in a manner similar to the usual self-consistent field theory of polymers, except that the calculation is inherently two-dimensional even for a laterally homogeneous system. Second, through the use of a modified Brinkman equation, the flow field and the polymer profile are made self-consistent with respect to each other. For all chain models, we find that reasonable levels of shear cause the chains to tilt, but it has very little effect on the overall thickness of the polymer layer, causing a small decrease for rods, and an increase of no more than a few percent for the Gaussian and FENE chains. Using the FENE model, we also probe the individual bond lengths, bond correlations, and bond angles along the chains, the effects of the shear on them, and the solvent and bonded stress profiles. We find that the approximations needed within the theory for the Brinkman equation affect the bonded stress, but none of the other quantities.
Validation of the replica trick for simple models
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2018-04-01
We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.
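For reference, the replica trick evaluates the quenched average of the free energy by computing the moments ⟨Z^n⟩ for integer n and then analytically continuing to n → 0; it is the validity of this continuation, stated below in standard form, that the paper examines for its solvable models:

```latex
\langle \ln Z \rangle
  \;=\; \lim_{n \to 0} \frac{\langle Z^{n} \rangle - 1}{n}
  \;=\; \lim_{n \to 0} \frac{1}{n} \ln \langle Z^{n} \rangle
```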
Modeling of electron cyclotron resonance discharges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyyappan, M.; Govindan, T.R.
The current trend in plasma processing is the development of high density plasma sources to achieve high deposition and etch rates, uniformity over large areas, and low wafer damage. Here, a simple model to predict the spatially-averaged plasma characteristics of electron cyclotron resonance (ECR) reactors is presented. The model consists of global conservation equations for species concentration, electron density and energy. A gas energy balance is used to predict the neutral temperature self-consistently. The model is demonstrated for an ECR argon discharge. The predicted behavior of the discharge as a function of system variables agrees well with experimental observations.
Greenhouse effect: temperature of a metal sphere surrounded by a glass shell and heated by sunlight
NASA Astrophysics Data System (ADS)
Nguyen, Phuc H.; Matzner, Richard A.
2012-01-01
We study the greenhouse effect on a model satellite consisting of a tungsten sphere surrounded by a thin spherical, concentric glass shell, with a small gap between the sphere and the shell. The system sits in vacuum and is heated by sunlight incident along the z-axis. This development is a generalization of the simple treatment of the greenhouse effect given by Kittel and Kroemer (1980 Thermal Physics (San Francisco: Freeman)) and can serve as a very simple model demonstrating the much more complex Earth greenhouse effect. Solution of the model problem provides an excellent pedagogical tool at the Junior/Senior undergraduate level.
Multibody dynamic analysis using a rotation-free shell element with corotational frame
NASA Astrophysics Data System (ADS)
Shi, Jiabei; Liu, Zhuyong; Hong, Jiazhen
2018-03-01
The rotation-free shell formulation is a simple and effective method to model a shell with large deformation, and it is compatible with existing finite element method theories. However, rotation-free shells are seldom employed in multibody systems. Using a derivative of rigid body motion, an efficient nonlinear shell model is proposed based on the rotation-free shell element and a corotational frame. The bending and membrane strains of the shell are simplified by isolating deformational displacements from the detailed description of rigid body motion. The consistent stiffness matrix can be obtained easily in this form of shell model. To model multibody systems consisting of the presented shells, joint kinematic constraints, including translational and rotational constraints, are deduced in the context of the geometrically nonlinear rotation-free element. A simple node-to-surface contact discretization and a penalty method are adopted for contacts between shells. A series of multibody system dynamics analyses are presented to validate the proposed formulation. Furthermore, the deployment of a large-scale solar array is presented to verify the comprehensive performance of the nonlinear shell model.
Scalable Track Detection in SAR CCD Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, James G; Quach, Tu-Thach
Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are often too simple to capture natural track features such as continuity and parallelism. We present a simple convolutional network architecture consisting of a series of 3-by-3 convolutions to detect tracks. The network is trained end-to-end to learn natural track features entirely from data. The network is computationally efficient and improves the F-score on a standard dataset to 0.988, up from 0.907 obtained by the current state-of-the-art method.
Nature of solidification of nanoconfined organic liquid layers.
Lang, X Y; Zhu, Y F; Jiang, Q
2007-01-30
A simple model is established for solidification of a nanoconfined liquid under nonequilibrium conditions. In terms of this model, solidification is governed by conjoint finite-size and interface effects, which are directly related to the cooling rate or the relaxation time of the undercooled liquid. The model predictions are consistent with available experimental results.
Distributed run of a one-dimensional model in a regional application using SOAP-based web services
NASA Astrophysics Data System (ADS)
Smiatek, Gerhard
This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.
Combinational Reasoning of Quantitative Fuzzy Topological Relations for Simple Fuzzy Regions
Liu, Bo; Li, Dajun; Xia, Yuanping; Ruan, Jian; Xu, Lili; Wu, Huanyi
2015-01-01
In recent years, formalization and reasoning of topological relations have become a hot topic as a means to generate knowledge about the relations between spatial objects at the conceptual and geometrical levels. These mechanisms have been widely used in spatial data query, spatial data mining, evaluation of equivalence and similarity in a spatial scene, as well as for consistency assessment of the topological relations of multi-resolution spatial databases. The concept of computational fuzzy topological space is applied to simple fuzzy regions to efficiently and more accurately solve fuzzy topological relations. Thus, extending the existing research and improving upon the previous work, this paper presents a new method to describe fuzzy topological relations between simple spatial regions in Geographic Information Sciences (GIS) and Artificial Intelligence (AI). Firstly, we propose new definitions for simple fuzzy line segments and simple fuzzy regions based on computational fuzzy topology. Then, based on these new definitions, we propose a combinational reasoning method to compute the topological relations between simple fuzzy regions. This study finds that there are (1) 23 different topological relations between a simple crisp region and a simple fuzzy region, and (2) 152 different topological relations between two simple fuzzy regions. Finally, we discuss several examples to demonstrate the validity of the new method; through comparisons with existing fuzzy models, we show that the proposed method is more expressive than the existing fuzzy models. PMID:25775452
Model compilation: An approach to automated model derivation
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo
1990-01-01
An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. With an implemented example, it is illustrated how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
Identifying mechanisms for superdiffusive dynamics in cell trajectories
NASA Astrophysics Data System (ADS)
Passucci, Giuseppe; Brasch, Megan; Henderson, James; Manning, M. Lisa
Self-propelled particle (SPP) models have been used to explore features of active matter such as motility-induced phase separation, jamming, and flocking, and are often used to model biological cells. However, many cells exhibit super-diffusive trajectories, where displacements scale faster than t^(1/2) in all directions, and these are not captured by traditional SPP models. We extract cell trajectories from image stacks of mouse fibroblast cells moving on 2D substrates and find super-diffusive mean-squared displacements in all directions across varying densities. Two SPP model modifications have been proposed to capture super-diffusive dynamics: Levy walks and heterogeneous motility parameters. In mouse fibroblast cells displacement probability distributions collapse when time is rescaled by a power greater than 1/2, which is consistent with Levy walks. We show that a simple SPP model with heterogeneous rotational noise can also generate a similar collapse. Furthermore, a close examination of statistics extracted directly from cell trajectories is consistent with a heterogeneous mobility SPP model and inconsistent with a Levy walk model. Our work demonstrates that a simple set of analyses can distinguish between mechanisms for anomalous diffusion in active matter.
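As a rough illustration of the kind of analysis described (assumed parameters, not the authors' code), the sketch below simulates one persistent random walker, as would be drawn from a heterogeneous-mobility population, and estimates the MSD scaling exponent; over a finite lag window such a walker already yields an apparent exponent between 1 (diffusive) and 2 (ballistic).

```python
import numpy as np

rng = np.random.default_rng(1)

def msd_exponent(xy, max_lag=100):
    """Fit MSD(t) ~ t^alpha on log-log axes; alpha > 1 suggests superdiffusion."""
    lags = np.arange(1, max_lag)
    msd = np.array([np.mean(np.sum((xy[lag:] - xy[:-lag]) ** 2, axis=1))
                    for lag in lags])
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return alpha

def spp_trajectory(steps=5000, v0=1.0, dr=0.05):
    """Persistent random walker: heading diffuses with rotational noise dr.

    In a heterogeneous-mobility SPP model, dr would differ from cell to cell.
    """
    theta = np.cumsum(rng.normal(0.0, np.sqrt(2 * dr), steps))
    steps_xy = v0 * np.column_stack([np.cos(theta), np.sin(theta)])
    return np.cumsum(steps_xy, axis=0)

traj = spp_trajectory()
alpha = msd_exponent(traj)
```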
Two classes of ODE models with switch-like behavior.
Just, Winfried; Korb, Mason; Elbert, Ben; Young, Todd
2013-12-01
In cases where the same real-world system can be modeled both by an ODE system ⅅ and a Boolean system 𝔹, it is of interest to identify conditions under which the two systems will be consistent, that is, will make qualitatively equivalent predictions. In this note we introduce two broad classes of relatively simple models that provide a convenient framework for studying such questions. In contrast to the widely known class of Glass networks, the right-hand sides of our ODEs are Lipschitz-continuous. We prove that if 𝔹 has certain structures, consistency between ⅅ and 𝔹 is implied by sufficient separation of time scales in one class of our models. Namely, if the trajectories of 𝔹 are "one-stepping" then we prove a strong form of consistency, and if 𝔹 has a certain monotonicity property then there is a weaker consistency between ⅅ and 𝔹. These results appear to point to more general structural properties that favor consistency between ODE and Boolean models.
Selective Transient Cooling by Impulse Perturbations in a Simple Toy Model
NASA Astrophysics Data System (ADS)
Fabrizio, Michele
2018-06-01
We show in a simple exactly solvable toy model that a properly designed impulse perturbation can transiently cool down low-energy degrees of freedom at the expense of high-energy ones that heat up. The model consists of two infinite-range quantum Ising models: one, the high-energy sector, with a transverse field much bigger than the other, the low-energy sector. The finite-duration perturbation is a spin exchange that couples the two Ising models with an oscillating coupling strength. We find a cooling of the low-energy sector that is optimized by the oscillation frequency in resonance with the spin exchange excitation. After the perturbation is turned off, the Ising model with a low transverse field can even develop a spontaneous symmetry breaking despite being initially above the critical temperature.
A Ball Pool Model to Illustrate Higgs Physics to the Public
ERIC Educational Resources Information Center
Organtini, Giovanni
2017-01-01
A simple model is presented to explain Higgs boson physics to the general public. The model consists of a children's ball pool representing a Universe filled with a certain amount of the Higgs field. The model is suitable for use as a hands-on tool in scientific exhibits and provides a clear explanation of almost all the aspects of the physics of…
Kinetics of DSB rejoining and formation of simple chromosome exchange aberrations
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Nikjoo, H.; O'Neill, P.; Goodhead, D. T.
2000-01-01
PURPOSE: To investigate the role of kinetics in the processing of DNA double strand breaks (DSB), and the formation of simple chromosome exchange aberrations following X-ray exposures to mammalian cells based on an enzymatic approach. METHODS: Using computer simulations based on a biochemical approach, rate-equations that describe the processing of DSB through the formation of a DNA-enzyme complex were formulated. A second model that allows for competition between two processing pathways was also formulated. The formation of simple exchange aberrations was modelled as misrepair during the recombination of single DSB with undamaged DNA. Non-linear coupled differential equations corresponding to biochemical pathways were solved numerically by fitting to experimental data. RESULTS: When mediated by a DSB repair enzyme complex, the processing of single DSB showed a complex behaviour that gives the appearance of fast and slow components of rejoining. This is due to the time-delay caused by the action time of enzymes in biomolecular reactions. It is shown that the kinetic- and dose-responses of simple chromosome exchange aberrations are well described by a recombination model of DSB interacting with undamaged DNA when aberration formation increases with linear dose-dependence. Competition between two or more recombination processes is shown to lead to the formation of simple exchange aberrations with a dose-dependence similar to that of a linear quadratic model. CONCLUSIONS: Using a minimal number of assumptions, the kinetics and dose response observed experimentally for DSB rejoining and the formation of simple chromosome exchange aberrations are shown to be consistent with kinetic models based on enzymatic reaction approaches. A non-linear dose response for simple exchange aberrations is possible in a model of recombination of DNA containing a DSB with undamaged DNA when two or more pathways compete for DSB repair.
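The enzyme-complex rate-equation approach can be caricatured in a few lines. The sketch below uses illustrative rate constants, not the paper's fitted values: a repair enzyme E binds a DSB to form a complex C, which resolves at rate k2; enzyme sequestration in C is what produces the apparent fast and slow rejoining components noted in the results.

```python
import numpy as np

def simulate(dsb0=100.0, e0=10.0, k_on=0.01, k_off=0.05, k2=0.02,
             dt=0.01, steps=200000):
    """Forward-Euler integration of mass-action DSB rejoining kinetics.

    dsb: free double strand breaks; c: DSB-enzyme complexes; e0 - c: free enzyme.
    Returns the number of unrejoined breaks (dsb + c) sampled over time.
    """
    dsb, c = dsb0, 0.0
    history = []
    for i in range(steps):
        e = e0 - c                      # free enzyme (conserved total e0)
        bind = k_on * e * dsb           # complex formation rate
        dsb += dt * (-bind + k_off * c) # breaks bind or are released
        c += dt * (bind - (k_off + k2) * c)  # complexes release or resolve
        if i % 1000 == 0:
            history.append(dsb + c)     # unrejoined breaks
    return np.array(history)

remaining = simulate()
```

Because resolution throughput is capped by the limited enzyme pool, the decay of `remaining` is not a single exponential, giving the appearance of fast and slow rejoining phases.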
Simple spatial scaling rules behind complex cities.
Li, Ruiqi; Dong, Lei; Zhang, Jiang; Wang, Xinran; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene
2017-11-28
Although most wealth and innovation have been the result of human interaction and cooperation, we are not yet able to quantitatively predict the spatial distributions of three main elements of cities: population, roads, and socioeconomic interactions. Using a simple model mainly based on spatial attraction and matching growth mechanisms, we reveal that the spatial scaling rules of these three elements fit within a consistent framework, which allows us to use any single observation to infer the others. All numerical and theoretical results are consistent with empirical data from ten representative cities. In addition, our model can also provide a general explanation of the origins of the universal super- and sub-linear aggregate scaling laws and accurately predict kilometre-level socioeconomic activity. Our work opens a new avenue for uncovering the evolution of cities in terms of the interplay among urban elements, and it has a broad range of applications.
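The super-linear aggregate scaling laws referred to above have the form Y = Y0 * N^beta with beta > 1 for socioeconomic outputs. A minimal sketch, using synthetic data rather than the paper's city data, recovers beta by an ordinary log-log fit:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic cities: output Y ~ Y0 * N^beta with beta > 1 (super-linear),
# plus lognormal scatter. beta_true and logY0_true are made up for the demo.
N = np.logspace(4, 7, 50)                      # city populations
beta_true, logY0_true = 1.15, -2.0
logY = logY0_true + beta_true * np.log(N) + rng.normal(0.0, 0.1, N.size)

# Scaling exponent = slope of log Y against log N.
beta_fit, logY0_fit = np.polyfit(np.log(N), logY, 1)
```

beta_fit > 1 indicates increasing returns to scale; the same fit with beta < 1 would describe sub-linear infrastructure scaling.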
Learning and inference using complex generative models in a spatial localization task.
Bejjanki, Vikranth R; Knill, David C; Aslin, Richard N
2016-01-01
A large body of research has established that, under relatively simple task conditions, human observers integrate uncertain sensory information with learned prior knowledge in an approximately Bayes-optimal manner. However, in many natural tasks, observers must perform this sensory-plus-prior integration when the underlying generative model of the environment consists of multiple causes. Here we ask if the Bayes-optimal integration seen with simple tasks also applies to such natural tasks when the generative model is more complex, or whether observers rely instead on a less efficient set of heuristics that approximate ideal performance. Participants localized a "hidden" target whose position on a touch screen was sampled from a location-contingent bimodal generative model with different variances around each mode. Over repeated exposure to this task, participants learned the a priori locations of the target (i.e., the bimodal generative model), and integrated this learned knowledge with uncertain sensory information on a trial-by-trial basis in a manner consistent with the predictions of Bayes-optimal behavior. In particular, participants rapidly learned the locations of the two modes of the generative model, but the relative variances of the modes were learned much more slowly. Taken together, our results suggest that human performance in a more complex localization task, which requires the integration of sensory information with learned knowledge of a bimodal generative model, is consistent with the predictions of Bayes-optimal behavior, but involves a much longer time-course than in simpler tasks.
A Saturnian cam current system driven by asymmetric thermospheric heating
NASA Astrophysics Data System (ADS)
Smith, C. G. A.
2011-02-01
We show that asymmetric heating of Saturn's thermosphere can drive a current system consistent with the magnetospheric ‘cam’ proposed by Espinosa, Southwood & Dougherty. A geometrically simple heating distribution is imposed on the Northern hemisphere of a simplified three-dimensional global circulation model of Saturn's thermosphere. Currents driven by the resulting winds are calculated using a globally averaged ionosphere model. Using a simple assumption about how divergences in these currents close by flowing along dipolar field lines between the Northern and Southern hemispheres, we estimate the magnetic field perturbations in the equatorial plane and show that they are broadly consistent with the proposed cam fields, showing a roughly uniform field implying radial and azimuthal components in quadrature. We also identify a small longitudinal phase drift in the cam current with radial distance as a characteristic of a thermosphere-driven current system. However, at present our model does not produce magnetic field perturbations of the required magnitude, falling short by a factor of ˜100, a discrepancy that may be a consequence of an incomplete model of the ionospheric conductance.
The ideas behind self-consistent expansion
NASA Astrophysics Data System (ADS)
Schwartz, Moshe; Katzav, Eytan
2008-04-01
In recent years we have witnessed a growing interest in various non-equilibrium systems described in terms of stochastic nonlinear field theories. In some of those systems, like KPZ and related models, the interesting behavior is in the strong coupling regime, which is inaccessible by traditional perturbative treatments such as the dynamical renormalization group (DRG). A useful tool in the study of such systems is the self-consistent expansion (SCE), which might be said to generate its own 'small parameter'. The SCE has the advantage that its structure is just that of a regular expansion; the only difference is that the simple system around which the expansion is performed is adjustable. The purpose of this paper is to present the method in a simple and understandable way that hopefully will make it accessible to a wider public working on non-equilibrium statistical physics.
Anisn-Dort Neutron-Gamma Flux Intercomparison Exercise for a Simple Testing Model
NASA Astrophysics Data System (ADS)
Boehmer, B.; Konheiser, J.; Borodkin, G.; Brodkin, E.; Egorov, A.; Kozhevnikov, A.; Zaritsky, S.; Manturov, G.; Voloschenko, A.
2003-06-01
The ability of transport codes ANISN, DORT, ROZ-6, MCNP and TRAMO, as well as nuclear data libraries BUGLE-96, ABBN-93, VITAMIN-B6 and ENDF/B-6 to deliver consistent gamma and neutron flux results was tested in the calculation of a one-dimensional cylindrical model consisting of a homogeneous core and an outer zone with a single material. Model variants with H2O, Fe, Cr and Ni in the outer zones were investigated. The results are compared with MCNP-ENDF/B-6 results. Discrepancies are discussed. The specified test model is proposed as a computational benchmark for testing calculation codes and data libraries.
The Attentional Drift Diffusion Model of Simple Perceptual Decision-Making.
Tavares, Gabriela; Perona, Pietro; Rangel, Antonio
2017-01-01
Perceptual decisions requiring the comparison of spatially distributed stimuli that are fixated sequentially might be influenced by fluctuations in visual attention. We used two psychophysical tasks with human subjects to investigate the extent to which visual attention influences simple perceptual choices, and to test the extent to which the attentional Drift Diffusion Model (aDDM) provides a good computational description of how attention affects the underlying decision processes. We find evidence for sizable attentional choice biases and that the aDDM provides a reasonable quantitative description of the relationship between fluctuations in visual attention, choices and reaction times. We also find that exogenous manipulations of attention induce choice biases consistent with the predictions of the model.
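A minimal sketch of an attentional drift-diffusion trial follows; the parameter values (theta, d, sigma, and the fixed alternating fixation schedule) are illustrative assumptions, not the values fitted in the paper. The key aDDM ingredient is that the unattended item's value is discounted by theta, biasing the drift toward the currently fixated item.

```python
import numpy as np

rng = np.random.default_rng(3)

def addm_trial(v_left, v_right, theta=0.3, d=0.002, sigma=0.02, max_t=20000):
    """One attentional drift-diffusion trial (illustrative parameters).

    Evidence x accumulates toward +1 (choose left) or -1 (choose right);
    the unattended item's value is multiplied by theta < 1.
    Returns (choice, reaction_time_in_steps).
    """
    x = 0.0
    fixating_left = rng.random() < 0.5          # random initial fixation
    for t in range(1, max_t):
        if t % 400 == 0:                        # alternate fixations periodically
            fixating_left = not fixating_left
        if fixating_left:
            drift = d * (v_left - theta * v_right)
        else:
            drift = d * (theta * v_left - v_right)
        x += drift + rng.normal(0.0, sigma)     # drift plus diffusion noise
        if abs(x) >= 1.0:
            return ("left" if x > 0 else "right"), t
    return "none", max_t

choice, rt = addm_trial(v_left=3.0, v_right=1.0)
```

Averaged over many trials, higher-valued and longer-fixated items are chosen more often, which is the attentional choice bias the study quantifies.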
Why the Long Face? The Mechanics of Mandibular Symphysis Proportions in Crocodiles
Walmsley, Christopher W.; Smits, Peter D.; Quayle, Michelle R.; McCurry, Matthew R.; Richards, Heather S.; Oldfield, Christopher C.; Wroe, Stephen; Clausen, Phillip D.; McHenry, Colin R.
2013-01-01
Background
Crocodilians exhibit a spectrum of rostral shape from long snouted (longirostrine), through to short snouted (brevirostrine) morphologies. The proportional length of the mandibular symphysis correlates consistently with rostral shape, forming as much as 50% of the mandible's length in longirostrine forms, but 10% in brevirostrine crocodilians. Here we analyse the structural consequences of an elongate mandibular symphysis in relation to feeding behaviours.
Methods/Principal Findings
Simple beam and high resolution Finite Element (FE) models of seven species of crocodile were analysed under loads simulating biting, shaking and twisting. Using beam theory, we statistically compared multiple hypotheses of which morphological variables should control the biomechanical response. Brevi- and mesorostrine morphologies were found to consistently outperform longirostrine types when subject to equivalent biting, shaking and twisting loads. The best predictors of performance for biting and twisting loads in FE models were overall length and symphyseal length respectively; for shaking loads, symphyseal length and a multivariate measurement of shape (PC1, which is strongly but not exclusively correlated with symphyseal length) were equally good predictors. Linear measurements were better predictors than multivariate measurements of shape in biting and twisting loads. For both biting and shaking loads, but not for twisting, simple beam models agree with the best performance predictors in FE models.
Conclusions/Significance
Combining beam and FE modelling allows a priori hypotheses about the importance of morphological traits on biomechanics to be statistically tested. Short mandibular symphyses perform well under loads used for feeding upon large prey, but elongate symphyses incur high strains under equivalent loads, underlining the structural constraints on prey size in the longirostrine morphotype.
The biomechanics of the crocodilian mandible are largely consistent with beam theory and can be predicted from simple morphological measurements, suggesting that crocodilians are a useful model for investigating the palaeobiomechanics of other aquatic tetrapods. PMID:23342027
Is Jupiter's magnetosphere like a pulsar's or earth's?
NASA Technical Reports Server (NTRS)
Kennel, C. F.; Coroniti, F. V.
1974-01-01
The application of pulsar physics to determining the magnetic structure of Jupiter's outer magnetosphere is discussed. A variety of theoretical models are developed to illuminate broad areas of consistency and conflict between theory and experiment. Two possible models of Jupiter's magnetosphere, a pulsar-like radial outflow model and an earth-like convection model, are examined. A compilation of the simple order-of-magnitude estimates derivable from the various models is provided.
Testing the Structure of Hydrological Models using Genetic Programming
NASA Astrophysics Data System (ADS)
Selle, B.; Muttil, N.
2009-04-01
Genetic Programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that genetic programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, genetic programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface-irrigated pasture to different soil types, water table depths and water ponding times during surface irrigation. Using genetic programming, a simple model of deep percolation was consistently evolved in multiple model runs. This simple and interpretable model confirmed the dominant process contributing to deep percolation represented in a conceptual model that was published earlier. Thus, this study shows that genetic programming can be used to evaluate the structure of hydrological models and to gain insight into the dominant processes in hydrological systems.
Self-consistent Models of Strong Interaction with Chiral Symmetry
DOE R&D Accomplishments Database
Nambu, Y.; Pascual, P.
1963-04-01
Some simple models of (renormalizable) meson-nucleon interaction are examined in which the nucleon mass is entirely due to interaction and the chiral (γ5) symmetry is "broken" to become a hidden symmetry. It is found that such a scheme is possible provided that a vector meson is introduced as an elementary field. (auth)
Evolutionary dynamics of fearfulness and boldness.
Ji, Ting; Zhang, Boyu; Sun, Yuehua; Tao, Yi
2009-02-21
A negative relationship between reproductive effort and survival is consistent with life-history theory. Evolutionary dynamics and the evolutionarily stable strategy (ESS) for the trade-off between survival and reproduction are investigated using a simple model with two phenotypes, fearfulness and boldness. The dynamical stability of the pure strategy model and analysis of ESS conditions reveal that: (i) the simple coexistence of fearfulness and boldness is impossible; (ii) a small population size is favorable to fearfulness, but a large population size is favorable to boldness, i.e., neither fearfulness nor boldness is always favored by natural selection; and (iii) the dynamics of population density is crucial for a proper understanding of the strategy dynamics.
A radio-frequency sheath model for complex waveforms
NASA Astrophysics Data System (ADS)
Turner, M. M.; Chabert, P.
2014-04-01
Plasma sheaths driven by radio-frequency voltages occur in contexts ranging from plasma processing to magnetically confined fusion experiments. An analytical understanding of such sheaths is therefore important, both intrinsically and as an element in more elaborate theoretical structures. Radio-frequency sheaths are commonly excited by highly anharmonic waveforms, but no analytical model exists for this general case. We present a mathematically simple sheath model that is in good agreement with earlier models for single frequency excitation, yet can be solved for arbitrary excitation waveforms. As examples, we discuss dual-frequency and pulse-like waveforms. The model employs the ansatz that the time-averaged electron density is a constant fraction of the ion density. In the cases we discuss, the error introduced by this approximation is small, and in general it can be quantified through an internal consistency condition of the model. This simple and accurate model is likely to have wide application.
A simple model for estimating a magnetic field in laser-driven coils
Fiksel, Gennady; Fox, William; Gao, Lan; ...
2016-09-26
Magnetic field generation by laser-driven coils is a promising way of magnetizing plasma in laboratory high-energy-density plasma experiments. A typical configuration consists of two electrodes—one electrode is irradiated with a high-intensity laser beam and the other electrode collects charged particles from the expanding plasma. The two electrodes are separated by a narrow gap forming a capacitor-like configuration and are connected with a conducting wire-coil. The charge separation in the expanding plasma builds up a potential difference between the electrodes that drives the electrical current in the coil. A magnetic field of tens to hundreds of teslas generated inside the coil has been reported. This paper presents a simple model that estimates the magnetic field using simple assumptions. Finally, the results are compared with the published experimental data.
Two classes of ODE models with switch-like behavior
Just, Winfried; Korb, Mason; Elbert, Ben; Young, Todd
2013-01-01
In cases where the same real-world system can be modeled both by an ODE system ⅅ and a Boolean system 𝔹, it is of interest to identify conditions under which the two systems will be consistent, that is, will make qualitatively equivalent predictions. In this note we introduce two broad classes of relatively simple models that provide a convenient framework for studying such questions. In contrast to the widely known class of Glass networks, the right-hand sides of our ODEs are Lipschitz-continuous. We prove that if 𝔹 has certain structures, consistency between ⅅ and 𝔹 is implied by sufficient separation of time scales in one class of our models. Namely, if the trajectories of 𝔹 are “one-stepping” then we prove a strong form of consistency and if 𝔹 has a certain monotonicity property then there is a weaker consistency between ⅅ and 𝔹. These results appear to point to more general structure properties that favor consistency between ODE and Boolean models. PMID:24244061
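The ODE–Boolean consistency question above can be illustrated with a minimal sketch (entirely hypothetical, not the authors' model classes): a switch-like ODE whose right-hand side is a steep but Lipschitz-continuous sigmoid, whose thresholded steady state is compared against the corresponding Boolean update.

```python
# Hypothetical illustration (not the paper's model classes): a switch-like ODE
# x' = H(x_in) - x with a steep but Lipschitz-continuous sigmoid H, compared
# against the Boolean prediction b_out = b_in (output copies thresholded input).
def hill(x, n=10, k=0.5):
    """Steep Hill-type sigmoid; Lipschitz-continuous, unlike a step function."""
    return x**n / (x**n + k**n)

def simulate(x0, x_in, dt=0.01, steps=2000):
    """Forward-Euler integration; x relaxes toward hill(x_in)."""
    x = x0
    for _ in range(steps):
        x += dt * (hill(x_in) - x)
    return x

for x_in, b_in in [(0.9, 1), (0.1, 0)]:
    x_final = simulate(0.5, x_in)
    b_ode = int(x_final > 0.5)   # threshold the ODE steady state
    assert b_ode == b_in         # qualitative (Boolean) agreement
```

With a steep sigmoid the thresholded ODE trajectory reproduces the Boolean prediction; flattening the sigmoid (small `n`) is where such consistency can break down.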
Simple stochastic model for El Niño with westerly wind bursts
Thual, Sulian; Majda, Andrew J.; Chen, Nan; Stechmann, Samuel N.
2016-01-01
Atmospheric wind bursts in the tropics play a key role in the dynamics of the El Niño Southern Oscillation (ENSO). A simple modeling framework is proposed that summarizes this relationship and captures major features of the observational record while remaining physically consistent and amenable to detailed analysis. Within this simple framework, wind burst activity evolves according to a stochastic two-state Markov switching–diffusion process that depends on the strength of the western Pacific warm pool, and is coupled to simple ocean–atmosphere processes that are otherwise deterministic, stable, and linear. A simple model with this parameterization and no additional nonlinearities reproduces a realistic ENSO cycle with intermittent El Niño and La Niña events of varying intensity and strength as well as realistic buildup and shutdown of wind burst activity in the western Pacific. The wind burst activity has a direct causal effect on the ENSO variability: in particular, it intermittently triggers regular El Niño or La Niña events, super El Niño events, or no events at all, which enables the model to capture observed ENSO statistics such as the probability density function and power spectrum of eastern Pacific sea surface temperatures. The present framework provides further theoretical and practical insight on the relationship between wind burst activity and the ENSO. PMID:27573821
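The two-state Markov switching–diffusion idea can be sketched as follows (rates and amplitudes are invented for illustration, not the paper's calibration; in the actual model the transition rates also depend on the warm-pool state):

```python
import numpy as np

# Minimal sketch: wind burst amplitude a_p follows an Ornstein-Uhlenbeck
# process whose noise level switches between a quiet state (0) and an
# active state (1) via a two-state Markov chain. All parameters assumed.
rng = np.random.default_rng(0)
dt, steps = 0.1, 5000
rate_up, rate_down = 0.02, 0.05   # assumed transition rates (per unit time)
sigma = {0: 0.1, 1: 1.0}          # noise amplitude in quiet/active state

state, a_p = 0, 0.0
trace = []
for _ in range(steps):
    # Markov switching: jump with probability rate * dt per step
    p_jump = (rate_up if state == 0 else rate_down) * dt
    if rng.random() < p_jump:
        state = 1 - state
    # OU relaxation toward zero plus state-dependent stochastic forcing
    a_p += -0.1 * a_p * dt + sigma[state] * np.sqrt(dt) * rng.standard_normal()
    trace.append(a_p)
trace = np.array(trace)
```

The switching produces the intermittent buildup and shutdown of burst activity that, in the full model, is coupled to the deterministic linear ocean–atmosphere dynamics.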
Simple Thermal Environment Model (STEM) User's Guide
NASA Technical Reports Server (NTRS)
Justus, C.G.; Batts, G. W.; Anderson, B. J.; James, B. F.
2001-01-01
This report presents a Simple Thermal Environment Model (STEM) for determining appropriate engineering design values to specify the thermal environment of Earth-orbiting satellites. The thermal environment of a satellite consists of three components: (1) direct solar radiation, (2) Earth-atmosphere-reflected shortwave radiation, as characterized by Earth's albedo, and (3) Earth-atmosphere-emitted outgoing longwave radiation (OLR). This report, together with a companion "guidelines" report, provides methodology and guidelines for selecting "design points" for thermal environment parameters for satellites and spacecraft systems. The methods and models reported here are outgrowths of Earth Radiation Budget Experiment (ERBE) satellite data analysis and thermal environment specifications discussed by Anderson and Smith (1994). In large part, this report is intended to update (and supersede) those results.
Early Planetary Differentiation: Comparative Planetology
NASA Technical Reports Server (NTRS)
Jones, John H.
2006-01-01
We currently have extensive data for four different terrestrial bodies of the inner solar system: Earth, the Moon, Mars, and the Eucrite Parent Body [EPB]. All formed early cores; but all(?) have mantles with elevated concentrations of highly siderophile elements, suggestive of the addition of a late "veneer". Two appear to have undergone extensive differentiation consistent with a global magma ocean. One appears to be inconsistent with a simple model of "low-pressure" chondritic differentiation. Thus, there seems to be no single, simple paradigm for understanding early differentiation.
Consistency of the free-volume approach to the homogeneous deformation of metallic glasses
NASA Astrophysics Data System (ADS)
Blétry, Marc; Thai, Minh Thanh; Champion, Yannick; Perrière, Loïc; Ochin, Patrick
2014-05-01
One of the most widely used approaches to modeling the high-temperature homogeneous deformation of metallic glasses is the free-volume theory, developed by Cohen and Turnbull and extended by Spaepen. A simple elastoviscoplastic formulation has been proposed that allows one to determine the various parameters of such a model. This approach is applied here to the results obtained by de Hey et al. on a Pd-based metallic glass. In their study, de Hey et al. were able to determine some of the parameters used in the elastoviscoplastic formulation through DSC modeling coupled with mechanical tests, and the consistency of the two viewpoints was assessed.
A simple mathematical model of society collapse applied to Easter Island
NASA Astrophysics Data System (ADS)
Bologna, M.; Flores, J. C.
2008-02-01
In this paper we consider a mathematical model for the evolution and collapse of the Easter Island society. Based on historical reports, the available primary resources consisted almost exclusively of trees, so we describe the inhabitants and the resources as an isolated dynamical system. A mathematical and numerical analysis of the Easter Island community collapse is performed. In particular, we analyze the critical values of the fundamental parameters, and a demographic curve is presented. The technological parameter, quantifying the exploitation of the resources, is calculated and applied to the case of another extinguished civilization (Copán Maya), confirming the consistency of the adopted model.
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2010-07-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.
Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang
2013-01-01
We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided. PMID:24790286
Nishimoto, Shinji; Gallant, Jack L.
2012-01-01
Area MT has been an important target for studies of motion processing. However, previous neurophysiological studies of MT have used simple stimuli that do not contain many of the motion signals that occur during natural vision. In this study we sought to determine whether views of area MT neurons developed using simple stimuli can account for MT responses under more naturalistic conditions. We recorded responses from macaque area MT neurons during stimulation with naturalistic movies. We then used a quantitative modeling framework to discover which specific mechanisms best predict neuronal responses under these challenging conditions. We find that the simplest model that accurately predicts responses of MT neurons consists of a bank of V1-like filters, each followed by a compressive nonlinearity, a divisive nonlinearity and linear pooling. Inspection of the fit models shows that the excitatory receptive fields of MT neurons tend to lie on a single plane within the three-dimensional spatiotemporal frequency domain, and suppressive receptive fields lie off this plane. However, most excitatory receptive fields form a partial ring in the plane and avoid low temporal frequencies. This receptive field organization ensures that most MT neurons are tuned for velocity but do not tend to respond to ambiguous static textures that are aligned with the direction of motion. In sum, MT responses to naturalistic movies are largely consistent with predictions based on simple stimuli. However, models fit using naturalistic stimuli reveal several novel properties of MT receptive fields that had not been shown in prior experiments. PMID:21994372
Psychophysical and perceptual performance in a simulated-scotoma model of human eye injury
NASA Astrophysics Data System (ADS)
Brandeis, R.; Egoz, I.; Peri, D.; Sapiens, N.; Turetz, J.
2008-02-01
Macular scotomas, affecting visual functioning, characterize many eye and neurological diseases such as AMD, diabetes mellitus, multiple sclerosis, and macular hole. In this work, foveal visual field defects were modeled, and their effects on spatial contrast sensitivity and on a task of stimulus detection and aiming were evaluated. The modeled occluding scotomas, of different sizes, were superimposed on the stimuli presented on the computer display, and were stabilized on the retina using a mono Purkinje Eye-Tracker. Spatial contrast sensitivity was evaluated using square-wave grating stimuli, whose contrast thresholds were measured using the method of constant stimuli with "catch trials". The detection task consisted of a triple conjunctive visual search display of size (in visual angle), contrast and background (simple, low-level features vs. complex, high-level features). Search/aiming accuracy as well as R.T. measures were used for performance evaluation. Artificially generated scotomas suppressed spatial contrast sensitivity in a size-dependent manner, similar to previous studies. The deprivation effect was dependent on spatial frequency, consistent with retinal inhomogeneity models. Stimulus detection time was slowed more in the complex-background search situation than in the simple background. Detection speed was dependent on scotoma size and size of stimulus. In contrast, visually guided aiming was more sensitive to the scotoma effect in the simple-background search situation than in the complex background. Both stimulus aiming R.T. and accuracy (precision targeting) were impaired as a function of scotoma size and size of stimulus. The data can be explained by models distinguishing between saliency-based, parallel and serial search processes guiding visual attention, which are supported by underlying retinal as well as neural mechanisms.
Using XCO2 retrievals for assessing the long-term consistency of NDACC/FTIR data sets
NASA Astrophysics Data System (ADS)
Barthlott, S.; Schneider, M.; Hase, F.; Wiegele, A.; Christner, E.; González, Y.; Blumenstock, T.; Dohe, S.; García, O. E.; Sepúlveda, E.; Strong, K.; Mendonca, J.; Weaver, D.; Palm, M.; Deutscher, N. M.; Warneke, T.; Notholt, J.; Lejeune, B.; Mahieu, E.; Jones, N.; Griffith, D. W. T.; Velazco, V. A.; Smale, D.; Robinson, J.; Kivi, R.; Heikkinen, P.; Raffalski, U.
2014-10-01
Within the NDACC (Network for the Detection of Atmospheric Composition Change), more than 20 FTIR (Fourier-Transform InfraRed) spectrometers, spread worldwide, provide long-term data records of many atmospheric trace gases. We present a method that uses measured and modelled XCO2 for assessing the consistency of these data records. Our NDACC XCO2 retrieval setup is kept simple so that it can easily be adopted for any NDACC/FTIR-like measurement made since the late 1950s. By a comparison to coincident TCCON (Total Carbon Column Observing Network) measurements, we empirically demonstrate the useful quality of this NDACC XCO2 product (empirically obtained scatter between TCCON and NDACC is about 4‰ for daily mean as well as monthly mean comparisons and the bias is 25‰). As XCO2 model we developed and used a simple regression model fitted to CarbonTracker results and the Mauna Loa CO2 in-situ records. A comparison to TCCON data suggests an uncertainty of the model for monthly mean data of below 3‰. We apply the method to the NDACC/FTIR spectra that are used within the project MUSICA (MUlti-platform remote Sensing of Isotopologues for investigating the Cycle of Atmospheric water) and demonstrate that there is a good consistency for this globally representative set of spectra measured since 1996: the scatter between the modelled and measured XCO2 on a yearly time scale is only 3‰.
Using XCO2 retrievals for assessing the long-term consistency of NDACC/FTIR data sets
NASA Astrophysics Data System (ADS)
Barthlott, S.; Schneider, M.; Hase, F.; Wiegele, A.; Christner, E.; González, Y.; Blumenstock, T.; Dohe, S.; García, O. E.; Sepúlveda, E.; Strong, K.; Mendonca, J.; Weaver, D.; Palm, M.; Deutscher, N. M.; Warneke, T.; Notholt, J.; Lejeune, B.; Mahieu, E.; Jones, N.; Griffith, D. W. T.; Velazco, V. A.; Smale, D.; Robinson, J.; Kivi, R.; Heikkinen, P.; Raffalski, U.
2015-03-01
Within the NDACC (Network for the Detection of Atmospheric Composition Change), more than 20 FTIR (Fourier-transform infrared) spectrometers, spread worldwide, provide long-term data records of many atmospheric trace gases. We present a method that uses measured and modelled XCO2 for assessing the consistency of these NDACC data records. Our XCO2 retrieval setup is kept simple so that it can easily be adopted for any NDACC/FTIR-like measurement made since the late 1950s. By a comparison to coincident TCCON (Total Carbon Column Observing Network) measurements, we empirically demonstrate the useful quality of this suggested NDACC XCO2 product (empirically obtained scatter between TCCON and NDACC is about 4‰ for daily mean as well as monthly mean comparisons, and the bias is 25‰). Our XCO2 model is a simple regression model fitted to CarbonTracker results and the Mauna Loa CO2 in situ records. A comparison to TCCON data suggests an uncertainty of the model for monthly mean data of below 3‰. We apply the method to the NDACC/FTIR spectra that are used within the project MUSICA (multi-platform remote sensing of isotopologues for investigating the cycle of atmospheric water) and demonstrate that there is a good consistency for this globally representative set of spectra measured since 1996: the scatter between the modelled and measured XCO2 on a yearly time scale is only 3‰.
Jason Forthofer; Bret Butler
2007-01-01
A computational fluid dynamics (CFD) model and a mass-consistent model were used to simulate winds over a simple, low hill, and the resulting wind fields were used to drive simulations of fire spread. The results suggest that the CFD wind field could significantly change simulated fire spread compared to traditional uniform winds. The CFD fire spread case may match reality better because the winds used in the fire...
Acoustical and Other Physical Properties of Marine Sediments
1991-01-01
Granular Structure of Rocks 4. Anisotropic Poroelasticity and Biot's Parameters PART 1 A simple analytical model has been developed to describe the...mentioned properties. PART 4 Prediction of wave propagation in a submarine environment requires modeling the acoustic response of ocean bottom...Biot's theory is a promising approach for modelling acoustic wave propagation in ocean sediments, which generally consist of elastic or viscoelastic
Modeling and experimental characterization of electromigration in interconnect trees
NASA Astrophysics Data System (ADS)
Thompson, C. V.; Hau-Riege, S. P.; Andleigh, V. K.
1999-11-01
Most modeling and experimental characterization of interconnect reliability is focused on simple straight lines terminating at pads or vias. However, laid-out integrated circuits often have interconnects with junctions and wide-to-narrow transitions. In carrying out circuit-level reliability assessments it is important to be able to assess the reliability of these more complex shapes, generally referred to as "trees". An interconnect tree consists of continuously connected high-conductivity metal within one layer of metallization. Trees terminate at diffusion barriers at vias and contacts, and, in the general case, can have more than one terminating branch when they include junctions. We have extended the understanding of "immortality", demonstrated and analyzed for straight stud-to-stud lines, to trees of arbitrary complexity. This leads to a hierarchical approach for identifying immortal trees for specific circuit layouts and models for operation. To complete a circuit-level reliability analysis, it is also necessary to estimate the lifetimes of the mortal trees. We have developed simulation tools that allow modeling of stress evolution and failure in arbitrarily complex trees. We are testing our models and simulations through comparisons with experiments on simple trees, such as lines broken into two segments with different currents in each segment. Models, simulations and early experimental results on the reliability of interconnect trees are shown to be consistent.
A simple dynamic subgrid-scale model for LES of particle-laden turbulence
NASA Astrophysics Data System (ADS)
Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz
2017-04-01
In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.
Estimating radiofrequency power deposition in body NMR imaging.
Bottomley, P A; Redington, R W; Edelstein, W A; Schenck, J F
1985-08-01
Simple theoretical estimates of the average, maximum, and spatial variation of the radiofrequency power deposition (specific absorption rate) during hydrogen nuclear magnetic resonance imaging are deduced for homogeneous spheres and for cylinders of biological tissue with a uniformly penetrating linear rf field directed axially and transverse to the cylindrical axis. These are all simple scalar multiples of the expression for the cylinder in an axial field published earlier (Med. Phys. 8, 510 (1981)). Exact solutions for the power deposition in the cylinder with axial (Phys. Med. Biol. 23, 630 (1978)) and transversely directed rf field are also presented, and the spatial variation of power deposition in head and body models is examined. In the exact models, the specific absorption rates decrease rapidly and monotonically with decreasing radius despite local increases in rf field amplitude. Conversion factors are provided for calculating the power deposited by Gaussian and sinc-modulated rf pulses used for slice selection in NMR imaging, relative to rectangular profiled pulses. Theoretical estimates are compared with direct measurements of the total power deposited in the bodies of nine adult males by a 63-MHz body-imaging system with transversely directed field, taking account of cable and NMR coil losses. The results for the average power deposition agree within about 20% for the exact model of the cylinder with axial field, when applied to the exposed torso volume enclosed by the rf coil. The average values predicted by the simple spherical and cylindrical models with axial fields, the exact cylindrical model with transverse field, and the simple truncated cylinder model with transverse field were about two to three times that measured, while the simple model consisting of an infinitely long cylinder with transverse field gave results about six times that measured.
The surface power deposition measured by observing the incremental power as a function of external torso radius was comparable to the average value. This is consistent with the presence of a variable thickness peripheral adipose layer which does not substantially increase surface power deposition with increasing torso radius. The absence of highly localized intensity artifacts in 63-MHz body images does not suggest anomalously intense power deposition at localized internal sites, although peak power is difficult to measure.
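For orientation, the baseline axial-field result that the other geometries are scaled from can be sketched in its standard quasi-static textbook form (the symbols here are generic assumptions, not the paper's notation). A uniform axial field B1 cos(ωt) induces an azimuthal electric field in a homogeneous cylinder of radius a, conductivity σ, and density ρ:

```latex
E(r) = \tfrac{1}{2}\,\omega B_1 r,
\qquad
\mathrm{SAR}(r) = \frac{\sigma E(r)^2}{2\rho}
                = \frac{\sigma \omega^2 B_1^2 r^2}{8\rho},
\qquad
\overline{\mathrm{SAR}} = \frac{\sigma \omega^2 B_1^2 a^2}{16\rho}
\quad \bigl(\langle r^2 \rangle = a^2/2\bigr).
```

The r² dependence of this sketch is consistent with the abstract's observation that the specific absorption rate decreases monotonically with decreasing radius, and the ω² factor explains why power deposition becomes a concern at 63 MHz body imaging.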
Design and Training of Limited-Interconnect Architectures
1991-07-16
and signal processing. Neuromorphic (brain-like) models allow an alternative for achieving real-time operation for such tasks, while having a compact and robust architecture. Neuromorphic models consist of interconnections of simple computational nodes. In this approach, each node computes a...operational performance. II. Research Objectives The research objectives were: 1. Development of on-chip local training rules specifically designed for
Statistical Mechanics of US Supreme Court
NASA Astrophysics Data System (ADS)
Lee, Edward; Broedersz, Chase; Bialek, William; Biophysics Theory Group Team
2014-03-01
We build simple models for the distribution of voting patterns in a group, using the Supreme Court of the United States as an example. The least structured, or maximum entropy, model that is consistent with the observed pairwise correlations among justices' votes is equivalent to an Ising spin glass. While all correlations (perhaps surprisingly) are positive, the effective pairwise interactions in the spin glass model have both signs, recovering some of our intuition that justices on opposite sides of the ideological spectrum should have a negative influence on one another. Despite the competing interactions, a strong tendency toward unanimity emerges from the model, and this agrees quantitatively with the data. The model shows that voting patterns are organized in a relatively simple ``energy landscape,'' correctly predicts the extent to which each justice is correlated with the majority, and gives us a measure of the influence that justices exert on one another. These results suggest that simple models, grounded in statistical physics, can capture essential features of collective decision making quantitatively, even in a complex political context. Funded by National Science Foundation Grants PHY-0957573 and CCF-0939370, WM Keck Foundation, Lewis-Sigler Fellowship, Burroughs Wellcome Fund, and Winston Foundation.
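A pairwise maximum-entropy model of this kind can be sketched with made-up fields and couplings (not the values inferred from the Court data) by Metropolis sampling of a nine-spin Ising system:

```python
import numpy as np

# Illustrative sketch with assumed parameters: sample vote patterns
# s_i = ±1 from a pairwise maximum-entropy (Ising) model
#   P(s) ∝ exp( sum_i h_i s_i + sum_{i<j} J_ij s_i s_j )
# via single-spin Metropolis updates.
rng = np.random.default_rng(1)
n = 9                                   # nine justices
h = rng.normal(0.0, 0.1, n)             # assumed individual biases (fields)
J = rng.normal(0.2, 0.1, (n, n))        # assumed, mostly positive couplings
J = np.triu(J, 1)
J = J + J.T                             # symmetric, zero diagonal

def sample(steps=20000, thin=100):
    """Metropolis sampling; returns an array of vote patterns."""
    s = rng.choice([-1, 1], n)
    votes = []
    for t in range(steps):
        i = rng.integers(n)
        dE = 2 * s[i] * (h[i] + J[i] @ s)   # energy change of flipping s_i
        if rng.random() < np.exp(-dE):      # Metropolis acceptance
            s[i] = -s[i]
        if t % thin == 0:
            votes.append(s.copy())
    return np.array(votes)

votes = sample()
unanimity = np.mean(np.abs(votes.sum(axis=1)) == n)  # fraction of 9-0 votes
```

Even with modest positive couplings, the sampled patterns show the strong tendency toward unanimity that the abstract describes emerging from the fitted model.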
NASA Astrophysics Data System (ADS)
Hess, Julian; Wang, Yongqi
2016-11-01
A new mixture model for granular-fluid flows, which is thermodynamically consistent with the entropy principle, is presented. The extra pore pressure described by a pressure diffusion equation and the hypoplastic material behavior obeying a transport equation are taken into account. The model is applied to granular-fluid flows, using a closing assumption in conjunction with the dynamic fluid pressure to describe the pressure-like residual unknowns, thereby overcoming previous uncertainties in the modeling process. Besides the thermodynamically consistent modeling, numerical simulations are carried out and demonstrate physically reasonable results, including simple shear flow, in order to investigate the vertical distribution of the physical quantities, and a mixture flow down an inclined plane by means of the depth-integrated model. The results presented give insight into the ability of the deduced model to capture the key characteristics of granular-fluid flows. We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) for this work within the Project Number WA 2610/3-1.
NASA Astrophysics Data System (ADS)
Parumasur, N.; Willie, R.
2008-09-01
We consider a simple finite-dimensional HIV/AIDS mathematical model of the interactions of the blood cells, the HIV/AIDS virus and the immune system, and examine the consistency of the equations with the real biomedical situation that they model. A better understanding of a cure solution to the illness modeled by the finite-dimensional equations is given. This is accomplished through rigorous mathematical analysis and is reinforced by numerical analysis of models developed for real-life cases.
REE in the Great Whale River estuary, northwest Quebec
NASA Technical Reports Server (NTRS)
Goldstein, Steven J.; Jacobsen, Stein B.
1988-01-01
A report on REE concentrations within the estuary of the Great Whale River in northwest Quebec and in Hudson Bay is given, showing concentrations which are less than those predicted by conservative mixing of seawater and river water, indicating removal of REE from solution. REE removal is rapid, occurring primarily at salinities less than 2 percent and ranges from about 70 percent for light REE to no more than 40 percent for heavy REE. At low salinity, Fe removal is essentially complete. The shape of Fe and REE vs. salinity profiles is not consistent with a simple model of destabilization and coagulation of Fe and REE-bearing colloidal material. A linear relationship between the activity of free ion REE(3+) and pH is consistent with a simple ion-exchange model for REE removal. Surface and subsurface samples of Hudson Bay seawater show high REE and La/Yb concentrations relative to average seawater, with the subsurface sample having a Nd concentration of 100 pmol/kg and an epsilon(Nd) of -29.3; characteristics consistent with river inputs of Hudson Bay. This indicates that rivers draining the Canadian Shield are a major source of nonradiogenic Nd and REE to the Atlantic Ocean.
Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi
2012-10-01
We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watts, Christopher A.
In this dissertation the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas is investigated. To properly assess this possibility, data from both numerical simulations and experiment are analyzed. A large repertoire of nonlinear analysis techniques is used to identify low dimensional chaos in the data. These tools include phase portraits and Poincaré sections, correlation dimension, the spectrum of Lyapunov exponents and short term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low dimensional chaos or simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-09
... and unimproved fee-simple land, consisting of 7 acres of land and 3 acres of submerged land. The... Middletown and consists of approximately 3 acres of improved and unimproved fee simple land. In general, the... and 2, Portsmouth, RI Tank Farms 1 and 2 consist of improved and unimproved fee simple land located...
A simple generative model of collective online behavior.
Gleeson, James P; Cellai, Davide; Onnela, Jukka-Pekka; Porter, Mason A; Reed-Tsochas, Felix
2014-07-22
Human activities increasingly take place in online environments, providing novel opportunities for relating individual behaviors to population-level outcomes. In this paper, we introduce a simple generative model for the collective behavior of millions of social networking site users who are deciding between different software applications. Our model incorporates two distinct mechanisms: one is associated with recent decisions of users, and the other reflects the cumulative popularity of each application. Importantly, although various combinations of the two mechanisms yield long-time behavior that is consistent with data, the only models that reproduce the observed temporal dynamics are those that strongly emphasize the recent popularity of applications over their cumulative popularity. This demonstrates--even when using purely observational data without experimental design--that temporal data-driven modeling can effectively distinguish between competing microscopic mechanisms, allowing us to uncover previously unidentified aspects of collective online behavior.
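The two mechanisms described above (recent adoptions vs. cumulative popularity) can be mixed in a toy simulation. The code below is a hedged illustration of that idea, not the paper's exact specification: the sliding window, the linear mixing by `recency_weight`, and the unit seed counts are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_adoptions(n_apps=5, n_steps=2000, recency_weight=0.9, window=50):
    """Toy two-mechanism adoption model.

    Each step, one user picks an app with probability proportional to a
    mix of its recent adoption count (sliding window) and its cumulative
    adoption count. recency_weight interpolates between the mechanisms.
    """
    cumulative = np.ones(n_apps)          # one seed adoption each (smoothing)
    recent = []                           # sliding window of recent choices
    for _ in range(n_steps):
        recent_counts = np.bincount(np.asarray(recent, dtype=int),
                                    minlength=n_apps) + 1.0
        weights = (recency_weight * recent_counts / recent_counts.sum()
                   + (1 - recency_weight) * cumulative / cumulative.sum())
        choice = rng.choice(n_apps, p=weights / weights.sum())
        cumulative[choice] += 1
        recent.append(choice)
        if len(recent) > window:
            recent.pop(0)
    return cumulative - 1                 # adoptions, excluding seeds

counts = simulate_adoptions()
```

Sweeping `recency_weight` toward 1 reproduces, in miniature, the paper's point that recency-dominated dynamics generate temporal behavior that purely cumulative (rich-get-richer) dynamics cannot.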
NASA Technical Reports Server (NTRS)
Holmes, Thomas; Owe, Manfred; deJeu, Richard
2007-01-01
Two data sets of experimental field observations with a range of meteorological conditions are used to investigate the possibility of modeling near-surface soil temperature profiles in a bare soil. It is shown that commonly used heat flow methods that assume a constant ground heat flux cannot be used to model the extreme variations in temperature that occur near the surface. This paper proposes a simple approach for modeling the surface soil temperature profiles from a single-depth observation. This approach consists of two parts: 1) modeling an instantaneous ground heat flux profile based on net radiation and the ground heat flux at 5 cm depth; 2) using this ground heat flux profile to extrapolate a single temperature observation to a continuous near-surface temperature profile. The new model is validated with an independent data set from a different soil and under a range of meteorological conditions.
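The two-step idea can be made concrete with a minimal sketch: assume the ground heat flux varies linearly between its surface value (from net radiation) and the measured flux at 5 cm, then integrate Fourier's law upward from the single temperature observation. The linear flux shape, constant conductivity, and all parameter values here are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def temperature_profile(t_obs, z_obs, g_surface, g_obs, conductivity=1.0):
    """Extrapolate one soil-temperature observation to a near-surface profile.

    Step 1: linear heat-flux profile G(z) between g_surface (z = 0) and
    g_obs (z = z_obs). Step 2: Fourier's law G = -k dT/dz gives
    T(z) = T(z_obs) + (1/k) * integral from z to z_obs of G dz'
    (exact for a linear G, via the trapezoid).
    """
    z_grid = np.linspace(0.0, z_obs, 11)
    flux = g_surface + (g_obs - g_surface) * z_grid / z_obs
    temps = t_obs + (flux + g_obs) / 2 * (z_obs - z_grid) / conductivity
    return z_grid, temps

# Example: 20 C measured at 5 cm, 100 W/m2 surface flux, 40 W/m2 at 5 cm
z, T = temperature_profile(t_obs=20.0, z_obs=0.05, g_surface=100.0, g_obs=40.0)
```

With a downward (positive) daytime flux this yields a surface warmer than the 5 cm observation, as expected for daytime heating.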
Bhaumik, Basabi; Mathur, Mona
2003-01-01
We present a model for the development of orientation selectivity in layer IV simple cells. Receptive field (RF) development in the model is determined by diffusive cooperation and resource-limited competition, which guide axonal growth and retraction in the geniculocortical pathway. The simulated cortical RFs resemble experimental RFs. The receptive field model is incorporated in a three-layer visual pathway model consisting of retina, LGN and cortex. We have studied the effect of activity-dependent synaptic scaling on the orientation tuning of cortical cells. The mean value of HWHH (half-width at half the height of the maximum response) in simulated cortical cells is 58 degrees when we consider only the linear excitatory contribution from the LGN. We observe a mean improvement of 22.8 degrees in the tuning response due to the non-linear spiking mechanisms that include the effects of the threshold voltage and the synaptic scaling factor.
Theoretical model for optical properties of porphyrin
NASA Astrophysics Data System (ADS)
Phan, Anh D.; Nga, Do T.; Phan, The-Long; Thanh, Le T. M.; Anh, Chu T.; Bernad, Sophie; Viet, N. A.
2014-12-01
We propose a simple model to interpret the optical absorption spectra of porphyrin in different solvents. Our model successfully explains the decrease in the intensity of the optical absorption maxima with increasing wavelength. We also demonstrate the dependence of the intensity and peak positions in the absorption spectra on the environment. The Soret band is suggested to derive from a π plasmon. Our theoretical calculations are consistent with previous experimental studies.
A Simple Model for Immature Retrovirus Capsid Assembly
NASA Astrophysics Data System (ADS)
Paquay, Stefan; van der Schoot, Paul; Dragnea, Bogdan
In this talk I will present simulations of a simple model for capsomeres in immature virus capsids, consisting of only point particles with a tunable range of attraction constrained to a spherical surface. We find that, at sufficiently low density, a short interaction range is sufficient for the suppression of five-fold defects in the packing and causes instead larger tears and scars in the capsid. These findings agree both qualitatively and quantitatively with experiments on immature retrovirus capsids, implying that the structure of the retroviral protein lattice can, for a large part, be explained simply by the effective interaction between the capsomeres. We thank the HFSP for funding under Grant RGP0017/2012.
Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.
Putzar, Lisa; Gondan, Matthias; Röder, Brigitte
2012-01-01
People treated for bilateral congenital cataracts offer a model to study the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating an integrated processing of modality-specific information. This finding is in contrast with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple short stimuli do not depend on visual and/or crossmodal input since birth.
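Coactivation vs. race (separate-activation) accounts of redundancy gains are conventionally distinguished with Miller's race-model inequality: race models require the redundant-condition CDF to satisfy F_red(t) ≤ F_a(t) + F_b(t) at every t. The sketch below tests that bound on synthetic reaction times; it is a generic illustration of the technique, not the authors' analysis code, and the RT distributions are made up.

```python
import numpy as np

rng = np.random.default_rng(5)

def race_model_violation(rt_a, rt_b, rt_redundant, t_grid):
    """Maximum violation of the race-model bound over t_grid.

    A positive return value means the redundant-condition CDF exceeds
    F_a(t) + F_b(t) somewhere, favoring coactivation models.
    """
    def ecdf(samples, t):
        return (samples[:, None] <= t[None, :]).mean(axis=0)
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_b, t_grid), 1.0)
    return float((ecdf(rt_redundant, t_grid) - bound).max())

# Synthetic RTs (ms): redundant condition faster than the race bound allows
rt_a = rng.normal(400, 50, 1000)
rt_b = rng.normal(400, 50, 1000)
rt_red = rng.normal(300, 40, 1000)
violation = race_model_violation(rt_a, rt_b, rt_red, np.arange(150, 600, 5))
```

A clearly positive `violation`, as in this synthetic example, is the signature of integrated (coactivated) processing reported for both patients and controls.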
Kendal, W S
2000-04-01
To illustrate how probability-generating functions (PGFs) can be employed to derive a simple probabilistic model for clonogenic survival after exposure to ionizing irradiation. Both repairable and irreparable radiation damage to DNA were assumed to occur by independent (Poisson) processes, at intensities proportional to the irradiation dose. Also, repairable damage was assumed to be either repaired or further (lethally) injured according to a third (Bernoulli) process, with the probability of lethal conversion being directly proportional to dose. Using the algebra of PGFs, these three processes were combined to yield a composite PGF that described the distribution of lethal DNA lesions in irradiated cells. The composite PGF characterized a Poisson distribution with mean αD + βD², where D was the dose and α and β were radiobiological constants. This distribution yielded the conventional linear-quadratic survival equation. To test the composite model, the derived distribution was used to predict the frequencies of multiple chromosomal aberrations in irradiated human lymphocytes. The predictions agreed well with observation. This probabilistic model was consistent with single-hit mechanisms, but it was not consistent with binary misrepair mechanisms. A stochastic model for radiation survival has been constructed from elementary PGFs that exactly yields the linear-quadratic relationship. This approach can be used to investigate other simple probabilistic survival models.
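The link from the Poisson lesion count to the linear-quadratic survival curve is one line: a cell survives iff it carries zero lethal lesions, so S(D) = P(N = 0) = exp(-(αD + βD²)). A minimal sketch (the α and β values are illustrative, chosen to give a typical α/β ratio of 10 Gy):

```python
import math

def survival_fraction(dose, alpha, beta):
    """Clonogenic surviving fraction under the linear-quadratic model.

    Lethal lesions per cell are Poisson with mean alpha*D + beta*D**2
    (the composite-PGF result), so survival is the Poisson zero class.
    """
    mean_lethal = alpha * dose + beta * dose ** 2
    return math.exp(-mean_lethal)

# Illustrative constants: alpha = 0.3 /Gy, beta = 0.03 /Gy^2
s2 = survival_fraction(2.0, alpha=0.3, beta=0.03)   # survival after a 2 Gy fraction
```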
Deterministic diffusion in flower-shaped billiards.
Harayama, Takahisa; Klages, Rainer; Gaspard, Pierre
2002-08-01
We propose a flower-shaped billiard in order to study the irregular parameter dependence of chaotic normal diffusion. Our model is an open system consisting of periodically distributed obstacles in the shape of a flower, and it is strongly chaotic for almost all parameter values. We compute the parameter-dependent diffusion coefficient of this model from computer simulations and analyze its functional form using different schemes, all generalizing the simple random walk approximation of Machta and Zwanzig. The improved methods we use are based either on heuristic higher-order corrections to the simple random walk model, on lattice gas simulation methods, or they start from a suitable Green-Kubo formula for diffusion. We show that dynamical correlations, or memory effects, are of crucial importance in reproducing the precise parameter dependence of the diffusion coefficient.
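The baseline that all of these schemes generalize treats transport as a memoryless hop process between trapping regions, for which the mean-square displacement grows linearly, ⟨x²⟩ = 2Dt. A toy estimate of D for an unbiased unit-step walk (a hedged illustration of the random-walk baseline only, with none of the memory corrections discussed above):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_walk_diffusion(n_walkers=5000, n_steps=500, step=1.0):
    """Estimate D for an unbiased 1-D random walk from <x^2> = 2 D t.

    For unit steps at unit time intervals the exact value is
    D = step**2 / 2 = 0.5; the Monte Carlo estimate should be close.
    """
    steps = rng.choice([-step, step], size=(n_walkers, n_steps))
    x = steps.cumsum(axis=1)
    msd = (x[:, -1] ** 2).mean()          # mean-square displacement at t = n_steps
    return msd / (2 * n_steps)

D = random_walk_diffusion()
```

In the Machta-Zwanzig picture the step length and hop time are set by the billiard geometry; the abstract's point is that this memoryless estimate misses the fine parameter dependence that dynamical correlations produce.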
A study of the electric field in an open magnetospheric model
NASA Technical Reports Server (NTRS)
Stern, D. P.
1973-01-01
Recently, Svalgaard and Heppner reported two separate features of the polar electromagnetic field that correlate with the dawn-dusk component of the interplanetary magnetic field. This work attempts to explain these findings in terms of properties of the open magnetosphere. The topology and qualitative properties of the open magnetosphere are first studied by means of a simple model, consisting of a dipole in a constant field. Many such properties are found to depend on the separation line, a curve connecting neutral points and separating different field line regimes. In the simple model it turns out that the electric field in the central polar cap tends to point from dawn to dusk for a wide variety of external fields, but, near the boundary of the polar cap, electric equipotentials are deformed into crescents.
Two Sides of the Same Coin: U. S. "Residual" Inequality and the Gender Gap
ERIC Educational Resources Information Center
Bacolod, Marigee P.; Blum, Bernardo S.
2010-01-01
We show that the narrowing gender gap and the growth in earnings inequality are consistent with a simple model in which skills are heterogeneous, and the growth in skill prices has been particularly strong for skills with which women are well endowed. Empirical analysis of DOT, CPS, and NLSY79 data finds evidence to support this model. A large…
The Effects of Swedish Knife Model on Students' Understanding of the Digestive System
ERIC Educational Resources Information Center
Cerrah Ozsevgec, Lale; Artun, Huseyin; Unal, Melike
2012-01-01
This study was designed to examine the effect of Swedish Knife Model on students' understanding of digestive system. A simple experimental design (pretest-treatment-posttest) was used in the study and internal comparison of the results of the one group was made. The sample consisted of 40 7th grade Turkish students whose ages range from 13 to 15.…
Hard X-ray emission from accretion shocks around galaxy clusters
NASA Astrophysics Data System (ADS)
Kushnir, Doron; Waxman, Eli
2010-02-01
We show that the hard X-ray (HXR) emission observed from several galaxy clusters is consistent with a simple model, in which the nonthermal emission is produced by inverse Compton scattering of cosmic microwave background photons by electrons accelerated in cluster accretion shocks: The dependence of HXR surface brightness on cluster temperature is consistent with that predicted by the model, and the observed HXR luminosity is consistent with the fraction of shock thermal energy deposited in relativistic electrons being ≲ 0.1. Alternative models, where the HXR emission is predicted to be correlated with the cluster thermal emission, are disfavored by the data. The implications of our predictions to future HXR observations (e.g. by NuStar, Simbol-X) and to (space/ground based) γ-ray observations (e.g. by Fermi, HESS, MAGIC, VERITAS) are discussed.
A new model of diffuse brain injury in rats. Part I: Pathophysiology and biomechanics.
Marmarou, A; Foda, M A; van den Brink, W; Campbell, J; Kita, H; Demetriadou, K
1994-02-01
This report describes the development of an experimental head injury model capable of producing diffuse brain injury in the rodent. A total of 161 anesthetized adult rats were injured utilizing a simple weight-drop device consisting of a segmented brass weight free-falling through a Plexiglas guide tube. Skull fracture was prevented by cementing a small stainless-steel disc on the calvaria. Two groups of rats were tested: Group 1, consisting of 54 rats, to establish fracture threshold; and Group 2, consisting of 107 animals, to determine the primary cause of death at severe injury levels. Data from Group 1 animals showed that a 450-gm weight falling from a 2-m height (0.9 kg-m) resulted in a mortality rate of 44% with a low incidence (12.5%) of skull fracture. Impact was followed by apnea, convulsions, and moderate hypertension. The surviving rats developed decortication flexion deformity of the forelimbs, with behavioral depression and loss of muscle tone. Data from Group 2 animals suggested that the cause of death was due to central respiratory depression; the mortality rate decreased markedly in animals mechanically ventilated during the impact. Analysis of mathematical models showed that this mass-height combination resulted in a brain acceleration of 900 G and a brain compression gradient of 0.28 mm. It is concluded that this simple model is capable of producing a graded brain injury in the rodent without a massive hypertensive surge or excessive brain-stem damage.
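The quoted 0.9 kg-m severity is the mass-height product; the corresponding impact energy and velocity follow from basic free-fall kinematics. A small sketch (standard physics, not part of the original paper's analysis):

```python
import math

def impact_parameters(mass_kg, height_m, g=9.81):
    """Impact energy and velocity for a free-falling weight-drop device.

    Energy released is the potential energy m*g*h; impact speed follows
    from v = sqrt(2*g*h). Guide-tube friction is neglected.
    """
    energy_j = mass_kg * g * height_m
    velocity = math.sqrt(2 * g * height_m)
    return energy_j, velocity

# The abstract's 450 g weight dropped from 2 m (0.9 kg-m)
energy, v = impact_parameters(0.450, 2.0)
# -> about 8.8 J delivered at about 6.3 m/s
```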
The coefficient of restitution of pressurized balls: a mechanistic model
NASA Astrophysics Data System (ADS)
Georgallas, Alex; Landry, Gaëtan
2016-01-01
Pressurized, inflated balls used in professional sports are regulated so that their behaviour upon impact can be anticipated and allow the game to have its distinctive character. However, the dynamics governing the impacts of such balls, even on stationary hard surfaces, can be extremely complex. The energy transformations, which arise from the compression of the gas within the ball and from the shear forces associated with the deformation of the wall, are examined in this paper. We develop a simple mechanistic model of the dependence of the coefficient of restitution, e, upon both the gauge pressure, P_G, of the gas and the shear modulus, G, of the wall. The model is validated using the results from a simple series of experiments using three different sports balls. The fits to the data are extremely good for P_G > 25 kPa, and consistent values are obtained for the shear modulus G of the wall material. As far as the authors can tell, this simple, mechanistic model of the pressure dependence of the coefficient of restitution is the first in the literature. Keywords: coefficient of restitution, dynamics, inflated balls, pressure, impact model.
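The quantity being modeled is measured by a drop test: e is the ratio of rebound speed to impact speed, and with free fall before and after the bounce it reduces to e = sqrt(h_rebound / h_drop). A minimal sketch of that measurement (generic kinematics, not the authors' experimental procedure):

```python
import math

def restitution_from_bounce(drop_height, rebound_height):
    """Coefficient of restitution from a drop test on a hard surface.

    e = v_rebound / v_impact; with v = sqrt(2*g*h) before and after
    impact, g cancels and e = sqrt(h_rebound / h_drop).
    """
    return math.sqrt(rebound_height / drop_height)

# A ball dropped from 1 m that rebounds to 0.64 m
e = restitution_from_bounce(drop_height=1.0, rebound_height=0.64)
# -> e = 0.8
```

Repeating the drop test over a range of inflation pressures yields the e(P_G) data to which a mechanistic model like the one above can be fitted.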
A simple analytical model for dynamics of time-varying target leverage ratios
NASA Astrophysics Data System (ADS)
Lo, C. F.; Hui, C. H.
2012-03-01
In this paper we have formulated a simple theoretical model for the dynamics of the time-varying target leverage ratio of a firm under some assumptions based upon empirical observations. In our theoretical model the time evolution of the target leverage ratio of a firm can be derived self-consistently from a set of coupled Ito's stochastic differential equations governing the leverage ratios of an ensemble of firms by the nonlinear Fokker-Planck equation approach. The theoretically derived time paths of the target leverage ratio bear great resemblance to those used in the time-dependent stationary-leverage (TDSL) model [Hui et al., Int. Rev. Financ. Analy. 15, 220 (2006)]. Thus, our simple model is able to provide a theoretical foundation for the selected time paths of the target leverage ratio in the TDSL model. We also examine how the pace of the adjustment of a firm's target ratio, the volatility of the leverage ratio and the current leverage ratio affect the dynamics of the time-varying target leverage ratio. Hence, with the proposed dynamics of the time-dependent target leverage ratio, the TDSL model can be readily applied to generate the default probabilities of individual firms and to assess the default risk of the firms.
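The flavor of such dynamics can be shown with a single mean-reverting Itô equation, dL = κ(target − L)dt + σ dW, where κ plays the role of the adjustment pace toward the target ratio. This Euler-Maruyama sketch is a drastically simplified stand-in: the paper treats a coupled ensemble of firms via a nonlinear Fokker-Planck equation, and all parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_leverage(l0, target, kappa=0.5, sigma=0.05, dt=1 / 252, n_steps=2520):
    """Euler-Maruyama path of a mean-reverting leverage ratio.

    dL = kappa * (target - L) dt + sigma dW: kappa sets the pace of
    adjustment toward the target, sigma the leverage-ratio volatility.
    """
    path = np.empty(n_steps + 1)
    path[0] = l0
    for i in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))
        path[i + 1] = path[i] + kappa * (target - path[i]) * dt + sigma * dw
    return path

# A firm starting at leverage 0.8 adjusting toward a 0.4 target over ~10 years
path = simulate_leverage(l0=0.8, target=0.4)
```

Varying `kappa`, `sigma`, and `l0` reproduces qualitatively the sensitivity analysis described in the abstract: faster adjustment and lower volatility tighten the path around the target.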
Consistent three-equation model for thin films
NASA Astrophysics Data System (ADS)
Richard, Gael; Gisclon, Marguerite; Ruyer-Quil, Christian; Vila, Jean-Paul
2017-11-01
Numerical simulations of thin films of Newtonian fluids down an inclined plane use reduced models for computational cost reasons. These models are usually derived by averaging over the fluid depth the physical equations of fluid mechanics with an asymptotic method in the long-wave limit. Two-equation models are based on the mass conservation equation and either on the momentum balance equation or on the work-energy theorem. We show that there is no two-equation model that is both consistent and theoretically coherent and that a third variable and a three-equation model are required to solve all theoretical contradictions. The linear and nonlinear properties of two- and three-equation models are tested on various practical problems. We present a new consistent three-equation model with a simple mathematical structure which allows an easy and reliable numerical resolution. The numerical calculations agree fairly well with experimental measurements or with direct numerical resolutions for neutral stability curves, speed of kinematic waves and of solitary waves and depth profiles of wavy films. The model can also predict the flow reversal at the first capillary trough ahead of the main wave hump.
Hierarchical lattice models of hydrogen-bond networks in water
NASA Astrophysics Data System (ADS)
Dandekar, Rahul; Hassanali, Ali A.
2018-06-01
We develop a graph-based model of the hydrogen-bond network in water, with a view toward quantitatively modeling the molecular-level correlational structure of the network. The networks formed are studied by constructing the model on two infinite-dimensional lattices. Our models are built bottom up, based on microscopic information coming from atomistic simulations, and we show that the predictions of the model are consistent with known results from ab initio simulations of liquid water. We show that simple entropic models can predict the correlations and clustering of local-coordination defects around tetrahedral waters observed in the atomistic simulations. We also find that orientational correlations between bonds are longer ranged than density correlations, determine the directional correlations within closed loops, and show that the patterns of water wires within these structures are also consistent with previous atomistic simulations. Our models show the existence of density and compressibility anomalies, as seen in the real liquid, and the phase diagram of these models is consistent with the singularity-free scenario previously proposed by Sastry and coworkers [Phys. Rev. E 53, 6144 (1996), 10.1103/PhysRevE.53.6144].
Definitions: Health, Fitness, and Physical Activity.
ERIC Educational Resources Information Center
Corbin, Charles B.; Pangrazi, Robert P.; Franks, B. Don
2000-01-01
This paper defines a variety of fitness components, using a simple multidimensional hierarchical model that is consistent with recent definitions in the literature. It groups the definitions into two broad categories: product and process. Products refer to states of being such as physical fitness, health, and wellness. They are commonly referred…
Academic Self-Efficacy Perceptions of Teacher Candidates
ERIC Educational Resources Information Center
Yesilyurt, Etem
2013-01-01
This study aims to determine the academic self-efficacy perceptions of teacher candidates. It uses a survey model. The population of the study consists of teacher candidates in the 2010-2011 academic year at the Ahmet Kelesoglu Education Faculty of Selcuk University. Simple random sampling was used as the sampling method, and the study was…
2016-06-01
widely in literature, limiting comparisons. Methods: Yorkshire-cross swine were anesthetized, instrumented, and splenectomized. A simple liver...applicable injury in swine. Use of the tourniquet allowed for consistent liver injury and precise control over hemorrhage.
NASA Astrophysics Data System (ADS)
Signorelli, Javier; Tommasi, Andréa
2015-11-01
Homogenization models are widely used to predict the evolution of texture (crystal preferred orientations) and resulting anisotropy of physical properties in metals, rocks, and ice. They fail, however, in predicting two main features of texture evolution in simple shear (the dominant deformation regime on Earth) for highly anisotropic crystals, like olivine: (1) the fast rotation of the CPO towards a stable position characterized by parallelism of the dominant slip system and the macroscopic shear and (2) the asymptotical evolution towards a constant intensity. To better predict CPO-induced anisotropy in the mantle, while limiting computational costs and the use of poorly-constrained physical parameters, we modified a viscoplastic self-consistent code to simulate the effects of subgrain rotation recrystallization. To each crystal is associated a finite number of fragments (possible subgrains). Formation of a subgrain corresponds to introduction of a disorientation (relative to the parent) and resetting of the fragment strain and internal energy. The probability of formation of a subgrain is controlled by comparison between the local internal energy and the average value in the polycrystal. A two-level mechanical interaction scheme is applied for simulating the intracrystalline strain heterogeneity allowed by the formation of low-angle grain boundaries. Within a crystal, interactions between subgrains follow a constant stress scheme. The interactions between grains are simulated by a tangent viscoplastic self-consistent approach. This two-level approach better reproduces the evolution of olivine CPO in simple shear in experiments and nature. It also predicts a marked weakening at low shear strains, consistent with experimental data.
On the modelling of shallow turbidity flows
NASA Astrophysics Data System (ADS)
Liapidevskii, Valery Yu.; Dutykh, Denys; Gisclon, Marguerite
2018-03-01
In this study we investigate shallow turbidity density currents and underflows from a mechanical point of view. We propose a simple hyperbolic model for such flows. On one hand, our model is based on very basic conservation principles. On the other hand, the turbulent nature of the flow is also taken into account through the energy dissipation mechanism. Moreover, the mixing with the pure water along with sediment entrainment and deposition processes are considered, which makes the problem dynamically interesting. One of the main advantages of our model is that it requires the specification of only two modeling parameters - the rate of turbulent dissipation and the rate of pure water entrainment. Consequently, the resulting model turns out to be very simple and self-consistent. This model is validated against several experimental data sets, and several special classes of solutions (such as travelling, self-similar and steady) are constructed. Unsteady simulations show that some special solutions are realized as asymptotic long-time states of dynamic trajectories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, Shih-Miao; Hwang, Ho-Ling
2007-01-01
This paper describes a development of national freight demand models for 27 industry sectors covered by the 2002 Commodity Flow Survey. It postulates that the national freight demands are consistent with U.S. business patterns. Furthermore, the study hypothesizes that the flow of goods, which make up the national production processes of industries, is coherent with the information described in the 2002 Annual Input-Output Accounts developed by the Bureau of Economic Analysis. The model estimation framework hinges largely on the assumption that a relatively simple relationship exists between freight production/consumption and business patterns for each industry defined by the three-digit North American Industry Classification System industry codes (NAICS). The national freight demand model for each selected industry sector consists of two models: a freight generation model and a freight attraction model. Thus, a total of 54 simple regression models were estimated under this study. Preliminary results indicated promising freight generation and freight attraction models. Among all models, only four of them had an R² value lower than 0.70. With additional modeling efforts, these freight demand models could be enhanced to allow transportation analysts to assess regional economic impacts associated with temporary loss of transportation services on U.S. transportation network infrastructures. Using such freight demand models and available U.S. business forecasts, future national freight demands could be forecasted within certain degrees of accuracy. These freight demand models could also enable transportation analysts to further disaggregate the CFS state-level origin-destination tables to county or zip code level.
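Each of the 54 models above is a simple regression linking freight tonnage to a business-pattern measure for one sector. A minimal sketch of one such fit by ordinary least squares follows; the choice of employment as the predictor and all data values are made up for illustration, not taken from the CFS study.

```python
import numpy as np

def fit_freight_model(employment, tons):
    """OLS fit of a one-variable freight generation model, tons = a + b*employment.

    Returns the coefficients and the R^2 of the fit, mirroring the
    goodness-of-fit summary reported per sector in the study.
    """
    X = np.column_stack([np.ones_like(employment), employment])
    coef, *_ = np.linalg.lstsq(X, tons, rcond=None)
    predicted = X @ coef
    ss_res = ((tons - predicted) ** 2).sum()
    ss_tot = ((tons - tons.mean()) ** 2).sum()
    return coef, 1 - ss_res / ss_tot

# Hypothetical sector data: tonnage roughly 5 + 2 * employment
employment = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
tons = np.array([25.0, 45.0, 64.0, 85.0, 106.0])
coef, r2 = fit_freight_model(employment, tons)
```

In the study's framework, a generation model of this form and a matching attraction model are estimated per three-digit NAICS sector.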
On nonlocally interacting metrics, and a simple proposal for cosmic acceleration
NASA Astrophysics Data System (ADS)
Vardanyan, Valeri; Akrami, Yashar; Amendola, Luca; Silvestri, Alessandra
2018-03-01
We propose a simple, nonlocal modification to general relativity (GR) on large scales, which provides a model of late-time cosmic acceleration in the absence of the cosmological constant and with the same number of free parameters as in standard cosmology. The model is motivated by adding to the gravity sector an extra spin-2 field interacting nonlocally with the physical metric coupled to matter. The form of the nonlocal interaction is inspired by the simplest form of the Deser-Woodard (DW) model, αR(1/□)R, with one of the Ricci scalars being replaced by a constant m², and gravity is therefore modified in the infrared by adding a simple term of the form m²(1/□)R to the Einstein-Hilbert term. We study cosmic expansion histories, and demonstrate that the new model can provide background expansions consistent with observations if m is of the order of the Hubble expansion rate today, in contrast to the simple DW model with no viable cosmology. The model is best fit by w0 ≈ -1.075 and wa ≈ 0.045. We also compare the cosmology of the model to that of Maggiore and Mancarella (MM), m²R(1/□²)R, and demonstrate that the viable cosmic histories follow the standard-model evolution more closely compared to the MM model. We further demonstrate that the proposed model possesses the same number of physical degrees of freedom as in GR. Finally, we discuss the appearance of ghosts in the local formulation of the model, and argue that they are unphysical and harmless to the theory, keeping the physical degrees of freedom healthy.
CALIBRATION OF EQUILIBRIUM TIDE THEORY FOR EXTRASOLAR PLANET SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Brad M. S., E-mail: hansen@astro.ucla.ed
2010-11-01
We provide an 'effective theory' of tidal dissipation in extrasolar planet systems by empirically calibrating a model for the equilibrium tide. The model is valid to high order in eccentricity and parameterized by two constants of bulk dissipation: one for dissipation in the planet and one for dissipation in the host star. We are able to consistently describe the distribution of extrasolar planetary systems in terms of period, eccentricity, and mass (with a lower limit of a Saturn mass) with this simple model. Our model is consistent with the survival of short-period exoplanet systems, but not with the circularization period of equal-mass stellar binaries, suggesting that the latter systems experience a higher level of dissipation than exoplanet host stars. Our model is also not consistent with the explanation of inflated planetary radii as resulting from tidal dissipation. The paucity of short-period planets around evolved A stars is explained as the result of enhanced tidal inspiral resulting from the increase in stellar radius with evolution.
Expected Monotonicity – A Desirable Property for Evidence Measures?
Hodge, Susan E.; Vieland, Veronica J.
2010-01-01
We consider here the principle of ‘evidential consistency’ – that as one gathers more data, any well-behaved evidence measure should, in some sense, approach the true answer. Evidential consistency is essential for the genome-scan design (GWAS or linkage), where one selects the most promising locus(i) for follow-up, expecting that new data will increase evidence for the correct hypothesis. Earlier work [Vieland, Hum Hered 2006;61:144–156] showed that many popular statistics do not satisfy this principle; Vieland concluded that the problem stems from fundamental difficulties in how we measure evidence and argued for determining criteria to evaluate evidence measures. Here, we investigate in detail one proposed consistency criterion – expected monotonicity (ExpM) – for a simple statistical model (binomial) and four likelihood ratio (LR)-based evidence measures. We show that, with one limited exception, none of these measures displays ExpM; what they do display is sometimes counterintuitive. We conclude that ExpM is not a reasonable requirement for evidence measures; moreover, no requirement based on expected values seems feasible. We demonstrate certain desirable properties of the simple LR and demonstrate a connection between the simple and integrated LRs. We also consider an alternative version of consistency, which is satisfied by certain forms of the integrated LR and posterior probability of linkage. PMID:20664208
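For orientation, the expectation of the *log* likelihood ratio in the binomial setting can be computed exactly: when the alternative is true it equals n times a Kullback-Leibler divergence and so grows linearly in n. The sketch below is a generic illustration of that textbook fact, not a reproduction of the paper's analyses, which concern the raw LR-based measures whose expectations behave less intuitively.

```python
import math

def binomial_lr(k, n, p1, p0):
    """Simple likelihood ratio for k successes in n trials, H1: p1 vs H0: p0."""
    return (p1 ** k * (1 - p1) ** (n - k)) / (p0 ** k * (1 - p0) ** (n - k))

def expected_log_lr(n, p_true, p1, p0):
    """Exact E[log LR] when the data are binomial(n, p_true).

    With p_true = p1 this equals n * KL(p1 || p0), so it increases
    monotonically with sample size.
    """
    total = 0.0
    for k in range(n + 1):
        pk = math.comb(n, k) * p_true ** k * (1 - p_true) ** (n - k)
        total += pk * math.log(binomial_lr(k, n, p1, p0))
    return total

e5 = expected_log_lr(5, p_true=0.7, p1=0.7, p0=0.5)
e10 = expected_log_lr(10, p_true=0.7, p1=0.7, p0=0.5)
# e10 = 2 * e5: the expectation is additive in sample size
```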
Hewitt, Angela L.; Popa, Laurentiu S.; Pasalar, Siavash; Hendrix, Claudia M.
2011-01-01
Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of Radj2), followed by position (28 ± 24% of Radj2) and speed (11 ± 19% of Radj2). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower Radj2 values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. 
These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics. PMID:21795616
A simple model for heterogeneous nucleation of isotactic polypropylene
NASA Astrophysics Data System (ADS)
Howard, Michael; Milner, Scott
2013-03-01
Flow-induced crystallization (FIC) is of interest because of its relevance to processes such as injection molding. It has been suggested that flow increases the homogeneous nucleation rate by reducing the melt state entropy. However, commercial polypropylene (iPP) exhibits quiescent nucleation rates that are much too high to be consistent with homogeneous nucleation in carefully purified samples. This suggests that heterogeneous nucleation is dominant for typical samples used in FIC experiments. We describe a simple model for heterogeneous nucleation of iPP, in terms of a cylindrical nucleus on a flat surface with the critical size and barrier set by the contact angle. Analysis of quiescent crystallization data with this model gives reasonable values for the contact angle. We have also employed atomistic simulations of iPP crystals to determine surface energies with vacuum and with Hamaker-matched substrates, and find values consistent with the contact angles inferred from heterogeneous nucleation experiments. In future work, these results combined with calculations from melt rheology of entropy reduction due to flow can be used to estimate the heterogeneous nucleation barrier reduction due to flow, and hence the increase in nucleation rate due to FIC for commercial iPP.
Oscillatory dynamics of investment and capacity utilization
NASA Astrophysics Data System (ADS)
Greenblatt, R. E.
2017-01-01
Capitalist economic systems display a wide variety of oscillatory phenomena whose underlying causes are often not well understood. In this paper, I consider a very simple model of the reciprocal interaction between investment, capacity utilization, and their time derivatives. The model, which gives rise to periodic oscillations, predicts qualitatively the phase relations between these variables. These predictions are observed to be consistent in a statistical sense with econometric data from the US economy.
pyhector: A Python interface for the simple climate model Hector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willner, Sven N.; Hartin, Corinne; Gieseke, Robert
2017-04-01
Pyhector is a Python interface for the simple climate model Hector (Hartin et al. 2015) developed in C++. Simple climate models like Hector can, for instance, be used in the analysis of scenarios within integrated assessment models like GCAM, in the emulation of complex climate models, and in uncertainty analyses. Hector is an open-source, object-oriented, simple global climate carbon-cycle model. Its carbon cycle consists of a one-pool atmosphere, three terrestrial pools which can be broken down into finer biomes or regions, and four carbon pools in the ocean component. The terrestrial carbon cycle includes primary production and respiration fluxes. The ocean carbon cycle circulates carbon via a simplified thermohaline circulation, calculating air-sea fluxes as well as the marine carbonate system (Hartin et al. 2016). The model input is time series of greenhouse gas emissions; as example scenarios for these, the Pyhector package contains the Representative Concentration Pathways (RCPs). These were developed to cover the range of baseline and mitigation emissions scenarios and are widely used in climate change research and model intercomparison projects. Using DataFrames from the Python library Pandas (McKinney 2010) as a data structure for the scenarios simplifies generating and adapting scenarios. Other parameters of the Hector model can easily be modified when running the model. Pyhector can be installed using pip from the Python Package Index. Source code and issue tracker are available in Pyhector's GitHub repository. Documentation is provided through Readthedocs. Usage examples are also contained in the repository as a Jupyter Notebook (Pérez and Granger 2007; Kluyver et al. 2016). Courtesy of the Mybinder project, the example Notebook can also be executed and modified without installing Pyhector locally.
Simple cellular automaton model for traffic breakdown, highway capacity, and synchronized flow.
Kerner, Boris S; Klenov, Sergey L; Schreckenberg, Michael
2011-10-01
We present a simple cellular automaton (CA) model for two-lane roads explaining the physics of traffic breakdown, highway capacity, and synchronized flow. The model consists of the rules "acceleration," "deceleration," "randomization," and "motion" of the Nagel-Schreckenberg CA model as well as "overacceleration through lane changing to the faster lane," "comparison of vehicle gap with the synchronization gap," and "speed adaptation within the synchronization gap" of Kerner's three-phase traffic theory. We show that these few rules of the CA model can appropriately simulate fundamental empirical features of traffic breakdown and highway capacity found in traffic data measured over years in different countries, like characteristics of synchronized flow, the existence of the spontaneous and induced breakdowns at the same bottleneck, and associated probabilistic features of traffic breakdown and highway capacity. Single-vehicle data derived in model simulations show that synchronized flow first occurs and then self-maintains due to a spatiotemporal competition between speed adaptation to a slower speed of the preceding vehicle and passing of this slower vehicle. We find that the application of simple dependences of randomization probability and synchronization gap on driving situation allows us to explain the physics of moving synchronized flow patterns and the pinch effect in synchronized flow as observed in real traffic data.
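The four Nagel-Schreckenberg rules that form the backbone of this CA model are compact enough to sketch. The following single-lane illustration implements those base rules only (acceleration, deceleration, randomization, motion), with illustrative parameter values; it omits the lane-changing, overacceleration, and synchronization-gap rules that constitute the three-phase extension described in the abstract:

```python
import random

def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3, rng=random):
    """One parallel update of the Nagel-Schreckenberg rules on a ring road.

    pos/vel are per-vehicle cell positions and speeds (cells per step).
    """
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])  # vehicles along the ring
    new_pos, new_vel = list(pos), list(vel)
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]
        gap = (pos[ahead] - pos[i] - 1) % road_len  # free cells to leader
        v = min(vel[i] + 1, v_max)                  # rule 1: acceleration
        v = min(v, gap)                             # rule 2: deceleration
        if v > 0 and rng.random() < p_slow:
            v -= 1                                  # rule 3: randomization
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % road_len        # rule 4: motion
    return new_pos, new_vel
```

Scanning the vehicle density while measuring mean flow with this base model should reproduce the familiar fundamental diagram; the additional three-phase rules are what produce synchronized flow and the breakdown phenomena discussed above.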
NASA Astrophysics Data System (ADS)
Valencia, Hubert; Kangawa, Yoshihiro; Kakimoto, Koichi
2015-12-01
GaAs(100) c(4×4) surfaces were examined by ab initio calculations under mixed As2, H2, and N2 gas conditions as a model for GaAs1-xNx vapor-phase epitaxy (VPE) on GaAs(100). Using a simple model consisting of As2 and H2 molecule adsorption and As/N atom substitution, we show that it is possible to examine the crystal-growth behavior by considering the relative stability of the resulting surfaces against the chemical potentials of the As2, H2, and N2 gases. Such a simple model allows us to draw a picture of the temperature and pressure stability domains for each surface that can be linked directly to specific growth conditions. We found that, using this simple model, it is possible to explain the different N-incorporation regimes observed experimentally at different temperatures, and to predict the transition temperature between these regimes. Additionally, a rational explanation of the N-incorporation ratio for each of these regimes is provided. Our model should thus lead to a better understanding and control of the experimental conditions needed to realize high-quality VPE of GaAs1-xNx.
fMRI activation patterns in an analytic reasoning task: consistency with EEG source localization
NASA Astrophysics Data System (ADS)
Li, Bian; Vasanta, Kalyana C.; O'Boyle, Michael; Baker, Mary C.; Nutter, Brian; Mitra, Sunanda
2010-03-01
Functional magnetic resonance imaging (fMRI) is used to model brain activation patterns associated with various perceptual and cognitive processes as reflected by the hemodynamic (BOLD) response. While many sensory and motor tasks are associated with relatively simple activation patterns in localized regions, higher-order cognitive tasks may produce activity in many different brain areas involving complex neural circuitry. We applied a recently proposed probabilistic independent component analysis technique (PICA) to determine the true dimensionality of the fMRI data and used EEG localization to identify the common activated patterns (mapped as Brodmann areas) associated with a complex cognitive task like analytic reasoning. Our preliminary study suggests that a hybrid GLM/PICA analysis may reveal additional regions of activation (beyond simple GLM) that are consistent with electroencephalography (EEG) source localization patterns.
Predictive power of food web models based on body size decreases with trophic complexity.
Jonsson, Tomas; Kaartinen, Riikka; Jonsson, Mattias; Bommarco, Riccardo
2018-05-01
Food web models parameterised using body size show promise to predict trophic interaction strengths (IS) and abundance dynamics. However, this remains to be rigorously tested in food webs beyond simple trophic modules, where indirect and intraguild interactions could be important and driven by traits other than body size. We systematically varied predator body size, guild composition and richness in microcosm insect webs and compared experimental outcomes with predictions of IS from models with allometrically scaled parameters. Body size was a strong predictor of IS in simple modules (r2 = 0.92), but with increasing complexity the predictive power decreased, with model IS being consistently overestimated. We quantify the strength of observed trophic interaction modifications, partition this into density-mediated vs. behaviour-mediated indirect effects and show that model shortcomings in predicting IS are related to the size of behaviour-mediated effects. Our findings encourage development of dynamical food web models explicitly including and exploring indirect mechanisms. © 2018 John Wiley & Sons Ltd/CNRS.
Simple scaling of catastrophic landslide dynamics.
Ekström, Göran; Stark, Colin P
2013-03-22
Catastrophic landslides involve the acceleration and deceleration of millions of tons of rock and debris in response to the forces of gravity and dissipation. Their unpredictability and frequent location in remote areas have made observations of their dynamics rare. Through real-time detection and inverse modeling of teleseismic data, we show that landslide dynamics are primarily determined by the length scale of the source mass. When combined with geometric constraints from satellite imagery, the seismically determined landslide force histories yield estimates of landslide duration, momenta, potential energy loss, mass, and runout trajectory. Measurements of these dynamical properties for 29 teleseismogenic landslides are consistent with a simple acceleration model in which height drop and rupture depth scale with the length of the failing slope.
Electron heating within interaction zones of simple high-speed solar wind streams
NASA Technical Reports Server (NTRS)
Feldman, W. C.; Asbridge, J. R.; Bame, S. J.; Gosling, J. T.; Lemons, D. S.
1978-01-01
In the present paper, electron heating within the high-speed portions of three simple stream-stream interaction zones is studied to further our understanding of the physics of heat flux regulation in interplanetary space. To this end, the thermal signals present in the compressions at the leading edges of the simple high-speed streams are analyzed, showing that the data are inconsistent with the Spitzer conductivity. Instead, a polynomial law is found to apply. Its implication concerning the mechanism of interplanetary heat conduction is discussed, and the results of applying this conductivity law to high-speed flows inside of 1 AU are studied. A self-consistent model of the radial evolution of electrons in the high-speed solar wind is proposed.
NASA Astrophysics Data System (ADS)
Palit, Sourav; Chakrabarti, Sandip Kumar; Pal, Sujay; Basak, Tamal
Extra ionization by X-rays during solar flares affects VLF signal propagation through the D-region ionosphere. Ionization produced in the lower ionosphere by the X-ray spectra of solar flares is simulated with an efficient detector simulation program, GEANT4. The balance between ionization and loss processes, which returns the lower ionosphere to its undisturbed state, is handled with a simple chemical model consisting of four broad species of ion densities. Using the electron densities, the modified VLF signal amplitude is then computed with the LWPC code. The VLF signal along the NWC (Australia) to IERC/ICSP (India) propagation path is examined during an M-type and an X-type solar flare, and observed deviations are compared with simulated results. The agreement is found to be excellent.
The Parent Magmas of the Cumulate Eucrites: A Mass Balance Approach
NASA Technical Reports Server (NTRS)
Treiman, Allan H.
1996-01-01
The cumulate eucrite meteorites are gabbros that are related to the eucrite basalt meteorites. The eucrite basalts are relatively primitive (nearly flat REE patterns with La approx. 8-30 x CI), but the parent magmas of the cumulate eucrites have been inferred as extremely evolved (La to greater than 100 x CI). This inference has been based on mineral/magma partitioning, and on mass balance considering the cumulate eucrites as adcumulates of plagioclase + pigeonite only; both approaches have been criticized as inappropriate. Here, mass balance including magma + equilibrium pigeonite + equilibrium plagioclase is used to test a simple model for the cumulate eucrites: that they formed from known eucritic magma types, that they consisted only of magma + crystals in chemical equilibrium with the magma, and that they were closed to chemical exchange after the accumulation of crystals. This model is tested for major and Rare Earth Elements (REE). The cumulate eucrites Serra de Mage and Moore County are consistent, in both REE and major elements, with formation by this simple model from a eucrite magma with a composition similar to the Nuevo Laredo meteorite: Serra de Mage as 14% magma, 47.5% pigeonite, and 38.5% plagioclase; Moore County as 35% magma, 37.5% pigeonite, and 27.5% plagioclase. These results are insensitive to the choice of mineral/magma partition coefficients. Results for the Moama cumulate eucrite are strongly dependent on choice of partition coefficients; for one reasonable choice, Moama's composition can be modeled as 4% Nuevo Laredo magma, 60% pigeonite, and 36% plagioclase. Selection of parent magma composition relies heavily on major elements; the REE cannot uniquely indicate a parent magma among the eucrite basalts. The major element composition of Y-791195 can be fit adequately as a simple cumulate from any basaltic eucrite composition.
However, Y-791195 has LREE abundances and La/Lu too low to be accommodated within the model using any basaltic eucrite composition and any reasonable partition coefficients. Postcumulus loss of incompatible elements seems possible. It is intriguing that Serra de Mage, Moore County, and Moama are consistent with the same parental magma; could they be from the same igneous body on the eucrite parent asteroid (4 Vesta)?
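The closed-system mass balance used here reduces to solving bulk = f_magma·C_magma + f_pig·C_pig + f_plag·C_plag for the three phase fractions. A minimal sketch with made-up oxide compositions (illustrative numbers only, not the meteorite data) shows the linear-algebra step; the synthetic bulk is constructed as an exact 14/47.5/38.5 mixture, so the solver recovers those fractions:

```python
import numpy as np

# Columns: hypothetical magma, pigeonite, plagioclase compositions (wt%).
# Rows: SiO2, Al2O3, FeO, CaO. Values are illustrative only.
C = np.array([
    [49.0, 51.0, 46.0],   # SiO2
    [12.5,  1.0, 33.0],   # Al2O3
    [19.0, 25.0,  0.5],   # FeO
    [10.0,  6.0, 17.5],   # CaO
])
f_true = np.array([0.14, 0.475, 0.385])   # magma, pigeonite, plagioclase
bulk = C @ f_true                          # synthetic cumulate bulk

# Least squares with the closure constraint sum(f) = 1 appended
# as a heavily weighted extra equation.
w = 100.0
A = np.vstack([C, w * np.ones(3)])
b = np.append(bulk, w)
f, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With real data the system is overdetermined and inconsistent, and the residual (plus sensitivity to the partition coefficients that generate the mineral compositions) is what distinguishes acceptable parent magmas, as the abstract describes for Moama and Y-791195.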
Anthropogenic heat flux: advisable spatial resolutions when input data are scarce
NASA Astrophysics Data System (ADS)
Gabey, A. M.; Grimmond, C. S. B.; Capel-Timms, I.
2018-02-01
Anthropogenic heat flux (QF) may be significant in cities, especially under low solar irradiance and at night. It is of interest to many practitioners including meteorologists, city planners and climatologists. QF estimates at fine temporal and spatial resolution can be derived from models that use varying amounts of empirical data. This study compares simple and detailed models in a European megacity (London) at 500 m spatial resolution. The simple model (LQF) uses spatially resolved population data and national energy statistics. The detailed model (GQF) additionally uses local energy, road network and workday population data. The Fractions Skill Score (FSS) and bias are used to rate the skill with which the simple model reproduces the spatial patterns and magnitudes of QF, and its sub-components, from the detailed model. LQF skill was consistently good across 90% of the city, away from the centre and major roads. The remaining 10% contained elevated emissions and "hot spots" representing 30-40% of the total city-wide energy. This structure was lost because it requires workday population, spatially resolved building energy consumption and/or road network data. Daily total building and traffic energy consumption estimates from national data were within ± 40% of local values. Progressively coarser spatial resolutions to 5 km improved skill for total QF, but important features (hot spots, transport network) were lost at all resolutions when residential population controlled spatial variations. The results demonstrate that simple QF models should be applied with conservative spatial resolution in cities that, like London, exhibit time-varying energy use patterns.
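The Fractions Skill Score used to rate the simple model is a standard neighborhood verification metric: threshold both fields, compute the event fraction in an n×n window around each cell, and compare mean-square differences of those fractions. A self-contained sketch (zero-padded box filter, n odd; illustrative fields, not the London data):

```python
import numpy as np

def box_fraction(binary, n):
    """Event fraction in an n x n window around each cell (n odd),
    zero-padded, via a cumulative-sum box filter."""
    pad = n // 2
    b = np.pad(np.asarray(binary, dtype=float), pad)
    c = np.cumsum(np.cumsum(b, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))           # leading zero row/column
    H, W = np.asarray(binary).shape
    tot = c[n:n + H, n:n + W] - c[:H, n:n + W] - c[n:n + H, :W] + c[:H, :W]
    return tot / (n * n)

def fss(model, ref, threshold, n):
    """Fractions Skill Score: 1 = perfect overlap, 0 = no skill."""
    fm = box_fraction(model >= threshold, n)
    fr = box_fraction(ref >= threshold, n)
    mse = np.mean((fm - fr) ** 2)
    mse_ref = np.mean(fm ** 2) + np.mean(fr ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```

Sweeping n from the native grid spacing upward mimics the coarsening experiment in the study: skill for smooth components rises with neighborhood size, while localized hot spots are the first features lost.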
Chaos in plasma simulation and experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watts, C.; Newman, D.E.; Sprott, J.C.
1993-09-01
We investigate the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas using data from both numerical simulations and experiment. A large repertoire of nonlinear analysis techniques is used to identify low dimensional chaos. These tools include phase portraits and Poincaré sections, correlation dimension, the spectrum of Lyapunov exponents and short term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low-dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low dimensional chaos or other simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
Mattfeldt, Torsten
2011-04-01
Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
Mass Balance of Multiyear Sea Ice in the Southern Beaufort Sea
2012-09-30
datasets. Table 1 lists the primary data sources to be used. To determine sources and sinks of MY ice, we use a simple model of MY ice circulation, which is... shown in Figure 1. In this model, we consider the Beaufort Sea to consist of four zones defined by mean drift of sea ice in summer and winter, such... Healy/Louis S. St. Laurent cruises 1 Seasonal Ice Zone Observing Network 2 Polar Airborne Measurements and Arctic Regional Climate Model
Comparison of Alcator C data with the Rebut-Lallia-Watkins critical gradient scaling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, I.H.
The critical temperature gradient model of Rebut, Lallia and Watkins is compared with data from Alcator C. The predicted central electron temperature is derived from the model, and a simple analytic formula is given. It is found to be in quite good agreement with the observed temperatures on Alcator C under ohmic heating conditions. However, the thermal diffusivity postulated in the model for gradients that exceed the critical is not consistent with the observed electron heating by Lower Hybrid waves.
Rainfall thresholds for the initiation of debris flows at La Honda, California
Wilson, R.C.; Wieczorek, G.F.
1995-01-01
A simple numerical model, based on the physical analogy of a leaky barrel, can simulate significant features of the interaction between rainfall and shallow-hillslope pore pressures. The leaky-barrel-model threshold is consistent with, but slightly higher than, an earlier, purely empirical, threshold. The number of debris flows triggered by a storm can be related to the time and amount by which the leaky-barrel-model response exceeded the threshold during the storm. -from Authors
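The leaky-barrel analogy maps directly onto a one-line ODE: the "barrel" level S (a proxy for shallow-hillslope pore pressure) rises with rainfall and drains in proportion to its level. A minimal forward-Euler sketch with made-up rates (the paper's calibrated drainage constant and threshold are not reproduced here):

```python
def leaky_barrel(rainfall, k=0.05, dt=1.0):
    """Integrate dS/dt = rain(t) - k * S with forward Euler.

    S is a dimensionless pore-pressure proxy; k is the leak rate.
    """
    S, levels = 0.0, []
    for r in rainfall:
        S += dt * (r - k * S)
        levels.append(S)
    return levels

# A 10-step storm followed by dry weather: the response rises during
# rain and decays afterwards, mimicking shallow pore-pressure behavior.
storm = [5.0] * 10 + [0.0] * 40
response = leaky_barrel(storm)
```

A debris-flow threshold is then a horizontal line drawn against `response`; the time and amount by which the response exceeds that line play the role described in the abstract.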
Modelling of capillary-driven flow for closed paper-based microfluidic channels
NASA Astrophysics Data System (ADS)
Songok, Joel; Toivakka, Martti
2017-06-01
Paper-based microfluidics is an emerging field focused on creating inexpensive devices, with simple fabrication methods for applications in various fields including healthcare, environmental monitoring and veterinary medicine. Understanding the flow of liquid is important in achieving consistent operation of the devices. This paper proposes capillary models to predict flow in paper-based microfluidic channels, which include a flow accelerating hydrophobic top cover. The models, which consider both non-absorbing and absorbing substrates, are in good agreement with the experimental results.
Giftedness and Genetics: The Emergenic-Epigenetic Model and Its Implications
ERIC Educational Resources Information Center
Simonton, Dean Keith
2005-01-01
The genetic endowment underlying giftedness may operate in a far more complex manner than often expressed in most theoretical accounts of the phenomenon. First, an endowment may be emergenic. That is, a gift may consist of multiple traits (multidimensional) that are inherited in a multiplicative (configurational), rather than an additive (simple)…
The Evaluation of Teachers' Job Performance Based on Total Quality Management (TQM)
ERIC Educational Resources Information Center
Shahmohammadi, Nayereh
2017-01-01
This study aimed to evaluate teachers' job performance based on total quality management (TQM) model. This was a descriptive survey study. The target population consisted of all primary school teachers in Karaj (N = 2917). Using Cochran formula and simple random sampling, 340 participants were selected as sample. A total quality management…
While it is generally accepted that dense stands of plants exacerbate epidemics caused by foliar pathogens, there is little experimental evidence to support this view. We grew model plant communities consisting of wheat and wild oats at different densities and proportions and exp...
A cognitive-consistency based model of population wide attitude change.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lakkaraju, Kiran; Speed, Ann Elizabeth
Attitudes play a significant role in determining how individuals process information and behave. In this paper we develop a new computational model of population-wide attitude change that captures both the social level (how individuals interact and communicate information) and the cognitive level (how attitudes and concepts interact with each other). The model captures the cognitive aspect by representing each individual as a parallel constraint satisfaction network. The dynamics of this model are explored through a simple attitude change experiment in which we vary the social network and the distribution of attitudes in a population.
Galactic chemical evolution and nucleocosmochronology - Standard model with terminated infall
NASA Technical Reports Server (NTRS)
Clayton, D. D.
1984-01-01
Some exactly soluble families of models for the chemical evolution of the Galaxy are presented. The parameters considered include gas mass, the age-metallicity relation, the star mass vs. metallicity, the age distribution, and the mean age of dwarfs. A short BASIC program for calculating these parameters is given. The calculation of metallicity gradients, nuclear cosmochronology, and extinct radioactivities is addressed. An especially simple, mathematically linear model is recommended as a standard model of galaxies with truncated infall due to its internal consistency and compact display of the physical effects of the parameters.
Analytical model for minority games with evolutionary learning
NASA Astrophysics Data System (ADS)
Campos, Daniel; Méndez, Vicenç; Llebot, Josep E.; Hernández, Germán A.
2010-06-01
In a recent work [D. Campos, J.E. Llebot, V. Méndez, Theor. Popul. Biol. 74 (2009) 16] we introduced a biological version of the Evolutionary Minority Game that tries to reproduce the intraspecific competition for limited resources in an ecosystem. In comparison with the complex decision-making mechanisms used in standard Minority Games, only two extremely simple strategies (juveniles and adults) are accessible to the agents. Complexity is introduced instead through an evolutionary learning rule that allows younger agents to learn to make better decisions. We find that this game shows many of the typical properties found for Evolutionary Minority Games, like self-segregation behavior or the existence of an oscillation phase for a certain range of the parameter values. However, an analytical treatment becomes much easier in our case, taking advantage of the simple strategies considered. Using a model consisting of a simple dynamical system, the phase diagram of the game (which differentiates three phases: adults crowd, juveniles crowd and oscillations) is reproduced.
Holographic multiverse and conformal invariance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garriga, Jaume; Vilenkin, Alexander, E-mail: jaume.garriga@ub.edu, E-mail: vilenkin@cosmos.phy.tufts.edu
2009-11-01
We consider a holographic description of the inflationary multiverse, according to which the wave function of the universe is interpreted as the generating functional for a lower dimensional Euclidean theory. We analyze a simple model where transitions between inflationary vacua occur through bubble nucleation, and the inflating part of spacetime consists of de Sitter regions separated by thin bubble walls. In this model, we present some evidence that the dual theory is conformally invariant in the UV.
Constraints on genes shape long-term conservation of macro-synteny in metazoan genomes.
Lv, Jie; Havlak, Paul; Putnam, Nicholas H
2011-10-05
Many metazoan genomes conserve chromosome-scale gene linkage relationships ("macro-synteny") from the common ancestor of multicellular animal life [1-4], but the biological explanation for this conservation is still unknown. Double cut and join (DCJ) is a simple, well-studied model of neutral genome evolution amenable to both simulation and mathematical analysis [5], but as we show here, it is not sufficient to explain long-term macro-synteny conservation. We examine a family of simple (one-parameter) extensions of DCJ to identify models and choices of parameters consistent with the levels of macro- and micro-synteny conservation observed among animal genomes. Our software implements a flexible strategy for incorporating various types of genomic context into the DCJ model ("DCJ-[C]"), and is available as open source software from http://github.com/putnamlab/dcj-c. A simple model of genome evolution, in which DCJ moves are allowed only if they maintain chromosomal linkage among a set of constrained genes, can simultaneously account for the level of macro-synteny conservation and for correlated conservation among multiple pairs of species. Simulations under this model indicate that a constraint on approximately 7% of metazoan genes is sufficient to constrain genome rearrangement to an average rate of 25 inversions and 1.7 translocations per million years.
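A DCJ move itself is easy to state on the adjacency-graph representation of a genome: cut two adjacencies and rejoin their four gene extremities the other way. The sketch below implements only this unconstrained base move (the paper's DCJ-[C] constraint check, which vetoes moves that break linkage among constrained genes, is omitted; the names and representation are illustrative, not taken from the dcj-c code):

```python
import random

def dcj_move(adjacencies, rng=random):
    """Apply one random double-cut-and-join operation.

    `adjacencies` is a set of frozensets, each pairing two gene
    extremities such as ("g0", "head") and ("g1", "tail").
    """
    adj = list(adjacencies)
    i, j = rng.sample(range(len(adj)), 2)   # choose two adjacencies to cut
    a, b = sorted(adj[i])
    c, d = sorted(adj[j])
    # Two possible rejoinings of the four cut extremities.
    if rng.random() < 0.5:
        new1, new2 = frozenset([a, c]), frozenset([b, d])
    else:
        new1, new2 = frozenset([a, d]), frozenset([b, c])
    return (set(adjacencies) - {adj[i], adj[j]}) | {new1, new2}
```

Iterating this move, with a constraint predicate deciding which rejoinings are allowed, is the core of a simulation like the one described above; every extremity still appears in exactly one adjacency after each move.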
NASA Astrophysics Data System (ADS)
Hirabayashi, M.; Howl, B. A.; Fassett, C. I.; Soderblom, J. M.; Minton, D. A.; Melosh, H. J.
2018-02-01
Impact cratering is likely a primary agent of regolith generation on airless bodies. Regolith production via impact cratering has long been a key topic of study since the Apollo era. The evolution of regolith due to impact cratering, however, is not well understood. A better formulation is needed to help quantify the formation mechanism and timescale of regolith evolution. Here we propose an analytically derived stochastic model that describes the evolution of regolith generated by small, simple craters. We account for ejecta blanketing as well as regolith infilling of the transient crater cavity. Our results show that the regolith infilling plays a key role in producing regolith. Our model demonstrates that because of the stochastic nature of impact cratering, the regolith thickness varies laterally, which is consistent with earlier work. We apply this analytical model to the regolith evolution at the Apollo 15 site. The regolith thickness is computed considering the observed crater size-frequency distribution of small, simple lunar craters (< 381 m in radius for ejecta blanketing and <100 m in radius for the regolith infilling). Allowing for some amount of regolith coming from the outside of the area, our result is consistent with an empirical result from the Apollo 15 seismic experiment. Finally, we find that the timescale of regolith growth is longer than that of crater equilibrium, implying that even if crater equilibrium is observed on a cratered surface, it is likely that the regolith thickness is still evolving due to additional impact craters.
Modelling melting in crustal environments, with links to natural systems in the Nepal Himalayas
NASA Astrophysics Data System (ADS)
Isherwood, C.; Holland, T.; Bickle, M.; Harris, N.
2003-04-01
Melt bodies of broadly granitic character occur frequently in mountain belts such as the Himalayan chain which exposes leucogranitic intrusions along its entire length (e.g. Le Fort, 1975). The genesis and disposition of these bodies have considerable implications for the development of tectonic evolution models for such mountain belts. However, melting processes and melt migration behaviour are influenced by many factors (Hess, 1995; Wolf & McMillan, 1995) which are as yet poorly understood. Recent improvements in internally consistent thermodynamic datasets have allowed the modelling of simple granitic melt systems (Holland & Powell, 2001) at pressures below 10 kbar, of which Himalayan leucogranites provide a good natural example. Model calculations such as these have been extended to include an asymmetrical melt-mixing model based on the Van Laar approach, which uses volumes (or pseudovolumes) for the different end-members in a mixture to control the asymmetry of non-ideal mixing. This asymmetrical formalism has been used in conjunction with several different entropy of mixing assumptions in an attempt to find the closest fit to available experimental data for melting in simple binary and ternary haplogranite systems. The extracted mixing data are extended to more complex systems and allow the construction of phase relations in NKASH necessary to model simple haplogranitic melts involving albite, K-feldspar, quartz, sillimanite and H2O. The models have been applied to real bulk composition data from Himalayan leucogranites.
CDP++.Italian: Modelling Sublexical and Supralexical Inconsistency in a Shallow Orthography
Perry, Conrad; Ziegler, Johannes C.; Zorzi, Marco
2014-01-01
Most models of reading aloud have been constructed to explain data in relatively complex orthographies like English and French. Here, we created an Italian version of the Connectionist Dual Process Model of Reading Aloud (CDP++) to examine the extent to which the model could predict data in a language which has relatively simple orthography-phonology relationships but is comparatively complex at the suprasegmental (word stress) level. We show that the model exhibits good quantitative performance and accounts for key phenomena observed in naming studies, including some apparently contradictory findings. These effects include stress regularity and stress consistency, both of which have been especially important in studies of word recognition and reading aloud in Italian. Overall, the results of the model compare favourably to an alternative connectionist model that can learn non-linear spelling-to-sound mappings. This suggests that CDP++ is currently the leading computational model of reading aloud in Italian, and that its simple linear learning mechanism adequately captures the statistical regularities of the spelling-to-sound mapping at both the segmental and supra-segmental levels. PMID:24740261
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Why Bother and Calibrate? Model Consistency and the Value of Prior Information.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.
2014-12-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.
2014-09-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
Self-consistent approach for neutral community models with speciation
NASA Astrophysics Data System (ADS)
Haegeman, Bart; Etienne, Rampal S.
2010-03-01
Hubbell’s neutral model provides a rich theoretical framework to study ecological communities. By incorporating both ecological and evolutionary time scales, it allows us to investigate how communities are shaped by speciation processes. The speciation model in the basic neutral model is particularly simple, describing speciation as a point-mutation event at the birth of a single individual. The stationary species abundance distribution of the basic model, which can be solved exactly, fits empirical data of distributions of species’ abundances surprisingly well. More realistic speciation models have been proposed, such as the random-fission model in which new species appear by splitting up existing species. However, no analytical solution is available for these models, impeding quantitative comparison with data. Here, we present a self-consistent approximation method for neutral community models with various speciation modes, including random fission. We derive explicit formulas for the stationary species abundance distribution, which agree very well with simulations. We expect that our approximation method will be useful to study other speciation processes in neutral community models as well.
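The point-mutation speciation process described here is simple to simulate directly. A minimal zero-sum (Moran-type) sketch follows; the community size, speciation probability, and function names are illustrative choices, not parameters from the paper.

```python
import random
from collections import Counter

def neutral_community(J=200, nu=0.01, steps=20000, rng=None):
    # Zero-sum neutral dynamics: at each event one individual dies and is
    # replaced, with probability nu, by a brand-new species (point-mutation
    # speciation), otherwise by a copy of a randomly chosen individual.
    rng = rng or random.Random(0)
    community = [0] * J        # species label of each individual
    next_label = 1
    for _ in range(steps):
        dead = rng.randrange(J)
        if rng.random() < nu:
            community[dead] = next_label   # point-mutation speciation
            next_label += 1
        else:
            community[dead] = community[rng.randrange(J)]
    return community

def abundance_distribution(community):
    # Ranked species abundance distribution (most abundant species first).
    return sorted(Counter(community).values(), reverse=True)
```

Replacing the speciation branch with a random split of an existing species' individuals would give the random-fission variant discussed above.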
Role of local assembly in the hierarchical crystallization of associating colloidal hard hemispheres
NASA Astrophysics Data System (ADS)
Lei, Qun-li; Hadinoto, Kunn; Ni, Ran
2017-10-01
Hierarchical self-assembly, in which simple building blocks associate locally to form complex structures, is widespread in nature, yet the essential role of local assembly remains poorly understood. In this work, by using computer simulations, we study a simple model system consisting of associating colloidal hemispheres crystallizing into face-centered-cubic crystals composed of spherical dimers of hemispheres, focusing on the effect of dimer formation on the hierarchical crystallization. We found that besides assisting crystal nucleation by increasing the symmetry of the building blocks, the association between hemispheres can also induce both reentrant melting and reentrant crystallization depending on the range of interaction. In particular, when the interaction is highly sticky, we observe a novel reentrant crystallization of identical crystals, which melt only in a certain temperature range. This offers another axis for fabricating responsive crystalline materials by tuning the fluctuation of local association.
A (very) Simple Model for the Aspect Ratio of High-Order River Basins
NASA Astrophysics Data System (ADS)
Shelef, E.
2017-12-01
The structure of river networks dictates the distribution of elevation, water, and sediments across Earth's surface. Despite its intricate shape, the structure of high-order river networks displays some surprising regularities such as the consistent aspect ratio (i.e., basin's width over length) of river basins along linear mountain fronts. This ratio controls the spacing between high-order channels as well as the spacing between the depositional bodies they form. It is generally independent of tectonic and climatic conditions and is often attributed to the initial topography over which the network was formed. This study shows that a simple, cross-like channel model explains this ratio via a requirement for equal elevation gain between the outlets and drainage-divides of adjacent channels at topographic steady state. This model also explains the dependence of aspect ratio on channel concavity and the location of the widest point on a drainage divide.
A simple quantum mechanical treatment of scattering in nanoscale transistors
NASA Astrophysics Data System (ADS)
Venugopal, R.; Paulsson, M.; Goasguen, S.; Datta, S.; Lundstrom, M. S.
2003-05-01
We present a computationally efficient, two-dimensional quantum mechanical simulation scheme for modeling dissipative electron transport in thin body, fully depleted, n-channel, silicon-on-insulator transistors. The simulation scheme, which solves the nonequilibrium Green's function equations self-consistently with Poisson's equation, treats the effect of scattering using a simple approximation inspired by the "Büttiker probes," often used in mesoscopic physics. It is based on an expansion of the active device Hamiltonian in decoupled mode space. Simulation results are used to highlight quantum effects, discuss the physics of scattering, and to relate the quantum mechanical quantities used in our model to experimentally measured low-field mobilities. Additionally, quantum boundary conditions are rigorously derived and the effects of strong off-equilibrium transport are examined. This paper shows that our approximate treatment of scattering is an efficient and useful simulation method for modeling electron transport in nanoscale, silicon-on-insulator transistors.
González-Ramírez, Laura R.; Ahmed, Omar J.; Cash, Sydney S.; Wayne, C. Eugene; Kramer, Mark A.
2015-01-01
Epilepsy—the condition of recurrent, unprovoked seizures—manifests in brain voltage activity with characteristic spatiotemporal patterns. These patterns include stereotyped semi-rhythmic activity produced by aggregate neuronal populations, and organized spatiotemporal phenomena, including waves. To assess these spatiotemporal patterns, we develop a mathematical model consistent with the observed neuronal population activity and determine analytically the parameter configurations that support traveling wave solutions. We then utilize high-density local field potential data recorded in vivo from human cortex preceding seizure termination from three patients to constrain the model parameters, and propose basic mechanisms that contribute to the observed traveling waves. We conclude that a relatively simple and abstract mathematical model consisting of localized interactions between excitatory cells with slow adaptation captures the quantitative features of wave propagation observed in the human local field potential preceding seizure termination. PMID:25689136
An Incidence Loss Model for Wave Rotors with Axially Aligned Passages
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.
1998-01-01
A simple mathematical model is described to account for the losses incurred when the flow in the duct (port) of a wave rotor is not aligned with the passages. The model, specifically for wave rotors with axially aligned passages, describes a loss mechanism which is sensitive to incident flow angle and Mach number. Implementation of the model in a one-dimensional CFD based wave rotor simulation is presented. Comparisons with limited experimental results are consistent with the model. Sensitivity studies are presented which highlight the significance of the incidence loss relative to other loss mechanisms in the wave rotor.
Test of a geometric model for the modification stage of simple impact crater development
NASA Technical Reports Server (NTRS)
Grieve, R. A. F.; Coderre, J. M.; Rupert, J.; Garvin, J. B.
1989-01-01
This paper presents a geometric model describing the geometry of the transient cavity of an impact crater and the subsequent collapse of its walls to form a crater filled by an interior breccia lens. The model is tested by comparing the volume of slump material calculated from known dimensional parameters with the volume of the breccia lens estimated on the basis of observational data. Results obtained from the model were found to be consistent with observational data, particularly in view of the highly sensitive nature of the model to input parameters.
Mira variables: An informal review
NASA Technical Reports Server (NTRS)
Wing, R. F.
1980-01-01
The structure of the Mira variables is discussed with particular emphasis on the extent of their observable atmospheres, the various methods for measuring the sizes of these atmospheres, and the manner in which the size changes through the cycle. The results obtained by direct, photometric and spectroscopic methods are compared, and the problems of interpretation are addressed. Also, a simple model for the atmospheric structure and motions of Miras based on recent observations of the doubling of infrared molecular lines is described. This model, consisting of two atmospheric layers plus a circumstellar shell, provides a physically plausible picture of the atmosphere which is consistent with the photometrically measured magnitude and temperature variations as well as the spectroscopic data.
The use of models to predict potential contamination aboard orbital vehicles
NASA Technical Reports Server (NTRS)
Boraas, Martin E.; Seale, Dianne B.
1989-01-01
A model of fungal growth on air-exposed, nonnutritive solid surfaces, developed for utilization aboard orbital vehicles, is presented. A unique feature of this testable model is that the development of a fungal mycelium can facilitate its own growth by condensation of water vapor from its environment directly onto fungal hyphae. The fungal growth rate is limited by the rate of supply of volatile nutrients, and fungal biomass is limited by either the supply of nonvolatile nutrients or by metabolic loss processes. The model discussed is structurally simple, but its dynamics can be quite complex. Biofilm accumulation can vary from a simple linear increase to sustained exponential growth, depending on the values of the environmental variables and model parameters. The results of the model are consistent with data from aquatic biofilm studies, insofar as the two types of systems are comparable. It is shown that the model presented is experimentally testable and provides a platform for the interpretation of observational data that may be directly relevant to the question of growth of organisms aboard the proposed Space Station.
Giovannini, Giannina; Sbarciog, Mihaela; Steyer, Jean-Philippe; Chamy, Rolando; Vande Wouwer, Alain
2018-05-01
Hydrogen has been found to be an important intermediate during anaerobic digestion (AD) and a key variable for process monitoring as it gives valuable information about the stability of the reactor. However, simple dynamic models describing the evolution of hydrogen are not commonplace. In this work, such a dynamic model is derived using a systematic data-driven approach, which consists of a principal component analysis to deduce the dimension of the minimal reaction subspace explaining the data, followed by an identification of the kinetic parameters in the least-squares sense. The procedure requires the availability of informative data sets. When the available data does not fulfill this condition, the model can still be built from simulated data, obtained using a detailed model such as ADM1. This dynamic model could be exploited in monitoring and control applications after a re-identification of the parameters using actual process data. As an example, the model is used in the framework of a control strategy, and is also fitted to experimental data from raw industrial wine processing wastewater. Copyright © 2018 Elsevier Ltd. All rights reserved.
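The first step of the identification procedure, using PCA to find the dimension of the minimal reaction subspace, can be sketched in a few lines. The variance threshold and the function name are assumptions for illustration, not the authors' exact criterion.

```python
import numpy as np

def reaction_subspace_dim(C, threshold=0.95):
    # PCA on mean-centered concentration data (rows: samples, columns:
    # species); the number of principal components needed to reach
    # `threshold` of the total variance estimates the dimension of the
    # minimal reaction subspace explaining the data.
    X = C - C.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)
    explained = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(explained, threshold) + 1)
```

Applied to data generated by two independent reactions, the estimate recovers a dimension of two regardless of the number of measured species.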
Mohammed, Asadig; Murugan, Jeff; Nastase, Horatiu
2012-11-02
We present an embedding of the three-dimensional relativistic Landau-Ginzburg model for condensed matter systems in an N = 6, U(N) × U(N) Chern-Simons-matter theory [the Aharony-Bergman-Jafferis-Maldacena model] by consistently truncating the latter to an Abelian effective field theory encoding the collective dynamics of O(N) of the O(N^2) modes. In fact, depending on the vacuum expectation value of one of the Aharony-Bergman-Jafferis-Maldacena scalars, a mass deformation parameter μ and the Chern-Simons level number k, our Abelianization prescription allows us to interpolate between the Abelian Higgs model with its usual multivortex solutions and a φ^4 theory. We sketch a simple condensed matter model that reproduces all the salient features of the Abelianization. In this context, the Abelianization can be interpreted as giving a dimensional reduction from four dimensions.
Jolin, William C; Goyetche, Reaha; Carter, Katherine; Medina, John; Vasudevan, Dharni; MacKay, Allison A
2017-06-06
With the increasing number of emerging contaminants that are cationic at environmentally relevant pH values, there is a need for robust predictive models of organic cation sorption coefficients (Kd). Current predictive models fail to account for the differences in the identity, abundance, and affinity of surface-associated inorganic exchange ions naturally present at negatively charged receptor sites on environmental solids. To better understand how organic cation sorption is influenced by surface-associated inorganic exchange ions, sorption coefficients of 10 organic cations (including eight pharmaceuticals and two simple probe organic amines) were determined for six homoionic forms of the aluminosilicate mineral, montmorillonite. Organic cation sorption coefficients exhibited consistent trends for all compounds across the various homoionic clays, with Kd decreasing as follows: Kd(Na+) > Kd(NH4+) ≥ Kd(K+) > Kd(Ca2+) ≥ Kd(Mg2+) > Kd(Al3+). This trend for competition between organic cations and exchangeable inorganic cations is consistent with the inorganic cation selectivity sequence determined for exchange between inorganic ions. Such consistent trends in competition between organic and inorganic cations suggested that a simple probe cation, such as phenyltrimethylammonium or benzylamine, could capture soil-to-soil variations in native inorganic cation identity and abundance for the prediction of organic cation sorption to soils and soil minerals. Indeed, sorption of two pharmaceutical compounds to 30 soils was better described by phenyltrimethylammonium sorption than by measures of benzylamine sorption, effective cation exchange capacity alone, or a model from the literature (Droge, S., and Goss, K. Environ. Sci. Technol. 2013, 47, 14224).
A hybrid approach integrating structural scaling factors derived from this literature model of organic cation sorption, along with phenyltrimethylammonium K d values, allowed for estimation of K d values for more structurally complex organic cations to homoionic montmorillonites and to heteroionic soils (mean absolute error of 0.27 log unit). Accordingly, we concluded that the use of phenyltrimethylammonium as a probe compound was a promising means to account for the identity, affinity, and abundance of natural exchange ions in the prediction of organic cation sorption coefficients for environmental solids.
The microwave spectrum and nature of the subsurface of Mars.
NASA Technical Reports Server (NTRS)
Cuzzi, J. N.; Muhleman, D. O.
1972-01-01
Expected microwave spectra of Mars are computed using an improved thermal model and accurate aspect geometry. It is found that when seasonal polar cap effects are included in the calculations, the observable spectrum of Mars is flat from 0.1-21 cm to within the accuracy of present data. The spectra obtained from this model are consistent with all the data and are obtainable from a relatively simple model (homogeneous, dry, smooth dielectric sphere). This result differs from that predicted by the analytical theory in common use which is in apparent conflict with the observed spectra. A range of electrical loss tangents, covering the extreme limits for likely dry particulate geological materials, is employed. The case of a lunar-like subsurface is completely consistent with all present data.
Simple cellular automaton model for traffic breakdown, highway capacity, and synchronized flow
NASA Astrophysics Data System (ADS)
Kerner, Boris S.; Klenov, Sergey L.; Schreckenberg, Michael
2011-10-01
We present a simple cellular automaton (CA) model for two-lane roads explaining the physics of traffic breakdown, highway capacity, and synchronized flow. The model consists of the rules “acceleration,” “deceleration,” “randomization,” and “motion” of the Nagel-Schreckenberg CA model as well as “overacceleration through lane changing to the faster lane,” “comparison of vehicle gap with the synchronization gap,” and “speed adaptation within the synchronization gap” of Kerner's three-phase traffic theory. We show that these few rules of the CA model can appropriately simulate fundamental empirical features of traffic breakdown and highway capacity found in traffic data measured over years in different countries, like characteristics of synchronized flow, the existence of the spontaneous and induced breakdowns at the same bottleneck, and associated probabilistic features of traffic breakdown and highway capacity. Single-vehicle data derived in model simulations show that synchronized flow first occurs and then self-maintains due to a spatiotemporal competition between speed adaptation to a slower speed of the preceding vehicle and passing of this slower vehicle. We find that the application of simple dependences of randomization probability and synchronization gap on driving situation allows us to explain the physics of moving synchronized flow patterns and the pinch effect in synchronized flow as observed in real traffic data.
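The four Nagel-Schreckenberg rules that form the backbone of this CA model can be written in a few lines. The sketch below covers only the single-lane NaSch core (the two-lane, three-phase rules for overacceleration, synchronization-gap comparison, and speed adaptation are omitted); vmax, the randomization probability, and the ring-road setup are illustrative.

```python
import random

def nasch_step(pos, vel, L, vmax=5, p=0.3, rng=random):
    # One parallel update of the Nagel-Schreckenberg rules on a
    # single-lane periodic road of L cells; pos must follow ring order.
    n = len(pos)
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % L  # empty cells ahead
        v = min(vel[i] + 1, vmax)                  # "acceleration"
        v = min(v, gap)                            # "deceleration"
        if v > 0 and rng.random() < p:             # "randomization"
            v -= 1
        vel[i] = v
    for i in range(n):                             # "motion"
        pos[i] = (pos[i] + vel[i]) % L
    return pos, vel
```

Because the deceleration rule caps each speed at the gap ahead and all cars move in parallel, no two cars can ever occupy the same cell.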
Analytical Model for Mars Crater-Size Frequency Distribution
NASA Astrophysics Data System (ADS)
Bruckman, W.; Ruiz, A.; Ramos, E.
2009-05-01
We present a theoretical and analytical curve that reproduces essential features of the frequency distribution vs. diameter of the 42,000 impact craters contained in Barlow's Mars Catalog. The model is derived using reasonably simple assumptions that allow us to relate the present crater population with the crater population at each particular epoch. The model takes into consideration the reduction of the number of craters as a function of time caused by their erosion and obliteration, and this provides a simple and natural explanation for the presence of different slopes in the empirical log-log plot of number of craters (N) vs. diameter (D). A mean life for martian craters as a function of diameter is deduced, and it is shown that this result is consistent with the corresponding determination of crater mean life based on Earth data. Arguments are given to suggest that this consistency follows from the fact that a crater's mean life is proportional to its volume. It also follows that in the absence of erosion and obliteration, when craters are preserved, we would have N ∝ 1/D^{4.3}, which is a striking conclusion, since the exponent 4.3 is larger than previously thought. Such an exponent implies a similar slope in the extrapolated impactor size-frequency distribution.
Sustained currents in coupled diffusive systems
NASA Astrophysics Data System (ADS)
Larralde, Hernán; Sanders, David P.
2014-08-01
Coupling two diffusive systems may give rise to a nonequilibrium stationary state (NESS) with a non-trivial persistent, circulating current. We study a simple example that is exactly soluble, consisting of random walkers with different biases towards a reflecting boundary, modelling, for example, Brownian particles with different charge states in an electric field. We obtain analytical expressions for the concentrations and currents in the NESS for this model, and exhibit the main features of the system by numerical simulation.
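A deterministic master-equation version of such a system is easy to iterate numerically: a walker with two internal "charge" states drifts with opposite biases between reflecting walls and switches state at a small rate. In the stationary state the total current across every bond vanishes, while the two state-resolved components are equal and opposite, which is the persistent circulating current. All parameter values, and the two-state reduction itself, are illustrative rather than the paper's exact model.

```python
import numpy as np

def stationary(L=30, q=(0.45, 0.55), switch=0.02, iters=20000):
    # Master equation for one walker with two internal states: state s
    # hops right with probability q[s] (left otherwise) between reflecting
    # walls, and flips its state with probability `switch` per step.
    P = np.full((2, L), 1.0 / (2 * L))   # occupation P[state, site]
    for _ in range(iters):
        Q = np.zeros_like(P)
        for s in (0, 1):
            right = (1 - switch) * q[s] * P[s]
            left = (1 - switch) * (1 - q[s]) * P[s]
            Q[s, 1:] += right[:-1]   # hops to the right neighbour
            Q[s, :-1] += left[1:]    # hops to the left neighbour
            Q[s, -1] += right[-1]    # reflected at the right wall
            Q[s, 0] += left[0]       # reflected at the left wall
            Q[1 - s] += switch * P[s]
        P = Q
    # State-resolved probability current across each bond x -> x+1.
    J = np.array([(1 - switch) * (q[s] * P[s, :-1] - (1 - q[s]) * P[s, 1:])
                  for s in (0, 1)])
    return P, J
```

State 0 piles up at the left wall and state 1 at the right; switching closes the loop, so "charge 1" particles stream rightward and "charge 0" particles stream back, with zero net mass flux.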
NASA Astrophysics Data System (ADS)
Almbladh, C.-O.; Morales, A. L.
1989-02-01
Auger CVV spectra of simple metals are generally believed to be well described by one-electron-like theories in the bulk which account for matrix elements and, in some cases, also static core-hole screening effects. We present here detailed calculations on Li, Be, Na, Mg, and Al using self-consistent bulk wave functions and proper matrix elements. The resulting spectra differ markedly from experiment and peak at too low energies. To explain this discrepancy we investigate effects of the surface and dynamical effects of the sudden disappearance of the core hole in the final state. To study core-hole effects we solve the Mahan-Nozières-De Dominicis (MND) model numerically over the entire band. The core-hole potential and other parameters in the MND model are determined by self-consistent calculations of the core-hole impurity. The results are compared with simpler approximations based on the final-state rule due to von Barth and Grossmann. To study surface and mean-free-path effects we perform slab calculations for Al but use a simpler infinite-barrier model in the remaining cases. The infinite-barrier model reproduces the slab spectra for Al with very good accuracy. In all cases investigated, either the effects of the surface or the effects of the core hole give important modifications and a much improved agreement with experiment.
NASA Astrophysics Data System (ADS)
Marchitto, T. M.; Bryan, S. P.; Doss, W.; McCulloch, M. T.; Montagna, P.
2018-01-01
In contrast to Li/Ca and Mg/Ca, Li/Mg is strongly anticorrelated with temperature in aragonites precipitated by the benthic foraminifer Hoeglundina elegans and a wide range of scleractinian coral taxa. We propose a simple conceptual model of biomineralization that explains this pattern and is consistent with available abiotic aragonite partition coefficients. Under this model the organism actively modifies seawater within its calcification pool by raising its [Ca2+], using a pump that strongly discriminates against both Li+ and Mg2+. Rayleigh fractionation during calcification effectively reverses this process, removing Ca2+ while leaving most Li+ and Mg2+ behind in the calcifying fluid. The net effect of these two processes is that Li/Mg in the calcifying fluid remains very close to the seawater value, and temperature-dependent abiotic partition coefficients are expressed in the biogenic aragonite Li/Mg ratio. We further show that coral Sr/Ca is consistent with this model if the Ca2+ pump barely discriminates against Sr2+. In H. elegans the covariation of Sr/Ca and Mg/Ca requires either that the pump more strongly discriminates against Sr2+, or that cation incorporation is affected by aragonite precipitation rate via the mechanism of surface entrapment. In either case Li/Mg is minimally affected by such 'vital effects' which plague other elemental ratio paleotemperature proxies.
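The Rayleigh part of this conceptual model reduces to closed-form expressions, sketched below. The partition coefficients and seawater ratios are illustrative numbers, not values from the paper; the point of the sketch is that the cumulative aragonite Li/Mg is nearly independent of both the pump amplification and the precipitated Ca fraction, because Li/Ca and Mg/Ca are modified in the same way.

```python
def aragonite_ratios(F, pump=2.0, K_Li=5e-3, K_Mg=1e-3,
                     sw_Li_Ca=2.4e-3, sw_Mg_Ca=5.1):
    # A Ca2+ pump raises fluid [Ca2+] by the factor `pump` while excluding
    # Li+ and Mg2+, lowering both element/Ca ratios of the calcifying
    # fluid; aragonite then precipitates until a fraction F of the fluid
    # Ca remains (Rayleigh distillation). Returns the cumulative aragonite
    # Li/Ca, Mg/Ca, and Li/Mg molar ratios.
    li_ca0 = sw_Li_Ca / pump
    mg_ca0 = sw_Mg_Ca / pump
    # Integrated solid from Rayleigh fractionation with partition
    # coefficient K_E:  (E/Ca)_solid = (E/Ca)_0 * (1 - F**K_E) / (1 - F)
    li_ca = li_ca0 * (1 - F**K_Li) / (1 - F)
    mg_ca = mg_ca0 * (1 - F**K_Mg) / (1 - F)
    return li_ca, mg_ca, li_ca / mg_ca
```

For small partition coefficients the solid Li/Mg collapses to (Li/Mg)_seawater × K_Li/K_Mg, so the temperature dependence of the abiotic partition coefficients is expressed while the 'vital effects' of pumping and precipitation extent largely cancel.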
Seasonal Synchronization of a Simple Stochastic Dynamical Model Capturing El Niño Diversity
NASA Astrophysics Data System (ADS)
Thual, S.; Majda, A.; Chen, N.
2017-12-01
The El Niño-Southern Oscillation (ENSO) has significant impact on global climate and seasonal prediction. Recently, a simple ENSO model was developed that automatically captures the ENSO diversity and intermittency in nature, where state-dependent stochastic wind bursts and nonlinear advection of sea surface temperature (SST) are coupled to simple ocean-atmosphere processes that are otherwise deterministic, linear and stable. In the present article, it is further shown that the model can reproduce qualitatively the ENSO synchronization (or phase-locking) to the seasonal cycle in nature. This goal is achieved by incorporating a cloud radiative feedback that is derived naturally from the model's atmosphere dynamics with no ad-hoc assumptions and accounts in simple fashion for the marked seasonal variations of convective activity and cloud cover in the eastern Pacific. In particular, the weak convective response to SSTs in boreal fall favors the eastern Pacific warming that triggers El Niño events while the increased convective activity and cloud cover during the following spring contributes to the shutdown of those events by blocking incoming shortwave solar radiations. In addition to simulating the ENSO diversity with realistic non-Gaussian statistics in different Niño regions, both the eastern Pacific moderate and super El Niño, the central Pacific El Niño as well as La Niña show a realistic chronology with a tendency to peak in boreal winter as well as decreased predictability in spring consistent with the persistence barrier in nature. The incorporation of other possible seasonal feedbacks in the model is also documented for completeness.
Control strategy for a dual-arm maneuverable space robot
NASA Technical Reports Server (NTRS)
Wang, P. K. C.
1987-01-01
A simple strategy for the attitude control and arm coordination of a maneuverable space robot with dual arms is proposed. The basic task for the robot consists of the placement of marked rigid solid objects with specified pairs of gripping points and a specified direction of approach for gripping. The strategy consists of three phases each of which involves only elementary rotational and translational collision-free maneuvers of the robot body. Control laws for these elementary maneuvers are derived by using a body-referenced dynamic model of the dual-arm robot.
NASA Astrophysics Data System (ADS)
Ahmed, Chaara El Mouez
We studied the dispersion relations and the scattering of glueballs and mesons in the compact U(1)_{2+1} model. This model has often been used as a simple model of quantum chromodynamics (QCD) because it exhibits confinement as well as glueball states, while its mathematical structure is much simpler than that of QCD. Our method consists of diagonalizing the Hamiltonian of this model in an appropriate basis of graphs on a momentum lattice, in order to generate the dispersion relations of the glueballs and mesons. For scattering, we used the time-dependent method to compute the S matrix and the scattering cross section of glueballs and mesons. The various results obtained appear to be in agreement with the earlier work of Hakim, Alessandrini et al., and Irving et al., who instead use strong-coupling perturbation theory and work on a space-time lattice.
A comparison of the structureborne and airborne paths for propfan interior noise
NASA Technical Reports Server (NTRS)
Eversman, W.; Koval, L. R.; Ramakrishnan, J. V.
1986-01-01
A comparison is made between the relative levels of aircraft interior noise related to structureborne and airborne paths for the same propeller source. A simple, but physically meaningful, model of the structure treats the fuselage interior as a rectangular cavity with five rigid walls. The sixth wall, the fuselage sidewall, is a stiffened panel. The wing is modeled as a simple beam carried into the fuselage by a large discrete stiffener representing the carry-through structure. The fuselage interior is represented by analytically-derived acoustic cavity modes and the entire structure is represented by structural modes derived from a finite element model. The noise source for structureborne noise is the unsteady lift generation on the wing due to the rotating trailing vortex system of the propeller. The airborne noise source is the acoustic field created by a propeller model consistent with the vortex representation. Comparisons are made on the basis of interior noise over a range of propeller rotational frequencies at a fixed thrust.
Simple versus complex models of trait evolution and stasis as a response to environmental change
NASA Astrophysics Data System (ADS)
Hunt, Gene; Hopkins, Melanie J.; Lidgard, Scott
2015-04-01
Previous analyses of evolutionary patterns, or modes, in fossil lineages have focused overwhelmingly on three simple models: stasis, random walks, and directional evolution. Here we use likelihood methods to fit an expanded set of evolutionary models to a large compilation of ancestor-descendant series of populations from the fossil record. In addition to the standard three models, we assess more complex models with punctuations and shifts from one evolutionary mode to another. As in previous studies, we find that stasis is common in the fossil record, as is a strict version of stasis that entails no real evolutionary changes. Incidence of directional evolution is relatively low (13%), but higher than in previous studies because our analytical approach can more sensitively detect noisy trends. Complex evolutionary models are often favored, overwhelmingly so for sequences comprising many samples. This finding is consistent with evolutionary dynamics that are, in reality, more complex than any of the models we consider. Furthermore, the timing of shifts in evolutionary dynamics varies among traits measured from the same series. Finally, we use our empirical collection of evolutionary sequences and a long and highly resolved proxy for global climate to inform simulations in which traits adaptively track temperature changes over time. When realistically calibrated, we find that this simple model can reproduce important aspects of our paleontological results. We conclude that observed paleontological patterns, including the prevalence of stasis, need not be inconsistent with adaptive evolution, even in the face of unstable physical environments.
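The likelihood-based model comparison described above can be illustrated with a minimal sketch fitting two of the standard modes, stasis and an unbiased random walk, to a trait series. The formulations are standard, but the data, parameters, and seed below are synthetic stand-ins, not the paper's fossil series:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ancestor-descendant trait series generated as an unbiased
# random walk (a stand-in for real fossil data; parameters arbitrary).
steps = rng.normal(0.0, 0.5, 40)
trait = np.cumsum(steps)

def loglik_stasis(x):
    # Stasis: samples i.i.d. Normal(theta, omega); concentrated
    # log-likelihood at the MLEs (sample mean and variance).
    omega = x.var()
    return -0.5 * len(x) * (np.log(2 * np.pi * omega) + 1)

def loglik_random_walk(x):
    # Unbiased random walk: increments i.i.d. Normal(0, sigma2).
    d = np.diff(x)
    sigma2 = np.mean(d**2)
    return -0.5 * len(d) * (np.log(2 * np.pi * sigma2) + 1)

# AIC = 2k - 2 logL; stasis has 2 parameters, the random walk 1.
aic_stasis = 2 * 2 - 2 * loglik_stasis(trait)
aic_urw = 2 * 1 - 2 * loglik_random_walk(trait)
print(aic_stasis, aic_urw)  # the lower AIC identifies the favored mode
```

The paper's expanded model set (punctuations, mode shifts) adds further candidates to exactly this kind of comparison.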
Equivalent circuit models for interpreting impedance perturbation spectroscopy data
NASA Astrophysics Data System (ADS)
Smith, R. Lowell
2004-07-01
As in-situ structural integrity monitoring disciplines mature, there is a growing need to process sensor/actuator data efficiently in real time. Although smaller, faster embedded processors will contribute to this, it is also important to develop straightforward, robust methods to reduce the overall computational burden for practical applications of interest. This paper addresses the use of equivalent circuit modeling techniques for inferring structure attributes monitored using impedance perturbation spectroscopy. In pioneering work about ten years ago significant progress was associated with the development of simple impedance models derived from the piezoelectric equations. Using mathematical modeling tools currently available from research in ultrasonics and impedance spectroscopy is expected to provide additional synergistic benefits. For purposes of structural health monitoring the objective is to use impedance spectroscopy data to infer the physical condition of structures to which small piezoelectric actuators are bonded. Features of interest include stiffness changes, mass loading, and damping or mechanical losses. Equivalent circuit models are typically simple enough to facilitate the development of practical analytical models of the actuator-structure interaction. This type of parametric structure model allows raw impedance/admittance data to be interpreted optimally using standard multiple, nonlinear regression analysis. One potential long-term outcome is the possibility of cataloging measured viscoelastic properties of the mechanical subsystems of interest as simple lists of attributes and their statistical uncertainties, whose evolution can be followed in time. Equivalent circuit models are well suited for addressing calibration and self-consistency issues such as temperature corrections, Poisson mode coupling, and distributed relaxation processes.
Kinetic electron model for plasma thruster plumes
NASA Astrophysics Data System (ADS)
Merino, Mario; Mauriño, Javier; Ahedo, Eduardo
2018-03-01
A paraxial model of an unmagnetized, collisionless plasma plume expanding into vacuum is presented. Electrons are treated kinetically, relying on the adiabatic invariance of their radial action integral for the integration of Vlasov's equation, whereas ions are treated as a cold species. The quasi-2D plasma density, self-consistent electric potential, and electron pressure, temperature, and heat fluxes are analyzed. In particular, the model yields the collisionless cooling of electrons, which differs from the Boltzmann relation and the simple polytropic laws usually employed in fluid and hybrid PIC/fluid plume codes.
Sobol, Wlad T
2002-01-01
A simple kinetic model that describes the time evolution of the chemical concentration of an arbitrary compound within the tank of an automatic film processor is presented. It provides insights into the kinetics of chemistry concentration inside the processor's tank; the results facilitate the tasks of processor tuning and quality control (QC). The model has successfully been used in several troubleshooting sessions of low-volume mammography processors for which maintaining consistent QC tracking was difficult due to fluctuations of bromide levels in the developer tank.
Depletion of interstitial oxygen in silicon and the thermal donor model
NASA Technical Reports Server (NTRS)
Borenstein, Jeffrey T.; Singh, Vijay A.; Corbett, James W.
1987-01-01
It is shown here that the experimental results of Newman (1985) and Tan et al. (1986) regarding the loss of oxygen interstitials during 450 C annealing of Czochralski silicon are consistent with the recently proposed model of Borenstein, Peak, and Corbett (1986) for thermal donor formation. Calculations were carried out for TD cores corresponding to O2, O3, O4, and/or O5 clusters. A simple model which attempts to capture the essential physics of the interstitial depletion has been constructed, and is briefly described.
NASA Astrophysics Data System (ADS)
Katsumata, Hisatoshi; Konishi, Keiji; Hara, Naoyuki
2018-04-01
The present paper proposes a scheme for controlling wave segments in excitable media. This scheme consists of two phases: in the first phase, a simple mathematical model for wave segments is derived using only the time series data of input and output signals for the media; in the second phase, the model derived in the first phase is used in an advanced control technique. We demonstrate with numerical simulations of the Oregonator model that this scheme performs better than a conventional control scheme.
The distribution of the scattered laser light in laser-plate-target coupling
NASA Astrophysics Data System (ADS)
Xiao-bo, Nie; Tie-qiang, Chang; Dong-xian, Lai; Shen-ye, Liu; Zhi-jian, Zheng
1997-04-01
Theoretical and experimental studies of the angular distributions of scattered laser light in laser-Au-plate-target coupling are reported. A simple model that describes three-dimensional plasmas and scattered laser light is presented. The approximate shape of the critical density surface is given, and three-dimensional laser ray tracing is applied in the model. The theoretical results of the model are consistent with the experimental data for the scattered laser light in the polar angle range of 25° to 145° from the laser beam.
A simple rule for the costs of vigilance: empirical evidence from a social forager.
Cowlishaw, Guy; Lawes, Michael J.; Lightbody, Margaret; Martin, Alison; Pettifor, Richard; Rowcliffe, J. Marcus
2004-01-01
It is commonly assumed that anti-predator vigilance by foraging animals is costly because it interrupts food searching and handling time, leading to a reduction in feeding rate. When food handling does not require visual attention, however, a forager may handle food while simultaneously searching for the next food item or scanning for predators. We present a simple model of this process, showing that when the length of such compatible handling time Hc is long relative to search time S, specifically Hc/S > 1, it is possible to perform vigilance without a reduction in feeding rate. We test three predictions of this model regarding the relationships between feeding rate, vigilance and the Hc/S ratio, with data collected from a wild population of social foragers (samango monkeys, Cercopithecus mitis erythrarchus). These analyses consistently support our model, including our key prediction: as Hc/S increases, the negative relationship between feeding rate and the proportion of time spent scanning becomes progressively shallower. This pattern is more strongly driven by changes in median scan duration than scan frequency. Our study thus provides a simple rule that describes the extent to which vigilance can be expected to incur a feeding rate cost. PMID:15002768
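The compatible-handling argument can be captured in a toy feeding-rate function. This is a minimal sketch of the stated rule, not the authors' actual model: handling of the previous item is assumed to run concurrently with both searching and scanning, so the per-item cycle time is whichever is longer.

```python
def feeding_rate(S, Hc, v):
    """Items consumed per unit time when compatible handling (Hc)
    overlaps both searching (S) and scanning (v); all times per item."""
    return 1.0 / max(S + v, Hc)

# Hc/S > 1: vigilance up to (Hc - S) per item is cost-free.
print(feeding_rate(1.0, 2.0, 0.0))  # 0.5
print(feeding_rate(1.0, 2.0, 1.0))  # 0.5  (no feeding-rate cost)
print(feeding_rate(1.0, 2.0, 2.0))  # ~0.333 (cost appears once v > Hc - S)

# Hc/S < 1: the forager is search-limited, so any scanning is costly.
print(feeding_rate(1.0, 0.5, 0.5))  # ~0.667
```

In this toy version the feeding-rate cost of vigilance vanishes exactly when Hc/S > 1 and v ≤ Hc - S, matching the qualitative rule in the abstract.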
Modeled changes of cerebellar activity in mutant mice are predictive of their learning impairments
NASA Astrophysics Data System (ADS)
Badura, Aleksandra; Clopath, Claudia; Schonewille, Martijn; de Zeeuw, Chris I.
2016-11-01
Translating neuronal activity to measurable behavioral changes has been a long-standing goal of systems neuroscience. Recently, we have developed a model of phase-reversal learning of the vestibulo-ocular reflex, a well-established, cerebellar-dependent task. The model, comprising both the cerebellar cortex and vestibular nuclei, reproduces behavioral data and accounts for the changes in neural activity during learning in wild type mice. Here, we used our model to predict Purkinje cell spiking as well as behavior before and after learning of five different lines of mutant mice with distinct cell-specific alterations of the cerebellar cortical circuitry. We tested these predictions by obtaining electrophysiological data depicting changes in neuronal spiking. We show that our data is largely consistent with the model predictions for simple spike modulation of Purkinje cells and concomitant behavioral learning in four of the mutants. In addition, our model accurately predicts a shift in simple spike activity in a mutant mouse with a brainstem specific mutation. This combination of electrophysiological and computational techniques opens a possibility of predicting behavioral impairments from neural activity.
An extended source for CN jets in Comet P/Halley
NASA Technical Reports Server (NTRS)
Klavetter, James Jay; A'Hearn, Michael F.
1994-01-01
We examined radial intensity profiles of CN jets in comparison with the diffuse, isotropic component of the CN coma of Comet P/Halley. All images were bias-subtracted, flat-fielded, and continuum-subtracted. We calculated the diffuse profiles by finding the azimuthal mean of the coma least contaminated by jets, yielding profiles similar to those of vectorial and Haser models of simple photodissociation. We found the jet profiles by calculating a mean around a Gaussian-fitted center in r-theta space. There is an unmistakable difference between the profiles of the CN jets and the profiles of the diffuse CN. Spatial derivatives of these profiles, corrected for geometrical expansion, show that the diffuse component is consistent with a simple photodissociation process, but the jet component is not. The peak production of the jet profile occurs 6000 km from the nucleus at a heliocentric distance of 1.4 AU. Modeling of both components of the coma indicates that the diffuse CN is photochemically produced, but that the CN jets require an additional extended source. We found that about one-half of the CN in the coma of Comet P/Halley originated from the jets and the rest from the diffuse component. These features, along with the approximately constant width of the jets, are consistent with a CHON grain origin for the jets.
Nonlocal Models of Cosmic Acceleration
NASA Astrophysics Data System (ADS)
Woodard, R. P.
2014-02-01
I review a class of nonlocally modified gravity models which were proposed to explain the current phase of cosmic acceleration without dark energy. Among the topics considered are deriving causal and conserved field equations, adjusting the model to make it support a given expansion history, why these models do not require an elaborate screening mechanism to evade solar system tests, degrees of freedom and kinetic stability, and the negative verdict of structure formation. Although these simple models are not consistent with data on the growth of cosmic structures many of their features are likely to carry over to more complicated models which are in better agreement with the data.
New adatom model for Si(111) 7x7 and Si(111)-Ge 5x5 reconstructed surfaces
NASA Technical Reports Server (NTRS)
Chadi, D. J.
1985-01-01
A new adatom model differing from the conventional model by a reconstruction of the substrate is proposed. The new adatom structure provides an explanation for the 7x7 and 5x5 size of the unit cells seen on annealed Si(111) and Si(111)-Ge surfaces, respectively. The model is consistent with structural information from vacuum-tunneling microscopy. It also provides simple explanations for stacking-fault-type features expected from Rutherford backscattering experiments and for similarities in the LEED and photoemission spectra of 2x1 and 7x7 surfaces.
NASA Technical Reports Server (NTRS)
Hess, R. A.; Wheat, L. W.
1975-01-01
A control theoretic model of the human pilot was used to analyze a baseline electronic cockpit display in a helicopter landing approach task. The head-down display was created on a stroke-written cathode ray tube, and the vehicle was a UH-1H helicopter. The landing approach task consisted of maintaining prescribed groundspeed and glideslope in the presence of random vertical and horizontal turbulence. The pilot model was also used to generate and evaluate display quickening laws designed to improve pilot-vehicle performance. A simple fixed-base simulation provided comparative tracking data.
NASA Technical Reports Server (NTRS)
Coles, W. A.; Harmon, J. K.; Lazarus, A. J.; Sullivan, J. D.
1978-01-01
Solar wind velocities measured by earth-orbiting spacecraft are compared with velocities determined from interplanetary scintillation (IPS) observations for 1973, a period when high-velocity streams were prevalent. The spacecraft and IPS velocities agree well in the mean and are highly correlated. No simple model for the distribution of enhanced turbulence within streams is sufficient to explain the velocity comparison results for the entire year. Although a simple proportionality between density fluctuation level and bulk density is consistent with IPS velocities for some periods, some streams appear to have enhanced turbulence in the high-velocity region, where the density is low.
Hewitt, Angela L; Popa, Laurentiu S; Pasalar, Siavash; Hendrix, Claudia M; Ebner, Timothy J
2011-11-01
Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks, or whether kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of R²adj), followed by position (28 ± 24% of R²adj) and speed (11 ± 19% of R²adj). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower R²adj values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach.
These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics.
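The regression-with-lead/lag approach described in the abstract can be sketched on synthetic data. The signals, parameter values, and grid-search procedure below are illustrative assumptions, not the study's actual analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: simple spike firing modeled as a linear function
# of hand velocity with a lead time tau. All values are made up.
dt = 0.01                                  # 10 ms bins (assumed)
t = np.arange(0.0, 10.0, dt)
velocity = np.sin(2 * np.pi * 1.0 * t)     # toy hand-velocity trace

lead_bins = 5                              # firing leads velocity by 50 ms
firing = 40 + 15 * np.roll(velocity, -lead_bins) + rng.normal(0, 0.5, t.size)

def r_squared(y, x):
    """R^2 of the OLS fit y = a + b*x."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    return 1.0 - resid.var() / y.var()

# Grid-search tau: pair firing(t) with velocity(t + k*dt) and keep the
# lag that maximizes the variance explained.
lags = list(range(-20, 21))
scores = []
for k in lags:
    if k >= 0:
        scores.append(r_squared(firing[:t.size - k], velocity[k:]))
    else:
        scores.append(r_squared(firing[-k:], velocity[:t.size + k]))

tau_hat = lags[int(np.argmax(scores))] * dt
print(tau_hat)  # recovers ~0.05 s: firing leads the kinematics
```

A positive fitted τ here corresponds to firing leading the kinematics, the sign pattern the abstract reports for most cells.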
Honest Importance Sampling with Multiple Markov Chains
Tan, Aixin; Doss, Hani; Hobert, James P.
2017-01-01
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. 
The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855
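The iid version of the importance sampling estimator and its simple variance estimator can be sketched as follows. The target, proposal, and test function are arbitrary choices for illustration; the paper's contribution concerns the harder MCMC case, where this simple standard-error recipe fails and regeneration is needed:

```python
import numpy as np

rng = np.random.default_rng(1)

def target_pdf(x):        # pi: standard normal (illustrative choice)
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def proposal_pdf(x):      # pi_1: Normal(0, 2^2) (illustrative choice)
    return np.exp(-x**2 / 8.0) / np.sqrt(8 * np.pi)

n = 100_000
x = rng.normal(0.0, 2.0, n)          # iid sample from pi_1
w = target_pdf(x) / proposal_pdf(x)  # importance weights pi/pi_1
g = w * x**2                         # estimates E_pi[X^2] = 1

est = g.mean()                       # strongly consistent IS estimator
se = g.std(ddof=1) / np.sqrt(n)      # simple CLT standard error, iid case only
print(est, se)
```

With Markov chain draws in place of `x`, `est` remains consistent but `se` as computed here is no longer valid, which is precisely the problem the regenerative construction in the paper addresses.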
NASA Astrophysics Data System (ADS)
Endo, Noritaka
2016-12-01
A simple stochastic cellular automaton model is proposed for simulating bedload transport, especially for cases with a low transport rate and where available sediments are very sparse on substrates in a subaqueous system. Numerical simulations show that the bed type changes from sheet flow through sand patches to ripples as the amount of sand increases; this is consistent with observations in flume experiments and in the field. Without changes in external conditions, the sand flux calculated for a given amount of sand decreases over time as bedforms develop from a flat bed. This appears to be inconsistent with the general understanding that sand flux remains unchanged under the constant-fluid condition, but it is consistent with the previous experimental data. For areas of low sand abundance, the sand flux versus sand amount (flux-density relation) in the simulation shows a single peak with an abrupt decrease, followed by a long tail; this is very similar to the flux-density relation seen in automobile traffic flow. This pattern (the relation between segments of the curve and the corresponding bed states) suggests that sand sheets, sand patches, and sand ripples correspond respectively to the free-flow phase, congested phase, and jam phase of traffic flows. This implies that sand topographic features on starved beds are determined by the degree of interference between sand particles. Although the present study deals with simple cases only, this can provide a simplified but effective modeling of the more complicated sediment transport processes controlled by interference due to contact between grains, such as the pulsatory migration of grain-size bimodal mixtures with repetition of clustering and scattering.
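The traffic-flow analogy invites a minimal sketch: a TASEP-like stochastic cellular automaton already reproduces a single-peaked flux-density curve. The rules and parameters below are a generic illustration, not the paper's actual transport model:

```python
import random

random.seed(0)

def sand_flux(density, length=200, steps=2000, p=1.0):
    """Toy 1-D stochastic cellular automaton: each occupied cell tries
    to move one cell downstream per step with probability p, and is
    blocked if that cell is occupied. Periodic boundaries."""
    n = int(density * length)
    occupied = [False] * length
    for i in random.sample(range(length), n):
        occupied[i] = True
    moves = 0
    for _ in range(steps):
        order = [i for i in range(length) if occupied[i]]
        random.shuffle(order)                 # random sequential update
        for i in order:
            j = (i + 1) % length
            if occupied[i] and not occupied[j] and random.random() < p:
                occupied[i], occupied[j] = False, True
                moves += 1
    return moves / steps / length             # mean flux per cell per step

# Flux rises at low density (free flow), peaks, then falls as particles
# interfere (congested/jammed), echoing the flux-density relation above.
fluxes = [sand_flux(d) for d in (0.1, 0.5, 0.9)]
print(fluxes)
```

The blocking rule is the only particle interaction, yet it suffices to produce the free-flow/congested distinction the abstract maps onto sand sheets, patches, and ripples.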
Wershaw, R. L.
1986-01-01
A generalized model of humic materials in soils and sediments, which is consistent with their observed properties, is presented. This model provides a means of understanding the interaction of hydrophobic pollutants with humic materials. In this model, it is proposed that the humic materials in soils and sediments consist of a number of different oligomers and simple compounds which result from the partial degradation of plant remains. These degradation products are stabilized by incorporation into humic aggregates bound together by weak bonding mechanisms, such as hydrogen bonding, pi bonding, and hydrophobic interactions. The resulting structures are similar to micelles or membranes, in which the interiors of the structures are hydrophobic and the exteriors are hydrophilic. Hydrophobic compounds will partition into the hydrophobic interiors of the humic micelles or "membrane-like" structures. ?? 1986.
General mechanism of two-state protein folding kinetics.
Rollins, Geoffrey C; Dill, Ken A
2014-08-13
We describe here a general model of the kinetic mechanism of protein folding. In the Foldon Funnel Model, proteins fold in units of secondary structures, which form sequentially along the folding pathway, stabilized by tertiary interactions. The model predicts that the free energy landscape has a volcano shape, rather than a simple funnel; that folding is two-state (single-exponential) when secondary structures are intrinsically unstable; and that each structure along the folding path is a transition state for the previous structure. It shows how sequential pathways are consistent with multiple stochastic routes on funnel landscapes, and it gives good agreement with the nine-order-of-magnitude dependence of folding rates on protein size for a set of 93 proteins, while remaining consistent with the near independence of the folding equilibrium constant on size. This model gives estimates of the folding rates of proteomes, leading to a median folding time in Escherichia coli of about 5 s.
Development of the Concept of Energy Conservation using Simple Experiments for Grade 10 Students
NASA Astrophysics Data System (ADS)
Rachniyom, S.; Toedtanya, K.; Wuttiprom, S.
2017-09-01
The purpose of this research was to develop students’ concept of and retention rate in relation to energy conservation. Activities included simple and easy experiments that considered energy transformation from potential to kinetic energy. The participants were 30 purposively selected grade 10 students in the second semester of the 2016 academic year. The research tools consisted of learning lesson plans and a learning achievement test. Results showed that the experiments worked well and were appropriate as learning activities. The students’ achievement scores increased significantly at the .05 level, the students’ retention rates were at a high level, and learning behaviour was at a good level. These simple experiments allowed students to learn by demonstrating to their peers and encouraged them to use familiar models to explain phenomena in daily life.
Creation of Consistent Burn Wounds: A Rat Model
Cai, Elijah Zhengyang; Ang, Chuan Han; Raju, Ashvin; Tan, Kong Bing; Hing, Eileen Chor Hoong; Loo, Yihua; Wong, Yong Chiat; Lee, Hanjing; Lim, Jane; Moochhala, Shabbir M; Hauser, Charlotte AE
2014-01-01
Background Burn infliction techniques are poorly described in rat models. An accurate study can only be achieved with wounds that are uniform in size and depth. We describe a simple reproducible method for creating consistent burn wounds in rats. Methods Ten male Sprague-Dawley rats were anesthetized and dorsum shaved. A 100 g cylindrical stainless-steel rod (1 cm diameter) was heated to 100℃ in boiling water. Temperature was monitored using a thermocouple. We performed two consecutive toe-pinch tests on different limbs to assess the depth of sedation. Burn infliction was limited to the loin. The skin was pulled upwards, away from the underlying viscera, creating a flat surface. The rod rested on its own weight for 5, 10, and 20 seconds at three different sites on each rat. Wounds were evaluated for size, morphology and depth. Results Average wound size was 0.9957 cm2 (standard deviation [SD] 0.1845) (n=30). Wounds created with duration of 5 seconds were pale, with an indistinct margin of erythema. Wounds of 10 and 20 seconds were well-defined, uniformly brown with a rim of erythema. Average depths of tissue damage were 1.30 mm (SD 0.424), 2.35 mm (SD 0.071), and 2.60 mm (SD 0.283) for duration of 5, 10, 20 seconds respectively. Burn duration of 5 seconds resulted in partial-thickness damage. Burn duration of 10 seconds and 20 seconds resulted in full-thickness damage, involving subjacent skeletal muscle. Conclusions This is a simple reproducible method for creating burn wounds consistent in size and depth in a rat burn model. PMID:25075351
Hybrid Modeling of Plasma Discharges
2010-04-01
of the distribution functions (Boltzmann equation or approximations to it) in a hyperdimensional space [4]. The selection of either approach...experiment To verify the algorithm, we used a simple test case consisting of a one-dimensional plasma with reflecting boundaries ("plasma in a ...to the one studied in [76] but with a much more severe initial condition, since in [76] there is only
ERIC Educational Resources Information Center
Marsh, Herbert W.; Scalas, L. Francesca; Nagengast, Benjamin
2010-01-01
Self-esteem, typically measured by the Rosenberg Self-Esteem Scale (RSE), is one of the most widely studied constructs in psychology. Nevertheless, there is broad agreement that a simple unidimensional factor model, consistent with the original design and typical application in applied research, does not provide an adequate explanation of RSE…
Collaboration using roles [in computer network security]
NASA Technical Reports Server (NTRS)
Bishop, Matt
1990-01-01
Segregation of roles into alternative accounts is a model that not only supports collaboration but also enables accurate accounting of the resources consumed by collaborative projects, protects the resources and objects of such a project, and introduces no new security vulnerabilities. The implementation presented here does not require users to remember additional passwords and provides a very simple, consistent interface.
Modeling filtration and fouling with a microstructured membrane filter
NASA Astrophysics Data System (ADS)
Cummings, Linda; Sanaei, Pejman
2017-11-01
Membrane filters find widespread use in diverse applications such as A/C systems and water purification. While the details of the filtration process may vary significantly, the broad challenge of efficient filtration is the same: to achieve finely controlled separation at low power consumption. The obvious resolution to the challenge would appear simple: use the largest pore size consistent with the separation requirement. However, the membrane characteristics (and hence the filter performance) are far from constant over the filter's lifetime: the particles removed from the feed are deposited within and on the membrane, fouling it and degrading its performance over time. The processes by which this occurs are complex and depend on several factors, including the internal structure of the membrane and the type of particles in the feed. We present a model for fouling of a simple microstructured membrane and investigate how the details of the microstructure affect the filtration efficiency. Our idealized membrane consists of bifurcating pores, arranged in a layered structure, so that the number (and size) of pores changes with depth in the membrane. In particular, we address how the details of the membrane microstructure affect the filter lifetime and the total throughput. NSF DMS 1615719.
Cranking Calculation in the sdg Interacting Boson Model
NASA Astrophysics Data System (ADS)
Wang, Baolin
1998-10-01
A self-consistent cranking calculation of the intrinsic states of the sdg interacting boson model is performed. Formulae for the moment of inertia are given for a general sdg IBM multipole Hamiltonian with one- and two-body terms. For the quadrupole interaction, the intrinsic states, the quadrupole and hexadecapole deformations, and the moment of inertia are investigated in the large-N limit. Using a simple Hamiltonian, the results of numerical calculations for 152,154Sm and 154-160Gd satisfactorily reproduce the experimental data.
Gravitational decoupled anisotropies in compact stars
NASA Astrophysics Data System (ADS)
Gabbanelli, Luciano; Rincón, Ángel; Rubio, Carlos
2018-05-01
Simple generic extensions of isotropic Durgapal-Fuloria stars to the anisotropic domain are presented. These anisotropic solutions are obtained by guided minimal deformations over the isotropic system. When the anisotropic sector interacts in a purely gravitational manner, the conditions to decouple both sectors by means of the minimal geometric deformation approach are satisfied. Hence the anisotropic field equations are isolated, resulting in a more tractable set. The simplicity of the equations allows one to manipulate the anisotropies, which can be implemented in a systematic way to obtain different realistic models for anisotropic configurations. Observational effects of such anisotropies on measurements of the surface redshift are then discussed. To conclude, the consistency of applying the method to the obtained anisotropic configurations is shown. In this manner, different anisotropic sectors can be isolated from each other and modeled in a simple and systematic way.
Optimized theory for simple and molecular fluids.
Marucho, M; Montgomery Pettitt, B
2007-03-28
An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between Percus-Yevick and hypernetted chain closures is optimized by minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M. Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in this proposed theory has allowed the authors to obtain an analytic expression for the approximate excess chemical potential. This is shown to be an efficient tool to estimate, from first principles, the numerical value of the interpolation parameters defining the aforementioned closure. As a preliminary test, representative models for simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, obtaining site-site correlation functions in excellent agreement with simulation data.
Petrology and age of alkalic lava from the Ratak Chain of the Marshall Islands
Davis, A.S.; Pringle, M.S.; Pickthorn, L.-B.G.; Clague, D.A.; Schwab, W.C.
1989-01-01
Volcanic rock dredged from the flanks of four volcanic edifices in the Ratak chain of the Marshall Islands consists of alkalic lava that erupted above sea level or in shallow water. Compositions of recovered samples are predominantly differentiated alkalic basalt and hawaiite but include strongly alkalic melilitite. Whole-rock 40Ar/39Ar total fusion and incremental heating ages of 87.3 ± 0.6 Ma and 82.2 ± 1.6 Ma, determined for samples from Erikub Seamount and Ratak Guyot, respectively, are within the range predicted by plate rotation models but show no age progression consistent with a simple hot spot model. Variations in isotopic and some incompatible element ratios suggest interisland heterogeneity. -from Authors
Rigid aggregates: theory and applications
NASA Astrophysics Data System (ADS)
Richardson, D. C.
2005-08-01
Numerical models employing ``perfect'' self-gravitating rubble piles that consist of monodisperse rigid spheres with configurable contact dissipation have been used to explore collisional and rotational disruption of gravitational aggregates. Applications of these simple models include numerical simulations of planetesimal evolution, asteroid family formation, tidal disruption, and binary asteroid formation. These studies may be limited by the idealized nature of the rubble pile model, since perfect identical spheres stack and shear in a very specific, possibly over-idealized way. To investigate how constituent properties affect the overall characteristics of a gravitational aggregate, particularly its failure modes, we have generalized our numerical code to model colliding, self-gravitating, rigid aggregates made up of variable-size spheres. Euler's equations of rigid-body motion in the presence of external torques are implemented, along with a self-consistent prescription for handling non-central impacts. Simple rules for sticking and breaking are also included. Preliminary results will be presented showing the failure modes of gravitational aggregates made up of smaller, rigid, non-idealized components. Applications of this new capability include more realistic aggregate models, convenient modeling of arbitrary rigid shapes for studies of the stability of orbiting companions (replacing one or both bodies with rigid aggregates eliminates expensive interparticle collisions while preserving the shape, spin, and gravity field of the bodies), and sticky particle aggregation in dense planetary rings. This material is based upon work supported by the National Aeronautics and Space Administration under Grant No. NAG511722 issued through the Office of Space Science and by the National Science Foundation under Grant No. AST0307549.
Farrell, Patrick; Sun, Jacob; Gao, Meg; Sun, Hong; Pattara, Ben; Zeiser, Arno; D'Amore, Tony
2012-08-17
A simple approach to the development of an aerobic scaled-down fermentation model is presented to obtain more consistent process performance during the scale-up of recombinant protein manufacture. Using a constant volumetric oxygen mass transfer coefficient (k(L)a) for the criterion of a scale-down process, the scaled-down model can be "tuned" to match the k(L)a of any larger-scale target by varying the impeller rotational speed. This approach is demonstrated for a protein vaccine candidate expressed in recombinant Escherichia coli, where process performance is shown to be consistent among 2-L, 20-L, and 200-L scales. An empirical correlation for k(L)a has also been employed to extrapolate to larger manufacturing scales. Copyright © 2012 Elsevier Ltd. All rights reserved.
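The "tuning" step described above can be sketched numerically. The correlation below is a generic van't Riet-type form with illustrative constants; the impeller power number, broth density, vessel dimensions, and operating points are assumptions for illustration, not values from the paper:

```python
import math

RHO = 1000.0  # broth density, kg/m^3 (assumed)
NP = 5.0      # impeller power number, typical of a Rushton turbine (assumed)

def kla(p_per_v, vs, c=0.026, a=0.4, b=0.5):
    """van't Riet-type correlation: kLa = c * (P/V)^a * vs^b (constants illustrative)."""
    return c * p_per_v**a * vs**b

def power_per_volume(n, d, v):
    """Ungassed power draw P = Np * rho * n^3 * d^5, divided by liquid volume v."""
    return NP * RHO * n**3 * d**5 / v

def speed_to_match_kla(kla_target, vs, d, v, c=0.026, a=0.4, b=0.5):
    """Invert the correlation for the required P/V, then solve for impeller speed n (rev/s)."""
    p_per_v = (kla_target / (c * vs**b)) ** (1.0 / a)
    return (p_per_v * v / (NP * RHO * d**5)) ** (1.0 / 3.0)

# tune a hypothetical 2-L scale-down vessel (d = 0.06 m) to the kLa of a 200-L target
kla_target = kla(p_per_v=2000.0, vs=0.005)  # assumed 200-L operating point
n_small = speed_to_match_kla(kla_target, vs=0.005, d=0.06, v=0.002)
assert math.isclose(kla(power_per_volume(n_small, 0.06, 0.002), 0.005), kla_target)
```

The closing assertion verifies the round trip: running the small vessel at the computed speed reproduces the target kLa under the assumed correlation.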
Simulation model for wind energy storage systems. Volume II. Operation manual. [SIMWEST code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, A.W.; Edsinger, R.W.; Burroughs, J.D.
1977-08-01
The effort developed a comprehensive computer program for the modeling of wind energy/storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel, and pneumatic). An acronym for the program is SIMWEST (Simulation Model for Wind Energy Storage). The level of detail of SIMWEST is consistent with a role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. Volume II, the SIMWEST operation manual, describes the usage of the SIMWEST program, the design of the library components, and a number of simple example simulations intended to familiarize the user with the program's operation. Volume II also contains a listing of each SIMWEST library subroutine.
Fuller, Robert William; Wong, Tony E; Keller, Klaus
2017-01-01
The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections.
Some anticipated contributions to core fluid dynamics from the GRM
NASA Technical Reports Server (NTRS)
Vanvorhies, C.
1985-01-01
It is broadly maintained that the secular variation (SV) of the large scale geomagnetic field contains information on the fluid dynamics of Earth's electrically conducting outer core. The electromagnetic theory appropriate to a simple Earth model has recently been combined with reduced geomagnetic data in order to extract some of this information and ascertain its significance. The simple Earth model consists of a rigid, electrically insulating mantle surrounding a spherical, inviscid, and perfectly conducting liquid outer core. This model was tested against seismology by using truncated spherical harmonic models of the observed geomagnetic field to locate Earth's core-mantle boundary, CMB. Further electromagnetic theory has been developed and applied to the problem of estimating the horizontal fluid motion just beneath CMB. Of particular geophysical interest are the hypotheses that these motions: (1) include appreciable surface divergence indicative of vertical motion at depth, and (2) are steady for time intervals of a decade or more. In addition to the extended testing of the basic Earth model, the proposed GRM provides a unique opportunity to test these dynamical hypotheses.
NASA Astrophysics Data System (ADS)
Ingebrigtsen, Trond S.; Schrøder, Thomas B.; Dyre, Jeppe C.
2012-01-01
This paper is an attempt to identify the real essence of simplicity of liquids in John Locke’s understanding of the term. Simple liquids are traditionally defined as many-body systems of classical particles interacting via radially symmetric pair potentials. We suggest that a simple liquid should be defined instead by the property of having strong correlations between virial and potential-energy equilibrium fluctuations in the NVT ensemble. There is considerable overlap between the two definitions, but also some notable differences. For instance, in the new definition simplicity is not a direct property of the intermolecular potential because a liquid is usually only strongly correlating in part of its phase diagram. Moreover, not all simple liquids are atomic (i.e., with radially symmetric pair potentials) and not all atomic liquids are simple. The main part of the paper motivates the new definition of liquid simplicity by presenting evidence that a liquid is strongly correlating if and only if its intermolecular interactions may be ignored beyond the first coordination shell (FCS). This is demonstrated by NVT simulations of the structure and dynamics of several atomic and three molecular model liquids with a shifted-forces cutoff placed at the first minimum of the radial distribution function. The liquids studied are inverse power-law systems (r^-n pair potentials with n = 18, 6, 4), Lennard-Jones (LJ) models (the standard LJ model, two generalized Kob-Andersen binary LJ mixtures, and the Wahnström binary LJ mixture), the Buckingham model, the Dzugutov model, the LJ Gaussian model, the Gaussian core model, the Hansen-McDonald molten salt model, the Lewis-Wahnström ortho-terphenyl model, the asymmetric dumbbell model, and the single-point charge water model. The final part of the paper summarizes properties of strongly correlating liquids, emphasizing that these are simpler than liquids in general.
Simple liquids, as defined here, may be characterized in three quite different ways: (1) chemically by the fact that the liquid’s properties are fully determined by interactions from the molecules within the FCS, (2) physically by the fact that there are isomorphs in the phase diagram, i.e., curves along which several properties like excess entropy, structure, and dynamics, are invariant in reduced units, and (3) mathematically by the fact that throughout the phase diagram the reduced-coordinate constant-potential-energy hypersurfaces define a one-parameter family of compact Riemannian manifolds. No proof is given that the chemical characterization follows from the strong correlation property, but we show that this FCS characterization is consistent with the existence of isomorphs in strongly correlating liquids’ phase diagram. Finally, we note that the FCS characterization of simple liquids calls into question the physical basis of standard perturbation theory, according to which the repulsive and attractive forces play fundamentally different roles for the physics of liquids.
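The defining criterion, strong correlation between virial and potential-energy fluctuations in the NVT ensemble, is easy to state operationally. The sketch below computes the Pearson correlation coefficient R on synthetic fluctuation series (not MD output); the R >= 0.9 cutoff is the threshold commonly used for "strongly correlating" liquids:

```python
import math
import random
import statistics

def correlation(u, w):
    """Pearson correlation coefficient R between potential-energy (U) and
    virial (W) equilibrium fluctuation series."""
    mu, mw = statistics.fmean(u), statistics.fmean(w)
    cov = sum((a - mu) * (b - mw) for a, b in zip(u, w))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sw = math.sqrt(sum((b - mw) ** 2 for b in w))
    return cov / (su * sw)

random.seed(1)
# For an inverse power-law r^-n pair potential, W = (n/3) U exactly, so an
# n = 18 system should show W ~ 6 U up to noise (synthetic data, not MD output).
u = [random.gauss(0.0, 1.0) for _ in range(2000)]
w = [6.0 * x + random.gauss(0.0, 0.1) for x in u]
assert correlation(u, w) > 0.9  # "strongly correlating" by the R >= 0.9 criterion
```

For a perfect inverse power-law system the correlation is exactly 1; real Lennard-Jones-type liquids fall just below it in parts of their phase diagrams.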
Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
Model sensitivity to parameter uncertainties is a key requirement for implementing model parameter estimation using filtering theory and methodology. Depending on the nature of the associated physics and the characteristic variability of the fluid in a coupled system, the response time scales of a model to its parameters can differ widely, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with the observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model's sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model spans characteristic synoptic to decadal scales by coupling a deep ocean that varies on long time scales with a slowly varying upper ocean forced by a chaotic atmosphere. Results show that, using an update frequency determined by the model sensitivity response time scale, both the reliability and the quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observations. These simple model results provide a guideline for using real observations to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
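The core idea, matching the parameter-update interval to the model's response time scale, can be illustrated with a far simpler system than the paper's coupled model. The toy below (all parameters hypothetical) estimates the relaxation target of dx/dt = (p - x)/tau by scaling each innovation by the fraction of the sensitivity response realized within one update window:

```python
import math

def relax(x, p, tau, t):
    """Analytic solution of dx/dt = (p - x) / tau advanced over a window of length t."""
    return p + (x - p) * math.exp(-t / tau)

def estimate_parameter(p_true, tau, window, n_cycles, gamma=0.5):
    """Update the parameter estimate once per window, scaling the innovation
    by the fraction of the model's response realized in that window."""
    x_obs = x_mod = 0.0
    p_est = 0.0
    response = 1.0 - math.exp(-window / tau)        # realized response fraction
    for _ in range(n_cycles):
        x_obs = relax(x_obs, p_true, tau, window)   # "truth" trajectory
        x_mod = relax(x_mod, p_est, tau, window)    # model forecast
        p_est += gamma * (x_obs - x_mod) / response # parameter update
        x_mod = x_obs                               # re-initialize state to obs
    return p_est

# an update window of ~2 tau leaves a well-conditioned sensitivity signal
p_est = estimate_parameter(p_true=2.0, tau=10.0, window=20.0, n_cycles=20)
assert abs(p_est - 2.0) < 0.01
```

With the innovation scaled by the realized response, the estimate converges geometrically; updating much faster than tau would divide by a near-zero response and amplify noise, which is the intuition behind tying update frequency to the sensitivity time scale.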
Aragón, Alfredo S; Kalberg, Wendy O; Buckley, David; Barela-Scott, Lindsey M; Tabachnick, Barbara G; May, Philip A
2008-12-01
Although a large body of literature exists on cognitive functioning in alcohol-exposed children, it is unclear if there is a signature neuropsychological profile in children with Fetal Alcohol Spectrum Disorders (FASD). This study assesses cognitive functioning in children with FASD from several American Indian reservations in the Northern Plains States, and it applies a hierarchical model of simple versus complex information processing to further examine cognitive function. We hypothesized that complex tests would discriminate between children with FASD and culturally similar controls, while children with FASD would perform similarly to controls on relatively simple tests. Our sample includes 32 control children and 24 children with a form of FASD [fetal alcohol syndrome (FAS) = 10, partial fetal alcohol syndrome (PFAS) = 14]. The test battery measures general cognitive ability, verbal fluency, executive functioning, memory, and fine-motor skills. Many of the neuropsychological tests produced results consistent with a hierarchical model of simple versus complex processing. The complexity of the tests was determined "a priori" based on the number of cognitive processes involved in them. Multidimensional scaling was used to statistically analyze the accuracy of classifying the neurocognitive tests into a simple versus complex dichotomy. Hierarchical logistic regression models were then used to define the contribution made by complex versus simple tests in predicting the significant differences between children with FASD and controls. Complex test items discriminated better than simple test items. The tests that conformed well to the model were the Verbal Fluency, Progressive Planning Test (PPT), the Lhermitte memory tasks, and the Grooved Pegboard Test (GPT). The FASD-grouped children, when compared with controls, demonstrated impaired performance on letter fluency, while their performance was similar on category fluency.
On the more complex PPT trials (problems 5 to 8), as well as the Lhermitte logical tasks, the FASD group performed the worst. The differential performance between children with FASD and controls was evident across various neuropsychological measures. The children with FASD performed significantly more poorly on the complex tasks than did the controls. The identification of a neurobehavioral profile in children with prenatal alcohol exposure will help clinicians identify and diagnose children with FASD.
(abstract) Simple Spreadsheet Thermal Models for Cryogenic Applications
NASA Technical Reports Server (NTRS)
Nash, A. E.
1994-01-01
Self-consistent circuit-analog thermal models, which can be run in commercial spreadsheet programs on personal computers, have been created to calculate the cooldown and steady-state performance of cryogen-cooled Dewars. The models include temperature-dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the Cryogenic Telescope Test Facility (CTTF). The facility will be on line in early 1995 for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm as well as a comparison of the model predictions and actual performance of this facility will be presented.
Simple Spreadsheet Thermal Models for Cryogenic Applications
NASA Technical Reports Server (NTRS)
Nash, Alfred
1995-01-01
Self-consistent circuit-analog thermal models that can be run in commercial spreadsheet programs on personal computers have been created to calculate the cooldown and steady-state performance of cryogen-cooled Dewars. The models include temperature-dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the SIRTF Telescope Test Facility (STTF). The facility has been brought on line for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm as well as a comparison between the models' predictions and the actual performance of this facility will be presented.
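A circuit-analog thermal model of the kind described reduces to a network of conductances and capacitances marched forward in time, exactly what a spreadsheet iterates cell by cell. The single-node sketch below (hypothetical parameter values, not the JPL model's) couples a lumped mass to a cryogen bath through a conductive and a radiative link:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def cooldown(t_start, t_bath, cap, g_cond, eps_a, dt=1.0, steps=200000):
    """Explicit-Euler cooldown of one lumped node linked to a cryogen bath by
    conduction (g_cond, W/K) and radiation (eps_a = emissivity * area, m^2).
    All parameter values used below are hypothetical."""
    t = t_start
    history = []
    for _ in range(steps):
        q = g_cond * (t_bath - t) + SIGMA * eps_a * (t_bath**4 - t**4)  # net heat in, W
        t += dt * q / cap                     # cap: lumped heat capacity, J/K
        history.append(t)
    return history

# a hypothetical Dewar mass cooling from room temperature toward an LN2 bath
temps = cooldown(t_start=300.0, t_bath=77.0, cap=5.0e4, g_cond=2.0, eps_a=0.05)
```

The same update rule, one row per time step with conduction and radiation terms evaluated from the previous row, is what a spreadsheet implementation of such a model computes.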
Digital image classification with the help of artificial neural network by simple histogram.
Dey, Pranab; Banerjee, Nirmalya; Kaur, Rajwant
2016-01-01
Visual image classification is a great challenge to the cytopathologist in routine day-to-day work. Artificial neural network (ANN) may be helpful in this matter. In this study, we have tried to classify digital images of malignant and benign cells in effusion cytology smear with the help of simple histogram data and ANN. A total of 404 digital images consisting of 168 benign cells and 236 malignant cells were selected for this study. The simple histogram data was extracted from these digital images and an ANN was constructed with the help of Neurointelligence software [Alyuda Neurointelligence 2.2 (577), Cupertino, California, USA]. The network architecture was 6-3-1. The images were classified as training set (281), validation set (63), and test set (60). The on-line backpropagation training algorithm was used for this study. A total of 10,000 iterations were done to train the ANN system with the speed of 609.81/s. After the adequate training of this ANN model, the system was able to identify all 34 malignant cell images and 24 out of 26 benign cells. The ANN model can be used for the identification of the individual malignant cells with the help of simple histogram data. This study will be helpful in the future to identify malignant cells in unknown situations.
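The 6-3-1 architecture with on-line backpropagation can be sketched directly. The implementation details and the synthetic "histogram" features below are assumptions for illustration; they do not reproduce the Neurointelligence software or the study's data:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class MLP631:
    """6-3-1 multilayer perceptron trained by on-line backpropagation."""

    def __init__(self):
        self.w1 = [[random.uniform(-0.5, 0.5) for _ in range(7)]  # 6 inputs + bias
                   for _ in range(3)]
        self.w2 = [random.uniform(-0.5, 0.5) for _ in range(4)]   # 3 hidden + bias

    def forward(self, x):
        xin = x + [1.0]
        self.h = [sigmoid(sum(w * v for w, v in zip(row, xin))) for row in self.w1]
        return sigmoid(sum(w * v for w, v in zip(self.w2, self.h + [1.0])))

    def train_step(self, x, target, lr=0.5):
        y = self.forward(x)
        d_out = (y - target) * y * (1.0 - y)          # squared-error gradient
        for j, hj in enumerate(self.h):               # hidden layer (uses old w2)
            d_h = d_out * self.w2[j] * hj * (1.0 - hj)
            for i, xi in enumerate(x + [1.0]):
                self.w1[j][i] -= lr * d_h * xi
        for j, v in enumerate(self.h + [1.0]):        # output layer
            self.w2[j] -= lr * d_out * v

random.seed(42)

def sample(label):
    """Synthetic 6-bin 'histogram': malignant (1) brighter than benign (0)."""
    base = 0.8 if label else 0.2
    return [min(1.0, max(0.0, random.gauss(base, 0.1))) for _ in range(6)], label

data = [sample(i % 2) for i in range(100)]
net = MLP631()

def mse(net, data):
    return sum((net.forward(x) - t) ** 2 for x, t in data) / len(data)

before = mse(net, data)
for _ in range(30):                                   # 30 on-line epochs
    for x, t in data:
        net.train_step(x, float(t))
after = mse(net, data)
accuracy = sum((net.forward(x) > 0.5) == bool(t) for x, t in data) / len(data)
```

On these well-separated synthetic features the training error drops quickly; the study's real histograms are of course noisier than this toy data.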
ALC: automated reduction of rule-based models
Koschorreck, Markus; Gilles, Ernst Dieter
2008-01-01
Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705
Struijs, J; van de Meent, D; Schowanek, D; Buchholz, H; Patoux, R; Wolf, T; Austin, T; Tolls, J; van Leeuwen, K; Galay-Burgos, M
2016-09-01
The multimedia model SimpleTreat evaluates the distribution and elimination of chemicals in municipal sewage treatment plants (STPs). It is applied in the framework of REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals). This article describes an adaptation of this model for application to industrial sewage treatment plants (I-STPs). The intended use of the re-parametrized model is risk assessment during the manufacture and subsequent uses of chemicals, also in the framework of REACH. The results of an inquiry into the operational characteristics of industrial sewage treatment installations were used to re-parameterize the model. One property of industrial sewage, the Biological Oxygen Demand (BOD), in combination with one parameter of the activated sludge process, the hydraulic retention time (HRT), proved sufficient to characterize treatment of industrial wastewater by means of the activated sludge process. The adapted model was compared to the original municipal version, SimpleTreat 4.0, by means of a sensitivity analysis. The consistency of the model output was assessed by computing the emission to water from an I-STP for a set of fictitious chemicals exhibiting the range of physico-chemical and biodegradability properties occurring in industrial wastewater. Predicted removal rates of a chemical from raw sewage are higher in industrial than in municipal STPs; the latter typically have shorter hydraulic retention times, with diminished opportunity for elimination of the chemical by volatilization and biodegradation. Copyright © 2016 Elsevier Ltd. All rights reserved.
A simple hyperbolic model for communication in parallel processing environments
NASA Technical Reports Server (NTRS)
Stoica, Ion; Sultan, Florin; Keyes, David
1994-01-01
We introduce a model for communication costs in parallel processing environments called the 'hyperbolic model,' which generalizes two-parameter dedicated-link models in an analytically simple way. Dedicated interprocessor links parameterized by a latency and a transfer rate that are independent of load are assumed by many existing communication models; such models are unrealistic for workstation networks. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes that initiate the sending and receiving of the information and in which internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. The direction of graph edges specifies the flow of the information carried through messages. Each CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. The parameters are evaluated in the limits of very large and very small messages. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining an approximation for the service time that is exact in both large and small limits. The model is validated on a dedicated Ethernet network of workstations by experiments with communication subprograms arising in scientific applications, for which a tight fit of the model predictions with actual measurements of the communication and synchronization time between end processes is demonstrated. The model is then used to evaluate the performance of two simple parallel scientific applications from partial differential equations: domain decomposition and time-parallel multigrid. In an appropriate limit, we also show the compatibility of the hyperbolic model with the recently proposed LogP model.
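The paper's exact functional form is not reproduced here, but a two-parameter hyperbola with the required asymptotes, T(0) = latency and T(m) -> m/rate for large messages, together with a plausible serial-reduction rule that preserves both limits, can be sketched as:

```python
import math

def service_time(m, latency, rate):
    """Assumed hyperbolic service time for one CB: T(0) = latency and
    T(m) -> m/rate as the message size m grows (illustrative form)."""
    return math.sqrt(latency**2 + (m / rate) ** 2)

def reduce_serial(cb1, cb2):
    """Serial composition of two CBs: latencies add in the small-message
    limit; transfer times add, so rates combine harmonically, in the
    large-message limit (assumed reduction rule)."""
    (l1, r1), (l2, r2) = cb1, cb2
    return l1 + l2, 1.0 / (1.0 / r1 + 1.0 / r2)

# two hypothetical CBs: (latency s, rate bytes/s)
l_eq, r_eq = reduce_serial((1.0e-4, 1.0e7), (5.0e-5, 5.0e7))
assert math.isclose(service_time(0.0, l_eq, r_eq), 1.5e-4)   # small-message limit
big = 1.0e9
t_serial = service_time(big, 1.0e-4, 1.0e7) + service_time(big, 5.0e-5, 5.0e7)
assert math.isclose(service_time(big, l_eq, r_eq), t_serial, rel_tol=1e-3)
```

The two assertions check exactly the property the abstract emphasizes: the reduced two-parameter form is exact in both the small- and large-message limits.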
Simple heuristics and rules of thumb: where psychologists and behavioural biologists might meet.
Hutchinson, John M C; Gigerenzer, Gerd
2005-05-31
The Centre for Adaptive Behaviour and Cognition (ABC) has hypothesised that much human decision-making can be described by simple algorithmic process models (heuristics). This paper explains this approach and relates it to research in biology on rules of thumb, which we also review. As an example of a simple heuristic, consider the lexicographic strategy of Take The Best for choosing between two alternatives: cues are searched in turn until one discriminates, then search stops and all other cues are ignored. Heuristics consist of building blocks, and building blocks exploit evolved or learned abilities such as recognition memory; it is the complexity of these abilities that allows the heuristics to be simple. Simple heuristics have an advantage in making decisions fast and with little information, and in avoiding overfitting. Furthermore, humans are observed to use simple heuristics. Simulations show that the statistical structures of different environments affect which heuristics perform better, a relationship referred to as ecological rationality. We contrast ecological rationality with the stronger claim of adaptation. Rules of thumb from biology provide clearer examples of adaptation because animals can be studied in the environments in which they evolved. The range of examples is also much more diverse. To investigate them, biologists have sometimes used similar simulation techniques to ABC, but many examples depend on empirically driven approaches. ABC's theoretical framework can be useful in connecting some of these examples, particularly the scattered literature on how information from different cues is integrated. Optimality modelling is usually used to explain less detailed aspects of behaviour but might more often be redirected to investigate rules of thumb.
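The Take The Best strategy described above can be sketched in a few lines; the city-size cues and data below are hypothetical illustration values, not material from the paper:

```python
def take_the_best(a, b, cues):
    """Lexicographic Take The Best: search cues in (estimated) validity order,
    stop at the first cue that discriminates, and ignore all remaining cues.
    Returns the alternative inferred to be larger, or None (guess)."""
    for cue in cues:
        va, vb = cue(a), cue(b)
        if va != vb:
            return a if va > vb else b
    return None  # no cue discriminates: guess

# hypothetical binary city-size cues, ordered from most to least valid
facts = {
    "Aville": {"capital": 1, "airport": 1},
    "Bton":   {"capital": 0, "airport": 1},
}
cues = [lambda c: facts[c]["capital"],
        lambda c: facts[c]["airport"]]
assert take_the_best("Aville", "Bton", cues) == "Aville"  # decided by cue 1 alone
```

Note that the second cue is never consulted once the first discriminates, which is the "fast and frugal" stopping rule the paper contrasts with full-information strategies.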
Goldie, James; Alexander, Lisa; Lewis, Sophie C; Sherwood, Steven
2017-08-01
To find appropriate regression model specifications for counts of the daily hospital admissions of a Sydney cohort and determine which human heat stress indices best improve the models' fit. We built parent models of eight daily counts of admission records using weather station observations, census population estimates and public holiday data. We added heat stress indices; models with lower Akaike Information Criterion scores were judged a better fit. Five of the eight parent models demonstrated adequate fit. Daily maximum Simplified Wet Bulb Globe Temperature (sWBGT) consistently improved fit more than most other indices; temperature and heatwave indices also modelled some health outcomes well. Humidity and heat-humidity indices better fit counts of patients who died following admission. Maximum sWBGT is an ideal measure of heat stress for these types of Sydney hospital admissions. Simple temperature indices are a good fallback where a narrower range of conditions is investigated. Implications for public health: This study confirms the importance of selecting appropriate heat stress indices for modelling. Epidemiologists projecting Sydney hospital admissions should use maximum sWBGT as a common measure of heat stress. Health organisations interested in short-range forecasting may prefer simple temperature indices. © 2017 The Authors.
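The model-selection step described above reduces to computing AIC for each candidate heat stress index and keeping the lowest score. A minimal sketch, with hypothetical log-likelihoods standing in for the fitted admission-count models:

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2 ln(L); lower values indicate better fit."""
    return 2 * n_params - 2 * log_likelihood

def best_heat_index(candidates):
    """candidates: {index_name: (log_likelihood, n_params)}; lowest AIC wins."""
    return min(candidates, key=lambda name: aic(*candidates[name]))

# hypothetical fits of one admission-count model with different indices added
fits = {
    "max_sWBGT":    (-500.0, 5),
    "max_temp":     (-506.0, 5),
    "heatwave_EHF": (-504.0, 6),
}
assert best_heat_index(fits) == "max_sWBGT"
```

Because AIC penalizes parameter count, an index that adds degrees of freedom (like a heatwave indicator) must improve the likelihood enough to justify them, which is how the study compares indices of differing complexity.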
Lipid Bilayer Vesicles with Numbers of Membrane-Linking Pores
NASA Astrophysics Data System (ADS)
Akashi, Ken-ichirou; Miyata, Hidetake
2010-06-01
We report that phospholipid membranes spontaneously formed giant unilamellar vesicles (GUVs) in aqueous medium possessing many membranous wormhole-like structures (membrane-linking pores, MLPs). By phase contrast microscopy and confocal fluorescence microscopy, the structures of the MLPs, consisting of lipid bilayer, were resolvable, and a variety of vesicular shapes having many MLPs (a high-genus topology) were found. These vesicles were stable but easily deformed by micromanipulation with a microneedle. We also observed a reduction in the size of the MLPs with increasing membrane tension, which was qualitatively consistent with a prediction from a simple dynamical model.
NASA Astrophysics Data System (ADS)
Hernández-Pajares, Manuel; Roma-Dollase, David; Krankowski, Andrzej; García-Rigo, Alberto; Orús-Pérez, Raül
2017-12-01
A summary of the main concepts on global ionospheric maps [hereinafter GIMs] of vertical total electron content (VTEC), with special emphasis on their assessment, is presented in this paper. It is based on the experience accumulated during almost two decades of collaborative work in the context of the International GNSS Service (IGS) ionosphere working group. A representative comparison of the two main assessments of ionospheric electron content models (VTEC-altimeter and difference of Slant TEC based on independent global positioning system data, dSTEC-GPS) is performed. It is based on 26 worldwide-distributed GPS receivers, mostly placed on islands, from the last quarter of 2010 to the end of 2016. The consistency between dSTEC-GPS and VTEC-altimeter assessments for one of the most accurate IGS GIMs (the tomographic-kriging GIM `UQRG' computed by UPC) is shown. Typical RMS error values of 2 TECU for the VTEC-altimeter and 0.5 TECU for the dSTEC-GPS assessments are found. And, as expected from a simple random model, there is a significant correlation between both RMS and, especially, relative errors, most evident when a large enough number of observations per pass is considered. The authors expect that this manuscript will be useful for new analysis contributor centres and, in general, for the scientific and technical community interested in simple and truly external ways of validating electron content models of the ionosphere.
Automated knowledge-base refinement
NASA Technical Reports Server (NTRS)
Mooney, Raymond J.
1994-01-01
Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.
NASA Technical Reports Server (NTRS)
Lindholm, F. A.
1982-01-01
A simple expression for the capacitance C(V) associated with the transition region of a p-n junction under forward bias is derived by phenomenological reasoning. The treatment of C(V) is based on the conventional Shockley equations, and the resulting simpler expressions for C(V) are in general accord with previous analytical and numerical results. C(V) consists of two components, arising from changes in majority carrier concentration and from free hole and electron accumulation in the space-charge region. The space-charge region is conceived as the intrinsic region of an n-i-p structure for a space-charge region markedly wider than the extrinsic Debye lengths at its edges. This region is excited in the sense that the forward bias creates hole and electron densities orders of magnitude larger than those in equilibrium. The recent Shirts-Gordon (1979) modeling of the space-charge region using a dielectric response function is contrasted with the more conventional Schottky-Shockley modeling.
White, Thomas E; Rojas, Bibiana; Mappes, Johanna; Rautiala, Petri; Kemp, Darrell J
2017-09-01
Much of what we know about human colour perception has come from psychophysical studies conducted in tightly controlled laboratory settings. An enduring challenge, however, lies in extrapolating this knowledge to the noisy conditions that characterize our actual visual experience. Here we combine statistical models of visual perception with empirical data to explore how chromatic (hue/saturation) and achromatic (luminance) information underpins the detection and classification of stimuli in a complex forest environment. The data best support a simple linear model of stimulus detection as an additive function of both luminance and saturation contrast. The strength of each predictor is modest yet consistent across gross variation in viewing conditions, which accords with expectation based upon general primate psychophysics. Our findings implicate simple visual cues in the guidance of perception amidst natural noise, and highlight the potential for informing human vision via a fusion between psychophysical modelling and real-world behaviour. © 2017 The Author(s).
On the dynamics of a human body model.
NASA Technical Reports Server (NTRS)
Huston, R. L.; Passerello, C. E.
1971-01-01
Equations of motion for a model of the human body are developed. Basically, the model consists of an elliptical cylinder representing the torso, together with a system of frustums of elliptical cones representing the limbs. These are connected to the main body and to each other by hinges and ball-and-socket joints. Vector, tensor, and matrix methods provide a systematic organization of the geometry. The equations of motion are developed from the principles of classical mechanics. The solution of these equations then provides the displacement and rotation of the main body when the external forces and relative limb motions are specified. Three simple example motions are studied to illustrate the method. The first is an analysis and comparison of simple lifting on the earth and the moon. The second is an elementary approach to underwater swimming, including both viscous and inertia effects. The third is an analysis of kicking motion and its effect upon a vertically suspended man such as a parachutist.
Pharmacokinetic Modeling of JP-8 Jet Fuel Components: II. A Conceptual Framework
2003-12-01
example, a single type of (simple) binary interaction between 300 components would require the specification of some 10^5 interaction coefficients. One... individual substances, via binary mechanisms, is enough to predict the interactions present in the mixture. Secondly, complex mixtures can often be... approximated as pseudo-binary systems, consisting of the compound of interest plus a single interacting complex vehicle with well-defined, composite
Greenhouse Effect: Temperature of a Metal Sphere Surrounded by a Glass Shell and Heated by Sunlight
ERIC Educational Resources Information Center
Nguyen, Phuc H.; Matzner, Richard A.
2012-01-01
We study the greenhouse effect on a model satellite consisting of a tungsten sphere surrounded by a thin spherical, concentric glass shell, with a small gap between the sphere and the shell. The system sits in vacuum and is heated by sunlight incident along the "z"-axis. This development is a generalization of the simple treatment of the…
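For orientation, the textbook idealized limit of this geometry (a thin, perfectly black shell at essentially the sphere's radius, ignoring the gap, glass transmission, and conduction) raises the sphere's equilibrium temperature by a factor of 2^(1/4); a minimal sketch under those assumptions, with a hypothetical bare-sphere temperature:

```python
# Idealized greenhouse limit: the thin black shell absorbs the sphere's
# radiation and re-emits half outward, half inward. Outer balance fixes the
# shell at the bare-sphere equilibrium temperature, so the sphere must
# satisfy T_sphere**4 = 2 * T_bare**4.
T_bare = 279.0                 # hypothetical bare-sphere equilibrium (K)
T_sphere = 2 ** 0.25 * T_bare  # the shell raises it by a factor 2^(1/4)
print(round(T_sphere, 1))      # -> 331.8
```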
A Novel Shape Parameterization Approach
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.
Multidisciplinary Aerodynamic-Structural Shape Optimization Using Deformation (MASSOUD)
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2000-01-01
This paper presents a multidisciplinary shape parameterization approach. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft object animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in the same manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminate plate structures) and high-fidelity (e.g., nonlinear computational fluid dynamics and detailed finite element modeling) analysis tools. This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, camber, and free-form surface. Results are presented for a multidisciplinary application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, and a simple performance module.
Hecht, Steven A
2006-01-01
We used the choice/no-choice methodology in two experiments to examine patterns of strategy selection and execution in groups of undergraduates. Comparisons between choice and no-choice trials revealed three groups. Some participants (good retrievers) were consistently able to use retrieval to solve almost all arithmetic problems. Other participants (perfectionists) successfully used retrieval substantially less often in choice-allowed trials than when strategy choices were prohibited. Not-so-good retrievers retrieved correct answers less often than the other participants in both the choice-allowed and no-choice conditions. No group differences emerged with respect to time needed to search and access answers from long-term memory; however, not-so-good retrievers were consistently slower than the other subgroups at executing fact-retrieval processes that are peripheral to memory search and access. Theoretical models of simple arithmetic, such as the Strategy Choice and Discovery Simulation (Shrager & Siegler, 1998), should be updated to include the existence of both perfectionist and not-so-good retriever adults.
Win-Stay, Lose-Sample: a simple sequential algorithm for approximating Bayesian inference.
Bonawitz, Elizabeth; Denison, Stephanie; Gopnik, Alison; Griffiths, Thomas L
2014-11-01
People can behave in a way that is consistent with Bayesian models of cognition, despite the fact that performing exact Bayesian inference is computationally challenging. What algorithms could people be using to make this possible? We show that a simple sequential algorithm "Win-Stay, Lose-Sample", inspired by the Win-Stay, Lose-Shift (WSLS) principle, can be used to approximate Bayesian inference. We investigate the behavior of adults and preschoolers on two causal learning tasks to test whether people might use a similar algorithm. These studies use a "mini-microgenetic method", investigating how people sequentially update their beliefs as they encounter new evidence. Experiment 1 investigates a deterministic causal learning scenario and Experiments 2 and 3 examine how people make inferences in a stochastic scenario. The behavior of adults and preschoolers in these experiments is consistent with our Bayesian version of the WSLS principle. This algorithm provides both a practical method for performing Bayesian inference and a new way to understand people's judgments. Copyright © 2014 Elsevier Inc. All rights reserved.
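A minimal sketch of the Win-Stay, Lose-Sample step for a discrete hypothesis space (the two-hypothesis task and likelihood function below are invented for illustration): stay with the current hypothesis while it can account for each new observation, and resample from the updated posterior when it cannot.

```python
import random

def wsls_update(hypotheses, prior, current, observation, likelihood, rng):
    """One Win-Stay, Lose-Sample step.
    prior: dict hypothesis -> probability, updated in place to the posterior.
    likelihood(h, obs): P(obs | h).
    Returns the (possibly unchanged) held hypothesis."""
    # Bayesian update of the posterior over all hypotheses.
    post = {h: prior[h] * likelihood(h, observation) for h in hypotheses}
    z = sum(post.values())
    for h in hypotheses:
        prior[h] = post[h] / z
    # Win-stay: keep the hypothesis if it could have produced the datum.
    if likelihood(current, observation) > 0:
        return current
    # Lose-sample: draw a new hypothesis from the posterior.
    return rng.choices(hypotheses, weights=[prior[h] for h in hypotheses])[0]

# Deterministic causal task: one of blocks "A"/"B" activates a machine.
def lik(h, obs):
    block, lit = obs
    return 1.0 if (block == h) == lit else 0.0

prior = {"A": 0.5, "B": 0.5}
rng = random.Random(0)
# Holding "B", we observe that block B fails to light the machine:
held = wsls_update(["A", "B"], prior, "B", ("B", False), lik, rng)
print(held)  # posterior is now certain it's A, so the resample returns "A"
```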
Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro
2017-04-01
We present an efficient implicit incompressible smoothed particle hydrodynamics (I2SPH) discretization of Navier-Stokes, Poisson-Boltzmann, and advection-diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two- and three-dimensional electrokinetic flows in simple or complex geometries. The I2SPH's accuracy and convergence are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. The new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.
NASA Astrophysics Data System (ADS)
Floría, L. M.; Baesens, C.; Gómez-Gardeñes, J.
In the preface to his monograph on the structure of Evolutionary Theory [1], the late professor Stephen Jay Gould attributes to the philosopher Immanuel Kant the following aphorism in the philosophy of science: "Percepts without concepts are blind; concepts without percepts are empty". Using these Kantian terms with a bit of freedom, one would say that a scientific model is a framework (or network) of interrelated concepts and percepts within which experts build up consistent scientific explanations of a given set of observations. Good models are those which are both conceptually simple and universal in their perceptions. Let us illustrate with examples the meaning of this statement.
Continuous distribution of emission states from single CdSe/ZnS quantum dots.
Zhang, Kai; Chang, Hauyee; Fu, Aihua; Alivisatos, A Paul; Yang, Haw
2006-04-01
The photoluminescence dynamics of colloidal CdSe/ZnS/streptavidin quantum dots were studied using time-resolved single-molecule spectroscopy. Statistical tests of the photon-counting data suggested that the simple "on/off" discrete state model is inconsistent with experimental results. Instead, a continuous emission state distribution model was found to be more appropriate. Autocorrelation analysis of lifetime and intensity fluctuations showed a nonlinear correlation between them. These results were consistent with the model that charged quantum dots were also emissive, and that time-dependent charge migration gave rise to the observed photoluminescence dynamics.
On Global Optimal Sailplane Flight Strategy
NASA Technical Reports Server (NTRS)
Sander, G. J.; Litt, F. X.
1979-01-01
The derivation and interpretation of the necessary conditions that a sailplane cross-country flight has to satisfy to achieve the maximum global flight speed are considered. Simple rules are obtained for two specific meteorological models. The first uses concentrated lifts of various strengths at unequal distances. The second takes into account finite, nonuniform space amplitudes for the lifts and therefore allows for dolphin-style flight. In both models, altitude constraints consisting of upper and lower limits are shown to be essential to model realistic problems. Numerical examples illustrate the difference with existing techniques based on local optimality conditions.
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Montazeri, Mona; Farrokhi-Asl, Hamed; Rafiei, Hamed
2016-12-01
Mixed-model assembly lines are increasingly accepted in many industrial environments to meet the growing trend of greater product variability, diversification of customer demands, and shorter life cycles. In this research, a new mathematical model is presented that considers balancing a mixed-model U-line and human-related issues simultaneously. The objective function consists of two separate components. The first part addresses the balance problem: minimizing the cycle time, minimizing the number of workstations, and maximizing the line efficiency. The second part addresses human issues and consists of hiring cost, firing cost, training cost, and salary. To solve the presented model, two well-known multi-objective evolutionary algorithms, namely the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, have been used. A simple solution representation is provided in this paper to encode the solutions. Finally, the computational results are compared and analyzed.
Failure of self-consistency in the discrete resource model of visual working memory.
Bays, Paul M
2018-06-03
The discrete resource model of working memory proposes that each individual has a fixed upper limit on the number of items they can store at one time, due to division of memory into a few independent "slots". According to this model, responses on short-term memory tasks consist of a mixture of noisy recall (when the tested item is in memory) and random guessing (when the item is not in memory). This provides two opportunities to estimate capacity for each observer: first, based on their frequency of random guesses, and second, based on the set size at which the variability of stored items reaches a plateau. The discrete resource model makes the simple prediction that these two estimates will coincide. Data from eight published visual working memory experiments provide strong evidence against such a correspondence. These results present a challenge for discrete models of working memory that impose a fixed capacity limit. Copyright © 2018 The Author. Published by Elsevier Inc. All rights reserved.
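The first of the two capacity estimates, reading K off the guessing frequency, follows from the model's relation g = max(0, 1 - K/N); a minimal sketch of the prediction and its inverse (the numbers are illustrative):

```python
def guess_rate(set_size, capacity):
    """Discrete-resource prediction: with K slots and N items, items beyond
    capacity are not stored, so the guessing rate is g = max(0, 1 - K/N)."""
    return max(0.0, 1.0 - capacity / set_size)

def capacity_from_guess_rate(set_size, g):
    """Invert the prediction: estimate K from an observed guessing rate."""
    return set_size * (1.0 - g)

# A capacity-3 observer tested at set sizes 2, 4 and 8:
for n in (2, 4, 8):
    print(n, guess_rate(n, 3))            # 0.0, 0.25, 0.625
# Recovering K from an observed guessing rate of 0.25 at set size 4:
print(capacity_from_guess_rate(4, 0.25))  # -> 3.0
```

The model's self-consistency test is that this K should coincide with the set size at which response variability plateaus; the paper's point is that the two estimates disagree.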
Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung
2013-03-01
The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards. Copyright © 2012 Elsevier Inc. All rights reserved.
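The two quantitation models differ only in which basic residues are assumed to bind the dye, which can be sketched as a residue count (the peptide sequence below is invented):

```python
def dye_binding_residues(seq, model):
    """Count the residues assumed to bind anionic Coomassie G-250:
    model 'M1' counts Arg (R) and Lys (K); 'M2' additionally counts His (H)."""
    residues = {"M1": "RK", "M2": "RKH"}[model]
    return sum(seq.count(r) for r in residues)

peptide = "MKHLRRAKEHGK"  # hypothetical one-letter sequence
print(dye_binding_residues(peptide, "M1"))  # R + K     -> 5
print(dye_binding_residues(peptide, "M2"))  # R + K + H -> 7
```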
QSPR using MOLGEN-QSPR: the challenge of fluoroalkane boiling points.
Rücker, Christoph; Meringer, Markus; Kerber, Adalbert
2005-01-01
By means of the new software MOLGEN-QSPR, a multilinear regression model for the boiling points of lower fluoroalkanes is established. The model is based exclusively on simple descriptors derived directly from molecular structure and nevertheless describes a broader set of data more precisely than previous attempts that used either more demanding (quantum chemical) descriptors or more demanding (nonlinear) statistical methods such as neural networks. The model's internal consistency was confirmed by leave-one-out cross-validation. The model was used to predict all unknown boiling points of fluorobutanes, and the quality of predictions was estimated by means of comparison with boiling point predictions for fluoropentanes.
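Leave-one-out cross-validation of such a regression model can be sketched with a single descriptor (the descriptor values and boiling points below are invented; the actual model used several structural descriptors):

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def loo_press(xs, ys):
    """PRESS statistic: sum of squared errors, each point predicted by a
    model trained on the remaining n-1 points (leave-one-out CV)."""
    press = 0.0
    for i in range(len(xs)):
        xt, yt = xs[:i] + xs[i+1:], ys[:i] + ys[i+1:]
        a, b = fit_line(xt, yt)
        press += (ys[i] - (a + b * xs[i])) ** 2
    return press

# Hypothetical descriptor values vs boiling points (deg C):
desc = [1.0, 2.0, 3.0, 4.0, 5.0]
bp = [-38.0, -26.0, -12.0, 1.0, 12.0]
print(round(loo_press(desc, bp), 2))
```

A small PRESS relative to the variance of the boiling points indicates the internal consistency the abstract refers to.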
Numerical study of the current sheet and PSBL in a magnetotail model
NASA Technical Reports Server (NTRS)
Doxas, I.; Horton, W.; Sandusky, K.; Tajima, T.; Steinolfson, R.
1989-01-01
The current sheet and plasma sheet boundary layer (PSBL) in a magnetotail model are discussed. A test particle code is used to study the response of ensembles of particles to a two-dimensional, time-dependent model of the geomagnetic tail, and test the proposition (Coroniti, 1985a, b; Buchner and Zelenyi, 1986; Chen and Palmadesso, 1986; Martin, 1986) that the stochasticity of the particle orbits in these fields is an important part of the physical mechanism for magnetospheric substorms. The realistic results obtained for the fluid moments of the particle distribution with this simple model, and their insensitivity to initial conditions, are consistent with this hypothesis.
Helicopter vibration suppression using simple pendulum absorbers on the rotor blade
NASA Technical Reports Server (NTRS)
Hamouda, M.-N. H.; Pierce, G. A.
1981-01-01
A design procedure is presented for the installation of simple pendulums on the blades of a helicopter rotor to suppress the root reactions. The procedure consists of a frequency response analysis for a hingeless rotor blade excited by a harmonic variation of spanwise airload distributions during forward flight, as well as a concentrated load at the tip. The structural modeling of the blade provides for elastic degrees of freedom in flap and lead-lag bending plus torsion. Simple flap and lead-lag pendulums are considered individually. Using a rational order scheme, the general nonlinear equations of motion are linearized. A quasi-steady aerodynamic representation is used in the formation of the airloads. The solution of the system equations derives from their representation as a transfer matrix. The results include the effect of pendulum tuning on the minimization of the hub reactions.
NASA Technical Reports Server (NTRS)
Yan, Jerry C.
1987-01-01
In concurrent systems, a major responsibility of the resource management system is to decide how the application program is to be mapped onto the multi-processor. Instead of using abstract program and machine models, a generate-and-test framework known as 'post-game analysis' that is based on data gathered during program execution is proposed. Each iteration consists of (1) (a simulation of) an execution of the program; (2) analysis of the data gathered; and (3) the proposal of a new mapping that would have a smaller execution time. These heuristics are applied to predict execution time changes in response to small perturbations applied to the current mapping. An initial experiment was carried out using simple strategies on 'pipeline-like' applications. The results obtained from four simple strategies demonstrated that for this kind of application, even simple strategies can produce acceptable speed-up with a small number of iterations.
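The generate-and-test iteration can be sketched as a toy mapping optimizer: run (a simulation of) the program, analyse the load data, and propose a perturbed mapping that is accepted only if its predicted execution time is smaller. The task costs and the greedy move heuristic below are invented for illustration:

```python
def makespan(mapping, costs, n_procs):
    """Execution time of one (simulated) run: the busiest processor's load."""
    loads = [0.0] * n_procs
    for task, proc in mapping.items():
        loads[proc] += costs[task]
    return max(loads)

def post_game_step(mapping, costs, n_procs):
    """Analyse the run and propose a perturbed mapping: move the heaviest
    task off the most-loaded processor onto the least-loaded one."""
    loads = [0.0] * n_procs
    for task, proc in mapping.items():
        loads[proc] += costs[task]
    busiest, idlest = loads.index(max(loads)), loads.index(min(loads))
    task = max((t for t, p in mapping.items() if p == busiest),
               key=lambda t: costs[t])
    candidate = dict(mapping)
    candidate[task] = idlest
    return candidate

# Four pipeline stages, all initially mapped to processor 0:
costs = {"read": 2.0, "filter": 4.0, "transform": 3.0, "write": 1.0}
mapping = {t: 0 for t in costs}
for _ in range(4):  # a few post-game iterations
    candidate = post_game_step(mapping, costs, 2)
    if makespan(candidate, costs, 2) < makespan(mapping, costs, 2):
        mapping = candidate  # accept only mappings predicted to be faster
print(makespan(mapping, costs, 2))  # improves from 10.0 to 6.0
```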
NASA Astrophysics Data System (ADS)
Cartar, William K.
Photonic crystal microcavity quantum dot lasers show promise as high quality-factor, low threshold lasers that can be integrated on-chip, with tunable room temperature operations. However, such semiconductor microcavity lasers are notoriously difficult to model in a self-consistent way and are primarily modelled by simplified rate equation approximations, typically fit to experimental data, which limits investigations of their optimization and fundamental light-matter interaction processes. Moreover, simple cavity mode optical theory and rate equations have recently been shown to fail in explaining lasing threshold trends in triangular lattice photonic crystal cavities as a function of cavity size, and the potential impact of fabrication disorder is not well understood. In this thesis, we develop a simple but powerful numerical scheme for modelling the quantum dot active layer used for lasing in these photonic crystal cavity structures, as an ensemble of randomly positioned artificial two-level atoms. Each two-level atom is defined by optical Bloch equations solved by a quantum master equation that includes phenomenological pure dephasing and an incoherent pump rate that effectively models a multi-level gain system. Light-matter interactions of both passive and lasing structures are analyzed using simulation-defined tools and post-simulation Green function techniques. We implement an active layer ensemble of up to 24,000 statistically unique quantum dots in photonic crystal cavity simulations, using a self-consistent finite-difference time-domain method. This method has the distinct advantage of capturing effects such as dipole-dipole coupling and radiative decay, without the need for any phenomenological terms, since the time-domain solution self-consistently captures these effects.
Our analysis demonstrates a powerful ability to connect with recent experimental trends, while remaining completely general in its set-up; for example, we do not invoke common approximations such as the rotating-wave or slowly-varying envelope approximations, and solve dynamics with zero a priori knowledge.
Native Silk Feedstock as a Model Biopolymer: A Rheological Perspective.
Laity, Peter R; Holland, Chris
2016-08-08
Variability in silk's rheology is often regarded as an impediment to understanding or successfully copying the natural spinning process. We have previously reported such variability in unspun native silk extracted straight from the gland of the domesticated silkworm Bombyx mori and discounted classical explanations such as differences in molecular weight and concentration. We now report that variability in oscillatory measurements can be reduced onto a simple master-curve through normalizing with respect to the crossover. This remarkable result suggests that differences between silk feedstocks are rheologically simple and not as complex as originally thought. By comparison, solutions of poly(ethylene-oxide) and hydroxypropyl-methyl-cellulose showed similar normalization behavior; however, the resulting curves were broader than for silk, suggesting greater polydispersity in the (semi)synthetic materials. Thus, we conclude Nature may in fact produce polymer feedstocks that are more consistent than typical man-made counterparts as a model for future rheological investigations.
Computer simulations of sympatric speciation in a simple food web
NASA Astrophysics Data System (ADS)
Luz-Burgoa, K.; Dell, Tony; de Oliveira, S. Moss
2005-07-01
Galapagos finches have motivated much theoretical research aimed at understanding the processes associated with the formation of new species. Inspired by them, in this paper we investigate the process of sympatric speciation in a simple food web model. For that we modify the individual-based Penna model, which has been widely used to study aging as well as other evolutionary processes. Initially, our web consists of a primary food source and a single herbivore species that feeds on this resource. Subsequently we introduce a predator that feeds on the herbivore. In both instances we manipulate directly a basal resource distribution and monitor the changes in the populations. Sympatric speciation is obtained for the top species in both cases, and our results suggest that the speciation velocity depends on how far up the food chain the focal population is feeding. Simulations are done with three different sexual imprinting-like mechanisms, in order to discuss adaptation by natural selection.
[Quality assurance of the renal applications software].
del Real Núñez, R; Contreras Puertas, P I; Moreno Ortega, E; Mena Bares, L M; Maza Muret, F R; Latre Romero, J M
2007-01-01
The need for quality assurance of all technical aspects of nuclear medicine studies is widely recognised. However, little attention has been paid to the quality assurance of the applications software. Our work reported here aims at verifying the analysis software for the processing of renal nuclear medicine studies (renograms). The software tools were used to build a synthetic dynamic model of the renal system. The model consists of two phases: perfusion and function. The organs of interest (kidneys, bladder and aortic artery) were simple geometric forms. The uptake of the renal structures was described by mathematical functions. Curves corresponding to normal or pathological conditions were simulated for kidneys, bladder and aortic artery by appropriate selection of parameters. There was no difference between the parameters of the mathematical curves and the quantitative data produced by the renal analysis program. Our test procedure is simple to apply, reliable, reproducible and rapid for verifying the renal applications software.
Electrostatic interactions among hydrophobic ions in lipid bilayer membranes.
Andersen, O S; Feldberg, S; Nakadomari, H; Levy, S; McLaughlin, S
1978-01-01
We have shown that the absorption of tetraphenylborate into black lipid membranes formed from either bacterial phosphatidylethanolamine or glycerolmonooleate produces concentration-dependent changes in the electrostatic potential between the membrane interior and the bulk aqueous phases. These potential changes were studied by a variety of techniques: voltage clamp, charge pulse, and "probe" measurements on black lipid membranes; electrophoretic mobility measurements on phospholipid vesicles; and surface potential measurements on phospholipid monolayers. The magnitude of the potential changes indicates that tetraphenylborate absorbs into a region of the membrane with a low dielectric constant, where it produces substantial boundary potentials, as first suggested by Markin et al. (1971). Many features of our data can be explained by a simple three-capacitor model, which we develop in a self-consistent manner. Some discrepancies between our data and the simple model suggest that discrete charge phenomena may be important within these thin membranes. PMID:620077
Is there a relationship between curvature and inductance in the Josephson junction?
NASA Astrophysics Data System (ADS)
Dobrowolski, T.; Jarmoliński, A.
2018-03-01
A Josephson junction is a device made of two superconducting electrodes separated by a very thin layer of insulator or normal metal. This relatively simple device has found a variety of technical applications in the form of Superconducting Quantum Interference Devices (SQUIDs) and Single Electron Transistors (SETs). One can expect that in the near future the Josephson junction will find applications in RSFQ (Rapid Single Flux Quantum) digital electronics and, in the more distant future, in the construction of quantum computers. Here we concentrate on the relation between the curvature of the Josephson junction and its inductance. We apply a simple Capacitively Shunted Junction (CSJ) model in order to find the condition which guarantees consistency of this model with predictions based on the Maxwell and London equations with the Ginzburg-Landau current of Cooper pairs. This condition admits direct experimental verification.
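The shunted-junction dynamics referred to above can be sketched in dimensionless form as beta_c * phi'' + phi' + sin(phi) = i, with i the bias current in units of the critical current. The snippet below is a minimal illustration of that equation (not the authors' curvature analysis); the parameter values are arbitrary assumptions:

```python
import math

def rcsj_phase(i_bias, beta_c=0.5, dt=1e-3, steps=200_000):
    """Integrate the dimensionless shunted-junction equation
    beta_c * phi'' + phi' + sin(phi) = i_bias
    with explicit Euler steps; returns the final phase."""
    phi, v = 0.0, 0.0  # phase and its dimensionless rate of change
    for _ in range(steps):
        a = (i_bias - math.sin(phi) - v) / beta_c
        v += a * dt
        phi += v * dt
    return phi

# For a bias below the critical current (i < 1) the junction stays in
# the zero-voltage state and the phase locks at arcsin(i).
phi_final = rcsj_phase(0.5)
```

With i > 1 the same integrator would instead show a running phase, i.e. the finite-voltage state.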
A simple analytical thermo-mechanical model for liquid crystal elastomer bilayer structures
NASA Astrophysics Data System (ADS)
Cui, Yun; Wang, Chengjun; Sim, Kyoseung; Chen, Jin; Li, Yuhang; Xing, Yufeng; Yu, Cunjiang; Song, Jizhou
2018-02-01
The bilayer structure consisting of thermal-responsive liquid crystal elastomers (LCEs) and other polymer materials with stretchable heaters has attracted much attention in applications of soft actuators and soft robots due to its ability to generate large deformations when subjected to heat stimuli. A simple analytical thermo-mechanical model, accounting for the non-uniform feature of the temperature/strain distribution along the thickness direction, is established for this type of bilayer structure. The analytical predictions of the temperature and bending curvature radius agree well with finite element analysis and experiments. The influences of the LCE thickness and the heat generation power on the bending deformation of the bilayer structure are fully investigated. It is shown that a thinner LCE layer and a higher heat generation power could yield more bending deformation. These results may help the design of soft actuators and soft robots involving thermal responsive LCEs.
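As a point of reference for the kind of result such a bilayer model produces, the classic Timoshenko bimorph formula gives the bending curvature of a two-layer strip under a uniform mismatch strain. This is a simpler baseline than the paper's model (which resolves the through-thickness temperature profile), and the layer properties below are illustrative assumptions, not the paper's LCE parameters:

```python
def bilayer_curvature(eps, h1, h2, E1, E2):
    """Timoshenko bimorph curvature (1/m) for a uniform mismatch
    strain eps between bonded layers of thickness h1, h2 (m) and
    Young's moduli E1, E2 (Pa)."""
    m = h1 / h2          # thickness ratio
    n = E1 / E2          # modulus ratio
    h = h1 + h2          # total thickness
    return 6.0 * eps * (1.0 + m) ** 2 / (
        h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    )

# Example (assumed values): 2% actuation strain in a 100 um soft LCE
# layer bonded to a 50 um backing that is 10x stiffer.
kappa = bilayer_curvature(0.02, 100e-6, 50e-6, 1e6, 10e6)
radius = 1.0 / kappa  # bending radius in metres
```

Consistent with the abstract's conclusion, making the actuated layer thinner or the mismatch strain larger increases the curvature in this formula.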
Lepora, Nathan F; Blomeley, Craig P; Hoyland, Darren; Bracci, Enrico; Overton, Paul G; Gurney, Kevin
2011-11-01
The study of active and passive neuronal dynamics usually relies on a sophisticated array of electrophysiological, staining and pharmacological techniques. We describe here a simple complementary method that recovers many findings of these more complex methods but relies only on a basic patch-clamp recording approach. Somatic short and long current pulses were applied in vitro to striatal medium spiny (MS) and fast spiking (FS) neurons from juvenile rats. The passive dynamics were quantified by fitting two-compartment models to the short current pulse data. Lumped conductances for the active dynamics were then found by compensating this fitted passive dynamics within the current-voltage relationship from the long current pulse data. These estimated passive and active properties were consistent with previous more complex estimations of the neuron properties, supporting the approach. Relationships within the MS and FS neuron types were also evident, including a gradation of MS neuron properties consistent with recent findings about D1 and D2 dopamine receptor expression. Application of the method to simulated neuron data supported the hypothesis that it gives reasonable estimates of membrane properties and gross morphology. Therefore detailed information about the biophysics can be gained from this simple approach, which is useful for both classification of neuron type and biophysical modelling. Furthermore, because these methods rely upon no manipulations to the cell other than patch clamping, they are ideally suited to in vivo electrophysiology. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Climatic impact of Amazon deforestation - a mechanistic model study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ning Zeng; Dickinson, R.E.; Xubin Zeng
1996-04-01
Recent general circulation model (GCM) experiments suggest a drastic change in the regional climate, especially the hydrological cycle, after hypothesized Amazon basinwide deforestation. To facilitate the theoretical understanding of such a change, we develop an intermediate-level model for tropical climatology, including atmosphere-land-ocean interaction. The model consists of linearized steady-state primitive equations with simplified thermodynamics. A simple hydrological cycle is also included. Special attention has been paid to land-surface processes. The model generally simulates tropical climatology and the ENSO anomaly better than many previous simple models. The climatic impact of Amazon deforestation is studied in the context of this model. Model results show a much weakened Atlantic Walker-Hadley circulation as a result of the existence of a strong positive feedback loop in the atmospheric circulation system and the hydrological cycle. The regional climate is highly sensitive to albedo change and sensitive to evapotranspiration change. The pure dynamical effect of surface roughness length on convergence is small, but the surface flow anomaly displays intriguing features. Analysis of the thermodynamic equation reveals that the balance between convective heating, adiabatic cooling, and radiation largely determines the deforestation response. Studies of the consequences of hypothetical continuous deforestation suggest that the replacement of forest by desert may be able to sustain a dry climate. Scaling analysis motivated by our modeling efforts also helps to interpret the common results of many GCM simulations. When a simple mixed-layer ocean model is coupled with the atmospheric model, the results suggest a 1°C decrease in SST gradient across the equatorial Atlantic Ocean in response to Amazon deforestation. The magnitude depends on the coupling strength. 66 refs., 16 figs., 4 tabs.
A biochemically semi-detailed model of auxin-mediated vein formation in plant leaves.
Roussel, Marc R; Slingerland, Martin J
2012-09-01
We present here a model intended to capture the biochemistry of vein formation in plant leaves. The model consists of three modules. Two of these modules, those describing auxin signaling and transport in plant cells, are biochemically detailed. We couple these modules to a simple model for PIN (auxin efflux carrier) protein localization based on an extracellular auxin sensor. We study the single-cell responses of this combined model in order to verify proper functioning of the modeled biochemical network. We then assemble a multicellular model from the single-cell building blocks. We find that the model can, under some conditions, generate files of polarized cells, but not true veins. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Computational principles underlying recognition of acoustic signals in grasshoppers and crickets.
Ronacher, Bernhard; Hennig, R Matthias; Clemens, Jan
2015-01-01
Grasshoppers and crickets independently evolved hearing organs and acoustic communication. They differ considerably in the organization of their auditory pathways, and the complexity of their songs, which are essential for mate attraction. Recent approaches aimed at describing the behavioral preference functions of females in both taxa by a simple modeling framework. The basic structure of the model consists of three processing steps: (1) feature extraction with a bank of 'LN models', each containing a linear filter followed by a nonlinearity, (2) temporal integration, and (3) linear combination. The specific properties of the filters and nonlinearities were determined using a genetic learning algorithm trained on a large set of different song features and the corresponding behavioral response scores. The model showed an excellent prediction of the behavioral responses to the tested songs. Most remarkably, in both taxa the genetic algorithm found Gabor-like functions as the optimal filter shapes. By slight modifications of Gabor filters several types of preference functions could be modeled, which are observed in different cricket species. Furthermore, this model was able to explain several so far enigmatic results in grasshoppers. The computational approach offered a remarkably simple framework that can account for phenotypically rather different preference functions across several taxa.
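The three-step LN cascade described above (linear Gabor-like filter, static nonlinearity, temporal integration) can be sketched as follows. The filter parameters, the rectifying nonlinearity, and the stimuli are illustrative assumptions, not the fitted values from the study:

```python
import numpy as np

def gabor(t, sigma=5.0, freq=0.1, phase=0.0):
    """Gabor filter: Gaussian envelope times a sinusoidal carrier."""
    return np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * t + phase)

def ln_response(stimulus, filt, threshold=0.0):
    """One LN unit: linear filtering (L), a static rectifying
    nonlinearity (N), then temporal integration (here, the mean)."""
    filtered = np.convolve(stimulus, filt, mode="same")   # L stage
    rectified = np.maximum(filtered - threshold, 0.0)     # N stage
    return rectified.mean()                               # integration

rng = np.random.default_rng(0)
t = np.arange(-20, 21)
filt = gabor(t)
# A "song" with power at the filter's preferred modulation frequency
# drives the unit harder than broadband noise of similar amplitude.
song = np.cos(2 * np.pi * 0.1 * np.arange(500))
noise = rng.standard_normal(500)
r_song, r_noise = ln_response(song, filt), ln_response(noise, filt)
```

In the paper's framework, a linear combination of several such unit outputs yields the overall preference score for a song.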
NASA Astrophysics Data System (ADS)
Ke, Xinyou; Alexander, J. Iwan D.; Prahl, Joseph M.; Savinell, Robert F.
2015-08-01
A simple analytical model of a layered system comprised of a single passage of a serpentine flow channel and a parallel underlying porous electrode (or porous layer) is proposed. This analytical model is derived from Navier-Stokes motion in the flow channel and the Darcy-Brinkman model in the porous layer. The continuities of flow velocity and normal stress are applied at the interface between the flow channel and the porous layer. The effects of the inlet volumetric flow rate, thickness of the flow channel and thickness of a typical carbon fiber paper porous layer on the volumetric flow rate within this porous layer are studied. The maximum current density based on the electrolyte volumetric flow rate is predicted, and found to be consistent with reported numerical simulation. It is found that, for a mean inlet flow velocity of 33.3 cm s-1, the analytical maximum current density is estimated to be 377 mA cm-2, which compares favorably with the experimental result of ∼400 mA cm-2 reported by others.
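A back-of-the-envelope version of the flow-limited maximum current density follows from Faraday's law: if every mole of reactant carried into the porous layer by a volumetric flow Q is converted, then i_lim = nFcQ/A. This is a hedged sketch with assumed numbers, not the paper's analytical derivation:

```python
F = 96485.0  # C/mol, Faraday constant

def flow_limited_current_density(n_e, conc, flow_rate, area):
    """Upper bound on current density (A/m^2) when every mole of
    reactant delivered by the electrolyte flow is converted:
    i_lim = n F c Q / A.
    conc in mol/m^3, flow_rate in m^3/s, area in m^2."""
    return n_e * F * conc * flow_rate / area

# Illustrative (assumed, not the paper's) numbers: 1 M reactant,
# 1 mL/s of electrolyte through a 10 cm^2 electrode, one-electron
# reaction.
i_lim = flow_limited_current_density(1, 1000.0, 1e-6, 10e-4)
i_lim_mA_cm2 = i_lim / 10.0  # convert A/m^2 -> mA/cm^2
```

The paper's model is tighter than this bound because only the fraction of the channel flow diverted into the porous layer is available for reaction.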
Glycolysis Is Governed by Growth Regime and Simple Enzyme Regulation in Adherent MDCK Cells
Rehberg, Markus; Ritter, Joachim B.; Reichl, Udo
2014-01-01
Due to its vital importance in the supply of cellular pathways with energy and precursors, glycolysis has been studied for several decades regarding its capacity and regulation. For a systems-level understanding of the Madin-Darby canine kidney (MDCK) cell metabolism, we couple a segregated cell growth model published earlier with a structured model of glycolysis, which is based on relatively simple kinetics for enzymatic reactions of glycolysis, to explain the pathway dynamics under various cultivation conditions. The structured model takes into account in vitro enzyme activities, and links glycolysis with pentose phosphate pathway and glycogenesis. Using a single parameterization, metabolite pool dynamics during cell cultivation, glucose limitation and glucose pulse experiments can be consistently reproduced by considering the cultivation history of the cells. Growth phase-dependent glucose uptake together with cell-specific volume changes generate high intracellular metabolite pools and flux rates to satisfy the cellular demand during growth. Under glucose limitation, the coordinated control of glycolytic enzymes re-adjusts the glycolytic flux to prevent the depletion of glycolytic intermediates. Finally, the model's predictive power supports the design of more efficient bioprocesses. PMID:25329309
Rates of profit as correlated sums of random variables
NASA Astrophysics Data System (ADS)
Greenblatt, R. E.
2013-10-01
Profit realization is the dominant feature of market-based economic systems, determining their dynamics to a large extent. Rather than attaining an equilibrium, profit rates vary widely across firms, and the variation persists over time. Differing definitions of profit result in differing empirical distributions. To study the statistical properties of profit rates, I used data from a publicly available database for the US Economy for 2009-2010 (Risk Management Association). For each of three profit rate measures, the sample space consists of 771 points. Each point represents aggregate data from a small number of US manufacturing firms of similar size and type (NAICS code of principal product). When comparing the empirical distributions of profit rates, significant ‘heavy tails’ were observed, corresponding principally to a number of firms with larger profit rates than would be expected from simple models. An apparently novel correlated sum of random variables statistical model was used to model the data. In the case of operating and net profit rates, a number of firms show negative profits (losses), ruling out simple gamma or lognormal distributions as complete models for these data.
NASA Astrophysics Data System (ADS)
Zhou, Lingfei; Chapuis, Yves-Andre; Blonde, Jean-Philippe; Bervillier, Herve; Fukuta, Yamato; Fujita, Hiroyuki
2004-07-01
In this paper, the authors propose a model and a control strategy for a two-dimensional conveyance system based on the principles of Autonomous Decentralized Microsystems (ADM). The microconveyance system is based on distributed cooperative MEMS actuators which produce a force field on the surface of the device to grip and move a micro-object. The modeling approach proposed here is based on a simple model of a microconveyance system represented by a 5 x 5 matrix of cells. Each cell consists of a microactuator, a microsensor, and a microprocessor, providing actuation, autonomy, and decentralized intelligence to the cell. Thus, each cell is able to identify a micro-object crossing over it and to decide by itself the appropriate control strategy to convey the micro-object to its destination target. The control strategy can be established through five simple decision rules that each cell must respect at each calculation cycle. Simulation and FPGA implementation results are given at the end of the paper to validate the model and control approach of the microconveyance system.
Equilibria of perceptrons for simple contingency problems.
Dawson, Michael R W; Dupuis, Brian
2012-08-01
The contingency between cues and outcomes is fundamentally important to theories of causal reasoning and to theories of associative learning. Researchers have computed the equilibria of Rescorla-Wagner models for a variety of contingency problems, and have used these equilibria to identify situations in which the Rescorla-Wagner model is consistent, or inconsistent, with normative models of contingency. Mathematical analyses that directly compare artificial neural networks to contingency theory have not been performed, because of the assumed equivalence between the Rescorla-Wagner learning rule and the delta rule training of artificial neural networks. However, recent results indicate that this equivalence is not as straightforward as typically assumed, suggesting a strong need for mathematical accounts of how networks deal with contingency problems. One such analysis is presented here, where it is proven that the structure of the equilibrium for a simple network trained on a basic contingency problem is quite different from the structure of the equilibrium for a Rescorla-Wagner model faced with the same problem. However, these structural differences lead to functionally equivalent behavior. The implications of this result for the relationships between associative learning, contingency theory, and connectionism are discussed.
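The equilibrium discussed above can be reproduced with expected-value Rescorla-Wagner updates. In the standard design where a target cue A always co-occurs with a context cue X, the asymptotic strength of A equals the contingency ΔP = P(outcome | A) − P(outcome | no A). The sketch below iterates the expected update deterministically; trial frequencies and the learning rate are illustrative assumptions:

```python
def rw_equilibrium(p_outcome_cue, p_outcome_alone, f_cue=0.5,
                   alpha=0.1, iters=20_000):
    """Expected-value Rescorla-Wagner updates for a target cue A that
    always appears together with a context cue X, plus X-alone trials.
    f_cue is the fraction of trials on which A (with X) is presented."""
    vA = vX = 0.0
    for _ in range(iters):
        err_AX = p_outcome_cue - (vA + vX)   # prediction error, A+X trials
        err_X = p_outcome_alone - vX         # prediction error, X-alone trials
        vA += alpha * f_cue * err_AX                            # A on A+X only
        vX += alpha * (f_cue * err_AX + (1 - f_cue) * err_X)    # X on all trials
    return vA, vX

# The asymptotic associative strength of A recovers the contingency
# dP = 0.8 - 0.3 = 0.5, while the context absorbs the base rate.
vA, vX = rw_equilibrium(0.8, 0.3)
```

The paper's point is that a perceptron trained by the delta rule on the same problem reaches a structurally different equilibrium (different weight decomposition) that is nonetheless functionally equivalent to this one.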
Aoi, Shinya; Nachstedt, Timo; Manoonpong, Poramate; Wörgötter, Florentin; Matsuno, Fumitoshi
2018-01-01
Insects have various gaits with specific characteristics and can change their gaits smoothly in accordance with their speed. These gaits emerge from the embodied sensorimotor interactions that occur between the insect’s neural control and body dynamic systems through sensory feedback. Sensory feedback plays a critical role in coordinated movements such as locomotion, particularly in stick insects. While many previously developed insect models can generate different insect gaits, the functional role of embodied sensorimotor interactions in the interlimb coordination of insects remains unclear because of their complexity. In this study, we propose a simple physical model that is amenable to mathematical analysis to explain the functional role of these interactions clearly. We focus on a foot contact sensory feedback called phase resetting, which regulates leg retraction timing based on touchdown information. First, we used a hexapod robot to determine whether the distributed decoupled oscillators used for legs with the sensory feedback generate insect-like gaits through embodied sensorimotor interactions. The robot generated two different gaits and one had similar characteristics to insect gaits. Next, we proposed the simple model as a minimal model that allowed us to analyze and explain the gait mechanism through the embodied sensorimotor interactions. The simple model consists of a rigid body with massless springs acting as legs, where the legs are controlled using oscillator phases with phase resetting, and the governed equations are reduced such that they can be explained using only the oscillator phases with some approximations. This simplicity leads to analytical solutions for the hexapod gaits via perturbation analysis, despite the complexity of the embodied sensorimotor interactions. This is the first study to provide an analytical model for insect gaits under these interaction conditions. 
Our results clarified how this specific foot contact sensory feedback contributes to generation of insect-like ipsilateral interlimb coordination during hexapod locomotion. PMID:29489831
Convective Detrainment and Control of the Tropical Water Vapor Distribution
NASA Astrophysics Data System (ADS)
Kursinski, E. R.; Rind, D.
2006-12-01
Sherwood et al. (2006) developed a simple power law model describing the relative humidity distribution in the tropical free troposphere where the power law exponent is the ratio of a drying time scale (tied to subsidence rates) and a moistening time which is the average time between convective moistening events whose temporal distribution is described as a Poisson distribution. Sherwood et al. showed that the relative humidity distribution observed by GPS occultations and MLS is indeed close to a power law, approximately consistent with the simple model's prediction. Here we modify this simple model to be in terms of vertical length scales rather than time scales in a manner that we think more correctly matches the model predictions to the observations. The subsidence is now in terms of the vertical distance the air mass has descended since it last detrained from a convective plume. The moisture source term becomes a profile of convective detrainment flux versus altitude. The vertical profile of the convective detrainment flux is deduced from the observed distribution of the specific humidity at each altitude combined with sinking rates estimated from radiative cooling. The resulting free tropospheric detrainment profile increases with altitude above 3 km somewhat like an exponential profile which explains the approximate power law behavior observed by Sherwood et al. The observations also reveal a seasonal variation in the detrainment profile reflecting changes in the convective behavior expected by some based on observed seasonal changes in the vertical structure of convective regions. The simple model results will be compared with the moisture control mechanisms in a GCM with many additional mechanisms, the GISS climate model, as described in Rind (2006). References: Rind, D., 2006: Water-vapor feedback. In Frontiers of Climate Modeling, J. T. Kiehl and V. Ramanathan (eds.), Cambridge University Press [ISBN-13 978-0-521-79132-8], 251-284. Sherwood, S., E. R. Kursinski and W. Read, A distribution law for free-tropospheric relative humidity, J. Clim., in press, 2006.
The pressure-dependence of the size of extruded vesicles.
Patty, Philipus J; Frisken, Barbara J
2003-08-01
Variations in the size of vesicles formed by extrusion through small pores are discussed in terms of a simple model. Our model predicts that the radius should decrease as the square root of the applied pressure, consistent with data for vesicles extruded under various conditions. The model also predicts dependencies on the pore size used and on the lysis tension of the vesicles being extruded that are consistent with our data. The pore size was varied by using track-etched polycarbonate membranes with average pore diameters ranging from 50 to 200 nm. To vary the lysis tension, vesicles made from POPC (1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylcholine), mixtures of POPC and cholesterol, and mixtures of POPC and C(16)-ceramide were studied. The lysis tension, as measured by an extrusion-based technique, of POPC:cholesterol vesicles is higher than that of pure POPC vesicles whereas POPC:ceramide vesicles have lower lysis tensions than POPC vesicles.
Is inflation from unwinding fluxes IIB?
NASA Astrophysics Data System (ADS)
Gautason, Fridrik Freyr; Schillo, Marjorie; Van Riet, Thomas
2017-03-01
In this paper we argue that the mechanism of unwinding inflation is naturally present in warped compactifications of type IIB string theory with local throats. The unwinding of flux is caused by its annihilation against branes. The resulting inflaton potential is linear with periodic modulations. We initiate an analysis of the inflationary dynamics and cosmological observables, which are highly constrained by moduli stabilization. For the simplified model of single-Kähler Calabi-Yau spaces we find that many, though not all, of the consistency constraints can be satisfied. In particular, in this simple model geometric constraints are in tension with obtaining the observed amplitude of the scalar power spectrum. However, we do find 60 e-folds of inflation with a trans-Planckian field excursion, which offers the hope that slightly more complicated models can lead to a fully consistent explicit construction of large-field inflation of this kind.
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.
1993-01-01
The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency, with the results obtained with divided differences (DD). The AD results are accurate, extremely simple to obtain, and show significant computational advantage over those obtained by DD for some cases.
NASA Technical Reports Server (NTRS)
Bradas, James C.; Fennelly, Alphonsus J.; Smalley, Larry L.
1987-01-01
It is shown that a generalized (or 'power law') inflationary phase arises naturally and inevitably in a simple (Bianchi type-I) anisotropic cosmological model in the self-consistent Einstein-Cartan gravitation theory with the improved stress-energy-momentum tensor with the spin density of Ray and Smalley (1982, 1983). This is made explicit by an analytical solution of the field equations of motion of the fluid variables. The inflation is caused by the angular kinetic energy density due to spin. The model further elucidates the relationship between fluid vorticity, the angular velocity of the inertially dragged tetrads, and the precession of the principal axes of the shear ellipsoid. Shear is not effective in damping the inflation.
Universal adsorption at the vapor-liquid interface near the consolute point
NASA Technical Reports Server (NTRS)
Schmidt, James W.
1990-01-01
The ellipticity of the vapor-liquid interface above mixtures of methylcyclohexane (C7H14) and perfluoromethylcyclohexane (C7F14) has been measured near the consolute point T(c) = 318.6 K. The data are consistent with a model of the interface that combines a short-ranged density-versus-height profile in the vapor phase with a much longer-ranged composition-versus-height profile in the liquid. The value of the free parameter produced by fitting the model to the data is consistent with results from two other simple mixtures and a mixture of a polymer and solvent. This experiment combines precision ellipsometry of the vapor-liquid interface with in situ measurements of refractive indices of the liquid phases, and it precisely locates the consolute point.
NASA Astrophysics Data System (ADS)
Kelleher, Christa A.; Shaw, Stephen B.
2018-02-01
Recent research has found that hydrologic modeling over decadal time periods often requires time variant model parameters. Most prior work has focused on assessing time variance in model parameters conceptualizing watershed features and functions. In this paper, we assess whether adding a time variant scalar to potential evapotranspiration (PET) can be used in place of time variant parameters. Using the HBV hydrologic model and four different simple but common PET methods (Hamon, Priestley-Taylor, Oudin, and Hargreaves), we simulated 60+ years of daily discharge on four rivers in New York state. Allowing all ten model parameters to vary in time achieved good model fits in terms of daily NSE and long-term water balance. However, allowing single model parameters to vary in time - including a scalar on PET - achieved nearly equivalent model fits across PET methods. Overall, varying a PET scalar in time is likely more physically consistent with known biophysical controls on PET as compared to varying parameters conceptualizing innate watershed properties related to soil properties such as wilting point and field capacity. This work suggests that the seeming need for time variance in innate watershed parameters may be due to overly simple evapotranspiration formulations that do not account for all factors controlling evapotranspiration over long time periods.
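A minimal sketch of the idea, using one common variant of the Hamon PET formulation (with the Tetens saturation vapour pressure) and a multiplicative time-variant scalar; the coefficient and the example scalar values are assumptions, not the paper's calibrated numbers:

```python
import math

def hamon_pet(temp_c, daylight_hours, scalar=1.0):
    """Hamon potential evapotranspiration (mm/day, one common variant)
    scaled by a time-variant multiplier, as in the calibration idea
    described above. e_sat (kPa) uses the Tetens formula."""
    e_sat = 0.611 * math.exp(17.27 * temp_c / (temp_c + 237.3))
    pet = 29.8 * daylight_hours * e_sat / (temp_c + 273.2)
    return scalar * pet

# A scalar drifting slowly over decades can stand in for re-calibrating
# innate watershed parameters year by year (illustrative values).
pet_1960 = hamon_pet(20.0, 14.0, scalar=1.00)
pet_2010 = hamon_pet(20.0, 14.0, scalar=0.92)
```

In an HBV run the scaled PET series would simply replace the unscaled one as model forcing, leaving the ten watershed parameters fixed in time.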
Propulsion at low Reynolds number
NASA Astrophysics Data System (ADS)
Najafi, Ali; Golestanian, Ramin
2005-04-01
We study the propulsion of two model swimmers at low Reynolds number. Inspired by Purcell's model, we propose a very simple one-dimensional swimmer consisting of three spheres that are connected by two arms whose lengths can change between two values. The proposed swimmer can swim with a special type of motion, which breaks the time-reversal symmetry. We also show that an ellipsoidal membrane with a tangential travelling wave on it can propel itself in the direction preferred by the travelling wave. This system resembles real biological organisms such as Paramecium.
Dynamic design and control of a high-speed pneumatic jet actuator
NASA Astrophysics Data System (ADS)
Misyurin, S. Yu; Ivlev, V. I.; Kreinin, G. V.
2017-12-01
A mathematical model of an actuator consisting of a pneumatic (gas) high-speed jet engine, a transfer mechanism, and a control device used for switching the ball valve is worked out. Specific attention was paid to the transformation (normalization) of the dynamic model into dimensionless form. Its dynamic simulation criteria are determined, and a dynamics study of the actuator was carried out. A simple relay-action control algorithm with velocity feedback, enabling the valve plug to be turned with a smooth, nonstop and continuous approach to its final position, is demonstrated.
New segregation analysis of panic disorder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vieland, V.J.; Fyer, A.J.; Chapman, T.
1996-04-09
We performed simple segregation analyses of panic disorder using 126 families of probands with DSM-III-R panic disorder who were ascertained for a family study of anxiety disorders at an anxiety disorders research clinic. We present parameter estimates for dominant, recessive, and arbitrary single major locus models without sex effects, as well as for a nongenetic transmission model, and compare these models to each other and to models obtained by other investigators. We rejected the nongenetic transmission model when comparing it to the recessive model. Consistent with some previous reports, we find comparable support for dominant and recessive models, and in both cases estimate nonzero phenocopy rates. The effect of restricting the analysis to families of probands without any lifetime history of comorbid major depression (MDD) was also examined. No notable differences in parameter estimates were found in that subsample, although the power of that analysis was low. Consistency between the findings in our sample and in another independently collected sample suggests the possibility of pooling such samples in the future in order to achieve the necessary power for more complex analyses. 32 refs., 4 tabs.
Traffic jam dynamics in stochastic cellular automata
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagel, K.; Schreckenberg, M.
1995-09-01
Simple models for particles hopping on a grid (cellular automata) are used to simulate (single lane) traffic flow. Despite their simplicity, these models are astonishingly realistic in reproducing start-stop-waves and realistic fundamental diagrams. One can use these models to investigate traffic phenomena near maximum flow. A so-called phase transition at average maximum flow is visible in the life-times of jams. The resulting dynamic picture is consistent with recent fluid-dynamical results by Kuehne/Kerner/Konhaeuser, and with Treiterer's hysteresis description. This places CA models between car-following models and fluid-dynamical models for traffic flow. CA models are tested in projects in Los Alamos (USA) and in NRW (Germany) for large-scale microsimulations of network traffic.
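The update rule behind these results, the Nagel-Schreckenberg automaton, is simple enough to state in four steps: accelerate, brake to the gap ahead, randomly slow down with probability p, then move. A minimal sketch on a circular single-lane road:

```python
import random

def nasch_step(road, vmax=5, p=0.3, rng=random):
    """One parallel update of the Nagel-Schreckenberg cellular
    automaton on a circular road. road[i] is a car's velocity
    (cells per step) or None for an empty cell."""
    n = len(road)
    occupied = [i for i, v in enumerate(road) if v is not None]
    new_road = [None] * n
    for idx, i in enumerate(occupied):
        v = road[i]
        nxt = occupied[(idx + 1) % len(occupied)]  # car ahead
        gap = (nxt - i - 1) % n                    # empty cells ahead
        v = min(v + 1, vmax)                       # 1. accelerate
        v = min(v, gap)                            # 2. brake to avoid collision
        if v > 0 and rng.random() < p:
            v -= 1                                 # 3. random slowdown
        new_road[(i + v) % n] = v                  # 4. move
    return new_road

# Deterministic limit (p = 0) at low density: free flow at vmax.
road = [None] * 100
for i in range(0, 100, 10):
    road[i] = 0
for _ in range(50):
    road = nasch_step(road, p=0.0)
flow = sum(v for v in road if v is not None) / len(road)
```

With p > 0 the same rule spontaneously produces the backward-travelling start-stop waves and the jam lifetimes discussed in the abstract.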
New velocimetry and revised cartography of the spiral arms in the Milky Way—a consistent symbiosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallée, Jacques P., E-mail: jacques.vallee@cnrc.gc.ca
Recent advances in the determinations of the positions (pitch angle, shape, numbers, interarm separation) and velocities (rotation curve) of the spiral arms are evaluated and compared to previous determinations. Based on these results, an average cartographic model is developed that fits the means of basic input data and provides predictions for the locations of the arms in the Milky Way, for each galactic quadrant. For each spiral arm segment in each galactic quadrant, the LSR radial velocities are calculated for the radial distance as well as for its galactic longitude. From our velocimetric model, arm intercepts (between lines of sight and spiral arms) are indicated in velocity space and may be used to find the distance and velocity to any arm, in a given longitude range. Velocity comparisons between model predictions and published CO velocity distribution are done for each galactic quadrant, with good results. Our velocimetric model is not hydromagnetic in character, nor is it a particle-simulation scheme, yet it is simple to use for comparisons with the observations and it is in symbiosis and consistent with our cartographic model (itself simple to use for comparisons with observations). A blending in velocity of the Perseus and Cygnus arms is further demonstrated, as well as an apparent longitude-velocity blending of the starting points of the four spiral arms near 4 kpc (not a physical ring). An integrated (distance, velocity) model for the mass in the disk is employed, to yield a total mass of 3.0 × 10^11 M☉ within a galactic radius of 28 kpc.
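The velocity-space arm intercepts rest on the standard LSR radial-velocity relation v_r = (ω − ω₀) R₀ sin l for circular galactic rotation. A sketch under the simplest assumption of a flat rotation curve (the paper's adopted rotation curve is more detailed; the R₀ and V₀ values below are illustrative):

```python
import math

def lsr_radial_velocity(R, gal_long_deg, R0=8.0, V0=220.0):
    """LSR radial velocity (km/s) toward galactic longitude l for gas
    on a circular orbit of radius R (kpc), assuming a flat rotation
    curve V(R) = V0:  v_r = (V0/R - V0/R0) * R0 * sin(l)."""
    l = math.radians(gal_long_deg)
    return (V0 / R - V0 / R0) * R0 * math.sin(l)

# Gas inside the solar circle seen at l = 30 deg shows positive v_r;
# gas at the solar radius projects to zero radial velocity.
v_inner = lsr_radial_velocity(5.0, 30.0)
v_solar = lsr_radial_velocity(8.0, 30.0)
```

Intersecting a line of sight with a modelled arm's (R, l) locus and evaluating this relation there is what turns a cartographic arm model into a velocimetric one.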
NASA Astrophysics Data System (ADS)
Raju, Subramanian; Saibaba, Saroja
2016-09-01
The enthalpy of formation ΔH_f^o is an important thermodynamic quantity, which sheds significant light on fundamental cohesive and structural characteristics of an alloy. However, since it is difficult to determine accurately through experiments, simple estimation procedures are often desirable. In the present study, a modified prescription for estimating ΔH_f^o,L of liquid transition metal alloys is outlined, based on the Macroscopic Atom Model of cohesion. This prescription relies on self-consistent estimation of liquid-specific model parameters, namely electronegativity (ϕ^L) and bonding electron density (n_b^L). Such unique identification is made through the use of well-established relationships connecting surface tension, compressibility, and molar volume of a metallic liquid with bonding charge density. The electronegativity is obtained through a consistent linear scaling procedure. The preliminary set of values for ϕ^L and n_b^L, together with other auxiliary model parameters, is subsequently optimized to obtain good numerical agreement between calculated and experimental values of ΔH_f^o,L for sixty liquid transition metal alloys. It is found that, with few exceptions, the use of liquid-specific model parameters in the Macroscopic Atom Model yields a physically consistent methodology for reliable estimation of mixing enthalpies of liquid alloys.
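For reference, the Macroscopic Atom (Miedema-type) Model expresses the interfacial enthalpy contribution through the electronegativity difference and the mismatch in bonding electron density; a sketch of the standard textbook form (P and Q are empirical constants; this is the generic expression, not necessarily the paper's liquid-specific modification, which replaces the solid-state parameters by their liquid counterparts):

```latex
\Delta H^{\mathrm{int}}_{A \,\mathrm{in}\, B} \;\propto\;
\frac{-P\,(\Delta \phi)^{2} \;+\; Q\,\bigl(\Delta n_{ws}^{1/3}\bigr)^{2}}
     {\bigl(n_{ws}^{A}\bigr)^{-1/3} + \bigl(n_{ws}^{B}\bigr)^{-1/3}}
```

The negative electronegativity term favors mixing (charge transfer), while the electron-density mismatch term opposes it; their competition sets the sign of the mixing enthalpy.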
Modeling of RF/MHD coupling using NIMROD and GENRAY
NASA Astrophysics Data System (ADS)
Jenkins, Thomas G.; Schnack, D. D.; Sovinec, C. R.; Hegna, C. C.; Callen, J. D.; Ebrahimi, F.; Kruger, S. E.; Carlsson, J.; Held, E. D.; Ji, J.-Y.; Harvey, R. W.; Smirnov, A. P.
2008-11-01
We summarize ongoing theoretical/numerical work relevant to the development of a self-consistent framework for the inclusion of RF effects in fluid simulations, specifically considering the stabilization of resistive tearing modes in tokamak (DIII-D-like) geometry by electron cyclotron current drive. Previous investigations [T. G. Jenkins et al., Bull. APS 52, 131 (2007)] have demonstrated that relatively simple (though non-self-consistent) models for the RF-induced currents can be incorporated into the fluid equations, and that these currents can markedly reduce the width of the nonlinearly saturated magnetic islands generated by tearing modes. We report our progress toward the self-consistent modeling of these RF-induced currents. The initial interfacing of the NIMROD* code with the GENRAY/CQL3D** codes (which calculate RF propagation and energy/momentum deposition) is explained, equilibration of RF-induced currents over the plasma flux surfaces is investigated, and initial studies exploring the efficient reduction of saturated island widths through time modulation of the ECCD are presented. Conducted as part of the SWIM*** project; funded by U. S. DoE. *www.nimrodteam.org **www.compxco.com ***www.cswim.org
Modeling of RF/MHD coupling using NIMROD, GENRAY, and the Integrated Plasma Simulator
NASA Astrophysics Data System (ADS)
Jenkins, Thomas; Schnack, D. D.; Sovinec, C. R.; Hegna, C. C.; Callen, J. D.; Ebrahimi, F.; Kruger, S. E.; Carlsson, J.; Held, E. D.; Ji, J.-Y.; Harvey, R. W.; Smirnov, A. P.
2009-05-01
We summarize ongoing theoretical/numerical work relevant to the development of a self-consistent framework for the inclusion of RF effects in fluid simulations, specifically considering resistive tearing mode stabilization in tokamak (DIII-D-like) geometry via ECCD. Relatively simple (though non-self-consistent) models for the RF-induced currents are incorporated into the fluid equations, markedly reducing the width of the nonlinearly saturated magnetic islands generated by tearing modes. We report our progress toward the self-consistent modeling of these RF-induced currents. The initial interfacing of the NIMROD* code with the GENRAY/CQL3D** codes (calculating RF propagation and energy/momentum deposition) via the Integrated Plasma Simulator (IPS) framework*** is explained, equilibration of RF-induced currents over the plasma flux surfaces is investigated, and studies exploring the efficient reduction of saturated island widths through time modulation and spatial localization of the ECCD are presented. *[Sovinec et al., JCP 195, 355 (2004)] **[www.compxco.com] ***[This research and the IPS development are both part of the SWIM project. Funded by U.S. DoE.]
MOPED enables discoveries through consistently processed proteomics data
Higdon, Roger; Stewart, Elizabeth; Stanberry, Larissa; Haynes, Winston; Choiniere, John; Montague, Elizabeth; Anderson, Nathaniel; Yandl, Gregory; Janko, Imre; Broomall, William; Fishilevich, Simon; Lancet, Doron; Kolker, Natali; Kolker, Eugene
2014-01-01
The Model Organism Protein Expression Database (MOPED, http://moped.proteinspire.org), is an expanding proteomics resource to enable biological and biomedical discoveries. MOPED aggregates simple, standardized and consistently processed summaries of protein expression and metadata from proteomics (mass spectrometry) experiments from human and model organisms (mouse, worm and yeast). The latest version of MOPED adds new estimates of protein abundance and concentration, as well as relative (differential) expression data. MOPED provides a new updated query interface that allows users to explore information by organism, tissue, localization, condition, experiment, or keyword. MOPED supports the Human Proteome Project's efforts to generate chromosome- and disease-specific proteomes by providing links from proteins to chromosome and disease information, as well as many complementary resources. MOPED supports a new omics metadata checklist in order to harmonize data integration, analysis and use. MOPED's development is driven by the user community, which spans 90 countries and guides future development that will transform MOPED into a multi-omics resource. MOPED encourages users to submit data in a simple format; they can use the metadata checklist to generate a data publication for the submission. As a result, MOPED will provide even greater insights into complex biological processes and systems and enable deeper and more comprehensive biological and biomedical discoveries. PMID:24350770
Computational models of the Posner simple and choice reaction time tasks
Feher da Silva, Carolina; Baldo, Marcus V. C.
2015-01-01
The landmark experiments by Posner in the late 1970s showed that reaction time (RT) is faster when the stimulus appears in an expected location, as indicated by a cue; since then, the so-called Posner task has been considered a “gold standard” test of spatial attention. It is thus fundamental to understand the neural mechanisms involved in performing it. To this end, we have developed a Bayesian detection system and small integrate-and-fire neural networks, which modeled sensory and motor circuits, respectively, and optimized them to perform the Posner task under different cue type proportions and noise levels. In doing so, main findings of experimental research on RT were replicated: the relative frequency effect, suboptimal RTs and significant error rates due to noise and invalid cues, slower RT for choice RT tasks than for simple RT tasks, fastest RTs for valid cues and slowest RTs for invalid cues. Analysis of the optimized systems revealed that the employed mechanisms were consistent with related findings in neurophysiology. Our models predict that (1) the results of a Posner task may be affected by the relative frequency of valid and neutral trials, (2) in simple RT tasks, inputs from multiple locations are added together to compose a stronger signal, and (3) the cue affects motor circuits more strongly in choice RT tasks than in simple RT tasks. In discussing the computational demands of the Posner task, attention has often been described as a filter that protects the nervous system, whose capacity is limited, from information overload. Our models, however, reveal that the main problems that must be overcome to perform the Posner task effectively are distinguishing signal from external noise and selecting the appropriate response in the presence of internal noise. PMID:26190997
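The motor-circuit side of such models can be illustrated with a single leaky integrate-and-fire unit: stronger (e.g. validly cued) input crosses threshold sooner, giving a faster modeled RT. A minimal sketch with hypothetical parameters, not the networks actually optimized in the study:

```python
def lif_first_spike(I, tau=20.0, v_th=1.0, dt=0.1, t_max=500.0):
    """Euler-integrate the leaky integrate-and-fire dynamics
    dv/dt = (-v + I) / tau from v = 0; return the first
    threshold-crossing time (a proxy for reaction time), or None."""
    v, t = 0.0, 0.0
    while t < t_max:
        v += dt * (-v + I) / tau
        t += dt
        if v >= v_th:
            return t
    return None

rt_strong = lif_first_spike(2.0)   # strong (cued) input: fast "RT"
rt_weak = lif_first_spike(1.2)     # weak input: slow "RT"
```

Adding Gaussian noise to the input `I` at each step would reproduce the trial-to-trial RT variability and error rates the abstract describes.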
Consistent Yokoya-Chen Approximation to Beamstrahlung (LCC-0010)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peskin, M
2004-04-22
I reconsider the Yokoya-Chen approximate evolution equation for beamstrahlung and modify it slightly to generate simple, consistent analytical approximations for the electron and photon energy spectra. I compare these approximations to previous ones, and to simulation data.
Hydrodynamic and Chemical Factors in Clogging by Montmorillonite in Porous Media
Mays, David C.; Hunt, James R.
2008-01-01
Clogging by colloid deposits is important in water treatment filters, groundwater aquifers, and petroleum reservoirs. The complexity of colloid deposition and deposit morphology preclude models based on first principles, so this study extends an empirical approach to quantify clogging using a simple, one-parameter model. Experiments were conducted with destabilized suspensions of sodium- and calcium-montmorillonite to quantify the hydrodynamic and chemical factors important in clogging. Greater clogging is observed at slower fluid velocity, consistent with previous investigations. However, calcium-montmorillonite causes one order of magnitude less clogging per mass of deposited particles compared to sodium-montmorillonite or a previously published summary of clogging in model granular media. Steady state conditions, in which the permeability and the quantity of deposited material are both constant, were not observed, even though the experimental conditions were optimized for that purpose. These results indicate that hydrodynamic aspects of clogging by these natural materials are consistent with those of simplified model systems, and they demonstrate significant chemical effects on clogging for fully destabilized montmorillonite clay. PMID:17874771
Hydrodynamic and chemical factors in clogging by montmorillonite in porous media.
Mays, David C; Hunt, James R
2007-08-15
Clogging by colloid deposits is important in water treatment filters, groundwater aquifers, and petroleum reservoirs. The complexity of colloid deposition and deposit morphology preclude models based on first principles, so this study extends an empirical approach to quantify clogging using a simple, one-parameter model. Experiments were conducted with destabilized suspensions of sodium- and calcium-montmorillonite to quantify the hydrodynamic and chemical factors important in clogging. Greater clogging is observed at slower fluid velocity, consistent with previous investigations. However, calcium-montmorillonite causes 1 order of magnitude less clogging per mass of deposited particles compared to sodium-montmorillonite or a previously published summary of clogging in model granular media. Steady-state conditions, in which the permeability and the quantity of deposited material are both constant, were not observed, even though the experimental conditions were optimized for that purpose. These results indicate that hydrodynamic aspects of clogging by these natural materials are consistent with those of simplified model systems, and they demonstrate significant chemical effects on clogging for fully destabilized montmorillonite clay.
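A one-parameter clogging law of the kind used in these studies can be sketched as an exponential permeability decline with specific deposit; the functional form and the β values below are illustrative assumptions, not the paper's calibrated model:

```python
import math

def permeability_ratio(sigma, beta):
    """k/k0 = exp(-beta * sigma): permeability relative to the clean bed,
    where sigma is deposited mass per unit pore volume and the single
    parameter beta encodes deposit morphology (hypothetical form)."""
    return math.exp(-beta * sigma)

# A clay that causes an order of magnitude less clogging per deposited mass
# (as reported for Ca- vs Na-montmorillonite) has a correspondingly
# smaller beta (tenfold smaller, for illustration):
beta_na, beta_ca = 10.0, 1.0
k_na = permeability_ratio(0.2, beta_na)   # strong clogging
k_ca = permeability_ratio(0.2, beta_ca)   # weak clogging
```

Fitting the single β against measured permeability histories is what makes the approach empirical rather than first-principles.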
General Mechanism of Two-State Protein Folding Kinetics
Rollins, Geoffrey C.; Dill, Ken A.
2016-01-01
We describe here a general model of the kinetic mechanism of protein folding. In the Foldon Funnel Model, proteins fold in units of secondary structures, which form sequentially along the folding pathway, stabilized by tertiary interactions. The model predicts that the free energy landscape has a volcano shape, rather than a simple funnel, that folding is two-state (single-exponential) when secondary structures are intrinsically unstable, and that each structure along the folding path is a transition state for the previous structure. It shows how sequential pathways are consistent with multiple stochastic routes on funnel landscapes, and it gives good agreement with the nine-order-of-magnitude dependence of folding rates on protein size for a set of 93 proteins, while at the same time being consistent with the near independence of the folding equilibrium constant on size. This model gives estimates of folding rates of proteomes, leading to a median folding time in Escherichia coli of about 5 s. PMID:25056406
Paleoclimate diagnostics: consistent large-scale temperature responses in warm and cold climates
NASA Astrophysics Data System (ADS)
Izumi, Kenji; Bartlein, Patrick; Harrison, Sandy
2015-04-01
The CMIP5 model simulations of the large-scale temperature responses to increased radiative forcing include enhanced land-ocean contrast, stronger response at higher latitudes than in the tropics, and differential responses in warm and cool season climates to uniform forcing. Here we show that these patterns are also characteristic of CMIP5 model simulations of past climates. The differences in the responses over land as opposed to over the ocean, between high and low latitudes, and between summer and winter are remarkably consistent (proportional and nearly linear) across simulations of both cold and warm climates. Similar patterns also appear in historical observations and paleoclimatic reconstructions, implying that such responses are characteristic features of the climate system and not simple model artifacts, thereby increasing our confidence in the ability of climate models to correctly simulate different climatic states. We also show that a small set of common mechanisms may control these large-scale responses of the climate system across multiple states.
NASA Astrophysics Data System (ADS)
Wobus, C.; Tucker, G.; Anderson, R.; Kean, J.; Small, E.; Hancock, G.
2007-12-01
The cross-sectional form of a natural river channel controls the capacity of the system to carry water off a landscape, to convey sediment derived from hillslopes, and to erode its bed and banks. Numerical models that describe the response of a landscape to changes in climate or tectonics therefore require formulations that can accommodate changes in channel cross-sectional geometry through time. We have developed a 2D numerical model that computes the formation of a channel in a cohesive, detachment-limited substrate subject to steady, unidirectional flow. Boundary shear stress is calculated using a simple approximation of the flow field in which log-velocity profiles are assumed to apply along vectors that are perpendicular to the local boundary surface. The resulting model predictions for the velocity structure, peak boundary shear stress, and equilibrium channel shape compare well with the predictions of a more sophisticated but more computationally demanding ray-isovel model. For example, the mean velocities computed by the two models are consistent to within ~3%, and the predicted peak shear stress is consistent to within ~7%. The efficiency of our model makes it suitable for calculations of long-term morphologic change both in single cross-sections and in series of cross-sections arrayed downstream. For a uniform substrate, the model predicts a strong tendency toward a fixed width-to-depth ratio, regardless of gradient or discharge. The model predicts power-law relationships between width and discharge with an exponent near 2/5, and between width and gradient with an exponent near -1/5. Recent enhancements to the model include the addition of sediment, which increases the width-to-depth ratio at steady state by favoring erosion of the channel walls relative to the channel bed (the "cover effect"). 
Inclusion of a probability density function of discharges with a simple parameterization of weathering along channel banks leads to the formation of model strath terraces. Downstream changes in substrate erodibility or tectonic uplift rate lead to step-function changes in channel width, consistent with empirical observations. Finally, explicit inclusion of bedload transport allows channel width, gradient, and the pattern of sediment flux to evolve dynamically, allowing us to explore the response of bedrock channels to both spatial patterns of rock uplift, and temporal variations in sediment input.
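The reported power laws can be collected into a single scaling relation, W ∝ Q^(2/5) S^(-1/5); a sketch (the prefactor k is an arbitrary hypothetical constant, since the abstract reports only the exponents):

```python
def equilibrium_width(Q, S, k=1.0):
    """Scaling of equilibrium channel width with discharge Q and gradient S,
    using the exponents near 2/5 and -1/5 predicted by the model."""
    return k * Q ** 0.4 * S ** (-0.2)

w0 = equilibrium_width(10.0, 0.01)
w_double_Q = equilibrium_width(20.0, 0.01)  # wider by 2**0.4 (about 32%)
w_double_S = equilibrium_width(10.0, 0.02)  # narrower by 2**-0.2 (about 13%)
```

The weak negative dependence on gradient is what produces the step-function width changes at downstream contrasts in uplift rate or erodibility noted above.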
Digital image classification with the help of artificial neural network by simple histogram
Dey, Pranab; Banerjee, Nirmalya; Kaur, Rajwant
2016-01-01
Background: Visual image classification is a great challenge to the cytopathologist in routine day-to-day work. Artificial neural network (ANN) may be helpful in this matter. Aims and Objectives: In this study, we have tried to classify digital images of malignant and benign cells in effusion cytology smear with the help of simple histogram data and ANN. Materials and Methods: A total of 404 digital images consisting of 168 benign cells and 236 malignant cells were selected for this study. The simple histogram data was extracted from these digital images and an ANN was constructed with the help of Neurointelligence software [Alyuda Neurointelligence 2.2 (577), Cupertino, California, USA]. The network architecture was 6-3-1. The images were classified as training set (281), validation set (63), and test set (60). The on-line backpropagation training algorithm was used for this study. Result: A total of 10,000 iterations were done to train the ANN system with the speed of 609.81/s. After the adequate training of this ANN model, the system was able to identify all 34 malignant cell images and 24 out of 26 benign cells. Conclusion: The ANN model can be used for the identification of the individual malignant cells with the help of simple histogram data. This study will be helpful in the future to identify malignant cells in unknown situations. PMID:27279679
Phase transitions in the q-voter model with noise on a duplex clique
NASA Astrophysics Data System (ADS)
Chmiel, Anna; Sznajd-Weron, Katarzyna
2015-11-01
We study a nonlinear q-voter model with stochastic noise, interpreted in the social context as independence, on a duplex network. To study the role of the multilevelness in this model we propose three methods of transferring the model from a mono- to a multiplex network. They take into account two criteria: one related to the status of independence (LOCAL vs GLOBAL) and one related to peer pressure (AND vs OR). In order to examine the influence of the presence of more than one level in the social network, we perform simulations on a particularly simple multiplex: a duplex clique, which consists of two fully overlapped complete graphs (cliques). Solving numerically the rate equation and simultaneously conducting Monte Carlo simulations, we provide evidence that even a simple rearrangement into a duplex topology may lead to significant changes in the observed behavior. However, qualitative changes in the phase transitions can be observed for only one of the considered rules: LOCAL&AND. For this rule the phase transition becomes discontinuous for q = 5, whereas for a monoplex such behavior is observed for q = 6. Interestingly, only this rule admits construction of realistic variants of the model, in line with recent social experiments.
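The monoplex building block of these simulations is the nonlinear q-voter model with independence on a single complete graph; the Monte Carlo sketch below shows this single-clique version only (the duplex rules of the paper add a second clique and combine the two layers' verdicts via AND/OR; parameter values are illustrative):

```python
import random

def qvoter_magnetization(N=200, q=4, p=0.2, steps=50_000, seed=0):
    """Nonlinear q-voter model with independence on one complete graph.
    With probability p an agent acts independently (random state);
    otherwise it copies the common state of a unanimous panel of q
    other agents. Returns the order parameter |m| after `steps` updates."""
    rng = random.Random(seed)
    s = [1] * N                          # start from consensus
    for _ in range(steps):
        i = rng.randrange(N)
        if rng.random() < p:             # independence (noise)
            s[i] = rng.choice((-1, 1))
        else:                            # conformity to a unanimous q-panel
            panel = rng.sample([j for j in range(N) if j != i], q)
            if len({s[j] for j in panel}) == 1:
                s[i] = s[panel[0]]
    return abs(sum(s)) / N

m_ordered = qvoter_magnetization(p=0.05)    # weak noise: ordered phase
m_disordered = qvoter_magnetization(p=0.6)  # strong noise: disordered phase
```

Scanning p and watching |m| drop (continuously or discontinuously, depending on q and the duplex rule) is how the phase transitions discussed above are located.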
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agrawal, Prateek; Batell, Brian; Fox, Patrick J.
2015-05-07
Simple models of weakly interacting massive particles (WIMPs) predict dark matter annihilations into pairs of electroweak gauge bosons, Higgses or tops, which through their subsequent cascade decays produce a spectrum of gamma rays. Intriguingly, an excess in gamma rays coming from near the Galactic center has been consistently observed in Fermi data. A recent analysis by the Fermi collaboration confirms these earlier results. Taking into account the systematic uncertainties in the modelling of the gamma ray backgrounds, we show for the first time that this excess can be well fit by these final states. In particular, for annihilations to (WW, ZZ, hh, tt¯), dark matter with mass between threshold and approximately (165, 190, 280, 310) GeV gives an acceptable fit. The fit range for bb¯ is also enlarged to 35 GeV ≲ m_χ ≲ 165 GeV. These are to be compared to previous fits that concluded only much lighter dark matter annihilating into b, τ, and light quark final states could describe the excess. We demonstrate that simple, well-motivated models of WIMP dark matter including a thermal-relic neutralino of the MSSM, Higgs portal models, as well as other simplified models can explain the excess.
Wong, Tony E.; Keller, Klaus
2017-01-01
The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections. PMID:29287095
Evaluating the multimedia fate of organic chemicals: A level III fugacity model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackay, D.; Paterson, S.
A multimedia model is developed and applied to selected organic chemicals in evaluative and real regional environments. The model employs the fugacity concept and treats four bulk compartments: air, water, soil, and bottom sediment, which consist of subcompartments of varying proportions of air, water, and mineral and organic matter. Chemical equilibrium is assumed to apply within (but not between) each bulk compartment. Expressions are included for emissions, advective flows, degrading reactions, and interphase transport by diffusive and non-diffusive processes. Input to the model consists of a description of the environment, the physical-chemical and reaction properties of the chemical, and emission rates. For steady-state conditions the solution is a simple algebraic expression. The model is applied to six chemicals in the region of southern Ontario and the calculated fate and concentrations are compared with observations. The results suggest that the model may be used to determine the processes that control the environmental fate of chemicals in a region and provide approximate estimates of relative media concentrations.
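At steady state the four compartment balances reduce to a linear system in the fugacities, which can be solved directly; the D-values and emission rates below are illustrative placeholders, not the paper's data for southern Ontario:

```python
import numpy as np

# D-values (mol/(Pa*h)): losses (reaction + advection) per compartment, and
# intercompartment transfer D[i, j] from compartment i to j. Order:
# 0 = air, 1 = water, 2 = soil, 3 = sediment. All numbers hypothetical.
D_loss = np.array([1e5, 5e4, 2e4, 1e4])
D = np.array([
    [0.0, 3e4, 2e4, 0.0],
    [1e4, 0.0, 0.0, 1.5e4],
    [5e3, 8e3, 0.0, 0.0],
    [0.0, 6e3, 0.0, 0.0],
])
E = np.array([100.0, 10.0, 0.0, 0.0])     # emissions (mol/h)

# Steady-state balance for compartment i:
#   (D_loss[i] + sum_j D[i, j]) * f[i] - sum_j D[j, i] * f[j] = E[i]
A = np.diag(D_loss + D.sum(axis=1)) - D.T
f = np.linalg.solve(A, E)                 # fugacities (Pa)
loss_flux = f * D_loss                    # per-compartment losses (mol/h)
```

Concentrations then follow from C = Z·f with the compartment Z-values; at steady state total losses balance total emissions.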
Thomas, Michael L
2012-03-01
There is growing evidence that psychiatric disorders maintain hierarchical associations where general and domain-specific factors play prominent roles (see D. Watson, 2005). Standard, unidimensional measurement models can fail to capture the meaningful nuances of such complex latent variable structures. The present study examined the ability of the multidimensional item response theory bifactor model (see R. D. Gibbons & D. R. Hedeker, 1992) to improve construct validity by serving as a bridge between measurement and clinical theories. Archival data consisting of 688 outpatients' psychiatric diagnoses and item-level responses to the Brief Symptom Inventory (BSI; L. R. Derogatis, 1993) were extracted from files at a university mental health clinic. The bifactor model demonstrated superior fit for the internal structure of the BSI and improved overall diagnostic accuracy in the sample (73%) compared with unidimensional (61%) and oblique simple structure (65%) models. Consistent with clinical theory, multiple sources of item variance were drawn from individual test items. Test developers and clinical researchers are encouraged to consider model-based measurement in the assessment of psychiatric distress.
A simple model for the dependence on local detonation speed of the product entropy
NASA Astrophysics Data System (ADS)
Hetherington, David C.; Whitworth, Nicholas J.
2012-03-01
The generation of a burn time field as a pre-processing step ahead of a hydrocode calculation has been mostly upgraded in the explosives modelling community from the historical model of single-speed programmed burn to DSD/WBL (Detonation Shock Dynamics / Whitham-Bdzil-Lambourn). The problem with this advance is that the previously conventional approach to the hydrodynamic stage of the model results in the entropy of the detonation products (s) having the wrong correlation with detonation speed (D). Instead of being higher where D is lower, the conventional method leads to s being lower where D is lower, resulting in a completely fictitious enhancement of available energy where the burn is degraded! A technique is described which removes this deficiency of the historical model when used with a DSD-generated burn time field. By treating the conventional JWL equation as a semi-empirical expression for the local expansion isentrope, and constraining the local parameter set for consistency with D, it is possible to obtain the two desirable outcomes that the model of the detonation wave is internally consistent, and s is realistically correlated with D.
A Simple Model for the Dependence on Local Detonation Speed (D) of the Product Entropy (S)
NASA Astrophysics Data System (ADS)
Hetherington, David
2011-06-01
The generation of a burn time field as a pre-processing step ahead of a hydrocode calculation has been mostly upgraded in the explosives modelling community from the historical model of single-speed programmed burn to DSD. However, with this advance has come the problem that the previously conventional approach to the hydrodynamic stage of the model results in S having the wrong correlation with D. Instead of being higher where the detonation speed is lower, i.e. where reaction occurs at lower compression, the conventional method leads to S being lower where D is lower, resulting in a completely fictitious enhancement of available energy where the burn is degraded! A technique is described which removes this deficiency of the historical model when used with a DSD-generated burn time field. By treating the conventional JWL equation as a semi-empirical expression for the local expansion isentrope, and constraining the local parameter set for consistency with D, it is possible to obtain the two desirable outcomes that the model of the detonation wave is internally consistent, and S is realistically correlated with D.
Higgs mass and muon anomalous magnetic moment in supersymmetric models with vectorlike matters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endo, Motoi; Hamaguchi, Koichi; Institute for the Physics and Mathematics of the Universe
2011-10-01
We study the muon anomalous magnetic moment (muon g-2) and the Higgs boson mass in a simple extension of the minimal supersymmetric (SUSY) standard model with extra vectorlike matters, in the frameworks of gauge-mediated SUSY breaking (GMSB) models and gravity mediation (mSUGRA) models. It is shown that the deviation of the muon g-2 and a relatively heavy Higgs boson can be simultaneously explained in the large tan β region. (i) In GMSB models, the Higgs mass can be more than 135 GeV (130 GeV) in the region where the muon g-2 is consistent with the experimental value at the 2σ (1σ) level, while maintaining the perturbative coupling unification. (ii) In the case of mSUGRA models with universal soft masses, the Higgs mass can be as large as about 130 GeV when the muon g-2 is consistent with the experimental value at the 2σ level. In both cases, the Higgs mass can be above 140 GeV if the g-2 constraint is not imposed.
Dark matter and MOND dynamical models of the massive spiral galaxy NGC 2841
NASA Astrophysics Data System (ADS)
Samurović, S.; Vudragović, A.; Jovanović, M.
2015-08-01
We study dynamical models of the massive spiral galaxy NGC 2841 using both Newtonian models with Navarro-Frenk-White (NFW) and isothermal dark haloes, as well as various MOND (MOdified Newtonian Dynamics) models. We use observations from several publicly available databases: radio data, near-infrared photometry, and spectroscopic observations. In our models, we find that both tested Newtonian dark matter approaches can successfully fit the observed rotation curve of NGC 2841. The three tested MOND models (standard, simple and, for the first time applied to a spiral galaxy other than the Milky Way, Bekenstein's toy model) provide fits of the observed rotation curve with various degrees of success: the best result was obtained with the standard MOND model. For both approaches, Newtonian and MOND, the values of the mass-to-light ratios of the bulge are consistent with the predictions from stellar population synthesis (SPS) based on the Salpeter initial mass function (IMF). Also, for the Newtonian and the simple and standard MOND models, the estimated stellar mass-to-light ratios of the disc agree with the predictions from SPS models based on the Kroupa IMF, whereas the toy MOND model provides too low a value of the stellar mass-to-light ratio, incompatible with the predictions of the tested SPS models. In all our MOND models we vary the distance to NGC 2841; our best-fitting standard and toy models use values higher than the Cepheid-based distance, and the best-fitting simple MOND model is based on a lower value. The best-fitting NFW model is inconsistent with the predictions of the Λ cold dark matter cosmology, because the inferred concentration index is too high for the established virial mass.
Predicting longshore gradients in longshore transport: the CERC formula compared to Delft3D
List, Jeffrey H.; Hanes, Daniel M.; Ruggiero, Peter
2007-01-01
The prediction of longshore transport gradients is critical for forecasting shoreline change. We employ simple test cases consisting of shoreface pits at varying distances from the shoreline to compare the longshore transport gradients predicted by the CERC formula against results derived from the process-based model Delft3D. Results show that while in some cases the two approaches give very similar results, in many cases the results diverge greatly. Although neither approach is validated with field data here, the Delft3D-based transport gradients provide much more consistent predictions of erosional and accretionary zones as the pit location varies across the shoreface.
NASA Technical Reports Server (NTRS)
Hough, D. H.; Readhead, A. C. S.
1989-01-01
A complete, flux-density-limited sample of double-lobed radio quasars is defined, with nuclei bright enough to be mapped with the Mark III VLBI system. It is shown that the statistics of linear size, nuclear strength, and curvature are consistent with the assumption of random source orientations and simple relativistic beaming in the nuclei. However, these statistics are also consistent with the effects of interaction between the beams and the surrounding medium. The distribution of jet velocities in the nuclei, as measured with VLBI, will provide a powerful test of physical theories of extragalactic radio sources.
Emergence of collective propulsion through cell-cell adhesion.
Matsushita, Katsuyoshi
2018-04-01
The mechanisms driving the collective movement of cells remain poorly understood. To contribute toward resolving this mystery, a model was formulated to theoretically explore the possible functions of polarized cell-cell adhesion in collective cell migration. The model consists of an amoeba cell with polarized cell-cell adhesion, which is controlled by positive feedback with cell motion. This model cell has no persistent propulsion and therefore exhibits a simple random walk when in isolation. However, at high density, these cells acquire collective propulsion and form ordered movement. This result suggests that cell-cell adhesion has a potential function, which induces collective propulsion with persistence.
Emergence of collective propulsion through cell-cell adhesion
NASA Astrophysics Data System (ADS)
Matsushita, Katsuyoshi
2018-04-01
The mechanisms driving the collective movement of cells remain poorly understood. To contribute toward resolving this mystery, a model was formulated to theoretically explore the possible functions of polarized cell-cell adhesion in collective cell migration. The model consists of an amoeba cell with polarized cell-cell adhesion, which is controlled by positive feedback with cell motion. This model cell has no persistent propulsion and therefore exhibits a simple random walk when in isolation. However, at high density, these cells acquire collective propulsion and form ordered movement. This result suggests that cell-cell adhesion has a potential function, which induces collective propulsion with persistence.
NASA Technical Reports Server (NTRS)
Blackwell, William C., Jr.
2004-01-01
In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.
Theoretical Studies of Processes Affecting the Stratospheric and Free Tropospheric Aerosols
NASA Technical Reports Server (NTRS)
Hamill, Patrick
1999-01-01
This report describes the work done with funding from a NASA grant during the past three years. Funding commenced in June 1996 with a planned duration of three years; this report covers the period June 1996 to June 1999. Here we present a short description of the projects carried out and documentation of the work done in terms of publications, papers presented, and conferences attended. The microphysical modeling consists of two related tasks: (1) development of a simple microphysical model of the Pinatubo plume, and (2) a study of sulfate particle formation in volcanic plumes. An analysis of sun photometer measurements is also presented.
Toxin effect on protein biosynthesis in eukaryotic cells: a simple kinetic model.
Skakauskas, Vladas; Katauskis, Pranas; Skvortsov, Alex; Gray, Peter
2015-03-01
A model for toxin inhibition of protein synthesis inside eukaryotic cells is presented. Mitigation of this effect by introduction of an antibody is also studied. Antibody and toxin (ricin) initially are delivered outside the cell. The model describes toxin internalization from the extracellular into the intracellular domain, its transport to the endoplasmic reticulum (ER) and the cleavage inside the ER into the RTA and RTB chains, the release of RTA into the cytosol, inactivation (depurination) of ribosomes, and the effect on translation. The model consists of a set of ODEs which are solved numerically. Numerical results are illustrated by figures and discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
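The abstract above describes the toxin transport model only qualitatively. As a minimal sketch of the kind of compartmental ODE system involved, the fragment below integrates a hypothetical two-compartment reduction (extracellular toxin internalized at rate k_in, then degraded intracellularly at rate k_deg) with forward Euler; the compartment structure and rate constants are illustrative assumptions, not the paper's actual equations.

```python
def toxin_compartments(t_end=10.0, dt=1e-3, k_in=0.5, k_deg=0.2,
                       tox_out=1.0, tox_in=0.0):
    """Forward-Euler integration of a hypothetical two-compartment toxin
    model: extracellular toxin enters the cell at rate k_in and is
    degraded intracellularly at rate k_deg (all units arbitrary)."""
    for _ in range(int(t_end / dt)):
        flux = k_in * tox_out * dt       # amount internalized this step
        tox_out -= flux                  # leaves the extracellular pool
        tox_in += flux - k_deg * tox_in * dt  # enters cell, then decays
    return tox_out, tox_in
```

Setting k_deg = 0 makes the scheme exactly mass-conserving, which is a quick sanity check for compartmental solvers of this type.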
Nonequilibrium thermodynamics of the shear-transformation-zone model
NASA Astrophysics Data System (ADS)
Luo, Alan M.; Öttinger, Hans Christian
2014-02-01
The shear-transformation-zone (STZ) model has been applied numerous times to describe the plastic deformation of different types of amorphous systems. We formulate this model within the general equation for nonequilibrium reversible-irreversible coupling (GENERIC) framework, thereby clarifying the thermodynamic structure of the constitutive equations and guaranteeing thermodynamic consistency. We propose natural, physically motivated forms for the building blocks of the GENERIC, which combine to produce a closed set of time evolution equations for the state variables, valid for any choice of free energy. We demonstrate an application of the new GENERIC-based model by choosing a simple form of the free energy. In addition, we present some numerical results and contrast those with the original STZ equations.
Nanopore Current Oscillations: Nonlinear Dynamics on the Nanoscale.
Hyland, Brittany; Siwy, Zuzanna S; Martens, Craig C
2015-05-21
In this Letter, we describe theoretical modeling of an experimentally realized nanoscale system that exhibits the general universal behavior of a nonlinear dynamical system. In particular, we consider the description of voltage-induced current fluctuations through a single nanopore from the perspective of nonlinear dynamics. We briefly review the experimental system and its observed behavior, and then present a simple phenomenological nonlinear model that reproduces the qualitative behavior of the experimental data. The model consists of a two-dimensional deterministic nonlinear bistable oscillator experiencing both dissipation and random noise. The multidimensionality of the model and the interplay between deterministic and stochastic forces are both required to obtain a qualitatively accurate description of the physical system.
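As a rough illustration of the bistable-plus-noise dynamics described above (reduced here to one dimension for brevity, so this is not the authors' two-dimensional model), a Langevin equation with a double-well drift can be integrated with the Euler-Maruyama method; all parameter values are illustrative:

```python
import random

def double_well_path(steps=20000, dt=1e-3, sigma=0.5, x=1.0, seed=0):
    """Euler-Maruyama integration of dx = (x - x**3) dt + sigma dW:
    a minimal noisy bistable system with stable wells at x = +/-1."""
    rng = random.Random(seed)
    path = []
    for _ in range(steps):
        # deterministic double-well drift plus Gaussian noise increment
        x += (x - x**3) * dt + sigma * rng.gauss(0.0, 1.0) * dt ** 0.5
        path.append(x)
    return path
```

With sigma = 0 the trajectory simply relaxes into the nearest well; with noise switched on it fluctuates around a well and can occasionally hop between them, which is the qualitative interplay of deterministic and stochastic forces the abstract refers to.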
NASA Astrophysics Data System (ADS)
Kotliar, Gabriel
2005-01-01
Dynamical mean field theory (DMFT) relates extended systems (bulk solids, surfaces and interfaces) to quantum impurity models (QIM) satisfying a self-consistency condition. This mapping provides an economic description of correlated electron materials. It is currently used in practical computations of physical properties of real materials. It has also great conceptual value, providing a simple picture of correlated electron phenomena on the lattice, using concepts derived from quantum impurity models such as the Kondo effect. DMFT can also be formulated as a first principles electronic structure method and is applicable to correlated materials.
An improved interfacial bonding model for material interface modeling
Lin, Liqiang; Wang, Xiaodu; Zeng, Xiaowei
2016-01-01
An improved interfacial bonding model was proposed from a potential-function point of view to investigate interfacial interactions in polycrystalline materials. It characterizes both attractive and repulsive interfacial interactions and can be applied to model different material interfaces. A path-dependence study of the work of separation indicates that the transformation of separation work is smooth in the normal and tangential directions and that the proposed model guarantees the consistency of the cohesive constitutive model. The improved interfacial bonding model was verified through a simple compression test in a standard hexagonal structure. The error between analytical solutions and numerical results from the proposed model is reasonable in the linear elastic region. Ultimately, we investigated the mechanical behavior of the extrafibrillar matrix in bone, and the simulation results agreed well with experimental observations of bone fracture. PMID:28584343
A Two-Stage Process Model of Sensory Discrimination: An Alternative to Drift-Diffusion
Landy, Michael S.
2016-01-01
Discrimination of the direction of motion of a noisy stimulus is an example of sensory discrimination under uncertainty. For stimuli that are extended in time, reaction time is quicker for larger signal values (e.g., discrimination of opposite directions of motion compared with neighboring orientations) and larger signal strength (e.g., stimuli with higher contrast or motion coherence, that is, lower noise). The standard model of neural responses (e.g., in lateral intraparietal cortex) and reaction time for discrimination is drift-diffusion. This model makes two clear predictions. (1) The effects of signal strength and value on reaction time should interact multiplicatively because the diffusion process depends on the signal-to-noise ratio. (2) If the diffusion process is interrupted, as in a cued-response task, the time to decision after the cue should be independent of the strength of accumulated sensory evidence. In two experiments with human participants, we show that neither prediction holds. A simple alternative model is developed that is consistent with the results. In this estimate-then-decide model, evidence is accumulated until estimation precision reaches a threshold value. Then, a decision is made with duration that depends on the signal-to-noise ratio achieved by the first stage. SIGNIFICANCE STATEMENT Sensory decision-making under uncertainty is usually modeled as the slow accumulation of noisy sensory evidence until a threshold amount of evidence supporting one of the possible decision outcomes is reached. Furthermore, it has been suggested that this accumulation process is reflected in neural responses, e.g., in lateral intraparietal cortex. We derive two behavioral predictions of this model and show that neither prediction holds. We introduce a simple alternative model in which evidence is accumulated until a sufficiently precise estimate of the stimulus is achieved, and then that estimate is used to guide the discrimination decision. 
This model is consistent with the behavioral data. PMID:27807167
A Two-Stage Process Model of Sensory Discrimination: An Alternative to Drift-Diffusion.
Sun, Peng; Landy, Michael S
2016-11-02
Discrimination of the direction of motion of a noisy stimulus is an example of sensory discrimination under uncertainty. For stimuli that are extended in time, reaction time is quicker for larger signal values (e.g., discrimination of opposite directions of motion compared with neighboring orientations) and larger signal strength (e.g., stimuli with higher contrast or motion coherence, that is, lower noise). The standard model of neural responses (e.g., in lateral intraparietal cortex) and reaction time for discrimination is drift-diffusion. This model makes two clear predictions. (1) The effects of signal strength and value on reaction time should interact multiplicatively because the diffusion process depends on the signal-to-noise ratio. (2) If the diffusion process is interrupted, as in a cued-response task, the time to decision after the cue should be independent of the strength of accumulated sensory evidence. In two experiments with human participants, we show that neither prediction holds. A simple alternative model is developed that is consistent with the results. In this estimate-then-decide model, evidence is accumulated until estimation precision reaches a threshold value. Then, a decision is made with duration that depends on the signal-to-noise ratio achieved by the first stage. Sensory decision-making under uncertainty is usually modeled as the slow accumulation of noisy sensory evidence until a threshold amount of evidence supporting one of the possible decision outcomes is reached. Furthermore, it has been suggested that this accumulation process is reflected in neural responses, e.g., in lateral intraparietal cortex. We derive two behavioral predictions of this model and show that neither prediction holds. We introduce a simple alternative model in which evidence is accumulated until a sufficiently precise estimate of the stimulus is achieved, and then that estimate is used to guide the discrimination decision. 
This model is consistent with the behavioral data. Copyright © 2016 the authors 0270-6474/16/3611259-16$15.00/0.
An evaluation of bias in propensity score-adjusted non-linear regression models.
Wan, Fei; Mitra, Nandita
2018-03-01
Propensity score methods are commonly used to adjust for observed confounding when estimating the conditional treatment effect in observational studies. One popular method, covariate adjustment of the propensity score in a regression model, has been empirically shown to be biased in non-linear models. However, no compelling underlying theoretical reason has been presented. We propose a new framework to investigate bias and consistency of propensity score-adjusted treatment effects in non-linear models that uses a simple geometric approach to forge a link between the consistency of the propensity score estimator and the collapsibility of non-linear models. Under this framework, we demonstrate that adjustment of the propensity score in an outcome model results in the decomposition of observed covariates into the propensity score and a remainder term. Omission of this remainder term from a non-collapsible regression model leads to biased estimates of the conditional odds ratio and conditional hazard ratio, but not for the conditional rate ratio. We further show, via simulation studies, that the bias in these propensity score-adjusted estimators increases with larger treatment effect size, larger covariate effects, and increasing dissimilarity between the coefficients of the covariates in the treatment model versus the outcome model.
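The non-collapsibility of the odds ratio that underlies the bias argument above can be checked with a small, self-contained calculation: even when a binary covariate is balanced and independent of treatment (so there is no confounding at all), averaging risks over the covariate yields a marginal odds ratio smaller than the conditional one. The coefficient values below are arbitrary illustrations, not taken from the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def marginal_odds_ratio(b0=-1.0, b_t=1.0, b_x=2.0, p_x=0.5):
    """Conditional model: logit P(Y=1 | T, X) = b0 + b_t*T + b_x*X, with a
    binary covariate X ~ Bernoulli(p_x) independent of treatment T.
    The conditional odds ratio is exp(b_t); this returns the marginal
    odds ratio obtained by averaging risks over X."""
    def risk(t):
        return p_x * sigmoid(b0 + b_t * t + b_x) \
            + (1 - p_x) * sigmoid(b0 + b_t * t)
    p1, p0 = risk(1), risk(0)
    return (p1 / (1 - p1)) / (p0 / (1 - p0))
```

With b_x = 0 the covariate is irrelevant and the marginal odds ratio collapses exactly to exp(b_t); with b_x > 0 it is attenuated toward 1 even though nothing confounds the treatment, which is the geometric point the abstract makes about non-collapsible models.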
Rouse, Andrew A; Cook, Peter F; Large, Edward W; Reichmuth, Colleen
2016-01-01
Human capacity for entraining movement to external rhythms-i.e., beat keeping-is ubiquitous, but its evolutionary history and neural underpinnings remain a mystery. Recent findings of entrainment to simple and complex rhythms in non-human animals pave the way for a novel comparative approach to assess the origins and mechanisms of rhythmic behavior. The most reliable non-human beat keeper to date is a California sea lion, Ronan, who was trained to match head movements to isochronous repeating stimuli and showed spontaneous generalization of this ability to novel tempos and to the complex rhythms of music. Does Ronan's performance rely on the same neural mechanisms as human rhythmic behavior? In the current study, we presented Ronan with simple rhythmic stimuli at novel tempos. On some trials, we introduced "perturbations," altering either tempo or phase in the middle of a presentation. Ronan quickly adjusted her behavior following all perturbations, recovering her consistent phase and tempo relationships to the stimulus within a few beats. Ronan's performance was consistent with predictions of mathematical models describing coupled oscillation: a model relying solely on phase coupling strongly matched her behavior, and the model was further improved with the addition of period coupling. These findings are the clearest evidence yet for parity in human and non-human beat keeping and support the view that the human ability to perceive and move in time to rhythm may be rooted in broadly conserved neural mechanisms.
Williams, Mobolaji
2018-01-01
The field of disordered systems in statistical physics provides many simple models in which the competing influences of thermal and nonthermal disorder lead to new phases and nontrivial thermal behavior of order parameters. In this paper, we add a model to the subject by considering a disordered system where the state space consists of various orderings of a list. As in spin glasses, the disorder of such "permutation glasses" arises from a parameter in the Hamiltonian being drawn from a distribution of possible values, thus allowing nominally "incorrect orderings" to have lower energies than "correct orderings" in the space of permutations. We analyze a Gaussian, uniform, and symmetric Bernoulli distribution of energy costs, and, by employing Jensen's inequality, derive a simple condition requiring the permutation glass to always transition to the correctly ordered state at a temperature lower than that of the nondisordered system, provided that this correctly ordered state is accessible. We in turn find that in order for the correctly ordered state to be accessible, the probability that an incorrectly ordered component is energetically favored must be less than the inverse of the number of components in the system. We show that all of these results are consistent with a replica symmetric ansatz of the system. We conclude by arguing that there is no distinct permutation glass phase for the simplest model considered here and by discussing how to extend the analysis to more complex Hamiltonians capable of novel phase behavior and replica symmetry breaking. Finally, we outline an apparent correspondence between the presented system and a discrete-energy-level fermion gas. In all, the investigation introduces a class of exactly soluble models into statistical mechanics and provides a fertile ground to investigate statistical models of disorder.
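The Jensen's-inequality step invoked above can be illustrated numerically: for any distribution of energy costs, the disorder-averaged Boltzmann weight is at least the Boltzmann weight of the average energy, so disorder can only lower the effective free energy of nominally incorrect orderings. The distributions below are toy examples, not the paper's ensembles:

```python
import math

def jensen_gap(energies, probs, beta=1.0):
    """Returns E[exp(-beta*E)] - exp(-beta*E[E]) for a discrete energy
    distribution; by Jensen's inequality (convexity of exp) this gap
    is always >= 0, with equality only for a degenerate distribution."""
    avg_boltzmann = sum(p * math.exp(-beta * e)
                        for e, p in zip(energies, probs))
    boltzmann_of_avg = math.exp(-beta * sum(p * e
                                            for e, p in zip(energies, probs)))
    return avg_boltzmann - boltzmann_of_avg
```

For the symmetric Bernoulli case with costs of +1 and -1 at equal probability, the gap is cosh(beta) - 1, which grows with inverse temperature: the disorder-induced lowering of the transition temperature discussed above follows the same convexity logic.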
Consistent calculation of the screening and exchange effects in allowed β- transitions
NASA Astrophysics Data System (ADS)
Mougeot, X.; Bisch, C.
2014-07-01
The atomic exchange effect has previously been demonstrated to have a great influence at low energy on the Pu241 β- transition. The screening effect has been given as a possible explanation for a remaining discrepancy. Improved calculations have been made to consistently evaluate these two atomic effects, compared here to the recent high-precision measurements of Pu241 and Ni63 β spectra. In this paper a screening correction has been defined to account for the spatial extension of the electron wave functions. Excellent overall agreement of about 1% from 500 eV to the end-point energy has been obtained for both β spectra, which demonstrates that a rather simple β decay model for allowed transitions, including atomic effects within an independent-particle model, is sufficient to describe well the current most precise measurements.
'Peeling a comet': Layering of comet analogues
NASA Astrophysics Data System (ADS)
Kaufmann, E.; Hagermann, A.
2017-09-01
Using a simple comet analogue we investigate the influence of subsurface absorption of solar light by dust. We found that a sample initially consisting of loose water ice grains and carbon particles becomes significantly harder after being irradiated with artificial sunlight for several hours. Furthermore, a drastic change of the sample surface was observed. These results suggest that models should treat the nucleus surface as an interactive transitional zone to better represent cometary processes.
Computational Investigation of Structured Shocks in Al/SiC-Particulate Metal-Matrix Composites
2011-06-01
used to implement the dynamic-mixture model into the VUMAT user-material subroutine of ABAQUS/Explicit. Owing to the attendant large strains and...that the residual thermal-expansion effects are more pronounced in the aluminium matrix than in the SiC particulates. This finding is consistent with the...simple waves (CSWs) (Davison, 2008). In accordance with the previously observed larger thermal-expansion effects in Al, Figure 5(b) shows that the
Right Whale Diving and Foraging Behavior in the Southwestern Gulf of Maine
2011-09-30
atop a relatively simple food chain consisting only of phytoplankton, copepods, and whales that can serve as a convenient model to study trophic...oceanographic processes that promote the thin, aggregated layers of copepods upon which the whales feed, and (3) to assess the risks posed to right whales...copepods remained at the surface during the day (Baumgartner et al. 2011). We hypothesize that right whales faithfully track these changes in
Right Whale Diving and Foraging Behavior in the Southwestern Gulf of Maine
2011-09-30
whale sits atop a relatively simple food chain consisting only of phytoplankton, copepods, and whales that can serve as a convenient model to study...influence the vertical distribution of copepods. APPROACH Tagging, tracking, and sampling around right whales was accomplished with two...above this depth feeding on a shallow layer of copepods (Figure 3f). We were able to confirm that this tagged whale was feeding by observing its
A simple model to demonstrate the electronic apex locator.
Tinaz, A C; Alaçam, T; Topuz, O
2002-11-01
To describe and evaluate a newly developed model for demonstrating and teaching the use of electronic apex locators. A phantom model, master jaw model and extracted human teeth were used to construct the demonstration model with alginate impression material as the periapical conductive medium. The model was validated in a series of length determinations with apical foramina enlarged to 0.20, 0.30 and 0.45 mm diameter, and the stability of the model was evaluated up to 45 h after construction. All evaluations were conducted with the Root ZX apex locator with 2.65 and 5.25% NaOCl in the canals. Most length measurements were within 1 mm of actual root length (range: -2.2 to +0.21 mm) and did not change significantly over 45 h for teeth with foramina of 0.3 mm or less. Measurements for teeth with wide (0.45 mm) apices were stable up to 28 h. NaOCl concentration did not significantly affect the readings. A simple, inexpensive model can be manufactured from plastic dental jaws, natural teeth and alginate impression material to demonstrate electronic working length measurement. The model is stable for many hours and provides consistent results with different concentrations of NaOCl in the canal and various apical diameters. The model is a useful teaching aid but needs further evaluation and refinement before use in research applications.
Gamma-Ray Burst Intensity Distributions
NASA Technical Reports Server (NTRS)
Band, David L.; Norris, Jay P.; Bonnell, Jerry T.
2004-01-01
We use the lag-luminosity relation to calculate self-consistently the redshifts, apparent peak bolometric luminosities L_B1, and isotropic energies E_iso for a large sample of BATSE bursts. We consider two different forms of the lag-luminosity relation; for both forms the median redshift for our burst database is 1.6. We model the resulting sample of burst energies with power law and Gaussian distributions, both of which are reasonable models. The power law model has an index of a = 1.76 ± 0.05 (95% confidence) as opposed to the index of a = 2 predicted by the simple universal jet profile model; however, reasonable refinements to this model permit much greater flexibility in reconciling predicted and observed energy distributions.
Mingguang, Zhang; Juncheng, Jiang
2008-10-30
Overpressure is one important cause of the domino effect in accidents involving chemical process equipment. Damage probability and the corresponding threshold value are two necessary parameters in the QRA of this phenomenon. Some simple models have been proposed, based on scarce data or oversimplified assumptions. Hence, more data on damage to chemical process equipment were gathered and analyzed, a quantitative relationship between damage probability and the damage degree of equipment was built, and reliable probit models were developed for specific categories of chemical process equipment. Finally, the improvements of the present models were evidenced through comparison with other models in the literature, taking into account parameters such as the consistency between models and data and the depth of quantitative treatment in QRA.
Nilsen, Vegard; Wyller, John
2016-01-01
Dose-response models are essential to quantitative microbial risk assessment (QMRA), providing a link between levels of human exposure to pathogens and the probability of negative health outcomes. In drinking water studies, the class of semi-mechanistic models known as single-hit models, such as the exponential and the exact beta-Poisson, has seen widespread use. In this work, an attempt is made to carefully develop the general mathematical single-hit framework while explicitly accounting for variation in (1) host susceptibility and (2) pathogen infectivity. This allows a precise interpretation of the so-called single-hit probability and precise identification of a set of statistical independence assumptions that are sufficient to arrive at single-hit models. Further analysis of the model framework is facilitated by formulating the single-hit models compactly using probability generating and moment generating functions. Among the more practically relevant conclusions drawn are: (1) for any dose distribution, variation in host susceptibility always reduces the single-hit risk compared to a constant host susceptibility (assuming equal mean susceptibilities), (2) the model-consistent representation of complete host immunity is formally demonstrated to be a simple scaling of the response, (3) the model-consistent expression for the total risk from repeated exposures deviates (gives lower risk) from the conventional expression used in applications, and (4) a model-consistent expression for the mean per-exposure dose that produces the correct total risk from repeated exposures is developed. © 2016 Society for Risk Analysis.
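For context, the two single-hit models named above have simple closed forms. The sketch below implements the exponential model and the widely used approximate beta-Poisson formula (the approximation to the exact beta-Poisson discussed in the abstract); parameter values in the tests are illustrative, not tied to any specific pathogen:

```python
import math

def p_inf_exponential(dose, r):
    """Exponential single-hit model: each ingested organism independently
    survives to initiate infection with constant probability r, so
    P(infection) = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

def p_inf_beta_poisson(dose, alpha, beta):
    """Approximate beta-Poisson single-hit model: the per-organism
    survival probability varies across host-pathogen pairs following
    a Beta(alpha, beta) distribution."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)
```

Both curves start linearly in dose at low exposure (the hallmark of single-hit, no-threshold models) and saturate at 1, which is why they serve as the semi-mechanistic backbone of QMRA dose-response work.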
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.
The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software package developed at the Department of Engineering Hydrology of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and conceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall-runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) Model. Comprising the above suite of models, the system enables the user to calibrate each model individually, initially without updating, and it is also capable of producing combined (i.e. consensus) forecasts using the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of structural complexity, with corresponding degrees of complication in objective function evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.
Non-additive simple potentials for pre-programmed self-assembly
NASA Astrophysics Data System (ADS)
Mendoza, Carlos
2015-03-01
A major goal in nanoscience and nanotechnology is the self-assembly of any desired complex structure with a system of particles interacting through simple potentials. To achieve this objective, intense experimental and theoretical efforts are currently concentrated on the development of so-called "patchy" particles. Here we follow a completely different approach and introduce a very accessible model to produce a large variety of pre-programmed two-dimensional (2D) complex structures. Our model consists of a binary mixture of particles interacting through isotropic potentials that is able to self-assemble into targeted lattices by the appropriate choice of a small number of geometrical parameters and interaction strengths. We study the system using Monte Carlo computer simulations and, despite its simplicity, we are able to self-assemble potentially useful structures such as chains, stripes, Kagomé, twisted Kagomé, honeycomb, square, Archimedean and quasicrystalline tilings. Our model is designed such that it may be implemented using discotic particles or, alternatively, using exclusively spherical particles interacting isotropically. Thus, it represents a promising strategy for bottom-up nano-fabrication. Partial Financial Support: DGAPA IN-110613.
Experimental study of the oscillation of spheres in an acoustic levitator.
Andrade, Marco A B; Pérez, Nicolás; Adamowski, Julio C
2014-10-01
The spontaneous oscillation of solid spheres in a single-axis acoustic levitator is experimentally investigated by using a high-speed camera to record the position of the levitated sphere as a function of time. The oscillations in the axial and radial directions are systematically studied by changing the sphere density and the acoustic pressure amplitude. In order to interpret the experimental results, a simple model based on a spring-mass system is applied in the analysis of the sphere's oscillatory behavior. This model requires knowledge of the acoustic pressure distribution, which was obtained numerically by using a linear finite element method (FEM). Additionally, the linear acoustic pressure distribution obtained by FEM was compared with that measured with a laser Doppler vibrometer. The comparison between numerical and experimental pressure distributions shows good agreement for low values of pressure amplitude. When the pressure amplitude is increased, the acoustic pressure distribution becomes nonlinear, producing harmonics of the fundamental frequency. The experimental results for the sphere oscillations at low pressure amplitudes are consistent with the results predicted by the simple model based on a spring-mass system.
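The spring-mass analysis described above reduces, near the trapping point, to a harmonic oscillator whose stiffness is set by the local gradient of the acoustic radiation force. A minimal numerical sketch of that analogy follows; the stiffness and mass values in the tests are arbitrary placeholders, not values from the experiment:

```python
import math

def natural_frequency(stiffness, mass):
    """Natural frequency f = sqrt(k/m) / (2*pi) of the linearized trap:
    the acoustic radiation force near the levitation point is treated
    as a linear spring of stiffness k acting on a sphere of mass m."""
    return math.sqrt(stiffness / mass) / (2.0 * math.pi)

def simulated_period(stiffness, mass, dt=1e-5):
    """Semi-implicit Euler integration of m*x'' = -k*x from x = 1, v = 0;
    the first zero crossing of x marks a quarter period."""
    x, v, t = 1.0, 0.0, 0.0
    while x > 0.0:
        v += -(stiffness / mass) * x * dt  # kick: acceleration update
        x += v * dt                        # drift: position update
        t += dt
    return 4.0 * t
```

The simulated period agrees with the analytic 2*pi*sqrt(m/k), and heavier spheres oscillate more slowly at fixed stiffness, matching the density dependence probed in the experiments.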
Knowledge-based vision and simple visual machines.
Cliff, D; Noble, J
1997-01-01
The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684
NASA Astrophysics Data System (ADS)
Sawangwit, U.; Shanks, T.; Abdalla, F. B.; Cannon, R. D.; Croom, S. M.; Edge, A. C.; Ross, Nicholas P.; Wake, D. A.
2011-10-01
We present the angular correlation function measured from photometric samples comprising 1,562,800 luminous red galaxies (LRGs). Three LRG samples were extracted from the Sloan Digital Sky Survey (SDSS) imaging data, based on colour-cut selections at redshifts z ≈ 0.35, 0.55 and 0.7, as calibrated by the spectroscopic surveys SDSS-LRG, 2dF-SDSS LRG and QSO (quasi-stellar object) (2SLAQ), and the AAΩ-LRG survey. The galaxy samples cover ≈7600 deg² of sky, probing a total cosmic volume of ≈5.5 h⁻³ Gpc³. The small- and intermediate-scale correlation functions generally show significant deviations from a single power-law fit, with a well-detected break at ≈1 h⁻¹ Mpc, consistent with the transition scale between the one- and two-halo terms in halo occupation models. For galaxy separations of 1-20 h⁻¹ Mpc and at fixed luminosity, we see virtually no evolution of the clustering with redshift, and the data are consistent with a simple high-peaks biasing model where the comoving LRG space density is constant with z. At fixed z, the LRG clustering amplitude increases with luminosity in accordance with the simple high-peaks model, with a typical LRG dark matter halo mass of 10¹³-10¹⁴ h⁻¹ M⊙. For r < 1 h⁻¹ Mpc, the evolution is slightly faster and the clustering decreases towards high redshift, consistent with a virialized clustering model. However, assuming the halo occupation distribution (HOD) and Λ cold dark matter (ΛCDM) halo merger frameworks, ~2-3 per cent/Gyr of the LRGs are required to merge in order to explain the small-scale clustering evolution, consistent with previous results. At large scales, our result shows good agreement with the SDSS-LRG result of Eisenstein et al., but we find an apparent excess clustering signal beyond the baryon acoustic oscillation (BAO) scale. Angular power spectrum analyses of similar LRG samples also detect a similar apparent large-scale clustering excess, but more data are required to check for this feature in independent galaxy data sets. 
Certainly, if the ΛCDM model were correct then we would have to conclude that this excess was caused by systematics at the level of Δw ≈ 0.001-0.0015 in the photometric AAΩ-LRG sample.
Simple Models of SL-9 Impact Plumes
NASA Astrophysics Data System (ADS)
Harrington, J.; Deming, L. D.
1996-09-01
The impacts of the larger fragments of Comet Shoemaker-Levy 9 on Jupiter left debris patterns of consistent appearance, likely caused by the landing of the observed impact plumes. Realistic fluid simulations of impact plume evolution may take months to years for even single computer runs. To provide guidance for these models and to elucidate the most basic aspects of the plumes, debris patterns, and their ultimate effect on the atmosphere, we have developed simple models that reproduce many of the key features. These Monte-Carlo models divide the plume into discrete mass elements, assign to them a velocity distribution based on numerical impact models, and follow their ballistic trajectories until they hit the planet. If particles go no higher than the observed ~3,000 km plume heights, they cannot reach the observed crescent pattern located ~10,000 km from the impact sites unless they slide horizontally after ballistic flight. By introducing parameterized sliding or higher trajectories, we can reproduce most of the observed impact features, including the central streak, the crescent, and the ephemeral ring located ~30,000 km from the impact sites. We also keep track of the amounts of energy and momentum delivered to the atmosphere as a function of time and location, for use in atmospheric models (D. Deming and J. Harrington, this meeting).
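The ballistic core of such a Monte-Carlo scheme reduces to elementary mechanics. A minimal sketch, with illustrative launch speeds and a flat-planet approximation under Jupiter's gravity (not the paper's fragment-specific velocity distributions):

```python
import math
import random

G_JUPITER = 24.79  # Jupiter's gravitational acceleration, m/s^2

def ballistic_range(v, theta, g=G_JUPITER):
    """Horizontal distance travelled by a particle launched at speed v (m/s)
    and elevation angle theta (rad) over a flat surface: v^2*sin(2*theta)/g."""
    return v * v * math.sin(2.0 * theta) / g

def landing_distances(n, v_mean, v_spread, seed=0):
    """Monte-Carlo sketch: launch n mass elements with randomized speed and
    angle and return their ballistic landing distances (km)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        v = rng.gauss(v_mean, v_spread)
        theta = rng.uniform(math.radians(30), math.radians(80))
        out.append(ballistic_range(v, theta) / 1000.0)
    return out
```

Binning these landing distances by azimuth and distance is what produces the synthetic debris pattern that the paper compares to the observed crescent and ring.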
Distinguishing among potential mechanisms of singleton suppression.
Gaspelin, Nicholas; Luck, Steven J
2018-04-01
Previous research has revealed that people can suppress salient stimuli that might otherwise capture visual attention. The present study tests among three possible mechanisms of visual suppression. According to first-order feature suppression models, items are suppressed on the basis of simple feature values. According to second-order feature suppression models, items are suppressed on the basis of local discontinuities within a given feature dimension. According to global-salience suppression models, items are suppressed on the basis of their dimension-independent salience levels. The current study distinguished among these models by varying the predictability of the singleton color value. If items are suppressed by virtue of salience alone, then it should not matter whether the singleton color is predictable. However, evidence from probe processing and eye movements indicated that suppression is possible only when the color values are predictable. Moreover, the ability to suppress salient items developed gradually as participants gained experience with the feature that defined the salient distractor. These results are consistent with first-order feature suppression models, and are inconsistent with the other models of suppression. In other words, people primarily suppress salient distractors on the basis of their simple features and not on the basis of salience per se. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Assessment of cardiovascular risk based on a data-driven knowledge discovery approach.
Mendes, D; Paredes, S; Rocha, T; Carvalho, P; Henriques, J; Cabiddu, R; Morais, J
2015-01-01
The cardioRisk project addresses the development of personalized risk assessment tools for patients who have been admitted to the hospital with acute myocardial infarction. Although there are models available that assess the short-term risk of death/new events for such patients, these models were established in circumstances that do not take into account present clinical interventions and, in some cases, the risk factors used by such models are not easily available in clinical practice. The integration of existing risk tools (applied in clinicians' daily practice) with data-driven knowledge discovery mechanisms based on data routinely collected during hospitalizations will be a breakthrough in overcoming some of these difficulties. In this context, the development of simple and interpretable models (based on recent datasets) will unquestionably facilitate, and introduce confidence in, this integration process. In this work, a simple and interpretable model based on a real dataset is proposed. It consists of a decision tree model structure that uses a reduced set of six binary risk factors. The validation is performed using a recent dataset provided by the Portuguese Society of Cardiology (11,113 patients), which originally comprised 77 risk factors. A sensitivity, specificity and accuracy of, respectively, 80.42%, 77.25% and 78.80% were achieved, showing the effectiveness of the approach.
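The reported sensitivity, specificity, and accuracy follow directly from confusion-matrix counts. A minimal sketch with hypothetical counts (not the study's data):

```python
def classification_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from confusion-matrix counts
    (tp: true positives, fn: false negatives,
     tn: true negatives, fp: false positives)."""
    sensitivity = tp / (tp + fn)           # fraction of actual events caught
    specificity = tn / (tn + fp)           # fraction of non-events cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts for illustration only:
sens, spec, acc = classification_metrics(tp=80, fn=20, tn=70, fp=30)
```

For a risk model the trade-off between sensitivity and specificity is what the decision-tree thresholds tune.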
A hierarchical stress release model for synthetic seismicity
NASA Astrophysics Data System (ADS)
Bebbington, Mark
1997-06-01
We construct a stochastic dynamic model for synthetic seismicity involving stochastic stress input, release, and transfer in an environment of heterogeneous strength and interacting segments. The model is not fault-specific, having a number of adjustable parameters with physical interpretation, namely, stress relaxation, stress transfer, stress dissipation, segment structure, strength, and strength heterogeneity, which affect the seismicity in various ways. Local parameters are chosen to be consistent with large historical events, other parameters to reproduce bulk seismicity statistics for the fault as a whole. The one-dimensional fault is divided into a number of segments, each comprising a varying number of nodes. Stress input occurs at each node in a simple random process, representing the slow buildup due to tectonic plate movements. Events are initiated, subject to a stochastic hazard function, when the stress on a node exceeds the local strength. An event begins with the transfer of excess stress to neighboring nodes, which may in turn transfer their excess stress to the next neighbor. If the event grows to include the entire segment, then most of the stress on the segment is transferred to neighboring segments (or dissipated) in a characteristic event. These large events may themselves spread to other segments. We use the Middle America Trench to demonstrate that this model, using simple stochastic stress input and triggering mechanisms, can produce behavior consistent with the historical record over five units of magnitude. We also investigate the effects of perturbing various parameters in order to show how the model might be tailored to a specific fault structure. The strength of the model lies in this ability to reproduce the behavior of a general linear fault system through the choice of a relatively small number of parameters. 
It remains to develop a procedure for estimating the internal state of the model from the historical observations in order to use the model for forward prediction.
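The loading-and-cascade mechanism can be caricatured in a few lines. This toy sketch (parameter values are illustrative, not fitted to the Middle America Trench) loads random nodes, triggers an event when local strength is exceeded, and cascades the excess stress along the fault:

```python
import random

def simulate_fault(n_nodes=20, strength=1.0, transfer=0.9, steps=2000, seed=1):
    """Toy sketch of a stochastic stress-release fault: stress accumulates
    randomly at each node; when a node exceeds its strength, most of its
    stress is released, the excess is passed (minus dissipation) to a
    neighbour, and the cascade's size is recorded as an event."""
    rng = random.Random(seed)
    stress = [0.0] * n_nodes
    events = []
    for _ in range(steps):
        i = rng.randrange(n_nodes)
        stress[i] += rng.uniform(0.0, 0.05)   # slow tectonic loading
        size = 0
        while stress[i] > strength:           # cascade of excess stress
            excess = stress[i] - strength
            stress[i] = strength * 0.1        # most stress released
            size += 1
            i = (i + 1) % n_nodes
            stress[i] += transfer * excess    # neighbour receives a fraction
        if size:
            events.append(size)
    return stress, events
```

Even this caricature produces a range of event sizes from a single loading process, which is the qualitative behavior the full hierarchical model calibrates against the historical record.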
Interpretation of OAO-2 ultraviolet light curves of beta Doradus
NASA Technical Reports Server (NTRS)
Hutchinson, J. L.; Lillie, C. F.; Hill, S. J.
1975-01-01
Middle-ultraviolet light curves of beta Doradus, obtained by OAO-2, are presented along with other evidence indicating that the small additional bumps observed on the rising branches of these curves have their origin in shock-wave phenomena in the upper atmosphere of this classical Cepheid. A simple piston-driven spherical hydrodynamic model of the atmosphere is developed to explain the bumps, and the calculations are compared with observations. The model is found to be consistent with the shapes of the light curves as well as with measurements of the H-alpha radial velocities.
The effect of data structures on INGRES performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Creighton, J.R.
1987-01-01
Computer experiments were conducted to determine the effect of using Heap, ISAM, Hash and B-tree data structures for INGRES relations. Average times for retrieve, append and update were determined for searches by unique key and non-key data. The experiments were conducted on relations of approximately 1000 tuples of 332 byte width. Multiple operations were performed, where appropriate, to obtain average times. Simple models of the data structures are presented and shown to be consistent with experimental results. The models can be used to predict performance, and to select the appropriate data structure for various applications.
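The qualitative behavior of these structures follows textbook cost models. A sketch of generic estimates (not the fitted models from the report) for the average number of probes to find one record by unique key:

```python
import math

def expected_probes(structure, n):
    """Rough textbook cost models for locating one of n records by unique key."""
    if structure == "heap":
        return n / 2.0            # sequential scan, found halfway on average
    if structure in ("isam", "btree"):
        return math.log2(n)       # index levels grow logarithmically
    if structure == "hash":
        return 1.0                # direct bucket access
    raise ValueError(structure)
```

For relations of ~1000 tuples this predicts the ordering observed experimentally: hash fastest for unique-key retrieval, tree-structured indexes close behind, and heap scans slowest.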
NASA Technical Reports Server (NTRS)
Gu, Ye-Ming; Li, Chung-Sheng
1986-01-01
On the basis of the summing-up and analysis of the observations and theories about the impulsive microwave and hard X-ray bursts, the correlations between these two kinds of emissions were investigated. It is shown that it is only possible to explain the optically-thin microwave spectrum and its relations with the hard X-ray spectrum by means of the nonthermal source model. A simple nonthermal trap model in the mildly-relativistic case can consistently explain the main characteristics of the spectrum and the relative time delays.
Postglacial rebound with a non-Newtonian upper mantle and a Newtonian lower mantle rheology
NASA Technical Reports Server (NTRS)
Gasperini, Paolo; Yuen, David A.; Sabadini, Roberto
1992-01-01
A composite rheology is employed consisting of both linear and nonlinear creep mechanisms which are connected by a 'transition' stress. Background stress due to geodynamical processes is included. For models with a non-Newtonian upper-mantle overlying a Newtonian lower-mantle, the temporal responses of the displacements can reproduce those of Newtonian models. The average effective viscosity profile under the ice-load at the end of deglaciation turns out to be the crucial factor governing mantle relaxation. This can explain why simple Newtonian rheology has been successful in fitting the uplift data over formerly glaciated regions.
Geochemistry of the Birch Creek Drainage Basin, Idaho
Swanson, Shawn A.; Rosentreter, Jeffrey J.; Bartholomay, Roy C.; Knobel, LeRoy L.
2003-01-01
The U.S. Geological Survey and Idaho State University, in cooperation with the U.S. Department of Energy, are conducting studies to describe the chemical character of ground water that moves as underflow from drainage basins into the eastern Snake River Plain aquifer (ESRPA) system at and near the Idaho National Engineering and Environmental Laboratory (INEEL), and the effects of these recharge waters on the geochemistry of the ESRPA system. Each of these recharge waters has a hydrochemical character related to geochemical processes, especially water-rock interactions, that occur during migration to the ESRPA. Results of these studies will benefit ongoing and planned geochemical modeling of the ESRPA at the INEEL by providing model input on the hydrochemical character of water from each drainage basin. During 2000, water samples were collected from five wells and one surface-water site in the Birch Creek drainage basin and analyzed for selected inorganic constituents, nutrients, dissolved organic carbon, tritium, gross alpha and beta radioactivity, and stable isotopes. Four duplicate samples also were collected for quality assurance. Results, which include analyses of samples previously collected from four other sites in the basin, show that most water from the Birch Creek drainage basin has a calcium-magnesium bicarbonate character. The Birch Creek Valley can be divided roughly into three hydrologic areas. In the northern part, ground water is forced to the surface by a basalt barrier and the sampling sites were either surface water or shallow wells. Water chemistry in this area was characterized by simple evaporation models, simple calcite-carbon dioxide models, or complex models involving carbonate and silicate minerals. The central part of the valley is filled by sedimentary material and the sampling sites were wells that are deeper than those in the northern part. 
Water chemistry in this area was characterized by simple calcite-dolomite-carbon dioxide models. In the southern part, ground water enters the ESRPA. In this area, the sampling sites were wells with depths and water levels much deeper than those in the northern and central parts of the valley. The calcium and carbon water chemistry in this area was characterized by a simple calcite-carbon dioxide model, but complex calcite-silicate models more accurately accounted for mass transfer in these areas. Throughout the geochemical system, calcite precipitated if it was an active phase in the models. Carbon dioxide either precipitated (outgassed) or dissolved depending on the partial pressure of carbon dioxide in water from the modeled sites. Dolomite was an active phase only in models from the central part of the system. Generally the entire geochemical system could be modeled with either evaporative models, carbonate models, or carbonate-silicate models. In both of the latter types of models, a significant amount of calcite precipitated relative to the mass transfer to and from the other active phases. The amount of calcite precipitated in the more complex models was consistent with the amount of calcite precipitated in the simpler models. This consistency suggests that, although the simpler models can predict calcium and carbon concentrations in Birch Creek Valley ground and surface water, silicate-mineral-based models are required to account for the other constituents. The amount of mass transfer to and from the silicate mineral phases was generally small compared with that in the carbonate phases. It appears that the water chemistry of well USGS 126B represents the chemistry of water recharging the ESRPA by means of underflow from the Birch Creek Valley.
Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van Meerveld, H.J.; Peters, N.E.; Freer, J.E.
2009-01-01
The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences, conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and the non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
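The deviation from linearity that motivates the parallel-reservoir model can be seen in a short sketch. A single linear reservoir has a constant recession time constant -Q/(dQ/dt); two in parallel give a time constant that drifts from the fast reservoir toward the slow one (time constants and amplitudes here are arbitrary illustrative values):

```python
import math

def parallel_reservoir_flow(t, a1, tau1, a2, tau2):
    """Total discharge from two parallel linear reservoirs:
    Q(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)."""
    return a1 * math.exp(-t / tau1) + a2 * math.exp(-t / tau2)

def recession_time_constant(t, a1, tau1, a2, tau2, dt=1e-4):
    """Instantaneous -Q/(dQ/dt) via finite difference: constant for a single
    reservoir, time-varying for two reservoirs in parallel."""
    q0 = parallel_reservoir_flow(t, a1, tau1, a2, tau2)
    q1 = parallel_reservoir_flow(t + dt, a1, tau1, a2, tau2)
    dqdt = (q1 - q0) / dt
    return -q0 / dqdt
```

Plotting dQ/dt against Q for the composite flow reproduces the progressively larger departure from a straight line that the paper reports with increasing spatial scale.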
Logistic regression of family data from retrospective study designs.
Whittemore, Alice S; Halpern, Jerry
2003-11-01
We wish to study the effects of genetic and environmental factors on disease risk, using data from families ascertained because they contain multiple cases of the disease. To do so, we must account for the way participants were ascertained, and for within-family correlations in both disease occurrences and covariates. We model the joint probability distribution of the covariates of ascertained family members, given family disease occurrence and pedigree structure. We describe two such covariate models: the random effects model and the marginal model. Both models assume a logistic form for the distribution of one person's covariates that involves a vector beta of regression parameters. The components of beta in the two models have different interpretations, and they differ in magnitude when the covariates are correlated within families. We describe ascertainment assumptions needed to estimate consistently the parameters beta(RE) in the random effects model and the parameters beta(M) in the marginal model. Under the ascertainment assumptions for the random effects model, we show that conditional logistic regression (CLR) of matched family data gives a consistent estimate beta(RE) for beta(RE) and a consistent estimate for the covariance matrix of beta(RE). Under the ascertainment assumptions for the marginal model, we show that unconditional logistic regression (ULR) gives a consistent estimate for beta(M), and we give a consistent estimator for its covariance matrix. The random effects/CLR approach is simple to use and to interpret, but it can use data only from families containing both affected and unaffected members. The marginal/ULR approach uses data from all individuals, but its variance estimates require special computations. A C program to compute these variance estimates is available at http://www.stanford.edu/dept/HRP/epidemiology. 
We illustrate these pros and cons by application to data on the effects of parity on ovarian cancer risk in mother/daughter pairs, and use simulations to study the performance of the estimates. Copyright 2003 Wiley-Liss, Inc.
A Numerical and Experimental Study of Damage Growth in a Composite Laminate
NASA Technical Reports Server (NTRS)
McElroy, Mark; Ratcliffe, James; Czabaj, Michael; Wang, John; Yuan, Fuh-Gwo
2014-01-01
The present study has three goals: (1) perform an experiment where a simple laminate damage process can be characterized in high detail; (2) evaluate the performance of existing commercially available laminate damage simulation tools by modeling the experiment; and (3) observe and understand the underlying physics of damage in a composite honeycomb sandwich structure subjected to low-velocity impact. A quasi-static indentation experiment has been devised to provide detailed information about a simple mixed-mode damage growth process. The test specimens consist of an aluminum honeycomb core with a cross-ply laminate facesheet supported on a stiff uniform surface. When the sample is subjected to an indentation load, the honeycomb core provides support to the facesheet, resulting in a gradual and stable damage growth process in the skin. This enables real-time observation as a matrix crack forms, propagates through a ply, and then causes a delamination. Finite element analyses were conducted in ABAQUS/Explicit™ 6.13 that used continuum and cohesive modeling techniques to simulate facesheet damage and a geometric and material nonlinear model to simulate core crushing. The high fidelity of the experimental data allows a detailed investigation and discussion of the accuracy of each numerical modeling approach.
Using energy budgets to combine ecology and toxicology in a mammalian sentinel species
NASA Astrophysics Data System (ADS)
Desforges, Jean-Pierre W.; Sonne, Christian; Dietz, Rune
2017-04-01
Process-driven modelling approaches can resolve many of the shortcomings of traditional descriptive and non-mechanistic toxicology. We developed a simple dynamic energy budget (DEB) model for the mink (Mustela vison), a sentinel species in mammalian toxicology, which coupled animal physiology, ecology and toxicology, in order to mechanistically investigate the accumulation and adverse effects of lifelong dietary exposure to persistent environmental toxicants, most notably polychlorinated biphenyls (PCBs). Our novel mammalian DEB model accurately predicted, based on energy allocations to the interconnected metabolic processes of growth, development, maintenance and reproduction, lifelong patterns in mink growth, reproductive performance and dietary accumulation of PCBs as reported in the literature. Our model results were consistent with empirical data from captive and free-ranging studies in mink and other wildlife and suggest that PCB exposure can have significant population-level impacts resulting from targeted effects on fetal toxicity, kit mortality and growth and development. Our approach provides a simple and cross-species framework to explore the mechanistic interactions of physiological processes and ecotoxicology, thus allowing for a deeper understanding and interpretation of stressor-induced adverse effects at all levels of biological organization.
Thermal dark matter through the Dirac neutrino portal
NASA Astrophysics Data System (ADS)
Batell, Brian; Han, Tao; McKeen, David; Shams Es Haghi, Barmak
2018-04-01
We study a simple model of thermal dark matter annihilating to standard model neutrinos via the neutrino portal. A (pseudo-)Dirac sterile neutrino serves as a mediator between the visible and the dark sectors, while an approximate lepton number symmetry allows for a large neutrino Yukawa coupling and, in turn, efficient dark matter annihilation. The dark sector consists of two particles, a Dirac fermion and complex scalar, charged under a symmetry that ensures the stability of the dark matter. A generic prediction of the model is a sterile neutrino with a large active-sterile mixing angle that decays primarily invisibly. We derive existing constraints and future projections from direct detection experiments, colliders, rare meson and tau decays, electroweak precision tests, and small scale structure observations. Along with these phenomenological tests, we investigate the consequences of perturbativity and scalar mass fine tuning on the model parameter space. A simple, conservative scheme to confront the various tests with the thermal relic target is outlined, and we demonstrate that much of the cosmologically-motivated parameter space is already constrained. We also identify new probes of this scenario such as multibody kaon decays and Drell-Yan production of W bosons at the LHC.
A coordination theory for intelligent machines
NASA Technical Reports Server (NTRS)
Wang, Fei-Yue; Saridis, George N.
1990-01-01
A formal model for the coordination level of intelligent machines is established. The framework of the coordination level investigated consists of one dispatcher and a number of coordinators. The model, called the coordination structure, has been used to describe analytically the information structure and information flow for the coordination activities in the coordination level. Specifically, the coordination structure offers a formalism to (1) describe the task translation of the dispatcher and coordinators; (2) represent the individual process within the dispatcher and coordinators; (3) specify the cooperation and connection among the dispatcher and coordinators; (4) perform the process analysis and evaluation; and (5) provide a control and communication mechanism for real-time monitoring or simulation of the coordination process. A simple procedure for task scheduling in the coordination structure is presented. The task translation is achieved by a stochastic learning algorithm. The learning process is measured with entropy and its convergence is guaranteed. Finally, a case study of the coordination structure with three coordinators and one dispatcher for a simple intelligent manipulator system illustrates the proposed model, and the simulation of the task processes performed on the model verifies the soundness of the theory.
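The entropy-measured stochastic learning step can be illustrated with a linear reward-inaction automaton, a standard scheme assumed here for illustration rather than taken from the paper: the action-probability vector starts uniform, successful actions are reinforced, and the Shannon entropy of the vector falls as the translation converges.

```python
import math
import random

def entropy(p):
    """Shannon entropy (bits) of a discrete probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def learn(n_actions=3, best=0, rate=0.05, steps=2000, seed=3):
    """Linear reward-inaction automaton: an action's probability is reinforced
    whenever it succeeds; here action `best` is the only one that succeeds."""
    rng = random.Random(seed)
    p = [1.0 / n_actions] * n_actions
    for _ in range(steps):
        a = rng.choices(range(n_actions), weights=p)[0]
        if a == best:  # reward: shift probability mass toward the action
            for j in range(n_actions):
                p[j] = p[j] + rate * (1.0 - p[j]) if j == a else p[j] * (1.0 - rate)
    return p

p0 = [1.0 / 3] * 3      # uniform start: maximum entropy
p_final = learn()        # converges toward the rewarded action
```

Measuring `entropy(p)` over time gives exactly the kind of convergence diagnostic the abstract describes.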
A simple sensing mechanism for wireless, passive pressure sensors.
Drazan, John F; Wassick, Michael T; Dahle, Reena; Beardslee, Luke A; Cady, Nathaniel C; Ledet, Eric H
2016-08-01
We have developed a simple wireless pressure sensor that consists of only three electrically isolated components. Two conductive spirals are separated by a closed cell foam that deforms when exposed to changing pressures. This deformation changes the capacitance and thus the resonant frequency of the sensors. Prototype sensors were submerged and wirelessly interrogated while being exposed to physiologically relevant pressures from 10 to 130 mmHg. Sensors consistently exhibited a sensitivity of 4.35 kHz/mmHg which is sufficient for resolving physiologically relevant pressure changes in vivo. These simple sensors have the potential for in vivo pressure sensing.
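The sensing principle reduces to an LC resonance with a pressure-dependent capacitance. A sketch with illustrative geometry and inductance (not the prototype's values): compressing the foam shrinks the gap between the spirals, raising C and lowering the resonant frequency read out wirelessly.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def resonant_frequency(L, C):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def plate_capacitance(area, gap, eps_r=1.0):
    """Parallel-plate estimate C = eps0 * eps_r * area / gap for the spiral pair."""
    return EPS0 * eps_r * area / gap

# Hypothetical values: 1 cm^2 overlap, 1 uH spiral inductance,
# foam compressed from a 1.0 mm to a 0.8 mm gap under pressure.
L = 1e-6  # H
f_unloaded = resonant_frequency(L, plate_capacitance(1e-4, 1.0e-3))
f_pressurized = resonant_frequency(L, plate_capacitance(1e-4, 0.8e-3))
```

Calibrating the frequency shift against applied pressure is what yields a sensitivity figure such as the reported 4.35 kHz/mmHg.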
NASA Astrophysics Data System (ADS)
Solander, K.; David, C. H.; Reager, J. T.; Famiglietti, J. S.
2013-12-01
The ability to reasonably replicate reservoir behavior in terms of storage and outflow is important for studying the potential human impacts on the terrestrial water cycle. Developing a simple method for this purpose could facilitate subsequent integration in a land surface or global climate model. This study attempts to simulate monthly reservoir outflow and storage using a simple, temporally varying set of heuristic equations with input consisting of in situ records of reservoir inflow and storage. Equations of increasing complexity relative to the number of parameters involved were tested. Only two parameters were employed in the final equations used to predict outflow and storage, in an attempt to best mimic seasonal reservoir behavior while still preserving model parsimony. California reservoirs were selected for model development due to the high level of data availability and intensity of water resource management in this region relative to other areas. Calibration was achieved using observations from eight major reservoirs, representing approximately 41% of the 107 largest reservoirs in the state. Parameter optimization was accomplished using the minimum RMSE between observed and modeled storage and outflow as the main objective function. Initial results give a multi-reservoir average correlation coefficient between observed and modeled storage (resp. outflow) of 0.78 (resp. 0.75). These results, combined with the simplicity of the equations being used, show promise for integration into a land surface or global climate model. This would be invaluable for evaluations of reservoir management impacts on the flow regime and associated ecosystems as well as on the climate at both regional and global scales.
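The abstract does not give the equations themselves, so here is a hypothetical two-parameter release rule, purely for illustration: monthly release as a fraction alpha of storage plus a fraction beta of inflow, calibrated by grid search on RMSE, mirroring the mass-balance-plus-objective-function structure described above.

```python
import math

def simulate(inflow, s0, alpha, beta):
    """Hypothetical two-parameter reservoir rule (not the paper's equations):
    release = alpha*storage + beta*inflow, storage updated by mass balance."""
    storage, out = s0, []
    for q_in in inflow:
        release = alpha * storage + beta * q_in
        storage = max(storage + q_in - release, 0.0)
        out.append(release)
    return out

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def calibrate(inflow, observed_outflow, s0):
    """Grid search for the (alpha, beta) pair minimizing outflow RMSE."""
    best = (float("inf"), None)
    for i in range(1, 20):
        for j in range(1, 20):
            alpha, beta = i / 20.0, j / 20.0
            err = rmse(simulate(inflow, s0, alpha, beta), observed_outflow)
            best = min(best, (err, (alpha, beta)))
    return best
```

In practice the calibration would run jointly on storage and outflow errors, as the abstract's objective function does; the sketch keeps only the outflow term for brevity.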
NASA Astrophysics Data System (ADS)
Baker, Allison H.; Hu, Yong; Hammerling, Dorit M.; Tseng, Yu-heng; Xu, Haiying; Huang, Xiaomeng; Bryan, Frank O.; Yang, Guangwen
2016-07-01
The Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM), is widely used in climate research. Most current work in CESM-POP focuses on improving the model's efficiency or accuracy, such as improving numerical methods, advancing parameterization, porting to new architectures, or increasing parallelism. Since ocean dynamics are chaotic in nature, achieving bit-for-bit (BFB) identical results in ocean solutions cannot be guaranteed for even tiny code modifications, and determining whether modifications are admissible (i.e., statistically consistent with the original results) is non-trivial. In recent work, an ensemble-based statistical approach was shown to work well for software verification (i.e., quality assurance) on atmospheric model data. The general idea of ensemble-based statistical consistency testing is to use a qualitative measurement of the variability of the ensemble of simulations as a metric with which to compare future simulations and make a determination of statistical distinguishability. The capability to determine consistency without BFB results boosts model confidence and provides the flexibility needed, for example, for more aggressive code optimizations and the use of heterogeneous execution environments. Since ocean and atmosphere models have differing characteristics in terms of dynamics, spatial variability, and timescales, we present a new statistical method to evaluate ocean model simulation data that requires the evaluation of ensemble means and deviations in a spatial manner. In particular, the statistical distribution from an ensemble of CESM-POP simulations is used to determine the standard score of any new model solution at each grid point. Then the percentage of points that have scores greater than a specified threshold indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. Both ensemble size and composition are important. 
Our experiments indicate that the new POP ensemble consistency test (POP-ECT) tool is capable of distinguishing cases that should be statistically consistent with the ensemble and those that should not, as well as providing a simple, subjective and systematic way to detect errors in CESM-POP due to the hardware or software stack, positively contributing to quality assurance for the CESM-POP code.
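The per-grid-point standard-score test described above can be sketched in a few lines (a simplified caricature of POP-ECT, assuming at least two ensemble members per point; the actual tool's thresholds and pass/fail policy are more involved):

```python
import statistics

def consistency_failure_rate(ensemble, new_run, threshold=3.0):
    """Fraction of grid points where the new run's standard score against the
    ensemble exceeds the threshold. `ensemble` is a list of runs, each a list
    of grid-point values; `new_run` is one such list."""
    n_points = len(new_run)
    failures = 0
    for p in range(n_points):
        values = [run[p] for run in ensemble]
        mean = statistics.fmean(values)
        sd = statistics.stdev(values)
        score = (new_run[p] - mean) / sd if sd > 0 else 0.0
        if abs(score) > threshold:
            failures += 1
    return failures / n_points
```

A modified code whose failure rate stays near that of held-out ensemble members would be judged statistically consistent; a large rate flags a problem in the hardware or software stack.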
NASA Technical Reports Server (NTRS)
Carder, K. L.; Lee, Z. P.; Marra, John; Steward, R. G.; Perry, M. J.
1995-01-01
The quantum yield of photosynthesis, phi (mol C/mol photons), was calculated at six depths for the waters of the Marine Light-Mixed Layer (MLML) cruise of May 1991. As there were photosynthetically available radiation (PAR) measurements but no spectral irradiance measurements for the primary production incubations, three ways are presented here for the calculation of the absorbed photons (AP) by phytoplankton for the purpose of calculating phi. The first is based on a simple, nonspectral model; the second is based on a nonlinear regression using measured PAR values with depth; and the third is derived through remote sensing measurements. We show that the results of phi calculated using the nonlinear regression method and those using remote sensing are in good agreement with each other, and are consistent with the reported values of other studies. In deep waters, however, the simple nonspectral model may give quantum yield values much higher than theoretically possible.
Cellular reprogramming dynamics follow a simple 1D reaction coordinate
NASA Astrophysics Data System (ADS)
Teja Pusuluri, Sai; Lang, Alex H.; Mehta, Pankaj; Castillo, Horacio E.
2018-01-01
Cellular reprogramming, the conversion of one cell type to another, induces global changes in gene expression involving thousands of genes, and understanding how cells globally alter their gene expression profile during reprogramming is an ongoing problem. Here we reanalyze time-course data on cellular reprogramming from differentiated cell types to induced pluripotent stem cells (iPSCs) and show that gene expression dynamics during reprogramming follow a simple 1D reaction coordinate. This reaction coordinate is independent of both the time it takes to reach the iPSC state as well as the details of the experimental protocol used. Using Monte-Carlo simulations, we show that such a reaction coordinate emerges from epigenetic landscape models where cellular reprogramming is viewed as a ‘barrier-crossing’ process between cell fates. Overall, our analysis and model suggest that gene expression dynamics during reprogramming follow a canonical trajectory consistent with the idea of an ‘optimal path’ in gene expression space for reprogramming.
A simple microviscometric approach based on Brownian motion tracking.
Hnyluchová, Zuzana; Bjalončíková, Petra; Karas, Pavel; Mravec, Filip; Halasová, Tereza; Pekař, Miloslav; Kubala, Lukáš; Víteček, Jan
2015-02-01
Viscosity-an integral property of a liquid-is traditionally determined by mechanical instruments. The most pronounced disadvantage of such an approach is the requirement of a large sample volume, which poses a serious obstacle, particularly in biology and biophysics when working with limited samples. Scaling down the required volume by means of microviscometry based on tracking the Brownian motion of particles can provide a reasonable alternative. In this paper, we report a simple microviscometric approach which can be conducted with common laboratory equipment. The core of this approach consists in a freely available standalone script to process particle trajectory data based on a Newtonian model. In our study, this setup allowed the sample to be scaled down to 10 μl. The utility of the approach was demonstrated using model solutions of glycerine, hyaluronate, and mouse blood plasma. Therefore, this microviscometric approach based on a newly developed freely available script can be suggested for determination of the viscosity of small biological samples (e.g., body fluids).
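A minimal version of such a Brownian-motion microviscometer can be written in a few lines, assuming a Newtonian fluid and the Stokes-Einstein relation; the synthetic bead track below stands in for real particle-tracking data:

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K

def viscosity_from_track(xy, dt, radius, temperature=298.15):
    """Estimate viscosity (Pa s) from a 2D particle track (positions in m)
    under the Newtonian model: MSD over one frame = 4*D*dt, then
    Stokes-Einstein gives eta = k_B*T / (6*pi*D*radius)."""
    steps = np.diff(xy, axis=0)
    msd_one_lag = (steps ** 2).sum(axis=1).mean()
    D = msd_one_lag / (4.0 * dt)
    return K_B * temperature / (6.0 * np.pi * D * radius)

# synthetic track of a 0.5 um-radius bead in a water-like fluid
rng = np.random.default_rng(1)
eta_true, radius, T, dt = 1.0e-3, 0.5e-6, 298.15, 0.01
D_true = K_B * T / (6.0 * np.pi * eta_true * radius)
xy = np.cumsum(rng.normal(0.0, np.sqrt(2.0 * D_true * dt),
                          size=(20_000, 2)), axis=0)
eta_est = viscosity_from_track(xy, dt, radius, T)
```

With 20,000 frames the estimate recovers the true viscosity to within a few percent; a production script would fit the full MSD-versus-lag curve rather than a single lag.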
Mah, In Kyoung
2017-01-01
For decades, the mechanism of skeletal patterning along a proximal-distal axis has been an area of intense inquiry. Here, we examine the development of the ribs, simple structures that in most terrestrial vertebrates consist of two skeletal elements—a proximal bone and a distal cartilage portion. While the ribs have been shown to arise from the somites, little is known about how the two segments are specified. During our examination of genetically modified mice, we discovered a series of progressively worsening phenotypes that could not be easily explained. Here, we combine genetic analysis of rib development with agent-based simulations to conclude that proximal-distal patterning and outgrowth could occur based on simple rules. In our model, specification occurs during somite stages due to varying Hedgehog protein levels, while later expansion refines the pattern. This framework is broadly applicable for understanding the mechanisms of skeletal patterning along a proximal-distal axis. PMID:29068314
Crowding with conjunctions of simple features.
Põder, Endel; Wagemans, Johan
2007-11-20
Several recent studies have related crowding with the feature integration stage in visual processing. In order to understand the mechanisms involved in this stage, it is important to use stimuli that have several features to integrate, and these features should be clearly defined and measurable. In this study, Gabor patches were used as target and distractor stimuli. The stimuli differed in three dimensions: spatial frequency, orientation, and color. A group of 3, 5, or 7 objects was presented briefly at 4 deg eccentricity of the visual field. The observers' task was to identify the object located in the center of the group. A strong effect of the number of distractors was observed, consistent with various spatial pooling models. The analysis of incorrect responses revealed that these were a mix of feature errors and mislocalizations of the target object. Feature errors were not purely random, but biased by the features of distractors. We propose a simple feature integration model that predicts most of the observed regularities.
Evaluation of a pulse control law for flexible spacecraft
NASA Technical Reports Server (NTRS)
1985-01-01
The following analytical and experimental studies were conducted: (1) A simple algorithm was developed to suppress the structural vibrations of 3-dimensional distributed parameter systems, subjected to interface motion and/or directly applied forces. The algorithm is designed to cope with structural oscillations superposed on top of rigid-body motion: a situation identical to that encountered by the SCOLE components. A significant feature of the method is that only local measurements of the structural displacements and velocities relative to the moving frame of reference are needed. (2) A numerical simulation study was conducted on a simple linear finite element model of a cantilevered plate which was subjected to test excitations consisting of impulsive base motion and of nonstationary wide-band random excitation applied at its root. In each situation, the aim was to suppress the vibrations of the plate relative to the moving base. (3) A small mechanical model resembling an aircraft wing was designed and fabricated to investigate the control algorithm under realistic laboratory conditions.
Numerical evaluation of a single ellipsoid motion in Newtonian and power-law fluids
NASA Astrophysics Data System (ADS)
Férec, Julien; Ausias, Gilles; Natale, Giovanniantonio
2018-05-01
A computational model is developed for simulating the motion of a single ellipsoid suspended in a Newtonian or power-law fluid. Based on a finite element method (FEM), the approach consists in seeking solutions for the linear and angular particle velocities using a minimization algorithm, such that the net hydrodynamic force and torque acting on the ellipsoid are zero. For a Newtonian fluid subjected to a simple shear flow, Jeffery's predictions are recovered for any aspect ratio. The motion of a single ellipsoidal fiber is found to be only slightly disturbed by the shear-thinning character of the suspending fluid when compared with Jeffery's solutions. Surprisingly, the perturbation can be completely neglected for a particle with a large aspect ratio. Furthermore, the particle centroid is found to translate with the same linear velocity as the undisturbed simple shear flow evaluated at the particle centroid. This is confirmed by recent works based on experimental investigations and modeling approaches (1-2).
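Jeffery's analytical solution, which the Newtonian simulations recover, is simple to evaluate directly. The functions below compute the rotation period T = (2*pi/gamma_dot)*(p + 1/p) and the in-plane orientation angle for a spheroid of aspect ratio p in simple shear:

```python
import numpy as np

def jeffery_period(shear_rate, p):
    """Jeffery's rotation period for a spheroid of aspect ratio p in simple
    shear: T = (2*pi/gamma_dot) * (p + 1/p)."""
    return 2.0 * np.pi / shear_rate * (p + 1.0 / p)

def jeffery_angle(t, shear_rate, p):
    """In-plane orientation angle from Jeffery's analytical solution,
    tan(phi) = p * tan(2*pi*t/T), written with arctan2 so the angle
    passes through phi = pi/2 without a division by zero."""
    w = 2.0 * np.pi * t / jeffery_period(shear_rate, p)
    return np.arctan2(p * np.sin(w), np.cos(w))
```

For p = 1 (a sphere) the angle advances uniformly at gamma_dot/2; for large p the particle spends most of each period nearly aligned with the flow, which is the characteristic Jeffery-orbit behavior.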
Sea-ice floe-size distribution in the context of spontaneous scaling emergence in stochastic systems
NASA Astrophysics Data System (ADS)
Herman, Agnieszka
2010-06-01
Sea-ice floe-size distribution (FSD) in ice-pack covered seas influences many aspects of ocean-atmosphere interactions. However, data concerning FSD in the polar oceans are still sparse, and the processes shaping the observed FSD properties are poorly understood. Typically, power-law FSDs are assumed, although no feasible explanation has been provided either for this or for other properties of the observed distributions. Consequently, no model exists capable of predicting FSD parameters in any particular situation. Here I show that the observed FSDs can be well represented by a truncated Pareto distribution P(x) = x^(-1-α) exp[(1-α)/x], which is an emergent property of a certain group of multiplicative stochastic systems described by the generalized Lotka-Volterra (GLV) equation. Building upon this recognition, the possibility of developing a simple agent-based GLV-type sea-ice model is considered. Contrary to simple power-law FSDs, GLV gives consistent estimates of the total floe perimeter, as well as a floe-area distribution in agreement with observations.
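The quoted truncated Pareto density is easy to evaluate numerically; the sketch below normalizes it on a finite interval (the cutoffs are arbitrary choices for illustration, not values from the paper):

```python
import numpy as np

def truncated_pareto_pdf(x, alpha, x_min=1e-3, x_max=1e4):
    """Density proportional to x^(-1-alpha) * exp[(1-alpha)/x], normalized
    numerically on [x_min, x_max]. For alpha > 1 the exponential factor
    suppresses small floes, while the tail keeps the pure power-law
    slope -(1 + alpha)."""
    grid = np.geomspace(x_min, x_max, 200_000)
    f = grid ** (-1.0 - alpha) * np.exp((1.0 - alpha) / grid)
    norm = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid))
    return x ** (-1.0 - alpha) * np.exp((1.0 - alpha) / x) / norm
```

The small-x truncation is what makes integrated quantities such as the total floe perimeter finite, which a pure power law with the same tail exponent does not guarantee.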
Partitioning degrees of freedom in hierarchical and other richly-parameterized models.
Cui, Yue; Hodges, James S; Kong, Xiaoxiao; Carlin, Bradley P
2010-02-01
Hodges & Sargent (2001) developed a measure of a hierarchical model's complexity, degrees of freedom (DF), that is consistent with definitions for scatterplot smoothers, is interpretable in terms of simple models, and enables control of a fit's complexity by means of a prior distribution on complexity. DF describes the complexity of the whole fitted model, but in general it is unclear how to allocate DF to individual effects. We give a new definition of DF for arbitrary normal-error linear hierarchical models, consistent with Hodges & Sargent's, that naturally partitions the n observations into DF for individual effects and for error. The new conception of an effect's DF is the ratio of the effect's modeled variance matrix to the total variance matrix. This gives a way to describe the sizes of different parts of a model (e.g., spatial clustering vs. heterogeneity), to place DF-based priors on smoothing parameters, and to describe how a smoothed effect competes with other effects. It also avoids difficulties with the most common definition of DF for residuals. We conclude by comparing DF to the effective number of parameters p_D of Spiegelhalter et al. (2002). Technical appendices and a dataset are available online as supplemental materials.
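For concreteness, the classic scatterplot-smoother notion of DF that this work generalizes and partitions is the trace of the smoother (hat) matrix; a ridge-type example:

```python
import numpy as np

def smoother_df(X, penalty):
    """DF of a ridge-type linear smoother,
    df = trace(X (X'X + penalty*I)^{-1} X'), the classic
    scatterplot-smoother quantity consistent with the hierarchical
    definition discussed in the paper."""
    p = X.shape[1]
    H = X @ np.linalg.solve(X.T @ X + penalty * np.eye(p), X.T)
    return float(np.trace(H))

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))
```

With no penalty the smoother is a projection and `smoother_df(X, 0.0)` equals the column count (one DF per coefficient); as the penalty grows, DF shrinks smoothly toward zero, which is exactly the behavior a DF-based prior on smoothing parameters exploits.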
Oscillations in a simple climate-vegetation model
NASA Astrophysics Data System (ADS)
Rombouts, J.; Ghil, M.
2015-05-01
We formulate and analyze a simple dynamical systems model for climate-vegetation interaction. The planet we consider consists of a large ocean and a land surface on which vegetation can grow. The temperature affects vegetation growth on land and the amount of sea ice on the ocean. Conversely, vegetation and sea ice change the albedo of the planet, which in turn changes its energy balance and hence the temperature evolution. Our highly idealized, conceptual model is governed by two nonlinear, coupled ordinary differential equations, one for global temperature, the other for vegetation cover. The model exhibits either bistability between a vegetated and a desert state or oscillatory behavior. The oscillations arise through a Hopf bifurcation off the vegetated state, when the death rate of vegetation is low enough. These oscillations are anharmonic and exhibit a sawtooth shape that is characteristic of relaxation oscillations, as well as suggestive of the sharp deglaciations of the Quaternary. Our model's behavior can be compared, on the one hand, with the bistability of even simpler, Daisyworld-style climate-vegetation models. On the other hand, it can be integrated into the hierarchy of models trying to simulate and explain oscillatory behavior in the climate system. Rigorous mathematical results are obtained that link the nature of the feedbacks with the nature and the stability of the solutions. The relevance of model results to climate variability on various timescales is discussed.
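A toy version of such a two-ODE climate-vegetation system can be integrated directly. The functional forms and constants below are illustrative stand-ins (sea ice is omitted and a Daisyworld-style growth curve is assumed), not the paper's equations; they only show the coupled structure, with vegetation lowering albedo and temperature controlling growth:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def rhs(T, A, Q=342.5, eps=0.61, a_veg=0.10, a_bare=0.40,
        death=0.1, C=500.0):
    """Right-hand side of a toy temperature/vegetation system. All
    functional forms and constants are illustrative assumptions."""
    albedo = a_veg * A + a_bare * (1.0 - A)
    dT = (Q * (1.0 - albedo) - eps * SIGMA * T ** 4) / C
    growth = max(0.0, 1.0 - 0.003265 * (295.0 - T) ** 2)  # Daisyworld-style
    dA = A * (growth * (1.0 - A) - death)
    return dT, dA

def integrate(T=280.0, A=0.5, dt=0.05, n_steps=200_000):
    """Forward-Euler integration; returns the final state."""
    for _ in range(n_steps):
        dT, dA = rhs(T, A)
        T = T + dt * dT
        A = min(max(A + dt * dA, 0.0), 1.0)
    return T, A

T_end, A_end = integrate()   # settles on a warm, vegetated state
```

With these parameters the system converges to a vegetated equilibrium; reproducing the bistability and the Hopf bifurcation reported in the abstract would require the paper's sea-ice albedo feedback and its specific parameter ranges.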
The Pressure-Dependence of the Size of Extruded Vesicles
Patty, Philipus J.; Frisken, Barbara J.
2003-01-01
Variations in the size of vesicles formed by extrusion through small pores are discussed in terms of a simple model. Our model predicts that the radius should decrease as the square root of the applied pressure, consistent with data for vesicles extruded under various conditions. The model also predicts dependencies on the pore size used and on the lysis tension of the vesicles being extruded that are consistent with our data. The pore size was varied by using track-etched polycarbonate membranes with average pore diameters ranging from 50 to 200 nm. To vary the lysis tension, vesicles made from POPC (1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylcholine), mixtures of POPC and cholesterol, and mixtures of POPC and C16-ceramide were studied. The lysis tension, as measured by an extrusion-based technique, of POPC:cholesterol vesicles is higher than that of pure POPC vesicles whereas POPC:ceramide vesicles have lower lysis tensions than POPC vesicles. PMID:12885646
SEMIPARAMETRIC QUANTILE REGRESSION WITH HIGH-DIMENSIONAL COVARIATES
Zhu, Liping; Huang, Mian; Li, Runze
2012-01-01
This paper is concerned with quantile regression for a semiparametric regression model, in which both the conditional mean and conditional variance function of the response given the covariates admit a single-index structure. This semiparametric regression model enables us to reduce the dimension of the covariates and simultaneously retains the flexibility of nonparametric regression. Under mild conditions, we show that the simple linear quantile regression offers a consistent estimate of the index parameter vector. This is a surprising and interesting result because the single-index model is possibly misspecified under the linear quantile regression. With a root-n consistent estimate of the index vector, one may employ a local polynomial regression technique to estimate the conditional quantile function. This procedure is computationally efficient, which is very appealing in high-dimensional data analysis. We show that the resulting estimator of the quantile function performs asymptotically as efficiently as if the true value of the index vector were known. The methodologies are demonstrated through comprehensive simulation studies and an application to a real dataset. PMID:24501536
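The simple linear quantile regression at the heart of this result minimizes the check (pinball) loss; below is a minimal sketch using a general-purpose optimizer (dedicated solvers use linear programming), with synthetic single-index data whose true direction the estimator should recover:

```python
import numpy as np
from scipy.optimize import minimize

def pinball_loss(beta, X, y, tau):
    """Check (pinball) loss at quantile level tau."""
    r = y - X @ beta
    return np.mean(np.maximum(tau * r, (tau - 1.0) * r))

def fit_linear_quantile(X, y, tau=0.5):
    """Linear quantile regression by direct minimization of the check
    loss; a sketch only, not a production estimator."""
    res = minimize(pinball_loss, np.zeros(X.shape[1]), args=(X, y, tau),
                   method="Nelder-Mead",
                   options={"maxiter": 50_000, "maxfev": 50_000,
                            "xatol": 1e-9, "fatol": 1e-12})
    return res.x

# linear index with heavy-tailed, median-zero noise
rng = np.random.default_rng(5)
theta = np.array([0.6, 0.8, 0.0])
X = rng.normal(size=(400, 3))
y = X @ theta + 0.5 * rng.standard_t(3, size=400)
beta_hat = fit_linear_quantile(X, y, tau=0.5)   # close to theta
```

The paper's point is stronger than this linear example: even when the link function is nonlinear (so the linear quantile model is misspecified), the fitted direction remains a consistent estimate of the index vector up to scale.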
Temporal eye movement strategies during naturalistic viewing
Wang, Helena X.; Freeman, Jeremy; Merriam, Elisha P.; Hasson, Uri; Heeger, David J.
2011-01-01
The deployment of eye movements to complex spatiotemporal stimuli likely involves a variety of cognitive factors. However, eye movements to movies are surprisingly reliable both within and across observers. We exploited and manipulated that reliability to characterize observers’ temporal viewing strategies. Introducing cuts and scrambling the temporal order of the resulting clips systematically changed eye movement reliability. We developed a computational model that exhibited this behavior and provided an excellent fit to the measured eye movement reliability. The model assumed that observers searched for, found, and tracked a point-of-interest, and that this process reset when there was a cut. The model did not require that eye movements depend on temporal context in any other way, and it managed to describe eye movements consistently across different observers and two movie sequences. Thus, we found no evidence for the integration of information over long time scales (greater than a second). The results are consistent with the idea that observers employ a simple tracking strategy even while viewing complex, engaging naturalistic stimuli. PMID:22262911
Consistency among distance measurements: transparency, BAO scale and accelerated expansion
NASA Astrophysics Data System (ADS)
Avgoustidis, Anastasios; Verde, Licia; Jimenez, Raul
2009-06-01
We explore consistency among different distance measures, including Supernovae Type Ia data, measurements of the Hubble parameter, and determination of the Baryon acoustic oscillation scale. We present new constraints on the cosmic transparency combining H(z) data together with the latest Supernovae Type Ia data compilation. This combination, in the context of a flat ΛCDM model, improves current constraints by nearly an order of magnitude although the constraints presented here are parametric rather than non-parametric. We re-examine the recently reported tension between the Baryon acoustic oscillation scale and Supernovae data in light of possible deviations from transparency, concluding that the source of the discrepancy may most likely be found among systematic effects of the modelling of the low redshift data or a simple ~ 2-σ statistical fluke, rather than in exotic physics. Finally, we attempt to draw model-independent conclusions about the recent accelerated expansion, determining the acceleration redshift to be z_acc = 0.35 (+0.20/-0.13) (1σ).
Using a crowdsourced approach for monitoring water level in a remote Kenyan catchment
NASA Astrophysics Data System (ADS)
Weeser, Björn; Jacobs, Suzanne; Rufino, Mariana; Breuer, Lutz
2017-04-01
Hydrological models or effective water management strategies only succeed if they are based on reliable data. Decreasing costs of technical equipment lower the barrier to creating comprehensive monitoring networks and allow measurements at high spatial and temporal resolution. However, these networks depend on specialised equipment, supervision, and maintenance, producing high running expenses. This becomes particularly challenging for remote areas, and low-income countries often do not have the capacity to run such networks. Delegating simple measurements to citizens living close to relevant monitoring points may reduce costs and increase public awareness. Here we present our experiences of using a crowdsourced approach for monitoring water levels in remote catchments in Kenya. We established a low-cost system consisting of thirteen simple water level gauges and a Raspberry Pi based SMS-Server for data handling. Volunteers determine the water level and transmit their records using a simple text message. These messages are automatically processed and real-time feedback on the data quality is given. During the first year, more than 1200 valid, high-quality records were collected. In summary, the simple techniques for collecting, transmitting, and processing data created an open platform that has the potential for reaching volunteers without the need for special equipment. Even though the temporal resolution of measurements cannot be controlled and peak flows might be missed, this data can still be considered a valuable enhancement for developing management strategies or for hydrological modelling.
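Server-side handling of such text messages reduces to parsing plus a plausibility check. The message format and the level limits below are hypothetical, since the abstract does not specify them:

```python
import re

# Hypothetical message format "<station id> <level in cm>", e.g. "RG07 134";
# the real system's format and plausibility limits are assumptions here.
MSG = re.compile(r"^\s*([A-Za-z]{2}\d{2})\s+(\d{1,3}(?:\.\d)?)\s*$")

def parse_reading(text, min_cm=0.0, max_cm=500.0):
    """Parse one crowdsourced text message and apply a plausibility check;
    returns (station, level_cm) or None. A real server would also reply
    to the sender with real-time feedback on data quality."""
    m = MSG.match(text)
    if m is None:
        return None
    level = float(m.group(2))
    if not (min_cm <= level <= max_cm):
        return None
    return m.group(1), level
```

Rejecting malformed or implausible readings at ingestion time is what allows the automatic real-time feedback to the volunteer described in the abstract.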
Macroscopic acoustoelectric charge transport in graphene
NASA Astrophysics Data System (ADS)
Bandhu, L.; Lawton, L. M.; Nash, G. R.
2013-09-01
We demonstrate macroscopic acoustoelectric transport in graphene, transferred onto piezoelectric lithium niobate substrates, between electrodes up to 500 μm apart. Using double finger interdigital transducers we have characterised the acoustoelectric current as a function of both surface acoustic wave intensity and frequency. The results are consistent with a relatively simple classical relaxation model, in which the acoustoelectric current is proportional to both the surface acoustic wave intensity and the attenuation of the wave caused by the charge transport.
Effect of the Environment and Environmental Uncertainty on Ship Routes
2012-06-01
models consisting of basic differential equations simulating the fluid dynamic process and physics of the environment. Based on Newton’s second law of...Charles and Hazel Hall, for their unconditional love and support. They were there for me during this entire process , as they have been throughout...A simple transit across the Atlantic Ocean can easily become a rough voyage if the ship encounters high winds, which in turn will cause a high sea
Reconciling phase diffusion and Hartree-Fock approximation in condensate systems
NASA Astrophysics Data System (ADS)
Giorgi, Gian Luca; de Pasquale, Ferdinando
2012-01-01
Even in the weakly interacting regime, the physics of Bose-Einstein condensates is strongly affected by particle-particle interactions. These interactions determine quantum phase diffusion, which is known to be the main cause of loss of coherence. Studying a simple model of two interacting Bose systems, we show how to predict the appearance of phase diffusion beyond the Bogoliubov approximation, providing a self-consistent treatment in the framework of a generalized Hartree-Fock-Bogoliubov perturbation theory.
ISO deep far-infrared survey in the Lockman Hole
NASA Astrophysics Data System (ADS)
Kawara, K.; Sato, Y.; Matsuhara, H.; Taniguchi, Y.; Okuda, H.; Sofue, Y.; Matsumoto, T.; Wakamatsu, K.; Cowie, L. L.; Joseph, R. D.; Sanders, D. B.
1999-03-01
Two 44 arcmin x 44 arcmin fields in the Lockman Hole were mapped at 95 and 175 μm using ISOPHOT. A simple program used in combination with PIA corrects well for drift in the detector responsivity. The number density of 175 μm sources is 3 - 10 times higher than expected from the no-evolution model. The source counts at 95 and 175 μm are consistent with the cosmic infrared background.
Consistent cosmic bubble embeddings
NASA Astrophysics Data System (ADS)
Haque, S. Shajidul; Underwood, Bret
2017-05-01
The Raychaudhuri equation for null rays is a powerful tool for finding consistent embeddings of cosmological bubbles in a background spacetime in a way that is largely independent of the matter content. We find that spatially flat or positively curved thin wall bubbles surrounded by a cosmological background must have a Hubble expansion that is either contracting or expanding slower than the background, which is a more stringent constraint than those obtained by the usual Israel thin-wall formalism. Similarly, a cosmological bubble surrounded by Schwarzschild space, occasionally used as a simple "Swiss cheese" model of inhomogeneities in an expanding universe, must be contracting (for spatially flat and positively curved bubbles) and bounded in size by the apparent horizon.
Removal of phosphate from greenhouse wastewater using hydrated lime.
Dunets, C Siobhan; Zheng, Youbin
2014-01-01
Phosphate (P) contamination in nutrient-laden wastewater is currently a major topic of discussion in the North American greenhouse industry. Precipitation of P as calcium phosphate minerals using hydrated lime could provide a simple, inexpensive method for P recovery. A combination of batch experiments and chemical equilibrium modelling was used to confirm the viability of this P removal method and determine lime addition rates and pH requirements for greenhouse wastewater of varying nutrient compositions. The lime:P ratio (molar ratio of CaMg(OH)₄:PO₄-P) provided a consistent parameter for estimating lime addition requirements regardless of initial P concentration, with a ratio of 1.5 providing around 99% removal of dissolved P. Optimal P removal occurred when lime addition increased the pH from 8.6 to 9.0, suggesting that pH monitoring during the P removal process could provide a simple method for ensuring consistent adherence to P removal standards. A Visual MINTEQ model, validated using experimental data, provided a means of predicting lime addition and pH requirements as influenced by changes in other parameters of the lime-wastewater system (e.g. calcium concentration, temperature, and initial wastewater pH). Hydrated lime addition did not contribute to the removal of macronutrient elements such as nitrate and ammonium, but did decrease the concentration of some micronutrients. This study provides basic guidance for greenhouse operators to use hydrated lime for phosphate removal from greenhouse wastewater.
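The reported lime:P molar ratio of 1.5 translates directly into a dose estimate. The sketch below uses standard molar masses and deliberately ignores the rest of the water chemistry (alkalinity, calcium, pH), which the study handles with Visual MINTEQ:

```python
# Standard molar masses (g/mol); CaMg(OH)4 is the dolomitic hydrated lime
# named in the study.
MOLAR_MASS_P = 30.97
MOLAR_MASS_LIME = 132.42

def lime_dose_mg_per_l(p_mg_per_l, lime_to_p_molar_ratio=1.5):
    """Estimate the hydrated-lime dose (mg/L) for ~99% P removal using
    the lime:P molar ratio of 1.5 reported in the study. All other
    water chemistry is ignored in this back-of-the-envelope version."""
    mmol_p_per_l = p_mg_per_l / MOLAR_MASS_P
    return mmol_p_per_l * lime_to_p_molar_ratio * MOLAR_MASS_LIME

dose = lime_dose_mg_per_l(30.0)   # roughly 192 mg/L for 30 mg/L PO4-P
```

Because the study found the molar ratio to be consistent across initial P concentrations, the dose scales linearly with the measured P load.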
NASA Astrophysics Data System (ADS)
Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.
2018-06-01
Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from < 1 to 15 %. The parameter identifiability was not significantly improved in the more complex seasonal model; however, small differences in parameter values indicate that the annual model may have absorbed dry season dynamics. Parameter estimates were most constrained for scales and locations at which soil water dynamics are more sensitive to the fitted ecohydrological parameters of interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. 
Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.
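The inference idea, fitting an assumed analytical saturation pdf to observations, can be illustrated with a one-parameter grid posterior. The Beta pdf below is a stand-in for the stochastic water-balance solution, not the paper's model, and the sample size of 100 echoes the abstract's point about sparse records:

```python
import numpy as np
from scipy import stats

def posterior_over_param(samples, a_grid, b=2.0):
    """Grid posterior (flat prior) for one shape parameter of an assumed
    analytical saturation pdf; a Beta(a, b) density stands in for the
    stochastic water-balance solution used in the paper."""
    log_post = np.array([stats.beta.logpdf(samples, a, b).sum()
                         for a in a_grid])
    log_post -= log_post.max()            # avoid underflow
    post = np.exp(log_post)
    # normalize with a trapezoid rule
    post /= np.sum(0.5 * (post[1:] + post[:-1]) * np.diff(a_grid))
    return post

rng = np.random.default_rng(4)
obs = rng.beta(3.0, 2.0, size=100)        # ~100 daily saturation values
grid = np.linspace(0.5, 8.0, 400)
post = posterior_over_param(obs, grid)
a_map = grid[np.argmax(post)]             # posterior mode; true value 3.0
```

The width of the posterior directly quantifies parameter identifiability, the same diagnostic the paper reports via coefficients of variation of the posterior distributions.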
Born-Oppenheimer approximation for a singular system
NASA Astrophysics Data System (ADS)
Akbas, Haci; Turgut, O. Teoman
2018-01-01
We discuss a simple singular system in one dimension, two heavy particles interacting with a light particle via an attractive contact interaction and not interacting among themselves. It is natural to apply the Born-Oppenheimer approximation to this problem. We present a detailed discussion of this approach; the advantage of this simple model is that one can estimate the error terms self-consistently. Moreover, a Fock space approach to this problem is presented where an expansion can be proposed to get higher order corrections. A slight modification of the same problem in which the light particle is relativistic is discussed in a later section by neglecting pair creation processes. Here, the second quantized description is more challenging, but with some care, one can recover the first order expression exactly.
Synthesis of full Poincaré beams by means of uniaxial crystals
NASA Astrophysics Data System (ADS)
Piquero, G.; Monroy, L.; Santarsiero, M.; Alonzo, M.; de Sande, J. C. G.
2018-06-01
A simple optical system is proposed to generate full-Poincaré beams (FPBs), i.e. beams presenting all possible states of (total) polarization across their transverse section. The method consists in focusing a uniformly polarized laser beam onto a uniaxial crystal having its optic axis parallel to the propagation axis of the impinging beam. A simple approximated model is used to obtain the analytical expression of the beam polarization at the output of the crystal. The output beam is then proved to be a FPB. By changing the polarization state of the input field, full-Poincaré beams are still obtained, but presenting different distributions of the polarization state across the beam section. Experimental results are reported, showing an excellent agreement with the theoretical predictions.
Wagner, Peter J
2012-02-23
Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications for the general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma, and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution.
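The gamma-versus-lognormal comparison can be reproduced in miniature by maximum-likelihood fitting both distributions to a rate sample and comparing AIC. This is only a sketch of the model comparison, not the paper's compatibility-based inverse modelling:

```python
import numpy as np
from scipy import stats

def compare_rate_models(rates):
    """Fit gamma and lognormal distributions to a rate sample by maximum
    likelihood (location fixed at 0, so each model has 2 free parameters)
    and return the AIC of each."""
    aic = {}
    for name, dist in [("gamma", stats.gamma), ("lognormal", stats.lognorm)]:
        params = dist.fit(rates, floc=0.0)
        loglik = dist.logpdf(rates, *params).sum()
        aic[name] = 2 * 2 - 2 * loglik
    return aic

rng = np.random.default_rng(3)
rates = rng.lognormal(mean=-1.0, sigma=1.0, size=2000)
aic = compare_rate_models(rates)
```

On lognormally generated rates, the lognormal model wins by a wide AIC margin, mirroring the direction of the paper's result for character change and state derivation rates.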
Kinetics versus thermodynamics in materials modeling: The case of the di-vacancy in iron
NASA Astrophysics Data System (ADS)
Djurabekova, F.; Malerba, L.; Pasianot, R. C.; Olsson, P.; Nordlund, K.
2010-07-01
Monte Carlo models are widely used for the study of microstructural and microchemical evolution of materials under irradiation. However, they often link explicitly the relevant activation energies to the energy difference between local equilibrium states. We provide a simple example (di-vacancy migration in iron) in which a rigorous activation energy calculation, by means of both empirical interatomic potentials and density functional theory methods, clearly shows that such a link is not granted, revealing a migration mechanism that a thermodynamics-linked activation energy model cannot predict. Such a mechanism is, however, fully consistent with thermodynamics. This example emphasizes the importance of basing Monte Carlo methods on models where the activation energies are rigorously calculated, rather than deduced from widespread heuristic equations.
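In a kinetic Monte Carlo setting the distinction matters because rates depend exponentially on the activation energy at the saddle point, not on the end-state energy difference; the numbers below are purely illustrative:

```python
import math

K_B_EV = 8.617333262e-5   # Boltzmann constant, eV/K

def arrhenius_rate(attempt_freq_hz, activation_ev, temperature_k):
    """KMC transition rate, rate = nu * exp(-E_a / (k_B * T)). As the
    paper stresses, E_a must be computed rigorously (e.g. from the
    saddle point), not deduced from the end-state energy difference."""
    return attempt_freq_hz * math.exp(-activation_ev /
                                      (K_B_EV * temperature_k))

# illustrative only: two mechanisms with the same end states but
# different barriers differ by orders of magnitude in rate at 600 K
fast = arrhenius_rate(1e13, 0.6, 600.0)
slow = arrhenius_rate(1e13, 0.9, 600.0)
```

A heuristic that maps the same end-state energy difference to a single activation energy would assign these two mechanisms the same rate and so miss the migration pathway the rigorous calculation reveals.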
Measuring memory with the order of fractional derivative
NASA Astrophysics Data System (ADS)
Du, Maolin; Wang, Zaihua; Hu, Haiyan
2013-12-01
Fractional derivatives have a history as long as that of classical calculus, but they are much less popular than they should be. What is the physical meaning of a fractional derivative? This is still an open problem. In modeling various memory phenomena, we observe that a memory process usually consists of two stages: one is short, with permanent retention, and the other is governed by a simple fractional-derivative model. Using numerical least-squares fitting, we show that the fractional model fits the test data of memory phenomena in different disciplines remarkably well, not only in mechanics but also in biology and psychology. Based on this model, we find that a physical meaning of the fractional order is an index of memory.
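The memory interpretation comes from the way a fractional derivative weights the entire past of a function. A minimal sketch using the standard Grünwald-Letnikov discretization (a textbook numerical definition, not the paper's specific fitting procedure):

```python
import math

def gl_fractional_derivative(f, alpha, t, h=1e-3):
    """Grünwald-Letnikov approximation of the order-alpha derivative of
    f at time t with step h. The binomial weights w_k decay slowly for
    non-integer alpha, which is exactly a power-law memory of past values."""
    n = int(t / h)
    w, acc = 1.0, f(t)                     # w_0 = 1
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k       # recursive binomial weights
        acc += w * f(t - k * h)
    return acc / h**alpha

# Sanity check: for alpha = 1 the formula collapses to a backward
# difference, so D^1 of f(t) = t is ~1 (no memory beyond one step).
d1 = gl_fractional_derivative(lambda t: t, 1.0, 1.0)
# Known closed form: D^0.5 of t equals 2*sqrt(t/pi).
d_half = gl_fractional_derivative(lambda t: t, 0.5, 1.0)
print(d1, d_half, 2 * math.sqrt(1.0 / math.pi))
```

For integer orders the weights truncate after finitely many terms (no long memory); for fractional orders they never vanish, so the order interpolates between "memoryless" and "full memory", matching the index-of-memory reading in the abstract.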
PROM7: 1D modeler of solar filaments or prominences
NASA Astrophysics Data System (ADS)
Gouttebroze, P.
2018-05-01
PROM7 is an update of PROM4 (ascl:1306.004) and computes simple models of solar prominences and filaments using Partial Redistribution (PRD). The models consist of plane-parallel slabs standing vertically above the solar surface. Each model is defined by 5 parameters: temperature, density, geometrical thickness, microturbulent velocity and height above the solar surface. The code solves the equations of radiative transfer, statistical equilibrium, ionization and pressure equilibria, and computes electron and hydrogen level populations and hydrogen line profiles. Moreover, it treats the calcium atom, reduced to 3 ionization states (Ca I, Ca II, Ca III). The Ca II ion has 5 levels, which are used to compute the 2 resonance lines (H and K) and the infrared triplet (up to 8500 Å).
Extended inflation from higher dimensional theories
NASA Technical Reports Server (NTRS)
Holman, Richard; Kolb, Edward W.; Vadas, Sharon L.; Wang, Yun
1990-01-01
The possibility is considered that higher dimensional theories may, upon reduction to four dimensions, allow extended inflation to occur. Two separate models are analyzed. One is a very simple toy model consisting of higher dimensional gravity coupled to a scalar field whose potential allows for a first-order phase transition. The other is a more sophisticated model incorporating the effects of non-trivial field configurations (monopole, Casimir, and fermion bilinear condensate effects) that yield a non-trivial potential for the radius of the internal space. It was found that extended inflation does not occur in these models. It was also found that the bubble nucleation rate in these theories is time dependent, unlike the case in the original version of extended inflation.
Frequency, thermal and voltage supercapacitor characterization and modeling
NASA Astrophysics Data System (ADS)
Rafik, F.; Gualous, H.; Gallay, R.; Crausaz, A.; Berthon, A.
A simple electrical model has been established to describe supercapacitor behaviour as a function of frequency, voltage and temperature for hybrid vehicle applications. The electrical model consists of 14 RLC elements, which have been determined from experimental data using electrochemical impedance spectroscopy (EIS) applied on a commercial supercapacitor. The frequency analysis has been extended for the first time to the millihertz range to take into account the leakage current and the charge redistribution on the electrode. Simulation and experimental results of supercapacitor charge and discharge have been compared and analysed. A good correlation between the model and the EIS results has been demonstrated from 1 mHz to 1 kHz, from -20 to 60 °C and from 0 to 2.5 V.
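The frequency behaviour described above (capacitive at millihertz frequencies, essentially resistive at kilohertz) can be sketched with a small RC ladder. This is a generic multi-element stand-in, not the paper's 14-RLC-element model, and all component values are illustrative.

```python
import math

def ladder_impedance(f, r_series=0.5e-3, r_branch=2e-3, c_branch=100.0, n=5):
    """Complex impedance of a series resistance feeding an n-stage RC
    ladder, a crude transmission-line-style supercapacitor model."""
    w = 2 * math.pi * f
    z = 1.0 / (1j * w * c_branch)                 # deepest capacitor
    for _ in range(n - 1):
        z = r_branch + z                          # branch resistance
        z = 1.0 / (1j * w * c_branch + 1.0 / z)   # next shunt capacitor
    return r_series + r_branch + z

z_lo = ladder_impedance(1e-3)   # millihertz: capacitive, large |Z|
z_hi = ladder_impedance(1e3)    # kilohertz: essentially the series resistance
print(abs(z_lo), abs(z_hi))
```

Extending the measured range down to millihertz, as the abstract notes, is what exposes the slow charge-redistribution branches that are invisible at kilohertz.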
Vanuytrecht, Eline; Thorburn, Peter J
2017-05-01
Elevated atmospheric CO2 concentrations ([CO2]) cause direct changes in crop physiological processes (e.g. photosynthesis and stomatal conductance). To represent these CO2 responses, commonly used crop simulation models have been amended, using simple and semicomplex representations of the processes involved. Yet, there is no standard approach to and often poor documentation of these developments. This study used a bottom-up approach (starting with the APSIM framework as case study) to evaluate modelled responses in a consortium of commonly used crop models and illuminate whether variation in responses reflects true uncertainty in our understanding compared to arbitrary choices of model developers. Diversity in simulated CO2 responses and limited validation were common among models, both within the APSIM framework and more generally. Whereas production responses show some consistency up to moderately high [CO2] (around 700 ppm), transpiration and stomatal responses vary more widely in nature and magnitude (e.g. a decrease in stomatal conductance varying between 35% and 90% among models was found for [CO2] doubling to 700 ppm). Most notably, nitrogen responses were found to be included in few crop models despite being commonly observed and critical for the simulation of photosynthetic acclimation, crop nutritional quality and carbon allocation. We suggest harmonization and consideration of more mechanistic concepts in particular subroutines, for example, for the simulation of N dynamics, as a way to improve our predictive understanding of CO2 responses and capture secondary processes. Intercomparison studies could assist in this aim, provided that they go beyond simple output comparison and explicitly identify the representations and assumptions that are causal for intermodel differences. Additionally, validation and proper documentation of the representation of CO2 responses within models should be prioritized. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Kassem Jebai, Al; Malrait, François; Martin, Philippe; Rouchon, Pierre
2016-03-01
Sensorless control of permanent-magnet synchronous motors at low velocity remains a challenging task. A now well-established method consists of injecting a high-frequency signal and using the rotor saliency, both geometric and magnetic-saturation induced. This paper proposes a clear and original analysis based on second-order averaging of how to recover the position information from signal injection; this analysis blends well with a general model of magnetic saturation. It also proposes a simple parametric model of the saturated motor, based on an energy function which simply encompasses saturation and cross-saturation effects. Experimental results on a surface-mounted motor and an interior magnet motor illustrate the relevance of the approach.
Hypergeometric Equation in Modeling Relativistic Isotropic Sphere
NASA Astrophysics Data System (ADS)
Thirukkanesh, S.; Ragel, F. C.
2014-04-01
We study the Einstein system of equations in static spherically symmetric spacetimes. We obtain classes of exact solutions to the Einstein system by transforming the condition for pressure isotropy to a hypergeometric equation, choosing a rational form for one of the gravitational potentials. The solutions are given in a simple form, a desirable requisite for studying the behavior of relativistic compact objects in detail. A physical analysis indicates that our models satisfy all the fundamental requirements of a realistic star and match smoothly with the exterior Schwarzschild metric. The derived masses and densities are consistent with previously reported experimental and theoretical studies describing strange stars. The models satisfy the standard energy conditions required by normal matter.
Forces between permanent magnets: experiments and model
NASA Astrophysics Data System (ADS)
González, Manuel I.
2017-03-01
This work describes a very simple, low-cost experimental setup designed for measuring the force between permanent magnets. The experiment consists of placing one of the magnets on a balance, attaching the other magnet to a vertical height gauge, aligning carefully both magnets and measuring the load on the balance as a function of the gauge reading. A theoretical model is proposed to compute the force, assuming uniform magnetisation and based on laws and techniques accessible to undergraduate students. A comparison between the model and the experimental results is made, and good agreement is found at all distances investigated. In particular, it is also found that the force behaves as r⁻⁴ at large distances, as expected.
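The large-distance limit quoted above is the standard point-dipole result: for two coaxial magnetic dipoles the force is F = 3μ₀m₁m₂/(2πr⁴). A minimal numerical check (the moments used are illustrative, not those of the magnets in the experiment):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def coaxial_dipole_force(m1, m2, r):
    """Force between two coaxial point magnetic dipoles (moments in
    A*m^2, separation in m): F = 3*mu0*m1*m2 / (2*pi*r^4)."""
    return 3 * MU0 * m1 * m2 / (2 * math.pi * r**4)

F1 = coaxial_dipole_force(1.0, 1.0, 0.05)
F2 = coaxial_dipole_force(1.0, 1.0, 0.10)
print(F1 / F2)   # doubling the separation divides the force by 2^4 = 16
```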
An electronic implementation of amoeba anticipation
NASA Astrophysics Data System (ADS)
Ziegler, Martin; Ochs, Karlheinz; Hansen, Mirko; Kohlstedt, Hermann
2014-02-01
In nature, the capability of memorizing environmental changes and recalling past events can be observed in unicellular organisms like amoebas. Pershin and Di Ventra have shown that such learning behavior can be mimicked in a simple memristive circuit model consisting of an LC (inductance-capacitance) contour and a memristive device. Here, we implement this model experimentally by using an Ag/TiO2-x/Al memristive device. A theoretical analysis of the circuit is used to gain insight into the functionality of this model and to give advice for the circuit implementation. In this respect, the transfer function, resonant frequency, and damping behavior for a varying resistance of the memristive device are discussed in detail.
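The role of the memristive device in the LC contour can be sketched with the textbook series-RLC formulas: the device's instantaneous resistance sets the damping ratio, so as the memristance drifts with stimulation history the circuit moves between underdamped (oscillating, "remembering") and overdamped regimes. Component values below are illustrative, not those of the Ag/TiO2-x/Al device.

```python
import math

def rlc_characteristics(r, l=1.0e-3, c=1.0e-6):
    """Resonant frequency (Hz) and damping ratio of a series RLC
    contour, with r standing in for the memristive device's
    instantaneous resistance."""
    f0 = 1.0 / (2 * math.pi * math.sqrt(l * c))
    zeta = (r / 2.0) * math.sqrt(c / l)
    return f0, zeta

f0_lo, zeta_lo = rlc_characteristics(r=10.0)    # low-resistance state
f0_hi, zeta_hi = rlc_characteristics(r=100.0)   # high-resistance state
print(f0_lo, zeta_lo, zeta_hi)
```

Note the resonant frequency depends only on L and C, so the memory state modulates how strongly the contour rings, not where it rings.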
Self-consistent asset pricing models
NASA Astrophysics Data System (ADS)
Malevergne, Y.; Sornette, D.
2007-08-01
We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alphas and betas of the factor model are unobservable. Self-consistency leads to renormalized betas with zero effective alphas, which are observable with standard OLS regressions. When the conditions derived from internal consistency are not met, the model is necessarily incomplete, which means that some sources of risk cannot be replicated (or hedged) by a portfolio of stocks traded on the market, even for infinite economies. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value αi at the origin between an asset i's return and the proxy's return. Self-consistency also introduces “orthogonality” and “normality” conditions linking the betas, alphas (as well as the residuals) and the weights of the proxy portfolio. Two diagnostics based on these orthogonality and normality conditions are implemented on a basket of 323 assets which have been components of the S&P500 in the period from January 1990 to February 2005. These two diagnostics show interesting departures from dynamical self-consistency starting about 2 years before the end of the Internet bubble. Assuming that the CAPM holds with the self-consistency condition, the OLS method automatically obeys the resulting orthogonality and normality conditions and therefore provides a simple way to self-consistently assess the parameters of the model by using proxy portfolios made only of the assets which are used in the CAPM regressions. 
Finally, the factor decomposition with the self-consistency condition derives a risk-factor decomposition in the multi-factor case which is identical to the principal component analysis (PCA), thus providing a direct link between model-driven and data-driven constructions of risk factors. This correspondence shows that PCA will therefore suffer from the same limitations as the CAPM and its multi-factor generalization, namely lack of out-of-sample explanatory power and predictability. In the multi-period context, the self-consistency conditions force the betas to be time-dependent with specific constraints.
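The claim that self-consistency yields renormalized betas with zero effective alphas, observable with standard OLS regressions, can be illustrated with a toy regression of an asset's returns on a proxy-portfolio return. The return-generating process below is a simulated assumption, not the paper's S&P500 data.

```python
import numpy as np

rng = np.random.default_rng(1)
T, true_beta = 2000, 1.3
proxy = rng.normal(0.0, 0.01, T)                       # proxy-portfolio returns
asset = true_beta * proxy + rng.normal(0.0, 0.005, T)  # no intrinsic alpha

# Standard OLS regression of asset returns on the proxy, with intercept.
X = np.column_stack([np.ones(T), proxy])
alpha_hat, beta_hat = np.linalg.lstsq(X, asset, rcond=None)[0]
print(alpha_hat, beta_hat)   # intercept ~0, slope ~true beta
```

In the paper's setting the interesting case is the reverse: when the proxy differs from the true market portfolio, the fitted intercepts depart systematically from zero, which is what the orthogonality/normality diagnostics detect.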
A consistent modelling methodology for secondary settling tanks in wastewater treatment.
Bürger, Raimund; Diehl, Stefan; Nopens, Ingmar
2011-03-01
The aim of this contribution is partly to build consensus on a consistent modelling methodology (CMM) of complex real processes in wastewater treatment by combining classical concepts with results from applied mathematics, and partly to apply it to the clarification-thickening process in the secondary settling tank. In the CMM, the real process should be approximated by a mathematical model (process model; ordinary or partial differential equation (ODE or PDE)), which in turn is approximated by a simulation model (numerical method) implemented on a computer. These steps have often not been carried out in a correct way. The secondary settling tank was chosen as a case since this is one of the most complex processes in a wastewater treatment plant and simulation models developed decades ago have no guarantee of satisfying fundamental mathematical and physical properties. Nevertheless, such methods are still used in commercial tools to date. This particularly becomes of interest as the state-of-the-art practice is moving towards plant-wide modelling. Then all submodels interact and errors propagate through the model and severely hamper any calibration effort and, hence, the predictive purpose of the model. The CMM is described by applying it first to a simple conversion process in the biological reactor yielding an ODE solver, and then to the solid-liquid separation in the secondary settling tank, yielding a PDE solver. Time has come to incorporate established mathematical techniques into environmental engineering, and wastewater treatment modelling in particular, and to use proven reliable and consistent simulation models. Copyright © 2011 Elsevier Ltd. All rights reserved.
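The PDE step of the methodology can be sketched for batch sedimentation: a scalar conservation law u_t + f(u)_x = 0 with a Kynch-style flux, discretized by a first-order Godunov scheme. The flux function and grid below are textbook illustrations, not the paper's specific model; the point is that a consistent, CFL-respecting scheme conserves mass and keeps concentrations physical by construction.

```python
import numpy as np

V0, UC = 1.0, 1.0 / 3.0          # settling-speed scale; flux maximum location

def flux(u):
    """Kynch-style batch-settling flux f(u) = v0*u*(1-u)^2."""
    return V0 * u * (1.0 - u) ** 2

def godunov(a, b):
    """Exact Godunov numerical flux for the unimodal flux above."""
    if a <= b:
        return min(flux(a), flux(b))
    if b <= UC <= a:
        return flux(UC)           # interior maximum lies inside [b, a]
    return max(flux(a), flux(b))

nx = 100
dx = 1.0 / nx
u = np.full(nx, 0.25)            # uniform initial solids concentration
dt = 0.5 * dx / V0               # CFL-satisfying time step (max |f'| <= v0)
for _ in range(400):             # advance to t = 2
    F = np.zeros(nx + 1)         # zero-flux walls: closed settling vessel
    for i in range(1, nx):
        F[i] = godunov(u[i - 1], u[i])
    u -= (dt / dx) * (F[1:] - F[:-1])

print(u.sum() * dx)              # total solids mass is conserved
```

Older ad hoc settling-tank discretizations can violate exactly these properties (conservation, boundedness), which is the abstract's argument for building the simulation model on proven numerical methods.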
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agrawal, Prateek; Fox, Patrick J.; Harnik, Roni
2015-05-01
Simple models of weakly interacting massive particles (WIMPs) predict dark matter annihilations into pairs of electroweak gauge bosons, Higgses or tops, which through their subsequent cascade decays produce a spectrum of gamma rays. Intriguingly, an excess in gamma rays coming from near the Galactic center has been consistently observed in Fermi data. A recent analysis by the Fermi collaboration confirms these earlier results. Taking into account the systematic uncertainties in the modelling of the gamma ray backgrounds, we show for the first time that this excess can be well fit by these final states. In particular, for annihilations to (WW, ZZ, hh, t t-bar), dark matter with mass between threshold and approximately (165, 190, 280, 310) GeV gives an acceptable fit. The fit range for b b-bar is also enlarged to 35 GeV ≲ m_χ ≲ 165 GeV. These are to be compared to previous fits that concluded only much lighter dark matter annihilating into b, τ, and light quark final states could describe the excess. We demonstrate that simple, well-motivated models of WIMP dark matter, including a thermal-relic neutralino of the MSSM, Higgs portal models, as well as other simplified models, can explain the excess.
The SAMPL4 host-guest blind prediction challenge: an overview.
Muddana, Hari S; Fenley, Andrew T; Mobley, David L; Gilson, Michael K
2014-04-01
Prospective validation of methods for computing binding affinities can help assess their predictive power and thus set reasonable expectations for their performance in drug design applications. Supramolecular host-guest systems are excellent model systems for testing such affinity prediction methods, because their small size and limited conformational flexibility, relative to proteins, allow higher throughput and better numerical convergence. The SAMPL4 prediction challenge therefore included a series of host-guest systems, based on two hosts, cucurbit[7]uril and octa-acid. Binding affinities in aqueous solution were measured experimentally for a total of 23 guest molecules. Participants submitted 35 sets of computational predictions for these host-guest systems, based on methods ranging from simple docking, to extensive free energy simulations, to quantum mechanical calculations. Over half of the predictions provided better correlations with experiment than two simple null models, but most methods underperformed the null models in terms of root mean squared error and linear regression slope. Interestingly, the overall performance across all SAMPL4 submissions was similar to that for the prior SAMPL3 host-guest challenge, although the experimentalists took steps to simplify the current challenge. While some methods performed fairly consistently across both hosts, no single approach emerged as a consistent top performer, and the nonsystematic nature of the various submissions made it impossible to draw definitive conclusions regarding the best choices of energy models or sampling algorithms. Salt effects emerged as an issue in the calculation of absolute binding affinities of cucurbit[7]uril-guest systems, but were not expected to affect the relative affinities significantly.
Useful directions for future rounds of the challenge might involve encouraging participants to carry out some calculations that replicate each others' studies, and to systematically explore parameter options.
A simple, efficient polarizable coarse-grained water model for molecular dynamics simulations.
Riniker, Sereina; van Gunsteren, Wilfred F
2011-02-28
The development of coarse-grained (CG) models that correctly represent the important features of compounds is essential to overcome the limitations in time scale and system size currently encountered in atomistic molecular dynamics simulations. Most approaches reported in the literature model one or several molecules into a single uncharged CG bead. For water, this implicit treatment of the electrostatic interactions, however, fails to mimic important properties, e.g., the dielectric screening. Therefore, a coarse-grained model for water is proposed which treats the electrostatic interactions between clusters of water molecules explicitly. Five water molecules are embedded in a spherical CG bead consisting of two oppositely charged particles which represent a dipole. The bond connecting the two particles in a bead is unconstrained, which makes the model polarizable. Experimental and all-atom simulated data of liquid water at room temperature are used for parametrization of the model. The experimental density and the relative static dielectric permittivity were chosen as primary target properties. The model properties are compared with those obtained from experiment, from clusters of simple-point-charge water molecules of appropriate size in the liquid phase, and for other CG water models if available. The comparison shows that not all atomistic properties can be reproduced by a CG model, so properties of key importance have to be selected when coarse graining is applied. Yet, the CG model reproduces the key characteristics of liquid water while being computationally 1-2 orders of magnitude more efficient than standard fine-grained atomistic water models.
A semiparametric spatio-temporal model for solar irradiance data
Patrick, Joshua D.; Harvill, Jane L.; Hansen, Clifford W.
2016-03-01
Here, we evaluate semiparametric spatio-temporal models for global horizontal irradiance at high spatial and temporal resolution. These models represent the spatial domain as a lattice and are capable of predicting irradiance at lattice points, given data measured at other lattice points. Using data from a 1.2 MW PV plant located in Lanai, Hawaii, we show that a semiparametric model can be more accurate than simple interpolation between sensor locations. We investigate spatio-temporal models with separable and nonseparable covariance structures and find no evidence to support assuming a separable covariance structure. These results indicate a promising approach for modeling irradiance atmore » high spatial resolution consistent with available ground-based measurements. Moreover, this kind of modeling may find application in design, valuation, and operation of fleets of utility-scale photovoltaic power systems.« less
FUNGIBILITY AND CONSUMER CHOICE: EVIDENCE FROM COMMODITY PRICE SHOCKS.
Hastings, Justine S; Shapiro, Jesse M
2013-11-01
We formulate a test of the fungibility of money based on parallel shifts in the prices of different quality grades of a commodity. We embed the test in a discrete-choice model of product quality choice and estimate the model using panel microdata on gasoline purchases. We find that when gasoline prices rise consumers substitute to lower octane gasoline, to an extent that cannot be explained by income effects. Across a wide range of specifications, we consistently reject the null hypothesis that households treat "gas money" as fungible with other income. We compare the empirical fit of three psychological models of decision-making. A simple model of category budgeting fits the data well, with models of loss aversion and salience both capturing important features of the time series.
Introducing Multisensor Satellite Radiance-Based Evaluation for Regional Earth System Modeling
NASA Technical Reports Server (NTRS)
Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.;
2014-01-01
Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using satellite direct measurements with instrumental simulators should be addressed especially for modeling community members lacking a solid background of radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for aerosol-cloud profiling simultaneously. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including the infrared-microwave-based cloud types and lidar/radar-based profile classifications.
NASA Astrophysics Data System (ADS)
Farag, Mohammed; Sweity, Haitham; Fleckenstein, Matthias; Habibi, Saeid
2017-08-01
Real-time prediction of the battery's core temperature and terminal voltage is very crucial for an accurate battery management system. In this paper, a combined electrochemical, heat generation, and thermal model is developed for large prismatic cells. The proposed model consists of three sub-models, an electrochemical model, heat generation model, and thermal model which are coupled together in an iterative fashion through physicochemical temperature dependent parameters. The proposed parameterization cycles identify the sub-models' parameters separately by exciting the battery under isothermal and non-isothermal operating conditions. The proposed combined model structure shows accurate terminal voltage and core temperature prediction at various operating conditions while maintaining a simple mathematical structure, making it ideal for real-time BMS applications. Finally, the model is validated against both isothermal and non-isothermal drive cycles, covering a broad range of C-rates, and temperature ranges [-25 °C to 45 °C].
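The coupling between heat generation and core temperature can be sketched with a single lumped thermal node driven by ohmic heating, a much-reduced stand-in for the paper's three coupled sub-models. All parameter values below are illustrative, not identified from a real prismatic cell.

```python
# Lumped model: m*cp * dT/dt = I^2 * R_int - h*A * (T - T_amb)
def simulate_core_temp(current=50.0, r_int=2e-3, t_amb=25.0,
                       h_area=0.5, m_cp=800.0, dt=1.0, steps=3600):
    """Forward-Euler integration of the lumped core-temperature ODE.
    current in A, r_int in ohm, h_area in W/K, m_cp in J/K, dt in s."""
    temp = t_amb
    for _ in range(steps):
        q_gen = current**2 * r_int          # ohmic heat generation, W
        q_out = h_area * (temp - t_amb)     # convective loss, W
        temp += dt * (q_gen - q_out) / m_cp
    return temp

t_final = simulate_core_temp()              # core temp after 1 h at 50 A
t_steady = 25.0 + 50.0**2 * 2e-3 / 0.5      # analytic steady state: 35 °C
print(t_final, t_steady)
```

In the full model the physicochemical parameters (including r_int here) would themselves be temperature dependent, which is what forces the iterative coupling between the electrochemical and thermal sub-models.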
Dynamics in a one-dimensional ferrogel model: relaxation, pairing, shock-wave propagation.
Goh, Segun; Menzel, Andreas M; Löwen, Hartmut
2018-05-23
Ferrogels are smart soft materials, consisting of a polymeric network and embedded magnetic particles. Novel phenomena, such as the variation of the overall mechanical properties by external magnetic fields, emerge consequently. However, the dynamic behavior of ferrogels remains largely unveiled. In this paper, we consider a one-dimensional chain consisting of magnetic dipoles and elastic springs between them as a simple model for ferrogels. The model is evaluated by corresponding simulations. To probe the dynamics theoretically, we investigate a continuum limit of the energy governing the system and the corresponding equation of motion. We provide general classification scenarios for the dynamics, elucidating the touching/detachment dynamics of the magnetic particles along the chain. In particular, it is verified in certain cases that the long-time relaxation corresponds to solutions of shock-wave propagation, while formations of particle pairs underlie the initial stage of the dynamics. We expect that these results will provide insight into the understanding of the dynamics of more realistic models with randomness in parameters and time-dependent magnetic fields.
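The static side of the dipole-spring chain can be sketched with a one-spring toy energy: harmonic elasticity plus nearest-neighbour head-to-tail dipole attraction, U(a) = k/2·(a − a₀)² − A/a³. All numbers below are illustrative, not fitted to a real ferrogel, and the sampled range deliberately excludes the unphysical collapse at contact.

```python
K, A0, A_DIP = 50.0, 1.0, 0.1   # spring constant, rest length, dipole strength

def energy(a):
    """Per-spring energy: harmonic stretch minus 1/a^3 dipole attraction."""
    return 0.5 * K * (a - A0) ** 2 - A_DIP / a**3

# Brute-force minimisation by fine sampling over physical spacings.
spacings = [0.5 + 1e-4 * i for i in range(10000)]
a_eq = min(spacings, key=energy)
print(a_eq)   # equilibrium spacing: magnetic attraction shortens the chain
```

Sharpening the dipole strength relative to the spring constant drives a_eq toward contact, which is the touching/detachment competition that the paper's continuum analysis classifies dynamically.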
A Theory of Age-Dependent Mutation and Senescence
Moorad, Jacob A.; Promislow, Daniel E. L.
2008-01-01
Laboratory experiments show us that the deleterious character of accumulated novel age-specific mutations is reduced and made less variable with increased age. While theories of aging predict that the frequency of deleterious mutations at mutation–selection equilibrium will increase with the mutation's age of effect, they do not account for these age-related changes in the distribution of de novo mutational effects. Furthermore, no model predicts why this dependence of mutational effects upon age exists. Because the nature of mutational distributions plays a critical role in shaping patterns of senescence, we need to develop aging theory that explains and incorporates these effects. Here we propose a model that explains the age dependency of mutational effects by extending Fisher's geometrical model of adaptation to include a temporal dimension. Using a combination of simple analytical arguments and simulations, we show that our model predicts age-specific mutational distributions that are consistent with observations from mutation-accumulation experiments. Simulations show us that these age-specific mutational effects may generate patterns of senescence at mutation–selection equilibrium that are consistent with observed demographic patterns that are otherwise difficult to explain. PMID:18660535
Mechanical behavior in living cells consistent with the tensegrity model
NASA Technical Reports Server (NTRS)
Wang, N.; Naruse, K.; Stamenovic, D.; Fredberg, J. J.; Mijailovich, S. M.; Tolic-Norrelykke, I. M.; Polte, T.; Mannix, R.; Ingber, D. E.
2001-01-01
Alternative models of cell mechanics depict the living cell as a simple mechanical continuum, porous filament gel, tensed cortical membrane, or tensegrity network that maintains a stabilizing prestress through incorporation of discrete structural elements that bear compression. Real-time microscopic analysis of cells containing GFP-labeled microtubules and associated mitochondria revealed that living cells behave like discrete structures composed of an interconnected network of actin microfilaments and microtubules when mechanical stresses are applied to cell surface integrin receptors. Quantitation of cell tractional forces and cellular prestress by using traction force microscopy confirmed that microtubules bear compression and are responsible for a significant portion of the cytoskeletal prestress that determines cell shape stability under conditions in which myosin light chain phosphorylation and intracellular calcium remained unchanged. Quantitative measurements of both static and dynamic mechanical behaviors in cells also were consistent with specific a priori predictions of the tensegrity model. These findings suggest that tensegrity represents a unified model of cell mechanics that may help to explain how mechanical behaviors emerge through collective interactions among different cytoskeletal filaments and extracellular adhesions in living cells.
Revisiting a model of ontogenetic growth: estimating model parameters from theory and data.
Moses, Melanie E; Hou, Chen; Woodruff, William H; West, Geoffrey B; Nekola, Jeffery C; Zuo, Wenyun; Brown, James H
2008-05-01
The ontogenetic growth model (OGM) of West et al. provides a general description of how metabolic energy is allocated between production of new biomass and maintenance of existing biomass during ontogeny. Here, we reexamine the OGM, make some minor modifications and corrections, and further evaluate its ability to account for empirical variation in rates of metabolism and biomass in vertebrates, both during ontogeny and across species of varying adult body size. We show that the updated version of the model is internally consistent and is consistent with other predictions of metabolic scaling theory and empirical data. The OGM predicts not only the near universal sigmoidal form of growth curves but also the M^(1/4) scaling of the characteristic times of ontogenetic stages, in addition to the curvilinear decline in growth efficiency described by Brody. Additionally, the OGM relates the M^(3/4) scaling across adults of different species to the scaling of metabolic rate across ontogeny within species. In providing a simple, quantitative description of how energy is allocated to growth, the OGM calls attention to unexplained variation, unanswered questions, and opportunities for future research.
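The OGM's growth equation, dm/dt = a·m^(3/4)·(1 − (m/M)^(1/4)), has a well-known closed-form solution in which the "unripeness" variable 1 − (m/M)^(1/4) decays exponentially at rate a/(4·M^(1/4)); this is the source of both the sigmoidal curve and the M^(1/4) characteristic-time scaling. A minimal sketch (parameter values are illustrative, units arbitrary):

```python
import math

def ogm_mass(t, big_m=10.0, m0=0.05, a=1.0):
    """Closed-form West et al. ontogenetic growth trajectory:
    m(t) = M * (1 - r0 * exp(-a*t / (4*M^(1/4))))^4,
    with r0 = 1 - (m0/M)^(1/4)."""
    r0 = 1.0 - (m0 / big_m) ** 0.25
    r = r0 * math.exp(-a * t / (4.0 * big_m ** 0.25))
    return big_m * (1.0 - r) ** 4

# Sigmoidal trajectory: starts at m0, saturates at asymptotic mass M.
masses = [ogm_mass(t) for t in (0.0, 5.0, 50.0)]
print(masses)
```

Because the decay rate scales as M^(-1/4), larger asymptotic mass stretches every ontogenetic stage by the same M^(1/4) factor, consistent with the scaling claim in the abstract.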
Sense and simplicity in HADDOCK scoring: Lessons from CASP‐CAPRI round 1
Vangone, A.; Rodrigues, J. P. G. L. M.; Xue, L. C.; van Zundert, G. C. P.; Geng, C.; Kurkcuoglu, Z.; Nellen, M.; Narasimhan, S.; Karaca, E.; van Dijk, M.; Melquiond, A. S. J.; Visscher, K. M.; Trellet, M.; Kastritis, P. L.
2016-01-01
ABSTRACT Our information‐driven docking approach HADDOCK has been a consistent top predictor and scorer since the start of its participation in the CAPRI community‐wide experiment. This sustained performance is due, in part, to its ability to integrate experimental data and/or bioinformatics information into the modelling process, and also to the overall robustness of the scoring function used to assess and rank the predictions. In the CASP‐CAPRI Round 1 scoring experiment we successfully selected acceptable/medium quality models for 18/14 of the 25 targets – a top‐ranking performance among all scorers. Considering that acceptable models were generated by the community for only 20 targets, our effective success rate reaches as high as 90% (18/20). This was achieved using the standard HADDOCK scoring function, which, thirteen years after its original publication, still consists of a simple linear combination of intermolecular van der Waals and Coulomb electrostatics energies and an empirically derived desolvation energy term. Despite its simplicity, this scoring function makes sense from a physico‐chemical perspective, encoding key aspects of biomolecular recognition. In addition to its success in the scoring experiment, the HADDOCK server took first place in the server prediction category, with 16 successful predictions. Much like our scoring protocol, because of the limited time per target, the predictions relied mainly on either an ab initio center‐of‐mass and symmetry restrained protocol, or on a template‐based approach whenever applicable. These results underline the success of our simple but sensible prediction and scoring scheme. Proteins 2017; 85:417–423. © 2016 Wiley Periodicals, Inc. PMID:27802573
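The scoring function described above is just a weighted sum of energy terms, which makes its structure easy to sketch. The weights below are placeholders, not HADDOCK's published values, and the pose energies are hypothetical.

```python
def haddock_like_score(e_vdw, e_elec, e_desolv,
                       w_vdw=1.0, w_elec=0.2, w_desolv=1.0):
    """Linear combination of intermolecular van der Waals, Coulomb
    electrostatics, and desolvation energies, in the spirit of the
    HADDOCK scoring function (weights here are illustrative)."""
    return w_vdw * e_vdw + w_elec * e_elec + w_desolv * e_desolv

# Ranking two hypothetical docked poses (energies in arbitrary units):
good = haddock_like_score(-45.0, -200.0, -8.0)
bad = haddock_like_score(-10.0, -50.0, 3.0)
print(good < bad)   # more negative score ranks first
```

The abstract's point is that, with sensible physico-chemical terms, even this simple linear form discriminates near-native from incorrect poses well enough to top a community-wide scoring experiment.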
Game-Theoretic Models of Information Overload in Social Networks
NASA Astrophysics Data System (ADS)
Borgs, Christian; Chayes, Jennifer; Karrer, Brian; Meeder, Brendan; Ravi, R.; Reagans, Ray; Sayedi, Amin
We study the effect of information overload on user engagement in an asymmetric social network like Twitter. We introduce simple game-theoretic models that capture rate competition between celebrities producing updates in such networks where users non-strategically choose a subset of celebrities to follow based on the utility derived from high quality updates as well as disutility derived from having to wade through too many updates. Our two variants model the two behaviors of users dropping some potential connections (followership model) or leaving the network altogether (engagement model). We show that under a simple formulation of celebrity rate competition, there is no pure strategy Nash equilibrium under the first model. We then identify special cases in both models when pure rate equilibria exist for the celebrities: For the followership model, we show existence of a pure rate equilibrium when there is a global ranking of the celebrities in terms of the quality of their updates to users. This result also generalizes to the case when there is a partial order consistent with all the linear orders of the celebrities based on their qualities to the users. Furthermore, these equilibria can be computed in polynomial time. For the engagement model, pure rate equilibria exist when all users are interested in the same number of celebrities, or when they are interested in at most two. Finally, we also give a finite though inefficient procedure to determine if pure equilibria exist in the general case of the followership model.
NASA Technical Reports Server (NTRS)
Flowers, George T.
1994-01-01
Substantial progress has been made toward the goals of this research effort in the past six months. A simplified rotor model with a flexible shaft and backup bearings has been developed. The model is based upon the work of Ishii and Kirk. Parameter studies of the behavior of this model are currently being conducted. A simple rotor model which includes a flexible disk and bearings with clearance has been developed and the dynamics of the model investigated. The study consists of simulation work coupled with experimental verification. The work is documented in the attached paper. A rotor model based upon the T-501 engine has been developed which includes backup bearing effects. The dynamics of this model are currently being studied with the objective of verifying the conclusions obtained from the simpler models. Parallel simulation runs are being conducted using an ANSYS based finite element model of the T-501.
A surrogate model for thermal characteristics of stratospheric airship
NASA Astrophysics Data System (ADS)
Zhao, Da; Liu, Dongxu; Zhu, Ming
2018-06-01
A simple and accurate surrogate model is needed to reduce the complexity of analyzing the thermal characteristics of a stratospheric airship. In this paper, a surrogate model based on Least Squares Support Vector Regression (LSSVR) is proposed, and the Gravitational Search Algorithm (GSA) is used to optimize its hyperparameters. A novel framework consisting of a preprocessing classifier and two regression models is designed to train the surrogate model. Temperature datasets for the airship envelope and the internal gas are obtained from a three-dimensional transient thermal model. Using these datasets, two-factor and multi-factor surrogate models are trained and several comparison simulations are conducted. The results show that the surrogate models based on LSSVR-GSA have good fitting and generalization abilities, and that the preprocessing classification strategy proposed in this paper plays a significant role in improving the accuracy of the surrogate model.
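The LSSVR regression underlying such a surrogate model reduces to solving a single linear (KKT) system in the dual variables. A self-contained sketch with an RBF kernel and illustrative hyperparameter values (the GSA tuning step is omitted):

```python
import numpy as np

def rbf(X1, X2, sigma):
    """Gaussian (RBF) kernel matrix between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvr_fit(X, y, gamma, sigma):
    """Solve the LSSVR KKT system [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvr_predict(Xtr, b, alpha, Xte, sigma):
    return rbf(Xte, Xtr, sigma) @ alpha + b

X = np.linspace(0, 3, 30)[:, None]      # toy 1-D "thermal" dataset
y = np.sin(2 * X[:, 0])
b, a = lssvr_fit(X, y, gamma=100.0, sigma=0.5)
pred = lssvr_predict(X, b, a, X, sigma=0.5)
```

Here gamma plays the role of the regularization constant and sigma the kernel width; these are exactly the hyperparameters the paper hands to the GSA optimizer.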
Second-order closure models for supersonic turbulent flows
NASA Technical Reports Server (NTRS)
Speziale, Charles G.; Sarkar, Sutanu
1991-01-01
Recent work by the authors on the development of a second-order closure model for high-speed compressible flows is reviewed. This turbulence closure is based on the solution of modeled transport equations for the Favre-averaged Reynolds stress tensor and the solenoidal part of the turbulent dissipation rate. A new model for the compressible dissipation is used along with traditional gradient transport models for the Reynolds heat flux and mass flux terms. Consistent with simple asymptotic analyses, the deviatoric part of the remaining higher-order correlations in the Reynolds stress transport equation are modeled by a variable density extension of the newest incompressible models. The resulting second-order closure model is tested in a variety of compressible turbulent flows which include the decay of isotropic turbulence, homogeneous shear flow, the supersonic mixing layer, and the supersonic flat-plate turbulent boundary layer. Comparisons between the model predictions and the results of physical and numerical experiments are quite encouraging.
NASA Astrophysics Data System (ADS)
Bakker, Alexander; Louchard, Domitille; Keller, Klaus
2016-04-01
Sea-level rise threatens many coastal areas around the world. The integrated assessment of potential adaptation and mitigation strategies requires a sound understanding of the upper tails and the major drivers of the uncertainties. Global warming causes sea-level to rise, primarily due to thermal expansion of the oceans and mass loss of the major ice sheets, smaller ice caps and glaciers. These components show distinctly different responses to temperature changes with respect to response time, threshold behavior, and local fingerprints. Projections of these different components are deeply uncertain. Projected uncertainty ranges strongly depend on (necessary) pragmatic choices and assumptions; e.g. on the applied climate scenarios, which processes to include and how to parameterize them, and on error structure of the observations. Competing assumptions are very hard to objectively weigh. Hence, uncertainties of sea-level response are hard to grasp in a single distribution function. The deep uncertainty can be better understood by making clear the key assumptions. Here we demonstrate this approach using a relatively simple model framework. We present a mechanistically motivated, but simple model framework that is intended to efficiently explore the deeply uncertain sea-level response to anthropogenic climate change. The model consists of 'building blocks' that represent the major components of sea-level response and its uncertainties, including threshold behavior. The framework's simplicity enables the simulation of large ensembles allowing for an efficient exploration of parameter uncertainty and for the simulation of multiple combined adaptation and mitigation strategies. The model framework can skilfully reproduce earlier major sea level assessments, but due to the modular setup it can also be easily utilized to explore high-end scenarios and the effect of competing assumptions and parameterizations.
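The "building block" idea can be sketched with a common semi-empirical parameterization in which each component's sea-level contribution follows dS/dt = α (T − T₀); the parameter values below are illustrative, not calibrated:

```python
# One semi-empirical sea-level "building block": the component's rate of
# rise responds linearly to warming above a baseline temperature T0.
# alpha (mm/yr per K) and t0 are illustrative, not calibrated values.
def slr_component(temps, alpha, t0, dt=1.0):
    s, out = 0.0, []
    for T in temps:                    # simple forward integration
        s += dt * alpha * (T - t0)
        out.append(s)
    return out

# Linear 1 K warming over a century driving a thermal-expansion-like block
temps = [i / 100.0 for i in range(100)]            # K above baseline
thermal = slr_component(temps, alpha=2.0, t0=0.0)  # cumulative rise, mm
```

A full framework of this kind sums several such blocks (thermal expansion, glaciers, ice sheets), each with its own response time and, for thresholds, a switch on T; the modularity is what makes large ensembles cheap.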
Frank, Steven A.
2010-01-01
We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions, and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased or random stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background by which we can understand why these neutral generative models are so successful. I show where the classic patterns come from, such as the Poisson pattern, the normal or Gaussian pattern, and many others. Each classic pattern was often discovered by a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present a simple and consistent informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern. PMID:19538344
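The claim that any aggregation preserving only mean and variance attracts to the Gaussian pattern is easy to illustrate numerically: sums of many small, bounded random processes land on the normal shape, including its 68% one-sigma mass:

```python
import random, statistics

random.seed(1)

# Aggregate many hidden small-scale processes: each observation is the
# sum of 50 independent uniform draws. Constraining only the mean and
# variance, maximum entropy predicts a Gaussian for the aggregate.
sums = [sum(random.random() for _ in range(50)) for _ in range(20000)]
mean = statistics.fmean(sums)          # theory: 50 * 0.5 = 25
var = statistics.pvariance(sums)       # theory: 50 * 1/12 ≈ 4.17
# Fraction within one standard deviation; Gaussian theory gives ≈ 0.683
within_1sd = sum(abs(s - mean) <= var ** 0.5 for s in sums) / len(sums)
```

Replacing the uniform draws with any other bounded distribution leaves the result essentially unchanged, which is the "neutral pattern" point of the paper.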
Method of frequency dependent correlations: investigating the variability of total solar irradiance
NASA Astrophysics Data System (ADS)
Pelt, J.; Käpylä, M. J.; Olspert, N.
2017-04-01
Context. This paper contributes to the field of modeling and hindcasting of the total solar irradiance (TSI) based on different proxy data that extend further back in time than the TSI that is measured from satellites. Aims: We introduce a simple method to analyze persistent frequency-dependent correlations (FDCs) between the time series and use these correlations to hindcast missing historical TSI values. We try to avoid arbitrary choices of the free parameters of the model by computing them using an optimization procedure. The method can be regarded as a general tool for pairs of data sets, where correlating and anticorrelating components can be separated into non-overlapping regions in frequency domain. Methods: Our method is based on low-pass and band-pass filtering with a Gaussian transfer function combined with de-trending and computation of envelope curves. Results: We find a major controversy between the historical proxies and satellite-measured targets: a large variance is detected between the low-frequency parts of targets, while the low-frequency proxy behavior of different measurement series is consistent with high precision. We also show that even though the rotational signal is not strongly manifested in the targets and proxies, it becomes clearly visible in FDC spectrum. A significant part of the variability can be explained by a very simple model consisting of two components: the original proxy describing blanketing by sunspots, and the low-pass-filtered curve describing the overall activity level. The models with the full library of the different building blocks can be applied to hindcasting with a high level of confidence, Rc ≈ 0.90. The usefulness of these models is limited by the major target controversy. Conclusions: The application of the new method to solar data allows us to obtain important insights into the different TSI modeling procedures and their capabilities for hindcasting based on the directly observed time intervals.
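The low-pass building block described here, filtering with a Gaussian transfer function, can be sketched in a few lines of numpy (the cutoff frequency and test signal are illustrative, not the paper's solar series):

```python
import numpy as np

def gaussian_lowpass(x, dt, f_cut):
    """Low-pass filter with Gaussian transfer function exp(-(f/f_cut)^2 / 2),
    applied in the frequency domain."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), dt)
    return np.fft.irfft(X * np.exp(-0.5 * (f / f_cut) ** 2), n=len(x))

t = np.arange(0, 100, 0.1)
slow = np.sin(2 * np.pi * 0.02 * t)       # 50-unit period: kept
fast = 0.5 * np.sin(2 * np.pi * 1.0 * t)  # 1-unit period: removed
filtered = gaussian_lowpass(slow + fast, dt=0.1, f_cut=0.1)
```

A band-pass variant, as used for isolating the rotational signal, is just the difference of two such low-pass results with different cutoffs.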
Full quantum mechanical analysis of atomic three-grating Mach–Zehnder interferometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanz, A.S., E-mail: asanz@iff.csic.es; Davidović, M.; Božić, M.
2015-02-15
Atomic three-grating Mach–Zehnder interferometry constitutes an important tool to probe fundamental aspects of quantum theory. There is, however, a remarkable gap in the literature between the oversimplified models and the robust numerical simulations used to describe the corresponding experiments. Consequently, the former usually lead to paradoxical scenarios, such as the wave–particle dual behavior of atoms, while the latter make it difficult to analyze the data in simple terms. Here these issues are tackled by means of a simple grating working model consisting of evenly-spaced Gaussian slits. As is shown, this model suffices to explore and explain such experiments both analytically and numerically, giving a good account of the full atomic journey inside the interferometer and thus helping to demystify the physics involved. More specifically, it provides a clear and unambiguous picture of the wavefront splitting that takes place inside the interferometer, illustrating how the momentum along each emerging diffraction order is well defined even though the wave function itself still displays a rather complex shape. To this end, the local transverse momentum is also introduced in this context as a reliable analytical tool. The splitting, apart from being a key issue in understanding atomic Mach–Zehnder interferometry, also demonstrates at a fundamental level how wave and particle aspects are always present in the experiment, without incurring any contradiction or interpretive paradox. At a practical level, the generality and versatility of the model and methodology presented make them suitable, after convenient tuning, for attacking analogous problems in a simple manner.
Highlights:
• A simple model is proposed to analyze experiments based on atomic Mach–Zehnder interferometry.
• The model can be easily handled both analytically and computationally.
• A theoretical analysis based on the combination of the position and momentum representations is considered.
• Wave and particle aspects are shown to coexist within the same experiment, thus removing the old wave-corpuscle dichotomy.
• A good agreement between numerical simulations and experimental data is found without appealing to best-fit procedures.
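The evenly-spaced Gaussian-slit grating model lends itself to a compact numerical check: the far-field (Fraunhofer) pattern of such a transmission function shows diffraction orders at multiples of the inverse grating period. A sketch with illustrative parameter values:

```python
import numpy as np

# Grating transmission: evenly spaced Gaussian slits (period d, width w).
d, w, nslits = 1.0, 0.1, 20            # arbitrary units; illustrative
x = np.linspace(-15, 15, 4096)
centers = (np.arange(nslits) - nslits / 2 + 0.5) * d
t = sum(np.exp(-(x - c) ** 2 / (2 * w ** 2)) for c in centers)

# Far-field intensity is |Fourier transform of the transmission|^2:
# sharp diffraction orders appear at spatial frequencies k = m / d.
T = np.fft.fftshift(np.fft.fft(t))
k = np.fft.fftshift(np.fft.fftfreq(len(x), x[1] - x[0]))
intensity = np.abs(T) ** 2
# strongest peak beyond the zeroth order: the first diffraction order
peak_k = k[np.argmax(intensity * (k > 0.5))]
```

The well-defined momentum per diffraction order discussed in the abstract corresponds to these discrete peaks, even though the near-field wave function behind the grating is a complicated superposition.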
Andereggen, Lukas; Neuschmelting, Volker; von Gunten, Michael; Widmer, Hans Rudolf; Takala, Jukka; Jakob, Stephan M; Fandino, Javier; Marbacher, Serge
2014-10-02
Early brain injury and delayed cerebral vasospasm both contribute to unfavorable outcomes after subarachnoid hemorrhage (SAH). Reproducible and controllable animal models that simulate both conditions are presently uncommon. Therefore, new models are needed in order to mimic human pathophysiological conditions resulting from SAH. This report describes the technical nuances of a rabbit blood-shunt SAH model that enables control of intracerebral pressure (ICP). An extracorporeal shunt is placed between the arterial system and the subarachnoid space, which enables examiner-independent SAH in a closed cranium. Step-by-step procedural instructions and necessary equipment are described, as well as technical considerations to produce the model with minimal mortality and morbidity. Important details required for successful surgical creation of this robust, simple and consistent ICP-controlled SAH rabbit model are described.
A Simultaneous Equation Demand Model for Block Rates
NASA Astrophysics Data System (ADS)
Agthe, Donald E.; Billings, R. Bruce; Dobra, John L.; Raffiee, Kambiz
1986-01-01
This paper examines the problem of simultaneous-equations bias in estimation of the water demand function under an increasing block rate structure. The Hausman specification test is used to detect the presence of simultaneous-equations bias arising from correlation of the price measures with the regression error term in the results of a previously published study of water demand in Tucson, Arizona. An alternative simultaneous equation model is proposed for estimating the elasticity of demand in the presence of block rate pricing structures and availability of service charges. This model is used to reestimate the price and rate premium elasticities of demand in Tucson, Arizona for both the usual long-run static model and for a simple short-run demand model. The results from these simultaneous equation models are consistent with a priori expectations and are unbiased.
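The simultaneous-equations bias discussed here, and its remedy, can be illustrated with a small two-stage least squares (2SLS) simulation; the data-generating numbers are invented for illustration and do not reproduce the Tucson estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                 # instrument (e.g. rate-structure shifter)
u = rng.normal(size=n)                 # demand shock
price = 0.8 * z + 0.5 * u + rng.normal(size=n)  # price correlated with u
demand = 2.0 - 1.5 * price + u         # true price coefficient: -1.5

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive OLS: biased, because price is correlated with the error term u
beta_ols = ols(np.column_stack([np.ones(n), price]), demand)

# 2SLS: first stage predicts price from the instrument,
# second stage regresses demand on the fitted price.
Z = np.column_stack([np.ones(n), z])
price_hat = Z @ ols(Z, price)
beta_2sls = ols(np.column_stack([np.ones(n), price_hat]), demand)
```

The OLS slope is pulled toward zero by the price-error correlation, while the instrumented estimate recovers the true elasticity parameter, which is the sense in which the paper's simultaneous-equation estimates are "unbiased."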
Evidence of Collisionless Shocks in a Hall Thruster Plume
2003-04-25
Triple Langmuir probes and emissive probes are used to measure the electron number density, electron temperature, and plasma potential downstream of a low-power Hall thruster. The results show a high density plasma core with elevated electron temperature and plasma potential along the thruster centerline. These properties are believed to be due to collisionless shocks formed as a result of the ion/ion acoustic instability. A simple model is presented that shows the existence of a collisionless shock to be consistent with the observed phenomena.
Dynamic stability of maglev systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Y.; Chen, S.S.; Mulcahy, T.M.
1994-05-01
Because dynamic instabilities are not acceptable in any commercial maglev system, it is important to consider dynamic instability in the development of all maglev systems. This study considers the stability of maglev systems based on experimental data, scoping calculations, and simple mathematical models. Divergence and flutter are obtained for coupled vibration of a three-degree-of-freedom maglev vehicle on a guideway consisting of double L-shaped aluminum segments. The theory and analysis developed in this study provides basic stability characteristics and identifies future research needs for maglev systems.
Right Whale Diving and Foraging Behavior in the Southwestern Gulf of Maine
2012-09-30
atop a relatively simple food chain consisting only of phytoplankton, copepods, and whales that can serve as a convenient model to study trophic... biological and physical oceanographic processes that promote the thin, aggregated layers of copepods upon which the whales feed, and (3) to assess the risks... copepods m-3 on average, SD = 14,300, n = 15 whales, range = 1,890-62,900; VPR: 24,500 copepods m-3 on average, SD = 11,800, n = 13, range = 1,100
Genetic Algorithm Tuned Fuzzy Logic for Gliding Return Trajectories
NASA Technical Reports Server (NTRS)
Burchett, Bradley T.
2003-01-01
The problem of designing and flying a trajectory for successful recovery of a reusable launch vehicle is tackled using fuzzy logic control with genetic algorithm optimization. The plant is approximated by a simplified three degree of freedom non-linear model. A baseline trajectory design and guidance algorithm consisting of several Mamdani type fuzzy controllers is tuned using a simple genetic algorithm. Preliminary results show that the performance of the overall system improves with genetic algorithm tuning.
Formation of the nitrogen aggregates in annealed diamond by neutron irradiation
NASA Astrophysics Data System (ADS)
Mita, Y.; Nisida, Y.; Okada, M.
2018-02-01
Heavy neutron irradiation was performed on synthetic diamonds containing nitrogen atoms in isolated substitutional form (so-called type Ib diamond), and the samples were annealed under a pressure of 6 GPa. A large number of nitrogen B-aggregates, each consisting of four substitutional nitrogen atoms symmetrically surrounding a vacancy, were formed within 30 m from single nitrogen atoms. Furthermore, it is observed that single nitrogen atoms coexist with the B-aggregates in these diamonds, which is unexplainable by the simple nitrogen aggregation model.
Examining the Crossover from the Hadronic to Partonic Phase in QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu Mingmei; Yu Meiling; Liu Lianshou
2008-03-07
A mechanism, consistent with color confinement, for the transition between perturbative and physical vacua during the gradual crossover from the hadronic to partonic phase is proposed. The essence of this mechanism is the appearance and growth of grape-shaped regions of perturbative vacuum inside the physical one. A percolation model based on simple dynamics for parton delocalization is constructed to exhibit this mechanism. The crossover from hadronic matter to sQGP (strongly coupled quark-gluon plasma), as well as the transition from sQGP to weakly coupled quark-gluon plasma with increasing temperature, is successfully described by using this model.
Critical Frequency in Nuclear Chiral Rotation
NASA Astrophysics Data System (ADS)
Olbratowski, P.; Dobaczewski, J.; Dudek, J.; Płóciennik, W.
2004-07-01
Self-consistent solutions for the so-called planar and chiral rotational bands in 132La are obtained for the first time within the Skyrme-Hartree-Fock cranking approach. It is suggested that the chiral rotation cannot exist below a certain critical frequency, which under the approximations used is estimated as ℏω_crit ≈ 0.5–0.6 MeV. However, the exact values of ℏω_crit may vary, to an extent, depending on the microscopic model used, in particular through the pairing correlations and/or calculated equilibrium deformations. The existence of the critical frequency is explained in terms of a simple classical model of two gyroscopes coupled to a triaxial rigid body.
Garcia, F; Arruda-Neto, J D; Manso, M V; Helene, O M; Vanin, V R; Rodriguez, O; Mesa, J; Likhachev, V P; Filho, J W; Deppman, A; Perez, G; Guzman, F; de Camargo, S P
1999-10-01
A new and simple statistical procedure (STATFLUX) for the calculation of transfer coefficients of radionuclide transport to animals and plants is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. By using experimentally available curves of radionuclide concentrations versus time, for each animal compartment (organs), flow parameters were estimated by employing a least-squares procedure, whose consistency is tested. Some numerical results are presented in order to compare the STATFLUX transfer coefficients with those from other works and experimental data.
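The idea of recovering transfer coefficients from concentration-versus-time curves by least squares can be sketched for a single organ compartment; the model form and parameter values below are illustrative, not the paper's full multiple-compartment system:

```python
import numpy as np

# Synthetic "measured" organ concentration from a two-parameter
# compartment model: dC/dt = k_u * S(t) - k_e * C, with known truth.
k_u_true, k_e_true, dt = 0.30, 0.10, 0.1
t = np.arange(0, 50, dt)
S = np.exp(-0.05 * t)                  # source (e.g. feed) concentration
C = np.zeros_like(t)
for i in range(len(t) - 1):            # forward-Euler simulation
    C[i + 1] = C[i] + dt * (k_u_true * S[i] - k_e_true * C[i])

# Least-squares recovery of the transfer coefficients from the curve:
# regress the finite-difference dC/dt on [S, -C].
dCdt = np.diff(C) / dt
A = np.column_stack([S[:-1], -C[:-1]])
k_u_est, k_e_est = np.linalg.lstsq(A, dCdt, rcond=None)[0]
```

With many compartments the same construction yields one regression row per compartment and time step, and the consistency test mentioned in the abstract amounts to examining the least-squares residuals.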
Hidden order and unconventional superconductivity in URu2Si2
NASA Astrophysics Data System (ADS)
Rau, Jeffrey; Kee, Hae-Young
2012-02-01
The nature of the so-called hidden order in URu2Si2 and the subsequent superconducting phase have remained a puzzle for over two decades. Motivated by evidence for rotational symmetry breaking seen in recent magnetic torque measurements [Okazaki et al. Science 331, 439 (2011)], we derive a simple tight-binding model consistent with experimental Fermi surface probes and ab-initio calculations. From this model we use mean-field theory to examine the variety of hidden orders allowed by existing experimental results, including the torque measurements. We then construct a phase diagram in temperature and pressure and discuss relevant experimental consequences.
NASA Astrophysics Data System (ADS)
Longone, P.; Romá, F.
2018-06-01
Chemical techniques are an efficient method to synthesize one-dimensional perovskite manganite oxide nanostructures with a granular morphology, that is, formed by arrays of monodomain magnetic nanoparticles. Integrating the stochastic Landau-Lifshitz-Gilbert equation, we simulate the dynamics of a simple disordered model for such materials that only takes into account the morphological characteristics of their nanograins. We show that it is possible to describe reasonably well experimental hysteresis loops reported in the literature for single La0.67Ca0.33MnO3 nanotubes and powders of these nanostructures, simulating small systems consisting of only 100 nanoparticles.
Development of single cell lithium ion battery model using Scilab/Xcos
NASA Astrophysics Data System (ADS)
Arianto, Sigit; Yunaningsih, Rietje Y.; Astuti, Edi Tri; Hafiz, Samsul
2016-02-01
In this research, a lithium battery model, as a component in a simulation environment, was developed and implemented using the Xcos graphical programming language, a variant of Scicos embedded in Scilab. The equivalent circuit used in modeling the battery was the Double Polarization (DP) model, which consists of one open-circuit voltage source (VOC), one internal resistance (Ri), and two parallel RC circuits. The battery parameters were extracted using Hybrid Pulse Power Characterization (HPPC) testing. In this experiment, the DP electrical circuit model was used to describe the dynamic behavior of the lithium battery. The simulation results were validated against the experimental results; a simple error analysis found the largest error to be 0.275 V, occurring mostly at the low end of the state of charge (SOC).
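The DP equivalent circuit is easy to reproduce outside Scilab/Xcos: the terminal voltage is VOC − I·Ri minus the voltages across the two RC branches. A Python sketch with illustrative (not HPPC-fitted) parameters:

```python
import numpy as np

# Double Polarization (DP) equivalent circuit: open-circuit voltage,
# series resistance Ri, and two parallel RC branches with distinct
# time constants. All parameter values are illustrative placeholders.
Ri, R1, C1, R2, C2 = 0.05, 0.02, 1000.0, 0.04, 10000.0   # ohm, farad
dt, I = 1.0, 2.0                       # 1 s step, 2 A constant discharge
voc = 3.7                              # assume flat OCV over this pulse
v1 = v2 = 0.0
terminal = []
for _ in range(600):                   # 10-minute current pulse
    # exact discretization of dv/dt = -v/(R*C) + I/C for constant I
    v1 = v1 * np.exp(-dt / (R1 * C1)) + R1 * I * (1 - np.exp(-dt / (R1 * C1)))
    v2 = v2 * np.exp(-dt / (R2 * C2)) + R2 * I * (1 - np.exp(-dt / (R2 * C2)))
    terminal.append(voc - I * Ri - v1 - v2)
```

The two time constants (R1·C1 = 20 s, R2·C2 = 4000 s here) reproduce the fast and slow polarization transients that the HPPC pulses are designed to separate; in a real model VOC would be a lookup on SOC rather than a constant.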
Statistics of the geomagnetic secular variation for the past 5Ma
NASA Technical Reports Server (NTRS)
Constable, C. G.; Parker, R. L.
1986-01-01
A new statistical model is proposed for the geomagnetic secular variation over the past 5Ma. Unlike previous models, the model makes use of statistical characteristics of the present day geomagnetic field. The spatial power spectrum of the non-dipole field is consistent with a white source near the core-mantle boundary with Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is the model of the non-dipole field. The model can be combined with an arbitrary statistical description of the dipole and probability density functions and cumulative distribution functions can be computed for declination and inclination that would be observed at any site on Earth's surface. Global paleomagnetic data spanning the past 5Ma are used to constrain the statistics of the dipole part of the field. A simple model is found to be consistent with the available data. An advantage of specifying the model in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, enabling us to test specific properties for a general description. Both intensity and directional data distributions may be tested to see if they satisfy the expected model distributions.
Learning to use working memory: a reinforcement learning gating model of rule acquisition in rats
Lloyd, Kevin; Becker, Nadine; Jones, Matthew W.; Bogacz, Rafal
2012-01-01
Learning to form appropriate, task-relevant working memory representations is a complex process central to cognition. Gating models frame working memory as a collection of past observations and use reinforcement learning (RL) to solve the problem of when to update these observations. Investigation of how gating models relate to brain and behavior remains, however, at an early stage. The current study sought to explore the ability of simple RL gating models to replicate rule learning behavior in rats. Rats were trained in a maze-based spatial learning task that required animals to make trial-by-trial choices contingent upon their previous experience. Using an abstract version of this task, we tested the ability of two gating algorithms, one based on the Actor-Critic and the other on the State-Action-Reward-State-Action (SARSA) algorithm, to generate behavior consistent with the rats'. Both models produced rule-acquisition behavior consistent with the experimental data, though only the SARSA gating model mirrored faster learning following rule reversal. We also found that both gating models learned multiple strategies in solving the initial task, a property which highlights the multi-agent nature of such models and which is of importance in considering the neural basis of individual differences in behavior. PMID:23115551
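The SARSA update at the heart of one of the two gating algorithms is a one-line temporal-difference rule. A toy tabular sketch (a single-state, two-action task, not the maze task or gating architecture of the study):

```python
import random

random.seed(0)

# Tabular SARSA on a toy task: from state 0, action 1 leads to a
# rewarded terminal outcome; action 0 does not. Illustrative only.
alpha, gamma, eps = 0.5, 0.9, 0.1      # learning rate, discount, exploration
Q = {(0, 0): 0.0, (0, 1): 0.0}

def policy(s):
    """Epsilon-greedy action selection."""
    if random.random() < eps:
        return random.choice([0, 1])
    return max((0, 1), key=lambda a: Q[(s, a)])

for _ in range(200):                   # episodes
    s, a = 0, policy(0)
    r = 1.0 if a == 1 else 0.0
    # terminal transition, so the SARSA target has no successor Q term
    Q[(s, a)] += alpha * (r - Q[(s, a)])
```

In the gating models the "actions" are instead decisions about whether to update working memory, but the update rule driving learning is the same temporal-difference error.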
Block modeling of crustal deformation in Tierra del Fuego from GNSS velocities
NASA Astrophysics Data System (ADS)
Mendoza, L.; Richter, A.; Fritsche, M.; Hormaechea, J. L.; Perdomo, R.; Dietrich, R.
2015-05-01
The Tierra del Fuego (TDF) main island is divided by a major transform boundary between the South America and Scotia tectonic plates. Using a block model, we infer slip rates, locking depths and inclinations of active faults in TDF from inversion of site velocities derived from Global Navigation Satellite System observations. We use interseismic velocities from 48 sites, obtained from field measurements spanning 20 years. Euler vectors consistent with a simple seismic cycle are estimated for each block. In addition, we introduce far-field information into the modeling by applying constraints on Euler vectors of major tectonic plates. The difference between model and observed surface deformation near the Magallanes-Fagnano Fault System (MFS) is reduced by considering finite dip in the forward model. For this tectonic boundary, global plate circuit models predict relative movements between 7 and 9 mm yr⁻¹, while our regional model indicates that a strike-slip rate of 5.9 ± 0.2 mm yr⁻¹ is accommodated across the MFS. Our results indicate faults dipping 66 (+6/−4)° southward, locked to a depth of 11 (+5/−5) km, which is consistent with geological models for the MFS. However, normal slip also dominates the fault-perpendicular motion throughout the eastern MFS, with a maximum rate along the Fagnano Lake.
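The forward step of such block models, predicting a site's rigid-block velocity from the block's Euler vector via v = ω × r, can be sketched as follows (the interface, units, and values are illustrative, not the TDF solution):

```python
import numpy as np

R_EARTH = 6.371e6  # mean Earth radius, m

def site_velocity(euler_deg_per_myr, lat_deg, lon_deg):
    """Rigid-block surface velocity v = omega x r, in mm/yr, from an
    Euler vector given as Cartesian components in deg/Myr.
    Illustrative interface; real codes add elastic locking terms."""
    omega = np.radians(np.asarray(euler_deg_per_myr)) / 1e6   # rad/yr
    lat, lon = np.radians([lat_deg, lon_deg])
    r = R_EARTH * np.array([np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)])
    return np.cross(omega, r) * 1000.0                        # mm/yr

# Sanity check: a site on the equator under rotation about the z-axis
# moves east at omega * R; 0.1 deg/Myr gives roughly 11 mm/yr.
v = site_velocity([0.0, 0.0, 0.1], lat_deg=0.0, lon_deg=0.0)
```

The inversion in the paper runs this map in reverse: given GNSS velocities at 48 sites, it solves for the Euler vectors (plus fault locking depth and dip) that best reproduce them.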
A study on assimilating potential vorticity data
NASA Astrophysics Data System (ADS)
Li, Yong; Ménard, Richard; Riishøjgaard, Lars Peter; Cohn, Stephen E.; Rood, Richard B.
1998-08-01
The correlation that exists between the potential vorticity (PV) field and the distribution of chemical tracers such as ozone suggests the possibility of using tracer observations as proxy PV data in atmospheric data assimilation systems. Especially in the stratosphere, there are plentiful tracer observations but a general lack of reliable wind observations, and the correlation is most pronounced. The issue investigated in this study is how model dynamics would respond to the assimilation of PV data. First, numerical experiments of identical-twin type were conducted with a simple univariate nudging algorithm and a global shallow water model based on PV and divergence (PV-D model). All model fields are successfully reconstructed through the insertion of complete PV data alone if an appropriate value for the nudging coefficient is used. A simple linear analysis suggests that slow modes are recovered rapidly, at a rate nearly independent of spatial scale. In a more realistic experiment, appropriately scaled total ozone data from the NIMBUS-7 TOMS instrument were assimilated as proxy PV data into the PV-D model over a 10-day period. The resulting model PV field matches the observed total ozone field relatively well on large spatial scales, and the PV, geopotential and divergence fields are dynamically consistent. These results indicate the potential usefulness that tracer observations, as proxy PV data, may offer in a data assimilation system.
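The univariate nudging used in the identical-twin experiments relaxes the model state toward observations through an extra tendency term, dx/dt = f(x) + G(obs − x). A scalar toy sketch with an assumed nudging coefficient G (not the shallow-water system of the paper):

```python
import numpy as np

# Newtonian nudging: add a relaxation-toward-observations term to the
# model tendency. G and the toy dynamics below are illustrative choices.
def f(x):
    return -0.1 * x                     # the model's own (damped) dynamics

dt, G = 0.1, 0.5                        # time step and nudging coefficient
truth = 2.0 * np.exp(-0.1 * np.arange(0.0, 20.0, 0.1))  # "observations"
x = 10.0                                # badly initialized model state
err = []
for obs in truth:
    x += dt * (f(x) + G * (obs - x))    # forward-Euler nudging step
    err.append(abs(x - obs))
```

Too small a G leaves the model uncorrected; too large a G forces the state onto noisy observations faster than the dynamics can stay balanced, which is why the paper tunes the coefficient.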
Dynamical systems, attractors, and neural circuits.
Miller, Paul
2016-01-01
Biology is the study of dynamical systems. Yet most of us working in biology have limited pedagogical training in the theory of dynamical systems, an unfortunate historical fact that can be remedied for future generations of life scientists. In my particular field of systems neuroscience, neural circuits are rife with nonlinearities at all levels of description, rendering simple methodologies and our own intuition unreliable. Therefore, our ideas are likely to be wrong unless informed by good models. These models should be based on the mathematical theories of dynamical systems since functioning neurons are dynamic-they change their membrane potential and firing rates with time. Thus, selecting the appropriate type of dynamical system upon which to base a model is an important first step in the modeling process. This step all too easily goes awry, in part because there are many frameworks to choose from, in part because the sparsely sampled data can be consistent with a variety of dynamical processes, and in part because each modeler has a preferred modeling approach that is difficult to move away from. This brief review summarizes some of the main dynamical paradigms that can arise in neural circuits, with comments on what they can achieve computationally and what signatures might reveal their presence within empirical data. I provide examples of different dynamical systems using simple circuits of two or three cells, emphasizing that any one connectivity pattern is compatible with multiple, diverse functions.
Low-angle detachment origin for the Red Sea Rift System?
NASA Astrophysics Data System (ADS)
Voggenreiter, W.; Hötzl, H.; Mechie, J.
1988-07-01
The tectonic and magmatic history of the Jizan coastal plain (Tihama Asir, southwest Arabia) suggests a two-stage evolution. A first stage of extension began during the Oligocene and ended with uplift of the Arabian graben shoulder which began about 14 Ma ago. It was followed by a period of approximately 10 Ma characterized by magmatic and tectonic quiescence. A second stage of extension began roughly contemporaneously with the onset of seafloor spreading in the southern Red Sea some 4-5 Ma ago and is still active today. The geometry of faulting in the Jizan area supports a Wernicke model of simple shear for the development of the southern Red Sea. Regional asymmetries of the Red Sea area, such as the distribution of volcanism, the marginal topography and asymmetries in the geophysical signatures are consistent with such a model. Available seismic profiles allow a rough estimate for β-values of the Arabian Red Sea margin and were used to simulate subsidence history and heat flow of the Red Sea for "classical" two-layer stretching models. Neither finite uniform nor finite non-uniform stretching models can account for observed subsidence and heat flow data. Thus, two model scenarios of whole-lithosphere normal simple-shear are presented for the geological history of the southwestern Arabian margin of the Red Sea. These models are limited because of the Serravallian rearrangement in the kinematics of the Red Sea.
NASA Astrophysics Data System (ADS)
Motte, Fabrice; Bugler-Lamb, Samuel L.; Falcoz, Quentin
2015-07-01
The attraction of solar energy is greatly enhanced by the possibility of using it during times of reduced or non-existent solar flux, such as weather-induced intermittences or the darkness of night. Optimizing thermal storage is therefore crucial for the success of solar energy plants. Here we present a study of a structured bed filler dedicated to thermocline-type thermal storage, believed to offer financial and thermal benefits over other systems currently in use, such as packed-bed thermocline tanks. Several criteria, such as thermocline thickness and thermocline centering, are defined to facilitate assessment of the tank's efficiency, complementing the standard concept of power output. A numerical model is developed that reduces the modeling of such a tank to two dimensions. The structure within the tank is designed to be built from simple bricks harboring rectangular channels through which the solar heat-transfer and storage fluid flows. The model is scrutinized and tested for physical robustness, and the results are presented in this paper. The consistency of the model is achieved within particular ranges of each physical variable.
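Thermocline thickness and centering criteria of the kind mentioned above can be sketched as follows; the 10-90% cut-off definition of the thermocline zone is an assumed, commonly used convention, not necessarily the exact criteria adopted in the paper:

```python
import numpy as np

def thermocline_metrics(z, T, T_cold, T_hot, lo=0.1, hi=0.9):
    """Thickness and centering of the thermocline zone: the region where the
    dimensionless temperature lies between lo and hi (assumed cut-off
    definition). Centering is the offset of the zone's midpoint from the
    tank's mid-height."""
    theta = (T - T_cold) / (T_hot - T_cold)
    inside = (theta > lo) & (theta < hi)
    z_in = z[inside]
    if z_in.size == 0:
        return 0.0, 0.0
    thickness = z_in.max() - z_in.min()
    centering = 0.5 * (z_in.max() + z_in.min()) - 0.5 * (z.min() + z.max())
    return float(thickness), float(centering)
```

A thin, well-centered thermocline keeps more of the tank at the hot and cold set points, which is what such criteria are meant to quantify alongside power output.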
Two modes of motion of the alligator lizard cochlea: Measurements and model predictions
NASA Astrophysics Data System (ADS)
Aranyosi, A. J.; Freeman, Dennis M.
2005-09-01
Measurements of motion of an in vitro preparation of the alligator lizard basilar papilla in response to sound demonstrate elliptical trajectories. These trajectories are consistent with the presence of both a translational and rotational mode of motion. The translational mode is independent of frequency, and the rotational mode has a displacement peak near 5 kHz. These measurements can be explained by a simple mechanical system in which the basilar papilla is supported asymmetrically on the basilar membrane. In a quantitative model, the translational admittance is compliant while the rotational admittance is second order. Best-fit model parameters are consistent with estimates based on anatomy and predict that fluid flow across hair bundles is a primary source of viscous damping. The model predicts that the rotational mode contributes to the high-frequency slopes of auditory nerve fiber tuning curves, providing a physical explanation for a low-pass filter required in models of this cochlea. The combination of modes makes the sensitivity of hair bundles more uniform with radial position than that which would result from pure rotation. A mechanical analogy with the organ of Corti suggests that these two modes of motion may also be present in the mammalian cochlea.
Charge and energy dependence of the residence time of cosmic ray nuclei below 15 GeV/nucleon
NASA Technical Reports Server (NTRS)
Soutoul, A.; Engelmann, J. J.; Ferrando, P.; Koch-Miramond, L.; Masse, P.; Webber, W. R.
1985-01-01
The relative abundance of nuclear species measured in cosmic rays at Earth has often been interpreted with the simple leaky box model. For this model to be consistent, an essential requirement is that the escape length does not depend on the nuclear species. The discrepancy between escape length values derived from iron secondaries and from the B/C ratio was identified by Garcia-Munoz and his co-workers using a large amount of experimental data. Ormes and Protheroe found a similar trend in the HEAO data, although they questioned its significance against uncertainties. They also showed that the change in the B/C ratio values implies a decrease of the residence time of cosmic rays at low energies, in conflict with the diffusive convective picture. These conclusions crucially depend on the partial cross section values and their uncertainties. Recently, new accurate cross sections of key importance for propagation calculations have been measured. Their statistical uncertainties are often better than 4% and their values significantly different from those previously accepted. Here, these new cross sections are used to compare the observed B/(C+O) and (Sc to Cr)/Fe ratios to those predicted with the simple leaky box model.
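The leaky-box consistency requirement can be illustrated with the steady-state balance for a purely secondary species: production from the primary balances escape plus destruction, so the secondary-to-primary ratio fixes the escape length. The grammage values in the example are illustrative, not the paper's measured cross sections:

```python
def secondary_to_primary(lambda_esc, lambda_prod, lambda_dest):
    """Steady-state simple-leaky-box secondary-to-primary ratio (e.g. B/C).
    Balance: N_s * (1/lambda_esc + 1/lambda_dest) = N_p / lambda_prod,
    with all quantities expressed as grammages/path lengths in g/cm^2:
      lambda_esc  - escape length,
      lambda_prod - production mean free path (primary -> secondary),
      lambda_dest - destruction mean free path of the secondary."""
    return (1.0 / lambda_prod) / (1.0 / lambda_esc + 1.0 / lambda_dest)
```

Because the ratio rises monotonically with the escape length, different secondary/primary pairs (B/C versus iron secondaries) must yield the same escape length for the model to be consistent, which is exactly the test the new cross sections sharpen.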
An approach to multivariable control of manipulators
NASA Technical Reports Server (NTRS)
Seraji, H.
1987-01-01
The paper presents simple schemes for multivariable control of multiple-joint robot manipulators in joint and Cartesian coordinates. The joint control scheme consists of two independent multivariable feedforward and feedback controllers. The feedforward controller is the minimal inverse of the linearized model of robot dynamics and contains only proportional-double-derivative (PD2) terms - implying feedforward from the desired position, velocity and acceleration. This controller ensures that the manipulator joint angles track any reference trajectories. The feedback controller is of proportional-integral-derivative (PID) type and is designed to achieve pole placement. This controller reduces any initial tracking error to zero as desired and also ensures that robust steady-state tracking of step-plus-exponential trajectories is achieved by the joint angles. Simple and explicit expressions for the computation of the feedforward and feedback gains are obtained from the linearized model of robot dynamics. This leads to computationally efficient schemes for either on-line gain computation or off-line gain scheduling to account for variations in the linearized robot model due to changes in the operating point. The joint control scheme is extended to direct control of the end-effector motion in Cartesian space. Simulation results are given for illustration.
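The two-part structure (feedforward from the desired trajectory plus PID feedback) can be sketched on a hypothetical single-joint linearized model J*th'' + b*th' = u; the gains below are illustrative choices, not values computed from the paper's explicit expressions:

```python
def track_step(T=5.0, dt=0.001, J=2.0, b=0.5, Kp=400.0, Ki=600.0, Kd=40.0):
    """Step tracking on a hypothetical linearized joint J*th'' + b*th' = u.
    The feedforward uses the desired velocity/acceleration (zero for a step
    reference, so it vanishes here); the PID feedback drives the remaining
    error to zero. Returns the final tracking error."""
    th, om, ierr = 0.0, 0.0, 0.0
    th_d, om_d, al_d = 1.0, 0.0, 0.0        # desired pos/vel/acc (step input)
    for _ in range(int(T / dt)):
        err = th_d - th
        ierr += err * dt
        u_ff = J * al_d + b * om_d          # PD2-style feedforward term
        u_fb = Kp * err + Ki * ierr + Kd * (om_d - om)   # PID feedback
        u = u_ff + u_fb
        om += dt * (u - b * om) / J         # semi-implicit Euler update
        th += dt * om
    return abs(th_d - th)
```

The integral term is what delivers the robust steady-state tracking of step inputs noted in the abstract; for time-varying references the feedforward terms carry the desired velocity and acceleration.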
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, the industry still uses traditional techniques to obtain those values, mainly because of a lack of knowledge of optimization techniques. Therefore, a simple and easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For modelling of input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied; this modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can narrow the gap between academia and industry by introducing an optimization technique that is simple, fast and accurate.
Collision geometry scaling of Au+Au pseudorapidity density from √(s_NN) = 19.6 to 200 GeV
NASA Astrophysics Data System (ADS)
Back, B. B.; Baker, M. D.; Ballintijn, M.; Barton, D. S.; Betts, R. R.; Bickley, A. A.; Bindel, R.; Budzanowski, A.; Busza, W.; Carroll, A.; Decowski, M. P.; García, E.; George, N.; Gulbrandsen, K.; Gushue, S.; Halliwell, C.; Hamblen, J.; Heintzelman, G. A.; Henderson, C.; Hofman, D. J.; Hollis, R. S.; Hołyński, R.; Holzman, B.; Iordanova, A.; Johnson, E.; Kane, J. L.; Katzy, J.; Khan, N.; Kucewicz, W.; Kulinich, P.; Kuo, C. M.; Lin, W. T.; Manly, S.; McLeod, D.; Mignerey, A. C.; Nouicer, R.; Olszewski, A.; Pak, R.; Park, I. C.; Pernegger, H.; Reed, C.; Remsberg, L. P.; Reuter, M.; Roland, C.; Roland, G.; Rosenberg, L.; Sagerer, J.; Sarin, P.; Sawicki, P.; Skulski, W.; Steinberg, P.; Stephans, G. S.; Sukhanov, A.; Tonjes, M. B.; Tang, J.-L.; Trzupek, A.; Vale, C.; van Nieuwenhuizen, G. J.; Verdier, R.; Wolfs, F. L.; Wosiek, B.; Woźniak, K.; Wuosmaa, A. H.; Wysłouch, B.
2004-08-01
The centrality dependence of the midrapidity charged particle multiplicity in Au+Au heavy-ion collisions at √(s_NN) = 19.6 and 200 GeV is presented. Within a simple model, the fraction of hard (scaling with number of binary collisions) to soft (scaling with number of participant pairs) interactions is consistent with a value of x = 0.13 ± 0.01 (stat) ± 0.05 (syst) at both energies. The experimental results at both energies, scaled by inelastic p(p̄)+p collision data, agree within systematic errors. The ratio of the data was found not to depend on centrality over the studied range and yields a simple linear scale factor of R_{200/19.6} = 2.03 ± 0.02 (stat) ± 0.05 (syst).
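The two-component decomposition used above can be written as a one-line model, with the soft term scaling with the number of participant pairs and the hard term with the number of binary collisions:

```python
def dn_deta(npart, ncoll, n_pp, x=0.13):
    """Two-component model for the midrapidity charged multiplicity:
    soft production scales with participant pairs (npart/2), hard
    production with binary collisions (ncoll); x is the hard fraction
    (the paper finds x = 0.13 at both energies) and n_pp is the
    p+p reference multiplicity."""
    return n_pp * ((1.0 - x) * npart / 2.0 + x * ncoll)
```

Since Ncoll grows faster than Npart with centrality, x is extracted from how the per-participant-pair multiplicity rises toward central collisions.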
NASA Astrophysics Data System (ADS)
George, D. S.; Onischenko, A.; Holmes, A. S.
2004-03-01
Focused laser ablation by single laser pulses at varying angles of incidence is studied in two materials of interest: a solgel (Ormocer 4) and a polymer (SU8). For a range of angles (up to 70° from normal), and for low-energy (<20 μJ), 40 ns pulses at 266 nm wavelength, the ablation depth along the direction of the incident laser beam is found to be independent of the angle of incidence. This allows the crater profiles at oblique incidence to be generated directly from the crater profiles at normal incidence by a simple coordinate transformation. This result is of use in the development of simulation tools for direct-write laser ablation. A simple model based on the moving ablation front approach is shown to be consistent with the observed behavior.
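The coordinate transformation implied by angle-independent along-beam ablation can be sketched as follows; the specific projection used here (footprint stretched by 1/cos θ in the tilt plane, vertical depth scaled by cos θ) is a simplified geometric reading of that result, not necessarily the paper's exact construction:

```python
import numpy as np

def oblique_profile(x, d0, theta):
    """Map a normal-incidence crater depth profile d0(x) to incidence angle
    theta (radians), assuming the ablation depth measured along the beam is
    independent of angle (the observed behavior). The beam footprint then
    stretches by 1/cos(theta) along the tilt direction, and the vertical
    depth is the along-beam depth projected by cos(theta)."""
    x_oblique = x / np.cos(theta)
    depth_vertical = d0 * np.cos(theta)
    return x_oblique, depth_vertical
```

Such a transformation lets a direct-write simulation tool reuse a single measured normal-incidence crater for arbitrary angles of incidence.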
Chemical consequences of the initial diffusional growth of cloud droplets - A clean marine case
NASA Technical Reports Server (NTRS)
Twohy, C. H.; Charlson, R. J.; Austin, P. H.
1989-01-01
A simple microphysical cloud parcel model and a simple representation of the background marine aerosol are used to predict the concentrations and compositions of droplets of various sizes near cloud base. The aerosol consists of an externally-mixed ammonium bisulfate accumulation mode and a sea-salt coarse particle mode. The difference in diffusional growth rates between the small and large droplets as well as the differences in composition between the two aerosol modes result in substantial differences in solute concentration and composition with size of droplets in the parcel. The chemistry of individual droplets is not, in general, representative of the bulk (volume-weighted mean) cloud water sample. These differences, calculated to occur early in the parcel's lifetime, should have important consequences for chemical reactions such as aqueous phase sulfate production.
Wagner, Peter J.
2012-01-01
Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution. PMID:21795266
Emergent organization in a model market
NASA Astrophysics Data System (ADS)
Yadav, Avinash Chand; Manchanda, Kaustubh; Ramaswamy, Ramakrishna
2017-09-01
We study the collective behaviour of interacting agents in a simple model of market economics that was originally introduced by Nørrelykke and Bak. A general theoretical framework for interacting traders on an arbitrary network is presented, with the interaction consisting of buying (namely consumption) and selling (namely production) of commodities. Extremal dynamics is introduced by having the agent with the least profit in the market readjust prices, causing the market to self-organize. In addition to examining this model market on regular lattices in two dimensions, we also study the cases of random complex networks both with and without community structures. Fluctuations in an activity signal exhibit properties that are characteristic of avalanches observed in models of self-organized criticality, and these can be described by power-law distributions when the system is in the critical state.
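The least-profit price-readjustment rule is closely analogous to Bak-Sneppen extremal dynamics; a minimal sketch on a ring of agents (illustrative only, omitting the buying/selling interactions of the full model) shows the self-organization of the profit distribution toward a threshold:

```python
import numpy as np

def extremal_market(n=200, steps=20000, seed=0):
    """Bak-Sneppen-style stand-in for the extremal dynamics: at each step
    the agent with the minimal 'profit' readjusts, and its two ring
    neighbours are perturbed too; all three draw fresh random profits.
    (Negative index wraps, giving periodic boundaries.)"""
    rng = np.random.default_rng(seed)
    profit = rng.random(n)
    for _ in range(steps):
        i = int(np.argmin(profit))
        for j in (i - 1, i, (i + 1) % n):
            profit[j] = rng.random()
    return profit
```

After many updates most agents sit above a self-organized threshold, and the sequence of minimal sites moves in avalanches, the signature of self-organized criticality the abstract refers to.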
Correlation lengths in hydrodynamic models of active nematics.
Hemingway, Ewan J; Mishra, Prashant; Marchetti, M Cristina; Fielding, Suzanne M
2016-09-28
We examine the scaling with activity of the emergent length scales that control the nonequilibrium dynamics of an active nematic liquid crystal, using two popular hydrodynamic models that have been employed in previous studies. In both models we find that the chaotic spatio-temporal dynamics in the regime of fully developed active turbulence is controlled by a single active scale determined by the balance of active and elastic stresses, regardless of whether the active stress is extensile or contractile in nature. The observed scaling of the kinetic energy and enstrophy with activity is consistent with our single-length scale argument and simple dimensional analysis. Our results provide a unified understanding of apparent discrepancies in the previous literature and demonstrate that the essential physics is robust to the choice of model.
Scattering measurements on natural and model trees
NASA Technical Reports Server (NTRS)
Rogers, James C.; Lee, Sung M.
1990-01-01
The acoustical back scattering from a simple scale model of a tree has been experimentally measured. The model consisted of a trunk and six limbs, each with 4 branches; no foliage or twigs were included. The data from the anechoic chamber measurements were then mathematically combined to construct the effective back scattering from groups of trees. Also, initial measurements have been conducted out-of-doors on a single tree in an open field in order to characterize its acoustic scattering as a function of azimuth angle. These measurements were performed in the spring, prior to leaf development. The data support a statistical model of forest scattering; the scattered signal spectrum is highly irregular but with a remarkable general resemblance to the incident signal spectrum. Also, the scattered signal's spectra showed little dependence upon scattering angle.
Ultrahigh-density sub-10 nm nanowire array formation via surface-controlled phase separation.
Tian, Yuan; Mukherjee, Pinaki; Jayaraman, Tanjore V; Xu, Zhanping; Yu, Yongsheng; Tan, Li; Sellmyer, David J; Shield, Jeffrey E
2014-08-13
We present simple, self-assembled, and robust fabrication of ultrahigh density cobalt nanowire arrays. The binary Co-Al and Co-Si systems phase-separate during physical vapor deposition, resulting in Co nanowire arrays with average diameter as small as 4.9 nm and nanowire density on the order of 10^16/m^2. The nanowire diameters were controlled by moderating the surface diffusivity, which affected the lateral diffusion lengths. High resolution transmission electron microscopy reveals that the Co nanowires formed in the face-centered cubic structure. Elemental mapping showed that in both systems the nanowires consisted of Co with undetectable Al or Si and that the matrix consisted of Al with no distinguishable Co in the Co-Al system and a mixture of Si and Co in the Co-Si system. Magnetic measurements clearly indicate anisotropic behavior consistent with shape anisotropy. The dynamics of nanowire growth, simulated using an Ising model, is consistent with the experimental phase and geometry of the nanowires.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farooqi, Rahmat Ullah; Hrma, Pavel
2016-06-01
We have investigated the effect of the Al/B ratio on the Product Consistency Test (PCT) response. In an aluminoborosilicate soda-lime glass based on a modified International Simple Glass, ISG-3, the Al/B ratio varied from 0 to 0.55 (in mole fractions). In agreement with various models of the PCT response as a function of glass composition, we observed a monotonic increase of B and Na releases with decreasing Al/B mole ratio, but only when the ratio was higher than 0.05. Below this value (Al/B < 0.05), we observed a sharp decrease that we attribute to B in tetrahedral coordination.
Minimal analytical model for undular tidal bore profile; quantum and Hawking effect analogies
NASA Astrophysics Data System (ADS)
Berry, M. V.
2018-05-01
Waves travelling up-river, driven by high tides, often consist of a smooth front followed by a series of undulations. A simple approximate theory gives the rigidly travelling profile of such ‘undular hydraulic jumps’, up to scaling, as the integral of the Airy function; applying self-consistency fixes the scaling. The theory combines the standard hydraulic jump with ideas borrowed from quantum physics: Hamiltonian operators and zero-energy eigenfunctions. There is an analogy between undular bores and the Hawking effect in relativity: both concern waves associated with horizons. ‘Physics is not just Concerning the Nature of Things, but Concerning the Interconnectedness of all the Natures of Things’(Sir Charles Frank, retirement speech 1976).
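Up to scaling, the rigid profile described above is the integral of the Airy function; a schematic form (with ℓ the length scale that the paper's self-consistency argument fixes) is:

```latex
% Rigid bore profile, up to scaling; \ell is fixed by self-consistency.
\eta(x) \;\propto\; \int_{-\infty}^{x/\ell} \operatorname{Ai}(s)\,\mathrm{d}s,
\qquad \int_{-\infty}^{\infty} \operatorname{Ai}(s)\,\mathrm{d}s = 1 .
```

The exponential decay of Ai for positive argument gives the smooth monotonic part of the front, the oscillatory Airy tail for negative argument produces the train of undulations, and the unit normalization of the Airy integral fixes the level far from the front.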
TeV-PeV neutrinos from low-power gamma-ray burst jets inside stars.
Murase, Kohta; Ioka, Kunihito
2013-09-20
We study high-energy neutrino production in collimated jets inside progenitors of gamma-ray bursts (GRBs) and supernovae, considering both collimation and internal shocks. We obtain simple, useful constraints, using the often overlooked point that shock acceleration of particles is ineffective at radiation-mediated shocks. Classical GRBs may be too powerful to produce high-energy neutrinos inside stars, which is consistent with IceCube nondetections. We find that ultralong GRBs avoid such constraints and detecting the TeV signal will support giant progenitors. Predictions for low-power GRB classes including low-luminosity GRBs can be consistent with the astrophysical neutrino background IceCube may detect, with a spectral steepening around PeV. The models can be tested with future GRB monitors.
The binding domain of the HMGB1 inhibitor carbenoxolone: Theory and experiment
NASA Astrophysics Data System (ADS)
Mollica, Luca; Curioni, Alessandro; Andreoni, Wanda; Bianchi, Marco E.; Musco, Giovanna
2008-05-01
We present a combined computational and experimental study of the interaction of the Box A of the HMGB1 protein and carbenoxolone, an inhibitor of its pro-inflammatory activity. The computational approach consists of classical molecular dynamics (MD) simulations based on the GROMOS force field with quantum-refined (QRFF) atomic charges for the ligand. Experimental data consist of fluorescence intensities, chemical shift displacements, saturation transfer differences and intermolecular Nuclear Overhauser Enhancement signals. Good agreement is found between observations and the conformation of the ligand-protein complex resulting from QRFF-MD. In contrast, simple docking procedures and MD based on the unrefined force field provide models inconsistent with experiment. The ligand-protein binding is dominated by non-directional interactions.
Transport Signatures of Quasiparticle Poisoning in a Majorana Island.
Albrecht, S M; Hansen, E B; Higginbotham, A P; Kuemmeth, F; Jespersen, T S; Nygård, J; Krogstrup, P; Danon, J; Flensberg, K; Marcus, C M
2017-03-31
We investigate effects of quasiparticle poisoning in a Majorana island with strong tunnel coupling to normal-metal leads. In addition to the main Coulomb blockade diamonds, "shadow" diamonds appear, shifted by 1e in gate voltage, consistent with transport through an excited (poisoned) state of the island. Comparison to a simple model yields an estimate of parity lifetime for the strongly coupled island (∼1 μs) and sets a bound for a weakly coupled island (>10 μs). Fluctuations in the gate-voltage spacing of Coulomb peaks at high field, reflecting Majorana hybridization, are enhanced by the reduced lever arm at strong coupling. When converted from gate voltage to energy units, fluctuations are consistent with previous measurements.
Des Roches, Carrie A; Vallila-Rohter, Sofia; Villard, Sarah; Tripodis, Yorghos; Caplan, David; Kiran, Swathi
2016-12-01
The current study examined treatment outcomes and generalization patterns following 2 sentence comprehension therapies: object manipulation (OM) and sentence-to-picture matching (SPM). Findings were interpreted within the framework of specific deficit and resource reduction accounts, which were extended in order to examine the nature of generalization following treatment of sentence comprehension deficits in aphasia. Forty-eight individuals with aphasia were enrolled in 1 of 8 potential treatment assignments that varied by task (OM, SPM), complexity of trained sentences (complex, simple), and syntactic movement (noun phrase, wh-movement). Comprehension of trained and untrained sentences was probed before and after treatment using stimuli that differed from the treatment stimuli. Linear mixed-model analyses demonstrated that, although both OM and SPM treatments were effective, OM resulted in greater improvement than SPM. Analyses of covariance revealed main effects of complexity in generalization; generalization from complex to simple linguistically related sentences was observed both across task and across movement. Results are consistent with the complexity account of treatment efficacy, as generalization effects were consistently observed from complex to simpler structures. Furthermore, results provide support for resource reduction accounts that suggest that generalization can extend across linguistic boundaries, such as across movement type.
A Kinematically Consistent Two-Point Correlation Function
NASA Technical Reports Server (NTRS)
Ristorcelli, J. R.
1998-01-01
A simple kinematically consistent expression for the longitudinal two-point correlation function related to both the integral length scale and the Taylor microscale is obtained. On the inner scale, in a region of width inversely proportional to the turbulent Reynolds number, the function has the appropriate curvature at the origin. The expression for the two-point correlation is related to the nonlinear cascade rate, or dissipation epsilon, a quantity that is carried as part of a typical single-point turbulence closure simulation. Constructing an expression for the two-point correlation whose curvature at the origin is set by the Taylor microscale incorporates one of the fundamental quantities characterizing turbulence, epsilon, into a model for the two-point correlation function. The integral of the function also gives, as is required, an outer integral length scale of the turbulence independent of viscosity. The proposed expression is obtained by kinematic arguments; the intention is to produce a practically applicable expression in terms of simple elementary functions that allows an analytical evaluation, by asymptotic methods, of diverse functionals relevant to single-point turbulence closures. Using the expression devised, an example is given of the asymptotic method by which functionals of the two-point correlation can be evaluated.
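The two kinematic constraints above, curvature at the origin set by the Taylor microscale and area under the curve set by the integral scale, can be checked numerically for any candidate correlation; the Gaussian used in the test is an illustrative stand-in, not the expression devised in the paper:

```python
import numpy as np

def scales_from_correlation(f, dr=1e-3, rmax=50.0):
    """Recover the two length scales encoded in a longitudinal two-point
    correlation f(r): the Taylor microscale from the curvature at the origin
    (f ~ 1 - r^2/lambda^2, so lambda = sqrt(-2/f''(0))) and the integral
    length scale as the area under f."""
    r = np.arange(0.0, rmax, dr)
    fr = f(r)
    fpp0 = (fr[2] - 2.0 * fr[1] + fr[0]) / dr**2        # f''(0), 3-point stencil
    taylor = np.sqrt(-2.0 / fpp0)
    integral = (fr.sum() - 0.5 * (fr[0] + fr[-1])) * dr  # trapezoid rule
    return float(taylor), float(integral)
```

A correlation model is kinematically consistent in the sense of the abstract when both recovered scales match their target values simultaneously.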
Binary encoding of multiplexed images in mixed noise.
Lalush, David S
2008-09-01
Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
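A minimal sketch of binary-coded multiplexing with an order-7 cyclic S-matrix (matching the seven-source radiography example above); the construction and the noise figure quoted below are standard S-matrix properties, assumed rather than taken from the paper:

```python
import numpy as np

def s_matrix_7():
    """Order-7 cyclic Hadamard S-matrix: rows are cyclic shifts of a
    length-7 maximal-length (pseudo-noise) 0/1 sequence of weight 4.
    Each row is one source on/off pattern."""
    row = np.array([1, 1, 1, 0, 1, 0, 0], dtype=float)
    return np.stack([np.roll(row, k) for k in range(7)])

def multiplex_decode(S, y):
    """Recover the 7 source intensities from multiplexed measurements y = S @ x."""
    return np.linalg.solve(S, y)
```

For unit-variance constant (detector) noise per measurement, each decoded element has variance sum_j (S^-1)_ij^2 = 4n/(n+1)^2 = 7/16 for n = 7, below the variance 1 of one-source-at-a-time measurement; purely proportional noise erases this multiplex advantage, which is the constant-versus-proportional tradeoff the mixed-noise model quantifies.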
Modular rate laws for enzymatic reactions: thermodynamics, elasticities and implementation.
Liebermeister, Wolfram; Uhlendorf, Jannis; Klipp, Edda
2010-06-15
Standard rate laws are a key requisite for systematically turning metabolic networks into kinetic models. They should provide simple, general and biochemically plausible formulae for reaction velocities and reaction elasticities. At the same time, they need to respect thermodynamic relations between the kinetic constants and the metabolic fluxes and concentrations. We present a family of reversible rate laws for reactions with arbitrary stoichiometries and various types of regulation, including mass-action, Michaelis-Menten and uni-uni reversible Hill kinetics as special cases. With a thermodynamically safe parameterization of these rate laws, parameter sets obtained by model fitting, sampling or optimization are guaranteed to lead to consistent chemical equilibrium states. A reformulation using saturation values yields simple formulae for rates and elasticities, which can be easily adjusted to the given stationary flux distributions. Furthermore, this formulation highlights the role of chemical potential differences as thermodynamic driving forces. We compare the modular rate laws to the thermodynamic-kinetic modelling formalism and discuss a simplified rate law in which the reaction rate directly depends on the reaction affinity. For automatic handling of modular rate laws, we propose a standard syntax and semantic annotations for the Systems Biology Markup Language. An online tool for inserting the rate laws into SBML models is freely available at www.semanticsbml.org. Supplementary data are available at Bioinformatics online.
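A minimal instance of such a thermodynamically safe rate law is the reversible uni-uni Michaelis-Menten form with the reverse maximal velocity eliminated through the Haldane relation; this is a standard construction, and the parameter names here are generic rather than the paper's notation:

```python
def reversible_mm(s, p, Vf, Ks, Kp, Keq):
    """Reversible uni-uni Michaelis-Menten rate law. The reverse maximal
    velocity Vr is not a free parameter: the Haldane relation
    Keq = Vf*Kp / (Vr*Ks) fixes it, so the rate vanishes exactly at
    chemical equilibrium p/s = Keq, guaranteeing a consistent
    equilibrium state for any fitted parameter set."""
    Vr = Vf * Kp / (Keq * Ks)
    return (Vf * s / Ks - Vr * p / Kp) / (1.0 + s / Ks + p / Kp)
```

The numerator carries the thermodynamic driving force (it changes sign with the reaction affinity), while the denominator encodes enzyme saturation, the same separation the modular rate laws generalize to arbitrary stoichiometries.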
Microscopic motion of particles flowing through a porous medium
NASA Astrophysics Data System (ADS)
Lee, Jysoo; Koplik, Joel
1999-01-01
Stokesian dynamics simulations are used to study the microscopic motion of particles suspended in fluids passing through porous media. Model porous media with fixed spherical particles are constructed, and mobile ones move through this fixed bed under the action of an ambient velocity field. The pore-scale motion of individual suspended particles at pore junctions is first considered. The relative particle flux into different possible directions exiting from a single pore, for two- and three-dimensional model porous media, is found to approximately equal the corresponding fractional channel width or area. Next, the waiting time distribution for particles which are delayed in a junction due to a stagnation point caused by a flow bifurcation is considered. The waiting times are found to be controlled by two-particle interactions, and the distributions take the same form in model porous media as in two-particle systems. A simple theoretical estimate of the waiting time is consistent with the simulations. It is found that perturbing such a slow-moving particle by another nearby one leads to rather complicated behavior. Finally, the stability of geometrically trapped particles is studied. For simple model traps, it is found that particles passing nearby can "relaunch" the trapped particle through its hydrodynamic interaction, although the conditions for relaunching depend sensitively on the details of the trap and its surroundings.
A description of rotations for DEM models of particle systems
NASA Astrophysics Data System (ADS)
Campello, Eduardo M. B.
2015-06-01
In this work, we show how a vector parameterization of rotations can be adopted to describe the rotational motion of particles within the framework of the discrete element method (DEM). It is based on the use of a special rotation vector, called Rodrigues rotation vector, and accounts for finite rotations in a fully exact manner. The use of fictitious entities such as quaternions or complicated structures such as Euler angles is thereby circumvented. As an additional advantage, stick-slip friction models with inter-particle rolling motion are made possible in a consistent and elegant way. A few examples are provided to illustrate the applicability of the scheme. We believe that simple vector descriptions of rotations are very useful for DEM models of particle systems.
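A minimal sketch of the Rodrigues-vector description, using the common convention α = 2 tan(θ/2)·(axis), for which the closed-form rotation matrix below is exact for any finite rotation short of a half-turn (the singularity at θ = π is the known limitation of this parameterization):

```python
import numpy as np

def skew(v):
    """Skew-symmetric (cross-product) matrix of a 3-vector."""
    return np.array([[0.0, -v[2],  v[1]],
                     [v[2],  0.0, -v[0]],
                     [-v[1], v[0],  0.0]])

def rotation_from_rodrigues(alpha):
    """Rotation matrix from the Rodrigues vector alpha = 2*tan(theta/2)*axis:
    R = I + 4/(4 + |alpha|^2) * (A + A@A/2), with A = skew(alpha).
    Equivalent to the Cayley transform, so R is exactly orthogonal; no
    quaternions or Euler angles are needed."""
    alpha = np.asarray(alpha, dtype=float)
    A = skew(alpha)
    a2 = float(alpha @ alpha)
    return np.eye(3) + (4.0 / (4.0 + a2)) * (A + 0.5 * (A @ A))
```

Incremental rotations in a DEM time step can then be composed by updating the Rodrigues vector directly, which is what makes consistent stick-slip friction with inter-particle rolling tractable in this framework.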
NASA Astrophysics Data System (ADS)
Franta, Daniel; Nečas, David; Giglia, Angelo; Franta, Pavel; Ohlídal, Ivan
2017-11-01
Optical characterization of magnesium fluoride thin films is performed in a wide spectral range from far infrared to extreme ultraviolet (0.01-45 eV) utilizing the universal dispersion model. Two film defects are taken into account: random roughness of the upper boundary and a defect transition layer at the lower boundary. An extension of the universal dispersion model is introduced in which the excitonic contributions are expressed as linear combinations of Gaussian and truncated Lorentzian terms. The spectral dependences of the optical constants are presented graphically and as a complete set of dispersion parameters, which allows tabulated optical constants with the required range and step to be generated using a simple utility in the newAD2 software package.
Dedkov, V S
2009-01-01
The specificity of the DNA methyltransferase M.Bsc4I was determined in cell lysate of Bacillus schlegelii 4. For this purpose, we used the methylation sensitivity of restriction endonucleases, as well as modeling of methylation, which consisted in editing DNA sequences by replacing methylated bases and their complementary bases. Substrate DNAs processed by M.Bsc4I were also used to study the sensitivity of several restriction endonucleases to methylation. It was shown that M.Bsc4I methylates 5'-Cm4CNNNNNNNGG-3' and that overlapping dcm-methylation blocks its activity. The proposed approach may prove sufficiently universal and simple for determining the specificity of DNA methyltransferases.
Testing averaged cosmology with type Ia supernovae and BAO data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, B.; Alcaniz, J.S.; Coley, A.A.
An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.
Using Biowin, Bayes, and batteries to predict ready biodegradability.
Boethling, Robert S; Lynch, David G; Jaworska, Joanna S; Tunkel, Jay L; Thom, Gary C; Webb, Simon
2004-04-01
Whether or not a given chemical substance is readily biodegradable is an important piece of information in risk screening for both new and existing chemicals. Despite the relatively low cost of Organization for Economic Cooperation and Development tests, data are often unavailable and biodegradability must be estimated. In this paper, we focus on the predictive value of selected Biowin models and model batteries using Bayesian analysis. Posterior probabilities, calculated based on performance with the model training sets using Bayes' theorem, were closely matched by actual performance with an expanded set of 374 premanufacture notice (PMN) substances. Further analysis suggested that a simple battery consisting of Biowin3 (survey ultimate biodegradation model) and Biowin5 (Ministry of International Trade and Industry [MITI] linear model) would have enhanced predictive power in comparison to individual models. Application of the battery to PMN substances showed that performance matched expectation. This approach significantly reduced both false positives for ready biodegradability and the overall misclassification rate. Similar results were obtained for a set of 63 pharmaceuticals using a battery consisting of Biowin3 and Biowin6 (MITI nonlinear model). Biodegradation data for PMNs tested in multiple ready tests or both inherent and ready biodegradation tests yielded additional insights that may be useful in risk screening.
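The battery logic rests on sequential application of Bayes' theorem. A hedged sketch with hypothetical sensitivity and specificity values (illustrative numbers, not the paper's measured training-set rates, and assuming independence between the two models):

```python
def posterior(prior, sens, spec, predicts_positive):
    """Update P(ready biodegradable) given one model's verdict via
    Bayes' theorem, assuming known sensitivity and specificity."""
    if predicts_positive:
        num = sens * prior
        den = sens * prior + (1 - spec) * (1 - prior)
    else:
        num = (1 - sens) * prior
        den = (1 - sens) * prior + spec * (1 - prior)
    return num / den

# Hypothetical two-model battery (e.g. Biowin3-like and Biowin5-like
# classifiers), both predicting "not readily biodegradable":
p = 0.3  # assumed prior probability of ready biodegradability
for sens, spec in [(0.85, 0.80), (0.80, 0.85)]:
    p = posterior(p, sens, spec, predicts_positive=False)
print(round(p, 3))
```

Two concordant negative verdicts drive the posterior well below the prior, which is the sense in which a battery reduces false positives relative to a single model.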
X-Ray Burst Oscillations: From Flame Spreading to the Cooling Wake
NASA Technical Reports Server (NTRS)
Mahmoodifar, Simin; Strohmayer, Tod
2016-01-01
Type I X-ray bursts are thermonuclear flashes observed from the surfaces of accreting neutron stars (NSs) in low mass X-ray binaries. Oscillations have been observed during the rise and/or decay of some of these X-ray bursts. Those seen during the rise can be well explained by a spreading hot spot model, but large amplitude oscillations in the decay phase remain mysterious because of the absence of a clear-cut source of asymmetry. To date there have not been any quantitative studies that consistently track the oscillation amplitude both during the rise and decay (cooling tail) of bursts. Here we compute the light curves and amplitudes of oscillations in X-ray burst models that realistically account for both flame spreading and subsequent cooling. We present results for several such "cooling wake" models: a "canonical" cooling model in which each patch on the NS surface heats and cools identically, or with a latitude-dependent cooling timescale set by the local effective gravity; and an "asymmetric" model in which parts of the star cool at significantly different rates. We show that while the canonical cooling models can generate oscillations in the tails of bursts, they cannot easily produce the highest observed modulation amplitudes. Alternatively, a simple phenomenological model with asymmetric cooling can achieve higher amplitudes consistent with the observations.
Model for macroevolutionary dynamics.
Maruvka, Yosef E; Shnerb, Nadav M; Kessler, David A; Ricklefs, Robert E
2013-07-02
The highly skewed distribution of species among genera, although challenging to macroevolutionists, provides an opportunity to understand the dynamics of diversification, including species formation, extinction, and morphological evolution. Early models were based on either the work by Yule [Yule GU (1925) Philos Trans R Soc Lond B Biol Sci 213:21-87], which neglects extinction, or a simple birth-death (speciation-extinction) process. Here, we extend the more recent development of a generic, neutral speciation-extinction (of species)-origination (of genera; SEO) model for macroevolutionary dynamics of taxon diversification. Simulations show that deviations from the homogeneity assumptions in the model can be detected in species-per-genus distributions. The SEO model fits observed species-per-genus distributions well for class-to-kingdom-sized taxonomic groups. The model's predictions for the appearance times (the time of the first existing species) of the taxonomic groups also approximately match estimates based on molecular inference and fossil records. Unlike estimates based on analyses of phylogenetic reconstruction, fitted extinction rates for large clades are close to speciation rates, consistent with high rates of species turnover and the relatively slow change in diversity observed in the fossil record. Finally, the SEO model generally supports the consistency of generic boundaries based on morphological differences between species and provides a comparator for rates of lineage splitting and morphological evolution.
Barrett, Matthew JP; Suresh, Vinod
2013-01-01
Neural activation triggers a rapid, focal increase in blood flow and thus oxygen delivery. Local oxygen consumption also increases, although not to the same extent as oxygen delivery. This ‘uncoupling' enables a number of widely-used functional neuroimaging techniques; however, the physiologic mechanisms that govern oxygen transport under these conditions remain unclear. Here, we explore this dynamic process using a new mathematical model. Motivated by experimental observations and previous modeling, we hypothesized that functional recruitment of capillaries has an important role during neural activation. Using conventional mechanisms alone, the model predictions were inconsistent with in vivo measurements of oxygen partial pressure. However, dynamically increasing net capillary permeability, a simple description of functional recruitment, led to predictions consistent with the data. Increasing permeability in all vessel types had the same effect, but two alternative mechanisms were unable to produce predictions consistent with the data. These results are further evidence that conventional models of oxygen transport are not sufficient to predict dynamic experimental data. The data and modeling suggest that it is necessary to include a mechanism that dynamically increases net vascular permeability. While the model cannot distinguish between the different possibilities, we speculate that functional recruitment could have this effect in vivo. PMID:23673433
Modeling the effect of exogenous melatonin on the sleep-wake switch.
Johnson, Nicholas; Jain, Gaurav; Sandberg, Lianne; Sheets, Kevin
2012-01-01
According to the Centers for Disease Control and Prevention and the Institute of Medicine of the National Academies, insufficient sleep has become a public health epidemic. Approximately 50-70 million adults (20 years or older) suffer from some disorder of sleep and wakefulness, hindering daily functioning and adversely affecting health and longevity. Melatonin, a naturally produced hormone which plays a role in sleep-wake regulation, is currently offered as an over-the-counter sleep aid. However, the effects of melatonin on the sleep-wake cycle are incompletely understood. The goal of this modeling study was to incorporate the effects of exogenous melatonin administration into a mathematical model of the human sleep-wake switch. The model developed herein adds a simple kinetic model of the MT1 melatonin receptor to an existing model which simulates the interactions of different neuronal groups thought to be involved in sleep-wake regulation. Preliminary results were obtained by simulating the effects of an exogenous melatonin dose typical of over-the-counter sleep aids. The model predicted an increase in homeostatic sleep drive and a resulting alteration in circadian rhythm consistent with experimental results. The time of melatonin administration was also observed to have a strong influence on the sleep-wake effects elicited, which is also consistent with prior experimental findings.
Comparison of rigorous and simple vibrational models for the CO2 gasdynamic laser
NASA Technical Reports Server (NTRS)
Monson, D. J.
1977-01-01
The accuracy of a simple vibrational model for computing the gain in a CO2 gasdynamic laser is assessed by comparing results computed from it with results computed from a rigorous vibrational model. The simple model is that of Anderson et al. (1971), in which the vibrational kinetics are modeled by grouping the nonequilibrium vibrational degrees of freedom into two modes, to each of which there corresponds an equation describing vibrational relaxation. The two models agree fairly well in the computed gain at low temperatures, but the simple model predicts too high a gain at the higher temperatures of current interest. The sources of error contributing to the overestimation given by the simple model are determined by examining the simplified relaxation equations.
Adding Temporal Characteristics to Geographical Schemata and Instances: A General Framework
NASA Astrophysics Data System (ADS)
Ota, Morishige
2018-05-01
This paper proposes the temporal general feature model (TGFM) as a meta-model for application schemata representing changes of real-world phenomena. It is not very easy to determine history directly from the current application schemata, even if the revision notes are attached to the specification. To solve this problem, the rules for description of the succession between previous and posterior components are added to the general feature model, thus resulting in TGFM. After discussing the concepts associated with the new model, simple examples of application schemata are presented as instances of TGFM. Descriptors for changing properties, the succession of changing properties in moving features, and the succession of features and associations are introduced. The modeling methods proposed in this paper will contribute to the acquisition of consistent and reliable temporal geospatial data.
The Formation of Fibrils by Intertwining of Filaments: Model and Application to Amyloid Aβ Protein
van Gestel, Jeroen; de Leeuw, Simon W.
2007-01-01
We outline a model that describes the interaction of rods that form intertwined bundles. In this simple model, we compare the elastic energy penalty that arises due to the deformation of the rods to the gain in binding energy upon intertwining. We find that, for proper values of the bending Young's modulus and the binding energy, a helical pitch may be found for which the energy of intertwining is most favorable. We apply our description to the problem of Alzheimer's Aβ protein fibrillization. If we forbid configurations that exhibit steric overlap between the protofilaments that make up a protein fibril, our model predicts that fibrils consisting of three protofilaments shall form. This agrees well with experimental results. Our model can also provide an estimate for the helical pitch of suitable fibrils. PMID:17114229
Mesoscopic model for binary fluids
NASA Astrophysics Data System (ADS)
Echeverria, C.; Tucci, K.; Alvarez-Llamoza, O.; Orozco-Guillén, E. E.; Morales, M.; Cosenza, M. G.
2017-10-01
We propose a model for studying binary fluids based on the mesoscopic molecular simulation technique known as multiparticle collision, where the space and state variables are continuous, and time is discrete. We include a repulsion rule to simulate segregation processes that does not require calculation of the interaction forces between particles, so binary fluids can be described on a mesoscopic scale. The model is conceptually simple and computationally efficient; it maintains Galilean invariance and conserves the mass and energy in the system at the micro- and macro-scale, whereas momentum is conserved globally. For a wide range of temperatures and densities, the model yields results in good agreement with the known properties of binary fluids, such as the density profile, interface width, phase separation, and phase growth. We also apply the model to the study of binary fluids in crowded environments with consistent results.
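The collision step of the multiparticle collision technique on which the model is based can be sketched generically: particles are sorted into cells and each velocity is rotated about its cell's mean velocity, which conserves momentum within every cell. This is a plain 2-D SRD-style collision step for illustration, not the authors' repulsion-rule extension:

```python
import math
import random
from collections import defaultdict

def mpc_collision_step(pos, vel, cell_size, alpha):
    """Multiparticle collision step: bin particles into square cells and
    rotate each velocity relative to its cell's mean velocity by +/-alpha
    (sign chosen at random per cell). Momentum is conserved per cell."""
    cells = defaultdict(list)
    for i, (x, y) in enumerate(pos):
        cells[(int(x // cell_size), int(y // cell_size))].append(i)
    for members in cells.values():
        mvx = sum(vel[i][0] for i in members) / len(members)
        mvy = sum(vel[i][1] for i in members) / len(members)
        theta = alpha if random.random() < 0.5 else -alpha
        c, s = math.cos(theta), math.sin(theta)
        for i in members:
            dvx, dvy = vel[i][0] - mvx, vel[i][1] - mvy
            vel[i] = (mvx + c * dvx - s * dvy, mvy + s * dvx + c * dvy)
    return vel
```

Because only the fluctuations about the cell mean are rotated, the total momentum of each cell (and hence of the system) is unchanged by the collision, which is the conservation property the abstract emphasizes.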
MOAB : a mesh-oriented database.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tautges, Timothy James; Ernst, Corey; Stimpson, Clint
A finite element mesh is used to decompose a continuous domain into a discretized representation. The finite element method solves PDEs on this mesh by modeling complex functions as a set of simple basis functions with coefficients at mesh vertices and prescribed continuity between elements. The mesh is one of the fundamental types of data linking the various tools in the FEA process (mesh generation, analysis, visualization, etc.). Thus, the representation of mesh data and operations on those data play a very important role in FEA-based simulations. MOAB is a component for representing and evaluating mesh data. MOAB can store structured and unstructured mesh, consisting of elements in the finite element 'zoo'. The functional interface to MOAB is simple yet powerful, allowing the representation of many types of metadata commonly found on the mesh. MOAB is optimized for efficiency in space and time, based on access to mesh in chunks rather than through individual entities, while also versatile enough to support individual entity access. The MOAB data model consists of a mesh interface instance, mesh entities (vertices and elements), sets, and tags. Entities are addressed through handles rather than pointers, to allow the underlying representation of an entity to change without changing the handle to that entity. Sets are arbitrary groupings of mesh entities and other sets. Sets also support parent/child relationships as a relation distinct from sets containing other sets. The directed graph provided by set parent/child relationships is useful for modeling topological relations from a geometric model or other metadata. Tags are named data which can be assigned to the mesh as a whole, individual entities, or sets.
Tags are a mechanism for attaching data to individual entities and sets are a mechanism for describing relations between entities; the combination of these two mechanisms is a powerful yet simple interface for representing metadata or application-specific data. For example, sets and tags can be used together to describe geometric topology, boundary condition, and inter-processor interface groupings in a mesh. MOAB is used in several ways in various applications. MOAB serves as the underlying mesh data representation in the VERDE mesh verification code. MOAB can also be used as a mesh input mechanism, using mesh readers included with MOAB, or as a translator between mesh formats, using readers and writers included with MOAB. The remainder of this report is organized as follows. Section 2, 'Getting Started', provides a few simple examples of using MOAB to perform simple tasks on a mesh. Section 3 discusses the MOAB data model in more detail, including some aspects of the implementation. Section 4 summarizes the MOAB function API. Section 5 describes some of the tools included with MOAB, and the implementation of mesh readers/writers for MOAB. Section 6 contains a brief description of MOAB's relation to the TSTT mesh interface. Section 7 gives a conclusion and future plans for MOAB development. Section 8 gives references cited in this report. A reference description of the full MOAB API is contained in Section 9.
Brain Modularity Mediates the Relation between Task Complexity and Performance
NASA Astrophysics Data System (ADS)
Ye, Fengdan; Yue, Qiuhai; Martin, Randi; Fischer-Baum, Simon; Ramos-Nuñez, Aurora; Deem, Michael
Recent work in cognitive neuroscience has focused on analyzing the brain as a network, rather than a collection of independent regions. Prior studies taking this approach have found that individual differences in the degree of modularity of the brain network relate to performance on cognitive tasks. However, inconsistent results concerning the direction of this relationship have been obtained, with some tasks showing better performance as modularity increases, and other tasks showing worse performance. A recent theoretical model suggests that these inconsistencies may be explained on the grounds that high-modularity networks favor performance on simple tasks whereas low-modularity networks favor performance on complex tasks. The current study tests these predictions by relating modularity from resting-state fMRI to performance on a set of behavioral tasks. Complex and simple tasks were defined on the basis of whether they drew on executive attention. Consistent with predictions, we found a negative correlation between individuals' modularity and their performance on the complex tasks but a positive correlation with performance on the simple tasks. The results presented here provide a framework for linking measures of whole brain organization to cognitive processing.
NASA Astrophysics Data System (ADS)
Chan, J. Y. H.; Kelly, R. E. J.; Evans, S. G.
2014-12-01
Glacierized regions are one of the most dynamic land surface environments on the planet (Evans and Delaney, In Press). They are susceptible to various types of natural hazards such as landslides, glacier avalanches, and glacial lake outburst floods (GLOF). GLOF events are increasingly common and present catastrophic flood hazards, the causes of which are sensitive to climate change in complex high mountain topography (IPCC, 2013). Inundation and debris flows from GLOF events have repeatedly caused significant infrastructure damage and loss of human lives in the high mountain regions of the world (Huggel et al., 2002). The research is designed to develop methods for the consistent detection of glacier lake formation during the Landsat Thematic Mapper (TM) era (1982 - present), to quantify the frequency of glacier lake development and estimate lake volume using Landsat imagery and digital elevation model (DEM) data. Landsat TM scenes are used to identify glacier lakes in the Shimshal and Shaksgam valleys, particularly the development of Lake Virjeab in 2000 and Kyagar Lake in 1998. A simple thresholding technique using Landsat TM infrared bands, along with object-based segmentation approaches, is used to isolate lake extent. Lake volume is extracted by intersecting the lake extent with the DEM surface. Based on previous studies and DEM characterization in the region, the Shuttle Radar Topography Mission (SRTM) DEM is preferred over the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) GDEM due to higher accuracy. Calculated errors in SRTM height estimates are 5.81 m compared with 8.34 m for ASTER. SRTM data are also preferred because the DEM measurements were made over a short duration, making the DEM internally consistent. Lake volume derived from the Landsat TM imagery and DEM is incorporated into a simple GLOF model identified by Clague and Mathews (1973) to estimate the potential peak discharge (Qmax) of a GLOF event.
We compare the simple Qmax estimates with those from a more complex model of lake outflow time-varying discharge using the approach developed by Ng et al. (2007).
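The Clague and Mathews relation referred to above is an empirical power law between released lake volume and peak discharge. A sketch of its commonly cited form (the coefficient and exponent are the standard literature values, assumed here since the abstract does not state them):

```python
def glof_peak_discharge(volume_m3):
    """Estimate GLOF peak discharge (m^3/s) from released lake volume (m^3)
    using the Clague-Mathews empirical relation in its commonly cited form:
    Qmax = 75 * (V / 1e6) ** 0.67."""
    return 75.0 * (volume_m3 / 1e6) ** 0.67

# A hypothetical 10 million m^3 outburst:
print(round(glof_peak_discharge(10e6)), "m^3/s")
```

Such regression-based estimates give only an order-of-magnitude Qmax, which is why the study compares them against a time-varying outflow model such as that of Ng et al. (2007).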
Stochastic summation of empirical Green's functions
Wennerberg, Leif
1990-01-01
Two simple strategies are presented that use random delay times for repeatedly summing the record of a relatively small earthquake to simulate the effects of a larger earthquake. The simulations do not assume any fault plane geometry or rupture dynamics, but rely only on the ω−2 spectral model of an earthquake source and elementary notions of source complexity. The strategies simulate ground motions for all frequencies within the bandwidth of the record of the event used as a summand. The first strategy, which introduces the basic ideas, is a single-stage procedure that consists of simply adding many small events with random time delays. The probability distribution for delays has the property that its amplitude spectrum is determined by the ratio of ω−2 spectra, and its phase spectrum is identically zero. A simple expression is given for the computation of this zero-phase scaling distribution. The moment rate function resulting from the single-stage simulation is quite simple and hence is probably not realistic for high-frequency (>1 Hz) ground motion of events larger than ML ∼ 4.5 to 5. The second strategy is a two-stage summation that simulates source complexity with a few random subevent delays determined using the zero-phase scaling distribution, and then clusters energy around these delays to get an ω−2 spectrum for the sum. Thus, the two-stage strategy allows simulations of complex events of any size for which the ω−2 spectral model applies. Interestingly, a single-stage simulation with too few ω−2 records to get a good fit to an ω−2 large-event target spectrum yields a record whose spectral asymptotes are consistent with the ω−2 model, but that includes a region in its spectrum between the corner frequencies of the larger and smaller events reasonably approximated by a power law trend.
This spectral feature has also been discussed as reflecting the process of partial stress release (Brune, 1970), an asperity failure (Boatwright, 1984), or the breakdown of ω−2 scaling due to rupture significantly longer than the width of the seismogenic zone (Joyner, 1984).
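The single-stage strategy amounts to stacking delayed copies of the small-event record. A minimal sketch, using a plain uniform delay distribution for illustration (the paper instead derives the zero-phase delay distribution from the ratio of ω−2 spectra):

```python
import random

def single_stage_sum(record, n_events, max_delay, seed=None):
    """Stack n_events copies of a small-event record (a list of samples),
    each shifted by a random integer delay in [0, max_delay] samples,
    to simulate the record of a larger event."""
    rng = random.Random(seed)
    out = [0.0] * (len(record) + max_delay)
    for _ in range(n_events):
        d = rng.randint(0, max_delay)
        for i, a in enumerate(record):
            out[d + i] += a
    return out

# Hypothetical 5-sample "small event" summed 100 times:
small = [0.0, 1.0, -1.0, 0.5, 0.0]
big = single_stage_sum(small, n_events=100, max_delay=20, seed=1)
```

With zero delay spread the stack simply scales the summand, while a broad delay distribution spreads the moment release in time; shaping that distribution is what matches the target spectrum.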
Large liquid rocket engine transient performance simulation system
NASA Technical Reports Server (NTRS)
Mason, J. R.; Southwick, R. D.
1989-01-01
Phase 1 of the Rocket Engine Transient Simulation (ROCETS) program consists of seven technical tasks: architecture; system requirements; component and submodel requirements; submodel implementation; component implementation; submodel testing and verification; and subsystem testing and verification. These tasks were completed. Phase 2 of ROCETS consists of two technical tasks: Technology Test Bed Engine (TTBE) model data generation; and system testing verification. During this period specific coding of the system processors was begun and the engineering representations of Phase 1 were expanded to produce a simple model of the TTBE. As the code was completed, some minor modifications to the system architecture centering on the global variable common, GLOBVAR, were necessary to increase processor efficiency. The engineering modules completed during Phase 2 are listed: INJTOO - main injector; MCHBOO - main chamber; NOZLOO - nozzle thrust calculations; PBRNOO - preburner; PIPE02 - compressible flow without inertia; PUMPOO - polytropic pump; ROTROO - rotor torque balance/speed derivative; and TURBOO - turbine. Detailed documentation of these modules is in the Appendix. In addition to the engineering modules, several submodules were also completed. These submodules include combustion properties, component performance characteristics (maps), and specific utilities. Specific coding was begun on the system configuration processor. All functions necessary for multiple module operation were completed but the SOLVER implementation is still under development. This system, the Verification Checkout Facility (VCF) allows interactive comparison of module results to store data as well as provides an intermediate checkout of the processor code. After validation using the VCF, the engineering modules and submodules were used to build a simple TTBE.
Effect of Critical Displacement Parameter on Slip Regime at Subduction Fault
NASA Astrophysics Data System (ADS)
Muldashev, Iskander; Sobolev, Stephan
2016-04-01
It is widely accepted that for simple fault models the value of the critical displacement parameter (Dc) in the Dieterich-Ruina rate-and-state friction law controls the transition from a stick-slip regime at low Dc to a non-seismic creep regime at large Dc. However, neither the value of the "transition" Dc nor the character of the transition is known for a realistic subduction zone setting. Here we investigate the effect of Dc on the slip regime at subduction faults for two setups: a generic model similar to a simple-shear elastic slider under quasistatic loading, and a full subduction model with appropriate geometry and stress and temperature distributions similar to the setting at the site of the Great Chile Earthquake of 1960. In our modeling we use a finite element numerical technique that employs non-linear elasto-visco-plastic rheology in the entire model domain with rate-and-state plasticity within the fault zone. The model generates spontaneous earthquake sequences. An adaptive time-step integration procedure varies the time step from 40 seconds at instability (earthquake) and gradually increases it to 5 years during postseismic relaxation. The technique allows us to observe the effect of Dc on the period and magnitude of earthquakes through the cycles. We demonstrate that our results for the generic model are consistent with previous theoretical and numerical modeling results. For the full subduction model we obtain a transition from non-seismic creep to stick-slip at Dc of about 20 cm. We will demonstrate and discuss the features of the transition regimes in both the generic and realistic subduction models.
ERIC Educational Resources Information Center
Unsworth, Nash; Engle, Randall W.
2006-01-01
Complex (working memory) span tasks have generally shown larger and more consistent correlations with higher-order cognition than have simple (or short-term memory) span tasks. The relation between verbal complex and simple verbal span tasks to fluid abilities as a function of list-length was examined. The results suggest that the simple…
A simple modern correctness condition for a space-based high-performance multiprocessor
NASA Technical Reports Server (NTRS)
Probst, David K.; Li, Hon F.
1992-01-01
A number of U.S. national programs, including space-based detection of ballistic missile launches, envisage putting significant computing power into space. Given sufficient progress in low-power VLSI, multichip-module packaging and liquid-cooling technologies, we will see design of high-performance multiprocessors for individual satellites. In very high speed implementations, performance depends critically on tolerating large latencies in interprocessor communication; without latency tolerance, performance is limited by the vastly differing time scales in processor and data-memory modules, including interconnect times. The modern approach to tolerating remote-communication cost in scalable, shared-memory multiprocessors is to use a multithreaded architecture, and alter the semantics of shared memory slightly, at the price of forcing the programmer either to reason about program correctness in a relaxed consistency model or to agree to program in a constrained style. The literature on multiprocessor correctness conditions has become increasingly complex, and sometimes confusing, which may hinder its practical application. We propose a simple modern correctness condition for a high-performance, shared-memory multiprocessor; the correctness condition is based on a simple interface between the multiprocessor architecture and the parallel programming system.
Fire forbids fifty-fifty forest
Staal, Arie; Hantson, Stijn; Holmgren, Milena; Pueyo, Salvador; Bernardi, Rafael E.; Flores, Bernardo M.; Xu, Chi; Scheffer, Marten
2018-01-01
Recent studies have interpreted patterns of remotely sensed tree cover as evidence that forest with intermediate tree cover might be unstable in the tropics, as it will tip into either a closed forest or a more open savanna state. Here we show that across all continents the frequency of wildfires rises sharply as tree cover falls below ~40%. Using a simple empirical model, we hypothesize that the steepness of this pattern causes intermediate tree cover (30‒60%) to be unstable for a broad range of assumptions on tree growth and fire-driven mortality. We show that across all continents, observed frequency distributions of tropical tree cover are consistent with this hypothesis. We argue that percolation of fire through an open landscape may explain the remarkably universal rise of fire frequency around a critical tree cover, but we show that simple percolation models cannot predict the actual threshold quantitatively. The fire-driven instability of intermediate states implies that tree cover will not change smoothly with climate or other stressors and shifts between closed forest and a state of low tree cover will likely tend to be relatively sharp and difficult to reverse. PMID:29351323
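The percolation argument can be illustrated with a minimal site-percolation sketch (a generic lattice model for illustration, not the paper's calibrated fire model): each cell is flammable with probability 1 − cover, fire starts along one edge, and it spreads only through flammable neighbors.

```python
import random
from collections import deque

def fire_spans(n, tree_cover, seed=None):
    """Site-percolation sketch on an n x n lattice: each cell is open
    (flammable) with probability 1 - tree_cover; fire ignites along the
    left edge and spreads to 4-neighboring open cells. Returns True if
    the fire reaches the right edge."""
    rng = random.Random(seed)
    open_cell = [[rng.random() > tree_cover for _ in range(n)] for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    q = deque((r, 0) for r in range(n) if open_cell[r][0])
    for r, c in q:
        seen[r][c] = True
    while q:
        r, c = q.popleft()
        if c == n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n and open_cell[rr][cc] and not seen[rr][cc]:
                seen[rr][cc] = True
                q.append((rr, cc))
    return False
```

On large lattices, spanning fires become likely once the open fraction exceeds the 2-D site-percolation threshold (~0.593), i.e. once tree cover falls below roughly 40%, qualitatively echoing the observed rise in fire frequency; as the authors stress, however, such simple percolation models do not reproduce the actual threshold quantitatively.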
Fire forbids fifty-fifty forest.
van Nes, Egbert H; Staal, Arie; Hantson, Stijn; Holmgren, Milena; Pueyo, Salvador; Bernardi, Rafael E; Flores, Bernardo M; Xu, Chi; Scheffer, Marten
2018-01-01
Recent studies have interpreted patterns of remotely sensed tree cover as evidence that forest with intermediate tree cover might be unstable in the tropics, as it will tip into either a closed forest or a more open savanna state. Here we show that across all continents the frequency of wildfires rises sharply as tree cover falls below ~40%. Using a simple empirical model, we hypothesize that the steepness of this pattern causes intermediate tree cover (30‒60%) to be unstable for a broad range of assumptions on tree growth and fire-driven mortality. We show that across all continents, observed frequency distributions of tropical tree cover are consistent with this hypothesis. We argue that percolation of fire through an open landscape may explain the remarkably universal rise of fire frequency around a critical tree cover, but we show that simple percolation models cannot predict the actual threshold quantitatively. The fire-driven instability of intermediate states implies that tree cover will not change smoothly with climate or other stressors and shifts between closed forest and a state of low tree cover will likely tend to be relatively sharp and difficult to reverse.
Functional renormalization group analysis of tensorial group field theories on R^d
NASA Astrophysics Data System (ADS)
Geloun, Joseph Ben; Martini, Riccardo; Oriti, Daniele
2016-07-01
Rank-d tensorial group field theories are quantum field theories (QFTs) defined on a group manifold G^{×d}, which represent a nonlocal generalization of standard QFT and a candidate formalism for quantum gravity, since, when endowed with appropriate data, they can be interpreted as defining a field theoretic description of the fundamental building blocks of quantum spacetime. Their renormalization analysis is crucial both for establishing their consistency as quantum field theories and for studying the emergence of continuum spacetime and geometry from them. In this paper, we study the renormalization group flow of two simple classes of tensorial group field theories (TGFTs), defined for the group G = R for arbitrary rank, both without and with gauge invariance conditions, by means of functional renormalization group techniques. The issue of IR divergences is tackled by the definition of a proper thermodynamic limit for TGFTs. We map the phase diagram of such models, in a simple truncation, and identify both UV and IR fixed points of the RG flow. Encouragingly, for all the models we study, we find evidence for the existence of a phase transition of condensation type.
[Disposable nursing applicator-pocket of indwelling central venous catheter].
Wei, Congli; Ma, Chunyuan
2017-11-01
Catheter-related infection is the most common complication of central venous catheterization; the pathogens mainly originate from the pipe joint and the skin around the puncture site. How to prevent catheter infection is an important issue in clinical nursing. The utility model discloses a "disposable nursing applicator-pocket for an indwelling central venous catheter", which is mainly used for fixation and protection. The main structure consists of two parts: a medical applicator to protect the skin around the puncture site, and a gauze pocket to protect the catheter's external connector. When in use, the catheter connector is fitted into the pocket, and the applicator is then applied to cover the puncture point on the skin. The integrated design of the medical applicator and gauze pocket realizes the double functions of fixation and protection. The disposable nursing applicator-pocket is made of medical absorbent gauze (outer layer) and non-woven fabric (inner layer), making it comfortable, breathable, dust-filtering, bacteria-filtering, waterproof, antiperspirant and anti-pollution. The utility model has the advantages of simple structure, low cost, simple operation, effective protection, and easy realization and popularization.
NASA Astrophysics Data System (ADS)
You, Gexin; Liu, Xinsen; Chen, Xiri; Yang, Bo; Zhou, Xiuwen
2018-06-01
In this study, a two-element model consisting of a non-linear spring and a viscous dashpot was proposed to simulate the tensile curve of polyurethane fibers. The results showed that the two-element model simulates the tensile curve of the polyurethane fibers better than the existing three-element and four-element models, while remaining simple and applicable. The effects of the isocyanate index (R) on hydrogen bonding (H-bonding) and the micro-phase separation of polyurethane fibers were investigated by Fourier transform infrared spectroscopy and X-ray pyrometer, respectively. The degree of H-bonding and micro-phase separation first increased and then decreased as the R value increased, reaching a maximum at R = 1.76, which is in good agreement with the model parameters: the viscosity coefficient η and the initial modulus c.
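The two-element construction is described only verbally above. One plausible reading, a non-linear (power-law) spring in parallel with a linear dashpot under a constant strain rate, can be sketched as follows; the parameters c, b and η are hypothetical, not the paper's fitted values:

```python
import numpy as np

def stress(strain, strain_rate, c=2.0, b=1.5, eta=0.3):
    """Two-element model: non-linear spring (c * strain**b) in parallel
    with a viscous dashpot (eta * dstrain/dt).  Illustrative parameters."""
    return c * strain**b + eta * strain_rate

rate = 0.1                        # constant engineering strain rate (1/s)
eps = np.linspace(0.0, 3.0, 301)  # elastomeric fibers reach large strains
sigma = stress(eps, rate)         # model tensile curve
```

At zero strain the dashpot alone carries the load (σ = η·rate), and the curve then rises monotonically with the spring term, the qualitative shape such a two-element model produces.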
NASA Astrophysics Data System (ADS)
Igarashi, Akito; Tsukamoto, Shinji
2000-02-01
Biological molecular motors drive unidirectional transport and transduce chemical energy to mechanical work. In order to identify this energy conversion, which is a common feature of molecular motors, many workers have studied various physical models, which consist of Brownian particles in spatially periodic potentials. Most of the models are, however, based on "single-particle" dynamics and too simple as models for biological motors, especially for actin-myosin motors, which cause muscle contraction. In this paper, particles coupled by elastic strings in an asymmetric periodic potential are considered as a model for the motors. We investigate the dynamics of the model and calculate the efficiency of energy conversion using molecular dynamics methods. In particular, we find that the velocity and efficiency of the elastically coupled particles, where the natural length of the springs is incommensurate with the period of the periodic potential, are larger than those of the corresponding single-particle model.
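A minimal numerical sketch of this model class, overdamped Brownian particles coupled by springs in an asymmetric periodic potential, is given below. The potential shape, coupling stiffness, natural length and temperature are illustrative assumptions, and no quantitative velocity or efficiency claim is made:

```python
import numpy as np

rng = np.random.default_rng(0)

def ratchet_force(x, V0=1.0, L=1.0):
    """Force from an asymmetric periodic potential,
    V(x) = V0 * [sin(2*pi*x/L) + 0.25*sin(4*pi*x/L)]."""
    return -V0 * (2 * np.pi / L) * (np.cos(2 * np.pi * x / L)
                                    + 0.5 * np.cos(4 * np.pi * x / L))

def simulate(n=5, k=10.0, a=0.7, kT=0.5, dt=1e-3, steps=20000):
    """Overdamped Langevin dynamics of n particles coupled by springs
    (stiffness k, natural length a) in the potential; a is deliberately
    incommensurate with the potential period L = 1."""
    x = np.arange(n) * a
    for _ in range(steps):
        spring = np.zeros(n)
        spring[:-1] += k * (x[1:] - x[:-1] - a)   # pull from right neighbor
        spring[1:]  -= k * (x[1:] - x[:-1] - a)   # pull from left neighbor
        noise = np.sqrt(2 * kT * dt) * rng.standard_normal(n)
        x = x + (ratchet_force(x) + spring) * dt + noise
    return x

final_positions = simulate()
```

Averaging the center-of-mass displacement over many realizations, and comparing against the n=1 case, would reproduce the kind of coupled-versus-single-particle comparison the paper reports.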
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
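For the simple linear case, the input-error bias is the classical attenuation effect: random measurement error in rainfall shrinks the estimated rainfall-runoff slope by the factor var(x)/(var(x)+var(u)). A synthetic sketch with made-up coefficients (not the Turtle Creek data):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
rain_true = rng.normal(10.0, 2.0, n)                 # true storm rainfall
runoff = 1.0 + 2.0 * rain_true + rng.normal(0.0, 1.0, n)

# Rainfall measured with error (e.g., a sparse gauge network).
rain_obs = rain_true + rng.normal(0.0, 2.0, n)

slope_true_input = np.polyfit(rain_true, runoff, 1)[0]
slope_noisy_input = np.polyfit(rain_obs, runoff, 1)[0]

# Attenuation: E[slope] = b * var(x) / (var(x) + var(u)) = 2 * 4 / 8 = 1.
```

The regression on error-free rainfall recovers the true slope of 2, while the regression on the noisy input is biased toward half that value, which is exactly the parameter-bias mechanism the abstract describes.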
Molecular interactions in nanocellulose assembly
NASA Astrophysics Data System (ADS)
Nishiyama, Yoshiharu
2017-12-01
The contribution of hydrogen bonds and the London dispersion force in the cohesion of cellulose is discussed in the light of the structure, spectroscopic data, empirical molecular-modelling parameters and thermodynamics data of analogue molecules. The hydrogen bond of cellulose is mainly electrostatic, and the stabilization energy in cellulose for each hydrogen bond is estimated to be between 17 and 30 kJ mol-1. On average, hydroxyl groups of cellulose form hydrogen bonds comparable to those of other simple alcohols. The London dispersion interaction may be estimated from empirical attraction terms in molecular modelling by simple integration over all components. Although this interaction extends to relatively large distances in colloidal systems, the short-range interaction is dominant for the cohesion of cellulose and is equivalent to a compression of 3 GPa. Trends in the heats of vaporization of alkyl alcohols and alkanes suggest a stabilization by such hydroxyl group hydrogen bonding to be of the order of 24 kJ mol-1, whereas the London dispersion force contributes about 0.41 kJ mol-1 Da-1. The simple arithmetic sum of the energy is consistent with the experimental enthalpy of sublimation of small sugars, where the main part of the cohesive energy comes from hydrogen bonds. For cellulose, because of the reduced number of hydroxyl groups, the London dispersion force provides the main contribution to intermolecular cohesion. This article is part of a discussion meeting issue 'New horizons for cellulose nanotechnology'.
Wenk, H.-R.; Takeshita, T.; Bechler, E.; Erskine, B.G.; Matthies, S.
1987-01-01
The pattern of lattice preferred orientation (texture) in deformed rocks is an expression of the strain path and the acting deformation mechanisms. A first indication about the strain path is given by the symmetry of pole figures: coaxial deformation produces orthorhombic pole figures, while non-coaxial deformation yields monoclinic or triclinic pole figures. More quantitative information about the strain history can be obtained by comparing natural textures with experimental ones and with theoretical models. For this comparison, a representation in the sensitive three-dimensional orientation distribution space is extremely important and efforts are made to explain this concept. We have been investigating differences between pure shear and simple shear deformation in carbonate rocks and have found considerable agreement between textures produced in plane strain experiments and predictions based on the Taylor model. We were able to simulate the observed changes with strain history (coaxial vs non-coaxial) and the profound texture transition which occurs with increasing temperature. Two natural calcite textures were then selected which we interpreted by comparing them with the experimental and theoretical results. A marble from the Santa Rosa mylonite zone in southern California displays orthorhombic pole figures with patterns consistent with low temperature deformation in pure shear. A limestone from the Tanque Verde detachment fault in Arizona has a monoclinic fabric from which we can interpret that 60% of the deformation occurred by simple shear. © 1987.
Uncovering Oscillations, Complexity, and Chaos in Chemical Kinetics Using Mathematica
NASA Astrophysics Data System (ADS)
Ferreira, M. M. C.; Ferreira, W. C., Jr.; Lino, A. C. S.; Porto, M. E. G.
1999-06-01
Unlike reactions with no peculiar temporal behavior, in oscillatory reactions concentrations can rise and fall spontaneously in a cyclic or disorganized fashion. In this article, the software Mathematica is used for a theoretical study of kinetic mechanisms of oscillating and chaotic reactions. A first simple example is introduced through a three-step reaction, called the Lotka model, which exhibits a temporal behavior characterized by damped oscillations. The phase plane method of dynamic systems theory is introduced for a geometric interpretation of the reaction kinetics without solving the differential rate equations. The equations are later numerically solved using the built-in routine NDSolve and the results are plotted. The next example, still with a very simple mechanism, is the Lotka-Volterra model reaction, which oscillates indefinitely. The kinetic process and rate equations are also represented by a three-step reaction mechanism. The most important difference between this and the former reaction is that the undamped oscillation has two autocatalytic steps instead of one. The periods of oscillations are obtained by using the discrete Fourier transform (DFT)-a well-known tool in spectroscopy, although not so common in this context. In the last section, it is shown how a simple model of biochemical interactions can be useful to understand the complex behavior of important biological systems. The model consists of two allosteric enzymes coupled in series and activated by its own products. This reaction scheme is important for explaining many metabolic mechanisms, such as the glycolytic oscillations in muscles, yeast glycolysis, and the periodic synthesis of cyclic AMP. A few of many possible dynamic behaviors are exemplified through a prototype glycolytic enzymatic reaction proposed by Decroly and Goldbeter. By simply modifying the initial concentrations, limit cycles, chaos, and birhythmicity are computationally obtained and visualized.
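The Lotka-Volterra exercise described above can be reproduced outside Mathematica. The sketch below uses a hand-rolled RK4 integrator in place of NDSolve and NumPy's FFT in place of the article's DFT step; rate constants and initial concentrations are illustrative:

```python
import numpy as np

def lotka_volterra(state, a=1.0, b=1.0, c=1.0, d=1.0):
    """Undamped oscillator: dx/dt = a*x - b*x*y, dy/dt = -c*y + d*x*y."""
    x, y = state
    return np.array([a * x - b * x * y, -c * y + d * x * y])

def rk4(f, state, dt, steps):
    out = np.empty((steps + 1, len(state)))
    out[0] = state
    for i in range(steps):
        k1 = f(state)
        k2 = f(state + 0.5 * dt * k1)
        k3 = f(state + 0.5 * dt * k2)
        k4 = f(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i + 1] = state
    return out

dt, steps = 0.01, 10000                       # integrate t = 0 .. 100
traj = rk4(lotka_volterra, np.array([1.1, 1.0]), dt, steps)

# Period of oscillation via the discrete Fourier transform.
x = traj[:, 0] - traj[:, 0].mean()
freqs = np.fft.rfftfreq(len(x), d=dt)
peak = freqs[np.argmax(np.abs(np.fft.rfft(x)))]
period = 1.0 / peak
```

Near the equilibrium (1, 1) the small-amplitude period is 2π/√(ac) ≈ 6.28, which the FFT peak recovers to within the frequency resolution of the window.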
Testing a single regression coefficient in high dimensional linear models
Lan, Wei; Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling
2017-01-01
In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively. PMID:28663668
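The CPS idea can be sketched on synthetic data: screen for predictors highly correlated with the target covariate, then run ordinary least squares on the target plus those controls. The data-generating process and the 0.2 screening threshold below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 1000                          # more covariates than observations
X = rng.standard_normal((n, p))
X[:, 1] = 0.5 * X[:, 0] + np.sqrt(0.75) * rng.standard_normal(n)  # confounder
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.standard_normal(n)

# Naive marginal regression of y on the target covariate x_0 is biased,
# because the omitted x_1 is correlated with x_0.
naive = np.polyfit(X[:, 0], y, 1)[0]

# Screening step: keep predictors highly correlated with the target.
corr = np.array([np.corrcoef(X[:, 0], X[:, j])[0, 1] for j in range(1, p)])
controls = 1 + np.nonzero(np.abs(corr) > 0.2)[0]

# OLS of y on the target plus its screened controls only.
Z = np.column_stack([np.ones(n), X[:, 0], X[:, controls]])
beta = np.linalg.lstsq(Z, y, rcond=None)[0]
```

Full OLS on all 1000 predictors is impossible with n = 300, but after screening, the low-dimensional fit recovers the target coefficient (2.0) that the naive marginal slope overstates, which is the mechanism that makes the classical z-test applicable again.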
ROSAT observations of pulsed soft X-ray emission from PSR 1055-52
NASA Technical Reports Server (NTRS)
Oegelman, Hakki; Finley, John P.
1993-01-01
Utilizing the position-sensitive proportional counter and the high-resolution imager aboard the orbiting X-ray observatory ROSAT, we have detected pulsations at the radio period from the pulsar PSR 1055-52. The pulse shapes are energy-dependent and show a transition at about 0.5 keV where the phase angle of the pulse peak changes by about -120 deg and the pulsed fraction increases from 11 percent to 63 percent toward larger energies. Simple spectral models are found to be unsatisfactory, while multicomponent models, such as a soft blackbody and hard power-law tail, yield better fits to the pulse-height data. The hard power-law tail is consistent with the extension of the recently reported EGRET results and may indicate a common emission mechanism for the X-ray through GeV gamma-ray regime. The soft blackbody component with T(∞) = (7.5 ± 0.6) × 10^5 K, if interpreted as the initial cooling of a neutron star, is consistent with standard cooling models and does not require the presence of exotic components.
Predictive modelling of flow in a two-dimensional intermediate-scale, heterogeneous porous media
Barth, Gilbert R.; Hill, M.C.; Illangasekare, T.H.; Rajaram, H.
2000-01-01
To better understand the role of sedimentary structures in flow through porous media, and to determine how small-scale laboratory-measured values of hydraulic conductivity relate to in situ values, this work deterministically examines flow through simple, artificial structures constructed for a series of intermediate-scale (10 m long), two-dimensional, heterogeneous, laboratory experiments. Nonlinear regression was used to determine optimal values of in situ hydraulic conductivity, which were compared to laboratory-measured values. Despite explicit numerical representation of the heterogeneity, the optimized values were generally greater than the laboratory-measured values. Discrepancies between measured and optimal values varied depending on the sand sieve size, but their contribution to error in the predicted flow was fairly consistent for all sands. Results indicate that, even under these controlled circumstances, laboratory-measured values of hydraulic conductivity need to be applied to models cautiously.
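The calibration approach, nonlinear regression of in situ hydraulic conductivity against flow observations, can be sketched for an idealized two-layer Darcy column. The geometry, conductivities and brute-force search below are illustrative stand-ins for the paper's tank experiment and regression code:

```python
import numpy as np

L1, L2 = 2.0, 3.0                 # layer thicknesses (m), illustrative
K_true = (5e-4, 2e-3)             # "in situ" conductivities (m/s)

def forward(K1, K2, dh):
    """1-D Darcy flow through two layers in series: flux and interface head
    (outlet head taken as zero)."""
    q = dh / (L1 / K1 + L2 / K2)
    h_int = dh - q * L1 / K1
    return q, h_int

drops = np.array([0.5, 1.0, 1.5, 2.0])       # applied head drops (m)
q_obs, h_obs = forward(*K_true, drops)       # synthetic, noise-free data

# Nonlinear regression by brute-force search over log-spaced K values.
Ks = np.geomspace(1e-4, 1e-2, 81)
best, best_sse = None, np.inf
for K1 in Ks:
    for K2 in Ks:
        q, h = forward(K1, K2, drops)
        sse = np.sum(((q - q_obs) / q_obs)**2) + np.sum(((h - h_obs) / h_obs)**2)
        if sse < best_sse:
            best, best_sse = (K1, K2), sse
```

With noise-free data the search recovers both conductivities; in the paper's setting, measurement noise and unresolved heterogeneity are exactly what push the optimized values away from the laboratory-measured ones.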
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tautges, Timothy J.
MOAB is a component for representing and evaluating mesh data. MOAB can store structured and unstructured mesh, consisting of elements in the finite element "zoo". The functional interface to MOAB is simple yet powerful, allowing the representation of many types of metadata commonly found on the mesh. MOAB is optimized for efficiency in space and time, based on access to mesh in chunks rather than through individual entities, while also versatile enough to support individual entity access. The MOAB data model consists of a mesh interface instance, mesh entities (vertices and elements), sets, and tags. Entities are addressed through handles rather than pointers, to allow the underlying representation of an entity to change without changing the handle to that entity. Sets are arbitrary groupings of mesh entities and other sets. Sets also support parent/child relationships as a relation distinct from sets containing other sets. The directed graph provided by set parent/child relationships is useful for modeling topological relations from a geometric model or other metadata. Tags are named data which can be assigned to the mesh as a whole, individual entities, or sets. Tags are a mechanism for attaching data to individual entities, and sets are a mechanism for describing relations between entities; the combination of these two mechanisms is a powerful yet simple interface for representing metadata or application-specific data. For example, sets and tags can be used together to describe geometric topology, boundary condition, and inter-processor interface groupings in a mesh. MOAB is used in several ways in various applications. MOAB serves as the underlying mesh data representation in the VERDE mesh verification code. MOAB can also be used as a mesh input mechanism, using mesh readers included with MOAB, or as a translator between mesh formats, using readers and writers included with MOAB.
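The data model described above (entity handles, sets with parent/child links, named tags) can be caricatured in a few lines. This is an illustrative sketch of the concepts only, not MOAB's actual C++ API:

```python
class Mesh:
    """Toy sketch of a MOAB-style data model: integer handles for entities,
    sets with members and parent/child links, and named tags."""
    def __init__(self):
        self._next = 1
        self.entities = {}   # handle -> (kind, connectivity tuple)
        self.sets = {}       # handle -> {'members': set, 'children': set}
        self.tags = {}       # (tag_name, handle) -> value

    def _handle(self):
        h = self._next
        self._next += 1
        return h

    def create_entity(self, kind, connectivity=()):
        h = self._handle()
        self.entities[h] = (kind, tuple(connectivity))
        return h

    def create_set(self):
        h = self._handle()
        self.sets[h] = {'members': set(), 'children': set()}
        return h

    def add_to_set(self, s, handle):
        self.sets[s]['members'].add(handle)

    def add_child(self, parent, child):
        self.sets[parent]['children'].add(child)

    def tag_set(self, name, handle, value):
        self.tags[(name, handle)] = value

    def tag_get(self, name, handle):
        return self.tags[(name, handle)]

# Sets and tags together describe a boundary-condition grouping.
mesh = Mesh()
verts = [mesh.create_entity('vertex') for _ in range(4)]
quad = mesh.create_entity('quad', verts)
bc = mesh.create_set()
for v in verts:
    mesh.add_to_set(bc, v)
mesh.tag_set('BOUNDARY_CONDITION', bc, 'fixed-temperature')
```

Because the quad is referenced through a handle rather than a pointer, its stored representation could change without invalidating the boundary-condition set, which is the indirection the abstract describes.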
A multidimensional stability model for predicting shallow landslide size and shape across landscapes
Milledge, David G; Bellugi, Dino; McKean, Jim A; Densmore, Alexander L; Dietrich, William E
2014-01-01
The size of a shallow landslide is a fundamental control on both its hazard and geomorphic importance. Existing models are either unable to predict landslide size or are computationally intensive such that they cannot practically be applied across landscapes. We derive a model appropriate for natural slopes that is capable of predicting shallow landslide size but simple enough to be applied over entire watersheds. It accounts for lateral resistance by representing the forces acting on each margin of potential landslides using earth pressure theory and by representing root reinforcement as an exponential function of soil depth. We test our model's ability to predict failure of an observed landslide where the relevant parameters are well constrained by field data. The model predicts failure for the observed scar geometry and finds that larger or smaller conformal shapes are more stable. Numerical experiments demonstrate that friction on the boundaries of a potential landslide increases considerably the magnitude of lateral reinforcement, relative to that due to root cohesion alone. We find that there is a critical depth in both cohesive and cohesionless soils, resulting in a minimum size for failure, which is consistent with observed size-frequency distributions. Furthermore, the differential resistance on the boundaries of a potential landslide is responsible for a critical landslide shape which is longer than it is wide, consistent with observed aspect ratios. Finally, our results show that minimum size increases as approximately the square of failure surface depth, consistent with observed landslide depth-area data. PMID:26213663
Progress towards an effective model for FeSe from high-accuracy first-principles quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Busemeyer, Brian; Wagner, Lucas K.
While the origin of superconductivity in the iron-based materials is still controversial, the proximity of the superconductivity to magnetic order is suggestive that magnetism may be important. Our previous work has suggested that first-principles Diffusion Monte Carlo (FN-DMC) can capture magnetic properties of iron-based superconductors that density functional theory (DFT) misses, but which are consistent with experiment. We report on the progress of efforts to find simple effective models consistent with the FN-DMC description of the low-lying Hilbert space of the iron-based superconductor, FeSe. We utilize a procedure outlined by Changlani et al.[1], which both produces parameter values and indications of whether the model is a good description of the first-principles Hamiltonian. Using this procedure, we evaluate several models of the magnetic part of the Hilbert space found in the literature, as well as the Hubbard model, and a spin-fermion model. We discuss which interaction parameters are important for this material, and how the material-specific properties give rise to these interactions. U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program under Award No. FG02-12ER46875, as well as the NSF Graduate Research Fellowship Program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pelanti, Marica, E-mail: marica.pelanti@ensta-paristech.fr; Shyue, Keh-Ming, E-mail: shyue@ntu.edu.tw
2014-02-15
We model liquid–gas flows with cavitation by a variant of the six-equation single-velocity two-phase model with stiff mechanical relaxation of Saurel–Petitpas–Berry (Saurel et al., 2009) [9]. In our approach we employ phasic total energy equations instead of the phasic internal energy equations of the classical six-equation system. This alternative formulation allows us to easily design a simple numerical method that ensures consistency with mixture total energy conservation at the discrete level and agreement of the relaxed pressure at equilibrium with the correct mixture equation of state. Temperature and Gibbs free energy exchange terms are included in the equations as relaxation terms to model heat and mass transfer and hence liquid–vapor transition. The algorithm uses a high-resolution wave propagation method for the numerical approximation of the homogeneous hyperbolic portion of the model. In two dimensions a fully-discretized scheme based on a hybrid HLLC/Roe Riemann solver is employed. Thermo-chemical terms are handled numerically via a stiff relaxation solver that forces thermodynamic equilibrium at liquid–vapor interfaces under metastable conditions. We present numerical results of sample tests in one and two space dimensions that show the ability of the proposed model to describe cavitation mechanisms and evaporation wave dynamics.
Konovalov, Arkady; Krajbich, Ian
2016-01-01
Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. PMID:27511383
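The model-free versus model-based distinction can be sketched on a toy two-stage task: the model-based learner evaluates actions through known transition probabilities, while the model-free learner merely averages experienced rewards. The task structure and probabilities below are invented for illustration, not the paper's experimental design:

```python
import numpy as np

rng = np.random.default_rng(7)

# Transition model of a tiny two-stage task: action -> P(good outcome state).
P_GOOD = {'A': 0.7, 'B': 0.3}
REWARD = {'good': 1.0, 'bad': 0.0}

# Model-based learner: forward-looking evaluation through the known
# action-state transition probabilities.
q_model_based = {a: p * REWARD['good'] + (1 - p) * REWARD['bad']
                 for a, p in P_GOOD.items()}

# Model-free learner: incremental sample averages of experienced reward,
# with no knowledge of the transition structure.
q_model_free, counts = {'A': 0.0, 'B': 0.0}, {'A': 0, 'B': 0}
for _ in range(20000):
    a = rng.choice(['A', 'B'])                    # explore uniformly
    r = REWARD['good'] if rng.random() < P_GOOD[a] else REWARD['bad']
    counts[a] += 1
    q_model_free[a] += (r - q_model_free[a]) / counts[a]
```

Both strategies converge to the same action values here; the paper's point is about *when* that valuation happens (before versus at trial onset), which eye-tracking can separate even though choice data alone cannot.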
Tavecchia, Giacomo; Miranda, Miguel-Angel; Borrás, David; Bengoa, Mikel; Barceló, Carlos; Paredes-Esquivel, Claudia; Schwarz, Carl
2017-01-01
Aedes albopictus (Diptera; Culicidae) is a highly invasive mosquito species and a competent vector of several arboviral diseases that have spread rapidly throughout the world. Prevalence and patterns of dispersal of the mosquito are of central importance for an effective control of the species. We used site-occupancy models accounting for false negative detections to estimate the prevalence, the turnover, the movement pattern and the growth rate in the number of sites occupied by the mosquito in 17 localities throughout Mallorca Island. Site-occupancy probability increased from 0.35 in 2012, the year of the first reported observation of the species, to 0.89 in 2015. Despite a steady increase in mosquito presence, the extinction probability was generally high, indicating a high turnover in the occupied sites. We considered two site-dependent covariates, namely the distance from the point of first observation and the estimated yearly occupancy rate in the neighborhood, as predicted by diffusion models. Results suggested that the mosquito distribution during the first year was consistent with that predicted by simple diffusion models, but was not consistent with the diffusion model in subsequent years, when it was closer to that expected from leapfrog dispersal events. Assuming a single initial colonization event, the spread of Ae. albopictus in Mallorca followed two distinct phases, an early one consistent with diffusion movements and a second consistent with long distance, 'leapfrog', movements. The colonization of the island was fast, with ~90% of the sites estimated to be occupied 3 years after the colonization. The fast spread was likely to have occurred through vectors related to human mobility such as cars or other vehicles. Surveillance and management actions near the introduction point would only be effective during the early steps of the colonization.
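The site-occupancy machinery rests on a simple mixture likelihood: a site with no detections is either unoccupied, or occupied but missed on every visit. A sketch with simulated detection histories and a grid-search maximum-likelihood fit (the occupancy and detection probabilities are invented, not the Mallorca estimates):

```python
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_visits = 2000, 5
psi_true, p_true = 0.6, 0.3          # occupancy and per-visit detection prob.

occupied = rng.random(n_sites) < psi_true
detections = (rng.random((n_sites, n_visits)) < p_true) & occupied[:, None]
d = detections.sum(axis=1)           # number of detections per site

def neg_log_lik(psi, p):
    """Occupancy likelihood with imperfect detection.  Sites with d > 0 are
    certainly occupied; sites with d = 0 mix 'unoccupied' with
    'occupied but missed on all visits'.  (Binomial coefficients are
    constant per site and omitted.)"""
    lik_detected = psi * p**d * (1 - p)**(n_visits - d)
    lik_empty = psi * (1 - p)**n_visits + (1 - psi)
    lik = np.where(d > 0, lik_detected, lik_empty)
    return -np.sum(np.log(lik))

grid = np.linspace(0.01, 0.99, 99)
nll = np.array([[neg_log_lik(psi, p) for p in grid] for psi in grid])
i, j = np.unravel_index(np.argmin(nll), nll.shape)
psi_hat, p_hat = grid[i], grid[j]
```

A naive estimate (fraction of sites with at least one detection) would confound low occupancy with low detectability; the mixture likelihood separates the two, which is why the paper can report occupancy growth despite imperfect surveys.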
A comparison of simple global kinetic models for coal devolatilization with the CPD model
Richards, Andrew P.; Fletcher, Thomas H.
2016-08-01
Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over ranges of temperature and heating rate that are applicable to the furnace of interest. In this paper, six different simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms included three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10^3 to 10^6 K/s) at final temperatures up to 1600 K. Comparisons were made of total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages for each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).
Thermodynamically consistent model calibration in chemical kinetics
2011-01-01
Background: The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function.

Results: We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database.

Conclusions: TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function, as well as to estimate thermodynamically feasible values for the parameters of new models. Furthermore, TCMC can provide dimensionality reduction, better estimation performance, and lower computational complexity, and can help to alleviate the problem of data overfitting. PMID:21548948
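The kind of thermodynamic constraint involved can be illustrated concretely. For a closed reaction cycle, detailed balance (the Wegscheider condition) requires the product of forward rate constants to equal the product of reverse rate constants. The sketch below projects fitted constants onto that condition with a minimal log-space adjustment; it is a simplified stand-in for TCMC's constrained optimization, and the rate values are invented for illustration:

```python
import math

def make_thermodynamically_consistent(k_fwd, k_rev):
    """Project rate constants of one closed reaction cycle onto the
    Wegscheider condition prod(k_fwd) == prod(k_rev) by spreading the
    log-space residual evenly over all constants (minimal L2 change)."""
    n = len(k_fwd)
    # residual of the cycle condition in log space
    r = sum(math.log(k) for k in k_fwd) - sum(math.log(k) for k in k_rev)
    k_fwd_adj = [k * math.exp(-r / (2 * n)) for k in k_fwd]
    k_rev_adj = [k * math.exp(+r / (2 * n)) for k in k_rev]
    return k_fwd_adj, k_rev_adj

# a 3-reaction cycle whose fitted constants violate the constraint
kf2, kr2 = make_thermodynamically_consistent([2.0, 5.0, 1.5], [1.0, 3.0, 4.0])
print(math.prod(kf2) / math.prod(kr2))  # ~1.0 after the projection
```

TCMC itself solves a full constrained fit against data; the projection above only shows how the feasibility condition can be enforced on a set of parameters.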
Simple animal models for amyotrophic lateral sclerosis drug discovery.
Patten, Shunmoogum A; Parker, J Alex; Wen, Xiao-Yan; Drapeau, Pierre
2016-08-01
Simple animal models have enabled great progress in uncovering the disease mechanisms of amyotrophic lateral sclerosis (ALS) and are helping in the selection of therapeutic compounds through chemical genetic approaches. Within this article, the authors provide a concise overview of simple model organisms, C. elegans, Drosophila and zebrafish, which have been employed to study ALS and discuss their value to ALS drug discovery. In particular, the authors focus on innovative chemical screens that have established simple organisms as important models for ALS drug discovery. There are several advantages of using simple animal model organisms to accelerate drug discovery for ALS. It is the authors' particular belief that the amenability of simple animal models to various genetic manipulations, the availability of a wide range of transgenic strains for labelling motoneurons and other cell types, combined with live imaging and chemical screens should allow for new detailed studies elucidating early pathological processes in ALS and subsequent drug and target discovery.
Non-Linear Approach in Kinesiology Should Be Preferred to the Linear--A Case of Basketball.
Trninić, Marko; Jeličić, Mario; Papić, Vladan
2015-07-01
In kinesiology, medicine, biology and psychology, where research focuses on dynamical self-organized systems, complex connections exist between variables. The non-linear nature of complex systems is discussed and explained using the example of non-linear anthropometric predictors of performance in basketball. Previous studies interpreted relations between anthropometric features and measures of effectiveness in basketball by (a) using linear correlation models and (b) including all basketball athletes in the same sample of participants regardless of their playing position. In this paper, the significance and character of linear and non-linear relations between simple anthropometric predictors (AP) and performance criteria consisting of situation-related measures of effectiveness (SE) in basketball were determined and evaluated. The sample of participants consisted of top-level junior basketball players divided into three groups according to their playing time (at least 8 minutes per game) and playing position: guards (N = 42), forwards (N = 26) and centers (N = 40). Linear and non-linear (general model) regressions were calculated simultaneously and separately for each group. The conclusion is clear: non-linear regressions are frequently superior to linear correlations when interpreting the actual associations among research variables.
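The linear-vs-non-linear point can be made concrete with a toy example: for an inverted-U (quadratic) association between a predictor and a performance criterion, Pearson's linear correlation is near zero while a quadratic term captures the relation completely. The data below are synthetic and purely illustrative, not from the study:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical U-shaped association: performance peaks at an optimal value
# of an anthropometric predictor and declines on either side
xs = [float(x) for x in range(11)]              # predictor (AP), arbitrary units
ys = [25.0 - (x - 5.0) ** 2 for x in xs]        # criterion (SE), inverted-U response

r_linear = pearson_r(xs, ys)                            # linear model sees nothing
r_nonlin = pearson_r([(x - 5.0) ** 2 for x in xs], ys)  # quadratic term explains it

print(round(r_linear, 3), round(r_nonlin, 3))   # 0.0 and -1.0
```

A linear correlation of zero here does not mean "no association"; it means the association is not linear, which is exactly the paper's argument.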
Theory and experiments in model-based space system anomaly management
NASA Astrophysics Data System (ADS)
Kitts, Christopher Adam
This research program consists of an experimental study of model-based reasoning methods for detecting, diagnosing and resolving anomalies that occur when operating a comprehensive space system. Using a first-principles approach, several extensions were made to the existing field of model-based fault detection and diagnosis in order to develop a general theory of model-based anomaly management. Based on this theory, a suite of algorithms was developed and computationally implemented to detect, diagnose and identify resolutions for anomalous conditions occurring within an engineering system. The theory and software suite were experimentally verified and validated in the context of a simple but comprehensive, student-developed, end-to-end space system, which was built specifically to support such demonstrations. This space system consisted of the Sapphire microsatellite, launched in 2001, several geographically distributed and Internet-enabled communication ground stations, and a centralized mission control complex located in the Space Technology Center in the NASA Ames Research Park. Results of both ground-based and on-board experiments demonstrate the speed, accuracy, and value of the algorithms compared to human operators, and they highlight future improvements required to mature this technology.
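The detection step of model-based fault detection, the family of methods this thesis extends, can be sketched in a few lines: a first-principles model predicts each telemetry value, and a measurement is flagged anomalous when its residual exceeds a threshold. The battery-voltage example below is invented for illustration and is not Sapphire's actual telemetry:

```python
def detect_anomalies(measured, predicted, threshold):
    """Return indices of samples whose residual |measured - predicted|
    exceeds the threshold (simple residual-based fault detection)."""
    return [i for i, (m, p) in enumerate(zip(measured, predicted))
            if abs(m - p) > threshold]

predicted = [12.1, 12.0, 11.9, 11.8, 11.7]   # model: slow battery discharge, volts
measured  = [12.0, 12.1, 11.9,  9.2, 11.6]   # sample 3 shows a sudden voltage drop
print(detect_anomalies(measured, predicted, threshold=0.5))  # -> [3]
```

Diagnosis and resolution then reason over which model components could explain the flagged residuals; that machinery is beyond this sketch.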
NASA Astrophysics Data System (ADS)
Muraoka, M.; Ohtake, M.; Susuki, N.; Yamamoto, Y.; Suzuki, K.; Tsuji, T.
2014-12-01
This study presents the results of measurements of the thermal constants of natural methane-hydrate-bearing sediment samples recovered from the Tokai-oki test wells (Nankai Trough, Japan) in 2004. The thermal conductivity, thermal diffusivity, and specific heat of the samples were determined simultaneously using the hot-disk transient method. The thermal conductivity of the natural hydrate-bearing sediments decreases slightly with increasing porosity, and the thermal diffusivity likewise decreases as porosity increases. We also used simple models to calculate the thermal conductivity and thermal diffusivity. The results of the distribution model (geometric-mean model) are relatively consistent with the measurement results. The measurements are also consistent with the thermal diffusivity estimated by dividing the thermal conductivity obtained from the distribution model by the specific heat obtained from the arithmetic mean. In addition, we discuss the relation between the thermal conductivity and the mineral composition of the core samples. Acknowledgments: This work was financially supported by the MH21 Research Consortium for Methane Hydrate Resources in Japan under the National Methane Hydrate Exploitation Program planned by the Ministry of Economy, Trade and Industry.
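The two mixing rules named above can be sketched directly: the distribution (geometric-mean) model weights component conductivities by volume fraction in log space, and the diffusivity follows by dividing by an arithmetic-mean volumetric heat capacity. The component properties below are nominal, literature-style values chosen for illustration, not the paper's measured data:

```python
import math

def geometric_mean_conductivity(k, phi):
    """Distribution (geometric-mean) model: k_eff = prod(k_i ** phi_i),
    with volume fractions phi_i summing to 1."""
    assert abs(sum(phi) - 1.0) < 1e-9
    return math.prod(ki ** pi for ki, pi in zip(k, phi))

# quartz grains, methane hydrate, pore water (W m^-1 K^-1, nominal values)
k   = [7.7, 0.62, 0.57]
phi = [0.55, 0.20, 0.25]          # porosity split between hydrate and water

k_eff = geometric_mean_conductivity(k, phi)

# diffusivity from k_eff and an arithmetic-mean volumetric heat capacity
# (rho * c_p, J m^-3 K^-1, nominal values for the same three components)
rho_cp = [2.0e6, 1.9e6, 4.2e6]
rho_cp_eff = sum(p * c for p, c in zip(phi, rho_cp))
alpha = k_eff / rho_cp_eff        # thermal diffusivity, m^2 s^-1
print(k_eff, alpha)
```

Because the pore-filling phases conduct far less heat than the mineral grains, raising porosity shifts weight onto the low-conductivity components, reproducing the measured trend of decreasing conductivity with porosity.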
Building an Open-source Simulation Platform of Acoustic Radiation Force-based Breast Elastography
Wang, Yu; Peng, Bo; Jiang, Jingfeng
2017-01-01
Ultrasound-based elastography techniques, including strain elastography (SE), acoustic radiation force impulse (ARFI) imaging, point shear wave elastography (pSWE) and supersonic shear imaging (SSI), have been used to differentiate breast tumors, among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels against known underlying soft tissue properties (i.e. "ground truth") in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models of increasing complexity, one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data, were used to demonstrate the capabilities of this extended virtual platform. Overall, simulation results were compared with published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% of the predetermined SWS values. In the simple inclusion and the complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported.
The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images provide higher inclusion-to-background contrast. In summary, our initial results were consistent with our expectations and with what has been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway for performing many elastographic simulations in a transparent manner, thereby promoting collaborative development. PMID:28075330
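The SWS values validated above are typically obtained by time-of-flight estimation. A minimal sketch (an illustration of the general idea, not the platform's actual estimator) fits lateral position against shear-wave arrival time and reads the speed off the slope; the arrival times below are synthetic:

```python
def sws_from_arrivals(positions_mm, arrival_ms):
    """Time-of-flight shear wave speed: least-squares slope of lateral
    position (mm) against arrival time (ms); mm/ms equals m/s."""
    n = len(positions_mm)
    mt = sum(arrival_ms) / n
    mx = sum(positions_mm) / n
    num = sum((t - mt) * (x - mx) for t, x in zip(arrival_ms, positions_mm))
    den = sum((t - mt) ** 2 for t in arrival_ms)
    return num / den

# a shear wave crossing 2 mm of tissue at 3 m/s (soft-background-like speed)
xs = [0.0, 0.5, 1.0, 1.5, 2.0]     # lateral positions, mm
ts = [x / 3.0 for x in xs]         # ideal arrival times, ms
print(sws_from_arrivals(xs, ts))   # ~3.0 m/s
```

In a simulated dataset the arrival times would come from peak-displacement tracking of the Field II / FEBio output rather than an analytic rule, and noise in those times is one source of the underestimation discussed above.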
Magnetization Reversal of Nanoscale Islands: How Size and Shape Affect the Arrhenius Prefactor
NASA Astrophysics Data System (ADS)
Krause, S.; Herzog, G.; Stapelfeldt, T.; Berbil-Bautista, L.; Bode, M.; Vedmedenko, E. Y.; Wiesendanger, R.
2009-09-01
The thermal switching behavior of individual in-plane magnetized Fe/W(110) nanoislands is investigated by a combined study of variable-temperature spin-polarized scanning tunneling microscopy and Monte Carlo simulations. Even for islands consisting of fewer than 100 atoms, the magnetization reversal takes place via nucleation and propagation. The Arrhenius prefactor is found to depend strongly on the individual island size and shape, and based on the experimental results a simple model is developed to describe the magnetization reversal in terms of metastable states. Complementary Monte Carlo simulations confirm the model and provide new insight into the microscopic processes involved in the magnetization reversal of the smallest nanomagnets.
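The significance of the prefactor follows directly from the Arrhenius law nu = f0 * exp(-E_b / (kB * T)): at fixed barrier and temperature, the switching rate scales linearly with f0, so size- and shape-dependent prefactors shift rates by orders of magnitude. The island parameters below are illustrative assumptions, not values from the experiment:

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def switching_rate(f0_hz, barrier_ev, temp_k):
    """Arrhenius switching rate nu = f0 * exp(-E_b / (kB * T))."""
    e_b = barrier_ev * 1.602176634e-19  # convert barrier from eV to J
    return f0_hz * math.exp(-e_b / (KB * temp_k))

# two hypothetical islands with the same energy barrier but different
# attempt frequencies, as shape and size modify the prefactor
rate_a = switching_rate(1e9,  0.25, 50.0)   # compact island
rate_b = switching_rate(1e12, 0.25, 50.0)   # elongated island, larger f0

print(rate_b / rate_a)  # the prefactor alone shifts the rate by 1e3
```

This is why fitting switching statistics with a single universal f0, as is often done, can badly misestimate barriers for individual nanoislands.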