NASA Astrophysics Data System (ADS)
Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram
2017-09-01
We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
NASA Astrophysics Data System (ADS)
Baird, M. E.; Walker, S. J.; Wallace, B. B.; Webster, I. T.; Parslow, J. S.
2003-03-01
A simple model of estuarine eutrophication is built on biomechanical (or mechanistic) descriptions of a number of the key ecological processes in estuaries. Mechanistically described processes include the nutrient uptake and light capture of planktonic and benthic autotrophs, and the encounter rates of planktonic predators and prey. Other more complex processes, such as sediment biogeochemistry, detrital processes and phosphate dynamics, are modelled using empirical descriptions from the Port Phillip Bay Environmental Study (PPBES) ecological model. A comparison is made between the mechanistically determined rates of ecological processes and the analogous empirically determined rates in the PPBES ecological model. The rates generally agree, with a few significant exceptions. Model simulations were run at a range of estuarine depths and nutrient loads, with outputs presented as the annually averaged biomass of autotrophs. The simulations followed a simple conceptual model of eutrophication, suggesting a simple biomechanical understanding of estuarine processes can provide a predictive tool for ecological processes in a wide range of estuarine ecosystems.
Kinetics of DSB rejoining and formation of simple chromosome exchange aberrations
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Nikjoo, H.; O'Neill, P.; Goodhead, D. T.
2000-01-01
PURPOSE: To investigate the role of kinetics in the processing of DNA double strand breaks (DSB), and the formation of simple chromosome exchange aberrations following X-ray exposures to mammalian cells based on an enzymatic approach. METHODS: Using computer simulations based on a biochemical approach, rate-equations that describe the processing of DSB through the formation of a DNA-enzyme complex were formulated. A second model that allows for competition between two processing pathways was also formulated. The formation of simple exchange aberrations was modelled as misrepair during the recombination of single DSB with undamaged DNA. Non-linear coupled differential equations corresponding to biochemical pathways were solved numerically by fitting to experimental data. RESULTS: When mediated by a DSB repair enzyme complex, the processing of single DSB showed a complex behaviour that gives the appearance of fast and slow components of rejoining. This is due to the time-delay caused by the action time of enzymes in biomolecular reactions. It is shown that the kinetic- and dose-responses of simple chromosome exchange aberrations are well described by a recombination model of DSB interacting with undamaged DNA when aberration formation increases with linear dose-dependence. Competition between two or more recombination processes is shown to lead to the formation of simple exchange aberrations with a dose-dependence similar to that of a linear quadratic model. CONCLUSIONS: Using a minimal number of assumptions, the kinetics and dose response observed experimentally for DSB rejoining and the formation of simple chromosome exchange aberrations are shown to be consistent with kinetic models based on enzymatic reaction approaches. A non-linear dose response for simple exchange aberrations is possible in a model of recombination of DNA containing a DSB with undamaged DNA when two or more pathways compete for DSB repair.
The Monash University Interactive Simple Climate Model
NASA Astrophysics Data System (ADS)
Dommenget, D.
2013-12-01
The Monash University Interactive Simple Climate Model is a web-based interface that allows students and the general public to explore the physical simulation of the climate system with a real global climate model. It is based on the Globally Resolved Energy Balance (GREB) model, a climate model published by Dommenget and Floeter [2011] in the international peer-reviewed journal Climate Dynamics. The model simulates most of the main physical processes in the climate system in a very simplified way and therefore allows very fast climate model simulations on an ordinary PC. Despite its simplicity, the model simulates the climate response to external forcings, such as a doubling of the CO2 concentration, very realistically (similar to state-of-the-art climate models). The Monash simple climate model web interface gives access to the results of more than 2000 model experiments in an interactive way, along with a number of tutorials on the interactions of physical processes in the climate system and puzzles to solve. By switching physical processes off and on, users can deconstruct the climate and learn how the different processes interact to generate the observed climate, and how they interact to generate the IPCC-projected climate change under anthropogenic CO2 increase. The presentation will illustrate how this web-based tool works and what the possibilities for teaching students with it are.
Honda, Hidehito; Matsuka, Toshihiko; Ueda, Kazuhiro
2017-05-01
Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in terms of attribute substitution in heuristic use (Kahneman & Frederick, 2005). In this framework, it is predicted that people will rely on heuristic or knowledge-based inference depending on the subjective difficulty of the inference task. We conducted competitive tests of binary choice inference models representing simple heuristics (fluency and familiarity heuristics) and knowledge-based inference models. We found that a simple heuristic model (especially a familiarity heuristic model) explained inference patterns for subjectively difficult inference tasks, and that a knowledge-based inference model explained subjectively easy inference tasks. These results are consistent with the predictions of the attribute substitution framework. Issues concerning the use of simple heuristics and the underlying psychological processes are discussed.
A SIMPLE CELLULAR AUTOMATON MODEL FOR HIGH-LEVEL VEGETATION DYNAMICS
We have produced a simple two-dimensional (ground-plan) cellular automata model of vegetation dynamics specifically to investigate high-level community processes. The model is probabilistic, with individual plant behavior determined by physiologically-based rules derived from a w...
Joe J. Landsberg; Kurt H. Johnsen; Timothy J. Albaugh; H. Lee Allen; Steven E. McKeand
2001-01-01
3-PG is a simple process-based model that requires few parameter values and only readily available input data. We tested the structure of the model by calibrating it against loblolly pine data from the control treatment of the SETRES experiment in Scotland County, NC, then altered the fertility rating to simulate the effects of fertilization. There was excellent...
Simple and Hierarchical Models for Stochastic Test Misgrading.
ERIC Educational Resources Information Center
Wang, Jianjun
1993-01-01
Test misgrading is treated as a stochastic process. The expected number of misgradings, inter-occurrence time of misgradings, and waiting time for the "n"th misgrading are discussed based on a simple Poisson model and a hierarchical Beta-Poisson model. Examples of model construction are given. (SLD)
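The quantities named above follow directly from Poisson-process properties: the expected count is the rate times the exposure, and the waiting time to the n-th event is gamma-distributed with mean n/rate. A minimal sketch (the rate, item count, and seed are hypothetical, not from the paper):

```python
import random

def expected_misgradings(rate, n_items):
    # Simple Poisson model: expected number of misgradings is rate * exposure.
    return rate * n_items

def mean_waiting_time(rate, n):
    # Waiting time to the n-th misgrading is gamma-distributed with mean n/rate.
    return n / rate

def simulate_misgradings(rate, n_items, seed=0):
    # Monte Carlo check: each item is independently misgraded with probability rate.
    rng = random.Random(seed)
    return sum(1 for _ in range(n_items) if rng.random() < rate)

print(expected_misgradings(0.01, 500))
print(mean_waiting_time(0.01, 3))
print(simulate_misgradings(0.01, 500))
```

The hierarchical Beta-Poisson variant of the abstract would replace the fixed rate with a Beta-distributed rate per grader; the sketch above covers only the simple Poisson case.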
Four simple ocean carbon models
NASA Technical Reports Server (NTRS)
Moore, Berrien, III
1992-01-01
This paper briefly reviews the key processes that determine oceanic CO2 uptake and sets this description within the context of four simple ocean carbon models. These models capture, in varying degrees, these key processes and establish a clear foundation for more realistic models that incorporate more directly the underlying physics and biology of the ocean rather than relying on simple parametric schemes. The purpose of this paper is more pedagogical than purely scientific. The problems encountered by current attempts to understand the global carbon cycle not only require our efforts but also set a demand for a new generation of scientists, and it is hoped that this paper and the text in which it appears will help in this development.
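To make the flavour of such simple parametric schemes concrete, here is a hedged two-box sketch: an atmosphere and a surface-ocean reservoir exchanging carbon via first-order rate constants, integrated with forward Euler. All reservoir sizes and rate constants are illustrative, not from the paper:

```python
def step(atm, ocean, k_ao=0.1, k_oa=0.08, dt=0.1):
    # One forward-Euler step of a two-box atmosphere-ocean carbon exchange:
    # the net air-to-sea flux is first-order in each reservoir.
    flux = k_ao * atm - k_oa * ocean
    return atm - flux * dt, ocean + flux * dt

atm, ocean = 850.0, 1000.0          # illustrative reservoir sizes (GtC)
total0 = atm + ocean
for _ in range(1000):
    atm, ocean = step(atm, ocean)
assert abs((atm + ocean) - total0) < 1e-6   # exchange conserves total carbon
```

Because the initial air-to-sea flux is positive here, the atmospheric box relaxes downward toward the equilibrium set by the ratio of the two rate constants; real ocean carbon models add carbonate chemistry, circulation, and biology on top of this skeleton.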
When push comes to shove: Exclusion processes with nonlocal consequences
NASA Astrophysics Data System (ADS)
Almet, Axel A.; Pan, Michael; Hughes, Barry D.; Landman, Kerry A.
2015-11-01
Stochastic agent-based models are useful for modelling collective movement of biological cells. Lattice-based random walk models of interacting agents where each site can be occupied by at most one agent are called simple exclusion processes. An alternative motility mechanism to simple exclusion is formulated, in which agents are granted more freedom to move under the compromise that interactions are no longer necessarily local. This mechanism is termed shoving. A nonlinear diffusion equation is derived for a single population of shoving agents using mean-field continuum approximations. A continuum model is also derived for a multispecies problem with interacting subpopulations, which either obey the shoving rules or the simple exclusion rules. Numerical solutions of the derived partial differential equations compare well with averaged simulation results for both the single species and multispecies processes in two dimensions, while some issues arise in one dimension for the multispecies case.
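A minimal sketch of the simple exclusion rule described above, on a 1D periodic lattice with at most one agent per site (lattice size, occupancy, and update order are arbitrary choices; the shoving generalisation of the paper is not sketched here):

```python
import random

def sep_step(occ, rng):
    # One sweep of a simple exclusion process on a 1D periodic lattice:
    # each agent attempts a move to a random neighbouring site, and the
    # move is aborted if the target site is already occupied.
    n = len(occ)
    agents = [i for i in range(n) if occ[i]]
    rng.shuffle(agents)
    for i in agents:
        j = (i + rng.choice((-1, 1))) % n
        if not occ[j]:
            occ[i], occ[j] = False, True
    return occ

rng = random.Random(1)
occ = [True] * 10 + [False] * 30     # 10 agents on a 40-site ring
for _ in range(100):
    occ = sep_step(occ, rng)
assert sum(occ) == 10                # exclusion conserves agent number
```

Averaging many such simulations over initial conditions is what the mean-field continuum limit replaces with a nonlinear diffusion equation.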
Simple animal models for amyotrophic lateral sclerosis drug discovery.
Patten, Shunmoogum A; Parker, J Alex; Wen, Xiao-Yan; Drapeau, Pierre
2016-08-01
Simple animal models have enabled great progress in uncovering the disease mechanisms of amyotrophic lateral sclerosis (ALS) and are helping in the selection of therapeutic compounds through chemical genetic approaches. Within this article, the authors provide a concise overview of simple model organisms, C. elegans, Drosophila and zebrafish, which have been employed to study ALS and discuss their value to ALS drug discovery. In particular, the authors focus on innovative chemical screens that have established simple organisms as important models for ALS drug discovery. There are several advantages of using simple animal model organisms to accelerate drug discovery for ALS. It is the authors' particular belief that the amenability of simple animal models to various genetic manipulations, the availability of a wide range of transgenic strains for labelling motoneurons and other cell types, combined with live imaging and chemical screens should allow for new detailed studies elucidating early pathological processes in ALS and subsequent drug and target discovery.
Analysis and Modeling of Ground Operations at Hub Airports
NASA Technical Reports Server (NTRS)
Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.
2000-01-01
Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model aimed at representing airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.
Simple model of inhibition of chain-branching combustion processes
NASA Astrophysics Data System (ADS)
Babushok, Valeri I.; Gubernov, Vladimir V.; Minaev, Sergei S.; Miroshnichenko, Taisia P.
2017-11-01
A simple kinetic model has been suggested to describe the inhibition and extinction of flame propagation in reaction systems with chain-branching reactions typical of hydrocarbon systems. The model is based on the generalised model of the combustion process with a chain-branching reaction, combined with the one-stage reaction describing the thermal mode of flame propagation, with the addition of inhibition reaction steps. Inhibitor addition suppresses the radical overshoot in the flame and changes the reaction mode from the chain-branching reaction to a thermal mode of flame propagation. As the inhibitor concentration increases, a transition from the chain-branching mode to a straight-chain (non-branching chain) reaction is observed. The inhibition part of the model comprises a block of three reactions describing the influence of the inhibitor. Heat losses are incorporated into the model via Newtonian cooling. Flame extinction results from the decreased heat release of the inhibited reaction processes and the suppression of the radical overshoot, with a further decrease of the reaction rate due to the temperature decrease and mixture dilution. A comparison is presented between results of modelling laminar premixed methane/air flames inhibited by potassium bicarbonate (gas-phase, detailed kinetic model) and results obtained using the suggested simple model. The calculations with the detailed kinetic model demonstrate the following modes of the combustion process: (1) flame propagation with chain-branching reaction (with radical overshoot; inhibitor addition decreases the radical overshoot down to the equilibrium level); (2) saturation of the chemical influence of the inhibitor; and (3) transition to the thermal mode of flame propagation (non-branching chain mode of reaction). The suggested simple kinetic model qualitatively reproduces these modes of flame propagation under inhibitor addition as observed with detailed kinetic models.
Modeling How, When, and What Is Learned in a Simple Fault-Finding Task
ERIC Educational Resources Information Center
Ritter, Frank E.; Bibby, Peter A.
2008-01-01
We have developed a process model that learns in multiple ways while finding faults in a simple control panel device. The model predicts human participants' learning through its own learning. The model's performance was systematically compared to human learning data, including the time course and specific sequence of learned behaviors. These…
Estimating linear temporal trends from aggregated environmental monitoring data
Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.
2017-01-01
Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling variation and process variation to be separated. We used simulated time series to compare linear trend estimates from three state-space models, a simple linear regression model, and an autoregressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program; specifically, we estimated trends for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that simple linear regression performed best of all the models considered because it was best able to recover parameters and converged consistently. Conversely, simple linear regression did the worst job of estimating the population in a given year. The state-space models did not estimate trends well, but estimated population sizes best when they converged. Overall, a simple linear regression performed better than the more complex autoregressive and state-space models when used to analyze aggregated environmental monitoring data.
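For reference, the simple linear regression trend that performed best in this comparison is just the ordinary least-squares slope of the aggregated series. A self-contained sketch with made-up annual averages (the data below are hypothetical):

```python
def linear_trend(years, values):
    # Ordinary least-squares slope of values against years: the trend estimate.
    n = len(years)
    mx, my = sum(years) / n, sum(values) / n
    sxx = sum((x - mx) ** 2 for x in years)
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, values))
    return sxy / sxx

# Hypothetical annually averaged (aggregated) monitoring data
years = [2010, 2011, 2012, 2013, 2014]
values = [12.0, 14.0, 13.0, 16.0, 17.0]
print(linear_trend(years, values))   # slope, in units per year
```

The state-space alternatives add observation and process noise terms around this deterministic trend, which is what allows them (at the cost of convergence issues) to separate sampling variation from process variation.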
Meesters, Johannes A J; Koelmans, Albert A; Quik, Joris T K; Hendriks, A Jan; van de Meent, Dik
2014-05-20
Screening level models for environmental assessment of engineered nanoparticles (ENP) are not generally available. Here, we present SimpleBox4Nano (SB4N) as the first model of this type, assess its validity, and evaluate it by comparisons with a known material flow model. SB4N expresses ENP transport and concentrations in and across air, rain, surface waters, soil, and sediment, accounting for nanospecific processes such as aggregation, attachment, and dissolution. The model solves simultaneous mass balance equations (MBE) using simple matrix algebra. The MBEs link all concentrations and transfer processes using first-order rate constants for all processes known to be relevant for ENPs. The first-order rate constants are obtained from the literature. The output of SB4N is mass concentrations of ENPs as free dispersive species, heteroaggregates with natural colloids, and larger natural particles in each compartment in time and at steady state. Known scenario studies for Switzerland were used to demonstrate the impact of the transport processes included in SB4N on the prediction of environmental concentrations. We argue that SB4N-predicted environmental concentrations are useful as background concentrations in environmental risk assessment.
Modelling tidewater glacier calving: from detailed process models to simple calving laws
NASA Astrophysics Data System (ADS)
Benn, Doug; Åström, Jan; Zwinger, Thomas; Todd, Joe; Nick, Faezeh
2017-04-01
The simple calving laws currently used in ice sheet models do not adequately reflect the complexity and diversity of calving processes. To be effective, calving laws must be grounded in a sound understanding of how calving actually works. We have developed a new approach to formulating calving laws, using a) the Helsinki Discrete Element Model (HiDEM) to explicitly model fracture and calving processes, and b) the full-Stokes continuum model Elmer/Ice to identify critical stress states associated with HiDEM calving events. A range of observed calving processes emerges spontaneously from HiDEM in response to variations in ice-front buoyancy and the size of subaqueous undercuts, and we show that HiDEM calving events are associated with characteristic stress patterns simulated in Elmer/Ice. Our results open the way to developing calving laws that properly reflect the diversity of calving processes, and provide a framework for a unified theory of the calving process continuum.
How Long is my Toilet Roll?--A Simple Exercise in Mathematical Modelling
ERIC Educational Resources Information Center
Johnston, Peter R.
2013-01-01
The simple question of how much paper is left on my toilet roll is studied from a mathematical modelling perspective. As is typical with applied mathematics, models of increasing complexity are introduced and solved. Solutions produced at each step are compared with the solution from the previous step. This process exposes students to the typical…
Multi-Criteria Decision Making For Determining A Simple Model of Supplier Selection
NASA Astrophysics Data System (ADS)
Harwati
2017-06-01
Supplier selection is a decision with many criteria. Supplier selection models usually involve more than five main criteria and more than 10 sub-criteria; in fact, many models include more than 20 criteria. Involving too many criteria sometimes makes these models difficult to apply in practice. This research focuses on designing a supplier selection model that is easy and simple to apply in a company. The Analytical Hierarchy Process (AHP) is used to weight the criteria. The analysis shows that four criteria suffice for an easy and simple supplier selection: price (weight 0.4), shipment (weight 0.3), quality (weight 0.2) and service (weight 0.1). A real-case simulation shows that the simple model yields the same decision as a more complex model.
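With the four reported weights, the resulting model reduces to a weighted sum over criteria. A sketch using hypothetical supplier ratings on a common 0-10 scale (the suppliers and ratings below are invented for illustration):

```python
# Criteria weights from the AHP analysis reported above
WEIGHTS = {"price": 0.4, "shipment": 0.3, "quality": 0.2, "service": 0.1}

def score(ratings):
    # Weighted-sum score; ratings are assumed to be on a common 0-10 scale.
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

suppliers = {
    "A": {"price": 8, "shipment": 6, "quality": 9, "service": 5},
    "B": {"price": 6, "shipment": 9, "quality": 7, "service": 9},
}
best = max(suppliers, key=lambda s: score(suppliers[s]))
print(best, score(suppliers[best]))
```

AHP itself is used only to derive the weights (from pairwise comparison matrices); once the weights are fixed, ranking suppliers is this one-line weighted sum.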
Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2009-01-01
The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…
Building Regression Models: The Importance of Graphics.
ERIC Educational Resources Information Center
Dunn, Richard
1989-01-01
Points out reasons for using graphical methods to teach simple and multiple regression analysis. Argues that a graphically oriented approach has considerable pedagogic advantages in the exposition of simple and multiple regression. Shows that graphical methods may play a central role in the process of building regression models. (Author/LS)
Testing the Structure of Hydrological Models using Genetic Programming
NASA Astrophysics Data System (ADS)
Selle, B.; Muttil, N.
2009-04-01
Genetic Programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that genetic programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, genetic programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response of surface-irrigated pasture to different soil types, water table depths and water ponding times during surface irrigation. Using genetic programming, a simple model of deep percolation was consistently evolved in multiple model runs. This simple and interpretable model confirmed the dominant process contributing to deep percolation represented in a previously published conceptual model. This study thus shows that genetic programming can be used to evaluate the structure of hydrological models and to gain insight into the dominant processes in hydrological systems.
Accounting For Gains And Orientations In Polarimetric SAR
NASA Technical Reports Server (NTRS)
Freeman, Anthony
1992-01-01
Calibration method accounts for characteristics of real radar equipment invalidating standard 2 X 2 complex-amplitude R (receiving) and T (transmitting) matrices. Overall gain in each combination of transmitting and receiving channels assumed different even when only one transmitter and one receiver used. One characterizes departure of polarimetric Synthetic Aperture Radar (SAR) system from simple 2 X 2 model in terms of single parameter used to transform measurements into format compatible with simple 2 X 2 model. Data processed by applicable one of several prior methods based on simple model.
System-level modeling of acetone-butanol-ethanol fermentation.
Liao, Chen; Seo, Seung-Oh; Lu, Ting
2016-05-01
Acetone-butanol-ethanol (ABE) fermentation is a metabolic process of clostridia that produces bio-based solvents including butanol. It is enabled by an underlying metabolic reaction network and modulated by cellular gene regulation and environmental cues. Mathematical modeling has served as a valuable strategy to facilitate the understanding, characterization and optimization of this process. In this review, we highlight recent advances in system-level, quantitative modeling of ABE fermentation. We begin with an overview of integrative processes underlying the fermentation. Next we survey modeling efforts including early simple models, models with a systematic metabolic description, and those incorporating metabolism through simple gene regulation. Particular focus is given to a recent system-level model that integrates the metabolic reactions, gene regulation and environmental cues. We conclude by discussing the remaining challenges and future directions towards predictive understanding of ABE fermentation.
Perspective: Sloppiness and emergent theories in physics, biology, and beyond.
Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P
2015-07-07
Large scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics to their likewise emerging from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications and consequences of sloppy models for biochemistry and for science more generally. We suggest that the reason our complex world is understandable is due to the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
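Sloppiness can be illustrated on a toy model: for y(t) = exp(-th1*t) + exp(-th2*t) with nearby decay rates, the two sensitivity directions are nearly parallel, so the Fisher information matrix (here J^T J under unit observation noise) has eigenvalues spanning orders of magnitude. All parameter values and sample times below are illustrative:

```python
import math

def fim_eigs(th1=1.0, th2=1.1, ts=None):
    # 2x2 Fisher information J^T J for y(t) = exp(-th1*t) + exp(-th2*t)
    # under unit noise; the sensitivity columns dy/dth_i = -t*exp(-th_i*t)
    # are nearly parallel, so the matrix is nearly singular ("sloppy").
    ts = ts or [0.1 * k for k in range(1, 51)]
    a = sum((t * math.exp(-th1 * t)) ** 2 for t in ts)
    c = sum((t * math.exp(-th2 * t)) ** 2 for t in ts)
    b = sum((t * math.exp(-th1 * t)) * (t * math.exp(-th2 * t)) for t in ts)
    # Eigenvalues of the symmetric matrix [[a, b], [b, c]]
    h = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    return (a + c) / 2 + h, (a + c) / 2 - h

hi, lo = fim_eigs()
print(hi / lo)   # stiff-to-sloppy eigenvalue ratio: orders of magnitude
```

The large eigenvalue corresponds to the well-constrained "stiff" combination (roughly the sum of the decay rates); the tiny one is the sloppy direction that the data barely constrain.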
2016-06-01
new metrics and ways to formulate team processes, such as identifying motifs of dynamic communication exchanges that go well beyond simple dyadic and triadic configurations; as well as sensing, communication, information, and decision networks - Darryl Ahner (AFIT: Air Force Inst Tech). Panel Session: Mathematical Models of
A simple hyperbolic model for communication in parallel processing environments
NASA Technical Reports Server (NTRS)
Stoica, Ion; Sultan, Florin; Keyes, David
1994-01-01
We introduce a model for communication costs in parallel processing environments called the 'hyperbolic model,' which generalizes two-parameter dedicated-link models in an analytically simple way. Dedicated interprocessor links parameterized by a latency and a transfer rate that are independent of load are assumed by many existing communication models; such models are unrealistic for workstation networks. The communication system is modeled as a directed communication graph in which terminal nodes represent the application processes that initiate the sending and receiving of the information and in which internal nodes, called communication blocks (CBs), reflect the layered structure of the underlying communication architecture. The direction of graph edges specifies the flow of the information carried through messages. Each CB is characterized by a two-parameter hyperbolic function of the message size that represents the service time needed for processing the message. The parameters are evaluated in the limits of very large and very small messages. Rules are given for reducing a communication graph consisting of many CBs to an equivalent two-parameter form, while maintaining an approximation for the service time that is exact in both the large and small limits. The model is validated on a dedicated Ethernet network of workstations by experiments with communication subprograms arising in scientific applications, for which a tight fit of the model predictions with actual measurements of the communication and synchronization time between end processes is demonstrated. The model is then used to evaluate the performance of two simple parallel scientific applications from partial differential equations: domain decomposition and time-parallel multigrid. In an appropriate limit, we also show the compatibility of the hyperbolic model with the recently proposed LogP model.
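The paper defines its own two-parameter hyperbolic service-time function; as a hedged illustration only, any hyperbola matching the two limits described above (a startup latency t0 as the message size m goes to 0, and an asymptotic transfer rate beta for large m) behaves as the abstract describes. The exact parameterization in the paper may differ from this sketch:

```python
import math

def service_time(m, t0=1e-4, beta=1e7):
    # Illustrative hyperbolic service time for an m-byte message:
    # t(0) = t0 (startup latency) and t(m) -> m / beta for large m
    # (asymptotic transfer rate). This is one hyperbola with those two
    # asymptotes, chosen for illustration; it is not the paper's formula.
    return math.sqrt(t0 ** 2 + (m / beta) ** 2)

print(service_time(0))        # latency-dominated limit
print(service_time(10 ** 9))  # bandwidth-dominated limit
```

The appeal of such a form is that composing CBs and then re-fitting the two limits yields another function of the same family, which is what makes the graph-reduction rules analytically simple.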
OBSIFRAC: database-supported software for 3D modeling of rock mass fragmentation
NASA Astrophysics Data System (ADS)
Empereur-Mot, Luc; Villemin, Thierry
2003-03-01
Under stress, fractures in rock masses tend to form fully connected networks. The mass can thus be thought of as a 3D series of blocks produced by fragmentation processes. A numerical model has been developed that uses a relational database to describe such a mass. The model, which assumes the fractures to be plane, allows data from natural networks to be used to test theories concerning fragmentation processes. In the model, blocks are bordered by faces that are composed of edges and vertices. A fracture can originate from a seed point, its orientation being controlled by the stress field specified by an orientation matrix. Alternatively, it can be generated from a discrete set of given orientations and positions. Both kinds of fracture can occur together in a model. From an original simple block, a given fracture produces two simple polyhedral blocks, and the original block becomes compound. Compound and simple blocks created throughout fragmentation are stored in the database. Several fragmentation processes have been studied. In one scenario, a constant proportion of blocks is fragmented at each step of the process. The resulting distribution appears to be fractal, although seed points are random in each fragmented block. In a second scenario, division affects only one random block at each stage of the process, and gives a Weibull volume distribution law. This software can be used for a large number of other applications.
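The second scenario (one random block divided per step) is easy to sketch in terms of block volumes alone, ignoring the polyhedral geometry. The step count, split rule, and seed below are arbitrary illustration choices:

```python
import random

def fragment(n_steps, seed=0):
    # Scenario 2 of the abstract: at each step, one randomly chosen block
    # is split in two at a uniform random fraction of its volume.
    rng = random.Random(seed)
    volumes = [1.0]                      # start from one unit block
    for _ in range(n_steps):
        i = rng.randrange(len(volumes))
        v = volumes.pop(i)
        f = rng.random()
        volumes += [f * v, (1 - f) * v]
    return volumes

vols = fragment(500)
assert len(vols) == 501                  # each split adds exactly one block
assert abs(sum(vols) - 1.0) < 1e-9       # fragmentation conserves volume
```

Fitting the resulting volume distribution (e.g., to a Weibull law, as the abstract reports for this scenario) is then a separate statistical step on the `volumes` list.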
Pe'er, Guy; Zurita, Gustavo A.; Schober, Lucia; Bellocq, Maria I.; Strer, Maximilian; Müller, Michael; Pütz, Sandro
2013-01-01
Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model “G-RaFFe” generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables field to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns. 
We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature. PMID:23724108
Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G.; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J.; Arruda-Olson, Adelaide M.
2016-01-01
Objective Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm to billing code algorithms, using ankle-brachial index (ABI) test results as the gold standard. Methods We compared the performance of the NLP algorithm to (1) results of the gold standard ABI; (2) previously validated algorithms based on relevant ICD-9 diagnostic codes (simple model); and (3) a combination of ICD-9 codes with procedural codes (full model). A dataset of 1,569 PAD patients and controls was randomly divided into training (n = 935) and testing (n = 634) subsets. Results We iteratively refined the NLP algorithm in the training set, including narrative note sections, note types and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP: 91.8%, full model: 81.8%, simple model: 83%, P<.001), PPV (NLP: 92.9%, full model: 74.3%, simple model: 79.9%, P<.001), and specificity (NLP: 92.5%, full model: 64.2%, simple model: 75.9%, P<.001). Conclusions A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. PMID:28189359
Interaction of Simple Ions with Water: Theoretical Models for the Study of Ion Hydration
ERIC Educational Resources Information Center
Gancheff, Jorge S.; Kremer, Carlos; Ventura, Oscar N.
2009-01-01
A computational experiment aimed to create and systematically analyze models of simple cation hydrates is presented. The changes in the structure (bond distances and angles) and the electronic density distribution of the solvent and the thermodynamic parameters of the hydration process are calculated and compared with the experimental data. The…
Aragón, Alfredo S; Kalberg, Wendy O; Buckley, David; Barela-Scott, Lindsey M; Tabachnick, Barbara G; May, Philip A
2008-12-01
Although a large body of literature exists on cognitive functioning in alcohol-exposed children, it is unclear if there is a signature neuropsychological profile in children with Fetal Alcohol Spectrum Disorders (FASD). This study assesses cognitive functioning in children with FASD from several American Indian reservations in the Northern Plains States, and it applies a hierarchical model of simple versus complex information processing to further examine cognitive function. We hypothesized that complex tests would discriminate between children with FASD and culturally similar controls, while children with FASD would perform similarly to controls on relatively simple tests. Our sample includes 32 control children and 24 children with a form of FASD [fetal alcohol syndrome (FAS) = 10, partial fetal alcohol syndrome (PFAS) = 14]. The test battery measures general cognitive ability, verbal fluency, executive functioning, memory, and fine-motor skills. Many of the neuropsychological tests produced results consistent with a hierarchical model of simple versus complex processing. The complexity of the tests was determined "a priori" based on the number of cognitive processes involved in them. Multidimensional scaling was used to statistically analyze the accuracy of classifying the neurocognitive tests into a simple versus complex dichotomy. Hierarchical logistic regression models were then used to define the contribution made by complex versus simple tests in predicting the significant differences between children with FASD and controls. Complex test items discriminated better than simple test items. The tests that conformed well to the model were the Verbal Fluency, Progressive Planning Test (PPT), the Lhermitte memory tasks, and the Grooved Pegboard Test (GPT). The FASD-grouped children, when compared with controls, demonstrated impaired performance on letter fluency, while their performance was similar on category fluency.
On the more complex PPT trials (problems 5 to 8), as well as the Lhermitte logical tasks, the FASD group performed the worst. The differential performance between children with FASD and controls was evident across various neuropsychological measures. The children with FASD performed significantly more poorly on the complex tasks than did the controls. The identification of a neurobehavioral profile in children with prenatal alcohol exposure will help clinicians identify and diagnose children with FASD.
Quantitative Modeling of Earth Surface Processes
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.
This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
Teaching Mathematical Modelling: Demonstrating Enrichment and Elaboration
ERIC Educational Resources Information Center
Warwick, Jon
2015-01-01
This paper uses a series of models to illustrate one of the fundamental processes of model building--that of enrichment and elaboration. The paper describes how a problem context is given which allows a series of models to be developed from a simple initial model using a queuing theory framework. The process encourages students to think about the…
A simple 2D biofilm model yields a variety of morphological features.
Hermanowicz, S W
2001-01-01
A two-dimensional biofilm model was developed based on the concept of cellular automata. Three simple, generic processes were included in the model: cell growth, internal and external mass transport and cell detachment (erosion). The model generated a diverse range of biofilm morphologies (from dense layers to open, mushroom-like forms) similar to those observed in real biofilm systems. Bulk nutrient concentration and external mass transfer resistance had a large influence on the biofilm structure.
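A minimal cellular-automaton sketch in this spirit is shown below. It is our toy illustration, not the published model: it keeps only growth into a random empty neighbor and erosion of surface cells, omits the internal and external mass transport, and uses illustrative placeholder probabilities.

```python
import random

def step(grid, p_growth=0.3, p_erode=0.05, rng=None):
    """One update of a toy biofilm cellular automaton: each occupied cell
    may divide into a random empty neighbor (growth), and cells that have
    an empty neighbor (i.e. surface cells) may detach (erosion)."""
    rng = rng or random.Random()
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j]:
                nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= i + di < n and 0 <= j + dj < n and not grid[i + di][j + dj]]
                if nbrs and rng.random() < p_growth:
                    x, y = rng.choice(nbrs)
                    new[x][y] = 1          # cell division into an empty site
                if nbrs and rng.random() < p_erode:
                    new[i][j] = 0          # surface cell detaches
    return new

rng = random.Random(1)
grid = [[0] * 20 for _ in range(20)]
grid[0][10] = 1                            # seed cell on the substratum
for _ in range(30):
    grid = step(grid, rng=rng)
biomass = sum(map(sum, grid))
```

Varying p_growth (a crude proxy for bulk nutrient concentration) against p_erode shifts the morphology between dense layers and sparse, open clusters, echoing the trend reported above.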
Modelling morphology evolution during solidification of IPP in processing conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pantani, R.; De Santis, F.; Speranza, V. (E-mail: rpantani@unisa.it, fedesantis@unisa.it, vsperanza@unisa.it, gtitomanlio@unisa.it)
During polymer processing, crystallization takes place during or soon after flow. In most cases, the flow field dramatically influences both the crystallization kinetics and the crystal morphology. In turn, crystallinity and morphology affect product properties. Consequently, in the last decade, researchers have tried to identify the main parameters determining crystallinity and morphology evolution during solidification in processing conditions. In this work, we present an approach to model flow-induced crystallization (FIC) with the aim of predicting the morphology after processing. The approach is based on: interpretation of the FIC as the effect of molecular stretch on the thermodynamic crystallization temperature; modeling of the molecular stretch evolution by means of a model that is simple and easy to implement in polymer processing simulation codes; identification of the effect of flow on nucleation density and spherulite growth rate by means of simple experiments; and determination of the conditions under which fibers form instead of spherulites. Model predictions reproduce most of the features of the final morphology observed in the samples after solidification.
ERIC Educational Resources Information Center
Heckler, Andrew F.; Scaife, Thomas M.
2015-01-01
We report on five experiments investigating response choices and response times to simple science questions that evoke student "misconceptions," and we construct a simple model to explain the patterns of response choices. Physics students were asked to compare a physical quantity represented by the slope, such as speed, on simple physics…
Engineering model for ultrafast laser microprocessing
NASA Astrophysics Data System (ADS)
Audouard, E.; Mottay, E.
2016-03-01
Ultrafast laser micro-machining relies on complex laser-matter interaction processes, leading to virtually athermal laser ablation. The development of industrial ultrafast laser applications benefits from a better understanding of these processes. To this end, a number of sophisticated scientific models have been developed, providing valuable insights into the physics of the interaction. Yet, from an engineering point of view, they are often difficult to use and require a number of adjustable parameters. We present a simple engineering model for ultrafast laser processing, applied in various real-life applications: percussion drilling, line engraving, and non-normal-incidence trepanning. The model requires only two global parameters. Analytical results are derived for single-pulse percussion drilling and single-pass engraving. Simple assumptions make it possible to predict the effect of non-normally incident beams and to obtain key parameters for trepanning drilling. The model is compared to experimental data on stainless steel with a wide range of laser characteristics (pulse duration, repetition rate, pulse energy) and machining conditions (sample or beam speed). Ablation depth and volume ablation rate are modeled for pulse durations from 100 fs to 1 ps. A trepanning time of 5.4 s with a conicity of 0.15° is obtained for a hole of 900 μm depth and 100 μm diameter.
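A widely used two-parameter engineering description of ultrafast ablation is the logarithmic depth law, with a threshold fluence and an effective penetration depth as the two global parameters. Whether this is the exact form used in the work above is our assumption, and the numbers below (f_threshold, delta) are placeholders, not fitted stainless-steel values:

```python
import math

def ablation_depth_per_pulse(fluence, f_threshold=0.1, delta=0.02):
    """Logarithmic ablation law: depth per pulse = delta * ln(F / F_th)
    above the threshold fluence, zero below it. Units are arbitrary here
    (e.g. J/cm^2 for fluence, micrometers for delta)."""
    if fluence <= f_threshold:
        return 0.0
    return delta * math.log(fluence / f_threshold)

def percussion_depth(fluence, n_pulses, **kw):
    """Crudest percussion-drilling estimate: depth accumulates linearly
    with pulse number (ignores incubation and aspect-ratio effects)."""
    return n_pulses * ablation_depth_per_pulse(fluence, **kw)
```

With the two parameters fitted to a depth-per-pulse measurement series, such a law can be swept over pulse energy and repetition rate to estimate drilling times in the spirit of the engineering model described above.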
Tufvesson, Pär; Bach, Christian; Woodley, John M
2014-02-01
Acetone removal by evaporation has been proposed as a simple and cheap way to shift the equilibrium in the biocatalytic asymmetric synthesis of optically pure chiral amines when 2-propylamine is used as the amine donor. However, depending on the system properties, this may or may not be a suitable strategy. To avoid excessive laboratory work, a model was used to assess process feasibility. The results from the current study show that a simple model of the dependence of acetone removal on temperature and sparging gas flowrate can be developed and fits the experimental data well. The model for acetone removal was then coupled to a simple model for biocatalyst kinetics and to a model for the loss of substrate ketone by evaporation. The three models were used to simulate the effects of varying the critical process parameters and reaction equilibrium constants (Keq) as well as different substrate ketone volatilities (Henry's constant). The simulations were used to estimate the substrate losses and the maximum yield that could be expected. The approach gave a clear indication of the target amines for which the acetone evaporation strategy would be feasible and those for which it would not. The study also shows the value of a modeling approach in conceptual process design, prior to entering a biocatalyst screening or engineering program, to assess the feasibility of a particular process strategy for a given target product. © 2013 Wiley Periodicals, Inc.
Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J; Arruda-Olson, Adelaide M
2017-06-01
Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm with billing code algorithms, using ankle-brachial index test results as the gold standard. We compared the performance of the NLP algorithm to (1) results of gold standard ankle-brachial index; (2) previously validated algorithms based on relevant International Classification of Diseases, Ninth Revision diagnostic codes (simple model); and (3) a combination of International Classification of Diseases, Ninth Revision codes with procedural codes (full model). A dataset of 1569 patients with PAD and controls was randomly divided into training (n = 935) and testing (n = 634) subsets. We iteratively refined the NLP algorithm in the training set including narrative note sections, note types, and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP, 91.8%; full model, 81.8%; simple model, 83%; P < .001), positive predictive value (NLP, 92.9%; full model, 74.3%; simple model, 79.9%; P < .001), and specificity (NLP, 92.5%; full model, 64.2%; simple model, 75.9%; P < .001). A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
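The reported accuracy, positive predictive value and specificity follow directly from a 2x2 confusion matrix. The helper below recomputes such metrics; the counts in the example are illustrative, not the study's data:

```python
def classifier_metrics(tp, fp, tn, fn):
    """Accuracy, positive predictive value and specificity from
    confusion-matrix counts, the three metrics used to compare the
    NLP and billing-code algorithms."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    ppv = tp / (tp + fp)
    specificity = tn / (tn + fp)
    return accuracy, ppv, specificity

# Illustrative counts only (not the study's data):
acc, ppv, spec = classifier_metrics(tp=90, fp=10, tn=90, fn=10)
```

Note that PPV and specificity both depend on the false-positive count, which is why the billing-code models, with their higher false-positive rates, trail the NLP algorithm on both metrics simultaneously.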
The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces
NASA Astrophysics Data System (ADS)
Vuik, C.; Saghir, A.; Boerstoel, G. P.
2000-08-01
Numerical modeling of the melting and combustion process is an important tool in gaining understanding of the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier-Stokes equations are used to model the gas flow in the furnace. The discrete Navier-Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces.
Testing the structure of a hydrological model using Genetic Programming
NASA Astrophysics Data System (ADS)
Selle, Benny; Muttil, Nitin
2011-01-01
Genetic Programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that Genetic Programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, Genetic Programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface-irrigated pasture to different soil types, watertable depths and water ponding times during surface irrigation. Using Genetic Programming, a simple model of deep percolation was recurrently evolved in multiple Genetic Programming runs. This simple and interpretable model supported the dominant process contributing to deep percolation represented in a conceptual model that was published earlier. Thus, this study shows that Genetic Programming can be used to evaluate the structure of hydrological models and to gain insight into the dominant processes in hydrological systems.
Simple stochastic model for El Niño with westerly wind bursts
Thual, Sulian; Majda, Andrew J.; Chen, Nan; Stechmann, Samuel N.
2016-01-01
Atmospheric wind bursts in the tropics play a key role in the dynamics of the El Niño Southern Oscillation (ENSO). A simple modeling framework is proposed that summarizes this relationship and captures major features of the observational record while remaining physically consistent and amenable to detailed analysis. Within this simple framework, wind burst activity evolves according to a stochastic two-state Markov switching–diffusion process that depends on the strength of the western Pacific warm pool, and is coupled to simple ocean–atmosphere processes that are otherwise deterministic, stable, and linear. A simple model with this parameterization and no additional nonlinearities reproduces a realistic ENSO cycle with intermittent El Niño and La Niña events of varying intensity and strength as well as realistic buildup and shutdown of wind burst activity in the western Pacific. The wind burst activity has a direct causal effect on the ENSO variability: in particular, it intermittently triggers regular El Niño or La Niña events, super El Niño events, or no events at all, which enables the model to capture observed ENSO statistics such as the probability density function and power spectrum of eastern Pacific sea surface temperatures. The present framework provides further theoretical and practical insight on the relationship between wind burst activity and the ENSO. PMID:27573821
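The core idea, a two-state Markov regime switch modulating a stochastic wind-burst process that forces a stable linear ocean-atmosphere variable, can be caricatured in a few lines. This is our toy sketch of that structure, not the paper's calibrated model; all parameter values are illustrative placeholders:

```python
import math
import random

def simulate(n_steps=5000, dt=0.1, seed=42):
    """Toy two-state Markov switching-diffusion sketch: wind burst
    amplitude a follows a damped diffusion whose noise level depends on
    a hidden quiet/active regime, and a linear, stable 'SST' variable T
    is forced by a. Returns the T time series."""
    rng = random.Random(seed)
    state, a, T = 0, 0.0, 0.0
    switch = (0.01, 0.05)   # per-step switch probabilities (quiet->active, active->quiet)
    sigma = (0.05, 0.5)     # wind-burst noise level in each regime
    out = []
    for _ in range(n_steps):
        if rng.random() < switch[state]:
            state = 1 - state                                  # Markov regime switch
        a += -a * dt + sigma[state] * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        T += (-0.2 * T + 0.5 * a) * dt                         # stable, linear response
        out.append(T)
    return out

series = simulate()
```

Even this caricature reproduces the qualitative behavior described above: long quiet stretches punctuated by intermittent excursions of varying amplitude, driven entirely by the regime-switching noise rather than by deterministic nonlinearity.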
A simple model of the effect of ocean ventilation on ocean heat uptake
NASA Astrophysics Data System (ADS)
Nadiga, Balu; Urban, Nathan
2017-11-01
Transport of water from the surface mixed layer into the ocean interior is achieved, in large part, by the process of ventilation, a process associated with outcropping isopycnals. Starting from such a configuration of outcropping isopycnals, we derive a simple model of the effect of ventilation on ocean uptake of anomalous radiative forcing. This model can be seen as an improvement on the popular anomaly-diffusing class of energy balance models (AD-EBM) that are routinely employed to analyze and emulate the warming response of both the observed and the simulated Earth system. We demonstrate that neither multi-layer nor continuous-diffusion AD-EBM variants can properly represent both surface warming and the vertical distribution of ocean heat uptake. The new model overcomes this deficiency. The simplicity of the models notwithstanding, the analysis presented and the necessity of the modification are indicative of the role played by processes related to the downwelling branch of the global ocean circulation in shaping the vertical distribution of ocean heat uptake.
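The anomaly-diffusing baseline that the paper improves on can be sketched in a few lines: a surface layer receives radiative forcing, loses heat radiatively, and diffuses the anomaly into the layers below. This is our generic multi-layer AD-EBM sketch with illustrative coefficients, not the paper's ventilation-based model:

```python
def adebm_step(T, forcing, dt=0.1, kappa=0.5, lam=1.0):
    """One explicit Euler step of a simple multi-layer anomaly-diffusing
    energy balance model: T[0] is the surface temperature anomaly, which
    is radiatively forced and damped; heat diffuses between layers.
    Coefficients are illustrative, in nondimensional units."""
    n = len(T)
    new = T[:]
    # surface layer: forcing, radiative damping, exchange with layer below
    new[0] += dt * (forcing - lam * T[0] - kappa * (T[0] - T[1]))
    for i in range(1, n - 1):
        new[i] += dt * kappa * (T[i - 1] - 2 * T[i] + T[i + 1])   # interior diffusion
    new[-1] += dt * kappa * (T[-2] - T[-1])                       # insulated bottom
    return new

T = [0.0] * 10
for _ in range(500):
    T = adebm_step(T, forcing=3.7)
```

After the spin-up the anomaly decays monotonically with depth; the paper's point is that a single diffusivity cannot simultaneously match the surface warming and this vertical heat distribution, which motivates the ventilation-based alternative.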
Generative Models in Deep Learning: Constraints for Galaxy Evolution
NASA Astrophysics Data System (ADS)
Turp, Maximilian Dennis; Schawinski, Kevin; Zhang, Ce; Weigel, Anna K.
2018-01-01
New techniques are essential to make advances in the field of galaxy evolution. Recent developments in the fields of artificial intelligence and machine learning have proven that these tools can be applied to problems far more complex than simple image recognition. We use these purely data-driven approaches to investigate the process of star formation quenching. We show that Variational Autoencoders provide a powerful method to forward-model the process of galaxy quenching. Our results imply that simple changes in specific star formation rate and bulge-to-disk ratio cannot fully describe the properties of the quenched population.
Simple model of hydrophobic hydration.
Lukšič, Miha; Urbic, Tomaz; Hribar-Lee, Barbara; Dill, Ken A
2012-05-31
Water is an unusual liquid in its solvation properties. Here, we model the process of transferring a nonpolar solute into water. Our goal was to capture the physical balance between water's hydrogen bonding and van der Waals interactions in a model that is simple enough to be nearly analytical and not heavily computational. We develop a 2-dimensional Mercedes-Benz-like model of water with which we compute the free energy, enthalpy, entropy, and the heat capacity of transfer as a function of temperature, pressure, and solute size. As validation, we find that this model gives the same trends as Monte Carlo simulations of the underlying 2D model and gives qualitative agreement with experiments. The advantages of this model are that it gives simple insights and that computational time is negligible. It may provide a useful starting point for developing more efficient and more realistic 3D models of aqueous solvation.
PHOTOCHEMICAL MODELING APPLIED TO NATURAL WATERS
The study examines the application of modeling photochemical processes in natural water systems. For many photochemical reactions occurring in natural waters, a simple photochemical model describing reaction rate as a function of intensity, radiation attenuation, reactant absorpt...
NASA Astrophysics Data System (ADS)
Jahedi, Mohammad; Ardeljan, Milan; Beyerlein, Irene J.; Paydar, Mohammad Hossein; Knezevic, Marko
2015-06-01
We use a multi-scale, polycrystal plasticity micromechanics model to study the development of orientation gradients within crystals deforming by slip. At the largest scale, the model is a full-field crystal plasticity finite element model with explicit 3D grain structures created by DREAM.3D, and at the finest scale, at each integration point, slip is governed by a dislocation density based hardening law. For deformed polycrystals, the model predicts intra-granular misorientation distributions that follow well the scaling law seen experimentally by Hughes et al., Acta Mater. 45(1), 105-112 (1997), independent of strain level and deformation mode. We reveal that the application of a simple compression step prior to simple shearing significantly enhances the development of intra-granular misorientations compared to simple shearing alone for the same amount of total strain. We rationalize that the changes in crystallographic orientation and shape evolution when going from simple compression to simple shearing increase the local heterogeneity in slip, leading to the boost in intra-granular misorientation development. In addition, the analysis finds that simple compression introduces additional crystal orientations that are prone to developing intra-granular misorientations, which also help to increase intra-granular misorientations. Many metal working techniques for refining grain sizes involve a preliminary or concurrent application of compression with severe simple shearing. Our finding reveals that a pre-compression deformation step can, in fact, serve as another processing variable for improving the rate of grain refinement during the simple shearing of polycrystalline metals.
Modelling the complete operation of a free-piston shock tunnel for a low enthalpy condition
NASA Astrophysics Data System (ADS)
McGilvray, M.; Dann, A. G.; Jacobs, P. A.
2013-07-01
Only a limited number of free-stream flow properties can be measured at the nozzle exit of hypersonic impulse facilities. This poses challenges for experimenters when subsequently analysing experimental data obtained from these facilities. Typically in a reflected shock tunnel, a simple analysis that requires only modest computational resources is used to calculate quasi-steady gas properties. This simple analysis uses initial fill conditions and experimental measurements in analytical calculations of each major flow process, using forward coupling with minor corrections to include processes that are not directly modelled. However, this simplistic approach leads to an unknown level of discrepancy from the true flow properties. To explore the accuracy of the simple modelling technique, this paper details the use of transient one- and two-dimensional numerical simulations of a complete facility to obtain more refined free-stream flow properties from a free-piston reflected shock tunnel operating at low-enthalpy conditions. These calculations were verified by comparison to experimental data obtained from the facility. For the condition and facility investigated, the test conditions at nozzle exit produced with the simple modelling technique agree with the time- and space-averaged results of the complete facility calculations to within the accuracy of the experimental measurements.
Towards a Model for Protein Production Rates
NASA Astrophysics Data System (ADS)
Dong, J. J.; Schmittmann, B.; Zia, R. K. P.
2007-07-01
In the process of translation, ribosomes read the genetic code on an mRNA and assemble the corresponding polypeptide chain. The ribosomes perform discrete directed motion which is well modeled by a totally asymmetric simple exclusion process (TASEP) with open boundaries. Using Monte Carlo simulations and a simple mean-field theory, we discuss the effect of one or two "bottlenecks" (i.e., slow codons) on the production rate of the final protein. Confirming and extending previous work by Chou and Lakatos, we find that the location and spacing of the slow codons can affect the production rate quite dramatically. In particular, we observe a novel "edge" effect, i.e., an interaction of a single slow codon with the system boundary. We focus in detail on ribosome density profiles and provide a simple explanation for the length scale which controls the range of these interactions.
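The TASEP with open boundaries and a slow site is straightforward to simulate directly. The sketch below is a rough Monte Carlo illustration of the setup described above (random sequential updates, rates treated as probabilities), not the mean-field theory of the paper; the parameter values are illustrative:

```python
import random

def tasep_current(L=100, alpha=0.3, beta=0.3, slow_site=None, p_slow=0.2,
                  steps=200000, seed=7):
    """Monte Carlo TASEP with open boundaries: particles (ribosomes)
    enter at rate alpha, hop right at rate 1 (p_slow out of one 'slow
    codon'), and exit at rate beta. Returns the measured exit current."""
    rng = random.Random(seed)
    lattice = [0] * L
    exits = 0
    for _ in range(steps):
        i = rng.randrange(L + 1)                 # random sequential update
        if i == 0:
            if not lattice[0] and rng.random() < alpha:
                lattice[0] = 1                   # injection at the left boundary
        elif i == L:
            if lattice[L - 1] and rng.random() < beta:
                lattice[L - 1] = 0               # exit at the right boundary
                exits += 1
        else:
            rate = p_slow if slow_site == i - 1 else 1.0
            if lattice[i - 1] and not lattice[i] and rng.random() < rate:
                lattice[i - 1], lattice[i] = 0, 1
    return exits * (L + 1) / steps               # current per unit time (one sweep = L+1 attempts)

j_fast = tasep_current()                         # uniform lattice
j_slow = tasep_current(slow_site=50)             # one slow codon mid-chain
```

A single slow codon caps the protein production rate well below the uniform-lattice current, and recording the site occupations alongside the current would reveal the density jump across the bottleneck discussed above.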
Active disturbance rejection controller for chemical reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Both, Roxana; Dulf, Eva H.; Muresan, Cristina I., E-mail: roxana.both@aut.utcluj.ro
2015-03-10
In the petrochemical industry, the synthesis of 2-ethyl-hexanol oxo-alcohols (plasticizer alcohols) is of high importance and is achieved through hydrogenation of 2-ethyl-hexenal inside catalytic trickle-bed three-phase reactors. For this type of process, the use of advanced control strategies is suitable due to the nonlinear behavior and extreme sensitivity to load changes and other disturbances. Due to the complexity of the mathematical model, one approach was to use a simple linear model of the process in combination with an advanced control algorithm, such as robust control, which takes into account the model uncertainties, the disturbances and command signal limitations. However, the resulting controller is complex and requires costly hardware. This paper proposes a simple integer-order control scheme using a linear model of the process, based on the active disturbance rejection method. By treating the model dynamics as a common disturbance and actively rejecting it, active disturbance rejection control (ADRC) can achieve the desired response. Simulation results are provided to demonstrate the effectiveness of the proposed method.
Applying the compound Poisson process model to the reporting of injury-related mortality rates.
Kegler, Scott R
2007-02-16
Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
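The essence of the adjustment is that when cases arrive in incidents of possibly more than one death, the variance of the total count is the sum of squared incident sizes rather than the count itself. The sketch below is our illustration of that idea with a simple Wald-type interval, not the paper's exact estimators:

```python
import math

def rate_ci(incident_sizes, population, z=1.96):
    """Approximate confidence interval for a mortality rate when deaths
    arrive in incidents (compound Poisson). The variance of the total
    count is sum(k^2) over incident sizes k, which reduces to the plain
    Poisson variance (= total count) when every incident is a single case."""
    total = sum(incident_sizes)                  # total deaths
    var = sum(k * k for k in incident_sizes)     # compound-Poisson variance
    rate = total / population
    half = z * math.sqrt(var) / population
    return rate - half, rate + half

# 100 single-death incidents: matches the simple Poisson interval.
lo1, hi1 = rate_ci([1] * 100, 1_000_000)
# The same 100 deaths arriving in 25 four-death incidents: wider interval.
lo2, hi2 = rate_ci([4] * 25, 1_000_000)
```

With multiple-fatality incidents the interval widens even though the point estimate is unchanged, which is exactly the dependency between cases that the simple Poisson model misses.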
Implications of Biospheric Energization
NASA Astrophysics Data System (ADS)
Budding, Edd; Demircan, Osman; Gündüz, Güngör; Emin Özel, Mehmet
2016-07-01
Our physical model of the origin and development of lifelike processes from very simple beginnings is reviewed. This molecular ('ABC') process is compared with the chemoton model, noting the role of the autocatalytic tuning to the time-dependent source of energy. This gives the evolution a Darwinian character. The system evolves from very simple beginnings to a progressively more highly tuned, energized and complex responding biosphere that grows exponentially, albeit with a very low net growth factor. The rates of growth and complexity in the evolution raise disturbing issues of inherent stability. Autocatalytic processes can include a fractal character in their development, allowing recapitulative effects to be observed. This property, by allowing similarities of pattern to be recognized, can be useful in interpreting complex (lifelike) systems.
Occupation probabilities and fluctuations in the asymmetric simple inclusion process
NASA Astrophysics Data System (ADS)
Reuveni, Shlomi; Hirschberg, Ori; Eliazar, Iddo; Yechiali, Uri
2014-04-01
The asymmetric simple inclusion process (ASIP), a lattice-gas model of unidirectional transport and aggregation, was recently proposed as an "inclusion" counterpart of the asymmetric simple exclusion process. In this paper we present an exact closed-form expression for the probability that a given number of particles occupies a given set of consecutive lattice sites. Our results are expressed in terms of the entries of Catalan's trapezoids—number arrays which generalize Catalan's numbers and Catalan's triangle. We further prove that the ASIP is asymptotically governed by the following: (i) an inverse square-root law of occupation, (ii) a square-root law of fluctuation, and (iii) a Rayleigh law for the distribution of interexit times. The universality of these results is discussed.
The attentional drift-diffusion model extends to simple purchasing decisions.
Krajbich, Ian; Lu, Dingchao; Camerer, Colin; Rangel, Antonio
2012-01-01
How do we make simple purchasing decisions (e.g., whether or not to buy a product at a given price)? Previous work has shown that the attentional drift-diffusion model (aDDM) can provide accurate quantitative descriptions of the psychometric data for binary and trinary value-based choices, and of how the choice process is guided by visual attention. Here we extend the aDDM to the case of purchasing decisions, and test it using an eye-tracking experiment. We find that the model also provides a reasonably accurate quantitative description of the relationship between choice, reaction time, and visual fixations using parameters that are very similar to those that best fit the previous data. The only critical difference is that the choice biases induced by the fixations are about half as big in purchasing decisions as in binary choices. This suggests that a similar computational process is used to make binary choices, trinary choices, and simple purchasing decisions.
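The aDDM described above can be sketched as a fixation-modulated random walk to a barrier. This is a minimal illustrative simulation with assumed parameter values (drift scaling d, attentional discount theta, noise sigma, and fixed alternating fixations), not the fitted model from the paper:

```python
import random

def addm_trial(v_left, v_right, d=2e-4, theta=0.3, sigma=0.02,
               barrier=1.0, fix_len=400, rng=random):
    """One simulated aDDM trial.  A relative decision value (RDV) drifts
    toward the fixated option, with the unfixated option's value discounted
    by theta, until it crosses +barrier (left) or -barrier (right).
    All parameter values here are illustrative, not the paper's fits."""
    rdv, t = 0.0, 0
    look_left = rng.random() < 0.5
    while abs(rdv) < barrier:
        if look_left:
            drift = d * (v_left - theta * v_right)
        else:
            drift = -d * (v_right - theta * v_left)
        rdv += drift + rng.gauss(0.0, sigma)
        t += 1
        if t % fix_len == 0:              # alternate fixations periodically
            look_left = not look_left
    return ('left' if rdv > 0 else 'right'), t
```

Across many seeded trials with unequal values, the higher-valued option is chosen on the majority of trials, and the step count serves as a proxy for reaction time.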
Giovannini, Giannina; Sbarciog, Mihaela; Steyer, Jean-Philippe; Chamy, Rolando; Vande Wouwer, Alain
2018-05-01
Hydrogen has been found to be an important intermediate during anaerobic digestion (AD) and a key variable for process monitoring, as it gives valuable information about the stability of the reactor. However, simple dynamic models describing the evolution of hydrogen are not commonplace. In this work, such a dynamic model is derived using a systematic data-driven approach, which consists of a principal component analysis to deduce the dimension of the minimal reaction subspace explaining the data, followed by an identification of the kinetic parameters in the least-squares sense. The procedure requires the availability of informative data sets. When the available data do not fulfill this condition, the model can still be built from simulated data obtained using a detailed model such as ADM1. This dynamic model could be exploited in monitoring and control applications after a re-identification of the parameters using actual process data. As an example, the model is used in the framework of a control strategy, and is also fitted to experimental data from raw industrial wine processing wastewater.
Extended Poisson process modelling and analysis of grouped binary data.
Faddy, Malcolm J; Smith, David M
2012-05-01
A simple extension of the Poisson process results in binomially distributed counts of events in a time interval. A further extension generalises this to probability distributions under- or over-dispersed relative to the binomial distribution. Substantial levels of under-dispersion are possible with this modelling, but only modest levels of over-dispersion - up to Poisson-like variation. Although simple analytical expressions for the moments of these probability distributions are not available, approximate expressions for the mean and variance are derived, and used to re-parameterise the models. The modelling is applied in the analysis of two published data sets, one showing under-dispersion and the other over-dispersion. More appropriate assessment of the precision of estimated parameters and reliable model checking diagnostics follow from this more general modelling of these data sets.
Peer pressure and Generalised Lotka Volterra models
NASA Astrophysics Data System (ADS)
Richmond, Peter; Sabatelli, Lorenzo
2004-12-01
We develop a novel approach to peer pressure and Generalised Lotka-Volterra (GLV) models that builds on a simple Langevin equation characterising stochastic processes. We generalise the approach to stochastic equations that model interacting agents. The agent models recently advocated by Marsili and Solomon are motivated. Using a simple change of variable, we show that the peer pressure model (similar to the one introduced by Marsili) and the wealth dynamics model of Solomon may be (almost) mapped one into the other. This may help shed light on the (apparently) different wealth dynamics described by GLV and the Marsili-like peer pressure models.
NASA Astrophysics Data System (ADS)
Tian, D.; Medina, H.
2017-12-01
Post-processing of medium-range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential of improving the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS, Gneiting et al., 2005) and Bayesian Model Averaging (BMA, Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, computed with the FAO 56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, simple bias correction of the best model is commonly much more rewarding than using multimodel raw forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at the national scale.
A Fuzzy Cognitive Model of aeolian instability across the South Texas Sandsheet
NASA Astrophysics Data System (ADS)
Houser, C.; Bishop, M. P.; Barrineau, C. P.
2014-12-01
Characterization of aeolian systems is complicated by rapidly changing surface-process regimes, spatio-temporal scale dependencies, and subjective interpretation of imagery and spatial data. This paper describes the development and application of analytical reasoning to quantify instability of an aeolian environment using scale-dependent information coupled with conceptual knowledge of process and feedback mechanisms. Specifically, a simple Fuzzy Cognitive Model (FCM) for aeolian landscape instability was developed that represents conceptual knowledge of key biophysical processes and feedbacks. Model inputs include satellite-derived surface biophysical and geomorphometric parameters. FCMs are a knowledge-based Artificial Intelligence (AI) technique that merges fuzzy logic and neural computing in which knowledge or concepts are structured as a web of relationships that is similar to both human reasoning and the human decision-making process. Given simple process-form relationships, the analytical reasoning model is able to map the influence of land management practices and the geomorphology of the inherited surface on aeolian instability within the South Texas Sandsheet. Results suggest that FCMs can be used to formalize process-form relationships and information integration analogous to human cognition with future iterations accounting for the spatial interactions and temporal lags across the sand sheets.
Shahaf, Goded; Pratt, Hillel
2013-01-01
In this work we demonstrate the principles of a systematic approach to modeling the neurophysiologic processes underlying a behavioral function. The modeling is based on a flexible simulation tool that enables parametric specification of the underlying neurophysiologic characteristics. While the impact of selecting specific parameters is of interest, here we focus on the insights that emerge from widely accepted assumptions regarding neuronal representation. We show that harnessing even such simple assumptions enables the derivation of significant insights into the nature of the neurophysiologic processes underlying behavior. We demonstrate our approach in some detail by modeling the behavioral go/no-go task. We further demonstrate the practical significance of this simplified modeling approach in interpreting experimental data: the manifestation of these processes in the EEG and ERP literature of normal and abnormal (ADHD) function, as well as in a comprehensive analysis of relevant ERP data. In fact, we show that from the model-based spatiotemporal segregation of the processes, it is possible to derive simple yet effective, theory-based EEG markers differentiating normal and ADHD subjects. We conclude by claiming that the neurophysiologic processes modeled for the go/no-go task are part of a limited set of neurophysiologic processes which underlie, in a variety of combinations, any behavioral function with a measurable operational definition. Such neurophysiologic processes could be sampled directly from EEG on the basis of model-based spatiotemporal segregation.
Simple dynamical models capturing the key features of the Central Pacific El Niño.
Chen, Nan; Majda, Andrew J
2016-10-18
The Central Pacific El Niño (CP El Niño) has been frequently observed in recent decades. The phenomenon is characterized by an anomalous warm sea surface temperature (SST) confined to the central Pacific and has different teleconnections from the traditional El Niño. Here, simple models are developed and shown to capture the key mechanisms of the CP El Niño. The starting model involves coupled atmosphere-ocean processes that are deterministic, linear, and stable. Then, systematic strategies are developed for incorporating several major mechanisms of the CP El Niño into the coupled system. First, simple nonlinear zonal advection with no ad hoc parameterization of the background SST gradient is introduced, creating coupled nonlinear advective modes of the SST. Second, owing to the recent multidecadal strengthening of the easterly trade wind, a stochastic parameterization of the wind bursts, including a mean easterly trade wind anomaly, is coupled to the simple atmosphere-ocean processes. Effective stochastic noise in the wind burst model facilitates the intermittent occurrence of the CP El Niño with realistic amplitude and duration. In addition to the anomalous warm SST in the central Pacific, other major features of the CP El Niño, such as the rising branch of the anomalous Walker circulation being shifted to the central Pacific and the eastern Pacific cooling with a shallow thermocline, are all captured by this simple coupled model. Importantly, the coupled model succeeds in simulating a series of CP El Niño events lasting for 5 years, which resembles the two CP El Niño episodes during 1990-1995 and 2002-2006.
The practical use of simplicity in developing ground water models
Hill, M.C.
2006-01-01
The advantages of starting with simple models and building complexity slowly can be significant in the development of ground water models. In many circumstances, simpler models are characterized by fewer defined parameters and shorter execution times. In this work, the number of parameters is used as the primary measure of simplicity and complexity; the advantages of shorter execution times also are considered. The ideas are presented in the context of constructing ground water models but are applicable to many fields. Simplicity first is put in perspective as part of the entire modeling process using 14 guidelines for effective model calibration. It is noted that neither very simple nor very complex models generally produce the most accurate predictions and that determining the appropriate level of complexity is an ill-defined process. It is suggested that a thorough evaluation of observation errors is essential to model development. Finally, specific ways are discussed to design useful ground water models that have fewer parameters and shorter execution times.
Winslow, Luke A.; Read, Jordan S.; Hanson, Paul C.; Stanley, Emily H.
2014-01-01
With lake abundances in the thousands to millions, creating an intuitive understanding of the distribution of morphology and processes in lakes is challenging. To improve researchers’ understanding of large-scale lake processes, we developed a parsimonious mathematical model based on the Pareto distribution to describe the distribution of lake morphology (area, perimeter and volume). While debate continues over which mathematical representation best fits any one distribution of lake morphometric characteristics, we recognize the need for a simple, flexible model to advance understanding of how the interaction between morphometry and function dictates scaling across large populations of lakes. These models make clear the relative contribution of lakes of each size to the total amount of lake surface area, volume, and perimeter. They also highlight critical thresholds: total perimeter, area and volume would be evenly distributed across lake size-classes at Pareto slopes of 0.63, 1 and 1.12, respectively. These models of morphology can be used in combination with models of process to create overarching “lake population” level models of process. To illustrate this potential, we combine the model of surface area distribution with a model of carbon mass accumulation rate. We found that even if smaller lakes contribute relatively less to total surface area than larger lakes, the increasing carbon accumulation rate with decreasing lake size is strong enough to bias the distribution of carbon mass accumulation towards smaller lakes. This analytical framework provides a relatively simple approach to upscaling morphology and process that is easily generalizable to other ecosystem processes.
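The threshold result quoted above (even distribution of total area at a Pareto slope of 1) can be checked with a short calculation. The function below is our illustrative construction, not the authors' code; it integrates total area per size interval under an assumed Pareto density:

```python
import math

def area_per_size_bin(slope, a_lo, a_hi):
    """Relative total lake area contributed by lakes with areas in
    [a_lo, a_hi], assuming a Pareto density p(a) proportional to
    a**-(slope + 1).  This is an illustrative construction."""
    if abs(slope - 1.0) < 1e-12:          # integral of a * a**-2 is a log
        return math.log(a_hi / a_lo)
    return (a_hi ** (1 - slope) - a_lo ** (1 - slope)) / (1 - slope)

# At a Pareto slope of 1, every decade of lake size holds equal total area.
decades = [(10.0 ** k, 10.0 ** (k + 1)) for k in range(4)]
areas_at_slope_1 = [area_per_size_bin(1.0, lo, hi) for lo, hi in decades]
```

Below the threshold slope the larger size classes dominate the total, and above it the smaller classes do, which is the qualitative pattern the abstract describes.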
Goychuk, I
2001-08-01
Stochastic resonance in a simple model of information transfer is studied for sensory neurons and ensembles of ion channels. An exact expression for the information gain is obtained for the Poisson process with the signal-modulated spiking rate. This result allows one to generalize the conventional stochastic resonance (SR) problem (with periodic input signal) to the arbitrary signals of finite duration (nonstationary SR). Moreover, in the case of a periodic signal, the rate of information gain is compared with the conventional signal-to-noise ratio. The paper establishes the general nonequivalence between both measures notwithstanding their apparent similarity in the limit of weak signals.
A simple analytical model for signal amplification by reversible exchange (SABRE) process.
Barskiy, Danila A; Pravdivtsev, Andrey N; Ivanov, Konstantin L; Kovtunov, Kirill V; Koptyug, Igor V
2016-01-07
We demonstrate an analytical model for the description of the signal amplification by reversible exchange (SABRE) process. The model relies on a combined analysis of chemical kinetics and the evolution of the nuclear spin system during the hyperpolarization process. The presented model provides, for the first time, a rationale for deciding which system parameters (i.e., J-couplings, relaxation rates, reaction rate constants) have to be optimized in order to achieve higher signal enhancement for a substrate of interest in SABRE experiments.
Mathematical modeling of high-pH chemical flooding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhuyan, D.; Lake, L.W.; Pope, G.A.
1990-05-01
This paper describes a generalized compositional reservoir simulator for high-pH chemical flooding processes. This simulator combines the reaction chemistry associated with these processes with the extensive physical- and flow-property modeling schemes of an existing micellar/polymer flood simulator, UTCHEM. Application of the model is illustrated for cases from a simple alkaline preflush to surfactant-enhanced alkaline-polymer flooding.
Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M
2018-07-01
Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).
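A minimal sketch of the model class described, a discrete-time, discrete-state epidemic with binomial process error and binomial observation error, might look as follows. The SIR structure and all parameter values are illustrative assumptions, not the authors' exact specification:

```python
import random

def simulate_epidemic(n=1000, i0=10, beta=0.3, gamma=0.1,
                      report_prob=0.5, steps=100, seed=0):
    """Discrete-time, discrete-state SIR with binomial process error and
    binomial observation error on reported cases.  Parameter values and the
    SIR structure are illustrative assumptions, not the authors' model."""
    rng = random.Random(seed)

    def binom(trials, p):                     # simple binomial sampler
        return sum(rng.random() < p for _ in range(trials))

    s, i, r = n - i0, i0, 0
    reported = []
    for _ in range(steps):
        p_inf = 1.0 - (1.0 - beta / n) ** i   # per-susceptible infection risk
        new_inf = binom(s, p_inf)             # process error
        new_rec = binom(i, gamma)             # process error
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        assert s + i + r == n                 # compartments conserved
        reported.append(binom(new_inf, report_prob))  # observation error
    return reported
```

Fitting such a model in JAGS, NIMBLE, or Stan then amounts to treating the latent infection counts as unobserved states and the reported series as data, which is exactly the setting whose computational trade-offs the paper characterizes.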
Use of paired simple and complex models to reduce predictive bias and quantify uncertainty
NASA Astrophysics Data System (ADS)
Doherty, John; Christensen, Steen
2011-12-01
Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details borne of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often facilitate good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, yielding insights into the costs of model simplification and into how some of these costs may be reduced.
It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.
Modeling Translation in Protein Synthesis with TASEP: A Tutorial and Recent Developments
NASA Astrophysics Data System (ADS)
Zia, R. K. P.; Dong, J. J.; Schmittmann, B.
2011-07-01
The phenomenon of protein synthesis has been modeled in terms of totally asymmetric simple exclusion processes (TASEP) since 1968. In this article, we provide a tutorial of the biological and mathematical aspects of this approach. We also summarize several new results, concerned with limited resources in the cell and simple estimates for the current (protein production rate) of a TASEP with inhomogeneous hopping rates, reflecting the characteristics of real genes.
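As a toy illustration of a TASEP, the following Monte Carlo sketch (our construction; hopping rates are uniform rather than the inhomogeneous codon-dependent rates discussed in the article) simulates an open-boundary lattice with random-sequential updates and estimates the steady-state current, which for large entry and exit rates should approach the maximal-current value of 1/4:

```python
import random

def tasep_current(L=100, alpha=1.0, beta=1.0, sweeps=10000, seed=42):
    """Open-boundary TASEP, random-sequential updates, uniform hopping rates.
    Bond 0 injects at rate alpha, bonds 1..L-1 are bulk hops, bond L ejects
    at rate beta.  Returns the time-averaged exit current per sweep."""
    rng = random.Random(seed)
    occ = [0] * L
    burn = sweeps // 2
    exits = measured = 0
    for sweep in range(sweeps):
        counting = sweep >= burn          # discard the first half as burn-in
        measured += counting
        for _ in range(L + 1):            # one sweep = one attempt per bond
            b = rng.randrange(L + 1)
            if b == 0:                    # injection at the left boundary
                if occ[0] == 0 and rng.random() < alpha:
                    occ[0] = 1
            elif b < L:                   # bulk hop from site b-1 to b
                if occ[b - 1] == 1 and occ[b] == 0:
                    occ[b - 1], occ[b] = 0, 1
            else:                         # ejection at the right boundary
                if occ[L - 1] == 1 and rng.random() < beta:
                    occ[L - 1] = 0
                    exits += counting
    return exits / measured
```

In the protein-synthesis reading, the current is the protein production rate; making the hopping rates site-dependent is the natural next step toward the inhomogeneous models the article summarizes.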
A simple analytical infiltration model for short-duration rainfall
NASA Astrophysics Data System (ADS)
Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming
2017-12-01
Many infiltration models have been proposed to simulate the infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, the Short-duration Infiltration Process (SHIP) model. Infiltration simulated by five models (SHIP (high), SHIP (middle), SHIP (low), Philip and Parlange) was compared based on numerical experiments and soil column experiments. In the numerical experiments, the SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions: the absolute values of percent bias were less than 12% and the values of Nash-Sutcliffe efficiency were greater than 0.83. Additionally, in the soil column experiments, the infiltration rate fluctuated within a range because of non-uniform initial water content. The SHIP (high) and SHIP (low) models simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. Given the robustness of its solutions and its coverage of the fluctuation range of the infiltration rate, the SHIP model can be integrated into hydrologic models to simulate the SHR infiltration process and benefit flood forecasting.
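For reference, the Philip model used as a comparison above is the classical two-term infiltration equation; here is a minimal sketch with illustrative parameter values (the sorptivity and steady-conductivity numbers are made up, not taken from the paper):

```python
def philip_rate(t, sorptivity=0.8, k_steady=0.05):
    """Philip two-term infiltration rate i(t) = S/(2*sqrt(t)) + K: an
    early-time capillary (sorptivity) term plus a steady gravity term.
    Parameter values and units (e.g. cm, h) are illustrative."""
    return 0.5 * sorptivity * t ** -0.5 + k_steady

def philip_cumulative(t, sorptivity=0.8, k_steady=0.05):
    """Cumulative infiltration I(t) = S*sqrt(t) + K*t."""
    return sorptivity * t ** 0.5 + k_steady * t
```

The rate decays toward the steady conductivity K, which is why short-duration events are dominated by the sorptivity term and are therefore sensitive to the initial water content the SHIP model is designed to handle.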
NASA Astrophysics Data System (ADS)
Aye, S. A.; Heyns, P. S.
2017-02-01
This paper proposes an optimal Gaussian process regression (GPR) for the prediction of remaining useful life (RUL) of slow speed bearings based on a novel degradation assessment index obtained from acoustic emission signals. The optimal GPR is obtained from an integration or combination of existing simple mean and covariance functions in order to capture the observed trend of the bearing degradation as well as the irregularities in the data. The resulting integrated GPR model provides an excellent fit to the data and improves over the simple GPR models that are based on simple mean and covariance functions. In addition, it achieves a low percentage error in predicting the remaining useful life of slow speed bearings. These findings are robust under varying operating conditions such as loading and speed and can be applied to nonlinear and nonstationary machine response signals useful for effective preventive machine maintenance.
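The idea of combining simple covariance functions can be illustrated with a bare-bones GP posterior-mean predictor. The kernel below (an assumed RBF-plus-linear combination, with made-up hyperparameters and a hypothetical degradation series) is a sketch of the general approach, not the paper's optimized model:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def kernel(a, b, ell=1.0, sf=1.0, sl=0.1):
    """Combined covariance: an RBF term (smooth degradation trend) plus a
    linear term (drift).  Hyperparameter values are made up."""
    return sf * math.exp(-0.5 * (a - b) ** 2 / ell ** 2) + sl * a * b

def gp_posterior_mean(xs, ys, x_star, noise=1e-4):
    """GP posterior mean at x_star given noisy observations (xs, ys)."""
    K = [[kernel(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(kernel(x_star, xi) * a for xi, a in zip(xs, alpha))

# Hypothetical degradation-index observations at inspection times 0..3:
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 2.0, 3.1]
prediction = gp_posterior_mean(xs, ys, 1.5)
```

Summing kernels keeps the Gram matrix positive semi-definite, so trend and irregularity terms can be mixed freely, which is the design choice the paper exploits.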
Evidence integration in model-based tree search
Solway, Alec; Botvinick, Matthew M.
2015-01-01
Research on the dynamics of reward-based, goal-directed decision making has largely focused on simple choice, where participants decide among a set of unitary, mutually exclusive options. Recent work suggests that the deliberation process underlying simple choice can be understood in terms of evidence integration: Noisy evidence in favor of each option accrues over time, until the evidence in favor of one option is significantly greater than the rest. However, real-life decisions often involve not one, but several steps of action, requiring a consideration of cumulative rewards and a sensitivity to recursive decision structure. We present results from two experiments that leveraged techniques previously applied to simple choice to shed light on the deliberation process underlying multistep choice. We interpret the results from these experiments in terms of a new computational model, which extends the evidence accumulation perspective to multiple steps of action.
Methods for Maximizing the Learning Process: A Theoretical and Experimental Analysis.
ERIC Educational Resources Information Center
Atkinson, Richard C.
This research deals with optimizing the instructional process. The approach adopted was to limit consideration to simple learning tasks for which adequate mathematical models could be developed. Optimal or suitable suboptimal instructional strategies were developed for the models. The basic idea was to solve for strategies that either maximize the…
Frank, Steven A.
2010-01-01
We typically observe large-scale outcomes that arise from the interactions of many hidden, small-scale processes. Examples include age of disease onset, rates of amino acid substitutions, and composition of ecological communities. The macroscopic patterns in each problem often vary around a characteristic shape that can be generated by neutral processes. A neutral generative model assumes that each microscopic process follows unbiased or random stochastic fluctuations: random connections of network nodes; amino acid substitutions with no effect on fitness; species that arise or disappear from communities randomly. These neutral generative models often match common patterns of nature. In this paper, I present the theoretical background by which we can understand why these neutral generative models are so successful. I show where the classic patterns come from, such as the Poisson pattern, the normal or Gaussian pattern, and many others. Each classic pattern was often discovered by a simple neutral generative model. The neutral patterns share a special characteristic: they describe the patterns of nature that follow from simple constraints on information. For example, any aggregation of processes that preserves information only about the mean and variance attracts to the Gaussian pattern; any aggregation that preserves information only about the mean attracts to the exponential pattern; any aggregation that preserves information only about the geometric mean attracts to the power law pattern. I present a simple and consistent informational framework of the common patterns of nature based on the method of maximum entropy. This framework shows that each neutral generative model is a special case that helps to discover a particular set of informational constraints; those informational constraints define a much wider domain of non-neutral generative processes that attract to the same neutral pattern. PMID:19538344
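The aggregation argument, that processes preserving only mean and variance attract to the Gaussian pattern, can be illustrated numerically. This small sketch (our construction) sums many independent uniform fluctuations and checks the standardized third and fourth moments against their Gaussian values (0 and 3):

```python
import math
import random

def sample_aggregates(n_components=50, n_samples=20000, seed=7):
    """Each sample is an aggregate of many independent, bounded fluctuations;
    aggregation that preserves only mean and variance attracts to the
    Gaussian pattern."""
    rng = random.Random(seed)
    return [sum(rng.random() for _ in range(n_components))
            for _ in range(n_samples)]

def standardized_moment(xs, k):
    """k-th moment of xs after centering and scaling to unit variance."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return sum(((x - mu) / math.sqrt(var)) ** k for x in xs) / n

sums = sample_aggregates()
skew = standardized_moment(sums, 3)       # Gaussian value: 0
kurt = standardized_moment(sums, 4)       # Gaussian value: 3
```

Swapping the uniform components for any other bounded fluctuation leaves the result unchanged, which is the neutrality the paper emphasizes: the attracting pattern depends on the preserved information, not the microscopic details.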
Bayesian analysis of volcanic eruptions
NASA Astrophysics Data System (ADS)
Ho, Chih-Hsiang
1990-10-01
The simple Poisson model generally gives a good fit to many volcanoes for volcanic eruption forecasting. Nonetheless, empirical evidence suggests that volcanic activity in successive equal time-periods tends to be more variable than a simple Poisson process with constant eruptive rate. An alternative model is therefore examined in which the eruptive rate (λ) for a given volcano or cluster(s) of volcanoes is described by a gamma distribution (prior) rather than treated as a constant value as in the assumptions of a simple Poisson model. Bayesian analysis is performed to link the two distributions together to give the aggregate behavior of the volcanic activity. When the Poisson process is expanded to accommodate a gamma mixing distribution on λ, a consequence of this mixed (or compound) Poisson model is that the frequency distribution of eruptions in any given time-period of equal length follows the negative binomial distribution (NBD). Applications of the proposed model and comparisons between the generalized model and the simple Poisson model are discussed based on the historical eruptive count data of volcanoes Mauna Loa (Hawaii) and Etna (Italy). Several relevant facts lead to the conclusion that the generalized model is preferable for practical use both in space and time.
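The gamma-Poisson mixture described above can be checked numerically: drawing λ from a gamma prior and then a Poisson count given λ yields marginally negative-binomial counts whose variance exceeds the mean. The shape and scale values below are illustrative, not fitted to the eruption data:

```python
import random

def gamma_poisson_counts(shape=2.0, scale=5.0, n=20000, seed=3):
    """Draw an eruptive rate lambda from a gamma prior, then a Poisson count
    given that rate; marginally the counts follow the negative binomial
    distribution.  Shape/scale values are illustrative, not fitted."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n):
        lam = rng.gammavariate(shape, scale)
        k, t = 0, rng.expovariate(1.0)    # Poisson draw via waiting times
        while t < lam:
            k += 1
            t += rng.expovariate(1.0)
        counts.append(k)
    return counts

counts = gamma_poisson_counts()
mean = sum(counts) / len(counts)          # theory: shape * scale = 10
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# Overdispersion: theory gives variance = mean * (1 + scale) = 60 > mean.
```

The variance-to-mean ratio exceeding one is exactly the extra time-period variability that motivates the NBD over the simple Poisson model.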
Detection and recognition of simple spatial forms
NASA Technical Reports Server (NTRS)
Watson, A. B.
1983-01-01
A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.
A Simple Text File for Curing Rainbow Blindness
NASA Technical Reports Server (NTRS)
Krylo, Robert; Tomlin, Marilyn; Seager, Michael
2008-01-01
This slide presentation reviews the use of a simple text file for working with large, multi-component thermal models that present a post-processing challenge. The challenge arises because temperatures for many components, each with varying requirements, must be examined, while false color temperature maps, or rainbows, provide only a qualitative assessment of results.
ERIC Educational Resources Information Center
Honda, Hidehito; Matsuka, Toshihiko; Ueda, Kazuhiro
2017-01-01
Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in…
A simple predictive model for the structure of the oceanic pycnocline
Gnanadesikan
1999-03-26
A simple theory for the large-scale oceanic circulation is developed, relating pycnocline depth, Northern Hemisphere sinking, and low-latitude upwelling to pycnocline diffusivity and Southern Ocean winds and eddies. The results show that Southern Ocean processes help maintain the global ocean structure and that pycnocline diffusion controls low-latitude upwelling.
USDA-ARS?s Scientific Manuscript database
Pasta is a simple food made from water and durum wheat (Triticum turgidum subsp. durum) semolina. As pasta increases in popularity, studies have endeavored to analyze the attributes that contribute to high quality pasta. Despite being a simple food, the laboratory scale analysis of pasta quality is ...
Mantle convection and the state of the Earth's interior
NASA Technical Reports Server (NTRS)
Hager, Bradford H.
1987-01-01
During 1983 to 1986 emphasis in the study of mantle convection shifted away from fluid mechanical analysis of simple systems with uniform material properties and simple geometries, toward analysis of the effects of more complicated, presumably more realistic models. The important processes related to mantle convection are considered. The developments in seismology are discussed.
Comparing an annual and daily time-step model for predicting field-scale phosphorus loss
USDA-ARS?s Scientific Manuscript database
Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...
A coordination theory for intelligent machines
NASA Technical Reports Server (NTRS)
Wang, Fei-Yue; Saridis, George N.
1990-01-01
A formal model for the coordination level of intelligent machines is established. The framework of the coordination level investigated consists of one dispatcher and a number of coordinators. The model, called the coordination structure, describes analytically the information structure and information flow of the coordination activities at the coordination level. Specifically, the coordination structure offers a formalism to (1) describe the task translation of the dispatcher and coordinators; (2) represent the individual processes within the dispatcher and coordinators; (3) specify the cooperation and connections among the dispatcher and coordinators; (4) perform process analysis and evaluation; and (5) provide a control and communication mechanism for real-time monitoring or simulation of the coordination process. A simple procedure for task scheduling in the coordination structure is presented. Task translation is achieved by a stochastic learning algorithm whose learning process is measured with entropy and whose convergence is guaranteed. Finally, a case study of the coordination structure with three coordinators and one dispatcher for a simple intelligent manipulator system illustrates the proposed model, and simulation of the task processes performed on the model verifies the soundness of the theory.
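The entropy-measured stochastic learning described above can be illustrated with a minimal sketch: a linear reward-inaction learning automaton whose action probabilities concentrate on the best-rewarded action, so the entropy of the probability vector shrinks as learning converges. The reward probabilities and learning rate below are illustrative assumptions, not values from the paper.

```python
import math
import random

def entropy(p):
    """Shannon entropy of an action-probability vector."""
    return -sum(x * math.log(x) for x in p if x > 0)

def learn(n_actions=3, best=0, rate=0.1, steps=2000, seed=42):
    """Linear reward-inaction automaton: on a rewarded action, shift
    probability mass toward it; do nothing on failure. The (hypothetical)
    environment rewards `best` with probability 0.9, others with 0.2."""
    rng = random.Random(seed)
    p = [1.0 / n_actions] * n_actions
    for _ in range(steps):
        a = rng.choices(range(n_actions), weights=p)[0]
        if rng.random() < (0.9 if a == best else 0.2):
            p = [(1 - rate) * x for x in p]   # scale all probabilities down...
            p[a] += rate                      # ...and move the freed mass to a
    return p

probs = learn()
# entropy(probs) decreases from log(3) toward 0 as the automaton converges
```

The entropy of the action probabilities serves exactly as the convergence measure mentioned in the abstract: it starts at log(3) for the uniform vector and shrinks as the automaton commits to an action.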
Guzman, Karen; Bartlett, John
2012-01-01
Biological systems and living processes involve a complex interplay of biochemicals and macromolecular structures that can be challenging for undergraduate students to comprehend and, thus, misconceptions abound. Protein synthesis, or translation, is an example of a biological process for which students often hold many misconceptions. This article describes an exercise that was developed to illustrate the process of translation using simple objects to represent complex molecules. Animations, 3D physical models, computer simulations, laboratory experiments and classroom lectures are also used to reinforce the students' understanding of translation, but by focusing on the simple manipulatives in this exercise, students are better able to visualize concepts that can elude them when using the other methods. The translation exercise is described along with suggestions for background material, questions used to evaluate student comprehension and tips for using the manipulatives to identify common misconceptions. Copyright © 2012 Wiley Periodicals, Inc.
A crack-like rupture model for the 19 September 1985 Michoacan, Mexico, earthquake
NASA Astrophysics Data System (ADS)
Ruppert, Stanley D.; Yomogida, Kiyoshi
1992-09-01
Evidence supporting a smooth, crack-like rupture process is obtained from a major earthquake, the 1985 Michoacan event, for the first time. Digital strong motion data from three stations (Caleta de Campos, La Villita, and La Union), recording near-field radiation from the fault, show unusually simple ramped displacements and permanent offsets previously seen only in theoretical models. The recording of low-frequency (0 to 1 Hz) near-field waves, together with the apparently smooth rupture, favors a crack-like model over a step or Haskell-type dislocation model under the constraint of the slip distribution obtained by previous studies. A crack-like rupture, characterized by an approximated dynamic slip function and a systematic decrease in slip duration away from the point of rupture nucleation, produces the best fit to the simple ramped displacements observed. Spatially varying rupture duration controls several important aspects of the synthetic seismograms, including the variation in displacement rise times between components of motion observed at Caleta de Campos. Ground motion observed at Caleta de Campos can be explained remarkably well with a smoothly propagating crack model. However, data from La Villita and La Union suggest a more complex rupture process than the simple crack-like model for the southeastern portion of the fault.
Hartin, Corinne A.; Patel, Pralit L.; Schwarber, Adria; ...
2015-04-01
Simple climate models play an integral role in the policy and scientific communities. They are used for climate mitigation scenarios within integrated assessment models, complex climate model emulation, and uncertainty analyses. Here we describe Hector v1.0, an open source, object-oriented, simple global climate carbon-cycle model. This model runs essentially instantaneously while still representing the most critical global-scale earth system processes. Hector has a three-part main carbon cycle: a one-pool atmosphere, land, and ocean. The model's terrestrial carbon cycle includes primary production and respiration fluxes, accommodating arbitrary geographic divisions into, e.g., ecological biomes or political units. Hector actively solves the inorganic carbon system in the surface ocean, directly calculating air–sea fluxes of carbon and ocean pH. Hector reproduces the global historical trends of atmospheric [CO2], radiative forcing, and surface temperatures. The model simulates all four Representative Concentration Pathways (RCPs) with equivalent rates of change of key variables over time compared to current observations, MAGICC (a well-known simple climate model), and models from the 5th Coupled Model Intercomparison Project. Hector's flexibility, open-source nature, and modular design will facilitate a broad range of research in various areas.
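As a rough illustration of the bookkeeping a one-pool atmospheric carbon cycle involves (a toy sketch, not Hector's actual calibrated equations), consider an annual balance in which land and ocean sinks each remove a fixed fraction of the atmospheric perturbation above preindustrial; the pool size, sink coefficients, and emissions below are all hypothetical:

```python
def run_carbon_toy(emissions, c0=590.0, k_ocean=0.02, k_land=0.015):
    """Toy one-pool atmospheric carbon balance (GtC): each year, land and
    ocean sinks remove a fixed fraction of the atmospheric perturbation
    above the preindustrial pool c0. Coefficients are illustrative only."""
    c = c0
    trajectory = []
    for e in emissions:                       # annual emissions, GtC/yr
        uptake = (k_ocean + k_land) * (c - c0)
        c += e - uptake
        trajectory.append(c)
    return trajectory

traj = run_carbon_toy([10.0] * 100)           # 100 years at 10 GtC/yr
ppm = [c / 2.13 for c in traj]                # ~2.13 GtC per ppm CO2
```

With constant emissions the pool relaxes toward a steady state at c0 + E/(k_ocean + k_land), which is the qualitative behavior any linear-sink carbon model shares.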
Simple construction and performance of a conical plastic cryocooler
NASA Technical Reports Server (NTRS)
Lambert, N.
1985-01-01
Low-power cryocoolers with conical displacers offer several advantages over stepped displacers. The described fabrication process allows quick and reproducible manufacturing of plastic conical displacer units. This could be of commercial interest, but it also makes systematic optimization feasible by constructing a number of different models. The process allows for a wide range of displacer profiles. Low-temperature performance is dominated by regenerator losses, and several loss effects are discussed. A simple device that controls gas flow during expansion is also described.
Disaggregation and Refinement of System Dynamics Models via Agent-based Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James J; Ozmen, Ozgur; Schryver, Jack C
System dynamics models are usually used to investigate aggregate level behavior, but these models can be decomposed into agents that have more realistic individual behaviors. Here we develop a simple model of the STEM workforce to illuminate the impacts that arise from the disaggregation and refinement of system dynamics models via agent-based modeling. Particularly, alteration of Poisson assumptions, adding heterogeneity to decision-making processes of agents, and discrete-time formulation are investigated and their impacts are illustrated. The goal is to demonstrate both the promise and danger of agent-based modeling in the context of a relatively simple model and to delineate the importance of modeling decisions that are often overlooked.
NASA Astrophysics Data System (ADS)
Korshunov, G. I.; Petrushevskaya, A. A.; Lipatnikov, V. A.; Smirnova, M. S.
2018-03-01
The strategy for insuring the quality of electronics is of primary importance. To assess quality, the sequence of processes is modeled as a Markov chain. The proposed improvement relies on simple database support for design-for-manufacturing, allowing step-by-step future development, and phased automation of electronics design and digital manufacturing is assumed. MATLAB modeling results showed an increase in effectiveness, indicating that new tools and software can be made more effective. A primary digital model is proposed to represent the product across the process sequence, from several processes up to the whole life cycle.
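A process sequence of the kind described can be modeled as a Markov chain in a few lines. The states and transition probabilities below are hypothetical, chosen only to show how an absorbing "ship" state and a rework loop propagate probability mass:

```python
# hypothetical manufacturing chain: design -> fabricate -> test -> {ship, rework}
transitions = {
    "design":    {"fabricate": 1.0},
    "fabricate": {"test": 1.0},
    "test":      {"ship": 0.9, "rework": 0.1},
    "rework":    {"fabricate": 1.0},
    "ship":      {"ship": 1.0},               # absorbing state
}

def step(dist):
    """Propagate a probability distribution one step through the chain."""
    out = {s: 0.0 for s in transitions}
    for s, p in dist.items():
        for t, q in transitions[s].items():
            out[t] += p * q
    return out

dist = {s: 0.0 for s in transitions}
dist["design"] = 1.0
for _ in range(200):
    dist = step(dist)
# with a 10% rework loop, essentially all mass ends in the absorbing "ship" state
```

Iterating the transition map like this yields the long-run state occupancy, which is the quantity a quality-assurance chain model needs.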
NASA Technical Reports Server (NTRS)
Poole, L. R.; Huckins, E. K., III
1972-01-01
A general theory of mathematical modeling of elastic parachute suspension lines during the unfurling process was developed. Massless-spring modeling of suspension-line elasticity was evaluated in detail. For this simple model, the equations governing the motion were developed and numerically integrated, and the results were compared with flight test data. In most regions, agreement was satisfactory; however, poor agreement was obtained during periods of rapid fluctuation in line tension.
Riesenhuber, Maximilian; Wolff, Brian S.
2009-01-01
Summary A recent article in Acta Psychologica (“Picture-plane inversion leads to qualitative changes of face perception” by B. Rossion, 2008) criticized several aspects of an earlier paper of ours (Riesenhuber et al., “Face processing in humans is compatible with a simple shape-based model of vision”, Proc Biol Sci, 2004). We here address Rossion’s criticisms and correct some misunderstandings. To frame the discussion, we first review our previously presented computational model of face recognition in cortex (Jiang et al., “Evaluation of a shape-based model of human face discrimination using fMRI and behavioral techniques”, Neuron, 2006) that provides a concrete biologically plausible computational substrate for holistic coding, namely a neural representation learned for upright faces, in the spirit of the original simple-to-complex hierarchical model of vision by Hubel and Wiesel. We show that Rossion’s and others’ data support the model, and that there is actually a convergence of views on the mechanisms underlying face recognition, in particular regarding holistic processing. PMID:19665104
Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.
Putzar, Lisa; Gondan, Matthias; Röder, Brigitte
2012-01-01
People treated for bilateral congenital cataracts offer a model for studying the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating integrated processing of modality-specific information. This finding contrasts with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple, short stimuli do not depend on the availability of visual and/or cross-modal input from birth.
A Simple Model of Global Aerosol Indirect Effects
NASA Technical Reports Server (NTRS)
Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter
2013-01-01
Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to preindustrial cloud condensation nuclei concentration, preindustrial accumulation mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of the present-day AIE as low as -5 W/sq m and as high as -0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence of AIE uncertainty on uncertainty in parameter values.
ERIC Educational Resources Information Center
Gibbons, Robert D.; And Others
In the process of developing a conditionally-dependent item response theory (IRT) model, the problem arose of modeling an underlying multivariate normal (MVN) response process with general correlation among the items. Without the assumption of conditional independence, for which the underlying MVN cdf takes on comparatively simple forms and can be…
ERIC Educational Resources Information Center
Ahmet, Kara
2015-01-01
This paper presents a simple model of the provision of higher educational services that considers and exemplifies nonlinear, stochastic, and potentially chaotic processes. I use the methods of system dynamics to simulate these processes in the context of a particular sociologically interesting case, namely that of the Turkish higher education…
Modeling of the merging of two colliding field reversed configuration plasmoids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Guanqiong; Wang, Xiaoguang; Li, Lulu
2016-06-15
The field reversed configuration (FRC) is one of the candidate plasma targets for magneto-inertial fusion, and a high-temperature FRC can be formed using collision-merging technology. Although the merging process and mechanism of FRCs are quite complicated, it is worthwhile to build a simple model to investigate the macroscopic equilibrium parameters, including the density, the temperature and the separatrix volume, which may play an important role in the collision-merging process. Interestingly, estimates based on our simple model agree with the simulation results of a two-dimensional magneto-hydrodynamic code (MFP-2D), which our group has been developing over the last couple of years, and these results qualitatively fit the C-2 experiments of the Tri Alpha Energy company. On the other hand, the simple model can be used to investigate how to increase the density of the merged FRC. It is found that the amplification of the density depends on the poloidal flux-increase factor, and that the temperature increases with the translation speed of the two plasmoids.
Mattfeldt, Torsten
2011-04-01
Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
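As a worked example of the bootstrap idea described above, a percentile bootstrap confidence interval for a mean can be computed with nothing beyond the standard library; the sample data here are made up for illustration:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=5000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for a scalar statistic:
    resample the data with replacement, recompute the statistic each time,
    and take the alpha/2 and 1 - alpha/2 quantiles of the replicates."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

sample = [4.1, 5.3, 3.8, 6.0, 4.9, 5.5, 4.2, 5.1]
lo, hi = bootstrap_ci(sample)   # brackets the sample mean of ~4.86
```

Because only resampling and sorting are involved, the same function works for any scalar statistic (median, trimmed mean, a summary statistic of a point pattern), which is exactly the distribution-free appeal noted in the text.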
Pre-Modeling Ensures Accurate Solid Models
ERIC Educational Resources Information Center
Gow, George
2010-01-01
Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…
ERIC Educational Resources Information Center
Fasoula, S.; Nikitas, P.; Pappa-Louisi, A.
2017-01-01
A series of Microsoft Excel spreadsheets were developed to simulate the process of separation optimization under isocratic and simple gradient conditions. The optimization procedure is performed in a stepwise fashion using simple macros for an automatic application of this approach. The proposed optimization approach involves modeling of the peak…
Ontological Model of Business Process Management Systems
NASA Astrophysics Data System (ADS)
Manoilov, G.; Deliiska, B.
2008-10-01
The activities that constitute business process management (BPM) can be grouped into five categories: design, modeling, execution, monitoring and optimization. Dedicated software packages for business process management systems (BPMS) are available on the market, but the efficiency of their exploitation depends on the ontological model used at development time and run time. In this article, an ontological model of a BPMS in the software industry is investigated. The model building is preceded by conceptualization of the domain and a taxonomy of BPMS development. On the basis of the taxonomy, a simple online thesaurus is created.
NASA Astrophysics Data System (ADS)
Pradeep, Krishna; Poiroux, Thierry; Scheer, Patrick; Juge, André; Gouget, Gilles; Ghibaudo, Gérard
2018-07-01
This work details the analysis of wafer level global process variability in 28 nm FD-SOI using split C-V measurements. The proposed approach initially evaluates the native on wafer process variability using efficient extraction methods on split C-V measurements. The on-wafer threshold voltage (VT) variability is first studied and modeled using a simple analytical model. Then, a statistical model based on the Leti-UTSOI compact model is proposed to describe the total C-V variability in different bias conditions. This statistical model is finally used to study the contribution of each process parameter to the total C-V variability.
The coalescent process in models with selection and recombination.
Hudson, R R; Kaplan, N L
1988-11-01
The statistical properties of the process describing the genealogical history of a random sample of genes at a selectively neutral locus which is linked to a locus at which natural selection operates are investigated. It is found that the equations describing this process are simple modifications of the equations describing the process assuming that the two loci are completely linked. Thus, the statistical properties of the genealogical process for a random sample at a neutral locus linked to a locus with selection follow from the results obtained for the selected locus. Sequence data from the alcohol dehydrogenase (Adh) region of Drosophila melanogaster are examined and compared to predictions based on the theory. It is found that the spatial distribution of nucleotide differences between Fast and Slow alleles of Adh is very similar to the spatial distribution predicted if balancing selection operates to maintain the allozyme variation at the Adh locus. The spatial distribution of nucleotide differences between different Slow alleles of Adh do not match the predictions of this simple model very well.
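The genealogical process for a neutral locus can be simulated directly: with k lineages present, the waiting time to the next coalescence is exponential with rate k(k-1)/2 in units of 2N generations. The sketch below is standard neutral coalescent theory, not the modified equations for linked selection derived in the paper:

```python
import random

def coalescent_times(n, rng):
    """Waiting times of the neutral coalescent for a sample of n genes:
    while k lineages remain, the time to the next coalescence is
    exponential with rate k*(k-1)/2 (time in units of 2N generations)."""
    return [rng.expovariate(k * (k - 1) / 2) for k in range(n, 1, -1)]

rng = random.Random(7)
tmrca = [sum(coalescent_times(10, rng)) for _ in range(20000)]
mean_tmrca = sum(tmrca) / len(tmrca)
# E[TMRCA] = 2 * (1 - 1/n), i.e. 1.8 for a sample of n = 10 genes
```

The paper's point is that for a neutral locus linked to a selected one, these same waiting-time equations need only simple modification, so machinery like the above carries over.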
NASA Astrophysics Data System (ADS)
Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert
2016-05-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~20,000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
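The score-weighted averaging used as the first analysis method amounts to a one-line computation; the sketch below uses hypothetical run predictions and fit scores, not values from the actual 625-member ensemble:

```python
def weighted_envelope(predictions, scores):
    """Score-weighted ensemble average: each run's prediction is weighted
    by its aggregate model-data fit score (higher score = better fit)."""
    total = sum(scores)
    return sum(p * s for p, s in zip(predictions, scores)) / total

# hypothetical equivalent sea-level-rise predictions (m) and fit scores
esl = [3.0, 4.5, 5.0, 8.0]
score = [0.9, 0.7, 0.6, 0.1]
best_estimate = weighted_envelope(esl, score)  # pulled toward well-fitting runs
```

Percentiles of the same score-weighted distribution give the uncertainty envelope; as the abstract notes, the estimate is only robust when the ensemble samples the parameter space full-factorially.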
Foreshocks and aftershocks in simple earthquake models.
Kazemian, J; Tiampo, K F; Klein, W; Dominguez, R
2015-02-27
Many models of earthquake faults have been introduced that connect Gutenberg-Richter (GR) scaling to triggering processes. However, natural earthquake fault systems are composed of a variety of different geometries and materials and the associated heterogeneity in physical properties can cause a variety of spatial and temporal behaviors. This raises the question of how the triggering process and the structure interact to produce the observed phenomena. Here we present a simple earthquake fault model based on the Olami-Feder-Christensen and Rundle-Jackson-Brown cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or asperity cells, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems along with GR scaling. In addition, we observe sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a main shock that is followed by a tail of decreasing activity (aftershocks). This work provides further evidence that the spatial and temporal patterns observed in natural seismicity are strongly influenced by the underlying physical properties and are not solely the result of a simple cascade mechanism.
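A minimal sketch of an OFC-style cellular automaton with asperity cells conveys the idea, here reduced to a 1-D ring with nearest-neighbor interactions and illustrative parameters rather than the long-range 2-D lattice of the paper: load uniformly until the weakest site fails, redistribute a fraction of its stress to neighbors, and make a fixed percentage of sites "asperities" with a higher failure threshold.

```python
import random

def ofc_with_asperities(size=32, steps=3000, alpha=0.2, frac=0.1, seed=3):
    """Toy OFC-style automaton on a 1-D ring. A failing site passes a
    fraction alpha of its stress to each neighbor (the rest dissipates);
    a fraction `frac` of sites are asperities with a 4x threshold."""
    rng = random.Random(seed)
    thresh = [4.0 if rng.random() < frac else 1.0 for _ in range(size)]
    stress = [rng.uniform(0.0, 0.5) for _ in range(size)]
    events = []
    for _ in range(steps):
        i0 = min(range(size), key=lambda i: thresh[i] - stress[i])
        gap = thresh[i0] - stress[i0]
        stress = [s + gap for s in stress]    # uniform loading to failure
        stress[i0] = thresh[i0]               # avoid floating-point shortfall
        failed, queue = 0, [i0]
        while queue:                          # cascade (avalanche)
            i = queue.pop()
            if stress[i] < thresh[i]:
                continue
            failed += 1
            s, stress[i] = stress[i], 0.0
            for j in ((i - 1) % size, (i + 1) % size):
                stress[j] += alpha * s
                if stress[j] >= thresh[j]:
                    queue.append(j)
        events.append(failed)
    return events

sizes = ofc_with_asperities()
# asperity failures release ~4x the stress and so trigger larger cascades
```

Even this toy version shows the qualitative effect the paper studies: when an asperity finally ruptures, it dumps far more stress into its neighborhood than an ordinary site, producing clustered large events.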
Benchmarking novel approaches for modelling species range dynamics
Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.
2016-01-01
Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. 
We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. PMID:26872305
NASA Technical Reports Server (NTRS)
Matthews, E.
1984-01-01
A simple method was developed for improved prescription of seasonal surface characteristics and parameterization of land-surface processes in climate models. This method, developed for the Goddard Institute for Space Studies General Circulation Model II (GISS GCM II), maintains the spatial variability of fine-resolution land-cover data while restricting to 8 the number of vegetation types handled in the model. This was achieved by: redefining the large number of vegetation classes in the 1 deg x 1 deg resolution Matthews (1983) vegetation data base as percentages of 8 simple types; deriving roughness length, field capacity, masking depth and seasonal, spectral reflectivity for the 8 types; and aggregating these surface features from the 1 deg x 1 deg resolution to coarser model resolutions, e.g., 8 deg latitude x 10 deg longitude or 4 deg latitude x 5 deg longitude.
Single Canonical Model of Reflexive Memory and Spatial Attention
Patel, Saumil S.; Red, Stuart; Lin, Eric; Sereno, Anne B.
2015-01-01
Many neurons in the dorsal and ventral visual stream have the property that after a brief visual stimulus presentation in their receptive field, the spiking activity in these neurons persists above their baseline levels for several seconds. This maintained activity is not always correlated with the monkey’s task and its origin is unknown. We have previously proposed a simple neural network model, based on shape selective neurons in monkey lateral intraparietal cortex, which predicts the valence and time course of reflexive (bottom-up) spatial attention. In the same simple model, we demonstrate here that passive maintained activity or short-term memory of specific visual events can result without need for an external or top-down modulatory signal. Mutual inhibition and neuronal adaptation play distinct roles in reflexive attention and memory. This modest 4-cell model provides the first simple and unified physiologically plausible mechanism of reflexive spatial attention and passive short-term memory processes. PMID:26493949
Exergetic simulation of a combined infrared-convective drying process
NASA Astrophysics Data System (ADS)
Aghbashlo, Mortaza
2016-04-01
Optimal design and performance of a combined infrared-convective drying system with respect to energy use depend strongly on the application of advanced engineering analyses. This article proposes a theoretical approach for exergy analysis of the combined infrared-convective drying process using a simple heat and mass transfer model. The applicability of the developed model to actual drying processes is demonstrated with an illustrative example for a typical food.
The Dairy Greenhouse Gas Emission Model: Reference Manual
USDA-ARS's Scientific Manuscript database
The Dairy Greenhouse Gas Model (DairyGHG) is a software tool for estimating the greenhouse gas emissions and carbon footprint of dairy production systems. A relatively simple process-based model is used to predict the primary greenhouse gas emissions, which include the net emission of carbon dioxide...
Smith predictor based-sliding mode controller for integrating processes with elevated deadtime.
Camacho, Oscar; De la Cruz, Francisco
2004-04-01
An approach to controlling integrating processes with elevated deadtime using a Smith predictor sliding mode controller is presented. A PID sliding surface and an integrating first-order-plus-deadtime model are used to synthesize the controller. Since the performance of existing Smith predictor controllers decreases in the presence of modeling errors, this paper presents a simple approach to combining the Smith predictor with the sliding mode concept, a proven, simple and robust procedure. The proposed scheme has a set of tuning equations expressed as functions of the characteristic parameters of the model. For implementation of the proposed approach, computer-based industrial controllers that execute PID algorithms can be used. The performance and robustness of the proposed controller are compared with those of the Matausek-Micić scheme for linear systems using simulations.
Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfeiffer, M., E-mail: mpfeiffer@irs.uni-stuttgart.de; Nizenkov, P., E-mail: nizenkov@irs.uni-stuttgart.de; Mirza, A., E-mail: mirza@irs.uni-stuttgart.de
2016-02-15
Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum-mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the new proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.
Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases
NASA Astrophysics Data System (ADS)
Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.
2016-02-01
Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Subsequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison with experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.
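The random-walk Metropolis idea mentioned above can be sketched in a few lines: sample the vibrational quantum numbers of several simple-harmonic-oscillator modes from a Boltzmann distribution by proposing single-level moves on one mode at a time. This is an illustrative sketch with assumed parameter names (theta_vs, T), not the authors' DSMC implementation:

```python
import math
import random

def metropolis_vib_sample(theta_vs, T, steps=2000, seed=0):
    """Random-walk Metropolis over the vibrational quantum numbers of
    several simple-harmonic-oscillator modes, targeting Boltzmann
    weights P(n_1..n_k) ~ exp(-sum_i n_i*theta_i / T).
    theta_vs: characteristic vibrational temperatures (K); T: gas
    temperature (K).  Illustrative sketch only."""
    rng = random.Random(seed)
    n = [0] * len(theta_vs)                   # start in the ground state
    for _ in range(steps):
        i = rng.randrange(len(n))             # pick one vibrational mode
        prop = n[i] + rng.choice((-1, 1))     # random-walk proposal
        if prop < 0:
            continue                          # negative levels are rejected
        d_e = (prop - n[i]) * theta_vs[i]     # energy change (kelvin units)
        if d_e <= 0 or rng.random() < math.exp(-d_e / T):
            n[i] = prop                       # Metropolis accept
    return n
```

For a single mode the sampled mean level should approach the analytic value 1/(exp(theta_v/T) - 1), which makes the sketch easy to check against a reservoir-style test.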
A simplified 137Cs transport model for estimating erosion rates in undisturbed soil.
Zhang, Xinbao; Long, Yi; He, Xiubin; Fu, Jiexiong; Zhang, Yunqi
2008-08-01
137Cs is an artificial radionuclide with a half-life of 30.12 years that was released into the environment as a result of atmospheric testing of thermonuclear weapons, primarily during the 1950s-1970s, with peak fallout in 1963. 137Cs fallout, which reaches the ground mostly with precipitation, is strongly and rapidly adsorbed by fine particles in the surface horizons of the soil. Its subsequent redistribution is associated with movements of the soil or sediment particles. The 137Cs nuclide tracing technique has been used for assessment of soil losses for both undisturbed and cultivated soils. For undisturbed soils, a simple profile-shape model was developed in 1990 to describe the 137Cs depth distribution in the profile, in which the maximum 137Cs activity occurs in the surface horizon and decreases exponentially with depth. The model assumed that the total 137Cs fallout was deposited on the earth surface in 1963 and that the 137Cs profile shape has not changed with time. The model has been widely used for assessment of soil losses on undisturbed land. However, temporal variations of the 137Cs depth distribution in undisturbed soils after deposition, caused by downward transport processes, are not considered in the simple profile-shape model; as a result, the model overestimates soil losses. On the basis of the erosion assessment model developed by Walling, D.E., He, Q. [1999. Improved models for estimating soil erosion rates from cesium-137 measurements. Journal of Environmental Quality 28, 611-622], we discuss the 137Cs transport process in the eroded soil profile, simplify the model, and develop a method to estimate soil erosion rates more expediently.
To compare the soil erosion rates calculated by the simple profile-shape model and the simple transport model, the soil losses corresponding to different 137Cs loss proportions of the reference inventory at the Kaixian site of the Three Gorges Region, China, were estimated with both models. The overestimation of soil loss by the previous simple profile-shape model increases with the time elapsed since 1963 and with the 137Cs loss proportion of the reference inventory. For 137Cs loss proportions of 20-80% of the reference inventory at the Kaixian site in 2004, the annual soil-loss depths estimated by the new simplified transport model are only 57.90-56.24% of the values estimated by the previous model.
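The exponential profile shape at the heart of the simple model lends itself to a one-line erosion estimate. The sketch below assumes a site-specific relaxation depth h0 and the 1963 reference year, and converts a measured inventory-loss fraction into an annual soil-loss depth; it illustrates the previous profile-shape model, not the paper's refined transport model:

```python
import math

def annual_erosion_depth(loss_fraction, h0_cm, sample_year, ref_year=1963):
    """Simple profile-shape model: 137Cs activity decays exponentially
    with depth, A(x) ~ exp(-x/h0), so a measured fractional inventory
    loss X implies a total eroded depth h = h0*ln(1/(1-X)).  Dividing by
    the years since peak fallout gives an annual soil-loss depth.
    h0_cm is the (site-specific) profile relaxation depth in cm."""
    total_depth_cm = h0_cm * math.log(1.0 / (1.0 - loss_fraction))
    return total_depth_cm / (sample_year - ref_year)
```

With an illustrative h0 of 4 cm, a 50% inventory loss measured in 2004 corresponds to 4·ln 2 ≈ 2.77 cm of erosion spread over 41 years; because the divisor grows with the sampling year while the profile is assumed frozen, the overestimation the abstract describes grows with elapsed time.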
Pencil-and-Paper Neural Networks: An Undergraduate Laboratory Exercise in Computational Neuroscience
Crisp, Kevin M.; Sutter, Ellen N.; Westerberg, Jacob A.
2015-01-01
Although it has been more than 70 years since McCulloch and Pitts published their seminal work on artificial neural networks, such models remain primarily in the domain of computer science departments in undergraduate education. This is unfortunate, as simple network models offer undergraduate students a much-needed bridge between cellular neurobiology and processes governing thought and behavior. Here, we present a very simple laboratory exercise in which students constructed, trained and tested artificial neural networks by hand on paper. They explored a variety of concepts, including pattern recognition, pattern completion, noise elimination and stimulus ambiguity. Learning gains were evident in changes in the use of language when writing about information processing in the brain. PMID:26557791
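A pencil-and-paper network of the kind described can be reproduced in code almost verbatim: Hebbian weights tabulated from ±1 patterns, and threshold units that complete a noisy cue. The pattern size and update schedule below are illustrative choices, not the exercise's exact worksheet:

```python
def hebbian_weights(patterns):
    """Hebbian rule students can tabulate by hand: w[i][j] = sum over
    stored patterns of x_i*x_j, with no self-connections.
    Patterns are lists of +1/-1."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, steps=5):
    """Pattern completion by repeated threshold updates: each unit takes
    the sign of its summed weighted input (McCulloch-Pitts style)."""
    x = list(cue)
    n = len(x)
    for _ in range(steps):
        x = [1 if sum(w[i][j] * x[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return x
```

Storing a single alternating pattern and presenting it with one flipped bit demonstrates noise elimination: the network restores the stored pattern in one synchronous update.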
Exact solutions for network rewiring models
NASA Astrophysics Data System (ADS)
Evans, T. S.
2007-03-01
Evolving networks with a constant number of edges may be modelled using a rewiring process. These models are used to describe many real-world processes, including the evolution of cultural artifacts such as family names, the evolution of gene variations, and the popularity of strategies in simple econophysics models such as the minority game. The model is closely related to urn models used for glasses, quantum gravity and wealth distributions. The full mean field equation for the degree distribution is found, and its exact solution and generating function solution are given.
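A minimal simulation of such a rewiring process might look as follows, with "edges" read as individuals' attachments to artifacts (family names, strategies) that are reassigned either by copying another edge or, rarely, by innovation. The rates and update rule here are assumptions for illustration, not the paper's mean-field treatment:

```python
import random

def rewire(choices, n_items, p_random=0.01, steps=20000, seed=1):
    """Rewiring with a fixed number of 'edges' (individual attachments):
    each step, one randomly chosen edge is reattached either to a
    uniformly random item (innovation, prob p_random) or to the item
    held by another randomly chosen edge (copying)."""
    rng = random.Random(seed)
    choices = list(choices)
    for _ in range(steps):
        i = rng.randrange(len(choices))
        if rng.random() < p_random:
            choices[i] = rng.randrange(n_items)                # innovate
        else:
            choices[i] = choices[rng.randrange(len(choices))]  # copy
    return choices
```

Starting from all-distinct attachments, copying concentrates the population onto a few popular items while innovation keeps injecting rare new ones, which is the qualitative behavior the degree-distribution solution describes.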
EIT Noise Resonance Power Broadening: a probe for coherence dynamics
NASA Astrophysics Data System (ADS)
Crescimanno, Michael; O'Leary, Shannon; Snider, Charles
2012-06-01
EIT noise correlation spectroscopy holds promise as a simple, robust method for performing high resolution spectroscopy used in devices as diverse as magnetometers and clocks. One useful feature of these noise correlation resonances is that they do not power broaden with the EIT window. We report on measurements of the eventual power broadening (at higher optical powers) of these resonances and a simple, quantitative theoretical model that relates the observed power broadening slope with processes such as two-photon detuning gradients and coherence diffusion. These processes reduce the ground state coherence relative to that of a homogeneous system, and thus the power broadening slope of the EIT noise correlation resonance may be a simple, useful probe for coherence dynamics.
A Simple Interactive Introduction to Teaching Genetic Engineering
ERIC Educational Resources Information Center
Child, Paula
2013-01-01
In the UK, at key stage 4, students aged 14-15 studying GCSE Core Science or Unit 1 of the GCSE Biology course are required to be able to describe the process of genetic engineering to produce bacteria that can produce insulin. The simple interactive introduction described in this article allows students to consider the problem, devise a model and…
Cloud fluid models of gas dynamics and star formation in galaxies
NASA Technical Reports Server (NTRS)
Struck-Marcell, Curtis; Scalo, John M.; Appleton, P. N.
1987-01-01
The large dynamic range of star formation in galaxies, and the apparently complex environmental influences involved in triggering or suppressing star formation, challenge our understanding. The key to this understanding may be the detailed study of simple physical models for the dominant nonlinear interactions in interstellar cloud systems. One such model, a generalized Oort-model cloud fluid, is described, and two simple applications of it are explored. The first of these is the relaxation of an isolated volume of cloud fluid following a disturbance. Though very idealized, this closed-box study suggests a physical mechanism for starbursts, based on the approximate commensurability of massive cloud lifetimes and cloud collisional growth times. The second application is the modeling of colliding ring galaxies. In this case, the driving processes operating on a dynamical timescale interact with the local cloud processes operating on the above timescale. The result is a variety of interesting nonequilibrium behaviors, including spatial variations of star formation that do not depend monotonically on gas density.
Ochoa, Silvia; Yoo, Ahrim; Repke, Jens-Uwe; Wozny, Günter; Yang, Dae Ryook
2007-01-01
Despite many environmental advantages of using alcohol as a fuel, there are still serious questions about its economic feasibility when compared with oil-based fuels. The bioethanol industry needs to be more competitive, and therefore, all stages of its production process must be simple, inexpensive, efficient, and "easy" to control. In recent years, there have been significant improvements in process design, such as in the purification technologies for ethanol dehydration (molecular sieves, pressure swing adsorption, pervaporation, etc.) and in genetic modifications of microbial strains. However, a lot of research effort is still required in optimization and control, where the first step is the development of suitable models of the process, which can be used as a simulated plant, as a soft sensor, or as part of the control algorithm. Thus, toward developing good, reliable, and simple but highly predictive models that can be used in the future for optimization and process control applications, in this paper an unstructured and a cybernetic model are proposed and compared for the simultaneous saccharification-fermentation process (SSF) for the production of ethanol from starch by a recombinant Saccharomyces cerevisiae strain. The cybernetic model proposed is a new one that considers the degradation of starch not only into glucose but also into dextrins (reducing sugars) and takes into account the intracellular reactions occurring inside the cells, giving a more detailed description of the process. Furthermore, an identification procedure based on the Metropolis Monte Carlo optimization method coupled with a sensitivity analysis is proposed for the identification of the model's parameters, employing experimental data reported in the literature.
The Simple Expenditure Model with Trade: How Should We Model Imports?
ERIC Educational Resources Information Center
Cherry, Robert
2001-01-01
Models imports as a fixed proportion of spending rather than as a function of total or disposable income. Predicts the initial autonomous change in domestic spending by netting out spending shifts. Presents formulation which provides a clearer understanding of how leakages influence the multiplier process. (RLH)
Equivalent circuit models for interpreting impedance perturbation spectroscopy data
NASA Astrophysics Data System (ADS)
Smith, R. Lowell
2004-07-01
As in-situ structural integrity monitoring disciplines mature, there is a growing need to process sensor/actuator data efficiently in real time. Although smaller, faster embedded processors will contribute to this, it is also important to develop straightforward, robust methods to reduce the overall computational burden for practical applications of interest. This paper addresses the use of equivalent circuit modeling techniques for inferring structure attributes monitored using impedance perturbation spectroscopy. In pioneering work about ten years ago, significant progress was made through the development of simple impedance models derived from the piezoelectric equations. Using mathematical modeling tools currently available from research in ultrasonics and impedance spectroscopy is expected to provide additional synergistic benefits. For purposes of structural health monitoring, the objective is to use impedance spectroscopy data to infer the physical condition of structures to which small piezoelectric actuators are bonded. Features of interest include stiffness changes, mass loading, and damping or mechanical losses. Equivalent circuit models are typically simple enough to facilitate the development of practical analytical models of the actuator-structure interaction. This type of parametric structure model allows raw impedance/admittance data to be interpreted optimally using standard multiple nonlinear regression analysis. One potential long-term outcome is the possibility of cataloging measured viscoelastic properties of the mechanical subsystems of interest as simple lists of attributes and their statistical uncertainties, whose evolution can be followed in time. Equivalent circuit models are well suited for addressing calibration and self-consistency issues such as temperature corrections, Poisson mode coupling, and distributed relaxation processes.
Wenchi Jin; Hong S. He; Frank R. Thompson
2016-01-01
Process-based forest ecosystem models range from simple physiological, to complex physiological, to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best prediction at plot scale with a temporal extent of less than 10 years; however, it remains largely untested whether complex models outperform the other two types of models...
Diffusion models of the flanker task: Discrete versus gradual attentional selection
White, Corey N.; Ratcliff, Roger; Starns, Jeffrey S.
2011-01-01
The present study tested diffusion models of processing in the flanker task, in which participants identify a target that is flanked by items that indicate the same (congruent) or opposite response (incongruent). Single- and dual-process flanker models were implemented in a diffusion-model framework and tested against data from experiments that manipulated response bias, speed/accuracy tradeoffs, attentional focus, and stimulus configuration. There was strong mimicry among the models, and each captured the main trends in the data for the standard conditions. However, when more complex conditions were used, a single-process spotlight model captured qualitative and quantitative patterns that the dual-process models could not. Since the single-process model provided the best balance of fit quality and parsimony, the results indicate that processing in the simple versions of the flanker task is better described by gradual rather than discrete narrowing of attention. PMID:21964663
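A single-process shrinking-spotlight trial can be simulated directly: drift is a time-varying mix of target and flanker evidence, with the flanker weight decaying as attention gradually narrows. The parameter values below (bound, drift rate, shrink rate) are illustrative, not fits from the study:

```python
import math
import random

def spotlight_trial(congruent, bound=1.0, dt=0.001, v=2.0, shrink=3.0, seed=None):
    """One trial of a single-process shrinking-spotlight diffusion:
    drift mixes target and flanker evidence, and the flanker weight
    decays as the attentional spotlight narrows over time.
    Returns (correct_response, response_time)."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < bound:
        w_flank = math.exp(-shrink * t)        # attention narrows gradually
        v_flank = v if congruent else -v       # flankers help or oppose
        drift = (1.0 - w_flank) * v + w_flank * v_flank
        x += drift * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return x > 0, t
```

Because incongruent trials start with net-negative drift that only gradually turns positive, the sketch reproduces the classic flanker pattern: congruent trials are faster and more accurate than incongruent ones.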
Computer simulations of sympatric speciation in a simple food web
NASA Astrophysics Data System (ADS)
Luz-Burgoa, K.; Dell, Tony; de Oliveira, S. Moss
2005-07-01
Galapagos finches have motivated much theoretical research aimed at understanding the processes associated with the formation of species. Inspired by them, in this paper we investigate the process of sympatric speciation in a simple food web model. To that end we modify the individual-based Penna model, which has been widely used to study aging as well as other evolutionary processes. Initially, our web consists of a primary food source and a single herbivore species that feeds on this resource. Subsequently we introduce a predator that feeds on the herbivore. In both instances we directly manipulate a basal resource distribution and monitor the changes in the populations. Sympatric speciation is obtained for the top species in both cases, and our results suggest that the speciation velocity depends on how far up the food chain the focus population is feeding. Simulations are done with three different sexual imprinting-like mechanisms, in order to discuss adaptation by natural selection.
A sliding mode control proposal for open-loop unstable processes.
Rojas, Rubén; Camacho, Oscar; González, Luis
2004-04-01
This paper presents a sliding mode controller based on a first-order-plus-dead-time model of the process for controlling open-loop unstable systems. The proposed controller has a simple and fixed structure with a set of tuning equations as a function of the desired performance. Both linear and nonlinear models were used to study the controller performance by computer simulations.
Encouraging moderation: clues from a simple model of ideological conflict.
Marvel, Seth A; Hong, Hyunsuk; Papush, Anna; Strogatz, Steven H
2012-09-14
Some of the most pivotal moments in intellectual history occur when a new ideology sweeps through a society, supplanting an established system of beliefs in a rapid revolution of thought. Yet in many cases the new ideology is as extreme as the old. Why is it then that moderate positions so rarely prevail? Here, in the context of a simple model of opinion spreading, we test seven plausible strategies for deradicalizing a society and find that only one of them significantly expands the moderate subpopulation without risking its extinction in the process.
Encouraging Moderation: Clues from a Simple Model of Ideological Conflict
NASA Astrophysics Data System (ADS)
Marvel, Seth A.; Hong, Hyunsuk; Papush, Anna; Strogatz, Steven H.
2012-09-01
Some of the most pivotal moments in intellectual history occur when a new ideology sweeps through a society, supplanting an established system of beliefs in a rapid revolution of thought. Yet in many cases the new ideology is as extreme as the old. Why is it then that moderate positions so rarely prevail? Here, in the context of a simple model of opinion spreading, we test seven plausible strategies for deradicalizing a society and find that only one of them significantly expands the moderate subpopulation without risking its extinction in the process.
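The flavor of such an opinion model can be sketched with a three-state dynamic (committed A, committed B, moderate AB). The specific update rule below is an assumption for illustration, since the abstract does not spell out the model or the seven deradicalization strategies:

```python
import random

def evolve(states, steps, seed=0):
    """Minimal opinion-spreading sketch with a moderate intermediate
    state 'AB' (update rule assumed for illustration).  A random speaker
    voices one of its opinions; a committed listener holding the
    opposite view becomes moderate, and a moderate listener adopts the
    spoken view."""
    rng = random.Random(seed)
    states = list(states)
    for _ in range(steps):
        s, l = rng.sample(range(len(states)), 2)
        spoken = states[s] if states[s] != 'AB' else rng.choice('AB')
        if states[l] == 'AB':
            states[l] = spoken          # moderates are pulled to a side
        elif states[l] != spoken:
            states[l] = 'AB'            # committed listeners moderate first
    return states
```

Even this toy version shows why the moderate subpopulation is fragile: moderates are an intermediate state that both committed camps continually convert, so sustaining them requires a deliberate strategy of the kind the paper tests.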
A simple phenomenological study of photodarkening in As2S3 glasses
NASA Astrophysics Data System (ADS)
Florea, Catalin; Busse, Lynda; Sanghera, Jasbinder; Shaw, Brandon; Aggarwal, Ishwar
2012-06-01
By using a simple photodarkening model we investigate the dynamics of photodarkening in As2S3 glasses under laser illumination. We find that, for illumination at 633 nm, the quantum efficiency of the photodarkening process is about 4% and that the absorption cross-section of the dark centers is ~2.2 times larger than that of the intrinsic structural units. The insights gained from the modeling are compared with the experimental results obtained when writing Bragg gratings using 633 nm, 594 nm and 568 nm laser light.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-09-01
This document presents a modeling and control study of the Fluid Bed Gasification (FBG) unit at the Morgantown Energy Technology Center (METC). The work is performed under contract no. DE-FG21-94MC31384. The purpose of this study is to generate a simple FBG model from process data, and then use the model to suggest an improved control scheme which will improve operation of the gasifier. The work first develops a simple linear model of the gasifier, then suggests an improved gasifier pressure and MGCR control configuration, and finally suggests the use of a multivariable control strategy for the gasifier.
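Generating a simple linear model from process data, the study's first step, can be done with a least-squares fit of a first-order difference model. The sketch below is a generic identification routine under that assumption, not METC's actual procedure:

```python
def fit_first_order(u, y):
    """Least-squares fit of a simple linear model y[k+1] = a*y[k] + b*u[k]
    from logged input/output process data.  Solves the 2x2 normal
    equations directly (illustrative; real identification would also
    handle noise, offsets, and deadtime)."""
    Syy = sum(yk * yk for yk in y[:-1])
    Suu = sum(uk * uk for uk in u[:-1])
    Syu = sum(yk * uk for yk, uk in zip(y[:-1], u[:-1]))
    Sy1y = sum(y1 * yk for y1, yk in zip(y[1:], y[:-1]))
    Sy1u = sum(y1 * uk for y1, uk in zip(y[1:], u[:-1]))
    det = Syy * Suu - Syu * Syu
    a = (Sy1y * Suu - Sy1u * Syu) / det
    b = (Sy1u * Syy - Sy1y * Syu) / det
    return a, b
```

On noiseless data generated by a true first-order plant with a persistently exciting input, the fit recovers the coefficients exactly, which is a convenient sanity check before applying the routine to real gasifier logs.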
Modelling the Active Hearing Process in Mosquitoes
NASA Astrophysics Data System (ADS)
Avitabile, Daniele; Homer, Martin; Jackson, Joe; Robert, Daniel; Champneys, Alan
2011-11-01
A simple microscopic mechanistic model of the active amplification within the Johnston's organ of the mosquito species Toxorhynchites brevipalpis is described. The model is based on the description of the antenna as a forced damped oscillator coupled to a set of active threads (ensembles of scolopidia) that provide an impulsive force when they twitch. This twitching is in turn controlled by channels that are opened and closed if the antennal oscillation reaches a critical amplitude. The model matches recent experiments both qualitatively and quantitatively. New results are presented using mathematical homogenization techniques to derive a mesoscopic model as a simple oscillator with nonlinear force and damping characteristics. It is shown how the results from this new model closely resemble those from the microscopic model as the number of threads approaches physiologically correct values.
Formulation and Testing of a Novel River Nitrification Model
The nitrification process in many river water-quality models has been approximated by a simple first-order dependency on the water-column ammonia concentration, while the benthic contribution has routinely been neglected. In this study a mathematical framework was developed for se...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korhonen, Marko; Lee, Eunghyun
2014-01-15
We treat the N-particle zero range process whose jumping rates satisfy a certain condition. This condition is required to use the Bethe ansatz, and the resulting model is the q-boson model by Sasamoto and Wadati [“Exact results for one-dimensional totally asymmetric diffusion models,” J. Phys. A 31, 6057–6071 (1998)] or the q-totally asymmetric zero range process (TAZRP) by Borodin and Corwin [“Macdonald processes,” Probab. Theory Relat. Fields (to be published)]. We find the explicit formula of the transition probability of the q-TAZRP via the Bethe ansatz. By using the transition probability we find the probability distribution of the left-most particle's position at time t. To find this probability we derive a new identity corresponding to the identity for the asymmetric simple exclusion process by Tracy and Widom [“Integral formulas for the asymmetric simple exclusion process,” Commun. Math. Phys. 279, 815–844 (2008)]. For the initial state in which all particles occupy a single site, the probability distribution of the left-most particle's position at time t is represented by the contour integral of a determinant.
Wardlow, Nathan; Polin, Chris; Villagomez-Bernabe, Balder; Currell, Fred
2015-11-01
We present a simple model for a component of the radiolytic production of any chemical species due to electron emission from irradiated nanoparticles (NPs) in a liquid environment, provided the expression for the G value for product formation is known and is reasonably well characterized by a linear dependence on beam energy. This model takes nanoparticle size, composition, density and a number of other readily available parameters (such as X-ray and electron attenuation data) as inputs and therefore allows for the ready determination of this contribution. Several approximations are used; thus, this model provides an upper limit to the yield of chemical species due to electron emission, rather than a distinct value, and this upper limit is compared with experimental results. After the general model is developed we provide details of its application to the generation of HO• through irradiation of gold nanoparticles (AuNPs), a potentially important process in nanoparticle-based enhancement of radiotherapy. This model has been constructed with the intention of making it accessible to other researchers who wish to estimate chemical yields through this process, and is shown to be applicable to NPs of single elements and mixtures. The model can be applied without the need to develop additional skills (such as using a Monte Carlo toolkit), providing a fast and straightforward method of estimating chemical yields. A simple framework for determining the HO• yield for different NP sizes at constant NP concentration and initial photon energy is also presented.
Kim, Sung-Min
2018-01-01
Cessation of dewatering following underground mine closure typically results in groundwater rebound, because mine voids and surrounding strata undergo flooding up to the levels of the decant points, such as shafts and drifts. SIMPL (Simplified groundwater program In Mine workings using the Pipe equation and Lumped parameter model), a simplified lumped parameter model-based program for predicting groundwater levels in abandoned mines, is presented herein. The program comprises a simulation engine module, 3D visualization module, and graphical user interface, which aids data processing, analysis, and visualization of results. The 3D viewer facilitates effective visualization of the predicted groundwater level rebound phenomenon together with a topographic map, mine drift, goaf, and geological properties from borehole data. SIMPL is applied to data from the Dongwon coal mine and Dalsung copper mine in Korea, with strong similarities in simulated and observed results. By considering mine workings and interpond connections, SIMPL can thus be used to effectively analyze and visualize groundwater rebound. In addition, the predictions by SIMPL can be utilized to prevent the surrounding environment (water and soil) from being polluted by acid mine drainage. PMID:29747480
Distributed run of a one-dimensional model in a regional application using SOAP-based web services
NASA Astrophysics Data System (ADS)
Smiatek, Gerhard
This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.
Markov Decision Process Measurement Model.
LaMar, Michelle M
2018-03-01
Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.
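The Markov decision process underlying such a measurement model is solved by standard dynamic programming. A minimal value-iteration sketch for a generic finite MDP (illustrative only, not the paper's estimation machinery) is:

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a small finite MDP.
    P[s][a] is a list of (next_state, probability) pairs; R[s][a] is the
    immediate reward for taking action a in state s.
    Returns the optimal state values."""
    n = len(P)
    V = [0.0] * n
    while True:
        V_new = [max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                     for a in range(len(P[s])))
                 for s in range(n)]
        if max(abs(x - y) for x, y in zip(V, V_new)) < tol:
            return V_new
        V = V_new
```

In a measurement context, the optimal values (or the policy they induce) give the likelihood of a student's observed action sequence as a function of latent competence parameters; the solver above is only the inner planning step of that pipeline.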
A simple statistical model for geomagnetic reversals
NASA Technical Reports Server (NTRS)
Constable, Catherine
1990-01-01
The diversity of paleomagnetic records of geomagnetic reversals now available indicates that the field configuration during transitions cannot be adequately described by simple zonal or standing field models. A new model described here is based on statistical properties inferred from the present field and is capable of simulating field transitions like those observed. Some insight is obtained into what one can hope to learn from paleomagnetic records. In particular, it is crucial that the effects of smoothing in the remanence acquisition process be separated from true geomagnetic field behavior. This might enable us to determine the time constants associated with the dominant field configuration during a reversal.
Dynamics of financial crises in the world trade network
NASA Astrophysics Data System (ADS)
Askari, Marziyeh; Shirazi, Homayoun; Aghababaei Samani, Keivan
2018-07-01
A simple dynamical model is introduced to simulate the spreading of financial crises in the world trade network. In this model a directed network is constructed in which a weighted and directed link indicates the export value between two countries. The weights are subject to the change by a simple dynamical rule. The process begins with a crisis, i.e. a sudden decrease in the export value of a certain country and spreads throughout the whole network. We compare our results with the real values corresponding to the global financial crisis of 2008 and show that the results of our model are in good agreement with reality.
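A toy version of such a spreading rule is easy to write down: a weighted, directed export matrix, an initial shock to one country, and rounds in which countries cut imports in proportion to lost export income. The propagation rule and parameters here are assumptions for illustration; the paper's dynamical rule may differ:

```python
def spread_crisis(W, start, shock=0.5, damping=0.5, rounds=5):
    """Crisis spreading on a weighted, directed trade network, where
    W[i][j] is the export value from country i to country j.  The
    shocked country cuts its exports; every country then cuts its
    imports (others' exports to it) in proportion to its own lost
    export income (illustrative rule)."""
    n = len(W)
    W = [row[:] for row in W]
    base = [sum(row) for row in W]                 # pre-crisis export income
    for j in range(n):                             # initial shock
        W[start][j] *= (1.0 - shock)
    for _ in range(rounds):
        cuts = []
        for i in range(n):
            lost = (base[i] - sum(W[i])) / base[i] if base[i] else 0.0
            cuts.append(damping * lost)            # import cut by country i
        for k in range(n):
            for i in range(n):
                W[k][i] *= (1.0 - cuts[i])
    return W
```

Even this crude rule reproduces the qualitative signature of 2008: a localized export shock contracts total world trade over successive rounds, with no country's weights ever going negative.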
Defining Simple nD Operations Based on Prismatic nD Objects
NASA Astrophysics Data System (ADS)
Arroyo Ohori, K.; Ledoux, H.; Stoter, J.
2016-10-01
An alternative to the traditional approaches of modelling 2D/3D space, time, scale and other parametrisable characteristics separately in GIS lies in the higher-dimensional modelling of geographic information, in which a chosen set of non-spatial characteristics, e.g. time and scale, are modelled as extra geometric dimensions perpendicular to the spatial ones, thus creating a higher-dimensional model. While higher-dimensional models are undoubtedly powerful, they are also hard to create and manipulate due to our lack of an intuitive understanding of dimensions higher than three. As a solution to this problem, this paper proposes a methodology that makes nD object generation easier by splitting the creation and manipulation process into three steps: (i) constructing simple nD objects based on nD prismatic polytopes (analogous to prisms in 3D), (ii) defining simple modification operations at the vertex level, and (iii) simple postprocessing to fix errors introduced in the model. As a use case, we show how two sets of operations can be defined and implemented in a dimension-independent manner using this methodology: the most common transformations (i.e. translation, scaling and rotation) and the collapse of objects. The nD objects generated in this manner can then be used as a basis for an nD GIS.
Vanuytrecht, Eline; Thorburn, Peter J
2017-05-01
Elevated atmospheric CO2 concentrations ([CO2]) cause direct changes in crop physiological processes (e.g. photosynthesis and stomatal conductance). To represent these CO2 responses, commonly used crop simulation models have been amended, using simple and semicomplex representations of the processes involved. Yet, there is no standard approach to, and often poor documentation of, these developments. This study used a bottom-up approach (starting with the APSIM framework as a case study) to evaluate modelled responses in a consortium of commonly used crop models and to illuminate whether variation in responses reflects true uncertainty in our understanding or arbitrary choices of model developers. Diversity in simulated CO2 responses and limited validation were common among models, both within the APSIM framework and more generally. Whereas production responses show some consistency up to moderately high [CO2] (around 700 ppm), transpiration and stomatal responses vary more widely in nature and magnitude (e.g. a decrease in stomatal conductance varying between 35% and 90% among models was found for [CO2] doubling to 700 ppm). Most notably, nitrogen responses were found to be included in few crop models despite being commonly observed and critical for the simulation of photosynthetic acclimation, crop nutritional quality and carbon allocation. We suggest harmonization and consideration of more mechanistic concepts in particular subroutines, for example for the simulation of N dynamics, as a way to improve our predictive understanding of CO2 responses and capture secondary processes. Intercomparison studies could assist in this aim, provided that they go beyond simple output comparison and explicitly identify the representations and assumptions that are causal for intermodel differences. Additionally, validation and proper documentation of the representation of CO2 responses within models should be prioritized. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Minunno, Francesco; Peltoniemi, Mikko; Launiainen, Samuli; Mäkelä, Annikki
2014-05-01
Biogeochemical models quantify the material and energy flux exchanges between biosphere, atmosphere and soil; however, there is still considerable uncertainty underpinning model structure and parametrization. The increasing availability of data from multiple sources provides useful information for model calibration and validation at different space and time scales. We calibrated a simplified ecosystem process model, PRELES, to data from multiple sites. In this work we had the following objective: to compare a multi-site calibration and site-specific calibrations, in order to test whether PRELES is a model of general applicability, and to test how well one parameterization can predict ecosystem fluxes. Model calibration and evaluation were carried out by means of the Bayesian method; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. Evapotranspiration (ET) and gross primary production (GPP) measurements collected at 9 sites in Finland and Sweden were used in the study; half of the dataset was used for model calibrations and half for the comparative analyses. 10 BCs were performed; the model was independently calibrated for each of the nine sites (site-specific calibrations) and a multi-site calibration was achieved using the data from all the sites in one BC. Then 9 BMCs were carried out, one for each site, using output from the multi-site and the site-specific versions of PRELES. Similar estimates were obtained for the parameters to which model outputs are most sensitive. Not surprisingly, the joint posterior distribution achieved through the multi-site calibration was characterized by lower uncertainty, because more data were involved in the calibration process.
No significant differences were encountered between the predictions of the multi-site and site-specific versions of PRELES, and after BMC we concluded that the model can be reliably used at regional scale to simulate the carbon and water fluxes of boreal forests. Despite being a simple model, PRELES provided good estimates of GPP and ET; only at one site did the multi-site version underestimate water fluxes. Our study implies a convergence of GPP and water processes in the boreal zone, to the extent that their plausible prediction is possible with a simple model using a global parameterization.
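The pooled, multi-site Bayesian calibration described above can be illustrated with a toy Metropolis sampler. Everything below is a hypothetical stand-in: a one-parameter linear "flux" model, synthetic data for three sites, a flat prior and a known error standard deviation, not the actual PRELES model or the study's BC setup.

```python
import math, random

random.seed(1)

# Hypothetical stand-in for a process model: flux = theta * driver.
def model(theta, x):
    return theta * x

# Synthetic "observations" from three sites sharing one true parameter.
TRUE_THETA, SIGMA = 0.8, 0.5
sites = []
for _ in range(3):
    xs = [random.uniform(1.0, 10.0) for _ in range(20)]
    ys = [model(TRUE_THETA, x) + random.gauss(0.0, SIGMA) for x in xs]
    sites.append((xs, ys))

def log_posterior(theta, data):
    # Flat prior on [0, 2]; Gaussian likelihood with known error SD.
    if not 0.0 <= theta <= 2.0:
        return -math.inf
    lp = 0.0
    for xs, ys in data:
        for x, y in zip(xs, ys):
            r = y - model(theta, x)
            lp -= 0.5 * (r / SIGMA) ** 2
    return lp

def metropolis(data, n_iter=5000, step=0.02):
    theta = 1.0
    lp = log_posterior(theta, data)
    chain = []
    for _ in range(n_iter):
        prop = theta + random.gauss(0.0, step)
        lp_prop = log_posterior(prop, data)
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain[n_iter // 2:]          # discard burn-in

# "Multi-site calibration": one chain fed by the pooled data of all sites.
chain = metropolis(sites)
post_mean = sum(chain) / len(chain)
```

With more data pooled into the likelihood the posterior tightens, which is the mechanism behind the lower multi-site uncertainty reported above.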
An Activity Model for Scientific Inquiry
ERIC Educational Resources Information Center
Harwood, William
2004-01-01
Most people are frustrated with the scientific method as currently presented in textbooks. The scientific method--a simplistic model of the scientific inquiry process--fails in most cases to provide a successful guide to how science is done. This is not shocking, really. Many simple models used in science are quite useful within their limitations. When…
The interaction of wind and water in the desertification environment
NASA Technical Reports Server (NTRS)
Jacobberger, P. A.
1987-01-01
An appropriate process/response model for the physical basis of desertification is provided by the interactions of wind and water in the desert fringe environment. Essentially, the process of desertification can be thought of as a progressive environmental transition from predominantly fluvial to aeolian processes. This is a simple but useful way of looking at desertification; in this context, desertification is morphogenetic in character. To illustrate the model, a study of drought-related changes in central Mali will serve to trace the interrelated responses of geomorphologic processes to drought conditions.
Mingguang, Zhang; Juncheng, Jiang
2008-10-30
Overpressure is one important cause of the domino effect in accidents involving chemical process equipment. Damage probability and the corresponding threshold value are two necessary parameters in quantitative risk analysis (QRA) of this phenomenon. Some simple models had been proposed, but they were based on scarce data or oversimplified assumptions. Hence, more data on damage to chemical process equipment were gathered and analyzed, a quantitative relationship between damage probability and the degree of equipment damage was built, and reliable probit models were developed for specific categories of chemical process equipment. Finally, the improvements over present models were evidenced through comparison with other models in the literature, taking into account the consistency between models and data and the depth of quantitativeness in QRA.
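A probit damage model of the kind described links damage probability to the logarithm of peak overpressure through the standard normal CDF. The sketch below uses hypothetical coefficients `a` and `b`; in practice a separate pair is fitted per equipment category from accident data, and the threshold is recovered by inverting the fitted curve.

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def damage_probability(overpressure_kpa, a=-5.0, b=1.5):
    """Probit dose-response: Pr(damage) = Phi(a + b * ln(P)).

    a and b are hypothetical placeholders; fitted values differ per
    equipment category (vessels, columns, piping, ...).
    """
    return norm_cdf(a + b * math.log(overpressure_kpa))

def threshold_overpressure(p_target=0.01, a=-5.0, b=1.5):
    # Invert the probit: find the overpressure giving Pr = p_target.
    # Phi^-1 obtained by bisection (stdlib has no inverse erf).
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p_target:
            lo = mid
        else:
            hi = mid
    z = 0.5 * (lo + hi)
    return math.exp((z - a) / b)
```

The threshold value mentioned in the abstract then corresponds to a chosen small target probability, e.g. `threshold_overpressure(0.01)` for a 1% damage probability.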
Range dynamics models now incorporate many of the mechanisms and interactions that drive species distributions. However, connectivity continues to be studied using overly simple distance-based dispersal models with little consideration of how the individual behavior of dispersin...
Repeatability Modeling for Wind-Tunnel Measurements: Results for Three Langley Facilities
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Houlden, Heather P.
2014-01-01
Data from extensive check standard tests of seven measurement processes in three NASA Langley Research Center wind tunnels are statistically analyzed to test a simple model previously presented in 2000 for characterizing short-term, within-test and across-test repeatability. The analysis is intended to support process improvement and development of uncertainty models for the measurements. The analysis suggests that the repeatability can be estimated adequately as a function of only the test section dynamic pressure over a two-orders-of-magnitude dynamic pressure range. As expected for low instrument loading, short-term coefficient repeatability is determined by the resolution of the instrument alone (air off). However, as previously pointed out, for the highest dynamic pressure range the coefficient repeatability appears to be independent of dynamic pressure, thus presenting a lower floor for the standard deviation for all three time frames. The simple repeatability model is shown to be adequate for all of the cases presented and for all three time frames.
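The two-regime behaviour described, an instrument-resolution term that shrinks with dynamic pressure plus a floor that dominates at the highest loadings, can be written as a two-term model combined in quadrature. The coefficients below are illustrative placeholders, not fitted values from the Langley data.

```python
import math

def coeff_repeatability(q, a=0.05, b=0.0004):
    """Hypothetical two-term repeatability model for a force coefficient.

    a / q : instrument-resolution term (a fixed force resolution divided
            by dynamic pressure q, so it shrinks as loading grows)
    b     : floor independent of q, dominating at the highest loadings
    The two contributions are combined in quadrature.
    """
    return math.sqrt((a / q) ** 2 + b ** 2)
```

Over a two-orders-of-magnitude sweep in q the model transitions smoothly from the resolution branch a/q to the constant floor b.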
Testing the Simple Biosphere model (SiB) using point micrometeorological and biophysical data
NASA Technical Reports Server (NTRS)
Sellers, P. J.; Dorman, J. L.
1987-01-01
The suitability of the Simple Biosphere (SiB) model of Sellers et al. (1986) for calculation of the surface fluxes for use within general circulation models is assessed. The structure of the SiB model is described, and its performance is evaluated in terms of its ability to realistically and accurately simulate biophysical processes over a number of test sites, including Ruthe (Germany), South Carolina (U.S.), and Central Wales (UK), for which point biophysical and micrometeorological data were available. The model produced simulations of the energy balances of barley, wheat, maize, and Norway Spruce sites over periods ranging from 1 to 40 days. Generally, it was found that the model reproduced time series of latent, sensible, and ground-heat fluxes and surface radiative temperature comparable with the available data.
Osterberg, T; Norinder, U
2001-01-01
A method of modelling and predicting biopharmaceutical properties using simple theoretically computed molecular descriptors and multivariate statistics has been investigated for several data sets related to solubility, IAM chromatography, permeability across Caco-2 cell monolayers, human intestinal perfusion, brain-blood partitioning, and P-glycoprotein ATPase activity. The molecular descriptors (e.g. molar refractivity, molar volume, index of refraction, surface tension and density) and logP were computed with ACD/ChemSketch and ACD/logP, respectively. Good statistical models were derived that permit simple computational prediction of biopharmaceutical properties. All final models derived had R(2) values ranging from 0.73 to 0.95 and Q(2) values ranging from 0.69 to 0.86. The RMSEP values for the external test sets ranged from 0.24 to 0.85 (log scale).
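A minimal version of this workflow, multiple linear regression on computed descriptors assessed by R(2) on the training data and leave-one-out Q(2), can be sketched with synthetic data. The two "descriptors" and the response below are simulated stand-ins, not ACD-computed properties.

```python
import random

random.seed(0)

def fit_ols(X, y):
    # Ordinary least squares via normal equations (X includes an
    # intercept column), solved by Gaussian elimination with pivoting.
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)]
         for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, p))) / A[r][r]
    return beta

def predict(beta, row):
    return sum(w * x for w, x in zip(beta, row))

def r_squared(y, yhat):
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

def q_squared_loo(X, y):
    # Q2: R2 computed on leave-one-out cross-validated predictions.
    preds = []
    for i in range(len(X)):
        Xi, yi = X[:i] + X[i + 1:], y[:i] + y[i + 1:]
        preds.append(predict(fit_ols(Xi, yi), X[i]))
    return r_squared(y, preds)

# Synthetic data: intercept column plus two hypothetical descriptors.
X = [[1.0, random.uniform(0.0, 5.0), random.uniform(20.0, 60.0)]
     for _ in range(40)]
y = [0.5 + 0.8 * row[1] - 0.02 * row[2] + random.gauss(0.0, 0.2)
     for row in X]

beta = fit_ols(X, y)
yhat = [predict(beta, row) for row in X]
```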
Computational Models of Laryngeal Aerodynamics: Potentials and Numerical Costs.
Sadeghi, Hossein; Kniesburges, Stefan; Kaltenbacher, Manfred; Schützenberger, Anne; Döllinger, Michael
2018-02-07
Human phonation is based on the interaction between tracheal airflow and laryngeal dynamics. This fluid-structure interaction is based on the energy exchange between airflow and vocal folds. Major challenges in analyzing the phonatory process in vivo are the small dimensions and the poor accessibility of the region of interest. For improved analysis of the phonatory process, numerical simulations of the airflow and the vocal fold dynamics have been suggested. Even though most of the models reproduced the phonatory process fairly well, the development of comprehensive larynx models is still a subject of research. In the context of clinical application, physiological accuracy and computational model efficiency are of great interest. In this study, a simple numerical larynx model is introduced that incorporates the laryngeal fluid flow. It is based on a synthetic experimental model with silicone vocal folds. The degree of realism was successively increased in separate computational models, and each model was simulated for 10 oscillation cycles. Results show that relevant features of the laryngeal flow field, such as glottal jet deflection, develop even when applying rather simple static models with oscillating flow rates. Including further phonatory components such as vocal fold motion, mucosal wave propagation, and ventricular folds, the simulations show phonatory key features such as intraglottal flow separation and increased flow rate in the presence of ventricular folds. The simulation time on 100 CPU cores ranged between 25 and 290 hours, currently restricting clinical application of these models. Nevertheless, the results show the high potential of numerical simulations for a better understanding of the phonatory process. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Leveraging the UML Metamodel: Expressing ORM Semantics Using a UML Profile
DOE Office of Scientific and Technical Information (OSTI.GOV)
CUYLER,DAVID S.
2000-11-01
Object Role Modeling (ORM) techniques produce a detailed domain model from the perspective of the business owner/customer. The typical process begins with a set of simple sentences reflecting facts about the business. The output of the process is a single model representing primarily the persistent information needs of the business. This type of model contains little, if any, reference to a targeted computerized implementation. It is a model of business entities, not of software classes. Through well-defined procedures, an ORM model can be transformed into a high-quality object or relational schema.
NASA Technical Reports Server (NTRS)
Cain, Bruce L.
1990-01-01
The problems of weld quality control and weld process dependability continue to be relevant issues in modern metal welding technology. These become especially important for NASA missions which may require the assembly or repair of larger orbiting platforms using automatic welding techniques. To extend present welding technologies for such applications, NASA/MSFC's Materials and Processes Lab is developing physical models of the arc welding process with the goal of providing both a basis for improved design of weld control systems, and a better understanding of how arc welding variables influence final weld properties. The physics of the plasma arc discharge is reasonably well established in terms of transport processes occurring in the arc column itself, although recourse to sophisticated numerical treatments is normally required to obtain quantitative results. Unfortunately, the rigor of these numerical computations often obscures the physics of the underlying model due to its inherent complexity. In contrast, this work has focused on a relatively simple physical model of the arc discharge to describe the gross features observed in welding arcs. Emphasis was placed on deriving analytic expressions for the voltage along the arc axis as a function of known or measurable arc parameters. The model retains the essential physics for a straight polarity, diffusion dominated free burning arc in argon, with major simplifications of collisionless sheaths and simple energy balances at the electrodes.
NASA Technical Reports Server (NTRS)
Margetan, Frank J.; Leckey, Cara A.; Barnard, Dan
2012-01-01
The size and shape of a delamination in a multi-layered structure can be estimated in various ways from an ultrasonic pulse/echo image. For example, the -6dB contours of measured response provide one simple estimate of the boundary. More sophisticated approaches can be imagined where one adjusts the proposed boundary to bring measured and predicted UT images into optimal agreement. Such approaches require suitable models of the inspection process. In this paper we explore issues pertaining to model-based size estimation for delaminations in carbon fiber reinforced laminates. In particular we consider the influence on sizing when the delamination is non-planar or partially transmitting in certain regions. Two models for predicting broadband sonic time-domain responses are considered: (1) a fast "simple" model using paraxial beam expansions and Kirchhoff and phase-screen approximations; and (2) the more exact (but computationally intensive) 3D elastodynamic finite integration technique (EFIT). Model-to-model and model-to-experiment comparisons are made for delaminations in uniaxial composite plates, and the simple model is then used to critique the -6dB rule for delamination sizing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Vinita J.; Schaefer, Charles; Kahnhauser, Henry
The National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory was shut down in September 2014. Lead bricks used as radiological shadow shielding within the accelerator were exposed to stray radiation fields during normal operations. The FLUKA code, a fully integrated Monte Carlo simulation package for the interaction and transport of particles and nuclei in matter, was used to estimate induced radioactivity in this shielding and stainless steel beam pipe from known beam losses. The FLUKA output was processed using MICROSHIELD® to estimate on-contact exposure rates with individually exposed bricks to help design and optimize the radiological survey process. This entire process can be modeled using FLUKA, but use of MICROSHIELD® as a secondary method was chosen because of the project's resource constraints. Due to the compressed schedule and lack of shielding configuration data, simple FLUKA models were developed in this paper. FLUKA activity estimates for stainless steel were compared with sampling data to validate results, which show that simple FLUKA models and irradiation geometries can be used to predict radioactivity inventories accurately in exposed materials. During decommissioning, 0.1% of the lead bricks were found to have measurable levels of induced radioactivity. Finally, post-processing with MICROSHIELD® provides an acceptable secondary method of estimating residual exposure rates.
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
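The representation theorem and the estimation idea can be caricatured in a few lines: a chaotic innovation (here a logistic map) is passed through an MA(1) filter, and the filter coefficient is recovered by scanning candidates and minimizing a crude phase-portrait "volume", counted as occupied cells in a coarse 2-D grid. The filter order, the grid measure and all parameter values are illustrative simplifications of the minimum phase-volume idea, not the paper's algorithm.

```python
def logistic_innovation(n, x0=0.123456789):
    # Chaotic innovation: iterates of the logistic map, centered.
    xs, x = [], x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        xs.append(x - 0.5)
    return xs

def ma1_filter(e, h):
    # Observed process: x_t = e_t + h * e_{t-1}.
    return [e[t] + h * e[t - 1] for t in range(1, len(e))]

def inverse_filter(x, g):
    # ê_t = x_t - g * ê_{t-1}: exact inverse when g equals h (|g| < 1).
    e_hat, prev = [], 0.0
    for xt in x:
        prev = xt - g * prev
        e_hat.append(prev)
    return e_hat

def phase_volume(e, bins=30):
    # Occupied cells in the (e_t, e_{t+1}) phase portrait: a chaotic
    # innovation concentrates on a thin curve, so the count is small.
    lo, hi = min(e), max(e)
    span = (hi - lo) or 1.0
    cells = set()
    for a, b in zip(e, e[1:]):
        i = min(int((a - lo) / span * bins), bins - 1)
        j = min(int((b - lo) / span * bins), bins - 1)
        cells.add((i, j))
    return len(cells)

TRUE_H = 0.5
x = ma1_filter(logistic_innovation(3000), TRUE_H)
candidates = [i / 10 for i in range(10)]
best_g = min(candidates, key=lambda g: phase_volume(inverse_filter(x, g)))
```

With the correct coefficient the deconvolved innovation falls back onto the logistic-map parabola and the occupied-cell count collapses; wrong coefficients smear the points into a band.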
Struijs, J; van de Meent, D; Schowanek, D; Buchholz, H; Patoux, R; Wolf, T; Austin, T; Tolls, J; van Leeuwen, K; Galay-Burgos, M
2016-09-01
The multimedia model SimpleTreat evaluates the distribution and elimination of chemicals by municipal sewage treatment plants (STPs). It is applied in the framework of REACH (Registration, Evaluation, Authorization and Restriction of Chemicals). This article describes an adaptation of this model for application to industrial sewage treatment plants (I-STPs). The intended use of the re-parametrized model is risk assessment during the manufacture and subsequent uses of chemicals, also in the framework of REACH. The results of an inquiry into the operational characteristics of industrial sewage treatment installations were used to re-parameterize the model. It appeared that one property of industrial sewage, the Biological Oxygen Demand (BOD), in combination with one parameter of the activated sludge process, the hydraulic retention time (HRT), is sufficient to characterize the treatment of industrial wastewater by the activated sludge process. The adapted model was compared to the original municipal version, SimpleTreat 4.0, by means of a sensitivity analysis. The consistency of the model output was assessed by computing the emission to water from an I-STP for a set of fictitious chemicals exhibiting the range of physico-chemical and biodegradability properties occurring in industrial wastewater. Predicted removal rates of a chemical from raw sewage are higher in industrial than in municipal STPs, as the latter typically have shorter hydraulic retention times, with diminished opportunity for elimination of the chemical by volatilization and biodegradation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Work and information processing in a solvable model of Maxwell's demon.
Mandal, Dibyendu; Jarzynski, Christopher
2012-07-17
We describe a minimal model of an autonomous Maxwell demon, a device that delivers work by rectifying thermal fluctuations while simultaneously writing information to a memory register. We solve exactly for the steady-state behavior of our model, and we construct its phase diagram. We find that our device can also act as a "Landauer eraser", using externally supplied work to remove information from the memory register. By exposing an explicit, transparent mechanism of operation, our model offers a simple paradigm for investigating the thermodynamics of information processing by small systems.
Don Quixote Pond: A Small Scale Model of Weathering and Salt Accumulation
NASA Technical Reports Server (NTRS)
Englert, P.; Bishop, J. L.; Patel, S. N.; Gibson, E. K.; Koeberl, C.
2015-01-01
The formation of Don Quixote Pond in the North Fork of Wright Valley, Antarctica, is a model for unique terrestrial calcium, chlorine, and sulfate weathering, accumulation, and distribution processes. The formation of Don Quixote Pond by simple shallow and deep groundwater contrasts with more complex models for Don Juan Pond in the South Fork of Wright Valley. Our study aims to understand the formation of Don Quixote Pond as a unique terrestrial process and as a model for Ca, Cl, and S weathering and distribution on Mars.
Mass and Environment as Drivers of Galaxy Evolution: Simplicity and its Consequences
NASA Astrophysics Data System (ADS)
Peng, Yingjie
2012-01-01
At first sight the galaxy population appears to be composed of infinitely complex different types and properties; however, when large samples of galaxies are studied, it appears that the vast majority of galaxies follow simple scaling relations and similar evolutionary modes, while the outliers represent a minority. The underlying simplicities of the interrelationships among stellar mass, star formation rate and environment are seen in SDSS and zCOSMOS. We demonstrate that the differential effects of mass and environment are completely separable out to z ~ 1, indicating that two distinct physical processes are operating, namely "mass quenching" and "environment quenching". These two simple quenching processes, plus some additional quenching due to merging, then naturally produce the Schechter form of the galaxy stellar mass functions and make quantitative predictions for the interrelationships between the Schechter parameters of star-forming and passive galaxies in different environments. All of these detailed quantitative relationships are indeed seen, to very high precision, in SDSS, lending strong support to our simple empirically based model. The model also offers qualitative explanations for the "anti-hierarchical" age-mass relation and the alpha-enrichment patterns of passive galaxies, and makes other testable predictions, such as the mass function of the population of transitory objects that are in the process of being quenched, the galaxy major- and minor-merger rates, the galaxy stellar mass assembly history, and the star formation history. Although still purely phenomenological, the model makes clear what the evolutionary characteristics of the relevant physical processes must in fact be.
USDA-ARS?s Scientific Manuscript database
Improving process-based crop models is needed to achieve high fidelity forecasts of regional energy, water, and carbon exchange. However, most state-of-the-art Land Surface Models (LSMs) assessed in the fifth phase of the Coupled Model Inter-comparison project (CMIP5) simulated crops as simple C3 or...
Slow-Slip Phenomena Represented by the One-Dimensional Burridge-Knopoff Model of Earthquakes
NASA Astrophysics Data System (ADS)
Kawamura, Hikaru; Yamamoto, Maho; Ueda, Yushi
2018-05-01
Slow-slip phenomena, including afterslips and silent earthquakes, are studied using a one-dimensional Burridge-Knopoff model that obeys the rate-and-state dependent friction law. By varying only a few model parameters, this simple model reproduces a variety of seismic slips within a single framework, including main shocks, precursory nucleation processes, afterslips, and silent earthquakes.
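A minimal sketch of a one-dimensional Burridge-Knopoff chain is given below. For simplicity the full rate-and-state friction law used in the study is replaced by a smooth, regularized velocity-weakening law, and all parameter values are illustrative; with velocity weakening, slow loading by the plate still punctuates into rapid slip events.

```python
import math

# Minimal 1-D Burridge-Knopoff chain: N blocks coupled to neighbours
# (stiffness kc) and to a plate moving at speed vp (stiffness kp).
# The full rate-and-state law is replaced by a smooth, regularized
# velocity-weakening friction; every parameter value is illustrative.
N, m, kc, kp = 10, 1.0, 3.0, 1.0
vp, f0, eps, v0 = 0.05, 1.0, 0.001, 0.1
dt, steps = 0.0005, 80_000

def friction(v):
    # Regularized velocity weakening: strength peaks near |v| ~ eps,
    # then decays as the block speeds up (the destabilizing ingredient).
    return f0 * math.tanh(v / eps) / (1.0 + abs(v) / v0)

x = [0.001 * i for i in range(N)]   # tiny offsets break the symmetry
v = [0.0] * N
max_speed = 0.0
for n in range(steps):
    plate = vp * n * dt
    acc = []
    for i in range(N):
        left = x[i - 1] if i > 0 else x[i]
        right = x[i + 1] if i < N - 1 else x[i]
        f = kc * (left - 2.0 * x[i] + right) + kp * (plate - x[i]) \
            - friction(v[i])
        acc.append(f / m)
    for i in range(N):               # semi-implicit Euler update
        v[i] += dt * acc[i]
        x[i] += dt * v[i]
        max_speed = max(max_speed, abs(v[i]))
```

Blocks creep far more slowly than the plate while "stuck", then slip at speeds orders of magnitude above the loading rate once the pulling spring exceeds the friction peak, which is the stick-slip signature the model family is built to reproduce.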
AN ENVIRONMENTAL SIMULATION MODEL FOR TRANSPORT AND FATE OF MERCURY IN SMALL RURAL CATCHMENTS
The development of an extensively modified version of the environmental model GLEAMS to simulate fate and transport of mercury in small catchments is presented. Methods for parameter estimation are proposed and in some cases simple relationships for mercury processes are derived....
NASA Astrophysics Data System (ADS)
Fast, J. D.; Ma, P.; Easter, R. C.; Liu, X.; Zaveri, R. A.; Rasch, P.
2012-12-01
Predictions of aerosol radiative forcing in climate models still contain large uncertainties, resulting from a poor understanding of certain aerosol processes, the level of complexity of aerosol processes represented in models, and the limited ability of models to account for sub-grid scale variability of aerosols and of the processes affecting them. In addition, comparing the performance and computational efficiency of new aerosol process modules used in various studies is problematic because different studies often employ different grid configurations, meteorology, trace gas chemistry, and emissions that affect the temporal and spatial evolution of aerosols. To address this issue, we have developed an Aerosol Modeling Testbed (AMT) to systematically and objectively evaluate aerosol process modules. The AMT consists of the modular Weather Research and Forecasting (WRF) model, a series of testbed cases for which extensive in situ and remote sensing measurements of meteorological, trace gas, and aerosol properties are available, and a suite of tools to evaluate the performance of meteorological, chemical, and aerosol process modules. WRF contains various parameterizations of meteorological, chemical, and aerosol processes and includes interactive aerosol-cloud-radiation treatments similar to those employed by climate models. In addition, the physics suite from a global climate model, the Community Atmosphere Model version 5 (CAM5), has also been ported to WRF so that these parameterizations can be tested at various spatial scales and compared directly with field campaign data and with other parameterizations commonly used by the mesoscale modeling community.
In this study, we evaluate simple and complex treatments of the aerosol size distribution and secondary organic aerosols using the AMT and measurements collected during three field campaigns: the Megacities Initiative Local and Global Observations (MILAGRO) campaign conducted in the vicinity of Mexico City during March 2006, the Carbonaceous Aerosol and Radiative Effects Study (CARES) conducted in the vicinity of Sacramento California during June 2010, and the California Nexus (CalNex) campaign conducted in southern California during May and June of 2010. For the aerosol size distribution, we compare the predictions from the GOCART bulk aerosol model, the MADE/SORGAM modal aerosol model, the Modal Aerosol Model (MAM) employed by CAM5, and the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) which uses a sectional representation. For secondary organic aerosols, we compare simple fixed mass yield approaches with the numerically complex volatility basis set approach. All simulations employ the same emissions, meteorology, trace gas chemistry (except for that involving condensable organic species), and initial and boundary conditions. Performance metrics from the AMT are used to assess performance in terms of simulated mass, composition, size distribution (except for GOCART), and aerosol optical properties in relation to computational expense. In addition to statistical measures, qualitative differences among the different aerosol models over the computational domain are presented to examine variations in how aerosols age among the aerosol models.
Self-organizing biopsychosocial dynamics and the patient-healer relationship.
Pincus, David
2012-01-01
The patient-healer relationship is an area of increasing interest for complementary and alternative medicine (CAM) researchers. This focus on the interpersonal context of treatment is not surprising, as dismantling studies, clinical trials and other linear research designs continually point toward the critical role of context and the broadband biopsychosocial nature of therapeutic responses to CAM. Unfortunately, the same traditional research models and methods that fail to find simple and specific treatment-outcome relations are similarly failing to find simple and specific mechanisms to explain how interpersonal processes influence patient outcomes. This paper presents an overview of some of the key models and methods from nonlinear dynamical systems that are better equipped for empirical testing of CAM outcomes on broadband biopsychosocial processes. Suggestions are made for CAM researchers to assist in modeling the key process dynamics interacting across biopsychosocial scales: empathy, intra-psychic conflict, physiological arousal, and leukocyte telomerase activity. Finally, some speculations are made regarding the possibility of deeper cross-scale information exchange involving quantum temporal nonlocality. Copyright © 2012 S. Karger AG, Basel.
Measurement-based reliability/performability models
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen
1987-01-01
Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
On the (In)Validity of Tests of Simple Mediation: Threats and Solutions
Pek, Jolynn; Hoyle, Rick H.
2015-01-01
Mediation analysis is a popular framework for identifying underlying mechanisms in social psychology. In the context of simple mediation, we review and discuss the implications of three facets of mediation analysis: (a) conceptualization of the relations between the variables, (b) statistical approaches, and (c) relevant elements of design. We also highlight the issue of equivalent models that are inherent in simple mediation. The extent to which results are meaningful stems directly from choices regarding these three facets of mediation analysis. We conclude by discussing how mediation analysis can be better applied to examine causal processes, highlight the limits of simple mediation, and make recommendations for better practice. PMID:26985234
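The product-of-coefficients estimate of the indirect effect in simple mediation (X -> M -> Y), together with its Sobel standard error, can be sketched on synthetic data. The path values, sample size and noise level below are arbitrary choices for illustration.

```python
import math, random

random.seed(7)

# Synthetic data for the simple mediation model X -> M -> Y.
# True paths (hypothetical): a = 0.5, b = 0.7, direct effect c' = 0.2.
n = 500
X = [random.gauss(0.0, 1.0) for _ in range(n)]
M = [0.5 * x + random.gauss(0.0, 1.0) for x in X]
Y = [0.7 * m + 0.2 * x + random.gauss(0.0, 1.0) for x, m in zip(X, M)]

def mean(v):
    return sum(v) / len(v)

def simple_ols(x, y):
    # Slope and its standard error for y = intercept + slope * x.
    mx, my = mean(x), mean(y)
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sxx
    resid = [b - my - slope * (a - mx) for a, b in zip(x, y)]
    s2 = sum(r * r for r in resid) / (len(x) - 2)
    return slope, math.sqrt(s2 / sxx)

def two_predictor_ols(x1, x2, y):
    # Coefficient of x1 (and its SE) controlling for x2, via centered
    # normal equations for the two-predictor regression.
    c1 = [a - mean(x1) for a in x1]
    c2 = [a - mean(x2) for a in x2]
    cy = [a - mean(y) for a in y]
    s11 = sum(a * a for a in c1)
    s22 = sum(a * a for a in c2)
    s12 = sum(a * b for a, b in zip(c1, c2))
    s1y = sum(a * b for a, b in zip(c1, cy))
    s2y = sum(a * b for a, b in zip(c2, cy))
    det = s11 * s22 - s12 ** 2
    b1 = (s22 * s1y - s12 * s2y) / det
    b2 = (s11 * s2y - s12 * s1y) / det
    resid = [yy - b1 * a - b2 * b for yy, a, b in zip(cy, c1, c2)]
    s2_err = sum(r * r for r in resid) / (len(y) - 3)
    return b1, math.sqrt(s2_err * s22 / det)

a_hat, se_a = simple_ols(X, M)             # a path: M ~ X
b_hat, se_b = two_predictor_ols(M, X, Y)   # b path: Y ~ M + X
indirect = a_hat * b_hat                   # product of coefficients
sobel_se = math.sqrt(b_hat**2 * se_a**2 + a_hat**2 * se_b**2)
z = indirect / sobel_se                    # Sobel test statistic
```

Note that the Sobel test is only one option; the equivalent-models concern raised above is untouched by any of these statistics, since a model with the roles of M and Y exchanged fits the same covariances.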
Valuation of exotic options in the framework of Levy processes
NASA Astrophysics Data System (ADS)
Milev, Mariyan; Georgieva, Svetla; Markovska, Veneta
2013-12-01
In this paper we explore a straightforward procedure to price derivatives using the Monte Carlo approach when the underlying process is a jump-diffusion. We have compared the Black-Scholes model with one of its extensions, the Merton model. The latter is better at capturing market phenomena and is comparable to stochastic volatility models in terms of pricing accuracy. We present simulations of asset paths and the pricing of barrier options both for geometric Brownian motion and for exponential Levy processes, the Merton model being the concrete case considered. A desired level of accuracy is obtained with simple computer operations in MATLAB with efficient computational time.
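A Monte Carlo sketch of such a procedure, pricing a down-and-out barrier call under the Merton jump-diffusion, is given below in Python rather than MATLAB. The contract and model parameters are illustrative; the compensator term lam*kappa in the drift keeps the discounted asset price a martingale under the risk-neutral measure, and monitoring the barrier only at grid points slightly overprices the knock-out option relative to continuous monitoring.

```python
import math, random

random.seed(42)

# Down-and-out barrier call under Merton jump-diffusion: GBM plus
# compound-Poisson lognormal jumps. All parameter values are illustrative.
S0, K, B, T, r, sigma = 100.0, 100.0, 85.0, 1.0, 0.05, 0.2
lam, mu_j, sig_j = 0.5, -0.1, 0.15      # jump intensity and jump-size law
n_steps, n_paths = 100, 5000
dt = T / n_steps

kappa = math.exp(mu_j + 0.5 * sig_j ** 2) - 1.0    # E[e^J] - 1
drift = (r - 0.5 * sigma ** 2 - lam * kappa) * dt  # risk-neutral drift

def poisson(mean):
    # Inverse-CDF sampling of a Poisson count (mean is small here).
    p = math.exp(-mean)
    u, k, cum = random.random(), 0, p
    while u > cum:
        k += 1
        p *= mean / k
        cum += p
    return k

disc = math.exp(-r * T)
sum_do, sum_vanilla = 0.0, 0.0
for _ in range(n_paths):
    s, knocked_out = S0, False
    for _ in range(n_steps):
        jump = sum(random.gauss(mu_j, sig_j)
                   for _ in range(poisson(lam * dt)))
        s *= math.exp(drift + sigma * math.sqrt(dt)
                      * random.gauss(0.0, 1.0) + jump)
        if s <= B:
            knocked_out = True        # barrier breached on the grid
    payoff = max(s - K, 0.0)
    sum_vanilla += payoff             # same paths price the vanilla call
    if not knocked_out:
        sum_do += payoff

price_vanilla = disc * sum_vanilla / n_paths
price_do = disc * sum_do / n_paths
```

Pricing the vanilla call on the same paths gives a free sanity check: the knock-out price can never exceed it.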
A Numerical and Experimental Study of Damage Growth in a Composite Laminate
NASA Technical Reports Server (NTRS)
McElroy, Mark; Ratcliffe, James; Czabaj, Michael; Wang, John; Yuan, Fuh-Gwo
2014-01-01
The present study has three goals: (1) perform an experiment where a simple laminate damage process can be characterized in high detail; (2) evaluate the performance of existing commercially available laminate damage simulation tools by modeling the experiment; (3) observe and understand the underlying physics of damage in a composite honeycomb sandwich structure subjected to low-velocity impact. A quasi-static indentation experiment has been devised to provide detailed information about a simple mixed-mode damage growth process. The test specimens consist of an aluminum honeycomb core with a cross-ply laminate facesheet supported on a stiff uniform surface. When the sample is subjected to an indentation load, the honeycomb core provides support to the facesheet resulting in a gradual and stable damage growth process in the skin. This enables real time observation as a matrix crack forms, propagates through a ply, and then causes a delamination. Finite element analyses were conducted in ABAQUS/Explicit(TradeMark) 6.13 that used continuum and cohesive modeling techniques to simulate facesheet damage and a geometric and material nonlinear model to simulate core crushing. The high fidelity of the experimental data allows a detailed investigation and discussion of the accuracy of each numerical modeling approach.
Using energy budgets to combine ecology and toxicology in a mammalian sentinel species
NASA Astrophysics Data System (ADS)
Desforges, Jean-Pierre W.; Sonne, Christian; Dietz, Rune
2017-04-01
Process-driven modelling approaches can resolve many of the shortcomings of traditional descriptive and non-mechanistic toxicology. We developed a simple dynamic energy budget (DEB) model for the mink (Mustela vison), a sentinel species in mammalian toxicology, which coupled animal physiology, ecology and toxicology, in order to mechanistically investigate the accumulation and adverse effects of lifelong dietary exposure to persistent environmental toxicants, most notably polychlorinated biphenyls (PCBs). Our novel mammalian DEB model accurately predicted, based on energy allocations to the interconnected metabolic processes of growth, development, maintenance and reproduction, lifelong patterns in mink growth, reproductive performance and dietary accumulation of PCBs as reported in the literature. Our model results were consistent with empirical data from captive and free-ranging studies in mink and other wildlife and suggest that PCB exposure can have significant population-level impacts resulting from targeted effects on fetal toxicity, kit mortality and growth and development. Our approach provides a simple and cross-species framework to explore the mechanistic interactions of physiological processes and ecotoxicology, thus allowing for a deeper understanding and interpretation of stressor-induced adverse effects at all levels of biological organization.
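The coupling of an energy budget to toxicant accumulation can be caricatured with a kappa-rule allocation scheme plus first-order toxicokinetics. The structure and every parameter below are hypothetical illustrations, not the published mink DEB model: a fixed fraction kappa of assimilation goes to maintenance plus growth, the remainder to reproduction, while dietary intake of a persistent toxicant is balanced against slow elimination and growth dilution.

```python
# Toy kappa-rule energy budget with first-order dietary toxicant uptake.
# Illustrative only: structure and parameters are NOT those of the mink
# DEB model in the abstract.
kappa = 0.8        # fraction of assimilated energy to the soma
assim = 10.0       # assimilation rate (energy / day)
maint_coef = 0.2   # maintenance cost per unit structure per day
growth_cost = 2.0  # energy needed per unit of new structure
uptake = 0.05      # toxicant intake via food (ug / day)
elim = 0.01        # first-order elimination rate (1 / day)

W, R, Q = 1.0, 0.0, 0.0   # structure, reproduction buffer, body burden
dt = 1.0
for day in range(1000):
    soma = kappa * assim
    maint = maint_coef * W
    growth = max(soma - maint, 0.0) / growth_cost
    W += dt * growth                       # growth ceiling where soma = maint
    R += dt * (1.0 - kappa) * assim        # energy routed to reproduction
    Q += dt * (uptake - elim * Q)          # toxicokinetics of body burden
conc = Q / W                               # growth dilutes the concentration
```

Asymptotic structure is kappa*assim/maint_coef (here 40), and body burden tends to uptake/elim (here 5), so the per-mass concentration stabilizes only once growth has ceased, one reason lifelong simulation matters for burden patterns.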
Advanced Method to Estimate Fuel Slosh Simulation Parameters
NASA Technical Reports Server (NTRS)
Schlee, Keith; Gangadharan, Sathya; Ristow, James; Sudermann, James; Walker, Charles; Hubert, Carl
2005-01-01
The nutation (wobble) of a spinning spacecraft in the presence of energy dissipation is a well-known problem in dynamics and is of particular concern for space missions. The nutation of a spacecraft spinning about its minor axis typically grows exponentially, and the rate of growth is characterized by the Nutation Time Constant (NTC). For launch vehicles using spin-stabilized upper stages, fuel slosh in the spacecraft propellant tanks is usually the primary source of energy dissipation. For analytical prediction of the NTC, this fuel slosh is commonly modeled using simple mechanical analogies such as pendulums or rigid rotors coupled to the spacecraft. Identifying model parameter values which adequately represent the sloshing dynamics is the most important step in obtaining an accurate NTC estimate. Analytic determination of the slosh model parameters has met with mixed success and is made even more difficult by the introduction of propellant management devices and elastomeric diaphragms. By subjecting full-sized fuel tanks with actual flight fuel loads to motion similar to that experienced in flight, and measuring the forces experienced by the tanks, these parameters can be determined experimentally. Currently, the identification of the model parameters is a laborious trial-and-error process in which the equations of motion for the mechanical analog are hand-derived, evaluated, and their results are compared with the experimental results. The proposed research is an effort to automate the process of identifying the parameters of the slosh model using a MATLAB/SimMechanics-based computer simulation of the experimental setup. Different parameter estimation and optimization approaches are evaluated and compared in order to arrive at a reliable and effective parameter identification process. To evaluate each parameter identification approach, a simple one-degree-of-freedom pendulum experiment is constructed and motion is induced using an electric motor.
By applying the estimation approach to a simple, accurately modeled system, its effectiveness and accuracy can be evaluated. The same experimental setup can then be used with fluid-filled tanks to further evaluate the effectiveness of the process. Ultimately, the proven process can be applied to the full-sized spinning experimental setup to quickly and accurately determine the slosh model parameters for a particular spacecraft mission. Automating the parameter identification process will save time, allow more changes to be made to proposed designs, and lower the cost in the initial design stages.
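The pendulum benchmark above amounts to tuning a model parameter until the simulated response matches a measurement. A minimal sketch of that loop, assuming a small-angle pendulum and an invented bisection-based identifier (not the study's MATLAB/SimMechanics estimators):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2 (assumed)

def period(length):
    """Small-angle period of a simple pendulum, T = 2*pi*sqrt(L/g)."""
    return 2.0 * math.pi * math.sqrt(length / G)

def identify_length(measured_period, lo=0.01, hi=10.0, tol=1e-9):
    """Recover the pendulum length from a measured period by bisection:
    the period grows monotonically with length, so we tighten the
    bracket until the model matches the 'experiment'."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if period(mid) < measured_period:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Synthetic "measurement" from a 1 m pendulum.
t_meas = period(1.0)
l_est = identify_length(t_meas)
```

In the study's setting, the "measurement" would be force data from the shaken tank, and the identified quantities the slosh pendulum's length, mass and damping rather than a single length.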
Analytically tractable climate-carbon cycle feedbacks under 21st century anthropogenic forcing
NASA Astrophysics Data System (ADS)
Lade, Steven J.; Donges, Jonathan F.; Fetzer, Ingo; Anderies, John M.; Beer, Christian; Cornell, Sarah E.; Gasser, Thomas; Norberg, Jon; Richardson, Katherine; Rockström, Johan; Steffen, Will
2018-05-01
Changes to climate-carbon cycle feedbacks may significantly affect the Earth system's response to greenhouse gas emissions. These feedbacks are usually analysed from numerical output of complex and arguably opaque Earth system models. Here, we construct a stylised global climate-carbon cycle model, test its output against comprehensive Earth system models, and investigate the strengths of its climate-carbon cycle feedbacks analytically. The analytical expressions we obtain aid understanding of carbon cycle feedbacks and the operation of the carbon cycle. Specific results include that different feedback formalisms measure fundamentally the same climate-carbon cycle processes; temperature dependence of the solubility pump, biological pump, and CO2 solubility all contribute approximately equally to the ocean climate-carbon feedback; and concentration-carbon feedbacks may be more sensitive to future climate change than climate-carbon feedbacks. Simple models such as that developed here also provide workbenches
for simple but mechanistically based explorations of Earth system processes, such as interactions and feedbacks between the planetary boundaries, that are currently too uncertain to be included in comprehensive Earth system models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hooper, R.P.; West, C.T.; Peters, N.E.
1990-08-01
The authors constructed a simple, process-oriented model, called the Alpine Lake Forecaster (ALF), using data collected during the Integrated Watershed Study at Emerald Lake, Sequoia National Park, CA. The model was designed to answer questions concerning the impact of acid deposition on high-elevation watersheds in the Sierra Nevada, CA. ALF is able to capture the basic solute patterns in stream water during snowmelt in this alpine catchment, where ground water is a minor contributor to stream flow. It includes an empirical representation of primary mineral weathering as the only alkalinity-generating mechanism. Hydrologic and chemical data from a heavy snow year were used to calibrate the model. Watershed processes during a light snow year appeared to differ from those of the calibration year. The model forecasts that stream and lake water are most likely to experience a loss of ANC and a depression in pH during spring rain storms that occur during the snowmelt dilution phase.
Kuan, Hui-Shun; Betterton, Meredith D.
2016-01-01
Motor protein motion on biopolymers can be described by models related to the totally asymmetric simple exclusion process (TASEP). Inspired by experiments on the motion of kinesin-4 motors on antiparallel microtubule overlaps, we analyze a model incorporating the TASEP on two antiparallel lanes with binding kinetics and lane switching. We determine the steady-state motor density profiles using phase-plane analysis of the steady-state mean field equations and kinetic Monte Carlo simulations. We focus on the density-density phase plane, where we find an analytic solution to the mean field model. By studying the phase-space flows, we determine the model’s fixed points and their changes with parameters. Phases previously identified for the single-lane model occur for low switching rate between lanes. We predict a multiple coexistence phase due to additional fixed points that appear as the switching rate increases: switching moves motors from the higher-density to the lower-density lane, causing local jamming and creating multiple domain walls. We determine the phase diagram of the model for both symmetric and general boundary conditions. PMID:27627345
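The two-lane dynamics can be sketched with a kinetic Monte Carlo loop. The rates, lattice size and boundary injection/extraction probabilities below are illustrative assumptions, not the paper's parameter values, and the binding kinetics are reduced to a bare lane-switching move:

```python
import random

def step(lanes, alpha, beta, omega, rng):
    """One random-sequential sweep of a two-lane antiparallel TASEP.
    Lane 0 hops right, lane 1 hops left; a particle switches lanes with
    probability omega when the opposite site is empty. Open boundaries:
    entry with probability alpha, exit with probability beta."""
    L = len(lanes[0])
    for _ in range(2 * L):
        lane = rng.randrange(2)
        i = rng.randrange(L)
        row, other = lanes[lane], lanes[1 - lane]
        if row[i] and not other[i] and rng.random() < omega:
            row[i], other[i] = 0, 1  # lane switch
        elif lane == 0:  # rightward lane
            if i == 0 and not row[0] and rng.random() < alpha:
                row[0] = 1  # injection
            elif i == L - 1 and row[i] and rng.random() < beta:
                row[i] = 0  # extraction
            elif i < L - 1 and row[i] and not row[i + 1]:
                row[i], row[i + 1] = 0, 1  # hop right
        else:  # leftward lane
            if i == L - 1 and not row[i] and rng.random() < alpha:
                row[i] = 1  # injection
            elif i == 0 and row[0] and rng.random() < beta:
                row[0] = 0  # extraction
            elif i > 0 and row[i] and not row[i - 1]:
                row[i], row[i - 1] = 0, 1  # hop left

def density_profiles(L=60, sweeps=4000, alpha=0.2, beta=0.6, omega=0.02, seed=7):
    """Time-averaged occupancy of each site over the second half of the run."""
    rng = random.Random(seed)
    lanes = [[0] * L, [0] * L]
    totals = [[0.0] * L, [0.0] * L]
    for s in range(sweeps):
        step(lanes, alpha, beta, omega, rng)
        if s >= sweeps // 2:
            for k in range(2):
                for i in range(L):
                    totals[k][i] += lanes[k][i]
    n = sweeps - sweeps // 2
    return [[t / n for t in row] for row in totals]

profiles = density_profiles()
```

Averaging the occupancy after a burn-in gives the steady-state density profiles whose phase-plane structure the paper analyzes; raising omega is what moves motors between lanes and produces the multiple-domain-wall phases.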
Comparisons of CTH simulations with measured wave profiles for simple flyer plate experiments
Thomas, S. A.; Veeser, L. R.; Turley, W. D.; ...
2016-06-13
We conducted detailed 2-dimensional hydrodynamics calculations to assess the quality of simulations commonly used to design and analyze simple shock compression experiments. Such simple shock experiments also contain data in which the dynamic properties of materials are integrated together. We wished to assess how well the chosen hydrodynamic code could capture both the simple parts of the experiments and the integral parts. We began with very simple shock experiments, in which we examined the effects of the equation of state and the compressional and tensile strength models. We increased complexity to include spallation in copper and iron and a solid-solid phase transformation in iron to assess the quality of the damage and phase transformation simulations. For experiments with a window, the responses of both the sample and the window are integrated together, providing a good test of the material models. While CTH physics models are not perfect and do not reproduce all experimental details well, we find the models are useful; the simulations are adequate for understanding much of the dynamic process and for planning experiments. However, higher complexity in the simulations, such as adding in spall, led to greater differences between simulation and experiment. Lastly, this comparison of simulation to experiment may help guide future development of hydrodynamics codes so that they better capture the underlying physics.
Comparing fire spread algorithms using equivalence testing and neutral landscape models
Brian R. Miranda; Brian R. Sturtevant; Jian Yang; Eric J. Gustafson
2009-01-01
We demonstrate a method to evaluate the degree to which a meta-model approximates spatial disturbance processes represented by a more detailed model across a range of landscape conditions, using neutral landscapes and equivalence testing. We illustrate this approach by comparing burn patterns produced by a relatively simple fire spread algorithm with those generated by...
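Equivalence testing inverts the usual null hypothesis: instead of asking whether the meta-model and the detailed model differ, one asks whether their outputs agree to within a practically negligible margin. A minimal two one-sided tests (TOST) sketch using a normal approximation (the study's actual test statistic and margins may differ):

```python
from statistics import NormalDist, mean, stdev

def tost_equivalence(x, y, delta, alpha=0.05):
    """Two one-sided tests (TOST): declare the two samples' means
    equivalent if both one-sided nulls (difference <= -delta, and
    difference >= +delta) are rejected. Uses a z approximation; a
    t-based version would be appropriate for small samples."""
    diff = mean(x) - mean(y)
    se = (stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y)) ** 0.5
    nd = NormalDist()
    p_lower = 1.0 - nd.cdf((diff + delta) / se)  # H0: diff <= -delta
    p_upper = nd.cdf((diff - delta) / se)        # H0: diff >= +delta
    return max(p_lower, p_upper) < alpha
```

In the paper's setting, x and y would be burn-pattern metrics from the fire spread algorithm and the detailed fire model, computed across the neutral landscapes.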
Predicting bending stiffness of randomly oriented hybrid panels
Laura Moya; William T.Y. Tze; Jerrold E. Winandy
2010-01-01
This study was conducted to develop a simple model to predict the bending modulus of elasticity (MOE) of randomly oriented hybrid panels. The modeling process involved three modules: the behavior of a single layer was computed by applying micromechanics equations, layer properties were adjusted for densification effects, and the entire panel was modeled as a three-...
Dataflow models for fault-tolerant control systems
NASA Technical Reports Server (NTRS)
Papadopoulos, G. M.
1984-01-01
Dataflow concepts are used to generate a unified hardware/software model of redundant physical systems which are prone to faults. Basic results in input congruence and synchronization are shown to reduce to a simple model of data exchanges between processing sites. Procedures are given for the construction of congruence schemata, the distinguishing features of any correctly designed redundant system.
Word of Mouth: An Agent-based Approach to Predictability of Stock Prices
NASA Astrophysics Data System (ADS)
Shimokawa, Tetsuya; Misawa, Tadanobu; Watanabe, Kyoko
This paper addresses how communication processes among investors affect stock price formation, and especially the emergence of predictability, in financial markets. An agent-based model, called the word-of-mouth model, is introduced for analyzing the problem. This model provides a simple but sufficiently versatile description of the informational diffusion process and offers a lucid explanation for the predictability of small-sized stocks, a stylized fact in financial markets that traditional models have difficulty resolving. Our model also provides a rigorous examination of the hypothesis of underreaction to informational shocks.
Heat pipe life and processing study
NASA Technical Reports Server (NTRS)
Antoniuk, D.; Luedke, E. E.
1979-01-01
The merit of adding water to the reflux charge in chemically and solvent-cleaned aluminum/slab wick/ammonia heat pipes was evaluated. The effect of gas on the performance of three heat pipe thermal control systems was found to be significant in simple heat pipes, and less significant in a modified simple heat pipe model with a short wickless pipe section. Use of gas data for the worst and best heat pipes of the matrix in a variable conductance heat pipe model showed a 3 °C increase in the source temperature at the full-on condition after 20 and 246 years, respectively.
Stöckl, Anna L; Kihlström, Klara; Chandler, Steven; Sponberg, Simon
2017-04-05
Flight control in insects is heavily dependent on vision. Thus, in dim light, the decreased reliability of visual signal detection also has consequences for insect flight. We have an emerging understanding of the neural mechanisms that different species employ to adapt the visual system to low light. However, much less explored are comparative analyses of how low light affects the flight behaviour of insect species, and the corresponding links between physiological adaptations and behaviour. We investigated whether the flower tracking behaviour of three hawkmoth species with different diel activity patterns revealed luminance-dependent adaptations, using a system identification approach. We found clear luminance-dependent differences in flower tracking in all three species, which were explained by a simple luminance-dependent delay model that generalized across species. We discuss physiological and anatomical explanations for the variance in tracking responses that could not be explained by such simple models. Differences between species could not be explained by the simple delay model alone. However, in several cases, they could be explained through the addition of a second model parameter, a simple scaling term, that captures the responsiveness of each species to flower movements. Thus, we demonstrate here that much of the variance in the luminance-dependent flower tracking responses of hawkmoths with different diel activity patterns can be captured by simple models of neural processing.
NASA Astrophysics Data System (ADS)
Timpe, Nathalie F.; Stuch, Julia; Scholl, Marcus; Russek, Ulrich A.
2016-03-01
This contribution presents a phenomenological, analytical model for laser welding of polymers which is suited for a quick process quality estimation by the practitioner. Besides material properties of the polymer and processing parameters such as welding pressure, feed rate and laser power, the model is based on a simple, few-parameter description of the size and shape of the laser power density distribution (PDD) in the processing zone. The model allows an estimation of the weld seam tensile strength. It is based on energy balance considerations within a thin sheet with the thickness of the optical penetration depth on the surface of the absorbing welding partner. The joining process itself is modelled by a phenomenological approach. The model reproduces the experimentally known process windows for the main process parameters correctly. Using the parameters describing the shape of the laser PDD, the critical dependence of the process windows on the PDD shape is predicted and compared with experiments. The adaption of the model to other laser manufacturing processes where the PDD influence can be modelled comparably is discussed.
Verification and Validation of Residual Stresses in Bi-Material Composite Rings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Stacy Michelle; Hanson, Alexander Anthony; Briggs, Timothy
Process-induced residual stresses commonly occur in composite structures composed of dissimilar materials. These residual stresses form due to differences in the composite materials’ coefficients of thermal expansion and the shrinkage upon cure exhibited by polymer matrix materials. Depending upon the specific geometric details of the composite structure and the materials’ curing parameters, it is possible that these residual stresses could result in interlaminar delamination or fracture within the composite. Therefore, the consideration of potential residual stresses is important when designing composite parts and their manufacturing processes. However, the experimental determination of residual stresses in prototype parts can be time- and cost-prohibitive. As an alternative to physical measurement, computational tools can be used to quantify potential residual stresses in composite prototype parts. Therefore, the objectives of the presented work are to demonstrate a simplistic method for simulating residual stresses in composite parts, as well as the potential value of sensitivity and uncertainty quantification techniques during analyses for which material property parameters are unknown. Specifically, a simplified residual stress modeling approach, which accounts for coefficient of thermal expansion mismatch and polymer shrinkage, is implemented within the Sandia National Laboratories-developed SIERRA/SolidMechanics code. Concurrent with the model development, two simple, bi-material structures composed of a carbon fiber/epoxy composite and aluminum, a flat plate and a cylinder, are fabricated and the residual stresses are quantified through the measurement of deformation.
Then, in the process of validating the developed modeling approach with the experimental residual stress data, manufacturing process simulations of the two simple structures are developed and undergo a formal verification and validation process, including a mesh convergence study, sensitivity analysis, and uncertainty quantification. The simulations’ final results show adequate agreement with the experimental measurements, indicating the validity of the simple modeling approach, as well as the necessity of including material parameter uncertainty in the final residual stress predictions.
The Attentional Drift Diffusion Model of Simple Perceptual Decision-Making.
Tavares, Gabriela; Perona, Pietro; Rangel, Antonio
2017-01-01
Perceptual decisions requiring the comparison of spatially distributed stimuli that are fixated sequentially might be influenced by fluctuations in visual attention. We used two psychophysical tasks with human subjects to investigate the extent to which visual attention influences simple perceptual choices, and to test the extent to which the attentional Drift Diffusion Model (aDDM) provides a good computational description of how attention affects the underlying decision processes. We find evidence for sizable attentional choice biases and that the aDDM provides a reasonable quantitative description of the relationship between fluctuations in visual attention, choices and reaction times. We also find that exogenous manipulations of attention induce choice biases consistent with the predictions of the model.
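The aDDM's core assumption is that evidence accumulation is biased toward the currently fixated stimulus, with the unattended one discounted. A stochastic sketch with illustrative parameter values (not the fits reported in the paper) and fixations that simply alternate rather than following measured gaze:

```python
import random

def addm_trial(v_left, v_right, theta=0.3, d=0.002, sigma=0.02,
               threshold=1.0, fix_len=300, rng=None):
    """One trial of an attention-weighted drift diffusion sketch: while
    one item is fixated, the other item's value is discounted by theta.
    Fixations alternate every fix_len steps; all values illustrative."""
    rng = rng or random.Random()
    rdv, t = 0.0, 0
    look_left = rng.random() < 0.5  # random first fixation
    while abs(rdv) < threshold:
        if t > 0 and t % fix_len == 0:
            look_left = not look_left  # switch fixation
        if look_left:
            drift = d * (v_left - theta * v_right)
        else:
            drift = d * (theta * v_left - v_right)
        rdv += drift + rng.gauss(0.0, sigma)
        t += 1
    return ("left" if rdv > 0 else "right"), t

# With a clear value difference, the higher-valued item should win most
# trials despite fixation-driven fluctuations in the drift.
rng = random.Random(3)
choices = [addm_trial(2.0, 1.0, rng=rng)[0] for _ in range(200)]
p_left = choices.count("left") / 200
```

Because the drift depends on where gaze rests, longer fixations on one item bias choices toward it, which is the attentional choice bias the experiments quantify.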
Framework for adaptive multiscale analysis of nonhomogeneous point processes.
Helgason, Hannes; Bartroff, Jay; Abry, Patrice
2011-01-01
We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heartbeat data. Modeling the process's non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
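For a piecewise-constant rate template, the generalized likelihood ratio statistic has a closed form: fit a rate per segment by maximum likelihood (the segment mean) and compare against a single constant rate. A sketch for a two-segment template; the binning, basis functions and dynamic program of the full method are omitted:

```python
import math

def poisson_loglik(counts, rate):
    """Log-likelihood of per-bin counts under a constant Poisson rate
    (unit bin width assumed)."""
    return sum(c * math.log(rate) - rate - math.lgamma(c + 1)
               for c in counts)

def glr_statistic(counts, split):
    """Generalized likelihood ratio for a two-segment rate template with
    a change point at `split`, against a constant-rate null. Segment
    rates are the maximum likelihood estimates (segment means)."""
    n = len(counts)
    r_all = sum(counts) / n
    r_left = sum(counts[:split]) / split
    r_right = sum(counts[split:]) / (n - split)
    ll_null = poisson_loglik(counts, r_all)
    ll_alt = (poisson_loglik(counts[:split], r_left)
              + poisson_loglik(counts[split:], r_right))
    return 2.0 * (ll_alt - ll_null)
```

A flat count series gives a statistic of zero, while a rate jump gives a large one; model selection over a family of templates then trades this statistic off against template complexity.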
As part of a broader exploratory effort to develop ecological risk assessment approaches to estimate potential chemical effects on non-target populations, we describe an approach for developing simple population models to estimate the extent to which acute effects on individual...
Modeling Spatial and Temporal Aspects of Visual Backward Masking
ERIC Educational Resources Information Center
Hermens, Frouke; Luksys, Gediminas; Gerstner, Wulfram; Herzog, Michael H.; Ernst, Udo
2008-01-01
Visual backward masking is a versatile tool for understanding principles and limitations of visual information processing in the human brain. However, the mechanisms underlying masking are still poorly understood. In the current contribution, the authors show that a structurally simple mathematical model can explain many spatial and temporal…
Laboratory and modeling studies of chemistry in dense molecular clouds
NASA Technical Reports Server (NTRS)
Huntress, W. T., Jr.; Prasad, S. S.; Mitchell, G. F.
1980-01-01
A chemical evolutionary model with a large number of species and a large chemical library is used to examine the principal chemical processes in interstellar clouds. Simple chemical equilibrium arguments show the potential for synthesis of very complex organic species by ion-molecule radiative association reactions.
Gary Achtemeier
2012-01-01
A cellular automata fire model represents "elements" of fire by autonomous agents. A few simple algebraic expressions substituted for complex physical and meteorological processes and solved iteratively yield simulations for "super-diffusive" fire spread and coupled surface-layer (2-m) fire-atmosphere processes. Pressure anomalies, which are integrals of the thermal...
A model of interval timing by neural integration.
Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip
2011-06-22
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
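The paper's central mechanism, a noisy firing rate ramping linearly to a threshold, can be sketched directly. With Poisson-like noise whose variance scales with the drift, the crossing times show the scalar property: halving the drift doubles the mean interval while the coefficient of variation stays put. All parameter values below are illustrative:

```python
import random

def timed_response(drift, noise, threshold=1.0, dt=0.002, rng=None):
    """Noisy neural integrator: accumulate drift*dt plus Gaussian noise
    until the threshold is crossed; the crossing time is the produced
    interval."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return t

def interval_stats(drift, noise, n=200, seed=5):
    """Mean interval and coefficient of variation over n trials."""
    rng = random.Random(seed)
    ts = [timed_response(drift, noise, rng=rng) for _ in range(n)]
    m = sum(ts) / n
    sd = (sum((t - m) ** 2 for t in ts) / n) ** 0.5
    return m, sd / m

# Halving the drift doubles the timed interval; scaling the noise
# variance with the drift (Poisson-like spiking) leaves the CV fixed.
m1, cv1 = interval_stats(1.0, 0.2)
m2, cv2 = interval_stats(0.5, 0.2 / 2 ** 0.5)
```

This is the sense in which timing longer intervals by lowering the effective drift yields the universally scale-invariant response time distributions the model accounts for.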
Kendal, W S
2000-04-01
We illustrate how probability-generating functions (PGFs) can be employed to derive a simple probabilistic model for clonogenic survival after exposure to ionizing irradiation. Both repairable and irreparable radiation damage to DNA were assumed to occur by independent (Poisson) processes, at intensities proportional to the irradiation dose. Also, repairable damage was assumed to be either repaired or further (lethally) injured according to a third (Bernoulli) process, with the probability of lethal conversion being directly proportional to dose. Using the algebra of PGFs, these three processes were combined to yield a composite PGF that described the distribution of lethal DNA lesions in irradiated cells. The composite PGF characterized a Poisson distribution with mean αD + βD², where D was dose and α and β were radiobiological constants. This distribution yielded the conventional linear-quadratic survival equation. To test the composite model, the derived distribution was used to predict the frequencies of multiple chromosomal aberrations in irradiated human lymphocytes. The predictions agreed well with observation. This probabilistic model was consistent with single-hit mechanisms, but it was not consistent with binary misrepair mechanisms. A stochastic model for radiation survival has been constructed from elementary PGFs that exactly yields the linear-quadratic relationship. This approach can be used to investigate other simple probabilistic survival models.
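The derivation's punchline is that if lethal lesions per cell are Poisson with mean αD + βD², then the surviving (zero-lesion) fraction is the linear-quadratic curve exp(-(αD + βD²)). A sketch with illustrative constants, checked against a direct Monte Carlo draw of lesion counts:

```python
import math
import random

def survival_fraction(dose, alpha=0.15, beta=0.05):
    """Lethal lesions are Poisson with mean alpha*D + beta*D^2, so the
    surviving fraction is P(0 lesions) = exp(-(alpha*D + beta*D^2)).
    The constants here are illustrative, not fitted values."""
    return math.exp(-(alpha * dose + beta * dose ** 2))

def simulated_survival(dose, n=20000, alpha=0.15, beta=0.05, seed=2):
    """Monte Carlo check: draw a Poisson lesion count per cell (Knuth's
    multiplication method, fine for small means) and count the cells
    with zero lethal lesions."""
    rng = random.Random(seed)
    lam = alpha * dose + beta * dose ** 2
    floor = math.exp(-lam)
    survivors = 0
    for _ in range(n):
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= floor:
                break
            k += 1
        if k == 0:
            survivors += 1
    return survivors / n
```

The agreement between the simulated and analytic fractions is exactly the Poisson zero-class identity that links the stochastic lesion model to the conventional survival equation.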
An interactive modelling tool for understanding hydrological processes in lowland catchments
NASA Astrophysics Data System (ADS)
Brauer, Claudia; Torfs, Paul; Uijlenhoet, Remko
2016-04-01
Recently, we developed the Wageningen Lowland Runoff Simulator (WALRUS), a rainfall-runoff model for catchments with shallow groundwater (Brauer et al., 2014a,b). WALRUS explicitly simulates processes which are important in lowland catchments, such as feedbacks between the saturated and unsaturated zones and between groundwater and surface water. WALRUS has a simple model structure and few parameters with physical connotations. Some default functions (which can be changed easily for research purposes) are implemented to facilitate application by practitioners and students. The effect of water management on hydrological variables can be simulated explicitly. The model description and applications are published in open-access journals (Brauer et al., 2014a,b). The open source code (provided as an R package) and manual can be downloaded freely (www.github.com/ClaudiaBrauer/WALRUS). We organised a short course for Dutch water managers and consultants to become acquainted with WALRUS. We are now adapting this course as a stand-alone tutorial suitable for a varied, international audience. In addition, simple models can help teachers explain hydrological principles effectively. We used WALRUS to generate examples for simple interactive tools, which we will present at the EGU General Assembly. C.C. Brauer, A.J. Teuling, P.J.J.F. Torfs, R. Uijlenhoet (2014a): The Wageningen Lowland Runoff Simulator (WALRUS): a lumped rainfall-runoff model for catchments with shallow groundwater, Geosci. Model Dev., 7, 2313-2332. C.C. Brauer, P.J.J.F. Torfs, A.J. Teuling, R. Uijlenhoet (2014b): The Wageningen Lowland Runoff Simulator (WALRUS): application to the Hupsel Brook catchment and Cabauw polder, Hydrol. Earth Syst. Sci., 18, 4007-4028.
Simple Deterministically Constructed Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
Rodan, Ali; Tiňo, Peter
A large number of models for time series processing, forecasting or modeling follow a state-space formulation. Models in the specific class of state-space approaches, referred to as Reservoir Computing, fix their state-transition function. The state space with the associated state-transition structure forms a reservoir, which is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can potentially be exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents us from performing a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad-hoc randomized model building stages, with both researchers and practitioners having to rely on trial and error. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performance comparable to that of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proven theoretical limit.
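A deterministic cycle reservoir needs only two scalars: the cycle weight r and the input weight magnitude v (input signs can come from any fixed pseudo-random sequence). A sketch of the state update, with illustrative sizes and without the trained linear readout:

```python
import math
import random

def cycle_reservoir(inputs, n=50, r=0.9, v=0.5, seed=0):
    """Run a simple-cycle reservoir: unit i receives r * x[i-1] (unit 0
    closes the cycle from unit n-1) plus an input weight of fixed
    magnitude v with a pseudo-random sign. Only r and v are free; the
    size n and the sign sequence here are illustrative choices."""
    rng = random.Random(seed)
    signs = [1.0 if rng.random() < 0.5 else -1.0 for _ in range(n)]
    x = [0.0] * n
    states = []
    for u in inputs:
        # Synchronous update: all units read the previous state.
        x = [math.tanh(r * x[i - 1] + v * signs[i] * u) for i in range(n)]
        states.append(x)
    return states

# An impulse circulates around the cycle and fades at a rate set by r.
impulse = [1.0] + [0.0] * 30
states = cycle_reservoir(impulse)
```

The fixed, transparent topology is the point: because an input echo simply circulates and decays around the cycle, the memory capacity becomes analytically tractable rather than a property of a random matrix.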
NASA Technical Reports Server (NTRS)
Milligan, James R.; Dutton, James E.
1993-01-01
In this paper, we have introduced a comprehensive method for enterprise modeling that addresses the three important aspects of how an organization goes about its business. FirstEP includes infrastructure modeling, information modeling, and process modeling notations that are intended to be easy to learn and use. The notations stress the use of straightforward visual languages that are intuitive, syntactically simple, and semantically rich. ProSLCSE will be developed with automated tools and services to facilitate enterprise modeling and process enactment. In the spirit of FirstEP, ProSLCSE tools will also be seductively easy to use. Achieving fully managed, optimized software development and support processes will be long and arduous for most software organizations, and many serious problems will have to be solved along the way. ProSLCSE will provide the ability to document, communicate, and modify existing processes, which is the necessary first step.
Szilágyi, N; Kovács, R; Kenyeres, I; Csikor, Zs
2013-01-01
Biofilm development in a fixed-bed biofilm reactor system performing municipal wastewater treatment was monitored, with the aim of accumulating colonization and maximum-biofilm-mass data usable in engineering practice for process design purposes. An initial 6-month experimental period was selected for investigation, during which biofilm formation and reactor performance were monitored. The results were analyzed by two methods: for simple, steady-state process design purposes, the maximum biofilm mass on carriers versus influent load and a time constant of biofilm growth were determined, whereas for design approaches using dynamic models, a simple biofilm mass prediction model including attachment and detachment mechanisms was selected and fitted to the experimental data. According to a detailed statistical analysis, the collected data did not allow us to determine both the time constant of biofilm growth and the maximum biofilm mass on carriers at the same time. The observed maximum biofilm mass could be determined with a reasonable error and ranged between 438 and 843 g TS/m² of carrier surface, depending on influent load and hydrodynamic conditions. The parallel analysis of the attachment-detachment model showed that the experimental data set allowed us to determine the attachment rate coefficient, which was in the range of 0.05-0.4 m d⁻¹ depending on influent load and hydrodynamic conditions.
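The selected attachment-detachment structure can be reduced to a one-state balance, dM/dt = k_at - k_det·M, whose plateau M* = k_at/k_det plays the role of the observed maximum biofilm mass. The coefficients below are illustrative stand-ins, not the study's fitted values:

```python
def biofilm_mass(k_at, k_det, days=365, dt=0.1, m0=0.0):
    """Euler integration of dM/dt = k_at - k_det * M: biofilm mass grows
    from m0 toward the plateau M* = k_at / k_det, the analogue of the
    observed maximum biofilm mass on the carriers."""
    m = m0
    for _ in range(int(days / dt)):
        m += (k_at - k_det * m) * dt
    return m
```

For example, with an assumed net attachment flux k_at = 8.43 g TS/m² per day and detachment coefficient k_det = 0.01 per day, the plateau is 843 g TS/m², the upper end of the observed range, and the time constant 1/k_det sets how quickly it is approached.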
NASA Astrophysics Data System (ADS)
RUIZ, L.; Fovet, O.; Faucheux, M.; Molenat, J.; Sekhar, M.; Aquilina, L.; Gascuel-odoux, C.
2013-12-01
The development of simple and easily accessible metrics is required for characterizing and comparing catchment response to external forcings (climate or anthropogenic) and for managing water resources. The hydrological and geochemical signatures in the stream represent the integration of the various processes controlling this response. The complexity of these signatures over several time scales from sub-daily to several decades [Kirchner et al., 2001] makes their deconvolution very difficult. A large range of modeling approaches intend to represent this complexity by accounting for the spatial and/or temporal variability of the processes involved. However, simple metrics are not easily retrieved from these approaches, mostly because of over-parametrization issues. We hypothesize that to obtain relevant metrics, we need to use models that are able to simulate the observed variability of river signatures at different time scales, while being as parsimonious as possible. The lumped model ETNA (modified from [Ruiz et al., 2002]) is able to simulate adequately the seasonal and inter-annual patterns of stream NO3 concentration. Shallow groundwater is represented by two linear stores with double porosity, and riparian processes are represented by a constant nitrogen removal function. Our objective was to identify simple metrics of catchment response by calibrating this lumped model on two paired agricultural catchments where both N inputs and outputs were monitored for a period of 20 years. These catchments, belonging to ORE AgrHys, although underlain by the same granitic bedrock, display contrasting chemical signatures. The model was able to simulate the two contrasting observed patterns in stream and groundwater, for both hydrology and chemistry, at seasonal and pluri-annual scales. It was also compatible with the expected trends of nitrate concentration since 1960.
The output variables of the model were used to compute the nitrate residence time in both catchments. We used the Generalized Likelihood Uncertainty Estimation (GLUE) approach [Beven and Binley, 1992] to assess the parameter uncertainties and the subsequent error in model outputs and residence times. Reasonably low parameter uncertainties were obtained by calibrating simultaneously the two paired catchments with the two outlets' time series of stream flow and nitrate concentrations. Finally, only one parameter controlled the contrast in nitrogen residence times between the catchments. Therefore, this approach provided a promising metric for classifying the variability of catchment response to agricultural nitrogen inputs. Beven, K., and A. Binley (1992), The future of distributed models: model calibration and uncertainty prediction, Hydrological Processes, 6(3), 279-298. Kirchner, J. W., X. Feng, and C. Neal (2001), Catchment-scale advection and dispersion as a mechanism for fractal scaling in stream tracer concentrations, Journal of Hydrology, 254(1-4), 82-101. Ruiz, L., S. Abiven, C. Martin, P. Durand, V. Beaujouan, and J. Molenat (2002), Effect on nitrate concentration in stream water of agricultural practices in small catchments in Brittany: II. Temporal variations and mixing processes, Hydrology and Earth System Sciences, 6(3), 507-513.
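The GLUE procedure itself is simple enough to sketch: sample parameter sets from a prior, score each simulation with a likelihood measure, and retain the "behavioural" sets above a cutoff; the spread of retained parameters quantifies the uncertainty. Everything below (uniform prior, Nash-Sutcliffe-style score, exponential toy model with true parameter 0.3) is an illustrative stand-in for the ETNA calibration:

```python
import math
import random

def glue_behavioural(observed, simulate, samples=2000, threshold=0.6, seed=4):
    """Minimal GLUE sketch: draw a parameter from a uniform prior, score
    its simulation with a Nash-Sutcliffe-style efficiency, and keep the
    'behavioural' (parameter, score) pairs above the threshold."""
    rng = random.Random(seed)
    obs_mean = sum(observed) / len(observed)
    var = sum((o - obs_mean) ** 2 for o in observed)
    kept = []
    for _ in range(samples):
        k = rng.uniform(0.0, 1.0)
        sim = simulate(k)
        sse = sum((s - o) ** 2 for s, o in zip(sim, observed))
        ns = 1.0 - sse / var  # efficiency relative to the mean
        if ns > threshold:
            kept.append((k, ns))
    return kept

# Toy stand-in for the catchment model: exponential decay, true k = 0.3.
truth = [math.exp(-0.3 * t) for t in range(10)]
kept = glue_behavioural(truth, lambda k: [math.exp(-k * t) for t in range(10)])
```

The width of the retained parameter cloud is the GLUE uncertainty band; calibrating on both outlets' series at once is what narrows it in the study.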
Diffusion of a new intermediate product in a simple 'classical-Schumpeterian' model.
Haas, David
2018-05-01
This paper deals with the problem of new intermediate products within a simple model, where production is circular and goods enter into the production of other goods. It studies the process by which the new good is absorbed into the economy and the structural transformation that goes with it. By means of a long-period method the forces of structural transformation are examined, in particular the shift of existing means of production towards the innovation and the mechanism of differential growth in terms of alternative techniques and their associated systems of production. We treat two important Schumpeterian topics: the question of technological unemployment and the problem of 'forced saving' and the related problem of an involuntary reduction of real consumption per capita. It is shown that both phenomena are potential by-products of the transformation process.
NASA Astrophysics Data System (ADS)
Tsuzuki, Satori; Yanagisawa, Daichi; Nishinari, Katsuhiro
2018-04-01
This study proposes a model of a totally asymmetric simple exclusion process on a single-channel lane with site assignments along the pit lane. The system attempts to insert a new particle at the leftmost site with a certain probability, randomly selecting one of the empty sites in the pit lane and reserving it for the particle. Thereafter, the particle is directed to stop at that site only once during its travel. Recently, the system was found to show a self-deflection effect, in which the site usage distribution spontaneously biases toward the leftmost site, and the throughput becomes maximal when the site usage distribution is slightly biased toward the rightmost site. Our exact analysis describes this self-deflection effect and shows good agreement with simulations.
SEEPLUS: A SIMPLE ONLINE CLIMATE MODEL
NASA Astrophysics Data System (ADS)
Tsutsui, Junichi
A web application for a simple climate model - SEEPLUS (a Simple climate model to Examine Emission Pathways Leading to Updated Scenarios) - has been developed. SEEPLUS consists of carbon-cycle and climate-change modules, through which it provides the information infrastructure required to perform climate-change experiments, even on millennial timescales. The main objective of this application is to share the latest scientific knowledge acquired from climate modeling studies among the different stakeholders involved in climate-change issues. Both the carbon-cycle and climate-change modules employ impulse response functions (IRFs) for their key processes, thereby enabling the model to integrate the outcomes of an ensemble of complex climate models. The current IRF parameters and forcing manipulation are basically consistent with, or within the uncertainty range of, the understanding of key aspects such as the equilibrium climate sensitivity and ocean CO2 uptake documented in representative literature. The carbon-cycle module enables inverse calculation to determine the emission pathway required to attain a given concentration pathway, thereby providing a flexible way to compare the module with more advanced modeling studies. The module also enables analytical evaluation of its equilibrium states, thereby facilitating the long-term planning of global warming mitigation.
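The IRF approach such modules rely on amounts to a discrete convolution: the airborne fraction of each emission pulse decays as a sum of exponentials, and concentration is the accumulated response to past emissions. The response coefficients, timescales, and the 2.12 GtC-per-ppm conversion below are illustrative placeholders, not SEEPLUS's calibrated values.

```python
import math

def co2_irf(t, a=(0.2, 0.3, 0.3, 0.2), tau=(1e9, 300.0, 30.0, 4.0)):
    """Fraction of an emission pulse still airborne after t years (toy values)."""
    return sum(ai * math.exp(-t / ti) for ai, ti in zip(a, tau))

def concentration(emissions, c0=280.0, gtc_per_ppm=2.12):
    """CO2 concentration (ppm) from annual emissions (GtC/yr) by convolution."""
    conc = []
    for t in range(len(emissions)):
        airborne = sum(emissions[s] * co2_irf(t - s) for s in range(t + 1))
        conc.append(c0 + airborne / gtc_per_ppm)
    return conc

path = concentration([10.0] * 100)  # constant 10 GtC/yr for a century
```

The inverse calculation mentioned in the abstract runs the same convolution the other way: given a target concentration pathway, solve year by year for the emission that closes the budget.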
Fadel, Ali; Lemaire, Bruno J; Vinçon-Leite, Brigitte; Atoui, Ali; Slim, Kamal; Tassin, Bruno
2017-09-01
Many freshwater bodies worldwide that suffer from harmful algal blooms would benefit, for their management, from a simple ecological model that requires few field data, e.g. for early warning systems. Beyond a certain degree, adding processes to ecological models can reduce their predictive capabilities. In this work, we assess whether a simple ecological model without nutrients can describe the succession of cyanobacterial blooms of different species in a hypereutrophic reservoir and help understand the factors that determine these blooms. In our study site, Karaoun Reservoir, Lebanon, the cyanobacteria Aphanizomenon ovalisporum and Microcystis aeruginosa bloom alternately. A simple configuration of the model DYRESM-CAEDYM was used; both cyanobacteria were simulated, with a constant vertical migration velocity for A. ovalisporum, a light-dependent vertical migration velocity for M. aeruginosa, and growth limited by light and temperature, but not by nutrients, for both species. The model was calibrated on two successive years with contrasting bloom patterns and high variations in water level. It reproduced the measurements well, showing good performance for the water level (root-mean-square error (RMSE) lower than 1 m, annual variation of 25 m), water temperature profiles (RMSE of 0.22-1.41 °C, range 13-28 °C) and cyanobacteria biomass (RMSE of 1-57 μg Chl a L⁻¹, range 0-206 μg Chl a L⁻¹). The model also helped explain the succession of blooms in both years: the results suggest that the higher growth rate of M. aeruginosa during favourable temperature and light conditions allowed it to outgrow A. ovalisporum. Our results show that simple model configurations can be sufficient not only for theoretical work, when a few major processes can be identified, but also for operational applications.
This approach could be transposed to other hypereutrophic lakes and reservoirs to describe the competition between dominant phytoplankton species, contribute to early warning systems or be used for management scenarios.
The Development from Effortful to Automatic Processing in Mathematical Cognition.
ERIC Educational Resources Information Center
Kaye, Daniel B.; And Others
This investigation capitalizes upon the information processing models that depend upon measurement of latency of response to a mathematical problem and the decomposition of reaction time (RT). Simple two term addition problems were presented with possible solutions for true-false verification, and accuracy and RT to response were recorded. Total…
ERIC Educational Resources Information Center
Knabb, Maureen T.; Misquith, Geraldine
2006-01-01
Incorporating inquiry-based learning in the college-level introductory biology laboratory is challenging because the labs serve the dual purpose of providing a hands-on opportunity to explore content while also emphasizing the development of scientific process skills. Time limitations and variations in student preparedness for college further…
Improved nutrient removal using in situ continuous on-line sensors with short response time.
Ingildsen, P; Wendelboe, H
2003-01-01
Nutrient sensors that can be located directly in the activated sludge processes are growing in number at wastewater treatment plants. The in situ location means that the sensors can be placed close to the processes they aim to control, making them well suited for automatic process control. Compared to locating automatic analysers in the effluent from the sedimentation reactors, the in situ location yields a large reduction in response time: the settlers typically act as a first-order delay on the signal, with a retention time in the range of 4-12 hours depending on their size. Automatic control of the nitrogen and phosphorus removal processes means that considerable improvements in the performance of aeration, internal recirculation, carbon dosage and phosphate precipitation dosage can be achieved using a simple control structure and simple PID controllers. The improvements appear as decreased energy and chemical consumption and less variation in effluent concentrations of ammonium, total nitrogen and phosphate. Simple control schemes are demonstrated for the pre-denitrification and post-precipitation systems by means of full-scale plant experiments and model simulations.
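The "simple PID controllers" invoked above reduce to a few lines of code. The sketch below closes a loop around a toy first-order ammonium process; the gains, setpoint, and process constants are illustrative, not values from the plant experiments.

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy process: ammonium rises with a constant load and is removed in
# proportion to the aeration signal u. The loop is reverse-acting, so the
# aeration command is the negated (and clipped) PID output.
pid = PID(kp=2.0, ki=0.5, kd=0.0, dt=0.1)
nh4, load, setpoint = 5.0, 1.0, 1.0     # mg/l, illustrative units
for _ in range(500):
    u = max(0.0, -pid.step(setpoint, nh4))
    nh4 += 0.1 * (load - u * nh4)       # Euler step of the toy dynamics
```

After the loop, `nh4` should have settled near the 1.0 mg/l setpoint, with the integral term supplying the steady aeration needed to balance the load.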
Reduced Order Models Via Continued Fractions Applied to Control Systems,
1980-09-01
a simple model of a nuclear reactor power generator [20, 21]. The heat-generating process of a nuclear reactor depends on the mechanism called fission (a fragmentation of matter). The power generated by this process is directly related to the neutron population n(t) and, with one delayed-neutron group of precursor concentration c(t), can be described by

dn(t)/dt = ((δk(t) − β)/Λ)·n(t) + λ·c(t)    (150)
dc(t)/dt = (β/Λ)·n(t) − λ·c(t)    (151)

where δk(t) = k_c(t) − a·n(t)    (152)

The variable δk(t) is the input to the process and is given the name "reactivity".
Exploring empirical rank-frequency distributions longitudinally through a simple stochastic process.
Finley, Benjamin J; Kilkki, Kalevi
2014-01-01
The frequent appearance of empirical rank-frequency laws, such as Zipf's law, in a wide range of domains reinforces the importance of understanding and modeling these laws and rank-frequency distributions in general. In this spirit, we utilize a simple stochastic cascade process to simulate several empirical rank-frequency distributions longitudinally. We focus especially on limiting the process's complexity to increase accessibility for non-experts in mathematics. The process provides a good fit for many empirical distributions because the stochastic multiplicative nature of the process leads to an often observed concave rank-frequency distribution (on a log-log scale) and the finiteness of the cascade replicates real-world finite size effects. Furthermore, we show that repeated trials of the process can roughly simulate the longitudinal variation of empirical ranks. However, we find that the empirical variation is often less than the average simulated process variation, likely due to longitudinal dependencies in the empirical datasets. Finally, we discuss the process limitations and practical applications.
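A multiplicative cascade of the general kind described can be sketched very compactly: a unit mass is repeatedly split by random fractions, and the leaf masses, sorted largest first, form a rank-frequency curve. The binary split, uniform fractions, and depth below are illustrative choices, not the paper's exact process.

```python
import random

def cascade(levels=12, seed=7):
    """Split a unit mass by random fractions for `levels` generations."""
    rng = random.Random(seed)
    masses = [1.0]
    for _ in range(levels):
        nxt = []
        for m in masses:
            f = rng.random()
            nxt.extend((m * f, m * (1.0 - f)))  # multiplicative split
        masses = nxt
    return sorted(masses, reverse=True)  # rank-frequency: largest first

ranked = cascade()  # 2**12 leaf masses summing to the original unit mass
```

Plotting `ranked` against rank on log-log axes gives the concave shape the abstract attributes to the multiplicative structure; the fixed depth supplies the finite-size cutoff.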
A radio-frequency sheath model for complex waveforms
NASA Astrophysics Data System (ADS)
Turner, M. M.; Chabert, P.
2014-04-01
Plasma sheaths driven by radio-frequency voltages occur in contexts ranging from plasma processing to magnetically confined fusion experiments. An analytical understanding of such sheaths is therefore important, both intrinsically and as an element in more elaborate theoretical structures. Radio-frequency sheaths are commonly excited by highly anharmonic waveforms, but no analytical model exists for this general case. We present a mathematically simple sheath model that is in good agreement with earlier models for single frequency excitation, yet can be solved for arbitrary excitation waveforms. As examples, we discuss dual-frequency and pulse-like waveforms. The model employs the ansatz that the time-averaged electron density is a constant fraction of the ion density. In the cases we discuss, the error introduced by this approximation is small, and in general it can be quantified through an internal consistency condition of the model. This simple and accurate model is likely to have wide application.
NASA Astrophysics Data System (ADS)
McDermid, J. R.; Zurob, H. S.; Bian, Y.
2011-12-01
Two galvanizable high-Al, low-Si transformation-induced plasticity (TRIP)-assisted steels were subjected to isothermal bainitic transformation (IBT) temperatures compatible with the continuous galvanizing (CGL) process, and the kinetics of the retained austenite (RA) to martensite transformation during room-temperature deformation were studied as a function of the heat treatment parameters. It was determined that there was a direct relationship between the rate of strain-induced transformation and optimal mechanical properties, with more gradual transformation rates being favored. The RA to martensite transformation kinetics were successfully modeled using two methodologies: (1) the strain-based model of Olson and Cohen and (2) a simple relationship with the normalized flow stress, (σ_flow − σ_YS)/σ_YS. For the strain-based model, it was determined that the model parameters were a strong function of strain and alloy thermal processing history and a weak function of alloy chemistry. It was verified that the strain-based model in the present work agrees well with those derived by previous workers using TRIP-assisted steels of similar composition. It was further determined that the RA to martensite transformation kinetics for all alloys and heat treatments could be described using a simple model versus the normalized flow stress, indicating that the RA to martensite transformation is stress-induced rather than strain-induced for temperatures above M_s^σ.
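The strain-based Olson-Cohen model referred to above has the standard form f = 1 − exp(−β·(1 − exp(−α·ε))^n), where the inner exponential tracks shear-band formation and the outer one the nucleation of martensite at shear-band intersections. The parameter values below are illustrative, not the fitted values from the paper.

```python
import math

def olson_cohen(strain, alpha=6.0, beta=4.0, n=2.0):
    """Martensite fraction vs. true strain, Olson-Cohen form (toy parameters)."""
    shear_band = 1.0 - math.exp(-alpha * strain)   # shear-band volume fraction
    return 1.0 - math.exp(-beta * shear_band ** n)

# evaluate over 0-30% strain in 5% increments
fractions = [olson_cohen(e / 100) for e in range(0, 31, 5)]
```

The sigmoidal shape of `fractions` (slow start, rapid middle, saturation below 1) is what distinguishes a gradual, mechanically beneficial transformation from an abrupt one.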
Explanatory Models for Psychiatric Illness
Kendler, Kenneth S.
2009-01-01
How can we best develop explanatory models for psychiatric disorders? Because causal factors have an impact on psychiatric illness both at micro levels and macro levels, both within and outside of the individual, and involving processes best understood from biological, psychological, and sociocultural perspectives, traditional models of science that strive for single broadly applicable explanatory laws are ill suited for our field. Such models are based on the incorrect assumption that psychiatric illnesses can be understood from a single perspective. A more appropriate scientific model for psychiatry emphasizes the understanding of mechanisms, an approach that fits naturally with a multicausal framework and provides a realistic paradigm for scientific progress, that is, understanding mechanisms through decomposition and reassembly. Simple subunits of complicated mechanisms can be usefully studied in isolation. Reassembling these constituent parts into a functioning whole, which is straightforward for simple additive mechanisms, will be far more challenging in psychiatry where causal networks contain multiple nonlinear interactions and causal loops. Our field has long struggled with the interrelationship between biological and psychological explanatory perspectives. Building from the seminal work of the neuronal modeler and philosopher David Marr, the author suggests that biology will implement but not replace psychology within our explanatory systems. The iterative process of interactions between biology and psychology needed to achieve this implementation will deepen our understanding of both classes of processes. PMID:18483135
Saxton, Michael J
2007-01-01
Modeling obstructed diffusion is essential to the understanding of diffusion-mediated processes in the crowded cellular environment. Simple Monte Carlo techniques for modeling obstructed random walks are explained and related to Brownian dynamics and more complicated Monte Carlo methods. Random number generation is reviewed in the context of random walk simulations. Programming techniques and event-driven algorithms are discussed as ways to speed simulations.
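The simple Monte Carlo technique the review explains can be condensed to a blind-ant random walk on a lattice with immobile obstacles: moves into blocked sites are rejected, and the mean-square displacement (MSD) is averaged over walkers. Lattice size, obstacle fraction, and walker counts below are illustrative.

```python
import random

def obstructed_msd(n=50, obstacle_frac=0.3, steps=200, walkers=200, seed=3):
    """MSD of random walkers on a periodic square lattice with obstacles."""
    rng = random.Random(seed)
    blocked = set()
    while len(blocked) < int(obstacle_frac * n * n):
        blocked.add((rng.randrange(n), rng.randrange(n)))
    total = 0.0
    for _ in range(walkers):
        while True:  # start each walker on a free site
            sx, sy = rng.randrange(n), rng.randrange(n)
            if (sx, sy) not in blocked:
                break
        x, y = sx, sy  # unwrapped coordinates, so the MSD is not folded
        for _ in range(steps):
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            if ((x + dx) % n, (y + dy) % n) in blocked:
                continue  # blind-ant rule: rejected move, walker waits
            x, y = x + dx, y + dy
        total += (x - sx) ** 2 + (y - sy) ** 2
    return total / walkers

msd = obstructed_msd()
```

For an unobstructed walk the MSD equals the number of steps; crowding pushes it below that value, and the anomalous (sublinear) regime appears as the obstacle fraction approaches the percolation threshold.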
Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics
Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna
2016-01-01
Determining thawing times of frozen foods is a challenging problem, as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed, ranging from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed, and the conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always possible, as running the calculations takes time and the specialized software and equipment are not always affordable. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for estimating the thawing time of agricultural and food products. The review reveals the need for further improvement of the existing solutions, or development of new ones, to enable accurate determination of thawing time within a wide range of practical heat transfer conditions during processing. PMID:27904387
Macroscopic Fluctuation Theory for Stationary Non-Equilibrium States
NASA Astrophysics Data System (ADS)
Bertini, L.; de Sole, A.; Gabrielli, D.; Jona-Lasinio, G.; Landim, C.
2002-05-01
We formulate a dynamical fluctuation theory for stationary non-equilibrium states (SNS) which is tested explicitly in stochastic models of interacting particles. In our theory a crucial role is played by the time reversed dynamics. Within this theory we derive the following results: the modification of the Onsager-Machlup theory in the SNS; a general Hamilton-Jacobi equation for the macroscopic entropy; a non-equilibrium, nonlinear fluctuation dissipation relation valid for a wide class of systems; an H theorem for the entropy. We discuss in detail two models of stochastic boundary driven lattice gases: the zero range and the simple exclusion processes. In the first model the invariant measure is explicitly known and we verify the predictions of the general theory. For the one dimensional simple exclusion process, as recently shown by Derrida, Lebowitz, and Speer, it is possible to express the macroscopic entropy in terms of the solution of a nonlinear ordinary differential equation; by using the Hamilton-Jacobi equation, we obtain a logically independent derivation of this result.
Using explanatory crop models to develop simple tools for Advanced Life Support system studies
NASA Technical Reports Server (NTRS)
Cavazzoni, J.
2004-01-01
System-level analyses for Advanced Life Support require mathematical models for various processes, such as for biomass production and waste management, which would ideally be integrated into overall system models. Explanatory models (also referred to as mechanistic or process models) would provide the basis for a more robust system model, as these would be based on an understanding of specific processes. However, implementing such models at the system level may not always be practicable because of their complexity. For the area of biomass production, explanatory models were used to generate parameters and multivariable polynomial equations for basic models that are suitable for estimating the direction and magnitude of daily changes in canopy gas-exchange, harvest index, and production scheduling for both nominal and off-nominal growing conditions.
Quantum Optics Models of EIT Noise and Power Broadening
NASA Astrophysics Data System (ADS)
Snider, Chad; Crescimanno, Michael; O'Leary, Shannon
2011-04-01
When two coherent beams of light interact with an atom they tend to drive the atom to a non-absorbing state through a process called Electromagnetically Induced Transparency (EIT). If the light's frequency dithers, the atom's state stochastically moves in and out of this non-absorbing state. We describe a simple quantum optics model of this process that captures the essential experimentally observed statistical features of this EIT noise, with a particular emphasis on understanding power broadening.
Animal models to study microRNA function
Pal, Arpita S.; Kasinski, Andrea L.
2018-01-01
The discovery of the microRNAs, lin-4 and let-7 as critical mediators of normal development in Caenorhabditis elegans and their conservation throughout evolution has spearheaded research towards identifying novel roles of microRNAs in other cellular processes. To accurately elucidate these fundamental functions, especially in the context of an intact organism, various microRNA transgenic models have been generated and evaluated. Transgenic C. elegans (worms), Drosophila melanogaster (flies), Danio rerio (zebrafish), and Mus musculus (mouse) have contributed immensely towards uncovering the roles of multiple microRNAs in cellular processes such as proliferation, differentiation, and apoptosis, pathways that are severely altered in human diseases such as cancer. The simple model organisms, C. elegans, D. melanogaster and D. rerio do not develop cancers, but have proved to be convenient systems in microRNA research, especially in characterizing the microRNA biogenesis machinery, which is often dysregulated during human tumorigenesis. The microRNA-dependent events delineated via these simple in vivo systems have been further verified in vitro, and in more complex models of cancers, such as M. musculus. The focus of this review is to provide an overview of the important contributions made in the microRNA field using model organisms. The simple model systems provided the basis for the importance of microRNAs in normal cellular physiology, while the more complex animal systems provided evidence for the role of microRNA dysregulation in cancers. Highlights include an overview of the various strategies used to generate transgenic organisms and a review of the use of transgenic mice for evaluating pre-clinical efficacy of microRNA-based cancer therapeutics. PMID:28882225
Biomat development in soil treatment units for on-site wastewater treatment.
Winstanley, H F; Fowler, A C
2013-10-01
We provide a simple mathematical model of the bioremediation of contaminated wastewater leaching into the subsoil below a septic tank percolation system. The model comprises a description of the percolation system's flows, together with equations describing the growth of biomass and the uptake of an organic contaminant concentration. By first rendering the model dimensionless, it can be partially solved, to provide simple insights into the processes which control the efficacy of the system. In particular, we provide quantitative insight into the effect of a near surface biomat on subsoil permeability; this can lead to trench ponding, and thus propagation of effluent further down the trench. Using the computed vadose zone flow field, the model can be simply extended to include reactive transport of other contaminants of interest.
Generalized estimators of avian abundance from count survey data
Royle, J. Andrew
2004-01-01
I consider modeling avian abundance from spatially referenced bird count data collected according to common protocols such as capture-recapture, multiple observer, removal sampling and simple point counts. Small sample sizes and large numbers of parameters have motivated many analyses that disregard the spatial indexing of the data, and thus do not provide an adequate treatment of spatial structure. I describe a general framework for modeling spatially replicated data that regards local abundance as a random process, motivated by the view that the set of spatially referenced local populations (at the sample locations) constitutes a metapopulation. Under this view, attention can be focused on developing a model for the variation in local abundance independent of the sampling protocol being considered. The metapopulation model structure, when combined with the data-generating model, defines a simple hierarchical model that can be analyzed using conventional methods. The proposed modeling framework is completely general in the sense that broad classes of metapopulation models may be considered, site-level covariates on detection and abundance may be included, and estimates of abundance and related quantities may be obtained for sample locations, groups of locations, and unsampled locations. Two brief examples are given, the first involving simple point counts, and the second based on temporary removal counts. Extension of these models to open systems is briefly discussed.
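The hierarchical structure described, local abundance as a random process with counts as imperfect observations of it, can be sketched as a two-stage simulation: one Poisson draw of abundance per site, then binomially thinned counts on each visit. The rates below are illustrative, and no estimator from the paper is reproduced.

```python
import math
import random

def rpois(lam, rng):
    """Poisson draw by Knuth's product-of-uniforms method (fine for small lam)."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k - 1

rng = random.Random(11)
lam, p, sites, visits = 5.0, 0.4, 500, 3  # mean abundance, detection prob.

counts = []
for _ in range(sites):
    n_i = rpois(lam, rng)                     # latent local abundance N_i
    for _ in range(visits):                   # repeated counts share N_i
        counts.append(sum(rng.random() < p for _ in range(n_i)))

mean_count = sum(counts) / len(counts)        # converges to lam * p = 2.0
```

The point of the hierarchy is visible in the output: a naive average of counts estimates lam times p, not abundance itself, which is why a model for both layers is needed to recover lam.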
Single- and Dual-Process Models of Biased Contingency Detection.
Vadillo, Miguel A; Blanco, Fernando; Yarritu, Ion; Matute, Helena
2016-01-01
Decades of research in causal and contingency learning show that people's estimations of the degree of contingency between two events are easily biased by the relative probabilities of those two events. If two events co-occur frequently, then people tend to overestimate the strength of the contingency between them. Traditionally, these biases have been explained in terms of relatively simple single-process models of learning and reasoning. However, more recently some authors have found that these biases do not appear in all dependent variables and have proposed dual-process models to explain these dissociations between variables. In the present paper we review the evidence for dissociations supporting dual-process models and we point out important shortcomings of this literature. Some dissociations seem to be difficult to replicate or poorly generalizable and others can be attributed to methodological artifacts. Overall, we conclude that support for dual-process models of biased contingency detection is scarce and inconclusive.
A simple diagnostic model of cumulus convective clouds is developed and used in a sensitivity study to examine the extent to which the rate of change of mixed and cloud layer pollutant concentration is influenced by vertical transport and chemical transformation processes occurri...
A Computer Model of Simple Forms of Learning.
ERIC Educational Resources Information Center
Jones, Thomas L.
A basic unsolved problem in science is that of understanding learning, the process by which people and machines use their experience in a situation to guide future action in similar situations. The ideas of Piaget, Pavlov, Hull, and other learning theorists, as well as previous heuristic programing models of human intelligence, stimulated this…
Insight into nuclear body formation of phytochromes through stochastic modelling and experiment.
Grima, Ramon; Sonntag, Sebastian; Venezia, Filippo; Kircher, Stefan; Smith, Robert W; Fleck, Christian
2018-05-01
Spatial relocalization of proteins is crucial for the correct functioning of living cells. An interesting example of spatial ordering is the light-induced clustering of plant photoreceptor proteins. Upon irradiation by white or red light, the red light-active phytochrome, phytochrome B, enters the nucleus and accumulates in large nuclear bodies. The underlying physical process of nuclear body formation remains unclear, but phytochrome B is thought to coagulate via a simple protein-protein binding process. We measure, for the first time, the distribution of the number of phytochrome B-containing nuclear bodies as well as their volume distribution. We show that the experimental data cannot be explained by a stochastic model of nuclear body formation via simple protein-protein binding processes using physically meaningful parameter values. Rather, modelling suggests that the data are consistent with a two-step process: a fast nucleation step leading to macroparticles, followed by a subsequent slow step in which the macroparticles bind to form the nuclear body. An alternative explanation for the observed nuclear body distribution is that the phytochromes bind to a so far unknown molecular structure. We believe this result likely holds more generally for other nuclear body-forming plant photoreceptors and proteins.
Induced Radioactivity in Lead Shielding at the National Synchrotron Light Source
Ghosh, Vinita J.; Schaefer, Charles; Kahnhauser, Henry
2017-06-30
The National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory was shut down in September 2014. Lead bricks used as radiological shadow shielding within the accelerator were exposed to stray radiation fields during normal operations. The FLUKA code, a fully integrated Monte Carlo simulation package for the interaction and transport of particles and nuclei in matter, was used to estimate induced radioactivity in this shielding and stainless steel beam pipe from known beam losses. The FLUKA output was processed using MICROSHIELD® to estimate on-contact exposure rates with individually exposed bricks to help design and optimize the radiological survey process. This entire process can be modeled using FLUKA, but MICROSHIELD® was chosen as a secondary method because of the project's resource constraints. Due to the compressed schedule and lack of shielding configuration data, simple FLUKA models were developed. FLUKA activity estimates for stainless steel were compared with sampling data to validate results, which show that simple FLUKA models and irradiation geometries can be used to predict radioactivity inventories accurately in exposed materials. During decommissioning, 0.1% of the lead bricks were found to have measurable levels of induced radioactivity. Finally, post-processing with MICROSHIELD® provides an acceptable secondary method of estimating residual exposure rates.
NASA Astrophysics Data System (ADS)
Yarce, Andrés; Sebastián Rodríguez, Juan; Galvez, Julián; Gómez, Alejandro; García, Manuel J.
2017-06-01
This paper presents the development stage of a communication module for a solid propellant mid-power rocket model. The communication module, named Simple-1, is presented here through its design, construction and testing. A rocket model Estes Ventris Series Pro II® was modified to introduce, on top of the payload, several sensors in a CanSat form factor. The Printed Circuit Board (PCB) was designed and fabricated from Commercial Off The Shelf (COTS) components and assembled in a cylindrical rack structure similar to this small-format satellite concept. The sensor data were processed using an Arduino Mini and transmitted via a radio module to a Software Defined Radio (SDR) HackRF-based platform at the ground station. The Simple-1 was tested using a drone in successive releases, reaching altitudes from 200 to 300 meters. Different kinds of data, in terms of altitude, position, atmospheric pressure and vehicle temperature, were successfully measured, enabling progress to the next stage of launching and analysis.
The fluid trampoline: droplets bouncing on a soap film
NASA Astrophysics Data System (ADS)
Bush, John; Gilet, Tristan
2008-11-01
We present the results of a combined experimental and theoretical investigation of droplets falling onto a horizontal soap film. Both static and vertically vibrated soap films are considered. A quasi-static description of the soap film shape yields a force-displacement relation that provides excellent agreement with experiment, and allows us to model the film as a nonlinear spring. This approach yields an accurate criterion for the transition between droplet bouncing and crossing on the static film; moreover, it allows us to rationalize the observed constancy of the contact time and scaling for the coefficient of restitution in the bouncing states. On the vibrating film, a variety of bouncing behaviours were observed, including simple and complex periodic states, multiperiodicity and chaos. A simple theoretical model is developed that captures the essential physics of the bouncing process, reproducing all observed bouncing states. Quantitative agreement between model and experiment is deduced for simple periodic modes, and qualitative agreement for more complex periodic and chaotic bouncing states.
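The film-as-nonlinear-spring picture above can be illustrated with a minimal point-mass simulation: the drop is ballistic in flight, and once it deforms the film (x < 0) a stiffening spring force pushes it back. The spring constants, cubic stiffening term, and drop parameters are illustrative assumptions, not the fitted film law from the paper, and the static (unvibrated) case is shown.

```python
def bounce(x0=1.0, v0=0.0, g=9.8, k=500.0, k3=5000.0, dt=1e-4, t_end=2.0):
    """Drop height vs. time: ballistic flight plus a stiffening-spring film."""
    x, v, heights = x0, v0, []
    for _ in range(int(t_end / dt)):
        a = -g
        if x < 0.0:  # the drop is deforming the film
            a += k * (-x) + k3 * (-x) ** 3  # nonlinear restoring force
        v += a * dt          # semi-implicit Euler keeps the bounce stable
        x += v * dt
        heights.append(x)
    return heights

hs = bounce()  # the drop dips below x = 0 and rebounds repeatedly
```

With no dissipation in this sketch the drop rebounds to nearly its release height each time; adding a damping term during contact, or a vibrated film boundary, is what opens the door to the partial restitution and the periodic and chaotic states the abstract reports.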
ERIC Educational Resources Information Center
Hannan, Michael T.
This document is part of a series of chapters described in SO 011 759. Stochastic models for the sociological analysis of change and the change process in quantitative variables are presented. The author lays groundwork for the statistical treatment of simple stochastic differential equations (SDEs) and discusses some of the continuities of…
Brief Lags in Interrupted Sequential Performance: Evaluating a Model and Model Evaluation Method
2015-01-05
rehearsal mechanism in the model. To evaluate the model we developed a simple new goodness-of-fit test based on analysis of variance that offers an...repeated step). Sequential constraints are common in medicine, equipment maintenance, computer programming and technical support, data analysis...legal analysis, accounting, and many other home and workplace environments. Sequential constraints also play a role in such basic cognitive processes
Input-Output Modeling and Control of the Departure Process of Congested Airports
NASA Technical Reports Server (NTRS)
Pujet, Nicolas; Delcaire, Bertrand; Feron, Eric
2003-01-01
A simple queueing model of busy airport departure operations is proposed. This model is calibrated and validated using available runway configuration and traffic data. The model is then used to evaluate preliminary control schemes aimed at alleviating departure traffic congestion on the airport surface. The potential impact of these control strategies on direct operating costs, environmental costs and overall delay is quantified and discussed.
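The congestion mechanism behind this abstract can be illustrated with a minimal single-runway queue. This is a hedged sketch only, not the authors' calibrated model; the arrival and service rates are hypothetical.

```python
import random

def mean_departure_delay(arrival_rate, service_rate, n=20000, seed=0):
    """Mean queueing delay in a single-runway departure queue (M/M/1-style sketch)."""
    rng = random.Random(seed)
    t = 0.0          # current pushback time
    free_at = 0.0    # time the runway next becomes free
    total_delay = 0.0
    for _ in range(n):
        t += rng.expovariate(arrival_rate)               # next pushback request
        start = max(t, free_at)                          # wait if runway is busy
        total_delay += start - t
        free_at = start + rng.expovariate(service_rate)  # takeoff roll occupies runway
    return total_delay / n
```

Delay rises steeply as demand approaches runway capacity, which is the regime the proposed control schemes target.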
Development of a bi-equilibrium model for biomass gasification in a downdraft bed reactor.
Biagini, Enrico; Barontini, Federica; Tognotti, Leonardo
2016-02-01
This work proposes a simple and accurate tool for predicting the main parameters of biomass gasification (syngas composition, heating value, flow rate), suitable for process study and system analysis. A multizonal model based on non-stoichiometric equilibrium models and a repartition factor, simulating the bypass of pyrolysis products through the oxidant zone, was developed. The results of tests with different feedstocks (corn cobs, wood pellets, rice husks and vine pruning) in a demonstrative downdraft gasifier (350 kW) were used for validation. The average discrepancy between model and experimental results was up to 8 times smaller than that of the simple equilibrium model. The repartition factor was successfully related to the operating conditions and characteristics of the biomass to simulate different conditions of the gasifier (variation in potentiality, densification and mixing of feedstock) and to analyze the model sensitivity.
NASA Technical Reports Server (NTRS)
Sellers, Piers J.; Shuttleworth, W. James; Dorman, Jeff L.; Dalcher, Amnon; Roberts, John M.
1989-01-01
Using meteorological and hydrological measurements taken in and above the central Amazon basin tropical forest, the calibration of the Sellers et al. (1986) simple biosphere (SiB) model is described. The SiB model is a one-dimensional soil-vegetation-atmosphere model designed for use within GCMs, representing the vegetation cover by analogy with processes operating within a single representative plant. The experimental systems and the procedures used to obtain field data are described, together with the specification of the physiological parameterization required to provide an average description of the data. It was found that some of the existing literature on stomatal behavior for tropical species is inconsistent with the observed behavior of the complete canopy in Amazonia, and that the rainfall interception store of the canopy is considerably smaller than originally specified in the SiB model.
Modeling Simple Driving Tasks with a One-Boundary Diffusion Model
Ratcliff, Roger; Strayer, David
2014-01-01
A one-boundary diffusion model was applied to the data from two experiments in which subjects were performing a simple simulated driving task. In the first experiment, the same subjects were tested on two driving tasks using a PC-based driving simulator and the psychomotor vigilance test (PVT). The diffusion model fit the response time (RT) distributions for each task and individual subject well. Model parameters were found to correlate across tasks which suggests common component processes were being tapped in the three tasks. The model was also fit to a distracted driving experiment of Cooper and Strayer (2008). Results showed that distraction altered performance by affecting the rate of evidence accumulation (drift rate) and/or increasing the boundary settings. This provides an interpretation of cognitive distraction whereby conversing on a cell phone diverts attention from the normal accumulation of information in the driving environment. PMID:24297620
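A one-boundary diffusion (a Wiener process with drift, absorbed at a single bound) can be simulated directly. This is a generic sketch of the model class, not the authors' fitted parameters; response times follow the right-skewed inverse-Gaussian shape typical of RT data.

```python
import numpy as np

def simulate_rt(drift, boundary, sigma=1.0, dt=0.001, n_trials=5000, t_max=20.0, seed=0):
    """First-passage times of a one-boundary drift-diffusion process."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    x = np.zeros(n_trials)                 # accumulated evidence per trial
    rt = np.full(n_trials, np.nan)         # NaN = not yet crossed
    for step in range(1, n_steps + 1):
        active = np.isnan(rt)
        if not active.any():
            break
        x[active] += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(active.sum())
        crossed = active & (x >= boundary)
        rt[crossed] = step * dt
    return rt[~np.isnan(rt)]
```

For positive drift the mean first-passage time is boundary/drift, and the distribution has a long right tail (median below mean), mirroring empirical RT distributions.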
Laponite as carrier for controlled in vitro delivery of dexamethasone in vitreous humor models.
Fraile, José M; Garcia-Martin, Elena; Gil, Cristina; Mayoral, José A; Pablo, Luis E; Polo, Vicente; Prieto, Esther; Vispe, Eugenio
2016-11-01
Laponite clay is able to retain dexamethasone by simple physisorption, presumably accomplished by hydrogen bonding formation and/or complexation with sodium counterions, as shown by solid state NMR. The physisorption can be modulated to some extent by changing the solvent in the adsorption process. This simple system is able to deliver dexamethasone in a controlled manner to solutions used as models for vitreous humor. The proven biocompatibility of laponite as well as its transparency in the gel state, together with the simplicity of the preparation method, makes this system suitable for future in vivo tests of ophthalmic treatment.
Transcription, intercellular variability and correlated random walk.
Müller, Johannes; Kuttler, Christina; Hense, Burkhard A; Zeiser, Stefan; Liebscher, Volkmar
2008-11-01
We develop a simple model for the random distribution of a gene product. It is assumed that the only source of variance is due to switching transcription on and off by a random process. Under the condition that the transition rates between on and off are constant we find that the amount of mRNA follows a scaled Beta distribution. Additionally, a simple positive feedback loop is considered. The simplicity of the model allows for an explicit solution also in this setting. These findings in turn allow, e.g., for easy parameter scans. We find that bistable behavior translates into bimodal distributions. These theoretical findings are in line with experimental results.
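The on/off switching mechanism described here (the "telegraph" model of transcription) is easy to simulate with a Gillespie algorithm. The sketch below uses hypothetical rate constants; the abstract's analytical result is that, with constant switching rates, the scaled mRNA level follows a Beta distribution, which this stochastic simulation only illustrates at the level of the stationary mean.

```python
import random

def telegraph_ssa(k_on, k_off, k_tx, gamma, t_end, seed=0):
    """Gillespie simulation of the telegraph model: the promoter switches
    OFF<->ON; mRNA is transcribed only while ON and degrades at rate gamma*m.
    Returns the time-averaged mRNA copy number over [0, t_end]."""
    rng = random.Random(seed)
    t, on, m = 0.0, False, 0
    acc = 0.0  # time-integrated mRNA count
    while t < t_end:
        rates = [k_on if not on else k_off,   # promoter switch
                 k_tx if on else 0.0,         # transcription
                 gamma * m]                   # degradation
        total = sum(rates)
        dt = rng.expovariate(total)
        acc += m * min(dt, t_end - t)
        t += dt
        if t >= t_end:
            break
        u = rng.random() * total
        if u < rates[0]:
            on = not on
        elif u < rates[0] + rates[1]:
            m += 1
        else:
            m -= 1
    return acc / t_end
```

The stationary mean is (k_on/(k_on+k_off))·(k_tx/gamma), so symmetric switching with k_tx = 20 and gamma = 1 should average about 10 copies.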
Time-independent models of asset returns revisited
NASA Astrophysics Data System (ADS)
Gillemot, L.; Töyli, J.; Kertesz, J.; Kaski, K.
2000-07-01
In this study we investigate several well-known time-independent models of asset returns: the simple normal distribution, Student's t-distribution, Lévy, truncated Lévy, general stable distribution, mixed diffusion jump, and compound normal distribution. For this we use Standard & Poor's 500 index data from the New York Stock Exchange, Helsinki Stock Exchange index data describing a small volatile market, and artificial data. The results indicate that all models, excluding the simple normal distribution, are at least quite reasonable descriptions of the data. Furthermore, the use of differences instead of logarithmic returns tends to make the data look more Lévy-type distributed than they are. This phenomenon is especially evident in the artificial data, which were generated by an inflated random walk process.
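The point about differences versus logarithmic returns can be reproduced on synthetic data: for a geometrically growing series, plain differences inherit the price scale and form a scale mixture, so they look far more heavy-tailed than the log returns, which are Gaussian by construction here. A self-contained sketch (all parameters hypothetical):

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis (0 for a normal distribution)."""
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2 - 3.0

rng = np.random.default_rng(42)
log_returns = rng.normal(2e-4, 0.01, 20000)     # Gaussian log returns
price = 100.0 * np.exp(np.cumsum(log_returns))  # inflated random-walk price path
diffs = np.diff(price)                          # plain differences

k_log = excess_kurtosis(log_returns)
k_diff = excess_kurtosis(diffs)
```

k_log stays near zero while k_diff is strongly positive, i.e. the differenced data merely look more Lévy-like without any change in the underlying process.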
Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI
Donato, David I.
2017-01-01
In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
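The reference implementation is in C with MPI, but the scheduling idea (worker-initiated pulling from a bag of tasks versus static pre-allocation) is language-agnostic and can be sketched as a small discrete-event comparison. Task times and speeds below are illustrative assumptions.

```python
import heapq

def makespan_static(task_times, speeds):
    """Pre-split tasks evenly among workers up front (no pulling)."""
    k = len(speeds)
    chunks = [task_times[i::k] for i in range(k)]
    return max(sum(chunk) / s for chunk, s in zip(chunks, speeds))

def makespan_pull(task_times, speeds):
    """Workers pull the next task from a shared bag whenever they go idle."""
    ready = [(0.0, w) for w in range(len(speeds))]  # (time worker is free, worker id)
    heapq.heapify(ready)
    for t in task_times:
        free_at, w = heapq.heappop(ready)           # earliest-idle worker takes the task
        heapq.heappush(ready, (free_at + t / speeds[w], w))
    return max(free for free, _ in ready)
```

With heterogeneous node speeds, on-demand pulling keeps fast workers busy and reliably shortens the makespan; with identical workers the two policies coincide.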
Low energy analysis of νN→νNγ in the standard model
NASA Astrophysics Data System (ADS)
Hill, Richard J.
2010-01-01
The production of single photons in low energy (~1 GeV) neutrino scattering off nucleons is analyzed in the standard model. At very low energies, Eν ≪ 1 GeV, a simple description of the chiral Lagrangian involving baryons and arbitrary SU(2)L×U(1)Y gauge fields is developed. Extrapolation of the process into the ~1-2 GeV region is treated in a simple phenomenological model. Coherent enhancements in compound nuclei are studied. The relevance of single-photon events as a background to experimental searches for νμ→νe is discussed. In particular, single photons are a plausible explanation for excess events observed by the MiniBooNE experiment.
Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.
2009-01-01
In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
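The two-step regression procedure can be sketched on synthetic calibration data. This is a hedged illustration only: the report's actual criteria (MSPE threshold, log transformations, bias corrections) are more involved, and the coefficients below are made up for the demonstration.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares with an intercept; returns (coefficients, fitted values)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta, A @ beta

# Synthetic calibration samples: log(SSC) driven by log(turbidity) and log(streamflow)
rng = np.random.default_rng(7)
log_turb = rng.uniform(1.0, 5.0, 200)
log_q = rng.uniform(2.0, 6.0, 200)
log_ssc = 0.2 + 0.9 * log_turb + 0.3 * log_q + rng.normal(0.0, 0.1, 200)

beta1, fit1 = ols(log_turb, log_ssc)                            # simple linear model
beta2, fit2 = ols(np.column_stack([log_turb, log_q]), log_ssc)  # + streamflow term
rmse1 = np.sqrt(np.mean((log_ssc - fit1) ** 2))
rmse2 = np.sqrt(np.mean((log_ssc - fit2) ** 2))
```

When streamflow genuinely carries extra information, the multiple regression reduces the residual error, which is the condition under which the turbidity-streamflow model would be preferred.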
On the predictability of land surface fluxes from meteorological variables
NASA Astrophysics Data System (ADS)
Haughton, Ned; Abramowitz, Gab; Pitman, Andy J.
2018-01-01
Previous research has shown that land surface models (LSMs) are performing poorly when compared with relatively simple empirical models over a wide range of metrics and environments. Atmospheric driving data appear to provide information about land surface fluxes that LSMs are not fully utilising. Here, we further quantify the information available in the meteorological forcing data that are used by LSMs for predicting land surface fluxes, by interrogating FLUXNET data, and extending the benchmarking methodology used in previous experiments. We show that substantial performance improvement is possible for empirical models using meteorological data alone, with no explicit vegetation or soil properties, thus setting lower bounds on a priori expectations on LSM performance. The process also identifies key meteorological variables that provide predictive power. We provide an ensemble of empirical benchmarks that are simple to reproduce and provide a range of behaviours and predictive performance, acting as a baseline benchmark set for future studies. We reanalyse previously published LSM simulations and show that there is more diversity between LSMs than previously indicated, although it remains unclear why LSMs are broadly performing so much worse than simple empirical models.
A Course for All Students: Foundations of Modern Engineering
ERIC Educational Resources Information Center
Best, Charles L.
1971-01-01
Describes a course for non-engineering students at Lafayette College which includes the design process in a project. Also included are the study of modeling, optimization, simulation, computer application, and simple feedback controls. (Author/TS)
Control of DNA strand displacement kinetics using toehold exchange.
Zhang, David Yu; Winfree, Erik
2009-12-02
DNA is increasingly being used as the engineering material of choice for the construction of nanoscale circuits, structures, and motors. Many of these enzyme-free constructions function by DNA strand displacement reactions. The kinetics of strand displacement can be modulated by toeholds, short single-stranded segments of DNA that colocalize reactant DNA molecules. Recently, the toehold exchange process was introduced as a method for designing fast and reversible strand displacement reactions. Here, we characterize the kinetics of DNA toehold exchange and model it as a three-step process. This model is simple and quantitatively predicts the kinetics of 85 different strand displacement reactions from the DNA sequences. Furthermore, we use toehold exchange to construct a simple catalytic reaction. This work improves the understanding of the kinetics of nucleic acid reactions and will be useful in the rational design of dynamic DNA and RNA circuits and nanodevices.
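The three-step view of toehold exchange (toehold binding, branch migration, incumbent release) can be sketched as mass-action ODEs. The rate constants below are hypothetical placeholders, not the fitted values from the paper, and a simple forward-Euler integrator stands in for a proper stiff solver.

```python
def toehold_exchange(c0=1.0, t_end=5000.0, dt=0.01,
                     kf=0.02, kr=0.1, km=0.05, kmr=0.05, kd=0.1, kdr=0.001):
    """Euler integration of a three-step strand-displacement scheme:
      X + S <-> I1    (invader binds the toehold: kf, kr)
      I1    <-> I2    (branch migration: km, kmr)
      I2    <-> Y + L (incumbent dissociates: kd, kdr)
    Starts with [X] = [S] = c0 and everything else at zero."""
    x = s = c0
    i1 = i2 = y = l = 0.0
    for _ in range(int(t_end / dt)):
        f1 = kf * x * s - kr * i1     # net toehold-binding flux
        f2 = km * i1 - kmr * i2       # net branch-migration flux
        f3 = kd * i2 - kdr * y * l    # net dissociation flux
        x += -f1 * dt
        s += -f1 * dt
        i1 += (f1 - f2) * dt
        i2 += (f2 - f3) * dt
        y += f3 * dt
        l += f3 * dt
    return x, i1, i2, y
```

The update rules conserve x + i1 + i2 + y exactly, and with a favourable overall equilibrium constant most of the substrate is converted to output at long times.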
Diffusion of a new intermediate product in a simple ‘classical‐Schumpeterian’ model
2017-01-01
Abstract This paper deals with the problem of new intermediate products within a simple model, where production is circular and goods enter into the production of other goods. It studies the process by which the new good is absorbed into the economy and the structural transformation that goes with it. By means of a long‐period method the forces of structural transformation are examined, in particular the shift of existing means of production towards the innovation and the mechanism of differential growth in terms of alternative techniques and their associated systems of production. We treat two important Schumpeterian topics: the question of technological unemployment and the problem of ‘forced saving’ and the related problem of an involuntary reduction of real consumption per capita. It is shown that both phenomena are potential by‐products of the transformation process. PMID:29695874
Ruckert, Kelsey L; Shaffer, Gary; Pollard, David; Guan, Yawen; Wong, Tony E; Forest, Chris E; Keller, Klaus
2017-01-01
The response of the Antarctic ice sheet (AIS) to changing climate forcings is an important driver of sea-level changes. Anthropogenic climate change may drive a sizeable AIS tipping point response with subsequent increases in coastal flooding risks. Many studies analyzing flood risks use simple models to project the future responses of AIS and its sea-level contributions. These analyses have provided important new insights, but they are often silent on the effects of potentially important processes such as Marine Ice Sheet Instability (MISI) or Marine Ice Cliff Instability (MICI). These approximations can be well justified and result in more parsimonious and transparent model structures. This raises the question of how this approximation impacts hindcasts and projections. Here, we calibrate a previously published and relatively simple AIS model, which neglects the effects of MICI and regional characteristics, using a combination of observational constraints and a Bayesian inversion method. Specifically, we approximate the effects of missing MICI by comparing our results to those from expert assessments with more realistic models and quantify the bias during the last interglacial when MICI may have been triggered. Our results suggest that the model can approximate the process of MISI and reproduce the projected median melt from some previous expert assessments in the year 2100. Yet, our mean hindcast is roughly 3/4 of the observed data during the last interglacial period and our mean projection is roughly 1/6 and 1/10 of the mean from a model accounting for MICI in the year 2100. These results suggest that neglecting MICI and/or regional characteristics can lead to a low bias in simulated AIS melting during warm periods, and hence a potential low bias in projected sea levels and flood risks.
NASA Astrophysics Data System (ADS)
Roubinet, D.; Russian, A.; Dentz, M.; Gouze, P.
2017-12-01
Characterizing and modeling hydrodynamic reactive transport in fractured rock are critical challenges for various research fields and applications including environmental remediation, geological storage, and energy production. To this end, we consider a recently developed time domain random walk (TDRW) approach, which is adapted to reproduce anomalous transport behaviors and capture heterogeneous structural and physical properties. This method is also very well suited to optimize numerical simulations by memory-shared massive parallelization and provide numerical results at various scales. So far, the TDRW approach has been applied for modeling advective-diffusive transport with mass transfer between mobile and immobile regions and simple (theoretical) reactions in heterogeneous porous media represented as single continuum domains. We extend this approach to dual-continuum representations considering a highly permeable fracture network embedded into a poorly permeable rock matrix with heterogeneous geochemical reactions occurring in both geological structures. The resulting numerical model enables us to extend the range of the modeled heterogeneity scales with an accurate representation of solute transport processes and no assumption on the Fickianity of these processes. The proposed model is compared to existing particle-based methods that are usually used to model reactive transport in fractured rocks assuming a homogeneous surrounding matrix, and is used to evaluate the impact of the matrix heterogeneity on the apparent reaction rates for different 2D and 3D simple-to-complex fracture network configurations.
BP fusion model for the detection of oil spills on the sea by remote sensing
NASA Astrophysics Data System (ADS)
Chen, Weiwei; An, Jubai; Zhang, Hande; Lin, Bin
2003-06-01
Oil spills are a very serious form of marine pollution in many countries. In order to detect and identify oil spilled on the sea by remote sensor, scientists have to conduct research on the remote sensing image. For the detection of oil spills on the sea, edge detection is an important technology in image processing, and many edge detection algorithms have been developed, each with its own advantages and disadvantages. Based on the primary requirements of edge detection for oil spill images, namely computation time and detection accuracy, we developed a fusion model. The model employs a BP neural net to fuse the detection results of simple operators. We selected a BP neural net as the fusion technology because the relation between the edge grey levels produced by simple operators and the image's true edge grey levels is nonlinear, and BP neural nets are good at solving nonlinear identification problems. We therefore trained a BP neural net on some oil spill images and then applied the BP fusion model to the edge detection of other oil spill images, obtaining good results. The detection results of some gradient operators and the Laplacian operator are also compared with those of the BP fusion model to analyse the fusion effect. The paper concludes that the fusion model achieves higher accuracy and higher speed in the edge detection of oil spill images.
MEG evidence that the central auditory system simultaneously encodes multiple temporal cues.
Simpson, Michael I G; Barnes, Gareth R; Johnson, Sam R; Hillebrand, Arjan; Singh, Krish D; Green, Gary G R
2009-09-01
Speech contains complex amplitude modulations that have envelopes with multiple temporal cues. The processing of these complex envelopes is not well explained by the classical models of amplitude modulation processing. This may be because the evidence for the models typically comes from the use of simple sinusoidal amplitude modulations. In this study we used magnetoencephalography (MEG) to generate source space current estimates of the steady-state responses to simple one-component amplitude modulations and to a two-component amplitude modulation. A two-component modulation introduces the simplest form of modulation complexity into the waveform; the summation of the two-modulation rates introduces a beat-like modulation at the difference frequency between the two modulation rates. We compared the cortical representations of responses to the one-component and two-component modulations. In particular, we show that the temporal complexity in the two-component amplitude modulation stimuli was preserved at the cortical level. The method of stimulus normalization that we used also allows us to interpret these results as evidence that the important feature in sound modulations is the relative depth of one modulation rate with respect to another, rather than the absolute carrier-to-sideband modulation depth. More generally, this may be interpreted as evidence that modulation detection accurately preserves a representation of the modulation envelope. This is an important observation with respect to models of modulation processing, as it suggests that models may need a dynamic processing step to effectively model non-stationary stimuli. We suggest that the classic modulation filterbank model needs to be modified to take these findings into account.
Chaos and unpredictability in evolution.
Doebeli, Michael; Ispolatov, Iaroslav
2014-05-01
The possibility of complicated dynamic behavior driven by nonlinear feedbacks in dynamical systems has revolutionized science in the latter part of the last century. Yet despite examples of complicated frequency dynamics, the possibility of long-term evolutionary chaos is rarely considered. The concept of "survival of the fittest" is central to much evolutionary thinking and embodies a perspective of evolution as a directional optimization process exhibiting simple, predictable dynamics. This perspective is adequate for simple scenarios, when frequency-independent selection acts on scalar phenotypes. However, in most organisms many phenotypic properties combine in complicated ways to determine ecological interactions, and hence frequency-dependent selection. Therefore, it is natural to consider models for evolutionary dynamics generated by frequency-dependent selection acting simultaneously on many different phenotypes. Here we show that complicated, chaotic dynamics of long-term evolutionary trajectories in phenotype space is very common in a large class of such models when the dimension of phenotype space is large, and when there are selective interactions between the phenotypic components. Our results suggest that the perspective of evolution as a process with simple, predictable dynamics covers only a small fragment of long-term evolution.
Process of ¹⁹⁶Hg enrichment
Grossman, Mark W.; Mellor, Charles E.
1993-01-01
A simple rate equation model shows that by increasing the length of the photochemical reactor and/or by increasing the photon intensity in said reactor, the feedstock utilization of ¹⁹⁶Hg will be increased. Two preferred embodiments of the present invention are described, namely (1) long reactors using long photochemical lamps and vapor filters; and (2) quartz reactors with external UV reflecting films. These embodiments have each been constructed and operated, demonstrating the enhanced utilization process dictated by the mathematical model (also provided).
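The patent abstract gives no explicit equations, but the stated scaling (utilization grows with both reactor length and photon intensity) follows from any first-order photochemical rate law. A hedged sketch, with a hypothetical rate constant and flow speed:

```python
import math

def utilization(photon_intensity, reactor_length, k=1e-3, flow_speed=1.0):
    """Fraction of feedstock converted in one pass through the reactor,
    assuming a first-order photochemical rate equation dn/dt = -k * I * n.
    Residence time is L / v, so utilization = 1 - exp(-k * I * L / v).
    k and flow_speed are illustrative placeholders, not values from the patent."""
    residence = reactor_length / flow_speed
    return 1.0 - math.exp(-k * photon_intensity * residence)
```

Under this model, doubling either the reactor length or the photon intensity raises the single-pass utilization, consistent with the two claimed embodiments.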
Process of ¹⁹⁶Hg enrichment
Grossman, M.W.; Mellor, C.E.
1993-04-27
A simple rate equation model shows that by increasing the length of the photochemical reactor and/or by increasing the photon intensity in said reactor, the feedstock utilization of ¹⁹⁶Hg will be increased. Two preferred embodiments of the present invention are described, namely (1) long reactors using long photochemical lamps and vapor filters; and (2) quartz reactors with external UV reflecting films. These embodiments have each been constructed and operated, demonstrating the enhanced utilization process dictated by the mathematical model (also provided).
A linear acoustic model for intake wave dynamics in IC engines
NASA Astrophysics Data System (ADS)
Harrison, M. F.; Stanev, P. T.
2004-01-01
In this paper, a linear acoustic model is described that has proven useful in obtaining a better understanding of the nature of acoustic wave dynamics in the intake system of an internal combustion (IC) engine. The model described has been developed alongside a set of measurements made on a Ricardo E6 single-cylinder research engine. The simplified linear acoustic model reported here produces a calculation of the pressure time history in the port of an IC engine that agrees fairly well with measured data obtained on the engine fitted with a simple intake system. The model has proved useful in identifying the role of pipe resonance in the intake process and has led to the development of a simple hypothesis to explain the structure of the intake pressure time history: the early stages of the intake process are governed by the instantaneous values of the piston velocity and the open area under the valve. Thereafter, resonant wave action dominates the process. The depth of the early depression caused by the moving piston governs the intensity of the wave action that follows. A pressure ratio across the valve that is favourable to inflow is maintained and maximized when the open period of the valve is such as to allow at least, but no more than, one complete oscillation of the pressure at its resonant frequency to occur while the valve is open.
A model of interval timing by neural integration
Simen, Patrick; Balci, Fuat; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip
2011-01-01
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes; that correlations among them can be largely cancelled by balancing excitation and inhibition; that neural populations can act as integrators; and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule’s predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior. PMID:21697374
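The scale-invariance claim follows directly from the inverse-Gaussian first-passage statistics of a drift-diffusion integrator when the noise variance is proportional to the drift, as for balanced Poisson inputs. A small sketch with hypothetical parameters m and z (not values from the paper):

```python
import math

def timing_stats(T, m=0.1, z=1.0):
    """Mean and coefficient of variation of the first-passage time of a
    drift-diffusion integrator timing an interval T.
    Drift a = z/T reaches threshold z at time T on average; noise variance
    c2 = m*a is proportional to drift (Poisson-like spiking).
    Inverse-Gaussian first-passage time: mean = z/a, var = z*c2/a**3."""
    a = z / T
    c2 = m * a
    mean = z / a
    var = z * c2 / a ** 3
    return mean, math.sqrt(var) / mean
```

Substituting a = z/T gives CV = sqrt(m/z) independent of T, i.e. the scalar property (Weber's law) of interval timing that the model is built to capture.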
Wagner, Peter J
2012-02-23
Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution.
NASA Technical Reports Server (NTRS)
Nunes, Arthur C., Jr.
2008-01-01
Friction stir welding (FSW) is a solid state welding process invented in 1991 at The Welding Institute in the United Kingdom. A weld is made in the FSW process by translating a rotating pin along a weld seam so as to stir the sides of the seam together. FSW avoids deleterious effects inherent in melting and promises to be an important welding process for any industries where welds of optimal quality are demanded. This article provides an introduction to the FSW process. The chief concern is the physical effect of the tool on the weld metal: how weld seam bonding takes place, what kind of weld structure is generated, potential problems, possible defects for example, and implications for process parameters and tool design. Weld properties are determined by structure, and the structure of friction stir welds is determined by the weld metal flow field in the vicinity of the weld tool. Metal flow in the vicinity of the weld tool is explained through a simple kinematic flow model that decomposes the flow field into three basic component flows: a uniform translation, a rotating solid cylinder, and a ring vortex encircling the tool. The flow components, superposed to construct the flow model, can be related to particular aspects of weld process parameters and tool design; they provide a bridge to an understanding of a complex-at-first-glance weld structure. Torques and forces are also discussed. Some simple mathematical models of structural aspects, torques, and forces are included.
Universal scaling for second-class particles in a one-dimensional misanthrope process
NASA Astrophysics Data System (ADS)
Rákos, Attila
2010-06-01
We consider the one-dimensional Katz-Lebowitz-Spohn (KLS) model, which is a generalization of the totally asymmetric simple exclusion process (TASEP) with nearest neighbour interaction. Using a powerful mapping, the KLS model can be translated into a misanthrope process. In this model, for the repulsive case, it is possible to introduce second-class particles, the number of which is conserved. We study the distance distribution of second-class particles in this model numerically and find that for large distances it decreases as x-3/2. This agrees with a previous analytical result of Derrida et al (1993) for the TASEP, where the same asymptotic behaviour was found. We also study the dynamical scaling function of the distance distribution and find that it is universal within this family of models.
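Models in this family are straightforward to simulate. The sketch below checks only the plain TASEP special case on a ring, where the stationary measure is uniform and the current per attempted update approaches rho*(1-rho); it does not track second-class particles, which measuring the x^(-3/2) distance law would require.

```python
import numpy as np

def tasep_current(L=1000, density=0.5, sweeps=200, seed=0):
    """Random-sequential-update TASEP on a ring: a randomly chosen particle
    hops one site to the right if that site is empty. Returns the fraction
    of accepted hops per attempted update (the current)."""
    rng = np.random.default_rng(seed)
    occ = np.zeros(L, dtype=int)
    occ[rng.choice(L, int(density * L), replace=False)] = 1  # stationary initial state
    attempts = sweeps * L
    hops = 0
    for i in rng.integers(0, L, size=attempts):
        j = (i + 1) % L
        if occ[i] == 1 and occ[j] == 0:
            occ[i], occ[j] = 0, 1
            hops += 1
    return hops / attempts
```

At half filling the measured current should sit near 0.25; the KLS nearest-neighbour interaction modifies this fundamental diagram, which is what makes the mapping to a misanthrope process useful.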
Modelling Of Flotation Processes By Classical Mathematical Methods - A Review
NASA Astrophysics Data System (ADS)
Jovanović, Ivana; Miljanović, Igor
2015-12-01
Flotation process modelling is not a simple task, mostly because of the process complexity, i.e. the presence of a large number of variables that (to a lesser or a greater extent) affect the final outcome of the separation of mineral particles based on the differences in their surface properties. Attempts to develop a quantitative predictive model that would fully describe the operation of an industrial flotation plant began in the middle of the past century and continue to this day. This paper gives a review of published research activities directed toward the development of flotation models based on classical mathematical rules. The description and systematization of classical flotation models were performed according to the available references, with emphasis given exclusively to the modelling of the flotation process itself, regardless of the model's application in a particular control system. In accordance with contemporary considerations, the models were classified as empirical, probabilistic, kinetic and population-balance types. Each model type is presented through the aspects of flotation modelling at the macro and micro process levels.
Velan, Hadas; Frost, Ram
2010-01-01
Recent studies suggest that basic effects which are markers of visual word recognition in Indo-European languages cannot be obtained in Hebrew or in Arabic. Although Hebrew has an alphabetic writing system, just like English, French, or Spanish, a series of studies consistently suggested that neither simple form-orthographic priming nor letter-transposition priming is found in Hebrew. In four experiments, we tested the hypothesis that this is because Semitic words have an underlying structure that constrains the possible alignment of phonemes and their respective letters. The experiments contrasted typical Semitic words, which are root-derived, with Hebrew words of non-Semitic origin, which are morphologically simple and resemble base words in European languages. Using RSVP, TL priming, and form-priming manipulations, we show that Hebrew readers process morphologically simple Hebrew words in much the same way as they process English words. These words indeed reveal the typical form-priming and TL priming effects reported in European languages. In contrast, words with internal structure are processed differently, and require a different code for lexical access. We discuss the implications of these findings for current models of visual word recognition. PMID:21163472
Reanalysis, compatibility and correlation in analysis of modified antenna structures
NASA Technical Reports Server (NTRS)
Levy, R.
1989-01-01
A simple computational procedure is synthesized to process changes in the microwave-antenna pathlength-error measure when there are changes in the antenna structure model. The procedure employs structural modification reanalysis methods combined with new extensions of correlation analysis to provide the revised rms pathlength error. Mainframe finite-element-method processing of the structure model is required only for the initial unmodified structure, and elementary postprocessor computations develop and deal with the effects of the changes. Several illustrative computational examples are included. The procedure adapts readily to processing spectra of changes for parameter studies or sensitivity analyses.
Inductive Reasoning about Causally Transmitted Properties
ERIC Educational Resources Information Center
Shafto, Patrick; Kemp, Charles; Bonawitz, Elizabeth Baraff; Coley, John D.; Tenenbaum, Joshua B.
2008-01-01
Different intuitive theories constrain and guide inferences in different contexts. Formalizing simple intuitive theories as probabilistic processes operating over structured representations, we present a new computational model of category-based induction about causally transmitted properties. A first experiment demonstrates undergraduates'…
A Nondeterministic Resource Planning Model in Education
ERIC Educational Resources Information Center
Yoda, Koji
1977-01-01
Discusses a simple technique for stochastic resource planning that, when computerized, can assist educational managers in the process of quantifying the future uncertainty, thereby, helping them make better decisions. The example used is a school lunch program. (Author/IRT)
ERIC Educational Resources Information Center
Weisz, Victoria I.; Argibay, Pablo F.
2012-01-01
In contrast to models and theories that relate adult neurogenesis with the processes of learning and memory, almost no solid hypotheses have been formulated that involve a possible neurocomputational influence of adult neurogenesis on forgetting. Based on data from a previous study that implemented a simple but complete model of the main…
Hydrogen peroxide and caustic soda: Dancing with a dragon while bleaching
Peter W. Hart; Carl Houtman; Kolby Hirth
2013-01-01
When hydrogen peroxide is mixed with caustic soda, an auto-accelerating reaction can lead to generation of significant amounts of heat and oxygen. On the basis of experiments using typical pulp mill process concentrations and temperatures, a relatively simple kinetic model has been developed. Evaluation of these model results reveals that hydrogen peroxide-caustic soda...
ERIC Educational Resources Information Center
Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.
2010-01-01
Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…
Simulating fail-stop in asynchronous distributed systems
NASA Technical Reports Server (NTRS)
Sabel, Laura; Marzullo, Keith
1994-01-01
The fail-stop failure model appears frequently in the distributed systems literature. However, in an asynchronous distributed system, the fail-stop model cannot be implemented. In particular, it is impossible to reliably detect crash failures in an asynchronous system. In this paper, we show that it is possible to specify and implement a failure model that is indistinguishable from the fail-stop model from the point of view of any process within an asynchronous system. We give necessary conditions for a failure model to be indistinguishable from the fail-stop model, and derive lower bounds on the amount of process replication needed to implement such a failure model. We present a simple one-round protocol for implementing one such failure model, which we call simulated fail-stop.
Spatial-temporal modeling of malware propagation in networks.
Chen, Zesheng; Ji, Chuanyi
2005-09-01
Network security is an important task of network management. One threat to network security is the propagation of malware (malicious software). One type of malware, known as topological-scanning malware, spreads based on topology information. The focus of this work is on modeling the spread of topological malware, which is important for understanding its potential damage and for developing countermeasures to protect the network infrastructure. Our model is motivated by probabilistic graphs, which have been widely investigated in machine learning. We first use a graphical representation to abstract the propagation of malware that employs different scanning methods. We then use a spatial-temporal random process to describe the statistical dependence of malware propagation in arbitrary topologies. As the spatial dependence is particularly difficult to characterize, the problem becomes how to use simple (i.e., biased) models to approximate the spatially dependent process. In particular, we propose the independent model and the Markov model as simple approximations. We conduct both theoretical analysis and extensive simulations on large networks using both real measurements and synthesized topologies to test the performance of the proposed models. Our results show that the independent model can capture temporal dependence and detailed topology information and, thus, outperforms the previous models, whereas the Markov model incorporates a certain spatial dependence and, thus, achieves a greater accuracy in characterizing both transient and equilibrium behaviors of malware propagation.
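The independent-model approximation described here treats each neighbour's infection state as statistically independent, which yields a simple probability recursion per node. The sketch below is an illustrative reading of that idea, not the authors' exact formulation; `beta` (per-step transmission probability) and the ring topology are made-up inputs.

```python
def independent_model_step(p, adj, beta):
    """One discrete time step: p[i] is the probability that node i is infected.

    Under the independence approximation, node i escapes infection only if no
    infected neighbour transmits to it, and infected nodes stay infected.
    """
    new_p = []
    for i, neighbours in enumerate(adj):
        no_hit = 1.0
        for j in neighbours:
            no_hit *= 1.0 - beta * p[j]   # neighbour j fails to infect i
        new_p.append(1.0 - (1.0 - p[i]) * no_hit)
    return new_p

# 4-node ring topology; node 0 starts infected
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
p = [1.0, 0.0, 0.0, 0.0]
for _ in range(10):
    p = independent_model_step(p, adj, beta=0.3)
```

By symmetry of the ring, nodes 1 and 3 track each other exactly, and all probabilities converge toward 1 as the infection saturates.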
Intrinsic dimensionality predicts the saliency of natural dynamic scenes.
Vig, Eleonora; Dorr, Michael; Martinetz, Thomas; Barth, Erhardt
2012-06-01
Since visual attention-based computer vision applications have gained popularity, ever more complex, biologically inspired models seem to be needed to predict salient locations (or interest points) in naturalistic scenes. In this paper, we explore how far one can go in predicting eye movements by using only basic signal processing, such as image representations derived from efficient coding principles, and machine learning. To this end, we gradually increase the complexity of a model from simple single-scale saliency maps computed on grayscale videos to spatiotemporal multiscale and multispectral representations. Using a large collection of eye movements on high-resolution videos, supervised learning techniques fine-tune the free parameters whose addition is inevitable with increasing complexity. The proposed model, although very simple, demonstrates significant improvement in predicting salient locations in naturalistic videos over four selected baseline models and two distinct data labeling scenarios.
Towards a Simple and Efficient Web Search Framework
2014-11-01
any useful information about the various aspects of a topic. For example, for the query "raspberry pi", it covers topics such as "what is raspberry pi" ... topics generated by the LDA topic model for the query "raspberry pi". One simple explanation is that web texts are too noisy and unfocused for the LDA process ... "making a raspberry pi". However, the topics generated based on the 10 top-ranked documents do not make much sense to us in terms of their keywords
Monitoring and modeling of ultrasonic wave propagation in crystallizing mixtures
NASA Astrophysics Data System (ADS)
Marshall, T.; Challis, R. E.; Tebbutt, J. S.
2002-05-01
The utility of ultrasonic compression wave techniques for monitoring crystallization processes is investigated in a study of the seeded crystallization of copper II sulfate pentahydrate from aqueous solution. Simple models are applied to predict crystal yield, crystal size distribution and the changing nature of the continuous phase. A scattering model is used to predict the ultrasonic attenuation as crystallization proceeds. Experiments confirm that modeled attenuation is in agreement with measured results.
An equivalent circuit model of supercapacitors for applications in wireless sensor networks
NASA Astrophysics Data System (ADS)
Yang, Hengzhao; Zhang, Ying
2011-04-01
Energy harvesting technologies have been extensively researched to develop long-lived wireless sensor networks. To better utilize the harvested energy, various energy storage systems are proposed. A simple circuit model is developed to describe supercapacitor behavior, which uses two resistor-capacitor branches with different time constants to characterize the charging and redistribution processes, and a variable leakage resistance (VLR) to characterize the self-discharge process. The voltage and temperature dependence of the VLR values is also discussed. Results show that the VLR model is more accurate than the energy recursive equation (ERE) models for short term wireless sensor network applications.
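The circuit structure described here, two resistor-capacitor branches with different time constants plus a leakage path, is easy to integrate numerically. A minimal forward-Euler sketch follows; the variable leakage resistance is held constant for simplicity, and all component values are illustrative, not the paper's fitted parameters.

```python
def simulate_vlr(v1, v2, c1, c2, r2, r_leak, dt, steps):
    """Forward-Euler integration of a two-branch RC model with a leakage
    resistor across the terminals. v1 is the fast (terminal) branch voltage,
    v2 the slow branch charged through r2."""
    trace = [v1]
    for _ in range(steps):
        i_redist = (v1 - v2) / r2   # charge redistribution into the slow branch
        i_leak = v1 / r_leak        # self-discharge current
        v1 += dt * (-(i_redist + i_leak) / c1)
        v2 += dt * (i_redist / c2)
        trace.append(v1)
    return v1, v2, trace

# illustrative values: 10 F fast branch, 5 F slow branch, charged to 2.7 V
v1_end, v2_end, trace = simulate_vlr(
    v1=2.7, v2=0.0, c1=10.0, c2=5.0, r2=50.0, r_leak=10_000.0,
    dt=1.0, steps=3600)             # one hour at 1 s resolution
```

The trace shows the two regimes the abstract distinguishes: a fast initial drop as charge redistributes between branches, then a slow decay set by the leakage resistance.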
An engineering approach to modelling, decision support and control for sustainable systems.
Day, W; Audsley, E; Frost, A R
2008-02-12
Engineering research and development contributes to the advance of sustainable agriculture both through innovative methods to manage and control processes, and through quantitative understanding of the operation of practical agricultural systems using decision models. This paper describes how an engineering approach, drawing on mathematical models of systems and processes, contributes new methods that support decision making at all levels from strategy and planning to tactics and real-time control. The ability to describe the system or process by a simple and robust mathematical model is critical, and the outputs range from guidance to policy makers on strategic decisions relating to land use, through intelligent decision support to farmers, and on to real-time engineering control of specific processes. Precision in decision making leads to decreased use of inputs, lower environmental emissions and enhanced profitability, all essential to sustainable systems.
NASA Astrophysics Data System (ADS)
Candra, S.; Batan, I. M. L.; Berata, W.; Pramono, A. S.
2017-11-01
This paper presents a mathematical approach to the minimum blank holder force needed to prevent wrinkling in the deep drawing process of a cylindrical cup. Based on the maximum of the minor-major strain ratio, the slab method was applied to derive a model of the minimum variable blank holder force (VBHF), which was then compared with FE simulation. Tin steel sheet of T4-CA grade, with a thickness of 0.2 mm, was used in this study. The model of minimum VBHF can be used as a simple reference for preventing wrinkling in deep drawing.
Accelerating Drug Development: Antiviral Therapies for Emerging Viruses as a Model.
Everts, Maaike; Cihlar, Tomas; Bostwick, J Robert; Whitley, Richard J
2017-01-06
Drug discovery and development is a lengthy and expensive process. Although no single, simple solution can significantly accelerate this process, steps can be taken to avoid unnecessary delays. Using the development of antiviral therapies as a model, we describe options for acceleration that cover target selection, assay development and high-throughput screening, hit confirmation, lead identification and development, animal model evaluations, toxicity studies, regulatory issues, and the general drug discovery and development infrastructure. Together, these steps could result in accelerated timelines for bringing antiviral therapies to market so they can treat emerging infections and reduce human suffering.
A Hilbert Space Representation of Generalized Observables and Measurement Processes in the ESR Model
NASA Astrophysics Data System (ADS)
Sozzo, Sandro; Garola, Claudio
2010-12-01
The extended semantic realism ( ESR) model recently worked out by one of the authors embodies the mathematical formalism of standard (Hilbert space) quantum mechanics in a noncontextual framework, reinterpreting quantum probabilities as conditional instead of absolute. We provide here a Hilbert space representation of the generalized observables introduced by the ESR model that satisfy a simple physical condition, propose a generalization of the projection postulate, and suggest a possible mathematical description of the measurement process in terms of evolution of the compound system made up of the measured system and the measuring apparatus.
Homeopathic potentization based on nanoscale domains.
Czerlinski, George; Ypma, Tjalling
2011-12-01
The objective of this study was to present a simple descriptive and quantitative model of how high potencies in homeopathy arise. The model begins with the mechanochemical production of hydrogen and hydroxyl radicals from water and the electronic stabilization of the resulting nanodomains of water molecules. The life of these domains is initially limited to a few days, but may extend to years when the electromagnetic characteristic of a homeopathic agent is copied onto the domains. This information is transferred between the original agent and the nanodomains, and also between previously imprinted nanodomains and new ones. The differential equations previously used to describe these processes are replaced here by exponential expressions, corresponding to simplified model mechanisms. Magnetic stabilization is also involved, since these long-lived domains apparently require the presence of the geomagnetic field. Our model incorporates this factor in the formation of the long-lived compound. Numerical simulation and graphs show that the potentization mechanism can be described quantitatively by a very simplified mechanism. The omitted factors affect only the fine structure of the kinetics. Measurements of pH changes upon absorption of different electromagnetic frequencies indicate that about 400 nanodomains polymerize to form one cooperating unit. Singlet excited states of some compounds lead to dramatic changes in their hydrogen ion dissociation constant, explaining this pH effect and suggesting that homeopathic information is imprinted as higher singlet excited states. A simple description is provided of the process of potentization in homeopathic dilutions. With the exception of minor details, this simple model replicates the results previously obtained from a more complex model. While excited states are short lived in isolated molecules, they become long lived in nanodomains that form coherent cooperative aggregates controlled by the geomagnetic field.
These domains either slowly emit biophotons or perform specific biochemical work at their target.
ERIC Educational Resources Information Center
Suppes, Patrick; And Others
This report presents a theory of eye movement that accounts for main features of the stochastic behavior of eye-fixation durations and direction of movement of saccades in the process of solving arithmetic exercises of addition and subtraction. The best-fitting distribution of fixation durations with a relatively simple theoretical justification…
Action video games do not improve the speed of information processing in simple perceptual tasks.
van Ravenzwaaij, Don; Boekel, Wouter; Forstmann, Birte U; Ratcliff, Roger; Wagenmakers, Eric-Jan
2014-10-01
Previous research suggests that playing action video games improves performance on sensory, perceptual, and attentional tasks. For instance, Green, Pouget, and Bavelier (2010) used the diffusion model to decompose data from a motion detection task and estimate the contribution of several underlying psychological processes. Their analysis indicated that playing action video games leads to faster information processing, reduced response caution, and no difference in motor responding. Because perceptual learning is generally thought to be highly context-specific, this transfer from gaming is surprising and warrants corroborative evidence from a large-scale training study. We conducted 2 experiments in which participants practiced either an action video game or a cognitive game in 5 separate, supervised sessions. Prior to each session and following the last session, participants performed a perceptual discrimination task. In the second experiment, we included a third condition in which no video games were played at all. Behavioral data and diffusion model parameters showed similar practice effects for the action gamers, the cognitive gamers, and the nongamers and suggest that, in contrast to earlier reports, playing action video games does not improve the speed of information processing in simple perceptual tasks.
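The diffusion-model decomposition this abstract relies on can be illustrated with a minimal drift-diffusion simulation: evidence accumulates with a drift rate (information-processing speed) and noise until it reaches one of two boundaries whose separation captures response caution. Parameter names and values below are generic textbook choices, not the authors' fitted estimates.

```python
import random

def ddm_trial(v, a, dt=0.001, sigma=1.0, t0=0.3, rng=random):
    """One simulated trial: evidence starts unbiased at a/2 and drifts with
    rate v until it hits the upper boundary a (correct) or 0 (error).
    Returns (response time including non-decision time t0, correct?)."""
    x = a / 2.0
    t = 0.0
    while 0.0 < x < a:
        x += v * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return t + t0, x >= a

random.seed(1)
results = [ddm_trial(v=1.5, a=1.0) for _ in range(2000)]
accuracy = sum(correct for _, correct in results) / len(results)
mean_rt = sum(rt for rt, _ in results) / len(results)
```

Raising `v` speeds responses and raises accuracy, while shrinking `a` speeds responses but lowers accuracy, which is why the model can separate faster processing from reduced caution in behavioural data.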
NASA Astrophysics Data System (ADS)
Yilmaz, Işık
2009-06-01
The purpose of this study is to compare the landslide susceptibility mapping methods of frequency ratio (FR), logistic regression and artificial neural networks (ANN) applied in Kat County (Tokat, Turkey). A digital elevation model (DEM) was first constructed using GIS software. Landslide-related factors such as geology, faults, drainage system, topographical elevation, slope angle, slope aspect, topographic wetness index (TWI) and stream power index (SPI) were used in the landslide susceptibility analyses. Landslide susceptibility maps were produced from the frequency ratio, logistic regression and neural network models, and they were then compared by means of their validations. Comparison of the susceptibility maps with the known landslide locations showed high accuracies for all three models. Respective area under curve (AUC) values of 0.826, 0.842 and 0.852 for frequency ratio, logistic regression and artificial neural networks indicate that the map obtained from the ANN model is the most accurate, although the accuracies of the three models can be regarded as broadly similar. The results obtained in this study also showed that the frequency ratio model can be used as a simple tool for assessing landslide susceptibility when a sufficient number of data have been obtained. Input, calculation and output are very simple and readily understood in the frequency ratio model, whereas logistic regression and neural networks require conversion of the data to ASCII or other formats; it is also very hard to process large amounts of data in the statistical package.
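The frequency ratio method's simplicity can be seen in a few lines: for each class of a factor map, FR is the share of landslide cells in the class divided by the share of all cells in the class, with values above 1 marking higher susceptibility. The slope-angle classes and counts below are made-up illustrations, not data from the study.

```python
def frequency_ratios(class_cells, class_landslides):
    """FR per class = (% of landslide cells in class) / (% of area in class)."""
    total_cells = sum(class_cells.values())
    total_slides = sum(class_landslides.values())
    return {
        cls: (class_landslides[cls] / total_slides)
             / (class_cells[cls] / total_cells)
        for cls in class_cells
    }

# illustrative slope-angle classes (degrees) with made-up cell counts
cells = {"0-10": 5000, "10-25": 3000, "25-45": 2000}
slides = {"0-10": 10, "10-25": 60, "25-45": 130}
fr = frequency_ratios(cells, slides)   # steeper classes score above 1
```

A susceptibility index for a map cell is then typically the sum of the FR values of the classes that cell falls into across all factor maps.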
NASA Astrophysics Data System (ADS)
Stockli, R.; Vidale, P. L.
2003-04-01
The importance of correctly including land surface processes in climate models has been increasingly recognized in recent years. Even on seasonal to interannual time scales, land surface-atmosphere feedbacks can play a substantial role in determining the state of the near-surface climate. The availability of soil moisture for both runoff and evapotranspiration depends on biophysical processes occurring in plants and in the soil, acting on a wide range of time scales from minutes to years. Fluxnet site measurements in various climatic zones are used to drive three generations of LSMs (land surface models) in order to assess the level of complexity needed to represent vegetation processes at the local scale. The three models were the Bucket model (Manabe 1969), BATS 1E (Dickinson 1984) and SiB 2 (Sellers et al. 1996). Evapotranspiration and runoff processes simulated by these models range from simple one-layer soils and no-vegetation parameterizations to complex multilayer soils, including realistic photosynthesis-stomatal conductance models. The latter is driven by satellite remote sensing land surface parameters inheriting the spatiotemporal evolution of vegetation phenology. In addition, a simulation with SiB 2 including not only vertical water fluxes but also lateral soil moisture transfer by downslope flow is conducted for a pre-alpine catchment in Switzerland. Preliminary results are presented and show that, depending on the climatic environment and on the season, a realistic representation of evapotranspiration processes, including a seasonally and interannually varying state of vegetation, significantly improves the representation of observed latent and sensible heat fluxes at the local scale. Moreover, the interannual evolution of soil moisture availability and runoff is strongly dependent on the chosen model complexity.
Biophysical land surface parameters from satellite make it possible to represent the seasonal changes in vegetation activity, which have a great impact on the yearly budget of transpiration fluxes. For some sites, however, the hydrological cycle is simulated reasonably well even with simple land surface representations.
Application of a simple cerebellar model to geologic surface mapping
Hagens, A.; Doveton, J.H.
1991-01-01
Neurophysiological research into the structure and function of the cerebellum has inspired computational models that simulate information processing associated with coordination and motor movement. The cerebellar model arithmetic computer (CMAC) has a design structure which makes it readily applicable as an automated mapping device that "senses" a surface, based on a sample of discrete observations of surface elevation. The model operates as an iterative learning process, where cell weights are continuously modified by feedback to improve surface representation. The storage requirements are substantially less than those of a conventional memory allocation, and the model is extended easily to mapping in multidimensional space, where the memory savings are even greater. ?? 1991.
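The CMAC's essentials, several offset coarse grids (tilings) that each activate one cell per input, a prediction that sums the active cells' weights, and iterative error-correction updates, fit in a short sketch. Tile sizes, the learning rate, and the toy "elevation" surface below are illustrative assumptions, not the paper's configuration.

```python
class SimpleCMAC:
    """Minimal CMAC-style associative memory for 2-D surface interpolation."""

    def __init__(self, n_tilings=8, tile_size=0.25):
        self.n = n_tilings
        self.size = tile_size
        self.weights = {}                       # sparse weight table

    def _tiles(self, x, y):
        for k in range(self.n):
            off = k * self.size / self.n        # each tiling is shifted slightly
            yield (k, int((x + off) // self.size), int((y + off) // self.size))

    def predict(self, x, y):
        return sum(self.weights.get(t, 0.0) for t in self._tiles(x, y))

    def train(self, x, y, z, lr=0.5):
        err = z - self.predict(x, y)            # error-correction update
        for t in self._tiles(x, y):
            self.weights[t] = self.weights.get(t, 0.0) + lr * err / self.n

def surface(x, y):
    return x + 2.0 * y                          # toy "elevation" surface

cmac = SimpleCMAC()
pts = [(0.1 * i, 0.1 * j) for i in range(10) for j in range(10)]
for _ in range(40):                             # iterative learning passes
    for x, y in pts:
        cmac.train(x, y, surface(x, y))
err = max(abs(cmac.predict(x, y) - surface(x, y)) for x, y in pts)
```

Because the weight table is sparse and shared across overlapping tilings, storage stays far below a dense grid of the same effective resolution, which is the memory advantage the abstract highlights.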
On the bandwidth of the plenoptic function.
Do, Minh N; Marchand-Maillet, Davy; Vetterli, Martin
2012-02-01
The plenoptic function (POF) provides a powerful conceptual tool for describing a number of problems in image/video processing, vision, and graphics. For example, image-based rendering is shown as sampling and interpolation of the POF. In such applications, it is important to characterize the bandwidth of the POF. We study a simple but representative model of the scene where band-limited signals (e.g., texture images) are "painted" on smooth surfaces (e.g., of objects or walls). We show that, in general, the POF is not band limited unless the surfaces are flat. We then derive simple rules to estimate the essential bandwidth of the POF for this model. Our analysis reveals that, in addition to the maximum and minimum depths and the maximum frequency of painted signals, the bandwidth of the POF also depends on the maximum surface slope. With a unifying formalism based on multidimensional signal processing, we can verify several key results in POF processing, such as induced filtering in space and depth-corrected interpolation, and quantify the necessary sampling rates. © 2011 IEEE
Ciona as a Simple Chordate Model for Heart Development and Regeneration
Evans Anderson, Heather; Christiaen, Lionel
2016-01-01
Cardiac cell specification and the genetic determinants that govern this process are highly conserved among Chordates. Recent studies have established the importance of evolutionarily-conserved mechanisms in the study of congenital heart defects and disease, as well as cardiac regeneration. As a basal Chordate, the Ciona model system presents a simple scaffold that recapitulates the basic blueprint of cardiac development in Chordates. Here we will focus on the development and cellular structure of the heart of the ascidian Ciona as compared to other Chordates, principally vertebrates. Comparison of the Ciona model system to heart development in other Chordates presents great potential for dissecting the genetic mechanisms that underlie congenital heart defects and disease at the cellular level and might provide additional insight into potential pathways for therapeutic cardiac regeneration. PMID:27642586
A multi-year estimate of methane fluxes in Alaska from CARVE atmospheric observations
Miller, Scot M.; Miller, Charles E.; Commane, Roisin; Chang, Rachel Y.-W.; Dinardo, Steven J.; Henderson, John M.; Karion, Anna; Lindaas, Jakob; Melton, Joe R.; Miller, John B.; Sweeney, Colm; Wofsy, Steven C.; Michalak, Anna M.
2016-01-01
Methane (CH4) fluxes from Alaska and other arctic regions may be sensitive to thawing permafrost and future climate change, but estimates of both current and future fluxes from the region are uncertain. This study estimates CH4 fluxes across Alaska for 2012–2014 using aircraft observations from the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) and a geostatistical inverse model (GIM). We find that a simple flux model based on a daily soil temperature map and a static map of wetland extent reproduces the atmospheric CH4 observations at the state-wide, multi-year scale more effectively than global-scale, state-of-the-art process-based models. This result points to a simple and effective way of representing CH4 flux patterns across Alaska. It further suggests that contemporary process-based models can improve their representation of key processes that control fluxes at regional scales, and that more complex processes included in these models cannot be evaluated given the information content of available atmospheric CH4 observations. In addition, we find that CH4 emissions from the North Slope of Alaska account for 24% of the total statewide flux of 1.74 ± 0.44 Tg CH4 (for May–Oct.). Contemporary global-scale process models only attribute an average of 3% of the total flux to this region. This mismatch occurs for two reasons: process models likely underestimate wetland area in regions without visible surface water, and these models prematurely shut down CH4 fluxes at soil temperatures near 0°C. As a consequence, wetlands covered by vegetation and wetlands with persistently cold soils could be larger contributors to natural CH4 fluxes than in process estimates. 
Lastly, we find that the seasonality of CH4 fluxes varied during 2012–2014, but that total emissions did not differ significantly among years, despite substantial differences in soil temperature and precipitation; year-to-year variability in these environmental conditions did not produce obvious changes in total CH4 fluxes from the state. PMID:28066129
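The kind of "simple flux model" described above can be sketched as a per-cell flux proportional to a static wetland fraction and a smooth function of daily soil temperature. The Q10-style exponential form and all constants below are illustrative assumptions, not the study's calibration; the point of the smooth form is that, unlike a hard cut-off at 0°C, cold-but-unfrozen soils still emit.

```python
def ch4_flux(wetland_fraction, soil_temp_c, base_rate=1.0, q10=2.0):
    """CH4 flux (arbitrary units) for one grid cell and one day.

    Q10-style exponential temperature response with no hard cut-off at 0 degC,
    so fluxes taper off gradually in cold soils instead of shutting down.
    """
    return wetland_fraction * base_rate * q10 ** (soil_temp_c / 10.0)

# toy domain: (static wetland fraction, daily soil temperature in degC)
cells = [(0.6, 12.0), (0.2, 4.0), (0.3, -1.0)]
total = sum(ch4_flux(w, t) for w, t in cells)
```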
A multi-year estimate of methane fluxes in Alaska from CARVE atmospheric observations.
Miller, Scot M; Miller, Charles E; Commane, Roisin; Chang, Rachel Y-W; Dinardo, Steven J; Henderson, John M; Karion, Anna; Lindaas, Jakob; Melton, Joe R; Miller, John B; Sweeney, Colm; Wofsy, Steven C; Michalak, Anna M
2016-10-01
Methane (CH 4 ) fluxes from Alaska and other arctic regions may be sensitive to thawing permafrost and future climate change, but estimates of both current and future fluxes from the region are uncertain. This study estimates CH 4 fluxes across Alaska for 2012-2014 using aircraft observations from the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) and a geostatistical inverse model (GIM). We find that a simple flux model based on a daily soil temperature map and a static map of wetland extent reproduces the atmospheric CH 4 observations at the state-wide, multi-year scale more effectively than global-scale, state-of-the-art process-based models. This result points to a simple and effective way of representing CH 4 flux patterns across Alaska. It further suggests that contemporary process-based models can improve their representation of key processes that control fluxes at regional scales, and that more complex processes included in these models cannot be evaluated given the information content of available atmospheric CH 4 observations. In addition, we find that CH 4 emissions from the North Slope of Alaska account for 24% of the total statewide flux of 1.74 ± 0.44 Tg CH 4 ( for May-Oct.). Contemporary global-scale process models only attribute an average of 3% of the total flux to this region. This mismatch occurs for two reasons: process models likely underestimate wetland area in regions without visible surface water, and these models prematurely shut down CH 4 fluxes at soil temperatures near 0°C. As a consequence, wetlands covered by vegetation and wetlands with persistently cold soils could be larger contributors to natural CH 4 fluxes than in process estimates. 
Lastly, we find that the seasonality of CH4 fluxes varied during 2012-2014, but that total emissions did not differ significantly among years, despite substantial differences in soil temperature and precipitation; year-to-year variability in these environmental conditions did not produce obvious changes in total CH4 fluxes from the state.
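The "simple flux model" described above pairs a static wetland-extent map with a daily soil-temperature response and, unlike the process models criticized in the abstract, does not shut fluxes off abruptly near 0°C. A minimal sketch of that idea follows; the functional form and the coefficients a and b are illustrative assumptions, not the values estimated in the study:

```python
import math

def ch4_flux(soil_temp_c, wetland_frac, a=1.0, b=0.1):
    """Toy per-cell CH4 flux: wetland extent scales an exponential
    soil-temperature response. Coefficients a, b are placeholders."""
    if wetland_frac <= 0.0:
        return 0.0
    return a * wetland_frac * math.exp(b * soil_temp_c)

# Soils just above freezing still emit (no hard shutoff), which is one
# of the points the abstract raises about contemporary process models.
cold = ch4_flux(soil_temp_c=0.5, wetland_frac=0.3)
warm = ch4_flux(soil_temp_c=12.0, wetland_frac=0.3)
```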
A simple, semi-prescriptive self-assessment model for TQM.
Warwood, Stephen; Antony, Jiju
2003-01-01
This article presents a simple, semi-prescriptive self-assessment model for use in industry as part of a continuous improvement program such as Total Quality Management (TQM). The process by which the model was constructed started with a review of the available literature in order to research TQM success factors. Next, postal surveys were conducted by sending questionnaires to the winning organisations of the Baldrige and European Quality Awards and to a preselected group of enterprising UK organisations. From the analysis of this data, the self-assessment model was constructed to help organisations in their quest for excellence. This work confirmed the findings from the literature, that there are key factors that contribute to the successful implementation of TQM and these have different levels of importance. These key factors, in order of importance, are: effective leadership, the impact of other quality-related programs, measurement systems, organisational culture, education and training, the use of teams, efficient communications, active empowerment of the workforce, and a systems infrastructure to support the business and customer-focused processes. This analysis, in turn, enabled the design of a self-assessment model that can be applied within any business setting. Further work should include the testing and review of this model to ascertain its suitability and effectiveness within industry today.
A fusion of top-down and bottom-up modeling techniques to constrain regional scale carbon budgets
NASA Astrophysics Data System (ADS)
Goeckede, M.; Turner, D. P.; Michalak, A. M.; Vickers, D.; Law, B. E.
2009-12-01
The effort to constrain regional scale carbon budgets benefits from assimilating as many high quality data sources as possible in order to reduce uncertainties. Two of the most common approaches used in this field, bottom-up and top-down techniques, both have their strengths and weaknesses, and partly build on very different sources of information to train, drive, and validate the models. Within the context of the ORCA2 project, we follow both bottom-up and top-down modeling strategies with the ultimate objective of reconciling their surface flux estimates. The ORCA2 top-down component builds on a coupled WRF-STILT transport module that resolves the footprint function of a CO2 concentration measurement in high temporal and spatial resolution. Datasets involved in the current setup comprise GDAS meteorology, remote sensing products, VULCAN fossil fuel inventories, boundary conditions from CarbonTracker, and high-accuracy time series of atmospheric CO2 concentrations. Surface fluxes of CO2 are normally provided through a simple diagnostic model which is optimized against atmospheric observations. For the present study, we replaced the simple model with fluxes generated by an advanced bottom-up process model, Biome-BGC, which uses state-of-the-art algorithms to resolve plant-physiological processes, and 'grow' a biosphere based on biogeochemical conditions and climate history. This approach provides a more realistic description of biomass and nutrient pools than is the case for the simple model. The process model ingests various remote sensing data sources as well as high-resolution reanalysis meteorology, and can be trained against biometric inventories and eddy-covariance data. Linking the bottom-up flux fields to the atmospheric CO2 concentrations through the transport module allows evaluating the spatial representativeness of the BGC flux fields, and in that way assimilates more of the available information than either of the individual modeling techniques alone. 
Bayesian inversion is then applied to assign scaling factors that align the surface fluxes with the CO2 time series. Our project demonstrates how bottom-up and top-down techniques can be reconciled to arrive at a more robust and balanced spatial carbon budget. We will show how to evaluate existing flux products through regionally representative atmospheric observations, i.e. how well the underlying model assumptions represent processes at the regional scale. Adapting process model parameterizations for, e.g., sub-regions, disturbance regimes, or land cover classes in order to optimize the agreement between surface fluxes and atmospheric observations can lead to improved understanding of the underlying flux mechanisms and reduce uncertainties in regional carbon budgets.
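The scaling-factor step can be illustrated with a toy scalar version: given bottom-up fluxes f, a linearized transport/footprint operator H, and observed concentration enhancements y, a single multiplicative factor beta is inferred with a Gaussian prior centered on 1. The scalar simplification and all numbers are assumptions for illustration, not the ORCA2 setup:

```python
def scale_fluxes(H, f, y, prior_var=0.25, obs_var=0.01):
    """Conjugate Gaussian update for y_i = beta * (H f)_i + noise,
    with prior beta ~ N(1, prior_var). Returns the posterior mean."""
    hf = [sum(Hij * fj for Hij, fj in zip(row, f)) for row in H]
    precision = 1.0 / prior_var + sum(v * v for v in hf) / obs_var
    mean = (1.0 / prior_var
            + sum(v * yi for v, yi in zip(hf, y)) / obs_var) / precision
    return mean

H = [[0.5, 0.2], [0.1, 0.7]]   # illustrative footprint operator
f = [2.0, 1.0]                 # illustrative bottom-up fluxes
y = [1.8, 1.35]                # observations consistent with beta = 1.5
beta = scale_fluxes(H, f, y)
```

In a real inversion beta would be a vector (one factor per region, land cover class, or disturbance regime), with a full prior covariance in place of the single `prior_var`.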
Slow secondary relaxation in a free-energy landscape model for relaxation in glass-forming liquids
NASA Astrophysics Data System (ADS)
Diezemann, Gregor; Mohanty, Udayan; Oppenheim, Irwin
1999-02-01
Within the framework of a free-energy landscape model for the relaxation in supercooled liquids, the primary (α) relaxation is modeled by transitions among different free-energy minima. The secondary (β) relaxation then corresponds to intra-minima relaxation. We consider a simple model for the reorientational motions of the molecules associated with both processes and calculate the dielectric susceptibility as well as the spin-lattice relaxation times. The parameters of the model can be chosen such that both quantities show behavior similar to that observed in experimental studies of supercooled liquids. In particular, we find that it is not possible to obtain a crossing of the time scales associated with the α and β relaxation. In our model these processes always merge at high temperatures, and only the α process remains above the merging temperature. The relation to other models is discussed.
On the modelling of shallow turbidity flows
NASA Astrophysics Data System (ADS)
Liapidevskii, Valery Yu.; Dutykh, Denys; Gisclon, Marguerite
2018-03-01
In this study we investigate shallow turbidity density currents and underflows from a mechanical point of view. We propose a simple hyperbolic model for such flows. On one hand, our model is based on very basic conservation principles. On the other hand, the turbulent nature of the flow is also taken into account through the energy dissipation mechanism. Moreover, mixing with the pure water, along with sediment entrainment and deposition processes, is considered, which makes the problem dynamically interesting. One of the main advantages of our model is that it requires the specification of only two modeling parameters: the rate of turbulent dissipation and the rate of pure water entrainment. Consequently, the resulting model turns out to be very simple and self-consistent. The model is validated against several experimental data sets, and several special classes of solutions (travelling, self-similar and steady) are constructed. Unsteady simulations show that some special solutions are realized as asymptotic long-time states of dynamic trajectories.
Historical perspective on lead biokinetic models.
Rabinowitz, M
1998-01-01
A historical review of the development of biokinetic models of lead is presented. Biokinetics is interpreted narrowly to mean only physiologic processes happening within the body. Proceeding chronologically, the measurements of lead in the body are presented for each epoch along with mathematical models, in an attempt to trace the convergence of observations from two disparate fields, occupational medicine and radiologic health, into unified models. Kehoe's early balance studies and the use of radioactive lead tracers are presented. The 1960s saw the joint application of radioactive lead techniques and simple compartmental kinetic models to establish the exchange rates and residence times of lead in body pools. The application of stable isotopes to questions of the magnitudes of respired and ingested inputs required the development of a simple three-pool model. During the 1980s more elaborate models were developed. One of their key goals was the establishment of the dose-response relationship between exposure to lead and biologic precursors of adverse health effects. PMID:9860905
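A three-pool compartment model of the kind mentioned above (blood exchanging with soft tissue and bone, with excretion from blood) can be sketched with simple Euler integration. The rate constants below are generic placeholders, not Rabinowitz's fitted values:

```python
def simulate_lead(days, intake=1.0, dt=1.0):
    """Euler integration of a generic three-pool lead model.
    Units are arbitrary; rate constants (per day) are illustrative."""
    k_bs, k_sb = 0.02, 0.01          # blood <-> soft tissue
    k_bb, k_bone_b = 0.004, 0.0002   # blood <-> bone (slow return)
    k_excrete = 0.03                 # urinary/fecal loss from blood
    blood = soft = bone = 0.0
    for _ in range(int(days / dt)):
        d_blood = (intake + k_sb * soft + k_bone_b * bone
                   - (k_bs + k_bb + k_excrete) * blood) * dt
        d_soft = (k_bs * blood - k_sb * soft) * dt
        d_bone = (k_bb * blood - k_bone_b * bone) * dt
        blood += d_blood
        soft += d_soft
        bone += d_bone
    return blood, soft, bone

blood, soft, bone = simulate_lead(days=3650)
```

With these placeholder rates, bone acts as the long-residence pool: after ten simulated years it holds far more lead than blood, which is the qualitative behaviour the compartmental studies were designed to capture.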
Margaria, Tiziana; Kubczak, Christian; Steffen, Bernhard
2008-04-25
With Bio-jETI, we introduce a service platform for interdisciplinary work on biological application domains and illustrate its use in a concrete application concerning statistical data processing in R and xcms for an LC/MS analysis of FAAH gene knockout. Bio-jETI uses the jABC environment for service-oriented modeling and design as a graphical process modeling tool and the jETI service integration technology for remote tool execution. As a service definition and provisioning platform, Bio-jETI has the potential to become a core technology in interdisciplinary service orchestration and technology transfer. Domain experts, like biologists not trained in computer science, directly define complex service orchestrations as process models and use efficient and complex bioinformatics tools in a simple and intuitive way.
NASA Astrophysics Data System (ADS)
Sheaves, Marcus
2016-03-01
Predicting patterns of abundance and composition of biotic assemblages is essential to our understanding of key ecological processes, and our ability to monitor, evaluate and manage assemblages and ecosystems. Fish assemblages often vary from estuary to estuary in apparently unpredictable ways, making it challenging to develop a general understanding of the processes that determine assemblage composition. This makes it problematic to transfer understanding from one estuary situation to another and therefore difficult to assemble effective management plans or to assess the impacts of natural and anthropogenic disturbance. Although system-to-system variability is a common property of ecological systems, rather than being random it is the product of complex interactions of multiple causes and effects at a variety of spatial and temporal scales. I investigate the drivers of differences in estuary fish assemblages, to develop a simple model explaining the diversity and complexity of observed estuary-to-estuary differences, and explore its implications for management and conservation. The model attributes apparently unpredictable differences in fish assemblage composition from estuary to estuary to the interaction of species-specific, life history-specific and scale-specific processes. In explaining innate faunal differences among estuaries without the need to invoke complex ecological or anthropogenic drivers, the model provides a baseline against which the effects of additional natural and anthropogenic factors can be evaluated.
Using a crowdsourced approach for monitoring water level in a remote Kenyan catchment
NASA Astrophysics Data System (ADS)
Weeser, Björn; Jacobs, Suzanne; Rufino, Mariana; Breuer, Lutz
2017-04-01
Hydrological models and effective water management strategies only succeed if they are based on reliable data. Decreasing costs of technical equipment lower the barrier to creating comprehensive monitoring networks and allow measurements at high spatial and temporal resolution. However, these networks depend on specialised equipment, supervision, and maintenance, producing high running expenses. This becomes particularly challenging for remote areas, and low-income countries often do not have the capacity to run such networks. Delegating simple measurements to citizens living close to relevant monitoring points may reduce costs and increase public awareness. Here we present our experiences of using a crowdsourced approach for monitoring water levels in remote catchments in Kenya. We established a low-cost system consisting of thirteen simple water level gauges and a Raspberry Pi-based SMS server for data handling. Volunteers determine the water level and transmit their records using a simple text message. These messages are automatically processed, and real-time feedback on data quality is given. During the first year, more than 1200 valid, high-quality records were collected. In summary, the simple techniques for data collection, transmission and processing created an open platform with the potential to reach volunteers without the need for special equipment. Even though the temporal resolution of measurements cannot be controlled and peak flows might be missed, these data can still be considered a valuable input for developing management strategies and for hydrological modelling.
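The automatic processing with real-time quality feedback could look roughly like the following. The message format ("station ID, then level in cm") and the plausibility bounds are assumptions for illustration; the abstract does not specify the format actually used in the Kenyan network:

```python
import re

# Hypothetical message format "RG04 127": a station ID followed by a
# water level in centimetres. Pattern and bounds are assumed.
MSG_RE = re.compile(r"^\s*([A-Z]{2}\d{2})\s+(\d{1,4})\s*$")

def parse_report(text, min_cm=0, max_cm=500):
    """Return (station, level) for a valid report, else None.
    A None result would trigger a feedback SMS to the volunteer."""
    m = MSG_RE.match(text.upper())
    if not m:
        return None  # unreadable message
    station, level = m.group(1), int(m.group(2))
    if not (min_cm <= level <= max_cm):
        return None  # implausible water level
    return station, level

ok = parse_report("rg04 127")
bad = parse_report("rg04 9999")
```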
Single- and Dual-Process Models of Biased Contingency Detection
2016-01-01
Abstract. Decades of research in causal and contingency learning show that people’s estimations of the degree of contingency between two events are easily biased by the relative probabilities of those two events. If two events co-occur frequently, then people tend to overestimate the strength of the contingency between them. Traditionally, these biases have been explained in terms of relatively simple single-process models of learning and reasoning. However, more recently some authors have found that these biases do not appear in all dependent variables and have proposed dual-process models to explain these dissociations between variables. In the present paper we review the evidence for dissociations supporting dual-process models and we point out important shortcomings of this literature. Some dissociations seem to be difficult to replicate or poorly generalizable and others can be attributed to methodological artifacts. Overall, we conclude that support for dual-process models of biased contingency detection is scarce and inconclusive. PMID:27025532
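The contingency index usually at stake in this literature is ΔP, the difference between the outcome probability given the cue and given its absence. The toy contingency table below is an assumption chosen so that ΔP is exactly zero while cue-outcome co-occurrences are frequent, i.e. the situation in which people typically overestimate contingency:

```python
def delta_p(a, b, c, d):
    """Normative contingency from a 2x2 table:
    a: cue & outcome, b: cue & no outcome,
    c: no cue & outcome, d: neither."""
    return a / (a + b) - c / (c + d)

# Frequent co-occurrences (a = 64) but zero true contingency:
# P(outcome | cue) = 64/80 = 0.8 and P(outcome | no cue) = 16/20 = 0.8.
dp = delta_p(a=64, b=16, c=16, d=4)
```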
Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels
Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J.
2014-01-01
This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively “hiding” its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research. PMID:25505378
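The network-scale "race" in which the fastest neuron to spike hides its learnt pattern from its neighbors can be caricatured with purely additive, threshold-based dynamics. This sketch captures only the race/winner-take-all idea from the abstract, not SKAN's actual synapto-dendritic kernel adaptation rules:

```python
def race(input_times, thresholds, step=1):
    """Each neuron integrates additively once its input arrives; the
    first to reach threshold spikes and (implicitly) suppresses the
    rest. Times and thresholds are in arbitrary integer units."""
    potentials = [0] * len(thresholds)
    t = 0
    while True:
        for i, start in enumerate(input_times):
            if t >= start:
                potentials[i] += step  # additive integration only
            if potentials[i] >= thresholds[i]:
                return i, t  # winner: fastest neuron to spike
        t += 1

# Neuron 0 starts receiving input earlier, so it wins the race.
winner, t_spike = race(input_times=[0, 2], thresholds=[5, 5])
```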
Analysis of bacterial migration. 2: Studies with multiple attractant gradients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strauss, I.; Frymier, P.D.; Hahn, C.M.
1995-02-01
Many motile bacteria exhibit chemotaxis, the ability to bias their random motion toward or away from increasing concentrations of chemical substances which benefit or inhibit their survival, respectively. Since bacteria encounter numerous chemical concentration gradients simultaneously in natural surroundings, it is necessary to know quantitatively how a bacterial population responds in the presence of more than one chemical stimulus in order to develop predictive mathematical models describing bacterial migration in natural systems. This work evaluates three hypothetical models describing the integration of chemical signals from multiple stimuli: high sensitivity, maximum signal, and simple additivity. An expression for the tumbling probability for individual stimuli is modified according to the proposed models and incorporated into the cell balance equation for a 1-D attractant gradient. Random motility and chemotactic sensitivity coefficients, required input parameters for the model, are measured for single-stimulus responses. Theoretical predictions with the three signal integration models are compared to the net chemotactic response of Escherichia coli to co- and antidirectional gradients of D-fucose and [alpha]-methylaspartate in the stopped-flow diffusion chamber assay. The results eliminate the high-sensitivity model and favor simple additivity over the maximum signal model. None of the simple models, however, accurately predicts the observed behavior, suggesting that a more complex model with more steps in the signal processing mechanism is required to predict responses to multiple stimuli.
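Two of the three integration hypotheses can be reduced to their qualitative logic: each attractant lowers the tumbling probability by some amount, and the models differ in how those reductions combine. The paper modifies a full tumbling-probability expression inside a 1-D cell balance equation; the bare-bones forms and numbers below are illustrative stand-ins only (the high-sensitivity model is omitted):

```python
def maximum_signal(p0, deltas):
    """Cell responds only to the strongest single signal."""
    return max(p0 - max(deltas), 0.0)

def simple_additivity(p0, deltas):
    """Reductions from all signals add up."""
    return max(p0 - sum(deltas), 0.0)

# Unstimulated tumbling probability 0.5; two codirectional attractant
# signals reduce it by 0.1 and 0.2 respectively.
p_max = maximum_signal(0.5, [0.1, 0.2])
p_add = simple_additivity(0.5, [0.1, 0.2])
```

Under additivity the cell tumbles less (runs longer) than under the maximum-signal rule whenever more than one attractant signal is present, which is what lets codirectional-gradient experiments discriminate between the two.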
A New Model for Inquiry: Is the Scientific Method Dead?
ERIC Educational Resources Information Center
Harwood, William S.
2004-01-01
There has been renewed discussion of the scientific method, with many voices arguing that it presents a very limited or even wholly incorrect image of the way science is really done. At the same time, the idea of a scientific method is pervasive. This article identifies the scientific method as a simple model for the process of scientific inquiry.…
IoGET: Internet of Geophysical and Environmental Things
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mudunuru, Maruti Kumar
The objective of this project is to provide novel and fast reduced-order models for onboard computation at sensor nodes for real-time analysis. The approach will require that LANL perform high-fidelity numerical simulations, construct simple reduced-order models (ROMs) using machine learning and signal processing algorithms, and use real-time data analysis for ROMs and compressive sensing at sensor nodes.
Experimental and numerical modeling of shrub crown fire initiation
Watcharapong Tachajapong; Jesse Lozano; Shakar Mahalingam; Xiangyang Zhou; David Weise
2009-01-01
The transition of fire from dry surface fuels to wet shrub crown fuels was studied using laboratory experiments and a simple physical model to gain a better understanding of the transition process. In the experiments, we investigated the effects of varying vertical distances between surface and crown fuels (crown base height), and of the wind speed on crown fire...
Design and Training of Limited-Interconnect Architectures
1991-07-16
and signal processing. Neuromorphic (brain-like) models allow an alternative for achieving real-time operation for such tasks, while having a...compact and robust architecture. Neuromorphic models consist of interconnections of simple computational nodes. In this approach, each node computes a...operational performance. II. Research Objectives The research objectives were: 1. Development of on-chip local training rules specifically designed for
Context-dependent decision-making: a simple Bayesian model
Lloyd, Kevin; Leslie, David S.
2013-01-01
Many phenomena in animal learning can be explained by a context-learning process whereby an animal learns about different patterns of relationship between environmental variables. Differentiating between such environmental regimes or ‘contexts’ allows an animal to rapidly adapt its behaviour when context changes occur. The current work views animals as making sequential inferences about current context identity in a world assumed to be relatively stable but also capable of rapid switches to previously observed or entirely new contexts. We describe a novel decision-making model in which contexts are assumed to follow a Chinese restaurant process with inertia and full Bayesian inference is approximated by a sequential-sampling scheme in which only a single hypothesis about current context is maintained. Actions are selected via Thompson sampling, allowing uncertainty in parameters to drive exploration in a straightforward manner. The model is tested on simple two-alternative choice problems with switching reinforcement schedules and the results compared with rat behavioural data from a number of T-maze studies. The model successfully replicates a number of important behavioural effects: spontaneous recovery, the effect of partial reinforcement on extinction and reversal, the overtraining reversal effect, and serial reversal-learning effects. PMID:23427101
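The action-selection component of the model, Thompson sampling, is easy to sketch on a stationary two-alternative bandit: each action keeps a Beta posterior over its reward probability, one sample is drawn from each posterior, and the action with the larger sample is taken. This isolates only the Thompson-sampling step; the Chinese-restaurant-process context inference that handles reversals in the full model is omitted, and the task parameters are illustrative:

```python
import random

def run(trials=2000, p=(0.8, 0.2), seed=0):
    """Thompson sampling on a two-armed Bernoulli bandit."""
    rng = random.Random(seed)
    alpha = [1, 1]  # Beta(1, 1) priors per arm
    beta = [1, 1]
    rewards = 0
    for _ in range(trials):
        samples = [rng.betavariate(alpha[i], beta[i]) for i in (0, 1)]
        arm = 0 if samples[0] >= samples[1] else 1  # sample-greedy choice
        r = 1 if rng.random() < p[arm] else 0
        alpha[arm] += r       # posterior update
        beta[arm] += 1 - r
        rewards += r
    return rewards / trials

rate = run()
```

Because actions are chosen by sampling from the posterior rather than by maximizing its mean, residual uncertainty automatically drives exploration, which is the property the abstract highlights.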
Stroke-model-based character extraction from gray-level document images.
Ye, X; Cheriet, M; Suen, C Y
2001-01-01
Global gray-level thresholding techniques such as Otsu's method, and local gray-level thresholding techniques such as edge-based segmentation or the adaptive thresholding method are powerful in extracting character objects from simple or slowly varying backgrounds. However, they are found to be insufficient when the backgrounds include sharply varying contours or fonts in different sizes. A stroke-model is proposed to depict the local features of character objects as double-edges in a predefined size. This model enables us to detect thin connected components selectively, while ignoring relatively large backgrounds that appear complex. Meanwhile, since the stroke width restriction is fully factored in, the proposed technique can be used to extract characters in predefined font sizes. To process large volumes of documents efficiently, a hybrid method is proposed for character extraction from various backgrounds. Using the measurement of class separability to differentiate images with simple backgrounds from those with complex backgrounds, the hybrid method can process documents with different backgrounds by applying the appropriate methods. Experiments on extracting handwriting from a check image, as well as machine-printed characters from scene images demonstrate the effectiveness of the proposed model.
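Otsu's method, the global-thresholding baseline the paper contrasts with its stroke model, picks the gray level that maximizes the between-class variance of the image histogram. A self-contained sketch on a toy bimodal "image" (dark strokes on a bright background):

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]                 # background weight up to t
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # foreground weight above t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal image: dark strokes (~20) on a bright background (~200).
pixels = [20] * 100 + [200] * 300
t = otsu_threshold(pixels)
```

On a clean bimodal histogram like this the threshold lands between the two modes; the paper's point is that such global methods break down when the background itself contains sharp contours or text-like structure.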
Wagner, Peter J.
2012-01-01
Rate distributions are important considerations when testing hypotheses about morphological evolution or phylogeny. They also have implications about general processes underlying character evolution. Molecular systematists often assume that rates are Poisson processes with gamma distributions. However, morphological change is the product of multiple probabilistic processes and should theoretically be affected by hierarchical integration of characters. Both factors predict lognormal rate distributions. Here, a simple inverse modelling approach assesses the best single-rate, gamma and lognormal models given observed character compatibility for 115 invertebrate groups. Tests reject the single-rate model for nearly all cases. Moreover, the lognormal outperforms the gamma for character change rates and (especially) state derivation rates. The latter in particular is consistent with integration affecting morphological character evolution. PMID:21795266
Velderraín, José Dávila; Martínez-García, Juan Carlos; Álvarez-Buylla, Elena R
2017-01-01
Mathematical models based on dynamical systems theory are well-suited tools for the integration of available molecular experimental data into coherent frameworks in order to propose hypotheses about the cooperative regulatory mechanisms driving developmental processes. Computational analysis of the proposed models using well-established methods enables testing the hypotheses by contrasting predictions with observations. Within such framework, Boolean gene regulatory network dynamical models have been extensively used in modeling plant development. Boolean models are simple and intuitively appealing, ideal tools for collaborative efforts between theorists and experimentalists. In this chapter we present protocols used in our group for the study of diverse plant developmental processes. We focus on conceptual clarity and practical implementation, providing directions to the corresponding technical literature.
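The core computation in a Boolean gene regulatory network model is a synchronous update of all genes followed by a search for the attractor, which is identified with a cell fate. The three-gene logic below is an invented toy, not a published plant network:

```python
def step(state):
    """Synchronous update of three genes with illustrative logic."""
    a, b, c = state
    return (not c,       # A is repressed by C
            a,           # B follows A
            a and b)     # C requires both A and B

def find_attractor(state, max_steps=50):
    """Iterate until a state repeats; return the attractor cycle."""
    seen = []
    for _ in range(max_steps):
        if state in seen:
            return seen[seen.index(state):]
        seen.append(state)
        state = step(state)
    return None

cycle = find_attractor((True, False, False))
```

For this toy logic the trajectory closes into a cyclic attractor of five states; fixed-point attractors (cycles of length one) are the ones usually interpreted as stable cell fates.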
NASA Astrophysics Data System (ADS)
Tang, Guoping; Zheng, Jianqiu; Yang, Ziming; Graham, David; Gu, Baohua; Mayes, Melanie; Painter, Scott; Thornton, Peter
2016-04-01
Among the coupled thermal, hydrological, geochemical, and biological processes, redox processes play major roles in carbon and nutrient cycling and greenhouse gas (GHG) emission. Increasingly, mechanistic representation of redox processes is acknowledged as necessary for accurate prediction of GHG emission in the assessment of land-atmosphere interactions. Simple organic substrates, Fe reduction, microbial reactions, and the Windermere Humic Aqueous Model (WHAM) were added to a reaction network used in the land component of an Earth system model. In conjunction with this amended reaction network, various temperature response functions used in ecosystem models were assessed for their ability to describe experimental observations from incubation tests with arctic soils. Incorporation of Fe reduction reactions improves the prediction of the lag time between CO2 and CH4 accumulation. The inclusion of the WHAM model enables us to approximately simulate the initial pH drop due to organic acid accumulation and the subsequent pH increase due to Fe reduction without parameter adjustment. The CLM4.0, CENTURY, and Ratkowsky temperature response functions described the observations better than the Q10 method, Arrhenius equation, and ROTH-C. As electron acceptors between O2 and CO2 (e.g., Fe(III), SO4^2-) are often involved, our results support inclusion of these redox reactions for accurate prediction of CH4 production and consumption. Ongoing work includes improving the parameterization of organic matter decomposition to produce simple organic substrates, examining the influence of redox potential on methanogenesis under thermodynamically favorable conditions, and refining temperature response representation near the freezing point through additional model-experiment iterations. We will use the model to describe observed GHG emission at arctic and tropical sites.
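Three of the temperature response functions compared in the study have simple canonical forms; the versions below are normalized to a reference temperature, with generic textbook parameter values rather than the ones fitted to the arctic soil incubations:

```python
import math

def q10(T, T_ref=25.0, Q10=2.0):
    """Q10 rule: rate doubles (for Q10 = 2) per 10 degC warming."""
    return Q10 ** ((T - T_ref) / 10.0)

def arrhenius(T, T_ref=25.0, Ea=60e3, R=8.314):
    """Arrhenius equation relative to T_ref (Ea in J/mol)."""
    return math.exp(-Ea / R * (1.0 / (T + 273.15) - 1.0 / (T_ref + 273.15)))

def ratkowsky(T, T_min=-8.0, T_ref=25.0):
    """Ratkowsky square-root model: rate ~ (T - T_min)^2 above T_min,
    zero below it, normalized at T_ref. T_min is an assumed value."""
    if T <= T_min:
        return 0.0
    return ((T - T_min) / (T_ref - T_min)) ** 2

# Relative rates at 1 degC, near the freezing point the study focuses on.
r_cold = [f(1.0) for f in (q10, arrhenius, ratkowsky)]
```

Near 0°C the three forms diverge substantially, which is why the choice of response function matters for cold-soil CH4 and CO2 predictions.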
Kinetic Theory and Simulation of Single-Channel Water Transport
NASA Astrophysics Data System (ADS)
Tajkhorshid, Emad; Zhu, Fangqiang; Schulten, Klaus
Water translocation between various compartments of a system is a fundamental process in biology of all living cells and in a wide variety of technological problems. The process is of interest in different fields of physiology, physical chemistry, and physics, and many scientists have tried to describe the process through physical models. Owing to advances in computer simulation of molecular processes at an atomic level, water transport has been studied in a variety of molecular systems ranging from biological water channels to artificial nanotubes. While simulations have successfully described various kinetic aspects of water transport, offering a simple, unified model to describe trans-channel translocation of water turned out to be a nontrivial task.
NASA Astrophysics Data System (ADS)
LaManna, Joseph C.; Sun, Xiaoyan; Ivy, Andre D.; Ward, Nicole L.
We have used a relatively simple model of hypoxia that triggers adaptive structural changes in the cerebral microvasculature to study the process of physiological angiogenesis. This model can be used to obtain mechanistic data for the processes that probably underlie the dynamic structural changes that occur in learning and the control of oxygen availability to the neurovascular unit. These mechanisms are broadly involved in a wide variety of pathophysiological processes. This is the vascular component to CNS functional plasticity, supporting learning and adaptation. The angiogenic process may wane with age, contributing to the decreasing ability to survive metabolic stress and the diminution of neuronal plasticity.
Diffusion of Defaults Among Financial Institutions
NASA Astrophysics Data System (ADS)
Demange, Gabrielle
The paper proposes a simple unified model for the diffusion of defaults across financial institutions and presents some measures for evaluating the risk a bank imposes on the system. Standard contagion processes may fail to incorporate some important features of financial contagion.
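A default-diffusion process of this general kind can be sketched as an interbank cascade: a bank defaults when its losses from already-defaulted counterparties exceed its capital buffer, and the process iterates to a fixed point. The exposure matrix and buffers below are illustrative, not taken from the paper's model:

```python
def cascade(exposures, buffers, initial):
    """Iterate defaults to a fixed point.
    exposures[i][j]: loss to bank i if bank j defaults;
    buffers[i]: capital buffer of bank i;
    initial: set of banks defaulting exogenously."""
    n = len(buffers)
    defaulted = set(initial)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i in defaulted:
                continue
            loss = sum(exposures[i][j] for j in defaulted)
            if loss > buffers[i]:
                defaulted.add(i)
                changed = True
    return defaulted

# Bank 1's failure wipes out both counterparties' buffers in turn.
exposures = [[0, 5, 0], [4, 0, 0], [0, 6, 0]]
buffers = [3, 10, 5]
hit = cascade(exposures, buffers, initial={1})
```

A systemic-risk measure in this spirit would compare the size of the cascade triggered by each bank's exogenous default, here three banks out of three when bank 1 fails.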
Interaction of a sodium ion with the water liquid-vapor interface
NASA Technical Reports Server (NTRS)
Wilson, M. A.; Pohorille, A.; Pratt, L. R.; MacElroy, R. D. (Principal Investigator)
1989-01-01
Molecular dynamics results are presented for the density profile of a sodium ion near the water liquid-vapor interface at 320 K. These results are compared with the predictions of a simple dielectric model for the interaction of a monovalent ion with this interface. The interfacial region described by the model profile is too narrow and the profile decreases too abruptly near the solution interface. Thus, the simple model does not provide a satisfactory description of the molecular dynamics results for ion positions within two molecular diameters from the solution interface where appreciable ion concentrations are observed. These results suggest that surfaces associated with dielectric models of ionic processes at aqueous solution interfaces should be located at least two molecular diameters inside the liquid phase. A free energy expense of about 2 kcal/mol is required to move the ion within two molecular layers of the free water liquid-vapor interface.
Konovalov, Arkady; Krajbich, Ian
2016-01-01
Organisms appear to learn and make decisions using different strategies known as model-free and model-based learning; the former is mere reinforcement of previously rewarded actions and the latter is a forward-looking strategy that involves evaluation of action-state transition probabilities. Prior work has used neural data to argue that both model-based and model-free learners implement a value comparison process at trial onset, but model-based learners assign more weight to forward-looking computations. Here using eye-tracking, we report evidence for a different interpretation of prior results: model-based subjects make their choices prior to trial onset. In contrast, model-free subjects tend to ignore model-based aspects of the task and instead seem to treat the decision problem as a simple comparison process between two differentially valued items, consistent with previous work on sequential-sampling models of decision making. These findings illustrate a problem with assuming that experimental subjects make their decisions at the same prescribed time. PMID:27511383
Batterham, Philip J; Bunce, David; Mackinnon, Andrew J; Christensen, Helen
2014-01-01
Very few studies have examined the association between intra-individual reaction time variability and subsequent mortality, and the ability of simple measures of variability to predict mortality has not been compared with that of more complex measures. In a prospective cohort study, 896 community-based Australian adults aged 70+ were interviewed up to four times from 1990 to 2002, with vital status assessed until June 2007. From this cohort, 770-790 participants were included in Cox proportional hazards regression models of survival. Vital status and time in study were used to conduct survival analyses. The mean reaction time and three measures of intra-individual reaction time variability were calculated separately across 20 trials of simple and choice reaction time tasks. Models were adjusted for a range of demographic, physical health and mental health measures. Greater intra-individual simple reaction time variability, as assessed by the raw standard deviation (raw SD), coefficient of variation (CV) or the intra-individual standard deviation (ISD), was strongly associated with an increased hazard of all-cause mortality in adjusted Cox regression models. The mean reaction time had no significant association with mortality. Intra-individual variability in simple reaction time therefore appears to have a robust association with mortality over 17 years. Health professionals such as neuropsychologists may benefit in their detection of neuropathology by supplementing neuropsychiatric testing with the straightforward process of testing simple reaction time and calculating the raw SD or CV.
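Two of the simple measures the study relies on, the raw SD and the CV, are easy to compute from one participant's trial data; a minimal sketch (the ISD, which additionally removes systematic practice and age trends using group data, is only noted in a comment):

```python
import numpy as np

def rt_variability(trials):
    """Simple dispersion measures for one participant's reaction times.

    trials: reaction times (ms) across repeated trials of one task.
    Returns mean RT, raw standard deviation, and coefficient of
    variation (raw SD / mean). The ISD is not shown: it first purges
    systematic trends (e.g. practice effects) estimated from the
    whole sample before taking the within-person SD.
    """
    trials = np.asarray(trials, dtype=float)
    mean_rt = trials.mean()
    raw_sd = trials.std(ddof=1)   # sample SD across the ~20 trials
    cv = raw_sd / mean_rt         # scale-free: adjusts for overall speed
    return mean_rt, raw_sd, cv
```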
Wide angle near-field optical probes by reverse tube etching.
Patanè, S; Cefalì, E; Arena, A; Gucciardi, P G; Allegrini, M
2006-04-01
We present a simple modification of the tube etching process for the fabrication of fiber probes for near-field optical microscopy. It increases the taper angle of the probe by a factor of two. The novelty is that the fiber is immersed in hydrofluoric acid and chemically etched in an upside-down geometry. The tip formation occurs inside the micrometer tube cavity formed by the polymeric jacket. By applying this approach, called reverse tube etching, to multimode fibers with 200/250 μm core/cladding diameter, we have fabricated tapered regions featuring high surface smoothness and average cone angles of approximately 30°. A simple model, based on the crucial role of gravity in removing the etching products, explains the tip formation process.
Modeling Hidden Circuits: An Authentic Research Experience in One Lab Period
NASA Astrophysics Data System (ADS)
Moore, J. Christopher; Rubbo, Louis J.
2016-10-01
Two wires exit a black box that has three exposed light bulbs connected together in an unknown configuration. The task for students is to determine the circuit configuration without opening the box. In the activity described in this paper, we navigate students through the process of making models, developing and conducting experiments that can support or falsify models, and confronting ways of distinguishing between two different models that make similar predictions. We also describe a twist that forces students to confront new phenomena, requiring revision of their mental model of electric circuits. This activity is designed to mirror the practice of science by actual scientists and expose students to the "messy" side of science, where our simple explanations of reality often require expansion and/or revision based on new evidence. The purpose of this paper is to present a simple classroom activity within the context of electric circuits that supports students as they learn to test hypotheses and refine and revise models based on evidence.
Modeling Spacecraft Fuel Slosh at Embry-Riddle Aeronautical University
NASA Technical Reports Server (NTRS)
Schlee, Keith L.
2007-01-01
As a NASA-sponsored GSRP Fellow, I worked with other researchers and analysts at Embry-Riddle Aeronautical University and NASA's ELV Division to investigate the effect of spacecraft fuel slosh. NASA's research into the effects of fuel slosh includes modeling the response in full-sized tanks using equipment such as the Spinning Slosh Test Rig (SSTR), located at Southwest Research Institute (SwRI). NASA and SwRI engineers analyze data taken from SSTR runs and hand-derive equations of motion to identify model parameters and characterize the sloshing motion. With guidance from my faculty advisor, Dr. Sathya Gangadharan, and NASA flight controls analysts James Sudermann and Charles Walker, I set out to automate this parameter identification process by building a simple physical experimental setup to model free surface slosh in a spherical tank with a simple pendulum analog. This setup was then modeled using Simulink and SimMechanics. The Simulink Parameter Estimation Tool was then used to identify the model parameters.
Ono, Daiki; Bamba, Takeshi; Oku, Yuichi; Yonetani, Tsutomu; Fukusaki, Eiichiro
2011-09-01
In this study, we constructed prediction models by metabolic fingerprinting of fresh green tea leaves, using Fourier transform near-infrared (FT-NIR) spectroscopy and partial least squares (PLS) regression analysis, to objectively optimize the steaming process conditions in green tea manufacture. The steaming process is the most important step in manufacturing high-quality green tea products, yet the steamer parameters are currently set subjectively by the manufacturer, so a simple and robust system for setting them objectively is needed. We focused on FT-NIR spectroscopy because of its simple operation, quick measurement, and low running costs. After removal of noise in the spectral data by principal component analysis (PCA), PLS regression analysis was performed using the spectral information as independent variables and the steaming parameters set by experienced manufacturers as dependent variables. The prediction models were constructed with satisfactory accuracy. Moreover, a demonstration experiment suggested that the green tea steaming process parameters could also be predicted at a larger manufacturing scale. This technique can objectively optimize the complicated green tea steaming process, is suitable for practical use, and should contribute to improved quality and productivity in green tea manufacture.
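The fingerprinting pipeline can be sketched with principal-component regression standing in for the paper's PCA-denoising + PLS step (PLS itself is omitted for brevity; all shapes and values here are hypothetical):

```python
import numpy as np

def pcr_fit(spectra, y, n_components):
    """Principal-component regression as a simplified stand-in for the
    paper's PCA-denoising + PLS pipeline.

    spectra: (n_samples, n_wavelengths) NIR absorbances
    y:       (n_samples,) steaming parameter set by an expert
    Returns a predict(new_spectra) function.
    """
    mu = spectra.mean(axis=0)
    Xc = spectra - mu
    # PCA via SVD; keeping only leading components suppresses spectral noise
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                 # loadings
    scores = Xc @ V
    coef, *_ = np.linalg.lstsq(
        np.column_stack([np.ones(len(y)), scores]), y, rcond=None)

    def predict(new_spectra):
        t = (np.asarray(new_spectra) - mu) @ V
        return coef[0] + t @ coef[1:]
    return predict
```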
Tree-Structured Digital Organisms Model
NASA Astrophysics Data System (ADS)
Suzuki, Teruhiko; Nobesawa, Shiho; Tahara, Ikuo
Tierra and Avida are well-known models of digital organisms that describe a life process as a sequence of computation codes. A linear sequence, though very simple for a computer-based model, is not the only way to describe a digital organism. We therefore propose a new digital organism model based on a tree structure, similar in spirit to genetic programming. In our model, a life process is a combination of various functions, much as life in the real world is. This allows the model to describe the hierarchical structure of life easily and to simulate evolutionary computation through the mutual interaction of functions. We verified by simulation that our model can be regarded as a digital organism model according to its definitions, and it even succeeded in creating species such as viruses and parasites.
Mitigating stimulated scattering processes in gas-filled Hohlraums via external magnetic fields
NASA Astrophysics Data System (ADS)
Gong, Tao; Zheng, Jian; Li, Zhichao; Ding, Yongkun; Yang, Dong; Hu, Guangyue; Zhao, Bin
2015-09-01
A simple model, based on energy and pressure equilibrium, is proposed to deal with the effect of external magnetic fields on the plasma parameters inside the laser path; it shows that the electron temperature can be significantly enhanced as the external magnetic field strengthens. Combining this model with a 1D three-wave coupling code, we study the effect of external magnetic fields on the reflectivities of stimulated scattering processes. The results indicate that a magnetic field of tens of tesla can decrease the reflectivities of stimulated scattering processes by several orders of magnitude.
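One standard ingredient of such models is that fields of tens of tesla are enough to magnetize the electrons, suppressing cross-field heat conduction so the heated channel retains energy and T_e rises. A rough numerical check, using the Braginskii-type suppression factor and an assumed (not paper-derived) collision time:

```python
# Electron magnetization estimate; the collision time is illustrative.
E_CHARGE = 1.602e-19    # C
M_ELECTRON = 9.109e-31  # kg

def hall_parameter(B_tesla, tau_e_seconds):
    """Hall parameter omega_ce * tau_e: electrons are magnetized once
    this exceeds ~1."""
    omega_ce = E_CHARGE * B_tesla / M_ELECTRON  # gyrofrequency, rad/s
    return omega_ce * tau_e_seconds

def conduction_suppression(B_tesla, tau_e_seconds):
    """Braginskii-type cross-field heat-conduction suppression,
    kappa_perp / kappa_parallel ~ 1 / (1 + (omega_ce * tau_e)**2)."""
    x = hall_parameter(B_tesla, tau_e_seconds)
    return 1.0 / (1.0 + x * x)
```

For B = 30 T and an assumed picosecond collision time, the Hall parameter is about 5, and cross-field conduction is suppressed roughly thirtyfold.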
Three-dimensional hair model by means of particles using Blender
NASA Astrophysics Data System (ADS)
Alvarez-Cedillo, Jesús Antonio; Almanza-Nieto, Roberto; Herrera-Lozada, Juan Carlos
2010-09-01
The simulation and modeling of human hair is a process of very high computational complexity, owing to the large number of factors that must be calculated to give a realistic appearance. Generally, the method used in the film industry to simulate hair is based on particle graphics. In this paper we present a simple approximation of how to model human hair using particles in Blender.
Attentional gating models of object substitution masking.
Põder, Endel
2013-11-01
Di Lollo, Enns, and Rensink (2000) proposed the computational model of object substitution (CMOS) to explain their experimental results with sparse visual maskers. This model is supposedly based on reentrant hypothesis testing in the visual system, and the modeled experiments are believed to demonstrate these reentrant processes in human vision. In this study, I analyze the main assumptions of this model. I argue that CMOS is a version of the attentional gating model and that its relationship with reentrant processing is rather illusory. The fit of this model to the data indicates that reentrant hypothesis testing is not necessary to explain object substitution masking (OSM). Further, the original CMOS cannot predict some important aspects of the experimental data. I test two new models incorporating an unselective processing (divided attention) stage; these models are more consistent with data from OSM experiments. My modeling shows that the apparent complexity of OSM can be reduced to a few simple and well-known mechanisms of perception and memory.
Introducing Multisensor Satellite Radiance-Based Evaluation for Regional Earth System Modeling
NASA Technical Reports Server (NTRS)
Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.
2014-01-01
Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using satellite direct measurements with instrumental simulators should be addressed especially for modeling community members lacking a solid background of radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for aerosol-cloud profiling simultaneously. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including the infrared-microwave-based cloud types and lidar/radar-based profile classifications.
SIGKit: Software for Introductory Geophysics Toolkit
NASA Astrophysics Data System (ADS)
Kruse, S.; Bank, C. G.; Esmaeili, S.; Jazayeri, S.; Liu, S.; Stoikopoulos, N.
2017-12-01
The Software for Introductory Geophysics Toolkit (SIGKit) affords students the opportunity to create model data and perform simple processing of field data for various geophysical methods. SIGkit provides a graphical user interface built with the MATLAB programming language, but can run even without a MATLAB installation. At this time SIGkit allows students to pick first arrivals and match a two-layer model to seismic refraction data; grid total-field magnetic data, extract a profile, and compare this to a synthetic profile; and perform simple processing steps (subtraction of a mean trace, hyperbola fit) to ground-penetrating radar data. We also have preliminary tools for gravity, resistivity, and EM data representation and analysis. SIGkit is being built by students for students, and the intent of the toolkit is to provide an intuitive interface for simple data analysis and understanding of the methods, and act as an entrance to more sophisticated software. The toolkit has been used in introductory courses as well as field courses. First reactions from students are positive. Think-aloud observations of students using the toolkit have helped identify problems and helped shape it. We are planning to compare the learning outcomes of students who have used the toolkit in a field course to students in a previous course to test its effectiveness.
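The two-layer refraction exercise described above rests on a simple forward model; a sketch of the first-arrival travel-time curves students compare against their picked arrivals (parameter values are illustrative, and this is not SIGkit's own code):

```python
import numpy as np

def two_layer_traveltimes(x, v1, v2, h):
    """First-arrival travel times for a two-layer refraction model.

    x  : source-receiver offsets (m)
    v1 : upper-layer velocity (m/s), v2 > v1 : lower-layer velocity
    h  : depth to the interface (m)
    Returns min(direct wave, head wave) travel time at each offset.
    """
    x = np.asarray(x, dtype=float)
    t_direct = x / v1
    # head-wave intercept time from the critical-refraction geometry
    intercept = 2.0 * h * np.sqrt(v2**2 - v1**2) / (v1 * v2)
    t_head = x / v2 + intercept
    return np.minimum(t_direct, t_head)
```

Matching the slopes (1/v1, 1/v2) and the intercept of picked arrivals to these two lines recovers the layer velocities and interface depth.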
Modelling melting in crustal environments, with links to natural systems in the Nepal Himalayas
NASA Astrophysics Data System (ADS)
Isherwood, C.; Holland, T.; Bickle, M.; Harris, N.
2003-04-01
Melt bodies of broadly granitic character occur frequently in mountain belts such as the Himalayan chain, which exposes leucogranitic intrusions along its entire length (e.g. Le Fort, 1975). The genesis and disposition of these bodies have considerable implications for the development of tectonic evolution models for such mountain belts. However, melting processes and melt migration behaviour are influenced by many factors (Hess, 1995; Wolf & McMillan, 1995) which are as yet poorly understood. Recent improvements in internally consistent thermodynamic datasets have allowed the modelling of simple granitic melt systems (Holland & Powell, 2001) at pressures below 10 kbar, of which Himalayan leucogranites provide a good natural example. Model calculations such as these have been extended to include an asymmetrical melt-mixing model based on the Van Laar approach, which uses volumes (or pseudovolumes) for the different end-members in a mixture to control the asymmetry of non-ideal mixing. This asymmetrical formalism has been used in conjunction with several different entropy-of-mixing assumptions in an attempt to find the closest fit to available experimental data for melting in simple binary and ternary haplogranite systems. The extracted mixing data are extended to more complex systems and allow the construction of phase relations in NKASH necessary to model simple haplogranitic melts involving albite, K-feldspar, quartz, sillimanite and H2O. The models have been applied to real bulk composition data from Himalayan leucogranites.
A Two-Stage Process Model of Sensory Discrimination: An Alternative to Drift-Diffusion
Sun, Peng; Landy, Michael S.
2016-11-02
Discrimination of the direction of motion of a noisy stimulus is an example of sensory discrimination under uncertainty. For stimuli that are extended in time, reaction time is quicker for larger signal values (e.g., discrimination of opposite directions of motion compared with neighboring orientations) and larger signal strength (e.g., stimuli with higher contrast or motion coherence, that is, lower noise). The standard model of neural responses (e.g., in lateral intraparietal cortex) and reaction time for discrimination is drift-diffusion. This model makes two clear predictions. (1) The effects of signal strength and value on reaction time should interact multiplicatively because the diffusion process depends on the signal-to-noise ratio. (2) If the diffusion process is interrupted, as in a cued-response task, the time to decision after the cue should be independent of the strength of accumulated sensory evidence. In two experiments with human participants, we show that neither prediction holds. A simple alternative model is developed that is consistent with the results. In this estimate-then-decide model, evidence is accumulated until estimation precision reaches a threshold value. Then, a decision is made with a duration that depends on the signal-to-noise ratio achieved by the first stage. SIGNIFICANCE STATEMENT Sensory decision-making under uncertainty is usually modeled as the slow accumulation of noisy sensory evidence until a threshold amount of evidence supporting one of the possible decision outcomes is reached. Furthermore, it has been suggested that this accumulation process is reflected in neural responses, e.g., in lateral intraparietal cortex. We derive two behavioral predictions of this model and show that neither prediction holds. We introduce a simple alternative model in which evidence is accumulated until a sufficiently precise estimate of the stimulus is achieved, and then that estimate is used to guide the discrimination decision. This model is consistent with the behavioral data. PMID:27807167
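The standard drift-diffusion account the paper argues against is easy to simulate; a minimal Monte-Carlo sketch showing that a larger drift-to-noise ratio shortens the mean decision time (all parameters are arbitrary illustration values, not fits to the paper's data):

```python
import numpy as np

def mean_decision_time(drift, noise, threshold=1.0, dt=1e-3,
                       n_trials=4000, seed=0):
    """Monte-Carlo drift-diffusion: accumulate noisy evidence until
    |x| crosses +/-threshold; return the mean decision time (s)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_trials)
    t = np.zeros(n_trials)
    active = np.ones(n_trials, dtype=bool)
    while active.any():
        n_act = int(active.sum())
        # Euler-Maruyama step for the still-undecided trials
        x[active] += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n_act)
        t[active] += dt
        active &= np.abs(x) < threshold
    return t.mean()
```

For symmetric bounds the theoretical mean is (threshold/drift)·tanh(drift·threshold/noise²), so drift 2 with unit noise gives about 0.48 s and drift 4 about 0.25 s, matching the simulation.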
Using the TSAR electromagnetic modeling system
NASA Astrophysics Data System (ADS)
Pennock, S. T.; Laguna, G. W.
1993-09-01
A new user, upon receipt of the TSAR EM modeling system, may be overwhelmed by the number of software packages to learn and the number of manuals associated with those packages. This document describes the creation of a simple TSAR model, beginning with an MGED solid and continuing through final results from TSAR. It is not intended to be a complete description of all the parts of the TSAR package. Rather, it is intended simply to touch on all the steps in the modeling process and to take a new user through the system from start to finish. There are six basic parts to the TSAR package. The first, MGED, is part of the BRL-CAD package and is used to create a solid model. The second part, ANASTASIA, is the program used to sample the solid model and create a finite-difference mesh. The third program, IMAGE, lets the user view the mesh itself and verify its accuracy. If everything about the mesh is correct, the process continues to the fourth step, SETUP-TSAR, which creates the parameter files for compiling TSAR and the input file for running a particular simulation. The fifth step is actually running TSAR, the field modeling program. Finally, the output from TSAR is placed into SIG, B2RAS or another program for post-processing and plotting. Each of these steps will be described below. The best way to learn to use the TSAR software is to actually create and run a simple test problem. As an example of how to use the TSAR package, let's create a sphere with a rectangular internal cavity, with conical and cylindrical penetrations connecting the outside to the inside, and find the electric field inside the cavity when the object is exposed to a Gaussian plane wave. We will begin with the solid modeling software, MGED, a part of the BRL-CAD modeling release.
A Simple and Accurate Rate-Driven Infiltration Model
NASA Astrophysics Data System (ADS)
Cui, G.; Zhu, J.
2017-12-01
In this study, we develop a novel Rate-Driven Infiltration Model (RDIMOD) for simulating infiltration into soils. Unlike traditional methods, RDIMOD avoids numerically solving the highly non-linear Richards equation or relying on empirical parameters. RDIMOD employs the infiltration rate as model input and simulates the one-dimensional infiltration process by solving an ordinary differential equation. The model can simulate the evolution of the wetting front, infiltration rate, and cumulative infiltration on any surface slope, including the vertical and horizontal directions. Compared to results from the Richards equation for both vertical and horizontal infiltration, RDIMOD simply and accurately predicts infiltration processes for any type of soil and soil hydraulic model without numerical difficulty. Given its accuracy, capability, and computational effectiveness and stability, RDIMOD can be used in large-scale hydrologic and land-atmosphere modeling.
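The rate-driven idea can be illustrated with a piston-flow sketch: with the infiltration rate prescribed as input, mass balance gives a single ODE for the wetting-front depth. This is a simplified stand-in, not RDIMOD itself; the function names and values are illustrative:

```python
import numpy as np

def wetting_front_depth(rate_fn, d_theta, t_end, dt=0.01):
    """Sharp-front (piston-flow) infiltration sketch.

    With the infiltration rate i(t) prescribed as model input, mass
    balance behind a piston-like wetting front gives
        dZ/dt = i(t) / (theta_s - theta_i),
    integrated here with forward Euler.

    rate_fn : callable, infiltration rate i(t) (cm/h)
    d_theta : moisture deficit theta_s - theta_i behind the front
    Returns arrays (t, Z) of time (h) and front depth (cm).
    """
    n = int(round(t_end / dt))
    t = np.linspace(0.0, t_end, n + 1)
    Z = np.zeros(n + 1)
    for k in range(n):
        Z[k + 1] = Z[k] + rate_fn(t[k]) / d_theta * dt
    return t, Z
```

Cumulative infiltration follows as Z times the moisture deficit, so a constant rate of 1 cm/h with a deficit of 0.25 advances the front 8 cm in 2 h.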
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J.; Moon, T.J.; Howell, J.R.
This paper presents an analysis of the heat transfer occurring during an in-situ curing process for which infrared energy is provided on the surface of polymer composite during winding. The material system is Hercules prepreg AS4/3501-6. Thermoset composites have an exothermic chemical reaction during the curing process. An Eulerian thermochemical model is developed for the heat transfer analysis of helical winding. The model incorporates heat generation due to the chemical reaction. Several assumptions are made leading to a two-dimensional, thermochemical model. For simplicity, 360° heating around the mandrel is considered. In order to generate the appropriate process windows, the developed heat transfer model is combined with a simple winding time model. The process windows allow for a proper selection of process variables such as infrared energy input and winding velocity to give a desired end-product state. Steady-state temperatures are found for each combination of the process variables. A regression analysis is carried out to relate the process variables to the resulting steady-state temperatures. Using regression equations, process windows for a wide range of cylinder diameters are found. A general procedure to find process windows for Hercules AS4/3501-6 prepreg tape is coded in a FORTRAN program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellors, R J
The Comprehensive Nuclear Test Ban Treaty (CTBT) includes provisions for an on-site inspection (OSI), which allows the use of specific techniques to detect underground anomalies including cavities and rubble zones. One permitted technique is active seismic surveys such as seismic refraction or reflection. The purpose of this report is to conduct some simple modeling to evaluate the potential use of seismic reflection in detecting cavities and to test the use of open-source software in modeling possible scenarios. It should be noted that OSI inspections are conducted under specific constraints regarding duration and logistics. These constraints are likely to significantly impact active seismic surveying, as a seismic survey typically requires considerable equipment, effort, and expertise. For the purposes of this study, which is a first-order feasibility study, these issues will not be considered. This report provides a brief description of the seismic reflection method along with some commonly used software packages. This is followed by an outline of a simple processing stream based on a synthetic model, along with results from a set of models representing underground cavities. A set of scripts used to generate the models are presented in an appendix. We do not consider detection of underground facilities in this work, and the geologic setting used in these tests is an extremely simple one.
NASA Astrophysics Data System (ADS)
Li, Dongna; Li, Xudong; Dai, Jianfeng
2018-06-01
In this paper, two kinds of transient models, a viscoelastic model and a linear elastic model, are established to analyze the curing deformation of thermosetting resin composites and are solved with COMSOL Multiphysics software. Both models consider the complicated coupling between physical and chemical changes during the curing process of the composites and the time-variant character of the material performance parameters. The two models are then applied to a three-dimensional composite laminate structure, and a simple and convenient local-coordinate-system method is used to calculate the development of residual stresses, curing shrinkage and curing deformation for the laminate. The results show that the temperature, degree of curing (DOC) and residual stresses during the curing process are consistent with studies in the literature, so the curing shrinkage and curing deformation obtained on this basis have referential value. Comparison of the two numerical results indicates that the residual stress and deformation calculated by the viscoelastic model are closer to the reference values than those of the linear elastic model.
Key issues, observations and goals for coupled, thermodynamic/geodynamic models
NASA Astrophysics Data System (ADS)
Kelemen, P. B.
2017-12-01
In coupled, thermodynamic/geodynamic models, focus should be on processes involving major rock-forming minerals and simple fluid compositions, and on parameters with first-order effects on likely dynamic processes: In a given setting, will fluid mass increase or decrease? How about solid density? Will flow become localized or diffuse? Will rocks flow or break? How do reactions affect global processes such as formation and evolution of the plates, plate boundary deformation, metamorphism, weathering, climate and geochemical cycles? Important reaction feedbacks in geodynamics include formation of dissolution channels and armored channels; divergence of flow and formation of permeability barriers due to crystallization in pore space; localization of fluid transport and ductile deformation in shear zones; reaction-driven cracking; mechanical channels in granular media; shear heating; density instabilities; viscous fluid-weakening; fluid-induced frictional failure; and hydraulic fracture. Density instabilities often lead to melting, and there is an interesting dialectic between porous flow and diapirs. The best models provide a simple but comprehensive framework that can account for the general features of many or most of these phenomena. Ideally, calculations based on thermodynamic data and rheological observations alone should delineate the regimes in which each of these processes will occur and the boundaries between them. These often start with "toy models" and lab experiments on analog systems, with highly approximate scaling to simplified geological conditions and materials. Geologic observations provide the best constraints where 'frozen' fluid transport pathways or deformation processes are preserved. Inferences about completed processes based on fluid or solid products alone are more challenging and less unique. Not all important processes have good examples in outcrop, so directed searches for specific phenomena may fail.
A highly generalized approach provides a way forward, allowing serendipitous discoveries of iconic examples wherever they are best developed. These then constrain and inspire the overall "phase diagram" of geodynamic processes.
ERIC Educational Resources Information Center
Lu, Yonggang; Henning, Kevin S. S.
2013-01-01
Spurred by recent writings regarding statistical pragmatism, we propose a simple, practical approach to introducing students to a new style of statistical thinking that models nature through the lens of data-generating processes, not populations.
Competition of simple and complex adoption on interdependent networks
NASA Astrophysics Data System (ADS)
Czaplicka, Agnieszka; Toral, Raul; San Miguel, Maxi
2016-12-01
We consider the competition of two mechanisms for adoption processes: so-called complex threshold dynamics and a simple susceptible-infected-susceptible (SIS) model. Separately, these mechanisms lead, respectively, to first-order and continuous transitions between nonadoption and adoption phases. We consider two interconnected layers. While all nodes on the first layer follow the complex adoption process, all nodes on the second layer follow the simple adoption process. Coupling between the two adoption processes occurs as a result of the inclusion of additional interconnections between layers. We find that both the transition points and the nature of the transitions are modified in the coupled dynamics. In the complex adoption layer, the critical threshold required for extension of adoption increases with interlayer connectivity, whereas in an isolated single network it would decrease with average connectivity. In addition, the transition can become continuous depending on the detailed interlayer and intralayer connectivities. In the SIS layer, any interlayer connectivity leads to the extension of the adopter phase. Moreover, a new transition appears as a sudden drop in the fraction of adopters in the SIS layer. The main numerical findings are described by a mean-field-type analytical approach developed for the threshold-SIS coupled system.
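A minimal sketch of such coupled dynamics can be simulated directly. Everything below (graph sizes, rates, the one-to-one interlayer coupling, the specific threshold rule) is an illustrative assumption, not the paper's actual model:

```python
import random

random.seed(1)

def random_graph(n, p):
    """Erdos-Renyi G(n, p) as adjacency sets."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

n = 200
layer_A = random_graph(n, 0.05)  # complex (threshold) adoption layer
layer_B = random_graph(n, 0.05)  # simple (SIS) adoption layer
inter = {i for i in range(n) if random.random() < 0.2}  # interlayer links

theta, beta, mu = 0.3, 0.1, 0.05  # adoption threshold, infection, recovery
A = [random.random() < 0.05 for _ in range(n)]  # adopters on layer A
B = [random.random() < 0.05 for _ in range(n)]  # adopters on layer B

for _ in range(200):
    newA, newB = A[:], B[:]
    for i in range(n):
        # threshold rule: adopt permanently once the adopting fraction of
        # neighbours (including the interlayer twin, if linked) >= theta
        nbrs = [A[j] for j in layer_A[i]] + ([B[i]] if i in inter else [])
        if nbrs and not A[i] and sum(nbrs) / len(nbrs) >= theta:
            newA[i] = True
        # SIS rule: recover with prob mu, otherwise catch adoption from
        # each adopting neighbour independently with prob beta
        if B[i]:
            if random.random() < mu:
                newB[i] = False
        else:
            k = sum(B[j] for j in layer_B[i]) + (1 if i in inter and A[i] else 0)
            if random.random() < 1.0 - (1.0 - beta) ** k:
                newB[i] = True
    A, B = newA, newB

rho_A, rho_B = sum(A) / n, sum(B) / n  # final adopter densities
```

Sweeping the interlayer link density in such a sketch is the kind of experiment the abstract describes.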
The dynamic nature of conflict in Wikipedia
NASA Astrophysics Data System (ADS)
Gandica, Y.; Sampaio dos Aidos, F.; Carvalho, J.
2014-10-01
The voluntary process of Wikipedia editing provides an environment in which the outcome is clearly a collective product of interactions involving a large number of people. We propose a simple agent-based model, developed from real data, to reproduce the collaborative editing process of Wikipedia. With a small number of simple ingredients, our model mimics several interesting features of real human behaviour, namely in the context of edit wars. We show that the level of conflict is determined by a tolerance parameter, which measures the editors' capability to accept different opinions and to change their own. We propose to measure conflict with a parameter based on mutual reverts, which increases only in contentious situations. Using this parameter, we find a heavy-tailed distribution of inter-peace periods. The effects of wiki-robots on conflict levels and on editing patterns are also studied. Our findings are compared with parameters used previously to measure conflict in edit wars.
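The qualitative mechanism — a tolerance parameter deciding whether an editor overwrites the article or moves toward its view — can be sketched with a toy opinion model. All names and values below are hypothetical illustrations, not the authors' calibrated model:

```python
import random

random.seed(0)

eps = 0.2          # tolerance: how large an opinion gap an editor accepts
mu = 0.3           # convergence rate toward the article's view
editors = [random.random() for _ in range(50)]  # editor opinions in [0, 1]
article = 0.5      # current "opinion" expressed by the article
conflicts = 0      # count of contentious overwrites (revert-like events)
steps = 5000

for _ in range(steps):
    i = random.randrange(len(editors))
    if abs(editors[i] - article) > eps:
        article = editors[i]      # intolerant editor overwrites the article
        conflicts += 1
    else:
        # tolerant editor shifts their own opinion toward the article
        editors[i] += mu * (article - editors[i])

rate = conflicts / steps          # fraction of edits that were contentious
```

Lowering `eps` in this sketch raises the conflict rate, mirroring the role the abstract assigns to tolerance.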
Parallel constraint satisfaction in memory-based decisions.
Glöckner, Andreas; Hodges, Sara D
2011-01-01
Three studies sought to investigate decision strategies in memory-based decisions and to test the predictions of the parallel constraint satisfaction (PCS) model for decision making (Glöckner & Betsch, 2008). Time pressure was manipulated and the model was compared against simple heuristics (take the best and equal weight) and a weighted additive strategy. From PCS we predicted that fast intuitive decision making is based on compensatory information integration and that decision time increases and confidence decreases with increasing inconsistency in the decision task. In line with these predictions we observed a predominant usage of compensatory strategies under all time-pressure conditions and even with decision times as short as 1.7 s. For a substantial number of participants, choices and decision times were best explained by PCS, but there was also evidence for use of simple heuristics. The time-pressure manipulation did not significantly affect decision strategies. Overall, the results highlight intuitive, automatic processes in decision making and support the idea that human information-processing capabilities are less severely bounded than often assumed.
Gravitational decoupling and the Picard-Lefschetz approach
NASA Astrophysics Data System (ADS)
Brown, Jon; Cole, Alex; Shiu, Gary; Cottrell, William
2018-01-01
In this work, we consider tunneling between nonmetastable states in gravitational theories. Such processes arise in various contexts, e.g., in inflationary scenarios where the inflaton potential involves multiple fields or multiple branches. They are also relevant for bubble wall nucleation in some cosmological settings. However, we show that the transition amplitudes computed using the Euclidean method generally do not approach the corresponding field theory limit as Mp→∞ . This implies that in the Euclidean framework, there is no systematic expansion in powers of GN for such processes. Such considerations also carry over directly to no-boundary scenarios involving Hawking-Turok instantons. In this note, we illustrate this failure of decoupling in the Euclidean approach with a simple model of axion monodromy and then argue that the situation can be remedied with a Lorentzian prescription such as the Picard-Lefschetz theory. As a proof of concept, we illustrate with a simple model how tunneling transition amplitudes can be calculated using the Picard-Lefschetz approach.
Image Discrimination Models for Object Detection in Natural Backgrounds
NASA Technical Reports Server (NTRS)
Ahumada, A. J., Jr.
2000-01-01
This paper reviews work accomplished and in progress at NASA Ames relating to visual target detection. The focus is on image discrimination models, starting with Watson's pioneering development of a simple spatial model and progressing through this model's descendants and extensions. The application of image discrimination models to target detection will be described and results reviewed for Rohaly's vehicle target data and the Search 2 data. The paper concludes with a description of work we have done to model the process by which observers learn target templates and methods for elucidating those templates.
Formal verification of automated teller machine systems using SPIN
NASA Astrophysics Data System (ADS)
Iqbal, Ikhwan Mohammad; Adzkiya, Dieky; Mukhlash, Imam
2017-08-01
Formal verification is a technique for ensuring the correctness of systems. This work focuses on verifying a model of an Automated Teller Machine (ATM) system against some specifications. We construct the model as a state transition diagram that is suitable for verification. The specifications are expressed as Linear Temporal Logic (LTL) formulas. We use the Simple Promela Interpreter (SPIN) model checker to check whether the model satisfies the formulas. This model checker accepts models written in the Process Meta Language (PROMELA), with specifications given as LTL formulas.
Pyroelectric effect in triglycine sulphate single crystals - Differential measurement method
NASA Astrophysics Data System (ADS)
Trybus, M.
2018-06-01
A simple mathematical model of the pyroelectric phenomenon was used to explain the electric response of TGS (triglycine sulphate) samples during linear heating in the ferroelectric and paraelectric phases. The model was verified experimentally: TGS single crystals were grown and four-electrode samples were fabricated. Differential measurements of the pyroelectric response of two different regions of the samples were performed, and the results were compared with data obtained from the model. Experimental results are in good agreement with model calculations.
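The pyroelectric response under linear heating follows i = p(T) · A · dT/dt, vanishing once the sample crosses into the paraelectric phase. A schematic sketch, with an assumed (not TGS-calibrated) pyroelectric coefficient and geometry:

```python
Tc = 49.0     # Curie temperature of TGS, roughly 49 C
area = 1e-4   # electrode area in m^2 (assumed)
dTdt = 0.1    # linear heating rate in K/s (assumed)
T0 = 20.0     # starting temperature, C

def p_coeff(T):
    # crude, assumed pyroelectric coefficient: nonzero only below Tc,
    # vanishing in the paraelectric phase (units C m^-2 K^-1)
    return 3e-4 * max(Tc - T, 0.0) ** 0.5 if T < Tc else 0.0

# i(t) = p(T) * area * dT/dt under linear heating T(t) = T0 + dTdt * t
times = list(range(400))
current = [p_coeff(T0 + dTdt * t) * area * dTdt for t in times]
```

The current drops to zero once T(t) exceeds Tc, the qualitative signature the differential measurement is designed to resolve.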
DNA denaturation through a model of the partition points on a one-dimensional lattice
NASA Astrophysics Data System (ADS)
Mejdani, R.; Huseini, H.
1994-08-01
We have shown that, by using a model of a partition-point gas on a one-dimensional lattice, we can study not only the saturation curves obtained before for enzyme kinetics, but also the denaturation process, i.e. the breaking of the hydrogen bonds connecting the two strands when DNA is heated. Being very simple and mathematically transparent, this model can be advantageous for pedagogic purposes and for other theoretical investigations in chemistry or modern biology.
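The melting behaviour can be illustrated with a generic two-state bond model — a textbook simplification for pedagogic purposes, not the partition-point lattice model itself; the thermodynamic values are illustrative:

```python
import math

R = 8.314      # gas constant, J mol^-1 K^-1
dH = 30000.0   # enthalpy of opening one bond pair, J/mol (illustrative)
dS = 90.0      # entropy of opening, J mol^-1 K^-1  ->  Tm = dH/dS ~ 333 K

def open_fraction(T):
    # two-state equilibrium closed <-> open with K = exp(-(dH - T*dS)/(R*T));
    # the open fraction is K / (1 + K)
    K = math.exp(-(dH - T * dS) / (R * T))
    return K / (1.0 + K)

Tm = dH / dS   # melting temperature: exactly half the bonds are open here
```

Sweeping T through Tm traces the sigmoidal melting curve characteristic of denaturation.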
Unimolecular decomposition reactions at low-pressure: A comparison of competitive methods
NASA Technical Reports Server (NTRS)
Adams, G. F.
1980-01-01
The lack of a simple rate-coefficient expression describing the pressure and temperature dependence hampers chemical modeling of flame systems. Recently developed simplified models of unimolecular processes include the calculation of rate constants for thermal unimolecular reactions and recombinations at the low-pressure limit, at the high-pressure limit, and in the intermediate fall-off region. A comparison between two different applications of Troe's simplified model, and a comparison between the simplified model and classic RRKM theory, are described.
Maximum efficiency of state-space models of nanoscale energy conversion devices
NASA Astrophysics Data System (ADS)
Einax, Mario; Nitzan, Abraham
2016-07-01
The performance of nanoscale energy conversion devices is studied in the framework of state-space models, where a device is described by a graph comprising states and transitions between them, represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate these general results with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of this type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.
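The steady-state flux of a simple cyclic graph — the quantity whose input/output ratio underlies the efficiency discussion above — can be found by iterating the master equation. The rates below are arbitrary illustrations, not the paper's device parameters; the point is that in a simple cycle every link carries the same steady-state flux:

```python
# three-state unicyclic network; in steady state the flux is identical on
# every link, so the output/input flux ratio of a simple cycle equals one
f = [2.0, 1.5, 1.0]   # forward rates i -> i+1 (mod 3), illustrative
b = [0.5, 0.5, 0.5]   # backward rates i+1 -> i
p = [1.0 / 3.0] * 3   # state occupation probabilities
dt = 1e-3

for _ in range(200000):  # Euler-iterate the master equation to steady state
    J = [f[i] * p[i] - b[i] * p[(i + 1) % 3] for i in range(3)]
    p = [p[i] + dt * (J[(i - 1) % 3] - J[i]) for i in range(3)]
```

Adding an intersecting link would let probability bypass part of the cycle, making the link fluxes unequal; that is the structural effect behind the proved efficiency reduction.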
NASA Astrophysics Data System (ADS)
Allahverdi, Rouzbeh; Dev, P. S. Bhupal; Dutta, Bhaskar
2018-04-01
We study a simple TeV-scale model of baryon number violation which explains the observed proximity of the dark matter and baryon abundances. The model has constraints arising from both low- and high-energy processes and, in particular, predicts a sizable rate for neutron-antineutron (n-n̄) oscillation at low energy and a monojet signal at the LHC. We find an interesting complementarity among the constraints arising from the observed baryon asymmetry, the ratio of dark matter and baryon abundances, the n-n̄ oscillation lifetime and the LHC monojet signal. There are regions in the parameter space where the n-n̄ oscillation lifetime is found to be more constraining than the LHC constraints, which illustrates the importance of the next-generation n-n̄ oscillation experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, William
Over the 21 years of funding we have pursued several projects related to earthquakes, damage and nucleation. We developed simple models of earthquake faults, which we studied to understand Gutenberg-Richter scaling, foreshocks and aftershocks, and the effect of the spatial structure of the faults and its interaction with underlying self-organization and phase transitions. In addition, we studied the formation of amorphous solids via the glass transition. We also studied nucleation, with a particular concentration on transitions in systems with a spatial symmetry change, and investigated the nucleation process in models that mimic rock masses. We obtained the structure of the droplet in both homogeneous and heterogeneous nucleation. We also investigated the effect of defects or asperities on the nucleation of failure in simple models of earthquake faults.
NASA Astrophysics Data System (ADS)
Palit, Sourav; Chakrabarti, Sandip Kumar; Pal, Sujay; Basak, Tamal
Extra ionization by X-rays during solar flares affects VLF signal propagation through the D-region ionosphere. The ionization produced in the lower ionosphere by the X-ray spectra of solar flares is simulated with an efficient detector simulation program, GEANT4. The balance between ionization and loss processes, which returns the lower ionosphere to its undisturbed state, is handled with a simple chemical model consisting of four broad species of ion densities. Using the electron densities, the modified VLF signal amplitude is then computed with the LWPC code. The VLF signal along the NWC (Australia) to IERC/ICSP (India) propagation path is examined during an M-type and an X-type solar flare, and observed deviations are compared with simulated results. The agreement is found to be excellent.
NASA Astrophysics Data System (ADS)
Elliott, Thomas J.; Gu, Mile
2018-03-01
Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine.
Pattern recognition analysis and classification modeling of selenium-producing areas
Naftz, D.L.
1996-01-01
Established chemometric and geochemical techniques were applied to water-quality data from 23 National Irrigation Water Quality Program (NIWQP) study areas in the Western United States. These techniques were applied to the NIWQP data set to identify common geochemical processes responsible for the mobilization of selenium and to develop a classification model that uses major-ion concentrations to identify areas containing elevated selenium concentrations in water that could pose a hazard to waterfowl. Pattern recognition modeling of the simple-salt data computed with the SNORM geochemical program indicates three principal components that explain 95% of the total variance. A three-dimensional plot of PC 1, 2 and 3 scores shows three distinct clusters that correspond to distinct hydrochemical facies, denoted facies 1, 2 and 3. Facies 1 samples are distinguished by the absence of the CaCO3 simple salt and by elevated concentrations of the NaCl, CaSO4, MgSO4 and Na2SO4 simple salts relative to water samples in facies 2 and 3. Water samples in facies 2 are distinguished from facies 1 by the absence of the MgSO4 simple salt and the presence of the CaCO3 simple salt. Water samples in facies 3 are similar to those in facies 2, but lack both the MgSO4 and CaSO4 simple salts. Water samples in facies 1 have the largest selenium concentration (10 µg/L), compared with a median concentration of 2.0 µg/L and less than 1.0 µg/L for samples in facies 2 and 3. A classification model using the soft independent modeling by class analogy (SIMCA) algorithm was constructed with data from the NIWQP study areas. The classification model was successful in identifying water samples with selenium concentrations hazardous to some species of waterfowl from a test data set comprising 2,060 water samples from throughout Utah and Wyoming.
Application of chemometric and geochemical techniques during data synthesis analysis of multivariate environmental databases from other national-scale environmental programs such as the NIWQP could also provide useful insights for addressing 'real world' environmental problems.
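The principal-component step described above — a few components capturing most of the variance, with PC1-PC3 scores plotted to reveal clusters — can be sketched on synthetic data. The data, dimensions and seed below are placeholders, not the NIWQP data:

```python
import numpy as np

rng = np.random.default_rng(0)

# placeholder "simple salt" data: 23 sites x 5 correlated salt abundances
X = rng.normal(size=(23, 5)) @ rng.normal(size=(5, 5))
Xc = X - X.mean(axis=0)                  # mean-center each column

cov = np.cov(Xc, rowvar=False)
evals, evecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
order = np.argsort(evals)[::-1]          # sort components by variance
explained = evals[order] / evals.sum()   # variance fraction per component
scores = Xc @ evecs[:, order[:3]]        # PC1-PC3 scores for a 3-D plot
```

Clusters in the three-dimensional `scores` cloud play the role of the hydrochemical facies in the study.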
A psychological model of mental disorder.
Kinderman, Peter
2005-01-01
A coherent conceptualization of the role of psychological factors is of great importance in understanding mental disorder. Academic articles and professional reports alluding to psychological models of the etiology of mental disorder are becoming increasingly common, and there is evidence of a marked policy shift toward the provision of psychological therapies and interventions. This article discusses the relationship between biological, social, and psychological factors in the causation and treatment of mental disorder. It argues that simple biological reductionism is not scientifically justified, and also that the specific role of psychological processes within the biopsychosocial model requires further elaboration. The biopsychosocial model is usually interpreted as implying that biological, psychological, and social factors are co-equal partners in the etiology of mental disorder. The psychological model of mental disorder presented here suggests that disruption or dysfunction in psychological processes is a final common pathway in the development of mental disorder. These processes include, but are not limited to, cognitive processes. The model proposes that biological and social factors, together with a person's individual experiences, lead to mental disorder through their conjoint effects on those psychological processes. Implications for research, interventions, and policy are discussed.
ERIC Educational Resources Information Center
GLOVER, J.H.
The chief objective of this study of speed-skill acquisition was to find a mathematical model capable of simple graphic interpretation for industrial training and production scheduling at the shop-floor level. Studies of middle-skill development in machine and vehicle assembly, aircraft production, spoolmaking and the machining of parts confirmed…
Margaria, Tiziana; Kubczak, Christian; Steffen, Bernhard
2008-01-01
Background: With Bio-jETI, we introduce a service platform for interdisciplinary work on biological application domains and illustrate its use in a concrete application concerning statistical data processing in R and xcms for an LC/MS analysis of FAAH gene knockout. Methods: Bio-jETI uses the jABC environment for service-oriented modeling and design as a graphical process modeling tool, and the jETI service integration technology for remote tool execution. Conclusions: As a service definition and provisioning platform, Bio-jETI has the potential to become a core technology in interdisciplinary service orchestration and technology transfer. Domain experts, such as biologists not trained in computer science, directly define complex service orchestrations as process models and use efficient and complex bioinformatics tools in a simple and intuitive way. PMID:18460173
A model for the space shuttle main engine high pressure oxidizer turbopump shaft seal system
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.
1990-01-01
A simple static model is presented which solves for the flow properties of pressure, temperature, and mass flow in the Space Shuttle Main Engine High Pressure Oxidizer Turbopump shaft seal system. This system includes the primary and secondary turbine seals, the primary and secondary turbine drains, the helium purge seals and feed line, the primary oxygen drain, and the slinger/labyrinth oxygen seal pair. The model predicts the changes in flow variables that occur during and after failures of the various seals. Such information would be particularly useful in a post-flight situation, where processing of sensor information using this model could identify a particular seal that had experienced excessive wear. Most of the seals in the system are modeled using simple one-dimensional equations which can be applied to almost any seal, provided that the fluid is gaseous. A failure is modeled as an increase in the clearance between the shaft and the seal; thus, the model does not attempt to predict how the failure process actually occurs (e.g., wear, seal crack initiation). The results presented were obtained using a FORTRAN implementation of the model running on a VAX computer. The solution for the seal system properties is obtained iteratively; however, a further simplified implementation (which does not include the slinger/labyrinth combination) was also developed and provides fast and reasonable results for most engine operating conditions. Results from the model compare favorably with the limited redline data available.
Dynamics of non-Markovian exclusion processes
NASA Astrophysics Data System (ADS)
Khoromskaia, Diana; Harris, Rosemary J.; Grosskinsky, Stefan
2014-12-01
Driven diffusive systems are often used as simple discrete models of collective transport phenomena in physics, biology or social sciences. Restricting attention to one-dimensional geometries, the asymmetric simple exclusion process (ASEP) plays a paradigmatic role to describe noise-activated driven motion of entities subject to an excluded volume interaction and many variants have been studied in recent years. While in the standard ASEP the noise is Poissonian and the process is therefore Markovian, in many applications the statistics of the activating noise has a non-standard distribution with possible memory effects resulting from internal degrees of freedom or external sources. This leads to temporal correlations and can significantly affect the shape of the current-density relation as has been studied recently for a number of scenarios. In this paper we report a general framework to derive the fundamental diagram of ASEPs driven by non-Poissonian noise by using effectively only two simple quantities, viz., the mean residual lifetime of the jump distribution and a suitably defined temporal correlation length. We corroborate our results by detailed numerical studies for various noise statistics under periodic boundary conditions and discuss how our approach can be applied to more general driven diffusive systems.
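For the Markovian (Poissonian) baseline the abstract starts from, the fundamental diagram J = ρ(1 − ρ) is easy to reproduce by direct simulation of the totally asymmetric exclusion process on a ring; sizes and step counts below are arbitrary:

```python
import random

random.seed(42)

L, rho = 200, 0.3
N = int(L * rho)
occ = [1] * N + [0] * (L - N)
random.shuffle(occ)        # uniform random start (stationary on a ring)

steps = 200000
hops = 0
for _ in range(steps):     # random-sequential update: pick a site, try a hop
    i = random.randrange(L)
    j = (i + 1) % L
    if occ[i] and not occ[j]:
        occ[i], occ[j] = 0, 1   # excluded volume: hop only into empty sites
        hops += 1

J = hops / steps            # empirical current per attempted update
J_theory = rho * (1 - rho)  # Markovian ASEP fundamental diagram
```

Replacing the memoryless update by a non-Poissonian waiting-time rule is what deforms this J(ρ) curve in the scenarios the paper analyzes.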
The time course of corticospinal excitability during a simple reaction time task.
Kennefick, Michael; Maslovat, Dana; Carlsen, Anthony N
2014-01-01
The production of movement in a simple reaction time task can be separated into two time periods: the foreperiod, which is thought to include preparatory processes, and the reaction time interval, which includes initiation processes. To better understand these processes, transcranial magnetic stimulation has been used to probe corticospinal excitability at various time points during response preparation and initiation. Previous research has shown that excitability decreases prior to the "go" stimulus and increases following the "go"; however, these two time frames have been examined independently. The purpose of this study was to measure changes in corticospinal excitability during both the foreperiod and the reaction time interval in a single experiment, relative to a resting baseline level. Participants performed a button-press movement in a simple reaction time task, and excitability was measured during rest, the foreperiod, and the reaction time interval. Results indicated that during the foreperiod, excitability levels quickly increased from baseline with the presentation of the warning signal, followed by a period of stable excitability leading up to the "go" signal, and finally a rapid increase in excitability during the reaction time interval. This excitability time course is consistent with neural activation models that describe movement preparation and response initiation.
A Physics-Based Engineering Approach to Predict the Cross Section for Advanced SRAMs
NASA Astrophysics Data System (ADS)
Li, Lei; Zhou, Wanting; Liu, Huihua
2012-12-01
This paper presents a physics-based engineering approach to estimate the heavy-ion-induced upset cross section for 6T SRAM cells from layout and technology parameters. The approach calculates the effects of radiation with a junction photocurrent derived from device physics, and handles the problem using simple SPICE simulations. First, a standard SPICE program on a typical PC predicts the curve of collected charge versus distance from the drain-body junction using the derived junction photocurrent. The SPICE-simulated curve is then used to calculate the heavy-ion-induced upset cross section with a simple model, which considers that the SEU cross section of an SRAM cell is related more to a “radius of influence” around a heavy-ion strike than to the physical size of a diffusion node in the layout for advanced SRAMs in nanoscale process technologies. The upset cross section calculated with this method is in good agreement with test results for 6T SRAM cells fabricated in a 90 nm process technology.
Rezende-Filho, Flávio Moura; da Fonseca, Lucas José Sá; Nunes-Souza, Valéria; Guedes, Glaucevane da Silva; Rabelo, Luiza Antas
2014-09-15
Teaching physiology, a complex and constantly evolving subject, is not a simple task. A considerable body of knowledge about cognitive processes and teaching and learning methods has accumulated over the years, helping teachers to determine the most efficient way to teach and highlighting students' active participation as a means to improve learning outcomes. In this context, this paper describes and qualitatively analyzes an experience of a student-centered teaching-learning methodology based on the construction of physiological-physical models, focusing on their possible application in the practice of teaching physiology. After attending physiology classes and reviewing the literature, students, divided into small groups, built physiological-physical models, predominantly using low-cost materials, to study different topics in physiology. Groups were followed by monitors and guided by teachers throughout the whole process, finally presenting the results in a Symposium on Integrative Physiology. Through the proposed activities, students efficiently created physiological-physical models (118 in total) that were highly representative of different physiological processes. The implementation of the proposal indicated that students successfully achieved active and meaningful learning in physiology while addressing multiple learning styles. The proposed method has proved to be an attractive, accessible and relatively simple approach to facilitating the physiology teaching-learning process, while facing difficulties imposed by recent requirements, especially those relating to the use of experimental animals and professional training guidelines. Finally, students' active participation in the production of knowledge may result in a holistic education and, possibly, better professional practices.
CDP++.Italian: Modelling Sublexical and Supralexical Inconsistency in a Shallow Orthography
Perry, Conrad; Ziegler, Johannes C.; Zorzi, Marco
2014-01-01
Most models of reading aloud have been constructed to explain data in relatively complex orthographies like English and French. Here, we created an Italian version of the Connectionist Dual Process Model of Reading Aloud (CDP++) to examine the extent to which the model could predict data in a language which has relatively simple orthography-phonology relationships but is relatively complex at a suprasegmental (word stress) level. We show that the model exhibits good quantitative performance and accounts for key phenomena observed in naming studies, including some apparently contradictory findings. These effects include stress regularity and stress consistency, both of which have been especially important in studies of word recognition and reading aloud in Italian. Overall, the results of the model compare favourably to an alternative connectionist model that can learn non-linear spelling-to-sound mappings. This suggests that CDP++ is currently the leading computational model of reading aloud in Italian, and that its simple linear learning mechanism adequately captures the statistical regularities of the spelling-to-sound mapping both at the segmental and supra-segmental levels. PMID:24740261
Kleidon, A.
2010-01-01
The Earth system is remarkably different from its planetary neighbours in that it shows pronounced, strong global cycling of matter. These global cycles result in the maintenance of a unique thermodynamic state of the Earth's atmosphere which is far from thermodynamic equilibrium (TE). Here, I provide a simple introduction to the thermodynamic basis for understanding why Earth system processes operate so far away from TE. I use a simple toy model to illustrate the application of non-equilibrium thermodynamics and to classify applications of the proposed principle of maximum entropy production (MEP) to such processes into three different cases of contrasting flexibility in the boundary conditions. I then provide a brief overview of the different processes within the Earth system that produce entropy, review actual examples of MEP in environmental and ecological systems, and discuss the role of interactions among dissipative processes in making boundary conditions more flexible. I close with a brief summary and conclusion. PMID:20368248
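The MEP idea can be illustrated with the classic two-box toy model: sweep the heat-exchange conductance between a hot and a cold box and locate the maximum of entropy production. The constants below are illustrative Budyko-type values, not the paper's:

```python
# two-box MEP toy: hot and cold boxes absorb I_h and I_c, each emits
# A + B*T (T in C), and exchange heat J = k*(Th - Tc); entropy production
# vanishes at both k -> 0 and k -> infinity, so a maximum lies between
A, B = 204.0, 2.17        # linear emission parameters, W m^-2 and W m^-2 K^-1
I_h, I_c = 300.0, 170.0   # absorbed shortwave in each box, W m^-2

def entropy_production(k):
    dT = (I_h - I_c) / (B + 2.0 * k)        # steady-state Th - Tc
    J = k * dT                              # box-to-box heat flow
    Th = (I_h - A - J) / B + 273.15         # steady-state temperatures, K
    Tc = (I_c - A + J) / B + 273.15
    return J * (1.0 / Tc - 1.0 / Th)        # entropy production, W m^-2 K^-1

ks = [0.05 * i for i in range(1, 200)]
sigmas = [entropy_production(k) for k in ks]
k_mep = ks[sigmas.index(max(sigmas))]       # MEP-selected conductance
```

The interior maximum is the MEP state: too little exchange produces no flux, too much erases the temperature gradient that drives entropy production.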
Improved Analysis of Earth System Models and Observations using Simple Climate Models
NASA Astrophysics Data System (ADS)
Nadiga, B. T.; Urban, N. M.
2016-12-01
Earth system models (ESMs) are the most comprehensive tools we have to study climate change and develop climate projections. However, the computational infrastructure required and the cost incurred in running such ESMs preclude their direct use with a wide variety of tools that can further our understanding of climate: dynamical systems tools that give insight into underlying flow structure and topology, applied mathematical and statistical techniques central to quantifying stability, sensitivity, uncertainty and predictability, and machine learning tools that are now being rapidly developed or improved. Our approach to facilitating the use of such tools is to analyze the output of ESM experiments (cf. CMIP) using a range of simpler models that consider integral balances of important quantities such as mass and/or energy, in a Bayesian framework. We highlight the use of this approach in the context of the uptake of heat by the world oceans under ongoing global warming. Indeed, since in excess of 90% of the anomalous radiative forcing due to greenhouse gas emissions is sequestered in the world oceans, the nature of ocean heat uptake crucially determines the surface warming that is realized (cf. climate sensitivity). Nevertheless, ESMs themselves are never run long enough to directly assess climate sensitivity. We therefore consider a range of models based on integral balances--balances that have to be realized in all first-principles-based models of the climate system, including the most detailed state-of-the-art climate simulations. The models range from simple energy-balance models to those that consider dynamically important ocean processes such as the conveyor-belt circulation (Meridional Overturning Circulation, MOC), North Atlantic Deep Water (NADW) formation, the Antarctic Circumpolar Current (ACC) and eddy mixing.
Results from Bayesian analysis of such models using both ESM experiments and actual observations are presented. One such result points to the importance of direct sequestration of heat below 700 m, a process that is not allowed for in the simple models that have been traditionally used to deduce climate sensitivity.
Seasonal Synchronization of a Simple Stochastic Dynamical Model Capturing El Niño Diversity
NASA Astrophysics Data System (ADS)
Thual, S.; Majda, A.; Chen, N.
2017-12-01
The El Niño-Southern Oscillation (ENSO) has a significant impact on global climate and seasonal prediction. Recently, a simple ENSO model was developed that automatically captures the ENSO diversity and intermittency in nature, where state-dependent stochastic wind bursts and nonlinear advection of sea surface temperature (SST) are coupled to simple ocean-atmosphere processes that are otherwise deterministic, linear and stable. In the present article, it is further shown that the model can reproduce qualitatively the ENSO synchronization (or phase-locking) to the seasonal cycle in nature. This goal is achieved by incorporating a cloud radiative feedback that is derived naturally from the model's atmosphere dynamics with no ad hoc assumptions and accounts in simple fashion for the marked seasonal variations of convective activity and cloud cover in the eastern Pacific. In particular, the weak convective response to SSTs in boreal fall favors the eastern Pacific warming that triggers El Niño events, while the increased convective activity and cloud cover during the following spring contribute to the shutdown of those events by blocking incoming shortwave solar radiation. In addition to simulating ENSO diversity with realistic non-Gaussian statistics in the different Niño regions, the eastern Pacific moderate and super El Niño, the central Pacific El Niño and La Niña all show a realistic chronology, with a tendency to peak in boreal winter and decreased predictability in spring, consistent with the persistence barrier in nature. The incorporation of other possible seasonal feedbacks in the model is also documented for completeness.
Minimal model for the secondary structures and conformational conversions in proteins
NASA Astrophysics Data System (ADS)
Imamura, Hideo
Better understanding of the protein folding process can provide physical insights into the function of proteins and makes it possible to benefit from the genetic information accumulated so far. Protein folding normally takes place in less than seconds, but even seconds are beyond the reach of current computational power for simulations of a system in all-atom detail. Hence, to model and explore the protein folding process it is crucial to construct a model that can adequately describe the physical process and mechanism on the relevant time scale. We discuss a reduced off-lattice model that can express α-helix and β-hairpin conformations defined solely by a given sequence, in order to investigate the folding mechanism of conformations such as the β-hairpin and to investigate conformational conversions in proteins. The first two chapters introduce and review essential concepts in protein folding modelling: physical interactions in proteins and various simple models. They also review computational methods, in particular the Metropolis Monte Carlo method, its dynamic interpretation, and thermodynamic Monte Carlo algorithms. Chapter 3 describes the minimalist model, which represents both α-helix and β-sheet conformations using simple potentials. The native conformation can be specified by the sequence without particular conformational biases toward a reference state. In Chapter 4, the model is used to investigate the folding mechanism of β-hairpins exhaustively using dynamic Monte Carlo and a thermodynamic Monte Carlo method, an efficient combination of multicanonical Monte Carlo and the weighted histogram analysis method. We show that the major folding pathways and the folding rate depend on the location of a hydrophobic pair. The conformational conversions between α-helix and β-sheet conformations are examined in Chapters 5 and 6.
First, the conformational conversion due to mutation in a non-hydrophobic system and then the conformational conversion due to mutation with a hydrophobic pair at a different position at various temperatures are examined.
On the ``Matrix Approach'' to Interacting Particle Systems
NASA Astrophysics Data System (ADS)
de Sanctis, L.; Isopi, M.
2004-04-01
Derrida et al. and Schütz and Stinchcombe gave algebraic formulas for the correlation functions of the partially asymmetric simple exclusion process. Here we give a fairly general recipe of how to get these formulas and extend them to the whole time evolution (starting from the generator of the process), for a certain class of interacting systems. We then analyze the algebraic relations obtained to show that the matrix approach does not work with some models such as the voter and the contact processes.
Psychophysics of time perception and intertemporal choice models
NASA Astrophysics Data System (ADS)
Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.
2008-03-01
Intertemporal choice and the psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting; general hyperbolic discounting (exponential discounting with logarithmic time perception following the Weber-Fechner law, equivalent to a q-exponential discount model based on Tsallis's statistics); simple hyperbolic discounting; and Stevens' power law-exponential discounting (exponential discounting with Stevens' power time perception). In order to examine the fit of the models to behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small-sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and the biophysical processing underlying temporal discounting and time perception are discussed.
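The competing discount functions named above can be written down directly. A minimal sketch follows; the function and parameter names are illustrative, not the authors' code, and the AICc helper assumes a least-squares fit summarized by a residual sum of squares:

```python
import math

def exponential(delay, k):
    # V(D) = exp(-k D): constant discount rate
    return math.exp(-k * delay)

def simple_hyperbolic(delay, k):
    # V(D) = 1 / (1 + k D): Mazur's hyperbola
    return 1.0 / (1.0 + k * delay)

def q_exponential(delay, k, q):
    # V(D) = [1 + (1 - q) k D]^(-1/(1 - q)) from Tsallis statistics;
    # recovers the exponential as q -> 1 and the simple hyperbola at q = 0
    return (1.0 + (1.0 - q) * k * delay) ** (-1.0 / (1.0 - q))

def aicc(rss, n, n_params):
    # AIC computed from the residual sum of squares of a fit, with the
    # small-sample correction used to compare models of unequal complexity
    aic = n * math.log(rss / n) + 2 * n_params
    return aic + 2 * n_params * (n_params + 1) / (n - n_params - 1)
```

Fitting k (and q) to the seven indifference points and ranking the candidates by AICc reproduces the kind of comparison described in the abstract.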
Robert R. Ziemer
1979-01-01
For years, the principal objective of evapotranspiration research has been to calculate the loss of water under varying conditions of climate, soil, and vegetation. The early simple empirical methods have generally been replaced by more detailed models which more closely represent the physical and biological processes involved. Monteith's modification of the...
Process-conditioned bias correction for seasonal forecasting: a case-study with ENSO in Peru
NASA Astrophysics Data System (ADS)
Manzanas, R.; Gutiérrez, J. M.
2018-05-01
This work assesses the suitability of a first simple attempt for process-conditioned bias correction in the context of seasonal forecasting. To do this, we focus on the northwestern part of Peru and bias correct 1- and 4-month lead seasonal predictions of boreal winter (DJF) precipitation from the ECMWF System4 forecasting system for the period 1981-2010. In order to include information about the underlying large-scale circulation which may help to discriminate between precipitation affected by different processes, we introduce here an empirical quantile-quantile mapping method which runs conditioned on the state of the Southern Oscillation Index (SOI), which is accurately predicted by System4 and is known to affect the local climate. Beyond the reduction of model biases, our results show that the SOI-conditioned method yields better ROC skill scores and reliability than the raw model output over the entire region of study, whereas the standard unconditioned implementation provides no added value for any of these metrics. This suggests that conditioning the bias correction on simple but well-simulated large-scale processes relevant to the local climate may be a suitable approach for seasonal forecasting. Yet, further research on the suitability of the application of similar approaches to the one considered here for other regions, seasons and/or variables is needed.
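The core of the method can be sketched in pure Python: an empirical quantile-quantile transfer function, plus a variant that fits a separate transfer function per SOI phase. The nearest-rank quantile inversion, the two-phase threshold split, and stratifying historical pairs by the model's own SOI are simplifying assumptions for illustration, not the implementation in the paper:

```python
import bisect

def empirical_qq_map(model_hist, obs_hist, value):
    """Map one forecast value through the model's empirical CDF and the
    inverse of the observed empirical CDF (nearest-rank inversion)."""
    m = sorted(model_hist)
    o = sorted(obs_hist)
    p = bisect.bisect_right(m, value) / len(m)       # model empirical CDF
    idx = min(len(o) - 1, max(0, int(round(p * len(o))) - 1))
    return o[idx]                                    # nearest-rank obs quantile

def soi_conditioned_map(model_hist, obs_hist, soi_hist, value, soi_now, thr=0.0):
    """Hypothetical SOI-conditioned variant: build the transfer function
    only from historical seasons in the same SOI phase as the forecast."""
    sel = [i for i, s in enumerate(soi_hist) if (s > thr) == (soi_now > thr)]
    return empirical_qq_map([model_hist[i] for i in sel],
                            [obs_hist[i] for i in sel], value)
```

With separate calibration samples per phase, systematically different model biases in El Niño versus La Niña winters are corrected separately, which is the intuition behind process-conditioned bias correction.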
Deciding the liveness for a subclass of weighted Petri nets based on structurally circular wait
NASA Astrophysics Data System (ADS)
Liu, GuanJun; Chen, LiJing
2016-05-01
Weighted Petri nets as a kind of formal language are widely used to model and verify discrete event systems related to resource allocation, such as flexible manufacturing systems. The System of Simple Sequential Processes with Multi-Resources (S3PMR), a subclass of weighted Petri nets and an important extension of the well-known System of Simple Sequential Processes with Resources, can model many discrete event systems in which (1) multiple processes may run in parallel and (2) each execution step of each process may use multiple units from multiple resource types. This paper gives a necessary and sufficient condition for the liveness of S3PMR. A new structural concept called Structurally Circular Wait (SCW) is proposed for S3PMR, and the Blocking Marking (BM) associated with an SCW is defined. It is proven that a marked S3PMR is live if and only if no SCW has a BM. We use an example of a multi-processor system-on-chip to show that SCW and BM can precisely characterise (partial) deadlocks in S3PMR. Two further examples show the advantages of SCW in preventing deadlocks in S3PMR. These results are significant for further research on the deadlock problem.
Simulations of Quantum Dot Growth on Semiconductor Surfaces: Morphological Design of Sensor Concepts
2008-12-01
size equalization can be clearly illustrated during the growth process. In this work we develop a fast multiscale 3D kinetic Monte Carlo (KMC) QD...model will provide an attractive means for producing predictably ordered nanostructures. MODEL DESCRIPTION The 3D layer-by-layer KMC growth model...Voter, 2001) and KMC simulation experience (Pan et al., 2004; Pan et al., 2006; Meixner et al., 2003) in 2D, we therefore propose the following simple
The noisy edge of traveling waves
Hallatschek, Oskar
2011-01-01
Traveling waves are ubiquitous in nature and control the speed of many important dynamical processes, including chemical reactions, epidemic outbreaks, and biological evolution. Despite their fundamental role in complex systems, traveling waves remain elusive because they are often dominated by rare fluctuations in the wave tip, which have defied any rigorous analysis so far. Here, we show that by adjusting nonlinear model details, noisy traveling waves can be solved exactly. The moment equations of these tuned models are closed and have a simple analytical structure resembling the deterministic approximation supplemented by a nonlocal cutoff term. The peculiar form of the cutoff shapes the noisy edge of traveling waves and is critical for the correct prediction of the wave speed and its fluctuations. Our approach is illustrated and benchmarked using the example of fitness waves arising in simple models of microbial evolution, which are highly sensitive to number fluctuations. We demonstrate explicitly how these models can be tuned to account for finite population sizes and determine how quickly populations adapt as a function of population size and mutation rates. More generally, our method is shown to apply to a broad class of models, in which number fluctuations are generated by branching processes. Because of this versatility, the method of model tuning may serve as a promising route toward unraveling universal properties of complex discrete particle systems. PMID:21187435
Dynamical systems, attractors, and neural circuits.
Miller, Paul
2016-01-01
Biology is the study of dynamical systems. Yet most of us working in biology have limited pedagogical training in the theory of dynamical systems, an unfortunate historical fact that can be remedied for future generations of life scientists. In my particular field of systems neuroscience, neural circuits are rife with nonlinearities at all levels of description, rendering simple methodologies and our own intuition unreliable. Therefore, our ideas are likely to be wrong unless informed by good models. These models should be based on the mathematical theories of dynamical systems since functioning neurons are dynamic: they change their membrane potential and firing rates with time. Thus, selecting the appropriate type of dynamical system upon which to base a model is an important first step in the modeling process. This step all too easily goes awry, in part because there are many frameworks to choose from, in part because the sparsely sampled data can be consistent with a variety of dynamical processes, and in part because each modeler has a preferred modeling approach that is difficult to move away from. This brief review summarizes some of the main dynamical paradigms that can arise in neural circuits, with comments on what they can achieve computationally and what signatures might reveal their presence within empirical data. I provide examples of different dynamical systems using simple circuits of two or three cells, emphasizing that any one connectivity pattern is compatible with multiple, diverse functions.
Understanding the complex dynamics of stock markets through cellular automata
NASA Astrophysics Data System (ADS)
Qiu, G.; Kandhai, D.; Sloot, P. M. A.
2007-04-01
We present a cellular automaton (CA) model for simulating the complex dynamics of stock markets. Within this model, a stock market is represented by a two-dimensional lattice, of which each vertex stands for a trader. According to typical trading behavior in real stock markets, agents of only two types are adopted: fundamentalists and imitators. Our CA model is based on local interactions, adopting simple rules for representing the behavior of traders and a simple rule for price updating. This model can reproduce, in a simple and robust manner, the main characteristics observed in empirical financial time series. Heavy-tailed return distributions due to large price variations can be generated through the imitating behavior of agents. In contrast to other microscopic simulation (MS) models, our results suggest that it is not necessary to assume a certain network topology in which agents group together, e.g., a random graph or a percolation network. That is, long-range interactions can emerge from local interactions. Volatility clustering, which also leads to heavy tails, seems to be related to the combined effect of a fast and a slow process: the evolution of the influence of news and the evolution of agents’ activity, respectively. In a general sense, these causes of heavy tails and volatility clustering appear to be common among some notable MS models that can confirm the main characteristics of financial markets.
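One synchronous update of such a lattice can be sketched as follows. The von Neumann neighbourhood, the log-price impact rule, and all parameter names are illustrative assumptions, not the paper's exact rules:

```python
import math

def step(kinds, decisions, price, fundamental=100.0, impact=0.01):
    """One synchronous CA update: fundamentalists trade toward the
    fundamental value; imitators copy the majority action of their four
    lattice neighbours from the previous step (periodic boundaries)."""
    n = len(kinds)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if kinds[i][j] == "F":   # fundamentalist: buy low, sell high
                new[i][j] = 1 if price < fundamental else -1
            else:                    # imitator: follow the local majority
                s = (decisions[(i - 1) % n][j] + decisions[(i + 1) % n][j]
                     + decisions[i][(j - 1) % n] + decisions[i][(j + 1) % n])
                new[i][j] = 1 if s > 0 else (-1 if s < 0 else decisions[i][j])
    excess = sum(sum(row) for row in new)
    price *= math.exp(impact * excess / (n * n))  # simple price-impact rule
    return new, price
```

Iterating this map and recording log-returns is the sort of experiment that produces the heavy tails and volatility clustering discussed above; long-range order emerges purely from the local imitation rule.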
NASA Astrophysics Data System (ADS)
Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.
2015-11-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.
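The "simple averaging weighted by the aggregate score" step can be sketched directly. The exponential form of the weight and the scale parameter are assumptions for illustration, not the paper's scoring formula:

```python
import math

def score_weighted_stats(values, misfits, scale=1.0):
    """Weight each ensemble member by its aggregate model-data misfit
    (here a hypothetical w = exp(-misfit / scale)) and return the
    weighted mean and variance of a target quantity, e.g. the
    equivalent sea-level rise of each run."""
    w = [math.exp(-m / scale) for m in misfits]
    total = sum(w)
    mean = sum(wi * v for wi, v in zip(w, values)) / total
    var = sum(wi * (v - mean) ** 2 for wi, v in zip(w, values)) / total
    return mean, var
```

Runs that fit the geologic data poorly receive exponentially small weight, so the weighted mean and variance approximate a posterior under a crude likelihood, which is why this simple scheme can agree with the full Gaussian-process emulation when the parameter sampling is dense enough.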
Unifying Model-Based and Reactive Programming within a Model-Based Executive
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)
1999-01-01
Real-time, model-based deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden-state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.
Modeling and characterization of supercapacitors for wireless sensor network applications
NASA Astrophysics Data System (ADS)
Zhang, Ying; Yang, Hengzhao
A simple circuit model is developed to describe supercapacitor behavior, which uses two resistor-capacitor branches with different time constants to characterize the charging and redistribution processes, and a variable leakage resistance to characterize the self-discharge process. The parameter values of a supercapacitor can be determined by a charging-redistribution experiment and a self-discharge experiment. The modeling and characterization procedures are illustrated using a 22 F supercapacitor. The accuracy of the model is compared with that of other models often used in power electronics applications. The results show that the proposed model has better accuracy in characterizing the self-discharge process while maintaining similar performance as other models during charging and redistribution processes. Additionally, the proposed model is evaluated in a simplified energy storage system for self-powered wireless sensors. The model performance is compared with that of a commonly used energy recursive equation (ERE) model. The results demonstrate that the proposed model can predict the evolution profile of voltage across the supercapacitor more accurately than the ERE model, and therefore provides a better alternative for supporting research on storage system design and power management for wireless sensor networks.
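The two-branch-plus-leakage structure can be sketched with forward-Euler integration. All component values below are illustrative, and the leakage resistance is held constant for simplicity (in the paper it is variable):

```python
def simulate_supercap(r1, c1, r2, c2, r_leak, v_charge,
                      t_charge, t_rest, dt=0.01):
    """Two-branch RC supercapacitor sketch.
    Phase 1 (charging): both branches are driven by the charger voltage.
    Phase 2 (open circuit): charge redistributes from the fast branch
    (r1, c1) to the slow branch (r2, c2) through r1 + r2, while the
    leakage resistance self-discharges the fast branch."""
    v1 = v2 = 0.0   # voltages on the fast and slow branch capacitors
    t = 0.0
    while t < t_charge:            # charging phase
        v1 += dt * (v_charge - v1) / (r1 * c1)
        v2 += dt * (v_charge - v2) / (r2 * c2)
        t += dt
    while t < t_charge + t_rest:   # redistribution + self-discharge
        i_redist = (v1 - v2) / (r1 + r2)
        v1 += dt * (-i_redist / c1 - v1 / (r_leak * c1))
        v2 += dt * (i_redist / c2)
        t += dt
    return v1, v2
```

After the rest period the fast-branch voltage has sagged below the charger voltage while the slow branch has crept upward, which is the redistribution signature the characterization experiments exploit.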
Characteristic time scales for diffusion processes through layers and across interfaces
NASA Astrophysics Data System (ADS)
Carr, Elliot J.
2018-04-01
This paper presents a simple tool for characterizing the time scale for continuum diffusion processes through layered heterogeneous media. This mathematical problem is motivated by several practical applications such as heat transport in composite materials, flow in layered aquifers, and drug diffusion through the layers of the skin. In such processes, the physical properties of the medium vary across layers and internal boundary conditions apply at the interfaces between adjacent layers. To characterize the time scale, we use the concept of mean action time, which provides the mean time scale at each position in the medium by utilizing the fact that the transition of the transient solution of the underlying partial differential equation model, from initial state to steady state, can be represented as a cumulative distribution function of time. Using this concept, we define the characteristic time scale for a multilayer diffusion process as the maximum value of the mean action time across the layered medium. For given initial conditions and internal and external boundary conditions, this approach leads to simple algebraic expressions for characterizing the time scale that depend on the physical and geometrical properties of the medium, such as the diffusivities and lengths of the layers. Numerical examples demonstrate that these expressions provide useful insight into explaining how the parameters in the model affect the time it takes for a multilayer diffusion process to reach steady state.
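For a single homogeneous layer the construction can be checked numerically: for u_t = D u_xx on [0, L] with u(x, 0) = 1 and absorbing boundaries u(0, t) = u(L, t) = 0, integrating the PDE in time gives the closed form T(x) = x(L - x)/(2D) for the mean action time. A sketch under these assumptions (grid sizes illustrative):

```python
def mean_action_time(D, L, nx=51):
    """Numerically accumulate T(x) = integral of u(x, t) dt for
    u_t = D u_xx, u(x, 0) = 1, absorbing boundaries, via an explicit
    finite-difference scheme; compare with x (L - x) / (2 D)."""
    dx = L / (nx - 1)
    dt = 0.25 * dx * dx / D            # explicit-scheme stability limit
    u = [0.0] + [1.0] * (nx - 2) + [0.0]
    T = [0.0] * nx
    while max(u) > 1e-8:               # integrate until near steady state
        new = [0.0] * nx
        for i in range(1, nx - 1):
            new[i] = u[i] + D * dt * (u[i - 1] - 2 * u[i] + u[i + 1]) / (dx * dx)
        u = new
        for i in range(nx):
            T[i] += u[i] * dt          # rectangle-rule time integral
    return T
```

The maximum of T over the domain (here L²/(8D) at the midpoint) is the characteristic time scale in the sense defined above; the multilayer formulas in the paper generalize this algebraically.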
NASA Astrophysics Data System (ADS)
Alfarano, Simone; Lux, Thomas; Wagner, Friedrich
2006-10-01
Following Alfarano et al. [Estimation of agent-based models: the case of an asymmetric herding model, Comput. Econ. 26 (2005) 19-49; Excess volatility and herding in an artificial financial market: analytical approach and estimation, in: W. Franz, H. Ramser, M. Stadler (Eds.), Funktionsfähigkeit und Stabilität von Finanzmärkten, Mohr Siebeck, Tübingen, 2005, pp. 241-254], we consider a simple agent-based model of a highly stylized financial market. The model takes Kirman's ant process [A. Kirman, Epidemics of opinion and speculative bubbles in financial markets, in: M.P. Taylor (Ed.), Money and Financial Markets, Blackwell, Cambridge, 1991, pp. 354-368; A. Kirman, Ants, rationality, and recruitment, Q. J. Econ. 108 (1993) 137-156] of mimetic contagion as its starting point, but allows for asymmetry in the attractiveness of both groups. Embedding the contagion process into a standard asset-pricing framework, and identifying the abstract groups of the herding model as chartists and fundamentalist traders, a market with periodic bubbles and bursts is obtained. Taking stock of the availability of a closed-form solution for the stationary distribution of returns for this model, we can estimate its parameters via maximum likelihood. Expanding our earlier work, this paper presents pertinent estimates for the Australian dollar/US dollar exchange rate and the Australian stock market index. As it turns out, our model indicates dominance of fundamentalist behavior in both the stock and foreign exchange market.
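The underlying ant process can be sketched as a discrete-time birth-death chain. The per-step transition probabilities follow the standard Kirman form with asymmetric self-conversion rates; all parameter values are hypothetical:

```python
import random

def simulate_kirman(N=100, a1=0.002, a2=0.002, h=0.01, steps=50000, seed=1):
    """Discrete-time sketch of Kirman's ant process with asymmetric
    idiosyncratic switching rates a1, a2 and herding strength h.
    With n agents in group 1 (say, chartists), at most one agent
    switches per step with the probabilities below."""
    rng = random.Random(seed)
    n = N // 2
    counts = [0] * (N + 1)   # occupation histogram over group-1 sizes
    for _ in range(steps):
        p_up = (1 - n / N) * (a1 + h * n)        # group 2 -> group 1
        p_down = (n / N) * (a2 + h * (N - n))    # group 1 -> group 2
        u = rng.random()
        if u < p_up:
            n += 1
        elif u < p_up + p_down:
            n -= 1
        counts[n] += 1
    return counts
```

With a1 ≠ a2 the stationary histogram becomes asymmetric, which is the asymmetry in group attractiveness that the maximum-likelihood estimation in the paper exploits; feeding the group fractions into an asset-pricing rule yields the bubble-and-burst dynamics described above.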
Calibration of a simple and a complex model of global marine biogeochemistry
NASA Astrophysics Data System (ADS)
Kriest, Iris
2017-11-01
The assessment of the ocean biota's role in climate change is often carried out with global biogeochemical ocean models that contain many components and involve a high level of parametric uncertainty. Because many data that relate to tracers included in a model are only sparsely observed, assessment of model skill is often restricted to tracers that can be easily measured and assembled. Examination of the models' fit to climatologies of inorganic tracers, after the models have been spun up to steady state, is a common but computationally expensive procedure to assess model performance and reliability. Using new tools that have become available for global model assessment and calibration in steady state, this paper examines two different model types - a complex seven-component model (MOPS) and a very simple four-component model (RetroMOPS) - for their fit to dissolved quantities. Before comparing the models, a subset of their biogeochemical parameters has been optimised against annual-mean nutrients and oxygen. Both model types fit the observations almost equally well. The simple model contains only two nutrients: oxygen and dissolved organic phosphorus (DOP). Its misfit and large-scale tracer distributions are sensitive to the parameterisation of DOP production and decay. The spatio-temporal decoupling of nitrogen and oxygen, and processes involved in their uptake and release, renders oxygen and nitrate valuable tracers for model calibration. In addition, the non-conservative nature of these tracers (with respect to their upper boundary condition) introduces the global bias (fixed nitrogen and oxygen inventory) as a useful additional constraint on model parameters. Dissolved organic phosphorus at the surface behaves antagonistically to phosphate, and suggests that observations of this tracer - although difficult to measure - may be an important asset for model calibration.
HIA, the next step: Defining models and roles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Putters, Kim
If HIA is to be an effective instrument for optimising health interests in the policy making process, it has to recognise the different contexts in which policy is made and the relevance of both technical rationality and political rationality. Policy making may adopt a rational perspective, in which there is a systematic and orderly progression from problem formulation to solution, or a network perspective, in which there are multiple interdependencies, extensive negotiation and compromise, and the steps from problem formulation to solution are not followed sequentially or in any particular order. Policy problems may be simple, with clear causal pathways and responsibilities, or complex, with unclear causal pathways and disputed responsibilities. Network analysis is required to show which stakeholders are involved, their support for health issues and the degree of consensus. From this analysis three models of HIA emerge. The first is the phases model, which is fitted to simple problems and a rational perspective of policymaking; it involves following structured steps. The second is the rounds (Echternach) model, which is fitted to complex problems and a network perspective of policymaking; it is dynamic and concentrates on network solutions, taking the steps in no particular order. The final model is the 'garbage can' model, fitted to contexts that combine simple and complex problems; here HIA functions as a problem solver and signpost, keeping all possible solutions and stakeholders in play and allowing solutions to emerge over time. HIA models should be the beginning rather than the conclusion of discussion between the worlds of HIA and policymaking.
Mathematical Modeling for Scrub Typhus and Its Implications for Disease Control.
Min, Kyung Duk; Cho, Sung Il
2018-03-19
The incidence rate of scrub typhus has been increasing in the Republic of Korea. Previous studies have suggested that this trend may have resulted from the effects of climate change on the transmission dynamics among vectors and hosts, but a clear explanation of the process is still lacking. In this study, we applied mathematical models to explore the potential factors that influence the epidemiology of tsutsugamushi disease. We developed mathematical models of ordinary differential equations including human, rodent and mite groups. Two models, a simple and a complex one, were developed, and all parameters employed in the models were adopted from previous articles representing epidemiological situations in the Republic of Korea. The simulation results showed that the force of infection at the equilibrium state under the simple model was 0.236 (per 100,000 person-months), and that under the complex model was 26.796 (per 100,000 person-months). Sensitivity analyses indicated that the most influential parameters were the rodent and mite populations and the contact rate between them for the simple model, and trans-ovarian transmission for the complex model. In both models, the contact rate between humans and mites is more influential than the mortality rates of the rodent and mite groups. The results indicate that the effect of controlling either rodents or mites could be limited, and that reducing the contact rate between humans and mites is a more practical and effective strategy. However, the current level of control would be insufficient relative to the growing mite population. © 2018 The Korean Academy of Medical Sciences.
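The general shape of such host-vector ODE systems can be sketched with forward-Euler integration. This is a generic SIR-style sketch, not the authors' model: it omits rodents and trans-ovarian transmission, holds the mite population constant, and uses purely illustrative rates:

```python
def simulate_host_vector(beta_mh=0.5, beta_hm=0.3, gamma=0.1, mu_m=0.05,
                         days=200, dt=0.1):
    """Forward-Euler integration of a generic host-vector sketch:
    susceptible/infected/recovered human fractions (sh, ih, rh) and
    susceptible/infected mite fractions (sm, im), with mite births
    balancing deaths at rate mu_m so the mite population is constant."""
    sh, ih, rh = 0.99, 0.01, 0.0
    sm, im = 0.95, 0.05
    for _ in range(int(days / dt)):
        new_h = beta_mh * sh * im      # mite-to-human infections
        new_m = beta_hm * sm * ih      # human-to-mite infections
        sh, ih, rh = (sh - dt * new_h,
                      ih + dt * (new_h - gamma * ih),
                      rh + dt * gamma * ih)
        sm, im = (sm + dt * (mu_m * im - new_m),
                  im + dt * (new_m - mu_m * im))
    return sh, ih, rh, sm, im
```

Because new infections depend on the product of human and mite contact terms, scaling down the human-mite contact rates (beta_mh, beta_hm) suppresses transmission in both directions at once, which is the intuition behind the paper's conclusion that reducing human-mite contact is the more effective control lever.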
Lunar exploration for resource utilization
NASA Technical Reports Server (NTRS)
Duke, Michael B.
1992-01-01
The strategy for developing resources on the Moon depends on the stage of space industrialization. A case is made for first developing the resources needed to provide simple materials required in large quantities for space operations. Propellants, shielding, and structural materials fall into this category. As the enterprise grows, it will be feasible to develop additional sources - those more difficult to obtain or required in smaller quantities. Thus, the first materials processing on the Moon will probably take the abundant lunar regolith, extract from it major mineral or glass species, and do relatively simple chemical processing. We need to conduct a lunar remote sensing mission to determine the global distribution of features, geophysical properties, and composition of the Moon, information which will serve as the basis for detailed models of and engineering decisions about a lunar mine.
Application of digital control techniques for satellite medium power DC-DC converters
NASA Astrophysics Data System (ADS)
Skup, Konrad R.; Grudzinski, Pawel; Nowosielski, Witold; Orleanski, Piotr; Wawrzaszek, Roman
2010-09-01
The objective of this paper is to present work on a digital control loop system for satellite medium power DC-DC converters carried out at the Space Research Centre. The whole control process of the described power converter is based on high speed digital signal processing. The paper presents the development of an FPGA digital controller for voltage mode stabilization that was implemented using VHDL. The described controllers are a classical digital PID controller and a bang-bang controller. The converter used for testing is a simple model of a 5-20 W, 200 kHz buck power converter. A high resolution digital PWM approach is presented. Additionally, a simple and effective solution for filtering the analog-to-digital converter output is presented.
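The classical digital PID loop mentioned in the abstract can be sketched in a few lines. The per-sample gains and the crude first-order stand-in for the buck converter below are invented for illustration; they are not the flight values or a circuit-level model:

```python
class DigitalPID:
    """Textbook discrete PID in per-sample (normalized) form.
    Gains are illustrative, unrelated to the flight hardware."""
    def __init__(self, kp, ki, kd, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0
        self.out_min, self.out_max = out_min, out_max

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err
        deriv = err - self.prev_err
        self.prev_err = err
        duty = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(max(duty, self.out_min), self.out_max)  # clamp PWM duty

# Hypothetical plant: output voltage relaxes toward duty * V_in each
# control period (a crude first-order stand-in for a buck converter).
pid = DigitalPID(kp=0.2, ki=0.05, kd=0.0)
v_in, v_out, v_ref = 20.0, 0.0, 5.0
for _ in range(300):
    duty = pid.step(v_ref, v_out)
    v_out += (duty * v_in - v_out) * 0.1
```

The integral term drives the steady-state error to zero, so the simulated output settles at the 5 V reference with a duty cycle near v_ref / v_in = 0.25.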
Model-based tomographic reconstruction
Chambers, David H; Lehman, Sean K; Goodman, Dennis M
2012-06-26
A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.
The use of models to predict potential contamination aboard orbital vehicles
NASA Technical Reports Server (NTRS)
Boraas, Martin E.; Seale, Dianne B.
1989-01-01
A model of fungal growth on air-exposed, nonnutritive solid surfaces, developed for use aboard orbital vehicles, is presented. A unique feature of this testable model is that a developing fungal mycelium can facilitate its own growth by condensation of water vapor from its environment directly onto fungal hyphae. The fungal growth rate is limited by the rate of supply of volatile nutrients, and fungal biomass is limited by either the supply of nonvolatile nutrients or by metabolic loss processes. The model discussed is structurally simple, but its dynamics can be quite complex. Biofilm accumulation can vary from a simple linear increase to sustained exponential growth, depending on the values of the environmental variables and model parameters. The results of the model are consistent with data from aquatic biofilm studies, insofar as the two types of systems are comparable. It is shown that the model presented is experimentally testable and provides a platform for the interpretation of observational data that may be directly relevant to the question of growth of organisms aboard the proposed Space Station.
Model Calibration in Watershed Hydrology
NASA Technical Reports Server (NTRS)
Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh
2009-01-01
Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
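The calibration loop this chapter reviews — adjust parameters until the model's behavior approximates the observed response — can be illustrated with a toy one-parameter model. The linear reservoir, its synthetic "observations", and the least-squares objective are all assumptions of this sketch, not methods taken from the chapter:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate(k, rain, s0=0.0):
    """Toy watershed: a single linear reservoir with outflow Q = k * S."""
    s, q = s0, []
    for p in rain:
        s += p           # rainfall enters storage
        out = k * s      # outflow proportional to storage
        s -= out
        q.append(out)
    return np.array(q)

rng = np.random.default_rng(0)
rain = rng.exponential(2.0, 100)
# Synthetic "observed" discharge from a known k, plus measurement noise.
q_obs = simulate(0.3, rain) + rng.normal(0.0, 0.05, 100)

# Calibration: choose k minimizing the squared model-data mismatch.
res = minimize_scalar(lambda k: np.sum((simulate(k, rain) - q_obs) ** 2),
                      bounds=(0.01, 0.99), method="bounded")
k_hat = res.x
```

Because the parameter k does not correspond to a directly measurable entity, it is recovered here, as in the chapter's framing, only through the input-output record.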
A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction
NASA Astrophysics Data System (ADS)
Danandeh Mehr, Ali; Kahya, Ercan
2017-06-01
Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged prediction effect of stand-alone data-driven models. The multigene ingredient of the model tends to identify the underlying nonlinear system with expressions simpler than those of classical monolithic GP and, eventually, the simplification component exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared with the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model put forward a parsimonious solution of noteworthy practical value. In addition, the approach allows the user to bring human insight into the problem to examine evolved models and pick the best performing programs out for further analysis.
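The moving-average pre-processing ingredient is the simplest of the three components and can be written directly; the window length here is arbitrary, not the one used in the paper:

```python
import numpy as np

def moving_average(x, window=3):
    """Trailing moving-average filter applied to the input series
    before model identification (window length arbitrary)."""
    x = np.asarray(x, dtype=float)
    kernel = np.ones(window) / window
    # mode="valid" keeps only positions where the window fully overlaps.
    return np.convolve(x, kernel, mode="valid")

flow = [10.0, 12.0, 9.0, 14.0, 20.0, 18.0]
smoothed = moving_average(flow, window=3)   # smoothed[0] = (10 + 12 + 9) / 3
```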
Physics-based interactive volume manipulation for sharing surgical process.
Nakao, Megumi; Minato, Kotaro
2010-05-01
This paper presents a new set of techniques by which surgeons can interactively manipulate patient-specific volumetric models for sharing surgical process. To handle physical interaction between the surgical tools and organs, we propose a simple surface-constraint-based manipulation algorithm to consistently simulate common surgical manipulations such as grasping, holding and retraction. Our computation model is capable of simulating soft-tissue deformation and incision in real time. We also present visualization techniques in order to rapidly visualize time-varying, volumetric information on the deformed image. This paper demonstrates the success of the proposed methods in enabling the simulation of surgical processes, and the ways in which this simulation facilitates preoperative planning and rehearsal.
Although hydraulic redistribution of soil water (HR) by roots is a widespread phenomenon, the processes governing spatial and temporal patterns of HR are not well understood. We incorporated soil/plant biophysical properties into a simple model based on Darcy's law to predict sea...
Definitions: Health, Fitness, and Physical Activity.
ERIC Educational Resources Information Center
Corbin, Charles B.; Pangrazi, Robert P.; Franks, B. Don
2000-01-01
This paper defines a variety of fitness components, using a simple multidimensional hierarchical model that is consistent with recent definitions in the literature. It groups the definitions into two broad categories: product and process. Products refer to states of being such as physical fitness, health, and wellness. They are commonly referred…
Speed and Accuracy in the Processing of False Statements About Semantic Information.
ERIC Educational Resources Information Center
Ratcliff, Roger
1982-01-01
A standard reaction time procedure and a response signal procedure were used on data from eight experiments on semantic verifications. Results suggest that simple models of the semantic verification task that assume a single yes/no dimension on which discrimination is made are not correct. (Author/PN)
A simple enrichment correction factor for improving erosion estimation by rare earth oxide tracers
USDA-ARS?s Scientific Manuscript database
Spatially distributed soil erosion data are needed to better understand soil erosion processes and validate distributed erosion models. Rare earth element (REE) oxides were used to generate spatial erosion data. However, a general concern on the accuracy of the technique arose due to selective ...
USDA-ARS?s Scientific Manuscript database
Despite the enormous relevance of zoonotic infections to world-wide public health, and despite much effort in modeling individual zoonoses, a fundamental understanding of the disease dynamics and the nature of outbreaks emanating from such a complex system is still lacking. We introduce a simple sto...
Representation in development: from a model system to some general processes.
Montuori, Luke M; Honey, Robert C
2015-03-01
The view that filial imprinting might serve as a useful model system for studying the neurobiological basis of memory was inspired, at least in part, by a simple idea: acquired filial preferences reflect the formation of a memory or representation of the imprinting object itself, as opposed to the change in the efficacy of stimulus-response pathways, for example. We provide a synthesis of the evidence that supports this idea; and show that the processes of memory formation observed in filial imprinting find surprisingly close counterparts in other species, including our own. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Grugel, Richard N,; Tewari, Surendra; Rajamure, R. S.; Erdman, Robert; Poirier, David
2012-01-01
Primary dendrite arm spacings of Al-7 wt% Si alloy directionally solidified in the low gravity environment of space (MICAST-6 and MICAST-7; thermal gradient approx. 19 to 26 K/cm, growth speeds varying from 5 to 50 microns/s) show good agreement with the Hunt-Lu model. Primary dendrite trunk diameters of the ISS processed samples show a good fit with a simple analytical model based on Kirkwood's approach, proposed here. Natural convection (a) decreases primary dendrite arm spacing and (b) appears to increase primary dendrite trunk diameter.
1993-12-01
Naval Postgraduate School, Monterey, California. Thesis: A Simple, Low Overhead Data Compression Algorithm for Converting Lossy Compression Processes to Lossless. Author: Abbott, Walter D., III. Approved for public release; distribution is unlimited.
CELLFS: TAKING THE "DMA" OUT OF CELL PROGRAMMING
DOE Office of Scientific and Technical Information (OSTI.GOV)
IONKOV, LATCHESAR A.; MIRTCHOVSKI, ANDREY A.; NYRHINEN, AKI M.
In this paper we present a new programming model for the Cell BE architecture of scalar multiprocessors. We call this programming model CellFS. CellFS aims at simplifying the task of managing I/O between the local store of the processing units and main memory. The CellFS support library provides the means for transferring data via simple file I/O operations between the PPU and the SPU.
Zeng, Guang-Ming; Zhang, Shuo-Fu; Qin, Xiao-Sheng; Huang, Guo-He; Li, Jian-Bing
2003-05-01
The paper establishes the relationship between the settling efficiency and the dimensions of the sedimentation tank through numerical simulation, and this relationship is taken as one of the constraints in a simple optimal design model of the sedimentation tank. The feasibility and advantages of this model based on numerical calculation are verified through application to a practical case.
Successful photoresist removal: incorporating chemistry, conditions, and equipment
NASA Astrophysics Data System (ADS)
Moore, John C.
2002-07-01
The material make-up of photoresists spans a wide range of polarities and chemistries. Resists contain reactive components which are photochemically triggered to convert and condense to forms that result in a solubility change. When designing a cleaning process, a knowledge of the resist chemistry is fundamental. A DNQ/novolak system may follow a simple dissolution model under normal conditions. However, when the same resist is sent through a dry etch process, crosslinking and metallic impregnation occur to form a residue that is insoluble by simple dissolution. The same applies to negative-tone resists, where bonds must be broken and a high chemical interaction is needed to facilitate solvent penetration. Negative resists of different chemistry, such as the benzoin/acrylic, triazine/novolak, and azide/isoprene, must be addressed separately for specific polarity and reactant requirements. When dissolving and removing these crosslinked systems, benefits in formulated chemistries such as GenSolveTM and GenCleanTM are immediately observed. Once the chemistry is identified, conditions can be optimized with process design using temperature, agitation, and rinsing to achieve a robust process with a wide process latitude.
Pre-Launch Tasks Proposed in our Contract of December 1991
NASA Technical Reports Server (NTRS)
1998-01-01
We propose, during the pre-EOS phase to: (1) develop, with other MODIS Team Members, a means of discriminating different major biome types with NDVI and other AVHRR-based data; (2) develop a simple ecosystem process model for each of these biomes, BIOME-BGC; (3) relate the seasonal trend of weekly composite NDVI to vegetation phenology and temperature limits to develop a satellite defined growing season for vegetation; and (4) define physiologically based energy to mass conversion factors for carbon and water for each biome. Our final core at-launch product will be simplified, completely satellite driven biome specific models for net primary production. We will build these biome specific satellite driven algorithms using a family of simple ecosystem process models as calibration models, collectively called BIOME-BGC, and establish coordination with an existing network of ecological study sites in order to test and validate these products. Field datasets will then be available for both BIOME-BGC development and testing, use for algorithm developments of other MODIS Team Members, and ultimately be our first test point for MODIS land vegetation products upon launch. We will use field sites from the National Science Foundation Long-Term Ecological Research network, and develop Glacier National Park as a major site for intensive validation.
Random noise effects in pulse-mode digital multilayer neural networks.
Kim, Y C; Shanblatt, M A
1995-01-01
A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
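The core trick of the stochastic computing described above — replacing an algebraic multiplication with a single logic gate over pseudorandom pulse sequences — can be demonstrated in a few lines. The probabilities and sequence length are illustrative, not values from the DMNN experiments:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000   # pulse sequence length

def bernoulli_stream(p, n, rng):
    """Encode a probability p as a pseudorandom 0/1 pulse sequence."""
    return rng.random(n) < p

# In stochastic computing, multiplying two probabilities reduces to a
# bitwise AND of independent pulse streams: P(a AND b) = a * b.
a, b = 0.8, 0.5
stream_a = bernoulli_stream(a, N, rng)
stream_b = bernoulli_stream(b, N, rng)
product = np.mean(stream_a & stream_b)   # estimates a * b = 0.4
```

The estimate carries Bernoulli sampling noise with variance a*b*(1 - a*b)/N, which is the kind of error the paper's statistical model characterizes (the paper refines this with the hypergeometric distribution).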
Pre-Launch Tasks Proposed in our Contract of December 1991
NASA Technical Reports Server (NTRS)
Running, Steven W.; Nemani, Ramakrishna R.; Glassy, Joseph
1997-01-01
We propose, during the pre-EOS phase to: (1) develop, with other MODIS Team Members, a means of discriminating different major biome types with NDVI and other AVHRR-based data; (2) develop a simple ecosystem process model for each of these biomes, BIOME-BGC; (3) relate the seasonal trend of weekly composite NDVI to vegetation phenology and temperature limits to develop a satellite defined growing season for vegetation; and (4) define physiologically based energy to mass conversion factors for carbon and water for each biome. Our final core at-launch product will be simplified, completely satellite driven biome specific models for net primary production. We will build these biome specific satellite driven algorithms using a family of simple ecosystem process models as calibration models, collectively called BIOME-BGC, and establish coordination with an existing network of ecological study sites in order to test and validate these products. Field datasets will then be available for both BIOME-BGC development and testing, use for algorithm developments of other MODIS Team Members, and ultimately be our first test point for MODIS land vegetation products upon launch. We will use field sites from the National Science Foundation Long-Term Ecological Research network, and develop Glacier National Park as a major site for intensive validation.
On a Possible Unified Scaling Law for Volcanic Eruption Durations
Cannavò, Flavio; Nunnari, Giuseppe
2016-01-01
Volcanoes constitute dissipative systems with many degrees of freedom. Their eruptions are the result of complex processes that involve interacting chemical-physical systems. At present, due to the complexity of involved phenomena and to the lack of precise measurements, both analytical and numerical models are unable to simultaneously include the main processes involved in eruptions thus making forecasts of volcanic dynamics rather unreliable. On the other hand, accurate forecasts of some eruption parameters, such as the duration, could be a key factor in natural hazard estimation and mitigation. Analyzing a large database containing most of the known volcanic eruptions, we have determined that the duration of eruptions seems to be described by a universal distribution which characterizes eruption duration dynamics. In particular, this paper presents a plausible global power-law distribution of durations of volcanic eruptions that holds worldwide for different volcanic environments. We also introduce a new, simple and realistic pipe model that can reproduce the empirical distribution found. Since the proposed model belongs to the family of self-organized systems it may support the hypothesis that simple mechanisms can lead naturally to the emergent complexity in volcanic behaviour. PMID:26926425
On a Possible Unified Scaling Law for Volcanic Eruption Durations.
Cannavò, Flavio; Nunnari, Giuseppe
2016-03-01
Volcanoes constitute dissipative systems with many degrees of freedom. Their eruptions are the result of complex processes that involve interacting chemical-physical systems. At present, due to the complexity of involved phenomena and to the lack of precise measurements, both analytical and numerical models are unable to simultaneously include the main processes involved in eruptions thus making forecasts of volcanic dynamics rather unreliable. On the other hand, accurate forecasts of some eruption parameters, such as the duration, could be a key factor in natural hazard estimation and mitigation. Analyzing a large database containing most of the known volcanic eruptions, we have determined that the duration of eruptions seems to be described by a universal distribution which characterizes eruption duration dynamics. In particular, this paper presents a plausible global power-law distribution of durations of volcanic eruptions that holds worldwide for different volcanic environments. We also introduce a new, simple and realistic pipe model that can reproduce the empirical distribution found. Since the proposed model belongs to the family of self-organized systems it may support the hypothesis that simple mechanisms can lead naturally to the emergent complexity in volcanic behaviour.
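A power-law distribution of the kind proposed here is commonly fitted by maximum likelihood. This sketch applies the standard Hill estimator to synthetic "durations"; the exponent and lower cutoff are made up for illustration, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic durations drawn from a continuous power law
# p(x) ~ x^(-alpha) for x >= x_min (alpha and x_min invented here),
# via inverse-CDF sampling: x = x_min * (1 - u)^(-1/(alpha - 1)).
alpha_true, x_min, n = 2.5, 1.0, 50_000
durations = x_min * (1 - rng.random(n)) ** (-1 / (alpha_true - 1))

# Maximum-likelihood (Hill) estimator of the power-law exponent.
alpha_hat = 1 + n / np.sum(np.log(durations / x_min))
```

With n samples the estimator's standard error is roughly (alpha - 1) / sqrt(n), so the recovered exponent is close to the generating one.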
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for the desired performance function. Nonetheless, the industry still uses the traditional technique to obtain those values, and lack of knowledge of optimization techniques is the main reason this issue has persisted. Therefore, the simple yet easy to implement Optimal Cutting Parameters Selection System is introduced to help the manufacturer easily understand and determine the best optimal parameters for their turning operation. This new system consists of two stages: modelling and optimization. In modelling of input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between the academic world and the industry by introducing a simple yet easy to implement optimization technique. This novel optimization technique can give accurate results besides being fast.
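A minimal global-best particle swarm optimizer of the kind used in the optimization stage can be written compactly. The hyperparameters are common textbook defaults, and the surrogate "performance function" of two cutting parameters (speed, feed) is purely hypothetical, not a model fitted in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO; hyperparameters are textbook defaults."""
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, *x.shape))
        # Velocity: inertia + pull toward personal best + global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Hypothetical performance surface with optimum at speed=100, feed=0.2.
f = lambda p: (p[0] - 100.0) ** 2 + 1000.0 * (p[1] - 0.2) ** 2
best, best_f = pso(f, bounds=[(50, 200), (0.05, 0.5)])
```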
The impact of temporal sampling resolution on parameter inference for biological transport models.
Harrison, Jonathan U; Baker, Ruth E
2018-06-25
Imaging data has become an essential tool to explore key biological questions at various scales, for example the motile behaviour of bacteria or the transport of mRNA, and it has the potential to transform our understanding of important transport mechanisms. Often these imaging studies require us to compare biological species or mutants, and to do this we need to quantitatively characterise their behaviour. Mathematical models offer a quantitative description of a system that enables us to perform this comparison, but to relate mechanistic mathematical models to imaging data, we need to estimate their parameters. In this work we study how collecting data at different temporal resolutions impacts our ability to infer parameters of biological transport models, performing exact inference for simple velocity jump process models in a Bayesian framework. The question of how best to choose the frequency with which data is collected is prominent in a host of studies because the majority of imaging technologies place constraints on the frequency with which images can be taken, and the discrete nature of observations can introduce errors into parameter estimates. In this work, we mitigate such errors by formulating the velocity jump process model within a hidden states framework. This allows us to obtain estimates of the reorientation rate and noise amplitude for noisy observations of a simple velocity jump process. We demonstrate the sensitivity of these estimates to temporal variations in the sampling resolution and extent of measurement noise. We use our methodology to provide experimental guidelines for researchers aiming to characterise motile behaviour that can be described by a velocity jump process. In particular, we consider how experimental constraints resulting in a trade-off between temporal sampling resolution and observation noise may affect parameter estimates.
Finally, we demonstrate the robustness of our methodology to model misspecification, and then apply our inference framework to a dataset that was generated with the aim of understanding the localization of RNA-protein complexes.
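The data-generating setting studied above — a velocity jump process observed at a fixed sampling resolution through measurement noise — can be simulated exactly in one dimension. The run speed, reorientation rate, and noise level below are illustrative, not the paper's inferred values:

```python
import numpy as np

rng = np.random.default_rng(7)
speed, lam = 1.0, 0.5        # run speed and reorientation rate (illustrative)
t_end, dt_obs = 200.0, 1.0   # imaging duration and temporal sampling resolution
sigma_noise = 0.1            # measurement noise on observed positions

# Exact simulation of a 1D velocity jump (run-and-tumble) process:
# the particle runs at +/- speed and reverses at Poisson reorientation times.
t, x, direction = 0.0, 0.0, 1.0
obs_times = np.arange(0.0, t_end, dt_obs)
obs, i = [], 0
while i < len(obs_times):
    run = rng.exponential(1.0 / lam)      # time until the next reorientation
    # Record every observation time that falls inside the current run.
    while i < len(obs_times) and obs_times[i] <= t + run:
        obs.append(x + direction * speed * (obs_times[i] - t))
        i += 1
    x += direction * speed * run
    t += run
    direction = -direction                # reorientation event
noisy_obs = np.array(obs) + rng.normal(0.0, sigma_noise, len(obs))
```

Coarsening dt_obs or raising sigma_noise reproduces the sampling-versus-noise trade-off the paper examines; the hidden-states inference itself is beyond this sketch.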
NASA Astrophysics Data System (ADS)
Blume-Kohout, Robin; Zurek, Wojciech H.
2006-06-01
We lay a comprehensive foundation for the study of redundant information storage in decoherence processes. Redundancy has been proposed as a prerequisite for objectivity, the defining property of classical objects. We consider two ensembles of states for a model universe consisting of one system and many environments: the first consisting of arbitrary states, and the second consisting of “singly branching” states consistent with a simple decoherence model. Typical states from the random ensemble do not store information about the system redundantly, but information stored in branching states has a redundancy proportional to the environment’s size. We compute the specific redundancy for a wide range of model universes, and fit the results to a simple first-principles theory. Our results show that the presence of redundancy divides information about the system into three parts: classical (redundant); purely quantum; and the borderline, undifferentiated or “nonredundant,” information.
Applications of the trilinear Hamiltonian with three trapped ions
NASA Astrophysics Data System (ADS)
Hablutzel Marrero, Roland Esteban; Ding, Shiqian; Maslennikov, Gleb; Gan, Jaren; Nimmrichter, Stefan; Roulet, Alexandre; Dai, Jibo; Scarani, Valerio; Matsukevich, Dzmitry
2017-04-01
The trilinear Hamiltonian a† bc + ab†c†, which describes a nonlinear interaction between harmonic oscillators, can be implemented to study different phenomena ranging from simple quantum models to quantum thermodynamics. We engineer this coupling between three modes of motion of three trapped 171Yb+ ions, where the interaction arises naturally from their mutual (anharmonic) Coulomb repulsion. By tuning our trapping parameters we are able to turn on/off resonant exchange of energy between the modes on demand. We present applications of this Hamiltonian for simulations of the parametric down conversion process in the regime of depleted pump, a simple model of Hawking radiation, and the Tavis-Cummings model. We also discuss the implementation of the quantum absorption refrigerator in such a system and experimentally study effects of quantum coherence on its performance. This research is supported by the National Research Foundation, Prime Minister's Office, Singapore and the Ministry of Education, Singapore under the Research Centres of Excellence programme.
NASA Technical Reports Server (NTRS)
Clancy, Edward A.; Smith, Joseph M.; Cohen, Richard J.
1991-01-01
Recent evidence has shown that a subtle alternation in the surface ECG (electrical alternans) may be correlated with the susceptibility to ventricular fibrillation. In the present work, the author presents evidence that a mechanical alternation in the heartbeat (mechanical alternans) generally accompanies electrical alternans. A simple finite-element computer model which emulates both the electrical and the mechanical activity of the heart is presented. A pilot animal study is also reported. The computer model and the animal study both found that (1) there exists a regime of combined electrical-mechanical alternans during the transition from a normal rhythm towards a fibrillatory rhythm, (2) the detected degree of alternation is correlated with the relative instability of the rhythm, and (3) the electrical and mechanical alternans may result from a dispersion in local electrical properties leading to a spatial-temporal alternation in the electrical conduction process.
NASA Astrophysics Data System (ADS)
Diller, Christian; Karic, Sarah; Oberding, Sarah
2017-06-01
The topic of this article is the question of in which phases of the political planning process planners apply their methodological set of tools. To that end, the results of a research project are presented, which were gained from an examination of planning cases in learned journals. First, it is argued which model of the planning process is most suitable to reflect the regarded cases, and how it relates to models of the political process. Thereafter, it is analyzed which types of planning methods are applied in the several stages of the planning process. The central findings: although complex, many planning processes can be thoroughly depicted by a linear model with predominantly simple feedback loops. Even in times of the communicative turn, planners should take care, concerning their set of tools, to apply not only communicative methods but also the classical analytical-rational methods. These are especially helpful for understanding the political process before and after the actual planning phase.
Degradation data analysis based on a generalized Wiener process subject to measurement error
NASA Astrophysics Data System (ADS)
Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar
2017-09-01
Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Then model parameters can be estimated based on a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is accomplished to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach can derive a reasonable result and an enhanced inference precision.
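The basic object underlying this model family — a Wiener degradation path with drift, observed through additive measurement error — can be simulated in a few lines. All values are illustrative, and the naive endpoint drift estimate below merely stands in for the paper's full MLE, which accounts for the error covariance:

```python
import numpy as np

rng = np.random.default_rng(3)

# Degradation path X(t) = mu * t + sigma * B(t), observed with additive
# measurement error (illustrative parameter values).
mu, sigma, sig_err = 0.5, 0.2, 0.05
t = np.linspace(0.0, 10.0, 101)
dt = np.diff(t)
# Independent Gaussian increments give the standard Wiener construction.
x = np.concatenate([[0.0],
                    np.cumsum(mu * dt + sigma * rng.normal(0.0, np.sqrt(dt)))])
y = x + rng.normal(0.0, sig_err, x.size)   # noisy measurements

# Naive drift estimate from the endpoints (ignores the error structure).
mu_hat = (y[-1] - y[0]) / (t[-1] - t[0])
```

Even this crude estimator illustrates why measurement error matters: its variance contains a 2*sig_err**2/T**2 term on top of the sigma**2/T contribution from the process itself.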
Saa, Pedro A.; Nielsen, Lars K.
2016-01-01
Motivation: Computation of steady-state flux solutions in large metabolic models is routinely performed using flux balance analysis based on a simple LP (Linear Programming) formulation. A minimal requirement for thermodynamic feasibility of the flux solution is the absence of internal loops, which are enforced using ‘loopless constraints’. The resulting loopless flux problem is a substantially harder MILP (Mixed Integer Linear Programming) problem, which is computationally expensive for large metabolic models. Results: We developed a pre-processing algorithm that significantly reduces the size of the original loopless problem into an easier and equivalent MILP problem. The pre-processing step employs a fast matrix sparsification algorithm—Fast-sparse null-space pursuit (Fast-SNP)—inspired by recent results on sparse null-space pursuit (SNP). By finding a reduced feasible ‘loop-law’ matrix subject to known directionalities, Fast-SNP considerably improves the computational efficiency in several metabolic models running different loopless optimization problems. Furthermore, analysis of the topology encoded in the reduced loop matrix enabled identification of key directional constraints for the potential permanent elimination of infeasible loops in the underlying model. Overall, Fast-SNP is an effective and simple algorithm for efficient formulation of loop-law constraints, making loopless flux optimization feasible and numerically tractable at large scale. Availability and Implementation: Source code for MATLAB including examples is freely available for download at http://www.aibn.uq.edu.au/cssb-resources under Software. Optimization uses Gurobi, CPLEX or GLPK (the latter is included with the algorithm). Contact: lars.nielsen@uq.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27559155
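The "loop-law" matrix that Fast-SNP sparsifies lives in the right null space of the internal-reaction stoichiometry. A toy three-reaction cycle makes this concrete; the stoichiometric matrix is invented for illustration and has nothing to do with the genome-scale models benchmarked in the paper:

```python
import numpy as np
from scipy.linalg import null_space

# Toy stoichiometric matrix (rows: metabolites A, B, C; columns:
# reactions r0: A->B, r1: B->C, r2: C->A), i.e. a 3-cycle.
S = np.array([[-1.0,  0.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 0.0,  1.0, -1.0]])

# Any flux vector in the right null space of S is an internal cycle:
# it balances every metabolite while carrying nonzero flux, which is
# exactly what the loopless constraints must forbid.
N = null_space(S)                  # here a single basis vector
loop = N[:, 0] / N[:, 0].max()     # normalize: the cycle (1, 1, 1)
```

Fast-SNP's contribution is finding a sparse basis of this space for matrices with thousands of reactions, where the dense SVD-based basis used above would obscure the loop structure.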
Sukumaran, Jeet; Knowles, L Lacey
2018-06-01
The development of process-based probabilistic models for historical biogeography has transformed the field by grounding it in modern statistical hypothesis testing. However, most of these models abstract away biological differences, reducing species to interchangeable lineages. We present here the case for reintegration of biology into probabilistic historical biogeographical models, allowing a broader range of questions about biogeographical processes beyond ancestral range estimation or simple correlation between a trait and a distribution pattern, as well as allowing us to assess how inferences about ancestral ranges themselves might be impacted by differential biological traits. We show how new approaches to inference might cope with the computational challenges resulting from the increased complexity of these trait-based historical biogeographical models. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Holzmann, Hubert; Massmann, Carolina
2015-04-01
A wide variety of hydrological model types has been developed during the past decades. Most of them use a fixed design to describe the variable hydrological processes, assumed to be representative for the whole range of spatial and temporal scales. This assumption is questionable, as it is evident that the runoff formation process is driven by dominant processes which can vary among basins. Furthermore, model application and the interpretation of results are limited by the data available to identify the particular sub-processes, since most models are calibrated and validated only with discharge data. It can therefore be hypothesized that simpler model designs, focusing only on the dominant processes, can achieve comparable results with the benefit of fewer parameters. In the current contribution a modular model concept is introduced, which allows the integration or omission of hydrological sub-processes depending on catchment characteristics and data availability. Key elements of the process modules refer to (1) storage effects (interception, soil), (2) transfer processes (routing), (3) threshold processes (percolation, saturation overland flow) and (4) split processes (rainfall excess). Based on hydro-meteorological observations in an experimental catchment in the Slovak region of the Carpathian mountains, a comparison of several model realizations with different degrees of complexity is discussed. A special focus is given to model parameter sensitivity, estimated by a Markov chain Monte Carlo approach. Furthermore, the identification of dominant processes by means of Sobol's method is introduced. It could be shown that a flexible model design - even the simple concept - can reach performance comparable and equivalent to the standard model type (HBV-type). The main benefit of the modular concept is the individual adaptation of the model structure with respect to data and process availability and the option for parsimonious model design.
Modeling of ultrasonic processes utilizing a generic software framework
NASA Astrophysics Data System (ADS)
Bruns, P.; Twiefel, J.; Wallaschek, J.
2017-06-01
Modeling of ultrasonic processes is typically characterized by a high degree of complexity. Different domains and size scales must be considered, so that it is rather difficult to build up a single detailed overall model. Developing partial models is a common approach to overcome this difficulty. In this paper a generic but simple software framework is presented which allows arbitrary partial models to be coupled by slave modules with well-defined interfaces and a master module for coordination. Two examples are given to present the developed framework. The first is the parameterization of a load model for ultrasonically-induced cavitation. The piezoelectric oscillator, its mounting, and the process load are described individually by partial models. These partial models are then coupled using the framework. The load model is composed of spring-damper elements which are parameterized by experimental results. In the second example, the ideal mounting position is determined for an oscillator utilized in ultrasonic assisted machining of stone. Partial models for the ultrasonic oscillator, its mounting, the simplified contact process, and the workpiece's material characteristics are presented. For both applications input and output variables are defined to meet the requirements of the framework's interface.
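A minimal sketch of such a master/slave coupling in Python, with invented toy dynamics standing in for the oscillator and process-load partial models (the class names and coefficients are illustrative, not the paper's):

```python
from abc import ABC, abstractmethod

class PartialModel(ABC):
    """Slave module: a partial model with a well-defined input/output interface."""
    @abstractmethod
    def step(self, inputs: dict) -> dict: ...

class Oscillator(PartialModel):
    def step(self, inputs):
        # toy dynamics: velocity driven by voltage, damped by the load force
        return {"velocity": 0.1 * inputs["voltage"] - 0.5 * inputs["load_force"]}

class ProcessLoad(PartialModel):
    def step(self, inputs):
        # toy spring-damper load reacting to the oscillator velocity
        return {"load_force": 0.2 * inputs["velocity"]}

class Master:
    """Master module: coordinates the coupled partial models each time step."""
    def __init__(self, models):
        self.models = models
    def run(self, state, n_steps):
        for _ in range(n_steps):
            for model in self.models:
                state.update(model.step(state))
        return state

state = Master([Oscillator(), ProcessLoad()]).run(
    {"voltage": 1.0, "velocity": 0.0, "load_force": 0.0}, n_steps=100)
```

The design point is that the master only sees the shared state dictionary, so partial models can be swapped or added without changing each other, which is the framework's stated benefit.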
Li, Mingjie; Zhou, Ping; Zhao, Zhicheng; Zhang, Jinggang
2016-03-01
Recently, fractional order (FO) processes with dead-time have attracted more and more attention from researchers in the control field, but the FO-PID controller design techniques available for FO processes with dead-time suffer from a lack of direct systematic approaches. In this paper, a simple design and parameter-tuning approach for a two-degree-of-freedom (2-DOF) FO-PID controller based on internal model control (IMC) is proposed for FO processes with dead-time. Conventional one-degree-of-freedom control exhibits the shortcoming of coupling robustness and dynamic response performance; 2-DOF control overcomes this weakness by decoupling robustness and dynamic performance from each other. The adjustable parameter η2 of the FO-PID controller is directly related to the robustness of the closed-loop system, and an analytical expression is given relating the maximum sensitivity specification Ms to the parameter η2. In addition, according to the dynamic performance requirements of the practical system, the parameter η1 can also be selected easily. By approximating the dead-time term of the process model with a first-order Padé or Taylor series, expressions for the 2-DOF FO-PID controller parameters are derived for three classes of FO processes with dead-time. Moreover, compared with other methods, the proposed method is simple and easy to implement. Finally, simulation results are given to illustrate the effectiveness of the method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
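The first-order Padé approximation of the dead-time term, e^{-Ls} ≈ (1 − Ls/2)/(1 + Ls/2), is easy to check numerically. This sketch (with an illustrative dead-time L = 1) compares it with the exact term on the imaginary axis, where it is accurate at low frequency and all-pass by construction:

```python
import numpy as np

def pade1(s, L):
    """First-order Padé approximation of the dead-time term exp(-L*s)."""
    return (1 - L * s / 2) / (1 + L * s / 2)

L = 1.0
w = np.logspace(-2, 0, 50)     # frequencies up to 1 rad/s
s = 1j * w
exact = np.exp(-L * s)
approx = pade1(s, L)
max_err = np.max(np.abs(exact - approx))   # worst error over this band
```

Because numerator and denominator are complex conjugates on the imaginary axis, |pade1(jw, L)| = 1 exactly, matching the unit gain of a pure delay; only the phase is approximated.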
A Conceptual Model of the Cognitive Processing of Environmental Distance Information
NASA Astrophysics Data System (ADS)
Montello, Daniel R.
I review theories and research on the cognitive processing of environmental distance information by humans, particularly that acquired via direct experience in the environment. The cognitive processes I consider for acquiring and thinking about environmental distance information include working-memory, nonmediated, hybrid, and simple-retrieval processes. Based on my review of the research literature, and additional considerations about the sources of distance information and the situations in which it is used, I propose an integrative conceptual model to explain the cognitive processing of distance information that takes account of the plurality of possible processes and information sources, and describes conditions under which particular processes and sources are likely to operate. The mechanism of summing vista distances is identified as widely important in situations with good visual access to the environment. Heuristics based on time, effort, or other information are likely to play their most important role when sensory access is restricted.
Coupling surface and mantle dynamics: A novel experimental approach
NASA Astrophysics Data System (ADS)
Kiraly, Agnes; Faccenna, Claudio; Funiciello, Francesca; Sembroni, Andrea
2015-05-01
Recent modeling shows that surface processes, such as erosion and deposition, may drive the deformation of the Earth's surface, interfering with deeper crustal and mantle signals. To investigate the coupling between surface and deep processes, we designed a three-dimensional laboratory apparatus to analyze the role of erosion and sedimentation triggered by deep mantle instability. The setup follows a thin viscous sheet model scaled to the natural gravity field, with the mantle and lithosphere simulated by Newtonian viscous glucose syrup and silicone putty, respectively. The surface process is simulated assuming a simple erosion law producing the downhill flow of a thin viscous material away from high topography. The deep mantle upwelling is triggered by the rise of a buoyant sphere. The results of these models, along with the parametric analysis, show how surface processes influence uplift velocity and topography signals.
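An erosion law of the kind described (downhill flow of material away from high topography at a rate set by the local slope) behaves like linear diffusion of the topography. A hedged 1-D sketch with invented parameters, not the laboratory scaling:

```python
import numpy as np

def erode(topo, kappa=0.2, n_steps=200):
    """Toy erosion law: topography diffuses as material flows downhill
    at a rate proportional to local slope (explicit scheme, kappa < 0.5)."""
    topo = topo.copy()
    for _ in range(n_steps):
        lap = np.zeros_like(topo)
        lap[1:-1] = topo[:-2] - 2 * topo[1:-1] + topo[2:]
        topo += kappa * lap      # smooths high topography, fills lows
    return topo

x = np.linspace(-1, 1, 101)
initial = np.exp(-(x / 0.2) ** 2)   # uplifted dome above a rising buoyant sphere
final = erode(initial)
```

The dome's peak decays and its flanks widen, which is the qualitative competition between uplift and surface redistribution the experiments probe.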
The Friction Force Determination of Large-Sized Composite Rods in Pultrusion
NASA Astrophysics Data System (ADS)
Grigoriev, S. N.; Krasnovskii, A. N.; Kazakov, I. A.
2014-08-01
Nowadays, simple pull-force models of the pultrusion process are not suitable for large-sized rods because they do not consider the chemical shrinkage and thermal expansion acting on the cured material inside the die. Yet the pulling force on the resin-impregnated fibers as they travel through the heated die is an essential factor in the pultrusion process. In order to minimize the number of trial-and-error experiments, a new mathematical approach to determine the frictional force is presented. The governing equations of the model are stated in general terms, and various simplifications are implemented in order to obtain solutions without extensive numerical effort. The influence of different pultrusion parameters on the frictional force value is investigated. The results obtained by the model can establish a foundation by which process control parameters are selected to achieve an appropriate pull-force, and can be used for optimization of the pultrusion process.
Epidemic spreading in time-varying community networks.
Ren, Guangming; Wang, Xingyuan
2014-06-01
The spreading processes of many infectious diseases have time scales comparable to that of network evolution. Here, we present a simple network model with time-varying community structure, and investigate susceptible-infected-susceptible (SIS) epidemic spreading processes in this model. By both theoretical analysis and numerical simulations, we show that the efficiency of epidemic spreading in this model depends intensively on the mobility rate q of individuals among communities. We also find that there exists a mobility rate threshold qc: the epidemic will survive when q > qc and die out when q < qc. These results can help in understanding the impact of human travel on epidemic spreading in complex networks with community structure.
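A minimal stochastic sketch of such a model (illustrative parameters, not the paper's): infection passes only within a community, and individuals hop between communities with probability q per step, so the number of communities the epidemic reaches depends strongly on q:

```python
import random

def sis_communities(q, n_comm=5, size=20, beta=0.05, mu=0.1,
                    steps=200, seed=1):
    """SIS spreading with time-varying communities: transmission only
    within a community; each individual switches to a random community
    with probability q per time step (all values illustrative)."""
    rng = random.Random(seed)
    comm = [i // size for i in range(n_comm * size)]
    infected = set(range(10))        # seed cases, all in community 0
    reached = {0}                    # communities visited by infecteds
    for _ in range(steps):
        new_inf = set()
        for i in infected:           # within-community transmission
            for j in range(len(comm)):
                if j not in infected and comm[j] == comm[i] \
                        and rng.random() < beta:
                    new_inf.add(j)
        infected = {i for i in infected if rng.random() >= mu} | new_inf
        for i in range(len(comm)):   # mobility between communities
            if rng.random() < q:
                comm[i] = rng.randrange(n_comm)
        reached |= {comm[i] for i in infected}
        if not infected:
            break
    return len(infected), len(reached)

_, reached_static = sis_communities(q=0.0)
_, reached_mobile = sis_communities(q=0.2)
```

With q = 0 the epidemic stays confined to the seeded community; with q above threshold it invades the others, which is the qualitative q-dependence the paper analyzes.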
NASA Technical Reports Server (NTRS)
Nathan, Terrence R.; Yarger, Douglas N.
1989-01-01
The research is comprised of the following tasks: use of simple analytical and numerical models of a coupled troposphere-stratosphere system to examine the effects of radiation and ozone on planetary wave dynamics and the tropospheric circulation; use of satellite data obtained from the Nimbus 7 Limb Infrared Monitor of the Stratosphere (LIMS) instrument and Solar Backscattered Ultraviolet (SBUV) experiment, in conjunction with National Meteorological Center (NMC) data, to determine the planetary wave vertical structures, dominant wave spectra, ozone spectra, and time variations in diabatic heating rate; and synthesis of the modeling and observational results to provide a better understanding of the effects that stratospheric processes have on tropospheric dynamics.
Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide
2014-06-01
The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have the potential to handle metabolome data of a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the plant Arabidopsis thaliana. The result confirms that the constructed mathematical model agrees satisfactorily with the time-series datasets of seven metabolite concentrations.
A simple rule for the costs of vigilance: empirical evidence from a social forager.
Cowlishaw, Guy; Lawes, Michael J.; Lightbody, Margaret; Martin, Alison; Pettifor, Richard; Rowcliffe, J. Marcus
2004-01-01
It is commonly assumed that anti-predator vigilance by foraging animals is costly because it interrupts food searching and handling time, leading to a reduction in feeding rate. When food handling does not require visual attention, however, a forager may handle food while simultaneously searching for the next food item or scanning for predators. We present a simple model of this process, showing that when the length of such compatible handling time Hc is long relative to search time S, specifically Hc/S > 1, it is possible to perform vigilance without a reduction in feeding rate. We test three predictions of this model regarding the relationships between feeding rate, vigilance and the Hc/S ratio, with data collected from a wild population of social foragers (samango monkeys, Cercopithecus mitis erythrarchus). These analyses consistently support our model, including our key prediction: as Hc/S increases, the negative relationship between feeding rate and the proportion of time spent scanning becomes progressively shallower. This pattern is more strongly driven by changes in median scan duration than scan frequency. Our study thus provides a simple rule that describes the extent to which vigilance can be expected to incur a feeding rate cost. PMID:15002768
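The rule can be sketched as a tiny model (with illustrative S and Hc values): scanning is free up to the spare compatible handling time max(Hc − S, 0), and only scanning beyond that lengthens the foraging cycle and lowers feeding rate:

```python
def feeding_rate(scan_time, S=1.0, Hc=2.5):
    """Items per unit time when food handling is visually compatible with
    searching and scanning. Spare handling time max(Hc - S, 0) absorbs
    vigilance at no cost; only scanning beyond it slows the forager.
    (S and Hc values are illustrative, not the monkey data.)"""
    free_scan = max(Hc - S, 0.0)
    costly_scan = max(scan_time - free_scan, 0.0)
    cycle = max(Hc, S) + costly_scan    # time per food item
    return 1.0 / cycle
```

With Hc/S = 2.5 here, scanning up to 1.5 time units per item costs nothing, reproducing the key prediction that a larger Hc/S flattens the feeding-rate/vigilance trade-off.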
Adaptive Neural Networks for Automatic Negotiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakas, D. P.; Vlachos, D. S.; Simos, T. E.
The use of fuzzy logic and fuzzy neural networks has been found effective for modelling the uncertain relations between the parameters of a negotiation procedure. The problem with these configurations is that they are static; that is, any new knowledge from theory or experiment leads to the construction of entirely new models. To overcome this difficulty, we apply in this work an adaptive neural topology to model the negotiation process. Finally, a simple simulation is carried out in order to test the new method.
NASA Astrophysics Data System (ADS)
Lim, Yeerang; Jung, Youeyun; Bang, Hyochoong
2018-05-01
This study presents model predictive formation control based on an eccentricity/inclination vector separation strategy. Collision avoidance can alternatively be accomplished by using eccentricity/inclination vectors and adding a simple goal-function term to the optimization process. Real-time control is also achievable with a model predictive controller based on a convex formulation. A constraint-tightening approach is addressed as well to improve the robustness of the controller, and simulation results are presented to verify the performance enhancement of the proposed approach.
Microeconomics of 300-mm process module control
NASA Astrophysics Data System (ADS)
Monahan, Kevin M.; Chatterjee, Arun K.; Falessi, Georges; Levy, Ady; Stoller, Meryl D.
2001-08-01
Simple microeconomic models that directly link metrology, yield, and profitability are rare or non-existent. In this work, we validate and apply such a model. Using a small number of input parameters, we explain current yield management practices in 200 mm factories. The model is then used to extrapolate requirements for 300 mm factories, including the impact of simultaneous technology transitions to 130nm lithography and integrated metrology. To support our conclusions, we use examples relevant to factory-wide photo module control.
Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach
NASA Technical Reports Server (NTRS)
Mak, Victor W. K.
1986-01-01
Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
Gravitational orientation of the orbital complex, Salyut-6--Soyuz
NASA Technical Reports Server (NTRS)
Grecho, G. M.; Sarychev, V. A.; Legostayev, V. P.; Sazonov, V. V.; Gansvind, I. N.
1983-01-01
A simple mathematical model is proposed for the motion of the Salyut-6-Soyuz orbital complex with respect to its center of mass under the one-axis gravity-gradient orientation regime. This model was used for processing the measurements of the orbital complex motion parameters obtained when the above orientation regime was implemented. Some actual satellite motions are simulated and the satellite's aerodynamic parameters are determined. Estimates are obtained for the accuracy of the measurements as well as that of the mathematical model.
Fischer, Thomas; Fischer, Susanne; Himmel, Wolfgang; Kochen, Michael M; Hummers-Pradier, Eva
2008-01-01
The influence of patient characteristics on family practitioners' (FPs') diagnostic decision making has mainly been investigated using indirect methods such as vignettes or questionnaires. Direct observation-borrowed from social and cultural anthropology-may be an alternative method for describing FPs' real-life behavior and may help in gaining insight into how FPs diagnose respiratory tract infections, which are frequent in primary care. To clarify FPs' diagnostic processes when treating patients suffering from symptoms of respiratory tract infection. This direct observation study was performed in 30 family practices using a checklist for patient complaints, history taking, physical examination, and diagnoses. The influence of patients' symptoms and complaints on the FPs' physical examination and diagnosis was calculated by logistic regression analyses. Dummy variables based on combinations of symptoms and complaints were constructed and tested against saturated (full) and backward regression models. In total, 273 patients (median age 37 years, 51% women) were included. The median number of symptoms described was 4 per patient, and most information was provided at the patients' own initiative. Multiple logistic regression analysis showed a strong association between patients' complaints and the physical examination. Frequent diagnoses were upper respiratory tract infection (URTI)/common cold (43%), bronchitis (26%), sinusitis (12%), and tonsillitis (11%). There were no significant statistical differences between "simple heuristic" models and saturated regression models in the diagnoses of bronchitis, sinusitis, and tonsillitis, indicating that simple heuristics are probably used by the FPs, whereas "URTI/common cold" was better explained by the full model. FPs tended to make their diagnosis based on a few patient symptoms and a limited physical examination. Simple heuristic models were almost as powerful in explaining most diagnoses as saturated models.
Direct observation allowed for the study of decision making under real conditions, yielding both quantitative data and "qualitative" information about the FPs' performance. It is important for investigators to be aware of the specific disadvantages of the method (e.g., a possible observer effect).
Applying Knowledge Discovery in Databases in Public Health Data Set: Challenges and Concerns
Volrathongchia, Kanittha
2003-01-01
In attempting to apply Knowledge Discovery in Databases (KDD) to generate a predictive model from a health care dataset that is currently available to the public, the first step is to pre-process the data to overcome the challenges of missing data, redundant observations, and records containing inaccurate data. This study will demonstrate how to use simple pre-processing methods to improve the quality of input data. PMID:14728545
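A hedged sketch of such simple pre-processing steps (the records and field names are hypothetical), covering the three challenges named: duplicate records, inaccurate values, and missing-data imputation:

```python
from statistics import median

def preprocess(records, valid_age=(0, 120)):
    """Simple KDD pre-processing sketch: drop exact duplicates, discard
    records with out-of-range ages, impute missing ages with the median."""
    seen, rows = set(), []
    for rec in records:                      # 1) de-duplicate
        key = tuple(sorted(rec.items()))
        if key not in seen:
            seen.add(key)
            rows.append(dict(rec))
    rows = [r for r in rows                  # 2) drop inaccurate values
            if r.get("age") is None or valid_age[0] <= r["age"] <= valid_age[1]]
    fill = median(r["age"] for r in rows if r.get("age") is not None)
    for r in rows:                           # 3) impute missing values
        if r.get("age") is None:
            r["age"] = fill
    return rows

raw = [{"id": 1, "age": 34}, {"id": 1, "age": 34},      # duplicate
       {"id": 2, "age": None},                          # missing
       {"id": 3, "age": 230}, {"id": 4, "age": 50}]     # inaccurate
clean = preprocess(raw)
```

Median imputation and range checks are only one defensible choice; the point is that data quality is fixed before any model induction step.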
Laser Desorption Mass Spectrometry. II. Applications to Structural Analysis.
1982-02-02
The various processes are shown in Figure 2. Ions produced directly in the region of the laser pulse ... will be generated only while the laser ... of the laser pulse, which frequently has not been considered in wavelength dependence studies. Although the time-profile of the laser pulse is a simple ... dominate (10). Models of Volatilization/Ionization - There are at least five processes to be considered when discussing volatilization/ionization by ...
VisTrails SAHM: visualization and workflow management for species habitat modeling
Morisette, Jeffrey T.; Jarnevich, Catherine S.; Holcombe, Tracy R.; Talbert, Colin B.; Ignizio, Drew A.; Talbert, Marian; Silva, Claudio; Koop, David; Swanson, Alan; Young, Nicholas E.
2013-01-01
The Software for Assisted Habitat Modeling (SAHM) has been created to both expedite habitat modeling and help maintain a record of the various input data, pre- and post-processing steps and modeling options incorporated in the construction of a species distribution model through the established workflow management and visualization VisTrails software. This paper provides an overview of the VisTrails:SAHM software including a link to the open source code, a table detailing the current SAHM modules, and a simple example modeling an invasive weed species in Rocky Mountain National Park, USA.
Kobayashi, Seiji
2002-05-10
A point-spread function (PSF) is commonly used as a model of an optical disk readout channel. However, the model given by the PSF does not contain the quadratic distortion generated by the photo-detection process. We introduce a model for calculating an approximation of the quadratic component of a signal. We show that this model can be further simplified when a read-only-memory (ROM) disk is assumed. We introduce an edge-spread function by which a simple nonlinear model of an optical ROM disk readout channel is created.
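A toy version of such a nonlinear readout model (not the paper's edge-spread formulation): a linear PSF convolution of the pit/land pattern plus a quadratic distortion term from photo-detection, with a hypothetical weight gamma:

```python
import numpy as np

def readout(pattern, psf, gamma=0.3):
    """Toy optical-disk readout: linear PSF term plus a quadratic
    photo-detection distortion (gamma is an invented weight)."""
    linear = np.convolve(pattern, psf, mode="same")
    return linear + gamma * linear ** 2

x = np.arange(-10, 11)
psf = np.exp(-(x / 3.0) ** 2)
psf /= psf.sum()                         # normalized blur kernel
pits = np.tile([1.0] * 8 + [0.0] * 8, 4)  # ROM pit/land pattern
signal = readout(pits, psf)
```

A purely linear PSF channel model would stop at the `linear` term; the quadratic term is the component the paper's approximation targets.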
A simulation of water pollution model parameter estimation
NASA Technical Reports Server (NTRS)
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
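The estimation loop described can be sketched in simplified 1-D form (all parameter values invented): simulate noisy "remote-sensed" concentrations from a diffusion model, then recover the diffusivity by least squares over a parameter grid, a stand-in for the batch processor:

```python
import numpy as np

rng = np.random.default_rng(3)

def concentration(x, t, D, M=1.0):
    """1-D instantaneous-release diffusion (simplified transport model)."""
    return M / np.sqrt(4 * np.pi * D * t) * np.exp(-x**2 / (4 * D * t))

# simulate noisy remote-sensed concentrations for a true D of 0.5
x = np.linspace(-5, 5, 41)
true_D, t = 0.5, 2.0
data = concentration(x, t, true_D) + rng.normal(0, 0.005, x.size)

# least-squares estimate of D over a parameter grid
grid = np.linspace(0.1, 1.5, 281)
sse = [np.sum((data - concentration(x, t, D))**2) for D in grid]
D_hat = grid[int(np.argmin(sse))]
```

Repeating this with different noise levels, sensor counts, or sensor placements reproduces the study's question of how measurement design affects estimate accuracy.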
Towards a simple representation of chalk hydrology in land surface modelling
NASA Astrophysics Data System (ADS)
Rahman, Mostaquimur; Rosolem, Rafael
2017-01-01
Modelling and monitoring of hydrological processes in the unsaturated zone of chalk, a porous medium with fractures, is important to optimize water resource assessment and management practices in the United Kingdom (UK). However, incorporating the processes governing water movement through a chalk unsaturated zone in a numerical model is complicated mainly due to the fractured nature of chalk that creates high-velocity preferential flow paths in the subsurface. In general, flow through a chalk unsaturated zone is simulated using the dual-porosity concept, which often involves calibration of a relatively large number of model parameters, potentially undermining applications to large regions. In this study, a simplified parameterization, namely the Bulk Conductivity (BC) model, is proposed for simulating hydrology in a chalk unsaturated zone. This new parameterization introduces only two additional parameters (namely the macroporosity factor and the soil wetness threshold parameter for fracture flow activation) and uses the saturated hydraulic conductivity from the chalk matrix. The BC model is implemented in the Joint UK Land Environment Simulator (JULES) and applied to a study area encompassing the Kennet catchment in the southern UK. This parameterization is further calibrated at the point scale using soil moisture profile observations. The performance of the calibrated BC model in JULES is assessed and compared against the performance of both the default JULES parameterization and the uncalibrated version of the BC model implemented in JULES. Finally, the model performance at the catchment scale is evaluated against independent data sets (e.g. runoff and latent heat flux). The results demonstrate that the inclusion of the BC model in JULES improves simulated land surface mass and energy fluxes over the chalk-dominated Kennet catchment. 
Therefore, the simple approach described in this study may be used to incorporate the flow processes through a chalk unsaturated zone in large-scale land surface modelling applications.
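The threshold behaviour such a two-parameter scheme implies can be sketched as follows; the parameter values and the linear fracture activation are illustrative assumptions, not the calibrated BC model:

```python
def bulk_conductivity(theta, theta_sat, k_matrix,
                      macroporosity_factor=100.0, wetness_threshold=0.9):
    """BC-style sketch: below the soil-wetness threshold only matrix flow
    occurs; above it, fractures activate and boost conductivity up to a
    macroporosity factor (both parameter values are invented)."""
    wetness = theta / theta_sat
    if wetness < wetness_threshold:
        return k_matrix                      # matrix-dominated flow
    # linear activation of fracture (preferential) flow up to saturation
    frac = (wetness - wetness_threshold) / (1.0 - wetness_threshold)
    return k_matrix * (1.0 + (macroporosity_factor - 1.0) * frac)
```

Only the matrix conductivity plus the two extra parameters are needed, which is the parsimony argument made for the BC model against full dual-porosity calibration.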
47 CFR 52.36 - Standard data fields for simple port order processing.
Code of Federal Regulations, 2010 CFR
2010-10-01
Title 47 (Telecommunication), Vol. 3, 2010-10-01: FEDERAL COMMUNICATIONS COMMISSION (CONTINUED), COMMON CARRIER SERVICES (CONTINUED), NUMBERING, Number Portability — § 52.36 Standard data fields for simple port order processing. (a) A telecommunications...
Selb, Melissa; Gimigliano, Francesca; Prodinger, Birgit; Stucki, Gerold; Pestelli, Germano; Iocco, Maurizio; Boldrini, Paolo
2017-04-01
As part of international efforts to develop and implement national models, including the specification of ICF-based clinical data collection tools, the Italian rehabilitation community initiated a project to develop simple, intuitive descriptions of the ICF Rehabilitation Set, highlighting the core concept of each category in user-friendly language. This paper outlines the Italian experience in developing simple, intuitive descriptions of the ICF Rehabilitation Set as an ICF-based clinical data collection tool for Italy. Consensus process. Expert conference. Multidisciplinary group of rehabilitation professionals. The first of a two-stage consensus process involved developing an initial proposal for simple, intuitive descriptions of each ICF Rehabilitation Set category, based on descriptions generated in a similar process in China. Stage two involved a consensus conference. Divided into three working groups, participants discussed and voted (vote A) on whether the initially proposed description of each ICF Rehabilitation Set category was simple and intuitive enough for use in daily practice. Afterwards, the categories with descriptions considered ambiguous, i.e. not simple and intuitive enough, were divided among the working groups, who were asked to propose a new description for their allocated categories. These proposals were then voted on (vote B) in a plenary session. The last step of the consensus conference required each working group to develop a new proposal for each of the categories whose descriptions were still considered ambiguous. Participants then voted (final vote) for which of the three proposed descriptions they preferred. Nineteen clinicians from diverse rehabilitation disciplines from various regions of Italy participated in the consensus process. Three ICF categories already achieved consensus in vote A, while 20 ICF categories were accepted in vote B. The remaining 7 categories were decided in the final vote.
The findings were discussed in light of current efforts toward developing strategies for ICF implementation, specifically for the application of an ICF-based clinical data collection tool, not only for Italy but also for the rest of Europe. Promising as minimal standards for monitoring the impact of interventions and for standardized reporting of functioning as a relevant outcome in rehabilitation.
Kim, Y S; Balland, V; Limoges, B; Costentin, C
2017-07-21
Cyclic voltammetry is a particularly useful tool for characterizing charge accumulation in conductive materials. A simple model is presented to evaluate proton transport effects on charge storage in conductive materials associated with a redox process coupled with proton insertion in the bulk material from an aqueous buffered solution, a situation frequently encountered in metal oxide materials. The interplay between proton transport inside and outside the materials is described using a formulation of the problem through introduction of dimensionless variables that allows defining the minimum number of parameters governing the cyclic voltammetry response with consideration of a simple description of the system geometry. This approach is illustrated by analysis of proton insertion in a mesoporous TiO2 film.
Large deviation analysis of a simple information engine
NASA Astrophysics Data System (ADS)
Maitland, Michael; Grosskinsky, Stefan; Harris, Rosemary J.
2015-11-01
Information thermodynamics provides a framework for studying the effect of feedback loops on entropy production. It has enabled the understanding of novel thermodynamic systems such as the information engine, which can be seen as a modern version of "Maxwell's Dæmon," whereby a feedback controller processes information gained by measurements in order to extract work. Here, we analyze a simple model of such an engine that uses feedback control based on measurements to obtain negative entropy production. We focus on the distribution and fluctuations of the information obtained by the feedback controller. Significantly, our model allows an analytic treatment for a two-state system with exact calculation of the large deviation rate function. These results suggest an approximate technique for larger systems, which is corroborated by simulation data.
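The information gathered by the feedback controller can be illustrated with a hedged sketch (the measurement error rate is an invented value): for a symmetric two-state system, the average pointwise information per measurement equals the mutual information 1 − H2(e) bits, while individual measurements fluctuate around it — the kind of fluctuation a large deviation analysis characterizes:

```python
import math, random

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def simulate_information(error=0.1, n=100000, seed=7):
    """Empirical mean of the pointwise information log2 P(m|x)/P(m)
    gained per noisy measurement of a symmetric two-state system."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.random() < 0.5                       # hidden state, uniform prior
        m = x if rng.random() >= error else (not x)  # noisy measurement
        p_m_given_x = 1 - error if m == x else error
        total += math.log2(p_m_given_x / 0.5)        # P(m) = 1/2 by symmetry
    return total / n

avg_info = simulate_information()
theory = 1 - h2(0.1)     # mutual information of the measurement channel
```

The per-measurement values take only two magnitudes here (a gain when the reading is right, a larger loss when it is wrong), so their empirical distribution is a minimal stand-in for the information fluctuations the rate function quantifies.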
A parallel computational model for GATE simulations.
Rannou, F R; Vega-Acevedo, N; El Bitar, Z
2013-12-01
GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
A simple model for remineralization of subsurface lesions in tooth enamel
NASA Astrophysics Data System (ADS)
Christoffersen, J.; Christoffersen, M. R.; Arends, J.
1982-12-01
A model for remineralization of subsurface lesions in tooth enamel is presented. The important assumption on which the model is based is that the rate-controlling process is the crystal surface process by which ions are incorporated in the crystallites; that is, the transport of ions through small holes in the so-called intact surface layer does not influence the rate of mineral uptake at the crystal surface. Further, the density of mineral in the lesion is assumed to increase down the lesion, when the remineralization process is started. It is shown that the dimension of the initial holes in the enamel surface layer must be larger than the dimension of the individual crystallites in order to prevent the formation of arrested lesions. Theoretical expressions for the progress of remineralization are given. The suggested model emphasizes the need for measurements of mineral densities in the lesion, prior to, and during the lesion repair.
Chill Down Process of Hydrogen Transport Pipelines
NASA Technical Reports Server (NTRS)
Mei, Renwei; Klausner, James
2006-01-01
A pseudo-steady model has been developed to predict the chilldown history of pipe wall temperature in the horizontal transport pipeline for cryogenic fluids. A new film boiling heat transfer model is developed by incorporating the stratified flow structure for cryogenic chilldown. A modified nucleate boiling heat transfer correlation for cryogenic chilldown process inside a horizontal pipe is proposed. The efficacy of the correlations is assessed by comparing the model predictions with measured values of wall temperature in several azimuthal positions in a well controlled experiment by Chung et al. (2004). The computed pipe wall temperature histories match well with the measured results. The present model captures important features of thermal interaction between the pipe wall and the cryogenic fluid, provides a simple and robust platform for predicting pipe wall chilldown history in long horizontal pipe at relatively low computational cost, and builds a foundation to incorporate the two-phase hydrodynamic interaction in the chilldown process.
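The two boiling regimes that govern chilldown can be caricatured with a lumped-capacitance wall model: a low film-boiling heat transfer coefficient above the Leidenfrost temperature, then a much higher nucleate-boiling coefficient below it. The sketch below is illustrative only; all coefficients and the single-node wall are assumptions, not the paper's stratified-flow model.

```python
def chilldown(T0, T_sat=20.0, T_leidenfrost=150.0, h_film=100.0,
              h_nucleate=2000.0, area_over_mc=1e-4, dt=0.1, t_end=600.0):
    """Lumped-capacitance sketch of pipe-wall chilldown (temperatures in K):
    film boiling (low h) above the Leidenfrost point, nucleate boiling
    (high h) below it. Coefficients are illustrative, not fitted values."""
    T, t = T0, 0.0
    history = []
    while t < t_end:
        h = h_film if T > T_leidenfrost else h_nucleate
        T -= h * area_over_mc * (T - T_sat) * dt  # forward-Euler cooling step
        t += dt
        history.append((t, T))
    return history

# Ambient-temperature wall quenched toward liquid-hydrogen saturation:
hist = chilldown(T0=300.0)
```

The regime switch reproduces the characteristic chilldown shape: a slow film-boiling plateau followed by a rapid temperature drop once nucleate boiling sets in.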
Correlation of recent fission product release data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kress, T.S.; Lorenz, R.A.; Nakamura, T.
For the calculation of source terms associated with severe accidents, it is necessary to model the release of fission products from fuel as it heats and melts. Perhaps the most definitive model for fission product release is that of the FASTGRASS computer code developed at Argonne National Laboratory. There is persuasive evidence that these processes, as well as additional chemical and gas phase mass transport processes, are important in the release of fission products from fuel. Nevertheless, it has been found convenient to have simplified fission product release correlations that may not be as definitive as models like FASTGRASS but which attempt in some simple way to capture the essence of the mechanisms. One of the most widely used of these correlations is CORSOR-M, the present fission product/aerosol release model used in the NRC Source Term Code Package. CORSOR has been criticized as having too much uncertainty in the calculated releases and as not accurately reproducing some experimental data. It is currently believed that these discrepancies between CORSOR and the more recent data have arisen because of the better time resolution of the more recent data compared to the data base that went into the CORSOR correlation. This document discusses a simple correlational model for use in connection with NUREG risk uncertainty exercises. 8 refs., 4 figs., 1 tab.
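CORSOR-M expresses the fractional release rate in Arrhenius form, which makes the correlational approach easy to illustrate. The sketch below uses that functional form but with illustrative, roughly cesium-like constants, not CORSOR-M's published coefficients:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def release_rate(T, k0, Q):
    """Arrhenius-type fractional release rate (1/min), the functional form
    used by CORSOR-M; k0 (1/min) and Q (J/mol) are element-specific
    fitted constants."""
    return k0 * math.exp(-Q / (R * T))

def released_fraction(temps, dt, k0, Q):
    """Forward-Euler integration of df/dt = k(T) * (1 - f) over a fuel
    temperature history sampled every dt minutes."""
    f = 0.0
    for T in temps:
        f += release_rate(T, k0, Q) * (1.0 - f) * dt
    return f

# Illustrative constants only, with a 100-minute linear heat-up 1500 -> 2500 K:
temps = [1500.0 + 10.0 * i for i in range(101)]
frac = released_fraction(temps, dt=1.0, k0=2.0e5, Q=2.67e5)
```

The strong temperature sensitivity of the exponential is exactly why the time resolution of the underlying release data matters so much for the fitted constants.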
A cognitive-consistency based model of population wide attitude change.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lakkaraju, Kiran; Speed, Ann Elizabeth
Attitudes play a significant role in determining how individuals process information and behave. In this paper we develop a new computational model of population-wide attitude change that captures both the social level (how individuals interact and communicate information) and the cognitive level (how attitudes and concepts interact with each other). The model captures the cognitive aspect by representing each individual as a parallel constraint satisfaction network. The dynamics of this model are explored through a simple attitude change experiment in which we vary the social network and the distribution of attitudes in a population.
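A parallel constraint satisfaction network of the kind used for each individual can be sketched as follows. This is a generic settling network, not the authors' implementation; the weight matrix, update rule, and constants are all our own illustrative choices:

```python
import math

def settle(weights, clamped, steps=200, decay=0.1):
    """Iteratively settle a parallel constraint satisfaction network.
    weights[i][j]: consistency (+) or inconsistency (-) between cognitive
    units; clamped: dict of unit -> fixed external input (e.g. a
    communicated message). Returns the settled activations."""
    n = len(weights)
    a = [0.0] * n
    for _ in range(steps):
        for i in range(n):
            net = sum(weights[i][j] * a[j] for j in range(n)) + clamped.get(i, 0.0)
            a[i] += 0.1 * (math.tanh(net) - decay * a[i])  # bounded update with leak
    return a

# Toy 3-unit belief network: units 0 and 1 support each other,
# and both conflict with unit 2; unit 0 receives external input.
W = [[0.0, 1.0, -1.0],
     [1.0, 0.0, -1.0],
     [-1.0, -1.0, 0.0]]
acts = settle(W, clamped={0: 1.0})
```

After settling, mutually consistent units end up co-activated and the conflicting unit is suppressed, which is the cognitive-consistency mechanism the abstract refers to.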
Evaluating the cost effectiveness of environmental projects: Case studies in aerospace and defense
NASA Technical Reports Server (NTRS)
Shunk, James F.
1995-01-01
Using the replacement technology of high pressure waterjet decoating systems as an example, a simple methodology is presented for developing a cost effectiveness model. The model uses a four-step process to formulate an economic justification designed for presentation to decision makers as an assessment of the value of the replacement technology over conventional methods. Three case studies from major U.S. and international airlines are used to illustrate the methodology and resulting model. Tax and depreciation impacts are also presented as potential additions to the model.
Nonthermal model for ultrafast laser-induced plasma generation around a plasmonic nanorod
NASA Astrophysics Data System (ADS)
Labouret, Timothée; Palpant, Bruno
2016-12-01
The excitation of plasmonic gold nanoparticles by ultrashort laser pulses can trigger interesting electron-based effects in biological media such as production of reactive oxygen species or cell membrane optoporation. In order to better understand the optical and thermal processes at play, we modeled the interaction of a subpicosecond, near-infrared laser pulse with a gold nanorod in water. A nonthermal model is used and compared to a simple two-temperature thermal approach. For both models, the computation of the transient optical response reveals strong plasmon damping. Electron emission from the metal into the water is also calculated in a specific way for each model. The dynamics of the resulting local plasma in water is assessed by a rate equation model. While both approaches provide similar results for the transient optical properties, the simple thermal one is unable to properly describe electron emission and plasma generation. The latter is shown to mostly originate from electron-electron thermionic emission and photoemission from the metal. Taking into account the transient optical response is mandatory to properly calculate both electron emission and local plasma dynamics in water.
Que, Jianwen
2016-01-01
The esophagus and trachea are tubular organs that initially share a single common lumen in the anterior foregut. Several models have been proposed to explain how this single-lumen developmental intermediate generates two tubular organs. However, new evidence suggests that these models are not comprehensive. I will first briefly review these models and then propose a novel ‘splitting and extension’ model based on our in vitro modeling of the foregut separation process. Signaling molecules (e.g., SHHs, WNTs, BMPs) and transcription factors (e.g., NKX2.1 and SOX2) are critical for the separation of the foregut. Intriguingly, some of these molecules continue to play essential roles during the transition of simple columnar into stratified squamous epithelium in the developing esophagus, and they are also closely involved in epithelial maintenance in the adults. Alterations in the levels of these molecules have been associated with the initiation and progression of several esophageal diseases and cancer in adults. PMID:25727889
A flowgraph model for bladder carcinoma
2014-01-01
Background Superficial bladder cancer has been the subject of numerous studies for many years, but the evolution of the disease remains poorly understood. After the tumor has been surgically removed, it may reappear at a similar level of malignancy or progress to a higher level. The process may be reasonably modeled by means of a Markov process, but this approach is insufficient for a more complete model of the disease's evolution. The semi-Markov framework allows a more realistic approach, but its calculations frequently become intractable. In this context, flowgraph models provide an efficient approach to successfully manage the evolution of superficial bladder carcinoma. Our aim is to test this methodology in this particular case. Results We have built a successful model for a simple but representative case. Conclusion The flowgraph approach is suitable for modeling superficial bladder cancer. PMID:25080066
A neural network model of foraging decisions made under predation risk.
Coleman, Scott L; Brown, Vincent R; Levine, Daniel S; Mellgren, Roger L
2005-12-01
This article develops the cognitive-emotional forager (CEF) model, a novel application of a neural network to dynamical processes in foraging behavior. The CEF is based on a neural network known as the gated dipole, introduced by Grossberg, which is capable of representing short-term affective reactions in a manner similar to Solomon and Corbit's (1974) opponent process theory. The model incorporates a trade-off between approach toward food and avoidance of predation under varying levels of motivation induced by hunger. The results of simulations in a simple patch selection paradigm, using a lifetime fitness criterion for comparison, indicate that the CEF model is capable of nearly optimal foraging and outperforms a run-of-luck rule-of-thumb model. Models such as the one presented here can illuminate the underlying cognitive and motivational components of animal decision making.
Collins, Anne G. E.; Frank, Michael J.
2012-01-01
Instrumental learning involves corticostriatal circuitry and the dopaminergic system. This system is typically modeled in the reinforcement learning (RL) framework by incrementally accumulating reward values of states and actions. However, human learning also implicates prefrontal cortical mechanisms involved in higher level cognitive functions. The interaction of these systems remains poorly understood, and models of human behavior often ignore working memory (WM) and therefore incorrectly assign behavioral variance to the RL system. Here we designed a task that highlights the profound entanglement of these two processes, even in simple learning problems. By systematically varying the size of the learning problem and delay between stimulus repetitions, we separately extracted WM-specific effects of load and delay on learning. We propose a new computational model that accounts for the dynamic integration of RL and WM processes observed in subjects' behavior. Incorporating capacity-limited WM into the model allowed us to capture behavioral variance that could not be captured in a pure RL framework even if we (implausibly) allowed separate RL systems for each set size. The WM component also allowed for a more reasonable estimation of a single RL process. Finally, we report effects of two genetic polymorphisms having relative specificity for prefrontal and basal ganglia functions. Whereas the COMT gene coding for catechol-O-methyl transferase selectively influenced model estimates of WM capacity, the GPR6 gene coding for G-protein-coupled receptor 6 influenced the RL learning rate. Thus, this study allowed us to specify distinct influences of the high-level and low-level cognitive functions on instrumental learning, beyond the possibilities offered by simple RL models. PMID:22487033
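The interaction of a fast, capacity-limited working memory with slow incremental RL can be sketched in a toy simulation. This is not the authors' published RLWM model; `simulate_rlwm`, the eviction rule, and all constants are our own simplified assumptions, kept only to show why accuracy degrades once the set size exceeds WM capacity:

```python
import random

def simulate_rlwm(set_size, capacity=3, alpha=0.1, trials=600, seed=0):
    """Mixture-of-systems learner: a capacity-limited WM stores the last
    rewarded action for at most `capacity` stimuli, while RL accumulates
    Q-values slowly; the policy relies on WM with probability w, which
    shrinks as the learning problem outgrows capacity."""
    rng = random.Random(seed)
    n_actions = 3
    correct = {s: rng.randrange(n_actions) for s in range(set_size)}
    Q = {(s, a): 1.0 / n_actions for s in range(set_size) for a in range(n_actions)}
    wm = {}                               # stimulus -> last rewarded action
    w = min(1.0, capacity / set_size)     # WM reliability falls with load
    n_correct = 0
    for _ in range(trials):
        s = rng.randrange(set_size)
        if s in wm and rng.random() < w:
            a = wm[s]                     # WM: one-shot recall
        else:
            a = max(range(n_actions), key=lambda x: Q[(s, x)])  # RL: greedy
        r = 1.0 if a == correct[s] else 0.0
        Q[(s, a)] += alpha * (r - Q[(s, a)])  # incremental RL update
        if r:
            wm[s] = a
            if len(wm) > capacity:        # evict the oldest item over capacity
                wm.pop(next(iter(wm)))
        n_correct += r
    return n_correct / trials
```

With a set size within capacity the learner is nearly one-shot; above capacity, performance falls back toward the slow RL component, which is the load effect the task was designed to isolate.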
Lee, Jin-Woong; Chung, Jiyong; Cho, Min-Young; Timilsina, Suman; Sohn, Keemin; Kim, Ji Sik; Sohn, Kee-Sun
2018-06-20
An extremely simple bulk sheet made of a piezoresistive carbon nanotube (CNT)-Ecoflex composite can act as a smart keypad that is portable, disposable, and flexible enough to be carried crushed inside the pocket of a pair of trousers. Both a rigid-button-imbedded, rollable (or foldable) pad and a patterned flexible pad have been introduced for use as portable keyboards. Herein, we suggest a bare, bulk, macroscale piezoresistive sheet as a replacement for these complex devices that are achievable only through high-cost fabrication processes such as patterning-based coating, printing, deposition, and mounting. A deep-learning technique based on deep neural networks (DNN) enables this extremely simple bulk sheet to play the role of a smart keypad without the use of complicated fabrication processes. To develop this keypad, instantaneous electrical resistance change was recorded at several locations on the edge of the sheet along with the exact information on the touch position and pressure for a huge number of random touches. The recorded data were used for training a DNN model that could eventually act as a brain for a simple sheet-type keypad. This simple sheet-type keypad worked perfectly and outperformed all of the existing portable keypads in terms of functionality, flexibility, disposability, and cost.
Moustafa, Ahmed A; Kéri, Szabolcs; Somlai, Zsuzsanna; Balsdon, Tarryn; Frydecka, Dorota; Misiak, Blazej; White, Corey
2015-09-15
In this study, we tested reward- and punishment learning performance using a probabilistic classification learning task in patients with schizophrenia (n=37) and healthy controls (n=48). We also fit subjects' data using a Drift Diffusion Model (DDM) of simple decisions to investigate which components of the decision process differ between patients and controls. Modeling results show between-group differences in multiple components of the decision process. Specifically, patients had slower motor/encoding time, higher response caution (favoring accuracy over speed), and a deficit in classification learning for punishment, but not reward, trials. The results suggest that patients with schizophrenia adopt a compensatory strategy of favoring accuracy over speed to improve performance, yet still show signs of a deficit in learning based on negative feedback. Our data highlights the importance of applying fitting models (particularly drift diffusion models) to behavioral data. The implications of these findings are discussed relative to theories of schizophrenia and cognitive processing. Copyright © 2015 Elsevier B.V. All rights reserved.
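A drift diffusion trial of the kind fitted here is easy to simulate: evidence accumulates noisily toward one of two boundaries, and a non-decision time covers motor/encoding processes (the component found to be slower in patients). The parameter values below are illustrative, not fitted values from the study:

```python
import math
import random

def ddm_trial(drift, boundary, ndt, dt=0.001, sigma=1.0, rng=random):
    """One drift-diffusion trial: evidence x starts at 0 and diffuses with
    rate `drift` until hitting +boundary (choice 1) or -boundary (choice 0).
    `ndt` is the non-decision (motor/encoding) time added to the RT."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0 else 0), t + ndt

rng = random.Random(1)
trials = [ddm_trial(drift=1.5, boundary=1.0, ndt=0.3, rng=rng) for _ in range(500)]
accuracy = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
```

Raising the boundary in this simulation slows responses while improving accuracy, which is the speed-accuracy trade-off ("response caution") the modeling attributes to the patient group.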
Correlation Imaging Reveals Specific Crowding Dynamics of Kinesin Motor Proteins
NASA Astrophysics Data System (ADS)
Miedema, Daniël M.; Kushwaha, Vandana S.; Denisov, Dmitry V.; Acar, Seyda; Nienhuis, Bernard; Peterman, Erwin J. G.; Schall, Peter
2017-10-01
Molecular motor proteins fulfill the critical function of transporting organelles and other building blocks along the biopolymer network of the cell's cytoskeleton, but crowding effects are believed to crucially affect this motor-driven transport due to motor interactions. Physical transport models, like the paradigmatic, totally asymmetric simple exclusion process (TASEP), have been used to predict these crowding effects based on simple exclusion interactions, but verifying them in experiments remains challenging. Here, we introduce a correlation imaging technique to precisely measure the motor density, velocity, and run length along filaments under crowding conditions, enabling us to elucidate the physical nature of crowding and test TASEP model predictions. Using the kinesin motor proteins kinesin-1 and OSM-3, we identify crowding effects in qualitative agreement with TASEP predictions, and we achieve excellent quantitative agreement by extending the model with motor-specific interaction ranges and crowding-dependent detachment probabilities. These results confirm the applicability of basic nonequilibrium models to the intracellular transport and highlight motor-specific strategies to deal with crowding.
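The basic open-boundary TASEP that underlies these predictions can be simulated in a few lines. The sketch below uses plain exclusion with random-sequential updates and entry/exit rates alpha and beta; it does not include the motor-specific interaction ranges or crowding-dependent detachment added in the paper:

```python
import random

def tasep(L, alpha, beta, steps, seed=0):
    """Open-boundary TASEP: particles enter at site 0 with rate alpha, hop
    right only into empty sites, and exit from site L-1 with rate beta.
    Returns the mean occupancy (crowding) and the exit current."""
    rng = random.Random(seed)
    lattice = [0] * L
    exits = 0
    for _ in range(steps):
        i = rng.randrange(-1, L)              # -1 selects an entry attempt
        if i == -1:
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1
        elif i == L - 1:
            if lattice[i] == 1 and rng.random() < beta:
                lattice[i] = 0
                exits += 1
        elif lattice[i] == 1 and lattice[i + 1] == 0:
            lattice[i], lattice[i + 1] = 0, 1  # exclusion: hop right if empty
    return sum(lattice) / L, exits / steps

# Slow exit (small beta) produces the high-density, jammed phase:
density, current = tasep(L=100, alpha=0.8, beta=0.2, steps=200000)
```

Varying alpha and beta moves the system between the low-density, high-density, and maximal-current phases, which is what makes TASEP a useful null model for motor crowding experiments.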
Inference of mantle viscosity for depth resolutions of GIA observations
NASA Astrophysics Data System (ADS)
Nakada, Masao; Okuno, Jun'ichi
2016-11-01
Inference of mantle viscosity from observations of the glacial isostatic adjustment (GIA) process has usually been conducted through analyses based on the simple three-layer viscosity model characterized by lithospheric thickness and upper- and lower-mantle viscosities. Here, we examine the viscosity structures for the simple three-layer viscosity model and also for the two-layer lower-mantle viscosity model defined by viscosities of η670,D (670-D km depth) and ηD,2891 (D-2891 km depth) with D-values of 1191, 1691 and 2191 km. The upper-mantle rheological parameters for the two-layer lower-mantle viscosity model are the same as those for the simple three-layer one. For the simple three-layer viscosity model, a rate of change of the degree-two zonal harmonic of the geopotential due to the GIA process (GIA-induced J̇2) of -(6.0-6.5) × 10^-11 yr^-1 provides two permissible viscosity solutions for the lower mantle, (7-20) × 10^21 and (5-9) × 10^22 Pa s, and the analyses with observational constraints of the J̇2 and Last Glacial Maximum (LGM) sea levels at Barbados and Bonaparte Gulf indicate (5-9) × 10^22 Pa s for the lower mantle. However, the analyses for the J̇2 based on the two-layer lower-mantle viscosity model only require a viscosity layer higher than (5-10) × 10^21 Pa s for a depth above the core-mantle boundary (CMB), in which the value of (5-10) × 10^21 Pa s corresponds to the solution of (7-20) × 10^21 Pa s for the simple three-layer one. Moreover, the analyses with the J̇2 and LGM sea level constraints for the two-layer lower-mantle viscosity model indicate two viscosity solutions: η670,1191 > 3 × 10^21 and η1191,2891 ≈ (5-10) × 10^22 Pa s, and η670,1691 > 10^22 and η1691,2891 ≈ (5-10) × 10^22 Pa s. The inferred upper-mantle viscosity for such solutions is (1-4) × 10^20 Pa s, similar to the estimate for the simple three-layer viscosity model.
That is, these analyses require a high-viscosity layer of (5-10) × 10^22 Pa s at least in the deep mantle, and suggest that the GIA-based lower-mantle viscosity structure should be treated carefully in discussing the mantle dynamics related to the viscosity jump at ~670 km depth. We also preliminarily put additional constraints on these viscosity solutions by examining typical relative sea level (RSL) changes used to infer the lower-mantle viscosity. The viscosity solution inferred from the far-field RSL changes in the Australian region is consistent with those for the J̇2 and LGM sea levels, and the analyses for RSL changes at Southport and Bermuda in the intermediate region for the North American ice sheets suggest the solution of η670,D > 10^22, ηD,2891 ≈ (5-10) × 10^22 Pa s (D = 1191 or 1691 km) and an upper-mantle viscosity higher than 6 × 10^20 Pa s.
NASA Astrophysics Data System (ADS)
Peleshko, V. A.
2016-06-01
The deviator constitutive relation of the proposed theory of plasticity has a three-term form (the stress, stress rate, and strain rate vectors formed from the deviators are collinear) and, in the specialized (applied) version, in addition to the simple loading function, contains four dimensionless material constants determined from experiments along a two-link strain trajectory with an orthogonal break. The proposed simple mechanism is used to calculate the constants of the model for four metallic materials that differ significantly in composition and mechanical properties; the obtained constants do not deviate much from their average values (over the four materials). The latter are taken as universal constants in the engineering version of the model, which thus requires only one basic experiment, i.e., a simple loading test. If the material exhibits the strengthening property in cyclic circular deformation, then the model contains an additional constant determined from the experiment along a strain trajectory of this type. (In the engineering version of the model, the cyclic strengthening effect is not taken into account, which imposes a certain upper bound on the difference between the length of the strain trajectory arc and the module of the strain vector.) We present the results of model verification using the experimental data available in the literature on combined loading along two- and multi-link strain trajectories with various lengths of links and angles of breaks, with plane curvilinear segments of various constant and variable curvature, and with three-dimensional helical segments of various curvature and twist. (In all, we use more than 80 strain programs; the materials are low- and medium-carbon steels, brass, and stainless steel.)
These results prove that the model can be used to describe the process of arbitrary active (in the sense of nonnegative capacity of the shear) combined loading and final unloading of originally quasi-isotropic elastoplastic materials. In practical calculations, in the absence of experimental data about the properties of a material under combined loading, the use of the engineering version of the model is quite acceptable. The simple identification, wide verifiability, and the availability of a software implementation of the method for solving initial-boundary value problems permit treating the proposed theory as an applied theory.
The role of strength defects in shaping impact crater planforms
NASA Astrophysics Data System (ADS)
Watters, W. A.; Geiger, L. M.; Fendrock, M.; Gibson, R.; Hundal, C. B.
2017-04-01
High-resolution imagery and digital elevation models (DEMs) were used to measure the planimetric shapes of well-preserved impact craters. These measurements were used to characterize the size-dependent scaling of the departure from circular symmetry, which provides useful insights into the processes of crater growth and modification. For example, we characterized the dependence of the standard deviation of radius (σR) on crater diameter (D) as σR ∼ D^m. For complex craters on the Moon and Mars, m ranges from 0.9 to 1.2 among strong and weak target materials. For the martian simple craters in our data set, m varies from 0.5 to 0.8. The value of m tends toward larger values in weak materials and modified craters, and toward smaller values in relatively unmodified craters as well as craters in high-strength targets, such as young lava plains. We hypothesize that m ≈ 1 for planforms shaped by modification processes (slumping and collapse), whereas m tends toward ∼ 1/2 for planforms shaped by an excavation flow that was influenced by strength anisotropies. Additional morphometric parameters were computed to characterize the following planform properties: the planform aspect ratio or ellipticity, the deviation from a fitted ellipse, and the deviation from a convex shape. We also measured the distribution of crater shapes using Fourier decomposition of the planform, finding a similar distribution for simple and complex craters. By comparing the strength of small and large circular harmonics, we confirmed that lunar and martian complex craters are more polygonal at small sizes. Finally, we have used physical and geometrical principles to motivate scaling arguments and simple Monte Carlo models for generating synthetic planforms, which depend on a characteristic length scale of target strength defects. One of these models can be used to generate populations of synthetic planforms which are very similar to the measured population of well-preserved simple craters on Mars.
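An exponent like m in σR ∼ D^m is typically recovered as the slope of a log-log regression. A minimal sketch with synthetic data (the 0.05 proportionality constant and the diameters are arbitrary):

```python
import math

def fit_power_law(diam, sigma_r):
    """Least-squares slope of log(sigma_R) vs log(D), i.e. the exponent m
    in sigma_R ~ D^m."""
    xs = [math.log(d) for d in diam]
    ys = [math.log(s) for s in sigma_r]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic planform data in the modification-dominated regime (m = 1):
diam = [1.0, 2.0, 4.0, 8.0, 16.0]
sig = [0.05 * d for d in diam]
m = fit_power_law(diam, sig)
```

On real crater measurements the scatter about the fit, not just the slope, carries information, so the fitted m should always be reported with its uncertainty.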
Microeconomics of advanced process window control for 50-nm gates
NASA Astrophysics Data System (ADS)
Monahan, Kevin M.; Chen, Xuemei; Falessi, Georges; Garvin, Craig; Hankinson, Matt; Lev, Amir; Levy, Ady; Slessor, Michael D.
2002-07-01
Fundamentally, advanced process control enables accelerated design-rule reduction, but simple microeconomic models that directly link the effects of advanced process control to profitability are rare or non-existent. In this work, we derive these links using a simplified model for the rate of profit generated by the semiconductor manufacturing process. We use it to explain why and how microprocessor manufacturers strive to avoid commoditization by producing only the number of dies required to satisfy the time-varying demand in each performance segment. This strategy is realized using the tactic known as speed binning, the deliberate creation of an unnatural distribution of microprocessor performance that varies according to market demand. We show that the ability of APC to achieve these economic objectives may be limited by variability in the larger manufacturing context, including measurement delays and process window variation.
Measuring the effect of attention on simple visual search.
Palmer, J; Ames, C T; Lindsey, D T
1993-02-01
Set-size effects in visual search may be due to one or more of three factors: sensory processes such as lateral masking between stimuli, attentional processes limiting the perception of individual stimuli, or attentional processes affecting the decision rules for combining information from multiple stimuli. These possibilities were evaluated in tasks such as searching for a longer line among shorter lines. To evaluate sensory contributions, display set-size effects were compared with cuing conditions that held sensory phenomena constant. Similar effects for the display and cue manipulations suggested that sensory processes contributed little under the conditions of this experiment. To evaluate the contribution of decision processes, the set-size effects were modeled with signal detection theory. In these models, a decision effect alone was sufficient to predict the set-size effects without any attentional limitation due to perception.
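The decision-level account can be demonstrated with a standard max-rule signal detection simulation: each item contributes one noisy internal response, the observer picks the largest, and accuracy falls with set size even though per-item perception (d') is unchanged. The d' value and trial counts below are arbitrary illustrations:

```python
import random

def search_accuracy(set_size, dprime, n_trials=20000, seed=0):
    """Max-rule SDT model of visual search: one target (mean dprime) among
    set_size - 1 distractors (mean 0), all with unit-variance noise; the
    observer chooses the item with the largest internal response.
    Accuracy drops with set size purely through the decision rule."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        target = rng.gauss(dprime, 1.0)
        distractors = [rng.gauss(0.0, 1.0) for _ in range(set_size - 1)]
        if not distractors or target > max(distractors):
            hits += 1
    return hits / n_trials
```

Because nothing in the model limits perception of individual items, the predicted set-size effect is entirely a statistical consequence of combining more noisy samples, which is the paper's conclusion.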
Dynamic self-assembly of charged colloidal strings and walls in simple fluid flows.
Abe, Yu; Zhang, Bo; Gordillo, Leonardo; Karim, Alireza Mohammad; Francis, Lorraine F; Cheng, Xiang
2017-02-22
Colloidal particles can self-assemble into various ordered structures in fluid flows that have potential applications in biomedicine, materials synthesis and encryption. These dynamic processes are also of fundamental interest for probing the general principles of self-assembly under non-equilibrium conditions. Here, we report a simple microfluidic experiment, where charged colloidal particles self-assemble into flow-aligned 1D strings with regular particle spacing near a solid boundary. Using high-speed confocal microscopy, we systematically investigate the influence of flow rates, electrostatics and particle polydispersity on the observed string structures. By studying the detailed dynamics of stable flow-driven particle pairs, we quantitatively characterize interparticle interactions. Based on the results, we construct a simple model that explains the intriguing non-equilibrium self-assembly process. Our study shows that the colloidal strings arise from a delicate balance between attractive hydrodynamic coupling and repulsive electrostatic interaction between particles. Finally, we demonstrate that, with the assistance of transverse electric fields, a similar mechanism also leads to the formation of 2D colloidal walls.
Working Memory Load Strengthens Reward Prediction Errors.
Collins, Anne G E; Ciullo, Brittany; Frank, Michael J; Badre, David
2017-04-19
Reinforcement learning (RL) in simple instrumental tasks is usually modeled as a monolithic process in which reward prediction errors (RPEs) are used to update expected values of choice options. This modeling ignores the different contributions of different memory and decision-making systems thought to contribute even to simple learning. In an fMRI experiment, we investigated how working memory (WM) and incremental RL processes interact to guide human learning. WM load was manipulated by varying the number of stimuli to be learned across blocks. Behavioral results and computational modeling confirmed that learning was best explained as a mixture of two mechanisms: a fast, capacity-limited, and delay-sensitive WM process together with slower RL. Model-based analysis of fMRI data showed that striatum and lateral prefrontal cortex were sensitive to RPE, as shown previously, but, critically, these signals were reduced when the learning problem was within capacity of WM. The degree of this neural interaction related to individual differences in the use of WM to guide behavioral learning. These results indicate that the two systems do not process information independently, but rather interact during learning. SIGNIFICANCE STATEMENT Reinforcement learning (RL) theory has been remarkably productive at improving our understanding of instrumental learning as well as dopaminergic and striatal network function across many mammalian species. However, this neural network is only one contributor to human learning and other mechanisms such as prefrontal cortex working memory also play a key role. Our results also show that these other players interact with the dopaminergic RL system, interfering with its key computation of reward prediction errors. Copyright © 2017 the authors 0270-6474/17/374332-11$15.00/0.
Validating a model that predicts daily growth and feed quality of New Zealand dairy pastures.
Woodward, S J
2001-09-01
The Pasture Quality (PQ) model is a simple, mechanistic, dynamical system model that was designed to capture the essential biological processes in grazed grass-clover pasture, and to be optimised to derive improved grazing strategies for New Zealand dairy farms. While the individual processes represented in the model (photosynthesis, tissue growth, flowering, leaf death, decomposition, worms) were based on experimental data, this did not guarantee that the assembled model would accurately predict the behaviour of the system as a whole (i.e., pasture growth and quality). Validation of the whole model was thus a priority, since any strategy derived from the model could impact a farm business in the order of thousands of dollars per annum if adopted. This paper describes the process of defining performance criteria for the model, obtaining suitable data to test the model, and carrying out the validation analysis. The validation process highlighted a number of weaknesses in the model, which will lead to the model being improved. As a result, the model's utility will be enhanced. Furthermore, validation was found to have an unexpected additional benefit, in that despite the model's poor initial performance, support was generated for the model among field scientists involved in the wider project.
Auditory Power-Law Activation Avalanches Exhibit a Fundamental Computational Ground State
NASA Astrophysics Data System (ADS)
Stoop, Ruedi; Gomez, Florian
2016-07-01
The cochlea provides a biological information-processing paradigm that we are only beginning to understand in its full complexity. Our work reveals an interacting network of strongly nonlinear dynamical nodes, on which even a simple sound input triggers subnetworks of activated elements that follow power-law size statistics ("avalanches"). From dynamical systems theory, power-law size distributions relate to a fundamental ground state of biological information processing. Learning destroys these power laws. These results strongly modify the models of mammalian sound processing and provide a novel methodological perspective for understanding how the brain processes information.
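Power-law size statistics of the kind reported for these activation avalanches are usually quantified with a maximum-likelihood exponent estimate rather than a histogram fit. A self-contained sketch (the sampler and the exponent value 2.0 are illustrative, not cochlear data):

```python
import math
import random

def powerlaw_sample(alpha, smin, rng):
    """Continuous power-law variate p(s) ~ s^-alpha for s >= smin,
    via inverse-transform sampling."""
    return smin * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))

def mle_exponent(sizes, smin):
    """Hill/Clauset-style MLE for the avalanche-size exponent:
    alpha = 1 + n / sum(ln(s / smin))."""
    logs = [math.log(s / smin) for s in sizes if s >= smin]
    return 1.0 + len(logs) / sum(logs)

rng = random.Random(0)
sizes = [powerlaw_sample(2.0, 1.0, rng) for _ in range(5000)]
alpha_hat = mle_exponent(sizes, 1.0)
```

A goodness-of-fit test against the fitted power law is what distinguishes a genuine power-law ground state from, say, a lognormal produced after learning destroys the scale-free statistics.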
Advanced multivariable control of a turboexpander plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altena, D.; Howard, M.; Bullin, K.
1998-12-31
This paper describes an application of advanced multivariable control on a natural gas plant and compares its performance to the previous conventional feed-back control. This control algorithm utilizes simple models from existing plant data and/or plant tests to hold the process at the desired operating point in the presence of disturbances and changes in operating conditions. The control software is able to accomplish this due to effective handling of process variable interaction, constraint avoidance and feed-forward of measured disturbances. The economic benefit of improved control lies in operating closer to the process constraints while avoiding significant violations. The South Texas facility where this controller was implemented experienced reduced variability in process conditions which increased liquids recovery because the plant was able to operate much closer to the customer specified impurity constraint. An additional benefit of this implementation of multivariable control is the ability to set performance criteria beyond simple setpoints, including process variable constraints, relative variable merit and optimizing use of manipulated variables. The paper also details the control scheme applied to the complex turboexpander process and some of the safety features included to improve reliability.
A novel double loop control model design for chemical unstable processes.
Cong, Er-Ding; Hu, Ming-Hui; Tu, Shan-Tung; Xuan, Fu-Zhen; Shao, Hui-He
2014-03-01
In this manuscript, based on the Smith predictor control scheme for unstable industrial processes, an improved double-loop control model is proposed for chemical unstable processes. The inner loop stabilizes the integrating unstable process and transforms the original process into a stable first-order-plus-dead-time process. The outer loop enhances the performance of the set-point response, and a disturbance controller is designed to enhance the performance of the disturbance response. The improved control system is simple, with exact physical meaning, and its characteristic equation is easy to stabilize. The three controllers are designed separately in the improved scheme, so each controller is easy to design and good control performance is obtained for each closed-loop transfer function. The robust stability of the proposed control scheme is analyzed. Finally, case studies illustrate that the improved method can give better system performance than existing design methods. © 2013 ISA. Published by ISA. All rights reserved.
Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.
Sheppard, C W.
1969-03-01
A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
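The random-walk simulation outlined above is easy to reproduce. The following Python sketch (an illustration, not the original 1969 program) simulates 1-D walkers with an optional drift term of the kind the abstract mentions, and checks the diffusive scaling E[x^2] ≈ n predicted by well-established theory:

```python
import random

def random_walk_1d(n_steps, p_right=0.5, drift=0.0, rng=None):
    """Simulate one 1-D random walk of unit steps.

    `drift` shifts every step by a constant amount, illustrating how the
    basic model can be expanded to deal with drift, as the abstract notes.
    """
    rng = rng or random.Random()
    x = 0.0
    for _ in range(n_steps):
        step = 1.0 if rng.random() < p_right else -1.0
        x += step + drift
    return x

def mean_square_displacement(n_walkers, n_steps, seed=0):
    """Average x^2 over many independent walkers; for a symmetric
    unit-step walk, theory predicts a value close to n_steps."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        total += random_walk_1d(n_steps, rng=rng) ** 2
    return total / n_walkers
```

Comparing the simulated mean-square displacement against the theoretical value of n_steps is exactly the kind of validation against established theory that the abstract describes.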
Zhang, Xin; Luo, Xiao; Hu, Haixiang; Zhang, Xuejun
2015-09-01
To process large-aperture aspherical mirrors, we designed and constructed a tri-station processing center whose three station devices together provide vectored feed motion on up to 10 axes. Based on this processing center, an aspherical mirror-processing model is proposed in which each station implements traversal processing of large-aperture aspherical mirrors using only two axes, while the stations are switchable, thus lowering cost and enhancing processing efficiency. The applicability of the tri-station machine is also analyzed, and a simple, efficient zero-calibration method for processing is proposed. To validate the processing model, we used the processing center to fabricate an off-axis parabolic SiC mirror with an aperture diameter of 1450 mm. The experimental results indicate that, with a one-step iterative process, the peak-to-valley (PV) and root-mean-square (RMS) surface errors converged from 3.441 and 0.5203 μm to 2.637 and 0.2962 μm, respectively, a 43% reduction in RMS. The validity and high accuracy of the model are thereby demonstrated.
Principal process analysis of biological models.
Casagranda, Stefano; Touzeau, Suzanne; Ropers, Delphine; Gouzé, Jean-Luc
2018-06-14
Understanding the dynamical behaviour of biological systems is challenged by their large number of components and interactions. While efforts have been made in this direction to reduce model complexity, they often prove insufficient to grasp which and when model processes play a crucial role. Answering these questions is fundamental to unravel the functioning of living organisms. We design a method for dealing with model complexity, based on the analysis of dynamical models by means of Principal Process Analysis. We apply the method to a well-known model of circadian rhythms in mammals. The knowledge of the system trajectories allows us to decompose the system dynamics into processes that are active or inactive with respect to a certain threshold value. Process activities are graphically represented by Boolean and Dynamical Process Maps. We detect model processes that are always inactive, or inactive on some time interval. Eliminating these processes reduces the complex dynamics of the original model to the much simpler dynamics of the core processes, in a succession of sub-models that are easier to analyse. We quantify by means of global relative errors the extent to which the simplified models reproduce the main features of the original system dynamics and apply global sensitivity analysis to test the influence of model parameters on the errors. The results obtained prove the robustness of the method. The analysis of the sub-model dynamics allows us to identify the source of circadian oscillations. We find that the negative feedback loop involving proteins PER, CRY, CLOCK-BMAL1 is the main oscillator, in agreement with previous modelling and experimental studies. In conclusion, Principal Process Analysis is a simple-to-use method, which constitutes an additional and useful tool for analysing the complex dynamical behaviour of biological systems.
Optimization of MLS receivers for multipath environments
NASA Technical Reports Server (NTRS)
Mcalpine, G. A.; Highfill, J. H., III
1976-01-01
The design of a microwave landing system (MLS) aircraft receiver capable of optimal performance in the multipath environments found in air terminal areas is reported. Special attention is given to the receiver's angle-tracking problem, including tracking-system design considerations, the study and application of locally optimum estimation involving multipath-adaptive reception followed by envelope processing, and microcomputer system design. Results show that envelope processing is competitive with i-f signal processing in this application performance-wise, while being much simpler and cheaper. A summary of the signal model is given.
Farrell, Patrick; Sun, Jacob; Gao, Meg; Sun, Hong; Pattara, Ben; Zeiser, Arno; D'Amore, Tony
2012-08-17
A simple approach to the development of an aerobic scaled-down fermentation model is presented to obtain more consistent process performance during the scale-up of recombinant protein manufacture. Using a constant volumetric oxygen mass transfer coefficient (k(L)a) for the criterion of a scale-down process, the scaled-down model can be "tuned" to match the k(L)a of any larger-scale target by varying the impeller rotational speed. This approach is demonstrated for a protein vaccine candidate expressed in recombinant Escherichia coli, where process performance is shown to be consistent among 2-L, 20-L, and 200-L scales. An empirical correlation for k(L)a has also been employed to extrapolate to larger manufacturing scales. Copyright © 2012 Elsevier Ltd. All rights reserved.
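The scale-down criterion described above, matching the small-scale kLa to a larger-scale target by varying impeller speed, can be sketched as follows. The power-law form and the constants `k` and `alpha` below are hypothetical placeholders for a fitted small-scale correlation, not the study's values:

```python
def kla_from_speed(n_rpm, k=0.02, alpha=1.8):
    """Hypothetical small-scale correlation kLa = k * N**alpha (1/h),
    standing in for an empirical fit from small-scale tests."""
    return k * n_rpm ** alpha

def matched_impeller_speed(kla_target, k=0.02, alpha=1.8):
    """Invert the correlation to find the small-scale impeller speed N
    that reproduces a larger-scale target kLa, the 'tuning' criterion
    described in the abstract."""
    return (kla_target / k) ** (1.0 / alpha)
```

In practice the correlation would be fitted from measured kLa values at several impeller speeds before being inverted for the target.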
Simple heuristics and rules of thumb: where psychologists and behavioural biologists might meet.
Hutchinson, John M C; Gigerenzer, Gerd
2005-05-31
The Centre for Adaptive Behaviour and Cognition (ABC) has hypothesised that much human decision-making can be described by simple algorithmic process models (heuristics). This paper explains this approach and relates it to research in biology on rules of thumb, which we also review. As an example of a simple heuristic, consider the lexicographic strategy of Take The Best for choosing between two alternatives: cues are searched in turn until one discriminates, then search stops and all other cues are ignored. Heuristics consist of building blocks, and building blocks exploit evolved or learned abilities such as recognition memory; it is the complexity of these abilities that allows the heuristics to be simple. Simple heuristics have an advantage in making decisions fast and with little information, and in avoiding overfitting. Furthermore, humans are observed to use simple heuristics. Simulations show that the statistical structures of different environments affect which heuristics perform better, a relationship referred to as ecological rationality. We contrast ecological rationality with the stronger claim of adaptation. Rules of thumb from biology provide clearer examples of adaptation because animals can be studied in the environments in which they evolved. The range of examples is also much more diverse. To investigate them, biologists have sometimes used similar simulation techniques to ABC, but many examples depend on empirically driven approaches. ABC's theoretical framework can be useful in connecting some of these examples, particularly the scattered literature on how information from different cues is integrated. Optimality modelling is usually used to explain less detailed aspects of behaviour but might more often be redirected to investigate rules of thumb.
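The Take The Best heuristic described above can be stated in a few lines of code. This is a minimal sketch of the lexicographic rule, with hypothetical cue names; in the ABC framework the cue order would reflect cue validity:

```python
def take_the_best(cues_a, cues_b, cue_order):
    """Lexicographic Take The Best: inspect cues in (validity) order;
    the first cue whose values differ decides the choice, and all
    remaining cues are ignored. Cue values are 1 (positive) or 0."""
    for cue in cue_order:
        a, b = cues_a.get(cue, 0), cues_b.get(cue, 0)
        if a != b:
            return "A" if a > b else "B"
    return "guess"  # no cue discriminates, so the model guesses
```

Note that the decision can flip when the cue order changes, which is why the environment's statistical structure (ecological rationality) matters for the heuristic's performance.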
Multi-Purpose Enrollment Projections: A Comparative Analysis of Four Approaches
ERIC Educational Resources Information Center
Allen, Debra Mary
2013-01-01
Providing support for institutional planning is central to the function of institutional research. Necessary for the planning process are accurate enrollment projections. The purpose of the present study was to develop a short-term enrollment model simple enough to be understood by those who rely on it, yet sufficiently complex to serve varying…
ERIC Educational Resources Information Center
Leu, Donald J.
The author provides an overview and conceptualization of the total educational and educational facility planning process. The presentation attempts to provide a simple practical outline for local planners, so they may actively engage in relevant educational facility planning, and a common conceptual base, so the various components of Project…
Conceptual Frameworks in Undergraduate Nursing Curricula: Report of a National Survey.
ERIC Educational Resources Information Center
McEwen, Melanie; Brown, Sandra C.
2002-01-01
Responses from 300 accredited nursing schools indicated that they used eclectic conceptual frameworks for curriculum; the most common component was the nursing process. Associate degree programs were more likely to use simple-to-complex organization. Diploma programs were more likely to use the medical model than baccalaureate programs. Frameworks…
The Future of Humanities Labor
ERIC Educational Resources Information Center
Bauerlein, Mark
2008-01-01
"Publish or perish" has long been the formula of academic labor at research universities, but for many humanities professors that imperative has decayed into a simple rule of production. The publish-or-perish model assumed a peer-review process that maintained quality, but more and more it is the bare volume of printed words that counts. When…
Choosing the Optimum Mix of Duration and Effort in Education.
ERIC Educational Resources Information Center
Oosterbeek, Hessel
1995-01-01
Employs a simple economic model to analyze determinants of Dutch college students' expected study duration and weekly effort. Findings show that the duration/effort ratio is determined by the relative prices of these inputs into the learning process. A higher socioeconomic status increases the duration/effort ratio. Higher ability levels decrease…
Why do disk galaxies present a common gas-phase metallicity gradient?
NASA Astrophysics Data System (ADS)
Chang, R.; Zhang, Shuhui; Shen, Shiyin; Yin, Jun; Hou, Jinliang
2017-03-01
CALIFA data show that isolated disk galaxies present a common gas-phase metallicity gradient, with a characteristic slope of -0.1 dex/re between 0.3 and 2 disk effective radii re (Sanchez et al. 2014). Here we construct a simple model to investigate which processes regulate the formation and evolution of this gradient.
Due to complex population dynamics and source-sink metapopulation processes, animal fitness sometimes varies across landscapes in ways that cannot be deduced from simple density patterns. In this study, we examine spatial patterns in fitness using a combination of intensive fiel...
Probing Clouds in Planets with a Simple Radiative Transfer Model: The Jupiter Case
ERIC Educational Resources Information Center
Mendikoa, Inigo; Perez-Hoyos, Santiago; Sanchez-Lavega, Agustin
2012-01-01
Remote sensing of planets evokes the use of expensive on-orbit satellites gathering complex data from space. However, the basic properties of clouds in planetary atmospheres can be successfully estimated with small telescopes even from an urban environment using currently available and affordable technology. This makes the process accessible for…
Used Jmol to Help Students Better Understand Fluxional Processes
ERIC Educational Resources Information Center
Coleman, William F.; Fedosky, Edward W.
2006-01-01
This new WebWare combines instructional text and Jmol interactive, animated illustrations that help students visualize the mechanism. It is concluded that by animating the fluxional behavior of a simple model for chiral metal catalyst Sn(amidinate)[subscript 2], in which axial/equatorial exchange within the amidinate rings occurs through a Berry…
USDA-ARS?s Scientific Manuscript database
The fuzzy logic algorithm has the ability to describe knowledge in a descriptive human-like manner in the form of simple rules using linguistic variables, and provides a new way of modeling uncertain or naturally fuzzy hydrological processes like non-linear rainfall-runoff relationships. Fuzzy infe...
Gravitational spreading of Danu, Freyja and Maxwell Montes, Venus
NASA Astrophysics Data System (ADS)
Smrekar, Suzanne E.; Solomon, Sean C.
1991-06-01
The potential energy of elevated terrain tends to drive the collapse of the topography. This process of gravitational spreading is likely to be more important on Venus than on Earth because the higher surface temperature weakens the crust. The highest topography on Venus is Ishtar Terra. The high plateau of Lakshmi Planum has an average elevation of 3 km above mean planetary radius, and is surrounded by mountain belts. Freyja, Danu, and Maxwell Montes rise, on average, an additional 3, 0.5, and 5 km above the plateau, respectively. Recent high resolution Magellan radar images of this area, east of approx. 330 deg E, reveal widespread evidence for gravity spreading. Some observational evidence is described for gravity spreading and the implications are discussed in terms of simple mechanical models. Several simple models predict that gravity spreading should be an important process on Venus. One difficulty in using remote observations to infer interior properties is that the observed features may not have formed in response to stresses which are still active. Several causes of surface topography are briefly examined.
Design of flat pneumatic artificial muscles
NASA Astrophysics Data System (ADS)
Wirekoh, Jackson; Park, Yong-Lae
2017-03-01
Pneumatic artificial muscles (PAMs) have gained wide use in the field of robotics due to their ability to generate linear forces and motions with a simple mechanism, while remaining lightweight and compact. However, PAMs are limited by their traditional cylindrical form factors, which must increase radially to improve contraction force generation. Additionally, this form factor results in overly complicated fabrication processes when embedded fibers and sensor elements are required to provide efficient actuation and control of the PAMs while minimizing the bulkiness of the overall robotic system. In order to overcome these limitations, a flat two-dimensional PAM capable of being fabricated using a simple layered manufacturing process was created. Furthermore, a theoretical model was developed using Von Karman’s formulation for large deformations and the energy methods. Experimental characterizations of two different types of PAMs, a single-cell unit and a multi-cell unit, were performed to measure the maximum contraction lengths and forces at input pressures ranging from 0 to 150 kPa. Experimental data were then used to verify the fidelity of the theoretical model.
Donoso-Bravo, A; Retamal, C; Carballa, M; Ruiz-Filippi, G; Chamy, R
2009-01-01
The effect of temperature on the kinetic parameters involved in the main reactions of the anaerobic digestion process was studied. Batch tests with starch, glucose and acetic acid as substrates for hydrolysis, acidogenesis and methanogenesis, respectively, were performed over a temperature range of 15 to 45 degrees C. First-order kinetics was assumed to determine the hydrolysis rate constant, while Monod and Haldane kinetics were considered for acidogenesis and methanogenesis, respectively. The results showed that the anaerobic process is strongly influenced by temperature, with acidogenesis exhibiting the largest effect. The Cardinal Temperature Model 1 with an inflection point (CTM1) properly fitted the experimental data over the whole temperature range, except for the maximum degradation rate of acidogenesis. A simple case study assessing the effect of temperature on anaerobic CSTR performance indicated that with relatively simple substrates, like starch, the limiting reaction would change depending on temperature. However, when more complex substrates are used (e.g. sewage sludge), hydrolysis might more quickly become the limiting step.
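The kinetic forms named above (first-order hydrolysis, Monod acidogenesis, Haldane methanogenesis) and a Rosso-type CTM1 temperature correction can be sketched as follows. The functional forms are standard; any parameter values used are illustrative, not the study's estimates:

```python
def first_order(s, k):
    """Hydrolysis: rate proportional to substrate concentration s."""
    return k * s

def monod(s, mu_max, ks):
    """Acidogenesis: saturating Monod kinetics."""
    return mu_max * s / (ks + s)

def haldane(s, mu_max, ks, ki):
    """Methanogenesis: Monod with substrate inhibition (Haldane)."""
    return mu_max * s / (ks + s + s * s / ki)

def ctm1(t, t_min, t_opt, t_max, mu_opt):
    """Cardinal Temperature Model 1 with inflection point (CTM1):
    zero outside [t_min, t_max], maximal (mu_opt) at t_opt."""
    if t <= t_min or t >= t_max:
        return 0.0
    num = (t - t_max) * (t - t_min) ** 2
    den = (t_opt - t_min) * ((t_opt - t_min) * (t - t_opt)
                             - (t_opt - t_max) * (t_opt + t_min - 2.0 * t))
    return mu_opt * num / den
```

Haldane kinetics peaks at an intermediate substrate concentration and declines at high concentrations, which is what distinguishes methanogenesis from the saturating Monod form.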
Comparison of rigorous and simple vibrational models for the CO2 gasdynamic laser
NASA Technical Reports Server (NTRS)
Monson, D. J.
1977-01-01
The accuracy of a simple vibrational model for computing the gain in a CO2 gasdynamic laser is assessed by comparing results computed from it with results computed from a rigorous vibrational model. The simple model is that of Anderson et al. (1971), in which the vibrational kinetics are modeled by grouping the nonequilibrium vibrational degrees of freedom into two modes, to each of which there corresponds an equation describing vibrational relaxation. The two models agree fairly well in the computed gain at low temperatures, but the simple model predicts too high a gain at the higher temperatures of current interest. The sources of error contributing to the overestimation given by the simple model are determined by examining the simplified relaxation equations.
The effects of numerical-model complexity and observation type on estimated porosity values
Starn, Jeffrey; Bagtzoglou, Amvrossios C.; Green, Christopher T.
2015-01-01
The relative merits of model complexity and the types of observations employed in model calibration are compared. An existing groundwater flow model of the Salt Lake Valley, Utah (USA), is coupled with an advective transport simulation, and effective porosity is adjusted until simulated tritium concentrations match concentrations in samples from wells. Two calibration approaches are used: a “complex” highly parameterized porosity field and a “simple” parsimonious model of porosity distribution. The use of an atmospheric tracer (tritium in this case) and apparent ages (from tritium/helium) in model calibration also is discussed. Of the models tested, the complex model (with tritium concentrations and tritium/helium apparent ages) performs best. Although tritium breakthrough curves simulated by the complex and simple models are generally similar, and there is value in the simple model, the complex model is supported by a more realistic porosity distribution and a greater number of estimable parameters. Culling the best-quality data did not lead to better calibration, possibly because of processes and aquifer characteristics that are not simulated. Despite many factors that contribute to shortcomings of both the models and the data, useful information is obtained from all the models evaluated. Although any particular prediction of tritium breakthrough may have large errors, overall the models mimic observed trends.
Energy modelling in sensor networks
NASA Astrophysics Data System (ADS)
Schmidt, D.; Krämer, M.; Kuhn, T.; Wehn, N.
2007-06-01
Wireless sensor networks are one of the key enabling technologies for the vision of ambient intelligence. Energy resources for sensor nodes are very scarce. A key challenge is the design of energy efficient communication protocols. Models of the energy consumption are needed to accurately simulate the efficiency of a protocol or application design, and can also be used for automatic energy optimizations in a model driven design process. We propose a novel methodology to create models for sensor nodes based on few simple measurements. In a case study the methodology was used to create models for MICAz nodes. The models were integrated in a simulation environment as well as in a SDL runtime framework of a model driven design process. Measurements on a test application that was created automatically from an SDL specification showed an 80% reduction in energy consumption compared to an implementation without power saving strategies.
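A state-based energy model of the kind described above can be sketched as a table of per-state current draws plus a schedule integrator. The currents below are hypothetical values in the range typically quoted for MICAz-class nodes, not the measured model from the paper:

```python
# Hypothetical per-state current draws in mA for a MICAz-class node.
CURRENT_MA = {"sleep": 0.02, "idle": 8.0, "rx": 19.7, "tx": 17.4}

def energy_mj(schedule, voltage=3.0):
    """Energy in millijoules for a schedule of (state, seconds) pairs:
    E = V * sum(I_state * t), with I in mA and t in s."""
    return voltage * sum(CURRENT_MA[state] * t for state, t in schedule)
```

Evaluating a protocol then amounts to generating its state schedule (e.g. from a simulation trace) and integrating: a 1% receive duty cycle, for instance, costs a small fraction of the energy of always-on listening, the kind of saving the 80% figure above reflects.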
NASA Technical Reports Server (NTRS)
Wen, Guoyong; Marshak, Alexander; Varnai, Tamas; Levy, Robert
2016-01-01
A transition zone exists between cloudy skies and clear sky, in which clouds scatter solar radiation into clear-sky regions. From a satellite perspective, it appears that clouds enhance the radiation nearby. We seek a simple method to estimate this enhancement, since it is so computationally expensive to account for all three-dimensional (3-D) scattering processes. In previous studies, we developed a simple two-layer model (2LM) that estimated the radiation scattered via cloud-molecular interactions. Here we have developed a new model to account for cloud-surface interaction (CSI). We test the models by comparing to calculations provided by full 3-D radiative transfer simulations of realistic cloud scenes. For these scenes, the Moderate Resolution Imaging Spectroradiometer (MODIS)-like radiance fields were computed from the Spherical Harmonic Discrete Ordinate Method (SHDOM), based on a large number of cumulus fields simulated by the University of California, Los Angeles (UCLA) large eddy simulation (LES) model. We find that the original 2LM model that estimates cloud-air molecule interactions accounts for 64% of the total reflectance enhancement and the new model (2LM+CSI) that also includes cloud-surface interactions accounts for nearly 80%. We discuss the possibility of accounting for cloud-aerosol radiative interactions in 3-D cloud-induced reflectance enhancement, which may explain the remaining 20% of the enhancement. Because these are simple models, these corrections can be applied to global satellite observations (e.g., MODIS) and help to reduce biases in aerosol and other clear-sky retrievals.
Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E
2008-10-21
Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality.
In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.
A scalable multi-process model of root nitrogen uptake
Walker, Anthony P.
2018-02-28
This article is a Commentary on McMurtrie & Näsholm et al., 218: 119–130. Roots are represented in Terrestrial Ecosystem Models (TEMs) in much less detail than their equivalent above-ground resource acquisition organs – leaves. Often roots in TEMs are simply resource sinks, and below-ground resource acquisition is commonly simulated without any relationship to root dynamics at all, though there are exceptions (e.g. Zaehle & Friend, 2010). The representation of roots as carbon (C) and nitrogen (N) sinks without complementary source functions can lead to strange sensitivities in a model. For example, reducing root lifespans in the Community Land Model (version 4.5) increases plant production as N cycles more rapidly through the ecosystem without loss of plant function (D. M. Ricciuto, unpublished). The primary reasons for the poorer representation of roots compared with leaves in TEMs are three-fold: (1) data are much harder won, especially in the field; (2) no simple mechanistic models of root function are available; and (3) scaling root function from an individual root to a root system lags behind methods of scaling leaf function to a canopy. Here in this issue of New Phytologist, McMurtrie & Näsholm (pp. 119–130) develop a relatively simple model for root N uptake that mechanistically accounts for processes of N supply (mineralization and transport by diffusion and mass flow) and N demand (root uptake and microbial immobilization).
A univariate model of river water nitrate time series
NASA Astrophysics Data System (ADS)
Worrall, F.; Burt, T. P.
1999-01-01
Four time series were taken from three catchments in the North and South of England. The sites chosen included two in predominantly agricultural catchments, one at the tidal limit and one downstream of a sewage treatment works. A time series model was constructed for each of these series as a means of decomposing the elements controlling river water nitrate concentrations and to assess whether this approach could provide a simple management tool for protecting water abstractions. Autoregressive (AR) modelling of the detrended and deseasoned time series showed a "memory effect". This memory effect expressed itself as an increase in the winter-summer difference in nitrate levels that was dependent upon the nitrate concentration 12 or 6 months previously. Autoregressive moving average (ARMA) modelling showed that one of the series contained seasonal, non-stationary elements that appeared as an increasing trend in the winter-summer difference. The ARMA model was used to predict nitrate levels and predictions were tested against data held back from the model construction process - predictions gave average percentage errors of less than 10%. Empirical modelling can therefore provide a simple, efficient method for constructing management models for downstream water abstraction.
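The autoregressive "memory effect" described above, nitrate depending on its value 12 (or 6) months earlier, can be sketched with a single-lag least-squares fit. This is an illustrative simplification of the AR/ARMA modelling in the study:

```python
def ar_coefficient(x, lag):
    """Least-squares estimate of phi in x[t] ~ phi * x[t - lag],
    for a series that has already been detrended and deseasoned."""
    num = sum(x[t] * x[t - lag] for t in range(lag, len(x)))
    den = sum(x[t - lag] ** 2 for t in range(lag, len(x)))
    return num / den

def forecast_next(x, lag, phi):
    """One-step-ahead AR forecast from the value `lag` steps back."""
    return phi * x[len(x) - lag]
```

For a monthly nitrate series, fitting at lag 12 quantifies the winter-summer dependence the abstract describes; predictions are then checked against data held back from model construction, as the study did.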
Effect of the Environment and Environmental Uncertainty on Ship Routes
2012-06-01
Models consisting of basic differential equations, based on Newton's second law of motion, simulate the fluid dynamic processes and physics of the environment. A simple transit across the Atlantic Ocean can easily become a rough voyage if the ship encounters high winds, which in turn will cause a high sea.
NASA Technical Reports Server (NTRS)
North, G. R.; Cahalan, R. F.; Coakley, J. A., Jr.
1980-01-01
An introductory survey of the global energy balance climate models is presented with an emphasis on analytical results. A sequence of increasingly complicated models involving ice cap and radiative feedback processes are solved and the solutions and parameter sensitivities are studied. The model parameterizations are examined critically in light of many current uncertainties. A simple seasonal model is used to study the effects of changes in orbital elements on the temperature field. A linear stability theorem and a complete nonlinear stability analysis for the models are developed. Analytical solutions are also obtained for the linearized models driven by stochastic forcing elements. In this context the relation between natural fluctuation statistics and climate sensitivity is stressed.
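A zero-dimensional analogue of the ice-albedo feedback surveyed above can be sketched as follows. The step albedo function and parameter values are illustrative, not taken from the survey, but the sketch reproduces the multiple equilibria characteristic of such models:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
Q = 342.0        # global-mean incoming solar flux, W m^-2

def albedo(t_kelvin):
    """Step ice-albedo feedback: a bright (icy) planet when cold."""
    return 0.6 if t_kelvin < 263.0 else 0.3

def equilibrium_temperature(t0, emissivity=0.61, n_iter=50):
    """Fixed-point iteration on the balance
    Q * (1 - albedo(T)) = emissivity * SIGMA * T**4.
    The starting temperature t0 selects the branch, illustrating the
    coexistence of warm and ice-covered equilibria."""
    t = t0
    for _ in range(n_iter):
        t = (Q * (1.0 - albedo(t)) / (emissivity * SIGMA)) ** 0.25
    return t
```

Starting the iteration warm converges near present-day temperatures, while a cold start settles on a distinctly colder ice-covered state, which is the sensitivity structure the analytical stability theorems in the survey address.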
NASA Technical Reports Server (NTRS)
North, G. R.; Cahalan, R. F.; Coakley, J. A., Jr.
1981-01-01
An introductory survey of the global energy balance climate models is presented with an emphasis on analytical results. A sequence of increasingly complicated models involving ice cap and radiative feedback processes are solved, and the solutions and parameter sensitivities are studied. The model parameterizations are examined critically in light of many current uncertainties. A simple seasonal model is used to study the effects of changes in orbital elements on the temperature field. A linear stability theorem and a complete nonlinear stability analysis for the models are developed. Analytical solutions are also obtained for the linearized models driven by stochastic forcing elements. In this context the relation between natural fluctuation statistics and climate sensitivity is stressed.
Reference Models for Structural Technology Assessment and Weight Estimation
NASA Technical Reports Server (NTRS)
Cerro, Jeff; Martinovic, Zoran; Eldred, Lloyd
2005-01-01
Previously, the Exploration Concepts Branch of NASA Langley Research Center developed techniques for automating the preliminary-design level of launch vehicle airframe structural analysis for purposes of enhancing historical regression-based mass estimating relationships. This past work was useful and greatly reduced design time; however, its range of application was narrow in terms of the variety of structural and vehicle general-arrangement alternatives it could handle. Implementation of the analysis approach presented herein also incorporates some newly developed computer programs. Loft is a program developed to create analysis meshes and simultaneously define structural element design regions. A simple component-defining ASCII file is read by Loft to begin the design process. HSLoad is a Visual Basic implementation of the HyperSizer Application Programming Interface, which automates the structural element design process. Details of these two programs and their use are explained in this paper. A feature which falls naturally out of the above analysis paradigm is the concept of "reference models". The flexibility of the FEA-based Java processing procedures and associated process-control classes, coupled with the general utility of Loft and HSLoad, makes it possible to create generic program template files for analysis of components ranging from a simple stiffened flat panel, through curved panels, fuselage and cryogenic tank components, flight control surfaces, and wings, to full air and space vehicle general arrangements.
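The pipeline described above begins with a simple component-defining ASCII file that Loft reads to produce an analysis mesh. The sketch below illustrates that pattern only; the key names and file format are hypothetical assumptions, not the actual Loft file specification.

```python
# Hypothetical sketch of a component-defining ASCII file and the
# parse-then-mesh step a tool like Loft performs. The keys (component,
# length_m, n_elems_x, ...) are illustrative assumptions, not Loft's format.
from io import StringIO

component_file = StringIO("""\
component = stiffened_panel
length_m  = 2.0
width_m   = 1.0
n_elems_x = 4
n_elems_y = 2
""")

def parse_component(fh):
    """Parse simple 'key = value' lines into a dict, coercing numbers."""
    spec = {}
    for line in fh:
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        try:
            spec[key] = float(value) if "." in value else int(value)
        except ValueError:
            spec[key] = value  # non-numeric values stay as strings
    return spec

def make_mesh(spec):
    """Generate a regular grid of node coordinates for a flat panel."""
    nx, ny = spec["n_elems_x"] + 1, spec["n_elems_y"] + 1
    dx = spec["length_m"] / (nx - 1)
    dy = spec["width_m"] / (ny - 1)
    return [(i * dx, j * dy) for j in range(ny) for i in range(nx)]

spec = parse_component(component_file)
nodes = make_mesh(spec)
print(spec["component"], len(nodes))  # stiffened_panel 15
```

The value of this arrangement, as the abstract notes, is that the same parse-then-mesh machinery can serve as a generic template: only the ASCII definition changes between a flat panel and a more complex component.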
Lahera, Guillermo; Ruiz, Alicia; Brañas, Antía; Vicens, María; Orozco, Arantxa
Previous studies have linked processing speed with social cognition and functioning in patients with schizophrenia. A discriminant analysis is needed to determine the different components of this neuropsychological construct. This paper analyzes the impact of processing speed, reaction time and sustained attention on social functioning. Ninety-eight outpatients aged 18 to 65 years with a DSM-5 diagnosis of schizophrenia and a 3-month period of clinical stability were recruited. Sociodemographic and clinical data were collected, and the following variables were measured: processing speed (Trail Making Test [TMT], symbol coding [BACS], verbal fluency), simple and elective reaction time, sustained attention, recognition of facial emotions, and global functioning. Processing speed (measured only through the BACS), sustained attention (CPT) and elective (but not simple) reaction time were associated with functioning. Recognition of facial emotions (FEIT) correlated significantly with scores on measures of processing speed (BACS, Animals, TMT), sustained attention (CPT) and reaction time. The linear regression model showed a significant relationship between functioning, emotion recognition (P=.015) and processing speed (P=.029). Deficits in processing speed and facial emotion recognition are associated with worse global functioning in patients with schizophrenia.
Wang, Monan; Zhang, Kai; Yang, Ning
2018-04-09
To help doctors decide on treatment from the standpoint of mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture oriented to clinical application. The system encompassed three parts: a preprocessing module, a finite element mechanical analysis module, and a postprocessing module. The preprocessing module included parametric modeling of the bone, the fracture face, and the fixation screws and their positions, as well as input and transmission of model parameters. The finite element mechanical analysis module included grid division, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting, and batch-processing operation. The postprocessing module included extraction and display of batch-run results, image generation for batch runs, execution of the optimization program, and display of the optimal result. The system implemented the entire workflow, from input of a specific patient's real fracture parameters to output of the optimal fixation plan according to the optimization rules, which demonstrated the effectiveness of the system. The system also has a friendly interface, is simple to operate, and its functionality can be extended quickly by modifying individual modules.
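The batch-evaluate-then-select pattern at the heart of such a system can be sketched as follows. Here `evaluate_fixation` is a hypothetical surrogate for one finite element run on a candidate screw configuration, and the parameter names and the response function are illustrative assumptions, not the paper's actual model.

```python
# Sketch of batch evaluation of candidate fixation plans followed by
# selection of the plan with the lowest peak stress. evaluate_fixation is
# a made-up stand-in for a full finite element analysis of one candidate.
from itertools import product

def evaluate_fixation(angle_deg, spacing_mm):
    """Hypothetical surrogate for an FE run: returns a peak-stress score (MPa)."""
    # A fabricated smooth response that penalizes deviation from a nominal plan.
    return (angle_deg - 130) ** 2 * 0.01 + (spacing_mm - 12) ** 2 * 0.5 + 40.0

def optimize(angles, spacings):
    """Batch-run all candidate plans; return the plan with the lowest peak stress."""
    results = {(a, s): evaluate_fixation(a, s) for a, s in product(angles, spacings)}
    best = min(results, key=results.get)
    return best, results[best]

best_plan, best_stress = optimize(angles=[120, 125, 130, 135], spacings=[10, 12, 14])
print(best_plan, best_stress)  # (130, 12) 40.0
```

In the real system each evaluation is an FE batch job rather than a closed-form function, but the selection logic over the result set is the same.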
Topographic evolution of orogens: The long term perspective
NASA Astrophysics Data System (ADS)
Robl, Jörg; Hergarten, Stefan; Prasicek, Günther
2017-04-01
The landscape of mountain ranges reflects the competition between tectonics and climate, which build up and destroy topography, respectively. While there is broad consensus on the acting processes, there is a vital debate over whether the topography of individual orogens reflects a stage of growth, steady state, or decay. This debate is fuelled by the million-year time scales that hamper direct observation of landscape evolution in mountain ranges, the superposition of various process patterns, and the complex interactions among different processes. In this presentation we focus on orogen-scale landscape evolution based on time-dependent numerical models and explore model time series to constrain the development of mountain range topography during an orogenic cycle. The long-term erosional response of rivers and hillslopes to uplift can be mathematically formalised by the stream power and mass diffusion equations, respectively, which enables us to describe the time-dependent evolution of topography in orogens. Based on a simple one-dimensional model consisting of two rivers separated by a watershed, we explain the influence of uplift rate and rock erodibility on steady-state channel profiles and show the time-dependent development of the channel-drainage divide system. Dynamic drainage network reorganization adds further complexity, and its effect on topography is explored on the basis of two-dimensional models. Additional complexity is introduced by coupling a mechanical model (thin viscous sheet approach) describing continental collision, crustal thickening and topography formation with a stream-power-based landscape evolution model. Model time series show the impact of crustal deformation on drainage networks and consequently on the evolution of mountain range topography (Robl et al., in review). All model outcomes, from simple one-dimensional to coupled two-dimensional models, are presented as movies featuring high spatial and temporal resolution.
Robl, J., S. Hergarten, and G. Prasicek (in review), The topographic state of mountain ranges, Earth Science Reviews.
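In their standard forms from the landscape-evolution literature (the notation below is conventional, not reproduced from the abstract), the two governing equations referred to above read, with z elevation, U uplift rate, A upstream drainage area, K erodibility, m and n the stream power exponents, and D a hillslope diffusivity:

```latex
\underbrace{\frac{\partial z}{\partial t} = U - K\,A^{m}\,\lvert\nabla z\rvert^{n}}_{\text{stream power law (rivers)}}
\qquad\qquad
\underbrace{\frac{\partial z}{\partial t} = U + D\,\nabla^{2} z}_{\text{mass diffusion (hillslopes)}}
```

Setting the time derivatives to zero recovers the steady-state channel profiles and hillslope forms discussed in the abstract, while the time-dependent forms drive the model movies.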
NASA Astrophysics Data System (ADS)
Karandish, Fatemeh; Šimůnek, Jiří
2016-12-01
Soil water content (SWC) is a key factor in optimizing the usage of water resources in agriculture since it provides information to make an accurate estimation of crop water demand. Methods for predicting SWC that have simple data requirements are needed to achieve an optimal irrigation schedule, especially for various water-saving irrigation strategies that are required to resolve both food and water security issues under conditions of water shortages. Thus, a two-year field investigation was carried out to provide a dataset to compare the effectiveness of HYDRUS-2D, a physically-based numerical model, with various machine-learning models, including Multiple Linear Regressions (MLR), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), and Support Vector Machines (SVM), for simulating time series of SWC data under water stress conditions. SWC was monitored using TDRs during the maize growing seasons of 2010 and 2011. Eight combinations of six, simple, independent parameters, including pan evaporation and average air temperature as atmospheric parameters, cumulative growth degree days (cGDD) and crop coefficient (Kc) as crop factors, and water deficit (WD) and irrigation depth (In) as crop stress factors, were adopted for the estimation of SWCs in the machine-learning models. Having Root Mean Square Errors (RMSE) in the range of 0.54-2.07 mm, HYDRUS-2D ranked first for the SWC estimation, while the ANFIS and SVM models with input datasets of cGDD, Kc, WD and In ranked next with RMSEs ranging from 1.27 to 1.9 mm and mean bias errors of -0.07 to 0.27 mm, respectively. However, the MLR models did not perform well for SWC forecasting, mainly due to non-linear changes of SWCs under the irrigation process. The results demonstrated that despite requiring only simple input data, the ANFIS and SVM models could be favorably used for SWC predictions under water stress conditions, especially when there is a lack of data. 
However, process-based numerical models are undoubtedly a better choice for predicting SWCs with lower uncertainties when required data are available, and thus for designing water saving strategies for agriculture and for other environmental applications requiring estimates of SWCs.
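The model comparison above ranks HYDRUS-2D and the machine-learning models by Root Mean Square Error and mean bias error. A minimal sketch of those two metrics (the SWC series below are fabricated illustrative numbers, not data from the study):

```python
# RMSE and mean bias error (MBE), the two skill metrics used to rank the
# SWC models; positive MBE indicates systematic overprediction.
import numpy as np

def rmse(observed, predicted):
    """Root Mean Square Error between observed and predicted series."""
    o, p = np.asarray(observed, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((p - o) ** 2)))

def mbe(observed, predicted):
    """Mean Bias Error: mean of (predicted - observed)."""
    o, p = np.asarray(observed, float), np.asarray(predicted, float)
    return float(np.mean(p - o))

# Hypothetical soil-water-content series in mm (for illustration only).
observed  = [25.0, 27.5, 30.2, 28.1, 26.4]
predicted = [24.2, 28.0, 31.0, 27.5, 26.9]
print(round(rmse(observed, predicted), 3), round(mbe(observed, predicted), 3))
```

Ranking candidate models then reduces to computing these metrics for each model's predictions against the same TDR-observed series and sorting by RMSE.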
Capillarity Guided Patterning of Microliquids.
Kang, Myeongwoo; Park, Woohyun; Na, Sangcheol; Paik, Sang-Min; Lee, Hyunjae; Park, Jae Woo; Kim, Ho-Young; Jeon, Noo Li
2015-06-01
Soft lithography and other techniques have been developed to investigate biological and chemical phenomena as alternatives to photolithography-based patterning methods, which have compatibility problems. Here, a simple approach for nonlithographic patterning of liquids and gels inside microchannels is described. Using a design that incorporates strategically placed microstructures inside the channel, microliquids or gels can be spontaneously trapped and patterned when the channel is drained. The ability to form microscale patterns inside microfluidic channels using a simple fluid-drain motion offers many advantages. The method is analyzed geometrically on the basis of hydrodynamics and verified with simulation and experiments. Various materials (i.e., water, hydrogels, and other liquids) are successfully patterned into complex shapes that are isolated from each other. Multiple cell types are patterned within the gels. Capillarity guided patterning (CGP) is fast, simple, and robust, and it is not limited by pattern shape, size, cell type, or material. In a simple three-step process, a 3D cancer model that mimics cell-cell and cell-extracellular matrix interactions is engineered. The simplicity and robustness of CGP will be attractive for developing novel in vitro models of organ-on-a-chip and other biological experimental platforms amenable to long-term observation of dynamic events using advanced imaging and analytical techniques.