A comparison of simple global kinetic models for coal devolatilization with the CPD model
Richards, Andrew P.; Fletcher, Thomas H.
2016-08-01
Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over ranges of temperature and heating rate that are applicable to the furnace of interest. In this paper, six different simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms included three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10³ to 10⁶ K/s) at final temperatures up to 1600 K. Comparisons were made of total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages for each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).
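A one-step global model of the kind compared in this abstract integrates a single first-order Arrhenius rate toward an ultimate volatiles yield. The sketch below shows that structure; the pre-exponential factor, activation energy, and ultimate yield are illustrative placeholders, not fitted coal parameters from the paper.

```python
import math

def single_step_yield(times, temps, A=2.0e5, E=1.0e5, v_inf=0.5):
    """Integrate dV/dt = k(T) * (v_inf - V), k(T) = A * exp(-E / (R*T)),
    with explicit Euler. A [1/s], E [J/mol], and v_inf (ultimate yield,
    mass fraction) are illustrative values, not fitted coal parameters."""
    R = 8.314  # J/(mol K)
    v = 0.0
    yields = [v]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        k = A * math.exp(-E / (R * temps[i - 1]))
        v += dt * k * (v_inf - v)
        yields.append(v)
    return yields

# Linear heating at 1e4 K/s from 300 K to a final temperature of 1600 K
heating_rate = 1.0e4
times = [i * 1.0e-5 for i in range(13001)]
temps = [300.0 + heating_rate * t for t in times]
v = single_step_yield(times, temps)
```

A two-step model would add a competing second reaction with its own rate constant and yield; the distributed-activation-energy variant favored in the paper replaces the single E with a distribution.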
Modeling How, When, and What Is Learned in a Simple Fault-Finding Task
ERIC Educational Resources Information Center
Ritter, Frank E.; Bibby, Peter A.
2008-01-01
We have developed a process model that learns in multiple ways while finding faults in a simple control panel device. The model predicts human participants' learning through its own learning. The model's performance was systematically compared to human learning data, including the time course and specific sequence of learned behaviors. These…
A simple dynamic engine model for use in a real-time aircraft simulation with thrust vectoring
NASA Technical Reports Server (NTRS)
Johnson, Steven A.
1990-01-01
A simple dynamic engine model was developed at the NASA Ames Research Center, Dryden Flight Research Facility, for use in thrust vectoring control law development and real-time aircraft simulation. The simple dynamic engine model of the F404-GE-400 engine (General Electric, Lynn, Massachusetts) operates within the aircraft simulator. It was developed using tabular data generated from a complete nonlinear dynamic engine model supplied by the manufacturer. Engine dynamics were simulated using a throttle rate limiter and low-pass filter. Included is a description of a method to account for axial thrust loss resulting from thrust vectoring. In addition, the development of the simple dynamic engine model and its incorporation into the F-18 high alpha research vehicle (HARV) thrust vectoring simulation are described. The simple dynamic engine model was evaluated at Mach 0.2, 35,000 ft altitude and at Mach 0.7, 35,000 ft altitude. The simple dynamic engine model is within 3 percent of the steady state response, and within 25 percent of the transient response, of the complete nonlinear dynamic engine model.
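The rate-limiter-plus-low-pass-filter structure described above can be sketched in a few lines. The rate limit and filter time constant below are assumed round numbers for illustration, not F404 data.

```python
def engine_response(throttle_cmd, dt=0.02, rate_limit=50.0, tau=0.5):
    """Pass a throttle command [%] through a rate limiter, then a
    first-order low-pass filter. rate_limit [%/s] and tau [s] are
    illustrative values, not manufacturer data."""
    limited = throttle_cmd[0]
    filtered = throttle_cmd[0]
    out = [filtered]
    alpha = dt / (tau + dt)  # discrete first-order lag gain
    for cmd in throttle_cmd[1:]:
        # rate limiter: clamp the change per step
        step = max(-rate_limit * dt, min(rate_limit * dt, cmd - limited))
        limited += step
        # low-pass filter: first-order lag toward the limited command
        filtered += alpha * (limited - filtered)
        out.append(filtered)
    return out

cmd = [0.0] * 10 + [100.0] * 190  # step input at t = 0.2 s, dt = 0.02 s
resp = engine_response(cmd)
```

The rate limiter bounds how fast the commanded state can change; the lag filter smooths the limited command into a plausible engine response.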
A Simple Treatment of the Liquidity Trap for Intermediate Macroeconomics Courses
ERIC Educational Resources Information Center
Buttet, Sebastien; Roy, Udayan
2014-01-01
Several leading undergraduate intermediate macroeconomics textbooks now include a simple reduced-form New Keynesian model of short-run dynamics (alongside the IS-LM model). Unfortunately, there is no accompanying description of how the zero lower bound on nominal interest rates affects the model. In this article, the authors show how the…
Multi-Criteria Decision Making For Determining A Simple Model of Supplier Selection
NASA Astrophysics Data System (ADS)
Harwati
2017-06-01
Supplier selection is a decision with many criteria. Supplier selection models usually involve more than five main criteria and more than 10 sub-criteria; in fact, many models include more than 20 criteria. Involving too many criteria sometimes makes supplier selection models difficult to apply in practice. This research focuses on designing a supplier selection model that is easy and simple to apply in a company. The Analytical Hierarchy Process (AHP) is used to weight the criteria. The analysis shows that four criteria are sufficient for a simple, easily applied supplier selection model: price (weight 0.4), shipment (weight 0.3), quality (weight 0.2), and service (weight 0.1). A real-case simulation shows that the simple model yields the same decision as a more complex model.
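AHP derives criteria weights from a pairwise comparison matrix via its principal eigenvector. The sketch below recovers weights by power iteration; the comparison matrix is a hypothetical, perfectly consistent one constructed to reproduce the 0.4/0.3/0.2/0.1 weights, not the paper's survey data.

```python
def ahp_weights(pairwise, iters=100):
    """Approximate the principal eigenvector of a pairwise comparison
    matrix by power iteration, normalized so the weights sum to 1."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w_new = [sum(pairwise[i][j] * w[j] for j in range(n))
                 for i in range(n)]
        s = sum(w_new)
        w = [x / s for x in w_new]
    return w

# Hypothetical, consistent comparisons for price, shipment, quality,
# service, built so the recovered weights are ~0.4/0.3/0.2/0.1.
target = [0.4, 0.3, 0.2, 0.1]
pairwise = [[target[i] / target[j] for j in range(4)] for i in range(4)]
w = ahp_weights(pairwise)
```

Real expert judgments are rarely perfectly consistent, which is why AHP practice also computes a consistency ratio before accepting the weights.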
Fun with maths: exploring implications of mathematical models for malaria eradication.
Eckhoff, Philip A; Bever, Caitlin A; Gerardin, Jaline; Wenger, Edward A
2014-12-11
Mathematical analyses and modelling have an important role informing malaria eradication strategies. Simple mathematical approaches can answer many questions, but it is important to investigate their assumptions and to test whether simple assumptions affect the results. In this note, four examples demonstrate both the effects of model structures and assumptions and also the benefits of using a diversity of model approaches. These examples include the time to eradication, the impact of vaccine efficacy and coverage, drug programs and the effects of duration of infections and delays to treatment, and the influence of seasonality and migration coupling on disease fadeout. An excessively simple structure can miss key results, but simple mathematical approaches can still achieve key results for eradication strategy and define areas for investigation by more complex models.
NASA Astrophysics Data System (ADS)
Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert
2016-05-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~20,000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
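Score-weighted ensemble averaging of the kind used here can be sketched directly: each run's prediction is weighted by a decreasing function of its misfit score. The exponential weighting, its scale, and the toy numbers below are illustrative choices, not the paper's exact scheme or data.

```python
import math

def weighted_ensemble_mean(values, misfits, scale=1.0):
    """Average ensemble-member predictions weighted by exp(-misfit/scale),
    so runs that fit the calibration data better count more."""
    weights = [math.exp(-m / scale) for m in misfits]
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

# Hypothetical sea-level-equivalent predictions [m] and misfit scores
values = [3.0, 3.5, 4.0, 5.0]
misfits = [0.5, 0.2, 1.0, 3.0]
est = weighted_ensemble_mean(values, misfits)
```

The Bayesian emulator-calibration alternative replaces these fixed weights with a posterior over parameters, which is what makes it robust when the ensemble is not full-factorial.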
pyhector: A Python interface for the simple climate model Hector
Willner, Sven N.; Hartin, Corinne; Gieseke, Robert
2017-04-01
Here, pyhector is a Python interface for the simple climate model Hector (Hartin et al. 2015) developed in C++. Simple climate models like Hector can, for instance, be used in the analysis of scenarios within integrated assessment models like GCAM, in the emulation of complex climate models, and in uncertainty analyses. Hector is an open-source, object-oriented, simple global climate carbon-cycle model. Its carbon cycle consists of a one-pool atmosphere, three terrestrial pools which can be broken down into finer biomes or regions, and four carbon pools in the ocean component. The terrestrial carbon cycle includes primary production and respiration fluxes. The ocean carbon cycle circulates carbon via a simplified thermohaline circulation, calculating air-sea fluxes as well as the marine carbonate system. The model input is time series of greenhouse gas emissions; as example scenarios for these, the pyhector package contains the Representative Concentration Pathways (RCPs).
Simplified aeroelastic modeling of horizontal axis wind turbines
NASA Technical Reports Server (NTRS)
Wendell, J. H.
1982-01-01
Certain aspects of the aeroelastic modeling and behavior of the horizontal axis wind turbine (HAWT) are examined. Two simple three degree of freedom models are described in this report, and tools are developed which allow other simple models to be derived. The first simple model developed is an equivalent hinge model to study the flap-lag-torsion aeroelastic stability of an isolated rotor blade. The model includes nonlinear effects, preconing, and noncoincident elastic axis, center of gravity, and aerodynamic center. A stability study is presented which examines the influence of key parameters on aeroelastic stability. Next, two general tools are developed to study the aeroelastic stability and response of a teetering rotor coupled to a flexible tower. The first of these tools is an aeroelastic model of a two-bladed rotor on a general flexible support. The second general tool is a harmonic balance solution method for the resulting second order system with periodic coefficients. The second simple model developed is a rotor-tower model which serves to demonstrate the general tools. This model includes nacelle yawing, nacelle pitching, and rotor teetering. Transient response time histories are calculated and compared to a similar model in the literature. Agreement between the two is very good, especially considering how few harmonics are used. Finally, a stability study is presented which examines the effects of support stiffness and damping, inflow angle, and preconing.
SimpleBox 4.0: Improving the model while keeping it simple….
Hollander, Anne; Schoorl, Marian; van de Meent, Dik
2016-04-01
Chemical behavior in the environment is often modeled with multimedia fate models. SimpleBox is a widely used multimedia fate model, first developed in 1986; two updated versions have since been published. Based on recent scientific developments and experience with SimpleBox 3.0, a new version, SimpleBox 4.0, was developed and is made public here. In this new model, eight major changes were implemented: removal of the local scale and of the vegetation compartments, addition of lake compartments and deep ocean compartments (including the thermohaline circulation), implementation of intermittent rain instead of drizzle, implementation of depth-dependent soil concentrations, and adjustment of the partitioning behavior of organic acids and bases as well as of the value for the enthalpy of vaporization. In this paper, the effects of the model changes in SimpleBox 4.0 on the predicted steady-state concentrations of chemical substances were explored for different substance groups (neutral organic substances, acids, bases, metals) in a standard emission scenario. In general, the largest differences between the predicted concentrations in the new and the old model are caused by the implementation of layered ocean compartments. The vegetation compartments and the local scale, which added undesirably high model complexity, were removed to increase the simplicity and user-friendliness of the model.
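At its core, a multimedia fate model of this kind is a linear mass balance over environmental compartments, solved for steady state. The two-compartment sketch below shows that structure; the emission and rate constants are illustrative, not SimpleBox parameter values.

```python
def steady_state_two_box(emission_air, emission_water,
                         k_loss_air, k_loss_water, k_aw, k_wa):
    """Solve the 2x2 steady-state mass balance
        0 = E_a - (k_loss_a + k_aw) * m_a + k_wa * m_w
        0 = E_w - (k_loss_w + k_wa) * m_w + k_aw * m_a
    by Cramer's rule. Emissions [kg/h] and rate constants [1/h]
    are illustrative values only."""
    a11 = k_loss_air + k_aw      # total first-order removal from air
    a22 = k_loss_water + k_wa    # total first-order removal from water
    det = a11 * a22 - k_aw * k_wa
    m_air = (emission_air * a22 + emission_water * k_wa) / det
    m_water = (emission_water * a11 + emission_air * k_aw) / det
    return m_air, m_water

m_air, m_water = steady_state_two_box(10.0, 0.0, 0.1, 0.05, 0.02, 0.01)
```

SimpleBox itself solves the same kind of linear system over many more compartments (soils, lakes, layered oceans) and at nested spatial scales.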
An investigation of the astronomical theory of the ice ages using a simple climate-ice sheet model
NASA Technical Reports Server (NTRS)
Pollard, D.
1978-01-01
The astronomical theory of the Quaternary ice ages is incorporated into a simple climate model for global weather; important features of the model include the albedo feedback, topography and dynamics of the ice sheets. For various parameterizations of the orbital elements, the model yields realistic assessments of the northern ice sheet. Lack of a land-sea heat capacity contrast represents one of the chief difficulties of the model.
Huang, Yuan-sheng; Yang, Zhi-rong; Zhan, Si-yan
2015-06-18
To investigate the use of simple pooling and the bivariate model in meta-analyses of diagnostic test accuracy (DTA) published in Chinese journals (January to November, 2014), compare the differences in results between these two models, and explore the impact of between-study variability of sensitivity and specificity on the differences. DTA meta-analyses were searched through the Chinese Biomedical Literature Database (January to November, 2014). Details of the models and the data for the fourfold tables were extracted. Descriptive analysis was conducted to investigate the prevalence of the simple pooling method and the bivariate model in the included literature. Data were re-analyzed with the two models respectively, and differences in the results were examined by the Wilcoxon signed rank test. How the differences in results were affected by between-study variability of sensitivity and specificity, expressed by I², was also explored. A total of 55 systematic reviews, containing 58 DTA meta-analyses, were included, and 25 DTA meta-analyses were eligible for re-analysis. Simple pooling was used in 50 (90.9%) systematic reviews and the bivariate model in 1 (1.8%). The remaining 4 (7.3%) articles used other models to pool sensitivity and specificity or pooled neither of them. Of the reviews simply pooling sensitivity and specificity, 41 (82.0%) were at risk of wrongly using the Meta-DiSc software. The differences in the medians of sensitivity and specificity between the two models were both 0.011 (P<0.001 and P=0.031, respectively). Greater differences were found as the I² of sensitivity or specificity became larger, especially when I²>75%. Most DTA meta-analyses published in Chinese journals (January to November, 2014) combine sensitivity and specificity by simple pooling. Meta-DiSc can pool sensitivity and specificity only through a fixed-effect model, but a high proportion of authors believe it implements a random-effect model. Simple pooling tends to underestimate the results compared with the bivariate model.
The greater the between-study variance, the larger the deviation of simple pooling is likely to be. It is necessary to raise the level of knowledge of statistical methods and software for meta-analyses of DTA data.
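Simple pooling, the practice this review cautions against, amounts to collapsing the fourfold tables across studies and computing a single sensitivity and specificity, ignoring between-study variability. A minimal sketch, with hypothetical study counts:

```python
def simple_pool(studies):
    """Pool sensitivity and specificity by summing fourfold-table counts
    (tp, fp, fn, tn) across studies. This ignores between-study
    heterogeneity, which is exactly why it can mislead when I2 is high."""
    tp = sum(s[0] for s in studies)
    fp = sum(s[1] for s in studies)
    fn = sum(s[2] for s in studies)
    tn = sum(s[3] for s in studies)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical fourfold tables (tp, fp, fn, tn) from three studies
studies = [(45, 5, 5, 45), (30, 10, 20, 40), (18, 2, 2, 78)]
sens, spec = simple_pool(studies)
```

The bivariate model instead fits a random-effects model to the logit sensitivities and specificities jointly, estimating their between-study variances and correlation; that requires an iterative likelihood fit rather than arithmetic like the above.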
A simple rain attenuation model for earth-space radio links operating at 10-35 GHz
NASA Technical Reports Server (NTRS)
Stutzman, W. L.; Yon, K. M.
1986-01-01
The simple attenuation model has been improved from an earlier version and now includes the effect of wave polarization. The model is for the prediction of rain attenuation statistics on earth-space communication links operating in the 10-35 GHz band. Simple calculations produce attenuation values as a function of average rain rate. These together with rain rate statistics (either measured or predicted) can be used to predict annual rain attenuation statistics. In this paper model predictions are compared to measured data from a data base of 62 experiments performed in the U.S., Europe, and Japan. Comparisons are also made to predictions from other models.
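Models of this family typically compute a specific attenuation from rain rate via a power law and multiply by an effective path length. The sketch below uses that generic form; the a and b coefficients are illustrative regression values of the kind tabulated near 12 GHz, not the paper's model constants, and the path length is treated as already "effective."

```python
def rain_attenuation_db(rain_rate, path_km, a=0.0188, b=1.217):
    """Total path attenuation [dB] from a power-law specific attenuation
    gamma = a * R**b [dB/km], with R the rain rate [mm/h]. The a, b
    values are illustrative, frequency- and polarization-dependent
    coefficients, not the paper's fitted constants."""
    gamma = a * rain_rate ** b  # specific attenuation, dB/km
    return gamma * path_km

att = rain_attenuation_db(rain_rate=25.0, path_km=5.0)
```

Combining such a curve with measured rain-rate exceedance statistics yields the annual attenuation statistics the abstract describes.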
Model for Predicting Passage of Invasive Fish Species Through Culverts
NASA Astrophysics Data System (ADS)
Neary, V.
2010-12-01
Conservation efforts to promote or inhibit fish passage include the application of simple fish passage models to determine whether an open channel flow allows passage of a given fish species. Derivations of simple fish passage models for uniform and nonuniform flow conditions are presented. For uniform flow conditions, a model equation is developed that predicts the mean-current velocity threshold in a fishway, or velocity barrier, which causes exhaustion at a given maximum distance of ascent. The derivation of a simple expression for this exhaustion-threshold (ET) passage model is presented using kinematic principles coupled with fatigue curves for threatened and endangered fish species. Mean current velocities at or above the threshold predict failure to pass. Mean current velocities below the threshold predict successful passage. The model is therefore intuitive and easily applied to predict passage or exclusion. The ET model’s simplicity comes with limitations, however, including its application only to uniform flow, which is rarely found in the field. This limitation is addressed by deriving a model that accounts for nonuniform conditions, including backwater profiles and drawdown curves. Comparison of these models with experimental data from volitional swimming studies of fish indicates reasonable performance, but limitations are still present due to the difficulty in predicting fish behavior and passage strategies that can vary among individuals and different fish species.
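For the uniform-flow case, the exhaustion-threshold idea reduces to kinematics: distance of ascent equals ground speed (swim speed minus current) times time-to-fatigue from a fatigue curve. The sketch below uses a hypothetical exponential fatigue curve and made-up coefficients, not data for any real species.

```python
import math

def max_ascent_distance(u_water, u_fish, a=6.0, b=0.8):
    """Maximum distance of ascent [m] before exhaustion for a fish
    swimming at u_fish [m/s] against a current u_water [m/s], with
    endurance from a hypothetical fatigue curve t = exp(a - b*u_fish) [s].
    The a, b coefficients are illustrative, not species data."""
    t_fatigue = math.exp(a - b * u_fish)
    return max(0.0, (u_fish - u_water) * t_fatigue)

def passes(u_water, u_fish, culvert_length):
    """Predict passage: can the fish out-ascend the barrier length?"""
    return max_ascent_distance(u_water, u_fish) >= culvert_length

ascent = max_ascent_distance(u_water=1.0, u_fish=2.5)
```

Solving this relation for the current velocity at which ascent distance equals the barrier length gives the mean-velocity threshold the model uses; the nonuniform-flow extension integrates the same balance along a backwater or drawdown profile.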
A simple 2D biofilm model yields a variety of morphological features.
Hermanowicz, S W
2001-01-01
A two-dimensional biofilm model was developed based on the concept of cellular automata. Three simple, generic processes were included in the model: cell growth, internal and external mass transport and cell detachment (erosion). The model generated a diverse range of biofilm morphologies (from dense layers to open, mushroom-like forms) similar to those observed in real biofilm systems. Bulk nutrient concentration and external mass transfer resistance had a large influence on the biofilm structure.
A Mathematical Model of a Simple Amplifier Using a Ferroelectric Transistor
NASA Technical Reports Server (NTRS)
Sayyah, Rana; Hunt, Mitchell; MacLeod, Todd C.; Ho, Fat D.
2009-01-01
This paper presents a mathematical model characterizing the behavior of a simple amplifier using a FeFET. The model is based on empirical data and incorporates several variables that affect the output, including frequency, load resistance, and gate-to-source voltage. Since the amplifier is the basis of many circuit configurations, a mathematical model that describes the behavior of a FeFET-based amplifier will help in the integration of FeFETs into many other circuits.
Including resonances in the multiperipheral model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinsky, S.S.; Snider, D.R.; Thomas, G.H.
1973-10-01
A simple generalization of the multiperipheral model (MPM) and the Mueller-Regge model (MRM) is given which has improved phenomenological capabilities by explicitly incorporating resonance phenomena, and is still simple enough to be an important theoretical laboratory. The model is discussed both with and without charge. In addition, the one-channel, two-channel, three-channel, and N-channel cases are explicitly treated. Particular attention is paid to the constraints of charge conservation and positivity in the MRM. The recently proven equivalence between the MRM and MPM is extended to this model, and is used extensively.
Simple Benchmark Specifications for Space Radiation Protection
NASA Technical Reports Server (NTRS)
Singleterry, Robert C. Jr.; Aghara, Sukesh K.
2013-01-01
This report defines space radiation benchmark specifications, starting with simple, monoenergetic, mono-directional particles on slabs and progressing to human models in spacecraft. It specifies the models and sources needed for the benchmarks and what the team performing a benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.
Dense simple plasmas as high-temperature liquid simple metals
NASA Technical Reports Server (NTRS)
Perrot, F.
1990-01-01
The thermodynamic properties of dense plasmas considered as high-temperature liquid metals are studied. An attempt is made to show that the neutral pseudoatom picture of liquid simple metals may be extended for describing plasmas in ranges of densities and temperatures where their electronic structure remains 'simple'. The primary features of the model when applied to plasmas include the temperature-dependent self-consistent calculation of the electron charge density and the determination of a density and temperature-dependent ionization state.
Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study
Bornschein, Jörg; Henniges, Marc; Lücke, Jörg
2013-01-01
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938
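The "component superposition assumption" that separates the two models can be made concrete on toy data: the linear model adds weighted components per pixel, while an occlusion-like rule lets the strongest component win. The max rule below is a common stand-in for occlusion, used here for illustration rather than as the paper's exact generative model.

```python
def linear_superposition(components, weights):
    """Standard sparse-coding image formation: per-pixel weighted sum."""
    return [sum(w * c[i] for w, c in zip(weights, components))
            for i in range(len(components[0]))]

def occlusive_superposition(components, weights):
    """Occlusion-like combination: at each pixel the strongest weighted
    component wins (a max rule, a simple stand-in for occlusion)."""
    return [max(w * c[i] for w, c in zip(weights, components))
            for i in range(len(components[0]))]

# Two overlapping 3-pixel "components" and their activation weights
comps = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]]
w = [1.0, 0.5]
lin = linear_superposition(comps, w)     # components add where they overlap
occ = occlusive_superposition(comps, w)  # the stronger component occludes
```

Where the components overlap (the middle pixel), the two rules disagree, and it is exactly this difference that propagates into different optimal receptive fields.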
Barlow, Paul M.
1997-01-01
Steady-state, two- and three-dimensional, ground-water-flow models coupled with particle tracking were evaluated to determine their effectiveness in delineating contributing areas of wells pumping from stratified-drift aquifers of Cape Cod, Massachusetts. Several contributing areas delineated by use of the three-dimensional models do not conform to the simple ellipsoidal shapes typically delineated by two-dimensional analytical and numerical modeling techniques, and include discontinuous areas of the water table.
Cloud-Based Tools to Support High-Resolution Modeling (Invited)
NASA Astrophysics Data System (ADS)
Jones, N.; Nelson, J.; Swain, N.; Christensen, S.
2013-12-01
The majority of watershed models developed to support decision-making by water management agencies are simple, lumped-parameter models. Maturity in research codes and advances in the computational power from multi-core processors on desktop machines, commercial cloud-computing resources, and supercomputers with thousands of cores have created new opportunities for employing more accurate, high-resolution distributed models for routine use in decision support. The barriers to using such models on a more routine basis include the massive amounts of spatial data that must be processed for each new scenario and the lack of efficient visualization tools. In this presentation we will review a current NSF-funded project called CI-WATER that is intended to overcome many of these roadblocks associated with high-resolution modeling. We are developing a suite of tools that will make it possible to deploy customized web-based apps for running custom scenarios for high-resolution models with minimal effort. These tools are based on a software stack that includes 52 North, MapServer, PostGIS, HT Condor, CKAN, and Python. This open source stack provides a simple scripting environment for quickly configuring new custom applications for running high-resolution models as geoprocessing workflows. The HT Condor component facilitates simple access to local distributed computers or commercial cloud resources when necessary for stochastic simulations. The CKAN framework provides a powerful suite of tools for hosting such workflows in a web-based environment that includes visualization tools and storage of model simulations in a database for archival, querying, and sharing of model results. Prototype applications including land use change, snow melt, and burned area analysis will be presented. This material is based upon work supported by the National Science Foundation under Grant No. 1135482
Modelling Nitrogen Oxides in Los Angeles Using a Hybrid Dispersion/Land Use Regression Model
NASA Astrophysics Data System (ADS)
Wilton, Darren C.
The goal of this dissertation is to develop models capable of predicting long term annual average NOx concentrations in urban areas. Predictions from simple meteorological dispersion models and seasonal proxies for NO2 oxidation were included as covariates in a land use regression (LUR) model for NOx in Los Angeles, CA. The NOx measurements were obtained from a comprehensive measurement campaign that is part of the Multi-Ethnic Study of Atherosclerosis Air Pollution Study (MESA Air). Simple land use regression models were initially developed using a suite of GIS-derived land use variables developed from various buffer sizes (R²=0.15). Caline3, a simple steady-state Gaussian line source model, was then incorporated into the land-use regression framework. The addition of this spatio-temporally varying Caline3 covariate improved the simple LUR model predictions. The extent of improvement was much more pronounced for models based solely on the summer measurements (simple LUR: R²=0.45; Caline3/LUR: R²=0.70) than it was for models based on all seasons (R²=0.20). We then used a Lagrangian dispersion model to convert static land use covariates for population density and commercial/industrial area into spatially and temporally varying covariates. The inclusion of these covariates resulted in significant improvement in model prediction (R²=0.57). In addition to the dispersion model covariates described above, a two-week average value of daily peak-hour ozone was included as a surrogate of the oxidation of NO2 during the different sampling periods. This additional covariate further improved overall model performance for all models. The best model by 10-fold cross validation (R²=0.73) contained the Caline3 prediction, a static covariate for length of A3 roads within 50 meters, the Calpuff-adjusted covariates derived from both population density and industrial/commercial land area, and the ozone covariate.
This model was tested against annual average NOx concentrations from an independent data set from the EPA's Air Quality System (AQS) and MESA Air fixed site monitors, and performed very well (R²=0.82).
System-level modeling of acetone-butanol-ethanol fermentation.
Liao, Chen; Seo, Seung-Oh; Lu, Ting
2016-05-01
Acetone-butanol-ethanol (ABE) fermentation is a metabolic process of clostridia that produces bio-based solvents including butanol. It is enabled by an underlying metabolic reaction network and modulated by cellular gene regulation and environmental cues. Mathematical modeling has served as a valuable strategy to facilitate the understanding, characterization and optimization of this process. In this review, we highlight recent advances in system-level, quantitative modeling of ABE fermentation. We begin with an overview of integrative processes underlying the fermentation. Next we survey modeling efforts including early simple models, models with a systematic metabolic description, and those incorporating metabolism through simple gene regulation. Particular focus is given to a recent system-level model that integrates the metabolic reactions, gene regulation and environmental cues. We conclude by discussing the remaining challenges and future directions towards predictive understanding of ABE fermentation.
Spatial surplus production modeling of Atlantic tunas and billfish.
Carruthers, Thomas R; McAllister, Murdoch K; Taylor, Nathan G
2011-10-01
We formulate and simulation-test a spatial surplus production model that provides a basis with which to undertake multispecies, multi-area, stock assessment. Movement between areas is parameterized using a simple gravity model that includes a "residency" parameter that determines the degree of stock mixing among areas. The model is deliberately simple in order to (1) accommodate nontarget species that typically have fewer available data and (2) minimize computational demand to enable simulation evaluation of spatial management strategies. Using this model, we demonstrate that careful consideration of spatial catch and effort data can provide the basis for simple yet reliable spatial stock assessments. If simple spatial dynamics can be assumed, tagging data are not required to reliably estimate spatial distribution and movement. When applied to eight stocks of Atlantic tuna and billfish, the model tracks regional catch data relatively well by approximating local depletions and exchange among high-abundance areas. We use these results to investigate and discuss the implications of using spatially aggregated stock assessment for fisheries in which the distribution of both the population and fishing vary over time.
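The gravity-plus-residency movement structure described above can be sketched as a row-stochastic matrix: a fixed fraction of each area's stock stays home, and the remainder is distributed in proportion to the attractiveness of other areas divided by distance. The functional form and all numbers below are illustrative, not the paper's parameterization.

```python
def movement_matrix(attractiveness, distance, residency):
    """Row-stochastic movement probabilities: a fraction `residency`
    stays in the home area; the rest is allocated by a gravity rule
    proportional to attractiveness / distance. Illustrative form only."""
    n = len(attractiveness)
    probs = []
    for i in range(n):
        grav = [0.0 if j == i else attractiveness[j] / distance[i][j]
                for j in range(n)]
        total = sum(grav)
        row = [residency if j == i
               else (1.0 - residency) * grav[j] / total
               for j in range(n)]
        probs.append(row)
    return probs

dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]  # inter-area distances
attr = [1.0, 2.0, 1.0]                    # relative area attractiveness
move = movement_matrix(attr, dist, residency=0.8)
```

A residency parameter near 1 gives nearly closed local stocks; lowering it increases mixing among areas, which is the single knob the assessment model estimates.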
Slow-Slip Phenomena Represented by the One-Dimensional Burridge-Knopoff Model of Earthquakes
NASA Astrophysics Data System (ADS)
Kawamura, Hikaru; Yamamoto, Maho; Ueda, Yushi
2018-05-01
Slow-slip phenomena, including afterslips and silent earthquakes, are studied using a one-dimensional Burridge-Knopoff model that obeys the rate-and-state dependent friction law. By varying only a few model parameters, this simple model allows reproducing a variety of seismic slips within a single framework, including main shocks, precursory nucleation processes, afterslips, and silent earthquakes.
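The key ingredient of the rate-and-state friction law used in such models is its steady-state velocity dependence: with a < b, friction weakens as slip accelerates, and the size of a - b helps set whether slip is fast or slow. A minimal sketch of that steady-state relation, with illustrative laboratory-scale parameter values rather than the paper's:

```python
import math

def steady_state_friction(v, mu0=0.6, a=0.010, b=0.015, v0=1.0e-6):
    """Steady-state rate-and-state friction coefficient
        mu_ss = mu0 + (a - b) * ln(v / v0).
    With a < b (velocity weakening), friction drops as slip speeds up;
    near a - b ~ 0 the instability is marginal, favoring slow slip.
    Parameter values are illustrative."""
    return mu0 + (a - b) * math.log(v / v0)

mu_slow = steady_state_friction(1.0e-6)  # at the reference speed, mu = mu0
mu_fast = steady_state_friction(1.0e-3)  # lower, since a < b here
```

The full Burridge-Knopoff model couples many such sliders through springs and evolves the state variable dynamically, which is what produces main shocks, afterslips, and silent earthquakes within one framework.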
A simple dynamic subgrid-scale model for LES of particle-laden turbulence
NASA Astrophysics Data System (ADS)
Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz
2017-04-01
In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.
Multiphase flow in geometrically simple fracture intersections
Basagaoglu, H.; Meakin, P.; Green, C.T.; Mathew, M.; ,
2006-01-01
A two-dimensional lattice Boltzmann (LB) model with fluid-fluid and solid-fluid interaction potentials was used to study gravity-driven flow in geometrically simple fracture intersections. Simulated scenarios included fluid dripping from a fracture aperture, two-phase flow through intersecting fractures and thin-film flow on smooth and undulating solid surfaces. Qualitative comparisons with recently published experimental findings indicate that for these scenarios the LB model captured the underlying physics reasonably well.
Hartin, Corinne A.; Patel, Pralit L.; Schwarber, Adria; ...
2015-04-01
Simple climate models play an integral role in the policy and scientific communities. They are used for climate mitigation scenarios within integrated assessment models, complex climate model emulation, and uncertainty analyses. Here we describe Hector v1.0, an open source, object-oriented, simple global climate carbon-cycle model. This model runs essentially instantaneously while still representing the most critical global-scale earth system processes. Hector has a three-part main carbon cycle: a one-pool atmosphere, land, and ocean. The model's terrestrial carbon cycle includes primary production and respiration fluxes, accommodating arbitrary geographic divisions into, e.g., ecological biomes or political units. Hector actively solves the inorganic carbon system in the surface ocean, directly calculating air-sea fluxes of carbon and ocean pH. Hector reproduces the global historical trends of atmospheric [CO2], radiative forcing, and surface temperatures. The model simulates all four Representative Concentration Pathways (RCPs) with equivalent rates of change of key variables over time compared to current observations, MAGICC (a well-known simple climate model), and models from the 5th Coupled Model Intercomparison Project. Hector's flexibility, open-source nature, and modular design will facilitate a broad range of research in various areas.
Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders
2007-01-01
Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…
pyomocontrib_simplemodel v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, William
2017-03-02
Pyomo supports the formulation and analysis of mathematical models for complex optimization applications. This library extends the API of Pyomo to include a simple modeling representation: a list of objectives and constraints.
Modeling Impact of Urbanization in US Cities Using Simple Biosphere Model SiB2
NASA Technical Reports Server (NTRS)
Zhang, Ping; Bounoua, Lahouari; Thome, Kurtis; Wolfe, Robert
2016-01-01
We combine Landsat- and Moderate Resolution Imaging Spectroradiometer (MODIS)-based products, as well as climate drivers from Phase 2 of the North American Land Data Assimilation System (NLDAS-2), in the Simple Biosphere land surface model (SiB2) to assess the impact of urbanization in the continental USA (excluding Alaska and Hawaii). More than 300 cities and their surrounding suburban and rural areas are defined in this study to characterize the impact of urbanization on surface climate, including surface energy, carbon budget, and water balance. These analyses reveal an uneven impact of urbanization across the continent that should inform policy options for managing urban growth, including heat mitigation and energy use, carbon sequestration, and flood prevention.
Extension of the ADC Charge-Collection Model to Include Multiple Junctions
NASA Technical Reports Server (NTRS)
Edmonds, Larry D.
2011-01-01
The ADC model is a charge-collection model derived for simple p-n junction silicon diodes having a single reverse-biased p-n junction at one end and an ideal substrate contact at the other end. The present paper extends the model to include multiple junctions, and the goal is to estimate how collected charge is shared by the different junctions.
A simple physical model for X-ray burst sources
NASA Technical Reports Server (NTRS)
Joss, P. C.; Rappaport, S.
1977-01-01
In connection with information considered by Illarionov and Sunyaev (1975) and van den Heuvel (1975), a simple physical model for an X-ray burst source in the galactic disk is proposed. The model includes an unevolved OB star with a relatively weak stellar wind and a compact object in a close binary system. For some reason, the stellar wind from the OB star is unable to accrete steadily onto the compact object. When the stellar wind is sufficiently weak, the compact object accretes irregularly, leading to X-ray bursts.
Cellular Automata with Anticipation: Examples and Presumable Applications
NASA Astrophysics Data System (ADS)
Krushinsky, Dmitry; Makarenko, Alexander
2010-11-01
One of the most promising new methodologies for modelling is the so-called cellular automata (CA) approach. According to this paradigm, models are built from simple elements connected into regular structures with local interaction between neighbours. The patterns of connections usually have a simple geometry (lattices). As a classical example of CA we mention the game `Life' by J. Conway. This paper presents two examples of CA with the anticipation property. These examples include a modification of the game `Life' and a cellular model of crowd movement.
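Conway's `Life', cited above as the classical CA example, fits in a few lines: each cell lives or dies according to a count of its eight neighbours. A minimal sketch:

```python
from collections import Counter

# One step of Conway's game `Life': a dead cell with exactly 3 live
# neighbours is born; a live cell with 2 or 3 live neighbours survives;
# every other cell is dead in the next generation.

def life_step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" (three cells in a row) oscillates with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
print(life_step(life_step(blinker)) == blinker)  # → True
```

The sparse set representation (storing only live cells) is convenient for experiments like the paper's modified `Life', since the update rule is localized to the neighbourhood of live cells.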
Advanced statistics: linear regression, part I: simple linear regression.
Marill, Keith A
2004-01-01
Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
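The method of least squares described above has a closed form for the slope and intercept. A minimal sketch, with made-up data:

```python
# Least-squares fit for simple linear regression, y = b0 + b1*x:
#   b1 = S_xy / S_xx,   b0 = mean(y) - b1 * mean(x)
# where S_xy and S_xx are centered sums of cross-products and squares.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sxy / sxx                 # slope
    b0 = my - b1 * mx              # intercept
    return b0, b1

# Exactly linear data recovers the generating line y = 1 + 2x.
b0, b1 = fit_line([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
print(b0, b1)  # → 1.0 2.0
```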
Correlation and simple linear regression.
Eberly, Lynn E
2007-01-01
This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
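The estimation step for correlation reduces to Pearson's r. A minimal sketch with illustrative data (note the connection to regression mentioned above: the fitted slope equals r multiplied by the ratio of standard deviations):

```python
import math

# Pearson correlation coefficient between two continuous variables:
#   r = S_xy / sqrt(S_xx * S_yy)
# using centered sums of cross-products and squares.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # → 1.0  (perfect positive)
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # → -1.0 (perfect negative)
```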
NASA Astrophysics Data System (ADS)
Baird, M. E.; Walker, S. J.; Wallace, B. B.; Webster, I. T.; Parslow, J. S.
2003-03-01
A simple model of estuarine eutrophication is built on biomechanical (or mechanistic) descriptions of a number of the key ecological processes in estuaries. Mechanistically described processes include the nutrient uptake and light capture of planktonic and benthic autotrophs, and the encounter rates of planktonic predators and prey. Other more complex processes, such as sediment biogeochemistry, detrital processes and phosphate dynamics, are modelled using empirical descriptions from the Port Phillip Bay Environmental Study (PPBES) ecological model. A comparison is made between the mechanistically determined rates of ecological processes and the analogous empirically determined rates in the PPBES ecological model. The rates generally agree, with a few significant exceptions. Model simulations were run at a range of estuarine depths and nutrient loads, with outputs presented as the annually averaged biomass of autotrophs. The simulations followed a simple conceptual model of eutrophication, suggesting a simple biomechanical understanding of estuarine processes can provide a predictive tool for ecological processes in a wide range of estuarine ecosystems.
Speededness and Adaptive Testing
ERIC Educational Resources Information Center
van der Linden, Wim J.; Xiong, Xinhui
2013-01-01
Two simple constraints on the item parameters in a response-time model are proposed to control the speededness of an adaptive test. As the constraints are additive, they can easily be included in the constraint set for a shadow-test approach (STA) to adaptive testing. Alternatively, a simple heuristic is presented to control speededness in plain…
Simple Models for Tough Concepts
ERIC Educational Resources Information Center
Cavagnoi, Richard M.; Barnett, Thomas
1976-01-01
Describes the construction of teaching models made from a variety of materials such as poker chips and cardboard that illustrate many chemical phenomena, including subatomic particles, molecular structure, solvation and dissociation, and enzyme-substrate interactions. (MLH)
FARSITE: Fire Area Simulator-model development and evaluation
Mark A. Finney
1998-01-01
A computer simulation model, FARSITE, includes existing fire behavior models for surface, crown, spotting, point-source fire acceleration, and fuel moisture. The model's components and assumptions are documented. Simulations were run for simple conditions that illustrate the effect of individual fire behavior models on two-dimensional fire growth.
Directed Bak-Sneppen Model for Food Chains
NASA Astrophysics Data System (ADS)
Stauffer, D.; Jan, N.
A modification of the Bak-Sneppen model to include simple elements of Darwinian evolution is used to check the survival of prey and predators in long food chains. Mutations, selection, and starvation resulting from depleted prey are incorporated in this model.
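For contrast with the directed food-chain variant studied here, the standard (undirected) Bak-Sneppen update is easy to state in code: replace the least-fit species and its neighbours with fresh random fitnesses. The lattice size and step count below are arbitrary:

```python
import random

# One update of the standard 1-D Bak-Sneppen evolution model on a ring:
# the species with the lowest fitness and its two neighbours receive new
# random fitnesses. (The directed, food-chain variant of the paper changes
# which neighbours are affected; this sketch shows the undirected rule.)

def bak_sneppen_step(fitness, rng):
    n = len(fitness)
    i = min(range(n), key=fitness.__getitem__)   # least-fit species
    for j in ((i - 1) % n, i, (i + 1) % n):      # periodic neighbours
        fitness[j] = rng.random()
    return fitness

rng = random.Random(42)
f = [rng.random() for _ in range(50)]
for _ in range(10000):
    bak_sneppen_step(f, rng)

# After many updates the system self-organizes: most fitnesses sit above a
# critical threshold (approximately 2/3 in the 1-D model).
print(sum(1 for v in f if v > 0.5), "of 50 species above 0.5")
```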
NASA Astrophysics Data System (ADS)
Paiewonsky, Pablo; Elison Timm, Oliver
2018-03-01
In this paper, we present a simple dynamic global vegetation model whose primary intended use is auxiliary to the land-atmosphere coupling scheme of a climate model, particularly one of intermediate complexity. The model simulates not only the essential ecological variables but also some hydrological and surface energy variables that are typically either simulated by land surface schemes or else used as boundary data input for these schemes. The model formulations and their derivations are presented here in detail. The model includes some realistic and useful features for its level of complexity, including a photosynthetic dependency on light, full coupling of photosynthesis and transpiration through an interactive canopy resistance, and a soil organic carbon dependence for bare-soil albedo. We evaluate the model's performance by running it as part of a simple land surface scheme that is driven by reanalysis data. The evaluation against observational data includes net primary productivity, leaf area index, surface albedo, and diagnosed variables relevant for the closure of the hydrological cycle. In this setup, we find that the model gives an adequate to good simulation of basic large-scale ecological and hydrological variables. Of the variables analyzed in this paper, gross primary productivity is particularly well simulated. The results also reveal the current limitations of the model. The most significant deficiency is the excessive simulation of evapotranspiration in mid- to high northern latitudes during their winter to spring transition. The model has a relative advantage in situations that require some combination of computational efficiency, model transparency and tractability, and the simulation of large-scale vegetation and land surface characteristics under non-present-day conditions.
A Simple Model of Nitrogen Concentration, Throughput, and Denitrification in Estuaries
The Estuary Nitrogen Model (ENM) is a mass balance model that includes calculation of nitrogen losses within bays and estuaries using system flushing time. The model has been used to demonstrate the dependence of throughput and denitrification of nitrogen in bays and estuaries on...
NASA Technical Reports Server (NTRS)
Stordal, Frode; Garcia, Rolando R.
1987-01-01
The 1-1/2-D model of Holton (1986), which is actually a highly truncated two-dimensional model, describes latitudinal variations of tracer mixing ratios in terms of their projections onto second-order Legendre polynomials. The present study extends the work of Holton by including tracers with photochemical production in the stratosphere (O3 and NOy). It also includes latitudinal variations in the photochemical sources and sinks, improving slightly the calculated global mean profiles for the long-lived tracers studied by Holton and improving substantially the latitudinal behavior of ozone. Sensitivity tests of the dynamical parameters in the model are performed, showing that the response of the model to changes in vertical residual meridional winds and horizontal diffusion coefficients is similar to that of a full two-dimensional model. A simple ozone perturbation experiment shows the model's ability to reproduce large-scale latitudinal variations in total ozone column depletions as well as ozone changes in the chemically controlled upper stratosphere.
From Brown-Peterson to continual distractor via operation span: A SIMPLE account of complex span.
Neath, Ian; VanWormer, Lisa A; Bireta, Tamra J; Surprenant, Aimée M
2014-09-01
Three memory tasks (Brown-Peterson, complex span, and continual distractor) all alternate presentation of a to-be-remembered item with a distractor activity, but each task is associated with a different memory system: short-term memory, working memory, and long-term memory, respectively. SIMPLE, a relative local distinctiveness model, has previously been fit to data from both the Brown-Peterson and continual distractor tasks; here we use the same version of the model to fit data from a complex span task. Despite the many differences between the tasks, including unpredictable list length, SIMPLE fit the data well. Because SIMPLE posits a single memory system, these results constitute yet another demonstration that performance on tasks originally thought to tap different memory systems can be explained without invoking multiple memory systems.
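The relative local distinctiveness idea at the core of SIMPLE can be sketched directly: items live on a logarithmic time scale, and an item is retrievable to the extent that its log-time is far from its neighbours'. The c parameter and retention intervals below are illustrative assumptions, not fitted values from the paper:

```python
import math

# Core of the SIMPLE distinctiveness computation (sketch). Each item is
# represented by the log of the time since its presentation; pairwise
# similarity decays exponentially with log-time distance, and an item's
# discriminability is its self-similarity relative to total similarity.

def discriminability(times, c=10.0):
    """times = seconds elapsed since each item was studied."""
    logs = [math.log(t) for t in times]
    out = []
    for i, li in enumerate(logs):
        sims = [math.exp(-c * abs(li - lj)) for lj in logs]
        out.append(sims[i] / sum(sims))
    return out

# Five items studied 1 s apart, recall starting right after the list.
d = discriminability([5.0, 4.0, 3.0, 2.0, 1.0])
# Log compression spreads recent items apart, so the most recent item is
# the most distinctive: the classic recency effect.
print(d.index(max(d)) == len(d) - 1)   # → True
```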
Experimental Evaluation of Balance Prediction Models for Sit-to-Stand Movement in the Sagittal Plane
Pena Cabra, Oscar David; Watanabe, Takashi
2013-01-01
Evaluation of balance control ability is important in rehabilitation training. In this paper, in order to clarify the usefulness and limitations of a traditional simple inverted pendulum model for balance prediction in sit-to-stand movements, the traditional simple model was compared to an inertia-variable (rotational radius) inverted pendulum model that includes multiple-joint influence. The predictions were tested experimentally with six healthy subjects. The evaluation showed that the multiple-joint influence model is more accurate in predicting balance under demanding sit-to-stand conditions. On the other hand, the evaluation also showed that the traditionally used simple inverted pendulum model is still reliable in predicting balance during sit-to-stand movement under non-demanding (normal) conditions. In particular, the simple model was shown to be effective for sit-to-stand movements with low center-of-mass velocity at seat-off. Moreover, almost all trajectories under the normal condition seemed to follow the same control strategy, in which the subjects used more energy than the minimum necessary for standing up. This suggests that safety considerations take precedence over energy efficiency during a sit-to-stand, since the most energy-efficient trajectory is close to the backward fall boundary. PMID:24187580
Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G.; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J.; Arruda-Olson, Adelaide M.
2016-01-01
Objective: Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm to billing code algorithms, using ankle-brachial index (ABI) test results as the gold standard. Methods: We compared the performance of the NLP algorithm to 1) results of gold standard ABI; 2) previously validated algorithms based on relevant ICD-9 diagnostic codes (simple model); and 3) a combination of ICD-9 codes with procedural codes (full model). A dataset of 1,569 PAD patients and controls was randomly divided into training (n = 935) and testing (n = 634) subsets. Results: We iteratively refined the NLP algorithm in the training set, including narrative note sections, note types, and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP: 91.8%, full model: 81.8%, simple model: 83%, P<.001), PPV (NLP: 92.9%, full model: 74.3%, simple model: 79.9%, P<.001), and specificity (NLP: 92.5%, full model: 64.2%, simple model: 75.9%, P<.001). Conclusions: A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. PMID:28189359
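The reported accuracy, PPV, and specificity all derive from a 2x2 confusion matrix against the ABI gold standard. The sketch below shows the standard definitions with made-up counts, not the study's data:

```python
# Classification metrics from a 2x2 confusion matrix:
#   tp/fp = true/false positives, tn/fn = true/false negatives.
# The counts below are invented for illustration only.

def metrics(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "ppv": tp / (tp + fp),           # positive predictive value
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

m = metrics(tp=90, fp=10, tn=80, fn=20)
print(m["accuracy"], m["ppv"], round(m["specificity"], 3))  # → 0.85 0.9 0.889
```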
ERIC Educational Resources Information Center
Bonifacci, Paola; Tobia, Valentina
2017-01-01
The present study evaluated which components within the simple view of reading model better predicted reading comprehension in a sample of bilingual language-minority children exposed to Italian, a highly transparent language, as a second language. The sample included 260 typically developing bilingual children who were attending either the first…
The time-dependent response of 3- and 5-layer sandwich beams
NASA Technical Reports Server (NTRS)
Hyer, M. W.; Oleksuk, L. S. S.; Bowles, D. E.
1992-01-01
Simple sandwich beam models have been developed to study the effect of the time-dependent constitutive properties of fiber-reinforced polymer matrix composites, considered for use in orbiting precision segmented reflectors, on the overall deformations. The 3- and 5-layer beam models include layers representing the face sheets, the core, and the adhesive. The static elastic deformation response of the sandwich beam models to a midspan point load is studied using the principle of stationary potential energy. In addition to quantitative conclusions, several assumptions are discussed which simplify the analysis for the case of more complicated material models. It is shown that the simple three-layer model is sufficient in many situations.
A model for proton-irradiated GaAs solar cells
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Walker, G. H.; Outlaw, R. A.; Stock, L. V.
1982-01-01
A simple model for proton radiation damage in GaAs heteroface solar cells is developed. The model includes the effects of spatial nonuniformity of low energy proton damage. Agreement between the model and experimental proton damage data for GaAs heteroface solar cells is satisfactory. An extension of the model to include angular isotropy, as is appropriate for protons in space, is shown to result in significantly less cell damage than for normal proton incidence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellors, R J
The Comprehensive Nuclear Test Ban Treaty (CTBT) includes provisions for an on-site inspection (OSI), which allows the use of specific techniques to detect underground anomalies including cavities and rubble zones. One permitted technique is active seismic surveys such as seismic refraction or reflection. The purpose of this report is to conduct some simple modeling to evaluate the potential use of seismic reflection in detecting cavities and to test the use of open-source software in modeling possible scenarios. It should be noted that OSI inspections are conducted under specific constraints regarding duration and logistics. These constraints are likely to significantly impact active seismic surveying, as a seismic survey typically requires considerable equipment, effort, and expertise. For the purposes of this study, which is a first-order feasibility study, these issues will not be considered. This report provides a brief description of the seismic reflection method along with some commonly used software packages. This is followed by an outline of a simple processing stream based on a synthetic model, along with results from a set of models representing underground cavities. A set of scripts used to generate the models is presented in an appendix. We do not consider detection of underground facilities in this work, and the geologic setting used in these tests is an extremely simple one.
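The basic observable in such a reflection survey is the travel-time moveout hyperbola. A sketch for a horizontal reflector, with assumed velocity and depths that are unrelated to the report's synthetic models:

```python
import math

# Two-way reflection travel time from a horizontal reflector at depth d,
# recorded at source-receiver offset x in a medium with velocity v:
#     t(x) = sqrt(t0^2 + (x/v)^2),   t0 = 2*d/v.
# A shallow target (e.g. a cavity roof) produces an early, strongly curved
# hyperbola. All numbers here are illustrative assumptions.

def travel_time(x, d, v):
    t0 = 2.0 * d / v
    return math.sqrt(t0 ** 2 + (x / v) ** 2)

v = 2000.0                       # assumed P-wave velocity, m/s
for d in (25.0, 100.0):          # shallow target vs deeper interface
    times = [travel_time(x, d, v) for x in (0.0, 50.0, 100.0)]
    print(d, [round(t, 4) for t in times])
```

The relative moveout (curvature of the hyperbola across offsets) is much larger for the shallow reflector, which is one reason offset geometry matters when targeting near-surface cavities.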
On determinant representations of scalar products and form factors in the SoV approach: the XXX case
NASA Astrophysics Data System (ADS)
Kitanine, N.; Maillet, J. M.; Niccoli, G.; Terras, V.
2016-03-01
In the present article we study the form factors of quantum integrable lattice models solvable by the separation of variables (SoV) method. It was recently shown that these models admit universal determinant representations for the scalar products of the so-called separate states (a class which includes, in particular, all the eigenstates of the transfer matrix). These results make it possible to obtain simple expressions for the matrix elements of local operators (form factors). However, these representations have been obtained up to now only for the completely inhomogeneous versions of the lattice models considered. In this article we give a simple algebraic procedure to rewrite the scalar products (and hence the form factors) for the SoV-related models as Izergin- or Slavnov-type determinants. This new form leads to simple expressions for the form factors in the homogeneous and thermodynamic limits. To make the presentation of our method clear, we have chosen to explain it first for the simple case of the XXX Heisenberg chain with anti-periodic boundary conditions. We would nevertheless like to stress that the approach presented in this article applies as well to a wide range of models solved in the SoV framework.
WHAEM: PROGRAM DOCUMENTATION FOR THE WELLHEAD ANALYTIC ELEMENT MODEL
The Wellhead Analytic Element Model (WhAEM) demonstrates a new technique for the definition of time-of-travel capture zones in relatively simple geohydrologic settings. he WhAEM package includes an analytic element model that uses superposition of (many) analytic solutions to gen...
Disordered Supersolids in the Extended Bose-Hubbard Model
Lin, Fei; Maier, T. A.; Scarola, V. W.
2017-10-06
The extended Bose-Hubbard model captures the essential properties of a wide variety of physical systems including ultracold atoms and molecules in optical lattices, Josephson junction arrays, and certain narrow band superconductors. It exhibits a rich phase diagram including a supersolid phase where a lattice solid coexists with a superfluid. We use quantum Monte Carlo to study the supersolid part of the phase diagram of the extended Bose-Hubbard model on the simple cubic lattice. We add disorder to the extended Bose-Hubbard model and find that the maximum critical temperature for the supersolid phase tends to be suppressed by disorder. But we also find a narrow parameter window in which the supersolid critical temperature is enhanced by disorder. Our results show that supersolids survive a moderate amount of spatial disorder and thermal fluctuations in the simple cubic lattice.
A Course for All Students: Foundations of Modern Engineering
ERIC Educational Resources Information Center
Best, Charles L.
1971-01-01
Describes a course for non-engineering students at Lafayette College which includes the design process in a project. Also included are the study of modeling, optimization, simulation, computer application, and simple feedback controls. (Author/TS)
Simple model to estimate the contribution of atmospheric CO2 to the Earth's greenhouse effect
NASA Astrophysics Data System (ADS)
Wilson, Derrek J.; Gea-Banacloche, Julio
2012-04-01
We show how the CO2 contribution to the Earth's greenhouse effect can be estimated from relatively simple physical considerations and readily available spectroscopic data. In particular, we present a calculation of the "climate sensitivity" (that is, the increase in temperature caused by a doubling of the concentration of CO2) in the absence of feedbacks. Our treatment highlights the important role played by the frequency dependence of the CO2 absorption spectrum. For pedagogical purposes, we provide two simple models to visualize different ways in which the atmosphere might return infrared radiation back to the Earth. The more physically realistic model, based on the Schwarzschild radiative transfer equations, uses as input an approximate form of the atmosphere's temperature profile, and thus includes implicitly the effect of heat transfer mechanisms other than radiation.
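The crudest of such pedagogical pictures is the textbook single-layer greenhouse, in which one infrared-opaque layer returns radiation to the surface. It is far simpler than the paper's Schwarzschild-based model, but it fixes the orders of magnitude using only standard nominal values for the solar constant and planetary albedo:

```python
# Textbook single-layer greenhouse sketch: the atmosphere is transparent
# to sunlight but absorbs all outgoing infrared, re-emitting half of it
# back toward the surface. S0 and the albedo are standard nominal values.

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0         # solar constant, W m^-2
ALBEDO = 0.30       # planetary albedo

# Effective emission temperature with no atmosphere:
#   sigma * Te^4 = S0 * (1 - albedo) / 4
Te = ((S0 * (1 - ALBEDO)) / (4 * SIGMA)) ** 0.25

# With one fully absorbing layer, the surface receives the absorbed solar
# flux plus the layer's downward emission, so sigma*Ts^4 = 2*sigma*Te^4:
Ts = 2 ** 0.25 * Te

print(round(Te), round(Ts))   # → 255 303
```

The jump from 255 K to roughly 303 K overshoots the observed mean surface temperature of about 288 K, which is exactly why frequency-dependent absorption and a realistic temperature profile, as in the paper, are needed.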
A simple model clarifies the complicated relationships of complex networks
Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi
2014-01-01
Real-world networks such as the Internet and WWW have many common traits. Until now, hundreds of models have been proposed to characterize these traits for understanding the networks. Because different models used very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra small-world, Delta-distribution, compact, fractal, regular, and random networks. Moreover, by revising the proposed model, community-structure networks are generated. This model and its revised versions illustrate the complicated relationships of complex networks. The model brings a new universal perspective to the understanding of complex networks and provides a universal method to model complex networks from the viewpoint of optimisation. PMID:25160506
The Dairy Greenhouse Gas Emission Model: Reference Manual
USDA-ARS?s Scientific Manuscript database
The Dairy Greenhouse Gas Model (DairyGHG) is a software tool for estimating the greenhouse gas emissions and carbon footprint of dairy production systems. A relatively simple process-based model is used to predict the primary greenhouse gas emissions, which include the net emission of carbon dioxide...
ERIC Educational Resources Information Center
Wood, Gordon W.
1975-01-01
Describes exercises using simple ball and stick models which students with no chemistry background can solve in the context of the original discovery. Examples include the tartaric acid and benzene problems. (GS)
Relativistic Corrections to the Bohr Model of the Atom
ERIC Educational Resources Information Center
Kraft, David W.
1974-01-01
Presents a simple means for extending the Bohr model to include relativistic corrections using a derivation similar to that for the non-relativistic case, except that the relativistic expressions for mass and kinetic energy are employed. (Author/GS)
NASA Astrophysics Data System (ADS)
Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.
2015-11-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.
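The first of the two statistical methods, simple averaging weighted by the aggregate score, can be sketched in a few lines. The Gaussian weight function and the numbers below are illustrative assumptions, not the study's scoring rule or its ensemble values:

```python
import math

# Score-weighted ensemble averaging (sketch): each run's prediction is
# weighted by a decreasing function of its model-data misfit, so runs
# that fit the calibration data poorly contribute little to the mean.
# The Gaussian weight and all numbers are illustrative.

def weighted_mean(predictions, misfits, sigma=1.0):
    weights = [math.exp(-0.5 * (m / sigma) ** 2) for m in misfits]
    return sum(w * p for w, p in zip(weights, predictions)) / sum(weights)

# Hypothetical per-run sea-level-rise contributions (m) and misfit scores.
slr = [3.2, 4.1, 2.7, 5.0]
misfit = [0.5, 2.0, 0.6, 3.0]
print(round(weighted_mean(slr, misfit), 3))
```

The result is pulled toward the well-fitting runs (misfits 0.5 and 0.6) and largely ignores the poorly fitting ones, which is the behavior that lets this simple scheme approximate the more formal Bayesian calibration in the large-ensemble case.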
Smart Swarms of Bacteria-Inspired Agents with Performance Adaptable Interactions
Shklarsh, Adi; Ariel, Gil; Schneidman, Elad; Ben-Jacob, Eshel
2011-01-01
Collective navigation and swarming have been studied in animal groups, such as fish schools, bird flocks, bacteria, and slime molds. Computer modeling has shown that collective behavior of simple agents can result from simple interactions between the agents, which include short range repulsion, intermediate range alignment, and long range attraction. Here we study collective navigation of bacteria-inspired smart agents in complex terrains, with adaptive interactions that depend on performance. More specifically, each agent adjusts its interactions with the other agents according to its local environment – by decreasing the peers' influence while navigating in a beneficial direction, and increasing it otherwise. We show that inclusion of such performance dependent adaptable interactions significantly improves the collective swarming performance, leading to highly efficient navigation, especially in complex terrains. Notably, to afford such adaptable interactions, each modeled agent requires only simple computational capabilities with short-term memory, which can easily be implemented in simple swarming robots. PMID:21980274
Modeling Smoke Plume-Rise and Dispersion from Southern United States Prescribed Burns with Daysmoke
G L Achtemeier; S L Goodrick; Y Liu; F Garcia-Menendez; Y Hu; M. Odman
2011-01-01
We present Daysmoke, an empirical-statistical plume rise and dispersion model for simulating smoke from prescribed burns. Prescribed fires are characterized by complex plume structure including multiple-core updrafts which makes modeling with simple plume models difficult. Daysmoke accounts for plume structure in a three-dimensional veering/sheering atmospheric...
A simple model of the effect of ocean ventilation on ocean heat uptake
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nadiga, Balasubramanya T.; Urban, Nathan Mark
Presentation includes slides on Earth System Models vs. Simple Climate Models; A Popular SCM: Energy Balance Model of Anomalies; On calibrating against one ESM experiment, the SCM correctly captures that ESM's surface warming response with other forcings; Multi-Model Analysis: Multiple ESMs, Single SCM; Posterior Distributions of ECS; However In Excess of 90% of TOA Energy Imbalance is Sequestered in the World Oceans; Heat Storage in the Two Layer Model; Including TOA Rad. Imbalance and Ocean Heat in Calibration Improves Repr., but Significant Errors Persist; Improved Vertical Resolution Does Not Fix Problem; A Series of Expts. Confirms That Anomaly-Diffusing Models Cannot Properly Represent Ocean Heat Uptake; Physics of the Thermocline; Outcropping Isopycnals and Horizontally-Averaged Layers; Local interactions between outcropping isopycnals leads to non-local interactions between horizontally-averaged layers; Both Surface Warming and Ocean Heat are Well Represented With Just 4 Layers; A Series of Expts. Confirms That When Non-Local Interactions are Allowed, the SCMs Can Represent Both Surface Warming and Ocean Heat Uptake; and Summary and Conclusions.
CDP++.Italian: Modelling Sublexical and Supralexical Inconsistency in a Shallow Orthography
Perry, Conrad; Ziegler, Johannes C.; Zorzi, Marco
2014-01-01
Most models of reading aloud have been constructed to explain data in relatively complex orthographies like English and French. Here, we created an Italian version of the Connectionist Dual Process Model of Reading Aloud (CDP++) to examine the extent to which the model could predict data in a language which has relatively simple orthography-phonology relationships but is relatively complex at a suprasegmental (word stress) level. We show that the model exhibits good quantitative performance and accounts for key phenomena observed in naming studies, including some apparently contradictory findings. These effects include stress regularity and stress consistency, both of which have been especially important in studies of word recognition and reading aloud in Italian. Overall, the results of the model compare favourably to an alternative connectionist model that can learn non-linear spelling-to-sound mappings. This suggests that CDP++ is currently the leading computational model of reading aloud in Italian, and that its simple linear learning mechanism adequately captures the statistical regularities of the spelling-to-sound mapping both at the segmental and supra-segmental levels. PMID:24740261
Mathematical Modeling for Scrub Typhus and Its Implications for Disease Control.
Min, Kyung Duk; Cho, Sung Il
2018-03-19
The incidence rate of scrub typhus has been increasing in the Republic of Korea. Previous studies have suggested that this trend may have resulted from the effects of climate change on the transmission dynamics among vectors and hosts, but a clear explanation of the process is still lacking. In this study, we applied mathematical models to explore the potential factors that influence the epidemiology of tsutsugamushi disease. We developed mathematical models of ordinary differential equations including human, rodent, and mite groups. Two models, one simple and one complex, were developed, and all parameters employed in the models were adopted from previous articles that represent epidemiological situations in the Republic of Korea. The simulation results showed that the force of infection at the equilibrium state under the simple model was 0.236 (per 100,000 person-months), and that in the complex model was 26.796 (per 100,000 person-months). Sensitivity analyses indicated that the most influential parameters were the rodent and mite populations and the contact rate between them for the simple model, and trans-ovarian transmission for the complex model. In both models, the contact rate between humans and mites is more influential than the mortality rates of the rodent and mite groups. The results indicate that the effect of controlling either rodents or mites could be limited, and that reducing the contact rate between humans and mites is a more practical and effective strategy. However, the current level of control would be insufficient relative to the growing mite population. © 2018 The Korean Academy of Medical Sciences.
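As an illustration of the model class described above (not the authors' equations; all rates below are hypothetical), a minimal susceptible/infected sketch coupling human, rodent, and mite groups can be integrated with explicit Euler steps:

```python
def simulate(beta_mh=0.002, beta_rm=0.05, beta_mr=0.05,
             gamma_h=0.1, mu_m=0.02, dt=0.1, steps=5000):
    """Toy human/rodent/mite transmission sketch (hypothetical rates)."""
    sh, ih = 0.999, 0.001   # human susceptible/infected fractions
    sr, ir = 0.99, 0.01     # rodent fractions
    sm, im = 0.99, 0.01     # mite fractions
    for _ in range(steps):
        foi_h = beta_mh * sh * im   # force of infection on humans (mite bites)
        foi_r = beta_rm * sr * im   # infected mites infect rodents
        foi_m = beta_mr * sm * ir   # feeding on infected rodents infects mites
        sh, ih = sh - foi_h * dt, ih + (foi_h - gamma_h * ih) * dt
        sr, ir = sr - foi_r * dt, ir + foi_r * dt
        sm, im = sm - foi_m * dt, im + (foi_m - mu_m * im) * dt
    return ih, ir, im
```

The beta_* contact-rate parameters are the levers the sensitivity analysis in the abstract refers to; trans-ovarian transmission would add a direct mite-to-mite term not sketched here.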
Simple Climate Model Evaluation Using Impulse Response Tests
NASA Astrophysics Data System (ADS)
Schwarber, A.; Hartin, C.; Smith, S. J.
2017-12-01
Simple climate models (SCMs) are central tools used to incorporate climate responses into human-Earth system modeling. SCMs are computationally inexpensive, making them an ideal tool for a variety of analyses, including consideration of uncertainty. Despite their wide use, many SCMs lack rigorous testing of their fundamental responses to perturbations. Here, following recommendations of a recent National Academy of Sciences report, we compare several SCMs (Hector-deoclim, MAGICC 5.3, MAGICC 6.0, and the IPCC AR5 impulse response function) to diagnose model behavior and understand the fundamental system responses within each model. We conduct stylized perturbations (emissions and forcing/concentration) of three different chemical species: CO2, CH4, and BC. We find that all four models respond similarly in terms of overall shape; however, there are important differences in the timing and magnitude of the responses. For example, the response to a BC pulse differs over the first 20 years after the pulse among the models, a finding that is due to differences in model structure. Such perturbation experiments are difficult to conduct in complex models due to internal model noise, making a direct comparison with simple models challenging. We can, however, compare the simplified model response from a 4xCO2 step experiment to the same stylized experiment carried out by CMIP5 models, thereby testing the ability of SCMs to emulate complex model results. This work allows an assessment of how well current understanding of Earth system responses is incorporated into multi-model frameworks by way of simple climate models.
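The object being compared in such tests is the step or pulse response of a low-order energy balance model. A minimal two-layer sketch (parameter values illustrative, in the style of common two-layer calibrations; not taken from any of the models named above):

```python
def two_layer_step(F=7.4, lam=1.3, gamma=0.7, C=8.0, C0=100.0,
                   dt=0.1, years=150):
    """Surface (T) and deep-ocean (T0) temperature response to an abrupt
    forcing step F (roughly a 4xCO2-sized step); hypothetical parameters."""
    T, T0, out = 0.0, 0.0, []
    for _ in range(int(years / dt)):
        dT = (F - lam * T - gamma * (T - T0)) / C   # surface energy balance
        dT0 = gamma * (T - T0) / C0                 # slow deep-ocean uptake
        T, T0 = T + dT * dt, T0 + dT0 * dt
        out.append(T)
    return out
```

Equilibrium warming is F/lam; the split between the fast surface timescale and the slow ocean timescale is exactly what the stylized step and pulse experiments above are designed to expose.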
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-09-01
This document presents a modeling and control study of the Fluid Bed Gasification (FBG) unit at the Morgantown Energy Technology Center (METC). The work is performed under contract no. DE-FG21-94MC31384. The purpose of this study is to generate a simple FBG model from process data, and then use the model to suggest an improved control scheme which will improve operation of the gasifier. The work first develops a simple linear model of the gasifier, then suggests an improved gasifier pressure and MGCR control configuration, and finally suggests the use of a multivariable control strategy for the gasifier.
Models of globular proteins in aqueous solutions
NASA Astrophysics Data System (ADS)
Wentzel, Nathaniel James
Protein crystallization is a continuing area of research. Currently, there is no universal theory for the conditions required to crystallize proteins. A better understanding of protein crystallization will be helpful in determining protein structure and preventing and treating certain diseases. In this thesis, we will extend the understanding of globular proteins in aqueous solutions by analyzing various models for protein interactions. Experiments have shown that the liquid-liquid phase separation curves for lysozyme in solution with salt depend on salt type and salt concentration. We analyze a simple square well model for this system, whose well depth depends on salt type and salt concentration, to determine the phase coexistence surfaces from experimental data. The surfaces, calculated from a single Monte Carlo simulation and a simple scaling argument, are shown as a function of temperature, salt concentration, and protein concentration for two typical salts. Urate oxidase from Aspergillus flavus is a protein used for studying the effects of polymers on the crystallization of large proteins. Experiments have determined some aspects of the phase diagram. We use Monte Carlo techniques and perturbation theory to predict the phase diagram for a model of urate oxidase in solution with PEG. The model used includes an electrostatic interaction, van der Waals attraction, and a polymer-induced depletion interaction. The results agree quantitatively with experiments. Anisotropy plays a role in globular protein interactions, including the formation of hemoglobin fibers in sickle cell disease. Also, the solvent conditions have been shown to play a strong role in the phase behavior of some aqueous protein solutions. Each has previously been treated separately in theoretical studies. Here we propose and analyze a simple, combined model that treats both anisotropy and solvent effects.
We find that this model qualitatively explains some phase behavior, including the existence of a lower critical point under certain conditions.
Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J; Arruda-Olson, Adelaide M
2017-06-01
Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm with billing code algorithms, using ankle-brachial index test results as the gold standard. We compared the performance of the NLP algorithm to (1) results of gold standard ankle-brachial index; (2) previously validated algorithms based on relevant International Classification of Diseases, Ninth Revision diagnostic codes (simple model); and (3) a combination of International Classification of Diseases, Ninth Revision codes with procedural codes (full model). A dataset of 1569 patients with PAD and controls was randomly divided into training (n = 935) and testing (n = 634) subsets. We iteratively refined the NLP algorithm in the training set including narrative note sections, note types, and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP, 91.8%; full model, 81.8%; simple model, 83%; P < .001), positive predictive value (NLP, 92.9%; full model, 74.3%; simple model, 79.9%; P < .001), and specificity (NLP, 92.5%; full model, 64.2%; simple model, 75.9%; P < .001). A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
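The comparison metrics reported above reduce to simple confusion-matrix arithmetic; a small helper makes the definitions explicit (the counts in the example are made up, not the study's data):

```python
def metrics(tp, fp, tn, fn):
    """Accuracy, PPV, specificity, sensitivity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    ppv = tp / (tp + fp)            # positive predictive value
    specificity = tn / (tn + fp)
    sensitivity = tp / (tp + fn)
    return accuracy, ppv, specificity, sensitivity
```

For example, metrics(90, 10, 80, 20) gives an accuracy of 0.85 and a PPV of 0.90, the same quantities used above to compare the NLP algorithm against the billing-code models.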
SIMPL Systems, or: Can We Design Cryptographic Hardware without Secret Key Information?
NASA Astrophysics Data System (ADS)
Rührmair, Ulrich
This paper discusses a new cryptographic primitive termed SIMPL system. Roughly speaking, a SIMPL system is a special type of Physical Unclonable Function (PUF) which possesses a binary description that allows its (slow) public simulation and prediction. Besides this public-key-like functionality, SIMPL systems have another advantage: No secret information is, or needs to be, contained in SIMPL systems in order to enable cryptographic protocols - neither in the form of a standard binary key, nor as secret information hidden in random, analog features, as is the case for PUFs. The cryptographic security of SIMPLs instead rests on (i) a physical assumption on their unclonability, and (ii) a computational assumption regarding the complexity of simulating their output. This novel property makes SIMPL systems potentially immune against many known hardware and software attacks, including malware, side channel, invasive, or modeling attacks.
Design Considerations for Heavily-Doped Cryogenic Schottky Diode Varactor Multipliers
NASA Technical Reports Server (NTRS)
Schlecht, E.; Maiwald, F.; Chattopadhyay, G.; Martin, S.; Mehdi, I.
2001-01-01
Diode modeling for Schottky varactor frequency multipliers above 500 GHz is presented with special emphasis placed on simple models and fitted equations for rapid circuit design. Temperature- and doping-dependent mobility, resistivity, and avalanche current multiplication and breakdown are presented. Next is a discussion of static junction current, including the effects of tunneling as well as thermionic emission. These results have been compared to detailed measurements made down to 80 K on diodes fabricated at JPL, followed by a discussion of the effect on multiplier efficiency. Finally, a simple model of current saturation in the undepleted active layer suitable for inclusion in harmonic balance simulators is derived.
Gastroschisis Simulation Model: Pre-surgical Management Technical Report.
Rosen, Orna; Angert, Robert M
2017-03-22
This technical report describes the creation of a gastroschisis model for a newborn. This is a simple, low-cost task trainer that provides the opportunity for Neonatology providers, including fellows, residents, nurse practitioners, physician assistants, and nurses, to practice the management of a baby with gastroschisis after birth and prior to surgery. Included is a suggested checklist with which the model can be employed. The details can be modified to suit different learning objectives.
Testing the Simple Biosphere model (SiB) using point micrometeorological and biophysical data
NASA Technical Reports Server (NTRS)
Sellers, P. J.; Dorman, J. L.
1987-01-01
The suitability of the Simple Biosphere (SiB) model of Sellers et al. (1986) for calculation of the surface fluxes for use within general circulation models is assessed. The structure of the SiB model is described, and its performance is evaluated in terms of its ability to realistically and accurately simulate biophysical processes over a number of test sites, including Ruthe (Germany), South Carolina (U.S.), and Central Wales (UK), for which point biophysical and micrometeorological data were available. The model produced simulations of the energy balances of barley, wheat, maize, and Norway Spruce sites over periods ranging from 1 to 40 days. Generally, it was found that the model reproduced time series of latent, sensible, and ground-heat fluxes and surface radiative temperature comparable with the available data.
Brenner, M H
1983-01-01
This paper discusses a first-stage analysis of the link of unemployment rates, as well as other economic, social and environmental health risk factors, to mortality rates in postwar Britain. The results presented represent part of an international study of the impact of economic change on mortality patterns in industrialized countries. The mortality patterns examined include total and infant mortality and (by cause) cardiovascular (total), cerebrovascular and heart disease, cirrhosis of the liver, and suicide, homicide and motor vehicle accidents. Among the most prominent factors that beneficially influence postwar mortality patterns in England/Wales and Scotland are economic growth and stability and health service availability. A principal detrimental factor to health is a high rate of unemployment. Additional factors that have an adverse influence on mortality rates are cigarette consumption and heavy alcohol use and unusually cold winter temperatures (especially in Scotland). The model of mortality that includes both economic changes and behavioral and environmental risk factors was successfully applied to infant mortality rates in the interwar period. In addition, the "simple" economic change model of mortality (using only economic indicators) was applied to other industrialized countries. In Canada, the United States, the United Kingdom, and Sweden, the simple version of the economic change model could be successfully applied only if the analysis was begun before World War II; for analysis beginning in the postwar era, the more sophisticated economic change model, including behavioral and environmental risk factors, was required. In France, West Germany, Italy, and Spain, by contrast, some success was achieved using the simple economic change model.
A practical model for pressure probe system response estimation (with review of existing models)
NASA Astrophysics Data System (ADS)
Hall, B. F.; Povey, T.
2018-04-01
The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.
Combustion of Nitramine Propellants
1983-03-01
through development of a comprehensive analytical model. The ultimate goals are to enable prediction of deflagration rate over a wide pressure range... superior in burn rate prediction, both simple models fail in correlating existing temperature-sensitivity data. (2) In the second part, a... auxiliary condition to enable independent burn rate prediction; improved melt phase model including decomposition-gas bubbles; model for far-field
ERIC Educational Resources Information Center
Yolles, Maurice
2005-01-01
Purpose: Seeks to explore the notion of organisational intelligence as a simple extension of the notion of the idea of collective intelligence. Design/methodology/approach: Discusses organisational intelligence using previous research, which includes the Purpose, Properties and Practice model of Dealtry, and the Viable Systems model. Findings: The…
Comparisons of CTH simulations with measured wave profiles for simple flyer plate experiments
Thomas, S. A.; Veeser, L. R.; Turley, W. D.; ...
2016-06-13
We conducted detailed 2-dimensional hydrodynamics calculations to assess the quality of simulations commonly used to design and analyze simple shock compression experiments. Such simple shock experiments also contain data where dynamic properties of materials are integrated together. We wished to assess how well the chosen computer hydrodynamic code could do at capturing both the simple parts of the experiments and the integral parts. We began with very simple shock experiments, in which we examined the effects of the equation of state and the compressional and tensile strength models. We increased complexity to include spallation in copper and iron and a solid-solid phase transformation in iron to assess the quality of the damage and phase transformation simulations. For experiments with a window, the response of both the sample and the window is integrated together, providing a good test of the material models. While CTH physics models are not perfect and do not reproduce all experimental details well, we find the models are useful; the simulations are adequate for understanding much of the dynamic process and for planning experiments. However, higher complexity in the simulations, such as adding in spall, led to greater differences between simulation and experiment. Lastly, this comparison of simulation to experiment may help guide future development of hydrodynamics codes so that they better capture the underlying physics.
A Model of Object-Identities and Values
1990-02-23
integrity constraints in its construct, which provides the natural integration of the logical database model and the object-oriented database model. ... portions are integrated by a simple commutative diagram of modeling functions. The formalism includes the expression of integrity constraints in its ... The Concept Model and Its Semantics ... Two Kinds of Predicates
USDA-ARS?s Scientific Manuscript database
A simple hourly infection model was used for a risk assessment of citrus black spot (CBS) caused by Phyllosticta citricarpa. The infection model contained a temperature-moisture response function and also included functions to simulate ascospore release and dispersal of pycnidiospores. A validatio...
Dynamics of Zika virus outbreaks: an overview of mathematical modeling approaches.
Wiratsudakul, Anuwat; Suparit, Parinya; Modchang, Charin
2018-01-01
The Zika virus was first discovered in 1947. It was neglected until a major outbreak occurred on Yap Island, Micronesia, in 2007. Teratogenic effects resulting in microcephaly in newborn infants are the greatest public health threat. In 2016, the Zika virus epidemic was declared a Public Health Emergency of International Concern (PHEIC). Consequently, mathematical models were constructed to explicitly elucidate related transmission dynamics. In this review article, two steps of journal article searching were performed. First, we attempted to identify mathematical models previously applied to the study of vector-borne diseases using the search terms "dynamics," "mathematical model," "modeling," and "vector-borne" together with the names of vector-borne diseases including chikungunya, dengue, malaria, West Nile, and Zika. Then the identified types of model were further investigated. Second, we narrowed down our survey to focus on only Zika virus research. The terms we searched for were "compartmental," "spatial," "metapopulation," "network," "individual-based," "agent-based" AND "Zika." All relevant studies were included regardless of the year of publication. We have collected research articles that were published before August 2017 based on our search criteria. In this publication survey, we explored the Google Scholar and PubMed databases. We found five basic model architectures previously applied to vector-borne virus studies, particularly in Zika virus simulations. These include compartmental, spatial, metapopulation, network, and individual-based models. We found that Zika models carried out for early epidemics were mostly fit into compartmental structures and were less complicated compared to the more recent ones. Simple models are still commonly used for the timely assessment of epidemics. Nevertheless, due to the availability of large-scale real-world data and computational power, recently there has been growing interest in more complex modeling frameworks.
Mathematical models are employed to explore and predict how an infectious disease spreads in the real world, evaluate the disease importation risk, and assess the effectiveness of intervention strategies. As the trends in modeling of infectious diseases have been shifting towards data-driven approaches, simple and complex models should be exploited differently. Simple models can be produced in a timely fashion to provide an estimation of the possible impacts. In contrast, complex models integrating real-world data require more time to develop but are far more realistic. The preparation of complicated modeling frameworks prior to the outbreaks is recommended, including the case of future Zika epidemic preparation.
NASA Astrophysics Data System (ADS)
Legates, David R.; Junghenn, Katherine T.
2018-04-01
Many local weather station networks that measure a number of meteorological variables (i.e., mesonetworks) have recently been established, with soil moisture occasionally being part of the suite of measured variables. These mesonetworks provide data from which detailed estimates of various hydrological parameters, such as precipitation and reference evapotranspiration, can be made which, when coupled with simple surface characteristics available from soil surveys, can be used to obtain estimates of soil moisture. The question is: can meteorological data be used with a simple hydrologic model to accurately estimate daily soil moisture at a mesonetwork site? Using a state-of-the-art mesonetwork that also includes soil moisture measurements across the US State of Delaware, the efficacy of a simple, modified Thornthwaite/Mather-based daily water balance model based on these mesonetwork observations to estimate site-specific soil moisture is determined. Results suggest that the model works reasonably well for most well-drained sites and provides good qualitative estimates of measured soil moisture, often near the accuracy of the soil moisture instrumentation. The model exhibits particular trouble in that it cannot properly simulate the slow drainage that occurs in poorly drained soils after heavy rains; interception loss resulting from grass not being kept short-cropped as expected also adversely affects the simulation. However, the model could be tuned to accommodate some non-standard siting characteristics.
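A minimal sketch of the Thornthwaite/Mather-style daily bucket referred to above (the study's modifications are not reproduced; the capacity and forcing values here are hypothetical, in mm):

```python
import math

def daily_water_balance(precip, pet, awc=100.0):
    """Daily soil-moisture bucket: recharge when P >= PET (capped at the
    available water capacity awc); Thornthwaite-Mather exponential drying
    otherwise."""
    s, series = awc, []   # start at field capacity
    for p, e in zip(precip, pet):
        if p >= e:
            s = min(s + (p - e), awc)      # surplus recharges the bucket
        else:
            s *= math.exp((p - e) / awc)   # accumulated-deficit drying curve
        series.append(s)
    return series
```

The exponential drying term is the feature that makes withdrawal harder as the soil dries; the slow-drainage failure mode noted in the abstract lies outside this simple bucket formulation.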
Indiana chronic disease management program risk stratification analysis.
Li, Jingjin; Holmes, Ann M; Rosenman, Marc B; Katz, Barry P; Downs, Stephen M; Murray, Michael D; Ackermann, Ronald T; Inui, Thomas S
2005-10-01
The objective of this study was to compare the ability of risk stratification models derived from administrative data to classify groups of patients for enrollment in a tailored chronic disease management program. This study included 19,548 Medicaid patients with chronic heart failure or diabetes in the Indiana Medicaid data warehouse during 2001 and 2002. To predict costs (total claims paid) in FY 2002, we considered candidate predictor variables available in FY 2001, including patient characteristics, the number and type of prescription medications, laboratory tests, pharmacy charges, and utilization of primary, specialty, inpatient, emergency department, nursing home, and home health care. We built prospective models to identify patients with different levels of expenditure. Model fit was assessed using R statistics, whereas discrimination was assessed using the weighted kappa statistic, predictive ratios, and the area under the receiver operating characteristic curve. We found that a simple least-squares regression model, in which logged total charges in FY 2002 were regressed on the log of total charges in FY 2001, the number of prescriptions filled in FY 2001, and the FY 2001 eligibility category, performed as well as more complex models. This simple 3-parameter model had an R of 0.30 and, in terms of classification efficiency, had a sensitivity of 0.57, a specificity of 0.90, an area under the receiver operator curve of 0.80, and a weighted kappa statistic of 0.51. This simple model based on readily available administrative data stratified Medicaid members according to predicted future utilization as well as more complicated models.
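A one-predictor version of the 3-parameter model described above can be fit in closed form (a sketch only; the study's model additionally includes prescription counts and the eligibility category):

```python
import math

def fit_log_cost_model(prior_charges, next_charges):
    """OLS of log(next-year charges) on log(prior-year charges)."""
    xs = [math.log(c + 1.0) for c in prior_charges]  # +1 guards against log(0)
    ys = [math.log(c + 1.0) for c in next_charges]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return my - slope * mx, slope   # (intercept, slope)
```

Fitting on the log scale, as the study does, keeps the heavily right-skewed cost distribution from letting a few very expensive patients dominate the fit.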
NASA Technical Reports Server (NTRS)
Randall, David A.
1990-01-01
A bulk planetary boundary layer (PBL) model was developed with a simple internal vertical structure and a simple second-order closure, designed for use as a PBL parameterization in a large-scale model. The model allows the mean fields to vary with height within the PBL, and so must address the vertical profiles of the turbulent fluxes, going beyond the usual mixed-layer assumption that the fluxes of conservative variables are linear with height. This is accomplished using the same convective mass flux approach that has also been used in cumulus parameterizations. The purpose is to show that such a mass flux model can include, in a single framework, the compensating subsidence concept, downgradient mixing, and well-mixed layers.
A simple, analytic 3-dimensional downburst model based on boundary layer stagnation flow
NASA Technical Reports Server (NTRS)
Oseguera, Rosa M.; Bowles, Roland L.
1988-01-01
A simple downburst model is developed for use in batch and real-time piloted simulation studies of guidance strategies for terminal area transport aircraft operations in wind shear conditions. The model represents an axisymmetric stagnation point flow, based on velocity profiles from the Terminal Area Simulation System (TASS) model developed by Proctor, and satisfies the mass continuity equation in cylindrical coordinates. Altitude dependence, including boundary layer effects near the ground, closely matches real-world measurements, as do the increase, peak, and decay of outflow and downflow with increasing distance from the downburst center. Equations for horizontal and vertical winds were derived and found to be infinitely differentiable, with no singular points in the flow field. In addition, a simple relationship exists among the ratio of maximum horizontal to vertical velocities, the downdraft radius, the depth of outflow, and the altitude of maximum outflow. In use, a microburst is modeled by specifying four characteristic parameters; the velocity components in the x, y, and z directions and the corresponding nine partial derivatives are then obtained easily from the velocity equations.
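The core of such a model is the ideal axisymmetric stagnation flow, which identically satisfies continuity in cylindrical coordinates. The sketch below shows only that core; the published model's boundary-layer and shaping functions are omitted, and lam is a hypothetical intensity scale:

```python
def stagnation_flow(r, z, lam=0.01):
    """Ideal axisymmetric stagnation-point flow: radial outflow u and
    downdraft w satisfying (1/r) d(r*u)/dr + dw/dz = 0."""
    u = 0.5 * lam * r    # horizontal (radial) wind, grows with radius
    w = -lam * z         # vertical wind, downdraft for z > 0
    return u, w
```

Substituting these profiles into the cylindrical divergence gives (1/r)(lam*r) - lam = 0, which is the mass-continuity property the abstract highlights.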
Agent Model Development for Assessing Climate-Induced Geopolitical Instability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boslough, Mark B.; Backus, George A.
2005-12-01
We present the initial stages of development of new agent-based computational methods to generate and test hypotheses about linkages between environmental change and international instability. This report summarizes the first year's effort of an originally proposed three-year Laboratory Directed Research and Development (LDRD) project. The preliminary work focused on a set of simple agent-based models and benefited from lessons learned in previous related projects and case studies of human response to climate change and environmental scarcity. Our approach was to define a qualitative model using extremely simple cellular agent models akin to Lovelock's Daisyworld and Schelling's segregation model. Such models do not require significant computing resources, and users can modify behavior rules to gain insights. One of the difficulties in agent-based modeling is finding the right balance between model simplicity and real-world representation. Our approach was to keep agent behaviors as simple as possible during the development stage (described herein) and to ground them with a realistic geospatial Earth system model in subsequent years. This work is directed toward incorporating projected climate data--including various CO2 scenarios from the Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report--and ultimately toward coupling a useful agent-based model to a general circulation model.
Climatic impact of Amazon deforestation - a mechanistic model study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ning Zeng; Dickinson, R.E.; Xubin Zeng
1996-04-01
Recent general circulation model (GCM) experiments suggest a drastic change in the regional climate, especially the hydrological cycle, after hypothesized Amazon basinwide deforestation. To facilitate the theoretical understanding of such a change, we develop an intermediate-level model for tropical climatology, including atmosphere-land-ocean interaction. The model consists of linearized steady-state primitive equations with simplified thermodynamics. A simple hydrological cycle is also included. Special attention has been paid to land-surface processes. It generally better simulates tropical climatology and the ENSO anomaly than do many of the previous simple models. The climatic impact of Amazon deforestation is studied in the context of this model. Model results show a much weakened Atlantic Walker-Hadley circulation as a result of the existence of a strong positive feedback loop in the atmospheric circulation system and the hydrological cycle. The regional climate is highly sensitive to albedo change and sensitive to evapotranspiration change. The pure dynamical effect of surface roughness length on convergence is small, but the surface flow anomaly displays intriguing features. Analysis of the thermodynamic equation reveals that the balance between convective heating, adiabatic cooling, and radiation largely determines the deforestation response. Studies of the consequences of hypothetical continuous deforestation suggest that the replacement of forest by desert may be able to sustain a dry climate. Scaling analysis motivated by our modeling efforts also helps to interpret the common results of many GCM simulations. When a simple mixed-layer ocean model is coupled with the atmospheric model, the results suggest a 1{degrees}C decrease in SST gradient across the equatorial Atlantic Ocean in response to Amazon deforestation. The magnitude depends on the coupling strength. 66 refs., 16 figs., 4 tabs.
Meningomyelocele Simulation Model: Pre-surgical Management–Technical Report
Angert, Robert M
2018-01-01
This technical report describes the creation of a myelomeningocele model of a newborn baby. This is a simple, low-cost, and easy-to-assemble model that allows the medical team to practice the delivery room management of a newborn with myelomeningocele. The report includes scenarios and a suggested checklist with which the model can be employed. PMID:29713576
Developing a Conceptual Architecture for a Generalized Agent-based Modeling Environment (GAME)
2008-03-01
Surveyed toolkits include Repast (Java, Python, C#, open source) and MASON (Multi-Agent Modeling Language, a Swarm extension). Repast (Recursive Porous Agent Simulation Toolkit) was designed for building agent-based models and simulations. Repast makes it easy for inexperienced users to build models by including a built-in simple model and providing interfaces through which menus and Python…
Development of mathematical models of environmental physiology
NASA Technical Reports Server (NTRS)
Stolwijk, J. A. J.; Mitchell, J. W.; Nadel, E. R.
1971-01-01
Selected articles concerned with mathematical or simulation models of human thermoregulation are presented. The articles presented include: (1) development and use of simulation models in medicine, (2) model of cardio-vascular adjustments during exercise, (3) effective temperature scale based on simple model of human physiological regulatory response, (4) behavioral approach to thermoregulatory set point during exercise, and (5) importance of skin temperature in sweat regulation.
A Simple Mathematical Model for Standard Model of Elementary Particles and Extension Thereof
NASA Astrophysics Data System (ADS)
Sinha, Ashok
2016-03-01
An algebraically (and geometrically) simple model representing the masses of the elementary particles in terms of the interaction (strong, weak, electromagnetic) constants is developed, including the Higgs bosons. The predicted Higgs boson mass is identical to that discovered by the LHC experimental programs, while the possibility of additional Higgs bosons (and their masses) is indicated. The model can be analyzed to explain and resolve many puzzles of particle physics and cosmology, including the neutrino masses and mixing; the origin of the proton mass and the mass difference between the proton and the neutron; the big bang and cosmological inflation; the Hubble expansion; etc. A novel interpretation of the model in terms of quaternions and rotation in the six-dimensional space of elementary particle interactions - or, equivalently, in six-dimensional spacetime - is presented. Interrelations among particle masses are derived theoretically. A new approach for defining the interaction parameters, leading to an elegant and symmetrical diagram, is delineated. Generalization of the model to include supersymmetry is illustrated without recourse to complex mathematical formulation and free from any ambiguity. This abstract represents some results of the author's independent theoretical research in particle physics, with possible connection to superstring theory. However, only very elementary mathematics and physics are used in the presentation.
Flood Risk and Asset Management
2011-06-15
Model cascade could include HEC-RAS, HR BREACH and Dynamic RFSM. Action: HRW to consider model coupling and advise DM. It was felt useful to… simple loss of life approach. WL can provide input and advise on USACE LIFESIM approaches. To enable comparison with HEC FRM approaches, it was…
Soulis, Konstantinos X; Valiantzas, John D; Ntoulas, Nikolaos; Kargas, George; Nektarios, Panayiotis A
2017-09-15
In spite of the well-known benefits of green roofs, their widespread adoption in the management practices of urban drainage systems requires adequate analytical and modelling tools. In the current study, green roof runoff modeling was accomplished by developing, testing, and jointly using a simple conceptual model and a physically based numerical simulation model utilizing the HYDRUS-1D software. This approach combines the advantages of the conceptual model, namely simplicity, low computational requirements, and ability to be easily integrated into decision support tools, with the capacity of the physically based simulation model to be easily transferred to conditions and locations other than those used for calibrating and validating it. The proposed approach was evaluated with an experimental dataset that included various green roof covers (either succulent plants - Sedum sediforme, or xerophytic plants - Origanum onites, or bare substrate without any vegetation) and two substrate depths (either 8 cm or 16 cm). Both the physically based and the conceptual models matched the observed hydrographs very closely. In general, the conceptual model performed better than the physically based simulation model, but the overall performance of both models was sufficient in most cases, as revealed by Nash-Sutcliffe efficiency values that were generally greater than 0.70. Finally, it was showcased how a physically based model and a simple conceptual model can be jointly used to extend the simple conceptual model to a wider set of conditions than the available experimental data and to support green roof design.
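The fit criterion reported above can be computed with the standard Nash-Sutcliffe efficiency formula. This is a generic implementation of that well-known index, not the authors' code:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe Efficiency (NSE) of a simulated hydrograph.

    NSE = 1 - SSE / SS_tot: 1.0 is a perfect match, while 0.0 means the
    model predicts runoff no better than the mean of the observations.
    """
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

A value above 0.70, as reported for both models here, is commonly read as a good hydrological fit.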
Simple model of inhibition of chain-branching combustion processes
NASA Astrophysics Data System (ADS)
Babushok, Valeri I.; Gubernov, Vladimir V.; Minaev, Sergei S.; Miroshnichenko, Taisia P.
2017-11-01
A simple kinetic model has been suggested to describe the inhibition and extinction of flame propagation in reaction systems with chain-branching reactions typical of hydrocarbon systems. The model is based on the generalised model of the combustion process with a chain-branching reaction, combined with the one-stage reaction describing the thermal mode of flame propagation, with the addition of inhibition reaction steps. Inhibitor addition suppresses the radical overshoot in the flame and changes the reaction mode from chain-branching to a thermal mode of flame propagation. As the inhibitor loading increases, a transition from the chain-branching mode to a straight-chain (non-branching) reaction is observed. The inhibition part of the model comprises a block of three reactions describing the influence of the inhibitor. Heat losses are incorporated into the model via Newtonian cooling. Flame extinction results from the decreased heat release of the inhibited reaction processes and the suppression of the radical overshoot, with a further decrease of the reaction rate due to the temperature decrease and mixture dilution. A comparison is presented between laminar premixed methane/air flames inhibited by potassium bicarbonate, modelled with a detailed gas-phase kinetic model, and results obtained with the suggested simple model. The calculations with the detailed kinetic model demonstrate the following modes of the combustion process: (1) flame propagation with chain-branching reaction (with radical overshoot; inhibitor addition decreases the radical overshoot down to the equilibrium level); (2) saturation of the chemical influence of the inhibitor; and (3) transition to a thermal mode of flame propagation (non-branching chain mode of reaction). The suggested simple kinetic model qualitatively reproduces these modes of flame propagation under inhibitor addition.
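The competition between branching and inhibitor scavenging can be caricatured by a linear toy model. This is an illustration only, not the paper's three-reaction inhibition block or its thermal coupling; all rate constants below are hypothetical.

```python
import numpy as np

def radical_history(k_branch, k_inhib, inhibitor, y0=1e-6, dt=1e-4, steps=2000):
    """Toy linear model: dY/dt = (k_branch - k_inhib*inhibitor) * Y.

    Radicals grow exponentially (chain-branching mode) when branching
    outpaces inhibitor scavenging, and decay (inhibited, thermally
    dominated mode) once the inhibitor loading exceeds the critical
    value k_branch / k_inhib.
    """
    rate = k_branch - k_inhib * inhibitor
    t = dt * np.arange(steps)
    return y0 * np.exp(rate * t)
```

The crossover at inhibitor = k_branch / k_inhib mimics, very crudely, the transition from radical overshoot to the non-branching mode described above.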
Modeling vibration response and damping of cables and cabled structures
NASA Astrophysics Data System (ADS)
Spak, Kaitlin S.; Agnes, Gregory S.; Inman, Daniel J.
2015-02-01
In an effort to model the vibration response of cabled structures, the distributed transfer function method is developed to model cables and a simple cabled structure. The model includes shear effects, tension, and hysteretic damping for modeling of helical stranded cables, and includes a method for modeling cable attachment points using both linear and rotational damping and stiffness. The damped cable model shows agreement with experimental data for four types of stranded cables, and the damped cabled beam model shows agreement with experimental data for the cables attached to a beam structure, as well as improvement over the distributed mass method for cabled structure modeling.
Landscape scale mapping of forest inventory data by nearest neighbor classification
Andrew Lister
2009-01-01
One of the goals of the Forest Service, U.S. Department of Agriculture's Forest Inventory and Analysis (FIA) program is large-area mapping. FIA scientists have tried many methods in the past, including geostatistical methods, linear modeling, nonlinear modeling, and simple choropleth and dot maps. Mapping methods that require individual model-based maps to be...
Aerothermal modeling program, phase 1
NASA Technical Reports Server (NTRS)
Srinivasan, R.; Reynolds, R.; Ball, I.; Berry, R.; Johnson, K.; Mongia, H.
1983-01-01
Aerothermal submodels used in analytical combustor models are analyzed. The models described include turbulence and scalar transport, gaseous fuel combustion, spray evaporation/combustion, soot formation and oxidation, and radiation. The computational scheme is discussed in relation to boundary conditions and convergence criteria. Also presented are the data base for benchmark-quality test cases and an analysis of simple flows.
A novel and simple model of the uptake of organic chemicals by vegetation from air and soil.
Hung, H; Mackay, D
1997-09-01
A novel and simple three-compartment fugacity model has been developed to predict the kinetics and equilibria of the uptake of organic chemicals in herbaceous agricultural plants at various times, including the time of harvest, using only readily available input data. The chemical concentration in each of the three plant compartments (leaf; stem, which includes fruits and seeds; and root) is expressed as a function of both time and the chemical concentrations in soil and air. The model was developed using the fugacity concept; however, the final expressions are presented in terms of concentrations in soil and air, equilibrium partition coefficients, and a set of transport and transformation half-lives. An illustrative application of the model is presented which describes the uptake of bromacil by a soybean plant under hydroponic conditions. The model, which is believed to give acceptably accurate predictions of the distribution of chemicals among plant tissues, air and soil, may be used for the assessment of exposure to, and risk from, contaminants consumed either directly from vegetation or indirectly in natural and agricultural food chains.
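The role of the lumped half-lives can be sketched with the textbook first-order approach to equilibrium for one compartment. This is a minimal illustration under that standard kinetic form; `c_eq` and `half_life` are placeholders, not the paper's bromacil parameters.

```python
import numpy as np

def compartment_uptake(c_eq, half_life, t):
    """First-order approach to equilibrium for one plant compartment:

        C(t) = C_eq * (1 - exp(-ln(2) * t / t_half))

    c_eq: equilibrium concentration set by soil/air levels and the
    partition coefficient; half_life: lumped transport/transformation
    half-life (same time units as t).
    """
    k = np.log(2.0) / half_life
    return c_eq * (1.0 - np.exp(-k * np.asarray(t, dtype=float)))
```

After one half-life the compartment is halfway to equilibrium; after many half-lives it sits at the partition-coefficient-determined equilibrium with soil and air.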
Analytically tractable climate-carbon cycle feedbacks under 21st century anthropogenic forcing
NASA Astrophysics Data System (ADS)
Lade, Steven J.; Donges, Jonathan F.; Fetzer, Ingo; Anderies, John M.; Beer, Christian; Cornell, Sarah E.; Gasser, Thomas; Norberg, Jon; Richardson, Katherine; Rockström, Johan; Steffen, Will
2018-05-01
Changes to climate-carbon cycle feedbacks may significantly affect the Earth system's response to greenhouse gas emissions. These feedbacks are usually analysed from numerical output of complex and arguably opaque Earth system models. Here, we construct a stylised global climate-carbon cycle model, test its output against comprehensive Earth system models, and investigate the strengths of its climate-carbon cycle feedbacks analytically. The analytical expressions we obtain aid understanding of carbon cycle feedbacks and the operation of the carbon cycle. Specific results include that different feedback formalisms measure fundamentally the same climate-carbon cycle processes; that temperature dependence of the solubility pump, biological pump, and CO2 solubility all contribute approximately equally to the ocean climate-carbon feedback; and that concentration-carbon feedbacks may be more sensitive to future climate change than climate-carbon feedbacks. Simple models such as that developed here also provide workbenches for simple but mechanistically based explorations of Earth system processes, such as interactions and feedbacks between the planetary boundaries, that are currently too uncertain to be included in comprehensive Earth system models.
Backward bifurcations, turning points and rich dynamics in simple disease models.
Zhang, Wenjing; Wahl, Lindi M; Yu, Pei
2016-10-01
In this paper, dynamical systems theory and bifurcation theory are applied to investigate the rich dynamical behaviours observed in three simple disease models. The 2- and 3-dimensional models we investigate have arisen in previous investigations of epidemiology, in-host disease, and autoimmunity. These closely related models display interesting dynamical behaviors including bistability, recurrence, and regular oscillations, each of which has possible clinical or public health implications. In this contribution we elucidate the key role of backward bifurcations in the parameter regimes leading to the behaviors of interest. We demonstrate that backward bifurcations with varied positions of turning points facilitate the appearance of Hopf bifurcations, and the varied dynamical behaviors are then determined by the properties of the Hopf bifurcation(s), including their location and direction. A Maple program developed earlier is implemented to determine the stability of limit cycles bifurcating from the Hopf bifurcation. Numerical simulations are presented to illustrate phenomena of interest such as bistability, recurrence and oscillation. We also discuss the physical motivations for the models and the clinical implications of the resulting dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watts, Christopher A.
In this dissertation the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas is investigated. To properly assess this possibility, data from both numerical simulations and experiment are analyzed. A large repertoire of nonlinear analysis techniques is used to identify low dimensional chaos in the data. These tools include phase portraits and Poincare sections, correlation dimension, the spectrum of Lyapunov exponents, and short-term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics. These are the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low dimensional chaos or simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional, with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
Modelling the complete operation of a free-piston shock tunnel for a low enthalpy condition
NASA Astrophysics Data System (ADS)
McGilvray, M.; Dann, A. G.; Jacobs, P. A.
2013-07-01
Only a limited number of free-stream flow properties can be measured at the nozzle exit of hypersonic impulse facilities. This poses challenges for experimenters when subsequently analysing experimental data obtained from these facilities. Typically in a reflected shock tunnel, a simple analysis that requires a small amount of computational resources is used to calculate quasi-steady gas properties. This simple analysis uses the initial fill conditions and experimental measurements in analytical calculations of each major flow process, with forward coupling and minor corrections to include processes that are not directly modeled. However, this simplistic approach leads to an unknown level of discrepancy from the true flow properties. To explore the simple modelling technique's accuracy, this paper details the use of transient one- and two-dimensional numerical simulations of a complete facility to obtain more refined free-stream flow properties from a free-piston reflected shock tunnel operating at low-enthalpy conditions. These calculations were verified by comparison with experimental data obtained from the facility. For the condition and facility investigated, the test conditions at the nozzle exit produced with the simple modelling technique agree with the time- and space-averaged results from the complete facility calculations to within the accuracy of the experimental measurements.
Predicting the Ability of Marine Mammal Populations to Compensate for Behavioral Disturbances
2015-09-30
approaches, including simple theoretical models as well as statistical analysis of data-rich conditions. Building on models developed for PCoD [2,3], we… conditions is population trajectory most likely to be affected (the central aim of PCoD). For the revised model presented here, we include a population… averaged condition individuals (here used as a proxy for individual health as defined in PCoD), and E is the quality of the environment in which the…
Modelling of capillary-driven flow for closed paper-based microfluidic channels
NASA Astrophysics Data System (ADS)
Songok, Joel; Toivakka, Martti
2017-06-01
Paper-based microfluidics is an emerging field focused on creating inexpensive devices, with simple fabrication methods for applications in various fields including healthcare, environmental monitoring and veterinary medicine. Understanding the flow of liquid is important in achieving consistent operation of the devices. This paper proposes capillary models to predict flow in paper-based microfluidic channels, which include a flow accelerating hydrophobic top cover. The models, which consider both non-absorbing and absorbing substrates, are in good agreement with the experimental results.
Meesters, Johannes A J; Koelmans, Albert A; Quik, Joris T K; Hendriks, A Jan; van de Meent, Dik
2014-05-20
Screening level models for environmental assessment of engineered nanoparticles (ENP) are not generally available. Here, we present SimpleBox4Nano (SB4N) as the first model of this type, assess its validity, and evaluate it by comparisons with a known material flow model. SB4N expresses ENP transport and concentrations in and across air, rain, surface waters, soil, and sediment, accounting for nanospecific processes such as aggregation, attachment, and dissolution. The model solves simultaneous mass balance equations (MBE) using simple matrix algebra. The MBEs link all concentrations and transfer processes using first-order rate constants for all processes known to be relevant for ENPs. The first-order rate constants are obtained from the literature. The output of SB4N is mass concentrations of ENPs as free dispersive species, heteroaggregates with natural colloids, and larger natural particles in each compartment in time and at steady state. Known scenario studies for Switzerland were used to demonstrate the impact of the transport processes included in SB4N on the prediction of environmental concentrations. We argue that SB4N-predicted environmental concentrations are useful as background concentrations in environmental risk assessment.
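The "simultaneous mass balance equations solved with simple matrix algebra" can be sketched generically: with first-order rate constants collected in a matrix A and constant emissions e, the steady state of dm/dt = A m + e is m = -A⁻¹ e. The three compartments and all rate constants below are hypothetical placeholders, not SB4N's actual compartments or parameter values.

```python
import numpy as np

# Hypothetical first-order rate constants (1/day) for a toy 3-compartment
# system (air -> water -> sediment, plus removal from each compartment).
k_aw, k_ws = 0.2, 0.05            # inter-compartment transfer
k_loss_a, k_loss_w, k_loss_s = 0.1, 0.02, 0.01  # removal processes

# Rate matrix of dm/dt = A m + emission (m = mass in each compartment).
A = np.array([
    [-(k_aw + k_loss_a), 0.0,                0.0],
    [k_aw,               -(k_ws + k_loss_w), 0.0],
    [0.0,                k_ws,               -k_loss_s],
])
emission = np.array([10.0, 0.0, 0.0])  # kg/day emitted into air

# Steady state: 0 = A m + e  =>  m_ss = -A^{-1} e
m_ss = np.linalg.solve(-A, emission)
```

The same linear structure extends to any number of compartments and processes, which is what keeps a screening-level model of this type computationally trivial.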
Predicting outcome in severe traumatic brain injury using a simple prognostic model.
Sobuwa, Simpiwe; Hartzenberg, Henry Benjamin; Geduld, Heike; Uys, Corrie
2014-06-17
Several studies have made it possible to predict outcome in severe traumatic brain injury (TBI), which is beneficial as an aid for clinical decision-making in the emergency setting. However, reliable predictive models are lacking for resource-limited prehospital settings such as those in developing countries like South Africa. The aim was to develop a simple predictive model for severe TBI using clinical variables in a South African prehospital setting. All consecutive patients admitted at two level-one centres in Cape Town, South Africa, for severe TBI were included. A binary logistic regression model was used, which included three predictor variables: oxygen saturation (SpO₂), Glasgow Coma Scale (GCS) and pupil reactivity. The Glasgow Outcome Scale was used to assess outcome on hospital discharge. A total of 74.4% of the outcomes were correctly predicted by the logistic regression model. The model demonstrated SpO₂ (p=0.019), GCS (p=0.001) and pupil reactivity (p=0.002) as independently significant predictors of outcome in severe TBI. Odds ratios of a good outcome were 3.148 (SpO₂ ≥ 90%), 5.108 (GCS 6 - 8) and 4.405 (pupils bilaterally reactive). This model is potentially useful for effective prediction of outcome in severe TBI.
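Given the reported odds ratios, predicted probabilities from such a three-predictor model take the usual logistic form. This is a sketch only: the abstract does not report the intercept, so the value below is an arbitrary placeholder chosen for illustration.

```python
import math

# Coefficients are ln(odds ratios) taken from the abstract; the
# intercept is NOT reported there and is a hypothetical placeholder.
B0 = -3.0                   # placeholder intercept
B_SPO2 = math.log(3.148)    # SpO2 >= 90%
B_GCS = math.log(5.108)     # GCS 6-8
B_PUPILS = math.log(4.405)  # pupils bilaterally reactive

def p_good_outcome(spo2_ok, gcs_6_8, pupils_reactive):
    """Probability of a good outcome (binary inputs: 1 = predictor present)."""
    logit = (B0 + B_SPO2 * spo2_ok + B_GCS * gcs_6_8
             + B_PUPILS * pupils_reactive)
    return 1.0 / (1.0 + math.exp(-logit))
```

Each present predictor multiplies the odds of a good outcome by its odds ratio, which is why the three favourable findings together raise the predicted probability so sharply.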
Simple dynamical models capturing the key features of the Central Pacific El Niño.
Chen, Nan; Majda, Andrew J
2016-10-18
The Central Pacific El Niño (CP El Niño) has been frequently observed in recent decades. The phenomenon is characterized by an anomalous warm sea surface temperature (SST) confined to the central Pacific and has different teleconnections from the traditional El Niño. Here, simple models are developed and shown to capture the key mechanisms of the CP El Niño. The starting model involves coupled atmosphere-ocean processes that are deterministic, linear, and stable. Then, systematic strategies are developed for incorporating several major mechanisms of the CP El Niño into the coupled system. First, simple nonlinear zonal advection with no ad hoc parameterization of the background SST gradient is introduced, creating coupled nonlinear advective modes of the SST. Second, motivated by the recent multidecadal strengthening of the easterly trade wind, a stochastic parameterization of the wind bursts, including a mean easterly trade wind anomaly, is coupled to the simple atmosphere-ocean processes. Effective stochastic noise in the wind burst model facilitates the intermittent occurrence of the CP El Niño with realistic amplitude and duration. In addition to the anomalous warm SST in the central Pacific, other major features of the CP El Niño, such as the rising branch of the anomalous Walker circulation shifting to the central Pacific and the eastern Pacific cooling with a shallow thermocline, are all captured by this simple coupled model. Importantly, the coupled model succeeds in simulating a series of CP El Niño events lasting five years, which resembles the two CP El Niño episodes during 1990-1995 and 2002-2006.
Oxygen Transport: A Simple Model for Study and Examination.
ERIC Educational Resources Information Center
Gaar, Kermit A., Jr.
1985-01-01
Describes an oxygen transport model computer program (written in Applesoft BASIC) which uses such variables as amount of time lapse from beginning of the simulation, arterial blood oxygen concentration, alveolar oxygen pressure, and venous blood oxygen concentration and pressure. Includes information on obtaining the program and its documentation.…
An Equilibrium Flow Model of a University Campus.
ERIC Educational Resources Information Center
Oliver, Robert M.; Hopkins, David S. P.
This paper develops a simple deterministic model that relates student admissions and enrollments to the final demand for educated students. It includes the effects of dropout rates and student-teacher ratios on student enrollments and faculty staffing levels. Certain technological requirements are assumed known and given. These, as well as the…
Volume shift and charge instability of simple-metal clusters
NASA Astrophysics Data System (ADS)
Brajczewska, M.; Vieira, A.; Fiolhais, C.; Perdew, J. P.
1996-12-01
Experiment indicates that small clusters show changes (mostly contractions) of the bond lengths with respect to bulk values. We use the stabilized jellium model to study the self-expansion and self-compression of spherical clusters (neutral or ionized) of simple metals. Results from Kohn-Sham density functional theory are presented for small clusters of Al and Na, including negatively charged ones. We also examine the stability of clusters with respect to charging.
Biomat development in soil treatment units for on-site wastewater treatment.
Winstanley, H F; Fowler, A C
2013-10-01
We provide a simple mathematical model of the bioremediation of contaminated wastewater leaching into the subsoil below a septic tank percolation system. The model comprises a description of the percolation system's flows, together with equations describing the growth of biomass and the uptake of an organic contaminant. Rendering the model dimensionless allows it to be partially solved, providing simple insights into the processes which control the efficacy of the system. In particular, we provide quantitative insight into the effect of a near-surface biomat on subsoil permeability; this can lead to trench ponding, and thus propagation of effluent further down the trench. Using the computed vadose zone flow field, the model can be simply extended to include reactive transport of other contaminants of interest.
NASA Technical Reports Server (NTRS)
Mueller, A. C.
1977-01-01
An atmospheric model developed by Jacchia, quite accurate but requiring a large amount of computer storage and execution time, was found to be ill-suited for the space shuttle onboard program. The development of a simple atmospheric density model to simulate the Jacchia model was studied. Required characteristics including variation with solar activity, diurnal variation, variation with geomagnetic activity, semiannual variation, and variation with height were met by the new atmospheric density model.
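One plausible shape for a model with these required characteristics is a base exponential decay with height, modulated by multiplicative factors for each variation. This functional form and all constants are assumptions for illustration, not the fitted Jacchia-approximation model.

```python
import math

def density(h_km, f_solar=1.0, f_diurnal=1.0, f_geomag=1.0,
            f_semiannual=1.0, rho0=3.6e-9, scale_height_km=60.0):
    """Illustrative upper-atmosphere density (kg/m^3): an exponential
    profile in height scaled by dimensionless factors for solar activity,
    diurnal, geomagnetic, and semiannual variation. All constants are
    placeholders chosen only to show the structure of such a model.
    """
    return (rho0 * math.exp(-h_km / scale_height_km)
            * f_solar * f_diurnal * f_geomag * f_semiannual)
```

The appeal of this structure onboard is that each variation is a cheap multiplicative correction rather than the full Jacchia evaluation.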
Validation of the Two-Layer Model for Correcting Clear Sky Reflectance Near Clouds
NASA Technical Reports Server (NTRS)
Wen, Guoyong; Marshak, Alexander; Evans, K. Frank; Vamal, Tamas
2014-01-01
A two-layer model was developed in our earlier studies to estimate the clear-sky reflectance enhancement near clouds. This simple model accounts for the radiative interaction between boundary layer clouds and the molecular layer above, the major contribution to the reflectance enhancement near clouds at short wavelengths. We use LES/SHDOM-simulated 3D radiation fields to validate the two-layer model for the reflectance enhancement at 0.47 micrometers. We find: (a) the simple model captures the viewing-angle dependence of the reflectance enhancement near clouds, suggesting the physics of the model is correct; and (b) the magnitude of the two-layer modeled enhancement agrees reasonably well with the "truth," with some expected underestimation. We further extend the model to include cloud-surface interaction using the Poisson model for broken clouds. We found that including cloud-surface interaction improves the correction, though it can introduce some overcorrection for large cloud albedo, large cloud optical depth, large cloud fraction, and large cloud aspect ratio. This overcorrection can be reduced by excluding scenes (10 km x 10 km) with large cloud fraction, for which the Poisson model is not designed. Further research is underway to account for the contribution of cloud-aerosol radiative interaction to the enhancement.
Linear regression metamodeling as a tool to summarize and present simulation model results.
Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M
2013-10-01
Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
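The metamodeling recipe (regress PSA outcomes on standardized inputs, read the intercept as the base-case outcome, and rank parameter influence by coefficient magnitude) can be sketched with synthetic data. The two-parameter "model" below is a stand-in for illustration, not the cancer cure model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # PSA cohorts

# Stand-in PSA: two standardized input parameters and an outcome that
# depends linearly on them (base case 50, parameter 0 four times as
# influential as parameter 1), plus simulation noise.
x = rng.standard_normal((n, 2))
y = 50.0 + 4.0 * x[:, 0] + 1.0 * x[:, 1] + rng.normal(0.0, 0.5, n)

# Metamodel: ordinary least squares of outcome on standardized inputs.
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef[0] ~ base-case outcome; |coef[1]| > |coef[2]| ranks sensitivity.
```

Because the inputs are standardized, the coefficients are directly comparable, which is what lets a single regression replace a battery of one-way sensitivity analyses.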
The fluid trampoline: droplets bouncing on a soap film
NASA Astrophysics Data System (ADS)
Bush, John; Gilet, Tristan
2008-11-01
We present the results of a combined experimental and theoretical investigation of droplets falling onto a horizontal soap film. Both static and vertically vibrated soap films are considered. A quasi-static description of the soap film shape yields a force-displacement relation that provides excellent agreement with experiment, and allows us to model the film as a nonlinear spring. This approach yields an accurate criterion for the transition between droplet bouncing and crossing on the static film; moreover, it allows us to rationalize the observed constancy of the contact time and scaling for the coefficient of restitution in the bouncing states. On the vibrating film, a variety of bouncing behaviours were observed, including simple and complex periodic states, multiperiodicity and chaos. A simple theoretical model is developed that captures the essential physics of the bouncing process, reproducing all observed bouncing states. Quantitative agreement between model and experiment is deduced for simple periodic modes, and qualitative agreement for more complex periodic and chaotic bouncing states.
Hewitt, Angela L.; Popa, Laurentiu S.; Pasalar, Siavash; Hendrix, Claudia M.
2011-01-01
Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of R²adj), followed by position (28 ± 24% of R²adj) and speed (11 ± 19% of R²adj). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower R²adj values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach.
These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics. PMID:21795616
ERIC Educational Resources Information Center
School Science Review, 1981
1981-01-01
Describes 13 activities, experiments and demonstrations, including the preparation of iron (III) chloride, simple alpha-helix model, investigating camping gas, redox reactions of some organic compounds, a liquid crystal thermometer, and the oxidation number concept in organic chemistry. (JN)
Nonlinear Modeling by Assembling Piecewise Linear Models
NASA Technical Reports Server (NTRS)
Yao, Weigang; Liou, Meng-Sing
2013-01-01
To preserve the nonlinearity of a full-order system over a parameter range of interest, we propose a simple modeling approach that assembles a set of piecewise local solutions, including first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
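A minimal sketch of the assembly idea, assuming a one-dimensional stand-in system (sin) rather than the airfoil flow solver: first-order Taylor models built at sampling states are blended with normalized Gaussian radial-basis-function weights.

```python
import numpy as np

# Nonlinear "full-order" system; a 1-D stand-in for the flow solver.
f = np.sin
df = np.cos

# Sampling states and their first-order Taylor (piecewise linear) models.
centers = np.linspace(0.0, 2.0 * np.pi, 9)
values, slopes = f(centers), df(centers)

def blended_model(x, width=0.5):
    """Assemble the local linear solutions with normalized Gaussian
    radial-basis-function weights."""
    w = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    w /= w.sum(axis=1, keepdims=True)          # weights sum to 1 at each x
    local = values[None, :] + slopes[None, :] * (x[:, None] - centers[None, :])
    return (w * local).sum(axis=1)

x = np.linspace(0.0, 2.0 * np.pi, 200)
max_err = float(np.max(np.abs(blended_model(x) - f(x))))
```

Because the weights vary smoothly with the state, the assembled model stays smooth while each local model remains valid only near its own sampling state.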
A simple electric circuit model for proton exchange membrane fuel cells
NASA Astrophysics Data System (ADS)
Lazarou, Stavros; Pyrgioti, Eleftheria; Alexandridis, Antonio T.
A simple and novel dynamic circuit model for a proton exchange membrane (PEM) fuel cell, suitable for the analysis and design of power systems, is presented. The model takes into account phenomena such as activation polarization, ohmic polarization, and the mass transport effect present in a PEM fuel cell. The proposed circuit model includes three resistors to adequately represent these phenomena; however, since connection or disconnection of an additional load is of crucial importance for PEM dynamic performance, the proposed model uses two saturable inductors accompanied by an ideal transformer to simulate the double layer charging effect during load step changes. To evaluate the effectiveness of the proposed model, its dynamic performance under load step changes is simulated. Experimental results from a commercial PEM fuel cell module that uses hydrogen from a pressurized cylinder at the anode and atmospheric oxygen at the cathode clearly verify the simulation results.
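For orientation only, the three loss mechanisms named above are often written as a static polarization curve in the Larminie-Dicks style (activation, ohmic, and mass-transport terms). This is not the paper's dynamic circuit model, and every parameter value below is made up.

```python
import numpy as np

# Illustrative static PEM polarization curve (not the paper's circuit model):
# cell voltage = open-circuit voltage minus activation, ohmic, and
# mass-transport (concentration) losses. All parameter values are invented.
def cell_voltage(i, E_oc=1.0, A=0.05, i0=0.04, R=2e-4, m=3e-5, n=8e-3):
    i = np.asarray(i, dtype=float)
    v_act = A * np.log(i / i0)        # activation polarization
    v_ohm = R * i                     # ohmic polarization
    v_conc = m * np.exp(n * i)        # mass transport effect
    return E_oc - v_act - v_ohm - v_conc

i = np.linspace(1.0, 1000.0, 500)     # current density (illustrative units)
v = cell_voltage(i)
```

The activation term dominates at low current, the ohmic term in the middle, and the exponential concentration term at high current, which is why the curve sags at both ends.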
The power to detect linkage in complex disease by means of simple LOD-score analyses.
Greenberg, D A; Abreu, P; Hodge, S E
1998-01-01
Maximum-likelihood analysis (via LOD score) provides the most powerful method for finding linkage when the mode of inheritance (MOI) is known. However, because one must assume an MOI, the application of LOD-score analysis to complex disease has been questioned. Although it is known that one can legitimately maximize the maximum LOD score with respect to genetic parameters, this approach raises three concerns: (1) multiple testing, (2) effect on power to detect linkage, and (3) adequacy of the approximate MOI for the true MOI. We evaluated the power of LOD scores to detect linkage when the true MOI was complex but a LOD score analysis assumed simple models. We simulated data from 14 different genetic models, including dominant and recessive at high (80%) and low (20%) penetrances, intermediate models, and several additive two-locus models. We calculated LOD scores by assuming two simple models, dominant and recessive, each with 50% penetrance, then took the higher of the two LOD scores as the raw test statistic and corrected for multiple tests. We call this test statistic "MMLS-C." We found that the ELODs for MMLS-C are >=80% of the ELOD under the true model when the ELOD for the true model is >=3. Similarly, the power to reach a given LOD score was usually >=80% that of the true model, when the power under the true model was >=60%. These results underscore that a critical factor in LOD-score analysis is the MOI at the linked locus, not that of the disease or trait per se. Thus, a limited set of simple genetic models in LOD-score analysis can work well in testing for linkage. PMID:9718328
pyhector: A Python interface for the simple climate model Hector
DOE Office of Scientific and Technical Information (OSTI.GOV)
N Willner, Sven; Hartin, Corinne; Gieseke, Robert
2017-04-01
Pyhector is a Python interface for the simple climate model Hector (Hartin et al. 2015), which is developed in C++. Simple climate models like Hector can, for instance, be used in the analysis of scenarios within integrated assessment models like GCAM, in the emulation of complex climate models, and in uncertainty analyses. Hector is an open-source, object-oriented, simple global climate carbon cycle model. Its carbon cycle consists of a one-pool atmosphere, three terrestrial pools which can be broken down into finer biomes or regions, and four carbon pools in the ocean component. The terrestrial carbon cycle includes primary production and respiration fluxes. The ocean carbon cycle circulates carbon via a simplified thermohaline circulation, calculating air-sea fluxes as well as the marine carbonate system (Hartin et al. 2016). The model input is time series of greenhouse gas emissions; as example scenarios, the Pyhector package contains the Representative Concentration Pathways (RCPs). These were developed to cover the range of baseline and mitigation emissions scenarios and are widely used in climate change research and model intercomparison projects. Using DataFrames from the Python library Pandas (McKinney 2010) as a data structure for the scenarios simplifies generating and adapting scenarios. Other parameters of the Hector model can easily be modified when running the model. Pyhector can be installed using pip from the Python Package Index. Source code and issue tracker are available in Pyhector's GitHub repository. Documentation is provided through Readthedocs. Usage examples are also contained in the repository as a Jupyter Notebook (Pérez and Granger 2007; Kluyver et al. 2016). Courtesy of the Mybinder project, the example Notebook can also be executed and modified without installing Pyhector locally.
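To illustrate what a reduced-form simple climate model does (emissions to concentration to forcing to temperature), here is a generic one-box sketch. It is not pyhector's API and not Hector's equations; the structure follows the standard textbook chain, and all parameter values are illustrative only.

```python
import math

def run_simple_climate(emissions_gtc, airborne_fraction=0.45,
                       lam=1.1, heat_capacity=8.0, dt=1.0):
    """Toy emissions -> CO2 -> forcing -> temperature chain.
    Generic reduced-form sketch; parameters are illustrative, not Hector's."""
    c0 = 278.0                                  # preindustrial CO2, ppm
    c, temp, out = c0, 0.0, []
    for e in emissions_gtc:
        c += airborne_fraction * e / 2.12       # ~2.12 GtC per ppm CO2
        forcing = 5.35 * math.log(c / c0)       # CO2 forcing, W/m^2 (Myhre et al.)
        temp += dt * (forcing - lam * temp) / heat_capacity  # one-box energy balance
        out.append((c, forcing, temp))
    return out

# Constant 10 GtC/yr of emissions for 100 years.
results = run_simple_climate([10.0] * 100)
```

Integrated assessment workflows use models of roughly this form (with far more pools and gases) because they run in milliseconds, which is what makes scenario sweeps and uncertainty analyses tractable.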
Spatial interactions in a modified Daisyworld model: Heat diffusivity and greenhouse effects
NASA Astrophysics Data System (ADS)
Alberti, T.; Primavera, L.; Vecchio, A.; Lepreti, F.; Carbone, V.
2015-11-01
In this work we investigate a modified version of the Daisyworld model, originally introduced by Lovelock and Watson to describe in a simple way the interactions between an Earth-like planet, its biosphere, and the incoming solar radiation. Here a spatial dependency on latitude is included, and both a variable heat diffusivity along latitudes and a simple greenhouse effect description are introduced in the model. We show that the spatial interactions between the variables of the system can locally stabilize the coexistence of the two vegetation types. The feedback on albedo is able to generate equilibrium solutions which can efficiently self-regulate the planet climate, even for values of the solar luminosity relatively far from the current Earth conditions.
A model of the plumes above basaltic fissure eruptions
NASA Astrophysics Data System (ADS)
Woods, Andrew W.
1993-06-01
A simple model of the ascent of the volatiles above basaltic fissure eruptions shows that atmospheric moisture may play an important role in injecting volatiles high into the atmosphere. As ambient water vapor is entrained and carried upwards by the plume, it decompresses and some condensation may occur. This causes the release of latent heat which heats up the air and thereby increases the buoyancy of the plume enabling it to ascend several kilometers higher than in a dry atmosphere. The height of such plumes also increases with the mass fraction of fine ash in the fountain. Although very simple, the model predictions are in accord with observations of plume heights during historical eruptions including the 1984 eruption of Mauna Loa.
BehavePlus fire modeling system, version 5.0: Design and Features
Faith Ann Heinsch; Patricia L. Andrews
2010-01-01
The BehavePlus fire modeling system is a computer program that is based on mathematical models that describe wildland fire behavior and effects and the fire environment. It is a flexible system that produces tables, graphs, and simple diagrams. It can be used for a host of fire management applications, including projecting the behavior of an ongoing fire, planning...
BehavePlus fire modeling system, version 4.0: User's Guide
Patricia L. Andrews; Collin D. Bevins; Robert C. Seli
2005-01-01
The BehavePlus fire modeling system is a program for personal computers that is a collection of mathematical models that describe fire and the fire environment. It is a flexible system that produces tables, graphs, and simple diagrams. It can be used for a multitude of fire management applications including projecting the behavior of an ongoing fire, planning...
Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H
2017-09-01
Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and a simple plasma retinol isotope ratio [(RIR), labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. 
The method also provides information about the contributions of absorptive and postabsorptive conversion to total bioefficacy if an additional sample is taken at 1 d. © 2017 American Society for Nutrition.
Dynamics of Zika virus outbreaks: an overview of mathematical modeling approaches
Wiratsudakul, Anuwat; Suparit, Parinya
2018-01-01
Background The Zika virus was first discovered in 1947. It was neglected until a major outbreak occurred on Yap Island, Micronesia, in 2007. Teratogenic effects resulting in microcephaly in newborn infants are the greatest public health threat. In 2016, the Zika virus epidemic was declared as a Public Health Emergency of International Concern (PHEIC). Consequently, mathematical models were constructed to explicitly elucidate related transmission dynamics. Survey Methodology In this review article, two steps of journal article searching were performed. First, we attempted to identify mathematical models previously applied to the study of vector-borne diseases using the search terms “dynamics,” “mathematical model,” “modeling,” and “vector-borne” together with the names of vector-borne diseases including chikungunya, dengue, malaria, West Nile, and Zika. Then the identified types of model were further investigated. Second, we narrowed down our survey to focus on only Zika virus research. The terms we searched for were “compartmental,” “spatial,” “metapopulation,” “network,” “individual-based,” “agent-based” AND “Zika.” All relevant studies were included regardless of the year of publication. We have collected research articles that were published before August 2017 based on our search criteria. In this publication survey, we explored the Google Scholar and PubMed databases. Results We found five basic model architectures previously applied to vector-borne virus studies, particularly in Zika virus simulations. These include compartmental, spatial, metapopulation, network, and individual-based models. We found that Zika models carried out for early epidemics were mostly fit into compartmental structures and were less complicated compared to the more recent ones. Simple models are still commonly used for the timely assessment of epidemics.
Nevertheless, due to the availability of large-scale real-world data and computational power, recently there has been growing interest in more complex modeling frameworks. Discussion Mathematical models are employed to explore and predict how an infectious disease spreads in the real world, evaluate the disease importation risk, and assess the effectiveness of intervention strategies. As the trends in modeling of infectious diseases have been shifting towards data-driven approaches, simple and complex models should be exploited differently. Simple models can be produced in a timely fashion to provide an estimation of the possible impacts. In contrast, complex models integrating real-world data require more time to develop but are far more realistic. The preparation of complicated modeling frameworks prior to the outbreaks is recommended, including the case of future Zika epidemic preparation. PMID:29593941
Anthropogenic heat flux: advisable spatial resolutions when input data are scarce
NASA Astrophysics Data System (ADS)
Gabey, A. M.; Grimmond, C. S. B.; Capel-Timms, I.
2018-02-01
Anthropogenic heat flux (QF) may be significant in cities, especially under low solar irradiance and at night. It is of interest to many practitioners including meteorologists, city planners and climatologists. QF estimates at fine temporal and spatial resolution can be derived from models that use varying amounts of empirical data. This study compares simple and detailed models in a European megacity (London) at 500 m spatial resolution. The simple model (LQF) uses spatially resolved population data and national energy statistics. The detailed model (GQF) additionally uses local energy, road network and workday population data. The Fractions Skill Score (FSS) and bias are used to rate the skill with which the simple model reproduces the spatial patterns and magnitudes of QF, and its sub-components, from the detailed model. LQF skill was consistently good across 90% of the city, away from the centre and major roads. The remaining 10% contained elevated emissions and "hot spots" representing 30-40% of the total city-wide energy. This structure was lost because it requires workday population, spatially resolved building energy consumption and/or road network data. Daily total building and traffic energy consumption estimates from national data were within ± 40% of local values. Progressively coarser spatial resolutions to 5 km improved skill for total QF, but important features (hot spots, transport network) were lost at all resolutions when residential population controlled spatial variations. The results demonstrate that simple QF models should be applied with conservative spatial resolution in cities that, like London, exhibit time-varying energy use patterns.
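The Fractions Skill Score used above can be computed as follows; this is a generic implementation of the Roberts and Lean (2008) definition applied to synthetic binary fields, not the study's code.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def neighborhood_fractions(binary, n):
    """Event fraction in an n x n window centred on each pixel."""
    pad = n // 2
    padded = np.pad(binary, pad, mode="constant")
    return sliding_window_view(padded, (n, n)).mean(axis=(2, 3))

def fss(model, obs, n=3, threshold=0.5):
    """Fractions Skill Score (Roberts and Lean 2008): 1 = perfect match."""
    fm = neighborhood_fractions((model >= threshold).astype(float), n)
    fo = neighborhood_fractions((obs >= threshold).astype(float), n)
    mse = np.mean((fm - fo) ** 2)
    mse_ref = np.mean(fm ** 2) + np.mean(fo ** 2)
    return 1.0 if mse_ref == 0 else 1.0 - mse / mse_ref

# Two 10 x 10 scenes with the same feature displaced by one pixel.
obs = np.zeros((10, 10)); obs[4:6, 4:6] = 1.0
model = np.zeros((10, 10)); model[5:7, 5:7] = 1.0
```

Comparing neighborhood fractions rather than raw pixels is what lets the score reward a model that places a hot spot slightly off-position, which matches the paper's use of FSS at progressively coarser resolutions.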
In response to the new, size-discriminate federal standards for Inhalable Particulate Matter, the Regional Lagrangian Model of Air Pollution (RELMAP) has been modified to include simple, linear parameterizations. As an initial step in the possible refinement, RELMAP has been subj...
Heat Transfer Modeling of Jet Vane Thrust Vector Control (TVC) Systems.
1987-12-01
Cost and complexity, to include materials, labor, design and fabrication. b. Effectiveness and ability to perform two- and three-axis control. c. ... SCRS contains the simple-chemical-reaction model of combustion, the theoretical basis of which is found in the book ...
The Factor Structure and Screening Utility of the Social Interaction Anxiety Scale
ERIC Educational Resources Information Center
Rodebaugh, Thomas L.; Woods, Carol M.; Heimberg, Richard G.; Liebowitz, Michael R.; Schneier, Franklin R.
2006-01-01
The widely used Social Interaction Anxiety Scale (SIAS; R. P. Mattick & J. C. Clarke, 1998) possesses favorable psychometric properties, but questions remain concerning its factor structure and item properties. Analyses included 445 people with social anxiety disorder and 1,689 undergraduates. Simple unifactorial models fit poorly, and models that…
A Study of a "Model of School Learning." Monograph Number 4.
ERIC Educational Resources Information Center
Carroll, John B.; Spearritt, Donald
A booklet of a programmed-instruction type was developed to obtain the measures needed to test Carroll's model of school learning, including ability, aptitude, quality of instruction, opportunity for learning, perseverance, and time criterion. Simple rules in an artificial foreign language were taught by means of the booklet to sixth-grade…
Comparison of heaving buoy and oscillating flap wave energy converters
NASA Astrophysics Data System (ADS)
Abu Bakar, Mohd Aftar; Green, David A.; Metcalfe, Andrew V.; Najafian, G.
2013-04-01
Waves offer an attractive source of renewable energy, with relatively low environmental impact, for communities reasonably close to the sea. Two types of simple wave energy converters (WEC), the heaving buoy WEC and the oscillating flap WEC, are studied. Both WECs are considered simple energy converters because they can be modelled, to a first approximation, as single degree of freedom linear dynamic systems. In this study, we estimate the response of both WECs to typical wave inputs; wave height for the buoy and corresponding wave surge for the flap, using spectral methods. A nonlinear model of the oscillating flap WEC that includes the drag force, modelled by the Morison equation, is also considered. The response to a surge input is estimated by discrete time simulation (DTS), using central difference approximations to derivatives. This is compared with the response of the linear model obtained by DTS and also validated using the spectral method. Bendat's nonlinear system identification (BNLSI) technique was used to analyze the nonlinear dynamic system, since spectral analysis is only suitable for linear dynamic systems. The effects of including the nonlinear term are quantified.
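A minimal discrete time simulation in the spirit of the abstract: a single degree of freedom oscillator stepped with central difference approximations, with a Morison-type quadratic drag term added to the linear dynamics. Parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# SDOF WEC sketch: m*x'' + c*x' + k*x = F(t) - cd*|x'|*x',
# integrated by the central-difference scheme mentioned in the abstract.
def simulate(force, m=1.0, c=0.2, k=4.0, cd=0.5, dt=0.01):
    n = len(force)
    x = np.zeros(n)
    for i in range(1, n - 1):
        v = (x[i] - x[i - 1]) / dt              # backward-difference velocity
        f_drag = -cd * v * abs(v)               # Morison-type quadratic drag
        acc = (force[i] + f_drag - c * v - k * x[i]) / m
        x[i + 1] = 2.0 * x[i] - x[i - 1] + dt * dt * acc
    return x

t = np.arange(0.0, 60.0, 0.01)
wave = np.sin(2.0 * t)                          # near-resonant surge input
x = simulate(wave)
```

Because the drag force depends on |velocity| times velocity, the steady amplitude is smaller than the linear prediction, which is exactly the nonlinear effect the authors quantify with BNLSI.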
Kostanyan, Artak E
2015-12-04
The ideal (the column outlet is directly connected to the column inlet) and non-ideal (includes the effects of extra-column dispersion) recycling equilibrium-cell models are used to simulate closed-loop recycling counter-current chromatography (CLR CCC). Simple chromatogram equations for the individual cycles and equations describing the transport and broadening of single peaks and complex chromatograms inside the recycling closed-loop column for ideal and non-ideal recycling models are presented. The extra-column dispersion is included in the theoretical analysis, by replacing the recycling system (connecting lines, pump and valving) by a cascade of Nec perfectly mixed cells. To evaluate extra-column contribution to band broadening, two limiting regimes of recycling are analyzed: plug-flow, Nec→∞, and maximum extra-column dispersion, Nec=1. Comparative analysis of ideal and non-ideal models has shown that when the volume of the recycling system is less than one percent of the column volume, the influence of the extra-column processes on the CLR CCC separation may be neglected. Copyright © 2015 Elsevier B.V. All rights reserved.
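The cell-model idea can be sketched numerically: feed a tracer impulse through a cascade of N perfectly mixed cells and check that the relative variance of the outlet curve approaches 1/N, the classic tanks-in-series broadening result. This is a generic sketch of the equilibrium-cell picture, not the authors' recycling model.

```python
import numpy as np

def cascade_response(n_cells, tau_total=1.0, dt=1e-3, t_max=5.0):
    """Outlet concentration after a unit tracer impulse passes through
    n perfectly mixed cells in series (total mean residence time tau_total).
    Simple explicit Euler time stepping."""
    tau = tau_total / n_cells
    steps = int(round(t_max / dt))
    c = np.zeros(n_cells)
    c[0] = 1.0 / tau                  # impulse raises the first cell by 1/tau
    out = np.empty(steps)
    for j in range(steps):
        out[j] = c[-1]
        inflow = np.concatenate(([0.0], c[:-1]))   # each cell feeds the next
        c += dt * (inflow - c) / tau
    return out

dt = 1e-3
t = np.arange(0.0, 5.0, dt)
curve = cascade_response(10)
area = np.sum(curve) * dt
mean = np.sum(t * curve) * dt / area
var = np.sum((t - mean) ** 2 * curve) * dt / area
```

With ten cells the outlet peak is already much narrower (relative variance 0.1) than a single mixed volume (relative variance 1), which is why modeling the extra-column system as Nec cells spans the two limiting regimes discussed in the abstract.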
Modeling procedures for handling qualities evaluation of flexible aircraft
NASA Technical Reports Server (NTRS)
Govindaraj, K. S.; Eulrich, B. J.; Chalk, C. R.
1981-01-01
This paper presents simplified modeling procedures to evaluate the impact of flexible modes and the unsteady aerodynamic effects on the handling qualities of Supersonic Cruise Aircraft (SCR). The modeling procedures involve obtaining reduced order transfer function models of SCR vehicles, including the important flexible mode responses and unsteady aerodynamic effects, and conversion of the transfer function models to time domain equations for use in simulations. The use of the modeling procedures is illustrated by a simple example.
Mechanisms of Neuronal Computation in Mammalian Visual Cortex
Priebe, Nicholas J.; Ferster, David
2012-01-01
Orientation selectivity in the primary visual cortex (V1) is a receptive field property that is at once simple enough to make it amenable to experimental and theoretical approaches and yet complex enough to represent a significant transformation in the representation of the visual image. As a result, V1 has become an area of choice for studying cortical computation and its underlying mechanisms. Here we consider the receptive field properties of the simple cells in cat V1—the cells that receive direct input from thalamic relay cells—and explore how these properties, many of which are highly nonlinear, arise. We have found that many receptive field properties of V1 simple cells fall directly out of Hubel and Wiesel’s feedforward model when the model incorporates realistic neuronal and synaptic mechanisms, including threshold, synaptic depression, response variability, and the membrane time constant. PMID:22841306
Experimental evaluation of expendable supersonic nozzle concepts
NASA Technical Reports Server (NTRS)
Baker, V.; Kwon, O.; Vittal, B.; Berrier, B.; Re, R.
1990-01-01
Exhaust nozzles for expendable supersonic turbojet engine missile propulsion systems are required to be simple, short and compact, in addition to having good broad-range thrust-minus-drag performance. A series of convergent-divergent nozzle scale model configurations were designed and wind tunnel tested for a wide range of free stream Mach numbers and nozzle pressure ratios. The models included fixed geometry and simple variable exit area concepts. The experimental and analytical results show that the fixed geometry configurations tested have inferior off-design thrust-minus-drag performance in the transonic Mach range. A simple variable exit area configuration called the Axi-Quad nozzle, combining features of both axisymmetric and two-dimensional convergent-divergent nozzles, performed well over a broad range of operating conditions. Analytical predictions of the flow pattern as well as overall performance of the nozzles, using a fully viscous, compressible CFD code, compared very well with the test data.
NASA Technical Reports Server (NTRS)
Dabney, Philip W.; Harding, David J.; Valett, Susan R.; Vasilyev, Aleksey A.; Yu, Anthony W.
2012-01-01
The Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL) is a multi-beam, micropulse airborne laser altimeter that acquires active and passive polarimetric optical remote sensing measurements at visible and near-infrared wavelengths. SIMPL was developed to demonstrate advanced measurement approaches of potential benefit for improved, more efficient spaceflight laser altimeter missions. SIMPL data have been acquired for a wide diversity of forest types in the summers of 2010 and 2011 in order to assess the potential of its novel capabilities for characterization of vegetation structure and composition. On each of its four beams SIMPL provides highly-resolved measurements of forest canopy structure by detecting single photons with 15 cm ranging precision using a narrow-beam system operating at a laser repetition rate of 11 kHz. Associated with that ranging data SIMPL provides eight amplitude parameters per beam, unlike the single amplitude provided by typical laser altimeters. Those eight parameters are received energy that is parallel and perpendicular to that of the plane-polarized transmit pulse at 532 nm (green) and 1064 nm (near IR), for both the active laser backscatter retro-reflectance and the passive solar bi-directional reflectance. This poster presentation will cover the instrument architecture and highlight the performance of the SIMPL instrument with examples taken from measurements for several sites with distinct canopy structures and compositions. Specific performance areas such as probability of detection, afterpulsing, and dead time will be highlighted and addressed, along with examples of their impact on the measurements and how they limit the ability to accurately model and recover the canopy properties. To assess the sensitivity of SIMPL's measurements to canopy properties an instrument model has been implemented in the FLIGHT radiative transfer code, based on Monte Carlo simulation of photon transport.
SIMPL data collected in 2010 over the Smithsonian Environmental Research Center, MD are currently being modelled and compared to other remote sensing and in situ data sets. Results on the adaptation of FLIGHT to model micropulse, single-photon ranging measurements are presented elsewhere at this conference. NASA's ICESat-2 spaceflight mission, scheduled for launch in 2016, will utilize a multi-beam, micropulse, single-photon ranging measurement approach (although non-polarimetric and only at 532 nm). Insights gained from the analysis and modelling of SIMPL data will help guide preparations for that mission, including development of calibration/validation plans and algorithms for the estimation of forest biophysical parameters.
T.L. Rogerson
1980-01-01
A simple simulation model to predict rainfall for individual storms in central Arkansas is described. Output includes frequency distribution tables for days between storms and for storm size classes; a storm summary by day number (January 1 = 1 and December 31 = 365) and rainfall amount; and an annual storm summary that includes monthly values for rainfall and number...
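A toy generator in the spirit of the abstract, with assumed (not fitted) distributions: exponential gaps between storms and gamma-distributed storm amounts, summarized into a days-between-storms frequency table. None of the parameter values are those of the central Arkansas model.

```python
import random
from collections import Counter

random.seed(42)

# Illustrative storm generator: exponential inter-storm gaps and
# gamma-distributed storm amounts. Distributions and parameters are
# assumptions, not those fitted for central Arkansas.
def simulate_year(mean_gap_days=4.0, shape=0.8, scale=0.6):
    day, storms = 0, []
    while True:
        day += max(1, round(random.expovariate(1.0 / mean_gap_days)))
        if day > 365:
            return storms
        amount = random.gammavariate(shape, scale)   # storm rainfall, inches
        storms.append((day, amount))

storms = simulate_year()
annual_total = sum(a for _, a in storms)
gap_table = Counter(b[0] - a[0] for a, b in zip(storms, storms[1:]))
```

The `gap_table` and a binning of the amounts correspond to the two frequency distribution tables the abstract describes; summing by month would give the annual storm summary.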
The Systemic Vision of the Educational Learning
ERIC Educational Resources Information Center
Lima, Nilton Cesar; Penedo, Antonio Sergio Torres; de Oliveira, Marcio Mattos Borges; de Oliveira, Sonia Valle Walter Borges; Queiroz, Jamerson Viegas
2012-01-01
As the sophistication of technology has increased, so has the demand for quality in education. The expectation of quality has promoted a broad range of products and systems, including in education. These factors include increased diversity in the student body, which requires greater emphasis on a simple and dynamic model in the…
Testing the Two-Layer Model for Correcting Clear Sky Reflectance near Clouds
NASA Technical Reports Server (NTRS)
Wen, Guoyong; Marshak, Alexander; Evans, Frank; Varnai, Tamas; Levy, Rob
2015-01-01
A two-layer model (2LM) was developed in our earlier studies to estimate the clear sky reflectance enhancement due to cloud-molecular radiative interaction at the MODIS 0.47 micrometer band. Recently, we extended the model to include cloud-surface and cloud-aerosol radiative interactions. We use LES/SHDOM-simulated 3D true radiation fields to test the 2LM for reflectance enhancement at 0.47 micrometers. We find that the simple model captures the viewing angle dependence of the reflectance enhancement near clouds, suggesting the physics of this model is correct; the cloud-molecular interaction alone accounts for 70 percent of the enhancement; the cloud-surface interaction accounts for 16 percent of the enhancement; and the cloud-aerosol interaction accounts for an additional 13 percent of the enhancement. We conclude that the 2LM is simple to apply and unbiased.
Bhaumik, Basabi; Mathur, Mona
2003-01-01
We present a model for the development of orientation selectivity in layer IV simple cells. Receptive field (RF) development in the model is determined by diffusive cooperation and resource-limited competition, which guide axonal growth and retraction in the geniculocortical pathway. The simulated cortical RFs resemble experimental RFs. The receptive field model is incorporated in a three-layer visual pathway model consisting of retina, LGN and cortex. We have studied the effect of activity-dependent synaptic scaling on the orientation tuning of cortical cells. The mean value of HWHH (half width at half the height of maximum response) in simulated cortical cells is 58 degrees when we consider only the linear excitatory contribution from the LGN. We observe a mean improvement of 22.8 degrees in tuning response due to the non-linear spiking mechanisms, which include the effects of threshold voltage and the synaptic scaling factor.
A Comparison between Multiple Regression Models and CUN-BAE Equation to Predict Body Fat in Adults
Fuster-Parra, Pilar; Bennasar-Veny, Miquel; Tauler, Pedro; Yañez, Aina; López-González, Angel A.; Aguiló, Antoni
2015-01-01
Background Because the accurate measurement of body fat (BF) is difficult, several prediction equations have been proposed. The aim of this study was to compare different multiple regression models to predict BF, including the recently reported CUN-BAE equation. Methods Multiple regression models using body mass index (BMI) and body adiposity index (BAI) as predictors of BF were compared. These models were also compared with the CUN-BAE equation. For all the analyses, a sample including all the participants and another including only the overweight and obese subjects were considered. The BF reference measure was made using Bioelectrical Impedance Analysis. Results The simplest models, including only BMI or BAI as independent variables, showed that BAI is a better predictor of BF. However, adding the variable sex to both models made BMI a better predictor than BAI. For both the whole group of participants and the group of overweight and obese participants, simple models (BMI, age and sex as variables) yielded correlations with BF similar to those obtained with the more complex CUN-BAE (ρ = 0.87 vs. ρ = 0.86 for the whole sample and ρ = 0.88 vs. ρ = 0.89 for overweight and obese subjects, with the second value in each pair corresponding to CUN-BAE). Conclusions There are models simpler than the CUN-BAE equation that fit BF as well as CUN-BAE does; therefore, it could be considered that CUN-BAE overfits. Using a simple linear regression model, BAI as the only variable predicts BF better than BMI. However, when the sex variable is introduced, BMI becomes the indicator of choice to predict BF. PMID:25821960
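Model comparisons like the one above come down to ordinary least squares fits with different predictor sets. A minimal numpy sketch (on synthetic data; the coefficients and noise level below are illustrative assumptions, not values from the study) shows why adding age and sex to a BMI-only model raises the fitted correlation with the reference BF:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic illustration only: BF generated from BMI, age and sex with noise.
bmi = rng.normal(27, 4, n)
age = rng.uniform(30, 70, n)
sex = rng.integers(0, 2, n)          # 0 = male, 1 = female
bf = 1.2 * bmi + 0.23 * age - 10.8 * (1 - sex) - 5.4 + rng.normal(0, 3, n)

def fit_and_correlate(predictors, target):
    """Ordinary least squares via lstsq; returns Pearson r of fitted vs target."""
    X = np.column_stack(predictors + [np.ones(len(target))])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.corrcoef(X @ beta, target)[0, 1]

r_bmi_only = fit_and_correlate([bmi], bf)
r_full = fit_and_correlate([bmi, age, sex], bf)
print(f"BMI only: r = {r_bmi_only:.2f}, BMI+age+sex: r = {r_full:.2f}")
```

Because the models are nested, the in-sample correlation of the richer fit can never be lower, which is also why the study's overfitting concern about CUN-BAE requires judging model complexity, not just fit.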
Simulating Eastern- and Central-Pacific Type ENSO Using a Simple Coupled Model
NASA Astrophysics Data System (ADS)
Fang, Xianghui; Zheng, Fei
2018-06-01
Severe biases exist in state-of-the-art general circulation models (GCMs) in capturing realistic central-Pacific (CP) El Niño structures. At the same time, many observational analyses have emphasized that thermocline (TH) feedback and zonal advective (ZA) feedback play dominant roles in the development of eastern-Pacific (EP) and CP El Niño-Southern Oscillation (ENSO), respectively. In this work, a simple linear air-sea coupled model, which can accurately depict the strength distribution of the TH and ZA feedbacks in the equatorial Pacific, is used to investigate these two types of El Niño. The results indicate that the model can reproduce the main characteristics of CP ENSO if the TH feedback is switched off and the ZA feedback is retained as the only positive feedback, confirming the dominant role played by ZA feedback in the development of CP ENSO. Further experiments indicate that, through a simple nonlinear control approach, many ENSO characteristics, including the existence of both CP and EP El Niño and the asymmetries between El Niño and La Niña, can be successfully captured using the simple linear air-sea coupled model. These analyses indicate that an accurate depiction of the climatological sea surface temperature distribution and the related ZA feedback, which are the subject of severe biases in GCMs, is very important in simulating a realistic CP El Niño.
Aerosol Complexity and Implications for Predictability and Short-Term Forecasting
NASA Technical Reports Server (NTRS)
Colarco, Peter
2016-01-01
There are clear NWP and climate impacts from including aerosol radiative and cloud interactions. Changes in dynamics and cloud fields affect aerosol lifecycle, plume height, long-range transport, overall forcing of the climate system, etc. Inclusion of aerosols in NWP systems benefits surface field biases (e.g., T2m, U10m). Including aerosol effects has an impact on analysis increments and can have statistically significant impacts on, e.g., tropical cyclogenesis. The above points apply especially to aerosol radiative interactions, but aerosol-cloud interaction is a bigger signal on the global system. Many of these impacts are realized even in models with relatively simple (bulk) aerosol schemes (approx. 10-20 tracers). Simple schemes, though, imply a simple representation of aerosol absorption and, importantly for aerosol-cloud interaction, of the particle-size distribution. Even so, more complex schemes exhibit a lot of diversity between different models, with issues such as size selection both for emitted particles and for modes. There are prospects for complex sectional schemes to tune modal (and even bulk) schemes toward better selection of size representation. Ripe topics for further research include: systematic documentation of the benefits of no vs. climatological vs. interactive (direct, then direct+indirect) aerosols; documenting the aerosol impact on analysis increments and inclusion in the NWP data assimilation operator; and further refinement of baseline assumptions in model design (e.g., absorption, particle size distribution). This presentation does not address model resolution or the interplay of other physical processes with aerosols (e.g., moist physics, obviously important, and chemistry).
NASA Astrophysics Data System (ADS)
Escriva-Bou, A.; Lund, J. R.; Pulido-Velazquez, M.; Medellin-Azuara, J.
2015-12-01
Most individual processes relating water and energy interdependence have been assessed in many different ways over the last decade. It is time to step up and include the results of these studies in management, by providing a tool that integrates these processes into decision-making so that the tradeoffs between water and energy across management options and scenarios can be effectively understood. A simple but powerful decision support system (DSS) for water management is described that includes water-related energy use and GHG emissions not solely from water operations, but also from final water end uses, including demands from cities, agriculture, the environment and the energy sector. Because one of the main drivers of energy use and GHG emissions is water pumping from aquifers, the DSS combines a surface water management model with a simple groundwater model, accounting for their interrelationships. The model also explicitly includes economic data to optimize water use across sectors during shortages and to calculate return flows from different uses. Capabilities of the DSS are demonstrated in a case study of California's intertied water system. Results show that urban end uses account for most GHG emissions of the entire water cycle, but large water conveyance produces significant peaks over the summer season. Also, the development of more efficient water application in the agricultural sector has increased total energy consumption and net water use in the basins.
A crack-like rupture model for the 19 September 1985 Michoacan, Mexico, earthquake
NASA Astrophysics Data System (ADS)
Ruppert, Stanley D.; Yomogida, Kiyoshi
1992-09-01
Evidence supporting a smooth, crack-like rupture process is obtained from a major earthquake, the 1985 Michoacan earthquake, for the first time. Digital strong motion data from three stations (Caleta de Campos, La Villita, and La Union), recording near-field radiation from the fault, show unusually simple ramped displacements and permanent offsets previously seen only in theoretical models. The recording of low frequency (0 to 1 Hz) near-field waves, together with the apparently smooth rupture, favors a crack-like model over a step or Haskell-type dislocation model under the constraint of the slip distribution obtained by previous studies. A crack-like rupture, characterized by an approximated dynamic slip function and a systematic decrease in slip duration away from the point of rupture nucleation, produces the best fit to the simple ramped displacements observed. Spatially varying rupture duration controls several important aspects of the synthetic seismograms, including the variation in displacement rise times between components of motion observed at Caleta de Campos. Ground motion observed at Caleta de Campos can be explained remarkably well with a smoothly propagating crack model. However, data from La Villita and La Union suggest a more complex rupture process than the simple crack-like model for the south-eastern portion of the fault.
Food-web models predict species abundances in response to habitat change.
Gotelli, Nicholas J; Ellison, Aaron M
2006-10-01
Plant and animal population sizes inevitably change following habitat loss, but the mechanisms underlying these changes are poorly understood. We experimentally altered habitat volume and eliminated top trophic levels of the food web of invertebrates that inhabit rain-filled leaves of the carnivorous pitcher plant Sarracenia purpurea. Path models that incorporated food-web structure better predicted population sizes of food-web constituents than did simple keystone species models, models that included only autecological responses to habitat volume, or models including both food-web structure and habitat volume. These results provide the first experimental confirmation that trophic structure can determine species abundances in the face of habitat loss.
Controlled recovery of phylogenetic communities from an evolutionary model using a network approach
NASA Astrophysics Data System (ADS)
Sousa, Arthur M. Y. R.; Vieira, André P.; Prado, Carmen P. C.; Andrade, Roberto F. S.
2016-04-01
This work reports the use of a complex network approach to produce a phylogenetic classification tree from a simple evolutionary model. This approach has already been used to treat proteomic data of actual extant organisms, but an investigation of its reliability in retrieving a traceable evolutionary history has been missing. The evolutionary model used here includes key ingredients for the emergence of groups of related organisms by differentiation through random mutations and population growth, but purposefully omits other realistic ingredients that are not strictly necessary to originate an evolutionary history. This choice causes the model to depend on only a small set of parameters, controlling the mutation probability and the population of different species. Our results indicate that, for a set of parameter values, the phylogenetic classification produced by this framework reproduces the actual evolutionary history with a very high average degree of accuracy. This includes parameter values where the species originated by the evolutionary dynamics have modular structures. In the more general context of community identification in complex networks, our model offers a simple setting for evaluating how the dynamics generating the network itself affect the efficiency of community formation and identification.
The vertical distribution of nutrients and oxygen 18 in the upper Arctic Ocean
NASA Astrophysics Data System (ADS)
BjöRk, GöRan
1990-09-01
The observed vertical nutrient distribution in the Arctic Ocean, including a maximum at about 100 m depth, is investigated using a one-dimensional time-dependent circulation model together with a simple biological model. The circulation model includes a shelf-forced circulation, thought to take place in a box from which the outflow is specified with regard to temperature and volume flux at different salinities. It has earlier been shown that the circulation model is able to reproduce the observed mean salinity and temperature stratification in the Arctic Ocean. Before introducing nutrients in the model, a test is performed using the conservative tracer δ18O (the 18O/16O ratio) as one extra state variable in order to verify the circulation model. It is shown that the field measurements can be simulated. The result is, however, rather sensitive to the tracer concentration in the Bering Strait inflow. The nutrients nitrate, phosphate, and silicate are then treated by coupling a simple biological model to the circulation model. The biological model describes some overall effects of production, sinking, and decomposition of organic matter. First a standard case of the biological model is presented, followed by some modified cases. It is shown that the observed nutrient distribution, including the maximum, can be generated. The available nutrient data from the Arctic Ocean are not sufficient to decide which among the cases is the most likely to occur. One case is, however, chosen as the best case. A nutrient budget and estimates of the magnitudes of the new production are presented for this case.
Précis of Simple heuristics that make us smart.
Todd, P M; Gigerenzer, G
2000-10-01
How can anyone be rational in a world where knowledge is limited, time is pressing, and deep thought is often an unattainable luxury? Traditional models of unbounded rationality and optimization in cognitive science, economics, and animal behavior have tended to view decision-makers as possessing supernatural powers of reason, limitless knowledge, and endless time. But understanding decisions in the real world requires a more psychologically plausible notion of bounded rationality. In Simple heuristics that make us smart (Gigerenzer et al. 1999), we explore fast and frugal heuristics--simple rules in the mind's adaptive toolbox for making decisions with realistic mental resources. These heuristics can enable both living organisms and artificial systems to make smart choices quickly and with a minimum of information by exploiting the way that information is structured in particular environments. In this précis, we show how simple building blocks that control information search, stop search, and make decisions can be put together to form classes of heuristics, including: ignorance-based and one-reason decision making for choice, elimination models for categorization, and satisficing heuristics for sequential search. These simple heuristics perform comparably to more complex algorithms, particularly when generalizing to new data--that is, simplicity leads to robustness. We present evidence regarding when people use simple heuristics and describe the challenges to be addressed by this research program.
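The "one-reason decision making" described above, take-the-best, is easy to state as code: inspect cues in order of validity and decide on the first cue that discriminates. This sketch assumes binary cues already ranked; the city-size cues are hypothetical illustrations in the spirit of the book's examples, not its actual data:

```python
def take_the_best(cues_a, cues_b, cue_order):
    """One-reason decision making: check cues in order of validity and
    choose as soon as one cue discriminates between the two options.
    Cue values are 1 (present) or 0 (absent)."""
    for cue in cue_order:
        a, b = cues_a[cue], cues_b[cue]
        if a != b:                      # first discriminating cue decides
            return "A" if a > b else "B"
    return "guess"                      # no cue discriminates

# Hypothetical cues for a city-size comparison
berlin = {"capital": 1, "team": 1, "river": 1}
bielefeld = {"capital": 0, "team": 1, "river": 0}
print(take_the_best(berlin, bielefeld, ["capital", "team", "river"]))  # prints A
```

Search stops at the first discriminating cue, which is the "frugal" part: most comparisons consult only one or two cues.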
Soldering to a single atomic layer
NASA Astrophysics Data System (ADS)
Girit, Çağlar Ö.; Zettl, A.
2007-11-01
The standard technique to make electrical contact to nanostructures is electron beam lithography. This method has several drawbacks including complexity, cost, and sample contamination. We present a simple technique to cleanly solder submicron sized, Ohmic contacts to nanostructures. To demonstrate, we contact graphene, a single atomic layer of carbon, and investigate low- and high-bias electronic transport. We set lower bounds on the current carrying capacity of graphene. A simple model allows us to obtain device characteristics such as mobility, minimum conductance, and contact resistance.
Initial Kernel Timing Using a Simple PIM Performance Model
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Block, Gary L.; Springer, Paul L.; Sterling, Thomas; Brockman, Jay B.; Callahan, David
2005-01-01
This presentation will describe some initial results of paper-and-pencil studies of 4 or 5 application kernels applied to a processor-in-memory (PIM) system roughly similar to the Cascade Lightweight Processor (LWP). The application kernels are: linked list traversal, sum of leaf nodes on a tree, bitonic sort, vector sum, and Gaussian elimination. The intent of this work is to guide and validate work on the Cascade project in the areas of compilers, simulators, and languages. We will first discuss the generic PIM structure. Then, we will explain the concepts needed to program a parallel PIM system (locality, threads, parcels). Next, we will present a simple PIM performance model that will be used in the remainder of the presentation. For each kernel, we will then present a set of codes, including codes for a single PIM node, and codes for multiple PIM nodes that move data to threads and move threads to data. These codes are written at a fairly low level, between assembly and C, but much closer to C than to assembly. For each code, we will present some hand-drafted timing forecasts, based on the simple PIM performance model. Finally, we will conclude by discussing what we have learned from this work, including what programming styles seem to work best, from the point of view of both expressiveness and performance.
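A simple PIM performance model of the kind described typically charges a fixed cost per local operation and a larger cost per inter-node parcel. The sketch below is a hypothetical back-of-the-envelope model for the vector-sum kernel; the cycle and parcel costs are invented for illustration, not Cascade LWP figures:

```python
# Hypothetical cost parameters (illustrative only)
CYCLE_NS = 2.0        # cost of one local operation, in ns
PARCEL_NS = 50.0      # cost of sending one parcel between PIM nodes, in ns

def vector_sum_time(n_elements, n_nodes):
    """Toy PIM timing forecast: each node sums its slice of the vector
    locally, then partial sums are combined with one parcel per remote node."""
    local = (n_elements / n_nodes) * CYCLE_NS
    combine = (n_nodes - 1) * PARCEL_NS
    return local + combine

# Adding nodes helps until parcel overhead dominates the local work.
for nodes in (1, 8, 64):
    print(nodes, vector_sum_time(1_000_000, nodes))
```

Even this two-parameter model exposes the data-to-threads vs. threads-to-data tradeoff: when the per-node slice shrinks below the parcel cost, moving computation no longer pays.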
ERIC Educational Resources Information Center
Brady, Timothy F.; Tenenbaum, Joshua B.
2013-01-01
When remembering a real-world scene, people encode both detailed information about specific objects and higher order information like the overall gist of the scene. However, formal models of change detection, like those used to estimate visual working memory capacity, assume observers encode only a simple memory representation that includes no…
Can Neuroscience Help Us Do a Better Job of Teaching Music?
ERIC Educational Resources Information Center
Hodges, Donald A.
2010-01-01
We are just at the beginning stages of applying neuroscientific findings to music teaching. A simple model of the learning cycle based on neuroscience is Sense → Integrate → Act (sometimes modified as Act → Sense → Integrate). Additional components can be added to the model, including such concepts…
Rolling Friction on a Wheeled Laboratory Cart
ERIC Educational Resources Information Center
Mungan, Carl E.
2012-01-01
A simple model is developed that predicts the coefficient of rolling friction for an undriven laboratory cart on a track that is approximately independent of the mass loaded onto the cart and of the angle of inclination of the track. The model includes both deformation of the wheels/track and frictional torque at the axles/bearings. The concept of…
Batterham, Philip J; Bunce, David; Mackinnon, Andrew J; Christensen, Helen
2014-01-01
Very few studies have examined the association between intra-individual reaction time variability and subsequent mortality. Furthermore, the ability of simple measures of variability to predict mortality has not been compared with that of more complex measures. In a prospective cohort study, 896 community-based Australian adults aged 70+ were interviewed up to four times from 1990 to 2002, with vital status assessed until June 2007. From this cohort, 770-790 participants were included in Cox proportional hazards regression models of survival. Vital status and time in study were used to conduct survival analyses. The mean reaction time and three measures of intra-individual reaction time variability were calculated separately across 20 trials of simple and choice reaction time tasks. Models were adjusted for a range of demographic, physical health and mental health measures. Greater intra-individual simple reaction time variability, as assessed by the raw standard deviation (raw SD), coefficient of variation (CV) or the intra-individual standard deviation (ISD), was strongly associated with an increased hazard of all-cause mortality in adjusted Cox regression models. The mean reaction time had no significant association with mortality. Intra-individual variability in simple reaction time appears to have a robust association with mortality over 17 years. Health professionals such as neuropsychologists may benefit in their detection of neuropathology by supplementing neuropsychiatric testing with the straightforward process of testing simple reaction time and calculating the raw SD or CV.
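The two simple variability measures named above, raw SD and CV, take only a few lines to compute from one participant's trials. A sketch with made-up reaction times (the study's ISD involves an additional adjustment for systematic trends, which is omitted here):

```python
import numpy as np

def rt_variability(rts):
    """Raw SD and coefficient of variation across one participant's trials.
    CV expresses variability relative to the person's overall speed."""
    rts = np.asarray(rts, dtype=float)
    mean = rts.mean()
    raw_sd = rts.std(ddof=1)     # sample standard deviation
    cv = raw_sd / mean
    return mean, raw_sd, cv

# 20 simple reaction time trials in ms (illustrative values only)
trials = [310, 295, 402, 288, 330, 276, 510, 305, 299, 342,
          287, 315, 450, 301, 296, 338, 290, 312, 299, 305]
mean, raw_sd, cv = rt_variability(trials)
print(f"mean = {mean:.0f} ms, raw SD = {raw_sd:.0f} ms, CV = {cv:.2f}")
```

The occasional long trials (here 402, 450, 510 ms) are exactly what inflate raw SD and CV while barely moving the mean, which is consistent with variability, not mean speed, carrying the mortality signal.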
Comparative Research Productivity Measures for Economic Departments.
ERIC Educational Resources Information Center
Huettner, David A.; Clark, William
1997-01-01
Develops a simple theoretical model to evaluate interdisciplinary differences in research productivity between economics departments and related subjects. Compares the research publishing statistics of economics, finance, psychology, geology, physics, oceanography, chemistry, and geophysics. Considers a number of factors including journal…
Chaos in plasma simulation and experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watts, C.; Newman, D.E.; Sprott, J.C.
1993-09-01
We investigate the possibility that chaos and simple determinism are governing the dynamics of reversed field pinch (RFP) plasmas, using data from both numerical simulations and experiment. A large repertoire of nonlinear analysis techniques is used to identify low-dimensional chaos. These tools include phase portraits and Poincaré sections, the correlation dimension, the spectrum of Lyapunov exponents and short-term predictability. In addition, nonlinear noise reduction techniques are applied to the experimental data in an attempt to extract any underlying deterministic dynamics. Two model systems are used to simulate the plasma dynamics: the DEBS code, which models global RFP dynamics, and the dissipative trapped electron mode (DTEM) model, which models drift wave turbulence. Data from both simulations show strong indications of low-dimensional chaos and simple determinism. Experimental data were obtained from the Madison Symmetric Torus RFP and consist of a wide array of both global and local diagnostic signals. None of the signals shows any indication of low-dimensional chaos or other simple determinism. Moreover, most of the analysis tools indicate the experimental system is very high dimensional, with properties similar to noise. Nonlinear noise reduction is unsuccessful at extracting an underlying deterministic system.
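Of the diagnostics listed, the Lyapunov exponent is the most direct test for chaos: a positive largest exponent signals sensitive dependence on initial conditions. As a self-contained illustration (using the textbook logistic map, not the RFP data), the largest exponent can be estimated by averaging the local stretching rate log|f'(x)| along an orbit:

```python
import math

def lyapunov_logistic(r, x0=0.4, n_transient=500, n_iter=5000):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    estimated as the orbit average of log |f'(x)| with f'(x) = r (1 - 2x)."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n_iter

print(lyapunov_logistic(3.2))   # periodic regime: negative exponent
print(lyapunov_logistic(4.0))   # chaotic regime: approaches ln 2
```

At r = 4 the exponent approaches ln 2 ≈ 0.693 (chaos); in the period-2 window at r = 3.2 it is negative, the signature of simple determinism without chaos.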
Modeling of the merging of two colliding field reversed configuration plasmoids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Guanqiong; Wang, Xiaoguang; Li, Lulu
2016-06-15
The field reversed configuration (FRC) is one of the candidate plasma targets for magneto-inertial fusion, and a high temperature FRC can be formed using collision-merging technology. Although the merging process and mechanism of an FRC are quite complicated, it is feasible to build a simple model to investigate the macroscopic equilibrium parameters, including the density, the temperature and the separatrix volume, which may play an important role in the collision-merging process of an FRC. It is quite interesting that the estimates based on our simple model are in agreement with the simulation results of a two-dimensional magneto-hydrodynamic code (MFP-2D), which our group has been developing over the last couple of years, while these results can qualitatively fit the results of C-2 experiments by the Tri Alpha Energy company. On the other hand, the simple model can be used to investigate how to increase the density of the merged FRC. It is found that the amplification of the density depends on the poloidal flux-increase factor, and that the temperature increases with the translation speed of the two plasmoids.
Inverse models: A necessary next step in ground-water modeling
Poeter, E.P.; Hill, M.C.
1997-01-01
Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best-fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
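The nonlinear least-squares machinery behind such inverse models can be sketched compactly. The example below uses a generic exponential-decay model with synthetic observations (a stand-in, not the paper's ground-water flow problem) and a plain Gauss-Newton iteration, then forms the linearized parameter covariance that underlies the confidence limits mentioned in benefit (2c):

```python
import numpy as np

# Synthetic "observed" data from a known model plus noise; the inverse
# problem is to recover the parameters by nonlinear least squares.
def model(params, x):
    a, b = params
    return a * np.exp(-b * x)

rng = np.random.default_rng(1)
x = np.linspace(0, 5, 25)
true = np.array([10.0, 0.7])
obs = model(true, x) + rng.normal(0, 0.05, x.size)

p = np.array([8.0, 0.5])                 # initial parameter guess
for _ in range(50):                      # Gauss-Newton iterations
    r = obs - model(p, x)
    J = np.column_stack([np.exp(-p[1] * x),                # d model / d a
                         -p[0] * x * np.exp(-p[1] * x)])   # d model / d b
    p = p + np.linalg.lstsq(J, r, rcond=None)[0]           # solve J dp = r

# Linearized parameter covariance: the basis for confidence limits.
sigma2 = np.sum((obs - model(p, x)) ** 2) / (x.size - p.size)
cov = sigma2 * np.linalg.inv(J.T @ J)
print("estimates:", p, "std errors:", np.sqrt(np.diag(cov)))
```

The same loop, with the model swapped for a flow simulator and the Jacobian computed by perturbation, is essentially what regression-based calibration codes automate.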
Vertical and pitching resonance of train cars moving over a series of simple beams
NASA Astrophysics Data System (ADS)
Yang, Y. B.; Yau, J. D.
2015-02-01
The resonant response, including both vertical and pitching motions, of an undamped sprung mass unit moving over a series of simple beams is studied by a semi-analytical approach. For a sprung mass that is very small compared with the beam, we first simplify the sprung mass as a constant moving force and obtain the response of the beam in closed form. With this, we then solve for the response of the sprung mass passing over a series of simple beams, and validate the solution by an independent finite element analysis. To evaluate the pitching resonance, we consider the cases of a two-axle model and a coach model traveling over rough rails supported by a series of simple beams. The resonance of a train car is characterized by the fact that its response continues to build up, as it travels over more and more beams. For train cars with long axle intervals, the vertical acceleration induced by pitching resonance dominates the peak response of the train traveling over a series of simple beams. The present semi-analytical study allows us to grasp the key parameters involved in the primary/sub-resonant responses. Other phenomena of resonance are also discussed in the exemplar study.
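The key resonance condition for a vehicle crossing a series of identical simple beams is that the span-crossing frequency v/L, or one of its subharmonics, matches a natural frequency f of the vehicle, giving resonant speeds v = f L / i. A small helper makes the parameter dependence explicit (the 2 Hz bounce frequency and 30 m span below are hypothetical numbers, not values from this study):

```python
def resonant_speeds(span_length, natural_freq_hz, n_harmonics=3):
    """Resonant crossing speeds for a vehicle traversing a series of equal
    spans: the excitation frequency v / L (or its i-th subharmonic)
    coincides with the natural frequency f, so v = f * L / i."""
    return [natural_freq_hz * span_length / i
            for i in range(1, n_harmonics + 1)]

# Hypothetical numbers: 2 Hz vehicle bounce frequency, 30 m spans
print(resonant_speeds(30.0, 2.0))   # m/s  → [60.0, 30.0, 20.0]
```

This shows why long axle intervals matter in the abstract's conclusion: they shift the excitation frequencies relative to v/L and determine which resonance builds up as the car crosses successive beams.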
A simple model of hysteresis behavior using spreadsheet analysis
NASA Astrophysics Data System (ADS)
Ehrmann, A.; Blachowicz, T.
2015-01-01
Hysteresis loops occur in many scientific and technical problems, most notably as the field-dependent magnetization of ferromagnetic materials, but also as stress-strain curves measured in tensile tests including thermal effects, in liquid-solid phase transitions, in cell biology, and in economics. While several mathematical models exist that aim to calculate hysteresis energies and other parameters, here we offer a simple model of a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation, which is based on basic spreadsheet analysis plus an easy macro code, can be used by students to understand how these systems work and how the parameters influence the reaction of the system to an external field. Importantly, in the step-by-step mode, each change of the system state compared to the last step becomes visible. The simple program can be developed further through several changes and additions, enabling the building of a tool capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas in which similar hysteresis loops occur.
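As a rough analogue of the spreadsheet approach (a generic construction, not the paper's actual macro): an open loop appears whenever the state variable is only partially relaxed toward its field-dependent equilibrium at each step, so that the magnetization M lags the field H. A minimal sketch:

```python
import math

def hysteresis_loop(n_steps=200, h_max=3.0, k=0.25, m_sat=1.0):
    """Minimal hysteresis sketch: at each step the magnetization M moves
    only a fraction k of the way toward its equilibrium value for the
    current field H, so M lags H and an open loop appears."""
    # Triangular field sweep: up, down, and up again
    up = [h_max * (2 * i / n_steps - 1) for i in range(n_steps + 1)]
    sweep = up + up[::-1][1:] + up[1:]
    m, loop = 0.0, []
    for h in sweep:
        m += k * (m_sat * math.tanh(2 * h) - m)   # partial relaxation step
        loop.append((h, m))
    return loop

loop = hysteresis_loop()
# Remanence: M is still positive where the descending branch crosses H = 0.
branch_down = loop[len(loop) // 3: 2 * len(loop) // 3]
m_at_zero = min(branch_down, key=lambda p: abs(p[0]))[1]
print(f"remanent magnetization ≈ {m_at_zero:.2f}")
```

Changing the relaxation rate k or the sweep amplitude h_max widens or narrows the loop, mirroring how the model's parameters control the system's reaction to the external field.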
Comparison of rigorous and simple vibrational models for the CO2 gasdynamic laser
NASA Technical Reports Server (NTRS)
Monson, D. J.
1977-01-01
The accuracy of a simple vibrational model for computing the gain in a CO2 gasdynamic laser is assessed by comparing results computed from it with results computed from a rigorous vibrational model. The simple model is that of Anderson et al. (1971), in which the vibrational kinetics are modeled by grouping the nonequilibrium vibrational degrees of freedom into two modes, to each of which there corresponds an equation describing vibrational relaxation. The two models agree fairly well in the computed gain at low temperatures, but the simple model predicts too high a gain at the higher temperatures of current interest. The sources of error contributing to the overestimation given by the simple model are determined by examining the simplified relaxation equations.
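The two-mode treatment can be caricatured with Landau-Teller-style relaxation equations, one per grouped mode. The sketch below uses made-up parameters and a forward-Euler step; it illustrates the structure of such models, not Anderson et al.'s actual formulation.

```python
# Generic two-mode vibrational relaxation sketch (Landau-Teller form):
# each grouped mode relaxes toward its equilibrium energy with its own
# characteristic time, dE/dt = (E_eq - E) / tau, via forward-Euler steps.
# All parameter values are illustrative placeholders.

def relax_two_modes(e1, e2, eq1, eq2, tau1, tau2, dt, steps):
    for _ in range(steps):
        e1 += dt * (eq1 - e1) / tau1
        e2 += dt * (eq2 - e2) / tau2
    return e1, e2

# Mode 1 relaxes quickly, mode 2 slowly, from a common initial energy.
e1, e2 = relax_two_modes(1.0, 1.0, 0.2, 0.2, tau1=1e-5, tau2=1e-3, dt=1e-6, steps=1000)
```

After the integration, the fast mode has essentially reached equilibrium while the slow mode is still relaxing, which is the kind of nonequilibrium separation the two-mode model exploits.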
A general consumer-resource population model
Lafferty, Kevin D.; DeLeo, Giulio; Briggs, Cheryl J.; Dobson, Andrew P.; Gross, Thilo; Kuris, Armand M.
2015-01-01
Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model.
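The saturating functional response mentioned here reduces, in its simplest classical form, to the Holling type II curve. A minimal sketch under that assumption (parameter names are generic, not the paper's notation):

```python
def holling_type_ii(resource_density, attack_rate, handling_time):
    """Per-capita consumption rate that saturates at 1/handling_time."""
    return (attack_rate * resource_density) / (
        1.0 + attack_rate * handling_time * resource_density)

# Consumption rises nearly linearly at low density and saturates at high density.
low = holling_type_ii(1.0, attack_rate=0.5, handling_time=0.1)
high = holling_type_ii(1e6, attack_rate=0.5, handling_time=0.1)
```

The saturation ceiling 1/handling_time falls out of the algebra: at high resource density the consumer is limited purely by handling, not by search.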
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Vincent K., E-mail: vincent.shen@nist.gov; Siderius, Daniel W.
2014-06-28
Using flat-histogram Monte Carlo methods, we investigate the adsorptive behavior of the square-well fluid in two simple slit-pore-like models intended to capture fundamental characteristics of flexible adsorbent materials. Both models require as input thermodynamic information about the flexible adsorbent material itself. An important component of this work involves formulating the flexible pore models in the appropriate thermodynamic (statistical mechanical) ensembles, namely, the osmotic ensemble and a variant of the grand-canonical ensemble. Two-dimensional probability distributions, which are calculated using flat-histogram methods, provide the information necessary to determine adsorption thermodynamics. For example, we are able to determine precisely adsorption isotherms, (equilibrium) phase transition conditions, limits of stability, and free energies for a number of different flexible adsorbent materials, distinguishable as different inputs into the models. While the models used in this work are relatively simple from a geometric perspective, they yield non-trivial adsorptive behavior, including adsorption-desorption hysteresis solely due to material flexibility and so-called “breathing” of the adsorbent. The observed effects can in turn be tied to the inherent properties of the bare adsorbent. Some of the effects are expected on physical grounds while others arise from a subtle balance of thermodynamic and mechanical driving forces. In addition, the computational strategy presented here can be easily applied to more complex models for flexible adsorbents.
Modelling melting in crustal environments, with links to natural systems in the Nepal Himalayas
NASA Astrophysics Data System (ADS)
Isherwood, C.; Holland, T.; Bickle, M.; Harris, N.
2003-04-01
Melt bodies of broadly granitic character occur frequently in mountain belts such as the Himalayan chain which exposes leucogranitic intrusions along its entire length (e.g. Le Fort, 1975). The genesis and disposition of these bodies have considerable implications for the development of tectonic evolution models for such mountain belts. However, melting processes and melt migration behaviour are influenced by many factors (Hess, 1995; Wolf & McMillan, 1995) which are as yet poorly understood. Recent improvements in internally consistent thermodynamic datasets have allowed the modelling of simple granitic melt systems (Holland & Powell, 2001) at pressures below 10 kbar, of which Himalayan leucogranites provide a good natural example. Model calculations such as these have been extended to include an asymmetrical melt-mixing model based on the Van Laar approach, which uses volumes (or pseudovolumes) for the different end-members in a mixture to control the asymmetry of non-ideal mixing. This asymmetrical formalism has been used in conjunction with several different entropy-of-mixing assumptions in an attempt to find the closest fit to available experimental data for melting in simple binary and ternary haplogranite systems. The extracted mixing data are extended to more complex systems and allow the construction of phase relations in NKASH necessary to model simple haplogranitic melts involving albite, K-feldspar, quartz, sillimanite and H2O. The models have been applied to real bulk composition data from Himalayan leucogranites.
Cunningham, J C; Sinka, I C; Zavaliangos, A
2004-08-01
In this first of two articles on the modeling of tablet compaction, the experimental inputs related to the constitutive model of the powder and the powder/tooling friction are determined. The continuum-based analysis of tableting makes use of an elasto-plastic model, which incorporates the elements of yield, plastic flow potential, and hardening, to describe the mechanical behavior of microcrystalline cellulose over the range of densities experienced during tableting. Specifically, a modified Drucker-Prager/cap plasticity model, which includes material parameters such as cohesion, internal friction, and hydrostatic yield pressure that evolve with the internal state variable relative density, was applied. Linear elasticity is assumed with the elastic parameters, Young's modulus, and Poisson's ratio dependent on the relative density. The calibration techniques were developed based on a series of simple mechanical tests including diametrical compression, simple compression, and die compaction using an instrumented die. The friction behavior is measured using an instrumented die and the experimental data are analyzed using the method of differential slices. The constitutive model and frictional properties are essential experimental inputs to the finite element-based model described in the companion article. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 93:2022-2039, 2004
Analyzing inflammatory response as excitable media
NASA Astrophysics Data System (ADS)
Yde, Pernille; Høgh Jensen, Mogens; Trusina, Ala
2011-11-01
The regulatory system of the transcription factor NF-κB plays a central role in many cell functions, including the inflammatory response. Interestingly, the NF-κB system is known to up-regulate production of its own triggering signal, namely inflammatory cytokines such as TNF, IL-1, and IL-6. In this paper we investigate a previously presented model of NF-κB which includes both spatial effects and the positive feedback from cytokines. The model exhibits the properties of an excitable medium and has the ability to propagate waves of high cytokine concentration. These waves represent an optimal way of sending an inflammatory signal through the tissue, as they create a chemotactic signal able to recruit neutrophils to the site of infection. The simple model displays three qualitatively different states: low stimuli lead to no or very little response; intermediate stimuli lead to recurring waves of high cytokine concentration; and high stimuli lead to a sustained high cytokine concentration, a scenario which is toxic for the tissue cells and corresponds to chronic inflammation. Due to the few variables of the simple model, we are able to perform a phase-space analysis leading to a detailed understanding of the functional form of the model and its limitations. The spatial effects of the model contribute to the robustness of cytokine wave formation and propagation.
Meteorological adjustment of yearly mean values for air pollutant concentration comparison
NASA Technical Reports Server (NTRS)
Sidik, S. M.; Neustadter, H. E.
1976-01-01
Using multiple linear regression analysis, models which estimate mean concentrations of Total Suspended Particulate (TSP), sulfur dioxide, and nitrogen dioxide as a function of several meteorologic variables, two rough economic indicators, and a simple trend in time are studied. Meteorologic data were obtained and do not include inversion heights. The goodness of fit of the estimated models is partially reflected by the squared coefficient of multiple correlation which indicates that, at the various sampling stations, the models accounted for about 23 to 47 percent of the total variance of the observed TSP concentrations. If the resulting model equations are used in place of simple overall means of the observed concentrations, there is about a 20 percent improvement in either: (1) predicting mean concentrations for specified meteorological conditions; or (2) adjusting successive yearly averages to allow for comparisons devoid of meteorological effects. An application to source identification is presented using regression coefficients of wind velocity predictor variables.
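The "percent of total variance accounted for" quoted above is the squared coefficient of multiple correlation (R²). For the single-predictor case the computation can be sketched directly; the data below are invented for illustration, not the study's measurements.

```python
def fit_and_r2(x, y):
    """Ordinary least squares for y = a + b*x, plus the R^2 statistic."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Invented data: concentration falls with wind speed, plus a small wiggle.
wind = [float(i) for i in range(10)]
tsp = [100.0 - 3.0 * w + (-1.0) ** i for i, w in enumerate(wind)]
a, b, r2 = fit_and_r2(wind, tsp)
```

The same idea extends to several predictors (meteorology, economic indicators, a time trend) by solving the multivariate normal equations; R² is computed from the residuals in exactly the same way.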
NASA Technical Reports Server (NTRS)
Hoffman, P. F.
1986-01-01
A prograding (direction unspecified) trench-arc system is favored as a simple yet comprehensive model for crustal generation in a 250,000 sq km granite-greenstone terrain. The model accounts for the evolutionary sequence of volcanism, sedimentation, deformation, metamorphism and plutonism observed throughout the Slave Province. It explains both unconformable (trench inner slope) and subconformable (trench outer slope) relations between the volcanics and overlying turbidites, and the existence of relatively minor amounts of pre-greenstone basement (microcontinents) and syn-greenstone plutons (accreted arc roots). Predictions include a variable gap between greenstone volcanism and trench turbidite sedimentation (accompanied by minor volcanism) and systematic regional variations in the age span of volcanism and plutonism. Implications of the model will be illustrated with reference to a 1:1 million scale geological map of the Slave Province (and its bounding 1.0 Ga orogens).
Predictive power of food web models based on body size decreases with trophic complexity.
Jonsson, Tomas; Kaartinen, Riikka; Jonsson, Mattias; Bommarco, Riccardo
2018-05-01
Food web models parameterised using body size show promise to predict trophic interaction strengths (IS) and abundance dynamics. However, this remains to be rigorously tested in food webs beyond simple trophic modules, where indirect and intraguild interactions could be important and driven by traits other than body size. We systematically varied predator body size, guild composition and richness in microcosm insect webs and compared experimental outcomes with predictions of IS from models with allometrically scaled parameters. Body size was a strong predictor of IS in simple modules (r² = 0.92), but with increasing complexity the predictive power decreased, with model IS being consistently overestimated. We quantify the strength of observed trophic interaction modifications, partition this into density-mediated vs. behaviour-mediated indirect effects and show that model shortcomings in predicting IS are related to the size of behaviour-mediated effects. Our findings encourage development of dynamical food web models explicitly including and exploring indirect mechanisms. © 2018 John Wiley & Sons Ltd/CNRS.
Modeling Spacecraft Fuel Slosh at Embry-Riddle Aeronautical University
NASA Technical Reports Server (NTRS)
Schlee, Keith L.
2007-01-01
As a NASA-sponsored GSRP Fellow, I worked with other researchers and analysts at Embry-Riddle Aeronautical University and NASA's ELV Division to investigate the effect of spacecraft fuel slosh. NASA's research into the effects of fuel slosh includes modeling the response in full-sized tanks using equipment such as the Spinning Slosh Test Rig (SSTR), located at Southwest Research Institute (SwRI). NASA and SwRI engineers analyze data taken from SSTR runs and hand-derive equations of motion to identify model parameters and characterize the sloshing motion. With guidance from my faculty advisor, Dr. Sathya Gangadharan, and NASA flight controls analysts James Sudermann and Charles Walker, I set out to automate this parameter identification process by building a simple physical experimental setup to model free surface slosh in a spherical tank with a simple pendulum analog. This setup was then modeled using Simulink and SimMechanics. The Simulink Parameter Estimation Tool was then used to identify the model parameters.
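The pendulum analog at the heart of the setup can be sketched numerically. The damped-pendulum integration below is a generic stand-in with invented parameters, not the SimMechanics model or the estimated flight hardware values.

```python
import math

def pendulum_angles(length, damping, theta0, dt=0.001, steps=5000, g=9.81):
    """Semi-implicit Euler integration of a damped pendulum (slosh analog)."""
    theta, omega = theta0, 0.0
    angles = []
    for _ in range(steps):
        alpha = -(g / length) * math.sin(theta) - damping * omega
        omega += alpha * dt
        theta += omega * dt
        angles.append(theta)
    return angles

# A 0.5 m pendulum released at 0.3 rad with light damping rings down slowly.
angles = pendulum_angles(length=0.5, damping=0.5, theta0=0.3)
```

Parameter estimation then amounts to adjusting length and damping until the simulated trace matches the measured slosh response, which is what the Simulink Parameter Estimation Tool automates.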
Applying the take-grant protection model
NASA Technical Reports Server (NTRS)
Bishop, Matt
1990-01-01
The Take-Grant Protection Model has in the past been used to model multilevel security hierarchies and simple protection systems. The models are extended to include theft of rights and sharing of information, and additional security policies are examined. The analysis suggests that in some cases the basic rules of the Take-Grant Protection Model should be augmented to represent the policy properly; when appropriate, such modifications are made and their effects with respect to the policy and its Take-Grant representation are discussed.
NASA Technical Reports Server (NTRS)
Stutzman, Warren L.
1989-01-01
This paper reviews the effects of precipitation on earth-space communication links operating in the 10 to 35 GHz frequency range. Emphasis is on the quantitative prediction of rain attenuation and depolarization. Discussions center on the models developed at Virginia Tech. Comments on other models are included as well as literature references to key works. Also included is the system-level modeling of dual polarized communication systems, with techniques for calculating antenna and propagation medium effects. Simple models for the calculation of average annual attenuation and cross-polarization discrimination (XPD) are presented. Calculations of worst-month statistics are also presented.
Ponce, Carlos; Bravo, Carolina; Alonso, Juan Carlos
2014-01-01
Studies evaluating agri-environmental schemes (AES) usually focus on responses of single species or functional groups. Analyses are generally based on simple habitat measurements but ignore food availability and other important factors. This can limit our understanding of the ultimate causes determining the reactions of birds to AES. We investigated these issues in detail and throughout the main seasons of a bird's annual cycle (mating, postfledging and wintering) in a dry cereal farmland in a Special Protection Area for farmland birds in central Spain. First, we modeled four bird response parameters (abundance, species richness, diversity and “Species of European Conservation Concern” [SPEC] score), using detailed food availability and vegetation structure measurements (food models). Second, we fitted new models, built using only substrate composition variables (habitat models). Whereas habitat models revealed that both fields included in the AES and fields not included benefited birds, food models went a step further and included seed and arthropod biomass as important predictors in winter and during the postfledging season, respectively. The validation process showed that food models were on average 13% better (up to 20% for some variables) in predicting bird responses. However, the cost of obtaining data for food models was five times higher than for habitat models. This novel approach highlighted the importance of food availability-related causal processes involved in bird responses to AES, which remained undetected when using conventional substrate composition assessment models. Despite their higher costs, measurements of food availability add important details to interpret the reactions of the bird community to AES interventions and thus facilitate evaluating the real efficiency of AES programs. PMID:25165523
Electron heating in a Monte Carlo model of a high Mach number, supercritical, collisionless shock
NASA Technical Reports Server (NTRS)
Ellison, Donald C.; Jones, Frank C.
1987-01-01
Preliminary work in the investigation of electron injection and acceleration at parallel shocks is presented. A simple model of electron heating that is derived from a unified shock model which includes the effects of an electrostatic potential jump is described. The unified shock model provides a kinetic description of the injection and acceleration of ions and a fluid description of electron heating at high Mach number, supercritical, and parallel shocks.
Analysis of precision and accuracy in a simple model of machine learning
NASA Astrophysics Data System (ADS)
Lee, Julian
2017-12-01
Machine learning is a procedure where a model for the world is constructed from a training set of examples. It is important that the model should capture relevant features of the training set, and at the same time make correct predictions for examples not included in the training set. I consider polynomial regression, the simplest method of learning, and analyze the accuracy and precision for different levels of model complexity.
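Polynomial regression itself is straightforward to implement. The pure-Python sketch below fits by the normal equations with Gaussian elimination; it is an illustrative implementation, not the paper's analysis code.

```python
# Illustrative polynomial regression: least-squares fit via the normal
# equations, solved with Gaussian elimination (no third-party libraries).

def polyfit(xs, ys, degree):
    n = degree + 1
    # Normal equations: A @ coeffs = b, with A[i][j] = sum(x**(i+j)).
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum((x ** i) * y for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for c in range(col, n):
                A[row][c] -= f * A[col][c]
            b[row] -= f * b[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs  # coeffs[k] multiplies x**k

def predict(coeffs, x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

# Recover a known quadratic from noise-free samples.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0 + 2.0 * x + 3.0 * x ** 2 for x in xs]
coeffs = polyfit(xs, ys, degree=2)
```

Comparing training-set error against error on held-out points, as the degree grows past the complexity of the underlying function, is exactly the accuracy/precision trade-off the abstract analyzes.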
Using Simplistic Shape/Surface Models to Predict Brightness in Estimation Filters
NASA Astrophysics Data System (ADS)
Wetterer, C.; Sheppard, D.; Hunt, B.
The prerequisite for using brightness (radiometric flux intensity) measurements in an estimation filter is a measurement function that accurately predicts a space object's brightness for variations in the parameters of interest. These parameters include changes in attitude and articulations of particular components (e.g. solar panel east-west offsets from direct sun-tracking). Typically, shape models and bidirectional reflectance distribution functions are combined to provide this forward light-curve modeling capability. To achieve precise orbit predictions with the inclusion of shape/surface-dependent forces such as radiation pressure, relatively complex and sophisticated modeling is required. Unfortunately, increasing the complexity of the models makes it difficult to estimate all the parameters simultaneously, because changes in light-curve features can then be explained by variations in a number of different properties. The classic example of this is the connection between the albedo and the area of a surface. If, however, the desire is to extract information about a single, specific parameter or feature from the light curve, a simple shape/surface model can be used. This paper details an example of this in which a complex model is used to create simulated light curves, and a simple model is then used in an estimation filter to extract a particular feature of interest. For this to be successful, however, the simple model must first be constructed using training data where the feature of interest is known, or at least known to be constant.
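The albedo-area ambiguity mentioned above can be made concrete with a single diffuse (Lambertian) facet. The toy brightness function below is illustrative only, not the paper's shape/BRDF model: albedo and area enter only through their product, so brightness alone cannot separate them.

```python
import math

def facet_brightness(albedo, area, sun_angle, obs_angle, distance):
    """Diffuse (Lambertian) flat-facet brightness toy model.

    Albedo and area enter only as a product, so brightness alone
    cannot distinguish a bright small facet from a dark large one.
    """
    if sun_angle >= math.pi / 2 or obs_angle >= math.pi / 2:
        return 0.0  # facet is unlit or faces away from the observer
    return (albedo * area * math.cos(sun_angle) * math.cos(obs_angle)
            / (math.pi * distance ** 2))
```

Doubling the albedo while halving the area leaves the predicted brightness unchanged, which is why a filter fed only brightness cannot estimate both at once.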
A Mind of Three Minds: Evolution of the Human Brain
ERIC Educational Resources Information Center
MacLean, Paul D.
1978-01-01
The author examines the evolutionary and neural roots of a triune intelligence comprised of a primal mind, an emotional mind, and a rational mind. A simple brain model and some definitions of unfamiliar behavioral terms are included. (Author/MA)
Connecting fishery sustainability to estuarine habitats and nutrient loading
The production of several important fishery species depends on critical estuarine habitats, including seagrasses and salt marshes. Relatively simple models can be constructed to relate fishery productivity to the extent and distribution of these habitats by linking fishery-depend...
Brachypodium distachyon genetic resources
USDA-ARS?s Scientific Manuscript database
Brachypodium distachyon is a well-established model species for the grass family Poaceae. It possesses an array of features that make it suited for this purpose, including a small sequenced genome, simple transformation methods, and additional functional genomics tools. However, the most critical to...
NASA Technical Reports Server (NTRS)
Flowers, George T.
1994-01-01
Progress over the past year includes the following: A simplified rotor model with a flexible shaft and backup bearings has been developed. A simple rotor model which includes a flexible disk and bearings with clearance has been developed and the dynamics of the model investigated. A rotor model based upon the T-501 engine has been developed which includes backup bearing effects. Parallel simulation runs are being conducted using an ANSYS based finite element model of the T-501. The magnetic bearing test rig is currently floating and dynamics/control tests are being conducted. A paper has been written that documents the work using the T-501 engine model. Work has continued with the simplified model. The finite element model is currently being modified to include the effects of foundation dynamics. A literature search for material on foil bearings has been conducted. A finite element model is being developed for a magnetic bearing in series with a foil backup bearing.
Hewitt, Angela L; Popa, Laurentiu S; Pasalar, Siavash; Hendrix, Claudia M; Ebner, Timothy J
2011-11-01
Encoding of movement kinematics in Purkinje cell simple spike discharge has important implications for hypotheses of cerebellar cortical function. Several outstanding questions remain regarding representation of these kinematic signals. It is uncertain whether kinematic encoding occurs in unpredictable, feedback-dependent tasks or whether kinematic signals are conserved across tasks. Additionally, there is a need to understand the signals encoded in the instantaneous discharge of single cells without averaging across trials or time. To address these questions, this study recorded Purkinje cell firing in monkeys trained to perform a manual random tracking task in addition to circular tracking and center-out reach. Random tracking provides for extensive coverage of kinematic workspaces. Direction and speed errors are significantly greater during random than circular tracking. Cross-correlation analyses comparing hand and target velocity profiles show that hand velocity lags target velocity during random tracking. Correlations between simple spike firing from 120 Purkinje cells and hand position, velocity, and speed were evaluated with linear regression models including a time constant, τ, as a measure of the firing lead/lag relative to the kinematic parameters. Across the population, velocity accounts for the majority of simple spike firing variability (63 ± 30% of R²_adj), followed by position (28 ± 24% of R²_adj) and speed (11 ± 19% of R²_adj). Simple spike firing often leads hand kinematics. Comparison of regression models based on averaged vs. nonaveraged firing and kinematics reveals lower R²_adj values for nonaveraged data; however, regression coefficients and τ values are highly similar. Finally, for most cells, model coefficients generated from random tracking accurately estimate simple spike firing in either circular tracking or center-out reach. These findings imply that the cerebellum controls movement kinematics, consistent with a forward internal model that predicts upcoming limb kinematics.
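The lead/lag estimates in this abstract (hand velocity lagging target velocity, firing leading kinematics) rest on shifting one signal against another and scoring the match. The cross-correlation sketch below is a generic illustration of that idea, not the authors' regression model.

```python
import math

def _corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    den = (sum((ai - ma) ** 2 for ai in a)
           * sum((bi - mb) ** 2 for bi in b)) ** 0.5
    return num / den

def best_lag(x, y, max_lag):
    """Shift (in samples) maximizing correlation; negative means x leads y."""
    best, best_c = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:len(x) + lag], y[-lag:]
        c = _corr(a, b)
        if c > best_c:
            best, best_c = lag, c
    return best

# Example: y is x delayed by three samples, so x leads y and the best lag is -3.
x = [math.sin(0.1 * i) for i in range(200)]
y = [0.0, 0.0, 0.0] + x[:-3]
lag = best_lag(x, y, max_lag=10)
```

In the study's terms, the sign of the recovered shift plays the role of the time constant τ: it says which signal is predictive of the other.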
A case-mix classification system for explaining healthcare costs using administrative data in Italy.
Corti, Maria Chiara; Avossa, Francesco; Schievano, Elena; Gallina, Pietro; Ferroni, Eliana; Alba, Natalia; Dotto, Matilde; Basso, Cristina; Netti, Silvia Tiozzo; Fedeli, Ugo; Mantoan, Domenico
2018-03-04
The Italian National Health Service (NHS) provides universal coverage to all citizens, granting primary and hospital care with a copayment system for outpatient and drug services. Financing of Local Health Trusts (LHTs) is based on a capitation system adjusted only for age, gender and area of residence. We applied a risk-adjustment system (Johns Hopkins Adjusted Clinical Groups System, ACG® System) in order to explain health care costs using routinely collected administrative data in the Veneto Region (North-eastern Italy). All residents in the Veneto Region were included in the study. The ACG system was applied to classify the regional population based on the following information sources for the year 2015: Hospital Discharges, Emergency Room visits, Chronic disease registry for copayment exemptions, ambulatory visits, medications, the Home care database, and drug prescriptions. Simple linear regressions were used to contrast an age-gender model to models incorporating more comprehensive risk measures aimed at predicting health care costs. A simple age-gender model explained only 8% of the variance of 2015 total costs. Adding diagnoses-related variables provided a 23% increase, while pharmacy based variables provided an additional 17% increase in explained variance. The adjusted R-squared of the comprehensive model was 6 times that of the simple age-gender model. ACG System provides substantial improvement in predicting health care costs when compared to simple age-gender adjustments. Aging itself is not the main determinant of the increase of health care costs, which is better explained by the accumulation of chronic conditions and the resulting multimorbidity. Copyright © 2018. Published by Elsevier B.V.
Food-Web Models Predict Species Abundances in Response to Habitat Change
Gotelli, Nicholas J; Ellison, Aaron M
2006-01-01
Plant and animal population sizes inevitably change following habitat loss, but the mechanisms underlying these changes are poorly understood. We experimentally altered habitat volume and eliminated top trophic levels of the food web of invertebrates that inhabit rain-filled leaves of the carnivorous pitcher plant Sarracenia purpurea. Path models that incorporated food-web structure better predicted population sizes of food-web constituents than did simple keystone species models, models that included only autecological responses to habitat volume, or models including both food-web structure and habitat volume. These results provide the first experimental confirmation that trophic structure can determine species abundances in the face of habitat loss. PMID:17002518
Psychometric Properties of an Abbreviated Instrument of the Five-Factor Model
ERIC Educational Resources Information Center
Mullins-Sweatt, Stephanie N.; Jamerson, Janetta E.; Samuel, Douglas B.; Olson, David R.; Widiger, Thomas A.
2006-01-01
Brief measures of the five-factor model (FFM) have been developed but none include an assessment of facets within each domain. The purpose of this study was to examine the validity of a simple, one-page, facet-level description of the FFM. Five data collections were completed to assess the reliability and the convergent and discriminant validity…
NASA Astrophysics Data System (ADS)
Gwiazda, A.; Banas, W.; Sekala, A.; Foit, K.; Hryniewicz, P.; Kost, G.
2015-11-01
The process of workcell design is constrained by various construction requirements. These relate to the technological parameters of the manufactured element, to the specifications of purchased workcell components, and to the technical characteristics of the workcell scene. This shows the complexity of the design-construction process itself. The result of such an approach is an individually designed workcell suited to a specific location and a specific production cycle; changing these parameters requires rebuilding the whole workcell configuration. Taking this into consideration, it is important to elaborate a base of typical elements of a robot kinematic chain that can be used as a tool for building such models. Virtual modelling of the kinematic chains of industrial robots requires several preparatory phases. First, it is important to create an element database containing models of industrial robot arms. These models can be described as functional primitives that represent the components of kinematic pairs and the structural members of industrial robots. A database with the following elements is created: the base of kinematic pairs, the base of robot structural elements, and the base of robot work scenes. The first of these databases includes kinematic pairs, the key components of the manipulator actuator modules; as mentioned previously, it includes above all the rotary pair of the fifth class. This type of kinematic pair was chosen because it occurs most frequently in the structures of industrial robots. The second base consists of robot structural elements and therefore allows the conversion of schematic kinematic-chain structures into the structural elements of industrial robot arms. It contains, inter alia, structural elements such as the base and stiff members (simple or angular units), which allow converting the recorded schematics into three-dimensional elements. The last database is the database of scenes.
It includes both simple and complex elements: simple models of technological equipment, conveyor models, models of obstacles, and the like. Using these elements, various production spaces (robotized workcells) can be formed, in which it is possible to virtually track the operation of an industrial robot arm modelled in the system.
NASA Astrophysics Data System (ADS)
Gerhard, J.; Zanoni, M. A. B.; Torero, J. L.
2017-12-01
Smouldering (i.e., flameless combustion) underpins the technology Self-sustaining Treatment for Active Remediation (STAR). STAR achieves the in situ destruction of nonaqueous phase liquids (NAPLs) by generating a self-sustained smouldering reaction that propagates through the source zone. This research explores the nature of the travelling reaction and the influence of key in situ and engineered characteristics. A novel one-dimensional numerical model was developed (in COMSOL) to simulate the smouldering remediation of bitumen-contaminated sand. This model was validated against laboratory column experiments. Achieving model validation depended on correctly simulating the energy balance at the reaction front, including properly accounting for heat transfer, smouldering kinetics, and heat losses. Heat transfer between soil and air was demonstrated to be generally not at equilibrium. Moreover, existing heat transfer correlations were found to be inappropriate for the low air flow Reynolds numbers (Re < 30) relevant in this and similar thermal remediation systems. Therefore, a suite of experiments was conducted to generate a new heat transfer correlation, which produced correct simulations of convective heat flow through soil. Moreover, it was found that, for most cases of interest, a simple two-step pyrolysis/oxidation set of kinetic reactions was sufficient. Arrhenius parameters, calculated independently from thermogravimetric experiments, allowed the reaction kinetics to be validated in the smouldering model. Furthermore, a simple heat loss term sufficiently accounted for radial heat losses from the column. Altogether, these advances allow this simple model to reasonably predict the self-sustaining process, including the peak reaction temperature, the reaction velocity, and the complete destruction of bitumen behind the front.
Simulations with the validated model revealed numerous unique insights, including how the system inherently recycles energy, how air flow rate and NAPL saturation dictate contaminant destruction rates, and the extremes that lead to extinction. Overall, this research provides unique insights into the complex interplay of thermochemical processes that govern the success of smouldering as well as other thermal remediation approaches.
NASA Astrophysics Data System (ADS)
Bakker, Alexander; Louchard, Domitille; Keller, Klaus
2016-04-01
Sea-level rise threatens many coastal areas around the world. The integrated assessment of potential adaptation and mitigation strategies requires a sound understanding of the upper tails and the major drivers of the uncertainties. Global warming causes sea level to rise, primarily due to thermal expansion of the oceans and mass loss of the major ice sheets, smaller ice caps and glaciers. These components show distinctly different responses to temperature changes with respect to response time, threshold behavior, and local fingerprints. Projections of these different components are deeply uncertain. Projected uncertainty ranges strongly depend on (necessary) pragmatic choices and assumptions; e.g. on the applied climate scenarios, which processes to include and how to parameterize them, and on the error structure of the observations. Competing assumptions are very hard to weigh objectively. Hence, uncertainties of sea-level response are hard to capture in a single distribution function. The deep uncertainty can be better understood by making the key assumptions explicit. Here we demonstrate this approach using a relatively simple model framework. We present a mechanistically motivated, but simple model framework that is intended to efficiently explore the deeply uncertain sea-level response to anthropogenic climate change. The model consists of 'building blocks' that represent the major components of sea-level response and its uncertainties, including threshold behavior. The framework's simplicity enables the simulation of large ensembles, allowing for an efficient exploration of parameter uncertainty and for the simulation of multiple combined adaptation and mitigation strategies. The model framework can skilfully reproduce earlier major sea level assessments, and, thanks to its modular setup, it can also easily be used to explore high-end scenarios and the effects of competing assumptions and parameterizations.
An overview of longitudinal data analysis methods for neurological research.
Locascio, Joseph J; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
Aragón, Alfredo S; Kalberg, Wendy O; Buckley, David; Barela-Scott, Lindsey M; Tabachnick, Barbara G; May, Philip A
2008-12-01
Although a large body of literature exists on cognitive functioning in alcohol-exposed children, it is unclear if there is a signature neuropsychological profile in children with Fetal Alcohol Spectrum Disorders (FASD). This study assesses cognitive functioning in children with FASD from several American Indian reservations in the Northern Plains States, and it applies a hierarchical model of simple versus complex information processing to further examine cognitive function. We hypothesized that complex tests would discriminate between children with FASD and culturally similar controls, while children with FASD would perform similarly to controls on relatively simple tests. Our sample includes 32 control children and 24 children with a form of FASD [fetal alcohol syndrome (FAS) = 10, partial fetal alcohol syndrome (PFAS) = 14]. The test battery measures general cognitive ability, verbal fluency, executive functioning, memory, and fine-motor skills. Many of the neuropsychological tests produced results consistent with a hierarchical model of simple versus complex processing. The complexity of the tests was determined a priori based on the number of cognitive processes involved in them. Multidimensional scaling was used to statistically analyze the accuracy of classifying the neurocognitive tests into a simple versus complex dichotomy. Hierarchical logistic regression models were then used to define the contribution made by complex versus simple tests in predicting the significant differences between children with FASD and controls. Complex test items discriminated better than simple test items. The tests that conformed well to the model were the Verbal Fluency, Progressive Planning Test (PPT), the Lhermitte memory tasks, and the Grooved Pegboard Test (GPT). The FASD-grouped children, when compared with controls, demonstrated impaired performance on letter fluency, while their performance was similar on category fluency.
On the more complex PPT trials (problems 5 to 8), as well as the Lhermitte logical tasks, the FASD group performed the worst. The differential performance between children with FASD and controls was evident across various neuropsychological measures. The children with FASD performed significantly more poorly on the complex tasks than did the controls. The identification of a neurobehavioral profile in children with prenatal alcohol exposure will help clinicians identify and diagnose children with FASD.
Calibration of a simple and a complex model of global marine biogeochemistry
NASA Astrophysics Data System (ADS)
Kriest, Iris
2017-11-01
The assessment of the ocean biota's role in climate change is often carried out with global biogeochemical ocean models that contain many components and involve a high level of parametric uncertainty. Because many data that relate to tracers included in a model are only sparsely observed, assessment of model skill is often restricted to tracers that can be easily measured and assembled. Examination of the models' fit to climatologies of inorganic tracers, after the models have been spun up to steady state, is a common but computationally expensive procedure to assess model performance and reliability. Using new tools that have become available for global model assessment and calibration in steady state, this paper examines two different model types - a complex seven-component model (MOPS) and a very simple four-component model (RetroMOPS) - for their fit to dissolved quantities. Before comparing the models, a subset of their biogeochemical parameters has been optimised against annual-mean nutrients and oxygen. Both model types fit the observations almost equally well. The simple model contains only two nutrients: phosphate and dissolved organic phosphorus (DOP). Its misfit and large-scale tracer distributions are sensitive to the parameterisation of DOP production and decay. The spatio-temporal decoupling of nitrogen and oxygen, and processes involved in their uptake and release, renders oxygen and nitrate valuable tracers for model calibration. In addition, the non-conservative nature of these tracers (with respect to their upper boundary condition) introduces the global bias (fixed nitrogen and oxygen inventory) as a useful additional constraint on model parameters. Dissolved organic phosphorus at the surface behaves antagonistically to phosphate, and suggests that observations of this tracer - although difficult to measure - may be an important asset for model calibration.
Simple versus complex models of trait evolution and stasis as a response to environmental change
NASA Astrophysics Data System (ADS)
Hunt, Gene; Hopkins, Melanie J.; Lidgard, Scott
2015-04-01
Previous analyses of evolutionary patterns, or modes, in fossil lineages have focused overwhelmingly on three simple models: stasis, random walks, and directional evolution. Here we use likelihood methods to fit an expanded set of evolutionary models to a large compilation of ancestor-descendant series of populations from the fossil record. In addition to the standard three models, we assess more complex models with punctuations and shifts from one evolutionary mode to another. As in previous studies, we find that stasis is common in the fossil record, as is a strict version of stasis that entails no real evolutionary changes. Incidence of directional evolution is relatively low (13%), but higher than in previous studies because our analytical approach can more sensitively detect noisy trends. Complex evolutionary models are often favored, overwhelmingly so for sequences comprising many samples. This finding is consistent with evolutionary dynamics that are, in reality, more complex than any of the models we consider. Furthermore, the timing of shifts in evolutionary dynamics varies among traits measured from the same series. Finally, we use our empirical collection of evolutionary sequences and a long and highly resolved proxy for global climate to inform simulations in which traits adaptively track temperature changes over time. When realistically calibrated, we find that this simple model can reproduce important aspects of our paleontological results. We conclude that observed paleontological patterns, including the prevalence of stasis, need not be inconsistent with adaptive evolution, even in the face of unstable physical environments.
Equivalent circuit models for interpreting impedance perturbation spectroscopy data
NASA Astrophysics Data System (ADS)
Smith, R. Lowell
2004-07-01
As in-situ structural integrity monitoring disciplines mature, there is a growing need to process sensor/actuator data efficiently in real time. Although smaller, faster embedded processors will contribute to this, it is also important to develop straightforward, robust methods to reduce the overall computational burden for practical applications of interest. This paper addresses the use of equivalent circuit modeling techniques for inferring structure attributes monitored using impedance perturbation spectroscopy. In pioneering work about ten years ago significant progress was associated with the development of simple impedance models derived from the piezoelectric equations. Using mathematical modeling tools currently available from research in ultrasonics and impedance spectroscopy is expected to provide additional synergistic benefits. For purposes of structural health monitoring the objective is to use impedance spectroscopy data to infer the physical condition of structures to which small piezoelectric actuators are bonded. Features of interest include stiffness changes, mass loading, and damping or mechanical losses. Equivalent circuit models are typically simple enough to facilitate the development of practical analytical models of the actuator-structure interaction. This type of parametric structure model allows raw impedance/admittance data to be interpreted optimally using standard multiple, nonlinear regression analysis. One potential long-term outcome is the possibility of cataloging measured viscoelastic properties of the mechanical subsystems of interest as simple lists of attributes and their statistical uncertainties, whose evolution can be followed in time. Equivalent circuit models are well suited for addressing calibration and self-consistency issues such as temperature corrections, Poisson mode coupling, and distributed relaxation processes.
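A common starting point for such equivalent circuit models is the Butterworth-Van Dyke circuit: a series R-L-C "motional" branch representing the mechanical resonance, in parallel with the actuator's static capacitance. The sketch below is an illustration of that standard circuit, not the specific model of the paper, and all component values are invented:

```python
import math

def bvd_impedance(freq, r1, l1, c1, c0):
    """Impedance of a Butterworth-Van Dyke equivalent circuit: a series
    R1-L1-C1 motional branch in parallel with static capacitance C0.
    A standard simple model for a bonded piezoelectric resonator."""
    w = 2.0 * math.pi * freq
    z_motional = r1 + 1j * w * l1 + 1.0 / (1j * w * c1)   # series branch
    z_static = 1.0 / (1j * w * c0)                        # static capacitance
    return 1.0 / (1.0 / z_motional + 1.0 / z_static)      # parallel combination

# At the series resonance f_s = 1/(2*pi*sqrt(L1*C1)) the motional
# reactances cancel and the impedance magnitude dips toward R1.
f_s = 1.0 / (2.0 * math.pi * math.sqrt(0.01 * 1e-9))
z_res = bvd_impedance(f_s, 50.0, 0.01, 1e-9, 5e-9)
```

Fitting R1, L1, C1, and C0 to measured spectra by nonlinear regression, as the abstract describes, then tracks stiffness, mass-loading, and damping changes through shifts in these few parameters.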
Simple animal models for amyotrophic lateral sclerosis drug discovery.
Patten, Shunmoogum A; Parker, J Alex; Wen, Xiao-Yan; Drapeau, Pierre
2016-08-01
Simple animal models have enabled great progress in uncovering the disease mechanisms of amyotrophic lateral sclerosis (ALS) and are helping in the selection of therapeutic compounds through chemical genetic approaches. Within this article, the authors provide a concise overview of simple model organisms, C. elegans, Drosophila and zebrafish, which have been employed to study ALS and discuss their value to ALS drug discovery. In particular, the authors focus on innovative chemical screens that have established simple organisms as important models for ALS drug discovery. There are several advantages of using simple animal model organisms to accelerate drug discovery for ALS. It is the authors' particular belief that the amenability of simple animal models to various genetic manipulations, the availability of a wide range of transgenic strains for labelling motoneurons and other cell types, combined with live imaging and chemical screens should allow for new detailed studies elucidating early pathological processes in ALS and subsequent drug and target discovery.
Gottingen Wind Tunnel for Testing Aircraft Models
NASA Technical Reports Server (NTRS)
Prandtl, L
1920-01-01
Given here is a brief description of the Gottingen Wind Tunnel for the testing of aircraft models, preceded by a history of its development. Included are a number of diagrams illustrating, among other things, a sectional elevation of the wind tunnel, the pressure regulator, the entrance cone and method of supporting a model for simple drag tests, a three-component balance, and a propeller testing device, all of which are discussed in the text.
Improved model for the angular dependence of excimer laser ablation rates in polymer materials
NASA Astrophysics Data System (ADS)
Pedder, J. E. A.; Holmes, A. S.; Dyer, P. E.
2009-10-01
Measurements of the angle-dependent ablation rates of polymers that have applications in microdevice fabrication are reported. A simple model based on Beer's law, including plume absorption, is shown to give good agreement with the experimental findings for polycarbonate and SU8, ablated using the 193 and 248 nm excimer lasers, respectively. The modeling forms a useful tool for designing masks needed to fabricate complex surface relief by ablation.
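Beer's-law ablation models of this kind are conventionally logarithmic in fluence above a threshold, with the delivered fluence reduced at oblique incidence. The sketch below uses that conventional form with hypothetical parameter values and a crude constant plume-absorption factor; it is not the paper's calibrated model:

```python
import math

def etch_depth(fluence, angle_deg, alpha=1.0e5, f_th=0.04, mu_plume=0.0):
    """Etch depth per pulse (cm) from a Beer's-law ablation model.

    fluence  : incident laser fluence (J/cm^2)
    angle_deg: angle of incidence from the surface normal
    alpha    : effective absorption coefficient of the polymer (1/cm)
    f_th     : threshold fluence (J/cm^2)
    mu_plume : simple plume-absorption exponent reducing delivered fluence
    All parameter values here are illustrative, not measured.
    """
    # Fluence delivered to the surface drops with cos(theta) ...
    f_eff = fluence * math.cos(math.radians(angle_deg))
    # ... and is further attenuated by absorption in the ablation plume.
    f_eff *= math.exp(-mu_plume)
    if f_eff <= f_th:
        return 0.0
    # Beer's-law logarithmic etch relation above threshold.
    return math.log(f_eff / f_th) / alpha

# The etch rate falls off with angle and vanishes below threshold,
# which is the behaviour exploited when designing ablation masks.
```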
Evidence of complex contagion of information in social media: An experiment using Twitter bots.
Mønsted, Bjarke; Sapieżyński, Piotr; Ferrara, Emilio; Lehmann, Sune
2017-01-01
It has recently become possible to study the dynamics of information diffusion in techno-social systems at scale, due to the emergence of online platforms, such as Twitter, with millions of users. One question that systematically recurs is whether information spreads according to simple or complex dynamics: does each exposure to a piece of information have an independent probability of a user adopting it (simple contagion), or does this probability depend instead on the number of sources of exposure, increasing above some threshold (complex contagion)? Most studies to date are observational and, therefore, unable to disentangle the effects of confounding factors such as social reinforcement, homophily, limited attention, or network community structure. Here we describe a novel controlled experiment that we performed on Twitter using 'social bots' deployed to carry out coordinated attempts at spreading information. We propose two Bayesian statistical models describing simple and complex contagion dynamics, and test the competing hypotheses. We provide experimental evidence that the complex contagion model describes the observed information diffusion behavior more accurately than simple contagion. Future applications of our results include more effective defenses against malicious propaganda campaigns on social media, improved marketing and advertisement strategies, and design of effective network intervention techniques.
Thermal Indices and Thermophysiological Modeling for Heat Stress.
Havenith, George; Fiala, Dusan
2015-12-15
The assessment of the risk of human exposure to heat is a topic as relevant today as a century ago. The introduction and use of heat stress indices and models to predict and quantify heat stress and heat strain has helped to reduce morbidity and mortality in industrial, military, sports, and leisure activities dramatically. Models used range from simple instruments that attempt to mimic the human-environment heat exchange to complex thermophysiological models that simulate both internal and external heat and mass transfer, including related processes through (protective) clothing. This article discusses the most commonly used indices and models and looks at how these are deployed in the different contexts of industrial, military, and biometeorological applications, with focus on use to predict related thermal sensations, acute risk of heat illness, and epidemiological analysis of morbidity and mortality. A critical assessment is made of tendencies to use simple indices such as WBGT in more complex conditions (e.g., while wearing protective clothing), or when employed in conjunction with inappropriate sensors. Regarding the more complex thermophysiological models, the article discusses more recent developments including model individualization approaches and advanced systems that combine simulation models with (body worn) sensors to provide real-time risk assessment. The models discussed in the article range from historical indices to recent developments in using thermophysiological models in (bio) meteorological applications as an indicator of the combined effect of outdoor weather settings on humans. Copyright © 2015 John Wiley & Sons, Inc.
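As an example of the simple indices discussed, WBGT combines natural wet-bulb, globe, and dry-bulb temperatures with fixed weights. The sketch below uses the standard ISO 7243 forms:

```python
def wbgt(t_nwb, t_g, t_a=None):
    """Wet Bulb Globe Temperature (deg C), standard ISO 7243 forms.

    t_nwb: natural wet-bulb temperature
    t_g  : black-globe temperature
    t_a  : dry-bulb air temperature; if given, the outdoor (solar load)
           formula is used, otherwise the indoor formula.
    """
    if t_a is None:
        return 0.7 * t_nwb + 0.3 * t_g           # indoor / no solar load
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_a   # outdoor with solar load
```

The heavy weight on the wet-bulb term reflects the dominance of evaporative cooling, and is one reason the index can mislead when protective clothing suppresses evaporation, as the article discusses.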
A simple vibrating sample magnetometer for macroscopic samples
NASA Astrophysics Data System (ADS)
Lopez-Dominguez, V.; Quesada, A.; Guzmán-Mínguez, J. C.; Moreno, L.; Lere, M.; Spottorno, J.; Giacomone, F.; Fernández, J. F.; Hernando, A.; García, M. A.
2018-03-01
We here present a simple model of a vibrating sample magnetometer (VSM). The system allows recording magnetization curves at room temperature with a resolution of the order of 0.01 emu and is appropriate for macroscopic samples. The setup can be mounted in different configurations depending on the requirements of the sample to be measured (mass, saturation magnetization, saturation field, etc.). We also include examples of curves obtained with our setup, together with comparison curves measured with a standard commercial VSM, which confirm the reliability of our device.
Generalized gauge U(1) family symmetry for quarks and leptons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kownacki, Corey; Ma, Ernest; Pollard, Nicholas
2017-01-11
If the standard model of quarks and leptons is extended to include three singlet right-handed neutrinos, then the resulting fermion structure admits an infinite number of anomaly-free solutions with just one simple constraint. Well-known examples satisfying this constraint are B−L, Lμ−Lτ, B−3Lτ, etc. Here, we derive this simple constraint, and discuss two new examples which offer some insights to the structure of mixing among quark and lepton families, together with their possible verification at the Large Hadron Collider.
Cloud fluid models of gas dynamics and star formation in galaxies
NASA Technical Reports Server (NTRS)
Struck-Marcell, Curtis; Scalo, John M.; Appleton, P. N.
1987-01-01
The large dynamic range of star formation in galaxies, and the apparently complex environmental influences involved in triggering or suppressing star formation, challenge our understanding. The key to this understanding may be the detailed study of simple physical models for the dominant nonlinear interactions in interstellar cloud systems. One such model is described, a generalized Oort model cloud fluid, and two simple applications of it are explored. The first of these is the relaxation of an isolated volume of cloud fluid following a disturbance. Though very idealized, this closed-box study suggests a physical mechanism for starbursts, which is based on the approximate commensurability of massive cloud lifetimes and cloud collisional growth times. The second application is to the modeling of colliding ring galaxies. In this case, the driving processes operating on a dynamical timescale interact with the local cloud processes operating on the timescale above. The result is a variety of interesting nonequilibrium behaviors, including spatial variations of star formation that do not depend monotonically on gas density.
Regression-based model of skin diffuse reflectance for skin color analysis
NASA Astrophysics Data System (ADS)
Tsumura, Norimichi; Kawazoe, Daisuke; Nakaguchi, Toshiya; Ojima, Nobutoshi; Miyake, Yoichi
2008-11-01
A simple regression-based model of skin diffuse reflectance is developed based on reflectance samples calculated by Monte Carlo simulation of light transport in a two-layered skin model. This reflectance model includes the values of spectral reflectance in the visible spectra for Japanese women. The modified Lambert Beer law holds in the proposed model with a modified mean free path length in non-linear density space. The averaged RMS and maximum errors of the proposed model were 1.1 and 3.1%, respectively, in the above range.
Electrical Lumped Model Examination for Load Variation of Circulation System
NASA Astrophysics Data System (ADS)
Koya, Yoshiharu; Ito, Mitsuyo; Mizoshiri, Isao
Modeling and analysis of the circulation system make it possible to determine the characteristics of the body's circulatory system, and many circulation models have accordingly been proposed. Most, however, are complicated because they include a large number of elements. We therefore previously proposed a complete circulation model in the form of a lumped electrical circuit, which is comparatively simple. In this paper, we examine the effectiveness of this lumped electrical circuit model, using normal, angina pectoris, dilated cardiomyopathy, and myocardial infarction cases to evaluate the ventricular contraction function.
Exact solutions for network rewiring models
NASA Astrophysics Data System (ADS)
Evans, T. S.
2007-03-01
Evolving networks with a constant number of edges may be modelled using a rewiring process. These models are used to describe many real-world processes including the evolution of cultural artifacts such as family names, the evolution of gene variations, and the popularity of strategies in simple econophysics models such as the minority game. The model is closely related to Urn models used for glasses, quantum gravity and wealth distributions. The full mean field equation for the degree distribution is found, and its exact solution and generating function are given.
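A rewiring process of this kind can be sketched in a few lines. The detachment/reattachment rules and parameters below are generic illustrations of the model class (fixed edge count, random versus preferential reattachment), not the paper's exact formulation:

```python
import random

def rewire(degree, steps, p_random=0.1, rng=None):
    """Simulate a simple edge-rewiring process on vertex degrees.

    degree: list where degree[v] counts edge-ends attached to vertex v;
            the total number of edges stays constant throughout.
    At each step one edge-end is detached from a vertex chosen in
    proportion to its degree, then reattached either uniformly at
    random (prob p_random) or preferentially (prob 1 - p_random).
    """
    rng = rng or random.Random(0)
    n = len(degree)
    for _ in range(steps):
        # Detach: pick a vertex with probability proportional to degree.
        v = rng.choices(range(n), weights=degree)[0]
        degree[v] -= 1
        # Reattach: uniformly random vs preferential choice of new vertex
        # (the +1 offset keeps empty vertices reachable).
        if rng.random() < p_random:
            w = rng.randrange(n)
        else:
            w = rng.choices(range(n), weights=[d + 1 for d in degree])[0]
        degree[w] += 1
    return degree

deg = rewire([5] * 20, 1000)
# The total number of edge-ends is conserved by construction.
```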
A model of interval timing by neural integration.
Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip
2011-06-22
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
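The core idea, a noisy firing-rate variable accumulating linearly to a threshold, can be sketched as a first-passage simulation. All parameter values below are invented for illustration:

```python
import random

def time_to_threshold(drift, noise, threshold=1.0, dt=0.001, rng=None):
    """One trial of a noisy neural integrator: a firing-rate variable
    climbs at 'drift' per second plus Gaussian noise until it crosses
    'threshold'; the crossing time is the produced interval."""
    rng = rng or random
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return t

rng = random.Random(1)
# Setting drift = threshold / T targets an interval of T seconds;
# here threshold/drift = 2.0 s.
trials = [time_to_threshold(0.5, 0.05, rng=rng) for _ in range(200)]
mean_t = sum(trials) / len(trials)
```

Rescaling the drift rescales the whole crossing-time distribution, which is the sense in which such integrator models produce scale-invariant timing.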
Implications of Biospheric Energization
NASA Astrophysics Data System (ADS)
Budding, Edd; Demircan, Osman; Gündüz, Güngör; Emin Özel, Mehmet
2016-07-01
Our physical model relating to the origin and development of lifelike processes from very simple beginnings is reviewed. This molecular ('ABC') process is compared with the chemoton model, noting the role of the autocatalytic tuning to the time-dependent source of energy. This substantiates a Darwinian character to evolution. The system evolves from very simple beginnings to a progressively more highly tuned, energized and complex responding biosphere, that grows exponentially; albeit with a very low net growth factor. Rates of growth and complexity in the evolution raise disturbing issues of inherent stability. Autocatalytic processes can include a fractal character to their development allowing recapitulative effects to be observed. This property, in allowing similarities of pattern to be recognized, can be useful in interpreting complex (lifelike) systems.
Pencil-and-Paper Neural Networks: An Undergraduate Laboratory Exercise in Computational Neuroscience
Crisp, Kevin M.; Sutter, Ellen N.; Westerberg, Jacob A.
2015-01-01
Although it has been more than 70 years since McCulloch and Pitts published their seminal work on artificial neural networks, such models remain primarily in the domain of computer science departments in undergraduate education. This is unfortunate, as simple network models offer undergraduate students a much-needed bridge between cellular neurobiology and processes governing thought and behavior. Here, we present a very simple laboratory exercise in which students constructed, trained and tested artificial neural networks by hand on paper. They explored a variety of concepts, including pattern recognition, pattern completion, noise elimination and stimulus ambiguity. Learning gains were evident in changes in the use of language when writing about information processing in the brain. PMID:26557791
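A network small enough to work through on paper can also be run in a few lines of code. The sketch below uses a Hebbian-trained threshold network to demonstrate the pattern-completion concept described; the specific network is an illustrative assumption, not the authors' worksheet:

```python
def train_hebbian(patterns):
    """Hebbian weight matrix for binary (+1/-1) patterns, zero diagonal."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, sweeps=3):
    """Synchronous threshold updates: each unit takes the sign of its
    summed weighted input, completing a noisy or partial pattern."""
    n = len(state)
    for _ in range(sweeps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

pattern = [1, -1, 1, -1, 1, -1]
w = train_hebbian([pattern])
noisy = [1, -1, 1, -1, 1, 1]   # last unit flipped
completed = recall(w, noisy)    # noise elimination recovers the pattern
```

Every multiply-and-threshold step here is small enough to be done by hand, which is exactly what makes such exercises a bridge between cellular neurobiology and information processing.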
Unimolecular decomposition reactions at low-pressure: A comparison of competitive methods
NASA Technical Reports Server (NTRS)
Adams, G. F.
1980-01-01
The lack of a simple rate coefficient expression to describe the pressure and temperature dependence hampers chemical modeling of flame systems. Recently developed simplified models to describe unimolecular processes include the calculation of rate constants for thermal unimolecular reactions and recombinations at the low pressure limit, at the high pressure limit and in the intermediate fall-off region. Comparison between two different applications of Troe's simplified model and a comparison between the simplified model and the classic RRKM theory are described.
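The fall-off behaviour these models describe interpolates between the low- and high-pressure limits. The classic Lindemann form, which Troe's simplified model refines with a pressure-dependent broadening factor, is a compact illustration (rate-constant values below are arbitrary):

```python
def lindemann_rate(k0, kinf, m):
    """Lindemann fall-off interpolation for a unimolecular reaction:
    k -> k0*[M] at the low-pressure limit, k -> kinf at the
    high-pressure limit. Troe's simplified model multiplies this by a
    broadening factor F <= 1 in the fall-off region; that refinement
    is omitted here."""
    pr = k0 * m / kinf            # reduced pressure
    return kinf * pr / (1.0 + pr)

k_low = lindemann_rate(1.0, 100.0, 1e-3)   # ~ k0*[M]
k_high = lindemann_rate(1.0, 100.0, 1e7)   # ~ kinf
```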
Sakhteman, Amirhossein; Zare, Bijan
2016-01-01
An interactive application, Modelface, is presented for the Modeller software on the Windows platform. The application is able to run all steps of homology modeling, including PDB-to-FASTA generation, running Clustal, model building, and loop refinement. Other modules of Modeller, including energy calculation, energy minimization, and the ability to make single-point mutations in PDB structures, are also implemented inside Modelface. The application is a simple batch-based tool with a minimal memory footprint and is free of charge for academic use. It is also able to repair missing atom types in PDB structures, making it suitable for many molecular modeling studies such as docking and molecular dynamics simulation. Some successful instances of modeling studies using Modelface are also reported. PMID:28243276
ECOLOGICAL THEORY. A general consumer-resource population model.
Lafferty, Kevin D; DeLeo, Giulio; Briggs, Cheryl J; Dobson, Andrew P; Gross, Thilo; Kuris, Armand M
2015-08-21
Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model. Copyright © 2015, American Association for the Advancement of Science.
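The saturating functional response mentioned in the abstract is conventionally written in Holling type II form; the brief illustration below uses that classic form with arbitrary parameters, not the paper's derived expression:

```python
def holling_type_ii(resource, attack_rate, handling_time):
    """Saturating (Holling type II) functional response: per-consumer
    consumption rate as a function of resource density. Consumption is
    limited by searching at low density and by handling at high density."""
    return (attack_rate * resource
            / (1.0 + attack_rate * handling_time * resource))

# The rate grows ~linearly when resources are scarce ...
low = holling_type_ii(0.01, 1.0, 0.5)
# ... and saturates at 1/handling_time when resources are abundant.
high = holling_type_ii(1e6, 1.0, 0.5)
```

Deriving such a response from explicit questing/attacking/consuming states, rather than assuming it, is one way the general-model framework makes underlying assumptions transparent.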
Sustainability Indicators for Coupled Human-Earth Systems
NASA Astrophysics Data System (ADS)
Motesharrei, S.; Rivas, J. R.; Kalnay, E.
2014-12-01
Over the last two centuries, the Human System went from having a small impact on the Earth System (including the Climate System) to becoming dominant, because both population and per capita consumption have grown extremely fast, especially since about 1950. We therefore argue that Human System Models must be included into Earth System Models through bidirectional couplings with feedbacks. In particular, population should be modeled endogenously, rather than exogenously as done currently in most Integrated Assessment Models. The growth of the Human System threatens to overwhelm the Carrying Capacity of the Earth System, and may be leading to catastrophic climate change and collapse. We propose a set of Ecological and Economic "Sustainability Indicators" that can employ large data-sets for developing and assessing effective mitigation and adaptation policies. Using the Human and Nature Dynamical Model (HANDY) and Coupled Human-Climate-Water Model (COWA), we carry out experiments with this set of Sustainability Indicators and show that they are applicable to various coupled systems including Population, Climate, Water, Energy, Agriculture, and Economy. Impact of nonrenewable resources and fossil fuels could also be understood using these indicators. We demonstrate interconnections of Ecological and Economic Indicators. Coupled systems often include feedbacks and can thus display counterintuitive dynamics. This makes it difficult for even experts to see coming catastrophes from just the raw data for different variables. Sustainability Indicators boil down the raw data into a set of simple numbers that cross their sustainability thresholds with a large time-lag before variables enter their catastrophic regimes. Therefore, we argue that Sustainability Indicators constitute a powerful but simple set of tools that could be directly used for making policies for sustainability.
Determination of the transmission coefficients for quantum structures using FDTD method.
Peng, Yangyang; Wang, Xiaoying; Sui, Wenquan
2011-12-01
The purpose of this work is to develop a simple method to incorporate quantum effects in traditional finite-difference time-domain (FDTD) simulators, which could make it possible to co-simulate systems that include both quantum structures and traditional components. In this paper, the tunneling transmission coefficient is calculated by solving the time-domain Schrödinger equation with a developed FDTD technique, called the FDTD-S method. To validate the feasibility of the method, a simple resonant tunneling diode (RTD) structure model has been simulated using the proposed method. The good agreement between the numerical and analytical results proves its accuracy. The effectiveness and accuracy of this approach make it a potential method for the analysis and design of hybrid systems that include quantum structures and traditional components.
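Marching the time-domain Schrödinger equation on an FDTD grid is commonly done by splitting the wavefunction into real and imaginary parts that update each other in leapfrog fashion. The sketch below shows that scheme for a free particle in natural units (hbar = m = 1); the grid, the wave packet, and the zero potential are illustrative assumptions, not the paper's RTD setup (a barrier could be placed in V to estimate a tunneling transmission coefficient):

```python
import math

def fdtd_schrodinger(n=400, steps=1500, dx=0.1, dt=0.002):
    """1-D time-domain Schrodinger solver, FDTD-S style: the wavefunction
    psi = re + i*im is advanced by alternately updating its real and
    imaginary parts from second differences of each other."""
    V = [0.0] * n                      # potential (zero: free propagation)
    # Initial Gaussian wave packet centred at x0 with momentum k0.
    x0, sigma, k0 = n * dx / 4, 2.0, 1.5
    re = [math.exp(-((i * dx - x0) ** 2) / (4 * sigma ** 2)) *
          math.cos(k0 * i * dx) for i in range(n)]
    im = [math.exp(-((i * dx - x0) ** 2) / (4 * sigma ** 2)) *
          math.sin(k0 * i * dx) for i in range(n)]
    c = dt / (2 * dx * dx)             # hbar*dt / (2*m*dx^2) in these units
    for _ in range(steps):
        # d(re)/dt = -(1/2) d2(im)/dx2 + V*im
        for i in range(1, n - 1):
            re[i] -= c * (im[i + 1] - 2 * im[i] + im[i - 1]) - dt * V[i] * im[i]
        # d(im)/dt = +(1/2) d2(re)/dx2 - V*re
        for i in range(1, n - 1):
            im[i] += c * (re[i + 1] - 2 * re[i] + re[i - 1]) - dt * V[i] * re[i]
    return re, im

re, im = fdtd_schrodinger()
norm = sum(r * r + s * s for r, s in zip(re, im))  # approximately conserved
```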
A model for the space shuttle main engine high pressure oxidizer turbopump shaft seal system
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.
1990-01-01
A simple static model is presented which solves for the flow properties of pressure, temperature, and mass flow in the Space Shuttle Main Engine High Pressure Oxidizer Turbopump shaft seal system. This system includes the primary and secondary turbine seals, the primary and secondary turbine drains, the helium purge seals and feed line, the primary oxygen drain, and the slinger/labyrinth oxygen seal pair. The model predicts the changes in flow variables that occur during and after failures of the various seals. Such information would be particularly useful in a post-flight situation, where processing of sensor information with this model could identify a particular seal that had experienced excessive wear. Most of the seals in the system are modeled using simple one-dimensional equations which can be applied to almost any seal provided that the fluid is gaseous. A failure is modeled as an increase in the clearance between the shaft and the seal; thus, the model does not attempt to predict how the failure process actually occurs (e.g., wear, seal crack initiation). The results presented were obtained using a FORTRAN implementation of the model running on a VAX computer. The solution for the seal system properties is obtained iteratively; however, a further simplified implementation (which does not include the slinger/labyrinth combination) was also developed which provides fast and reasonable results for most engine operating conditions. Results from the model compare favorably with the limited redline data available.
A neural computational model for animal's time-to-collision estimation.
Wang, Ling; Yao, Dezhong
2013-04-17
The time-to-collision (TTC) is the time that elapses before a looming object hits the subject. An accurate estimate of TTC plays a critical role in the survival of animals in nature and is an important factor in artificial intelligence systems that must judge and avoid potential dangers. The theoretic formula for TTC is 1/τ ≈ θ'/sin θ, where θ and θ' are the visual angle and its rate of change, respectively, and the widely used approximate computational model is θ'/θ. However, both of these measures are too complex to be implemented by a biological neuronal model. We propose a new, simpler computational model: 1/τ ≈ Mθ - P/(θ+Q) + N, where M, P, Q, and N are constants that depend on a predefined visual angle. This model, the weighted summation of visual angle model (WSVAM), can be implemented exactly by a widely accepted biological neuronal model. WSVAM has additional merits, including naturally minimal consumption and simplicity. Thus, it yields a precise, neuronally implementable estimate of TTC, provides a simple and convenient implementation for artificial vision, and represents a potential visual brain mechanism.
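The two classical TTC measures quoted in the abstract can be compared directly; for small visual angles the approximation θ'/θ tracks the theoretic θ'/sin θ very closely. The angle values below are illustrative, and the WSVAM constants M, P, Q, N are paper-specific, so only the first two measures are shown.

```python
import math

def ttc_exact(theta, dtheta):
    """Theoretic TTC from the abstract: 1/tau ~= theta' / sin(theta)."""
    return math.sin(theta) / dtheta

def ttc_approx(theta, dtheta):
    """Widely used approximation: 1/tau ~= theta' / theta."""
    return theta / dtheta

# A looming object subtending a small visual angle: both estimates agree closely.
theta, dtheta = 0.05, 0.01   # radians, radians/s (illustrative values)
print(ttc_exact(theta, dtheta))   # ~4.998 s
print(ttc_approx(theta, dtheta))  # 5.0 s
```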
Giovannini, Giannina; Sbarciog, Mihaela; Steyer, Jean-Philippe; Chamy, Rolando; Vande Wouwer, Alain
2018-05-01
Hydrogen has been found to be an important intermediate during anaerobic digestion (AD) and a key variable for process monitoring, as it gives valuable information about the stability of the reactor. However, simple dynamic models describing the evolution of hydrogen are not commonplace. In this work, such a dynamic model is derived using a systematic data-driven approach, which consists of a principal component analysis to deduce the dimension of the minimal reaction subspace explaining the data, followed by an identification of the kinetic parameters in the least-squares sense. The procedure requires the availability of informative data sets. When the available data do not fulfill this condition, the model can still be built from simulated data, obtained using a detailed model such as ADM1. This dynamic model could be exploited in monitoring and control applications after a re-identification of the parameters using actual process data. As an example, the model is used in the framework of a control strategy, and is also fitted to experimental data from raw industrial wine processing wastewater.
Energy Savings Analysis for Energy Monitoring and Control Systems
1995-01-01
…for evaluating design and construction quality, and for studying the effectiveness of air-tightening AC retrofits. No simple relationship… The Energy Resource Center (1983) includes information on air tightening in… These models of residential infiltration are based on statistical fits of…
Simple Statistics: - Summarized!
ERIC Educational Resources Information Center
Blai, Boris, Jr.
Statistics are an essential tool for making sound decisions. The field is concerned with probability distribution models, hypothesis testing, significance tests, and other means of determining the correctness of deductions and the most likely outcomes of decisions. Measures of central tendency include the mean, median, and mode. A second…
Classical electron mass and fields 2
NASA Technical Reports Server (NTRS)
Spaniol, Craig; Sutton, John F.
1991-01-01
Continued here is the development of a model of the electron (HYDRA), which includes rotational and magnetic terms. The atomic electron state is discussed and a comparison is made with a simple harmonic oscillator. Experimental data is reviewed that supports the possibility of a new lepton.
A Piagetian Learning Cycle for Introductory Chemical Kinetics.
ERIC Educational Resources Information Center
Batt, Russell H.
1980-01-01
Described is a Piagetian learning cycle based on Monte Carlo modeling of several simple reaction mechanisms. Included are descriptions of learning cycle phases (exploration, invention, and discovery) and four BASIC-PLUS computer programs to be used in the explanation of chemical reacting systems. (Author/DS)
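A Monte Carlo treatment of a simple reaction mechanism of the kind the learning cycle uses can be sketched in a few lines. The Python version below (the article's programs were in BASIC-PLUS) simulates the first-order reaction A → B particle by particle and compares the result with the analytic exponential decay; the rate constant and particle count are illustrative.

```python
import math
import random

def monte_carlo_decay(n_particles=10000, k=0.5, dt=0.01, t_max=10.0, seed=1):
    """Monte Carlo simulation of the first-order reaction A -> B.
    In each timestep, every remaining A particle reacts with probability k*dt."""
    random.seed(seed)
    n_a = n_particles
    t = 0.0
    history = [(t, n_a)]
    while t < t_max and n_a > 0:
        reacted = sum(1 for _ in range(n_a) if random.random() < k * dt)
        n_a -= reacted
        t += dt
        history.append((t, n_a))
    return history

hist = monte_carlo_decay()
# Compare with the analytic solution n(t) = n0 * exp(-k t) at t = 2.0
t_chk, n_chk = hist[200]
print(n_chk, 10000 * math.exp(-0.5 * t_chk))
```

The stochastic count fluctuates around the deterministic curve, which is precisely the exploration-phase observation the learning cycle builds on.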
Nonlinear transient analysis via energy minimization
NASA Technical Reports Server (NTRS)
Kamat, M. P.; Knight, N. F., Jr.
1978-01-01
The formulation basis for nonlinear transient analysis of finite element models of structures using energy minimization is provided. Geometric and material nonlinearities are included. The development is restricted to simple one and two dimensional finite elements which are regarded as being the basic elements for modeling full aircraft-like structures under crash conditions. The results indicate the effectiveness of the technique as a viable tool for this purpose.
Marcus V. Warwell; Gerald E. Rehfeldt; Nicholas L. Crookston
2006-01-01
The Random Forests multiple regression tree was used to develop an empirically-based bioclimate model for the distribution of Pinus albicaulis (whitebark pine) in western North America, latitudes 31° to 51° N and longitudes 102° to 125° W. Independent variables included 35 simple expressions of temperature and precipitation and their interactions....
ERIC Educational Resources Information Center
Ehrmann, Stephen C.; Milam, John H., Jr.
2003-01-01
This volume describes for educators how to create simple models of the full costs of educational innovations, including the costs for time devoted to the activity, space needed for the activity, etc. Examples come from educational uses of technology in higher education in the United States and China. Real case studies illustrate the method in use:…
Simple Model with Time-Varying Fine-Structure ``Constant''
NASA Astrophysics Data System (ADS)
Berman, M. S.
2009-10-01
Extending the original version written in collaboration with L.A. Trevisan, we study the generalisation of Dirac's LNH, so that time variation of the fine-structure constant, due to varying electric permittivity and magnetic permeability, is included along with other variations (cosmological and gravitational ``constants''), etc. We consider the present Universe, and also an inflationary scenario. Rotation of the Universe is also a possibility in this model.
Characteristics of pattern formation and evolution in approximations of Physarum transport networks.
Jones, Jeff
2010-01-01
Most studies of pattern formation place particular emphasis on its role in the development of complex multicellular body plans. In simpler organisms, however, pattern formation is intrinsic to growth and behavior. Inspired by one such organism, the true slime mold Physarum polycephalum, we present examples of complex emergent pattern formation and evolution formed by a population of simple particle-like agents. Using simple local behaviors based on chemotaxis, the mobile agent population spontaneously forms complex and dynamic transport networks. By adjusting simple model parameters, maps of characteristic patterning are obtained. Certain areas of the parameter mapping yield particularly complex long term behaviors, including the circular contraction of network lacunae and bifurcation of network paths to maintain network connectivity. We demonstrate the formation of irregular spots and labyrinthine and reticulated patterns by chemoattraction. Other Turing-like patterning schemes were obtained by using chemorepulsion behaviors, including the self-organization of regular periodic arrays of spots, and striped patterns. We show that complex pattern types can be produced without resorting to the hierarchical coupling of reaction-diffusion mechanisms. We also present network behaviors arising from simple pre-patterning cues, giving simple examples of how the emergent pattern formation processes evolve into networks with functional and quasi-physical properties including tensionlike effects, network minimization behavior, and repair to network damage. The results are interpreted in relation to classical theories of biological pattern formation in natural systems, and we suggest mechanisms by which emergent pattern formation processes may be used as a method for spatially represented unconventional computation.
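The particle-agent scheme described (sense chemoattractant with forward-facing sensors, rotate toward the strongest reading, move, deposit trail) can be captured in a minimal sketch. The grid size, sensor geometry, and deposition/decay constants below are assumed for illustration and are not the paper's calibrated values; trail diffusion is reduced here to simple decay.

```python
import math
import random

GRID = 64
trail = [[0.0] * GRID for _ in range(GRID)]

class Agent:
    """Particle-like agent with three forward sensors (a sketch of the
    Physarum-inspired scheme; all parameter values are assumptions)."""
    def __init__(self):
        self.x = random.uniform(0, GRID)
        self.y = random.uniform(0, GRID)
        self.heading = random.uniform(0, 2 * math.pi)

    def sense(self, angle_off, dist=4.0):
        a = self.heading + angle_off
        i = int(self.x + dist * math.cos(a)) % GRID
        j = int(self.y + dist * math.sin(a)) % GRID
        return trail[j][i]

    def step(self, turn=math.radians(45), speed=1.0, deposit=5.0):
        left, front, right = self.sense(-turn), self.sense(0.0), self.sense(turn)
        if front < left and front < right:      # weakest reading ahead: rotate randomly
            self.heading += random.choice((-turn, turn))
        elif left > right:                      # otherwise steer toward the stronger side
            self.heading -= turn
        elif right > left:
            self.heading += turn
        self.x = (self.x + speed * math.cos(self.heading)) % GRID
        self.y = (self.y + speed * math.sin(self.heading)) % GRID
        trail[int(self.y)][int(self.x)] += deposit   # chemoattractant deposition

def decay_trail(decay=0.9):
    """Exponential trail decay (diffusion omitted in this minimal sketch)."""
    for j in range(GRID):
        for i in range(GRID):
            trail[j][i] *= decay

random.seed(0)
agents = [Agent() for _ in range(200)]
for _ in range(50):
    for a in agents:
        a.step()
    decay_trail()
print(sum(map(sum, trail)) > 0)   # trail field now carries emergent structure
```

Sweeping the sensor angle, sensor distance, and decay rate is exactly the kind of parameter mapping the abstract describes for obtaining the different pattern regimes.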
Simulation of Combustion Systems with Realistic g-jitter
NASA Technical Reports Server (NTRS)
Mell, William E.; McGrattan, Kevin B.; Baum, Howard R.
2003-01-01
In this project a transient, fully three-dimensional computer simulation code was developed to simulate the effects of realistic g-jitter on a number of combustion systems. The simulation code is capable of simulating flame spread on a solid, and nonpremixed or premixed gaseous combustion, in nonturbulent flow with simple combustion models. Simple combustion models were used to preserve computational efficiency, since this is meant to be an engineering code. The use of sophisticated turbulence models was also not pursued (a simple Smagorinsky-type model can be implemented if deemed appropriate): if flow velocities are large enough for turbulence to develop in a reduced-gravity combustion scenario, it is unlikely that g-jitter disturbances in NASA's reduced-gravity facilities will play an important role in the flame dynamics. Acceleration disturbances of realistic orientation, magnitude, and time dependence can easily be included in the simulation. The simulation algorithm was based on techniques used in an existing large eddy simulation code which has successfully simulated fire dynamics in complex domains. A series of simulations with measured and predicted acceleration disturbances on the International Space Station (ISS) is presented. The results of this series of simulations suggested that a passive isolation system and appropriate scheduling of crew activity would provide a sufficiently "quiet" acceleration environment for spherical diffusion flames.
Fatigue-life distributions for reaction time data.
Tejo, Mauricio; Niklitschek-Soto, Sebastián; Marmolejo-Ramos, Fernando
2018-06-01
The family of fatigue-life distributions is introduced as an alternative model of reaction time data. This family includes the shifted Wald distribution and a shifted version of the Birnbaum-Saunders distribution. Although the former has been proposed as a way to model reaction time data, the latter has not. Hence, we provide theoretical, mathematical and practical arguments in support of the shifted Birnbaum-Saunders as a suitable model of simple reaction times and associated cognitive mechanisms.
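The shifted Birnbaum-Saunders density the authors advocate has a standard closed form, which the sketch below implements directly (shape α, scale β, plus a shift playing the role of non-decision time). The parameter values are illustrative RT-like numbers, not fitted to any data set.

```python
import math

def shifted_bs_pdf(t, alpha, beta, shift=0.0):
    """Density of the shifted Birnbaum-Saunders (fatigue-life) distribution,
    from its textbook form. alpha: shape, beta: scale, shift: location
    (reaction times require t > shift); zero density at or below the shift."""
    x = t - shift
    if x <= 0:
        return 0.0
    z = (math.sqrt(x / beta) - math.sqrt(beta / x)) / alpha
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # standard normal pdf
    jac = (math.sqrt(x / beta) + math.sqrt(beta / x)) / (2 * alpha * x)
    return jac * phi

# Illustrative RT-like parameters (not fitted to any data set):
# shape 0.5, scale 0.3 s, shift 0.2 s of non-decision time.
vals = [shifted_bs_pdf(t / 100, 0.5, 0.3, 0.2) for t in range(20, 200)]
total = sum(vals) * 0.01   # crude numerical check: the density integrates to ~1
print(total)
```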
Dynamical network interactions in distributed control of robots
NASA Astrophysics Data System (ADS)
Buscarino, Arturo; Fortuna, Luigi; Frasca, Mattia; Rizzo, Alessandro
2006-03-01
In this paper the dynamical network model of the interactions within a group of mobile robots is investigated and proposed as a possible strategy for controlling the robots without central coordination. Motivated by the results of the analysis of our simple model, we show that the system performance in the presence of noise can be improved by including long-range connections between the robots. Finally, a suitable strategy based on this model to control exploration and transport is introduced.
RESEARCH AREA 7.1: Exploring the Systematics of Controlling Quantum Phenomena
2016-10-05
…the bottom to the top of the landscape. Computational analyses for simple model quantum systems are performed to ascertain the relative abundance of… This research is concerned with the theoretical and experimental control of quantum dynamics phenomena. Advances include new… algorithms to accelerate quantum control as well as provide physical insights into the controlled dynamics. The latter research includes the…
A simple model for calculating air pollution within street canyons
NASA Astrophysics Data System (ADS)
Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.
2014-04-01
This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentration inside street canyons employing the emission rate, the width of the canyon, the dispersive velocity scale and the background concentration. The dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters. Functional forms of these parameters have been obtained from full-scale data measured in street canyons in four European cities. The sensitivity of the SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of results shows that there is good agreement between estimated and observed hourly concentrations (e.g. fractional biases are -10.3% for NOx and +7.8% for NO2). The agreement between the estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model shows better performance for wind speeds >2 m s-1 than for lower wind speeds, and for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and those of a widely used operational dispersion model (OSPM), both using the same input information.
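The scaling the abstract describes, with concentration built from the emission rate, canyon width, dispersive velocity scale, and background, can be sketched as C ≈ Q/(W·u_d) + C_b, with u_d combining wind- and traffic-induced turbulence. Note that the combination rule and the values of the two dimensionless parameters below are illustrative assumptions, not the published SEUS parameterisation.

```python
import math

def dispersive_velocity(u_roof, n_vehicles, a=0.1, b=0.05):
    """Dispersive velocity scale combining wind- and traffic-induced turbulence.
    a and b stand in for the two dimensionless empirical parameters mentioned
    in the abstract; the quadrature combination and values are assumptions."""
    return math.sqrt((a * u_roof) ** 2 + (b * n_vehicles) ** 2)

def canyon_concentration(Q, W, u_roof, n_vehicles, background=0.0):
    """Street-canyon concentration from the scaling C ~ Q / (W * u_d) + C_b."""
    ud = dispersive_velocity(u_roof, n_vehicles)
    return Q / (W * ud) + background

# Illustrative case: 200 ug/(m s) line emission in a 20 m wide canyon,
# 3 m/s roof-level wind, light traffic, 30 ug/m3 urban background.
print(canyon_concentration(200.0, 20.0, 3.0, 10.0, background=30.0))
```

Because traffic turbulence enters u_d, the scaling keeps concentrations finite in calm conditions, which is where simple wind-only box models tend to blow up.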
NASA Astrophysics Data System (ADS)
Caldararu, S.; Smith, M. J.; Purves, D.; Emmott, S.
2013-12-01
Global agriculture will, in the future, be faced with two main challenges: climate change, and an increase in global food demand driven by an increase in population and changes in consumption habits. To be able to predict both the impacts of changes in climate on crop yields and the changes in agricultural practices necessary to respond to such impacts, we currently need to improve our understanding of crop responses to climate and the predictive capability of our models. Ideally, what we would have at our disposal is a modelling tool which, given certain climatic conditions and agricultural practices, can predict the growth pattern and final yield of any of the major crops across the globe. We present a simple, process-based crop growth model based on the assumption that plants allocate above- and below-ground biomass to maintain overall carbon optimality and that, to maintain this optimality, the reproductive stage begins at peak nitrogen uptake. The model includes responses to available light, water, temperature and carbon dioxide concentration as well as nitrogen fertilisation and irrigation. The model is data constrained at two sites, the Yaqui Valley, Mexico for wheat and the Southern Great Plains flux site for maize and soybean, using a robust combination of space-based vegetation data (including data from the MODIS and Landsat TM and ETM+ instruments), as well as ground-based biomass and yield measurements. We show a number of climate response scenarios, including increases in temperature and carbon dioxide concentrations as well as responses to irrigation and fertiliser application.
Rigid aggregates: theory and applications
NASA Astrophysics Data System (ADS)
Richardson, D. C.
2005-08-01
Numerical models employing ``perfect'' self-gravitating rubble piles that consist of monodisperse rigid spheres with configurable contact dissipation have been used to explore collisional and rotational disruption of gravitational aggregates. Applications of these simple models include numerical simulations of planetesimal evolution, asteroid family formation, tidal disruption, and binary asteroid formation. These studies may be limited by the idealized nature of the rubble pile model, since perfect identical spheres stack and shear in a very specific, possibly over-idealized way. To investigate how constituent properties affect the overall characteristics of a gravitational aggregate, particularly its failure modes, we have generalized our numerical code to model colliding, self-gravitating, rigid aggregates made up of variable-size spheres. Euler's equations of rigid-body motion in the presence of external torques are implemented, along with a self-consistent prescription for handling non-central impacts. Simple rules for sticking and breaking are also included. Preliminary results will be presented showing the failure modes of gravitational aggregates made up of smaller, rigid, non-idealized components. Applications of this new capability include more realistic aggregate models, convenient modeling of arbitrary rigid shapes for studies of the stability of orbiting companions (replacing one or both bodies with rigid aggregates eliminates expensive interparticle collisions while preserving the shape, spin, and gravity field of the bodies), and sticky particle aggregation in dense planetary rings. This material is based upon work supported by the National Aeronautics and Space Administration under Grant No. NAG511722 issued through the Office of Space Science and by the National Science Foundation under Grant No. AST0307549.
A simple rule for the costs of vigilance: empirical evidence from a social forager.
Cowlishaw, Guy; Lawes, Michael J.; Lightbody, Margaret; Martin, Alison; Pettifor, Richard; Rowcliffe, J. Marcus
2004-01-01
It is commonly assumed that anti-predator vigilance by foraging animals is costly because it interrupts food searching and handling time, leading to a reduction in feeding rate. When food handling does not require visual attention, however, a forager may handle food while simultaneously searching for the next food item or scanning for predators. We present a simple model of this process, showing that when the length of such compatible handling time Hc is long relative to search time S, specifically Hc/S > 1, it is possible to perform vigilance without a reduction in feeding rate. We test three predictions of this model regarding the relationships between feeding rate, vigilance and the Hc/S ratio, with data collected from a wild population of social foragers (samango monkeys, Cercopithecus mitis erythrarchus). These analyses consistently support our model, including our key prediction: as Hc/S increases, the negative relationship between feeding rate and the proportion of time spent scanning becomes progressively shallower. This pattern is more strongly driven by changes in median scan duration than scan frequency. Our study thus provides a simple rule that describes the extent to which vigilance can be expected to incur a feeding rate cost. PMID:15002768
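The Hc/S argument can be made concrete with a minimal accounting of the foraging cycle: while handling one item, the forager can simultaneously search for the next and scan, so time per item is max(Hc, S + V) and vigilance only costs feeding rate once it exceeds the slack Hc - S. This is a sketch of the argument, not the paper's fitted model.

```python
def feeding_rate(S, Hc, V):
    """Items per unit time when search (S) and vigilance (V) run concurrently
    with compatible handling time (Hc). Time per item = max(Hc, S + V), so
    vigilance is free until it exceeds the slack Hc - S.
    (A minimal sketch of the argument, not the published model.)"""
    return 1.0 / max(Hc, S + V)

# With Hc/S = 2, vigilance up to Hc - S = 1.0 is free:
print(feeding_rate(S=1.0, Hc=2.0, V=0.0))  # 0.5
print(feeding_rate(S=1.0, Hc=2.0, V=1.0))  # still 0.5
print(feeding_rate(S=1.0, Hc=2.0, V=2.0))  # now vigilance cuts the feeding rate
```

When Hc/S < 1 the slack vanishes and every scan subtracts from feeding time, which is the classic cost-of-vigilance regime.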
Sun, Jin; Rutkoski, Jessica E; Poland, Jesse A; Crossa, José; Jannink, Jean-Luc; Sorrells, Mark E
2017-07-01
High-throughput phenotyping (HTP) platforms can be used to measure traits that are genetically correlated with wheat (Triticum aestivum L.) grain yield across time. Incorporating such secondary traits in multivariate pedigree and genomic prediction models would be desirable to improve indirect selection for grain yield. In this study, we evaluated three statistical models, simple repeatability (SR), multitrait (MT), and random regression (RR), for the longitudinal data of secondary traits and compared the impact of the proposed models for secondary traits on their predictive abilities for grain yield. Grain yield and secondary traits, canopy temperature (CT) and normalized difference vegetation index (NDVI), were collected in five diverse environments for 557 wheat lines with available pedigree and genomic information. A two-stage analysis was applied for pedigree and genomic selection (GS). First, secondary traits were fitted by SR, MT, or RR models, separately, within each environment. Then, best linear unbiased predictions (BLUPs) of secondary traits from the above models were used in the multivariate prediction models to compare predictive abilities for grain yield. Predictive ability was substantially improved, by 70% on average, in multivariate pedigree and genomic models when including secondary traits in both training and test populations. Additionally, (i) predictive abilities varied only slightly across the MT, RR, and SR models in this data set, (ii) results indicated that including BLUPs of secondary traits from the MT model was best under severe drought, and (iii) the RR model was slightly better than the SR and MT models under drought environments.
Turbulence and modeling in transonic flow
NASA Technical Reports Server (NTRS)
Rubesin, Morris W.; Viegas, John R.
1989-01-01
A review is made of the performance of a variety of turbulence models in the evaluation of a particular well documented transonic flow. This is done to supplement a previous attempt to calibrate and verify transonic airfoil codes by including many more turbulence models than used in the earlier work and applying the calculations to an experiment that did not suffer from uncertainties in angle of attack and was free of wind tunnel interference. It is found from this work, as well as in the earlier study, that the Johnson-King turbulence model is superior for transonic flows over simple aerodynamic surfaces, including moderate separation. It is also shown that some field equation models with wall function boundary conditions can be competitive with it.
An Overview of Longitudinal Data Analysis Methods for Neurological Research
Locascio, Joseph J.; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825
Quantitative Modeling of Earth Surface Processes
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.
This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
NASA Astrophysics Data System (ADS)
Ali, Mohamed H.; Rakib, Fazle; Al-Saad, Khalid; Al-Saady, Rafif; Lyng, Fiona M.; Goormaghtigh, Erik
2018-07-01
Breast cancer is the second most common cancer after lung cancer. So far, in clinical practice, most cancer parameters originating from histopathology rely on the visualization by a pathologist of microscopic structures observed in stained tissue sections, including immunohistochemistry markers. Fourier transform infrared (FTIR) spectroscopy provides a biochemical fingerprint of a biopsy sample and, together with advanced data analysis techniques, can accurately classify cell types. Yet, one of the challenges when dealing with FTIR imaging is the slow recording of the data: a 1 cm2 tissue section requires several hours of image recording. We show in the present paper that 2D covariance analysis singles out only a few wavenumbers where both variance and covariance are large. Simple models could be built using 4 wavenumbers to identify the 4 main cell types present in breast cancer tissue sections. Decision trees provide particularly simple models to reach discrimination between the 4 cell types. The robustness of these simple decision-tree models was challenged with FTIR spectral data obtained under different recording conditions. One test set was recorded by transflection on tissue sections in the presence of paraffin, while the training set was obtained on dewaxed tissue sections by transmission. Furthermore, the test set was collected with a different brand of FTIR microscope and a different pixel size. Despite the different recording conditions, separating extracellular matrix (ECM) from carcinoma spectra was 100% successful, underlining the robustness of this univariate model and the utility of covariance analysis for revealing efficient wavenumbers. We suggest that 2D covariance maps using the full spectral range could be most useful for selecting the interesting wavenumbers and achieving very fast data acquisition on quantum cascade laser infrared imaging microscopes.
NASA Astrophysics Data System (ADS)
Beh, Kian Lim
2000-10-01
This study was designed to explore the effect of a typical traditional method of instruction in physics on the formation of useful mental models among college students for problem-solving, using simple electric circuits as a context. The study was also aimed at providing a comprehensive description of the understanding of electric circuits among novices and experts. In order to achieve these objectives, two research approaches were employed: (1) a student survey to collect data from 268 physics students; and (2) an interview protocol to collect data from 23 physics students and 24 experts (including 10 electrical engineering graduates, 4 practicing electrical engineers, 2 secondary school physics teachers, 8 physics lecturers, and 4 electrical engineers). Among the major findings are: (1) Most students do not possess accurate models of simple electric circuits as presented implicitly in physics textbooks; (2) Most students display good procedural understanding for solving simple problems concerning electric circuits but have no in-depth conceptual understanding in terms of practical knowledge of current, voltage, resistance, and circuit connections; (3) Most students encounter difficulty in discerning parallel connections that are drawn in a non-conventional format; (4) After a year of college physics, students show significant improvement in several areas, including practical knowledge of current and voltage, ability to compute effective resistance and capacitance, ability to identify circuit connections, and ability to solve problems; however, no significant improvement was found in practical knowledge of resistance or the ability to connect circuits; and (5) The differences and similarities between the physics students and the experts include: (a) Novices perceive parallel circuits more in terms of 'branch', 'current', and 'resistors with the same resistance' while experts perceive parallel circuits more in terms of 'node', 'voltage', and 'less resistance'; and 
(b) Both novices and experts use phrases such as 'side-by side' and 'one on top of the other' in describing parallel circuits which emphasize the geometry of the standard circuit drawing when describing parallel resistors.
The NIST Simple Guide for Evaluating and Expressing Measurement Uncertainty
NASA Astrophysics Data System (ADS)
Possolo, Antonio
2016-11-01
NIST has recently published guidance on the evaluation and expression of the uncertainty of NIST measurement results [1, 2], supplementing but not replacing B. N. Taylor and C. E. Kuyatt's (1994) Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results (NIST Technical Note 1297) [3], which tracks closely the Guide to the expression of uncertainty in measurement (GUM) [4], originally published in 1995 by the Joint Committee for Guides in Metrology of the International Bureau of Weights and Measures (BIPM). The scope of this Simple Guide, however, is much broader than the scope of both NIST Technical Note 1297 and the GUM, because it attempts to address several of the uncertainty evaluation challenges that have arisen at NIST since the 1990s, for example to include molecular biology, greenhouse gases and climate science measurements, and forensic science. The Simple Guide also expands the scope of those two other guidance documents by recognizing observation equations (that is, statistical models) as bona fide measurement models. These models are indispensable to reduce data from interlaboratory studies, to combine measurement results for the same measurand obtained by different methods, and to characterize the uncertainty of calibration and analysis functions used in the measurement of force, temperature, or composition of gas mixtures. This presentation reviews the salient aspects of the Simple Guide, illustrates the use of models and methods for uncertainty evaluation not contemplated in the GUM, and also demonstrates the NIST Uncertainty Machine [5] and the NIST Consensus Builder, which are web-based applications accessible worldwide that facilitate evaluations of measurement uncertainty and the characterization of consensus values in interlaboratory studies.
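The uncertainty evaluations that the Simple Guide and the NIST Uncertainty Machine support can be illustrated with a Monte Carlo propagation through a measurement equation. The measurand and input uncertainties below are a hypothetical example, not one taken from the guide, and the sampler is a generic sketch of the approach rather than the NIST tool itself.

```python
import random
import statistics

def monte_carlo_uncertainty(f, inputs, n=100_000, seed=42):
    """Propagate input uncertainties through a measurement equation f by
    Monte Carlo sampling, the approach behind tools like the NIST
    Uncertainty Machine. inputs maps names to (mean, standard uncertainty);
    Gaussian input distributions are assumed in this sketch."""
    random.seed(seed)
    samples = []
    for _ in range(n):
        draw = {name: random.gauss(mu, sigma) for name, (mu, sigma) in inputs.items()}
        samples.append(f(**draw))
    return statistics.mean(samples), statistics.stdev(samples)

# Hypothetical measurand: resistance from Ohm's law R = V / I,
# with V = 12.00(5) V and I = 2.000(10) A (standard uncertainties).
mean, u = monte_carlo_uncertainty(lambda V, I: V / I,
                                  {"V": (12.0, 0.05), "I": (2.0, 0.01)})
print(mean, u)
```

For this nearly linear case the Monte Carlo standard uncertainty agrees with the first-order GUM propagation, u(R) ≈ R·sqrt((u(V)/V)² + (u(I)/I)²) ≈ 0.039 Ω.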
ERIC Educational Resources Information Center
School Science Review, 1981
1981-01-01
Presents a variety of laboratory procedures, discussions, and demonstrations, including a centripetal force apparatus, a model ear drum, hot air balloons, air as a real substance, centering a ball, a simple test tube rack, a demonstration fire extinguisher, a pin-hole camera, and guidelines for early primary science education (5-10 years) concepts and lesson…
Science and Society Test VI: Energy Economics.
ERIC Educational Resources Information Center
Hafemeister, David W.
1982-01-01
Develops simple numerical estimates to quantify a variety of energy economics issues, including, among others, a modified Verhulst equation (which considers the effect of finite resources on petroleum) for supply/demand economics and a phenomenological model for market penetration. Also presents an analysis of the economic returns of an energy conservation…
Apparatus for Demonstrating Confined and Unconfined Aquifer Characteristics.
ERIC Educational Resources Information Center
Gillham, Robert W.; O'Hannesin, Stephanie F.
1984-01-01
Students in hydrogeology classes commonly have difficulty appreciating differences between the mechanisms of water release from confined and unconfined aquifers. Describes a simple and inexpensive laboratory model for demonstrating the hydraulic responses of confined and unconfined aquifers to pumping. Includes a worked example to demonstrate the…
Simple Model of Macroscopic Instability in XeCl Discharge Pumped Lasers
NASA Astrophysics Data System (ADS)
Ahmed, Belasri; Zoheir, Harrache
2003-10-01
The aim of this work is to study the development of macroscopic non-uniformity in the electron density of high-pressure discharges for excimer lasers, and eventually its propagation due to the kinetic phenomena of the medium. This study uses a transverse one-dimensional model in which the plasma is represented by a set of resistances in parallel. The model was implemented in a numerical code comprising three strongly coupled parts: the electric circuit equations, the electron Boltzmann equation, and the kinetics equations (chemical kinetics model). The time variations of the electron density in each plasma element are obtained by solving a set of ordinary differential equations describing the plasma kinetics and the external circuit. The present model allows a good understanding of the halogen depletion phenomenon, which is the principal cause of laser pulse termination, and permits a simple study of large-scale non-uniformity in the preionization density and its effects on the electrical and chemical properties of the plasma. The results indicate clearly that about 50% of the halogen is consumed by the end of the pulse. Keywords: excimer laser, XeCl, modeling, cold plasma, kinetics, halogen depletion, macroscopic instability.
Dependence of tropical cyclone development on coriolis parameter: A theoretical model
NASA Astrophysics Data System (ADS)
Deng, Liyuan; Li, Tim; Bi, Mingyu; Liu, Jia; Peng, Melinda
2018-03-01
A simple theoretical model was formulated to investigate how tropical cyclone (TC) intensification depends on the Coriolis parameter. The theoretical framework includes a two-layer free atmosphere and an Ekman boundary layer at the bottom. The linkage between the free atmosphere and the boundary layer is through the Ekman pumping vertical velocity, in proportion to the vorticity at the top of the boundary layer. The closure of this linear system assumes a simple relationship between the free-atmosphere diabatic heating and the boundary layer moisture convergence. Under a set of realistic atmospheric parameter values, the model suggests that the most preferred latitude for TC development is around 5°, without considering other factors. The theoretical result is confirmed by high-resolution WRF model simulations in a zero-mean flow and a constant-SST environment on an f-plane with different Coriolis parameters. Given an initially balanced weak vortex, the TC-like vortex intensifies most rapidly at the reference latitude of 5°. Thus, the WRF model simulations confirm the f-dependent characteristics of TC intensification rate as suggested by the theoretical model.
A hierarchy for modeling high speed propulsion systems
NASA Technical Reports Server (NTRS)
Hartley, Tom T.; Deabreu, Alex
1991-01-01
General research efforts on reduced order propulsion models for control systems design are overviewed. Methods for modeling high speed propulsion systems are discussed including internal flow propulsion systems that do not contain rotating machinery, such as inlets, ramjets, and scramjets. The discussion is separated into four areas: (1) computational fluid dynamics models for the entire nonlinear system or high order nonlinear models; (2) high order linearized models derived from fundamental physics; (3) low order linear models obtained from the other high order models; and (4) low order nonlinear models (order here refers to the number of dynamic states). Included in the discussion are any special considerations based on the relevant control system designs. The methods discussed are for the quasi-one-dimensional Euler equations of gasdynamic flow. The essential nonlinear features represented are large amplitude nonlinear waves, including moving normal shocks, hammershocks, simple subsonic combustion via heat addition, temperature dependent gases, detonations, and thermal choking. The report also contains a comprehensive list of papers and theses generated by this grant.
Benchmarking novel approaches for modelling species range dynamics
Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.
2016-01-01
Increasing biodiversity loss due to climate change is one of the most pressing challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to modelling range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity, including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have a great impact on model accuracy, but prior system knowledge of important processes can reduce these uncertainties considerably. Our results reaffirm the clear merit of using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement.
We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. PMID:26872305
Incorporating inductances in tissue-scale models of cardiac electrophysiology
NASA Astrophysics Data System (ADS)
Rossi, Simone; Griffith, Boyce E.
2017-09-01
In standard models of cardiac electrophysiology, including the bidomain and monodomain models, local perturbations can propagate at infinite speed. We address this unrealistic property by developing a hyperbolic bidomain model that is based on a generalization of Ohm's law with a Cattaneo-type model for the fluxes. Further, we obtain a hyperbolic monodomain model in the case that the intracellular and extracellular conductivity tensors have the same anisotropy ratio. In one spatial dimension, the hyperbolic monodomain model is equivalent to a cable model that includes axial inductances, and the relaxation times of the Cattaneo fluxes are strictly related to these inductances. A purely linear analysis shows that the inductances are negligible, but models of cardiac electrophysiology are highly nonlinear, and linear predictions may not capture the fully nonlinear dynamics. In fact, contrary to the linear analysis, we show that for simple nonlinear ionic models, an increase in conduction velocity is obtained for small and moderate values of the relaxation time. A similar behavior is also demonstrated with biophysically detailed ionic models. Using the Fenton-Karma model along with a low-order finite element spatial discretization, we numerically analyze differences between the standard monodomain model and the hyperbolic monodomain model. In a simple benchmark test, we show that the propagation of the action potential is strongly influenced by the alignment of the fibers with respect to the mesh in both the parabolic and hyperbolic models when using relatively coarse spatial discretizations. Accurate predictions of the conduction velocity require computational mesh spacings on the order of a single cardiac cell. We also compare the two formulations in the case of spiral break up and atrial fibrillation in an anatomically detailed model of the left atrium, and we examine the effect of intracellular and extracellular inductances on the virtual electrode phenomenon.
NASA Astrophysics Data System (ADS)
Stockli, R.; Vidale, P. L.
2003-04-01
The importance of correctly including land surface processes in climate models has been increasingly recognized in recent years. Even on seasonal to interannual time scales, land surface-atmosphere feedbacks can play a substantial role in determining the state of the near-surface climate. The availability of soil moisture for both runoff and evapotranspiration depends on biophysical processes occurring in plants and in the soil, acting on time scales ranging from minutes to years. Fluxnet site measurements in various climatic zones are used to drive three generations of LSMs (land surface models) in order to assess the level of complexity needed to represent vegetation processes at the local scale. The three models were the Bucket model (Manabe 1969), BATS 1E (Dickinson 1984), and SiB 2 (Sellers et al. 1996). The evapotranspiration and runoff processes simulated by these models range from simple one-layer soil, no-vegetation parameterizations to complex multilayer soils with realistic photosynthesis-stomatal conductance models. The latter is driven by satellite remote sensing land surface parameters that capture the spatiotemporal evolution of vegetation phenology. In addition, a simulation with SiB 2 that includes not only vertical water fluxes but also lateral soil moisture transfer by downslope flow is conducted for a pre-alpine catchment in Switzerland. Preliminary results show that, depending on the climatic environment and on the season, a realistic representation of evapotranspiration processes, including the seasonally and interannually varying state of vegetation, significantly improves the representation of observed latent and sensible heat fluxes at the local scale. Moreover, the interannual evolution of soil moisture availability and runoff depends strongly on the chosen model complexity.
Biophysical land surface parameters from satellite data make it possible to represent the seasonal changes in vegetation activity, which have a great impact on the yearly budget of transpiration fluxes. For some sites, however, the hydrological cycle is simulated reasonably well even with simple land surface representations.
An Analytic Model for the Success Rate of a Robotic Actuator System in Hitting Random Targets.
Bradley, Stuart
2015-11-20
Autonomous robotic systems are increasingly being used in a wide range of applications such as precision agriculture, medicine, and the military. These systems share common features, often including an action by an "actuator" interacting with a target. While simulations and measurements exist for the success rate of hitting targets by some systems, there is a dearth of analytic models that can give insight into, and guidance on the optimization of, new robotic systems. The present paper develops a simple model for estimating the success rate of hitting random targets from a moving platform. The model has two main dimensionless parameters: the ratio of actuator spacing to target diameter, and the ratio of platform distance moved (between actuator "firings") to the target diameter. It is found that regions of parameter space having a specified high success rate are described by simple equations, providing guidance on design. The role of a "cost function" is introduced which, when minimized, optimizes design, operating, and risk mitigation costs.
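The geometry behind the two dimensionless parameters can be illustrated with a small Monte Carlo sketch. The grid-of-firing-points setup below is an assumed reading of the model, not the paper's analytic derivation:

```python
import numpy as np

rng = np.random.default_rng(0)

def hit_rate(spacing_ratio, step_ratio, n_targets=100_000):
    """Monte Carlo estimate of the probability that a randomly placed
    circular target (unit diameter) is hit by a lattice of firing points
    with cross-track spacing `spacing_ratio` and along-track step
    `step_ratio`, both expressed in target diameters.
    Illustrative sketch only; the paper derives this analytically."""
    # Random target centres within one periodic cell of the firing lattice.
    x = rng.uniform(0, spacing_ratio, n_targets)   # cross-track offset
    y = rng.uniform(0, step_ratio, n_targets)      # along-track offset
    # Distance from each target centre to the nearest firing point.
    dx = np.minimum(x, spacing_ratio - x)
    dy = np.minimum(y, step_ratio - y)
    return np.mean(dx**2 + dy**2 <= 0.25)          # hit if within radius 1/2

# Dense firing relative to target size gives near-certain hits;
# sparse firing leaves most targets untouched.
print(hit_rate(0.5, 0.5))   # prints 1.0
print(hit_rate(4.0, 4.0))
```

With both ratios at 0.5, every point of the cell lies within half a diameter of a firing point, so the success rate saturates at 1; this is the kind of "specified high success" region the abstract describes with simple equations.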
The probability heuristics model of syllogistic reasoning.
Chater, N; Oaksford, M
1999-03-01
A probability heuristics model (PHM) for syllogistic reasoning is proposed. An informational ordering over quantified statements suggests simple probability-based heuristics for syllogistic reasoning. The most important is the "min-heuristic": choose the type of the least informative premise as the type of the conclusion. The rationality of this heuristic is confirmed by an analysis of the probabilistic validity of syllogistic reasoning which treats logical inference as a limiting case of probabilistic inference. A meta-analysis of past experiments reveals close fits with PHM. PHM also compares favorably with alternative accounts, including mental logics, mental models, and deduction as verbal reasoning. Crucially, PHM extends naturally to generalized quantifiers, such as Most and Few, which have not been characterized logically and are, consequently, beyond the scope of current mental logic and mental model theories. Two experiments confirm the novel predictions of PHM when generalized quantifiers are used in syllogistic arguments. PHM suggests that syllogistic reasoning performance may be determined by simple but rational informational strategies justified by probability theory rather than by logic. Copyright 1999 Academic Press.
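The min-heuristic itself is simple enough to state in a few lines of code. The informativeness ordering below (All > Most > Few > Some > None > Some...not) follows the PHM account, though the encoding is illustrative:

```python
# Informativeness ordering of quantified statements assumed from the PHM
# account, most informative first: All (A) > Most > Few > Some (I) >
# None (E) > Some...not (O).
INFORMATIVENESS = ["A", "Most", "Few", "I", "E", "O"]

def min_heuristic(premise1, premise2):
    """Min-heuristic: the conclusion takes the type (quantifier) of the
    least informative premise, i.e. the one later in the ordering."""
    return max(premise1, premise2, key=INFORMATIVENESS.index)

# "All X are Y" + "Some Y are Z" -> a conclusion of type "Some" (I).
print(min_heuristic("A", "I"))   # prints I
```

Because the ordering already covers Most and Few, the same two-line rule extends to the generalized quantifiers that the abstract notes are beyond current mental logic and mental model theories.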
A simple mathematical model to predict sea surface temperature over the northwest Indian Ocean
NASA Astrophysics Data System (ADS)
Noori, Roohollah; Abbasi, Mahmud Reza; Adamowski, Jan Franklin; Dehghani, Majid
2017-10-01
A novel and simple mathematical model was developed in this study to enhance the capacity of a reduced-order model based on eigenvectors (RMEV) to predict sea surface temperature (SST) in the northwest portion of the Indian Ocean, including the Persian and Oman Gulfs and the Arabian Sea. Developed using only the first two of 12,416 possible modes, the enhanced RMEV closely matched observed daily optimum interpolation SST (DOISST) values. The spatial distribution of the first mode indicated that the greatest variations in DOISST occurred in the Persian Gulf. Also, the slightly increasing trend in the temporal component of the first mode observed in the study area over the last 34 years properly reflected the impact of climate change and rising DOISST. Given its simplicity and high level of accuracy, the enhanced RMEV can be applied to forecast DOISST in oceans, an application that the poor forecasting performance and large computational time of other numerical models may not allow.
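The core idea of a reduced-order model that retains only the two leading eigenvector modes can be sketched with a truncated SVD on a synthetic space-time field. The data below are synthetic stand-ins, not DOISST:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic SST-like anomaly field (time x space): two coherent modes
# plus noise, standing in for the observed DOISST record.
t = np.linspace(0, 20, 400)[:, None]
x = np.linspace(0, 1, 60)[None, :]
field = (np.sin(2 * np.pi * t / 10) * np.cos(np.pi * x)        # mode 1
         + 0.3 * np.cos(2 * np.pi * t / 3) * np.sin(2 * np.pi * x)  # mode 2
         + 0.05 * rng.standard_normal((400, 60)))              # noise

# Reduced-order reconstruction from the leading eigenvector modes.
U, s, Vt = np.linalg.svd(field - field.mean(0), full_matrices=False)
k = 2                                     # keep only the first two modes
recon = field.mean(0) + U[:, :k] * s[:k] @ Vt[:k]

explained = np.sum(s[:k]**2) / np.sum(s**2)   # variance captured by k modes
```

When a few modes dominate the variance, as the abstract reports for the first mode in the Persian Gulf, the truncated reconstruction tracks the full field at a tiny fraction of the cost of a dynamical ocean model.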
Marzilli Ericson, Keith M.; White, John Myles; Laibson, David; Cohen, Jonathan D.
2015-01-01
Heuristic models have been proposed for many domains of choice. We compare heuristic models of intertemporal choice, which can account for many of the known intertemporal choice anomalies, to discounting models. We conduct an out-of-sample, cross-validated comparison of intertemporal choice models. Heuristic models outperform traditional utility discounting models, including models of exponential and hyperbolic discounting. The best performing models predict choices by using a weighted average of absolute differences and relative (percentage) differences of the attributes of the goods in a choice set. We conclude that heuristic models explain time-money tradeoff choices in experiments better than utility discounting models. PMID:25911124
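A hedged sketch of the winning model class, predicting choice from a weighted average of absolute and relative (percentage) attribute differences; the weights, coefficients, and function shape are illustrative placeholders, not the fitted values from the paper:

```python
def tradeoff_score(amount_ss, delay_ss, amount_ll, delay_ll,
                   w_abs=0.5, beta=(1.0, -1.0)):
    """Score a smaller-sooner (SS) vs larger-later (LL) money choice by a
    weighted average of absolute and relative differences in each
    attribute. A positive score predicts choosing the larger-later
    option. Illustrative parameterization only."""
    d_amount = (w_abs * (amount_ll - amount_ss)
                + (1 - w_abs) * (amount_ll - amount_ss) / amount_ss)
    d_delay = (w_abs * (delay_ll - delay_ss)
               + (1 - w_abs) * (delay_ll - delay_ss) / max(delay_ss, 1e-9))
    # Larger amounts favor LL (beta[0] > 0); longer waits count against
    # it (beta[1] < 0).
    return beta[0] * d_amount + beta[1] * d_delay

# Doubling the payout for one extra day looks attractive...
print(tradeoff_score(10, 1, 20, 2) > 0)    # prints True
# ...but a trivial bonus for a month-long wait does not.
print(tradeoff_score(10, 1, 10.5, 30) > 0)  # prints False
```

The point of the comparison in the abstract is that a direct attribute-difference rule of this kind predicted held-out choices better than exponential or hyperbolic discounting of a utility stream.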
NASA Technical Reports Server (NTRS)
Wen, Guoyong; Marshak, Alexander; Varnai, Tamas; Levy, Robert
2016-01-01
A transition zone exists between cloudy skies and clear sky, in which clouds scatter solar radiation into clear-sky regions. From a satellite perspective, it appears that clouds enhance the radiation nearby. We seek a simple method to estimate this enhancement, since it is computationally expensive to account for all three-dimensional (3-D) scattering processes. In previous studies, we developed a simple two-layer model (2LM) that estimated the radiation scattered via cloud-molecular interactions. Here we have developed a new model to account for cloud-surface interaction (CSI). We test the models by comparing to calculations provided by full 3-D radiative transfer simulations of realistic cloud scenes. For these scenes, Moderate Resolution Imaging Spectroradiometer (MODIS)-like radiance fields were computed with the Spherical Harmonic Discrete Ordinate Method (SHDOM), based on a large number of cumulus fields simulated by the University of California, Los Angeles (UCLA) large eddy simulation (LES) model. We find that the original 2LM model, which estimates cloud-air molecule interactions, accounts for 64% of the total reflectance enhancement, and the new model (2LM+CSI), which also includes cloud-surface interactions, accounts for nearly 80%. We discuss the possibility of accounting for cloud-aerosol radiative interactions in 3-D cloud-induced reflectance enhancement, which may explain the remaining 20% of the enhancement. Because these are simple models, these corrections can be applied to global satellite observations (e.g., MODIS) and help to reduce biases in aerosol and other clear-sky retrievals.
Generation of multicellular tumor spheroids by the hanging-drop method.
Timmins, Nicholas E; Nielsen, Lars K
2007-01-01
Owing to their in vivo-like characteristics, three-dimensional (3D) multicellular tumor spheroid (MCTS) cultures are gaining increasing popularity as an in vitro model of tumors. A straightforward and simple approach to the cultivation of these MCTS is the hanging-drop method. Cells are suspended in droplets of medium, where they develop into coherent 3D aggregates and are readily accessed for analysis. In addition to being simple, the method eliminates surface interactions with an underlying substratum (e.g., polystyrene plastic or agarose), requires only a low number of starting cells, and is highly reproducible. This method has also been applied to the co-cultivation of mixed cell populations, including the co-cultivation of endothelial cells and tumor cells as a model of early tumor angiogenesis.
The Evolution of Transition Region Loops Using IRIS and AIA
NASA Technical Reports Server (NTRS)
Winebarger, Amy R.; DePontieu, Bart
2014-01-01
Over the past 50 years, the model for the structure of the solar transition region has evolved from a simple transition layer between the cooler chromosphere and the hotter corona to a complex and diverse region that is dominated by complete loops that never reach coronal temperatures. The IRIS slitjaw images show many complete transition region loops. Several of the "coronal" channels in the SDO AIA instrument include contributions from weak transition region lines. In this work, we combine slitjaw images from IRIS with these channels to determine the evolution of the loops. We develop a simple model for the temperature and density evolution of the loops that can explain the simultaneous observations. Finally, we estimate the percentage of AIA emission that originates in the transition region.
Predicting Networked Strategic Behavior via Machine Learning and Game Theory
2015-01-13
The funding for this project was used to develop basic models, methodology and algorithms for the application of machine learning and related tools to settings in which strategic behavior is central. Among the topics studied was the development of simple behavioral models explaining and predicting human subject behavior in networked strategic experiments from prior work. These included experiments in biased voting and networked trading, among others.
ERIC Educational Resources Information Center
van der Linden, Wim J.
Latent class models for mastery testing differ from continuum models in that they do not postulate a latent mastery continuum but conceive mastery and non-mastery as two latent classes, each characterized by different probabilities of success. Several researchers use a simple latent class model that is basically a simultaneous application of the…
Effects of host social hierarchy on disease persistence.
Davidson, Ross S; Marion, Glenn; Hutchings, Michael R
2008-08-07
The effects of social hierarchy on population dynamics and epidemiology are examined through a model which contains a number of fundamental features of hierarchical systems, but is simple enough to allow analytical insight. In order to allow for differences in birth rates, contact rates, and movement rates among different sets of individuals, the population is first divided into subgroups representing levels in the hierarchy. Movement, representing dominance challenges, is allowed between any two levels, giving a completely connected network. The model includes hierarchical effects by introducing a set of dominance parameters which affect birth rates in each social level and movement rates between social levels, dependent upon their rank. Although natural hierarchies vary greatly in form, the skewing of contact patterns, introduced here through non-uniform dominance parameters, has marked effects on the spread of disease. A simple homogeneous mixing differential equation model of a disease with SI dynamics in a population subject to a simple birth and death process is presented, and it is shown that the hierarchical model tends to this as certain parameter regions are approached. Outside of these parameter regions, correlations within the system give rise to deviations from the simple theory. A Gaussian moment closure scheme is developed which extends the homogeneous model in order to take account of correlations arising from the hierarchical structure, and it is shown that the results are in reasonable agreement with simulations across a range of parameters. This approach helps to elucidate the origin of hierarchical effects and shows that it may be straightforward to relate the correlations in the model to measurable quantities which could be used to determine the importance of hierarchical corrections.
Overall, hierarchical effects decrease the levels of disease present in a given population compared to a homogeneous unstructured model, but show higher levels of disease than structured models with no hierarchy. The separation between these three models is greatest when the rate of dominance challenges is low, reducing mixing, and when the disease prevalence is low. This suggests that these effects will often need to be considered in models being used to examine the impact of control strategies where the low disease prevalence behaviour of a model is critical.
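The homogeneous-mixing SI baseline with simple birth and death, to which the hierarchical model reduces in certain parameter regions, can be sketched as follows (parameter values are illustrative, not from the paper):

```python
def si_birth_death(beta=0.5, b=0.02, d=0.02, S0=99.0, I0=1.0,
                   dt=0.01, t_end=200.0):
    """Homogeneous-mixing SI model with a simple birth and death process,
    integrated by forward Euler. All newborns enter the susceptible
    class; transmission is frequency-dependent (beta * S * I / N).
    Illustrative parameterization only."""
    S, I = S0, I0
    for _ in range(int(t_end / dt)):
        N = S + I
        dS = b * N - beta * S * I / N - d * S   # births - infection - death
        dI = beta * S * I / N - d * I           # infection - death
        S += dt * dS
        I += dt * dI
    return S, I
```

With birth rate equal to death rate, the population size is conserved and the system settles to the endemic equilibrium S/N = d/beta; the hierarchical corrections described in the abstract appear as deviations from this baseline.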
VisTrails SAHM: visualization and workflow management for species habitat modeling
Morisette, Jeffrey T.; Jarnevich, Catherine S.; Holcombe, Tracy R.; Talbert, Colin B.; Ignizio, Drew A.; Talbert, Marian; Silva, Claudio; Koop, David; Swanson, Alan; Young, Nicholas E.
2013-01-01
The Software for Assisted Habitat Modeling (SAHM) has been created to both expedite habitat modeling and help maintain a record of the various input data, pre- and post-processing steps and modeling options incorporated in the construction of a species distribution model through the established workflow management and visualization VisTrails software. This paper provides an overview of the VisTrails:SAHM software including a link to the open source code, a table detailing the current SAHM modules, and a simple example modeling an invasive weed species in Rocky Mountain National Park, USA.
Modeling, system identification, and control of ASTREX
NASA Technical Reports Server (NTRS)
Abhyankar, Nandu S.; Ramakrishnan, J.; Byun, K. W.; Das, A.; Cossey, Derek F.; Berg, J.
1993-01-01
The modeling, system identification, and controller design aspects of the ASTREX precision space structure are presented in this work. Modeling of ASTREX is performed using NASTRAN, TREETOPS, and I-DEAS. The models generated range from simple linear time-invariant models to nonlinear models used for large-angle simulations. Identification in both the time and frequency domains is presented. The experimental setup and the results from the identification experiments are included. Controller design for ASTREX is then presented; simulation results using this optimal controller demonstrate its performance. Finally, future directions and plans for the facility are addressed.
Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
Using simple agent-based modeling to inform and enhance neighborhood walkability.
Badland, Hannah; White, Marcus; Macaulay, Gus; Eagleson, Serryn; Mavoa, Suzanne; Pettit, Christopher; Giles-Corti, Billie
2013-12-11
Pedestrian-friendly neighborhoods with proximal destinations and services encourage walking and decrease car dependence, thereby contributing to more active and healthier communities. Proximity to key destinations and services is an important aspect of the urban design decision making process, particularly in areas adopting a transit-oriented development (TOD) approach to urban planning, whereby densification occurs within walking distance of transit nodes. Modeling destination access within neighborhoods has been limited to circular catchment buffers or more sophisticated network-buffers generated using geoprocessing routines within geographical information systems (GIS). Both circular and network-buffer catchment methods are problematic. Circular catchment models do not account for street networks, thus do not allow exploratory 'what-if' scenario modeling; and network-buffering functionality typically exists within proprietary GIS software, which can be costly and requires a high level of expertise to operate. This study sought to overcome these limitations by developing an open-source simple agent-based walkable catchment tool that can be used by researchers, urban designers, planners, and policy makers to test scenarios for improving neighborhood walkable catchments. A simplified version of an agent-based model was ported to a vector-based open source GIS web tool using data derived from the Australian Urban Research Infrastructure Network (AURIN). The tool was developed and tested with end-user stakeholder working group input. The resulting model has proven to be effective and flexible, allowing stakeholders to assess and optimize the walkability of neighborhood catchments around actual or potential nodes of interest (e.g., schools, public transport stops). Users can derive a range of metrics to compare different scenarios modeled. 
These include: catchment area versus circular buffer ratios; mean number of streets crossed; and modeling of different walking speeds and wait time at intersections. The tool has the capacity to influence planning and public health advocacy and practice, and by using open-access source software, it is available for use locally and internationally. There is also scope to extend this version of the tool from a simple to a complex model, which includes agents (i.e., simulated pedestrians) 'learning' and incorporating other environmental attributes that enhance walkability (e.g., residential density, mixed land use, traffic volume).
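The contrast between a circular buffer and a network catchment can be illustrated with a toy breadth-first search over a street grid; this is a schematic stand-in for the AURIN-based tool, with the grid, barrier, and walking budget all invented for illustration:

```python
from collections import deque

def walkable_catchment(blocked, start, max_steps):
    """Breadth-first search from a node of interest (e.g., a transit
    stop), counting grid cells reachable within a walking budget.
    `blocked` marks impassable cells; a naive circular buffer would
    count every cell within the radius regardless of the network."""
    rows, cols = len(blocked), len(blocked[0])
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (r, c), steps = queue.popleft()
        if steps == max_steps:
            continue                      # walking budget exhausted
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and not blocked[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), steps + 1))
    return len(seen)

# A barrier forces a detour, shrinking the network catchment relative
# to a circular buffer around the same point.
open_grid = [[0] * 7 for _ in range(7)]
walled = [row[:] for row in open_grid]
for r in range(1, 6):
    walled[r][3] = 1   # north-south barrier with gaps at the grid edges
print(walkable_catchment(open_grid, (3, 0), 4),
      walkable_catchment(walled, (3, 0), 4))
```

The ratio of these two counts is a toy analogue of the catchment-area-versus-circular-buffer metric listed above; a 'what-if' scenario (e.g., adding a crossing through the barrier) is tested by editing `blocked` and re-running.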
Rocket Engine Oscillation Diagnostics
NASA Technical Reports Server (NTRS)
Nesman, Tom; Turner, James E. (Technical Monitor)
2002-01-01
Rocket engine oscillating data can reveal many physical phenomena ranging from unsteady flow and acoustics to rotordynamics and structural dynamics. Because of this, engine diagnostics based on oscillation data should employ both signal analysis and physical modeling. This paper describes an approach to rocket engine oscillation diagnostics, types of problems encountered, and example problems solved. Determination of design guidelines and environments (or loads) from oscillating phenomena is required during initial stages of rocket engine design, while the additional tasks of health monitoring, incipient failure detection, and anomaly diagnostics occur during engine development and operation. Oscillations in rocket engines are typically related to flow driven acoustics, flow excited structures, or rotational forces. Additional sources of oscillatory energy are combustion and cavitation. Included in the example problems is a sampling of signal analysis tools employed in diagnostics. The rocket engine hardware includes combustion devices, valves, turbopumps, and ducts. Simple models of an oscillating fluid system or structure can be constructed to estimate pertinent dynamic parameters governing the unsteady behavior of engine systems or components. In the example problems it is shown that simple physical modeling when combined with signal analysis can be successfully employed to diagnose complex rocket engine oscillatory phenomena.
Control of ITBs in Fusion Self-Heated Plasmas
NASA Astrophysics Data System (ADS)
Panta, Soma; Newman, David; Terry, Paul; Sanchez, Raul
2015-11-01
Simple dynamical models have been able to capture a remarkable amount of the dynamics of the transport barriers found in many devices, including the often disconnected nature of the electron thermal transport channel sometimes observed in the presence of a standard (``ion channel'') barrier. By adding an evolution equation for electron fluctuations to this rich though simple dynamic transport model, we have previously investigated the interaction between the formation of the standard ion channel barrier and the somewhat less common electron channel barrier. The formation and evolution of the electron channel is even more sensitive than the ion barrier to the alignment of the various gradients making up the sheared radial electric field. Because of this sensitivity and the coupling of the barrier dynamics, the dynamic evolution of the fusion self-heating profile can have a significant impact on barrier location and dynamics. To investigate this, self-heating has been added to the model, and the impact of self-heating on the formation and controllability of the various barriers is explored. It has been found that the evolution of the heating profiles can suppress or collapse the electron channel barrier. NBI and RF schemes will be investigated for profile/barrier control.
Using Necessary Information to Identify Item Dependence in Passage-Based Reading Comprehension Tests
ERIC Educational Resources Information Center
Baldonado, Angela Argo; Svetina, Dubravka; Gorin, Joanna
2015-01-01
Applications of traditional unidimensional item response theory models to passage-based reading comprehension assessment data have been criticized based on potential violations of local independence. However, simple rules for determining dependency, such as including all items associated with a particular passage, may overestimate the dependency…
Language Management in the Czech Republic
ERIC Educational Resources Information Center
Neustupny, J. V.; Nekvapil, Jiri
2003-01-01
This monograph, based on the Language Management model, provides information on both the "simple" (discourse-based) and "organised" modes of attention to language problems in the Czech Republic. This includes but is not limited to the language policy of the State. This approach does not satisfy itself with discussing problems…
DOING Physics--Physics Activities for Groups.
ERIC Educational Resources Information Center
Zwicker, Earl, Ed.
1985-01-01
Students are challenged to investigate a simple electric motor and to build their own model from a battery, wood block, clips, enameled copper wire, bare wire, and sandpaper. Through trial and error, several discoveries are made, including a substitute commutator and use of a radio to detect motor armature contact changes. (DH)
A Simple Boyle's Law Experiment.
ERIC Educational Resources Information Center
Lewis, Don L.
1997-01-01
Describes an experiment to demonstrate Boyle's law that provides pressure measurements in a familiar unit (psi) and makes no assumptions concerning atmospheric pressure. Items needed include bathroom scales and a 60-ml syringe, castor oil, disposable 3-ml syringe and needle, modeling clay, pliers, and a wooden block. Commercial devices use a…
Ericson, Keith M Marzilli; White, John Myles; Laibson, David; Cohen, Jonathan D
2015-06-01
Heuristic models have been proposed for many domains involving choice. We conducted an out-of-sample, cross-validated comparison of heuristic models of intertemporal choice (which can account for many of the known intertemporal choice anomalies) and discounting models. Heuristic models outperformed traditional utility-discounting models, including models of exponential and hyperbolic discounting. The best-performing models predicted choices by using a weighted average of absolute differences and relative percentage differences of the attributes of the goods in a choice set. We concluded that heuristic models explain time-money trade-off choices in experiments better than do utility-discounting models.
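A sketch of the kind of heuristic rule the best-performing models used: a weighted sum of absolute and relative differences in amount and delay, passed through a logistic choice rule. The weights and options below are illustrative, not the fitted values from the study:

```python
import math

def heuristic_score(x1, t1, x2, t2, w):
    """Weighted sum of absolute and relative differences in amount and delay
    between a smaller-sooner option (x1, t1) and a larger-later one (x2, t2).
    Positive score favours the larger-later option. Weights are hypothetical."""
    xm = (x1 + x2) / 2.0
    tm = (t1 + t2) / 2.0
    return (w["xa"] * (x2 - x1) + w["xr"] * (x2 - x1) / xm
            + w["ta"] * (t2 - t1) + w["tr"] * (t2 - t1) / tm)

def p_larger_later(score):
    """Logistic link from score to choice probability."""
    return 1.0 / (1.0 + math.exp(-score))

w = {"xa": 0.1, "xr": 2.0, "ta": -0.02, "tr": -1.0}   # illustrative weights
s = heuristic_score(40.0, 0.0, 50.0, 30.0, w)         # $40 now vs $50 in 30 days
p = p_larger_later(s)
```

With these weights the long delay outweighs the modest gain in amount, so the model favours the smaller-sooner option (p below 0.5).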
A simple model of electron beam initiated dielectric breakdown
NASA Technical Reports Server (NTRS)
Beers, B. L.; Daniell, R. E.; Delmer, T. N.
1985-01-01
A steady state model that describes the internal charge distribution of a planar dielectric sample exposed to a uniform electron beam was developed. The model includes the effects of charge deposition and ionization of the beam, separate trap-modulated mobilities for electrons and holes, electron-hole recombination, and pair production by drifting thermal electrons. If the incident beam current is greater than a certain critical value (which depends on sample thickness as well as other sample properties), the steady state solution is non-physical.
Continuous versus discontinuous albedo representations in a simple diffusive climate model
NASA Astrophysics Data System (ADS)
Simmons, P. A.; Griffel, D. H.
1988-07-01
A one-dimensional annually and zonally averaged energy-balance model, with diffusive meridional heat transport and including ice-albedo feedback, is considered. This type of model is found to be very sensitive to the form of albedo used. The solutions for a discontinuous step-function albedo are compared to those for a more realistic smoothly varying albedo. The smooth albedo gives a closer fit to present conditions, but the discontinuous form gives a better representation of climates in earlier epochs.
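The role of the albedo form can be illustrated with a zero-dimensional analogue of such an energy-balance model, comparing a step-function albedo with a smooth tanh transition (Budyko-type radiation constants; all parameter values are illustrative, not from the paper):

```python
import math

Q, A, B = 342.0, 203.3, 2.09   # W/m^2; Budyko-type OLR = A + B*(T - 273)

def albedo_step(T):
    """Discontinuous albedo: high (icy) below 263 K, low (ice-free) above."""
    return 0.62 if T < 263.0 else 0.30

def albedo_smooth(T, width=10.0):
    """Smooth tanh transition between the same two albedo values."""
    return 0.30 + 0.32 * 0.5 * (1.0 - math.tanh((T - 263.0) / width))

def equilibrium(albedo, T0=280.0, n=200):
    """Fixed-point iteration of the global balance Q*(1 - a(T)) = A + B*(T - 273)."""
    T = T0
    for _ in range(n):
        T = 273.0 + (Q * (1.0 - albedo(T)) - A) / B
    return T

T_step = equilibrium(albedo_step)
T_smooth = equilibrium(albedo_smooth)
```

Starting from a warm state, both forms settle on a similar warm-branch equilibrium near 290 K; the sensitivity discussed in the abstract shows up when the solution sits near the albedo transition, where the step and smooth forms diverge.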
Microeconomics of 300-mm process module control
NASA Astrophysics Data System (ADS)
Monahan, Kevin M.; Chatterjee, Arun K.; Falessi, Georges; Levy, Ady; Stoller, Meryl D.
2001-08-01
Simple microeconomic models that directly link metrology, yield, and profitability are rare or non-existent. In this work, we validate and apply such a model. Using a small number of input parameters, we explain current yield management practices in 200 mm factories. The model is then used to extrapolate requirements for 300 mm factories, including the impact of simultaneous technology transitions to 130nm lithography and integrated metrology. To support our conclusions, we use examples relevant to factory-wide photo module control.
MOAB : a mesh-oriented database.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tautges, Timothy James; Ernst, Corey; Stimpson, Clint
A finite element mesh is used to decompose a continuous domain into a discretized representation. The finite element method solves PDEs on this mesh by modeling complex functions as a set of simple basis functions with coefficients at mesh vertices and prescribed continuity between elements. The mesh is one of the fundamental types of data linking the various tools in the FEA process (mesh generation, analysis, visualization, etc.). Thus, the representation of mesh data and operations on those data play a very important role in FEA-based simulations. MOAB is a component for representing and evaluating mesh data. MOAB can store structured and unstructured mesh, consisting of elements in the finite element 'zoo'. The functional interface to MOAB is simple yet powerful, allowing the representation of many types of metadata commonly found on the mesh. MOAB is optimized for efficiency in space and time, based on access to mesh in chunks rather than through individual entities, while also versatile enough to support individual entity access. The MOAB data model consists of a mesh interface instance, mesh entities (vertices and elements), sets, and tags. Entities are addressed through handles rather than pointers, to allow the underlying representation of an entity to change without changing the handle to that entity. Sets are arbitrary groupings of mesh entities and other sets. Sets also support parent/child relationships as a relation distinct from sets containing other sets. The directed graph provided by set parent/child relationships is useful for modeling topological relations from a geometric model or other metadata. Tags are named data which can be assigned to the mesh as a whole, individual entities, or sets.
Tags are a mechanism for attaching data to individual entities, and sets are a mechanism for describing relations between entities; the combination of these two mechanisms is a powerful yet simple interface for representing metadata or application-specific data. For example, sets and tags can be used together to describe geometric topology, boundary condition, and inter-processor interface groupings in a mesh. MOAB is used in several ways in various applications. MOAB serves as the underlying mesh data representation in the VERDE mesh verification code. MOAB can also be used as a mesh input mechanism, using mesh readers included with MOAB, or as a translator between mesh formats, using readers and writers included with MOAB. The remainder of this report is organized as follows. Section 2, 'Getting Started', provides a few simple examples of using MOAB to perform simple tasks on a mesh. Section 3 discusses the MOAB data model in more detail, including some aspects of the implementation. Section 4 summarizes the MOAB function API. Section 5 describes some of the tools included with MOAB, and the implementation of mesh readers/writers for MOAB. Section 6 contains a brief description of MOAB's relation to the TSTT mesh interface. Section 7 gives a conclusion and future plans for MOAB development. Section 8 gives references cited in this report. A reference description of the full MOAB API is contained in Section 9.
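The handle/set/tag data model can be sketched in a few lines. This toy class only mimics the concepts described above (opaque handles, entity sets, named tags); it is not the real MOAB API:

```python
class MiniMesh:
    """Toy illustration of a MOAB-style data model: entities addressed by
    opaque integer handles, arbitrary sets of entities, set parent/child
    relations, and named tags attachable to entities or sets."""
    def __init__(self):
        self._next = 1
        self.entities = {}   # handle -> ("vertex", coords) or (elem kind, connectivity)
        self.sets = {}       # set handle -> set of member handles
        self.children = {}   # set handle -> list of child set handles
        self.tags = {}       # (tag name, handle) -> value

    def _handle(self):
        h = self._next
        self._next += 1
        return h

    def create_vertex(self, xyz):
        h = self._handle()
        self.entities[h] = ("vertex", tuple(xyz))
        return h

    def create_element(self, kind, connectivity):
        h = self._handle()
        self.entities[h] = (kind, tuple(connectivity))
        return h

    def create_set(self, members=(), parent=None):
        h = self._handle()
        self.sets[h] = set(members)
        if parent is not None:
            self.children.setdefault(parent, []).append(h)
        return h

    def tag_set(self, name, handle, value):
        self.tags[(name, handle)] = value

    def tag_get(self, name, handle):
        return self.tags[(name, handle)]

# One quad, gathered into a set tagged with a boundary condition.
mesh = MiniMesh()
verts = [mesh.create_vertex((x, y, 0.0)) for x, y in [(0, 0), (1, 0), (1, 1), (0, 1)]]
quad = mesh.create_element("quad", verts)
bc = mesh.create_set([quad])
mesh.tag_set("BOUNDARY_CONDITION", bc, "dirichlet")
```

Because callers hold only integer handles, the underlying storage of an entity could change (e.g. to chunked arrays, as MOAB does for efficiency) without invalidating the handle.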
A univariate model of river water nitrate time series
NASA Astrophysics Data System (ADS)
Worrall, F.; Burt, T. P.
1999-01-01
Four time series were taken from three catchments in the North and South of England. The sites chosen included two in predominantly agricultural catchments, one at the tidal limit and one downstream of a sewage treatment works. A time series model was constructed for each of these series as a means of decomposing the elements controlling river water nitrate concentrations and to assess whether this approach could provide a simple management tool for protecting water abstractions. Autoregressive (AR) modelling of the detrended and deseasoned time series showed a "memory effect". This memory effect expressed itself as an increase in the winter-summer difference in nitrate levels that was dependent upon the nitrate concentration 12 or 6 months previously. Autoregressive moving average (ARMA) modelling showed that one of the series contained seasonal, non-stationary elements that appeared as an increasing trend in the winter-summer difference. The ARMA model was used to predict nitrate levels and predictions were tested against data held back from the model construction process - predictions gave average percentage errors of less than 10%. Empirical modelling can therefore provide a simple, efficient method for constructing management models for downstream water abstraction.
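The detrend/deseason/autoregression workflow described above can be sketched on synthetic data; the lag-1 and lag-12 terms stand in for the seasonal "memory effect", and all coefficients are illustrative rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly nitrate-like series: linear trend + seasonality + AR(1) noise.
n = 240
t = np.arange(n)
season = 1.5 * np.cos(2 * np.pi * t / 12.0)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0.0, 0.5)
series = 4.0 + 0.005 * t + season + noise

# Detrend (subtract linear fit) and deseason (subtract monthly means).
trend = np.polyval(np.polyfit(t, series, 1), t)
resid = series - trend
monthly = np.array([resid[m::12].mean() for m in range(12)])
deseason = resid - monthly[t % 12]

# OLS fit of an AR model with lags 1 and 12; a nonzero lag-12 coefficient would
# be the kind of annual memory effect the paper reports.
y = deseason[12:]
X = np.column_stack([deseason[11:-1], deseason[:-12]])
phi, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Here the generator has no true annual memory, so the fitted lag-12 coefficient is near zero while the lag-1 coefficient recovers the AR(1) persistence; on real nitrate series the lag-12 term carries the winter-summer memory.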
Galactic chemical evolution and nucleocosmochronology - Standard model with terminated infall
NASA Technical Reports Server (NTRS)
Clayton, D. D.
1984-01-01
Some exactly soluble families of models for the chemical evolution of the Galaxy are presented. The parameters considered include gas mass, the age-metallicity relation, the star mass vs. metallicity, the age distribution, and the mean age of dwarfs. A short BASIC program for calculating these parameters is given. The calculation of metallicity gradients, nuclear cosmochronology, and extinct radioactivities is addressed. An especially simple, mathematically linear model is recommended as a standard model of galaxies with truncated infall due to its internal consistency and compact display of the physical effects of the parameters.
An asymptotic solution to a passive biped walker model
NASA Astrophysics Data System (ADS)
Yudaev, Sergey A.; Rachinskii, Dmitrii; Sobolev, Vladimir A.
2017-02-01
We consider a simple model of a passive dynamic biped robot walker with point feet and legs without knees. The model is a switched system, which includes an inverted double pendulum. The robot's gait and its stability depend on parameters such as the slope of the ramp, the length of the robot's legs, and the mass distribution along the legs. We present an asymptotic solution of the model. The first correction to the zero-order approximation is shown to agree with the numerical solution for a limited parameter range.
A model of interval timing by neural integration
Simen, Patrick; Balci, Fuat; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip
2011-01-01
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes; that correlations among them can be largely cancelled by balancing excitation and inhibition; that neural populations can act as integrators; and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule’s predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior. PMID:21697374
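A minimal simulation of the core mechanism: a noisy firing rate ramping linearly to a threshold, with the drift set to theta/T so the interval T is timed on average, and the noise scaled so the coefficient of variation is roughly constant across intervals. Parameter values are illustrative, not fitted:

```python
import random

def first_passage(T, theta=1.0, c=0.15, dt=0.001, rng=random):
    """Noisy accumulator dx = A*dt + c*sqrt(A)*dW with drift A = theta / T,
    so the threshold theta is crossed after about T seconds on average.
    Scaling the noise with sqrt(A) keeps the coefficient of variation of
    the crossing time roughly constant across T (scale invariance)."""
    A = theta / T
    step_sd = c * (A ** 0.5) * (dt ** 0.5)
    x, t = 0.0, 0.0
    while x < theta:
        x += A * dt + rng.gauss(0.0, step_sd)
        t += dt
    return t

random.seed(1)
times = [first_passage(2.0) for _ in range(200)]
mean = sum(times) / len(times)
sd = (sum((x - mean) ** 2 for x in times) / len(times)) ** 0.5
cv = sd / mean   # approximately c / sqrt(theta), independent of T
```

Duration learning in this picture amounts to adjusting the drift A after feedback; the one-shot rule in the paper corresponds to setting A = theta / T_observed directly.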
A minimalist feedback-regulated model for galaxy formation during the epoch of reionization
NASA Astrophysics Data System (ADS)
Furlanetto, Steven R.; Mirocha, Jordan; Mebane, Richard H.; Sun, Guochao
2017-12-01
Near-infrared surveys have now determined the luminosity functions of galaxies at 6 ≲ z ≲ 8 to impressive precision and identified a number of candidates at even earlier times. Here, we develop a simple analytic model to describe these populations that allows physically motivated extrapolation to earlier times and fainter luminosities. We assume that galaxies grow through accretion on to dark matter haloes, which we model by matching haloes at fixed number density across redshift, and that stellar feedback limits the star formation rate. We allow for a variety of feedback mechanisms, including regulation through supernova energy and momentum from radiation pressure. We show that reasonable choices for the feedback parameters can fit the available galaxy data, which in turn substantially limits the range of plausible extrapolations of the luminosity function to earlier times and fainter luminosities: for example, the global star formation rate declines rapidly (by a factor of ∼20 from z = 6 to 15 in our fiducial model), but the bright galaxies accessible to observations decline even faster (by a factor ≳ 400 over the same range). Our framework helps us develop intuition for the range of expectations permitted by simple models of high-z galaxies that build on our understanding of 'normal' galaxy evolution. We also provide predictions for galaxy measurements by future facilities, including James Webb Space Telescope and Wide-Field Infrared Survey Telescope.
Simple Tidal Prism Models Revisited
NASA Astrophysics Data System (ADS)
Luketina, D.
1998-01-01
Simple tidal prism models for well-mixed estuaries have been in use for some time and are discussed in most text books on estuaries. The appeal of this model is its simplicity. However, there are several flaws in the logic behind the model. These flaws are pointed out and a more theoretically correct simple tidal prism model is derived. In doing so, it is made clear which effects can, in theory, be neglected and which can not.
Simple model dielectric functions for insulators
NASA Astrophysics Data System (ADS)
Vos, Maarten; Grande, Pedro L.
2017-05-01
The Drude dielectric function is a simple, classical way of describing the dielectric function of free-electron materials, which have a uniform electron density. The Mermin dielectric function also describes a free electron gas, but is based on quantum physics. More complex metals have varying electron densities and are often described by a sum of Drude dielectric functions, the weight of each function being taken proportional to the volume with the corresponding density. Here we describe a slight variation on the Drude dielectric function that describes insulators in a semi-classical way, and a form of the Levine-Louie dielectric function, including a relaxation time, that does the same within the framework of quantum physics. In the optical limit the semi-classical description of an insulator and the quantum physics description coincide, in the same way as the Drude and Mermin dielectric functions coincide in the optical limit for metals. There is a simple relation between the coefficients used in the classical and quantum approaches, a relation that ensures that the obtained dielectric function corresponds to the right static refractive index. For water we give a comparison of the model dielectric function at non-zero momentum with inelastic X-ray measurements, both at relatively small momenta and in the Compton limit. The Levine-Louie dielectric function including a relaxation time describes the spectra at small momentum quite well, but in the Compton limit there are significant deviations.
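The free-electron starting point is easy to state concretely; a sketch of the Drude dielectric function and the associated energy-loss function, with an illustrative, roughly aluminium-like plasmon energy and damping:

```python
def eps_drude(w, wp, gamma):
    """Classical Drude dielectric function eps(w) = 1 - wp^2 / (w^2 + i*gamma*w)."""
    return 1.0 - wp**2 / (w * (w + 1j * gamma))

def loss(w, wp, gamma):
    """Energy-loss function Im(-1/eps), the quantity probed in loss spectroscopy."""
    return (-1.0 / eps_drude(w, wp, gamma)).imag

wp, gamma = 15.0, 1.0   # plasmon energy and damping in eV (illustrative)
ws = [0.1 * k for k in range(1, 400)]
peak_w = max(ws, key=lambda w: loss(w, wp, gamma))   # loss peaks near wp
```

For small damping the loss function peaks at the plasmon energy wp, and eps approaches 1 at high frequency; the insulator variants discussed above modify this form so that a band gap and the correct static refractive index are built in.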
Spacecraft Internal Acoustic Environment Modeling
NASA Technical Reports Server (NTRS)
Chu, Shao-Sheng R.; Allen, Christopher S.
2010-01-01
Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and to predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment were developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup, and comparisons were made with the model and showed excellent agreement. During FY 09, the fidelity of the mockup and corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand. This is opposed to earlier studies where Reference Sound Sources (RSS) with known sound power level were used. Comparisons of the modeling result with the measurements in the mockup showed excellent results. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between ECLSS wall and mockup wall. The effect of sealing the gap and adding sound absorptive treatment to ECLSS wall were also modeled and validated.
Complex versus simple models: ion-channel cardiac toxicity prediction.
Mistry, Hitesh B
2018-01-01
There is growing interest in applying detailed mathematical models of the heart to ion-channel-related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here, the predictive performance of two established large-scale biophysical cardiac models was compared with that of a simple linear model, B net. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via leave-one-out cross-validation. Overall, the B net model performed as well as the leading cardiac models on two of the data-sets and outperformed both cardiac models on the latest. These results highlight the importance of benchmarking complex versus simple models, and also encourage the development of simple models.
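The leave-one-out protocol can be sketched with a one-variable threshold classifier standing in for the simple linear model; the scores, labels, and threshold rule below are all illustrative, not the paper's data or the B net definition:

```python
def loo_accuracy(scores, labels):
    """Leave-one-out cross-validation of a threshold classifier: for each
    held-out compound, pick the threshold that best separates the remaining
    training fold, then score the held-out prediction."""
    n = len(scores)
    correct = 0
    for i in range(n):
        train = [(s, l) for j, (s, l) in enumerate(zip(scores, labels)) if j != i]
        candidates = sorted(s for s, _ in train)
        best_thr, best_acc = candidates[0], -1.0
        for thr in candidates:
            acc = sum((s >= thr) == l for s, l in train) / len(train)
            if acc > best_acc:
                best_thr, best_acc = thr, acc
        correct += (scores[i] >= best_thr) == labels[i]
    return correct / n

# Hypothetical risk scores and high-risk labels for eight compounds.
scores = [0.1, 0.2, 0.3, 0.35, 0.7, 0.8, 0.9, 1.1]
labels = [False, False, False, False, True, True, True, True]
acc = loo_accuracy(scores, labels)
```

Even on this cleanly separable toy set, leave-one-out exposes a borderline compound, which is why out-of-sample assessment is a fairer benchmark than in-sample fit when comparing simple and complex models.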
Meta-analysis of pesticide sorption in subsoils
NASA Astrophysics Data System (ADS)
Jarvis, Nicholas
2017-04-01
It has been known for several decades that sorption koc values tend to be larger in soils that are low in organic carbon (i.e. subsoils). Nevertheless, in a regulatory context, the models used to assess leaching of pesticides to groundwater still rely on a constant koc value, which is usually measured on topsoil samples. This is mainly because the general applicability of any improved model approach that is also simple enough to use for regulatory purposes has not been demonstrated. The objective of this study was therefore first to summarize and generalize available literature data in order to assess the magnitude of any systematic increase of koc values in subsoil and to test an alternative model of subsoil sorption that could be useful in pesticide risk assessment and management. To this end, a database containing the results of batch sorption experiments for pesticides was compiled from published studies in the literature, which placed at least as much emphasis on measurements in subsoil horizons as in topsoil. The database includes 967 data entries from 46 studies and for 34 different active substances (15 non-ionic compounds, 13 weak acids, 6 weak bases). In order to minimize pH effects on sorption, data for weak acids and bases were only included if the soil pH was more than two units larger than the compound pKa. A simple empirical model, whereby the sorption constant is given as a power law function of the soil organic carbon content, gave good fits to most data sets. Overall, the apparent koc value, koc(app), for non-ionic compounds and weak bases roughly doubled as the soil organic carbon content decreased by a factor of ten. The typical increase in koc(app) was even larger for weak acids: on average koc(app) increased by a factor of six as soil organic carbon content decreased by a factor of ten. 
These results suggest the koc concept currently used in leaching models should be replaced by an alternative approach that gives a more realistic representation of pesticide sorption in subsoil. The model tested in this study appears to be widely applicable and simple enough to parameterize for risk assessment purposes. However, more data on subsoil sorption should first be included in the analysis to enable reliable estimation of worst-case percentile values of the power law exponent in the model.
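A sketch of such a power-law model: with sorption Kd = a * foc**b, the apparent koc is Kd / foc, and choosing the exponent so that koc(app) doubles per tenfold decrease in organic carbon reproduces the typical non-ionic behaviour reported above (pre-factor and exponent are illustrative):

```python
import math

b = 1.0 - math.log10(2.0)   # ~0.699: gives a doubling of koc(app) per decade of foc
a = 100.0                   # illustrative pre-factor

def koc_app(foc):
    """Apparent koc from the power-law sorption model Kd = a * foc**b."""
    return a * foc ** (b - 1.0)

# Subsoil (0.2% organic carbon) vs topsoil (2%): koc(app) doubles.
ratio = koc_app(0.002) / koc_app(0.02)
```

For the weak acids, where koc(app) rose roughly sixfold per decade, the corresponding exponent would be b = 1 - log10(6), about 0.22.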
NASA Technical Reports Server (NTRS)
Flowers, George T.
1994-01-01
Substantial progress has been made toward the goals of this research effort in the past six months. A simplified rotor model with a flexible shaft and backup bearings has been developed. The model is based upon the work of Ishii and Kirk. Parameter studies of the behavior of this model are currently being conducted. A simple rotor model which includes a flexible disk and bearings with clearance has been developed and the dynamics of the model investigated. The study consists of simulation work coupled with experimental verification. The work is documented in the attached paper. A rotor model based upon the T-501 engine has been developed which includes backup bearing effects. The dynamics of this model are currently being studied with the objective of verifying the conclusions obtained from the simpler models. Parallel simulation runs are being conducted using an ANSYS based finite element model of the T-501.
A simple modern correctness condition for a space-based high-performance multiprocessor
NASA Technical Reports Server (NTRS)
Probst, David K.; Li, Hon F.
1992-01-01
A number of U.S. national programs, including space-based detection of ballistic missile launches, envisage putting significant computing power into space. Given sufficient progress in low-power VLSI, multichip-module packaging and liquid-cooling technologies, we will see design of high-performance multiprocessors for individual satellites. In very high speed implementations, performance depends critically on tolerating large latencies in interprocessor communication; without latency tolerance, performance is limited by the vastly differing time scales in processor and data-memory modules, including interconnect times. The modern approach to tolerating remote-communication cost in scalable, shared-memory multiprocessors is to use a multithreaded architecture, and alter the semantics of shared memory slightly, at the price of forcing the programmer either to reason about program correctness in a relaxed consistency model or to agree to program in a constrained style. The literature on multiprocessor correctness conditions has become increasingly complex, and sometimes confusing, which may hinder its practical application. We propose a simple modern correctness condition for a high-performance, shared-memory multiprocessor; the correctness condition is based on a simple interface between the multiprocessor architecture and the parallel programming system.
Predictive Analytics In Healthcare: Medications as a Predictor of Medical Complexity.
Higdon, Roger; Stewart, Elizabeth; Roach, Jared C; Dombrowski, Caroline; Stanberry, Larissa; Clifton, Holly; Kolker, Natali; van Belle, Gerald; Del Beccaro, Mark A; Kolker, Eugene
2013-12-01
Children with special healthcare needs (CSHCN) require health and related services that exceed those required by most hospitalized children. A small but growing and important subset of the CSHCN group includes medically complex children (MCCs). MCCs typically have comorbidities and disproportionately consume healthcare resources. To enable strategic planning for the needs of MCCs, simple screens to identify potential MCCs rapidly in a hospital setting are needed. We assessed whether the number of medications used and the class of those medications correlated with MCC status. Retrospective analysis of medication data from the inpatients at Seattle Children's Hospital found that the numbers of inpatient and outpatient medications significantly correlated with MCC status. Numerous variables based on counts of medications, use of individual medications, and use of combinations of medications were considered, resulting in a simple model based on three different counts of medications: outpatient and inpatient drug classes and individual inpatient drug names. The combined model was used to rank the patient population for medical complexity. As a result, simple, objective admission screens for predicting the complexity of patients based on the number and type of medications were implemented.
NASA Technical Reports Server (NTRS)
Tsai, H. C.; Arocho, A. M.
1992-01-01
A simple one-dimensional fiber-matrix interphase model has been developed and analytical results obtained correlated well with available experimental data. It was found that by including the interphase between the fiber and matrix in the model, much better local stress results were obtained than with the model without the interphase. A more sophisticated two-dimensional micromechanical model, which included the interphase properties was also developed. Both one-dimensional and two-dimensional models were used to study the effect of the interphase properties on the local stresses at the fiber, interphase and matrix. From this study, it was found that interphase modulus and thickness have significant influence on the transverse tensile strength and mode of failure in fiber reinforced composites.
Statistical mechanics of simple models of protein folding and design.
Pande, V S; Grosberg, A Y; Tanaka, T
1997-01-01
It is now believed that the primary equilibrium aspects of simple models of protein folding are understood theoretically. However, current theories often resort to rather heavy mathematics to overcome some technical difficulties inherent in the problem or start from a phenomenological model. To this end, we take a new approach in this pedagogical review of the statistical mechanics of protein folding. The benefit of our approach is a drastic mathematical simplification of the theory, without resort to any new approximations or phenomenological prescriptions. Indeed, the results we obtain agree precisely with previous calculations. Because of this simplification, we are able to present here a thorough and self-contained treatment of the problem. Topics discussed include the statistical mechanics of the random energy model (REM), tests of the validity of REM as a model for heteropolymer freezing, freezing transition of random sequences, phase diagram of designed ("minimally frustrated") sequences, and the degree to which errors in the interactions employed in simulations of either folding or design can still lead to correct folding behavior. PMID:9414231
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hooper, R.P.; West, C.T.; Peters, N.E.
1990-08-01
The authors constructed a simple, process-oriented model, called the Alpine Lake Forecaster (ALF), using data collected during the Integrated Watershed Study at Emerald Lake, Sequoia National Park, CA. The model was designed to answer questions concerning the impact of acid deposition on high-elevation watersheds in the Sierra Nevada, CA. ALF is able to capture the basic solute patterns in stream water during snowmelt in this alpine catchment, where ground water is a minor contributor to stream flow. It includes an empirical representation of primary mineral weathering as the only alkalinity-generating mechanism. Hydrologic and chemical data from a heavy snow year were used to calibrate the model. Watershed processes during a light snow year appeared to be different from the calibration year. The model forecast concludes that stream and lake water are most likely to experience a loss of ANC and depression in pH during spring rain storms that occur during the snowmelt dilution phase.
Some anticipated contributions to core fluid dynamics from the GRM
NASA Technical Reports Server (NTRS)
Vanvorhies, C.
1985-01-01
It is broadly maintained that the secular variation (SV) of the large scale geomagnetic field contains information on the fluid dynamics of Earth's electrically conducting outer core. The electromagnetic theory appropriate to a simple Earth model has recently been combined with reduced geomagnetic data in order to extract some of this information and ascertain its significance. The simple Earth model consists of a rigid, electrically insulating mantle surrounding a spherical, inviscid, and perfectly conducting liquid outer core. This model was tested against seismology by using truncated spherical harmonic models of the observed geomagnetic field to locate Earth's core-mantle boundary, CMB. Further electromagnetic theory has been developed and applied to the problem of estimating the horizontal fluid motion just beneath CMB. Of particular geophysical interest are the hypotheses that these motions: (1) include appreciable surface divergence indicative of vertical motion at depth, and (2) are steady for time intervals of a decade or more. In addition to the extended testing of the basic Earth model, the proposed GRM provides a unique opportunity to test these dynamical hypotheses.
Multibody dynamic analysis using a rotation-free shell element with corotational frame
NASA Astrophysics Data System (ADS)
Shi, Jiabei; Liu, Zhuyong; Hong, Jiazhen
2018-03-01
Rotation-free shell formulation is a simple and effective method to model a shell with large deformation. Moreover, it can be compatible with the existing theories of finite element method. However, a rotation-free shell is seldom employed in multibody systems. Using a derivative of rigid body motion, an efficient nonlinear shell model is proposed based on the rotation-free shell element and corotational frame. The bending and membrane strains of the shell have been simplified by isolating deformational displacements from the detailed description of rigid body motion. The consistent stiffness matrix can be obtained easily in this form of shell model. To model the multibody system consisting of the presented shells, joint kinematic constraints including translational and rotational constraints are deduced in the context of geometric nonlinear rotation-free element. A simple node-to-surface contact discretization and penalty method are adopted for contacts between shells. A series of analyses for multibody system dynamics are presented to validate the proposed formulation. Furthermore, the deployment of a large scaled solar array is presented to verify the comprehensive performance of the nonlinear shell model.
Data and methodological problems in establishing state gasoline-conservation targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, D.L.; Walton, G.H.
The Emergency Energy Conservation Act of 1979 gives the President the authority to set gasoline-conservation targets for states in the event of a supply shortage. This paper examines data and methodological problems associated with setting state gasoline-conservation targets. The target-setting method currently used is examined and found to have some flaws. Ways of correcting these deficiencies through the use of Box-Jenkins time-series analysis are investigated. A successful estimation of Box-Jenkins models for all states included the estimation of the magnitude of the supply shortages of 1979 in each state and a preliminary estimation of state short-run price elasticities, which were found to vary about a median value of -0.16. The time-series models identified were very simple in structure and lent support to the simple consumption growth model assumed by the current target method. The authors conclude that the flaws in the current method can be remedied either by replacing the current procedures with time-series models or by using the models in conjunction with minor modifications of the current method.
Parachute Models Used in the Mars Science Laboratory Entry, Descent, and Landing Simulation
NASA Technical Reports Server (NTRS)
Cruz, Juan R.; Way, David W.; Shidner, Jeremy D.; Davis, Jody L.; Powell, Richard W.; Kipp, Devin M.; Adams, Douglas S.; Witkowski, Al; Kandis, Mike
2013-01-01
An end-to-end simulation of the Mars Science Laboratory (MSL) entry, descent, and landing (EDL) sequence was created at the NASA Langley Research Center using the Program to Optimize Simulated Trajectories II (POST2). This simulation is capable of providing numerous MSL system and flight software responses, including Monte Carlo-derived statistics of these responses. The MSL POST2 simulation includes models of EDL system elements, including those related to the parachute system. Among these there are models for the parachute geometry, mass properties, deployment, inflation, opening force, area oscillations, aerodynamic coefficients, apparent mass, interaction with the main landing engines, and off-loading. These models were kept as simple as possible, considering the overall objectives of the simulation. The main purpose of this paper is to describe these parachute system models to the extent necessary to understand how they work and some of their limitations. A list of lessons learned during the development of the models and simulation is provided. Future improvements to the parachute system models are proposed.
NASA Astrophysics Data System (ADS)
Karandish, Fatemeh; Šimůnek, Jiří
2016-12-01
Soil water content (SWC) is a key factor in optimizing the usage of water resources in agriculture since it provides information to make an accurate estimation of crop water demand. Methods for predicting SWC that have simple data requirements are needed to achieve an optimal irrigation schedule, especially for various water-saving irrigation strategies that are required to resolve both food and water security issues under conditions of water shortages. Thus, a two-year field investigation was carried out to provide a dataset to compare the effectiveness of HYDRUS-2D, a physically-based numerical model, with various machine-learning models, including Multiple Linear Regressions (MLR), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), and Support Vector Machines (SVM), for simulating time series of SWC data under water stress conditions. SWC was monitored using TDRs during the maize growing seasons of 2010 and 2011. Eight combinations of six, simple, independent parameters, including pan evaporation and average air temperature as atmospheric parameters, cumulative growth degree days (cGDD) and crop coefficient (Kc) as crop factors, and water deficit (WD) and irrigation depth (In) as crop stress factors, were adopted for the estimation of SWCs in the machine-learning models. Having Root Mean Square Errors (RMSE) in the range of 0.54-2.07 mm, HYDRUS-2D ranked first for the SWC estimation, while the ANFIS and SVM models with input datasets of cGDD, Kc, WD and In ranked next with RMSEs ranging from 1.27 to 1.9 mm and mean bias errors of -0.07 to 0.27 mm, respectively. However, the MLR models did not perform well for SWC forecasting, mainly due to non-linear changes of SWCs under the irrigation process. The results demonstrated that despite requiring only simple input data, the ANFIS and SVM models could be favorably used for SWC predictions under water stress conditions, especially when there is a lack of data. 
However, process-based numerical models are undoubtedly a better choice for predicting SWCs with lower uncertainties when required data are available, and thus for designing water saving strategies for agriculture and for other environmental applications requiring estimates of SWCs.
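The model ranking above rests on two skill metrics, RMSE and mean bias error. A minimal sketch of both, using hypothetical soil-water-content values (the series below are illustrative, not data from the study):

```python
import math

def rmse(observed, predicted):
    """Root Mean Square Error between paired series."""
    n = len(observed)
    return math.sqrt(sum((p - o) ** 2 for o, p in zip(observed, predicted)) / n)

def mean_bias_error(observed, predicted):
    """Mean Bias Error: positive values indicate systematic overprediction."""
    n = len(observed)
    return sum(p - o for o, p in zip(observed, predicted)) / n

# Hypothetical SWC series (mm), for illustration only
obs = [25.0, 27.5, 30.0, 28.0]
pred = [24.0, 28.5, 31.0, 27.0]
print(rmse(obs, pred))             # 1.0
print(mean_bias_error(obs, pred))  # 0.0 (errors cancel)
```

A near-zero bias with a nonzero RMSE, as in this toy case, is why the study reports both measures: scatter and systematic offset are separate failure modes.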
Fattorini, Simone
2006-08-01
Any method of identifying hotspots should take into account the effect of area on species richness. I examined the importance of the species-area relationship in determining tenebrionid (Coleoptera: Tenebrionidae) hotspots on the Aegean Islands (Greece). Thirty-two islands and 170 taxa (species and subspecies) were included in this study. I tested several species-area relationship models with linear and nonlinear regressions, including power, exponential, negative exponential, logistic, Gompertz, Weibull, Lomolino, and He-Legendre functions. Islands with positive residuals were identified as hotspots. I also analyzed the values of the C parameter of the power function and the simple species-area ratios. Species richness was significantly correlated with island area for all models. The power function model was the most convenient one. Most functions, however, identified certain islands as hotspots. The importance of endemics in insular biotas should be evaluated carefully because they are of high conservation concern. The simple use of the species-area relationship can be problematic when areas with no endemics are included. Therefore, the importance of endemics should be evaluated according to different methods, such as percentages, to take into account different levels of endemism and different kinds of "endemics" (e.g., endemic to single islands vs. endemic to the archipelago). Because the species-area relationship is a key pattern in ecology, my findings can be applied at broader scales.
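The power-function approach above (fit S = C·A^z, then flag islands with positive residuals as hotspots) can be sketched by least squares on log10-transformed data; the island areas and counts below are synthetic, not the Aegean data set:

```python
import math

def fit_power_sar(areas, species):
    """Fit the power species-area relationship S = C * A**z by ordinary
    least squares on log10-transformed data. Returns (C, z)."""
    x = [math.log10(a) for a in areas]
    y = [math.log10(s) for s in species]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    z = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    c = 10 ** (my - z * mx)
    return c, z

# Synthetic islands obeying an exact power law S = 3 * A**0.25
areas = [1.0, 10.0, 100.0, 1000.0]
species = [3 * a ** 0.25 for a in areas]
c, z = fit_power_sar(areas, species)
print(round(c, 3), round(z, 3))  # 3.0 0.25
```

With real data, islands whose observed richness lies above the fitted curve (positive log-residuals) would be the hotspot candidates.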
Pyrotechnic modeling for the NSI and pin puller
NASA Technical Reports Server (NTRS)
Powers, Joseph M.; Gonthier, Keith A.
1993-01-01
A discussion concerning the modeling of pyrotechnically driven actuators is presented in viewgraph format. The following topics are discussed: literature search, constitutive data for full-scale model, simple deterministic model, observed phenomena, and results from simple model.
Probability, statistics, and computational science.
Beerenwinkel, Niko; Siebourg, Juliane
2012-01-01
In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient inference algorithms and methods for learning these models from partially observed data. Several simple examples are given throughout the text, some of which point to models that are discussed in more detail in subsequent chapters.
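Of the models listed above, the Markov chain is the simplest to demonstrate concretely. A minimal sketch of propagating a distribution to its stationary limit (the two-state transition matrix below is a hypothetical example, not one from the chapter):

```python
def markov_step(dist, P):
    """One step of a Markov chain: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Hypothetical 2-state transition matrix (rows sum to 1)
P = [[0.9, 0.1],
     [0.5, 0.5]]
dist = [1.0, 0.0]            # start with all mass in state 0
for _ in range(100):         # iterate toward the stationary distribution
    dist = markov_step(dist, P)
print([round(p, 4) for p in dist])  # ≈ [0.8333, 0.1667]
```

The limit satisfies πP = π; hidden Markov models add an emission layer on top of exactly this transition structure.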
NASA Astrophysics Data System (ADS)
Wong, Tony E.; Bakker, Alexander M. R.; Ruckert, Kelsey; Applegate, Patrick; Slangen, Aimée B. A.; Keller, Klaus
2017-07-01
Simple models can play pivotal roles in the quantification and framing of uncertainties surrounding climate change and sea-level rise. They are computationally efficient, transparent, and easy to reproduce. These qualities also make simple models useful for the characterization of risk. Simple model codes are increasingly distributed as open source, as well as actively shared and guided. Alas, computer codes used in the geosciences can often be hard to access, run, modify (e.g., with regards to assumptions and model components), and review. Here, we describe the simple model framework BRICK (Building blocks for Relevant Ice and Climate Knowledge) v0.2 and its underlying design principles. The paper adds detail to an earlier published model setup and discusses the inclusion of a land water storage component. The framework largely builds on existing models and allows for projections of global mean temperature as well as regional sea levels and coastal flood risk. BRICK is written in R and Fortran. BRICK gives special attention to the model values of transparency, accessibility, and flexibility in order to mitigate the above-mentioned issues while maintaining a high degree of computational efficiency. We demonstrate the flexibility of this framework through simple model intercomparison experiments. Furthermore, we demonstrate that BRICK is suitable for risk assessment applications by using a didactic example in local flood risk management.
Current Status and Challenges of Atmospheric Data Assimilation
NASA Astrophysics Data System (ADS)
Atlas, R. M.; Gelaro, R.
2016-12-01
The issues of modern atmospheric data assimilation are fairly simple to comprehend but difficult to address, involving the combination of literally billions of model variables and tens of millions of observations daily. In addition to traditional meteorological variables such as wind, temperature, pressure, and humidity, model state vectors are being expanded to include explicit representation of precipitation, clouds, aerosols and atmospheric trace gases. At the same time, model resolutions are approaching single-kilometer scales globally and new observation types have error characteristics that are increasingly non-Gaussian. This talk describes the current status and challenges of atmospheric data assimilation, including an overview of current methodologies, the difficulty of estimating error statistics, and progress toward coupled earth system analyses.
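The core operation, blending a model background with an observation weighted by their error statistics, reduces in the scalar case to a one-line optimal-interpolation update. A sketch with hypothetical error variances (real systems do this for billions of variables at once):

```python
def analysis_update(xb, y, var_b, var_o):
    """Scalar optimal-interpolation update: blend background xb with
    observation y, weighted by error variances. Returns (analysis, gain)."""
    k = var_b / (var_b + var_o)    # gain: how much to trust the observation
    xa = xb + k * (y - xb)         # analysis = background + gain * innovation
    return xa, k

# Hypothetical example: background temperature 280 K, observation 282 K,
# equal background and observation error variances (1 K^2 each)
xa, k = analysis_update(280.0, 282.0, 1.0, 1.0)
print(xa, k)  # 281.0 0.5
```

Estimating `var_b` and `var_o` well is exactly the "difficulty of estimating error statistics" the talk refers to; with equal variances the analysis simply splits the difference.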
Szilágyi, N; Kovács, R; Kenyeres, I; Csikor, Zs
2013-01-01
Biofilm development in a fixed-bed biofilm reactor system performing municipal wastewater treatment was monitored, with the aim of accumulating colonization and maximum-biofilm-mass data usable in engineering practice for process design. An initial 6-month experimental period was selected, during which biofilm formation and reactor performance were monitored. The results were analyzed by two methods: for simple, steady-state process design purposes, the maximum biofilm mass on carriers versus influent load and a time constant of biofilm growth were determined, whereas for design approaches using dynamic models, a simple biofilm mass prediction model including attachment and detachment mechanisms was selected and fitted to the experimental data. According to a detailed statistical analysis, the collected data did not allow us to determine both the time constant of biofilm growth and the maximum biofilm mass on carriers at the same time. The observed maximum biofilm mass could be determined with a reasonable error and ranged between 438 gTS/m(2) carrier surface and 843 gTS/m(2), depending on influent load and hydrodynamic conditions. The parallel analysis of the attachment-detachment model showed that the experimental data set allowed us to determine the attachment rate coefficient, which was in the range of 0.05-0.4 m d(-1), depending on influent load and hydrodynamic conditions.
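A generic attachment-detachment balance of the kind fitted above can be sketched as dM/dt = k_att·X − k_det·M, where M is areal biofilm mass, X the bulk solids concentration, k_att an attachment coefficient (m/d, consistent with the range reported), and k_det a first-order detachment rate. All numbers below are hypothetical, chosen only so that the steady state lands inside the observed 438-843 gTS/m² range; this is not the paper's fitted model:

```python
def simulate_biofilm(m0, x_bulk, k_att, k_det, dt, steps):
    """Forward-Euler integration of dM/dt = k_att * X - k_det * M,
    a generic attachment-detachment balance for areal biofilm mass M."""
    m = m0
    for _ in range(steps):
        m += dt * (k_att * x_bulk - k_det * m)
    return m

# Hypothetical parameters: k_att = 0.2 m/d, bulk solids 100 g/m^3,
# detachment 0.04 1/d -> steady state M = 0.2 * 100 / 0.04 = 500 gTS/m^2
m = simulate_biofilm(m0=0.0, x_bulk=100.0, k_att=0.2, k_det=0.04,
                     dt=0.1, steps=5000)
print(round(m, 1))  # ≈ 500.0, the analytic steady state
```

The time constant of growth in this form is 1/k_det, which illustrates why the data could pin down the attachment coefficient without constraining both the time constant and the asymptotic mass independently.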
Accounting for nitrogen fixation in simple models of lake nitrogen loading/export.
Ruan, Xiaodan; Schellenger, Frank; Hellweger, Ferdi L
2014-05-20
Coastal eutrophication, an important global environmental problem, is primarily caused by excess nitrogen and management efforts consequently focus on lowering watershed N export (e.g., by reducing fertilizer use). Simple quantitative models are needed to evaluate alternative scenarios at the watershed scale. Existing models generally assume that, for a specific lake/reservoir, a constant fraction of N loading is exported downstream. However, N fixation by cyanobacteria may increase when the N loading is reduced, which may change the (effective) fraction of N exported. Here we present a model that incorporates this process. The model (Fixation and Export of Nitrogen from Lakes, FENL) is based on a steady-state mass balance with loading, output, loss/retention, and N fixation, where the amount fixed is a function of the N/P ratio of the loading (i.e., when N/P is less than a threshold value, N is fixed). Three approaches are used to parametrize and evaluate the model, including microcosm lab experiments, lake field observations/budgets and lake ecosystem model applications. Our results suggest that N export will not be reduced proportionally with N loading, which needs to be considered when evaluating management scenarios.
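The qualitative behavior described, fixation partially offsetting a loading reduction so that export does not fall proportionally, can be sketched with a steady-state budget. The functional form and every number below are hypothetical illustrations of that idea, not the published FENL parameterization:

```python
def n_export(n_load, p_load, retention_frac, np_threshold, fix_frac):
    """Sketch of a FENL-style steady-state budget (hypothetical form):
    when the loading N/P ratio falls below np_threshold, cyanobacteria fix
    a fraction fix_frac of the nitrogen deficit; a constant fraction of the
    total N input is then retained in the lake, and the rest is exported."""
    deficit = max(0.0, np_threshold * p_load - n_load)
    n_fixed = fix_frac * deficit
    return (1.0 - retention_frac) * (n_load + n_fixed)

# Hypothetical loads: halving N loading does not halve export,
# because fixation kicks in below the N/P threshold
full = n_export(n_load=100.0, p_load=10.0, retention_frac=0.4,
                np_threshold=7.0, fix_frac=0.8)
half = n_export(n_load=50.0, p_load=10.0, retention_frac=0.4,
                np_threshold=7.0, fix_frac=0.8)
print(full, half)  # export falls by less than half
```

In this toy case export drops from 60.0 to 39.6 when loading is cut from 100 to 50, which is the non-proportional response the abstract warns management scenarios must account for.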
Analyzing C2 Structures and Self-Synchronization with Simple Computational Models
2011-06-01
Presented at the 16th ICCRTS, “Collective C2 in Multinational Civil-Military Operations”: Analyzing C2 Structures and Self-Synchronization with Simple Computational Models. The Kuramoto model, though with some serious limitations, provides a representation of information flow and self-synchronization in an …
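The Kuramoto model invoked in this report is a standard coupled-oscillator model of synchronization and fits in a few lines; the network size, coupling strength, and frequency spread below are illustrative choices, not values from the report:

```python
import math
import random

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of the all-to-all Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(theta)
    return [th + dt * (w + (K / n) * sum(math.sin(tj - th) for tj in theta))
            for th, w in zip(theta, omega)]

def order_parameter(theta):
    """r in [0, 1]: r near 1 means the phases are synchronized."""
    n = len(theta)
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

random.seed(0)
n = 20
theta = [random.uniform(0, 2 * math.pi) for _ in range(n)]
omega = [random.gauss(0.0, 0.1) for _ in range(n)]  # similar natural frequencies
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K=2.0, dt=0.01)
print(round(order_parameter(theta), 2))  # strong coupling drives r toward 1
```

Read as a C2 analogy, each oscillator is a node in the organization and the order parameter r measures self-synchronization; the "serious limitations" the report notes include exactly this abstraction of information flow down to a single phase per node.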
Computing local edge probability in natural scenes from a population of oriented simple cells
Ramachandra, Chaithanya A.; Mel, Bartlett W.
2013-01-01
A key computation in visual cortex is the extraction of object contours, where the first stage of processing is commonly attributed to V1 simple cells. The standard model of a simple cell—an oriented linear filter followed by a divisive normalization—fits a wide variety of physiological data, but is a poor performing local edge detector when applied to natural images. The brain's ability to finely discriminate edges from nonedges therefore likely depends on information encoded by local simple cell populations. To gain insight into the corresponding decoding problem, we used Bayes's rule to calculate edge probability at a given location/orientation in an image based on a surrounding filter population. Beginning with a set of ∼ 100 filters, we culled a subset that was maximally informative about edges, and minimally correlated to allow factorization of the joint on- and off-edge likelihood functions. Key features of our approach include a new, efficient method for ground-truth edge labeling, an emphasis on achieving filter independence, including a focus on filters in the region orthogonal rather than tangential to an edge, and the use of a customized parametric model to represent the individual filter likelihood functions. The resulting population-based edge detector has zero parameters, calculates edge probability based on a sum of surrounding filter influences, is much more sharply tuned than the underlying linear filters, and effectively captures fine-scale edge structure in natural scenes. Our findings predict nonmonotonic interactions between cells in visual cortex, wherein a cell may for certain stimuli excite and for other stimuli inhibit the same neighboring cell, depending on the two cells' relative offsets in position and orientation, and their relative activation levels. PMID:24381295
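The decoding step described above, Bayes's rule over factorized filter likelihoods, can be illustrated with a minimal sketch. The Gaussian response models, their parameters, and the edge prior below are hypothetical stand-ins for the paper's customized parametric likelihoods:

```python
import math

def edge_probability(responses, loglik_edge, loglik_nonedge, prior_edge):
    """Bayes's rule with independence-assumed (factorized) filter likelihoods:
    the joint log-likelihood is the sum of per-filter log-likelihoods."""
    log_on = math.log(prior_edge) + sum(loglik_edge(r) for r in responses)
    log_off = math.log(1 - prior_edge) + sum(loglik_nonedge(r) for r in responses)
    m = max(log_on, log_off)  # log-sum-exp shift for numerical stability
    return math.exp(log_on - m) / (math.exp(log_on - m) + math.exp(log_off - m))

def gauss_loglik(mu, sigma):
    """Hypothetical Gaussian response model for a single filter."""
    return lambda r: (-0.5 * ((r - mu) / sigma) ** 2
                      - math.log(sigma * math.sqrt(2 * math.pi)))

# Edges assumed to drive filters toward 1, non-edges toward 0 (illustrative)
p = edge_probability(
    responses=[0.9, 0.8, 0.7],
    loglik_edge=gauss_loglik(1.0, 0.3),
    loglik_nonedge=gauss_loglik(0.0, 0.3),
    prior_edge=0.05,
)
print(round(p, 3))  # strong responses overwhelm the low edge prior
```

The factorization is why the paper emphasizes selecting minimally correlated filters: the per-filter sum above is only valid to the extent the responses are conditionally independent.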
Entropic Lattice Boltzmann Methods
2001-12-10
We apply the entropic lattice Boltzmann method to a simple five-velocity model of fluid dynamics in one dimension, first considered by Renda et al. in 1997 [14]. Here the geometric picture involves a four-dimensional polytope. Constant terms are included in an extra column of the matrix, using the device of appending 1 to the column vector of unknowns.
Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis
Beniwal, Ankit; Lewicki, Marek; Wells, James D.; ...
2017-08-23
We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. Here, we discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.
Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis
NASA Astrophysics Data System (ADS)
Beniwal, Ankit; Lewicki, Marek; Wells, James D.; White, Martin; Williams, Anthony G.
2017-08-01
We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. We discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.
Signal transmission competing with noise in model excitable brains
NASA Astrophysics Data System (ADS)
Marro, J.; Mejias, J. F.; Pinamonti, G.; Torres, J. J.
2013-01-01
This is a short review of recent studies in our group on how weak signals may efficiently propagate in a system with noise-induced excitation-inhibition competition which adapts to the activity at short-time scales and thus induces excitable conditions. Our numerical results on simple mathematical models should hold for many complex networks in nature, including some brain cortical areas. In particular, they serve us here to interpret available psycho-technical data.
Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beniwal, Ankit; Lewicki, Marek; Wells, James D.
We analyse a simple extension of the SM with just an additional scalar singlet coupled to the Higgs boson. Here, we discuss the possible probes for electroweak baryogenesis in this model including collider searches, gravitational wave and direct dark matter detection signals. We show that a large portion of the model parameter space exists where the observation of gravitational waves would allow detection while the indirect collider searches would not.
Seasonal ENSO forecasting: Where does a simple model stand amongst other operational ENSO models?
NASA Astrophysics Data System (ADS)
Halide, Halmar
2017-01-01
We apply a simple linear multiple-regression model called IndOzy for predicting ENSO at up to 7 seasonal lead times. The model uses five predictors, the past seasonal Niño 3.4 ENSO indices, derived from chaos theory, and it was rolling-validated to give one-step-ahead forecasts. Model skill was evaluated against data from the season of May-June-July (MJJ) 2003 to November-December-January (NDJ) 2015/2016. Three skill measures (Pearson correlation, RMSE, and Euclidean distance) were used for forecast verification. The skill of this simple model was then compared to those of the combined statistical and dynamical models compiled on the IRI (International Research Institute) website. It was found that the simple model produced useful ENSO predictions only up to 3 seasonal leads, while the IRI statistical and dynamical models remained useful up to 4 and 6 seasonal leads, respectively. Even with its short-range seasonal prediction skill, however, the simple model still has the potential to give ENSO-derived tailored products such as probabilistic measures of precipitation and air temperature. Both meteorological conditions affect the presence of wild-land fire hot-spots in Sumatera and Kalimantan. It is suggested that, to improve its long-range skill, the simple IndOzy model needs to incorporate a nonlinear model such as an artificial neural network technique.
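An IndOzy-style one-step-ahead forecast is just a linear combination of the five most recent seasonal Niño 3.4 values. The coefficients and anomaly values below are hypothetical (the paper does not publish them in the abstract); the sketch only shows the functional form:

```python
def forecast_enso(lags, coeffs, intercept):
    """One-step-ahead forecast from the five most recent seasonal Nino 3.4
    anomalies (lagged multiple linear regression, IndOzy-style)."""
    return intercept + sum(c * x for c, x in zip(coeffs, lags))

# Hypothetical coefficients weighting persistence of the most recent season
coeffs = [0.9, 0.1, -0.05, 0.02, 0.01]   # lag-1 ... lag-5
lags = [1.2, 1.0, 0.8, 0.5, 0.2]          # recent anomalies (K), newest first
print(round(forecast_enso(lags, coeffs, intercept=0.0), 3))  # 1.152
```

Rolling validation, as in the paper, would refit the coefficients on each training window and forecast only the next unseen season.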
Heuristics for the Hodgkin-Huxley system.
Hoppensteadt, Frank
2013-09-01
Hodgkin and Huxley (HH) discovered that voltages control ionic currents in nerve membranes. This led them to describe electrical activity in a neuronal membrane patch in terms of an electronic circuit whose characteristics were determined using empirical data. Due to the complexity of this model, a variety of heuristics, including relaxation oscillator circuits and integrate-and-fire models, have been used to investigate activity in neurons, and these simpler models have been successful in suggesting experiments and explaining observations. Connections between most of the simpler models had not been made clear until recently. Shown here are connections between these heuristics and the full HH model. In particular, we study a new model (Type III circuit): It includes the van der Pol-based models; it can be approximated by a simple integrate-and-fire model; and it creates voltages and currents that correspond, respectively, to the h and V components of the HH system. Copyright © 2012 Elsevier Inc. All rights reserved.
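The simplest heuristic in the hierarchy above, the integrate-and-fire model, can be sketched directly; the threshold, time constant, and drive below are generic textbook-style values, not parameters from the paper:

```python
def lif_spike_times(i_ext, v_thresh=1.0, v_reset=0.0, tau=10.0,
                    dt=0.1, t_max=100.0):
    """Leaky integrate-and-fire neuron: tau * dV/dt = -V + I, with a spike
    and reset whenever V crosses threshold. A common reduction of
    Hodgkin-Huxley-style dynamics (parameters hypothetical)."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        v += dt * (-v + i_ext) / tau   # forward-Euler membrane update
        if v >= v_thresh:
            spikes.append(round(t, 1))
            v = v_reset
        t += dt
    return spikes

spikes = lif_spike_times(i_ext=1.5)  # suprathreshold drive -> periodic firing
print(len(spikes), spikes[:2])
```

This reduction keeps only a voltage-like variable and discards the gating variables; the Type III circuit discussed in the paper sits between this extreme and the full HH system.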
An empirical model for estimating annual consumption by freshwater fish populations
Liao, H.; Pierce, C.L.; Larscheid, J.G.
2005-01-01
Population consumption is an important process linking predator populations to their prey resources. Simple tools are needed to enable fisheries managers to estimate population consumption. We assembled 74 individual estimates of annual consumption by freshwater fish populations and their mean annual population size, 41 of which also included estimates of mean annual biomass. The data set included 14 freshwater fish species from 10 different bodies of water. From this data set we developed two simple linear regression models predicting annual population consumption. Log-transformed population size explained 94% of the variation in log-transformed annual population consumption. Log-transformed biomass explained 98% of the variation in log-transformed annual population consumption. We quantified the accuracy of our regressions and three alternative consumption models as the mean percent difference from observed (bioenergetics-derived) estimates in a test data set. Predictions from our population-size regression matched observed consumption estimates poorly (mean percent difference = 222%). Predictions from our biomass regression matched observed consumption reasonably well (mean percent difference = 24%). The biomass regression was superior to an alternative model, similar in complexity, and comparable to two alternative models that were more complex and difficult to apply. Our biomass regression model, log10(consumption) = 0.5442 + 0.9962 × log10(biomass), will be a useful tool for fishery managers, enabling them to make reasonably accurate annual population consumption predictions from mean annual biomass estimates. © Copyright by the American Fisheries Society 2005.
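The published regression is simple enough to apply directly. A sketch using the coefficients stated in the abstract (units follow the fitted data set; the input value below is illustrative):

```python
import math

def annual_consumption(biomass):
    """Predict annual population consumption from mean annual biomass using
    the abstract's regression: log10(C) = 0.5442 + 0.9962 * log10(B)."""
    return 10 ** (0.5442 + 0.9962 * math.log10(biomass))

# Illustrative input: a mean annual biomass of 100 (same units as the fit)
print(round(annual_consumption(100.0), 1))  # ≈ 344.0
```

Since the slope is close to 1, predicted consumption scales nearly proportionally with biomass, at roughly 3.4 times the biomass value.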
NASA Astrophysics Data System (ADS)
Donker, N. H. W.
2001-01-01
A hydrological model (YWB, yearly water balance) has been developed to model the daily rainfall-runoff relationship of the 202 km² Teba river catchment, located in semi-arid south-eastern Spain. The period of available data (1976-1993) includes some very rainy years with intensive storms (responsible for flooding parts of the town of Malaga) and also some very dry years. The YWB model is in essence a simple tank model in which the catchment is subdivided into a limited number of meaningful hydrological units. Instead of generating per-unit surface runoff resulting from infiltration excess, runoff has been made the result of storage excess. Actual evapotranspiration is obtained by means of curves, included in the software, representing the relationship between the ratio of actual to potential evapotranspiration as a function of soil moisture content for three soil texture classes. The total runoff generated is split between base flow and surface runoff according to a given baseflow index. The two components are routed separately and subsequently joined. A large number of sequential years can be processed, and the results of each year are summarized by a water balance table and a daily based rainfall runoff time series. An attempt has been made to restrict the amount of input data to the minimum. Interactive manual calibration is advocated in order to allow better incorporation of field evidence and the experience of the model user. Field observations allowed for an approximate calibration at the hydrological unit level.
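The storage-excess idea at the core of a tank model of this kind can be sketched as a single daily step; the storage capacity, baseflow index, and storm values below are hypothetical, and the real YWB model runs many such tanks (one per hydrological unit) with routing:

```python
def tank_step(storage, rain, pet, capacity, baseflow_index):
    """One daily step of a single-tank water balance (simplified sketch):
    runoff is generated by storage excess rather than infiltration excess,
    and split into baseflow and surface runoff. All values in mm."""
    aet = min(pet, storage + rain)         # crude actual-ET limit
    storage = storage + rain - aet
    excess = max(0.0, storage - capacity)  # storage excess becomes runoff
    storage -= excess
    baseflow = baseflow_index * excess
    surface = excess - baseflow
    return storage, surface, baseflow

# Hypothetical storm day falling on a nearly full tank
s, qs, qb = tank_step(storage=95.0, rain=40.0, pet=3.0,
                      capacity=100.0, baseflow_index=0.3)
print(s, qs, qb)  # tank fills to capacity; 32 mm of excess split 70/30
```

In the full model, the actual-ET line would be replaced by the texture-class curves the abstract describes, and the two runoff components would be routed separately before being rejoined.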
Jack, Barbara A; O'Brien, Mary R; Kirton, Jennifer A; Marley, Kate; Whelan, Alison; Baldry, Catherine R; Groves, Karen E
2013-12-01
Good communication skills in healthcare professionals are acknowledged as a core competency. The consequences of poor communication are well recognised, with far-reaching costs including reduced treatment compliance, higher psychological morbidity, incorrect or delayed diagnoses, and increased complaints. The Simple Skills Secrets is a visual, easily memorised model of communication for healthcare staff to respond to the distress or unanswerable questions of patients, families and colleagues. The aim was to explore the impact of the Simple Skills Secrets model of communication training on the general healthcare workforce. The evaluation methodology encompassed quantitative pre- and post-course testing of confidence and willingness to have conversations with distressed patients, carers and colleagues, and qualitative semi-structured telephone interviews with participants 6-8 weeks post course. During the evaluation, 153 staff undertook the training, of whom 149 completed the pre- and post-training questionnaire. A purposive sampling approach was adopted for the follow-up qualitative interviews, and 14 agreed to participate. There was a statistically significant improvement in both willingness and confidence for all categories (overall confidence score: t(148) = -15.607, p < 0.05; overall willingness score: t(148) = -10.878, p < 0.05), with the greatest improvement in confidence in communicating with carers (pre-course mean 6.171 to post-course mean 8.171). There was no statistically significant difference between the registered and support staff. Several themes were obtained from the qualitative data, including: a method of communicating differently, a structured approach, thinking differently and additional skills. The value of the model in clinical practice was reported. This model can be suggested as increasing the confidence of staff in dealing with a myriad of situations which, if handled appropriately, can lead to increased patient and carer satisfaction.
Empowering staff appears to have increased their willingness to undertake these conversations, which could lead to earlier intervention and minimise distress. Copyright © 2013 Elsevier Ltd. All rights reserved.
Toward unraveling the complexity of simple epithelial keratins in human disease.
Omary, M Bishr; Ku, Nam-On; Strnad, Pavel; Hanada, Shinichiro
2009-07-01
Simple epithelial keratins (SEKs) are found primarily in single-layered simple epithelia and include keratin 7 (K7), K8, K18-K20, and K23. Genetically engineered mice that lack SEKs or overexpress mutant SEKs have helped illuminate several keratin functions and served as important disease models. Insight into the contribution of SEKs to human disease has indicated that K8 and K18 are the major constituents of Mallory-Denk bodies, hepatic inclusions associated with several liver diseases, and are essential for inclusion formation. Furthermore, mutations in the genes encoding K8, K18, and K19 predispose individuals to a variety of liver diseases. Hence, as we discuss here, the SEK cytoskeleton is involved in the orchestration of several important cellular functions and contributes to the pathogenesis of human liver disease.
Toward unraveling the complexity of simple epithelial keratins in human disease
Omary, M. Bishr; Ku, Nam-On; Strnad, Pavel; Hanada, Shinichiro
2009-01-01
Simple epithelial keratins (SEKs) are found primarily in single-layered simple epithelia and include keratin 7 (K7), K8, K18–K20, and K23. Genetically engineered mice that lack SEKs or overexpress mutant SEKs have helped illuminate several keratin functions and served as important disease models. Insight into the contribution of SEKs to human disease has indicated that K8 and K18 are the major constituents of Mallory-Denk bodies, hepatic inclusions associated with several liver diseases, and are essential for inclusion formation. Furthermore, mutations in the genes encoding K8, K18, and K19 predispose individuals to a variety of liver diseases. Hence, as we discuss here, the SEK cytoskeleton is involved in the orchestration of several important cellular functions and contributes to the pathogenesis of human liver disease. PMID:19587454
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Klein, R. H.
1975-01-01
As part of a comprehensive program exploring driver/vehicle system response in lateral steering tasks, driver/vehicle system describing functions and other dynamic data have been gathered in several milieux. These include a simple fixed-base simulator with an elementary roadway-delineation-only display; a fixed-base, statically operating automobile with a terrain-model-based, wide-angle projection system display; and a full-scale moving-base automobile operating on the road. Dynamic data from the two fixed-base simulators compared favorably, implying that the impoverished visual scene, lack of engine noise, and simplified steering wheel feel characteristics in the simple simulator did not induce significant driver dynamic behavior variations. The fixed-base vs. moving-base comparisons showed substantially greater crossover frequencies and phase margins on the road course.
Development of the Concept of Energy Conservation using Simple Experiments for Grade 10 Students
NASA Astrophysics Data System (ADS)
Rachniyom, S.; Toedtanya, K.; Wuttiprom, S.
2017-09-01
The purpose of this research was to develop students' concept of energy conservation and their retention of it. Activities included simple and easy experiments that considered energy transformation from potential to kinetic energy. The participants were 30 purposively selected grade 10 students in the second semester of the 2016 academic year. The research tools consisted of learning lesson plans and a learning achievement test. Results showed that the experiments worked well and were appropriate as learning activities. The students' achievement scores increased significantly at the .05 statistical level, the students' retention rates were at a high level, and learning behaviour was at a good level. These simple experiments allowed students to learn by demonstrating to their peers and encouraged them to use familiar models to explain phenomena in daily life.
Fractional poisson--a simple dose-response model for human norovirus.
Messner, Michael J; Berger, Philip; Nappier, Sharon P
2014-10-01
This study utilizes old and new Norovirus (NoV) human challenge data to model the dose-response relationship for human NoV infection. The combined data set is used to update estimates from a previously published beta-Poisson dose-response model that includes parameters for virus aggregation and for a beta-distribution that describes variable susceptibility among hosts. The quality of the beta-Poisson model is examined and a simpler model is proposed. The new model (fractional Poisson) characterizes hosts as either perfectly susceptible or perfectly immune, requiring a single parameter (the fraction of perfectly susceptible hosts) in place of the two-parameter beta-distribution. A second parameter is included to account for virus aggregation in the same fashion as it is added to the beta-Poisson model. Infection probability is simply the product of the probability of nonzero exposure (at least one virus or aggregate is ingested) and the fraction of susceptible hosts. The model is computationally simple and appears to be well suited to the data from the NoV human challenge studies. The model's deviance is similar to that of the beta-Poisson, but with one parameter, rather than two. As a result, the Akaike information criterion favors the fractional Poisson over the beta-Poisson model. At low, environmentally relevant exposure levels (<100), estimation error is small for the fractional Poisson model; however, caution is advised because no subjects were challenged at such a low dose. New low-dose data would be of great value to further clarify the NoV dose-response relationship and to support improved risk assessment for environmentally relevant exposures. © 2014 Society for Risk Analysis Published 2014. This article is a U.S. Government work and is in the public domain for the U.S.A.
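The fractional Poisson relationship stated above (infection probability = probability of nonzero exposure × fraction of susceptible hosts) can be sketched in a few lines. The parameter names and the mean-aggregate-size parameterization below are illustrative assumptions for a Poisson exposure model, not values taken from the study:

```python
import math

def fractional_poisson(dose, f_susceptible, mean_aggregate_size=1.0):
    """Fractional Poisson dose-response sketch: P(infection) is the
    product of the probability of ingesting at least one virus or
    aggregate (Poisson exposure with mean dose/aggregate size) and the
    fraction of perfectly susceptible hosts."""
    p_nonzero = 1.0 - math.exp(-dose / mean_aggregate_size)
    return f_susceptible * p_nonzero
```

At high doses the exposure term saturates at one, so the predicted infection probability plateaus at the susceptible fraction, which is the defining feature of the model.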
NASA Technical Reports Server (NTRS)
Saunders, D. F.; Thomas, G. E. (Principal Investigator); Kinsman, F. E.; Beatty, D. F.
1973-01-01
The author has identified the following significant results. This study was performed to investigate applications of ERTS-1 imagery in commercial reconnaissance for mineral and hydrocarbon resources. ERTS-1 imagery collected over five areas in North America (Montana; Colorado; New Mexico-West Texas; Superior Province, Canada; and North Slope, Alaska) has been analyzed for data content including linears, lineaments, and curvilinear anomalies. Locations of these features were mapped and compared with known locations of mineral and hydrocarbon accumulations. Results were analyzed in the context of a simple-shear, block-coupling model. Data analyses have resulted in detection of new lineaments, some of which may be continental in extent, detection of many curvilinear patterns not generally seen on aerial photos, strong evidence of continental regmatic fracture patterns, and realization that geological features can be explained in terms of a simple-shear, block-coupling model. The conclusions are that ERTS-1 imagery is of great value in photogeologic/geomorphic interpretations of regional features, and the simple-shear, block-coupling model provides a means of relating data from ERTS imagery to structures that have controlled emplacement of ore deposits and hydrocarbon accumulations, thus providing a basis for a new approach for reconnaissance for mineral, uranium, gas, and oil deposits and structures.
Iwai, Sosuke; Fujiwara, Kenji; Tamura, Takuro
2016-09-01
Algal endosymbiosis is widely distributed in eukaryotes, including many protists and metazoans, and plays important roles in aquatic ecosystems by combining phagotrophy and phototrophy. To maintain a stable symbiotic relationship, the endosymbiont population size in the host must be properly regulated and maintained at a constant level; however, the mechanisms underlying the maintenance of algal endosymbionts are still largely unknown. Here we investigate the population dynamics of the unicellular ciliate Paramecium bursaria and its Chlorella-like algal endosymbiont under various experimental conditions in a simple culture system. Our results suggest that endosymbiont population size in P. bursaria was not regulated by active processes such as cell division coupling between the two organisms, or partitioning of the endosymbionts at host cell division. Regardless, endosymbiont population size was eventually adjusted to a nearly constant level once cells were grown with light and nutrients. To explain this apparent regulation of population size, we propose a simple mechanism based on the different growth properties (specifically the nutrient requirements) of the two organisms, and from this develop a mathematical model to describe the population dynamics of host and endosymbiont. The proposed mechanism and model may provide a basis for understanding the maintenance of algal endosymbionts. © 2015 Society for Applied Microbiology and John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
de la Torre, Jose Garcia; Cifre, Jose G. Hernandez; Martinez, M. Carmen Lopez
2008-01-01
This paper describes a computational exercise at undergraduate level that demonstrates the employment of Monte Carlo simulation to study the conformational statistics of flexible polymer chains, and to predict solution properties. Three simple chain models, including excluded volume interactions, have been implemented in a public-domain computer…
Let the Dogs Out: Using Bobble Head Toys to Explore Force and Motion.
ERIC Educational Resources Information Center
Foster, Andrea S.
2003-01-01
Introduces an activity in which students learn principles of force and motion, systems, and simple machines by exploring the best position of the dogs on the dashboard. Includes a sample lesson plan written around the five instructional model phases: (1) engagement; (2) exploration; (3) explanation; (4) elaboration; and (5) evaluation. (KHR)
ERIC Educational Resources Information Center
Clausen, Thomas P.
2011-01-01
The Fischer esterification reaction is ideally suited for the undergraduate organic laboratory because it is easy to carry out and often provides a suitable introduction to basic laboratory techniques including extraction, distillation, and simple spectroscopic (IR and NMR) analyses. Here, a Fischer esterification reaction is described in which the…
Data Acquisition Programming (LabVIEW): An Aid to Teaching Instrumental Analytical Chemistry.
ERIC Educational Resources Information Center
Gostowski, Rudy
A course was developed at Austin Peay State University (Tennessee) which offered an opportunity for hands-on experience with the essential components of modern analytical instruments. The course aimed to provide college students with the skills necessary to construct a simple model instrument, including the design and fabrication of electronic…
Earth Model with Laser Beam Simulating Seismic Ray Paths.
ERIC Educational Resources Information Center
Ryan, John Arthur; Handzus, Thomas Jay, Jr.
1988-01-01
Described is a simple device, that uses a laser beam to simulate P waves. It allows students to follow ray paths, reflections and refractions within the earth. Included is a set of exercises that lead students through the steps by which the presence of the outer and inner cores can be recognized. (Author/CW)
Anton TenWolde; Mark T. Bomberg
2009-01-01
Overall, despite the lack of exact input data, the use of design tools, including models, is much superior to simply following rules of thumb, and a moisture analysis should be standard procedure for any building envelope design. Exceptions can only be made for buildings in the same climate, with similar occupancy, and similar envelope construction. This chapter...
Deep-down ionization of protoplanetary discs
NASA Astrophysics Data System (ADS)
Glassgold, A. E.; Lizano, S.; Galli, D.
2017-12-01
The possible occurrence of dead zones in protoplanetary discs subject to the magneto-rotational instability highlights the importance of disc ionization. We present a closed-form theory for the deep-down ionization by X-rays at depths below the disc surface dominated by far-ultraviolet radiation. Simple analytic solutions are given for the major ion classes, electrons, atomic ions, molecular ions and negatively charged grains. In addition to the formation of molecular ions by X-ray ionization of H2 and their destruction by dissociative recombination, several key processes that operate in this region are included, e.g. charge exchange of molecular ions and neutral atoms and destruction of ions by grains. Over much of the inner disc, the vertical decrease in ionization with depth into the disc is described by simple power laws, which can easily be included in more detailed modelling of magnetized discs. The new ionization theory is used to illustrate the non-ideal magnetohydrodynamic effects of Ohmic, Hall and Ambipolar diffusion for a magnetic model of a T Tauri star disc using the appropriate Elsasser numbers.
Honda, Hidehito; Matsuka, Toshihiko; Ueda, Kazuhiro
2017-05-01
Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in terms of attribute substitution in heuristic use (Kahneman & Frederick, 2005). In this framework, it is predicted that people will rely on heuristic or knowledge-based inference depending on the subjective difficulty of the inference task. We conducted competitive tests of binary choice inference models representing simple heuristics (fluency and familiarity heuristics) and knowledge-based inference models. We found that a simple heuristic model (especially a familiarity heuristic model) explained inference patterns for subjectively difficult inference tasks, and that a knowledge-based inference model explained subjectively easy inference tasks. These results were consistent with the predictions of the attribute substitution framework. Issues concerning the usage of simple heuristics and the underlying psychological processes are discussed. Copyright © 2016 Cognitive Science Society, Inc.
Chaudhari, Mangesh I; Muralidharan, Ajay; Pratt, Lawrence R; Rempe, Susan B
2018-02-12
Progress in understanding liquid ethylene carbonate (EC) and propylene carbonate (PC) on the basis of molecular simulation, emphasizing simple models of interatomic forces, is reviewed. Results on the bulk liquids are examined from the perspective of anticipated applications to materials for electrical energy storage devices. Preliminary results on electrochemical double-layer capacitors based on carbon nanotube forests and on model solid-electrolyte interphase (SEI) layers of lithium ion batteries are considered as examples. The basic results discussed suggest that an empirically parameterized, non-polarizable force field can reproduce experimental structural, thermodynamic, and dielectric properties of EC and PC liquids with acceptable accuracy. More sophisticated force fields might include molecular polarizability and Buckingham-model description of inter-atomic overlap repulsions as extensions to Lennard-Jones models of van der Waals interactions. Simple approaches should be similarly successful also for applications to organic molecular ions in EC/PC solutions, but the important case of Li+ deserves special attention because of the particularly strong interactions of that small ion with neighboring solvent molecules. To treat the Li+ ions in liquid EC/PC solutions, we identify interaction models defined by empirically scaled partial charges for ion-solvent interactions. The empirical adjustments use more basic inputs, electronic structure calculations and ab initio molecular dynamics simulations, and also experimental results on Li+ thermodynamics and transport in EC/PC solutions. Application of such models to the mechanism of Li+ transport in glassy SEI models emphasizes the advantage of long time-scale molecular dynamics studies of these non-equilibrium materials.
Let's Go Off the Grid: Subsurface Flow Modeling With Analytic Elements
NASA Astrophysics Data System (ADS)
Bakker, M.
2017-12-01
Subsurface flow modeling with analytic elements has the major advantage that no grid or time stepping is needed. Analytic element formulations exist for steady state and transient flow in layered aquifers and unsaturated flow in the vadose zone. Analytic element models are vector-based and consist of points, lines and curves that represent specific features in the subsurface. Recent advances allow for the simulation of partially penetrating wells and multi-aquifer wells, including skin effect and wellbore storage, horizontal wells of poly-line shape including skin effect, sharp changes in subsurface properties, and surface water features with leaky beds. Input files for analytic element models are simple, short and readable, and can easily be generated from, for example, GIS databases. Future plans include the incorporation of analytic elements in parts of grid-based models where additional detail is needed. This presentation will give an overview of advanced flow features that can be modeled, many of which are implemented in free and open-source software.
Animal models to study microRNA function
Pal, Arpita S.; Kasinski, Andrea L.
2018-01-01
The discovery of the microRNAs lin-4 and let-7 as critical mediators of normal development in Caenorhabditis elegans, and their conservation throughout evolution, has spearheaded research towards identifying novel roles of microRNAs in other cellular processes. To accurately elucidate these fundamental functions, especially in the context of an intact organism, various microRNA transgenic models have been generated and evaluated. Transgenic C. elegans (worms), Drosophila melanogaster (flies), Danio rerio (zebrafish), and Mus musculus (mouse) have contributed immensely towards uncovering the roles of multiple microRNAs in cellular processes such as proliferation, differentiation, and apoptosis, pathways that are severely altered in human diseases such as cancer. The simple model organisms C. elegans, D. melanogaster, and D. rerio do not develop cancers, but have proved to be convenient systems in microRNA research, especially in characterizing the microRNA biogenesis machinery, which is often dysregulated during human tumorigenesis. The microRNA-dependent events delineated via these simple in vivo systems have been further verified in vitro and in more complex models of cancers, such as M. musculus. The focus of this review is to provide an overview of the important contributions made in the microRNA field using model organisms. The simple model systems provided the basis for the importance of microRNAs in normal cellular physiology, while the more complex animal systems provided evidence for the role of microRNA dysregulation in cancers. Highlights include an overview of the various strategies used to generate transgenic organisms and a review of the use of transgenic mice for evaluating pre-clinical efficacy of microRNA-based cancer therapeutics. PMID:28882225
A comprehensive surface-groundwater flow model
NASA Astrophysics Data System (ADS)
Arnold, Jeffrey G.; Allen, Peter M.; Bernhardt, Gilbert
1993-02-01
In this study, a simple groundwater flow and height model was added to an existing basin-scale surface water model. The linked model is: (1) watershed scale, allowing the basin to be subdivided; (2) designed to accept readily available inputs to allow general use over large regions; (3) continuous in time to allow simulation of land management, including such factors as climate and vegetation changes, pond and reservoir management, groundwater withdrawals, and stream and reservoir withdrawals. The model is described, and is validated on a 471 km² watershed near Waco, Texas. This linked model should provide a comprehensive tool for water resource managers in development and planning.
Evaluation of some random effects methodology applicable to bird ringing data
Burnham, K.P.; White, Gary C.
2002-01-01
Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S1,..., Sk; random effects can then be a useful model: Si = E(S) + εi. Here, the temporal variation in survival probability is treated as random with variance E(ε²) = σ². This random effects model can now be fit in program MARK. Resultant inferences include point and interval estimation for process variation, σ², and estimation of E(S) and var(Ê(S)), where the latter includes a component for σ² as well as the traditional sampling-variance component. Furthermore, the random effects model leads to shrinkage estimates, S̃i, as improved (in mean square error) estimators of Si compared to the MLE, Ŝi, from the unrestricted time-effects model. Appropriate confidence intervals based on the S̃i are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of σ̂², confidence interval coverage on σ², coverage and mean square error comparisons for inference about Si based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: Si ≡ S (no effects), Si = E(S) + εi (random effects), and S1,..., Sk (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than fixed effects MLE for the Si.
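The shrinkage idea behind the random effects model Si = E(S) + εi can be sketched in a toy form: each annual MLE is pulled toward the overall mean by the ratio of process variation to total variation. This is only an illustrative sketch assuming a known, common sampling variance; program MARK's actual estimator is more involved:

```python
import statistics

def shrinkage_estimates(s_hat, sampling_var, process_var):
    """Toy random-effects shrinkage: pull each annual survival MLE
    toward the mean of all years by the weight
    w = process_var / (process_var + sampling_var).
    w -> 0 (no real year-to-year variation) collapses all estimates
    to the mean; w -> 1 recovers the unrestricted MLEs."""
    mean_s = statistics.mean(s_hat)
    w = process_var / (process_var + sampling_var)
    return [mean_s + w * (s - mean_s) for s in s_hat]
```

The mean-square-error gain described in the abstract comes from exactly this trade: shrunken estimates are biased toward the mean but have smaller variance than the year-specific MLEs.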
NASA Astrophysics Data System (ADS)
Kelleher, Christa A.; Shaw, Stephen B.
2018-02-01
Recent research has found that hydrologic modeling over decadal time periods often requires time-variant model parameters. Most prior work has focused on assessing time variance in model parameters conceptualizing watershed features and functions. In this paper, we assess whether adding a time-variant scalar to potential evapotranspiration (PET) can be used in place of time-variant parameters. Using the HBV hydrologic model and four different simple but common PET methods (Hamon, Priestley-Taylor, Oudin, and Hargreaves), we simulated 60+ years of daily discharge on four rivers in New York State. Allowing all ten model parameters to vary in time achieved good model fits in terms of daily NSE and long-term water balance. However, allowing single model parameters to vary in time - including a scalar on PET - achieved nearly equivalent model fits across PET methods. Overall, varying a PET scalar in time is likely more physically consistent with known biophysical controls on PET as compared to varying parameters conceptualizing innate watershed properties related to soil properties such as wilting point and field capacity. This work suggests that the seeming need for time variance in innate watershed parameters may be due to overly simple evapotranspiration formulations that do not account for all factors controlling evapotranspiration over long time periods.
Comparison of geometrical shock dynamics and kinematic models for shock-wave propagation
NASA Astrophysics Data System (ADS)
Ridoux, J.; Lardjane, N.; Monasse, L.; Coulouvrat, F.
2018-03-01
Geometrical shock dynamics (GSD) is a simplified model for nonlinear shock-wave propagation, based on the decomposition of the shock front into elementary ray tubes. Assuming small changes in the ray tube area, and neglecting the effect of the post-shock flow, a simple relation linking the local curvature and velocity of the front, known as the A-M rule, is obtained. More recently, a new simplified model, referred to as the kinematic model, was proposed. This model is obtained by combining the three-dimensional Euler equations and the Rankine-Hugoniot relations at the front, which leads to an equation for the normal variation of the shock Mach number at the wave front. In the same way as GSD, the kinematic model is closed by neglecting the post-shock flow effects. Although each model's approach is different, we prove their structural equivalence: the kinematic model can be rewritten in the form of GSD with a specific A-M relation. Both models are then compared through a wide variety of examples, including experimental data or Eulerian simulation results when available. Attention is drawn to the simple cases of compression ramps and diffraction over convex corners. The analysis is completed by the more complex cases of diffraction over a cylinder, a sphere, a mound, and a trough.
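The A-M rule mentioned above links ray-tube area A and shock Mach number M. In the strong-shock limit it reduces to the well-known power law A·Mⁿ = const with Whitham's exponent n ≈ 5.0743 for γ = 1.4; this closed form is taken from the standard GSD literature, not from this paper, and the function below is only a sketch of that limiting case:

```python
def mach_from_area(m1, a1, a2, n=5.0743):
    """Strong-shock limit of the A-M rule: A * M**n = const along a
    ray tube, so M2 = M1 * (A1/A2)**(1/n). A converging tube
    (a2 < a1) strengthens the shock; a diverging tube weakens it."""
    return m1 * (a1 / a2) ** (1.0 / n)
```

For example, halving the ray-tube area raises the Mach number by the modest factor 2^(1/n) ≈ 1.15, reflecting the weak sensitivity of shock strength to area change that underlies GSD.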
Mechanochemical pattern formation in simple models of active viscoelastic fluids and solids
NASA Astrophysics Data System (ADS)
Alonso, Sergio; Radszuweit, Markus; Engel, Harald; Bär, Markus
2017-11-01
The cytoskeleton of the organism Physarum polycephalum is a prominent example of a complex active viscoelastic material wherein stresses induce flows along the organism as a result of the action of molecular motors and their regulation by calcium ions. Experiments in Physarum polycephalum have revealed a rich variety of mechanochemical patterns including standing, traveling and rotating waves that arise from instabilities of spatially homogeneous states without gradients in stresses and resulting flows. Herein, we investigate simple models where an active stress induced by molecular motors is coupled to a model describing the passive viscoelastic properties of the cellular material. Specifically, two models for viscoelastic fluids (Maxwell and Jeffrey model) and two models for viscoelastic solids (Kelvin-Voigt and Standard model) are investigated. Our focus is on the analysis of the conditions that cause destabilization of spatially homogeneous states and the related onset of mechano-chemical waves and patterns. We carry out linear stability analyses and numerical simulations in one spatial dimension for different models. In general, sufficiently strong activity leads to waves and patterns. The primary instability is stationary for all active fluids considered, whereas all active solids have an oscillatory primary instability. All instabilities found are of long-wavelength nature reflecting the conservation of the total calcium concentration in the models studied.
Examination of multi-model ensemble seasonal prediction methods using a simple climate system
NASA Astrophysics Data System (ADS)
Kang, In-Sik; Yoo, Jin Ho
2006-02-01
A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. A set of long (240 years) historical hindcast predictions was performed with the various prediction models, which are used to examine various issues of multi-model ensemble seasonal prediction, such as the best ways of blending multiple models and the selection of models. Based on these results, we suggest a feasible way of maximizing the benefit of using multiple models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, the superensemble, and the composite after statistically correcting individual predictions (corrected composite), are examined and compared to each other. The superensemble has more of an overfitting problem than the others, especially for the case of small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.
A simple geometrical model describing shapes of soap films suspended on two rings
NASA Astrophysics Data System (ADS)
Herrmann, Felix J.; Kilvington, Charles D.; Wildenberg, Rebekah L.; Camacho, Franco E.; Walecki, Wojciech J.; Walecki, Peter S.; Walecki, Eve S.
2016-09-01
We measured and analysed the stability of two types of soap films suspended on two rings using a simple conical frusta-based model, where we use the common definition of a conical frustum as the portion of a cone that lies between two parallel planes cutting it. Using the frusta-based model we reproduced the well-known results for catenoid surfaces with and without a central disk. We present for the first time a simple conical frusta-based spreadsheet model of the soap surface. This very simple, elementary geometrical model produces results that match surprisingly well both the experimental data and the known exact analytical solutions. The experiment and the spreadsheet model can be used as a powerful teaching tool for pre-calculus and geometry students.
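The frusta construction described above amounts to approximating the film profile r(z) by a stack of conical frusta, each contributing lateral area π(r₁+r₂)·slant. A minimal sketch (profile, ring spacing, and slice count are illustrative choices), checked against the exact catenoid area:

```python
import math

def frusta_area(radii, dz):
    """Lateral area of a surface of revolution approximated by a stack
    of conical frusta: the slice between radii r1 and r2, separated by
    height dz, contributes pi*(r1 + r2)*slant with
    slant = sqrt(dz**2 + (r2 - r1)**2)."""
    area = 0.0
    for r1, r2 in zip(radii, radii[1:]):
        slant = math.sqrt(dz**2 + (r2 - r1)**2)
        area += math.pi * (r1 + r2) * slant
    return area

# Catenoid profile r(z) = c*cosh(z/c) between two rings at z = +/- h
c, h, n = 1.0, 0.5, 1000
zs = [-h + 2 * h * i / n for i in range(n + 1)]
radii = [c * math.cosh(z / c) for z in zs]
approx = frusta_area(radii, 2 * h / n)
# Exact catenoid area: pi*c**2*(sinh(2h/c) + 2h/c)
exact = math.pi * c**2 * (math.sinh(2 * h / c) + 2 * h / c)
```

With 1000 frusta the approximation agrees with the exact area to better than one part in ten thousand, which is why so elementary a model can reproduce the analytical catenoid results.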
Karev, Georgy P; Wolf, Yuri I; Koonin, Eugene V
2003-10-12
The distributions of many genome-associated quantities, including the membership of paralogous gene families can be approximated with power laws. We are interested in developing mathematical models of genome evolution that adequately account for the shape of these distributions and describe the evolutionary dynamics of their formation. We show that simple stochastic models of genome evolution lead to power-law asymptotics of protein domain family size distribution. These models, called Birth, Death and Innovation Models (BDIM), represent a special class of balanced birth-and-death processes, in which domain duplication and deletion rates are asymptotically equal up to the second order. The simplest, linear BDIM shows an excellent fit to the observed distributions of domain family size in diverse prokaryotic and eukaryotic genomes. However, the stochastic version of the linear BDIM explored here predicts that the actual size of large paralogous families is reached on an unrealistically long timescale. We show that introduction of non-linearity, which might be interpreted as interaction of a particular order between individual family members, allows the model to achieve genome evolution rates that are much better compatible with the current estimates of the rates of individual duplication/loss events.
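The linear BDIM described above can be sketched as a small stochastic simulation in which duplication and deletion rates are proportional to family size and innovation creates new single-member families. The rates, step count, and event-selection scheme below are illustrative assumptions, not the paper's calibrated model:

```python
import random

def simulate_bdim(steps, birth=0.5, death=0.5, innovation=0.1, seed=1):
    """Toy linear Birth, Death and Innovation Model (BDIM): at each
    step, either a domain in some family duplicates (total rate
    ~ birth * total domains), a domain is deleted (rate ~ death *
    total domains), or innovation adds a new family of size 1."""
    rng = random.Random(seed)
    families = [1]
    for _ in range(steps):
        total = sum(families)
        rates = [birth * total, death * total, innovation]
        r = rng.uniform(0, sum(rates))
        if r < rates[0]:  # duplication: pick a family with weight = size
            i = rng.choices(range(len(families)), weights=families)[0]
            families[i] += 1
        elif r < rates[0] + rates[1]:  # deletion, removing empty families
            i = rng.choices(range(len(families)), weights=families)[0]
            families[i] -= 1
            if families[i] == 0:
                families.pop(i)
        else:  # innovation: a brand-new single-member family
            families.append(1)
        if not families:  # re-seed if the whole genome went extinct
            families = [1]
    return families
```

Running long trajectories of this balanced process (birth ≈ death) and histogramming the returned family sizes is the simplest way to see the heavy-tailed family-size distributions that the paper analyses asymptotically.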
Simple model for multiple-choice collective decision making
NASA Astrophysics Data System (ADS)
Lee, Ching Hua; Lucas, Andrew
2014-11-01
We describe a simple model of heterogeneous, interacting agents making decisions between n ≥2 discrete choices. For a special class of interactions, our model is the mean field description of random field Potts-like models and is effectively solved by finding the extrema of the average energy E per agent. In these cases, by studying the propagation of decision changes via avalanches, we argue that macroscopic dynamics is well captured by a gradient flow along E . We focus on the permutation symmetric case, where all n choices are (on average) the same, and spontaneous symmetry breaking (SSB) arises purely from cooperative social interactions. As examples, we show that bimodal heterogeneity naturally provides a mechanism for the spontaneous formation of hierarchies between decisions and that SSB is a preferred instability to discontinuous phase transitions between two symmetric points. Beyond the mean field limit, exponentially many stable equilibria emerge when we place this model on a graph of finite mean degree. We conclude with speculation on decision making with persistent collective oscillations. Throughout the paper, we emphasize analogies between methods of solution to our model and common intuition from diverse areas of physics, including statistical physics and electromagnetism.
The division of labor: genotypic versus phenotypic specialization.
Wahl, L M
2002-07-01
A model of the division of labor in simple evolving systems is explored to compare two strategies evident in natural populations: phenotypic specialization (such as differentiation by regulated gene expression) and genotypic specialization (such as co-infection by complementary covirus populations). While genotypic specialization is vulnerable to the chance extinction of an essential specialist type and to parasitism, phenotypic specialization is able to overcome these hurdles. When simple spatial effects are included, phenotypic specialization has further benefits, protecting against destructive dynamic patterns. Many of the advantages of phenotypic specialization, however, can only be realized when a high degree of relatedness within groups is ensured.
Design and implementation of a simple nuclear power plant simulator
NASA Astrophysics Data System (ADS)
Miller, William H.
1983-02-01
A simple PWR nuclear power plant simulator has been designed and implemented on a minicomputer system. The system is intended for students' use in understanding the power operation of a nuclear power plant. A PDP-11 minicomputer calculates reactor parameters in real time, displays the results on a graphics terminal, and takes control inputs from a keyboard and joystick. Plant parameters calculated by the model include the core reactivity (based upon control rod positions, soluble boron concentration, and reactivity feedback effects), the total core power, the axial core power distribution, the temperature and pressure in the primary and secondary coolant loops, etc.
NASA Astrophysics Data System (ADS)
Jahedi, Mohammad; Ardeljan, Milan; Beyerlein, Irene J.; Paydar, Mohammad Hossein; Knezevic, Marko
2015-06-01
We use a multi-scale, polycrystal plasticity micromechanics model to study the development of orientation gradients within crystals deforming by slip. At the largest scale, the model is a full-field crystal plasticity finite element model with explicit 3D grain structures created by DREAM.3D, and at the finest scale, at each integration point, slip is governed by a dislocation density based hardening law. For deformed polycrystals, the model predicts intra-granular misorientation distributions that follow well the scaling law seen experimentally by Hughes et al., Acta Mater. 45(1), 105-112 (1997), independent of strain level and deformation mode. We reveal that the application of a simple compression step prior to simple shearing significantly enhances the development of intra-granular misorientations compared to simple shearing alone for the same amount of total strain. We rationalize that the changes in crystallographic orientation and shape evolution when going from simple compression to simple shearing increase the local heterogeneity in slip, leading to the boost in intra-granular misorientation development. In addition, the analysis finds that simple compression introduces additional crystal orientations that are prone to developing intra-granular misorientations, which also help to increase intra-granular misorientations. Many metal working techniques for refining grain sizes involve a preliminary or concurrent application of compression with severe simple shearing. Our finding reveals that a pre-compression deformation step can, in fact, serve as another processing variable for improving the rate of grain refinement during the simple shearing of polycrystalline metals.
A proposed mathematical model for sleep patterning.
Lawder, R E
1984-01-01
The simple model of a ramp, intersecting a triangular waveform, yields results which conform with seven generalized observations of sleep patterning; including the progressive lengthening of 'rapid-eye-movement' (REM) sleep periods within near-constant REM/nonREM cycle periods. Predicted values of REM sleep time, and of Stage 3/4 nonREM sleep time, can be computed using the observed values of other parameters. The distributions of the actual REM and Stage 3/4 times relative to the predicted values were closer to normal than the distributions relative to simple 'best line' fits. It was found that sleep onset tends to occur at a particular moment in the individual subject's '90-min cycle' (the use of a solar time-scale masks this effect), which could account for a subject with a naturally short sleep/wake cycle synchronizing to a 24-h rhythm. A combined 'sleep control system' model offers quantitative simulation of the sleep patterning of endogenous depressives and, with a different perturbation, qualitative simulation of the symptoms of narcolepsy.
Sun, Jianguo; Feng, Yanqin; Zhao, Hui
2015-01-01
Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.
Flexible Modes Control Using Sliding Mode Observers: Application to Ares I
NASA Technical Reports Server (NTRS)
Shtessel, Yuri B.; Hall, Charles E.; Baev, Simon; Orr, Jeb S.
2010-01-01
The launch vehicle dynamics affected by bending and sloshing modes are considered. Attitude measurement data that are corrupted by flexible modes could yield instability of the vehicle dynamics. Flexible body and sloshing modes are reconstructed by sliding mode observers. The resultant estimates are used to remove the undesirable dynamics from the measurements, and the direct effects of sloshing and bending modes on the launch vehicle are compensated by means of a controller that is designed without taking the bending and sloshing modes into account. A linearized mathematical model of Ares I launch vehicle was derived based on FRACTAL, a linear model developed by NASA/MSFC. The compensated vehicle dynamics with a simple PID controller were studied for the launch vehicle model that included two bending modes, two slosh modes and actuator dynamics. A simulation study demonstrated stable and accurate performance of the flight control system with the augmented simple PID controller without the use of traditional linear bending filters.
On the dynamics of a human body model.
NASA Technical Reports Server (NTRS)
Huston, R. L.; Passerello, C. E.
1971-01-01
Equations of motion for a model of the human body are developed. Basically, the model consists of an elliptical cylinder representing the torso, together with a system of frustums of elliptical cones representing the limbs. These are connected to the main body and to each other by hinges and ball-and-socket joints. Vector, tensor, and matrix methods provide a systematic organization of the geometry. The equations of motion are developed from the principles of classical mechanics. The solution of these equations then provides the displacement and rotation of the main body when the external forces and relative limb motions are specified. Three simple example motions are studied to illustrate the method. The first is an analysis and comparison of simple lifting on the earth and the moon. The second is an elementary approach to underwater swimming, including both viscous and inertia effects. The third is an analysis of kicking motion and its effect upon a vertically suspended man such as a parachutist.
A study of helicopter stability and control including blade dynamics
NASA Technical Reports Server (NTRS)
Zhao, Xin; Curtiss, H. C., Jr.
1988-01-01
A linearized model of rotorcraft dynamics has been developed through the use of symbolic automatic equation generating techniques. The dynamic model has been formulated in a unique way such that it can be used to analyze a variety of rotor/body coupling problems, including a rotor mounted on a flexible shaft with a number of modes, as well as free-flight stability and control characteristics. Direct comparison of the time response to longitudinal, lateral and directional control inputs at various trim conditions shows that the linear model yields good to very good correlation with flight test. In particular it is shown that a dynamic inflow model is essential to obtain good time response correlation, especially for the hover trim condition. It also is shown that the main rotor wake interaction with the tail rotor and fixed tail surfaces is a significant contributor to the response at translational flight trim conditions. A relatively simple model for the downwash and sidewash at the tail surfaces based on flat vortex wake theory is shown to produce good agreement. Then, the influence of rotor flap and lag dynamics on automatic control system feedback gain limitations is investigated with the model. It is shown that the blade dynamics, especially the lagging dynamics, can severely limit the usable values of the feedback gain for simple feedback control, and that multivariable optimal control theory is a powerful tool for designing high-gain augmentation control systems. The frequency-shaped optimal control design can offer much better flight dynamic characteristics and a stability margin for the feedback system without the need to model the lagging dynamics.
NASA Astrophysics Data System (ADS)
Ke, Haohao; Ondov, John M.; Rogge, Wolfgang F.
2013-12-01
Composite chemical profiles of motor vehicle emissions were extracted from ambient measurements at a near-road site in Baltimore during a windless traffic episode in November, 2002, using four independent approaches, i.e., simple peak analysis, windless model-based linear regression, PMF, and UNMIX. Although the profiles are in general agreement, the windless-model-based profile treatment more effectively removes interference from non-traffic sources and is deemed to be more accurate for many species. In addition to abundances of routine pollutants (e.g., NOx, CO, PM2.5, EC, OC, sulfate, and nitrate), 11 particle-bound metals and 51 individual traffic-related organic compounds (including n-alkanes, PAHs, oxy-PAHs, hopanes, alkylcyclohexanes, and others) were included in the modeling.
NASA Technical Reports Server (NTRS)
Queijo, M. J.; Wells, W. R.; Keskar, D. A.
1979-01-01
A simple vortex system, used to model unsteady aerodynamic effects in the rigid-body longitudinal equations of motion of an aircraft, is described. The equations are used in the development of a parameter-extraction algorithm. Use of the two parameter-estimation modes, one including and the other omitting unsteady aerodynamic modeling, is discussed as a means of estimating some acceleration derivatives. Computer-generated data and flight data, used to demonstrate the parameter-extraction algorithm, are studied.
Application Note: Power Grid Modeling With Xyce.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sholander, Peter E.
This application note describes how to model steady-state power flows and transient events in electric power grids with the SPICE-compatible Xyce™ Parallel Electronic Simulator developed at Sandia National Labs. This application note provides a brief tutorial on the basic devices (branches, bus shunts, transformers, and generators) found in power grids. The focus is on the features supported and the assumptions made by the Xyce models for power grid elements. It then provides a detailed explanation, including working Xyce netlists, for simulating some simple power grid examples such as the IEEE 14-bus test case.
Single-particle dynamics of the Anderson model: a local moment approach
NASA Astrophysics Data System (ADS)
Glossop, Matthew T.; Logan, David E.
2002-07-01
A non-perturbative local moment approach to single-particle dynamics of the general asymmetric Anderson impurity model is developed. The approach encompasses all energy scales and interaction strengths. It thereby captures strong-coupling Kondo behaviour, including the resultant universal scaling behaviour of the single-particle spectrum, as well as the mixed-valence and essentially perturbative empty-orbital regimes. The underlying approach is physically transparent and innately simple, and as such is capable of practical extension to lattice-based models within the framework of dynamical mean-field theory.
An Introduction to Magnetospheric Physics by Means of Simple Models
NASA Technical Reports Server (NTRS)
Stern, D. P.
1981-01-01
The large scale structure and behavior of the Earth's magnetosphere is discussed. The model is suitable for inclusion in courses on space physics, plasmas, astrophysics or the Earth's environment, as well as for self-study. Nine quantitative problems, dealing with properties of linear superpositions of a dipole and a constant field are presented. Topics covered include: open and closed models of the magnetosphere; field line motion; the role of magnetic merging (reconnection); magnetospheric convection; and the origin of the magnetopause, polar cusps, and high latitude lobes.
Roles of dark energy perturbations in dynamical dark energy models: can we ignore them?
Park, Chan-Gyung; Hwang, Jai-chan; Lee, Jae-heon; Noh, Hyerim
2009-10-09
We show the importance of properly including the perturbations of the dark energy component in dynamical dark energy models based on a scalar field and modified gravity theories in order to match present and future observational precision. Based on a simple scaling scalar-field dark energy model, we show that ignoring the dark energy perturbation produces substantial, observationally distinguishable differences. Ignoring it also renders the perturbed system of equations inconsistent, and the resulting deviations in (gauge-invariant) power spectra depend on the gauge choice.
Continuum modeling of large lattice structures: Status and projections
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Mikulas, Martin M., Jr.
1988-01-01
The status and some recent developments of continuum modeling for large repetitive lattice structures are summarized. Discussion focuses on a number of aspects including definition of an effective substitute continuum; characterization of the continuum model; and the different approaches for generating the properties of the continuum, namely, the constitutive matrix, the matrix of mass densities, and the matrix of thermal coefficients. Also, a simple approach is presented for generating the continuum properties. The approach can be used to generate analytic and/or numerical values of the continuum properties.
Complex discrete dynamics from simple continuous population models.
Gamarra, Javier G P; Solé, Ricard V
2002-05-01
Nonoverlapping generations have been classically modelled as difference equations in order to account for the discrete nature of reproductive events. However, other events such as resource consumption or mortality are continuous and take place in within-generation time. We have therefore assumed a realistic hybrid model: a two-dimensional ODE system for resources and consumers with discrete events for reproduction. Numerical and analytical approaches showed that the resulting dynamics resembles a Ricker map, including the period-doubling route to chaos. Stochastic simulations with a handling-time parameter for indirect competition of juveniles may affect the qualitative behaviour of the model.
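The discrete-generation dynamics that the hybrid model recovers can be illustrated with the Ricker map itself; the sketch below is generic (the parameter names r and k are illustrative defaults, not values from the paper):

```python
import math

def ricker(x, r, k=1.0):
    """One generation of the Ricker map: growth rate r, carrying capacity k."""
    return x * math.exp(r * (1.0 - x / k))

def iterate(x0, r, n):
    """Iterate the map n times from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(ricker(xs[-1], r))
    return xs

# For 0 < r < 2 the fixed point x = k is stable; raising r past 2 triggers
# the period-doubling cascade toward chaos described in the abstract.
settled = iterate(0.5, 1.5, 200)[-1]
```

With r = 1.5 the trajectory settles onto the carrying capacity; sweeping r upward reproduces the doubling route to chaos.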
Hydrograph matching method for measuring model performance
NASA Astrophysics Data System (ADS)
Ewen, John
2011-09-01
Despite all the progress made over the years on developing automatic methods for analysing hydrographs and measuring the performance of rainfall-runoff models, automatic methods cannot yet match the power and flexibility of the human eye and brain. Very simple approaches are therefore being developed that mimic the way hydrologists inspect and interpret hydrographs, including the way that patterns are recognised, links are made by eye, and hydrological responses and errors are studied and remembered. In this paper, a dynamic programming algorithm originally designed for use in data mining is customised for use with hydrographs. It generates sets of "rays" that are analogous to the visual links made by the hydrologist's eye when linking features or times in one hydrograph to the corresponding features or times in another hydrograph. One outcome from this work is a new family of performance measures called "visual" performance measures. These can measure differences in amplitude and timing, including the timing errors between simulated and observed hydrographs in model calibration. To demonstrate this, two visual performance measures, one based on the Nash-Sutcliffe Efficiency and the other on the mean absolute error, are used in a total of 34 split-sample calibration-validation tests for two rainfall-runoff models applied to the Hodder catchment, northwest England. The customised algorithm, called the Hydrograph Matching Algorithm, is very simple to apply; it is given in a few lines of pseudocode.
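The two base measures on which the visual performance measures build are standard; a minimal sketch (illustrative only, not the authors' Hydrograph Matching Algorithm):

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the simulation
    is no better a predictor than the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def mean_absolute_error(obs, sim):
    """Mean absolute difference between observed and simulated flows."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

obs = [1.0, 3.0, 5.0, 4.0, 2.0]          # observed hydrograph ordinates
nse_perfect = nash_sutcliffe(obs, obs)   # 1.0 for a perfect simulation
mae_perfect = mean_absolute_error(obs, obs)
```

The visual measures in the paper apply these metrics along the matched "rays" rather than at identical time steps, which is what lets them separate amplitude errors from timing errors.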
A modeling paradigm for interdisciplinary water resources modeling: Simple Script Wrappers (SSW)
NASA Astrophysics Data System (ADS)
Steward, David R.; Bulatewicz, Tom; Aistrup, Joseph A.; Andresen, Daniel; Bernard, Eric A.; Kulcsar, Laszlo; Peterson, Jeffrey M.; Staggenborg, Scott A.; Welch, Stephen M.
2014-05-01
Holistic understanding of a water resources system requires tools capable of model integration. This team has developed an adaptation of the OpenMI (Open Modelling Interface) that allows easy interactions across the data passed between models. Capabilities have been developed to allow programs written in common languages such as matlab, python and scilab to share their data with other programs and accept other program's data. We call this interface the Simple Script Wrapper (SSW). An implementation of SSW is shown that integrates groundwater, economic, and agricultural models in the High Plains region of Kansas. Output from these models illustrates the interdisciplinary discovery facilitated through use of SSW implemented models. Reference: Bulatewicz, T., A. Allen, J.M. Peterson, S. Staggenborg, S.M. Welch, and D.R. Steward, The Simple Script Wrapper for OpenMI: Enabling interdisciplinary modeling studies, Environmental Modelling & Software, 39, 283-294, 2013. http://dx.doi.org/10.1016/j.envsoft.2012.07.006 http://code.google.com/p/simple-script-wrapper/
Further Examination of a Simplified Model for Positronium-Helium Scattering
NASA Technical Reports Server (NTRS)
DiRienzi, J.; Drachman, Richard J.
2012-01-01
While carrying out investigations on Ps-He scattering we realized that it would be possible to improve the results of a previous work on zero-energy scattering of ortho-positronium by helium atoms. The previous work used a model to account for exchange and also attempted to include the effect of short-range Coulomb interactions in the close-coupling approximation. The 3 terms that were then included did not produce a well-converged result but served to give some justification to the model. Now we improve the calculation by using a simple variational wave function, and derive a much better value of the scattering length. The new result is compared with other computed values, and when an approximate correction due to the van der Waals potential is included the total is consistent with an earlier conjecture.
NASA Astrophysics Data System (ADS)
Valentin, M. M.; Hay, L.; Van Beusekom, A. E.; Viger, R. J.; Hogue, T. S.
2016-12-01
Forecasting the hydrologic response to climate change in Alaska's glaciated watersheds remains daunting for hydrologists due to sparse field data and few modeling tools, which frustrates efforts to manage and protect critical aquatic habitat. Approximately 20% of the 64,000 square kilometer Copper River watershed is glaciated, and its glacier-fed tributaries support renowned salmon fisheries that are economically, culturally, and nutritionally invaluable to the local communities. This study adapts a simple, yet powerful, conceptual hydrologic model to simulate changes in the timing and volume of streamflow in the Copper River, Alaska as glaciers change under plausible future climate scenarios. The USGS monthly water balance model (MWBM), a hydrologic tool used for two decades to evaluate a broad range of hydrologic questions in the contiguous U.S., was enhanced to include glacier melt simulations and remotely sensed data. In this presentation we summarize the technical details behind our MWBM adaptation and demonstrate its use in the Copper River Basin to evaluate glacier and streamflow responses to climate change.
Luo, Haoxiang; Mittal, Rajat; Zheng, Xudong; Bielamowicz, Steven A.; Walsh, Raymond J.; Hahn, James K.
2008-01-01
A new numerical approach for modeling a class of flow–structure interaction problems typically encountered in biological systems is presented. In this approach, a previously developed, sharp-interface, immersed-boundary method for incompressible flows is used to model the fluid flow and a new, sharp-interface Cartesian grid, immersed boundary method is devised to solve the equations of linear viscoelasticity that governs the solid. The two solvers are coupled to model flow–structure interaction. This coupled solver has the advantage of simple grid generation and efficient computation on simple, single-block structured grids. The accuracy of the solid-mechanics solver is examined by applying it to a canonical problem. The solution methodology is then applied to the problem of laryngeal aerodynamics and vocal fold vibration during human phonation. This includes a three-dimensional eigen analysis for a multi-layered vocal fold prototype as well as two-dimensional, flow-induced vocal fold vibration in a modeled larynx. Several salient features of the aerodynamics as well as vocal-fold dynamics are presented. PMID:19936017
Prediction and measurements of vibrations from a railway track lying on a peaty ground
NASA Astrophysics Data System (ADS)
Picoux, B.; Rotinat, R.; Regoin, J. P.; Le Houédec, D.
2003-10-01
This paper introduces a two-dimensional model for the response of the ground surface to vibrations generated by railway traffic. A semi-analytical wave propagation model is introduced, subjected to a set of harmonic moving loads and based on a calculation method for the dynamic stiffness matrix of the ground. In order to model a complete railway system, the effect of a simple track model is taken into account, including rails, sleepers, and ballast, especially designed for the study of low vibration frequencies. Priority has been given to a simple formulation based on the principle of spatial Fourier transforms, compatible with good numerical efficiency and yet providing quick solutions. In addition, in situ measurements for a soft soil near a railway track were carried out and are used to validate the numerical implementation. The numerical and experimental results constitute a significant body of useful data with which to, on the one hand, characterize the response of the environment of tracks and, on the other hand, appreciate the influence of train speed and weight on the behaviour of the structure.
PyFolding: Open-Source Graphing, Simulation, and Analysis of the Biophysical Properties of Proteins.
Lowe, Alan R; Perez-Riba, Albert; Itzhaki, Laura S; Main, Ewan R G
2018-02-06
For many years, curve-fitting software has been heavily utilized to fit simple models to various types of biophysical data. Although such software packages are easy to use for simple functions, they are often expensive and present substantial impediments to applying more complex models or for the analysis of large data sets. One field that is reliant on such data analysis is the thermodynamics and kinetics of protein folding. Over the past decade, increasingly sophisticated analytical models have been generated, but without simple tools to enable routine analysis. Consequently, users have needed to generate their own tools or otherwise find willing collaborators. Here we present PyFolding, a free, open-source, and extensible Python framework for graphing, analysis, and simulation of the biophysical properties of proteins. To demonstrate the utility of PyFolding, we have used it to analyze and model experimental protein folding and thermodynamic data. Examples include: 1) multiphase kinetic folding fitted to linked equations, 2) global fitting of multiple data sets, and 3) analysis of repeat protein thermodynamics with Ising model variants. Moreover, we demonstrate how PyFolding is easily extensible to novel functionality beyond applications in protein folding via the addition of new models. Example scripts to perform these and other operations are supplied with the software, and we encourage users to contribute notebooks and models to create a community resource. Finally, we show that PyFolding can be used in conjunction with Jupyter notebooks as an easy way to share methods and analysis for publication and among research teams. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Barlow, P.M.
1994-01-01
Steady-state, two- and three-dimensional ground-water flow models coupled with a particle-tracking program were evaluated to determine their effectiveness in delineating contributing areas of existing and hypothetical public-supply wells pumping from two contrasting stratified-drift aquifers of Cape Cod, Mass. Several of the contributing areas delineated by use of the three-dimensional models do not conform to the simple ellipsoidal shapes typically delineated by two-dimensional analytical and numerical modeling techniques; they include discontinuous areas of the water table and do not surround the wells. Because two-dimensional areal models do not account for vertical flow, they cannot adequately represent many of the hydrogeologic and well-design variables that were shown to complicate the delineation of contributing areas in these flow systems, including the presence of discrete lenses of low hydraulic conductivity, large ratios of horizontal to vertical hydraulic conductivity, shallow streams, partially penetrating supply wells, and low pumping rates (less than 0.1 million gallons per day). Nevertheless, contributing areas delineated for two wells in the simpler of the two flow systems--a thin (less than 100 feet), single-layer, uniform aquifer with near-ideal boundary conditions--were not significantly different between the two- and three-dimensional models of the natural system, for a pumping rate of 0.5 million gallons per day. Use of particle tracking helped identify the sources of water to simulated wells, which included precipitation recharge, wastewater return flow, and pond water. Pond water and wastewater return flow accounted for as much as 73 and 40 percent, respectively, of the water captured by simulated wells.
Microarray-based cancer prediction using soft computing approach.
Wang, Xiaosheng; Gotoh, Osamu
2009-05-26
One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models based on single genes or gene pairs, using a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer, and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective, and robust. Meanwhile, our models are interpretable, as they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular cancer prediction and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.
Firing patterns in the adaptive exponential integrate-and-fire model.
Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram
2008-11-01
For simulations of large spiking neuron networks, an accurate, simple, and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting, and two types of tonic spiking. We also report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to real experiments on cortical neurons under step-current stimulation. The results support the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
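The model's two equations (a voltage equation with an exponential spike-initiation term, plus an adaptation current) can be integrated with forward Euler; the sketch below uses illustrative default parameters, not the paper's fitted values:

```python
import math

def simulate_adex(current, t_max=500.0, dt=0.1):
    """Euler integration of the adaptive exponential integrate-and-fire neuron.

    C dV/dt     = -gL (V - EL) + gL * dT * exp((V - VT) / dT) - w + I
    tau_w dw/dt = a (V - EL) - w
    On a spike (V crosses 0 mV): V -> Vr and w -> w + b.
    Units: pF, nS, mV, ms, pA. Returns spike times in ms.
    """
    C, gL, EL, VT, dT = 200.0, 10.0, -70.0, -50.0, 2.0   # membrane parameters
    a, tau_w, b, Vr = 2.0, 30.0, 60.0, -58.0             # adaptation and reset
    V, w, t, spikes = EL, 0.0, 0.0, []
    while t < t_max:
        dV = (-gL * (V - EL) + gL * dT * math.exp((V - VT) / dT)
              - w + current) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V > 0.0:          # spike detected
            V = Vr           # reset membrane potential
            w += b           # spike-triggered adaptation increment
            spikes.append(t)
        t += dt
    return spikes

# Constant step current: the growing adaptation current w lengthens the
# inter-spike intervals, giving the adapting firing pattern.
spikes = simulate_adex(current=500.0)
```

Varying a, b, tau_w, and Vr moves the model between the firing regimes (tonic, adapting, bursting) catalogued in the paper's phase diagram.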
Mechanisms of Gait Asymmetry Due to Push-off Deficiency in Unilateral Amputees
Adamczyk, Peter Gabriel; Kuo, Arthur D.
2015-01-01
Unilateral lower-limb amputees exhibit asymmetry in many gait features, such as ground force, step time, step length, and joint mechanics. Although these asymmetries result from weak prosthetic-side push-off, there is no proven mechanistic explanation of how that impairment propagates to the rest of the body. We used a simple dynamic walking model to explore possible consequences of a unilateral impairment similar to that of a transtibial amputee. The model compensates for reduced push-off work from one leg by performing more work elsewhere, for example during the middle of stance by either or both legs. The model predicts several gait abnormalities, including slower forward velocity of the body center-of-mass (COM) during intact-side stance, greater energy dissipation in the intact side, and more positive work overall. We tested these predictions with data from unilateral transtibial amputees (N = 11) and non-amputee control subjects (N = 10) walking on an instrumented treadmill. We observed several predicted asymmetries, including forward velocity during stance phases and energy dissipation from the two limbs, as well as greater work overall. Secondary adaptations, such as to reduce discomfort, may exacerbate asymmetry, but these simple principles suggest that some asymmetry may be unavoidable in cases of unilateral limb loss. PMID:25222950
Comparison of continuum and particle simulations of expanding rarefied flows
NASA Technical Reports Server (NTRS)
Lumpkin, Forrest E., III; Boyd, Iain D.; Venkatapathy, Ethiraj
1993-01-01
Comparisons of Navier-Stokes solutions and particle simulations for a simple two-dimensional model problem at a succession of altitudes are performed in order to assess the importance of rarefaction effects on the base flow region. In addition, an attempt is made to include 'Burnett-type' extensions to the Navier-Stokes constitutive relations. The model geometry consists of a simple blunted wedge with a 0.425-meter nose radius, a 70-deg cone half angle, a 1.7-meter base length, and a rounded shoulder. The working gas is monatomic with a molecular weight and viscosity similar to air, and was chosen to focus the study on the continuum and particle methodologies rather than the implementation of thermo-chemical modeling. Three cases are investigated, all at Mach 29, with densities corresponding to altitudes of 92 km, 99 km, and 105 km. At the lowest altitude, Navier-Stokes solutions agree well with particle simulations. At the higher altitudes, the Navier-Stokes equations become less accurate. In particular, the Navier-Stokes equations and the particle method predict substantially different flow turning angles in the wake near the afterbody. Attempts to achieve steady continuum solutions including 'Burnett-type' terms failed. Further research is required to determine whether the boundary conditions, the equations themselves, or other unknown causes led to this failure.
Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach
NASA Astrophysics Data System (ADS)
Xiao, T.
2012-12-01
One of the most important components in urban land cover mapping is mapping accuracy assessment. Many statistical models have been developed to help design simple sampling schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sample size is therefore crucial to implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design and sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High-resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
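As a hedged illustration of how sample-size design and cost can be combined (the abstract does not give the study's exact formulas; this sketch uses the standard binomial sample-size expression and a simple linear cost model):

```python
import math

def required_sample_size(p, margin, z=1.96):
    """Binomial sample size for estimating a map accuracy p to within
    +/- margin, at the confidence level implied by z (1.96 ~ 95%)."""
    return math.ceil(z ** 2 * p * (1.0 - p) / margin ** 2)

def total_cost(n, cost_per_sample, fixed_cost=0.0):
    """Linear cost model: fixed setup cost plus a per-sample cost covering
    transportation, field collection, and laboratory analysis."""
    return fixed_cost + n * cost_per_sample

# Anticipated 85% map accuracy, estimated to within +/- 5 percentage points.
n = required_sample_size(p=0.85, margin=0.05)
budget = total_cost(n, cost_per_sample=40.0, fixed_cost=500.0)
```

Tightening the margin or raising the confidence level drives n, and hence the budget, up quadratically, which is the cost-accuracy trade-off the study quantifies.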
PCANet: A Simple Deep Learning Baseline for Image Classification?
Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi
2015-12-01
In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be designed and learned extremely easily and efficiently. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, and Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with state-of-the-art features, whether prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
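The three processing components can be sketched in a few lines of NumPy. This is a minimal single-stage illustration (patch extraction and the multistage cascade are omitted), not the authors' reference implementation:

```python
import numpy as np

def pca_filter_bank(patches, n_filters):
    """Learn a PCA filter bank from image patches.
    patches: (num_patches, patch_dim) array."""
    X = patches - patches.mean(axis=1, keepdims=True)  # remove patch mean
    # leading right singular vectors serve as the stage's filters
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_filters]                              # (n_filters, patch_dim)

def binary_hash_histogram(responses):
    """Binarize filter responses, pack them into integer codes, and
    pool with a histogram (a single image block is assumed here).
    responses: (n_filters, num_positions) array."""
    bits = (responses > 0).astype(np.int64)
    weights = 2 ** np.arange(bits.shape[0])[:, None]   # binary place values
    codes = (bits * weights).sum(axis=0)               # 0 .. 2**n_filters - 1
    hist, _ = np.histogram(codes, bins=np.arange(2 ** bits.shape[0] + 1))
    return hist
```

With 8 filters per stage, each position is hashed into one of 256 codes, and the block histogram over those codes forms the feature vector.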
Modeling Age-Related Differences in Immediate Memory Using SIMPLE
ERIC Educational Resources Information Center
Surprenant, Aimee M.; Neath, Ian; Brown, Gordon D. A.
2006-01-01
In the SIMPLE model (Scale Invariant Memory and Perceptual Learning), performance on memory tasks is determined by the locations of items in multidimensional space, and better performance is associated with having fewer close neighbors. Unlike most previous simulations with SIMPLE, the ones reported here used measured, rather than assumed,…
Predicting Fish Densities in Lotic Systems: a Simple Modeling Approach
Fish density models are essential tools for fish ecologists and fisheries managers. However, applying these models can be difficult because of high levels of model complexity and the large number of parameters that must be estimated. We designed a simple fish density model and te...
Simple shear of deformable square objects
NASA Astrophysics Data System (ADS)
Treagus, Susan H.; Lan, Labao
2003-12-01
Finite element models of square objects in a contrasting matrix in simple shear show that the objects deform to a variety of shapes. For a range of viscosity contrasts, we catalogue the changing shapes and orientations of objects in progressive simple shear. At moderate simple shear (γ = 1.5), the shapes are virtually indistinguishable from those in equivalent pure shear models with the same bulk strain (RS = 4), examined in a previous study. In theory, differences would be expected, especially for very stiff objects or at very large strain. In all our simple shear models, relatively competent square objects become asymmetric barrel shapes with concave shortened edges, similar to some types of boudin. Incompetent objects develop shapes surprisingly similar to mica fish described in mylonites.
A simple method for EEG guided transcranial electrical stimulation without models.
Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q; Dmochowski, Jacek; Bikson, Marom
2016-06-01
There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES), but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head-models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and makes strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG, (1) selects locations to be used for stimulation and (2) determines the current applied to each electrode. Each step is performed based solely on the EEG, with no need for head models or source localization. Cortical dipoles represent idealized brain targets. EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangentially or radially to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a 'gold standard' numerically optimized dose of tES that assumes perfect knowledge of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as the optimization criterion. Model-free approaches evaluated include (1) voltage-to-voltage, (2) voltage-to-current, (3) Laplacian, and two ad hoc techniques: (4) dipole sink-to-sink and (5) sink-to-concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head.
Our approach is verified directly only for a theoretically localized source, but may be potentially applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.
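As a rough illustration of the model-free idea, the voltage-to-current strategy can be sketched as mapping the recorded topography directly to a zero-sum set of electrode currents. The function below is a hypothetical sketch, not the authors' published recipe; the 2 mA cap is an assumed safety limit:

```python
import numpy as np

def voltage_to_current_dose(eeg_topography_uv, max_injected_ma=2.0):
    """Map an EEG scalp topography (one voltage per electrode) to a
    zero-sum set of tES electrode currents, scaled so that the total
    injected (positive) current equals max_injected_ma."""
    v = np.asarray(eeg_topography_uv, dtype=float)
    v = v - v.mean()                  # enforce zero net current
    injected = v[v > 0].sum()
    if injected == 0.0:
        return np.zeros_like(v)       # flat topography: no dose
    return v * (max_injected_ma / injected)
```

Because the mapping is linear in the EEG, the same recipe applies unchanged to static, time-variant, or closed-loop stimulation waveforms.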
Recent Developments on the Turbulence Modeling Resource Website (Invited)
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.
2015-01-01
The NASA Langley Turbulence Model Resource (TMR) website has been active for over five years. Its main goal of providing a one-stop, easily accessible internet site for up-to-date information on Reynolds-averaged Navier-Stokes turbulence models remains unchanged. In particular, the site strives to provide an easy way for users to verify their own implementations of widely-used turbulence models, and to compare the results from different models for a variety of simple unit problems covering a range of flow physics. Some new features have been recently added to the website. This paper documents the site's features, including recent developments, future plans, and open questions.
An egalitarian network model for the emergence of simple and complex cells in visual cortex
Tao, Louis; Shelley, Michael; McLaughlin, David; Shapley, Robert
2004-01-01
We explain how simple and complex cells arise in a large-scale neuronal network model of the primary visual cortex of the macaque. Our model consists of ≈4,000 integrate-and-fire, conductance-based point neurons, representing the cells in a small, 1-mm2 patch of an input layer of the primary visual cortex. In the model the local connections are isotropic and nonspecific, and convergent input from the lateral geniculate nucleus confers cortical cells with orientation and spatial phase preference. The balance between lateral connections and lateral geniculate nucleus drive determines whether individual neurons in this recurrent circuit are simple or complex. The model reproduces qualitatively the experimentally observed distributions of both extracellular and intracellular measures of simple and complex response. PMID:14695891
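A conventional way to separate simple from complex responses, in the spirit of the "measures of simple and complex response" mentioned above, is the modulation ratio F1/F0 of the firing rate to a drifting grating. The sketch below illustrates the computation on synthetic rate traces; the signals and the F1/F0 = 1 threshold convention are illustrative, not taken from this paper:

```python
import numpy as np

def modulation_ratio(rate, stim_freq_hz, dt):
    """F1/F0 of a firing-rate trace: F0 is the mean rate, F1 the
    amplitude of the component at the stimulus drift frequency."""
    t = np.arange(len(rate)) * dt
    f0 = rate.mean()
    f1 = 2 * abs(np.mean(rate * np.exp(-2j * np.pi * stim_freq_hz * t)))
    return f1 / f0

dt, f = 0.001, 4.0
t = np.arange(0.0, 1.0, dt)
# half-wave-rectified response, as for an idealized simple cell
simple_like = np.maximum(0.0, 40.0 * np.cos(2 * np.pi * f * t))
# unmodulated elevated rate, as for an idealized complex cell
complex_like = np.full_like(t, 20.0)
```

By the usual convention, F1/F0 > 1 indicates a simple cell (the rectified trace above gives roughly π/2) and F1/F0 < 1 a complex cell.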
NASA Astrophysics Data System (ADS)
Sun, Xiao; Chai, Guobei; Liu, Wei; Bao, Wenzhuo; Zhao, Xiaoning; Ming, Delie
2018-02-01
Simple cells in the primary visual cortex are believed to extract local edge information from a visual scene. In this paper, inspired by the different receptive field properties and visual information flow paths of neurons, an improved Combination of Receptive Fields (CORF) model combined with non-classical receptive fields was proposed to simulate the responses of simple cells' receptive fields. Compared to the classical model, the proposed model better imitates the physiological structure of simple cells by taking into account the facilitation and suppression of non-classical receptive fields. On this basis, an edge detection algorithm was proposed as an application of the improved CORF model. Experimental results validate the robustness of the proposed algorithm to noise and background interference.
(abstract) Simple Spreadsheet Thermal Models for Cryogenic Applications
NASA Technical Reports Server (NTRS)
Nash, A. E.
1994-01-01
Self-consistent circuit-analog thermal models that can be run in commercial spreadsheet programs on personal computers have been created to calculate the cooldown and steady state performance of cryogen cooled Dewars. The models include temperature dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the Cryogenic Telescope Test Facility (CTTF). The facility will be on line in early 1995 for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm as well as a comparison of the model predictions and actual performance of this facility will be presented.
Simple Spreadsheet Thermal Models for Cryogenic Applications
NASA Technical Reports Server (NTRS)
Nash, Alfred
1995-01-01
Self-consistent circuit-analog thermal models that can be run in commercial spreadsheet programs on personal computers have been created to calculate the cooldown and steady state performance of cryogen cooled Dewars. The models include temperature dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the SIRTF Telescope Test Facility (STTF). The facility has been brought on line for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm as well as a comparison between the models' predictions and actual performance of this facility will be presented.
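The spreadsheet circuit-analog approach amounts to marching a thermal RC network forward in time. The one-node sketch below keeps the conduction and radiation terms but uses constant properties, whereas the actual models include temperature-dependent conduction; all parameter values are illustrative:

```python
# Illustrative one-node cooldown; real Dewar models use many nodes
# and temperature-dependent conductances.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def cooldown(t_start_k, t_sink_k, heat_capacity_j_per_k,
             conductance_w_per_k, rad_area_m2=0.0, emissivity=0.0,
             dt_s=1.0, n_steps=10000):
    """Explicit time stepping of a circuit-analog thermal node:
    a conductive link to a cryogen sink plus optional radiation."""
    T = t_start_k
    for _ in range(n_steps):
        q = (conductance_w_per_k * (T - t_sink_k)
             + emissivity * rad_area_m2 * SIGMA * (T**4 - t_sink_k**4))
        T -= dt_s * q / heat_capacity_j_per_k
    return T
```

In a spreadsheet, each row plays the role of one iteration of this loop, which is what makes the method practical on a personal computer.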
Finite element modeling and analysis of tires
NASA Technical Reports Server (NTRS)
Noor, A. K.; Andersen, C. M.
1983-01-01
Predicting the response of tires under various loading conditions using finite element technology is addressed. Some of the recent advances in finite element technology which have high potential for application to tire modeling problems are reviewed. The analysis and modeling needs for tires are identified. Topics covered include: reduction methods for large-scale nonlinear analysis, with particular emphasis on the treatment of combined loads and displacement-dependent and nonconservative loadings; development of simple and efficient mixed finite element models for shell analysis, identification of equivalent mixed and purely displacement models, and determination of the advantages of using mixed models; and effective computational models for large-rotation nonlinear problems based on a total Lagrangian description of the deformation.
Helicopter vibration suppression using simple pendulum absorbers on the rotor blade
NASA Technical Reports Server (NTRS)
Hamouda, M.-N. H.; Pierce, G. A.
1981-01-01
A design procedure is presented for the installation of simple pendulums on the blades of a helicopter rotor to suppress the root reactions. The procedure consists of a frequency response analysis for a hingeless rotor blade excited by a harmonic variation of spanwise airload distributions during forward flight, as well as a concentrated load at the tip. The structural modeling of the blade provides for elastic degrees of freedom in flap and lead-lag bending plus torsion. Simple flap and lead-lag pendulums are considered individually. The general nonlinear equations of motion are linearized using a rational ordering scheme. A quasi-steady aerodynamic representation is used in the formulation of the airloads. The solution of the system equations derives from their representation as a transfer matrix. The results include the effect of pendulum tuning on the minimization of the hub reactions.
Preparation and Analysis of Positioned Mononucleosomes
Kulaeva, Olga; Studitsky, Vasily M.
2016-01-01
Short DNA fragments containing single nucleosomes have been extensively employed as simple model experimental systems for analysis of many intranuclear processes, including binding of proteins to nucleosomes, covalent histone modifications, transcription, DNA repair and ATP-dependent chromatin remodeling. Here we describe several recently developed procedures for obtaining and analysis of mononucleosomes assembled on 200–350-bp DNA fragments. PMID:25827872
A study of the flow field surrounding interacting line fires
Trevor Maynard; Marko Princevac; David R. Weise
2016-01-01
The interaction of converging fires often leads to significant changes in fire behavior, including increased flame length, angle, and intensity. In this paper, the fluid mechanics of two adjacent line fires are studied both theoretically and experimentally. A simple potential flow model is used to explain the tilting of interacting flames towards each other, which...
The Polygonal Model: A Simple Representation of Biomolecules as a Tool for Teaching Metabolism
ERIC Educational Resources Information Center
Bonafe, Carlos Francisco Sampaio; Bispo, Jose Ailton Conceição; de Jesus, Marcelo Bispo
2018-01-01
Metabolism involves numerous reactions and organic compounds that the student must master to understand adequately the processes involved. Part of biochemical learning should include some knowledge of the structure of biomolecules, although the acquisition of such knowledge can be time-consuming and may require significant effort from the student.…
Cable logging production rate equations for thinning young-growth Douglas-fir
Chris B. LeDoux; Lawson W. Starnes
1986-01-01
A cable logging thinning simulation model and field study data from cable thinning production studies have been assembled and converted into a set of simple equations. These equations can be used to estimate the hourly production rates of various cable thinning machines operating in the mountainous terrain of western Oregon and western Washington. The equations include...
ERIC Educational Resources Information Center
NEUBERGER, HANS; NICHOLAS, GEORGE
Included in this manual, written for secondary school and college teachers, are descriptions of demonstration models, experiments pertaining to some of the fundamental and applied meteorological concepts, and instructions for making simple weather observations. The criteria for selection of topics were ease and cost of constructing apparatus as well…
A note on the modelling of circular smallholder migration.
Bigsten, A
1988-01-01
"It is argued that circular migration [in Africa] should be seen as an optimization problem, where the household allocates its labour resources across activities, including work which requires migration, so as to maximize the joint family utility function. The migration problem is illustrated in a simple diagram, which makes it possible to analyse economic aspects of migration." excerpt
Do we know what difference a delay makes?
NASA Astrophysics Data System (ADS)
Risbey, James S.; Handel, Mark David; Stone, Peter H.
In our original comment [Risbey et al., 1991] we argued that the work of Schlesinger and Jiang [1991a] is too limited to determine whether or not (as they put it) “the penalty is small for a 10-year delay in initiating the transition to a regime in which greenhouse-gas emissions are reduced.” In their reply, Schlesinger and Jiang [1991b] (hereafter S&J) presented their reasons for concluding definitively that the penalty is small. However, S&J's discussion of the evidence and literature on climate change and greenhouse warming contains significant omissions and misstatements. In dismissing our concern that their model was too simple to evaluate the possibility of abrupt climate change, S&J rely on results from coupled ocean-atmosphere general circulation models (GCMs), in particular the work of Cubasch et al. [1991]. Here S&J make two claims, one of which is incorrect and the other questionable. First, they claim that “the coupled atmosphere-ocean model of Cubasch et al. [1991] does allow the nonlinearities that Risbey et al. [1991] criticize our simple model for not including.” In fact we explicitly mentioned changes in polar ice caps [Oerlemans and van der Veen, 1984] and release of methane from clathrates [MacDonald, 1990; Bell, 1982], neither of which is included in the model of Cubasch et al. [1991]. Indeed, none of the published simulations of global warming using coupled ocean-atmosphere GCMs include these effects. Nor do these models yet include in their enhanced greenhouse simulations many of the possible feedbacks involving the carbon cycle and biosphere [Lashof, 1989; Bacastow and Maier-Reimer, 1990; Sellers, 1987] that could significantly alter greenhouse gas concentrations and surface properties. The published simulations with these models do allow for some changes in deep ocean circulation and cloud behavior, but there is controversy over whether they correctly represent these processes [Marotzke, 1991; Mitchell, 1989; Cess, 1990].
In addition, the coupled models must be arbitrarily tuned (requiring substantial artificial fluxes of heat and moisture) to get the current climate right [Manabe et al., 1991; Cubasch et al., 1991]. Their greenhouse change simulations are at least partly constrained by these flux adjustments.
A detailed comparison of optimality and simplicity in perceptual decision-making
Shen, Shan; Ma, Wei Ji
2017-01-01
Two prominent ideas in the study of decision-making have been that organisms behave near-optimally, and that they use simple heuristic rules. These principles might be operating in different types of tasks, but this possibility cannot be fully investigated without a direct, rigorous comparison within a single task. Such a comparison was lacking in most previous studies, because a) the optimal decision rule was simple; b) no simple suboptimal rules were considered; c) it was unclear what was optimal, or d) a simple rule could closely approximate the optimal rule. Here, we used a perceptual decision-making task in which the optimal decision rule is well-defined and complex, and makes qualitatively distinct predictions from many simple suboptimal rules. We find that all simple rules tested fail to describe human behavior, that the optimal rule accounts well for the data, and that several complex suboptimal rules are indistinguishable from the optimal one. Moreover, we found evidence that the optimal model is close to the true model: first, the better the trial-to-trial predictions of a suboptimal model agree with those of the optimal model, the better that suboptimal model fits; second, our estimate of the Kullback-Leibler divergence between the optimal model and the true model is not significantly different from zero. When observers receive no feedback, the optimal model still describes behavior best, suggesting that sensory uncertainty is implicitly represented and taken into account. Beyond the task and models studied here, our results have implications for best practices of model comparison. PMID:27177259
Modeling and predicting historical volatility in exchange rate markets
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2017-04-01
Volatility modeling and forecasting of currency exchange rates is an important task in business risk management, including treasury risk management, derivatives pricing, and portfolio risk evaluation. The purpose of this study is to present a simple and effective approach for predicting the historical volatility of currency exchange rates. The approach is based on a limited set of technical indicators used as inputs to artificial neural networks (ANN). To show the effectiveness of the proposed approach, it was applied to forecast US/Canada and US/Euro exchange rate volatilities. The forecasting results show that our simple approach outperformed the conventional GARCH and EGARCH with different distribution assumptions, as well as the hybrid GARCH and EGARCH with ANN, in terms of mean absolute error, mean squared error, and Theil's inequality coefficient. Because of its simplicity and effectiveness, the approach is promising for US currency volatility prediction tasks.
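The three yardsticks used in the comparison are easy to state precisely; the sketch below assumes the bounded U1 form of Theil's inequality coefficient:

```python
import numpy as np

def forecast_errors(actual, predicted):
    """Mean absolute error, mean squared error, and Theil's U1
    (0 = perfect forecast, 1 = worst case)."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    mae = float(np.mean(np.abs(a - p)))
    mse = float(np.mean((a - p) ** 2))
    u1 = float(np.sqrt(np.mean((a - p) ** 2))
               / (np.sqrt(np.mean(a ** 2)) + np.sqrt(np.mean(p ** 2))))
    return mae, mse, u1
```

Because the three measures penalize errors differently (U1 is scale-free, MSE punishes outliers), agreement across all three is a stronger result than winning on any single one.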
Data requirements to model creep in 9Cr-1Mo-V steel
NASA Technical Reports Server (NTRS)
Swindeman, R. W.
1988-01-01
Models for creep behavior are helpful in predicting the response of components experiencing stress redistributions due to cyclic loads, and often the analyst would like information that correlates strain rate with history assuming simple hardening rules such as those based on time or strain. Much progress has been made in the development of unified constitutive equations that include both hardening and softening through the introduction of state variables whose evolutions are history dependent. Although it is difficult to estimate specific data requirements for general application, there are several simple measurements that can be made in the course of creep testing and reported in data bases. The issue is whether or not such data could be helpful in developing unified equations, and, if so, how such data should be reported. Data produced on a martensitic 9Cr-1Mo-V-Nb steel were examined with these issues in mind.
Using simple agent-based modeling to inform and enhance neighborhood walkability
2013-01-01
Background Pedestrian-friendly neighborhoods with proximal destinations and services encourage walking and decrease car dependence, thereby contributing to more active and healthier communities. Proximity to key destinations and services is an important aspect of the urban design decision making process, particularly in areas adopting a transit-oriented development (TOD) approach to urban planning, whereby densification occurs within walking distance of transit nodes. Modeling destination access within neighborhoods has been limited to circular catchment buffers or more sophisticated network-buffers generated using geoprocessing routines within geographical information systems (GIS). Both circular and network-buffer catchment methods are problematic. Circular catchment models do not account for street networks, thus do not allow exploratory ‘what-if’ scenario modeling; and network-buffering functionality typically exists within proprietary GIS software, which can be costly and requires a high level of expertise to operate. Methods This study sought to overcome these limitations by developing an open-source simple agent-based walkable catchment tool that can be used by researchers, urban designers, planners, and policy makers to test scenarios for improving neighborhood walkable catchments. A simplified version of an agent-based model was ported to a vector-based open source GIS web tool using data derived from the Australian Urban Research Infrastructure Network (AURIN). The tool was developed and tested with end-user stakeholder working group input. Results The resulting model has proven to be effective and flexible, allowing stakeholders to assess and optimize the walkability of neighborhood catchments around actual or potential nodes of interest (e.g., schools, public transport stops). Users can derive a range of metrics to compare different scenarios modeled. 
These include: catchment area versus circular buffer ratios; mean number of streets crossed; and modeling of different walking speeds and wait time at intersections. Conclusions The tool has the capacity to influence planning and public health advocacy and practice, and by using open-access source software, it is available for use locally and internationally. There is also scope to extend this version of the tool from a simple to a complex model, which includes agents (i.e., simulated pedestrians) ‘learning’ and incorporating other environmental attributes that enhance walkability (e.g., residential density, mixed land use, traffic volume). PMID:24330721
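The difference between a circular buffer and a network catchment comes down to a shortest-path expansion over the street graph, capped at the walking-distance limit. A minimal sketch with an adjacency-list graph (the data structures are illustrative, not the AURIN tool's code):

```python
import heapq

def walkable_catchment(street_graph, origin, max_dist):
    """Nodes reachable from origin within max_dist metres along the
    street network, via Dijkstra-style expansion with edge lengths.
    street_graph: {node: [(neighbour, edge_length_m), ...]}."""
    dist = {origin: 0.0}
    heap = [(0.0, origin)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, length in street_graph.get(node, []):
            nd = d + length
            if nd <= max_dist and nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist
```

The catchment-versus-circular-buffer ratio mentioned above then compares the area covered by these reachable nodes with the area of the ideal circle of radius max_dist.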
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agrawal, Prateek; Batell, Brian; Fox, Patrick J.
2015-05-07
Simple models of weakly interacting massive particles (WIMPs) predict dark matter annihilations into pairs of electroweak gauge bosons, Higgses or tops, which through their subsequent cascade decays produce a spectrum of gamma rays. Intriguingly, an excess in gamma rays coming from near the Galactic center has been consistently observed in Fermi data. A recent analysis by the Fermi collaboration confirms these earlier results. Taking into account the systematic uncertainties in the modelling of the gamma ray backgrounds, we show for the first time that this excess can be well fit by these final states. In particular, for annihilations to (WW, ZZ, hh, tt¯), dark matter with mass between threshold and approximately (165, 190, 280, 310) GeV gives an acceptable fit. The fit range for bb¯ is also enlarged to 35 GeV ≲ m_χ ≲ 165 GeV. These are to be compared to previous fits that concluded only much lighter dark matter annihilating into b, τ, and light quark final states could describe the excess. We demonstrate that simple, well-motivated models of WIMP dark matter, including a thermal-relic neutralino of the MSSM, Higgs portal models, as well as other simplified models, can explain the excess.
A simple model for indentation creep
NASA Astrophysics Data System (ADS)
Ginder, Ryan S.; Nix, William D.; Pharr, George M.
2018-03-01
A simple model for indentation creep is developed that allows one to directly convert creep parameters measured in indentation tests to those observed in uniaxial tests through simple closed-form relationships. The model is based on the expansion of a spherical cavity in a power law creeping material modified to account for indentation loading in a manner similar to that developed by Johnson for elastic-plastic indentation (Johnson, 1970). Although only approximate in nature, the simple mathematical form of the new model makes it useful for general estimation purposes or in the development of other deformation models in which a simple closed-form expression for the indentation creep rate is desirable. Comparison to a more rigorous analysis which uses finite element simulation for numerical evaluation shows that the new model predicts uniaxial creep rates within a factor of 2.5, and usually much better than this, for materials creeping with stress exponents in the range 1 ≤ n ≤ 7. The predictive capabilities of the model are evaluated by comparing it to the more rigorous analysis and several sets of experimental data in which both the indentation and uniaxial creep behavior have been measured independently.
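Whether measured by indentation or uniaxially, power-law creep data of the form strain_rate = A·stress^n are fitted as a straight line in log-log space, and the stress exponent n is what closed-form conversions of this kind act on. A generic fitting sketch (not the paper's cavity-expansion model itself):

```python
import numpy as np

def fit_power_law_creep(stress_mpa, strain_rate_per_s):
    """Fit strain_rate = A * stress**n by linear regression of
    log(strain_rate) on log(stress); returns (A, n)."""
    log_s = np.log(np.asarray(stress_mpa, dtype=float))
    log_e = np.log(np.asarray(strain_rate_per_s, dtype=float))
    n, log_a = np.polyfit(log_s, log_e, 1)
    return float(np.exp(log_a)), float(n)
```

The quoted factor-of-2.5 agreement for 1 ≤ n ≤ 7 refers to rates predicted after such a fit is converted between the indentation and uniaxial loading geometries.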
Wang, Yi-Shan; Potts, Jonathan R
2017-03-07
Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
Upgrades to the REA method for producing probabilistic climate change projections
NASA Astrophysics Data System (ADS)
Xu, Ying; Gao, Xuejie; Giorgi, Filippo
2010-05-01
We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for wide ranges of quality of models. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3
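As a hedged sketch of the general idea behind performance-based weighting (not the REA formulas themselves, which combine multiple variables and statistics), a weighted ensemble average can be written as follows; the inverse-bias weighting rule and all numbers are illustrative stand-ins.

```python
import numpy as np

# Illustrative performance-weighted ensemble average in the spirit of REA.
# Weighting by inverse absolute bias is a simplified stand-in for the paper's
# multi-variable performance metrics, chosen only for illustration.

def weighted_projection(changes, biases, eps=1e-6):
    """changes : projected climate change from each model (e.g. K)
    biases  : each model's bias against observations (same units)"""
    w = 1.0 / (np.abs(biases) + eps)   # better-performing models weigh more
    return float(np.sum(w * changes) / np.sum(w))

dT = np.array([2.1, 2.8, 3.4])     # hypothetical warming projections (K)
bias = np.array([0.2, 1.5, 0.4])   # hypothetical temperature biases (K)
print(weighted_projection(dT, bias))
```

Here the high-bias middle model is down-weighted, pulling the ensemble estimate below the simple average; the spread of such weights across models is what makes the choice of weighting "relevant at fine sub-regional scales" in the abstract's terms.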
Whitcomb, David C; Ermentrout, G Bard
2004-08-01
To develop a simple, physiologically based mathematical model of pancreatic duct cell secretion using experimentally derived parameters that generates pancreatic fluid bicarbonate concentrations of >140 mM after CFTR activation. A new mathematical model was developed simulating a duct cell within a proximal pancreatic duct; it included a sodium-2-bicarbonate cotransporter (NBC) and a sodium-potassium pump (NaK pump) on a chloride-impermeable basolateral membrane, and CFTR on the luminal membrane with a bicarbonate-to-chloride permeability ratio of 0.2 to 1. Chloride-bicarbonate antiporters (Cl/HCO3 AP) were added or subtracted from the basolateral (APb) and luminal (APl) membranes. The model was integrated over time using XPPAUT. This model predicts robust, NaK pump-dependent bicarbonate secretion with opening of the CFTR, generates and maintains pancreatic fluid secretion with bicarbonate concentrations >140 mM, and returns to basal levels with CFTR closure. Limiting CFTR permeability to bicarbonate, as seen in some CFTR mutations, markedly inhibited pancreatic bicarbonate and fluid secretion. A simple CFTR-dependent duct cell model can explain active, high-volume, high-concentration bicarbonate secretion in pancreatic juice that reproduces the experimental findings. This model may also provide insight into why CFTR mutations that predominantly affect bicarbonate permeability predispose to pancreatic dysfunction in humans.
Three-Dimensional Modeling of Aircraft High-Lift Components with Vehicle Sketch Pad
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2016-01-01
Vehicle Sketch Pad (OpenVSP) is a parametric geometry modeler that has been used extensively for conceptual design studies of aircraft, including studies using higher-order analysis. OpenVSP can model flap and slat surfaces using simple shearing of the airfoil coordinates, which is an appropriate level of complexity for lower-order aerodynamic analysis methods. For three-dimensional analysis, however, there is not a built-in method for defining the high-lift components in OpenVSP in a realistic manner, or for controlling their complex motions in a parametric manner that is intuitive to the designer. This paper seeks instead to utilize OpenVSP's existing capabilities, and establish a set of best practices for modeling high-lift components at a level of complexity suitable for higher-order analysis methods. Techniques are described for modeling the flap and slat components as separate three-dimensional surfaces, and for controlling their motion using simple parameters defined in the local hinge-axis frame of reference. To demonstrate the methodology, an OpenVSP model for the Energy-Efficient Transport (EET) AR12 wind-tunnel model has been created, taking advantage of OpenVSP's Advanced Parameter Linking capability to translate the motions of the high-lift components from the hinge-axis coordinate system to a set of transformations in OpenVSP's frame of reference.
Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya
2013-01-01
Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple inputs such as MAPKs and CREB regulate multiple outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
On Cellular Darwinism: Mitochondria.
Bull, Larry
2016-01-01
The significant role of mitochondria within cells is becoming increasingly clear. This letter uses the NKCS model of coupled fitness landscapes to explore aspects of organelle-nucleus coevolution. The phenomenon of mitochondrial diversity is allowed to emerge under a simple intracellular evolutionary process, including varying the relative rate of evolution by the organelle. It is shown how the conditions for the maintenance of more than one genetic variant of mitochondria are similar to those previously suggested as needed for the original symbiotic origins of the relationship using the NKCS model.
Unraveling the Age Hardening Response in U-Nb Alloys
Hackenberg, Robert Errol; Hemphill, Geralyn M. Sewald; Forsyth, Robert Thomas; ...
2016-11-15
Complicating factors that have stymied understanding of uranium-niobium’s aging response are briefly reviewed, including (1) niobium inhomogeneity, (2) machining damage effects on tensile properties, (3) early-time transients of ductility increase, and (4) the variety of phase transformations. A simple Logistic-Arrhenius model was applied to predict yield and ultimate tensile strengths and tensile elongation of U-4Nb as a function of thermal age. Lastly, fits to each model yielded an apparent activation energy that was compared with phase transformation mechanisms.
2017-02-08
cost benefit of the technology. 7.1 COST MODEL A simple cost model for the technology is presented so that a remediation professional can understand … reporting costs. The benefit of the qPCR analyses is that they allow the user to determine if aerobic cometabolism is possible. Because the PHE and … of Chlorinated Ethylenes, February 2017. This document has been cleared for public release; Distribution Statement A.
Econometric model for age- and population-dependent radiation exposures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandquist, G.M.; Slaughter, D.M.; Rogers, V.C.
1991-01-01
The economic impact associated with ionizing radiation exposures in a given human population depends on numerous factors, including the individual's mean economic status as a function of age, the age distribution of the population, the future life expectancy at each age, and the latency period for the occurrence of radiation-induced health effects. A simple mathematical model has been developed that provides an analytical methodology for estimating the societal econometrics of radiation exposure so that radiation-induced health effects can be assessed and compared for economic evaluation.
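The abstract names the model's inputs (economic value by age, population age distribution, life expectancy, latency) without stating how they combine. As a hedged illustration only, one simple way such inputs might be aggregated into an expected societal cost is sketched below; the product-sum rule and all numbers are assumptions, not the paper's model.

```python
import numpy as np

# Hedged sketch: expected societal cost of radiation-induced health effects.
# Inputs mirror those the abstract lists; the simple product-sum below is an
# illustrative combination rule, NOT the paper's actual econometric model.

def expected_cost(age_dist, risk, latency, life_exp, annual_value):
    """Sum over age groups of: population fraction * health-effect risk *
    value of productive years lost. An effect appearing only after `latency`
    years costs nothing to a group not expected to live that long."""
    years_lost = np.maximum(np.asarray(life_exp) - latency, 0.0)
    return float(np.sum(np.asarray(age_dist) * risk * years_lost * annual_value))

cost = expected_cost(age_dist=[0.5, 0.5], risk=0.01, latency=10.0,
                     life_exp=[40.0, 5.0], annual_value=1000.0)
```

In this toy case only the younger group, whose remaining life expectancy exceeds the latency period, contributes to the total, which is exactly the age-dependence the abstract emphasizes.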
Benzi, Roberto; Ching, Emily S C; Horesh, Nizan; Procaccia, Itamar
2004-02-20
A simple model of the effect of polymer concentration on the amount of drag reduction in turbulence is presented, simulated, and analyzed. The qualitative phase diagram of drag coefficient versus Reynolds number (Re) is recaptured in this model, including the theoretically elusive onset of drag reduction and the maximum drag reduction (MDR) asymptote. The Re-dependent drag and the MDR are analytically explained, and the dependence of the amount of drag on material parameters is rationalized.
The ARC/INFO geographic information system
NASA Astrophysics Data System (ADS)
Morehouse, Scott
1992-05-01
ARC/INFO is a general-purpose system for processing geographic information. It is based on a relatively simple model of geographic space—the coverage—and contains an extensive set of geoprocessing tools which operate on coverages. ARC/INFO is used in a wide variety of application areas, including: natural-resource inventory and planning, cadastral database development and mapping, urban and regional planning, and cartography. This paper is an overview of ARC/INFO and discusses the ARC/INFO conceptual architecture, data model, operators, and user interface.
NASA Technical Reports Server (NTRS)
Wilson, R. E.
1981-01-01
Aerodynamic developments for vertical axis and horizontal axis wind turbines are given that relate to the performance and aerodynamic loading of these machines. Included are: (1) a fixed wake aerodynamic model of the Darrieus vertical axis wind turbine; (2) experimental results that suggest the existence of a laminar flow Darrieus vertical axis turbine; (3) a simple aerodynamic model for the turbulent windmill/vortex ring state of horizontal axis rotors; and (4) a yawing moment of a rigid hub horizontal axis wind turbine that is related to blade coning.
NASA Technical Reports Server (NTRS)
Kowalski, Marc Edward
2009-01-01
A method for the prediction of time-domain signatures of chafed coaxial cables is presented. The method is quasi-static in nature, and is thus efficient enough to be included in inference and inversion routines. Unlike previous models proposed, no restriction on the geometry or size of the chafe is required in the present approach. The model is validated and its speed is illustrated via comparison to simulations from a commercial, three-dimensional electromagnetic simulator.
Dayside auroral arcs and convection
NASA Technical Reports Server (NTRS)
Reiff, P. H.; Burch, J. L.; Heelis, R. A.
1978-01-01
Recent Defense Meteorological Satellite Program and International Satellite for Ionospheric Studies dayside auroral observations show two striking features: a lack of visible auroral arcs near noon and occasional fan-shaped arcs radiating away from noon on both the morning and afternoon sides of the auroral oval. A simple model which includes these two features is developed by reference to the dayside convection pattern of Heelis et al. (1976). The model may be testable in the near future with simultaneous convection, current and auroral light data.
Synchronisation of chaos and its applications
NASA Astrophysics Data System (ADS)
Eroglu, Deniz; Lamb, Jeroen S. W.; Pereira, Tiago
2017-07-01
Dynamical networks are important models for the behaviour of complex systems, modelling physical, biological and societal systems, including the brain, food webs, epidemic disease in populations, power grids and many others. Such dynamical networks can exhibit behaviour in which deterministic chaos, exhibiting unpredictability and disorder, coexists with synchronisation, a classical paradigm of order. We survey the main theory behind complete, generalised and phase synchronisation phenomena in simple as well as complex networks and discuss applications to secure communications, parameter estimation and the anticipation of chaos.
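Complete synchronisation, the simplest of the phenomena the survey covers, can be demonstrated in a few lines: two chaotic logistic maps coupled diffusively converge to the same trajectory once the coupling is strong enough. The parameter values below are illustrative, not taken from the survey.

```python
# Hedged illustration of complete synchronisation: two chaotic logistic maps
# (r = 4, fully chaotic) coupled diffusively with strength k. For this
# coupling the difference between trajectories contracts by |1 - 2k| * r at
# most per step, so the two chaotic orbits converge to each other.

def coupled_step(x, y, r=4.0, k=0.6):
    fx, fy = r * x * (1 - x), r * y * (1 - y)
    return fx + k * (fy - fx), fy + k * (fx - fy)

x, y = 0.3, 0.7          # very different initial conditions
for _ in range(200):
    x, y = coupled_step(x, y)
print(abs(x - y))        # near zero: the maps have synchronised
```

Each map on its own is chaotic (nearby orbits diverge), yet the coupled pair is ordered in the sense that x(t) − y(t) → 0, which is precisely the coexistence of chaos and synchronisation described above.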
Impact resistance of fiber composites - Energy-absorbing mechanisms and environmental effects
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1985-01-01
Energy absorbing mechanisms were identified by several approaches. The energy absorbing mechanisms considered are those in unidirectional composite beams subjected to impact. The approaches used include: mechanistic models, statistical models, transient finite element analysis, and simple beam theory. Predicted results are correlated with experimental data from Charpy impact tests. The environmental effects on impact resistance are evaluated. Working definitions for energy absorbing and energy releasing mechanisms are proposed and a dynamic fracture progression is outlined. Possible generalizations to angle-plied laminates are described.
Impact resistance of fiber composites: Energy absorbing mechanisms and environmental effects
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1983-01-01
Energy absorbing mechanisms were identified by several approaches. The energy absorbing mechanisms considered are those in unidirectional composite beams subjected to impact. The approaches used include: mechanistic models, statistical models, transient finite element analysis, and simple beam theory. Predicted results are correlated with experimental data from Charpy impact tests. The environmental effects on impact resistance are evaluated. Working definitions for energy absorbing and energy releasing mechanisms are proposed and a dynamic fracture progression is outlined. Possible generalizations to angle-plied laminates are described.
van Rhee, Henk; Hak, Tony
2017-01-01
We present a new tool for meta‐analysis, Meta‐Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta‐analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta‐Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta‐analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp‐Hartung adjustment of the DerSimonian‐Laird estimator. However, more advanced meta‐analysis methods such as meta‐analytical structural equation modelling and meta‐regression with multiple covariates are not available. In summary, Meta‐Essentials may prove a valuable resource for meta‐analysts, including researchers, teachers, and students. PMID:28801932
NASA Astrophysics Data System (ADS)
Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.
2016-01-01
Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ˜1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.
Regularized lattice Boltzmann model for immiscible two-phase flows with power-law rheology
NASA Astrophysics Data System (ADS)
Ba, Yan; Wang, Ningning; Liu, Haihu; Li, Qiang; He, Guoqiang
2018-03-01
In this work, a regularized lattice Boltzmann color-gradient model is developed for the simulation of immiscible two-phase flows with power-law rheology. This model is as simple as the Bhatnagar-Gross-Krook (BGK) color-gradient model except that an additional regularization step is introduced prior to the collision step. In the regularization step, the pseudo-inverse method is adopted as an alternative solution for the nonequilibrium part of the total distribution function, and it can be easily extended to other discrete velocity models no matter whether a forcing term is considered or not. The obtained expressions for the nonequilibrium part are merely related to macroscopic variables and velocity gradients that can be evaluated locally. Several numerical examples, including the single-phase and two-phase layered power-law fluid flows between two parallel plates, and the droplet deformation and breakup in a simple shear flow, are conducted to test the capability and accuracy of the proposed color-gradient model. Results show that the present model is more stable and accurate than the BGK color-gradient model for power-law fluids with a wide range of power-law indices. Compared to its multiple-relaxation-time counterpart, the present model can increase the computing efficiency by around 15%, while keeping the same accuracy and stability. Also, the present model is found to be capable of reasonably predicting the critical capillary number of droplet breakup.
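The constitutive law being simulated above is standard power-law (Ostwald–de Waele) rheology, which is simple enough to state directly; the parameter values in the sketch are hypothetical.

```python
# Ostwald-de Waele (power-law) rheology: apparent viscosity as a function of
# shear rate. This is the constitutive law the lattice Boltzmann model
# simulates; k and n values below are illustrative only.

def power_law_viscosity(k, n, shear_rate):
    """k : consistency index; n : power-law index.
    n < 1: shear-thinning; n = 1: Newtonian (viscosity = k); n > 1: shear-thickening."""
    return k * shear_rate ** (n - 1)

mu_newtonian = power_law_viscosity(1.0, 1.0, 50.0)   # independent of shear rate
mu_thinning = power_law_viscosity(1.0, 0.5, 4.0)     # drops as shear rate grows
```

The "wide range of power-law indices" over which the abstract reports improved stability refers to this exponent n, which is what makes the local viscosity, and hence the relaxation rate in the collision step, vary across the flow field.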
Mathematical Modeling Of Life-Support Systems
NASA Technical Reports Server (NTRS)
Seshan, Panchalam K.; Ganapathi, Balasubramanian; Jan, Darrell L.; Ferrall, Joseph F.; Rohatgi, Naresh K.
1994-01-01
Generic hierarchical model of life-support system developed to facilitate comparisons of options in design of system. Model represents combinations of interdependent subsystems supporting microbes, plants, fish, and land animals (including humans). Generic model enables rapid configuration of variety of specific life support component models for tradeoff studies culminating in single system design. Enables rapid evaluation of effects of substituting alternate technologies and even entire groups of technologies and subsystems. Used to synthesize and analyze life-support systems ranging from relatively simple, nonregenerative units like aquariums to complex closed-loop systems aboard submarines or spacecraft. Model, called Generic Modular Flow Schematic (GMFS), coded in such chemical-process-simulation languages as Aspen Plus and expressed as three-dimensional spreadsheet.
Monte Carlo based statistical power analysis for mediation models: methods and software.
Zhang, Zhiyong
2014-12-01
The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
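A minimal sketch of the approach described above, bootstrap-based Monte Carlo power estimation for a simple mediation model (x → m → y), is given below. The effect sizes, sample size, and replication counts are illustrative, and the paper's R package (bmem) implements the full method, including nonnormal data and the more complex model forms.

```python
import numpy as np

# Hedged sketch of Monte Carlo power analysis for a simple mediation model.
# Power = fraction of simulated data sets in which the bootstrap percentile
# CI for the indirect effect a*b excludes zero. All settings are illustrative.

rng = np.random.default_rng(1)

def slope(x, y):
    """OLS slope of y on x (simple regression)."""
    x = x - x.mean()
    return float((x @ (y - y.mean())) / (x @ x))

def mediation_power(a=0.5, b=0.5, n=100, nrep=100, nboot=200, alpha=0.05):
    hits = 0
    for _ in range(nrep):
        x = rng.normal(size=n)                   # simulate one data set
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        ab = np.empty(nboot)                     # bootstrap the indirect effect
        for i in range(nboot):
            idx = rng.integers(0, n, n)
            ab[i] = slope(x[idx], m[idx]) * slope(m[idx], y[idx])
        lo, hi = np.percentile(ab, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        hits += (lo > 0) or (hi < 0)             # CI excludes zero -> detected
    return hits / nrep

power = mediation_power()
```

With moderate paths (a = b = 0.5) and n = 100 this design is well powered; shrinking a, b, or n in the call shows how quickly detection of the indirect effect degrades.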
Hypo-Elastic Model for Lung Parenchyma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freed, Alan D.; Einstein, Daniel R.
2012-03-01
A simple elastic isotropic constitutive model for the spongy tissue in lung is derived from the theory of hypoelasticity. The model is shown to exhibit a pressure-dependent behavior that has been interpreted by some as indicating extensional anisotropy. In contrast, we show that this behavior arises naturally from an analysis of isotropic hypoelastic invariants, and is a likely result of non-linearity, not anisotropy. The response of the model is determined analytically for several boundary value problems used for material characterization. These responses give insight into both the material behavior as well as admissible bounds on parameters. The model is characterized against published experimental data for dog lung. Future work includes non-elastic model behavior.
Abreu, P C; Greenberg, D A; Hodge, S E
1999-09-01
Several methods have been proposed for linkage analysis of complex traits with unknown mode of inheritance. These methods include the LOD score maximized over disease models (MMLS) and the "nonparametric" linkage (NPL) statistic. In previous work, we evaluated the increase of type I error when maximizing over two or more genetic models, and we compared the power of MMLS to detect linkage, in a number of complex modes of inheritance, with analysis assuming the true model. In the present study, we compare MMLS and NPL directly. We simulated 100 data sets with 20 families each, using 26 generating models: (1) 4 intermediate models (penetrance of heterozygote between that of the two homozygotes); (2) 6 two-locus additive models; and (3) 16 two-locus heterogeneity models (admixture alpha = 1.0, 0.7, 0.5, and 0.3; alpha = 1.0 replicates simple Mendelian models). For LOD scores, we assumed dominant and recessive inheritance with 50% penetrance. We took the higher of the two maximum LOD scores and subtracted 0.3 to correct for multiple tests (MMLS-C). We compared expected maximum LOD scores and power, using MMLS-C and NPL as well as the true model. Since NPL uses only the affected family members, we also performed an affecteds-only analysis using MMLS-C. The MMLS-C was both uniformly more powerful than NPL for most cases we examined, except when linkage information was low, and close to the results for the true model under locus heterogeneity. We still found better power for the MMLS-C compared with NPL in affecteds-only analysis. The results show that use of two simple modes of inheritance at a fixed penetrance can have more power than NPL when the trait mode of inheritance is complex and when there is heterogeneity in the data set.
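The MMLS-C statistic itself is simple enough to state in code. The sketch below assumes the two LOD scores have already been computed under the fixed dominant and recessive models (with 50% penetrance, as in the study); the example scores are hypothetical.

```python
# MMLS-C as described in the abstract: maximize the LOD score over two fixed
# trait models (dominant and recessive) and subtract 0.3 to correct for the
# multiple tests. The example LOD values are hypothetical.

def mmls_c(lod_dominant, lod_recessive, correction=0.3):
    return max(lod_dominant, lod_recessive) - correction

score = mmls_c(2.9, 1.4)   # the dominant-model score wins here
```

The fixed 0.3 correction is what lets the maximized score be compared against conventional LOD thresholds despite testing two models.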
Vanuytrecht, Eline; Thorburn, Peter J
2017-05-01
Elevated atmospheric CO2 concentrations ([CO2]) cause direct changes in crop physiological processes (e.g. photosynthesis and stomatal conductance). To represent these CO2 responses, commonly used crop simulation models have been amended, using simple and semicomplex representations of the processes involved. Yet, there is no standard approach to and often poor documentation of these developments. This study used a bottom-up approach (starting with the APSIM framework as case study) to evaluate modelled responses in a consortium of commonly used crop models and illuminate whether variation in responses reflects true uncertainty in our understanding compared to arbitrary choices of model developers. Diversity in simulated CO2 responses and limited validation were common among models, both within the APSIM framework and more generally. Whereas production responses show some consistency up to moderately high [CO2] (around 700 ppm), transpiration and stomatal responses vary more widely in nature and magnitude (e.g. a decrease in stomatal conductance varying between 35% and 90% among models was found for [CO2] doubling to 700 ppm). Most notably, nitrogen responses were found to be included in few crop models despite being commonly observed and critical for the simulation of photosynthetic acclimation, crop nutritional quality and carbon allocation. We suggest harmonization and consideration of more mechanistic concepts in particular subroutines, for example, for the simulation of N dynamics, as a way to improve our predictive understanding of CO2 responses and capture secondary processes. Intercomparison studies could assist in this aim, provided that they go beyond simple output comparison and explicitly identify the representations and assumptions that are causal for intermodel differences. Additionally, validation and proper documentation of the representation of CO2 responses within models should be prioritized. © 2017 John Wiley & Sons Ltd.
Pe'er, Guy; Zurita, Gustavo A.; Schober, Lucia; Bellocq, Maria I.; Strer, Maximilian; Müller, Michael; Pütz, Sandro
2013-01-01
Landscape simulators are widely applied in landscape ecology for generating landscape patterns. These models can be divided into two categories: pattern-based models that generate spatial patterns irrespective of the processes that shape them, and process-based models that attempt to generate patterns based on the processes that shape them. The latter often tend toward complexity in an attempt to obtain high predictive precision, but are rarely used for generic or theoretical purposes. Here we show that a simple process-based simulator can generate a variety of spatial patterns including realistic ones, typifying landscapes fragmented by anthropogenic activities. The model “G-RaFFe” generates roads and fields to reproduce the processes in which forests are converted into arable lands. For a selected level of habitat cover, three factors dominate its outcomes: the number of roads (accessibility), maximum field size (accounting for land ownership patterns), and maximum field disconnection (which enables field to be detached from roads). We compared the performance of G-RaFFe to three other models: Simmap (neutral model), Qrule (fractal-based) and Dinamica EGO (with 4 model versions differing in complexity). A PCA-based analysis indicated G-RaFFe and Dinamica version 4 (most complex) to perform best in matching realistic spatial patterns, but an alternative analysis which considers model variability identified G-RaFFe and Qrule as performing best. We also found model performance to be affected by habitat cover and the actual land-uses, the latter reflecting on land ownership patterns. We suggest that simple process-based generators such as G-RaFFe can be used to generate spatial patterns as templates for theoretical analyses, as well as for gaining better understanding of the relation between spatial processes and patterns. 
We suggest caution in applying neutral or fractal-based approaches, since spatial patterns that typify anthropogenic landscapes are often non-fractal in nature. PMID:23724108
Anderson, Beth M.; Stevens, Michael C.; Glahn, David C.; Assaf, Michal; Pearlson, Godfrey D.
2013-01-01
We present a modular, high-performance, open-source database system that incorporates popular neuroimaging database features with novel peer-to-peer sharing and a simple installation. An increasing number of imaging centers have created a massive amount of neuroimaging data since fMRI became popular more than 20 years ago, with much of that data unshared. The Neuroinformatics Database (NiDB) provides a stable platform to store and manipulate neuroimaging data and addresses several of the impediments to data sharing presented by the INCF Task Force on Neuroimaging Datasharing, including 1) motivation to share data, 2) technical issues, and 3) standards development. NiDB solves these problems by 1) minimizing PHI use and providing a cost-effective, simple, locally stored platform, 2) storing and associating all data (including genome) with a subject and creating a peer-to-peer sharing model, and 3) defining a sample, normalized definition of a data storage structure that is used in NiDB. NiDB not only simplifies the local storage and analysis of neuroimaging data, but also enables simple sharing of raw data and analysis methods, which may encourage further sharing. PMID:23912507
The time course of corticospinal excitability during a simple reaction time task.
Kennefick, Michael; Maslovat, Dana; Carlsen, Anthony N
2014-01-01
The production of movement in a simple reaction time task can be separated into two time periods: the foreperiod, which is thought to include preparatory processes, and the reaction time interval, which includes initiation processes. To better understand these processes, transcranial magnetic stimulation has been used to probe corticospinal excitability at various time points during response preparation and initiation. Previous research has shown that excitability decreases prior to the "go" stimulus and increases following the "go"; however, these two time frames have been examined independently. The purpose of this study was to measure changes in corticospinal excitability during both the foreperiod and the reaction time interval in a single experiment, relative to a resting baseline level. Participants performed a button press movement in a simple reaction time task and excitability was measured during rest, the foreperiod, and the reaction time interval. Results indicated that during the foreperiod, excitability levels quickly increased from baseline with the presentation of the warning signal, followed by a period of stable excitability leading up to the "go" signal, and finally a rapid increase in excitability during the reaction time interval. This excitability time course is consistent with neural activation models that describe movement preparation and response initiation.
A simple method for measurement of maximal downstroke power on friction-loaded cycle ergometer.
Morin, Jean-Benoît; Belli, Alain
2004-01-01
The aim of this study was to propose and validate a post-hoc correction method to obtain maximal power values taking into account inertia of the flywheel during sprints on friction-loaded cycle ergometers. This correction method was obtained from a basic postulate of linear deceleration-time evolution during the initial phase (until maximal power) of a sprint and included simple parameters such as flywheel inertia, maximal velocity, time to reach maximal velocity, and friction force. The validity of this model was tested by comparing measured and calculated maximal power values for 19 sprint bouts performed by five subjects against 0.6-1 N kg(-1) friction loads. Non-significant differences between measured and calculated maximal power (1151+/-169 vs. 1148+/-170 W) and a mean error index of 1.31+/-1.20% (ranging from 0.09% to 4.20%) showed the validity of this method. Furthermore, the differences between measured maximal power and power neglecting inertia (20.4+/-7.6%, ranging from 9.5% to 33.2%) emphasized the usefulness of correcting power in studies of anaerobic power that neglect inertia, as well as the practical value of this simple post-hoc method.
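The spirit of the correction can be sketched numerically: under the linear acceleration assumption, the inertial power needed to spin up the flywheel is added to the friction power. This is not the paper's exact equations; the equivalent-mass formulation and all numbers are illustrative.

```python
def corrected_max_power(friction_force, v_max, t_vmax, equivalent_mass):
    """Sketch of an inertia-corrected maximal power estimate.

    friction_force : friction load (N)
    v_max          : maximal velocity (m/s)
    t_vmax         : time to reach maximal velocity (s)
    equivalent_mass: flywheel inertia expressed as a linear-equivalent mass (kg)
    """
    accel = v_max / t_vmax               # mean acceleration (linear assumption)
    p_friction = friction_force * v_max  # power dissipated against friction
    p_inertia = equivalent_mass * accel * v_max  # power accelerating the flywheel
    return p_friction + p_inertia
```

Neglecting the inertial term recovers the friction-only power, which is what the abstract reports as an underestimate of roughly 10-33%.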
HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.
Juusola, Jessie L; Brandeau, Margaret L
2016-04-01
To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
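The base-case ordering found above (fund the most cost-effective program first: CBE, then ART, then PrEP) can be illustrated with a toy greedy allocator. The program names, costs, and benefit ratios below are invented for illustration; the paper's actual model is a linear program with secondary benefits and diseconomies of scale.

```python
def allocate(budget, programs):
    """Greedily fund programs in decreasing order of benefit per dollar.

    programs: (name, cost_to_fully_fund, qalys_per_dollar) tuples (illustrative).
    Returns a dict mapping program name to dollars allocated.
    """
    plan, remaining = {}, budget
    for name, cost, ratio in sorted(programs, key=lambda p: -p[2]):
        spend = min(cost, remaining)   # fully fund if the budget allows
        plan[name] = spend
        remaining -= spend
    return plan
```

With a budget smaller than the total program cost, the cheapest-per-QALY program is fully funded first and the least cost-effective one receives whatever remains, matching the qualitative prioritization in the abstract.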
Predicting acute pain after cesarean delivery using three simple questions.
Pan, Peter H; Tonidandel, Ashley M; Aschenbrenner, Carol A; Houle, Timothy T; Harris, Lynne C; Eisenach, James C
2013-05-01
Interindividual variability in postoperative pain presents a clinical challenge. Preoperative quantitative sensory testing is useful but time consuming in predicting postoperative pain intensity. The current study was conducted to develop and validate a predictive model of acute postcesarean pain using a simple three-item preoperative questionnaire. A total of 200 women scheduled for elective cesarean delivery under subarachnoid anesthesia were enrolled (192 subjects analyzed). Patients were asked to rate the intensity of loudness of audio tones, their level of anxiety and anticipated pain, and analgesic need from surgery. Postoperatively, patients reported the intensity of evoked pain. Regression analysis was performed to generate a predictive model for pain from these measures. A validation cohort of 151 women was enrolled to test the reliability of the model (131 subjects analyzed). Responses from each of the three preoperative questions correlated moderately with 24-h evoked pain intensity (r = 0.24-0.33, P < 0.001). Audio tone rating added uniquely, but minimally, to the model and was not included in the predictive model. The multiple regression analysis yielded a statistically significant model (R = 0.20, P < 0.001), whereas the validation cohort reliably showed a very similar regression line (R = 0.18). In predicting the upper 20th percentile of evoked pain scores, the optimal cut point was 46.9 (z = 0.24), such that a sensitivity of 0.68 and a specificity of 0.67 were as balanced as possible. This simple three-item questionnaire is useful in helping to predict postcesarean evoked pain intensity, and could be applied in further research and clinical practice to tailor analgesic therapy to those who need it most.
Simple and Hierarchical Models for Stochastic Test Misgrading.
ERIC Educational Resources Information Center
Wang, Jianjun
1993-01-01
Test misgrading is treated as a stochastic process. The expected number of misgradings, inter-occurrence time of misgradings, and waiting time for the "n"th misgrading are discussed based on a simple Poisson model and a hierarchical Beta-Poisson model. Examples of model construction are given. (SLD)
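Under the simple Poisson model mentioned above, the quantities discussed have closed forms: the expected number of misgradings in time t is λt, inter-occurrence times are exponential with mean 1/λ, and the waiting time for the nth misgrading is gamma-distributed with mean n/λ. A minimal sketch (the rate λ and horizon t are illustrative; the hierarchical Beta-Poisson case is not shown):

```python
def poisson_misgrading_summary(rate, t, n):
    """Closed-form summaries of a homogeneous Poisson misgrading process."""
    expected_count = rate * t   # E[N(t)] = lambda * t
    mean_gap = 1.0 / rate       # mean exponential inter-occurrence time
    mean_wait_nth = n / rate    # mean of the gamma(n, lambda) waiting time
    return expected_count, mean_gap, mean_wait_nth
```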
A Simple Exercise Reveals the Way Students Think about Scientific Modeling
ERIC Educational Resources Information Center
Ruebush, Laura; Sulikowski, Michelle; North, Simon
2009-01-01
Scientific modeling is an integral part of contemporary science, yet many students have little understanding of how models are developed, validated, and used to predict and explain phenomena. A simple modeling exercise led to significant gains in understanding key attributes of scientific modeling while revealing some stubborn misconceptions.…
USDA-ARS?s Scientific Manuscript database
The coupling of land surface models and hydrological models potentially improves the land surface representation, benefiting both the streamflow prediction capabilities as well as providing improved estimates of water and energy fluxes into the atmosphere. In this study, the simple biosphere model 2...
NASA Astrophysics Data System (ADS)
Mundhenk, T. Nathan; Ni, Kang-Yu; Chen, Yang; Kim, Kyungnam; Owechko, Yuri
2012-01-01
An aerial multiple camera tracking paradigm needs not only to spot unknown targets and track them, but also to handle target reacquisition as well as target handoff to other cameras in the operating theater. Here we discuss such a system, which is designed to spot unknown targets, track them, segment the useful features and then create a signature fingerprint for the object so that it can be reacquired or handed off to another camera. The tracking system spots unknown objects by subtracting background motion from observed motion, allowing it to find targets in motion even if the camera platform itself is moving. The area of motion is then matched to segmented regions returned by the EDISON mean shift segmentation tool. Whole segments which have common motion and which are contiguous to each other are grouped into a master object. Once master objects are formed, we have a tight bound on which to extract features for the purpose of forming a fingerprint. This is done using color and simple entropy features. These can be placed into a myriad of different fingerprints. To keep data transmission and storage size low for camera handoff of targets, we try several different simple techniques. These include the Histogram, Spatiogram and Single Gaussian Model. These are tested by simulating a very large number of target losses in six videos, over an interval of 1000 frames each, from the DARPA VIVID video set. Since the fingerprints are very simple, they are not expected to be valid for long periods of time. As such, we test the shelf life of fingerprints: how long a fingerprint remains good when stored away between target appearances. Shelf life gives us a second metric of goodness and tells us whether a fingerprint method has better accuracy over longer periods.
In videos which contain multiple vehicle occlusions and vehicles of highly similar appearance we obtain a reacquisition rate for automobiles of over 80% using the simple single Gaussian model compared with the null hypothesis of <20%. Additionally, the performance for fingerprints stays well above the null hypothesis for as much as 800 frames. Thus, a simple and highly compact single Gaussian model is useful for target reacquisition. Since the model is agnostic to view point and object size, it is expected to perform as well on a test of target handoff. Since some of the performance degradation is due to problems with the initial target acquisition and tracking, the simple Gaussian model may perform even better with an improved initial acquisition technique. Also, since the model makes no assumption about the object to be tracked, it should be possible to use it to fingerprint a multitude of objects, not just cars. Further accuracy may be obtained by creating manifolds of objects from multiple samples.
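The single Gaussian fingerprint idea (summarize an object's color features by per-channel mean and variance, then compare fingerprints) can be sketched as below. The per-channel independence and the variance-normalized distance are my assumptions, not the paper's exact formulation:

```python
def gaussian_fingerprint(features):
    """Per-channel mean and (population) variance of (r, g, b) feature samples."""
    n = len(features)
    means = [sum(f[i] for f in features) / n for i in range(3)]
    variances = [sum((f[i] - means[i]) ** 2 for f in features) / n
                 for i in range(3)]
    return means, variances

def fingerprint_distance(fp_a, fp_b, eps=1e-9):
    """Variance-normalized squared distance between two fingerprints."""
    (ma, va), (mb, vb) = fp_a, fp_b
    return sum((ma[i] - mb[i]) ** 2 / (va[i] + vb[i] + eps) for i in range(3))
```

A stored fingerprint is only a handful of numbers per object, which is why this representation is attractive for low-bandwidth camera handoff.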
Combinatorial structures to modeling simple games and applications
NASA Astrophysics Data System (ADS)
Molinero, Xavier
2017-09-01
We connect three different topics: combinatorial structures, game theory and chemistry. In particular, we establish the bases to represent some simple games, defined as influence games, and molecules, defined from atoms, by using combinatorial structures. First, we characterize simple games as influence games using influence graphs. This lets us model simple games as combinatorial structures (from the viewpoint of structures or graphs). Second, we formally define molecules as combinations of atoms. This lets us model molecules as combinatorial structures (from the viewpoint of combinations). It remains open to generate such combinatorial structures using specific techniques such as genetic algorithms, (meta-)heuristic algorithms and parallel programming, among others.
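As a concrete special case of a simple game, a weighted voting game declares a coalition winning when its total weight reaches a quota; the minimal winning coalitions then characterize the game. The weights and quota below are invented for illustration; the paper works with the more general influence-graph representation.

```python
from itertools import combinations

def minimal_winning_coalitions(weights, quota):
    """All minimal winning coalitions of a weighted voting game."""
    players = range(len(weights))
    winning = [set(c)
               for r in range(len(weights) + 1)
               for c in combinations(players, r)
               if sum(weights[i] for i in c) >= quota]
    # A winning coalition is minimal if no proper subset of it also wins.
    return [w for w in winning if not any(v < w for v in winning)]
```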
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tournassat, C.; Tinnacher, R. M.; Grangeon, S.
The prediction of U(VI) adsorption onto montmorillonite clay is confounded by the complexities of: (1) the montmorillonite structure in terms of adsorption sites on basal and edge surfaces, and the complex interactions between the electrical double layers at these surfaces, and (2) U(VI) solution speciation, which can include cationic, anionic and neutral species. Previous U(VI)-montmorillonite adsorption and modeling studies have typically expanded classical surface complexation modeling approaches, initially developed for simple oxides, to include both cation exchange and surface complexation reactions. However, previous models have not taken into account the unique characteristics of electrostatic surface potentials that occur at montmorillonite edge sites, where the electrostatic surface potential of basal plane cation exchange sites influences the surface potential of neighboring edge sites (‘spillover’ effect).
A nonequilibrium model for a moderate pressure hydrogen microwave discharge plasma
NASA Technical Reports Server (NTRS)
Scott, Carl D.
1993-01-01
This document describes a simple nonequilibrium energy exchange and chemical reaction model to be used in a computational fluid dynamics calculation for a hydrogen plasma excited by microwaves. The model takes into account the exchange between the electrons and excited states of molecular and atomic hydrogen. Specifically, electron-translation, electron-vibration, translation-vibration, ionization, and dissociation are included. The model assumes three temperatures, translational/rotational, vibrational, and electron, each describing a Boltzmann distribution for its respective energy mode. The energy from the microwave source is coupled to the energy equation via a source term that depends on an effective electric field which must be calculated outside the present model. This electric field must be found by coupling the results of the fluid dynamics and kinetics solution with a solution to Maxwell's equations that includes the effects of the plasma permittivity. The solution to Maxwell's equations is not within the scope of this present paper.
From individual choice to group decision-making
NASA Astrophysics Data System (ADS)
Galam, Serge; Zucker, Jean-Daniel
2000-12-01
Some universal features are independent of both the social nature of the individuals making the decision and the nature of the decision itself. On this basis a simple magnet-like model is built. Pair interactions are introduced to measure the degree of exchange among individuals while discussing. An external uniform field is included to account for a possible pressure from outside. Individual biases with respect to the issue at stake are also included using local random fields. A unique postulate of minimum conflict is assumed. The model is then solved with emphasis on its psycho-sociological implications. Counter-intuitive results are obtained. At this stage no new physical technicality is involved. Instead, the full psycho-sociological implications of the model are drawn. A few cases are then detailed to illustrate them. In addition, several numerical experiments based on our model are shown to both give insight into the dynamics of the model and suggest further research directions.
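The ingredients listed above (pair exchange, a uniform external field, and local random-field biases, combined through a minimum-conflict postulate) suggest an Ising-like cost function. The sketch below is my reading of that description, with illustrative parameter names, not the paper's exact Hamiltonian:

```python
def conflict(spins, edges, J, H, biases):
    """Ising-like 'conflict' of an opinion configuration (each spin is +1 or -1)."""
    pair = -J * sum(spins[i] * spins[j] for i, j in edges)  # exchange while discussing
    field = -H * sum(spins)                                 # uniform outside pressure
    bias = -sum(b * s for b, s in zip(biases, spins))       # individual random-field biases
    return pair + field + bias
```

Minimizing this quantity over configurations is one way to implement the minimum-conflict postulate; a configuration aligned with the field and biases has lower conflict than a discordant one.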
The outflow structure of GW170817 from late-time broad-band observations
NASA Astrophysics Data System (ADS)
Troja, E.; Piro, L.; Ryan, G.; van Eerten, H.; Ricci, R.; Wieringa, M. H.; Lotti, S.; Sakamoto, T.; Cenko, S. B.
2018-07-01
We present our broad-band study of GW170817 from radio to hard X-rays, including NuSTAR and Chandra observations up to 165 d after the merger, and a multimessenger analysis including LIGO constraints. The data are compared with predictions from a wide range of models, providing the first detailed comparison between non-trivial cocoon and jet models. Homogeneous and power-law shaped jets, as well as simple cocoon models, are ruled out by the data, while both a Gaussian shaped jet and a cocoon with energy injection can describe the current data set for a reasonable range of physical parameters, consistent with the typical values derived from short GRB afterglows. We propose that these models can be unambiguously discriminated by future observations measuring the post-peak behaviour, with Fν ∝ t^(-1.0) for the cocoon and Fν ∝ t^(-2.5) for the jet model.
González-Ramírez, Laura R.; Ahmed, Omar J.; Cash, Sydney S.; Wayne, C. Eugene; Kramer, Mark A.
2015-01-01
Epilepsy—the condition of recurrent, unprovoked seizures—manifests in brain voltage activity with characteristic spatiotemporal patterns. These patterns include stereotyped semi-rhythmic activity produced by aggregate neuronal populations, and organized spatiotemporal phenomena, including waves. To assess these spatiotemporal patterns, we develop a mathematical model consistent with the observed neuronal population activity and determine analytically the parameter configurations that support traveling wave solutions. We then utilize high-density local field potential data recorded in vivo from human cortex preceding seizure termination from three patients to constrain the model parameters, and propose basic mechanisms that contribute to the observed traveling waves. We conclude that a relatively simple and abstract mathematical model consisting of localized interactions between excitatory cells with slow adaptation captures the quantitative features of wave propagation observed in the human local field potential preceding seizure termination. PMID:25689136
Modeling of R/C Servo Motor and Application to Underactuated Mechanical Systems
NASA Astrophysics Data System (ADS)
Ishikawa, Masato; Kitayoshi, Ryohei; Wada, Takashi; Maruta, Ichiro; Sugie, Toshiharu
An R/C servo motor is a compact package of a DC geared motor with a built-in position servo controller. R/C servos are widely used in small-sized robotics and mechatronics by virtue of their compactness, ease of use, and high torque-to-weight ratio. However, it is crucial to clarify their internal model (including the embedded position servo) in order to improve control performance of mechatronic systems using R/C servo motors, such as biped robots or underactuated systems. In this paper, we propose a simple and realistic internal model of the R/C servo motor, including the embedded servo controller, and estimate its physical parameters using a continuous-time system identification method. We also provide a model of the reference-to-torque transfer function so that we can estimate the internal torque acting on the load.
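A minimal stand-in for the kind of internal model described above is a first-order motor lag driven by an embedded proportional position loop. The structure, gains, and time constant below are hypothetical illustrations, not the paper's identified model:

```python
def simulate_rc_servo(ref, dt, steps, kp=8.0, tau=0.05):
    """Euler simulation of a toy R/C-servo internal model.

    ref : commanded position (rad); kp, tau : illustrative servo gain and motor lag.
    Returns the position trace.
    """
    theta, omega, trace = 0.0, 0.0, []
    for _ in range(steps):
        u = kp * (ref - theta)            # embedded position servo (P control)
        omega += dt * (u - omega) / tau   # first-order motor lag toward commanded rate
        theta += dt * omega               # integrate position
        trace.append(theta)
    return trace
```

With these illustrative values the closed loop is well damped and settles at the commanded position, which is the qualitative behavior a position-servoed R/C motor exhibits.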
Predicting Near-surface Winds with WindNinja for Wind Energy Applications
NASA Astrophysics Data System (ADS)
Wagenbrenner, N. S.; Forthofer, J.; Shannon, K.; Butler, B.
2016-12-01
WindNinja is a high-resolution diagnostic wind model widely used by operational wildland fire managers to predict how near-surface winds may influence fire behavior. Many of the features which have made WindNinja successful for wildland fire are also important for wind energy applications. Some of these features include flexible runtime options which allow the user to initialize the model with coarser scale weather model forecasts, sparse weather station observations, or a simple domain-average wind for what-if scenarios; built-in data fetchers for required model inputs, including gridded terrain and vegetation data and operational weather model forecasts; relatively fast runtimes on simple hardware; an extremely user-friendly interface; and a number of output format options, including KMZ files for viewing in Google Earth and GeoPDFs which can be viewed in a GIS. The recent addition of a conservation of mass and momentum solver based on OpenFOAM libraries further increases the utility of WindNinja to modelers in the wind energy sector interested not just in mean wind predictions, but also in turbulence metrics. Here we provide an evaluation of WindNinja forecasts based on (1) operational weather model forecasts and (2) weather station observations provided by the MesoWest API. We also compare the high-resolution WindNinja forecasts to the coarser operational weather model forecasts. For this work we will use the High Resolution Rapid Refresh (HRRR) model and the North American Mesoscale (NAM) model. Forecasts will be evaluated with data collected in the Birch Creek valley of eastern Idaho, USA between June-October 2013. Near-surface wind, turbulence data, and vertical wind and temperature profiles were collected at very high spatial resolution during this field campaign specifically for use in evaluating high-resolution wind models like WindNinja. 
This work demonstrates the ability of WindNinja to generate very high-resolution wind forecasts for wind energy applications and evaluates the forecasts produced by two different initialization methods with data collected in a broad valley surrounded by complex terrain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavignet, A.A.; Wick, C.J.
In current practice, pressure drops in the mud circulating system and the settling velocity of cuttings are calculated with simple rheological models and simple equations. Wellsite computers now allow more sophistication in drilling computations. In this paper, experimental results on the settling velocity of spheres in drilling fluids are reported, along with rheograms done over a wide range of shear rates. The flow curves are fitted to polynomials and general methods are developed to predict friction losses and settling velocities as functions of the polynomial coefficients. These methods were incorporated in a software package that can handle any rig configuration, including riser booster. Graphic displays show the effect of each parameter on the performance of the circulating system.
NASA Astrophysics Data System (ADS)
Wang, D.; Becker, N. C.; Weinstein, S.; Duputel, Z.; Rivera, L. A.; Hayes, G. P.; Hirshorn, B. F.; Bouchard, R. H.; Mungov, G.
2017-12-01
The Pacific Tsunami Warning Center (PTWC) began forecasting tsunamis in real-time using source parameters derived from real-time Centroid Moment Tensor (CMT) solutions in 2009. Both the USGS and PTWC typically obtain W-Phase CMT solutions for large earthquakes less than 30 minutes after earthquake origin time. Within seconds, and often before waves reach the nearest deep ocean bottom pressure sensors (DARTs), PTWC then generates a regional tsunami propagation forecast using its linear shallow water model. The model is initialized by a sea surface deformation that mimics the seafloor deformation based on Okada's (1985) dislocation model of a rectangular fault with uniform slip. The fault length and width are empirical functions of the seismic moment. How well did this simple model perform? The DART records provide a very valuable dataset for model validation. We examine tsunami events of the last decade with earthquake magnitudes ranging from 6.5 to 9.0, including some deep events for which tsunamis were not expected. Most of the forecast results were obtained during the events. We also include events from before the implementation of the WCMT method at USGS and PTWC, 2006-2009. For these events, WCMTs were computed retrospectively (Duputel et al. 2012). For some events we also re-ran the model with a larger domain, using the identical source parameters used during the events, to include far-field DARTs that recorded a tsunami. We conclude that our model results, in terms of maximum wave amplitude, are mostly within a factor of two of those observed at DART stations, with an average error of less than 40% for most events, including the 2010 Maule and the 2011 Tohoku tsunamis. However, the simple fault model with uniform slip is too simplistic for the Tohoku tsunami. We note model results are sensitive to centroid location and depth, especially if the earthquake is close to land or inland.
For the 2016 M7.8 New Zealand earthquake, the initial forecast underestimated the tsunami because the initial WCMT centroid was on land (the epicenter was inland but most of the slip occurred offshore). Later WCMTs provided better forecasts. The model also failed to reproduce the observed tsunamis from earthquake-generated landslides. Sea level observations during the events are crucial in determining whether or not a forecast needs to be adjusted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kruzic, Jamie J.; Evans, T. Matthew; Greaney, P. Alex
The report describes the development of a discrete element method (DEM) based modeling approach to quantitatively predict deformation and failure of typical nickel-based superalloys. A series of experimental data, including microstructure and mechanical property characterization at 600°C, was collected for a relatively simple, model solid-solution Ni-20Cr alloy (Nimonic 75) to determine inputs for the model and provide data for model validation. Nimonic 75 was considered ideal for this study because it is a certified tensile and creep reference material. A series of new DEM modeling approaches were developed to capture the complexity of metal deformation, including cubic elastic anisotropy and plastic deformation both with and without strain hardening. Our model approaches were implemented into a commercially available DEM code, PFC3D, that is commonly used by engineers. It is envisioned that once further developed, this new DEM modeling approach can be adapted to a wide range of engineering applications.
Ameye, Lieveke; Fischerova, Daniela; Epstein, Elisabeth; Melis, Gian Benedetto; Guerriero, Stefano; Van Holsbeke, Caroline; Savelli, Luca; Fruscio, Robert; Lissoni, Andrea Alberto; Testa, Antonia Carla; Veldman, Joan; Vergote, Ignace; Van Huffel, Sabine; Bourne, Tom; Valentin, Lil
2010-01-01
Objectives To prospectively assess the diagnostic performance of simple ultrasound rules to predict benignity/malignancy in an adnexal mass and to test the performance of the risk of malignancy index, two logistic regression models, and subjective assessment of ultrasonic findings by an experienced ultrasound examiner in adnexal masses for which the simple rules yield an inconclusive result. Design Prospective temporal and external validation of simple ultrasound rules to distinguish benign from malignant adnexal masses. The rules comprised five ultrasonic features (including shape, size, solidity, and results of colour Doppler examination) to predict a malignant tumour (M features) and five to predict a benign tumour (B features). If one or more M features were present in the absence of a B feature, the mass was classified as malignant. If one or more B features were present in the absence of an M feature, it was classified as benign. If both M features and B features were present, or if none of the features was present, the simple rules were inconclusive. Setting 19 ultrasound centres in eight countries. Participants 1938 women with an adnexal mass examined with ultrasound by the principal investigator at each centre with a standardised research protocol. Reference standard Histological classification of the excised adnexal mass as benign or malignant. Main outcome measures Diagnostic sensitivity and specificity. Results Of the 1938 patients with an adnexal mass, 1396 (72%) had benign tumours, 373 (19.2%) had primary invasive tumours, 111 (5.7%) had borderline malignant tumours, and 58 (3%) had metastatic tumours in the ovary. The simple rules yielded a conclusive result in 1501 (77%) masses, for which they resulted in a sensitivity of 92% (95% confidence interval 89% to 94%) and a specificity of 96% (94% to 97%). The corresponding sensitivity and specificity of subjective assessment were 91% (88% to 94%) and 96% (94% to 97%). 
In the 357 masses for which the simple rules yielded an inconclusive result and with available results of CA-125 measurements, the sensitivities were 89% (83% to 93%) for subjective assessment, 50% (42% to 58%) for the risk of malignancy index, 89% (83% to 93%) for logistic regression model 1, and 82% (75% to 87%) for logistic regression model 2; the corresponding specificities were 78% (72% to 83%), 84% (78% to 88%), 44% (38% to 51%), and 48% (42% to 55%). Use of the simple rules as a triage test and subjective assessment for those masses for which the simple rules yielded an inconclusive result gave a sensitivity of 91% (88% to 93%) and a specificity of 93% (91% to 94%), compared with a sensitivity of 90% (88% to 93%) and a specificity of 93% (91% to 94%) when subjective assessment was used in all masses. Conclusions The use of the simple rules has the potential to improve the management of women with adnexal masses. In adnexal masses for which the rules yielded an inconclusive result, subjective assessment of ultrasonic findings by an experienced ultrasound examiner was the most accurate diagnostic test; the risk of malignancy index and the two regression models were not useful. PMID:21156740
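The decision logic of the simple rules, as described above, reduces to a three-way triage. A direct transcription follows; the five M and five B feature definitions themselves are given in the paper and are passed here simply as booleans:

```python
def simple_rules_triage(m_features, b_features):
    """Classify an adnexal mass from its M (malignant) and B (benign) features.

    m_features, b_features: iterables of five booleans each, one per rule feature.
    """
    has_m, has_b = any(m_features), any(b_features)
    if has_m and not has_b:
        return "malignant"
    if has_b and not has_m:
        return "benign"
    return "inconclusive"   # both present, or neither: refer to expert assessment
```

The "inconclusive" branch corresponds to the roughly 23% of masses in the study that were passed on to subjective expert assessment.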
The Simple View of Reading: Assessment and Intervention
ERIC Educational Resources Information Center
Roberts, Jenny A.; Scott, Kathleen A.
2006-01-01
The Simple View of Reading (P. B. Gough & W. Tunmer, 1986; W. A. Hoover & P. B. Gough, 1990) provides a 2-component model of reading. Each of these 2 components, decoding and comprehension, is necessary for normal reading to occur. The Simple View of Reading provides a relatively transparent model that can be used by professionals not only to…
ERIC Educational Resources Information Center
Wagner, Richard K.; Herrera, Sarah K.; Spencer, Mercedes; Quinn, Jamie M.
2015-01-01
Recently, Tunmer and Chapman provided an alternative model of how decoding and listening comprehension affect reading comprehension that challenges the simple view of reading. They questioned the simple view's fundamental assumption that oral language comprehension and decoding make independent contributions to reading comprehension by arguing…
Lithospheric Stress and Geodynamics: History, Accomplishments and Challenges
NASA Astrophysics Data System (ADS)
Richardson, R. M.
2016-12-01
The kinematics of plate tectonics was established in the 1960s, and shortly thereafter the Earth's stress field was recognized as an important constraint on the dynamics of plate tectonics. Forty years ago the 1976 Chapman Conference on the Stress in the Lithosphere, which I was fortunate to attend as a graduate student, and the ensuing 1977 PAGEOPH Stress in the Earth publication's 28 articles highlighted a range of datasets and approaches that established fertile ground for geodynamic research ever since. What are the most useful indicators of stress? Do they measure residual or tectonic stresses? Local or far field sources? What role does rheology play in concentrating deformation? Great progress was made with the first World Stress Map in 1991 by Zoback and Zoback, and the current version (2016 release with 42,348 indicators) remains a tremendous resource for geodynamic research. Modeling sophistication has seen significant progress over the past 40 years. Early applications of stress to dynamics involved simple lithospheric flexure, particularly at subduction zones, Hawaii, and continental foreland basin systems. We have progressed to full 3-D finite element models for calculating the flexure and stress associated with loads on a crust and mantle with realistic non-linear viscoelastic rheology, including frictional sliding, low-temperature plasticity, and high-temperature creep. Initial efforts to use lithospheric stresses to constrain plate driving forces focused on a "top-down" view of the lithosphere. Such efforts have evolved to better include asthenosphere-lithosphere interactions, have gone from simple to complicated rheologies, from 2-D to 3-D, and seek to obtain a fully thermo-mechanical model that avoids relying on artificial boundary conditions to model plate dynamics. Still, there are a number of important issues in geodynamics, from philosophy (when are more complicated models necessary? 
can one hope to identify "the" answer with modeling, or only possible/"impossible" solutions?), to better including realistic boundary conditions, to a fully thermo-mechanical model of the system, to including multiple data sets beyond stress. The 1976 Chapman Conference truly opened the door to a rich stress data set, and identified challenges, many of which remain 40 years later.
Supply based on demand dynamical model
NASA Astrophysics Data System (ADS)
Levi, Asaf; Sabuco, Juan; Sanjuán, Miguel A. F.
2018-04-01
We propose and numerically analyze a simple dynamical model that describes firm behavior under demand uncertainty. Iterating this simple model and varying some parameter values, we observe a wide variety of market dynamics, such as equilibria, periodic behavior, and chaotic behavior. Interestingly, the model is also able to reproduce market collapses.
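Market dynamics of this kind can be explored by iterating a one-dimensional map. The sketch below uses a logistic-style map as an illustrative stand-in (the paper's actual supply-demand equations are not reproduced here); varying a single parameter moves the dynamics from equilibrium through periodic cycles to chaos:

```python
def iterate_map(f, x0, n, discard=500):
    """Iterate a 1-D map, discard the transient, return the tail of the trajectory."""
    x = x0
    for _ in range(discard):
        x = f(x)
    tail = []
    for _ in range(n):
        x = f(x)
        tail.append(x)
    return tail

# Illustrative stand-in map x_{t+1} = r * x_t * (1 - x_t); the parameter r
# plays the role of the model parameters varied in the paper.
equilibrium = iterate_map(lambda x: 2.8 * x * (1 - x), 0.3, 4)  # fixed point
cycle       = iterate_map(lambda x: 3.2 * x * (1 - x), 0.3, 4)  # period-2 cycle
chaos       = iterate_map(lambda x: 3.9 * x * (1 - x), 0.3, 4)  # aperiodic
```

For r = 2.8 the trajectory settles on the fixed point 1 - 1/r; at r = 3.2 it alternates between two values; at r = 3.9 successive iterates wander aperiodically.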
Building Simple Hidden Markov Models. Classroom Notes
ERIC Educational Resources Information Center
Ching, Wai-Ki; Ng, Michael K.
2004-01-01
Hidden Markov models (HMMs) are widely used in bioinformatics, speech recognition and many other areas. This note presents HMMs via the framework of classical Markov chain models. A simple example is given to illustrate the model. An estimation method for the transition probabilities of the hidden states is also discussed.
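As a concrete illustration of this setup, here is a minimal two-state HMM sketch (all probabilities are invented for the example): the forward algorithm scores an observation sequence, and the transition matrix is estimated by simple counting when the hidden-state path is known.

```python
import numpy as np

# Illustrative two-state HMM (all probabilities are made-up example values).
A = np.array([[0.7, 0.3],   # transition matrix: P(next state | current state)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],   # emission matrix: P(observation | hidden state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])   # initial state distribution

def forward(obs):
    """Forward algorithm: returns P(observation sequence) under the HMM."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

def estimate_transitions(state_path, n_states=2):
    """Count-based estimate of A when the hidden-state path is known."""
    counts = np.zeros((n_states, n_states))
    for s, t in zip(state_path[:-1], state_path[1:]):
        counts[s, t] += 1
    return counts / counts.sum(axis=1, keepdims=True)
```

When the hidden states are not observed, the counting step is replaced by an expectation (Baum-Welch), but the normalized-count structure of the estimate is the same.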
NASA Astrophysics Data System (ADS)
Strassmann, Kuno M.; Joos, Fortunat
2018-05-01
The Bern Simple Climate Model (BernSCM) is a free open-source re-implementation of a reduced-form carbon cycle-climate model which has been used widely in previous scientific work and IPCC assessments. BernSCM represents the carbon cycle and climate system with a small set of equations for the heat and carbon budget, the parametrization of major nonlinearities, and the substitution of complex component systems with impulse response functions (IRFs). The IRF approach allows cost-efficient yet accurate substitution of detailed parent models of climate system components with near-linear behavior. Illustrative simulations of scenarios from previous multimodel studies show that BernSCM is broadly representative of the range of the climate-carbon cycle response simulated by more complex and detailed models. Model code (in Fortran) was written from scratch with transparency and extensibility in mind, and is provided open source. BernSCM makes scientifically sound carbon cycle-climate modeling available for many applications. Supporting up to decadal time steps with high accuracy, it is suitable for studies with high computational load and for coupling with integrated assessment models (IAMs), for example. Further applications include climate risk assessment in a business, public, or educational context and the estimation of CO2 and climate benefits of emission mitigation options.
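The IRF substitution described above can be sketched as a convolution of an emission history with a sum-of-exponentials response function. The coefficients and time scales below are placeholders for illustration, not BernSCM's calibrated parameters:

```python
import numpy as np

# Illustrative impulse-response function for the airborne CO2 fraction:
# IRF(t) = a0 + sum_i a_i * exp(-t / tau_i). These are placeholder values
# chosen so that IRF(0) = 1, not the calibrated BernSCM parameters.
a = [0.2, 0.3, 0.3, 0.2]
tau = [np.inf, 300.0, 30.0, 5.0]   # infinite tau = permanently airborne share

def irf(t):
    """Airborne fraction remaining a time t after a unit emission pulse."""
    return sum(ai * np.exp(-t / ti) for ai, ti in zip(a, tau))

def airborne(emissions, dt=1.0):
    """Convolve an emission time series with the IRF (discrete sum)."""
    n = len(emissions)
    t = np.arange(n) * dt
    out = np.zeros(n)
    for k, e in enumerate(emissions):
        out[k:] += e * irf(t[: n - k])
    return out
```

Because each emission pulse contributes independently, the full model reduces to cheap superposition, which is what makes the IRF substitution so much faster than running the detailed parent model.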
Building a Database for a Quantitative Model
NASA Technical Reports Server (NTRS)
Kahn, C. Joseph; Kleinhammer, Roger
2014-01-01
A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way also fails to link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data is used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian Updating based on flight and testing experience. A simple, unique metadata field in both the model and database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
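One of the manipulations mentioned, Bayesian updating from flight and testing experience, reduces to two additions under the commonly used Gamma-Poisson conjugate model. A sketch (the prior values are illustrative, not from any real data source):

```python
def gamma_poisson_update(alpha, beta, failures, exposure_hours):
    """Conjugate Bayesian update of a failure rate.

    Prior: rate ~ Gamma(alpha, beta), with mean alpha / beta (failures/hour).
    Evidence: `failures` observed over `exposure_hours` of operation.
    Posterior: Gamma(alpha + failures, beta + exposure_hours).
    """
    return alpha + failures, beta + exposure_hours

# Illustrative prior from a generic data source: mean rate 1e-5 per hour.
a0, b0 = 1.0, 1e5
# Observed: 0 failures in 50,000 hours of flight and test experience.
a1, b1 = gamma_poisson_update(a0, b0, 0, 5e4)
posterior_mean = a1 / b1   # the rate estimate shrinks toward the evidence
```

Because the update is just arithmetic on two numbers, it is exactly the kind of calculation a spreadsheet database can hold alongside each Basic Event's source data.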
A SIMPLE CELLULAR AUTOMATON MODEL FOR HIGH-LEVEL VEGETATION DYNAMICS
We have produced a simple two-dimensional (ground-plan) cellular automaton model of vegetation dynamics specifically to investigate high-level community processes. The model is probabilistic, with individual plant behavior determined by physiologically-based rules derived from a w...
Causal Loop Analysis of coastal geomorphological systems
NASA Astrophysics Data System (ADS)
Payo, Andres; Hall, Jim W.; French, Jon; Sutherland, James; van Maanen, Barend; Nicholls, Robert J.; Reeve, Dominic E.
2016-03-01
As geomorphologists embrace ever more sophisticated theoretical frameworks that shift from simple notions of evolution towards single steady equilibria to recognise the possibility of multiple response pathways and outcomes, morphodynamic modellers are facing the problem of how to keep track of an ever-greater number of system feedbacks. Within coastal geomorphology, capturing these feedbacks is critically important, especially as the focus of activity shifts from reductionist models founded on sediment transport fundamentals to more synthesist ones intended to resolve emergent behaviours at decadal to centennial scales. This paper addresses the challenge of mapping the feedback structure of processes controlling geomorphic system behaviour with reference to illustrative applications of Causal Loop Analysis in two case studies: (1) the erosion-accretion behaviour of graded (mixed) sediment beds, and (2) the local alongshore sediment fluxes of sand-rich shorelines. These case study examples are chosen on account of their central role in the quantitative modelling of geomorphological futures and as they illustrate different types of causation. Causal loop diagrams, a form of directed graph, are used to distil the feedback structure to reveal, in advance of more quantitative modelling, multi-response pathways and multiple outcomes. In the graded sediment bed case, up to three different outcomes (no response, and two disequilibrium states) can be derived from a simple qualitative stability analysis. For the sand-rich local shoreline behaviour case, two fundamentally different responses of the shoreline (diffusive and anti-diffusive), triggered by small changes of the shoreline cross-shore position, can be inferred purely through analysis of the causal pathways. Explicit depiction of feedback-structure diagrams is beneficial when developing numerical models to explore coastal morphological futures. 
By explicitly mapping the feedbacks included and neglected within a model, the modeller can readily assess if critical feedback loops are included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cimpoesu, Dorin, E-mail: cdorin@uaic.ro; Stoleriu, Laurentiu; Stancu, Alexandru
2013-12-14
We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because the nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and we phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of the SW model, the concept of the critical curve, and its geometric interpretation. We compare the results obtained with our model with full micromagnetic simulations in order to evaluate the performance and limits of our approach.
Simple Peer-to-Peer SIP Privacy
NASA Astrophysics Data System (ADS)
Koskela, Joakim; Tarkoma, Sasu
In this paper, we introduce a model for enhancing privacy in peer-to-peer communication systems. The model is based on data obfuscation, preventing intermediate nodes from tracking calls, while still utilizing the shared resources of the peer network. This increases security when moving between untrusted, limited and ad-hoc networks, when the user is forced to rely on peer-to-peer schemes. The model is evaluated using a Host Identity Protocol-based prototype on mobile devices, and is found to provide good privacy, especially when combined with a source address hiding scheme. The contribution of this paper is to present the model and results obtained from its use, including usability considerations.
Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools
Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; ...
2017-10-20
Here, we present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution (HOD), the conditional luminosity function (CLF), abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos, or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population, including galaxy clustering, galaxy-galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others, allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation.
Thermoacoustic School Project Work with an Electrically Heated Rijke Tube
ERIC Educational Resources Information Center
Beke, Tamas
2010-01-01
In this article we present a project that includes physical measuring, examination and modelling task. The main objective of this article is to present the theory of thermoacoustic oscillations; for this purpose, a simple Rijke-type thermal device was built. The Rijke tube is essentially a pipe open at both ends with a mean flow and a concentrated…
Playing with Size and Reality: The Fascination of a Dolls' House World
ERIC Educational Resources Information Center
Chen, Nancy Wei-Ning
2015-01-01
The dolls' house as children's plaything is anything but simple. Inasmuch as the dolls' house may be the reproduction of domestic ideals on a minute scale and an educational model prompting girls to become good housewives, this article argues that it is also a means and space to express imagination, creativity, and agency. Including a short…
NASA Astrophysics Data System (ADS)
Wu, Qing-Chu; Fu, Xin-Chu; Sun, Wei-Gang
2010-01-01
In this paper, a class of networks with multiple connections is discussed. The multiple connections include two different types of links between nodes in complex networks. For this new model, we give a simple generating procedure. Furthermore, we investigate dynamical synchronization behavior in a delayed two-layer network, giving corresponding theoretical analysis and numerical examples.
Galileon bounce after ekpyrotic contraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osipov, M.; Rubakov, V., E-mail: osipov@ms2.inr.ac.ru, E-mail: rubakov@ms2.inr.ac.ru
We consider a simple cosmological model that includes a long ekpyrotic contraction stage and a smooth bounce after it. Ekpyrotic behavior is due to a scalar field with a negative exponential potential, whereas the Galileon field produces the bounce. We give an analytical picture of how the bounce occurs within the weak gravity regime, and then perform numerical analysis to extend our results to a non-perturbative regime.
Information-Decay Pursuit of Dynamic Parameters in Student Models
1994-04-01
Analysis of lithology: Vegetation mixes in multispectral images
NASA Technical Reports Server (NTRS)
Adams, J. B.; Smith, M.; Adams, J. D.
1982-01-01
Discrimination and identification of lithologies from multispectral images is discussed. Rock/soil identification can be facilitated by removing the component of the signal in the images that is contributed by the vegetation. Mixing models were developed to predict the spectra of combinations of pure end members, and those models were refined using laboratory measurements of real mixtures. Models in use include a simple linear (checkerboard) mix, granular mixing, semi-transparent coatings, and combinations of the above. The use of interactive computer techniques that allow quick comparison of the spectrum of a pixel stack (in a multiband set) with laboratory spectra is discussed.
Unification of gauge and Yukawa couplings
NASA Astrophysics Data System (ADS)
Abdalgabar, Ammar; Khojali, Mohammed Omer; Cornell, Alan S.; Cacciapaglia, Giacomo; Deandrea, Aldo
2018-01-01
The unification of gauge and top Yukawa couplings is an attractive feature of gauge-Higgs unification models in extra-dimensions. This feature is usually considered difficult to obtain based on simple group theory analyses. We reconsider a minimal toy model including the renormalisation group running at one loop. Our results show that the gauge couplings unify asymptotically at high energies, and that this may result from the presence of a UV fixed point. The Yukawa coupling in our toy model is enhanced at low energies, showing that a genuine unification of gauge and Yukawa couplings may be achieved.
Emergence of heterogeneity and political organization in information exchange networks
NASA Astrophysics Data System (ADS)
Guttenberg, Nicholas; Goldenfeld, Nigel
2010-04-01
We present a simple model of the emergence of the division of labor and the development of a system of resource subsidy from an agent-based model of directed resource production with variable degrees of trust between the agents. The model has three distinct phases corresponding to different forms of societal organization: disconnected (independent agents), homogeneous cooperative (collective state), and inhomogeneous cooperative (collective state with a leader). Our results indicate that such levels of organization arise generically as a collective effect from interacting agent dynamics and may have applications in a variety of systems including social insects and microbial communities.
Simulation Speed Analysis and Improvements of Modelica Models for Building Energy Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jorissen, Filip; Wetter, Michael; Helsen, Lieve
This paper presents an approach for speeding up Modelica models. Insight is provided into how Modelica models are solved and what determines the tool’s computational speed. Aspects such as algebraic loops, code efficiency and integrator choice are discussed. This is illustrated using simple building simulation examples and Dymola. The generality of the work is in some cases verified using OpenModelica. Using this approach, a medium sized office building including building envelope, heating ventilation and air conditioning (HVAC) systems and control strategy can be simulated at a speed five hundred times faster than real time.
Fast trimers in a one-dimensional extended Fermi-Hubbard model
NASA Astrophysics Data System (ADS)
Dhar, A.; Törmä, P.; Kinnunen, J. J.
2018-04-01
We consider a one-dimensional two-component extended Fermi-Hubbard model with nearest-neighbor interactions and mass imbalance between the two species. We study the binding energy of trimers, various observables for detecting them, and expansion dynamics. We generalize the definition of the trimer gap to include the formation of different types of clusters originating from nearest-neighbor interactions. Expansion dynamics reveal rapidly propagating trimers, with speeds exceeding doublon propagation in the strongly interacting regime. We present a simple model for understanding this unique feature of the movement of the trimers, and we discuss the potential for experimental realization.
Model of ballistic targets' dynamics used for trajectory tracking algorithms
NASA Astrophysics Data System (ADS)
Okoń-Fąfara, Marta; Kawalec, Adam; Witczak, Andrzej
2017-04-01
Only a few ballistic object tracking algorithms are known. To develop such algorithms and to test them further, it is necessary to implement a reasonably simple and reliable model of the objects' dynamics. The article presents the dynamics model of a tactical ballistic missile (TBM), including the three stages of flight: the boost stage and two passive stages, the ascending one and the descending one. Additionally, the procedure for transformation from the local coordinate system to the polar radar-oriented system and to the global one is presented. The prepared theoretical data may be used to determine the tracking algorithm's parameters and for further verification of the algorithm.
Aggregate age-at-marriage patterns from individual mate-search heuristics.
Todd, Peter M; Billari, Francesco C; Simão, Jorge
2005-08-01
The distribution of age at first marriage shows well-known strong regularities across many countries and recent historical periods. We accounted for these patterns by developing agent-based models that simulate the aggregate behavior of individuals who are searching for marriage partners. Past models assumed fully rational agents with complete knowledge of the marriage market; our simulated agents used psychologically plausible simple heuristic mate search rules that adjust aspiration levels on the basis of a sequence of encounters with potential partners. Substantial individual variation must be included in the models to account for the demographically observed age-at-marriage patterns.
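The aspiration-adjustment idea can be sketched with a toy agent: learn an aspiration level from early encounters, then accept the first candidate above it, relaxing the aspiration after each rejection. Every parameter value here is illustrative rather than taken from the cited models:

```python
import random

def marriage_age(rng, n_adolescent=12, decay=0.98, max_age=600):
    """One agent's search under a simple aspiration heuristic.

    During the first n_adolescent encounters the agent only learns, setting
    its aspiration to the best candidate quality seen. Afterwards it accepts
    the first candidate at or above the aspiration, lowering the aspiration
    slightly after each rejection. Returns the encounter index at 'marriage'.
    All parameter values are illustrative, not those of the cited models.
    """
    aspiration = 0.0
    for _ in range(n_adolescent):
        aspiration = max(aspiration, rng.random())
    for age in range(n_adolescent, max_age):
        candidate = rng.random()
        if candidate >= aspiration:
            return age
        aspiration *= decay
    return max_age

rng = random.Random(42)
ages = [marriage_age(rng) for _ in range(2000)]
```

Aggregating many such agents yields a right-skewed distribution of "marriage ages": a sharp rise after the learning phase followed by a long tail, qualitatively resembling the demographic curves the paper discusses.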
Stressed Oxidation Life Prediction for C/SiC Composites
NASA Technical Reports Server (NTRS)
Levine, Stanley R.
2004-01-01
The residual strength and life of C/SiC is dominated by carbon interface and fiber oxidation if seal coat and matrix cracks are open to allow oxygen ingress. Crack opening is determined by the combination of thermal, mechanical and thermal expansion mismatch induced stresses. When cracks are open, life can be predicted by simple oxidation based models with reaction controlled kinetics at low temperature, and by gas phase diffusion controlled kinetics at high temperatures. Key life governing variables in these models include temperature, stress, initial strength, oxygen partial pressure, and total pressure. These models are described in this paper.
Modeling shared resources with generalized synchronization within a Petri net bottom-up approach.
Ferrarini, L; Trioni, M
1996-01-01
This paper proposes a simple and effective way to represent shared resources in manufacturing systems within a previously developed Petri net model. Such a model relies on a bottom-up, modular approach to synthesis and analysis. The designer may define elementary tasks and then connect them with one another through three kinds of connections: self-loops, inhibitor arcs, and simple synchronizations. A theoretical framework has been established for the analysis of liveness and reversibility of such models. The generalized synchronization, here formalized, represents an extension of the simple synchronization, allowing the merging of suitable subnets among elementary tasks. It is proved that, under suitable but not restrictive hypotheses, a generalized synchronization may be substituted by a simple one, and is thus compatible with the entire theoretical framework already developed.
A Geostationary Earth Orbit Satellite Model Using Easy Java Simulation
ERIC Educational Resources Information Center
Wee, Loo Kang; Goh, Giam Hwee
2013-01-01
We develop an Easy Java Simulation (EJS) model for students to visualize geostationary orbits near Earth, modelled using a Java 3D implementation of the EJS 3D library. The simplified physics model is described and simulated using a simple constant angular velocity equation. We discuss four computer model design ideas: (1) a simple and realistic…
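The "simple constant angular velocity equation" behind a geostationary orbit follows from equating gravitational and centripetal acceleration, GM/r² = ω²r. A quick check (in Python rather than the article's EJS/Java, using standard published constants):

```python
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
T = 86164.1              # sidereal day (Earth's rotation period), s
R_EARTH = 6.371e6        # mean Earth radius, m

omega = 2 * math.pi / T              # the satellite's constant angular velocity
r = (GM / omega ** 2) ** (1 / 3)     # orbital radius from GM/r^2 = omega^2 * r
altitude_km = (r - R_EARTH) / 1e3    # height above the surface
```

The radius comes out near 42,164 km, i.e. an altitude of roughly 35,800 km, which is the familiar geostationary belt the simulation lets students visualize.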
Simple Models of SL-9 Impact Plumes
NASA Astrophysics Data System (ADS)
Harrington, J.; Deming, L. D.
1996-09-01
The impacts of the larger fragments of Comet Shoemaker-Levy 9 on Jupiter left debris patterns of consistent appearance, likely caused by the landing of the observed impact plumes. Realistic fluid simulations of impact plume evolution may take months to years for even single computer runs. To provide guidance for these models and to elucidate the most basic aspects of the plumes, debris patterns, and their ultimate effect on the atmosphere, we have developed simple models that reproduce many of the key features. These Monte Carlo models divide the plume into discrete mass elements, assign to them a velocity distribution based on numerical impact models, and follow their ballistic trajectories until they hit the planet. If particles go no higher than the observed ~ 3,000 km plume heights, they cannot reach the observed crescent pattern located ~ 10,000 km from the impact sites unless they slide horizontally after ballistic flight. By introducing parameterized sliding or higher trajectories, we can reproduce most of the observed impact features, including the central streak, the crescent, and the ephemeral ring located ~ 30,000 km from the impact sites. We also keep track of the amounts of energy and momentum delivered to the atmosphere as a function of time and location, for use in atmospheric models (D. Deming and J. Harrington, this meeting).
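The Monte Carlo scheme can be illustrated with a flat-planet toy version (the actual models follow full ballistic trajectories on the sphere, and the velocity and angle distributions below are invented for the sketch):

```python
import math
import random

G_JUP = 24.8  # Jupiter's gravitational acceleration near the cloud tops, m/s^2

def ballistic(v, theta):
    """Flat-planet ballistic flight: (peak height, downrange distance), in m."""
    h = (v * math.sin(theta)) ** 2 / (2 * G_JUP)
    x = v ** 2 * math.sin(2 * theta) / G_JUP
    return h, x

# Draw plume mass elements from an invented speed/angle distribution; the
# real models draw these from numerical impact simulations instead.
rng = random.Random(1)
heights, ranges = [], []
for _ in range(10_000):
    v = rng.uniform(5e3, 12e3)       # ejection speed, m/s
    theta = rng.uniform(0.2, 1.2)    # elevation angle, radians
    h, x = ballistic(v, theta)
    heights.append(h)
    ranges.append(x)
```

Even in this toy version, mass elements whose peaks stay below roughly 3,000 km land well short of 10,000 km downrange, consistent with the abstract's argument that the crescent requires horizontal sliding or higher trajectories.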
Reid, J M; Gubitz, G J; Dai, D; Reidy, Y; Christian, C; Counsell, C; Dennis, M; Phillips, S J
2007-12-01
We aimed to validate, in a population that included hyper-acute stroke patients, a previously described six simple variable (SSV) model developed from acute and sub-acute stroke patients. A Stroke Outcome Study enrolled patients from 2001 to 2002. Functional status was assessed at 6 months using the modified Rankin Scale (mRS). SSV model performance was tested in our cohort. 538 acute ischaemic (87%) and haemorrhagic stroke patients were enrolled, 51% of whom presented to hospital within 6 h of symptom recognition. At 6 months post-stroke, 42% of patients had a good outcome (mRS ≤2). Stroke patients presenting within 6 h of symptom recognition were significantly older and had higher stroke severity. In our Stroke Outcome Study dataset, the SSV model had an area under the curve of 0.792 for 6-month outcomes and performed well for hyper-acute or post-acute stroke, age <75 or ≥75 years, haemorrhagic or ischaemic stroke, men or women, and moderate and severe stroke, but poorly for mild stroke. This study confirms the external validity of the SSV model in our hospital stroke population. The model can therefore be used for stratification in acute and hyper-acute stroke trials.
A simple, semi-prescriptive self-assessment model for TQM.
Warwood, Stephen; Antony, Jiju
2003-01-01
This article presents a simple, semi-prescriptive self-assessment model for use in industry as part of a continuous improvement program such as Total Quality Management (TQM). The process by which the model was constructed started with a review of the available literature in order to research TQM success factors. Next, postal surveys were conducted by sending questionnaires to the winning organisations of the Baldrige and European Quality Awards and to a preselected group of enterprising UK organisations. From the analysis of this data, the self-assessment model was constructed to help organisations in their quest for excellence. This work confirmed the findings from the literature, that there are key factors that contribute to the successful implementation of TQM and these have different levels of importance. These key factors, in order of importance, are: effective leadership, the impact of other quality-related programs, measurement systems, organisational culture, education and training, the use of teams, efficient communications, active empowerment of the workforce, and a systems infrastructure to support the business and customer-focused processes. This analysis, in turn, enabled the design of a self-assessment model that can be applied within any business setting. Further work should include the testing and review of this model to ascertain its suitability and effectiveness within industry today.
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1994-01-01
A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the Bend Stress Relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model, but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model tensile creep predictions based on the BSR test results with the literature data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.
Modeling and experimental characterization of electromigration in interconnect trees
NASA Astrophysics Data System (ADS)
Thompson, C. V.; Hau-Riege, S. P.; Andleigh, V. K.
1999-11-01
Most modeling and experimental characterization of interconnect reliability is focused on simple straight lines terminating at pads or vias. However, laid-out integrated circuits often have interconnects with junctions and wide-to-narrow transitions. In carrying out circuit-level reliability assessments it is important to be able to assess the reliability of these more complex shapes, generally referred to as `trees.' An interconnect tree consists of continuously connected high-conductivity metal within one layer of metallization. Trees terminate at diffusion barriers at vias and contacts, and, in the general case, can have more than one terminating branch when they include junctions. We have extended the understanding of `immortality,' demonstrated and analyzed for straight stud-to-stud lines, to trees of arbitrary complexity. This leads to a hierarchical approach for identifying immortal trees for specific circuit layouts and operating conditions. To complete a circuit-level reliability analysis, it is also necessary to estimate the lifetimes of the mortal trees. We have developed simulation tools that allow modeling of stress evolution and failure in arbitrarily complex trees. We are testing our models and simulations through comparisons with experiments on simple trees, such as lines broken into two segments with different currents in each segment. Models, simulations, and early experimental results on the reliability of interconnect trees are shown to be consistent.
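The hierarchical immortality screen described above can be sketched as a Blech-type check: each branch carries a current density j over a length L, and a tree is screened as immortal when the current-density-length sum along every terminating path stays below a critical product. This is a minimal sketch of that idea; the critical value, the example segments, and the path enumeration below are all made up for illustration and are not taken from the paper:

```python
# Sketch of an "immortality" screen for an interconnect tree.
# Each branch is (j, L): current density in A/cm^2 over length L in cm.
# A tree is screened as immortal if the sum of j*L along every
# stud-to-stud path stays below a critical product (Blech-type criterion).
# The critical value and the example tree are hypothetical.

JL_CRIT = 1500.0  # A/cm, hypothetical critical current-density * length product

def path_jl(path):
    """Sum of j*L over the branches of one terminating path."""
    return sum(j * L for j, L in path)

def tree_is_immortal(paths, jl_crit=JL_CRIT):
    """Immortal only if every path through the tree satisfies the criterion."""
    return all(path_jl(p) < jl_crit for p in paths)

# Example: a line broken into two segments at a junction, with a
# different current density in each segment (as in the experiments).
seg_a = (1e5, 0.01)   # j = 1e5 A/cm^2 over L = 100 um
seg_b = (5e4, 0.02)   # j = 5e4 A/cm^2 over L = 200 um
paths = [[seg_a], [seg_b], [seg_a, seg_b]]
```

In this toy layout each segment passes the check on its own, but the path spanning both segments exceeds the critical product, so the tree as a whole must be treated as mortal and handed to the lifetime simulation — which is the point of analyzing trees rather than isolated straight lines.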
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1991-01-01
A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the bend stress relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model predictions and BSR test results with the literature tensile creep data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.
Valid statistical approaches for analyzing Sholl data: Mixed effects versus simple linear models.
Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P
2017-03-01
The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for the intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for this intra-class correlation and therefore estimate the standard deviation of the parameter estimate more accurately, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice, using both simple linear and mixed effects models, demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely.
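The downward bias described in this abstract can be illustrated with a small synthetic simulation. The sketch below does not reproduce the paper's mixed-effects analysis (which would use a dedicated mixed-model fit, e.g. a random intercept per animal); instead it shows the underlying mechanism with stand-in numbers: when neurons within an animal are correlated, pooling all neurons as if independent understates the standard error of the group mean, while averaging within animals first respects the true unit of replication:

```python
import math
import random
import statistics

# Synthetic Sholl-like data: several neurons measured per animal, with a
# strong per-animal random effect (i.e. high intra-class correlation).
# All numbers are made up for illustration.
random.seed(0)
N_ANIMALS, NEURONS_PER_ANIMAL = 8, 10
SD_ANIMAL, SD_NEURON = 3.0, 1.0

data = []
for _ in range(N_ANIMALS):
    animal_effect = random.gauss(0.0, SD_ANIMAL)  # shared by all neurons of one animal
    data.append([20.0 + animal_effect + random.gauss(0.0, SD_NEURON)
                 for _ in range(NEURONS_PER_ANIMAL)])

# Naive SE: pool all neurons as if they were independent observations.
pooled = [x for animal in data for x in animal]
se_naive = statistics.stdev(pooled) / math.sqrt(len(pooled))

# Cluster-aware SE: one mean per animal, the correct unit of replication
# (a simple stand-in for a random-intercept mixed effects model).
means = [statistics.fmean(animal) for animal in data]
se_cluster = statistics.stdev(means) / math.sqrt(len(means))
```

Here `se_naive` comes out well below `se_cluster`, mirroring the downward bias in standard deviations and p-values that the abstract reports for simple linear models applied to clustered Sholl data.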